http://math.stackexchange.com/questions/19910/first-order-dynamic-system | # First order dynamic system
I have a system that is defined as follows:
$$\dot{x}(t) = Ax(t) + Bu(t),\quad\quad x(0)=x_0.$$
It has a well known solution:
$$x(t) = e^{tA}x_{0} + \int_0^t \! e^{(t-\tau)A}Bu(\tau) \, \mathrm{d}\tau$$
I know that one of the tasks during the exam will be to draw the graph of the solution given that A and B are numbers. If B is zero then this is no problem, but if it's not, I don't really know how to calculate that integral. Could anyone please explain the steps that I need to perform to solve this?
PS> I've already found some examples but none of them showed the intermediate steps, so I can't figure out anything :/
-
What is $u(t)$? – Hans Lundmark Feb 1 '11 at 17:47
x(t) is the system's state while u(t) is the input. If you have e.g. an RC (resistor-capacitor) system then the voltage on the capacitor would be x(t) (the system's current state) and the source voltage would be u(t). – kubal5003 Feb 1 '11 at 17:50
But shouldn't the solution depend on $u$ then? – Hans Lundmark Feb 1 '11 at 18:09
Yes, there is a missing $u(\tau)$ in the integral. In general, without further information about the input $u(t)$, you can't really simplify that integral further. – cch Feb 1 '11 at 18:58
You're right. I've investigated the problem further and probably the only possible option during the exam is that u(t) is constant, which is easy to solve. This is great, because I don't have to seek a "generic" solution to this problem. Hurray! :D PS> I am truly sorry for wasting your and anyone else's time. It took me some time to dig through all the materials and discover this. – kubal5003 Feb 1 '11 at 19:01
## 1 Answer
That integral is almost never computed. This is generally a Laplace transform problem where $u$ is typically a Heaviside (step), a Dirac (delta), or some other function whose Laplace transform can be looked up in tables, hence you don't need to assume that $u$ is a constant function. If one takes the Laplace transform of the state equation:
$$\begin{align} sX(s) - x(0) &= AX(s)+BU(s)\\ X(s) &= (sI-A)^{-1}x(0) + (sI-A)^{-1}BU(s) \end{align}$$ After possibly applying a partial fraction expansion, one obtains $x(t)$ explicitly via the inverse Laplace transform.
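For the constant-input case mentioned in the comments, here is a small numerical sketch (not from the original thread; the values of $A$, $B$, $u_0$, $x_0$ and $t$ are arbitrary) that evaluates the convolution integral directly and compares it with the closed form $x(t)=e^{At}x_0+\frac{Bu_0}{A}\left(e^{At}-1\right)$, valid for scalar $A\neq 0$ and constant $u(t)=u_0$:

```python
import numpy as np

# Arbitrary scalar example: A, B numbers, u(t) = u0 constant.
A, B, u0, x0, t = -2.0, 3.0, 1.5, 1.0, 2.0

# Closed form: integrate e^{A(t - tau)} B u0 dtau from 0 to t by hand.
x_closed = np.exp(A * t) * x0 + (B * u0 / A) * (np.exp(A * t) - 1.0)

# Direct numerical evaluation of the variation-of-constants integral.
tau = np.linspace(0.0, t, 100001)
integrand = np.exp(A * (t - tau)) * B * u0
dtau = tau[1] - tau[0]
integral = np.sum(0.5 * (integrand[:-1] + integrand[1:]) * dtau)   # trapezoid rule
x_numeric = np.exp(A * t) * x0 + integral

print(x_closed, x_numeric)   # the two values should agree to many digits
```

Plotting the closed form over a range of $t$ then gives the graph asked for; the Laplace-transform route above produces the same expression after inverting $X(s)$.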
-
Answering the unanswered Movement! :) – user13838 Nov 27 '11 at 22:23
upvoting the unupvoted. – Gerry Myerson Feb 26 '12 at 2:07
@GerryMyerson Hahaha, thanks! – user13838 Feb 26 '12 at 8:27
http://mathoverflow.net/revisions/120239/list | ## Return to Question
2 added 193 characters in body
The context for this question is from page 284 - 287 of Berger's paper: http://pdn.sciencedirect.com/science?_ob=MiamiImageURL&_cid=272332&_user=209810&_pii=S0021869398976785&_check=y&_origin=article&_zone=toolbar&_coverDate=1999--01&view=c&originContentFamily=serial&wchp=dGLbVlS-zSkzS&md5=2dd8e0d714d264cf7c4acdd9ec58ac84&pid=1-s2.0-S0021869398976785-main.pdf
Particularly, in his assumption at the top of page 287, he says that "From now on, assume that our map $\pi_\mathfrak{p}$ surjects onto $\text{PU}_2(\zeta,\mathcal{O}_K/\mathfrak{p})\cong \text{PSL}_2(\mathbb{F}_q)$, that $q$ is odd, and that $(6,k) = 1$, where $k = \sharp\langle -\zeta\rangle$."
I'm guessing that from the previous page, he's assuming the conditions in proposition 2 (from the previous page) to be true, so that $\pi_\mathfrak{p}$ surjects onto $\text{PU}_2(\zeta,\mathcal{O}_K/\mathfrak{p}) = \text{PSU}_2(\mathcal{O}_K/\mathfrak{p})$, and that he's claiming that the latter group is isomorphic to $\text{PSL}_2(\mathbb{F}_q)$.
Is this generally true?
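As a quick plausibility check (my own sketch, not from Berger's paper), the standard order formulas for these finite groups already agree, and combined with the classical isomorphism $\mathrm{SU}_2(q)\cong\mathrm{SL}_2(q)$ this is consistent with an affirmative answer:

```python
from math import gcd

def order_psl2(q):
    # |PSL(2,q)| = q (q^2 - 1) / gcd(2, q - 1)
    return q * (q * q - 1) // gcd(2, q - 1)

def order_psu2(q):
    # |PSU(2,q)| = q (q^2 - 1) / gcd(2, q + 1)
    return q * (q * q - 1) // gcd(2, q + 1)

# Equal orders alone do not prove an isomorphism; the actual isomorphism
# comes from the classical identification SU(2,q) = SL(2,q).
for q in [3, 5, 7, 9, 11, 13]:
    print(q, order_psl2(q), order_psu2(q))   # the two orders coincide for every q
```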
Also, on page 284, where he gives the matrix $H$ for the hermitian form, he claims that $H\in GL_{n-1}(\mathbb{Z}[t,t^{-1}])$, but the matrix he gives obviously does not lie in that group.
Where might I find a good book on unitary matrices over finite fields?
thanks,
• will
1
# When is PSU(2,q^2) = PSL(2,q) ?
http://mathoverflow.net/questions/50519/integral-cohomology-stable-operations | ## Integral cohomology (stable) operations
There have been a couple of questions on MO, and elsewhere, that have made me curious about integral or rational cohomology operations. I feel pretty familiar with the classical Steenrod algebra and its uses and constructions, but I am at a loss to imagine a chain-level construction of such an operation, other than by coupling mod $p$ operations with Bockstein and reduction maps. I am mostly just curious about thoughts in this direction, previous work, and possible applications. So my questions are essentially as follows:
1) Are there any "interesting" rational cohomology operations? I feel like I should be able to compute $H\mathbb{Q}^*H\mathbb{Q}$ by noticing that $H\mathbb{Q}$ is just a rational sphere and so there are no nonzero groups in the limit. Is this right?
2) Earlier someone posted a reference request about $H\mathbb{Z}^*H\mathbb{Z}$, and I am just curious about what is known, and what methods were used.
3) Is there a reasonable approach, ie explainable in this forum, for constructing chain level operations? the approaches I have seen seem to require some finite characteristic assumptions, but maybe I am misremembering things.
4) I am currently under the impression that a real hard part of the problem is integrating all the information from different primes, is this the main roadblock? or similar to what the main obstruction is?
My apologies for the barrage of questions, if people think it would be better split up, I would be happy to do so.
-
## 3 Answers
$HZ^nHZ$ is trivial for $n<0$. $HZ^0HZ$ is infinite cyclic generated by the identity operation. For $n>0$ the group is finite. So you know everything if you know what's going on locally at each prime. For $n>0$ the $p$-primary part is not just finite but killed by $p$, which means that you can extract it from the Steenrod algebra $H(Z/p)^{*}H(Z/p)$ and Bocksteins.
EDIT Here's the easier part: The integral homology groups of the space $K(Z,n)$ vanish below dimension $n$, and by induction on $n$ they are all finitely generated. Also $H_{n+k}K(Z,n)$ is independent of $n$ for roughly $n>k$, so that in this stable range $H_{n+k}K(Z,n)$ is $HZ_kHZ$, which is therefore finitely generated. This plus the computation of rational (co)homology gives that $HZ_kHZ$ is finite for $k>0$. Here's the funny part: Of course one expects there to be some elements of order $p^m$ for $m>1$ in the (co)homology of $K(Z,n)$, and in fact there are; the surprise is that stably this is not the case.
-
I do not understand how those groups can be finite for all n larger than 0. It seems that there should be an operation $\beta_p$ for each prime, a composition of the reduction mod p and then the integral bockstein, in $H\mathbb{Z}^1 H\mathbb{Z}$. – Sean Tilson Dec 28 2010 at 6:59
2
The composite of reduction mod p and the integral Bockstein is zero by construction as the integral Bockstein is the boundary map coming from the sequence $0\to\mathbb Z\to\mathbb Z\to\mathbb Z/p\to0$. – Torsten Ekedahl Dec 28 2010 at 9:08
I also believe that it is "well-known" that stable operations of degree $1$ are just given by Ext-groups of the two coefficient groups between which you want to calculate operations. Of course this implies that $H\mathbb{Z}^1H\mathbb{Z} = 0$. – Markus Land Mar 17 at 10:31
1. That is right.
2. The slightly easier calculation, I think, is $H\mathbb{Z}_*H\mathbb{Z}$, and this is easier to approach one prime at a time, i.e., by calculating `$H\mathbb{Z}_*H\mathbb{Z}_{(p)}$`, which is something you can do using the classical Adams spectral sequence. I don't have it handy to check, but I suspect that this calculation is carried out in Part III of Adams's "blue book" (Stable homotopy and generalised homology). The main thing to take away is that `$H\mathbb{Z}_nH\mathbb{Z}_{(p)}$` is $p$-torsion (i.e., in the kernel of multiplication by $p$) for all $n>0$.
3. Steenrod's original definition was by a chain level construction, called the cup-i product. This is discussed in some other questions, such as here.
Note that I'm discussing the (stable) homology of the Eilenberg-MacLane spectrum. The homology of the integral Eilenberg-MacLane spaces $H_*K(\mathbb{Z},n)$ is quite a bit more complicated.
-
So the cup-i construction works integrally? Also, I was under the impression that this data was hard to assemble together into one object. I get the impression from your answer, as well as Toms, that this is not the case. Is that a fair assessment? – Sean Tilson Dec 28 2010 at 7:01
Oh, integrally. The exact cup-i's may or may not be defined integrally ... you'd have to check. But they descend from a chain level construction which exists integrally. – Charles Rezk Dec 28 2010 at 14:35
A reference for 1. is Kraines, David Rational cohomology operations and Massey products. Proc. Amer. Math. Soc. 22 1969 238–241. – Mark Grant Dec 28 2010 at 15:57
Here is another interesting reference on $H\mathbb{Z}_* H\mathbb{Z}$ and $H\mathbb{Z}^* H\mathbb{Z}$:
Kochman, Stanley; Integral cohomology operations. Current trends in algebraic topology, Part 1 (London, Ont., 1981), pp. 437–478, CMS Conf. Proc., 2, Amer. Math. Soc., Providence, R.I., 1982.
It contains in particular the theorem explained above by Tom, that the $p$-primary part is killed by $p$.
-
http://stats.stackexchange.com/questions/31116/testing-whether-two-regression-coefficients-are-significantly-different | # Testing whether two regression coefficients are significantly different
I'm hoping somebody can help me out with this question. For a study I did a path analysis, which looks like this:
````
IV -->
       Mediator --> DV
IV -->
````
So I have two IVs leading to my mediator. With SPSS I did two regression analyses from the IVs to the mediator.
Now I want to test whether the two regression coefficients are significantly different from each other using SPSS, but I have no idea how to do this. I assume that I have to use a t-test, but I don't know how.
I have the two regression coefficients, their standard deviation and my N (this n is the same for both regressions, because it's all from one sample).
I suspect that this is pretty basic for some, but my skills in statistics are not that great. So help would be greatly appreciated!
-
1
Why do you want to do this? – Peter Flom Jun 26 '12 at 11:01
1
Because according to my regression coefficients IV(a) has more influence on the mediator than IV(b) does. And now I would want to know whether this difference between the regression coefficients is significant – user12208 Jun 26 '12 at 13:28
## 3 Answers
If you have the estimated covariance matrix for the coefficients, then you can construct the t-test as follows. Let the hypothesis, in its general form, be $R^T\beta = b$, and $\widehat{\Sigma} = \hat{\sigma}^2(X^TX)^{-1}$ be the estimated covariance matrix of the coefficients. In your case, assuming the test is that $\beta_2 = \beta_3$ and you have $K=3$ coefficients, $R^T = [0, 1, -1]$ and $b=0$. Then:
$T^* = \frac{R^T\beta - b}{\sqrt{R^T\widehat{\Sigma}R}}$
is distributed $t(N-K)$.
Source: Principles of Econometrics, Theil.
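To make the recipe above concrete, here is a small simulated illustration (plain NumPy; the variable names `iv_a`, `iv_b`, `mediator` are hypothetical and not SPSS output) testing $\beta_2=\beta_3$ in a regression with an intercept and two predictors:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
iv_a = rng.normal(size=n)
iv_b = rng.normal(size=n)
mediator = 0.5 * iv_a + 0.8 * iv_b + rng.normal(size=n)

# Design matrix with an intercept, so K = 3 coefficients.
X = np.column_stack([np.ones(n), iv_a, iv_b])
K = X.shape[1]

beta_hat, *_ = np.linalg.lstsq(X, mediator, rcond=None)
resid = mediator - X @ beta_hat
sigma2_hat = resid @ resid / (n - K)
cov_beta = sigma2_hat * np.linalg.inv(X.T @ X)   # estimated covariance of the coefficients

R = np.array([0.0, 1.0, -1.0])   # tests beta_2 - beta_3 = 0
t_stat = (R @ beta_hat) / np.sqrt(R @ cov_beta @ R)
print(t_stat)
# Compare with a t distribution with n - K degrees of freedom,
# e.g. 2 * scipy.stats.t.sf(abs(t_stat), n - K) if scipy is available.
```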
-
Thanks for all of the answers! – user12208 Jun 30 '12 at 13:36
I have the same question as user1 but jbowman's response isn't clear to me. Is it possible to clarify?? – Candace White May 3 at 13:00
@Candace White I think you will need to be more specific than "isn't clear" if you would like a focused answer to your comment. – whuber♦ May 3 at 13:17
I believe the correct approach here is to compare the fit of a model where IV(a) and IV(b) are allowed to vary - that is, your present model - with the fit of a model where IV(a) and IV(b) are fit to the same value (in which case the mediator is just an average of the two). The two models can be compared using a Chi-Square difference test.
This is simple enough to be performed by hand - I am not quite sure how to do that last, final step in SPSS. But all of the requisite values for calculating the Chi-Square difference will be available to you in SPSS, and there are several online calculators that could be used for determining the value of this test statistic and its p-value. I hope that helps!
-
CHCH's answer is interesting. But would it be simpler to use Fisher's Z transformation and the corresponding difference formula to compare the standardized regression coefficients of the two IVs?
-
+1, in particular, I think the part about standardized is crucial--otherwise the whole thing is nonsense from the start. – gung Jun 27 '12 at 1:39
http://en.wikiversity.org/wiki/Maxwell's_Equations | # Maxwell's Equations
From Wikiversity
Maxwell's Equations, formulated around 1861 by James Clerk Maxwell, describe the interrelation between electric and magnetic fields. They were a synthesis of what was known at the time about electricity and magnetism, particularly building on the work of Michael Faraday, Charles-Augustin Coulomb, Andre-Marie Ampere, and others. These equations predicted the existence of Electromagnetic waves, giving them properties that were recognized to be properties of light, leading to the (correct) realization that light is an electromagnetic wave. Other forms of electromagnetic waves, such as radio waves, were not known at the time, but were subsequently demonstrated by Heinrich Hertz in 1888. These equations are considered to be among the most elegant edifices of mathematical physics.
Maxwell's equations serve many purposes and take many forms. On the one hand, they are used in the solution of actual real-world problems of electromagnetic fields and radiation. On the other hand, they are the subject of admiration for their elegance. There are many T-shirts, typically obtainable on college campuses, sporting various forms of these equations.
What follows is a survey of the various forms that these equations take, beginning with the most utilitarian and progressing to the most elegant. Which form you prefer depends on your scientific outlook, and perhaps your taste in T-shirts. The various $\nabla \cdot \mathbf{E}$ and $\nabla \times \mathbf{E}$ symbols appearing in some of the equations are the divergence and curl operators, respectively.
They are usually formulated as four equations (but later we will see some particularly elegant versions with only two), and the equations are usually expressed in differential form, that is, as Partial Differential Equations involving the divergence and curl operators. They can also be expressed with integrals.
They are often expressed in terms of four vector fields: E, B, D, and H, though the simpler forms use only E and B.
Here are the equations expressed in differential form:
| Name | E and B | E, B, D, and H |
|---|---|---|
| Coulomb's law of electrostatics, or Gauss's Law | $\nabla \cdot \mathbf{E} = \frac{\rho}{\epsilon}$ | $\nabla \cdot \mathbf{D} = \rho$ |
| Absence of magnetic monopoles | $\nabla \cdot \mathbf{B} = 0$ | $\nabla \cdot \mathbf{B} = 0$ |
| Faraday's Law of Induction | $\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}} {\partial t}$ | $\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}} {\partial t}$ |
| Ampère's Law, or the Biot-Savart Law, plus displacement current | $\nabla \times \mathbf{B} = \mu\ \mathbf{J} + \frac{1}{c^2} \frac{\partial \mathbf{E}} {\partial t}$ | $\nabla \times \mathbf{H} = \mathbf{J} + \frac{\partial \mathbf{D}} {\partial t}$ |
In these, E denotes the electric field, B denotes the magnetic field, D denotes the electric displacement field, and H denotes the magnetic field strength or auxiliary field. J denotes the free current density, and $\rho$ denotes the free electric charge density.
The use of separate D and H fields is sometimes helpful in engineering problems involving dielectric polarizability and magnetic permeability of the materials involved. But the more elegant "pure" form removes D and H through the equations:
$\mathbf{D} = \epsilon \mathbf{E}$
$\mathbf{H} = \mathbf{B} / \mu$
where $\epsilon\,$ and $\mu\,$, often written $\epsilon_0\,$ and $\mu_0\,$, are physical constants known as the permittivity of the vacuum and the permeability of the vacuum. (The subscript zero refers to the vacuum values. By setting these constants to other values to denote the permittivity and permeability of other materials, one can solve various problems in electrodynamics and optics conveniently.)
The quantity $\epsilon \mu\,$ has units of seconds squared per meter squared. So, letting
$c = \frac{1}{\sqrt{\epsilon \mu}}\,$
we get $c\,$ as a speed. Plugging in laboratory measurements of $\epsilon\,$ and $\mu\,$ gives a value of $3 \times 10^8$ meters per second, which is the speed of light. Careful mathematical analysis by Maxwell showed that these equations predict electromagnetic radiation at this speed.
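A quick numerical sanity check of this relation, using the standard SI values of the two constants:

```python
import math

eps0 = 8.854187817e-12    # permittivity of the vacuum, F/m
mu0 = 4 * math.pi * 1e-7  # permeability of the vacuum, H/m

c = 1.0 / math.sqrt(eps0 * mu0)
print(c)   # ~2.9979e8 m/s, the measured speed of light
```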
Here are the equations expressed in integral form, though the differential versions are generally taken to be the "real" Maxwell equations. These forms can be seen to be equivalent to the differential forms through the use of the general Stokes' Theorem. The form known as Gauss's Theorem (k=3) takes care of the equations involving the divergence, and the form commonly known as just Stokes' Theorem (k=2) takes care of those involving the curl.
| Name | E and B | E, B, D, and H |
|---|---|---|
| Coulomb's law | $\oint_S \mathbf{E} \cdot \mathrm{d}\mathbf{A} = \frac{1}{\epsilon} \int_V \rho\, \mathrm{d}V$ | $\oint_S \mathbf{D} \cdot \mathrm{d}\mathbf{A} = \int_V \rho\, \mathrm{d}V$ |
| Absence of monopoles | $\oint_S \mathbf{B} \cdot \mathrm{d}\mathbf{A} = 0$ | $\oint_S \mathbf{B} \cdot \mathrm{d}\mathbf{A} = 0$ |
| Faraday's Law | $\oint_C \mathbf{E} \cdot \mathrm{d}\mathbf{l} = - \int_S \frac{\partial\mathbf{B}}{\partial t} \cdot \mathrm{d} \mathbf{A}$ | $\oint_C \mathbf{E} \cdot \mathrm{d}\mathbf{l} = - \int_S \frac{\partial\mathbf{B}}{\partial t} \cdot \mathrm{d} \mathbf{A}$ |
| Ampère / Biot-Savart Law | $\oint_C \mathbf{B} \cdot \mathrm{d}\mathbf{l} = \mu \int_S \mathbf{J} \cdot \mathrm{d} \mathbf{A} + \frac{1}{c^2} \int_S \frac{\partial\mathbf{E}}{\partial t} \cdot \mathrm{d} \mathbf{A}$ | $\oint_C \mathbf{H} \cdot \mathrm{d}\mathbf{l} = \int_S \mathbf{J} \cdot \mathrm{d} \mathbf{A} + \int_S \frac{\partial\mathbf{D}}{\partial t} \cdot \mathrm{d} \mathbf{A}$ |
## What the Four Equations mean
### Coulomb's Law
The first equation is just Coulomb's law of electrostatics, manipulated very elegantly (as usual) by Faraday and Gauss. Coulomb's law simply says that the electric force between two charged particles acts in the direction of the line between them, is repelling if they have like charges and attracting if unlike, is proportional to the product of the charges, and is inversely proportional to the square of the distance between them:
$F = \frac{q_1 q_2}{4 \pi \epsilon\ d^2}$
In this case, the constant defining the strength of the electric force is $4 \pi \epsilon\,$ in the denominator. More about that presently.
In SI units the charges are measured in Coulombs, the force in Newtons, the distance in Meters, and the value of $\epsilon$ is $8.854 \times 10^{-12}$ Coulombs$^2$ per Newton meter$^2$, or Farads per meter.
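For example, a short computation (with two hypothetical 1 μC charges held 10 cm apart, values chosen only for illustration):

```python
import math

eps0 = 8.854e-12     # F/m
q1, q2 = 1e-6, 1e-6  # two 1 microcoulomb charges
d = 0.1              # 10 cm apart

F = q1 * q2 / (4 * math.pi * eps0 * d**2)
print(F)   # about 0.9 N, repulsive since the charges have the same sign
```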
Michael Faraday reformulated the electric and magnetic forces in terms of fields. He said that what was really happening was that each charge was creating an electric field (called E) that acted on the other charge. The field created by the charge q1, as observed at distance d, is
$E = \frac{q_1}{4 \pi \epsilon\ d^2}$
and points directly outward from that charge, in all directions. The force felt by charge q2 is
$F = q_2 E\,$
Now consider a sphere of radius d with the charge at the center. If $\rho$ is the charge density in Coulombs per cubic meter (Maxwell's equations are in terms of densities), the total charge in some volume is the integral, over that volume, of $\rho$.
So we have
$q = \int_V \rho\, \mathrm{d}V$
Now the field at the surface of the sphere is
$\frac{q_1}{4 \pi \epsilon\ d^2}$, or $\frac{1}{4 \pi \epsilon\ d^2} \int_V \rho\, \mathrm{d}V$
That field is directly outward, perpendicular to the sphere's surface, and is uniform over the surface. The integral of the field over the surface is $4 \pi d^2$ times that (the surface area of the sphere is $4 \pi d^2$; this is why we have the pesky factor of $4 \pi$ in various formulas; remember that d is the distance, and hence is the sphere's radius, not its diameter), so
$\int_V \frac{\rho}{\epsilon}\, \mathrm{d}V = \oint_S \mathbf{E} \cdot \mathrm{d}\mathbf{A}$
But, by Gauss's Theorem,
$\oint_S \mathbf{E} \cdot \mathrm{d}\mathbf{A} = \int_V \nabla \cdot \mathbf{E}\ \mathrm{d}V$
So
$\int_V \nabla \cdot \mathbf{E}\ \mathrm{d}V = \int_V \frac{\rho}{\epsilon}\, \mathrm{d}V$
Since this is true for any volume, we have
$\nabla \cdot \mathbf{E} = \frac{\rho}{\epsilon}$
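As an illustration (my own sketch, not part of the original article; the charge, its off-centre position, and the grid resolution are arbitrary choices), one can check the integral form of Gauss's law numerically for a point charge placed inside a unit sphere; the computed flux should come out to $q/\epsilon$:

```python
import numpy as np

eps0 = 8.854e-12                    # F/m
q = 1e-9                            # 1 nC point charge
r_q = np.array([0.3, 0.0, 0.0])     # charge placed off-centre, inside the unit sphere

# Integrate E . dA over the unit sphere with a simple theta-phi midpoint grid.
n_th, n_ph = 400, 400
th = (np.arange(n_th) + 0.5) * np.pi / n_th
ph = (np.arange(n_ph) + 0.5) * 2 * np.pi / n_ph
TH, PH = np.meshgrid(th, ph, indexing="ij")

# Points on the unit sphere; the outward unit normal equals the position vector.
pts = np.stack([np.sin(TH) * np.cos(PH),
                np.sin(TH) * np.sin(PH),
                np.cos(TH)], axis=-1)

d = pts - r_q                                       # vector from the charge to each surface point
r = np.linalg.norm(d, axis=-1)
E = q / (4 * np.pi * eps0) * d / r[..., None]**3    # Coulomb field of the point charge

dA = np.sin(TH) * (np.pi / n_th) * (2 * np.pi / n_ph)   # area element on the unit sphere
flux = np.sum(np.einsum("ijk,ijk->ij", E, pts) * dA)

print(flux, q / eps0)   # the two numbers should agree to several digits
```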
### Absence of Magnetic Monopoles
The second of the equations is just like the first, but for the magnetic field. The divergence of B must be the spatial density of magnetic monopoles. Since they have never been observed (though various Grand Unified Theories might allow for them), the value is zero.
This wasn't formulated initially in terms of monopoles, but was actually a statement that magnetic "lines of force" (the lines that intuitively describe the field) never end. They just circulate around various conductors carrying electric current. In contrast to this, lines of the electric field can be thought to "begin" and "end" on charged particles.
### Faraday's Law
The third equation contains the result of Faraday's experiments with "electromagnetic induction"—a changing magnetic field creates an electric field, and that electric field circulates around the area experiencing the change in total magnetic flux. (Remember that the curl operator measures the extent to which a vector field runs in circles.) We won't go into the details of his experiments, except to note that he discovered that moving a coil of wire (a loop to pick up a circulating electric field) through a magnetic field (for example, by putting it on a shaft and turning it) led to the invention of electric generators, and hence made a major contribution to the industrialization of the world. Not bad for a theoretician.
### Biot-Savart Law
The Biot-Savart Law, which see, tells how flowing electric current gives rise to a magnetic field. The form that is useful to us is "Ampere's circuital law", which says that, in the vicinity of an infinitely long straight wire carrying an electric current, the magnetic field goes in circles around the wire, following a right-hand rule. The field strength is:
$B = \frac{\mu\ I}{2 \pi R}$
Now consider the path integral of the magnetic field around a circle perpendicular to the wire, with radius R, and centered at the wire. The magnetic field is parallel to that path everywhere, and the path length is $2 \pi R\,$, so:
$\oint_C \vec{B} \cdot \vec{dl} = 2 \pi R B = \mu I$
Now, by Stokes' theorem, this path integral of the B field is the same as the surface integral of the curl of the B field:
$\iint (\nabla \times \vec{B}) \cdot \vec{dA} = \oint_C \vec{B} \cdot \vec{dl} = \mu\ I = \iint \mu \vec{J} \cdot \vec{dA}$
Where $\vec{J}\,$ is the current flow density, in amperes per square meter. The integral of $\vec{J}\,$ over any surface is just the total amount of current flowing across that surface. Since the contribution of current density to the magnetic field is linear, this formula must work for any surface, so we have:
$\nabla \times \vec{B} = \mu \vec{J}$
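A numerical illustration of the circuital law (a sketch: the infinite wire is approximated by a long finite segment, and the current, distance, and grid are arbitrary). Summing the Biot-Savart contributions of the segments reproduces $B = \mu I / (2\pi R)$:

```python
import numpy as np

mu0 = 4 * np.pi * 1e-7   # T*m/A
I = 2.0                  # current in the wire, A
R = 0.05                 # distance of the field point from the wire, m

# Wire along the z axis, field point at (R, 0, 0).
L = 50.0                               # wire half-length, m (>> R, mimicking an infinite wire)
z = np.linspace(-L, L, 200001)
dz = z[1] - z[0]
seg = np.column_stack([np.zeros_like(z), np.zeros_like(z), z])     # segment positions
dl = np.column_stack([np.zeros_like(z), np.zeros_like(z), np.full_like(z, dz)])
r_vec = np.array([R, 0.0, 0.0])[None, :] - seg                     # segment -> field point
r = np.linalg.norm(r_vec, axis=1)

# dB = mu0 I / (4 pi) * dl x r_hat / r^2, summed over all segments.
dB = mu0 * I / (4 * np.pi) * np.cross(dl, r_vec) / r[:, None]**3
B_y = dB.sum(axis=0)[1]                 # by symmetry only the y-component survives

print(B_y, mu0 * I / (2 * np.pi * R))   # the two values should agree closely
```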
There is another term in the fourth of Maxwell's equations, called Maxwell's displacement current. This provides for a contribution to the magnetic field from a variation, in time, of the electric field. This phenomenon is hard to exhibit experimentally, and was added by Maxwell on theoretical grounds in order to satisfy charge conservation. So the final form is:
$\nabla \times \vec{B} = \mu\ \vec{J} + \frac{1}{c^2} \frac{\partial \vec{E}} {\partial t}$
## Formulations in Relativity
Relativity provides a unification of the electric and magnetic fields into a second-rank tensor defined on four-dimensional spacetime. This is called the Faraday tensor, denoted $F^{\alpha\beta}\,$. In this formulation, Maxwell's equations are
• $F^{\alpha\beta}_{;\beta} = \mu J^\alpha\,$
• $F_{[\alpha\beta;\lambda]} = 0\,$
Where $J^\alpha\,$ is the relativistic "four-current density"—its spatial components are the usual current density $\vec{J}\,$, and its time component is the charge density $\rho\,$. The subscripted semicolon in the first equation is the "covariant gradient" of tensor calculus. The subscript with brackets in the second equation is the "antisymmetrizer".
In the language of Exterior Calculus, the equations can be rewritten even more compactly as:
• $\mathrm{d}\bold{F}=0$
• $\mathrm{d}\ *\ \bold{F} = *\ \bold{J}$
where d is the exterior derivative operator, * is the Hodge star operator, and F is the Faraday tensor.
http://physics.stackexchange.com/questions/1507/how-many-onsagers-solutions-are-there/1829 | # How many Onsager's solutions are there?
Update: I provided an answer of my own (reflecting the things I discovered since I asked the question). But there is still lot to be added. I'd love to hear about other people's opinions on the solutions and relations among them. In particular short, intuitive descriptions of the methods used. Come on, the bounty awaits ;-)
Now, this might look like a question into the history of Ising model but actually it's about physics. In particular I want to learn about all the approaches to Ising model that have in some way or other relation to Onsager's solution.
Also, I am asking three questions at once but they are all very related so I thought it's better to put them under one umbrella. If you think it would be better to split please let me know.
When reading articles and listening to lectures one often encounters so called Onsager's solution. This is obviously very famous, a first case of a complete solution of a microscopic system that exhibits phase transition. So it is pretty surprising that each time I hear about it, the derivation is (at least ostensibly) completely different.
To be more precise and give some examples:
1. The most common approach seems to be through computation of eigenvalues of some transfer matrix.
2. There are a few approaches through the Peierls contour model. This can then be reformulated in terms of a model of cycles on edges, and then one can either proceed by cluster expansion or again by some matrix calculations.
The solutions differ in what type of matrix they use and also whether or not they employ Fourier transform.
Now, my questions (or rather requests) are:
1. Try to give another example of an approach that might be called Onsager's solution (you can also include variations of the ones I already mentioned).
2. Are all of these approaches really that different? Argue why or why not some (or better yet, all) of them could happen to be equivalent.
3. What approach did Onsager actually take in his original paper. In other words, which of the numerous Onsager's solutions is actually the Onsager's solution.
For 3.: I looked at the paper for a bit and I am a little perplexed. On the one hand it looks like it might be related to the transfer matrix, but on the other hand it talks about quaternion algebras. Now that might be just a quirk of Onsager's approach to the 4x4 matrices that pop up in basically every other solution, but I'll need some time to understand it; so any help is appreciated.
-
Seems to me the differences are just in the techniques. The solution of the system should remain the same, and that's why they call it Onsager's solution, while there are many different ways to arrive at it. Quite typical of physics problems, no? Whether you solve the Kepler problem with Newtonian mechanics or Hamiltonian mechanics, the solution is the same. However, a new insight can be coupled to the different technique. Or one of the techniques can be more easily generalizable. If nobody comes along with a ready answer, I'm willing to look a bit more deeply into this. – Raskolnikov Dec 1 '10 at 19:46
@Raskolnikov: there are (at least) two different meanings of the word solution. One is result and the other is derivation. Mathematicians have it easier because they can talk about theorems and proofs. Here I am not asking about the result/theorem but rather about the nature of derivations/proofs and whether they are equivalent or not. This is because (similarly to mathematics) different derivations/proofs can give you new insights into results/theorems/problems... I am glad to hear that you might look into it later :-) – Marek Dec 1 '10 at 19:55
Yeah, I kinda got that was what you were looking after, the technique to arrive at the solution rather than just the solution itself. Maybe you should put a bounty on the question to attract people. :p – Raskolnikov Dec 1 '10 at 19:58
@Raskolnikov: yeah, good idea. – Marek Dec 1 '10 at 20:11
@Marek: I think the problem is that the community is still not very big in terms of theoretical topics; few people are familiar with the concepts of QFT. This might change with the proposed "theoretical Physics" page on StackExchange. We might shift some questions later from here to the new page if this is possible. – Robert Filter Dec 12 '10 at 9:48
## 4 Answers
I wish I could do your question justice, but I will content myself with a remark on the connection between two of the solution methods mentioned in Barry McCoy's article, namely Baxter's commuting transfer matrix method, and Onsager's original algebraic approach.
In a certain sense, these methods have to be considered distinct since Baxter's method is applicable to a vast family of additional models, whereas Onsager's method applies only to Ising and closely related models. A related fact is that, while the free energy and order parameter have been computed for a great many two-dimensional models, only for Ising are the correlation functions completely understood. (They can be written in terms of simple determinants.) Among solvable two-dimensional models, Ising appears to be very special. It lies at the intersection of many infinite families of models. Although all solvable lattice models have lots of unexpected structure - in particular, they have infinitely many conserved quantities - Ising is even more special. Onsager's original method of solution exploited some of this special structure - in particular, the direct-product structure of the transfer matrices.
Since Baxter's commuting transfer matrix method does not exploit this special structure, it can be used to solve the many other models that don't have it. His method uses the Yang-Baxter relation to establish that the transfer matrices commute for different values of the spectral parameter (which, in the Ising model, parametrizes the difference between horizontal and vertical coupling strengths). Since the eigenvectors must therefore be independent of the spectral parameter, one can derive functional relations for the eigenvalues, which can then be solved.
Onsager's method was expanded upon by Dolan and Grady, who showed that a certain set of commutation relations implies the existence of an infinite set of conservation laws. In the 1980s, a solvable n-state generalization of the Ising model, known as the superintegrable chiral Potts model, was discovered that satisfies Dolan and Grady's conditions and, as a consequence, has transfer matrices with the same direct-product structure that Onsager exploited in 1944. Interestingly, the superintegrable chiral Potts model corresponds to a special point in a one-parameter family of solvable models, the integrable chiral Potts models. The latter are solvable by Baxter's method, but can be solved by Onsager's method only at the superintegrable point. There seems to be a lot of work going on currently on the correlation functions of the superintegrable chiral Potts model.
The other solution methods that Barry McCoy mentions in his Scholarpedia article - Kaufman's free fermions, the combinatorial method, Baxter and Enting's 399th solution - also seem to make use of the particular structure of the Ising model. In this sense, they are more akin to Onsager's original method than to Baxter's commuting-transfer-matrix method. As you have already suggested, there may be some equivalences among them, but I would have to give this more study before commenting further.
-
Thank you for your answer! I have no time to read it now, but just skimming through, it looks useful so I gave +1. Will come back later to ask further questions :-) – Marek Dec 13 '10 at 10:23
After reading this through I have to mark it as the answer: it answers all of my questions (at least partially) and greatly elaborates on my second question (which interested me most). I am sure a great deal more could be said but for now you left me with lot to think about and many great references I have to sort through (in particular chiral Potts model). Big thanks again! – Marek Dec 13 '10 at 22:18
Seeing as no one is trying to give an answer, I'll take a stab at it myself.
Shortly after writing this question I learned (in this cute answer of Raskolnikov's) about Baxter's wonderful book on exact solutions in statistical mechanics. Slowly but surely I realized that the Ising model has been solved so many times, by so many different methods, by virtually every famous physicist (I'll list some of the solutions later) that it became clear that my question is hopelessly inadequate and only reflects my huge ignorance.
To make up for that, I started reading papers. Onsager's paper itself came out in 1944. In 1949 there appeared Bruria Kaufman's paper where she notes that the transfer matrix can be interpreted as a $2^n$-dimensional representation of $2n$-dimensional rotations. So she introduces spinor analysis (e.g. Pauli and Dirac matrices) and goes ahead to solve the problem. I must say I am in love with this approach (okay, you got me, I am a group person).
In 1952 Kac and Ward used a purely combinatorial method of some polygons (which I don't yet quite understand, but it probably has to do with Peierls contours). Other papers note the duality with a free fermionic field. Or note that Ising is just a special case of the random cluster model; or a dimer model. These papers carry names (in no particular order) such as Potts, Ward, Kac, Kasteleyn, Yang, Baxter, Fisher, Montroll and others. It's quite obvious that it will take me some time to understand (or indeed, even read) all those papers.
So I took a different road and asked Google. Querying all the names above at once returns precious gems:
1. Amazing article over at Scholarpedia. It contains historical treatment, main methods of solution, references to the papers I mentioned and much much more.
2. paper History of Lenz-Ising model
3. paper Spontaneous magnetization of the Ising model
-
The connection between quaternions and spinors should show that Onsager's original approach probably just nicely maps onto Kaufman's approach. Quaternions form a Clifford algebra, and Spin(n) are also Clifford algebras I think, or at least related. Seems like you can take the +100 points. ;) – Raskolnikov Dec 8 '10 at 19:26
@Raskolnikov: I also thought about that connection but I am still not quite comfortable with either of the two papers (honestly, I haven't studied them much) to make it really precise. – Marek Dec 8 '10 at 19:39
@Raskolnikov: By the way, I think you can't really award bounty to your own answer (or at least, it won't gain you reputation), so I would be happier to give it to someone else. If you have more to say about these matters (and it seems to me you do) then please, leave an answer :-) – Marek Dec 8 '10 at 19:39
I think when I studied the 2D Ising model in the mathphys course I had, we used the Peierl's method. But it's such a long time ago, I barely can remember if we really did the full proof or if it was just mentioned. I'm not sure I still have the text of that course. That's a pity. I'm quite busy lately, and I didn't find the time to read Onsager's paper. Besides I'm a bit upset by Onsager's sloppiness. – Raskolnikov Dec 8 '10 at 19:42
@Raskolnikov, one last remark: Spin(n) are not a Clifford algebra but are very closely related. They are (modulo reflections) the unit vectors in $Cℓ(n)$ (e.g. Spin(3) = SU(2) are precisely the unit quaternions). – Marek Dec 8 '10 at 19:56
I am about halfway through the most important part of Onsager's paper, so I'll try to summarize what I've understood so far; I'll edit later when I have more to say.
Onsager starts by using the 1D model to illustrate his methodology and fix some notations, so I'm gonna follow him but I'll use some more "modern" notations.
In the 1D Ising model, only neighbouring spins interact, therefore, the energy of interactions is represented by
$$E=-J\mu^{(k)}\mu^{(k-1)}$$
where $J$ is the interaction strength.
The partition function is
$$Z = \sum_{\mu^{(1)},\ldots,\mu^{(N)}=\pm 1} e^{-\sum_k J\mu^{(k)}\mu^{(k-1)}/kT}$$
Onsager notes that the exponential can be seen as a matrix component:
$$\langle \mu^{(k-1)}| V | \mu^{(k)} \rangle = e^{-J\mu^{(k)}\mu^{(k-1)}/kT}$$
The partition sum becomes the trace of a matrix product in this notation
$$Z = \sum_{\mu^{(1)},\mu^{(N)}=\pm 1} \langle \mu^{(1)}| V^{N-1} | \mu^{(N)} \rangle$$
So for large powers $N$ of $V$, the largest eigenvalue will dominate. In this case, $V$ is just a $2\times 2$ matrix and the largest eigenvalue is $2\cosh(J/kT)=2\cosh(H)$, introducing $H=J/kT$.
Now, to construct the 2D Ising model, Onsager proposes to build it by adding a 1D chain to another 1D chain, and then repeat the procedure to obtain the full 2D model.
First, he notes that the energy of the newly added chain $\mu$ will depend on the chain $\mu'$ to which it is added as follows:
$$E = -\sum_{j=1}^n J \mu_j \mu'_j$$
But if we exponentiate this to go to the partition formula, we get the $n$th power of the matrix we defined previously, so using the notation that Onsager introduced there
$$V_1 = (2 \sinh(2H))^{n/2} \exp(H^{*}B)$$
with $H^{*}=\tanh^{-1}(e^{-2H})$ and $B=\sum_j C_j$ with $C_j$ the matrix operator that works on a chain as follows
$$C_j |\mu_1,\ldots,\mu_j,\ldots,\mu_n \rangle = |\mu_1,\ldots,-\mu_j,\ldots,\mu_n \rangle$$
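Here is a small numerical check of this factorisation (my own sketch for $n=2$, using `numpy` and `scipy.linalg.expm`; the value of $H$ is arbitrary): the matrix $(2\sinh 2H)^{n/2}\exp(H^{*}B)$ should reproduce the interchain Boltzmann weights $e^{H\sum_j \mu_j\mu'_j}$.

```python
import numpy as np
from scipy.linalg import expm

H = 0.6
H_star = np.arctanh(np.exp(-2 * H))

# n = 2 chain: C_j flips spin j, i.e. sigma_x acting on the j-th tensor factor.
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)
B = np.kron(sx, I2) + np.kron(I2, sx)

V1_onsager = (2 * np.sinh(2 * H)) ** (2 / 2) * expm(H_star * B)

# Direct matrix of interchain weights: Kronecker product of the 2x2 matrix exp(H * mu * mu').
M = np.array([[np.exp(H), np.exp(-H)],
              [np.exp(-H), np.exp(H)]])
V1_direct = np.kron(M, M)

print(np.max(np.abs(V1_onsager - V1_direct)))   # ~1e-15: the two matrices agree
```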
Then, to account for the energy contribution from spins within a chain, he notes that the total energy is
$$E = -J' \sum_{j=1}^n \mu_j\mu_{j+1}$$
adding periodicity, that is, the $n$th atom is a neighbor to the 1st. Also note that this interaction strength need not be equal to the interchain interaction strength. He introduces new matrix operators $s_j$ which act on a chain as
$$s_j|\mu_1,\ldots,\mu_j,\ldots,\mu_n \rangle = \mu_j |\mu_1,\ldots,\mu_j,\ldots,\mu_n \rangle$$
and in this way constructs a matrix
$$V_2 = \exp(H'A) = \exp(H'\sum_j s_j s_{j+1})$$
Now, the 2D model can be constructed by adding a chain through application of $V_1$ and then define the internal interactions by using $V_2$. So one gets the following chain of operations
$$\cdots V_2 V_1 V_2 V_1 V_2 V_1 V_2 V_1 V_2 V_1$$
It is thus clear that the matrix to be analyzed in our 2D model is $V=V_2 V_1$. This is our new eigenvalue problem:
$$\lambda | \mu_1,\ldots,\mu_n \rangle = \exp(H'\sum_j s_j s_{j+1}) \sum_{\mu'_1,\ldots,\mu'_n=\pm 1} \exp(H\sum_j \mu_j \mu'_{j})| \mu'_1,\ldots,\mu'_n \rangle$$
Now, the quaternions come into play. Onsager notes that the operators $s_j$ and $C_j$ he constructed form a quaternion algebra.
Basically, the basis elements $(1,s_j,C_j,s_jC_j)$ generate the quaternions and since for different $j$'s the operators commute, we have a tensor product of quaternions, thus a quaternion algebra.
-- To be continued --
-
+1 already and looking forward to the continuation. I will eventually read and (hopefully also understand) Onsager's paper myself but right now it's not my top priority, so your summary is very much welcome! – Marek Dec 11 '10 at 21:07
By the way, meanwhile I learned that Bruria Kaufman was actually Onsager's student and her approach is indeed a simplification of Onsager's. So it probably doesn't make much sense to try to understand the original Onsager's paper. But still, I'd like to (eventually) know and understand both solutions. – Marek Dec 11 '10 at 21:14
@Raskolnikov: Grats and upvote. Waiting for your continuation :) – Robert Filter Dec 12 '10 at 9:50
I got to this place as well. Following thing is the most interesting -- but up to now I'm stuck... – Kostya Dec 12 '10 at 18:14
I get the idea of what he's trying to do, but it's so long that I'm not sure how to summarize it. Maybe I'll skip the derivation of the various commutation relations and just assume them. – Raskolnikov Dec 12 '10 at 18:25
Not to state the obvious, but it seems the information in the Scholarpedia article which @marek mentioned in his answer is more comprehensive than any answer I or anyone else is likely to come up with.
To quote this article "there are five different methods which have been used to compute the free energy of the Ising model". For details best check out the link above. Anything more I add will just be repetition.
As for the bounty, it should go to Barry McCoy - the author of the Scholarpedia article ;)
-
Well, yes and no. It lists five different methods but meanwhile I read somewhere that the combinatorial method and the Onsager/Kaufman method are actually equivalent. And it's quite possible that other methods are too. And that was the biggest part of my question (actually the only one that needs something more than just digging through literature). Seeing as other two parts are more or less answered (giving more solutions by me and clarification of Onsager's by Raskolnikov) I will perhaps ask the remaining part as a separate question and expand on what I mean by equivalent. – Marek Dec 12 '10 at 9:05
1
@space_cadet: Thank you for pointing to the work of Prof. McCoy. I wonder why doctors on spaceships have the time to care about statistical physics. scnr ;) – Robert Filter Dec 12 '10 at 9:54
http://math.stackexchange.com/users/8343/sam-lisi?tab=activity&sort=all&page=2 | # Sam Lisi
reputation 515
website homepages.ulb.ac.be/~samulisi
location Nantes, France
member for 2 years, 2 months
seen 1 hour ago
profile views 283
Postdoc working in symplectic/contact topology. I'm currently at the Université de Nantes.
# 219 Actions
| Date | Action | Detail |
|-------|----------|-----------|
| May1 | comment | An other question about Theorem 3.1 from Morse theory by Milnor |
| May1 | comment | An other question about Theorem 3.1 from Morse theory by MilnorDo you understand what $\phi_t$ is? |
| Apr30 | comment | An other question about Theorem 3.1 from Morse theory by MilnorThe tricky part is that $\phi_{b-a}$ maps $M^a$ to $M^b$. To check this, calculate what $f( \phi_{b-a}(x))$ is, if $f(x) \le a$. This is where you use the fact that your new vector field flows downwards so that the time $t$ flow changes the value of $f$ by $t$. |
| Apr30 | comment | An other question about Theorem 3.1 from Morse theory by MilnorThe fact that it is a diffeomorphism onto its image follows just from the existence/uniqueness of ODE, the long-time existence (essentially by construction) and smoothness with respect to initial condition. I.e. this is standard ODE theory. |
| Apr30 | comment | An other question about Theorem 3.1 from Morse theory by MilnorThe purpose of (1) is to make (2) obvious and to make it easy to write down a formula for $r_t$. Can you construct a simple example (e.g. for domains in $\mathbb{R}^2$) and see what he is doing? Can you say more about where you are stuck? |
| Apr30 | answered | Implicit Function Theorem and Rank Theorem Misunderstandings. |
| Apr30 | comment | Dynamical Systems. Bendixson's and Dulac-Bendixson's theorems.Great timing, I was just typing essentially the same thing. While this is clear from what you wrote, I want to emphasise that $U$ has positive area because $C$ is embedded. ($C$ is embedded by uniqueness of solutions to ODE.) |
| Apr27 | answered | Non-degenerate solutions to constant Hamiltonian flow |
| Apr27 | comment | Hamiltonian Isotopy in Symplectic geometryCan you give a bit more detail about what you are looking for? What do you mean by "visualize"? what makes a context "good"? Have you studied any classical physics where the Hamiltonian is giving the total energy of a conservative system? if so, this is the right visualization (at least on cotangent bundles). |
| Apr27 | answered | Lagrangian subspaces |
| Apr16 | comment | Area of flux homomorphism in symplectic topologyI don't understand what you are trying to do, specifically. Are you trying to compute this explicitly? for that to make sense, it seems to me that you need some more explicit information. Or are you trying to do something else? this isn't clear to me. |
| Apr13 | comment | global vector fields in local coordinatesIn general you can't, for the reason you explained. What are you trying to do? |
| Apr13 | comment | Structure on manifoldsI should add the question: have you studied surfaces in $\mathbb{R}^3$? This is the easiest scenario with a shape operator, and it might be worthwhile to understand this first. If you already know about this use of the shape operator, I can't tell you anything further. |
| Apr13 | comment | Structure on manifolds@Merri: I don't know the book you mention, and I know very little about uses of shape operators. However, I would guess that the key feature of a "shape operator" is that it is the differential of a "Gauss map". There are many situations in which you have something that looks like a Gauss map (but isn't the classical one), so maybe there are many non-standard shape operators. I don't really know. |
| Apr11 | comment | Can the chain rule be relaxed to allow one of the functions to not be defined on an open set? |
| Apr11 | comment | Can the chain rule be relaxed to allow one of the functions to not be defined on an open set?I think you have some typos in your proposed modification of Theorem 3, or else I have no idea what you are asking for. In particular, I think you mean $f(E)$ instead of $F(E)$, but I still don't understand what exactly you are trying to accomplish. |
| Apr11 | comment | Can the chain rule be relaxed to allow one of the functions to not be defined on an open set?Observe that if the initial data is contained in the interior of the orthant ($\Omega$ above), then, for small time, the solution curve stays in $\Omega$, and thus you can use the standard chain rule. What's not at all clear is what behaviour you want to see at the boundary. |
| Apr11 | comment | Can the chain rule be relaxed to allow one of the functions to not be defined on an open set?I don't understand your motivation. Call $\Omega$ the (strictly) positive orthant. Then the non-negative orthant is $\overline \Omega$. If your initial data $x_0 \in \Omega$, then for small time, your solution curve $x(t)$ will live in $\Omega$. However, if you have initial data at the boundary $x_0 \in \partial \overline{\Omega}$, there is no reason for your integral curve to stay in $\overline \Omega$ even infinitesimally. In the former case, one expects $V(x(t))$ to make sense for short time, whereas in the latter, you don't. |
| Apr11 | comment | Symplectic geometry as a prequisite for Heegaard Floer homologyDo you have someone in your department who can give you some guidance? there is a lot of symplectic geometry in both of these books that is not relevant to you and there are a number of technical points about holomorphic curves that are not explained in these books. A correct answer to your question, imo, requires more information about what you know and about what you are interested in doing. Unfortunately, even with this information, I wouldn't be qualified to give you a good answer, since I don't work in Heegaard Floer homology. |
| Apr8 | comment | Why is Cartan Formula just an avatar of Leibniz rule?Thank you for the bounty. I am grateful to you for the video. |
http://physics.stackexchange.com/questions/32865/is-the-stress-finite-at-the-centre-of-a-spherical-continuous-charge-distribution?answertab=votes | # Is the stress finite at the centre of a spherical continuous charge distribution?
At the centre of a spherical continuous charge distribution with no external electric fields, the electric field is zero from symmetry arguments. But does the stress at the centre remain finite?
-
Stress from what? From the electromagnatic field, or from the other stuff? – Ron Maimon Jul 26 '12 at 1:46
@RonMaimon from the electric field of the charges acting on the centre. I thought someone might come up with a nice simple argument on the limiting process of $dr^3$ and how it affects the answer. – Physiks lover Jul 26 '12 at 11:54
## 1 Answer
The electromagnetic stress would still be zero, yes, because you can calculate it from
$$\sigma_{i j} \equiv \epsilon_0 \left(E_i E_j - \frac{1}{2} \delta_{ij} E^2\right) + \frac{1}{\mu_0} \left(B_i B_j - \frac{1}{2} \delta_{ij} B^2\right)$$
At a point where $\vec{E} = 0$, and given that this is an electrostatic situation so $\vec{B} = 0$, all components of the stress tensor will be zero.
Quantities which might not be zero would be those which involve integrals of the electric field, for example. (Also the mechanical pressure - the diagonal components of the mechanical stress tensor - would be nonzero in practice, but I'm guessing that's not what you are asking about.)
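For what it's worth, the tensor above is easy to evaluate numerically; here is a minimal NumPy sketch (the sample field values are arbitrary, chosen only to contrast the centre with a generic point):

```python
import numpy as np

eps0 = 8.854e-12          # F/m
mu0 = 4e-7 * np.pi        # H/m

def maxwell_stress(E, B):
    """sigma_ij = eps0*(E_i E_j - delta_ij E^2/2) + (B_i B_j - delta_ij B^2/2)/mu0"""
    E, B = np.asarray(E, float), np.asarray(B, float)
    I = np.eye(3)
    return (eps0 * (np.outer(E, E) - 0.5 * I * (E @ E))
            + (np.outer(B, B) - 0.5 * I * (B @ B)) / mu0)

print(maxwell_stress([0, 0, 0], [0, 0, 0]))     # at the centre: identically zero
print(maxwell_stress([1e3, 0, 0], [0, 0, 0]))   # away from the centre: finite stresses
```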
http://unapologetic.wordpress.com/2010/08/03/measurable-functions-on-pulled-back-measurable-spaces/?like=1&source=post_flair&_wpnonce=d0199d491e | # The Unapologetic Mathematician
## Measurable Functions on Pulled-Back Measurable Spaces
We start today with a possibly surprising result; pulling back a $\sigma$-ring puts significant restrictions on measurable functions. If $f:X\to Y$ is a function from a set into a measurable space $(Y,\mathcal{T})$, and if $g:X\to\mathbb{R}$ is measurable with respect to the $\sigma$-ring $f^{-1}(\mathcal{T})$ on $X$, then $g(x_1)=g(x_2)$ whenever $f(x_1)=f(x_2)$.
To see this fix a point $x_1\in X$, and let $F_1\subseteq Y$ be a measurable set containing $f(x_1)$. Its preimage $f^{-1}(F_1)\subseteq X$ is then a measurable set containing $x_1$. We can also define the level set $\{x\in X\vert g(x)=g(x_1)\}$, which is a measurable set since $g$ is a measurable function. Thus the intersection
$\displaystyle\{x\in X\vert g(x)=g(x_1)\}\cap f^{-1}(F_1)\subseteq X$
is measurable. That is, it’s in $f^{-1}(\mathcal{T})$, and so there exists some measurable $F\subseteq Y$ so that $f^{-1}(F)$ is this intersection. Clearly $f(x_1)\in F$, and so $f(x_2)$ is as well, by assumption. But then $x_2\in f^{-1}(F)\subseteq\{x\in X\vert g(x)=g(x_1)\}$, and we conclude that $g(x_2)=g(x_1)$.
From this result follows another interesting property. If $f:X\twoheadrightarrow Y$ is a mapping from a set $X$ onto a measurable space $(Y,\mathcal{T})$, and if $g:(X,f^{-1}(\mathcal{T}))\to(Z,\mathcal{U})$ is a measurable function, then there is a unique measurable function $h:(Y,\mathcal{T})\to(Z,\mathcal{U})$ so that $g=h\circ f$. That is, any function that is measurable with respect to a measurable structure pulled back along a surjection factors uniquely through the surjection.
Indeed, since $f$ is surjective, for every $y\in Y$ we have some $x\in X$ so that $f(x)=y$. Then we define $h(y)=g(x)$, so that $g(x)=h(f(x))$, as desired. There is no ambiguity about the choice of which preimage $x$ of $y$ to use, since the above result shows that any other choice would lead to the same value of $g(x)$. What’s not immediately apparent is that $h$ is itself measurable. But given a set $M\in\mathcal{U}$ we can consider its preimage $\{y\in Y\vert h(y)\in M\}$, and the preimage of this set:
$\displaystyle\begin{aligned}f^{-1}\left(\{y\in Y\vert h(y)\in M\}\right)&=\left\{x\in X\big\vert f(x)\in\{y\in Y\vert h(y)\in M\}\right\}\\&=\{x\in X\vert h(f(x))\in M\}\\&=\{x\in X\vert g(x)\in M\}\\&=g^{-1}(M)\end{aligned}$
which is measurable since $g$ is a measurable function. But then this set must be the preimage of some measurable subset of $Y$, which shows that the preimage $h^{-1}(M)\subseteq Y$ is measurable.
It should be noted that this doesn’t quite work out for functions $f$ that are not surjective, because we cannot uniquely determine $h(y)$ if $y$ has no preimage under $f$.
http://mathoverflow.net/questions/109287/decorations-in-szabos-combinatorial-spectral-sequence | ## Decorations in Szabo’s combinatorial spectral sequence
Szabo in http://arxiv.org/abs/1010.4252 gives a combinatorial candidate for what an explicit calculation of the spectral sequence of branched double covers should yield. In other words he gives a conjectural combinatorial model for HF-hat of branched double covers of links.
However, the input for his algorithm is not a bare link diagram; it's a "decorated" diagram. The decoration is a choice of orientations for the arcs which connect the two strands of the link at crossings. Szabo says in the paper that these decorations are analogues of the extra structure given by Heegaard diagrams and almost complex structures in Heegaard Floer homology. But I don't understand how.
So, my question is, what is the analogue of Szabo's decorations in Heegaard-Floer homology? (Since the two theories are expected to be isomorphic, there should be such an analogue.) I suspect this has to do with Lipshitz's cylindrical reformulation but am not sure. By the way, this whole story is over Z/2 so signs are not involved.
## 1 Answer
I'm pretty sure those orientations on arcs correspond to an orientation of the surgery link in the construction of Ozsvath-Szabo's original spectral sequence from Khovanov homology to HF of the branched double cover.
More precisely, to construct this spectral sequence, you start with a link projection, and take the branched double cover. Each crossing is contained in a small ball with two strands, whose branched double cover is the solid torus. The arc connecting these two strands lifts to a knot in the branched double cover (the core of the solid torus), and taking all the crossings together you get a link $L$.
Thus, choosing orientations at each crossing, as in Szabo's paper, is the same as orienting this link $L$. In the end, the spectral sequence is independent of these orientations, but you have to choose them at the beginning.
These orientations are also used in constructing odd Khovanov homology (http://arxiv.org/abs/0710.4300), for the same reason (it's the sign convention you naturally get on the Khovanov complex by considering it as the $E_1$ term of the spectral sequence to HF of the branched double cover).
http://math.stackexchange.com/questions/79828/can-a2-2b2-have-a-solution-where-a-b-are-in-z-but-not-zero | # Can a^2 = 2b^2 have a solution where a, b are in Z but not zero? [duplicate]
Possible Duplicate:
How can you prove that the square root of two is irrational?
Can $a^2 = 2b^2$ have a solution where $a, b$ are in $\mathbb{Z}$ but not zero?
$\mathbb{Z}$ = positive and negative whole numbers
If you can solve $a^2=2b^2$, then $$2=\left(\frac{a}{b}\right)^2$$ which means that the square root of $2$ is rational. See the linked page for some proofs that this is impossible. – Eric♦ Nov 7 '11 at 11:34
## 1 Answer
If you take the square root of both sides you get:
$|a|=\sqrt{2} \cdot |b|$
So the LHS is an integer while the RHS is an irrational number (for $b \neq 0$), so equality cannot hold; therefore the equation has no solution in the nonzero integers.
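For completeness, here is a direct argument that avoids invoking the irrationality of $\sqrt 2$ (which is itself usually proved from this very equation): write $a = 2^m a'$ and $b = 2^n b'$ with $a', b'$ odd. Then $a^2 = 2b^2$ reads $2^{2m}(a')^2 = 2^{2n+1}(b')^2$, so the exact power of $2$ dividing the left-hand side is even while the power dividing the right-hand side is odd, which is impossible. Hence there are no solutions with $a, b$ nonzero integers.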
http://mathoverflow.net/questions/60230/are-extensions-of-nuclear-frechet-spaces-nuclear/87212 | ## Are extensions of nuclear Fréchet spaces nuclear?
Consider the category of Fréchet spaces, the morphisms being continuous linear maps with closed image. Suppose that we have a short exact sequence in that category:
$0 \rightarrow V_1 \rightarrow V_2 \rightarrow V_3 \rightarrow 0$.
Of course $V_1$ and $V_3$ are nuclear if $V_2$ is. I recently asked myself if the converse might be true. I haven't found anything useful in the standard literature (Treves, Schaefer) but that might be just me being too ignorant to see the obvious. I'm grateful if someone could shed some light on this.
Cheers,
Ralf
I suspect that nuclearity of $V_3$ might force the extension to split, because nuclear Fr\'echet spaces have some kind of lifting property... does this sound like it might work? – Yemon Choi Mar 31 2011 at 20:19
Yemon, this isn't true. See my comment on Ralf's answer. – Andrew Stacey Apr 1 2011 at 7:34
Your "category" isn't a category because morphisms with closed image aren't closed under composition. Indeed, every continuous linear map $f: X \to Y$ is the composition of $X \to X \times Y, x \mapsto (x,f(x))$ which has closed image because the graph is closed by continuity and the projection $X \times Y \to Y$ which also has closed image. You can still speak of short exact sequences by requiring that the left morphism be the kernel of the right morphism and that the right morphism be the cokernel of the left morphism. Google for quasi-abelian categories and exact categories for more on this – Theo Buehler Feb 2 2012 at 17:21
Theo, is the cokernel in the category of Frechet spaces and continuous linear maps really what we want in this case? Don't we want a smaller class of "exact sequences", namely those which are exact as diagrams in Vect? (I guess this is addressed in eg your Expositiones article) – Yemon Choi Feb 2 2012 at 18:10
## 2 Answers
You question was answered even for locally convex spaces by S. Dierolf and W. Roelcke in proposition 3.8 of the article "On the three-space-problem for topological vector spaces". Collect. Math. 32, p. 13-35 (1981).
The splitting theory for Frechet spaces is nowadays very well understood by results of D. Vogt and others. This can be found in my "Derived functors in functional analysis", Springer Lecture Notes in Mathematics 1810 (2003).
But answered which way? Could you add a brief note at least saying what the answer actually is so that those casually reading this, or those without access to the references, can at least know what the answer is. – Andrew Stacey Feb 1 2012 at 9:28
In Ralf's question, $V_2$ is nuclear whenever $V_1$ and $V_3$ are nuclear. This can be reformulated without exact sequences: A locally convex space is nuclear whenever it has a nuclear subspaces such that the corresponding quotient is also nuclear. In Dierolf's and Roelcke's article there many more results of this type as well as many counterexamples. – Jochen Wengenroth Feb 1 2012 at 9:40
Thank you for the hint, Yemon. You are indeed right. I found the following paper which proves the lifting property that you mentioned: emis.de/journals/PM/55f1/pm55f107.ps.gz The splitting follows from example 3 on p. 96. Thanks again :-)
Glad to hear that my vague memory was correct - I wrote the comment not to be cryptic, but because I was in a rush earlier and didn't have time to chase down the references. – Yemon Choi Mar 31 2011 at 22:06
Example 3 on p96 says "Each nuclear DF-space has the lift property for the class of Frechet spaces", so it does **not** apply. Indeed, here's an example of a short exact sequence of nuclear Frechet spaces that does not split: $L_\flat \mathbb{R} \to L\mathbb{R} \to \mathbb{R}^{\mathbb{N}}$. The middle is smooth loops in R and the left-hand is smooth loops that are infinitely flat at the identity. This does not split, but all are nuclear Frechet spaces. – Andrew Stacey Apr 1 2011 at 7:34
Indeed, I read in the introduction to that paper "Gejler has proved that a nuclear Frechet space has the lifting property for the class of all nuclear Frechet spaces if and only if it is finite dimensional." – Andrew Stacey Apr 1 2011 at 8:00
Thanks for the corrections, Andrew. – Yemon Choi Apr 2 2011 at 4:51
Oh, my bad and a prime example of wishful reading. Thank you for the correction and the counterexample, Andrew. – Ralf Apr 13 2011 at 0:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9453005194664001, "perplexity_flag": "middle"} |
http://physics.stackexchange.com/questions/57275/gravitationally-bound-systems-in-an-expanding-universe | # Gravitationally bound systems in an expanding universe
This isn't yet a complete question; rather, I'm looking for a qual-level question and answer describing a gravitationally bound system in an expanding universe. Since it's qual level, this needs a vastly simplified model, preferably one in a Newtonian framework (if this is even possible) that at least shows the spirit of what happens. If it isn't possible, and there is some important principle that any Newtonian model will miss, that is probably the first thing to point out. Otherwise, ...
I'm thinking of a setup that starts something like this:
A body of mass $m$ orbits another of mass $M \gg m$ (taken to be the origin) in a circle of radius $R$ (and consequently with a period $T = 2 \pi \sqrt{ R^3 / GM }$). At time $t = 0$, space begins to slowly expand at a constant Hubble rate $H$ (where "slowly" means $HT \ll 1$, so that the expansion in one period is negligible compared to the orbital radius).
Now here is where I'm not sure how to phrase the problem. I'm thinking instead of something like the "bead on a pole" problems. Then the bead has generalized Lagrangian coordinates relative to a fixed position on the pole, but the pole is expanding according to some external function $a(t)$, a la FLRW metric (For a constant Hubble rate, $H = \dot a / a$ and so $a(t) = e^{Ht}$). So, we can call the bead's distance from the origin in "co-moving units" the Lagrangian coordinate $q$. Then the distance of the bead from the origin is $d(t) = q(t) a(t)$, giving us a time dependent Lagrangian.
Basically, I need a 3D version of this that will work for the gravitationally bound system described. I also hope that the problem can be solved as an adiabatic process, which might mean changing coordinates to be around the original position relative to the more massive body at the origin, rather than co-moving coordinates.
EDIT 1: I won't claim that I fully understood the details of Schirmer's work, but I think one of the big takeaway points is that the cosmological expansion damages the stability of circular orbits and causes them to decay. I finished my very hand-wavy "Newtonian" model of a solar system in an expanding universe, and I don't think it captures the essence of this part of the GR solution. The model does have several sensible properties:
• There is a distance at which bound solutions are impossible and the two bodies are guaranteed to expand away from each other
• There is still a circular orbit (I haven't checked that it is stable) whose radius is modified by a term involving the Hubble constant.
• Using solar parameters, this shift is entirely negligible; it is larger (though still small) on a galactic scale.
Here is the model:
Without this scaling factor, the Lagrangian would be $$\mathcal{L} = \frac{1}{2} m \left( \dot r^2 + r^2 \dot \theta^2 + r^2 \sin^2 \theta \, \dot \phi^2 \right) + \frac{GMm}{r}$$

Adding in the scale factor, all the kinetic terms will pick up a factor $e^{2Ht}$ and the potential a factor $e^{-Ht}$: $$\mathcal{L} = \frac{1}{2} m \left(\dot r^2 + r^2 \dot \theta^2 + r^2 \sin^2 \theta \, \dot \phi^2 \right) e^{2Ht}+ \frac{GMm}{r} e^{-Ht}$$ (I have not added in any derivatives of the scaling factor; this would mean I was just changing coordinates and not adding in an externally controlled expansion of space.)

Since the problem is spherically symmetric, the total angular momentum is conserved and the motion lies in a plane (the expansion of space does not change this) at $\theta = \pi/2$. This reduces the effective Lagrangian to $$\mathcal{L} = \frac{1}{2} m \left( \dot r^2 + r^2 \dot \phi^2 \right) e^{2Ht} + \frac{GMm}{r} e^{-Ht}$$

Now let's change coordinates to $r = \eta e^{-Ht}$. This represents a point that is stationary with respect to the origin (i.e. the massive body that the smaller body orbits). Then $\dot r e^{Ht}= \dot \eta - H \eta$, so the Lagrangian becomes $$\mathcal{L} = \frac{1}{2} m \left((\dot \eta - H \eta)^2 + \eta^2 \dot \phi^2 \right) + \frac{GMm}{\eta}$$

This looks like a Lagrangian that has been modified slightly by the parameter $H$. After these manipulations, $\phi$ is still a cyclic coordinate, and its conjugate momentum $p_\phi = m\eta^2 \dot \phi = L$ is still conserved. The momentum conjugate to $\eta$ is $$p_{\eta} = m (\dot \eta - H \eta)$$ So, the equation of motion for $\eta$ is $$m \frac{d}{dt} (\dot \eta - H \eta) = m (\ddot \eta - H \dot \eta) = - mH(\dot \eta - H \eta) + m \eta \dot \phi^2 - \frac{GMm}{\eta^2}$$ Canceling terms and substituting in the equation of motion for $\phi$, this becomes $$\ddot \eta = H^2 \eta + \frac{L^2}{m^2 \eta^3} - \frac{GM}{\eta^2}$$

So, the expansion of space shows up as a force term proportional to $\eta$. For large $\eta_0$ at time $t = 0$, there is a solution $\eta(t) = \eta_0 e^{Ht}$, where $\dot \eta = H \eta$ agrees with Hubble's law (the corresponding comoving coordinate is $r(t) = \eta_0$). There is also a highly unstable orbital solution for large $\eta$, where gravitational force just balances cosmological expansion. This new term also shifts the location of the "stable orbit" slightly (I haven't checked that it actually is stable in this situation).

The location of the stable circular orbit when $H = 0$ is $$\eta_0 = \frac{L^2}{GMm^2}$$ Then let $$\eta = \eta_0 + \delta \eta = \eta_0 \left( 1 + \frac{\delta \eta}{\eta_0} \right)$$ To lowest order, the shift in location of the stable orbit is $$\frac{\delta \eta}{\eta_0} = \frac{H^2}{GM/\eta_0^3 - H^2}$$

For the orbit of the earth around the sun, this shift would be entirely negligible - about 15 pm. Taking the mass to be the mass of the Milky Way and the distance to the distance of the Sun from the center, the shift would be a bit larger - a fractional amount of about 2e-7, which is several hundred AU - but still quite small.
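For a rough numerical check of this equation of motion, here is a minimal SciPy sketch (toy units with $GM = 1$, specific angular momentum $\ell = L/m = 1$ and $H = 10^{-3}$; the values are purely illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

GM, H, ell = 1.0, 1e-3, 1.0        # toy units; ell = L/m is the specific angular momentum

def rhs(t, y):
    eta, etadot = y
    return [etadot, H**2 * eta + ell**2 / eta**3 - GM / eta**2]

eta0 = ell**2 / GM                 # circular-orbit radius for H = 0
sol = solve_ivp(rhs, (0.0, 500.0), [eta0, 0.0], rtol=1e-10, atol=1e-12)
print(sol.y[0].min(), sol.y[0].max())   # stays within a few parts in 10^6 of eta0

# beyond eta ~ (GM/H^2)^(1/3) the H^2*eta term dominates and the body recedes instead
```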
## 1 Answer
I wrote an unpublished paper on this once, and presented the poster at an AAS meeting. You can actually do some relatively simple model building with this if you look at Schwarzschild-de Sitter solutions of Einstein's equation.${}^{1}$
Once you have the exact solution, you can then look at circular orbits, and do the usual stability analysis that you use to derive the $r > 6M$ limit that you get in the Schwarzschild solution. What you will find is that, in the Schwarzschild-de Sitter solution, you will get a cubic equation. One solution will give you a $r < 0$ value, which is unphysical, but you will also get an innermost stable circular orbit and an outermost stable circular orbit. The latter represents matter getting pulled off of the central body by the cosmology.
There are also exact solutions that involve universes with nonzero matter densities other than the cosmological constant with a central gravitating body. These solutions generically don't have a timelike killing vector nor globally stable orbits. I ran numerical simulations of these orbits and published them in my dissertation's appendices.
EDIT: stack exchange seems to be killing hotlinks, but search for my name on the arxiv if you're interested.
${}^{1}$EDIT 2: you can find the Schwarzschild-de Sitter solution by replacing $1 - \frac{2M}{r}$ everywhere with $1-\frac{2M}{r} - \frac{1}{3}\Lambda r^{2}$. It should be easy enough to prove that that solution satisfies Einstein's equation for vacuum plus cosmological constant.
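A small numerical illustration of that circular-orbit analysis (my own sketch, with arbitrary values $M = 1$, $\Lambda = 10^{-4}$ and test angular momentum $L = 4$ in geometrized units; $V_{\mathrm{eff}}(r) = f(r)(1 + L^2/r^2)$ is the standard effective potential for timelike geodesics in this metric):

```python
import numpy as np

M, Lam, L = 1.0, 1e-4, 4.0

def f(r):                                    # Schwarzschild-de Sitter factor
    return 1.0 - 2.0 * M / r - Lam * r**2 / 3.0

r = np.linspace(2.5, 80.0, 400_000)
V = f(r) * (1.0 + L**2 / r**2)               # effective potential for timelike geodesics
dV = np.gradient(V, r)
d2V = np.gradient(dV, r)

for i in np.where(np.diff(np.sign(dV)) != 0)[0]:
    kind = "stable" if d2V[i] > 0 else "unstable"
    print(f"circular orbit near r = {r[i]:6.2f}  ({kind})")
# for this L: an inner unstable orbit, a stable one, and an outer unstable orbit created
# by the Lambda term; scanning over L maps out the innermost/outermost stable radii
```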
– anna v Mar 19 at 4:42
@annav: a few comments: if you put in the mass of a galaxy's SMBH and the cosmological constant and look for the OSCO, you'll get something that is order of magnitude correct for a galaxy's radius, so while the cosmology is almost certainly insignificant at the solar system scale, it probably has a significant effect for galaxy formation. – Jerry Schirmer Mar 19 at 5:07
2) expanding universes tend to cause orbits to decay in asymptotically FRLW solutions that aren't asymptotically de Sitter (the mass of the central object gets blueshifted, if you will), so things will tend to not get unbound, but fall into the central object. For solar system parameters and observed Lambda and rho, you get decay times that are longer than the age of the universe. – Jerry Schirmer Mar 19 at 5:10
If I understand you correctly the solutions for the problem have to do with large enough masses pertinent to the beginnings of the universe and not of the "steady expanding state" we find ourselves in now. I was puzzled by the the solution with innermost and outermost orbits, that you interpret as mass pulled from inner body. ( sounded like a solar system to my naive ears) – anna v Mar 19 at 6:42
OK. Thanks for your clarifications. – anna v Mar 19 at 13:19
http://mathhelpforum.com/discrete-math/97743-serious-trouble-finding-big-o.html | # Thread:
1. ## serious trouble finding big O
Original statement:
$f(n)=(n^3+4log_2 n)/(n+4)$
I'm sure the n+4 in both the denominator and numerator has something to do with it, but if I make the $n$ in the denominator an $n^3$, wouldn't that then make it smaller than the LHS?
I'm supposed to prove
$f(n)=O(n^2)$
which means I need the inequality to make the RHS bigger than the LHS.
2. Originally Posted by Roclemir
Original statement:
$f(n)=(n^3+4log_2 n)/(n+4)$
I'm sure the n+4 in both the denominator and numerator have something to do with it, but if i make $n$in the denominator a $n^3$, wouldn't that then make it smaller the the LHS?
I'm suppose to prove
$f(n)=0(n^2)$
which means i need the inequality to make the RHS bigger than the LHS
For $n>1$ (which makes everything of interest here positive):
$f(n)=\frac{n^3+4\log_2 n}{n+4}<\frac{n^3+4\log_2 n}{n}$ $=n^2+\frac{4 \log_2 (n)}{n}$
Then as $\frac{4 \log_2 (n)}{n} \to 0$ as $n \to \infty$ for large enough $n$ we have:
$\frac{4 \log_2 (n)}{n}<n^2$
So for large enough $n$ :
$f(n)<2n^2$
CB
3. ## thanks mate, just on one line...
Thanks. All makes sense except that one line starting with "Then as... we have:", hate to be a noob, but can you please explain what you mean with the arrows and stuff thanks.
4. ## Beautiful CB, just filling in some details for him
That notation means as n approaches infinity, $\frac{4log_2(n)}{n}$ becomes arbitrarily close to 0. It is read the limit of $\frac{4log_2(n)}{n}$ as n approaches infinity is 0.
So given any $\epsilon >0$, there exists an integer N for which, $|\frac{4log_2(n)}{n}|<\epsilon$ for all n>N.
In particular, if $\epsilon=1$, there exists an N, such that for all n>N, $\frac{4log_2(n)}{n}<1$, this means we clearly have the identity $\frac{4log_2(n)}{n}<1<n^2\Rightarrow n^2+\frac{4log_2(n)}{n}<n^2+n^2=2n^2$.
Thus $\left|\frac{n^2+\frac{4\log_2(n)}{n}}{n^2}\right|\leq 2$ for all $n>N$, which is the definition of $f(n)=O(n^2)$.
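A quick numerical sanity check of the bound (a small Python sketch; the sample values of n are arbitrary):

```python
import math

def f(n):
    return (n**3 + 4 * math.log2(n)) / (n + 4)

for n in (10, 100, 1000, 10**6):
    print(n, f(n) / n**2, f(n) < 2 * n**2)
# the ratio f(n)/n^2 stays bounded (it tends to 1), consistent with f(n) = O(n^2)
```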
http://www.haskell.org/haskellwiki/index.php?title=User:Michiexile/MATH198/Lecture_10&oldid=31703 | # User:Michiexile/MATH198/Lecture 10
IMPORTANT NOTE: THESE NOTES ARE STILL UNDER DEVELOPMENT. PLEASE WAIT UNTIL AFTER THE LECTURE WITH HANDING ANYTHING IN, OR TREATING THE NOTES AS READY TO READ.
This lecture will be shallow, and leave many things undefined, hinted at, and is mostly meant as an appetizer, enticing the audience to go forth and seek out the literature on topos theory for further studies.
### 1 Subobject classifier
One very useful property of the category Set is that the powerset of a given set is still a set; we have an internal concept of object of all subobjects. Certainly, for any category (small enough) C, we have a contravariant functor $Sub(-): C\to Set$ taking an object to the set of all equivalence classes of monomorphisms into that object; with the image Sub(f) given by the pullback diagram
If the functor Sub( − ) is representable - meaning that there is some object $X\in C_0$ such that Sub( − ) = hom( − ,X) - then the theory surrounding representable functors, connected to the Yoneda lemma, gives us a number of good properties.
One of them is that every representable functor has a universal element; a generalization of the kind of universal mapping properties we've seen in definitions over and over again during this course; all the definitions that posit the unique existence of some arrow in some diagram given all other arrows.
Thus, in a category with a representable subobject functor, we can pick a representing object $\Omega\in C_0$, such that Sub(X) = hom(X,Ω). Furthermore, picking a universal element corresponds to picking a subobject $\Omega_0\hookrightarrow\Omega$ such that for any object A and subobject $A_0\hookrightarrow A$, there is a unique arrow $\chi: A\to\Omega$ such that there is a pullback diagram
One can prove that $\Omega_0$ is terminal in C, and we shall call $\Omega$ the subobject classifier, and this arrow $\Omega_0=1\to\Omega$ true. The arrow $\chi$ is called the characteristic arrow of the subobject.
In Set, all this takes on a familiar tone: the subobject classifier is a 2-element set, with a true element distinguished; and a characteristic function of a subset takes on the true value for every element in the subset, and the other (false) value for every element not in the subset.
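To make the Set picture concrete, here is a small illustration (in plain Python rather than Haskell, with made-up sets; it only echoes the subset/characteristic-function correspondence just described):

```python
U = {1, 2, 3, 4, 5}                      # an object of Set
S = {2, 4}                               # a subobject, i.e. a subset of U

chi_S = {x: x in S for x in U}           # characteristic arrow chi_S : U -> {False, True}

# pulling back "true" along chi_S recovers the subobject
assert {x for x in U if chi_S[x]} == S

# conjunction/disjunction of predicates = intersection/union of subsets
P, Q = {1, 2, 3}, {2, 3, 4}
chi_P = {x: x in P for x in U}
chi_Q = {x: x in Q for x in U}
assert {x for x in U if chi_P[x] and chi_Q[x]} == P & Q
assert {x for x in U if chi_P[x] or chi_Q[x]} == P | Q
```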
### 2 Defining topoi
Definition A topos is a cartesian closed category with all finite limits and with a subobject classifier.
It is worth noting that this is a far stronger condition than anything we can even hope to fulfill for the category of Haskell types and functions. The functional programming relevance will take a back seat in this lecture, in favour of usefulness in logic and set theory replacements.
### 3 Properties of topoi
The meat is in the properties we can prove about topoi, and in the things that turn out to be topoi.
Theorem Let E be a topos.
• E has finite colimits.
#### 3.1 Power object
Since a topos is closed, we can take exponentials. Specifically, we can consider $[A\to\Omega]$. This is an object such that $hom(B,[A\to\Omega]) = hom(A\times B, \Omega) = Sub(A\times B)$. Hence, we get an internal version of the subobject functor. (pick B to be the terminal object to get a sense for how global elements of $[A\to\Omega]$ correspond to subobjects of A)
((universal property of power object?))
#### 3.2 Internal logic
We can use the properties of a topos to develop a logic theory - mimicking the development of logic by considering operations on subsets in a given universe:
Classically, in Set, and predicate logic, we would say that a predicate is some function from a universe to a set of truth values. So a predicate takes some sort of objects, and returns either True or False.
Furthermore, we allow the definition of sets using predicates:
$\{x\in U: P(x)\}$
Looking back, though, there is no essential difference between this, and defining the predicate as the subset of the universe directly; the predicate-as-function appears, then, as the characteristic function of the subset. And types are added as easily - we specify each variable, each object, to have a set it belongs to.
This way, predicates really are subsets. Type annotations decide which set the predicate lives in. And we have everything set up in a way that opens up for the topos language above.
We'd define, for predicates P,Q acting on the same type:
$\{x\in A : \top\} = A$
$\{x\in A : \bot\} = \emptyset$
$\{x : (P \wedge Q)(x)\} = \{x : P(x)\} \cap \{x : Q(x)\}$
$\{x : (P \vee Q)(x)\} = \{x : P(x)\} \cup \{x : Q(x)\}$
$\{x\in A : (\neg P)(x) \} = A \setminus \{x\in A : P(x)\}$
$\{x : \exists y. P(x,y) \}$ is given by the construction (...)
$\{x : \forall y. P(x,y) \}$ is given by the construction (...)
We could then start to define primitive logic connectives as set operations; the intersection of two sets is the set on which both the corresponding predicates hold true, so $\wedge = \cap$. Similarly, the union of two sets is the set on which either of the corresponding predicates holds true, so $\vee = \cup$. The complement of a set, in the universe, is the negation of the predicate, and all other propositional connectives (implication, equivalence, ...) can be built with conjunction (and), disjunction (or) and negation (not).
So we can mimic all these in a given topos:
We say that a universe U is just an object in a given topos.
A predicate is a subobject of the universe.
Given predicates P,Q, we define the conjunction $P\wedge Q$ to be the pullback (pushout?)
Image:ToposConjunction.png
This mimics, closely, the idea of the conjunction as an intersection.
We further define the disjunction $P\vee Q$ to be the pushout (pullback?)
Image:ToposDisjunction.png
And we define negation $\neg P$ by (...)
We can expand this language further - and introduce predicative connectives.
Now, a statement of the form $\forall x. P(x)$ is usually taken to mean that for all $x\in U$, the predicate P holds true. Thus, translating to the operations-on-subsets paradigm, $\forall x. P(x)$ corresponds to the statement P = U.
((double check in Awodey & Barr-Wells!!!!))
So we can define a topoidal $\forall x. P(x)$ by ((diagrams))
And similarly, the statement $\exists x. P(x)$ means that P is non-empty.
### 4 Exercises
No homework at this point. However, if you want something to think about, a few questions and exercises:
1. blah | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 28, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8995941877365112, "perplexity_flag": "middle"} |
http://mathhelpforum.com/algebra/205625-having-problems-coming-up-equation-problem.html | 1Thanks
• 1 Post By Plato
# Thread:
1. ## Having problems coming up with equation for this problem.
The sum of the squares of two consecutive positive odd numbers is 35914. What are the numbers? (Hint: If one of the numbers is x, then the other number is x+2.
$x^2+(x+2)^2=35914$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9190049767494202, "perplexity_flag": "middle"} |
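One way to finish from here (a sketch of the remaining algebra; this continuation is not from the original thread): expanding gives $2x^2 + 4x + 4 = 35914$, i.e. $x^2 + 2x - 17955 = 0$, so $x = -1 + \sqrt{1 + 17955} = -1 + \sqrt{17956} = -1 + 134 = 133$ for the positive root. The numbers are therefore $133$ and $135$; indeed $133^2 + 135^2 = 17689 + 18225 = 35914$.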
http://math.stackexchange.com/questions/138369/inscrutable-proof-in-humphreys-book-on-lie-algebras-and-representations?answertab=oldest | # Inscrutable proof in Humphrey's book on Lie algebras and representations
This is a question pertaining to Humphrey's Introduction to Lie Algebras and Representation Theory
Is there an explanation of the lemma in §4.3-Cartan's Criterion? I understand the proof given there but I fail to understand how anybody could have ever devised it or had the guts to prove such a strange statement...
Lemma: let $k$ be alg. closed of characteristic $0$, $V$ finite dimensional over $k$, $A\subset B\subset \mathrm{End}(V)$ subspaces and $M$ the set of endomorphisms $x$ of $V$ such that $[x,B]\subset A$. Suppose $x\in M$ is such that $\forall y\in M, \mathrm{Tr}(xy)=0$, then $x$ is nilpotent.
The proof uses the diagonalisable$+$nilpotent decomposition, and goes on to show that all eigenvalues of $x$ are $=0$ by showing that the $\mathbb{Q}$ subspace of $k$ they generate has only the $0$ linear functional.
Added: (t.b.) here's the page from Google books for those without access:
It would help enormously if you reproduced at least the lemma, even better the proof, so that this question isn't only answerable by people with access to Humphreys. – Qiaochu Yuan Apr 29 '12 at 9:38
Ok ^^ I was hoping to avoid it as I am wriring this from a phone but you're right. – Olivier Bégassat Apr 29 '12 at 9:41
I included links and an image of the lemma and its proof. I hope that's what you intended. – t.b. Apr 29 '12 at 9:51
@t.b. That would be lovely, do I have to do something for these to appear in the question? – Olivier Bégassat Apr 29 '12 at 9:56
Your edit and mine were more or less simultaneous. I merged the two edits. Reload the page and see if that's okay now. – t.b. Apr 29 '12 at 9:57
http://physics.stackexchange.com/questions/tagged/differential-geometry+ads-cft | # Tagged Questions
### Help with the understanding of boundary conditions on $AdS_3$
So I am trying to reproduce results in this article, precisely the 3rd chapter 'Virasoro algebra for AdS$_3$'. I have the metric in this form: ...
### Diffeomorphisms and boundary conditions
I am trying to find out how the authors in this paper (arXiv:0809.4266) found the general form of the diffeomorphism which preserves the boundary conditions in the same paper. I found this ...
### An introductory resource for learning AdS space
Can someone please point me to introductory resources about the geometry of Anti-de Sitter space? What are some examples of other spaces used in theoretical physics? I'm learning Differential ...
http://physics.stackexchange.com/questions/15652/whats-the-relation-between-virtual-photons-and-electromagnetic-potentials | # What's the relation between virtual photons and electromagnetic potentials?
Given that:
1) virtual photons mediate the electric and magnetic force fields
2) the magnetic field is the curl of the magnetic vector potential
3) the electric field is the negative gradient of the electric scalar potential
How do we understand the vector and scalar potentials in terms of virtual photons?
Specifically, I mean curl-free magnetic vector and gradient-free electric scalar potential fields, where no electromagnetic waves or physical forces are present.
For instance, what's actually happening outside of the solenoid in the Aharonov-Bohm experiment that alters the phase of the passing electron? Are virtual particles involved there? This question has really stumped me because I only have an undergrad level of understanding in physics. Thanks for any help.
here is an excellent description of virtual particles - in particular, why we should NOT think of them as particles at all. – FrankH Oct 13 '11 at 13:03
@FrankH: at last some interesting laymen description of 'virtual particles'. (Thank you for the link) – Helder Velez Mar 21 '12 at 23:07
## 2 Answers
Virtual photon clouds are responsible for potentials, not electric and magnetic fields, and this is what makes the explanation of forces in terms of photon exchange somewhat difficult for a newcomer. The photon propagation is not gauge invariant, and the Feynman gauge is the usual one for getting the forces to come out from particle exchange. In another useful gauge, Dirac's, the photons are physical, and the electrostatic force is instantaneous.
When you have a solenoid, the photons are generated by the currents in the solenoid, and a charge moving through this virtual photon cloud has an altered energy and canonical momentum according to the distribution of the photons at any point in space. The effect can be understood from the current-current form of the interaction:
$$J^\mu(x) J^\nu(y) G_{\mu\nu}(x-y)$$
Where G is the propagation function, and the current J is the probability amplitude for emitting/absorbing a photon. The propagation function reproduces the vector potential from a source J, as it acts on another source J at another point.
There is no difference between classical sources producing photons and classical currents producing a vector potential--- they are the same. The electric and magnetic field description is not fundamental, and the gauge dependence of the photon propagator is just something you have to live with.
The electric and magnetic potentials are the temporal and spatial parts, respectively, of a four-vector which plays the role of connection in a U(1) gauge theory. The quantized connection provides the interpretation of what you call "virtual photons" which mediate (in a sense) electromagnetic interactions. Yes, I guess these virtual photons are responsible for the AB effect, but the electric and magnetic fields associated with these photons outside the internal region of the solenoid never come into existence; namely, they never go 'on shell'. This means that no physical electromagnetic field can be detected in that region whatsoever.
http://www.reference.com/browse/wiki/Affine_transformation | Related Searches
Definitions
# Affine transformation
In geometry, an affine transformation or affine map or an affinity (from the Latin, affinis, "connected with") between two vector spaces (strictly speaking, two affine spaces) consists of a linear transformation followed by a translation:
$x \mapsto Ax + b$
In the finite-dimensional case each affine transformation is given by a matrix A and a vector b, satisfying certain properties described below.
Physically, an affine transformation is one that preserves
1. Collinearity between points, i.e., three points which lie on a line continue to be collinear after the transformation
2. Ratios of distances along a line, i.e., for distinct colinear points $p_1$, $p_2$, $p_3$, the ratio $|p_2-p_1| / |p_3-p_2|$ is preserved
In general, an affine transform is composed of zero or more linear transformations (rotation, scaling or shear) and a translation (or "shift"). Several linear transformations can be combined into a single matrix, thus the general formula given above is still applicable.
## Representation of affine transformations
Ordinary vector algebra uses matrix multiplication to represent linear transformations, and vector addition to represent translations. Using an augmented matrix, it is possible to represent both using matrix multiplication. The technique requires that all vectors are augmented with a "1" at the end, and all matrices are augmented with an extra row of zeros at the bottom, an extra column — the translation vector — to the right, and a "1" in the lower right corner. If A is a matrix,
$$\begin{bmatrix} \vec{y} \\ 1 \end{bmatrix} = \begin{bmatrix} A & \vec{b} \\ 0, \ldots, 0 & 1 \end{bmatrix} \begin{bmatrix} \vec{x} \\ 1 \end{bmatrix}$$
is equivalent to the following
$$\vec{y} = A \vec{x} + \vec{b}.$$
Ordinary matrix-vector multiplication always maps the origin to the origin, and could therefore never represent a translation, in which the origin must necessarily be mapped to some other point. By appending a "1" to every vector, one essentially considers the space to be mapped as a subset of a space with an additional dimension. In that space, the original space occupies the subset in which the final index is 1. Thus the origin of the original space can be found at (0,0, ... 0, 1). A translation within the original space by means of a linear transformation of the higher-dimensional space is then possible (specifically, a shear transformation). This is an example of homogeneous coordinates.
The advantage of using homogeneous coordinates is that one can combine any number of affine transformations into one by multiplying the matrices. This is used extensively by graphics software.
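As a concrete illustration, here is a minimal NumPy sketch of the augmented-matrix representation (the particular rotation and translation are arbitrary):

```python
import numpy as np

def affine(A, b):
    """(n+1)x(n+1) augmented matrix representing the affine map x -> A x + b."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    n = b.size
    M = np.eye(n + 1)
    M[:n, :n], M[:n, n] = A, b
    return M

theta = np.pi / 2
rotate = affine([[np.cos(theta), -np.sin(theta)],
                 [np.sin(theta),  np.cos(theta)]], [0, 0])
translate = affine(np.eye(2), [3, 1])

combined = translate @ rotate                  # one matrix: rotate, then translate
print(combined @ np.array([1.0, 0.0, 1.0]))    # point (1, 0) -> [3. 2. 1.], i.e. (3, 2)
```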
## Properties of affine transformations
An affine transformation is invertible if and only if A is invertible. In the matrix representation, the inverse is:
$$\begin{bmatrix} A^{-1} & -A^{-1}\vec{b} \\ 0, \ldots, 0 & 1 \end{bmatrix}.$$
The invertible affine transformations form the affine group, which has the general linear group of degree n as subgroup and is itself a subgroup of the general linear group of degree n + 1.
The similarity transformations form the subgroup where A is a scalar times an orthogonal matrix. If and only if the determinant of A is 1 or –1 then the transformation preserves area; these also form a subgroup. Combining both conditions we have the isometries, the subgroup of both where A is an orthogonal matrix.
Each of these groups has a subgroup of transformations which preserve orientation: those where the determinant of A is positive. In the last case this is in 3D the group of rigid body motions (proper rotations and pure translations).
For any matrix A the following propositions are equivalent:
• A – I is invertible
• A does not have an eigenvalue equal to 1
• for all b the transformation has exactly one fixed point
• there is a b for which the transformation has exactly one fixed point
• affine transformations with matrix A can be written as a linear transformation with some point as origin
If there is a fixed point we can take that as the origin, and the affine transformation reduces to a linear transformation. This may make it easier to classify and understand the transformation. For example, describing a transformation as a rotation by a certain angle with respect to a certain axis is easier to get an idea of the overall behavior of the transformation than describing it as a combination of a translation and a rotation. However, this depends on application and context. Describing such a transformation for an object tends to make more sense in terms of rotation about an axis through the center of that object, combined with a translation, rather than by just a rotation with respect to some distant point. For example "move 200 m north and rotate 90° anti-clockwise", rather than the equivalent "with respect to the point 141 m to the northwest, rotate 90° anti-clockwise".
Affine transformations in 2D without fixed point (so where A has eigenvalue 1) are:
• pure translations
• scaling in a given direction, with respect to a line in another direction (not necessarily perpendicular), combined with translation that is not purely in the direction of scaling; the scale factor is the other eigenvalue; taking "scaling" in a generalized sense it includes the cases that the scale factor is zero (projection) and negative; the latter includes reflection, and combined with translation it includes glide reflection.
• shear combined with translation that is not purely in the direction of the shear (there is no other eigenvalue than 1; it has algebraic multiplicity 2, but geometric multiplicity 1)
## Affine transformations and linear transformations
In a geometric setting, affine transformations are precisely the functions that map straight lines to straight lines.
A linear transformation is a function that preserves all linear combinations; an affine transformation is a function that preserves all affine combinations. An affine combination is a linear combination in which the sum of the coefficients is 1.
An affine subspace of a vector space (sometimes called a linear manifold) is a coset of a linear subspace; i.e., it is the result of adding a constant vector to every element of the linear subspace. A linear subspace of a vector space is a subset that is closed under linear combinations; an affine subspace is one that is closed under affine combinations.
For example, in R3, the origin, lines and planes through the origin and the whole space are linear subspaces, while points, lines and planes in general as well as the whole space are the affine subspaces.
Just as members of a set of vectors are linearly independent if none is a linear combination of the others, so also they are affinely independent if none is an affine combination of the others. The set of linear combinations of a set of vectors is their "linear span" and is always a linear subspace; the set of all affine combinations is their "affine span" and is always an affine subspace. For example, the affine span of a set of two points is the line that contains both; the affine span of a set of three non-collinear points is the plane that contains all three. Vectors
$v_1, v_2, \ldots, v_n$
are linearly dependent if there exists a vector $a = [a_1, a_2, \ldots, a_n]^T$ such that both
$$\exists\, i \in \{1, \ldots, n\} : a_i \neq 0$$
and
$$[v_1, v_2, \ldots, v_n]\, a = 0$$
are true.
Similarly they are affinely dependent if the same is true and also
$\sum_{i=1}^n a_i = 1.$
Vector $a$ is an affine dependence among the vectors $v_1, v_2, \ldots, v_n$.
The set of all invertible affine transformations forms a group under the operation of composition of functions. That group is called the affine group, and is the semidirect product of $K^n$ and $\mathrm{GL}(n, K)$.
## Affine transformation of the plane
To visualise the general affine transformation of the Euclidean plane, take labelled parallelograms ABCD and A′B′C′D′. Whatever the choices of points, there is an affine transformation T of the plane taking A to A′, and each vertex similarly. Supposing we exclude the degenerate case where ABCD has zero area, there is a unique such affine transformation T. Drawing out a whole grid of parallelograms based on ABCD, the image T(P) of any point P is determined by noting that T(A) = A′, T applied to the line segment AB is A′B′, T applied to the line segment AC is A′C′, and T respects scalar multiples of vectors based at A. [If A, E, F are colinear then the ratio length(AF)/length(AE) is equal to length(A′F′)/length(A′E′).] Geometrically T transforms the grid based on ABCD to that based in A′B′C′D′.
Affine transformations don't respect lengths or angles; they multiply area by a constant factor
area of A′ B′ C′ D′ / area of ABCD.
A given T may either be direct (respect orientation), or indirect (reverse orientation), and this may be determined by its effect on signed areas (as defined, for example, by the cross product of vectors).
## Example of an affine transformation
The following equation expresses an affine transformation in GF(2) (with "+" representing XOR):
$$\{\,a'\,\} = M\{\,a\,\} + \{\,v\,\},$$
where $M$ is the matrix
$$\begin{bmatrix} 1&0&0&0&1&1&1&1 \\ 1&1&0&0&0&1&1&1 \\ 1&1&1&0&0&0&1&1 \\ 1&1&1&1&0&0&0&1 \\ 1&1&1&1&1&0&0&0 \\ 0&1&1&1&1&1&0&0 \\ 0&0&1&1&1&1&1&0 \\ 0&0&0&1&1&1&1&1 \end{bmatrix}$$
and $\{v\}$ is the vector
$$\begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \\ 0 \\ 1 \\ 1 \\ 0 \end{bmatrix}.$$
For instance, the affine transformation of the element $\{a\} = x^7 + x^6 + x^3 + x = \{11001010\}$ in big-endian binary notation = {CA} in big-endian hexadecimal notation, is calculated as follows:
$a_0' = a_0 \oplus a_4 \oplus a_5 \oplus a_6 \oplus a_7 \oplus 1 = 0 \oplus 0 \oplus 0 \oplus 1 \oplus 1 \oplus 1 = 1$
$a_1' = a_0 \oplus a_1 \oplus a_5 \oplus a_6 \oplus a_7 \oplus 1 = 0 \oplus 1 \oplus 0 \oplus 1 \oplus 1 \oplus 1 = 0$
$a_2' = a_0 \oplus a_1 \oplus a_2 \oplus a_6 \oplus a_7 \oplus 0 = 0 \oplus 1 \oplus 0 \oplus 1 \oplus 1 \oplus 0 = 1$
$a_3' = a_0 \oplus a_1 \oplus a_2 \oplus a_3 \oplus a_7 \oplus 0 = 0 \oplus 1 \oplus 0 \oplus 1 \oplus 1 \oplus 0 = 1$
$a_4' = a_0 \oplus a_1 \oplus a_2 \oplus a_3 \oplus a_4 \oplus 0 = 0 \oplus 1 \oplus 0 \oplus 1 \oplus 0 \oplus 0 = 0$
$a_5' = a_1 \oplus a_2 \oplus a_3 \oplus a_4 \oplus a_5 \oplus 1 = 1 \oplus 0 \oplus 1 \oplus 0 \oplus 0 \oplus 1 = 1$
$a_6' = a_2 \oplus a_3 \oplus a_4 \oplus a_5 \oplus a_6 \oplus 1 = 0 \oplus 1 \oplus 0 \oplus 0 \oplus 1 \oplus 1 = 1$
$a_7' = a_3 \oplus a_4 \oplus a_5 \oplus a_6 \oplus a_7 \oplus 0 = 1 \oplus 0 \oplus 0 \oplus 1 \oplus 1 \oplus 0 = 1.$
Thus, $\{a'\} = x^7 + x^6 + x^5 + x^3 + x^2 + 1 = \{11101101\}$ = {ED}
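The same calculation in code (a short Python sketch of the bit formulas above; incidentally, this particular matrix and constant are the ones used in the affine step of the AES S-box, though nothing here depends on that):

```python
def affine_gf2(byte):
    """a'_i = a_i ^ a_(i+4) ^ a_(i+5) ^ a_(i+6) ^ a_(i+7) ^ c_i, indices mod 8,
    where the constant bits are c_0..c_7 = 1,1,0,0,0,1,1,0 (the vector {v} above)."""
    c = 0x63                                  # packs c_0..c_7 = 1,1,0,0,0,1,1,0
    out = 0
    for i in range(8):
        bit = ((byte >> i) ^ (byte >> ((i + 4) % 8)) ^ (byte >> ((i + 5) % 8))
               ^ (byte >> ((i + 6) % 8)) ^ (byte >> ((i + 7) % 8)) ^ (c >> i)) & 1
        out |= bit << i
    return out

print(hex(affine_gf2(0xCA)))                  # -> 0xed, matching {CA} -> {ED} above
```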
## See also
• the transformation matrix for an affine transformation
• matrix representation of a translation
• affine geometry
• homothetic transformation
• similarity transformation
• linear (the second meaning is affine transformation in 1D)
• 3D projection
• flat (geometry)
## External links
• Geometric Operations: Affine Transform, R. Fisher, S. Perkins, A. Walker and E. Wolfart.
• by Bernard Vuilleumier, The Wolfram Demonstrations Project.
• Affine Transform on PlanetMath | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 20, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8789305090904236, "perplexity_flag": "middle"} |
http://particlephd.wordpress.com/ | # High Energy PhDs
A discussion of particle physics and strings
August 17, 2009
## Wilsonian vs 1PI Actions
Posted by Sparticles under Uncategorized
[2] Comments
In SUSY gauge theories there’s a big distinction between the Wilsonian and the 1PI effective actions. Seiberg emphasizes the distinction between the two during his lectures (e.g. see the discussion that arose during his explanation of Seiberg Duality in the SIS07 school). This isn’t explained in any of the usual QFT textbooks, so I figured it was worth writing a little note that at least collects some references.
The most critical application of the distinction is manifested in the beta function for supersymmetric gauge theories. The difference between the 1PI and Wilsonian effective actions ends up being the difference between the 1-loop exact beta function and the NSVZ “exact to all orders” beta function that includes multiple loops. For a discussion of this, most papers point to Shifman and Vainshtein’s paper, “Solution of the anomaly puzzle in SUSY gauge theories and the Wilson operator expansion.” [doi:10.1016/0550-3213(86)90451-7]. It’s worth noting that Arkani-Hamed and Murayama further clarified this ambiguity in terms of the holomorphic versus the canonical gauge coupling in, “Holomorphy, Rescaling Anomalies, and Exact beta Functions in SUSY Gauge Theories,” [hep-th/9707133].
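For orientation (the normalizations below are the ones usually quoted; conventions differ between references), the holomorphic coupling runs only at one loop,

$$\frac{d}{d\log\mu}\,\frac{8\pi^2}{g_h^2} \;=\; 3\,T(\mathrm{adj}) - \sum_i T(R_i) \;\equiv\; b_0\,,$$

while the NSVZ beta function for the canonically normalized coupling resums the matter anomalous dimensions $\gamma_i$ together with a denominator coming from the rescaling anomaly:

$$\beta(g) \;=\; -\,\frac{g^3}{16\pi^2}\,\frac{3\,T(\mathrm{adj}) - \sum_i T(R_i)\,(1-\gamma_i)}{1 - T(\mathrm{adj})\,g^2/8\pi^2}\,.$$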
The distinction between the two is roughly this:
• The Wilsonian effective action is given by setting a scale $\mu$ and integrating out all modes whose mass or momentum are larger than this scale. This quantity has no IR subtleties because IR divergences are cut off. To be explicit, the Wilsonian action is a theory with a cutoff. It is a theory where couplings run according to the Wilsonian RG flow, i.e. it is a theory that we still have to treat quantum mechanically. We still have to perform the path integral.
• The 1PI effective action is the quantity appearing in the generating functional of 1PI diagrams, usually called $\Gamma$. This quantity is formally defined including all virtual contributions coming from loops so that the tree-level diagrams are exact. (Of course we end up having to calculate in a loop expansion.) The one-loop zero-momentum contribution is the Coleman-Weinberg potential. The 1PI effective action is the quantity that we deal with when we Legendre transform the action with respect to sources and classical background fields. The 1PI effective action is meant to be classical in the sense that all quantum effects are accounted for. Because it takes into account all virtual modes, it is sensitive to the problems of massless particles. Thus the 1PI effective action can have IR divergences, i.e. it is non-analytic. It can get factors of $\log p$ coming from massless particles running in loops. Seiberg says a good example of this is the chiral Lagrangian for pions.
Further references not linked to above:
• Bilal, “(Non) Gauge Invariance of Wilsonian Effective Actions in (SUSY) Gauge Theories: A Critical Discussion.” [0705.0362].
• Burgess, “An Introduction to EFT.” [hep-th/0701053]. An excellent pedagogical explanation of the Wilsonian vs 1PI and how they are connected. A real pleasure to read.
• Seiberg, “Naturalness vs. SUSY Non-renormalization.” [hep-ph/9309335]. Mentions the distinction.
• Polchinski, “Renormalization and effective Lagrangian.” [doi:10.1016/0550-3213(84)90287-6] Only mentions Wilsonian effective action, but still a nice pedagogical read.
• Tim Hollowood’s renormalization notes (http://pyweb.swan.ac.uk/~hollowood/) are always worth looking at.
February 25, 2009
## PSI lectures to be made available
Posted by Sparticles under Announcements
1 Comment
Good news for theoretical physics graduate education: the unofficial buzz is that Perimeter Institute will be making their “Perimeter Scholars International” (PSI) lectures publicly available through the institute’s PIRSA video archive.
For those of you who don’t know, PSI is a new masters-degree course for training theoretical physicists. The program has a unique breakdown of intense 3-week terms where students focus on progressively specialized material. The choice to have more terms with fewer courses (and I assume more weekly hours per course) allows the program to recruit some really big-name faculty from around the world to come and lecture.
The course also includes a research component and I suspect the inaugural batch of students will have quite an experience interacting with the lecturers and each other. This aspect of the program, of course, cannot be reproduced online — but I suspect that the online lectures will serve as an excellent advertisement for prospective students while also acting as a unique archive of pedagogical lectures for those of us who have already started our PhDs.
February 21, 2009
## MiniBooNE anomaly
Posted by Sparticles under Experiment, Recent
[2] Comments
Caveat: I am not an experimentalist and I do not pretend to properly understand experimental nuances… but I’m doing my best to try to keep up with what I think are interesting results in particle physics. This post is primarily based on notes from the talk `Updated Oscillation Results from MiniBooNE.’
Oh no! Alien robot drones coming to enslave us! No, just kidding. They're just MiniBooNE photomultiplier tubes. Image from Interactions.org image bank.
The MiniBooNE experiment’s initial goal when it started taking data in 2002 was to test the hypothesis of neutrino mixing with a heavy sterile neutrino that had been proposed to explain the so-called `LSND-anomaly.’ In 2006 (07?) many watched as the collaboration revealed data that disproved this hypothesis, though their data set had an unexplained excess in low energy (below 475 MeV) electrons. Since this was in a region of large background and didn’t affect the fits used in the neutrino mixing analysis, they mentioned this in passing and promised to look into it.
A couple of months ago the collaboration came back with an improved background analysis showing that the low-energy excess still appears with over 3 sigma confidence (0812.2243). One novel model came from the paper `Anomaly-mediated neutrino-photon interactions at finite baryon density,’ (0708.1281), which was apparently a theorists’ favorite. The model, however, predicted a similar excess for anti-neutrinos, which the latest analysis does not indicate (see the `2009 tour‘ talk of the MiniBooNE spokesperson, R. van de Water).
Some background
Neutrinos are slippery little particles that only interact via the weak force. They have also been of interest for beyond-the-standard model theorists since they are the key to several approaches to new physics, including:
• lepton flavor physics (the PMNS matrix as the analogy for the CKM matrix in the quark sector)
• see-saw mechanism (neutrino masses coming from mixing with GUT-scale right-handed neutrinos)
• majorana mass terms (lepton number violating)
• leptogenesis (transferring CP violation from the lepton sector to the hadron sector).
One of the big events of 1998 was the discovery of neutrino mixing (i.e. masses). This is actually a rather subtle topic, as recent confusion over the GSI anomaly has shown; my favorite recent pedagogical paper is 0810.4602. (See also 0706.1216 for an excellent discussion of why neutrinos oscillate rather than the charged leptons.)
The mixing probability between two neutrino mass eigenstates goes like $\sin^2(\Delta m^2 L/E)$. I’ve dropped a numerical factor in the second sine, but this is a heuristic discussion anyway. The L and E represent the `baseline’ (distance the neutrinos travel) and the neutrino energy. A similar expression occurs for three neutrino mixing, such as between the three light neutrinos that we’ve come to know and love since 1998.
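For concreteness, here is the two-flavor formula with the numerical factor restored, in the units experimentalists usually quote (a standalone sketch; the parameter values in the example are placeholders, not LSND or MiniBooNE fits):

```python
import math

def p_appearance(delta_m2_eV2, L_km, E_GeV, sin2_2theta=1.0):
    """Two-flavor oscillation probability P = sin^2(2 theta) * sin^2(1.267 dm^2 L / E)."""
    return sin2_2theta * math.sin(1.267 * delta_m2_eV2 * L_km / E_GeV) ** 2

# e.g. a ~1 eV^2 splitting at a MiniBooNE-like L/E of (0.5 km)/(0.5 GeV):
print(p_appearance(1.0, 0.5, 0.5, sin2_2theta=0.01))
```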
The early probes of neutrino oscillations came from `medium’ and `long’ baseline experiments where the neutrinos detected came from cosmic ray showers in the atmosphere and the sun respectively. The LSND experiment was the first to probe `short-baseline’ neutrinos, with an L/E of about 30 m / 50 MeV. What LSND found was incompatible with the standard story of three light neutrino mixing (hep-ex/0104049). They found a 3.8 sigma excess of electron anti-neutrinos over what one would expect, leading to the suggestion that this `LSND anomaly’ may have been due to mixing of the light neutrinos with a fourth, heavy `sterile’ neutrino.
MiniBooNE
MiniBooNE set out to test the sterile neutrino hypothesis by looking at the muon neutrino to electron neutrino mixing (LSND looked at anti-mu neutrinos to anti-e neutrinos). The experimental set-up had Fermilab shooting 8 GeV protons into a fixed target to produce pions and kaons. These are focused with a magnetic `horn’ so that they decay into relatively collimated neutrinos (mu and e) and charged particles. The horn can be run with opposite polarity to study the analogous anti-neutrino processes. The leptons then go through around 500 meters of dirt, which provides ample matter to stop the charged particles while leaving the neutrinos to hit an 800 ton mineral oil detector. (The energy and baseline are chosen to match the L/E of LSND.) These neutrinos may produce charged leptons, which produce Cerenkov radiation (the electromagnetic equivalent of sonic-booms) which is picked up by an array of photomultiplier tubes to read out information about the particle energy. Apparently the pattern of Cerenkov light is even enough to distinguish muons from electrons.
I don’t know why, but the MiniBooNE people measure their data in terms of protons-on-target (POT). It seems to me that the natural units would be something like luminosity or number of neutrino candidates… but perhaps these are less-meaningful in this sort of experiment?
The first results
Here’s what MiniBooNE had to say in 2007 (0704.1500):
MiniBooNE 2007 Results, from 0704.1500
As is standard in particle physics, the collaboration did a blind analysis, i.e. analyzed the data without looking at the entire dataset, to prevent the analysts from inserting bias in their cuts. They found that their signal does not fit the LSND sterile neutrino hypothesis (pink and green solid lines on the bottom plot). Part of their blind analysis was to focus on the data above 475 MeV, since the data below this had larger backgrounds (top plot). Above this scale their data is very close to a fit to the standard 3-light-neutrino model. Below this, however, they found an odd excess of low energy electrons. Since this region has more difficult background than the 475+ MeV region and the latter region had [conclusively?] ruled out the natural LSND interpretation, they decided to publish their result and look more carefully into the low energy region.
One year later
The group has since put up a further analysis of the sub 475 MeV region (0812.2243), and the result is that there is still a 3-sigma deviation. The new analysis includes several improvements to get better handles on backgrounds. I do not properly understand most of these (“theorist’s naiveté”), but will mention a few to the extent that I am capable:
• Pion neutral current distributions were reweighted to model pion kinematics properly
• “Photonuclear absorption” was taken into account. This is the process where a pion decays into two photons and one of the photons is absorbed by a carbon atom. The remaining photon Cerenkov radiates and is misidentified in the detector as an electron. (This apparently contributed 25% to the background!)
• A new cut on the data was imposed to get rid of pion-to-photon decay backgrounds in the dirt (a generic term for earthy matter) immediately outside the detector. Signals that are pointing opposite the neutrino beam and originate near the exterior of the detector are removed since Monte Carlo simulations showed that these events are primarily background.
If I understand correctly, systematic errors are now smaller for the 200-475 MeV region (13% compared to 15% in the 475+ MeV region). The new result is:
Still an "electron-like" excess at low energies, from 0812.2243
The significance at each energy range is something like
• 200 – 300 MeV: 1.7 sigma
• 300 – 475 MeV: 3.4 sigma
• 475 – 1250 MeV: 0.6 sigma
The 200 – 475 MeV data combine to an overall 3 sigma discrepancy. The MiniBooNE spokesperson also points out that since we now understand this low energy region better than the high-energy region (in terms of systematic errors), this is still solid indication that there is a MiniBooNE excess. It seems we’ve just traded the `LSND anomaly’ for a `MiniBooNE anomaly’.
[For theorists: this is a familiar concept in duality called anomaly matching. For experimentalists: that was a bad joke.]
At this point, people who were model building for MiniBooNE could rest easy and keep earning their wages.
Anti-neutrinos
That is, until we look at the anti-neutrino data. Since the neutrino/anti-neutrino character of the beam is based on the “horn” polarity, the experiment can run in either nu or anti-nu mode, but not both simultaneously. Last December the MiniBooNE collaboration also “unblinded” their antineutrino data and found…
MiniBooNE antineutrino data fits background.
… what is it? A big piece of coal in their stocking. Well, ok, that was overly harsh. In fact, this is actually rather interesting. Recall that LSND was running in the antineutrino mode when it found its anomaly. [In fact, I don't properly understand why MiniBooNE didn't run in the antineutrino mode initially? I suspect it may have something to do with how well one can measure electrons vs positrons in the detector.]
The antineutrino data matches the background predictions rather well. No story here. Unfortunately this non-signal killed the favorite model for the electron excess (axial anomaly mediation), which predicted an analogous excess in the antineutrino mode. In fact, it killed most of the interesting interpretations. Apparently the betting game during the blind antineutrino analysis was to use the neutrino data to predict the excess in the antineutrino data in various scenarios. The unblinded data suggests the excess only exists in the neutrino mode.
One of the few models that survived the antineutrino data is based on “multichannel oscillations,” nucl-th/0703023.
Where to go from here
MiniBooNE continues to take data and the collaboration is planning on combining the neutrino and antineutrino data (I’m not sure what this means). They’re waiting on a request for extra running, with some proposals for exploring this anomaly at different baselines. Since background goes as $1/L^2$ while oscillations go as $\sin^2(L/E)/E$, this L/E dependence can shed some light about the nature of the electron excess. One proposal is to actually just move the MiniBooNE detector to 200m. Since this would be closer, it would only take one year to accumulate data equivalent to the entire seven year run so far.
There was also a nice remark by the MiniBooNE spokesperson to `cover all the bases,’ so to speak: even if the low energy excess in MiniBooNE is completely background (i.e. uninteresting), this would still turn out to be very important for long baseline experiments (T2K, NoVA, DUSEL-FNAL) since they operate in this energy range.
February 16, 2009
## That crazy leptonic sector: multi-muon model-making
Posted by Sparticles under Uncategorized
[5] Comments
The CDF multi-muon anomaly has been an experimental curiosity for a few months now, but it seems to have taken a back seat to PAMELA/ATIC for `exciting experimental directions’ in particle phenomenology. [Of course, it's still doing better than the LHC...]
The end of 2008 for hep-ph‘ists was marked by three interesting leptonic signals: the CDF multi-muon anomaly, PAMELA/ATIC, and the MiniBooNE excess. Of these three, PAMELA/ATIC have gotten the lion’s share of papers — but there have been rumblings that the Fermi/GLAST preliminary results are favoring a ‘vanilla’ astrophysical explanation. There have been a couple of notable papers attempting to explain PAMELA/ATIC along these lines (0812.4457, 0902.0376). As for MiniBooNE, not very much has been said about the excess in low-energy electrons. I hope to be able to learn a bit more about this before I blog about it.
The original multi-muon paper is quite a read (and the associated initial model-building attempt), and indeed produced an interesting `response’ from a theorist (much of which is an excellent starting point for multi-muon model-building), which in-turn produced a response on Tommaso’s blog… which eventually turned a bit ugly in the comments section. Anyway, the best `armchair’ reading on the multi-muon anomaly is still Tommaso’s set of notes: part 0, part 1, part 2, part 3, part 4. An excellent theory-side discussion can be found at Resonaances.
There have been a handful of model attempts by theorists since the above discussions. General remarks on the hidden-valley context and how to start thinking about this signal can be found in the aforementioned paper 0811.1560. A connection between CDF multi-muons and the cosmic ray lepton excess was presented in 0812.4240. A very recent paper also attacks CDF + PAMELA with a hidden valley scalar, 0902.2145. As usual any map from possible new signals to variants of the MSSM is surjective (though never one-to-one), so it’s no surprise that people have found a singlet extension to the MSSM to fit the CDF anomaly in 0812.1167. An exploration of `what can we still squeeze out of the Tevatron’ comes from Fermilab, which explains that a very heavy t’ could not only be found at the Tevatron, but could explain the CDF anomaly, 0902.0792.
There are some very respectable theorists trying their hand at multi-muon model building, though there generally seems to be some reluctance from the community as a whole to devote much effort towards it. Maybe people are holding their breath for direct production of new physics at the LHC, or are otherwise convinced that the thing to do right now is construct theories of dark matter since we know dark matter must eventually show up in a particle experiment.
For me, the threshold for jumping into the field head-first was waiting to hear what the D0 collaboration had to say about this. According to rumors, however, it seems like the Tevatron’s other detector won’t have anything to say since it won’t be doing this analysis. From what I understand, this comes from the way that the D0 collaboration skims their data. (What does `skim’ mean?) Rumor has it that it’s very difficult for them to do the same analysis that CDF did, and they’ve decided that (1) the likelihood of new physics is so unlikely that it’s not worth their effort to jump in and try to get in on the glory, and (2) the signal is so absurd that it’s not even worth their effort to do the analysis to disprove their friendly rivals at CDF. Not being an experimentalist I can’t comment on the validity or rationale for this — if it is indeed true — but as a phenomenologist I’m smacking my head.
If the multi-muon signal pans out, it could be an experimental discovery that would launch a thousand theorists (Helen of Troy reference intended). If not, a cross check with D0 would have definitively (to the extent that anything is definite in science) put the issue to rest. There are some people in the CDF collaboration who are really convinced by their analysis, and I hope that there will be an opportunity in the near future to cross-check those results at another detector.
February 15, 2009
## A Brief Introduction to Instantons
Posted by lanzr under Field Theory
1 Comment
This brief introduction is based on David Tong’s TASI Lectures on Solitons, Lecture 1: Instantons.
We’ll first talk about the instantons that arise in SU(N) Yang-Mills theory and then explain the connection between them and supersymmetry. By the end we’ll try to explain how string theory jumps in this whole business.
Instantons are nothing but a special kind of solution for the pure SU(N) Yang-Mills theory with action $S=\frac{1}{2 e^2}\int d^4x Tr F_{\mu\nu}F^{\mu\nu}$. Motivated by the semi-classical evaluation of the path integral, we search for finite action solutions to the Euclidean equations of motion, $\mathcal{D}_{\mu} F^{\mu\nu}=0$. In order to have a finite action, we need the field potential $A_{\mu}$ to be pure gauge at the boundary $\partial R^4=S^3$, i.e. $A_{\mu}=ig^{-1}\partial_{\mu}g$.
Then the action will be given roughly by a surface integral whose value is classified by the third homotopy group of SU(N), $\pi_3(SU(N))\cong\mathbb{Z}$; the integer $k$ is usually called the charge of the instanton. For the original action to be captured by this number $k$, we need to have a self-dual or anti self-dual field strength.
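For reference, with the normalization of the action used above, (anti-)self-dual field strengths saturate the Bogomolny bound and the action of a charge-$k$ instanton is the standard result

$$S_{\text{instanton}} = \frac{8\pi^2}{e^2}\,|k|\,,$$

so instanton effects weight the path integral by factors of $e^{-8\pi^2|k|/e^2}$.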
A specific solution of $k=1$ for the SU(2) group can be given by $A_{\mu}=\frac{\rho^2(x-X)_{\nu}}{(x-X)^2((x-X)^2+\rho^2)}\bar{\eta_{\mu\nu}}^i(g\sigma^i g^{-1})$ where the $X$'s are position parameters and $\rho$ is a scale parameter; together with the three generators of the group, we have 8 parameters called collective coordinates. $\eta$ is just some matrix to intertwine the group index $i$ with the space index $\mu$.
For a given instanton charge $k$ and a given group SU(2), an interesting question is how many independent solutions we have. The number is usually counted by taking a given solution $A_{\mu}$ and finding how many infinitesimal perturbations $\delta_\alpha A_\mu$ of this solution remain solutions; these are known as zero modes, where $\alpha$ is the index for this solution space, usually called the moduli space.
When we consider a Yang-Mills theory with an instanton background instead of a pure Yang-Mills theory, we’d like to know if we still have non-trivial solutions, and especially if these solutions will give rise to even more collective coordinates. This is where fermion zero modes and supersymmetry come in. For $\mathcal{N}=2$ or 4 supersymmetry in $D=4$, it’s better to promote the instanton to be a string in 6 dimensions or a 5-brane in 10 dimensions respectively. The details of how to solve the equation are beyond the scope of this introduction, and we’ll refer the reader to the original lecture notes by David Tong.
February 7, 2009
## The spin connection
Posted by Sparticles under Field Theory
[4] Comments
I’ve been spending some time thinking about spinors on curved spacetime. There exists a decent set of literature out there for this, but unfortunately it’s scattered across different `cultures’ like a mathematical Tower of Babel. Mathematicians, general relativists, string theorists, and particle physicists all have a different set of tools and language to deal with spinors.
Particle physicists — the community from which I hail — are the most recent to use curved-space spinors in mainstream work. It was only a decade ago that the Randall-Sundrum model for a warped extra dimension was first presented in which the Standard Model was confined to a (3+1)-brane in a 5D Anti-deSitter spacetime. Shortly after, flavor constraints led physicists to start placing fields in the bulk of the RS space. Grossman and Neubert were among the first to show how to place fermion fields in the bulk. The fancy new piece of machinery (by then an old hat for string theorists and a really old hat for relativists) was the spin connection which allows us to connect the flat-space formalism for spinors to curved spaces. [I should make an apology: supergravity has made use of this formalism for some time now, but I unabashedly classify supergravitists as effective string theorists for the sake of argument.]
One way of looking at the formalism is that spinors live in the tangent space of a manifold. By definition this space is flat, and we may work with spinors as in Minkowski space. The only problem is that one then wants to relate the tangent space at one spacetime point to neighboring points. For this one needs a new kind of covariant derivative (i.e. a new connection) that will translate tangent space spinor indices at one point of spacetime to another.
By the way, now is a fair place to state that mathematicians are likely to be nauseous at my “physicist” language… it’s rather likely that my statements will be mathematically ambiguous or even incorrect. Fair warning.
Mathematicians will use words like the “square root of a principle fiber bundle” or “repere mobile” (moving frame) to refer to this formalism in differential geometry. Relativists and string theorists may use words like “tetrad” or “vielbein,” the latter of which has been adopted by particle physicists.
A truly well-written “for physicists” exposition on spinors can be found in Green, Schwartz, and Witten, volume II section 12.1. It’s a short section that you can read independently of the rest of the book. I will summarize their treatment in what follows.
We would like to introduce a basis of orthonormal vectors at each point in spacetime, $e^a_\mu(x)$, which we call the vielbein. This translates to `many legs' in German. One will often also hear the term vierbein meaning `four legs,' or `funfbein' meaning `five legs' depending on what dimensionality of spacetime one is working with. The index $\mu$ refers to indices on the spacetime manifold (which is curved in general), while the index $a$ labels the different basis vectors.
If this makes sense, go ahead and skip this paragraph. Otherwise, let me add a few words. Imagine the tangent space of a manifold. We’d like a set of basis vectors for this tangent space. Of course, whatever basis we’re using for the manifold induces a basis on the tangent space, but let’s be more general. Let us write down an arbitrary basis. Each basis vector has $n$ components, where $n$ is the dimensionality of the manifold. Thus each basis vector gets an index from 1 to $n$, which we call $\mu$. The choice of this label is intentional: the components of this basis map directly (say, by exponentiation) to the manifold itself, so these really are indices relative to the basis on the manifold. We can thus write a particular basis vector of the tangent space at $x$ as $e_\mu(x)$. How many basis vectors are there for the tangent space? There are $n$. We can thus label the different basis vectors with another letter, $a$. Hence we may write our vector as $e^a_\mu(x)$.
The point, now, is that these objects allow us to convert from manifold coordinates to tangent space coordinates. (Tautological sanity check: the $a$ are tangent space coordinates because they label a basis for the tangent space.) In particular, we can go from the curved-space indices of a warped spacetime to flat-space indices that spinors understand. The choice of an orthonormal basis of tangent vectors means that
$e^a_\mu (x) e_{a\nu}(x) = g_{\mu\nu}(x)$,
where the $a$ index is raised and lowered with the flat space (Minkowski) metric. In this sense the vielbeins can be thought of as `square roots’ of the metric that relate flat and curved coordinates. (Aside: this was the first thing I ever learned at a group meeting as a grad student.)
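Here is a tiny symbolic check of this `square root' property for a Randall-Sundrum-like warped metric (an illustrative sketch; the signature convention and names are chosen here, not taken from the references):

```python
import sympy as sp

k, y = sp.symbols('k y', positive=True)
eta = sp.diag(-1, 1, 1, 1, 1)                  # flat 5D metric eta_{ab}, mostly-plus signature
warp = sp.exp(-k*y)
e = sp.diag(warp, warp, warp, warp, 1)         # vielbein e^a_mu (diagonal in these coordinates)

g = sp.simplify(e.T * eta * e)                 # g_{mu nu} = e^a_mu eta_{ab} e^b_nu
print(g)    # diag(-e^{-2ky}, e^{-2ky}, e^{-2ky}, e^{-2ky}, 1): the warped metric
```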
Now here’s the good stuff: there’s nothing `holy' about a particular orientation of the vielbein at a particular point of spacetime. We could have arbitrarily defined the tangent space z-direction (i.e. $a = 3$, not $\mu=3$) pointing in one direction ($x_\mu=(0,0,0,1)$) or another ($x_\mu=(0,1,0,0)$) relative to the manifold’s basis so long as the two directions are related by a Lorentz transformation. Thus we have an $SO(3,1)$ symmetry (or whatever symmetry applies to the manifold). Further, we could have made this arbitrary choice independently for each point in spacetime. This means that the symmetry is local, i.e. it is a gauge symmetry. Indeed, think back to handy definitions of gauge symmetries in QFT: this is an overall redundancy in how we describe our system, it’s a `non-physical' degree of freedom that needs to be `modded out' when describing physical dynamics.
Like any other gauge symmetry, we are required to introduce a gauge field for the Lorentz group, which we shall call $\omega_{\mu\phantom{a}\nu}^{\phantom{\mu}a}(x)$. From the point of view of Riemannian geometry this is just the connection, so we can alternately call this creature the spin connection. Note that this is all different from the (local) diffeomorphism symmetry of general relativity, for which we have the Christoffel connection.
What do we know about the spin connection? If we want to be consistent with general relativity while adding only minimal structure (which GSW notes is not always the case), we need to impose consistency when we take covariant derivatives. In particular, any vector field with manifold indices ($V^\mu(x)$) can now be recast as a vector field with tangent-space indices ($V^a(x) = e^a_\mu(x)V^\mu(x)$). By requiring that both objects have the same covariant derivative, we get the constraint
$D_\mu e^a_\nu(x) = 0$.
Note that the covariant derivative is defined as usual for multi-index objects: a partial derivative followed by a connection term for each index. For the manifold index there’s a Christoffel connection, while for the tangent space index there’s a spin connection:
$D_\mu e^a_\nu(x) = \partial_\mu e^a_\nu - \Gamma^\lambda_{\mu\nu}e^a_\lambda + \omega_{\mu\phantom{a}b}^{\phantom{\mu}a}e^b_\nu$.
This turns out to give just enough information to constrain the spin connection in terms of the vielbeins,
$\omega^{ab}_\mu = \frac 12 g^{\rho\nu}e^{[a}_{\phantom{a}\rho}\partial_{\nu}e^{b]}_{\phantom{b]}\mu}+ \frac 14 g^{\rho\nu}g^{\tau\sigma}e^{[a}_{\phantom{[a}\rho}e^{b]}_{\phantom{b]}\tau}\partial_{[\sigma}e^c_{\phantom{c}\nu]}e^d_\mu\eta_{cd}$,
this is precisely equation (11) of hep-ph/9805471 (EFT for a 3-Brane Universe, by Sundrum) and equation (4.28) of hep-ph/0510275 (TASI Lectures on EWSB from XD, Csaki, Hubisz, Meade). I recommend both references for RS model-building, but note that neither of them actually explains where this equation comes from (well, the latter cites the former)… so I thought it’d be worth explaining this explicitly. GSW makes a further note that the spin connection can be determined using the torsion, since these are the only terms that survive the antisymmetry of the torsion tensor.
Going back to our original goal of putting fermions on a curved spacetime, in order to define a Clifford algebra on such a spacetime it is now sufficient to consider objects $\Gamma_\mu(x) = e^a_\mu(x)\gamma_a$, where the right-hand side contains a flat-space (constant) gamma matrix with its index converted to a spacetime index via the position-dependent vielbein, resulting in a spacetime gamma matrix that is also position dependent (left-hand side). One can see that indeed the spacetime gamma matrices satisfy the Clifford algebra with the curved space metric, $\{\Gamma_\mu(x),\Gamma_\nu(x)\} = 2g_{\mu\nu}(x)$.
There’s one last elegant thought I wanted to convey from GSW. In a previous post we mentioned the role of topology on the existence of the (quantum mechanical) spin representation of the Lorentz group. Now, once again, topology becomes relevant when dealing with the spin connection. When we wrote down our vielbeins we assumed that it was possible to form a basis of orthonormal vectors on our spacetime. A sensible question to ask is whether this is actually valid globally (rather than just locally). The answer, in general, is no. One simply has to consider the “hairy ball” theorem that states that one cannot have a continuous nowhere-vanishing vector field on the 2-sphere. Thus one cannot always have a nowhere-vanishing global vielbein.
Topologies that can be covered by a single vielbein are actually `comparatively scarce’ and are known as parallelizable manifolds. For non-parallelizable manifolds, the best we can do is to define vielbeins on local regions and patch them together via Lorentz transformations (`transition functions’) along their boundary. Consistency requires that in a region with three overlapping patches, the transition from patch 1 to 2, 2 to 3, and then from 3 to 1 is the identity. This is indeed the case.
Spinors must also be patched together along the manifold in a similar way, but we run into problems. The consistency condition on a triple-overlap region is no longer always true since the double-valuedness of the spinor transformation (i.e. the spinor transformation has a sign ambiguity relative to the vector transformation). If it is possible to choose signs on the spinor transformations such that the consistency condition always holds, then the manifold is known as a spin manifold and is said to admit a spin structure. In order to have a consistent theory with fermions, it is necessary to restrict to a spin manifold.
January 2, 2009
## LaTeX Figures with PGF and TikZ
Posted by Sparticles under Tools
[6] Comments
Here’s a new installment to the never-ending debate about the best way to draw figures in LaTeX. (A previous suggestion: Adobe Illustrator.)
Stereographic projection image made using TikZ by Thomas Trzeciak, available at TeXample.net.
A promising solution is the combination of PGF (“Portable Graphics Format”) and TikZ (“TikZ ist kein Zeichenprogramm,” or “TikZ is not a drawing program”), both developed by Till Tantau, whom you may know better for creating Beamer.
PGF is the ‘base system’ that provides commands to draw vector images. This layer of the graphics system is tedious to use directly since it only provides the most basic tools. TikZ is a frontend that provides a user-friendly environment for writing commands to draw diagrams.
Like other LaTeX drawing packages (the `picture` environment, `axodraw`, etc.), the TikZ figures are a series of in-line commands and so are extremely compact and easy to modify. Unlike other LaTeX packages, though, TikZ provides a powerful layer of abstraction that makes it relatively easy to make fairly complicated diagrams. Further, the entire system is `PDFLaTeX`-friendly, which is more than one can say for `pstricks`-based drawing options.
The cost is that the system has a bit of a learning curve. Like LaTeX itself, there are many commands and techniques that one must gradually become familiar with in order to make figures. Fortunately, there is a very pedagogical and comprehensive manual available. Unfortunately the manual is rather lengthy and many of the examples contain small errors that prevent the code from compiling. TeXample.net comes to the rescue, however, with a nice gallery of TikZ examples (with source code), including those from the manual. If you’re thinking about learning TikZ, go ahead and browse some of the examples right now; the range of possibilities is really impressive.
To properly learn how to use TikZ, I would suggest setting aside a day or two to go through the tutorials (Part I) of the manual. Start from the beginning and work your way through one page at a time. The manual was written in such a way that you can’t just skip to a picture that you like and copy the source code, you need to be sure to include all the libraries and define all the variables that are discussed over the course of each tutorial. (You can always consult the source code at the gallery of TikZ examples in a pinch.)
Let me just mention two really neat things that TikZ can do which sold me onto the system.
TikZ Feynman Diagram by K. Fauske, available at TeXample.net.
The first example is, of course, drawing a Feynman diagram. The TikZ code isn’t necessarily any cleaner than what one would generate using Jaxodraw, but TikZ offers much more control in changing the way things look. For example, one could turn all fermions blue without having to modify each line.
TikZ arrows on a Beamer presentation. Image by K. Fauske, available at TeXample.net
The next example is a solution to one of the most difficult aspects of Beamer: drawing arrows between elements of a frame. This is consistently the feature that PowerPoint, Keynote, and the ‘ol chalkboard always do better than Beamer. No longer!
Check out the source code for the example above. Adding arrows is as easy as defining some nodes and writing one line of code for each line. The lines are curved ‘naturally’ and the trick works with Beamer’s overlays. (Beamer is also built on PGF.)
Anyway, for those with the time to properly work through the tutorials, TikZ has the potential to be a very powerful tool to add to one’s LaTeX arsenal.
Pros
• Tools for creating high-quality vector graphics
• “Node” structure is very useful for drawing charts and Feynman diagrams
• Works with `PDFLaTeX`
• Images are drawn ‘in-line’ (no need to attach extra files)
• Easy to insert TeX into images
Cons
• A bit of a learning curve to overcome
• No standard GUI interface
Download PGF/TikZ. Installation instructions: place the files into your texmf tree. For Mac OS X users, this means putting everything into a subdirectory of `~/Library/texmf/tex/latex/`. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 51, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9185199737548828, "perplexity_flag": "middle"} |
http://mathoverflow.net/revisions/27990/list | ## Return to Question
# Computing only the order of Galois group (not the group itself).
My question is related to this one: http://mathoverflow.net/questions/22923/computing-the-galois-group-of-a-polynomial.
I was wondering if there is a faster algorithm just to compute the order of the group rather than the group itself.
Also, has anybody compared the performance of GAP and Magma in computing Galois groups? I just heard Magma is very good at it.
I asked this question because I encounter every so often a new bug in Magma's implementation and I wanted to see if I can implement something similar. But at this time I'm just interested in the exponent in the first place. This is the last annoying error that I get for basically any degree 5 polynomial that has Galois group $S_5$.
````k := FiniteField(2);
kx<x> := RationalFunctionField(k);
kxbyb<y> := PolynomialRing(kx);
MinP := y^5 + y + x^2 + x;
print GaloisGroup(MinP);
````
The result is:
````Runtime error: too much looping
````
I don't understand what this error means (Magma Ver 2.16-8).
To be more clear, my ultimate goal is to check a lot of polynomials and throw out those with $S_n$ Galois group and focus on those which are not such. As you see, even an upper bound on the exponent is enough for me.
http://mathhelpforum.com/differential-equations/128044-exact-equation.html | # Thread:
1. ## Exact equation
I already know this equation is not exact. The thing is, I never learned how to do partial derivatives so that is what I need help with. The equation is $(e^x\sin y+3y)\,dx-(3x-e^x\sin y)\,dy=0$
I know that the sign in the middle needs to be + so then you have $(e^x\sin y+3y)\,dx+(-3x+e^x\sin y)\,dy=0$.
Then M= $e^x\sin y+3y$ and N= $-3x+e^x\sin y$. So, I need help getting $M_y$ and $N_x$.
2. Originally Posted by steph3824
I already know this equation is not exact. The thing is, I never learned how to do partial derivatives so that is what I need help with. The equation is $(e^x\sin y+3y)\,dx-(3x-e^x\sin y)\,dy=0$
I know that the sign in the middle needs to be + so then you have $(e^x\sin y+3y)\,dx+(-3x+e^x\sin y)\,dy=0$.
Then M= $e^x\sin y+3y$ and N= $-3x+e^x\sin y$. So, I need help getting $M_y$ and $N_x$.
To do partial derivatives, just hold all the other variables that you are not differentiating with respect to constant.
So
$M_y = e^x\cos y+3$
and
$N_x = -3+e^x\sin y$
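As a cross-check (a small sketch, not part of the original thread), sympy gives the same partials and confirms the equation is not exact:

```python
import sympy as sp

x, y = sp.symbols('x y')
M = sp.exp(x)*sp.sin(y) + 3*y
N = -3*x + sp.exp(x)*sp.sin(y)

print(sp.diff(M, y))                                      # exp(x)*cos(y) + 3
print(sp.diff(N, x))                                      # exp(x)*sin(y) - 3
print(sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0)    # False: M_y != N_x, so not exact
```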
http://math.stackexchange.com/questions/93842/calculate-an-angle-from-tan-cos-or-sine | Calculate an angle from Tan, Cos, or Sine
I'd like to calculate an angle using nothing but either tangent, cosine, or sine, or any combination of the two. Is this possible? If so, how?
I know how to program, but I find that I can't really get much farther until I start learning higher level mathematics. Therefore, I'm writing out C functions to teach me maths :D.
So this is for programming? You'll want to look into two-argument arctangent (`atan2()` in some languages). – J. M. Dec 24 '11 at 8:44
2 Answers
If you want to learn programming and advanced mathematics, then find the root of $$f(\phi) = \cos\theta - \cos \phi$$
using the bisection method or the Newton-Raphson method. Writing a program for either of the methods is easy.
Note: Here $\cos \theta$ is the value of the cosine given to you and $\phi$ is your guess.
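A minimal sketch of that suggestion in code (the function name and bracket choice are mine): since cosine is strictly decreasing on $[0,\pi]$, bisection on $f(\phi) = \cos\theta - \cos\phi$ recovers the angle from its cosine:

```python
import math

def acos_bisect(c, tol=1e-12):
    """Find phi in [0, pi] with cos(phi) = c by bisection."""
    lo, hi = 0.0, math.pi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if math.cos(mid) > c:     # cos is decreasing here, so the root lies to the right of mid
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(acos_bisect(0.5), math.acos(0.5))   # both ~1.0471976 (i.e. pi/3)
```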
Inverse Trigonometric Functions - Wikipedia
Providing me with a link isn't going to help much - I already looked through that before asking here. There is tons to read, and for you (as a contributor) to post only that seems...well, lacking if I do say so myself. – about blank Dec 25 '11 at 6:26
@Holland I interpreted your question as "Given the sine,cosine, or tangent of an angle, how do I find the angle?", which my link answers in far more depth than any answer I could ever give. I'm sorry if that link isn't helpful to you, but you gave me very little to go on in your question, so I had to do the best I could. In the future, you should provide more detail about what you've looked at and where you're stuck. That way, we can give better answers, and also know that you've done the obligatory google search that surprisingly many askers here neglect to do. – Alex Becker Dec 25 '11 at 6:35
Understandable. I'll be sure to note that the next time I ask a question. – about blank Dec 25 '11 at 6:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9541768431663513, "perplexity_flag": "middle"} |
http://math.stackexchange.com/questions/219474/question-about-divisors-of-polynomials-over-finite-fields | # Question About Divisors of Polynomials Over Finite Fields
I am going through a paper, and theres one point in the proof I've been having problems understanding.
We have $\alpha \in G$ with minimal polynomial $h$, where $G$ is the algebraic closure of a field $F$, a finite field of prime order $q$. Also take $[F(\alpha):F]=d$, and take $P_q(n)$ as the set of all monic polynomials of degree $n$ over the field of order $q$. We have $1\leq k<q$.
I'm having problems showing that: $$[ \prod_{i=1}^n(x-x^{q^i})]^k \cdot \sum_{f\in P_q(n), h \nmid f}\frac{1}{f^k} \equiv 0 \pmod {(x-\alpha)^{k\lfloor \frac{n}{d} \rfloor}}$$
The algebraic closure of a field is always infinite. Since $F$ is finite, we will have $[G:F]=\infty$ – Dennis Gulko Oct 23 '12 at 15:55
What is $k$? An arbitrary integer? – Thomas Andrews Oct 23 '12 at 16:11
Any what is $n$, another arbitrary integer? By the way the $n$ in $P_q(n)$ is probably meant to do something; is it the degree of the monic polynomials? – Marc van Leeuwen Oct 23 '12 at 16:19
What paper is it? This might help us – Alexander Gruber Oct 23 '12 at 16:39
Fixed some things that should answer some questions. Also, the title of the paper is "Sums of REciprocals of Polynomials over Finite Fields" by Kenneth Hicks, Xiang-dong Hou, and Gary L. Mullen. – Frank White Oct 23 '12 at 16:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9292476773262024, "perplexity_flag": "middle"} |
http://mathoverflow.net/questions/110633/monge-ampere-operator/110765 | ## monge-ampere operator
Hello everybody, I'm studying the article of Bedford-Taylor, "Fine topology, Silov boundary...", but I don't understand the proof of the following proposition.
Let $u,v$ be plurisubharmonic functions defined on an open subset $\Omega$ of $\mathbb{C}^n$ such that $u,v\in L^{\infty}_{loc}(\Omega)$.
Let $O$ be a fine-open subset of $\Omega$ where $u>v$. Then $(dd^{c}\max(u,v))^{n}|_O=(dd^{c}u)^{n}|_O$.
They say that if $O$ is open then it is obvious, while if $O$ is just fine open then one uses a decreasing approximation sequence $u_k$ of smooth functions for $u$.
Then since $O=\bigcap_{k}(u_k>v)$ and since $(dd^{c}\max(u_k,v))^{n}|_{O_k}=(dd^{c}u_k)^{n}|_{O_k}$ (this I don't understand),
where $O_k=(u_k>v)$, the statement holds.
Maybe it is trivial but I'm not able to see it. Thanks.
On $O_k$, $\max\{u_k,v\}=u_k$. Or is it something else that is the problem? – Margaret Friedland Oct 26 at 1:02
This isn't a complete answer, but may help. For smooth u_k, u_k >v is an open set. For non-smooth u, this is not necessarily the case. It may have a boundary. Just outside the boundary max(u,v) = v and inside max(u,v) = u. So, whilst testing the current against test functions whose support "ends" at the boundary, it isn't obvious (to me) that dd^c(max(u,v)) = dd^c (u). – Vamsi Oct 26 at 15:48
## 2 Answers
Yes, I see it, but even on $O$ we have $u=\max(u,v)$, so I don't understand why he passes to an approximation sequence.
Hi digital: I cannot leave comments, hence I give an answer here in reply to your answer. If $O$ is not open, then $u=\max(u,v)$ on $O$ does not necessarily imply that $dd^c u=dd^c\max(u,v)$ on a neighborhood of $O$, so that when restricting to $O$ you get the equality. (If you want to take derivatives, you need to do it on an open set.)
http://physics.stackexchange.com/questions/53020/how-to-interpret-the-magnetic-vector-potential | # How to interpret the magnetic vector potential?
In electromagnetism, we can re-write the electric field in terms of the electric scalar potential, and the magnetic vector potential. That is:
$E = -\nabla\phi - \frac{\partial A}{\partial t}$, where $A$ is such that $B = \nabla \times A$.
I have an intuitive understanding of $\phi$ as the electric potential, as I am familiar with the formula $F = -\nabla V$, where $V$ is the potential energy. Therefore since $E = F/q$, it is easy to see how $\phi$ can be interpreted as the electric potential, in the electrostatic case.
I also know that $F = \frac{dp}{dt}$, where $p$ is momentum, and thus this leads me to believe that $A$ should be somehow connected to momentum, maybe like a "potential momentum". Is there such an intuitive way to understand what $A$ is physically?
## 1 Answer
1) OP wrote (v1):
[...] and thus this leads me to believe that ${\bf A}$ should be somehow connected to momentum, [...].
Yes, in fact the magnetic vector potential ${\bf A}$ (times the electric charge) is the difference between the canonical and the kinetic momentum, cf. e.g. this Phys.SE answer.
2) Another argument is that the scalar electric potential $\phi$ times the charge
$$\tag{1} q\phi$$
does not constitute a Lorentz invariant potential energy. If one recalls the Lorentz transformations for the $\phi$ and ${\bf A}$ potentials, and one goes to a boosted coordinate frame, it is not difficult to deduce the correct Lorentz invariant generalization
$$\tag{2} U ~=~ q(\phi - {\bf v}\cdot {\bf A})$$
that replaces $q\phi$. The caveat of eq. (2) is that $U$ is a velocity-dependent potential, so that the force is not merely (minus) a gradient, but rather of the form of (minus) an Euler-Lagrange derivative
$$\tag{3}{\bf F}~=~\frac{d}{dt} \frac{\partial U}{\partial {\bf v}} - \frac{\partial U}{\partial {\bf r}}.$$
One may show that eq. (3) reproduces the Lorentz force
$$\tag{4}{\bf F}~=~q({\bf E}+{\bf v}\times {\bf B}),$$
see e.g. Ref. 1.
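As a quick symbolic cross-check (a sketch; the vector potential below is an arbitrary example, not taken from Ref. 1), the step from eq. (3) to eq. (4) boils down to the identity $\nabla({\bf v}\cdot {\bf A})-({\bf v}\cdot\nabla){\bf A}={\bf v}\times(\nabla\times {\bf A})$ for a velocity treated as independent of position:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
vx, vy, vz = sp.symbols('v_x v_y v_z')       # velocity components, held fixed
v = sp.Matrix([vx, vy, vz])
A = sp.Matrix([y*z, -x*z + y**2, x*y*z])     # an arbitrary example vector potential

def grad(f):
    return sp.Matrix([sp.diff(f, s) for s in (x, y, z)])

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

lhs = grad(v.dot(A)) - sp.Matrix([v.dot(grad(A[i])) for i in range(3)])   # grad(v.A) - (v.grad)A
rhs = v.cross(curl(A))                                                    # v x (curl A)
print(sp.simplify(lhs - rhs))    # Matrix([[0], [0], [0]])
```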
References:
1. Herbert Goldstein, Classical Mechanics, Chapter 1.
http://math.stackexchange.com/questions/88296/an-identity-of-differential-forms | # An identity of differential forms
Let $\omega=\sum_j \omega_j dy^j$. We want to show that $d(F^{*}\omega)=F^{*}(d\omega)$, where $F: M \to N$ is a map from manifold $M$ to manifold $N$. Local coordinate systems of $M, N$ are $(x^1, \ldots, x^n), (y^1, \ldots, y^m)$ respectively.
Now
$$F^{*}\omega=\sum_{j,k}\omega_j\frac{\partial y^j}{\partial x^{k}}dx^k,$$
$$d(F^{*}\omega)=\sum_{j,k,i}\frac{\partial ( \omega_j\frac{\partial y^j}{\partial x^k} )}{\partial x^i}dx^i \wedge dx^k = \sum_{j,k,i}\frac{\partial \omega_j}{\partial x^i} \frac{\partial y^j}{\partial x^k} dx^i \wedge dx^k + \sum_{j,k,i} \omega_j \frac{\partial^2 y^j}{\partial x^i \partial x^k} dx^i \wedge dx^k.$$
$$d\omega=\sum_{j,l}\frac{\partial \omega_j}{\partial y^l} dy^l \wedge dy^j,\qquad F^{*}d\omega = \sum_{i,j,k,l} \frac{\partial \omega_j}{\partial y^l}\frac{\partial y^l}{\partial x^i} \frac{\partial y^j}{\partial x^k} dx^i \wedge dx^k = \sum_{i,j,k} \frac{\partial \omega_j}{\partial x^i} \frac{\partial y^j}{\partial x^k} dx^i \wedge dx^k.$$
My question is why $\displaystyle\sum_{j,k,i} \omega_j \frac{\partial^2 y^j}{\partial x^i\; \partial x^k} dx^i \wedge dx^k = 0$? Thank you very much.
-
## 2 Answers
Hint: $$\frac{\partial^2 y^j}{\partial x^i\partial x^k} = \frac{\partial^2 y^j}{\partial x^k\partial x^i}$$ but $$dx^i \wedge dx^k = - dx^k \wedge dx^i.$$
-
Forget about $j$ for a moment.
The point is that for a twice continuously differentiable function $f$, you have
$$\sum_{k,i} \frac{\partial^2 f}{\partial x^i \partial x^k} dx^{i} \wedge dx^{k} = 0 \quad (*)$$ This results from:
a) Schwarz's lemma $\frac{\partial^2 f}{\partial x^i \partial x^k} =\frac{\partial^2 f}{\partial x^k \partial x^i}$
b) $dx^{i} \wedge dx^{k}=-dx^{k} \wedge dx^{i}$ (for $i\neq k)$
c) $dx^{i} \wedge dx^{i}=0$
and a regrouping of the sum $(*)$ of $n^2$ terms into three partial sums that I'll leave to you.
The sum you are interested in consists in multiplying the zero expression $(*)$ by $\omega_j$ on the left and adding for all $j$: obviously you will still obtain zero.
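A quick symbolic sanity check of a), b), c) (an added sketch, not part of the original answer; the particular $f$ below is an arbitrary sample choice): after regrouping, the coefficient of $dx^i\wedge dx^k$ with $i<k$ is $\frac{\partial^2 f}{\partial x^i \partial x^k}-\frac{\partial^2 f}{\partial x^k \partial x^i}$, which vanishes by Schwarz's lemma.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
xs = (x1, x2, x3)
f = sp.exp(x1*x2) * sp.sin(x3) + x1**3 * x2   # any C^2 function would do

# coefficient of dx^i ^ dx^k (i < k) after regrouping the n^2 terms of (*)
for i in range(3):
    for k in range(i + 1, 3):
        coeff = sp.diff(f, xs[i], xs[k]) - sp.diff(f, xs[k], xs[i])
        print((i + 1, k + 1), sp.simplify(coeff))   # all zero
```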
Warning: Schwarz $\neq$ Schwartz
-
http://en.wikipedia.org/wiki/Diagonal | # Diagonal
For the avenue in Barcelona, see Avinguda Diagonal.
The diagonals of a cube with side length 1. AC' (shown in blue) is a space diagonal with length $\sqrt 3$, while AC (shown in red) is a face diagonal and has length $\sqrt 2$.
A diagonal is a line joining two nonconsecutive vertices of a polygon or polyhedron. Informally, any sloping line is called diagonal. The word "diagonal" derives from the ancient Greek διαγώνιος diagonios,[1] "from angle to angle" (from διά- dia-, "through", "across" and γωνία gonia, "angle", related to gony "knee"); it was used by both Strabo[2] and Euclid[3] to refer to a line connecting two vertices of a rhombus or cuboid,[4] and later adopted into Latin as diagonus ("slanting line").
In mathematics, in addition to its geometric meaning, a diagonal is also used in matrices to refer to a set of entries along a diagonal line.
## Non-mathematical uses
A stand of basic scaffolding on a house construction site, with diagonal braces to maintain its structure
In engineering, a diagonal brace is a beam used to brace a rectangular structure (such as scaffolding) to withstand strong forces pushing into it; although called a diagonal, due to practical considerations diagonal braces are often not connected to the corners of the rectangle.
Diagonal pliers are wire-cutting pliers defined by the cutting edges of the jaws intersects the joint rivet at an angle or "on a diagonal", hence the name.
A diagonal lashing is a type of lashing used to bind spars or poles together applied so that the lashings cross over the poles at an angle.
In association football, the diagonal system of control is the method referees and assistant referees use to position themselves in one of the four quadrants of the pitch.
The diagonal is a common measurement of display size.
## Polygons
As applied to a polygon, a diagonal is a line segment joining any two non-consecutive vertices. Therefore, a quadrilateral has two diagonals, joining opposite pairs of vertices. For any convex polygon, all the diagonals are inside the polygon, but for re-entrant polygons, some diagonals are outside of the polygon.
Any n-sided polygon (n ≥ 3), convex or concave, has
$\frac{n^2-3n}{2}\,$
or
$\frac{n(n-3)}{2}\,$
diagonals, as each vertex has diagonals to all other vertices except itself and the two adjacent vertices, or n − 3 diagonals.
| Sides | Diagonals | Sides | Diagonals | Sides | Diagonals | Sides | Diagonals | Sides | Diagonals |
|---|---|---|---|---|---|---|---|---|---|
| 3 | 0 | 11 | 44 | 19 | 152 | 27 | 324 | 35 | 560 |
| 4 | 2 | 12 | 54 | 20 | 170 | 28 | 350 | 36 | 594 |
| 5 | 5 | 13 | 65 | 21 | 189 | 29 | 377 | 37 | 629 |
| 6 | 9 | 14 | 77 | 22 | 209 | 30 | 405 | 38 | 665 |
| 7 | 14 | 15 | 90 | 23 | 230 | 31 | 434 | 39 | 702 |
| 8 | 20 | 16 | 104 | 24 | 252 | 32 | 464 | 40 | 740 |
| 9 | 27 | 17 | 119 | 25 | 275 | 33 | 495 | 41 | 779 |
| 10 | 35 | 18 | 135 | 26 | 299 | 34 | 527 | 42 | 819 |
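All of the counts above follow from the formula $n(n-3)/2$; as a quick check (an added sketch, not part of the article), a couple of lines of Python reproduce the first column pair:

```python
def diagonals(n):
    # number of diagonals of a convex or concave n-gon, n >= 3
    return n * (n - 3) // 2

print([(n, diagonals(n)) for n in range(3, 11)])
# [(3, 0), (4, 2), (5, 5), (6, 9), (7, 14), (8, 20), (9, 27), (10, 35)]
```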
## Matrices
In the case of a square matrix, the main or principal diagonal is the diagonal line of entries running from the top-left corner to the bottom-right corner. For a matrix $A$ with row index specified by $i$ and column index specified by $j$, these would be entries $A_{ij}$ with $i = j$. For example, the identity matrix can be defined as having entries of 1 on the main diagonal and zeroes elsewhere:
$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$
The top-right to bottom-left diagonal is sometimes described as the minor diagonal or antidiagonal. The off-diagonal entries are those not on the main diagonal. A diagonal matrix is one whose off-diagonal entries are all zero.
A superdiagonal entry is one that is directly above and to the right of the main diagonal. Just as diagonal entries are those $A_{ij}$ with $j=i$, the superdiagonal entries are those with $j = i+1$. For example, the non-zero entries of the following matrix all lie in the superdiagonal:
$\begin{pmatrix} 0 & 2 & 0 \\ 0 & 0 & 3 \\ 0 & 0 & 0 \end{pmatrix}$
Likewise, a subdiagonal entry is one that is directly below and to the left of the main diagonal, that is, an entry $A_{ij}$ with $j = i - 1$. General matrix diagonals can be specified by an index $k$ measured relative to the main diagonal: the main diagonal has $k = 0$; the superdiagonal has $k = 1$; the subdiagonal has $k = -1$; and in general, the $k$-diagonal consists of the entries $A_{ij}$ with $j = i+k$.
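A small NumPy illustration of these conventions (an added sketch, not part of the article; NumPy's `k` and `offset` arguments happen to follow the same sign convention as the index $k$ above):

```python
import numpy as np

I = np.eye(3)                    # ones on the main diagonal (k = 0)
S = np.diag([2, 3], k=1)         # the superdiagonal example shown above
print(I)
print(S)
print(np.diagonal(S, offset=1))  # [2 3] -- the entries A_ij with j = i + 1
print(np.diag([7, 8], k=-1))     # a matrix whose non-zero entries are subdiagonal
```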
## Geometry
By analogy, the subset of the Cartesian product X×X of any set X with itself, consisting of all pairs (x,x), is called the diagonal, and is the graph of the identity relation. This plays an important part in geometry; for example, the fixed points of a mapping F from X to itself may be obtained by intersecting the graph of F with the diagonal.
In geometric studies, the idea of intersecting the diagonal with itself is common, not directly, but by perturbing it within an equivalence class. This is related at a deep level with the Euler characteristic and the zeros of vector fields. For example, the circle S1 has Betti numbers 1, 1, 0, 0, 0, and therefore Euler characteristic 0. A geometric way of expressing this is to look at the diagonal on the two-torus S1xS1 and observe that it can move off itself by the small motion (θ, θ) to (θ, θ + ε). In general, the intersection number of the graph of a function with the diagonal may be computed using homology via the Lefschetz fixed point theorem; the self-intersection of the diagonal is the special case of the identity function.
## References
2. Strabo, Geography 2.1.36–37
3. Euclid, Elements book 11, proposition 28
4. Euclid, Elements book 11, proposition 38
http://mathoverflow.net/questions/48147?sort=oldest | Generalize the Proj construction?
I'm wondering if there is a generalization of the Proj construction used in algebraic geometry. Given a graded ring $R$ (the grading recorded by a degree map, i.e. a monoid homomorphism from the homogeneous elements of $R$ to $\mathbb{N}$), we can form the scheme Proj(R), which is the union of the affine pieces Spec $(R_f)_0$ over homogeneous $f$ of positive degree.
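For concreteness (a standard illustration added here, not part of the original question): for $R=k[x_0,x_1]$ with its usual grading, $(R_{x_0})_0=k[x_1/x_0]$ and $(R_{x_1})_0=k[x_0/x_1]$, so
$$\operatorname{Proj}\,k[x_0,x_1]=\operatorname{Spec}\,k[x_1/x_0]\;\cup\;\operatorname{Spec}\,k[x_0/x_1]=\mathbb{P}^1,$$
the two affine lines being glued along $\operatorname{Spec}\,k[x_1/x_0,x_0/x_1]$.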
I'm wondering if there is a way to extend this construction to general homomorphisms from a ring to a (sharp) monoid.
In particular, I want to see if there is a "Proj" construction which gives the product $\mathbb{P}^m\times\mathbb{P}^n$ from the bidegree $k[x_0,\ldots,x_m,y_0,\ldots,y_n]\to\mathbb{N}^2$.
(I know this can be seen via two consecutive Proj, but want to see if this generalizes, as I want to see what happens if the degree map take value in a general toric monoid.)
-
What does "sharp" mean? That is has an identity and embeds into its Grothendieck group? Googling produces many instances of the phrase "sharp monoid", but not a definition. – Anton Geraschenko♦ Dec 3 2010 at 7:39
Sharp means no (non-trivial) units. – 36min Dec 3 2010 at 7:43
This is called "multi-proj" for submonoids of Z^n and is annoyingly hard to google due to google thinking you really meant "multi-project". Look in Miller and Sturmfels. – Ben Webster♦ Dec 3 2010 at 8:31
@Ben: I found the book at books.google.com/…, but can you give a more detail reference on sections, I can't search Multi-Proj in it either. – Yuhao Huang Dec 3 2010 at 8:40
By the way, the only varieties you'll ever get are the usual projs of taking the graded pieces for multiples of a single generic vector in your monoid (maybe after saturation). You might also want to look into geometric invariant theory. – Ben Webster♦ Dec 3 2010 at 8:45
2 Answers
My naive guess from the examples is the following:
Spec$k[P^{gp}]$ (which is just a product of $\mathbb{G}_m$'s) (somehow) acts on Spec$R$, and a GIT-quotient gives the construction you need, because the Proj construction is just a GIT-quotient of Spec$R$ by $\mathbb{G}_m$ w.r.t a certain linearization.
I'm not sure if the above construction generalizes directly, maybe some extra data is necessary.
-
The monoid and the sharpness condition feel misplaced to me. After all, there's no reason a ring must be positively graded in order to take Proj of it.
A grading on $R$ is "really" a $\mathbb G_m$-action on $R$, which happens to be the same as a monoid homomorphism from $R$ to the character group of $\mathbb G_m$. $Proj(R)$ is obtained by cutting the $\mathbb G_m$-fixed locus out of $Spec(R)$ and quotienting what's left by the action of $\mathbb G_m$. So it feels like the more natural generalization to shoot for is "Proj" of a ring with group action.
But maybe not. I think a productive case to think about is $R=k[x,xy,x^2y]$ with the bi-grading inherited from $k[x,y]$. It seems to me unlikely that any reasonable generalized $Proj(R)$ would depend on which specific monoid you take, whether its $\mathbb N\times \mathbb N$, or something bigger, or something smaller. It may as well be $\mathbb Z\times \mathbb Z$.
-
Anton- You should know better than this. By your recipe, what is the Proj of k[x,y] with x of weight 1 and y of weight -1? This is one of the standard examples for why geometric invariant theory is a good reason and it involves (surprise, surprise) picking a positive direction. The monoid you use for your proj should be a submonoid of the ample cone on the result; it doesn't sound like you really want to have lots of units. Such line bundles can only see the affinization of your variety. – Ben Webster♦ Dec 3 2010 at 8:42
@Ben: I don't understand your objection. The recipe produces something over Spec of the zero-graded (invariant) subring. In your example, you get the non-separated line over the line Spec(k[xy]). It's weird, but what's so unreasonable about it? You're right that cutting out the fixed locus is probably the wrong thing to do, since you could artificially change this locus--the right thing is to remove points with large stabilizers. But once you do that, this recipe is basically the quotient of the pre-stable locus. It seems like I don't have to pick line bundle to construct things like ℙ^n. – Anton Geraschenko♦ Dec 3 2010 at 15:25
http://samjshah.com/2010/02/21/some-of-my-algebra-ii-class-on-friday/ | Some of my Algebra II class on Friday
Posted on February 21, 2010 by
I enter my Algebra II classroom two minutes before class, open my computer and plug it into the SmartBoard. By the time it powers up, most of my students have entered the room and are sitting down and chatting. I pull up the day’s SmartBoard and I get started. The day before was exhausting, and I was in a cranky mood then. (My Algebra II kids didn’t see this, because I gave them a test that day.) I tell my kids we all have bad days, but that when I was thinking “argh, bad day!” I started thinking of all the good that I have, and I thought of my wonderful Algebra II class. (Which they are.) So I wanted to let them know that. They liked hearing that. I liked saying that. It was a nice 30 seconds.
I then pointed to the SmartBoard
And we got started. I talked about how we’ve done so much algebraic manipulation and solving so far. Absolute value equations, exponent rules, radical equation, inequalities. And we’ve done some baby graphing (lines, crazy functions which we used our calculator to graph). But today, I said, was going to be a turning point in our course — and graphing would be the emphasis.
I introduce the discriminant, $b^2-4ac$ and we talk about where we’ve seen that (answer: in the quadratic formula). I tell them they will soon see the use of it. But first we should get familiar with it. [1] We calculate it for a few quadratics. And then I asked them “so what? what does this thing tell you?”
(Silence.)
I move on and say, “Okay, we just calculated a discriminant of -11 for a quadratic equation. Tell me something.”
I didn’t have them talk in partners, and when I got more silence, I highlighted the discriminant in the quadratic formula:
And then I asked “what is something mathematical you can tell me about the quadratic if the discriminant is -11?”
A few hands went up, and then I should have had them talk in partners. But I didn’t. I called on one, who said “there will be $i$.” “What do you mean?” “The solutions will have imaginary numbers.” “Right!”
I then go on to explain it in more detail to those who still don’t see it. And then I explain how the two zeros are going to be complex (because they have a real part and an imaginary part). I see nods. I feel comfortable moving on.
I then ask “what happens if the discriminant is 10?”
I call on a random student whose hand is not raised, who answers “they will be real.” I ask for clarification, and they said “the solutions will be real.”
So I go to the next SmartBoard page and I start codifying our conclusions:
I’m hankering for someone to ask the obvious next question, and indeed, a student does. “What happens if the discriminant is 0?”
And we discuss, and realize there will only be one real solution. This gets added to the chart.
I then ask them to spend 15 seconds thinking about this — what they just learned. To see if it makes sense, or if they have any questions. Just some time.
I’m not surprised (in fact, I’m delighted) when a student asks: “Can you ever have a discriminant equal zero?”
I suddenly realized that for some of my kids, we’re now in the land of abstraction. There is this new thingamabob with a weird name, the discriminant, and the students don’t know what it’s for or why we use it. We’ve been talking about $a$s, $b$s, and $c$s and even though we’ve done a few examples, it isn’t “there” for the kids yet.
I throw $x^2+2x+1$ on the board. He nods approvingly. Then I ask what the solution or solutions are for that equation, and they find the one real solution. Which gets repeated twice when we factor.
I then give them 5 minutes to check themselves by asking them to do the following 3 problems:
I walk around. Two students are actually doing the quadratic formula. So I go up to the board and underline the things in blue — and ask “do you need the full force of the quadratic formula to answer THIS question?” (Secretly I grimace, because who the heck cares if they use the QF or use the discriminant to answer the question? But if I’m teaching something, I want my kids to practice it.)
When we all come back together, I project the answers
And I get called out (rightfully so!) on improper mathematical language (imaginary vs. complex). So I fix that. I’m feeling slightly guilty about asking the two students to use the discriminant instead of the QF to answer the question, because who cares!, and so I tell the class that the discriminant is just a short way to tell the number and nature of the solutions, but don’t worry if you forget it, because you can always pull out the big guns: the quadratic formula. Which will not only tell you the number and nature of the solutions, but also what the solutions are!
I have them make a new heading in their notes
And I have them work with a desk partner to solve three quadratic equations using any method they like (they only know factoring, the quadratic formula, and completing the square).
They get the right answers, for the most part. The ones who aren’t getting it right are having trouble using their calculator to enter in their quadratic formula result. I want to move on, because of time, so I tell them that we can go over calculator questions in the next class but I want them to put those aside so we can see the bigger picture now.
They then are asked to graph the following three equations on a standard window:
We also talk about the difference between the two things they are working with:
We then look at the graph:
At this point, I haven’t pointed out the x-intercepts, but I asked students to see if they can relate the two questions. I grow a bit impatient, and I point to one x-intercept, and then the other, for the $x^2-5x-3=y$ equation. Hm. They start to see it. We then look at the $x^2+2x+1=y$ equation. Finally, the $x^2+x+2=y$ equation. Which doesn’t have any x-intercepts. Which confused some.
Someone said “that’s because the solutions are complex.” I pointed and said something like “yeah!” and then tried to explain. Some students got confused, because we did plot complex numbers on a complex plane, so they were like “you can plot complex solutions too!” I tried to address their concerns, by saying that the x-intercepts show us the solution when $y=0$. But the x-axis is a REAL number line, not a complex number line. I don’t think they all got what I was saying.
We codify what we know:
This all took about 30 or 35 minutes.
I somehow totally forgot to do something key: bring the discussion back to discriminants. I didn’t ask them “so what can a discriminant tell you about a graph of a quadratic?” It might be obvious to us, but I guarantee you that only a few kids would actually be able to answer that after our lesson.
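(An added sketch, not part of the original post.) The missed punchline fits in a few lines of code: the sign of the discriminant of $y=ax^2+bx+c$ predicts how many x-intercepts the graph has. The three triples below are the quadratics graphed in class.

```python
quadratics = [(1, -5, -3), (1, 2, 1), (1, 1, 2)]   # x^2-5x-3, x^2+2x+1, x^2+x+2

for a, b, c in quadratics:
    disc = b*b - 4*a*c
    if disc > 0:
        kind = "two real solutions -> two x-intercepts"
    elif disc == 0:
        kind = "one repeated real solution -> one x-intercept"
    else:
        kind = "two non-real complex solutions -> no x-intercepts"
    print(f"y = {a}x^2 + {b}x + {c}: discriminant = {disc}, {kind}")
```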
We spend the remaining 15 or 20 minutes on graphing quadratics of the form $y=x^2+bx+c$ and $y=-x^2+bx+c$ by hand. The students were working in pairs. Then at the end we make some observations as a class.
Class ended, and then I had more work to do.
The point of this post is two fold:
1) I’m in a teacher funk. You can see it in this class. I didn’t work backwards. I gave them what they needed to know (the discriminant), and then motivated it second. I did lots of teacher centered things. I rarely let them discuss things with each other. Blah. Especially for something conceptual, not good. Not terrible, not good.
2) Teaching is exhausting. Anyone who teaches knows that even in a non-interesting lesson like this, a teacher has to constantly be thinking “what do my kids get?” and “do I need to say that again and reword it?” and “do I address the calculator issue of 2 kids when 15 seem to be okay?” Basically, every 10 seconds is a choice that needs to be made, a thought about how to adapt, where to go, what to do.
[1] Honestly, personally, I think the whole idea of the discriminant is stupid and I would have no problem doing away with it. It’s a term with very little meaning and almost no use. But I am asked to teach it, so I do.
This entry was tagged Algebra II.
10 thoughts on “Some of my Algebra II class on Friday”
1. Regarding 1). I think doing teacher centered things are fine, especially if you are re-mediating while you teach.
Students come to me sometimes much lower than they should be in a honors class. Letting them discuss would be a waste of time in the best case. Worst case is they confuse each other and learn the wrong things, they are better off if they didn’t even chat about math in those instances.
Regarding 2) I totally agree with that. It’s also exhausting sometimes when you are figuring out where a student has a misconception. Sometimes they resist what you teach because they learned something incorrectly in a previous class, so you have to do some diagnosis instead of just going with “this is the way it is, and I’m the teacher so I’m right.” It’s also harder when you have 40+ students in a class. I ask many questions and walk around a lot in class.
Another connection you can try is rewriting the quadratic formula as x= -b/2a +- sqrt(b^2-4ac)/2a. x=-b/2a is the axis of symmetry and sqrt(b^2-4ac)/2a is the horizontal distance from the axis of symmetry to the zeroes of the quadratic on the x-axis. I keep reminding them that they don’t have to remember a separate formula for axis of symmetry; it’s the first part of the quadratic formula.
When the discriminant is positive, there’s 2 zeroes that are sqrt(b^2-4ac)/2a units away from the axis of symmetry on the x-axis (one on the left and one on the right, hence the +-). When the discriminant is zero, the 2 zeroes are right on the axis of symmetry. When the discriminant is negative, well you get the idea.
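(An added sketch, not part of the original comment.) A short symbolic check of this reading of the quadratic formula: the two roots sit symmetrically about $x=-b/2a$, at horizontal distance $\sqrt{b^2-4ac}/2a$.

```python
import sympy as sp

a, b, c, x = sp.symbols('a b c x', real=True)
roots = sp.solve(sp.Eq(a*x**2 + b*x + c, 0), x)
axis = -b/(2*a)
print([sp.simplify(r - axis) for r in roots])   # +/- sqrt(b**2 - 4*a*c)/(2*a)
```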
You can also go with easier examples at first for graphing that highlight the differences:
y=x^2-2x
y=x^2-2x+1
y=x^2-2x+2
This might also be a good time to show them how to use different line styles for different graph on TI calculators.
Also try asking students what c needs to be to have 0 solution, 1 solution, and 2 solutions for quadratics like:
y=x^2+4x+c
y=2x^2-12x+c
A quadratic has 2 distinct solutions that are additive inverses. What can you tell me about a, b, and c?
If a quadratic has no real solutions, must the solutions be complex conjugates?
I don’t see discriminants coming up in meaningful ways outside of math competitions. I guess I half-agree with [1].
2. Well sheesh, Sam! Hand in your teaching card right now.
1. The fact that you’re reflecting on this makes you a better teacher than half of them at my school.
2. The fact that THIS lesson is causing you problems makes you better than most of the rest. This is an abstract lesson and it’s really hard to get students to understand/figure this out on their own on the best of days (yours and theirs).
We all get in these funks and “regress” to this type of teaching. From my experience, believe it or not, SOME (not all) students actually feel more comfortable with this type of teacher-centered learning.
Anyhow, we’ve all been there. It’s days like these that I apply my saying, “The worst thing about a bad lesson is when the kids don’t even know.” Your students probably do not even recognize this as a “bad” lesson. On the one hand, it can make you feel better about not ruining their lives. On the other hand, we still hope they got the lesson in a more meaningful way than just being talked at.
You’ll climb out of this funk. Step 1 is admitting your problem (in a public forum for accountability).
3. The fact that you were able to teach a lesson as complex as this on a Friday and have them all engaged is amazing if you ask me. I think you are not giving yourself enough credit. If you go back in on Monday and do a little review and reteach, I think you’ll see the light bulbs going off right and left. Some concepts just need to marinate in the brain a little while.
4. Mike
I think your approach here is fine. You can’t hit a home run on every lesson, but you should keep swinging for the fences!
Is it worth nit-picking that the real solutions are also complex solutions? One of the most common mistakes I come across when I ask students for all “complex” solutions to a polynomial is that they will only give the non-real ones (even though I can see in their work the real solutions).
It’s just a thought… I’ve been reading your blog for about 6 months and it’s so thoughtful and reflective. I was a little surprised to see real solutions characterized as different from complex ones.
Mike
5. @all
I should be clear that I didn’t post this because I think it’s a bad lesson. It wasn’t great, it wasn’t bad. And I’m not *upset* about it in anyway, or think I’m a bad teacher about it or anything. It just was sort of backwards. But I teach like this a lot, and I am going to own that. I am just not as student centered as I want yet.
Part of the post was that I am in a bit of a funk in general. But really I should have said that the point of this was to show people who argue how easy teaching is that even for a lesson like this, that SO MUCH needs to happen. That you have to be ON all the time. You have to carefully construct the order of things. You have unanticipated problems. You have to adapt and connect and go forth, adapt and connect and go forth.
@CalcDave, “We all get in these funks and “regress” to this type of teaching. From my experience, believe it or not, SOME (not all) students actually feel more comfortable with this type of teacher-centered learning.” Yes. I was that student.
@Mike, yes, yes, I should have said “no real solutions” or “two distinct complex solutions” — especially since they just were asked true-false questions about the number systems on their last assessment.
6. Kate Nowak
I’d actually like to see more people writing up posts like this. What, how, what order they presented something, how they knew whether it was working, what decisions they made along the way. Those who blog myself included I think sometimes feel obligated to only write up the groundbreaking lessons with cool-factor. But honestly many people, myself included are delivering much of their instruction this way due to forces beyond our control. I learned a ton from you deciding to write up a narrative this way. Let’s get more people to do it.
• Here here! (Or is it hear! hear!) We talk about the great. We talk about the failures. (Sometimes, anyway.) We talk about interesting mnemonics and fun ways to get into the material. We talk about funny, precious, poignent moments. But what about the everyday? For me, this is more like the everyday than most other things I write about.
7. I don’t teach the discriminant as its own topic. Sometimes when we’re working on graphing quadratics, it comes up. Sometimes it’s even a student who brings it up. (That’s a big plus, in my book!)
>even in a non-interesting lesson like this
I know about being in a teacher funk. But I think one thing that helped me was finding the interesting-ness of each lesson. I love how equations and graphs weave together, and I love this lesson for that reason. (Though, like I said, I don’t emphasize the discriminant.)
There were years for me where I’d get bored by all that I had to write on the board. Now my focus has switched subtly from teaching the material to teaching/reaching the students, and if I have a group I’ve managed to connect with, I seldom get bored, or feel like the lesson isn’t interesting. (Even though my lessons are seldom as interesting as dy/dan’s.)
8. McMathTeacher
One little “extra” I like to throw in with the discriminant is how the kids can use it to help determine if a quadratic is factorable. I help them write a quick program for the 83/84, and then they can use it to decide if they can factor. I think this serves two purposes. The better kids will see the connection between the “nice” zeros and factored form, and the weaker kids will have a technique they can use in the future to find the “factorability” of a quadratic.
9. Tony
Don’t brush off the applicability of the discriminant so quickly! Even though they could solve many of these problems using the quadratic formula it is much quicker and less error prone to use the discriminant – especially once the QF is memorized. It’s like building an additional floor on a skyscraper; you don’t need buy more land. Teaching them to use that unwieldy part of the QF helps keep it in mind because the radical is no longer mysterious.
I’m trying to remember my days as a high school student, currently a college freshman.
http://math.stackexchange.com/questions/250466/question-on-dimension | # Question on dimension
In some textbook of Commutative Algebra, the authors define the height of a prime ideal $\mathfrak{p}$ of a commutative ring $R$ as the maximal length $k$ of a chain of prime ideals $\mathfrak{p}_{0}\subset \mathfrak{p}_{1}\subset\ldots\subset \mathfrak{p}_{k}=\mathfrak{p}$.
The Krull dimension of the ring $R$ is then defined to be $\sup \operatorname{ht}\mathfrak{p}$ over all prime ideals $\mathfrak{p}$ of $R$.
My first question is : 1- Does $\mathfrak{p}_0$ must be different from the zero ideal ?
In this link the author claimed that $\mathbb{Z}/n\mathbb{Z}$ has Krull dimension 0. Now take $n=4$; then in $\mathbb{Z}/4\mathbb{Z}$ we have the chain of ideals $(\bar{0})\subset (\bar{2})$, so I think the Krull dimension is 1.
The second question is : 2- What is wrong in my argument ?
Update: Thanks to Sanchez for the answer. My question now is: what are the prime ideals in $\mathbb{Z}/n\mathbb{Z}$? Can we list them to conclude that $\mathbb{Z}/n\mathbb{Z}$ has dimension 0?
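(An added computational illustration, not from the original thread; the choice $n=12$ is arbitrary.) The ideals of $\mathbb{Z}/n\mathbb{Z}$ are the $(\bar d)$ for divisors $d$ of $n$, and $(\bar d)$ is prime exactly when $d$ is a prime number; since every prime ideal of $\mathbb{Z}/n\mathbb{Z}$ is already maximal, no chain of primes has length $>0$, so the Krull dimension is $0$.

```python
from sympy import divisors, isprime

n = 12
for d in divisors(n):
    ideal = sorted({(d * k) % n for k in range(n)})
    print(f"({d}) = {ideal}, prime ideal: {isprime(d)}")
```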
-
1. No. 2. 0 is not a prime ideal in $\mathbb{Z}/4\mathbb{Z}$, for 2 is a zero divisor. – Sanchez Dec 4 '12 at 4:26
@morse: what you wrote in the first paragraph is the height of a prime ideal. – QiL'8 Dec 4 '12 at 7:57
Your questions show not much effort to do things by yourself. Do you know the ideals of $\mathbb{Z}/n\mathbb{Z}$? Do you know the fundamental theorem of isomorphism for rings? How look the quotient ring of $\mathbb{Z}/n\mathbb{Z}$ by an arbitrary ideal? Which one of these quotients are integral domains (=fields)? – YACP Dec 4 '12 at 8:57
http://www.indiana.edu/~fluid/ | Home
Shouhong Wang
Professor Department of Mathematics Indiana University Bloomington, IN 47405 . Phone: 812 855 8350 Fax: 812 855 0046 Email: showang@indiana.edu
# New Theory of Dark Energy and Dark Matter
### Tian Ma & Shouhong Wang, Unified theory of dark energy and dark matter
Books:
• Geometric Theory of Incompressible Flows with Applications to Fluid Dynamics, AMS Monograph and Mathematical Survey Series, vol. 119, 2005 (with Tian Ma)
• Bifurcation Theory and Applications, World Scientific Series on Nonlinear Science, Series A - Vol. 53, 2005 (with Tian Ma)
• Stability and Bifurcation of Nonlinear Evolutions Equations, Science Press, Beijing, April, 2007, pp. 437 (with Tian Ma)
• Phase Transition Dynamics in Nonlinear Sciences, Springer-Verlag, to appear, 2012, about 669pp. (with T. Ma)
### Recent Research Highlights
Phase Transitions
It is well-known that the gas-liquid coexistence curve terminates at a critical point, also called the Andrews critical point. It is a longstanding open question why the Andrews critical point exists and what is the order of transition going beyond this critical point. For the first time, we show that 1) the gas-liquid co-existence curve can be extended beyond the Andrews critical point, and 2) the transition is *first order* before the critical point, *second-order* at the critical point, and *third order* beyond the Andrews critical point. This clearly explains why it is hard to observe the gas-liquid phase transition beyond the Andrews critical point. Furthermore, the analysis leads naturally to the introduction of a general asymmetry principle of fluctuations and the preferred transition mechanism for a thermodynamic system. The theoretical results derived in this article are in agreement with the experimental results obtained in (K. Nishikawa and T. Morita, Fluid behavior at supercritical states studied by small-angle X-ray scattering, Journal of Supercritical Fluid, 13 (1998), pp. 143-148) and their related articles. Also, the derived second-order transition at the critical point is consistent with the result obtained in (M. Fisher, Specific heat of a gas near the critical point, Physical Review, 136:6A (1964), pp. A1599-A1604).
We have derived new Ginzburg-Landau type of models for liquid helium-3 [78], helium-4 [77] and their mixture [82], leading to various physical predictions, such as the existence of a new phase $C$ for helium-3. Although these predictions need yet to be verified experimentally, they certainly offer new insights to both theoretical and experimental studies for a better understanding of the underlying physical problems.
Geophysical Fluid Dynamics and Climate Dynamics
• New Mechanism of El Nino Southern Oscillation (ENSO) [91, 86]
We discovered a new mechanism of the ENSO, as a self-organizing and self-excitation system, with two highly coupled oscillatory processes: 1) the oscillation between the two metastable warm (El Nino phase) and cold events (La Nina phase), and 2) the spatiotemporal oscillation of the sea surface temperature (SST) field. The interplay between these two processes gives rises the climate variability associated with the ENSO, leads to both the random and deterministic features of the ENSO, and defines a new natural feedback mechanism, which drives the sporadic oscillation of the ENSO. The randomness is closely related to the uncertainty/fluctuations of the initial data between the narrow basins of attractions of the corresponding metastable events, and the deterministic feature is represented by a deterministic coupled atmospheric and oceanic model predicting the basins of attraction and the sea-surface temperature (SST). It is hoped this mechanism based on a rigorous mathematical theory could lead to a better understanding and prediction of the ENSO phenomena.
• Thermohaline Circulation [88]
Oceanic circulation is one of key sources of internal climate variability. One important source of such variability is the thermohaline circulation (THC). Physically speaking, the buoyancy fluxes at the ocean surface give rise to gradients in temperature and salinity, which produce, in turn, density gradients. These gradients are, overall, sharper
in the vertical than in the horizontal and are associated therefore with an overturning or THC. A mathematical theory associated with the thermohaline circulations (THC) is derived in [88], using the Boussinesq system, governing the motion and states of the large-scale ocean circulation. First, it is shown that the first transition is either to multiple steady states or to oscillations (periodic solutions), determined by the sign of a non-dimensional parameter $K$, depending on the geometry of the physical domain and the thermal and saline Rayleigh numbers. Second, for both the multiple equilibria and periodic solutions transitions, both Type-I (continuous) and Type-II (jump) transitions can occur, and precise criteria are derived in terms of two computable nondimensional parameters $b_1$ and $b_2$. Associated with Type-II transitions is the hysteresis phenomena, and the physical reality is represented by either metastable states or by a local attractor away from the basic solution, showing more complex dynamical behavior. Third, a convection scale law is introduced, leading to an introduction of proper friction terms in the model in order to derive the correct circulation length scale. In particular, the dynamic transitions of the model with the derived friction terms suggest that the THC favors the continuous transitions to stable multiple equilibria.
Last updated on December 28, 2012
http://math.stackexchange.com/questions/246376/given-an-arbitrary-commutative-ring-a-does-integral-closure-always-exist?answertab=oldest | # Given an arbitrary commutative ring $A$, does integral closure always exist?
As for an arbitrary field $K$, we know that its algebraic closure always exists and it is unique up to an isomorphism. However, when we talk about integral closure of some commutative ring $A$, we are always given $A$ as some subring of a larger ring $B$ and its closure is defined to be all the elements of $B$ integral over $A$.
It seems like, due to the lack of cancellation law for multiplication, there doesn't seem to be a natural choice for even "simple extensions" by a root of a polynomial, hence making the concept rather meaningless. It still seems, however, possible to arbitrary append $A$ with a root of a polynomial in $A[x]$ and get some "integral extension" of $A$, albeit it not being a natural choice. For example, $\overline{\mathbb{Q}}$ in $\mathbb{C}$ might be considered as an integral closure of $\mathbb{Z}$.
So here is my question: Given a commutative ring $A$, is there a ring $B$ such that $B$ is integral over $A$ and every polynomial in $A[x]$ somehow splits completely in $B[x]$?
-
## 1 Answer
For a domain $A$ the absolute integral closure $\widetilde{A}$ always exists and is unique up to $A$-isomorphisms. It is defined to be a maximal integrally closed domain, which is an integral extension of $A$. Quite similar to the case of an algebraically closed field. One gets $\widetilde{A}$ by taking the integral closure of $A$ within an algebraic closure of the fraction field of $A$.
Your last remark seems to imply that you have the impression that polynomials $f\in A[X]$ split completely over $\widetilde{A}$. Of course this is not the case in general, and is not intended by the definition. Instead polynomials $f\in A[X]$ split over $\widetilde{A}$ into linear factors and factors without a root in $\widetilde{A}$.
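A concrete illustration (my addition, not the answerer's): in $\widetilde{\mathbb{Z}}$, the integral closure of $\mathbb{Z}$ in $\overline{\mathbb{Q}}$, the monic polynomial $X^2-2$ splits as $(X-\sqrt{2})(X+\sqrt{2})$, because $\pm\sqrt{2}$ are integral over $\mathbb{Z}$; by contrast, $2X-1$ has no root in $\widetilde{\mathbb{Z}}$, since its only root $1/2$ is not an algebraic integer, so it remains a factor without a root.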
The case of a ring with zero-divisors is more complicated, and I cannot summarize the situation by heart.
-
Thanks for the insight. I see how linear factors might not have a root in its absolute integral closure. As for domains, it seemed like it would work out, although I should go work out the details myself. Although, my main concern was about $A$ not being a domain. I don't even know where to start. I tried to come up with a counterexample, but to no avail. – Sam Nov 28 '12 at 9:15
http://crypto.stackexchange.com/questions/6408/from-hash-to-cryptographic-hash | # From hash to Cryptographic hash
After reading some excellent papers on SipHash, I understood that good non-cryptographic hashes such as MurmurHash and CityHash are not secure for MAC usage, because a certain type of denial-of-service attack becomes possible, thanks to a combination of hash-table degeneracy into a list and predictable collisions.
The central hypothesis is that it is possible to build multiple secret-key-independent collisions, because the inner hash loop evolution is secret-key-independent.
The reasoning and solution are clever; however, I see that it requires a manual investigation of the algorithm by a cryptographic expert. This conclusion wasn't drawn from automated tools. I guess the near-perfect distribution and avalanche of MurmurHash would have defeated such automated analysis. But I may be wrong.
I would like to apply this learning to xxHash. Consider this a learning exercise.
xxHash seems to avoid the specific issue described in the SipHash paper by integrating the secret key into the internal state right at the beginning, before the inner loop. As a consequence, since the next internal state depends on the previous one, the difference between two consecutive internal states cannot be secret-key-independent. Therefore, it looks like it prevents the existence of secret-key-independent collisions and solves, or partly solves, the collision-resistance property.
Currently, xxHash is not cryptographic mostly because its output is 32 bits. However, creating 64-bit and 128-bit variants from it can be pretty straightforward.
So let's pretend that the 32-bit output is not a problem (a brute-force solution is accessible, but if that is the only solution, then the underlying principles are considered "about okay").
Question: Is there a way to analyse xxHash and tell, either by human analysis or with an automated tool, that this hash function is or is not cryptographic? And if not, what needs to be solved?
PS: as an obvious source of information, let's check Wikipedia on what a cryptographic hash function is. Apparently, pre-image and collision resistance are all it takes to make an already good hash function become a cryptographic one. Is this correct?
[EDIT]
Here is a candidate solution to produce a secret-key-independent collision for xxHash. Well, almost. Note that many restrictions apply, and that this trick is not guaranteed to work, due to the secret key. In the best circumstances, however, it's likely to work with a probability of 50%.
1) At position $P$, such that $P$ is a multiple of $4$, there is a 32-bit sequence $S$.
2) Add the value $A =$ 0xBD1C0000 to $S$. This will add a fixed value to $Op1(S) = S * prime2$, which is $Op1(A) = A * prime2 = 000A0000 = bit18$ (because the multiplication distributes over the addition).
3) However, what we really want is not to add $bit18$, but to modify the 18th bit of $Op1(S)$, from 0 to 1. The reason is that if $Op1(S) \& bit18 == 1$, then by adding $bit18$ we will not only modify the 18th bit, but also one or several other upper bits, due to carry spillover. Since we want a predictable result, we want to avoid that.
4) As a consequence, we need to select $P$ containing a sequence $S$ such that $Op1(S)$ respects this condition (18th bit = 0). Since, for a random input, there is a 50% chance that the 18th bit of $Op1(S)$ is equal to zero, for any long enough input a solution is very likely to be found (with $S = 0$ being an obvious solution).
5) However, it's not finished. $Op1(S+A)$ is then added to an unknown internal state, which we will call $Z$. This internal state is dependent on the secret key.
6) The same condition as before applies: in order to have a controllable effect, we need the 18th bit of $Z$ to be 0. However, there is no way to check nor guarantee that condition. As a consequence, the method presented here is not guaranteed to work properly (probability 50%... for now).
7) In fact, it is even worse than that: we need both to ensure that the 18th bit of $Z$ is zero, and that the addition of the lower bits of $Z$ and $Op1(S+A)$ does not carry up into the 18th bit, as a consequence of carry spillover.
8) Fortunately, this latest condition can be made more probable by ensuring that lower bits of $Op1(S)$ are also zero, such as the 17th bit, then the 16th bit, and so on. Unfortunately, with each additional zero condition, the probability of finding a suitable $S$ decreases. With respect to this latest condition, the best possible input is $S=0$.
9) Now, if all the above conditions are met, then the result of $Z1 = Z + Op1(S)$ has been slightly altered, by turning on its 18th bit: $newZ1 = Z + Op1(S+A) = oldZ1 + bit18$.
10) As a consequence, $Z2 = Op2(Z1)$ is slightly altered too. Since $Op2$ is a left-rotation of the 32-bit field by 13 bits, we have $newZ2 = oldZ2 + bit31$ ($bit31$ is the highest bit).
11) As a consequence, since $Op3$ is a simple multiplication by a prime number, the $bit31$ effect will remain unmodified.
12) This $bit31$ will be counter-balanced at position $P+16$. Here, it's enough to add the value $B =$ 0x80000000 to the initial sequence $S2$, because $Op1(B) = 80000000 = B$. This will cancel the difference from the previous round.
[EDIT 2]
Here is a better solution. This one needs nothing special regarding the secret key, and its success rate is almost 100%.
1) We'll do it the other way round: we will add just $1$ to $A = Op1(S)$. To get this effect, it's enough to add B6C92F47 to the initial sequence $S$ situated at position $P$, such that $P$ is a multiple of 4.
2) The beauty of this method is that it works as long as the lowest 17 bits of $Op1(S)$ are not all 1, which means that it almost always works. Moreover, it's easy to check whether the condition is respected.
3) If the above condition is respected, $Op2$ will merely shift the added 1 by 13 bits. So $B = Op2(A)$ is now modified by $+2^{13}$.
4) Now we can calculate the impact on $C=Op3(B)$. Since $newB = oldB + bit13$, we have $newC = oldC + Op3(bit13)$, and $Op3(bit13) =$ EF362000.
5) Now we have to cancel this on the next round. To get this result, it's enough to add $-$EF362000 $=$ 10C9E000. To add 10C9E000, we need to add 981D2000 to $S2$, because $Op1(981D2000) = 10C9E000$.
6) We now have a couple of values. Add B6C92F47 to the 32-bit field at position $P$, such that $P$ is a multiple of 4. Then add 981D2000 at position $P+16$. This solution works with a probability > 99.9%, requiring no special knowledge of the secret key.
-
Which of the two inputs do you call "secret key"? The "seed" or the "input" argument? – Paŭlo Ebermann♦ Feb 20 at 20:18
"secret key" = "seed" – Cyan Feb 21 at 10:11
## 1 Answer
Well, first of all, you need to be clear about the meanings of various cryptographical primitives.
• Cryptographic hash function; this is a function that takes an input string, and generates a hash. The idea is that we don't know how to create two input strings with the same hash, and so the hash can be used as a replacement for the original string. Now, cryptographic hash functions don't take a secret key, because we need to assume that anyone is able compute them; xxHash is obviously not a cryptographic hash function. And, yes, there are "keyed hash functions" which do take secret keys; xxHash isn't one of those either.
• Message authentication codes; this is a function that takes a message and a key; it generates an output string (a MAC). The idea is that if someone doesn't know the key, they can't generate the MAC for any message that they haven't seen the MAC for. This is rather closer to what you are hoping xxHash to be.
That being said, to answer the question, yes, there is a way to analyse xxHash and determine whether it is a secure message authentication code. And, the answer is whether it is a secure message authentication code is "no" (and if the answer was "yes", there would have been no really good way to determine that).
One way to see that is that, in fact, xxHash is not second preimage resistant, even to someone who doesn't know the key. Specifically, given a valid message/MAC pair (where the message is at least 32 bytes long), it is possible to construct a second message that, with probability $p=0.5$, evaluates to the same MAC. Remember that this is a violation of the Message Authentication Code security guarantees, not specifically because it is a collision, but because it allows the attacker to compute the MAC of a message he hasn't been given the MAC to (and it's a violation independent of whether that MAC value happens to be a value he has seen before).
As to how to find the alternative message, well, you said to treat this as a learning exercise, this can be considered an exercise to the reader. However, here's how it works overall; you introduce a change at step $X$, which disturbs the state. You also introduce a second change at step $X+1$ (actually, because of how xxHash works, 16 bytes later), and this second change cancels out the effects of the first change, leaving the internal state while processing the altered message exactly the same as the corresponding step for the original message. The success probability $p=0.5$ is there because whether the change is precisely what we expect depends on whether a certain internal carry happens during an addition. For, how can we arrange these two changes to make this happen?
-
Thanks for clarifying, the difference between cryptographic Hash and MAC is much better now. – Cyan Feb 21 at 10:18
That looks like an excellent answer. The principles which make xxHash not suitable for MAC usage seem identical to those targeting MurmurHash : mainly, the claim is that it is possible to forge 2 consecutives sequences which will cancel each other, whatever the underlying internal state is, making this operation independant of the secret key. I would feel better, nonetheless, if i could find such consecutive sequences. – Cyan Feb 21 at 10:32
@Cyan: here's step one in an approach to attacking this; one of the steps within xxHash is the assignment "v1 += XXH_LE32(p) * PRIME32_2". Suppose that, which a certain block from a certain message, the value of 'v1' was $X$ (which you do not know). How could you modify the block being processed at this point so that the value of 'v1' after this step is the value $X+A$ (for a value $A$ that you pick)? – poncho Feb 22 at 3:27
Yes, sure, in this case a solution would be easy. It would be possible to create a A, and then -A. The problem is, this is not the full line of operations. The full line is : v1 += XXH_LE32(p) * PRIME32_2; v1 = XXH_rotl32(v1, 13); v1 *= PRIME32_1; Operations 1 alone would match your hypothesis, Operation 2 makes it more complex but i feel it would still be possible to find a solution; but operation 3 (v1 *= PRIME32_1;) seems to scramble everything. I don't see how it could be made "independant of X", which is the whole point. – Cyan Feb 22 at 10:15
@Cyan: I said it was step 1; step 2 of finding the attack would be 'how do we pick A so that, after the rotate, we have a predictable result; if the real packet had the value $Y$ at that point, our altered packet would have a good chance of being $Y+B$ (for some $B$ we know). Originally, I found an $A$ for which this would hold with $p=0.5$; I now see other $A$ values which have $B$ values that hold with much higher probabilities. In any case, work on step 1; you'll learn from it. – poncho Feb 22 at 16:21
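(An added sketch, not from the thread.) A one-lane Python model of the round quoted above makes the exercise concrete. The two PRIME32 constants are the published xxHash32 constants (treat them as an assumption here); the point behind "step 1" is that the message word is multiplied by an odd constant modulo $2^{32}$, so a chosen state difference $A$ can be injected by adding $A\cdot \mathrm{PRIME32\_2}^{-1} \bmod 2^{32}$ to the word, whatever the unknown accumulator value is.

```python
MASK = 0xFFFFFFFF
PRIME32_1 = 0x9E3779B1   # published xxHash32 constants (assumed here)
PRIME32_2 = 0x85EBCA77

def rotl32(x, r):
    return ((x << r) | (x >> (32 - r))) & MASK

def round32(v, word):
    # the per-lane round quoted in the comments above
    v = (v + word * PRIME32_2) & MASK
    v = rotl32(v, 13)
    return (v * PRIME32_1) & MASK

# "step 1": PRIME32_2 is odd, hence invertible mod 2^32, so this delta shifts
# the accumulator by exactly A after the += step, independently of v
A = 1 << 17
delta = (A * pow(PRIME32_2, -1, 1 << 32)) & MASK

v, word = 0xDEADBEEF, 0x12345678      # arbitrary unknown state / input word
before = (v + word * PRIME32_2) & MASK
after = (v + ((word + delta) & MASK) * PRIME32_2) & MASK
print(hex((after - before) & MASK))   # 0x20000 == A, for any v
print(hex(round32(v, word)))          # one full round, for experimentation
```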
http://nrich.maths.org/6583/index?nomenu=1 | ## 'Scientific Curves' printed from http://nrich.maths.org/
Curve sketching is an essential art in the application of mathematics to science. A good sketch of a curve does not need to be accurately plotted to scale, but will encode all of the key information about the curve: turning points, maximum or minimum values, asymptotes, roots and a sense of the scale of the function.
Sketch $V(r)$ against $r$ for each of these tricky curves, treating $a, b$ and $c$ as unknown constants in each case. As you make your plots, ask yourself: do different shapes of curve emerge for different ranges of the constants, or will the graphs look similar (i.e. same numbers of turning points, regions etc.) for the various choices?
1. An approximation for the potential energy of a system of two atoms separated by a distance $r$
$$V(r) = a\left[\left(\frac{b}{r}\right)^{12}-\left(\frac{b}{r}\right)^6\right]$$
2. A radial probability density function for an electron orbit
$$V(r) = ar^2e^{-\frac{r}{b}}$$
3. Potential energy for the vibrational modes of ammonium
$$V(r)=ar^2+be^{-cr^2}$$
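The task is about hand sketching, but a few lines of Python can be used to check a sketch numerically. The values $a=b=1$ below are an arbitrary illustrative choice (any positive $a$, $b$ give the same qualitative shape), and the other two curves can be plotted by swapping in their formulas:

```
import numpy as np
import matplotlib.pyplot as plt

a, b = 1.0, 1.0                         # arbitrary positive constants for illustration
r = np.linspace(0.9, 3.0, 400)
V = a * ((b / r) ** 12 - (b / r) ** 6)  # the first potential above

plt.plot(r, V)
plt.axhline(0, color="gray", lw=0.5)
plt.xlabel("r")
plt.ylabel("V(r)")
plt.show()
```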
http://mathhelpforum.com/calculus/79969-analysis-rational-functions-using-calculus.html | # Thread:
1. ## Analysis of Rational Functions (Using Calculus)
Hi,
So the question is to analyze this rational function... I am easily able to get the first derivative. The second derivative causes my problems though because I don't have the chain rule! My first and most obvious question would be: This isn't an assignment... And we learn the chain rule early this week. Should I just ignore this question until we get the chain rule?
If the answer is no, then here is the first derivative (it is done correctly, as it was an example)
f'(x)= (-5x^2+5)/(x^2+1)^2
I guess, based on theory, that the best form to get this in, would be factored top and bottom. The example goes through one method and I can follow it for quite a while, but then it does one tricky thing that, as far as I can tell, would either be an ambiguous case, or else I don't understand it, so I am asking if someone here could go through the steps how how they would find the second derivative now.
Thanks in advance!
Mike
2. Hi
You managed to find the derivative of $\frac{5x}{x^2+1}$ using the formula $\left(\frac uv \right) '= \frac{u'v-uv'}{v^2}$
The derivative is $\frac{-5x^2+5}{(x^2+1)^2}$
The second derivative is obtained exactly the same way
Set $u(x) = -5x^2+5$ and $v(x) = (x^2+1)^2$ and use the above formula
3. I understand that part... Problem is, it gets really difficult to get the final answer of the second derivative in factored form.
That's where I need someone to show me how that works out. Especially so with this case
4. Using the formula I can get as second derivative
$\frac{10x(x^2-3)}{(x^2+1)^3}$
5. I understand that much... And I have the answer available. But my problem is still the STEPS to get there. How did you get that in factored form? The steps in the example keep it in factored form, and again, the question is, is this an ambiguous case? A rare instance where it works that I can keep it in factored form?
6. $f'(x) = \frac{-5x^2 + 5}{(x^2 + 1)^2}$
$f'(x) = -5 \cdot \frac{x^2-1}{(x^2 + 1)^2}$
$f''(x) = -5 \cdot \frac{(x^2+1)^2 \cdot 2x - (x^2-1) \cdot 4x(x^2+1)}{(x^2+1)^4}$
factor out $2x(x^2+1)$ from both terms in the numerator ...
$f''(x) = -5 \cdot \frac{2x(x^2+1)[(x^2+1) - 2(x^2-1)]}{(x^2+1)^4}$
combine like terms in the last factor of the numerator ...
$f''(x) = -5 \cdot \frac{2x(x^2+1)[3 - x^2]}{(x^2+1)^4}$
cancel the common factors of $(x^2+1)$ and clean up ...
$f''(x) = \frac{10x(x^2-3)}{(x^2+1)^3}$
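If you want to double-check the algebra, a short SymPy computation (assuming SymPy is available) reproduces both derivatives in factored form:

```
import sympy as sp

x = sp.symbols('x')
f = 5*x / (x**2 + 1)

print(sp.factor(sp.diff(f, x)))     # should match (-5x^2+5)/(x^2+1)^2, i.e. -5(x-1)(x+1)/(x^2+1)^2
print(sp.factor(sp.diff(f, x, 2)))  # should match 10x(x^2-3)/(x^2+1)^3
```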
http://physics.stackexchange.com/questions/tagged/hamiltonian-formalism?sort=faq&pagesize=15 | # Tagged Questions
The hamiltonian-formalism tag has no wiki summary.
### Book about classical mechanics
I am looking for a book about "advanced" classical mechanics. By advanced I mean a book considering directly Lagrangian and Hamiltonian formulation, and also providing a firm basis in the geometrical ...
### When is the Hamiltonian of a system not equal to its total energy?
I thought the Hamiltonian was always equal to the total energy of a system but have read that this isn't always true. Is there an example of this and does the Hamiltonian have a physical ...
### Symplectic structure of General Relativity
Inspired by physics.SE: http://physics.stackexchange.com/questions/15571/does-the-dimensionality-of-phase-space-go-up-as-the-universe-expands/15613 It made me wonder about symplectic structures in ...
### Why not using Lagrangian, instead of Hamiltonian, in non relativistic QM?
When we studied classical mechanics on the undergraduate level, on the level of Taylor, we covered Hamiltonian as well as Lagrangian mechanics. Now when we studied QM, on the level of Griffiths, we ...
### Hamiltonian mechanics and special relativity?
Is there a relativistic version of Hamiltonian mechanics? If so, how is it formulated (what are the main equations and the form of Hamiltonian)? Is it a common framework, if not then why? It would be ...
### How does one quantize the phase-space semiclassically?
Often, when people give talks about semiclassical theories they are very shady about how quantization actually works. Usually they start with talking about a partition of $\hbar$-cells then end up ...
### Energy operator
Does the Hamiltonian always translate to the energy of a system? What about in QM? So by the Schrodinger equation, is it true then that $i\hbar{\partial\over\partial t}|\psi\rangle=H|\psi\rangle$ ...
### Why is the symplectic manifold version of Hamiltonian mechanics used in Newtonian mechanics?
Books such as Mathematical methods of classical mechanics describe an approach to classical (Newtonian/Galilean) mechanics where Hamiltonian mechanics turn into a theory of symplectic forms on ...
### Find the Hamiltonian given $\dot p$ and $\dot q$
I have these equations: $$\dot p=ap+bq,$$ $$\dot q=cp+dq,$$ and I have to find the conditions such as the equations are canonical. Then, I have to find the Hamiltonian $H$. To answer to the first ...
### Hamiltonian or not?
Is there a way to know if a system described by a known equation of motion admits a Hamiltonian function? Take for example $$\dot \vartheta_i = \omega_i + J\sum_j \sin(\vartheta_j-\vartheta_i)$$ ...
### Lagrangian to Hamiltonian in Quantum Field Theory
While deriving Hamiltonian from Lagrangian density, we use the formula $$\mathcal{H} ~=~ \pi \dot{\phi} - \mathcal{L}.$$ But since we are considering space and time as parameters, why the formula ...
### Quantizing first-class constraints for open algebras: can Hermiticity and noncommutativity coexist?
An open algebra for a collection of first-class constraints, $G_a$, $a=1,\cdots, r$, is given by the Poisson bracket $\{ G_a, G_b \} = {f_{ab}}^c[\phi] G_c$ classically, where the structure constants ...
### Weyl Ordering Rule
While studying Path Integrals in Quantum Mechanics I have found that [Srednicki: Eqn. no. 6.6] the quantum Hamiltonian $\hat{H}(\hat{P},\hat{Q})$ can be given in terms of the classical Hamiltonian ...
### What are some mechanics examples with a globally non-generic symplectic structure?
In the framework of statistical mechanics, in books and lectures when the fundamentals are stated, i.e. phase space, Hamiltons equation, the density etc., phase space seems usually be assumed to be ...
### Lagrangian mechanics vs Hamiltonian mechanics
First of all, what are the differences between these two: Lagrangian mechanics and Hamiltonian mechanics? And secondly, do I need to learn both in order to study quantum mechanics and quantum field ...
### Type of stationary point in Hamilton's principle
In this question it is discussed why by Hamilton's principle the action integral must be stationary. Most examples deal with the case that the action integral is minimal: this makes sense - we all ...
### Correct application of Laplacian Operator
Not a physicist, and I'm having trouble understanding how to apply the Laplacian-like operator described in this paper and the original. We let: \hat{f}(x) = f(x) + \frac{\int H(x,y)\psi(y) ...
### Factors of $c$ in the Hamiltonian for a charged particle in electromagnetic field
I've been looking for the Hamiltonian of a charged particle in an electromagnetic field, and I've found two slightly different expressions, which are as follows: H=\frac{1}{2m}(\vec{p}-q \vec{A})^2 ...
### Non-Integrable systems
Integrable systems are systems which have $2n-1$ time-independent, functionally independent conserved quantities (n being the number of degrees of freedom), or n whose Poisson brackets with each other ...
### Writing $\dot{q}$ in terms of $p$ in the Hamiltonian formulation
In the Hamiltonian formulation, we make a Legendre transformation of the Lagrangian and it should be written in terms of the coordinates $q$ and momentum $p$. Can we always write $dq/dt$ in terms of ...
### Hamiltonian of polymer chain
I'm reading up on classical mechanics. In my book there is an example of a simple classical polymer model, which consists of N point particles that are connected by nearest neighbor harmonic ...
http://mathoverflow.net/questions/104432?sort=oldest | ## Circle-arc number of a knot
I would like to build knots in $\mathbb{R}^3$ from arcs of unit-radius (planar) circles, joined together at points where the tangents match. Thus the knot will have curvature $1$ at all but the joints. Here is an example of how two arcs might join:
Define the circle-arc number $C(K)$ of a knot $K$ as the fewest number of such arcs from which one can build a nonselfintersecting curve in space representing $K$. This number is analogous to the stick number of a knot, except that the pieces are arcs, and there is a tangent-joining condition.
I would be interested to learn of bounds on $C(K)$ in terms of other knot quantities, for example, the stick number, or the crossing number cr$(K)$.
Here is an example of what I have in mind. It appears that one might be able to build a trefoil from six arcs, something like this:
However, the above picture is actually planar, and I have not verified carefully that this is achievable in $\mathbb{R}^3$!
Has this concept been studied before? If so, pointers would be welcomed. Thanks!
Addendum. The trefoil can be realized with six arcs:
(The black triangle vertices indicate the circle centers on the plane before their arcs are twisted into 3D.)
-
In your planar diagram above, I suspect you might achieve 9 arcs by splitting each big arc directly (perhaps approximately) in half. Gerhard "Think Of Those Magic Rings" Paseman, 2012.08.10 – Gerhard Paseman Aug 11 at 0:03
How do you make such wonderful pictures, Joe? Is there a resource for this? (is it Mathematica?) – Jon Bannon Aug 23 at 15:49
@Jon: Thanks! :-) Yes, that one was produced via Mathematica, with a little post-Photoshop (because what I can get out of Mathematica directly is lower quality). – Joseph O'Rourke Aug 23 at 15:54
## 3 Answers
Not exactly your question, but in this paper, (knots of constant curvature, Jenelle McAtee, 2004), the author shows that every knot can be represented as a $C^2$ curve of constant curvature, so if you don't insist on planar curvature arcs, the answer is that your number equals $1.$
-
@Igor: Thanks for this reference, of which I was unaware! But the number is not 1, unless I misunderstand (which is quite possible!). She shows the number is finite, if helices are included. The number I defined is the number of distinct arcs. A number of 1 means just the trivial knot represented as a geometric circle. – Joseph O'Rourke Aug 10 at 21:46
[As you note, her $C^2$ condition is different from my $C^1$ condition.] – Joseph O'Rourke Aug 10 at 21:51
@Joseph: you do misunderstand what I am saying (my fault). You are asking for PLANAR arcs of constant curvature one. What I am saying is that you remove the planarity condition, you only need a single arc of constant curvature 1. But in fact, if you read her construction (she uses the braid representation of the knot, you can tweak her argument to get your planar arcs...) – Igor Rivin Aug 10 at 23:59
@Igor: Ah, I see your point! Apologies for misunderstanding! – Joseph O'Rourke Aug 11 at 0:28
This figure is from the paper by Jenelle McAtee to which Igor refers. Her results are impressive! Her goals are worthy but rather different from mine.
-
You can bound this number from below by the crossing number. The projection of the arcs of two unit circles can cross in at most two points, so $cr(K)\leq C(K)(C(K)-1)$. Also, you ought to be able to bound it quadratically from above by the grid number. If you have a knot presented by a grid diagram, you can represent the knot by a linear number of segments of linear length. Each one of these can be made into a linear number of arcs of unit circles by putting in wiggles. Since grid number is bounded above linearly by crossing number, one obtains inequalities of the form $$\sqrt{cr(K)}\leq C(K)\leq A\cdot cr(K)^2$$ for some constant $A$. Notice that the grid number can sometimes be like $O(\sqrt{cr(K)})$, so I don't expect the upper bound to be sharp. For example, for certain torus knots you'll have $C(K)=O(cr(K))$.
-
Brilliant! Thanks so much! – Joseph O'Rourke Aug 16 at 23:12
http://mathoverflow.net/questions/4348/sum-of-odd-numbers-results-in-a-square-number/4354 | Sum of odd numbers results in a square number [closed]
Recently I discovered by myself that the sum of $N$ sequential odd numbers (starting from 1) results in $N^2$.

Can anyone explain this to me? I want to know why, not a proof.
Some examples:
```
1                        1^2    1
1+3                      2^2    4
1+3+5                    3^2    9
...
1+3+5+7+9+11+13+15       8^2   64
```
This is not homework, but personal knowledge. I'm also sorry in advance if it's too simple.
Also an apology if it's tagged wrong.
-
It is surely at the low end of what is interesting here. I recommend AoPS. – Greg Kuperberg Nov 6 2009 at 14:34
Upvoted because, although the question isn't really appropriate, the answers are great. – David Speyer Nov 6 2009 at 16:14
As Greg said, this really belongs on artofproblemsolving.com/Forum/index.php . – Anton Geraschenko♦ Nov 6 2009 at 16:43
This is the question that has drawn the most highly rated number theory answer??? – Lavender Honey Nov 6 2009 at 19:46
3 Answers
Draw an $n \times n$ square:
```
xxxx
xxxx
xxxx
xxxx
```
(here $n$ = 4, and there are 16 x's). Now divide the square into $n$ symmetric L-shapes:
```
dcba
dcbb
dccc
dddd
```
As you can see, we have 1 a, 3 b's, 5 c's and 7 d's, so 16 = 1 + 3 + 5 + 7.
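Not part of the answer above, but the pattern is easy to confirm by brute force in Python for as many cases as you like:

```
# check 1 + 3 + ... + (2n-1) == n^2 for the first hundred n
assert all(sum(range(1, 2*n, 2)) == n**2 for n in range(1, 101))
```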
-
Another way of seeing this is to compare two consecutive square numbers, say $n^2$ and $(n+1)^2$. If we expand the larger one, we get $(n+1)^2=n^2+2n+1$, so it is exactly $(2n+1)$ more than the previous square. As $n$ increases, this expression represents the consecutive odd numbers. So squares always differ by consecutive odd numbers.
-
I know you didn't ask for a proof, but actually the best way of seeing WHY something happens is understanding a good, simple proof for it. Here is my try to your nice observation, using the formula for the sum of several consecutive terms of an arithmetic progression (which, by the way, has also a nice, simple proof):
You know that if $\{an+b\}$ is an arithmetic progression and you look at some of its consecutive terms, then their sum is "(the first one plus the last one) times (the number of terms added) divided by 2".
In your case, you have the progression $\{2n-1\}$, starting at $1$ and finishing in some $2N-1$, which has $N$ terms. Then, by the formula above, you get that their sum is $$S = \frac{(1+2N-1)\cdot N}{2} = \frac{2N\cdot N}{2} = N^2.$$
This, if you look at it by the reverse side, is caused by the fact that the substraction of two consecutive squares is a particular odd number, $(N+1)^2 - N^2 = N^2+2N+1-N^2 = 2N+1$. You could also use this to prove your claim by induction, but it wouldn't be, in my opinion, a clarifying proof of the kind you were looking for.
-
http://mathhelpforum.com/advanced-statistics/82037-minimizing-value.html | # Thread:
1. ## minimizing value
Let x be a random variable with support {1, 2, 3, 5, 15, 25, 50}, each point of which has the same probability 1/7. Argue that c=5 is the value that minimizes h(c)=E(|X-c|). Compare this to the value of b that minimizes g(b)=E[(X-b)^2].
2. Hello,
Originally Posted by antman
Let x be a random variable with support {1, 2, 3, 5, 15, 25, 50}, each point of which has the same probability 1/7. Argue that c=5 is the value that minimizes h(c)=E(|X-c|). Compare this to the value of b that minimizes g(b)=E[(X-b)^2].
You can have a look here : http://www.medicine.mcgill.ca/epidem...n-elevator.pdf
and here for a "similar" problem : http://www.mathhelpforum.com/math-he...blem-50-a.html (I think that the relevant messages are from post #15)
For b, it's the problem of least squares. You can google for "least squares"
3. Originally Posted by antman
Let x be a random variable with support {1, 2, 3, 5, 15, 25, 50}, each point of which has the same probability 1/7. Argue that c=5 is the value that minimizes h(c)=E(|X-c|). Compare this to the value of b that minimizes g(b)=E[(X-b)^2].
Consider $\{a,b\},\ b>a$, for any $c \in [a,b]$
$|a-c|+|b-c|=b-a$
and if $c \not\in [a,b]$ :
$|a-c|+|b-c|>b-a$
Hence any $c \in [a,b]$ minimises $|a-c|+|b-c|$
Apply this sequentially to $\{1,50\},\ \{2,25\},\ \{3,15\}$ shows that any $c \in [3,15]$ minimises:
$|1-c|+|2-c|+|3-c|+|15-c|+|25-c|+|50-c|$
and as the last point not accounted for is in this interval and $|5-c|$ is minimised when $c=5$ we have $c=5$ minimises:
$E(|X-c|)=(1/7) \sum |x_i-c|$
CB
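A quick numerical check of both claims (an added illustration, assuming NumPy is available; it is not part of the argument above):

```
import numpy as np

xs = np.array([1, 2, 3, 5, 15, 25, 50])
h = lambda c: np.mean(np.abs(xs - c))     # E(|X - c|)
g = lambda b: np.mean((xs - b) ** 2)      # E[(X - b)^2]

cs = np.arange(0, 51)
print(cs[np.argmin([h(c) for c in cs])])  # 5, the median of the support
print(xs.mean())                          # about 14.43, the value of b minimizing g
```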
http://mathhelpforum.com/number-theory/129946-applying-fermat-wilson-s-theorem.html | # Thread:
1. ## Applying Fermat and Wilson's Theorem
Hi everyone! I just had a couple questions about a few things I read in a textbook.
There were a couple points made that did not show any proof. I was able to prove some of them but the two listed below gave me some trouble. I like to see why things are so and need the proof in order to accept its validity. I was wondering if anyone here could help me prove these?
--- If b > 1 is not prime, then (b-1)! is not congruent to -1 (mod b)
--- If p is an odd prime, then (p-3)! ≡ (p-1)/2 (mod p)
Thank You!
2. #1: If you know Wilson's theorem, then what is the contrapositive of its statement?
#2: By Wilson's theorem, $(p-1)! \equiv - 1 \ (\text{mod } p)$
But: $(p-1)! = (p-1)(p-2)(p-3)!$
Can you finish off?
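Before finishing the algebra, a quick numerical sanity check of the second identity for small odd primes can be reassuring (SymPy is used only to enumerate primes):

```
from math import factorial
from sympy import primerange

for p in primerange(5, 60):
    assert factorial(p - 3) % p == (p - 1) // 2   # (p-3)! is congruent to (p-1)/2 mod p
```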
3. Originally Posted by o_O
#1: If you know Wilson's theorem, then what is the contrapositive of its statement?
#1 is not the contrapositive of Wilson's theorem!
Suppose $p|b$, $p \neq b$. Then $p|(b-1)!$. If $(b-1)!+1\equiv 0 \mod b$ then $(b-1)!+1 = kb$ for some $k \in \mathbb{Z}$. Now $p|b, p|(b-1)!$ but $p \nmid 1$, which is impossible.
4. Hmm .. you sure? Wilson's theorem is a biconditional statement. So if we take the direction that: $(b-1)! \equiv -1 \ (\text{mod b}) \ \Rightarrow \ b \text{ is prime.}$
Then this is logically equivalent to saying if b is not prime, then etc. etc.
5. Oh, in that case you are correct! In my mind Wilson's theorem was "If $p$ is prime, then $(p-1)! \equiv -1 \mod p$", but it makes sense to include the other direction as well.
http://mathoverflow.net/questions/20590?sort=votes | ## How does categoricity interact with the underlying set theory?
Here's the setup: you have a first-order theory T, in a countable language L for simplicity. Let k be a cardinal and suppose T is k-categorical. This means that, for any two models
M,N |= T
of cardinality k, there is an isomorphism f : M --> N.
Supposing all this happens inside of ZFC, let's say I change the underlying model of ZFC, e.g by restricting to the constructible sets, or by forcing new sets in. I would like to understand what happens to the k-categoricity of T.
I'll assume the set theory doesn't change so drastically that we lose L or T. Then, a priori, a bunch of things may happen:
(i) We may lose all isomorphisms between a pair of models M,N of cardinality k;
(ii) Some models that used to be of cardinality k may no longer have bijections with k;
(iii) k may become a different cardinal, meaning new cardinals may appear below it, or others may disappear by the introduction of new bijections;
(iv) some models M, or k itself, may disappear as sets, leading to a new set being seen as "the new k".
Overall, nearly every aspect of the phrase "T is k-categorical" may be affected. How likely is it to still be true? Do some among (i)-(iv) not matter, or is there some cancellation of effects? (Say, maybe all isomorphisms M-->N disappear, but so do all bijections between N and k?)
-
## 3 Answers
Categoricity is absolute.
By the Ryll-Nardzewski theorem, for a countable language, $\aleph_0$-categoricity of a complete theory $T$ is equivalent to $T$ proving for each natural number $n$ that there are only finitely many inequivalent formulas in $n$ variables. This property is evidently arithmetic and, thus, absolute.
Likewise, again in a countable language, it follows from the Baldwin-Lachlan theorem that a theory is categorical in some (hence, by Morley's theorem, all) uncountable cardinality just in case every model is prime and minimal over a strongly minimal set. Moreover, the strongly minimal formula may be taken to be defined over the prime model and the primality and minimality of every model over this strongly minimal formula is something which will be witnessed by an explicit analysis, hence, something arithmetic and absolute.
For uncountable languages, the situation is a little more complicated, but again categoricity is equivalent to an absolute property. Shelah shows that either the theory is totally transcendental and Morley's analysis in the case of countable languages applies, or the theory is strictly superstable though unidimensional.
-
Just to clarify a bit further: what's going on here is that you can use Schoenfield's absoluteness theorem: Pi^1_2 statements are absolute between transitive models of ZFC which contain all the same ordinals (so this takes care of forcing or restricting to L). So the claim is that all the properties mentioned in this answer can be written in the form: "For any subset X of $\omega$, there is a subset $Y$ of omega such that ..." followed by a statement where all quantifiers range over either $\omega$ or the theory $T$ you're talking over. – John Goodrick Apr 7 2010 at 6:30
...and I do believe Tom's answer is right (and it's a good answer), but checking the absoluteness of properties like "T is totally transcendental" always gives me a little pause. Good exercises for a set theory class! – John Goodrick Apr 7 2010 at 6:32
Hmm, actually the situation for uncountable languages isn't clear to me. Unlike in the case where T is countable, you can no longer describe categoricity by quantifying over countable models (since there might not be any!) and so I don't see how to apply Shoenfield absoluteness. – John Goodrick Apr 7 2010 at 6:54
@John: The key is that Morley rank (and all good ordinal rankings) is absolute for models that have the same ordinals. This is also the key idea behind Shoenfield absoluteness. IIRC, another way to see it is that scattered compact Hausdorff spaces are also absolute. – François G. Dorais♦ Apr 7 2010 at 13:42
Francois: thanks for the clarification about totally transcendence. I'm satisfied now that this is absolute, but still curious about what Tom had in mind for the case where L is uncountable (is there a stronger absoluteness theorem we could use?). – John Goodrick Apr 7 2010 at 14:29
In contrast to absoluteness of categoricity in First Order Logic, there are many interesting non-absoluteness phenomena at low infinite cardinals outside of first order. Perhaps the most important is the following:
(Under wGCH at $\kappa$, i.e. $2^\kappa<2^{\kappa^+}$) For every abstract elementary class $(\mathfrak K,\prec_{\mathfrak K})$ with $LS(\mathfrak K)\leq \kappa$, if $\mathfrak K$ is categorical in $\kappa$ and fails to have the amalgamation property at $\kappa$ ($AP_\kappa$), then $\mathfrak K$ is not categorical in $\kappa^+$ (indeed, it has the maximum number of models of size $\kappa^+$, $2^{\kappa^+}$).
In contrast, Martin's Axiom provides a completely different picture:
There exists an AEC (axiomatizable in $L_{\omega_1,\omega}(Q)$) $\mathfrak K_r$ with $LS(\mathfrak K_r)=\aleph_0$, $\mathfrak K_r$ is categorical in $\aleph_0$ and fails $AP_{\aleph_0}$ that is categorical in $\aleph_1$ in the presence of $MA_{\aleph_1}$.
The theorem and the example are due to Shelah (but have had improved presentations due to various other authors - Grossberg and Baldwin most prominent). Notice that here categoricity of a certain class in $\aleph_1$ is NOT absolute. Many open questions remain.
-
Hello Andrés. Welcome! – Andres Caicedo Feb 29 2012 at 19:02
Welcome, Andrés, inventor of unfoldable and strongly unfoldable cardinals! – Joel David Hamkins Feb 29 2012 at 20:23
Hi Andres! Great to see you here! – Todd Eisworth Mar 1 2012 at 0:53
(ii) Some models that used to be of cardinality k may no longer have bijections with k; (iii) k may become a different cardinal, meaning new cardinals may appear below it, or others may disappear by the introduction of new bijections;
This might happen if you are interested in a (mildly) non-first-order theory whose standard model has the cardinality of the continuum; cf. the thesis of Martin Bays for an example of such an $L_{\omega_1,\omega}$-theory. The question whether the theory is categorical in the cardinality of its standard model depends on the value of the continuum, and it is much easier to prove when the continuum is $\aleph_1$....

I believe that some of Scanlon's reply applies here as well, but you need some assumptions on your non-first-order theory, e.g. assuming that the relevant AEC class is excellent.
-
Hi mmm, thanks for your answer. I have only vague familiarity with non-first-order systems, but a theory whose categoricity depends on the value of c would certainly shed light on the tamer case of FOL. – Pietro KC Jun 3 2010 at 6:13
http://mathoverflow.net/questions/41680/subtracting-greatest-possible-prime/41696 | ## subtracting greatest possible prime
Given an infinite set $A$ of positive integers, $\min A:=a_0$. For $x\geq a_0$ define $f(x)=x-a$, where $a\leq x$, $a\in A$ is greatest possible. Then for a positive integer $x$ the iterations $x$, $f(x)$, $f(f(x))$, $\dots$ finally come to some element of the set $\{0,1,\dots,a_0-1\}$. Denote this final number $F(x)$. For example, if $A$ is the set of primes, $F(x)$ equals either 0 or 1. Do there always exist frequencies $\lim \frac{|F^{-1}(s)\cap [1,N]|}{N}$ for $s=0,1,\dots,a_0-1$? If not, what is the criterion for such frequencies to exist? Do they exist, say, for $A$ = primes?
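To make the construction concrete, here is a small Python sketch (not part of the original question) that computes $F(x)$ for $A$ = primes and estimates the frequency of the value 1; it uses SymPy's prevprime:

```
from sympy import prevprime

def F(x):
    # repeatedly subtract the largest prime <= x until the value drops below 2
    while x >= 2:
        x -= prevprime(x + 1)   # prevprime(n) is the largest prime strictly below n
    return x

N = 10**5
print(sum(F(x) == 1 for x in range(1, N + 1)) / N)   # empirical frequency of F(x) = 1
```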
-
sorry? I do not see typos – Fedor Petrov Oct 10 2010 at 20:24
ah, ok my dictionaries claim that substraction is also ok (see for example answers.com/topic/substraction) – Fedor Petrov Oct 10 2010 at 21:11
That dictionary marks it [Obs.] (obsolete), which means it's not used in modern works. – Charles Oct 10 2010 at 21:16
## 3 Answers
This is Sloane's A121559, which essentially iterates A064722. The behavior is controlled by the (appropriately weighted) distribution of prime gaps below N.
Heuristically, with $\ell=1-1/\log N$, you'd expect something like $(1-\ell)\left(1+\ell^3+\ell^5+\ell^7+\ell^{11}+\cdots\right)$ 1s below N, where the exponents are 1 less than the locations of the 1s. You can stop the sum around $\log^2 N$.
-
+1 for finding it in Sloane. – Gerry Myerson Oct 11 2010 at 2:56
It is easy to construct $A$ for which the limit does not exist. Consider the following set $A$. Include all even numbers in the intervals $[10^n, 10^{n+1}]$ for even $n=0,2,...$, and all numbers divisible by 3 in the intervals $(10^n,10^{n+1})$ for odd $n$. Now if $N=10^k$ with $k$ even then the probability of 0 is $\ge .9$, and if $k$ is odd, then the probability of $1$ is $\ge .9*1/3=.3$. In general, the limit exists if the intervals between consecutive numbers in $A$ are "uniformly spaced". In particular, if $A$ is a set of primes, I do not know how to show that the limit exists. It may be a hard number theory problem. For example, we know (Green and Tao) that the set of primes contains arbitrary long arithmetic progressions, but it is not clear (to me) how often these occur and how often progressions start at relatively small numbers and are relatively long.
-
Thanks, the question about arbitrary $A$ was indeed naive. But which information do we really need about differences of neighbour terms? Specification of this question is the case of primes. – Fedor Petrov Oct 10 2010 at 17:29
I guess you need to consider the probability of 1 for a number below $a_n$, the $n$-th number in $A$. Assume that $a_{n+1}/a_n$ is always $\lt 2$ (as in the case of primes). Then the probability of $1$, $p(a_{n+1})$, is the probability that your number is $\lt a_n$ times $p(a_n)$ plus the probability that your number is $\gt a_n$ times $p(a_{n+1}-a_n)$. This gives some sort of recurrent equation. – Mark Sapir Oct 10 2010 at 18:05
I suspect that modulo RH the limit exists and the probability of 1 is about .65... (an empirical calculation for $N=5*10^7$). I do not know whether known, weaker than RH, statements would suffice. I hope that specialists in number theory can help. – Mark Sapir Oct 10 2010 at 18:35
@Mark: Did you mean the probability of 0? The probability of 1 is clearly at most 1/2. – Charles Oct 10 2010 at 19:10
@Charles: Why $1/2$? – Mark Sapir Oct 10 2010 at 19:13
If I understand you correctly, this is effectively the question about (greedy) systems of numeration for the natural numbers. These are well understood in the case when $A$ satisfies some recurrence relation -- like the Fibonacci sequence (Zeckendorf) or the denominators $q_n$ of the CF convergents for some irrational $\alpha$ (Ostrowski).
If $A$ grows subexponentially, this is usually not good news for "ergodic" questions like this.
-
Would you please give more details and references? – Mark Sapir Oct 10 2010 at 18:26
Off the top of my head: A. Fraenkel, Systems of numeration, Amer. Math. Monthly 92 (1985), 105–114. P. Grabner, P. Liardet and R. Tichy, Odometers and systems of numeration, Acta Arith. 80 (1995), 103–123. My own survey paper (maths.manchester.ac.uk/~nikita/ad.pdf) may be also somewhat useful, though it is mostly about systems of numeration for the reals and vectors rather than the integers. – Nikita Sidorov Oct 10 2010 at 18:42
yes, exactly, that's greedy representation of $x$ as a sum of primes (in general setting, of elements of $A$) thanks for your links! – Fedor Petrov Oct 10 2010 at 18:51
Sure, no problem. – Nikita Sidorov Oct 10 2010 at 19:02
http://mathhelpforum.com/algebra/212725-select-any-two-intergers-between-12-12-will-become-solutions-system-print.html | # select any two integers between -12 and +12 which will become solutions to a system
• February 7th 2013, 10:15 AM
brooks77
select any two integers between -12 and +12 which will become solutions to a system
I need help selecting two integers and how to make them into a solution to a system of two equations
• February 7th 2013, 11:20 AM
topsquark
Re: select any two integers between -12 and +12 which will become solutions to a sys
Quote:
Originally Posted by brooks77
I need help selecting two intergers and how to make it into a solution to a system of two equations
May I be the first to say: Huh??
-Dan
• February 7th 2013, 12:22 PM
Plato
Re: select any two integers between -12 and +12 which will become solutions to a sys
Quote:
Originally Posted by brooks77
I need help selecting two intergers and how to make it into a solution to a system of two equations
I too have absolutely no idea what that may mean.
But here are comments on the wording.
It is generally accepted that the wording "n is an integer between -12 and 12" means $-12<n<12$, that is neither -12 nor 12 can be included. If we want them included write "n is an integer from -12 to 12".
The second point is the use of the word "two". Saying "x and y are two integers from -12 to 12" means $x\ne y$. Because otherwise we would not have two but one listed twice. If we want to allow that possibility write "If each of x and y is an integer from -12 to 12..."
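As a concrete illustration of what the exercise is presumably after (this example is not from the thread): pick the two integers $x=3$ and $y=-2$. Then the system
$$x+y=1,\qquad x-y=5$$
has exactly that pair as its unique solution, and any other pair of integers between -12 and +12 can be packaged the same way by writing two independent linear equations that the chosen pair satisfies.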
http://unapologetic.wordpress.com/2007/03/05/ | # The Unapologetic Mathematician
## Natural Numbers
UPDATE: added paragraph explaining the meaning of the commutative diagram more thoroughly.
I think I’ll start in on some more fundamentals. Today: natural numbers.
The natural numbers are such a common thing that everyone has an intuitive idea what they are. Still, we need to write down specific rules in order to work with them. Back at the end of the 19th century Giuseppe Peano did just that. For our purposes I’ll streamline them a bit.
1. There is a natural number ${}0$.
2. There is a function $S$ from the natural numbers to themselves, called the “successor”.
3. If $a$ and $b$ are natural numbers, then $S(a)=S(b)$ implies $a=b$.
4. If $a$ is a natural number, then $S(a)\neq0$.
5. For every set $K$, if ${}0$ is in $K$ and the successor of each natural number in $K$ is also in $K$, then every natural number is in $K$.
This is the most common list to be found in most texts. It gives a list of basic properties for manipulating logical statements about the natural numbers. However, I find that this list tends to obscure the real meaning and structure of the natural number system. Here’s what the axioms really mean.
The natural numbers form a set $\mathbb N$. The first axiom picks out a special element of $\mathbb N$, called ${}0$. Now, think of a set containing exactly one element: $\{*\}$. A function from this set to any other set $S$ picks out an element of that set: the image of $*$. So the first axiom really says that there is a function $0:\{*\}\rightarrow\mathbb N$.
The second axiom plainly states that there is a function $S:\mathbb N\rightarrow\mathbb N$. The third axiom says that this function is injective: any two distinct natural numbers have distinct successors. The fourth says that the image of the successor function doesn’t contain the image of the zero function.
The fifth axiom is where things get really interesting. So far we have a diagram $\{*\}\rightarrow\mathbb N\rightarrow\mathbb N$. What the fifth axiom is really saying is that this is the universal such diagram of sets! That is, we have the following diagram:
with the property that if $K$ is any set and $z$ and $s$ are any functions as in the diagram, then there exists a unique function $f:\mathbb N\rightarrow K$ making the whole diagram commute. In fact, at this point the third and fourth Peano axioms are extraneous, since they follow from the universal property!
Remember, all a commutative diagram means is that if you have any two paths between vertices of the diagram, they give the same function. The triangle on the left here says that $f(0(*))=z(*)$. That is, since $K$ has a special element, $f$ has to send ${}0$ to that element. The square on the right says that $f(S(n))=s(f(n))$. If I know where $f$ sends one natural number $n$ and I know the function $s$, then I know where $f$ sends the successor of $n$. The universal property means just that $\mathbb N$ has nothing in it but what we need: ${}0$ and all its successors, and ${}0$ is not the successor of any of them.
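Concretely, the universal property is definition by iteration: the data $z$ and $s$ determine $f$ completely, because $f(n)$ is obtained by applying $s$ to $z$ exactly $n$ times. A minimal sketch of this recursion principle in Python (illustrative only; it models just finite iteration):

```
def recursor(z, s):
    """Return the unique f with f(0) = z and f(S(n)) = s(f(n))."""
    def f(n):
        value = z
        for _ in range(n):   # apply s once for each application of the successor
            value = s(value)
        return value
    return f

# example: z = 1 and s = "double" give f(n) = 2**n
double = recursor(1, lambda k: 2 * k)
assert [double(n) for n in range(5)] == [1, 2, 4, 8, 16]
```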
Of course, by the exact same sort of argument I gave when discussing direct products of groups, once we have a universal property any two things satisfying that property are isomorphic. This is what justifies talking about “the” natural number system, since any two models of the system are essentially the same.
This is a point that bears stressing: there is no one correct version of the natural numbers. Anything satisfying the axioms will do, and they all behave the same way.
The Bourbaki school like to say that the natural numbers are the following system: The empty set $\emptyset$ is zero, and the successor function is $S(n)=n\cup\{n\}$. But this just provides one model of the system. We could just as well replace the successor function by $S(n)=\{n\}$, and get another perfectly valid model of the natural numbers.
In the video of Serre that I linked to, he asks at one point “What is the cardinality of 3?” This betrays his membership in Bourbaki, since he clearly is thinking of 3 as some particular set or another, when it’s really just a slot in the system of natural numbers. The Peano axioms don’t talk about “cardinality”, and we can’t build a definition of such a purely set-theoretical concept out of what properties it does discuss. The answer to the question is “無!” (“mu”). The Bourbaki definition doesn’t define the natural numbers, but merely shows that within the confines of set theory one can construct a model satisfying the given abstract axioms.
This is how mathematics works at its core. We define a system, including basic pieces and relations between them. We can use those pieces to build more complicated relations, but we can only make sense of those properties inside the system itself. We can build models of systems inside of other systems, but we should never confuse the model with the structure — the map is not the territory.
This point of view seems to fetishize abstraction at first, but it’s really very freeing. I don’t need to know — or even care — what particular set and functions define a given model of the natural numbers. Anything I can say about one model works for any other model. As long as I use the properties as I’ve defined them everything will work out fine, and $1+2=3$ whether I use Bourbaki’s model or not.
Posted by John Armstrong | Fundamentals, Numbers | 11 Comments
## New Turaev Paper
Turaev has just posted a new paper on the arXiv. The abstract says he introduces a sort of cobordism relation for knots in thickened surfaces.
The “thickened surfaces” bit means that instead of putting them into space we put them into some (possibly complicated) surface that’s just thick enough for the strands to get past each other without touching. Imagine wrapping a string around a donut. Not only can the string tangle up with itself, but it also can circle the donut in many different ways. If the donut weren’t there, lots of these donut-knots would be the same, but since we can’t pull the string through the donut they aren’t the same in this setup.
The word “cobordism” is harder to explain. When I get to more about knots in general I’ll eventually get to it. Still, it’s worth a look if you know some topology or you just want to see what a knot theory paper looks like. Turaev is a really big name, and I’m looking forward to a chance to sit down and work through this latest offering.
## About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 44, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9382507801055908, "perplexity_flag": "head"} |
http://math.stackexchange.com/questions/13146/define-h-colon-mathbb-z-sim-16-to-mathbbz-sim-24-by-ha16-3a2 | # Define $h\colon \mathbb{Z}/\!\sim_{16} \to \mathbb{Z}/\!\sim_{24}$ by $h([a]_{16}) = [3a]_{24}$
Define $h\colon \mathbb{Z}/\!\sim_{16} \to \mathbb{Z}/\!\sim_{24}$ by $h([a]_{16}) = [3a]_{24}$
a. Prove $h$ is well defined.
b. Compute $h(a)$ where $a = \{[0]_{16}, [3]_{16}, [6]_{16}\}$.

c. Compute $h^{-1}([10]_{24})$
Is the following correct:
a. $h$ is well defined since for each $a$, $|h([a]_{16})|\equiv 1$.

b. $\{[0]_{24}, [9]_{24}, [18]_{24}\}$
c. $\emptyset$
-
b) and c) are right (but you should probably show your work if you're turning it in as homework). Part a) isn't quite as simple as what you have there. Ask yourself what $a$ represents, as opposed to $[a]_{16}$. – Zarrax Dec 5 '10 at 20:56
For a. how about for h of all equivalence classes of Z/~16 there is one equivalence class in Z/~24? – Bradley Barrows Dec 5 '10 at 21:18
## 1 Answer
Your answers to (b) and (c) are correct.
For (a), you need to prove that if a=b mod 16, i.e. $a-b=16n$ for some integer $n$, then $3a=3b$ mod 24, i.e. $3a-3b=24m$ for some integer $m$. Can you do that?
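A brute-force check in Python (no substitute for the proof, but useful for confidence) of the well-definedness condition and of parts (b) and (c):

```
# representatives differing by a multiple of 16 give the same value mod 24
assert all((3 * a) % 24 == (3 * (a % 16)) % 24 for a in range(500))

print(sorted({(3 * a) % 24 for a in (0, 3, 6)}))      # [0, 9, 18]  -> part (b)
print([a for a in range(16) if (3 * a) % 24 == 10])   # []          -> part (c), empty preimage
```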
-
http://math.stackexchange.com/questions/87684/why-is-a-lie-group-simply-connected-iff-it-is-simple-and-connected | # Why is a Lie Group simply connected iff it is simple and connected?
Yesterday i heard that a connected Lie group $G$ is simply connected (i.e. $\pi_1(G)=0$) if and only if $G$ is simple as a group, i.e. $G$ has no nontrivial normal subgroup. That sounds too good to be wrong, it explains why one would call this property "simply connectedness". Could someone briefly explain why this is true or give me some reference where i could find it?
Greetings, atajh
-
## 2 Answers
Unfortunately this is not true. For instance $\mathbb{R}^n$ is simply connected and, being an infinite abelian group, is very far from being simple. When $n > 1$ it is not even simple as a Lie group, i.e., it has nontrivial connected, closed normal subgroups.
Also the other direction is false: $\operatorname{PSL}_2(\mathbb{R})$ is simple as an abstract group, but not simply connected, since $\operatorname{SL}_2(\mathbb{R}) \rightarrow \operatorname{PSL}_2(\mathbb{R})$ is a degree $2$ connected covering map. (If am not mistaken, then at least for linear groups what is true is that an abstractly simple group is of adjoint type, i.e., the dual condition to being simply connected.)
Anyway, you did the right thing: if you have a healthy level of interactions with other members of the mathematical community (students, faculty, etc.) then you will "hear things" which are quite new to you. Not everything you hear is actually true -- or especially, what you heard / remembered is not actually true -- so it's important to try to verify / disprove these things that you hear, or ask someone else about them.
-
Thanks a lot, to both of you. – atajh Dec 2 '11 at 10:39
It's not true. $SU(2)$ is simply connected but is not simple as a group -- the center $\{\pm I\}$ is nontrivial. (There seems to be a concept called a simple Lie group which is not just Lie groups that happen to be simple, in that it asks for connected normal subgroups, so SU(2) does qualify there).
On the other hand $SO(3)$ is not simply connected, but is a simple group.
-
http://physics.stackexchange.com/questions/5819/how-does-one-build-up-intuition-in-physics/5845 | # How does one build up intuition in physics? [closed]
How does one build up an intuitive gut feeling for physics that some people naturally have? Physics seems to be a hodgepodge of random facts.
Is that a sign to quit physics and take up something easier?
Thanks for all the answers. On a related note, how many years does it take to master physics? 1-2 years for each level multiplied by many levels gives?
-
## closed as not constructive by Ϛѓăʑɏ βµԂԃϔ, David Zaslavsky♦Apr 15 at 7:05
As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or specific expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, see the FAQ for guidance.
## 6 Answers
If physics seems to be a hodgepodge of random facts... well, it always seems that way when you're first learning. But physics is all about patterns and relationships, and if you keep working with it you should eventually get to a point where you see how different facts, formulas, and principles are related to each other. In my experience, for any given "piece" of physics this point comes 1-2 years after you first learn it. :-P
EDIT: In response to your related note, I don't know that anyone ever really "masters" physics. I'm not even sure exactly what that would mean, since physics as a whole is far too broad for any one person to ever understand in its entirety. But my impression is that, given the usual educational path (college then grad school), it takes anywhere from 3-10 years, depending on the topic and your level of preparation, to learn enough that you can start contributing to research in a single specific topic. Of course you never really stop learning.
Also, to clarify, the 1-2 year figure I mentioned isn't something you can multiply by a number to figure out how long it takes to learn physics ;-) And physics doesn't neatly split itself into levels (well, sort of, although the splitting you'll probably encounter is just a result of the requirements of our educational system). All I meant is that, from the first time you encounter any given physics concept (just to name a few: Newton's laws, the Euler-Lagrange equation, multipole expansion, free energy, quantum entanglement), you may not feel like that concept makes sense to you until 1-2 years later, on average. And that's perfectly normal.
-
2
– Marek Feb 24 '11 at 23:47
1
@Marek: true indeed ;-) I guess it's the easy stuff that takes a year or two. – David Zaslavsky♦ Feb 25 '11 at 2:21
Good answer, David. – Luboš Motl Feb 25 '11 at 6:03
Maybe if there is a next edit you could add facility with maths as a necessary condition for any mastering of physics concepts. It would be hopeless to persist in physics if one has a hard time in the necessary mathematics courses. A large part of physics intuition comes from mathematical relationships. – anna v Feb 25 '11 at 9:29
Intuition is just pattern recognition. This comes with doing many problems that force you to think and keep your brain completely engaged with the material. I remember when I first started physics, I thought it was impossible to keep track of all of the types of problems. There were balls rolling down inclined planes and masses on pulleys and people pushing blocks up hills with friction. Then there was a certain point when a lightbulb went off and it made sense: these are all the same problem. All you do is figure out where all the forces are pointing and how strong they are, then add them up to find the net force, and you've got the equation of motion in hand. Eventually, it becomes second nature, so I can glance at a problem in mechanics and pretty quickly have an intuition for what should happen.
The same thing happens in more advanced physics. You start to recognize classes of physical situations and types of symmetry. Let me give you a more advanced example that was a similar lightbulb for me - it's a bit lengthy, and you don't need to understand every step. The important part is the conclusion. Try evaluating this integral: $\int_{-\infty}^{\infty}e^{ip(x-x_0)/\hbar}e^{-|p|/p_0}dp$
This looks horrible, doesn't it? And yes, if you just charge in and try to integrate it, you'll wind up doing a long series of painful integrations by parts. But there are a bunch of things here that a practiced physics student will recognize that make it easy.
Since $e^{ip(x-x_0)/\hbar}=\cos(p(x-x_0)/\hbar)+i\sin(p(x-x_0)/\hbar)$ (that's clever trick no. 1), this is actually an integral of a cosine plus a sine, multiplied by a decaying exponential, over $(-\infty,\infty)$. But sine is an odd function, so it evaluates to zero over $(-\infty,\infty)$ (clever trick no. 2)! That means the integral is actually $\int_{-\infty}^{\infty}\cos(p(x-x_0)/\hbar)e^{-|p|/p_0}dp$ Now, we have an even function inside the integral, so its value over $(-\infty,\infty)$ is twice its value over $(0,\infty)$ (clever trick no. 2, reused). Furthermore, $\cos(p(x-x_0)/\hbar)=Re(e^{ip(x-x_0)/\hbar})$ (going backwards with clever trick no. 1). This makes the integral
$2Re(\int_{0}^{\infty}e^{ip(x-x_0)/\hbar}e^{-p/p_0}dp)$
$2Re(\int_{0}^{\infty}e^{(i(x-x_0)/\hbar - 1/p_0)p}dp)$
$2Re[\frac{1}{1/p_0 - i(x-x_0)/\hbar}]$
$\frac{2p_0}{1+(x-x_0)^2p_0^2/\hbar^2}$ (Maybe clever trick no. 3 if you aren't very used to complex numbers?)
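As a quick sanity check of that closed form, one can compare it against a direct numerical integration; here is a small Python sketch using SciPy, with $\hbar$, $p_0$ and $x-x_0$ set to arbitrary illustrative values:

```python
# Numerical check of the closed form derived above.  The constants are
# arbitrary illustrative values, not physical ones.
import numpy as np
from scipy.integrate import quad

hbar, p0, dx = 1.0, 2.0, 0.7          # dx plays the role of (x - x0)

# Only the cosine part survives; the sine part is odd and integrates to zero.
integrand = lambda p: np.cos(p * dx / hbar) * np.exp(-abs(p) / p0)

numeric, _ = quad(integrand, -np.inf, np.inf)
closed_form = 2 * p0 / (1 + dx**2 * p0**2 / hbar**2)

print(numeric, closed_form)            # the two agree to the quadrature tolerance
```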
Here's the kicker: If you do it enough, this sort of integral starts coming intuitively and quickly. The even-and-odd tricks become obvious; the conversion back and forth between exponential and trigonometric functions becomes totally natural. An integral like this flies by at the speed of thought, and I don't need to write out all those steps - it just feels sort of obvious at each point. That's just pattern recognition from doing it so many times - or, if you like, "intuition."
If you want to build up your intuition, do many, many problems. As many as you can get your hands on. That's what it takes.
-
I would have thought the intuitive thing to do with this integral would be to start by setting x_0 = 0 and p_0 = 1. – Marty Green Aug 12 '11 at 4:12
Intuition is great, but it can also be a trap. I had extreme intuition about physics during my high school and undergraduate days. Probably at the one-in-a-million level. It made much of physics and math almost effortlessly easy. But I basically trainwrecked in grad school. And the reason is that things reached a level of abstraction that was beyond what I could intuit. And all my classmates -none of whom had half the intuition I had- had been doing it the hard way, learning symbolically and mathematically for many years, probably out of necessity. And I suddenly realized I would need to back up several years and relearn things that way. And that was far too much backtracking to contemplate at that stage.
So I suspect that having one area of cognition that you are super good at is actually a setup for failure. Train all of the abilities needed for physics, not just the ones that you are best at. Otherwise you might find yourself in a cul-de-sac.
-
Interesting advice. Useful for teachers. You were unlucky, from highschool you should have been on a special track that challenged the level of your intuition. I had a terrible physics teacher in highschool and my intuition was working overtime correctly :). Fortunately I had a very good and strict maths teacher who demanded the best, and gave more and more difficult problems to challenge the good students. – anna v Feb 25 '11 at 9:41
1
I can't complain about my HS physics teacher. He lent me his college textbook, since I was obviously well beyond the level being taught. I think what I'm trying to promote here, is that well balanced talents will take you further than deep and narrow talent, and students should train accordingly. – Omega Centauri Feb 26 '11 at 16:58
This question has similarities to at least this previous Stack question on learning physics. I would advise that you check the online links there.
An intuition for physics is built by learning the subject and finding out the major theories and their domain of application. So considering a question on Gravity (say working out the gravity on the surface of the Moon) what physics theories apply? As one learns more physics one learns about the limitations of older theories. Perhaps velocities have to be small, or masses below a certain limit: above that limit another related theory applies.
There are general principles from the major theories which derive much of one's analysis: energy and its conservation and transmission, radiation (of the various kinds). In applied examples a major test of intuition is to include the contributions of factors that are relevant and omit those which are irrelevant. So for example does the chemical composition of the Moon affect its surface gravity, are hollows and bumps on its surface likely to be relevant; could its nearness to the Earth affect the calculation? If we are considering a Pulsar (Neutron star) rather than the Moon do any of these factors change in significance?
Mathematically there are parallels between many theories which contain fields that decay as $1/r^2$; there are many phenomena describable by similar mathematical models, like Harmonic Oscillators (from electronics to quantum physics); then there is the Wave Equation. This allows one to learn to expect similar behaviour in an unfamiliar sub-branch of the subject whenever similar phenomena appear. Indeed, one may have already solved a given problem in other variables.
Finally there is some evidence to support the hope that all of physics is united under a set of underlying equations and principles. They may not all have been discovered yet, but this suggests that understanding one set of concepts very well could teach us how everything works. We already know of the major branches of physics which are very comprehensive: Thermodynamics; Electromagnetism; Gravitation; Dynamics; Quantum Theory; then there are the subatomic theories.
-
Speaking of patterns, you may want to read Shive's Similarities in Physics. It discusses common behaviour patterns such as filtering, dispersion, resonance, interference, exponential decay, noise, etc which can be observed across acoustics, optics, mechanics and electronics.
The book by Pikovsky et al., Synchronization: A Universal Concept in Nonlinear Sciences, is also interesting in this regard.
Also take a look at Terence Tao's article on Universality in large dimensions.
-
I think the intuitive sense (your 'intuitive gut feeling' mentioned above) that one develops for physics starts with a passion for the subject itself. In my case that's where it all started. As a child, I recall asking my father why his airplane flies. Later, when I was able to study physics, I could recall some related experiences such as scuba diving (gas laws), airplane flight (Bernoulli's principle), and the list goes on. In all instances, I recall thinking "oh yes, this makes sense", and from there I could extrapolate to another scenario, for example fluid flow in tunnels as related to Bernoulli's principle. It gives you a sense of being part and parcel of the whole physics experience, which is extremely gratifying. As I progressed through my graduate curriculum to a PhD, the main areas of physics (Newtonian mechanics, quantum mechanics, statistical mechanics and electromagnetic theory) seemed to be intricately linked together to explain the structure of the universe along with the processes occurring within. This is not a 'hodgepodge of random facts' as you mention. By studying physics and getting a feel for and understanding of it, one doesn't necessarily have to memorize equations; they just come naturally.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9641990065574646, "perplexity_flag": "middle"} |
http://math.stackexchange.com/questions/4589/irreducible-polynomials-in-two-complex-variables | # Irreducible Polynomials in two complex variables
I am seeking methods to check whether a polynomial $f \in \mathcal{O}(\mathbb{C}^2,0)$ is irreducible. The subject is really new to me and I am studying it on my own, so I don't know much about it yet.
I would be very grateful if you could provide some bibliography where I can read about this topic.
-
What sort of "methods", theoretical, algorithms? For what intended applications? You need to say more to get a good answer. – Gone Sep 14 '10 at 0:49
Uy! Really, any method is welcome; believe me, I would read it. I'm reading a book about desingularization of vector fields; this book gives a result about how to calculate the intersection multiplicity of two divisors. The theorem says roughly the following (editing, please wait...). – Keinohrhasen Sep 14 '10 at 0:57
Theorem: The intersection of an irreducible local divisor $D_f$ with another effective local divisor $D_g$ is isolated if and only if the germ $g\left( \tau \right)$ is not identically zero, and the multiplicity $D_{f}\overset{0}{.}D_{g}$ of this intersection is equal to the order $ord_{0}\left( g\left( \tau \right) \right)$. I need examples, and therefore need to read a little about criteria for irreducible polynomials in two complex variables. – Keinohrhasen Sep 14 '10 at 1:06
## 1 Answer
Here's one idea: if $f(x, y)$ is reducible, then the projective closure of $f(x, y) = 0$ has at least two components, which intersect by Bezout's theorem. These intersections are singularities and therefore can be found by setting all partial derivatives to zero. If no singularities exist then $f$ is irreducible.
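To make that concrete, here is a rough SymPy sketch of the test (the helper below is my own illustration and may need tweaking for more complicated $f$): homogenize $f$ and look for nonzero common zeros of the partial derivatives of the projective closure.

```python
# Rough sketch of the singularity test described above.  We homogenize
# f(x, y) to F(x, y, z); a nonzero common zero of the three partials is a
# singular point of the projective closure (by Euler's relation F vanishes
# there as well).  No singular points => f is irreducible.
import sympy as sp

x, y, z = sp.symbols('x y z')

def singular_points(f):
    d = sp.Poly(f, x, y).total_degree()
    F = sp.expand(z**d * f.subs({x: x / z, y: y / z}))    # homogenization
    sols = sp.solve([sp.diff(F, v) for v in (x, y, z)], [x, y, z], dict=True)
    # keep only solutions other than the trivial x = y = z = 0
    return [s for s in sols if any(s.get(v, v) != 0 for v in (x, y, z))]

print(singular_points(y**2 - x**3 - x))   # []  -> smooth, hence irreducible
print(singular_points(x * y))             # singular point at x = y = 0, i.e. (0:0:1)
```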
-
thank you very much, I'm chewing the idea – Keinohrhasen Sep 14 '10 at 3:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9186038970947266, "perplexity_flag": "head"} |
http://physics.stackexchange.com/questions/tagged/black-holes+statistical-mechanics | # Tagged Questions
1 answer · 87 views
### Where and how is the entropy of a black hole stored?
Where and how is the entropy of a black hole stored? Is it around the horizon? Most of the entanglement entropy across the event horizon lies within a Planck distance of it and is short-lived. Is ...
1 answer · 139 views
### Black hole entropy
Bekenstein and Hawking derived the expression for black hole entropy as $$S_{BH}={c^3 A\over 4 G \hbar}.$$ We know with hindsight that entropy has a statistical interpretation. It is a measure ...
0 answers · 145 views
### How is the logarithmic correction to the entropy of a non extremal black hole derived?
I've just read that for non-extremal black holes there exist logarithmic (and other) corrections to the well-known term proportional to the area of the horizon, such that \$S = \frac{A}{4G} + K ...
2 answers · 215 views
### Reconstruction of information stored in an evaporating black hole from the emission spectrum?
For simple setups, where the radiation field deviates not too far from thermodynamic equilibrium (< 10 %), corrections to the Planckian thermal emission spectrum can be calculated (and measured) ...
1 answer · 267 views
### Are black hole states completely mixed?
A completely mixed state is a statistical mixture with no interference terms, and (QMD, McMahon, pg 229): $$\rho = \dfrac{1}{n}I$$ $$Tr(\rho^2) = \dfrac{1}{n}$$ Are black hole quantum states ... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8751313090324402, "perplexity_flag": "middle"} |
http://physics.stackexchange.com/questions/2204/what-specifically-does-the-phrase-continuum-limit-mean/2205 | # What specifically does the phrase “continuum limit” mean?
I'm interested in the meaning of the phrase "continuum limit" specifically as it is often used in expressions relating to the ability of a quantum gravity theory to recover GR in the continuum limit.
I believe I understand the meaning but want to make sure I am not missing some important part of the precise definition as my intuition may be off and I have not seen it defined anywhere.
Pointers to a place where it is defined in an online resource would be appreciated. A google search just turned up many references to it being used in papers and articles and such.
Thank you.
-
## 1 Answer
Usually it relates to discrete models where it means "becoming less discrete".
To make this more formal and give a simple example, consider a set $$\Lambda_{\alpha}(M) := \{ x \in {\mathbb R}^d \,|\, x_i = k_i \alpha, \; k_i \in \mathbb{Z}, \, 0 \leq k_i \leq \lfloor {M \over \alpha} \rfloor \}$$
and imagine there is an edge between nearest neighbors, so that part of it looks like a regular square grid.
This can be thought of as a discretization of a $d$-dimensional cube of side $M$, modelling some crystal. By the continuum limit one understands the limit $\alpha \to 0$; in this case one would recover the original cube $[0, M]^d$. The reason this is important is that a lattice is a much simpler object than a continuous space and we can compute lots of things on it directly. In some cases we are then able to carry out the limit and recover the original continuous theory.
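To see the limit in action, here is a tiny numerical illustration (a one-dimensional toy, with an arbitrary test function): a quantity computed on the lattice converges to its continuum counterpart as the spacing $\alpha \to 0$.

```python
# Toy continuum limit: a Riemann sum over the lattice points k*alpha in [0, M]
# converges to the continuum integral of f as alpha -> 0.
import numpy as np

M = 1.0
f = lambda x: np.sin(np.pi * x) ** 2       # arbitrary smooth test function
continuum = 0.5                            # exact value of the integral over [0, 1]

for alpha in (0.5, 0.1, 0.01, 0.001):
    lattice = alpha * np.arange(int(round(M / alpha)) + 1)   # k*alpha, 0 <= k <= M/alpha
    lattice_value = alpha * f(lattice).sum()
    print(f"alpha = {alpha:7.3f}   lattice value = {lattice_value:.6f}   continuum = {continuum}")
```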
As for your talk about GR, I think there is only one possible interpretation in the context of Loop Quantum Gravity which deals with discrete space-time. I don't know much about this theory but I'd assume that in order to recover GR one would need to perform the continuum limit of the space-time spacing.
But note that LQG doesn't mean quantum gravity in general. It is just one of proposed theories for quantum gravity and other theories (e.g. string theory) don't assume discrete space-time, so there is no continuum limit to carry out.
Update: David has made a very good point about the nature of continuum limit, so I decided to elaborate on the matter.
There are two semantically different concepts of continuum limit (although mathematically they are the same):
1. One can start with a discrete microscopic theory that is already complete (e.g. lattice of crystals; or with space-time in LQG). In this case the semantics of continuum limit is getting rid of microscopic physics. Once you perform the limit, you obtain a simpler theory.
2. But sometimes the complete theory can already be continuous (e.g. Quantum Field Theory) and one first wants to discretize it in order to be able to perform the calculations more easily. In this case when we are shrinking the scale we are actually making the discrete model a better approximation to the original theory until in continuum limit we recover the original theory completely. So here we also lose the microscopic degrees of freedom but we don't care because the discrete model wasn't physical anyway -- it was just a mathematical tool.
-
1
You're right that one would take the continuum limit of LQG to recover GR, but I think it's worth noting that LQG actually allows (if not assumes) some finite length scale for its spin networks. So if LQG is correct, that would mean that GR would fail to properly describe reality at sufficiently small scales. – David Zaslavsky♦ Dec 23 '10 at 21:58
@David: good point, see the update. – Marek Dec 23 '10 at 22:36
In this standard picture there is something which I never really understood. As far as I can see this limit leads to the set of rationals rather than the set of reals. In that sense you do not actually perform the continuum limit, only the "rational limit"! To get continuum, you'll need an extra "jump" at the end from Q to R, and there is no guarantee that everything valid for Q will be valid for R. – Igor Ivanov Dec 24 '10 at 19:16
1
@Marek — yes, I know how rationals are introduced, and btw the reals are introduced in an even more non-physical way. But I am not considering a limit staying in Q. I am considering your procedure and what I see at the end is Q, not R. Take a 1D lattice of fixed size (=1). You take small spacing a and take the limit $a \to 0$. These words are not enough: you need to give a specific prescription of what exactly you do and then prove the result is the same for all prescriptions. Let's adopt the procedure where we divide a by 2 at each step. (cont.) – Igor Ivanov Dec 25 '10 at 13:33
1
Then at first step we add a node at 1/2, then we add a node at 1/4, 3/4, etc. You see that you never add irrational points, and at the end you get a subset of Q. In order to claim that you have a true continuum limit, you need to devise an explicit procedure and show that every real appears at some stage. – Igor Ivanov Dec 25 '10 at 13:37
http://physics.stackexchange.com/questions/2066/why-does-adding-solutes-to-pure-water-lower-the-the-specific-heat?answertab=votes | # Why does adding solutes to pure water lower the the specific heat?
We found that water with salt, sugar, or baking soda dissolved in it cools faster than pure water.
Water has a very high specific heat; how do these solutes lower it?
We heated a beaker (300ml) of water to 90° C and let it cool, checking the temperature every 5 minutes. We repeated the experiment adding a tablespoon of salt. At each 5 minute interval, the temperature was higher for pure water than for salt water. Same result with baking soda and sugar.
-
2
Slower cooling means higher specific heat, doesn't it? – gigacyan Dec 19 '10 at 18:28
I like the question, but please settle lower/slower problem. – Marek Dec 19 '10 at 18:45
Also please describe better the experiment: do you measure the cooling from the boiling point or from 100 degrees? It matters because obviously the cooling down is exponential, and as such $\frac{\mathrm{d}T}{\mathrm{d}t}$ varies at different temperatures. – Sklivvz♦ Dec 19 '10 at 22:29
@gigcyan, @marek: fixed, thanks – Anne Laks Dec 19 '10 at 22:36
1
One could argue that this is a chemistry question but to me it seems like asking about the physical explanation behind a chemical phenomenon. I like it. – David Zaslavsky♦ Dec 20 '10 at 2:27
## 4 Answers
I believe the reason is due to the solution trapping water molecules in a cage around it. The reason water has a high specific heat is because it can rotate freely around its center of mass, there is a large number of degrees of freedom that can randomly vibrate and rotate in the pure water. When you have molecules in solution, they trap several water molecules close to them in a lowest-energy stiff configuration, and these molecules are like a tiny rigid body where thermal motion is not possible, because the quantum of oscillation frequency is higher than kT. This reduces the specific heat by an amount directly proportional to the solute.
This is probably strongest with salt, since the charged ionic solutes will produce a very strong cage. I would expect the effect with alcohols to be weaker, and with sugars weaker still, since I think the charged groups are less and less charged in these, in that order.
-
The obvious answer for at least part of this simply concerns the new substance.
Water has a fairly high specific heat. It is greater than that of sugar, salt, baking soda, etc. The specific heat of the combination (solution) of the two is somewhere between that of either one alone (probably a weighted average by mass), simply because the temperature change occurs in both substances rather than just one.

However, I suspect this answer is incomplete. There could be another phenomenon in play to explain the cooling differences, perhaps associated with how the solute changes the temperatures of the phase changes (i.e. a higher boiling point and a lower freezing point).
-
Yes, this is the same feeling I have. I wonder to what degree those phenomena in the last paragraph are important. Perhaps dissociation of the water molecules into $H_2$, $O_2$ and creation of some new compounds also has to be taken into account. But I am not sure whether this increases or decreases the specific heat. Anyone cares to elaborate? – Marek Dec 20 '10 at 8:07
@Marek: the only way to convert water to $H_2$ and $O_2$ is stick electrodes in it. Dissociation to $H^+$ and $OH^-$ does not depend (in the first order) of salt concentration. – gigacyan Dec 20 '10 at 21:21
yes. it is the salts that dissociate into electrolytes. Water dissociation occurs either electrically or in the base/acid manner that gigacyan describes – jon_darkstar Dec 20 '10 at 21:23
Thanks @gigacyan, @jon. – Marek Dec 20 '10 at 21:31
1
It is not the correct explanation, because the salt was added to the water, it wasn't an equal volume of salt water – Ron Maimon May 3 '12 at 5:26
If your description of the experiment is accurate then the result you got is unexpected. It is true that specific heat capacity of salt solution (per mass unit) is lower than of pure water, you can estimate it as $$C_{p} = wC_{p}^{salt}+(1-w)C_p^{H_2O}$$ where w is the mass fraction of salt. However, as you describe it, you didn't keep the mass constant but increased it by adding some salt to a fixed volume of water so your total heat capacity should be the sum of the heat capacity of water (which is the same for all samples) and that of salt, sugar or baking soda.
Since w in your experiment was around 0.04, the effect you were measuring was quite small and could easily be smaller than the experimental error. This error consists of the accuracy of measuring the volume of water, of measuring temperature, and of timing. The easiest way to reduce these errors is to repeat each experiment several times in random order and see if the results are consistent.
Update: I found a plot of specific heat of soda solutions and I calculated heat capacity for two cases: a) 300 ml of pure water and b) 300 ml of water + 12.5 g of soda = 312.5 g of 4% soda solution. The heat capacity of pure water sample would be 1254 J/K and that of water with soda 1276 J/K - as I expected, it is higher but the difference is less than 2%.
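For what it's worth, the arithmetic goes like this (the specific heat of the ~4% soda solution below is only a rough value read off such a plot):

```python
# Rough version of the calculation above.  c_solution is an approximate
# value for a ~4% soda solution read off a plot of specific heats.
c_water    = 4.18     # J/(g K), pure water
c_solution = 4.08     # J/(g K), ~4% sodium carbonate solution (approximate)

m_water = 300.0       # g, 300 ml of water
m_soda  = 12.5        # g, roughly one tablespoon of soda

C_pure  = m_water * c_water                   # ~1254 J/K
C_mixed = (m_water + m_soda) * c_solution     # ~1275 J/K

print(C_pure, C_mixed)    # the sample with soda has the larger total heat capacity
```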
-
We repeated the experiments several times, and with different solutes: sugar and baking soda. The differences in rate of cooling were small but consistent. – Anne Laks Dec 20 '10 at 19:54
@Anne: Do I understand it correctly that you had 300 ml of water in both cases? Meaning that you compared cooling of 300 g pure water versus 312 g salty water? – gigacyan Dec 20 '10 at 21:19
300 ml of water; 1 tablespoon = 15 ml of salt. Also tried 3 tablespoons of sugar, also 1 tablespoon baking powder. – Anne Laks Dec 20 '10 at 21:50
@Anne: did you prepare a new water solution for every repetition or did you reuse the same one? Also, try to calculate the specific heat based on your observations. To do this, plot log(T) vs time and make a linear fit. The slope of the line is inversely proportional to the heat capacity. Use the heat capacity of pure water as a reference and calculate values for the other solutions. It would help if you knew the mass of added salt, but you can use a cooking table for an estimation (1 tbsp of salt is 12.5 g) – gigacyan Dec 21 '10 at 8:18
I prepared a new water solution each time we tried a new solute. – Anne Laks Dec 22 '10 at 0:16
I found this very relevant to my physics studies. Specific heat capacity is simply the amount of energy needed to change the temperature of a substance by a certain amount. It is measured as the energy, in joules, needed to raise 1 kg of a substance by 1 °C (equivalently, 1 kelvin). Materials with high heat capacities need a lot of heat applied before they change temperature; conversely, materials with a lower heat capacity only need a little heat before their temperature rises. The formula Q = mCΔT is used to determine the specific heat capacity: Q refers to the change in heat energy of the object or substance, m to the mass, C to the specific heat capacity and ΔT to the change in temperature. The simplest way to apply heat energy to a substance such as water is with an electrical element. To measure the energy, the formula E = VIt can be used, where V is the voltage, I the current being passed through and t the time for which the element is heated. Enjoy, fellow scientists.
-
2
I would downvote this, except I don't like downvoting. All you've done is define terms, not answer the question. – Mike Dunlavey May 3 '12 at 2:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9440445303916931, "perplexity_flag": "middle"} |
http://www.physicsforums.com/showthread.php?p=4257372 | Physics Forums
## Probability Density and Current of Dirac Equation
Hey,
I'm trying to determine the probability density and current of the Dirac equation by comparison to the general continuity equation. The form of the Dirac equation I have is
$$i\frac{\partial \psi}{\partial t}=(-i\underline{\alpha}\cdot\underline{\nabla}+\beta m)\psi$$
According to my notes I am supposed to determine the following sum to make the relevant comparisons to the continuity equation and therefore determine the probability density/current
$$\psi(Dirac)^{\dagger}+\psi^{\dagger}(Dirac)$$
Where 'Dirac' refers to the above equation. However I have tried this and I can only get it to work if I multiply one term by 'i' and the other by '-i' in the above.
$$\psi(i\frac{\partial \psi}{\partial t})^{\dagger}+\psi^{\dagger}(i\frac{\partial \psi}{\partial t})=-i\psi\frac{\partial \psi^{*}}{\partial t}+i\psi^{*}\frac{\partial \psi}{\partial t}\neq i\frac{\partial (\psi^{*}\psi)}{\partial t}$$
Any help is appreciated!
Thanks,
SK
Your notes should use the gamma matrices. That's the modern treatment of the Dirac equation/field and makes the special relativity invariance easier to see.
Perhaps they should, but I'm reckoning these are introduced later, so considering we don't 'know' them yet, this appears to be the simplest way of demonstrating the probability density/current of the Dirac equation. I'm confused though; perhaps I'm taking the adjoint of the Dirac equation incorrectly.
## Probability Density and Current of Dirac Equation
Have you considered that taking the hermitian conjugate is not only taking the complex conjugate but also the transposition?
I think so; taking the adjoint of ψ doesn't bring out a minus sign, does it? With regard to the RHS of the Dirac equation, I think β is diagonal and so the transposition doesn't affect it, though I'm a bit confused as to how I'd go about taking the hermitian conjugate of the dot product of the alpha matrix with ∇... I wouldn't be surprised though if this transposition is the issue; I'll keep looking at it!
Also, $\psi^{\dagger}\psi$ is a number, while $\psi\psi^{\dagger}$ is a matrix, so I don't really quite get the whole thing.
Quote by Sekonda I think so, taking the adjoint of ψ doesn't bring out a minus sign does it? With regards to the RHS of the Dirac equation I think β is diagonal and so the transposition doesn't affect it, though I'm a bit confused as how I'd go about doing the hermitian conjugate on the dot product of the alpha matrix with the ∇... I wouldn't be surprised though if this transposition is the issue, I'll keep looking at it!
The complicated thing is correctly taking the Hermitian conjugate of the Dirac equation. What I think is true is this:
1. $i \dfrac{d\Psi}{dt} = -i \alpha \cdot (\nabla \Psi) + \beta m \Psi$
2. $-i \dfrac{d\Psi^\dagger}{dt} = +i (\nabla \Psi^\dagger \cdot \alpha) + \Psi^\dagger \beta m$
Taking the conjugate reverses the order of matrices. So if you multiply the top equation on the left by $-i \Psi^\dagger$ and multiply the bottom equation on the right by $+i \Psi$ and add them, you get:
$\Psi^\dagger \dfrac{d\Psi}{dt} + \dfrac{d\Psi^\dagger}{dt}\Psi= - \Psi^\dagger \alpha \cdot (\nabla \Psi) - (\nabla \Psi^\dagger) \cdot \alpha \Psi$
(the terms involving $\beta$ cancel). You can rewrite this as (I think):
$\dfrac{d}{dt} (\Psi^\dagger \Psi) = - \nabla \cdot (\Psi^\dagger \alpha \Psi)$
This can be rearranged as a continuity equation for probability.
There's a different continuity equation for electric charge, but I've forgotten what that is.
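The step that usually causes trouble is that the $\alpha_i$ and $\beta$ are Hermitian, which is what justifies the conjugated equation above. In the standard Dirac representation this is easy to check with SymPy (a small sketch, with the matrices written out explicitly):

```python
# Check that the alpha_i and beta matrices of the standard Dirac
# representation are Hermitian (and anticommute with each other), which is
# what the conjugated form of the Dirac equation relies on.
import sympy as sp

I2, Z2 = sp.eye(2), sp.zeros(2, 2)
sx = sp.Matrix([[0, 1], [1, 0]])
sy = sp.Matrix([[0, -sp.I], [sp.I, 0]])
sz = sp.Matrix([[1, 0], [0, -1]])

def block(A, B, C, D):
    return sp.Matrix(sp.BlockMatrix([[A, B], [C, D]]).as_explicit())

alphas = [block(Z2, s, s, Z2) for s in (sx, sy, sz)]   # alpha_i: sigma_i off the diagonal
beta   = block(I2, Z2, Z2, -I2)

print(all(a.H == a for a in alphas), beta.H == beta)               # True True
print(all(a * beta + beta * a == sp.zeros(4, 4) for a in alphas))  # True
```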
Thanks Stevendaryl! This is exactly what I wanted, I can see what was incorrect now - brilliant! Thanks again, SK
Quote by stevendaryl: "There's a different continuity equation for electric charge, but I've forgotten what that is."
My last comment was stupid. If you multiply probability current by the electron charge, you get the charge current. It's that simple. For some reason, though, that's not the case with the solutions of the Klein Gordon equation.
http://www.physicsforums.com/showthread.php?s=4482fb7b33faf904df250768cec326f2&p=4178897 | Physics Forums
## Which Dickey-Fuller test should I apply to this time series?
I have a time series of climate data that I'm testing for stationarity. Based on previous research, I expect the model underlying the data to have an intercept term, a positive linear time trend, and some normally distributed error term. In other words, I expect the underlying model to look something like this:
$y_t = a_0 + a_1 t + \beta y_{t-1} + u_t$
where ut is normally distributed. Since I'm assuming the underlying model has both an intercept and a linear time trend, I tested for a unit root with equation #3 of the simple Dickey-Fuller test, as shown:
$\nabla y_t = \alpha_0 + \alpha_1 t + \delta y_{t-1} + u_t$
This test returns a critical value that would lead me to reject the null hypothesis and conclude that the underlying model is non-stationary. However, I question if I'm applying this correctly, since even though the underlying model is assumed to have an intercept and a time trend, this does not imply that the first difference $\nabla y_t$ will as well. Quite the opposite, in fact, if my math is correct.
Calculating the first difference based on the equation of the assumed underlying model gives:
$\nabla y_t = y_t - y_{t-1} = [a_0 + a_1 t + \beta y_{t-1} + u_t] - [a_0 + a_1(t-1) + \beta y_{t-2} + u_{t-1}]$

$\nabla y_t = [a_0 - a_0] + [a_1 t - a_1(t-1)] + \beta[y_{t-1} - y_{t-2}] + [u_t - u_{t-1}]$

$\nabla y_t = a_1 + \beta \, \nabla y_{t-1} + u_t - u_{t-1}$

Therefore, the first difference $\nabla y_t$ appears to only have an intercept, not a time trend.
Because the underlying model has an intercept and a time trend, should I use the Dickey-Fuller test that includes an intercept and time trend when it tests for a unit root, or should I use the Dickey-Fuller test that only includes an intercept because the first difference of the original time series only has an intercept?
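For reference, here is how the two variants are selected in statsmodels (a sketch only; the simulated series below is just a stand-in for the climate data, a linear trend plus stationary AR(1) noise): regression='ct' fits an intercept and a linear trend, regression='c' an intercept only.

```python
# Sketch: running both Dickey-Fuller variants with statsmodels on a
# simulated trend-stationary series (intercept + linear trend + AR(1) noise).
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
n = 200
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.5 * noise[i - 1] + rng.normal()
y = 1.0 + 0.05 * np.arange(n) + noise

for reg in ("ct", "c"):          # 'ct' = intercept + trend, 'c' = intercept only
    stat, pvalue, *_ = adfuller(y, regression=reg)
    print(f"regression={reg!r}:  ADF statistic = {stat:.3f},  p-value = {pvalue:.4f}")
```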
You are correct that the difference operation applied to a linear trend term (k)(t) produces a constant, not another linear trend term. It's interesting to try to read that Wikipedia article you linked. It's about some hypothesis tests, but it never manages to state the null hypotheses and the test statistics. Apparently you understand the test statistics - which is more than I know. However, I did find this page: http://stats.stackexchange.com/quest...ey-fuller-test where the reply mentions that using the test with a term of the form (k)(t) implies we are investigating a model with a quadratic trend. So my guess is that you don't use that form to test your null hypothesis.
As far as I know, the null hypothesis is that a unit root exists and that the process is therefore non-stationary. I had found that thread you linked, and I'm the user who started the short but ongoing discussion in the comments. The linked post (listed in the right-hand column) is also mine, and is quite similar to my post here. http://stats.stackexchange.com/q/44647 I don't understand why a test with a trend term $a_1 t$ would correspond to a quadratic trend, however. Can you shed any light on that?
## Which Dickey-Fuller test should I apply to this time series?
The only light I can shed is that if $f(t) = kt^2$ then
$\triangle f(t) = f(t+1) - f(t) = k(t+1)^2 - kt^2 = k( t^2 + 2t + 1) - kt^2 = 2kt + k$
So the $\triangle$ of a model with a $t^2$ term has a term linear in $t$.
http://mathoverflow.net/revisions/108738/list | ## Return to Answer
2 added 97 characters in body
I don't believe that this is correct. The easiest way to see this is to look at your second question: The automorphisms/deformations/obstructions of a curve come from $H^i(C, T_C)$, i.e. they are the sheaves
`$R^i p_*\omega_{U/\overline{\mathcal{M}_{g,n}}}^\vee$`
where `$p : U \to \overline{\mathcal{M}_{g,n}}$` is the universal family, and `$\omega_{U/\overline{\mathcal{M}_{g,n}}}$` the relative dualizing sheaf. But these do not depend on `$\overline{\mathcal{M}_{g,n}}(X, \beta)$` !
In the end, I think the issue is that you have the wrong exact sequence. What you want (to produce the relative obstruction theory) is the complex
`$R^i p_*f^*T_X$`
where the maps $p, f$ arise in the universal diagram
`$\overline{\mathcal{M}_{g,n}}(X, \beta) \longleftarrow_p U \longrightarrow_f X$`
It is not obvious to me that your sheaves should be the same as these ones.
1
I don't believe that this is correct. The easiest way to see this is to look at your second question: The automorphisms/deformations/obstructions of a curve come from $H^i(C, \mathcal{O}_C)$, i.e. they are the sheaves
`$R^i p_*\mathcal{O}_U$`
where `$p : U \to \overline{\mathcal{M}_{g,n}}$` is the universal family. But these do not depend on `$\overline{\mathcal{M}_{g,n}}(X, \beta)$` !
In the end, I think the issue is that you have the wrong exact sequence. What you want (to produce the relative obstruction theory) is the complex
`$R^i p_*f^*T_X$`
where the maps $p, f$ arise in the universal diagram
`$\overline{\mathcal{M}_{g,n}}(X, \beta) \longleftarrow_p U \longrightarrow_f X$`
It is not obvious to me that your sheaves should be the same as these ones. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9577937722206116, "perplexity_flag": "head"} |
http://unapologetic.wordpress.com/2007/05/02/more-knot-coloring/?like=1&source=post_flair&_wpnonce=8149a33672 | # The Unapologetic Mathematician
## More knot coloring
Two weeks ago I went over how to color knots with three colors. Of course, being mathematicians, we want to generalize the hell out of this.
Really, as is often the case with this sort of thing, “color” is just a metaphor. We labeled each arc of a diagram with one element of a set with three elements and put a condition on the labels attached to the arcs at each crossing. So, what sets do we know of with three elements. I’ll give you some time to think.
Back? Did you come up with “the abelian group $\mathbb{Z}_3$“? If so, great. If not, this is one of the first examples of groups I mentioned waaaaaay back in January. Basically, it’s made of the numbers in the set $\{0,1,2\}$. We add and subtract as usual, but we loop around from $2$ to ${}0$. For example, $2+2=1$ and $0-1=2$.
So let’s imagine we’re “coloring” a knot with $\mathbb{Z}_3$. What’s the condition at a crossing? Imagine we’ve got an overcrossing arc colored $a$, and we approach it along an undercrossing arc colored $b$. If $a=b$ then we have two arcs of the same color, and we have to leave along an arc with the same color. On the other hand, if $a\neq b$ we have to use the third color. So how do we recognize the third color? It turns out that there’s an easy way to do this: the third color is $2a-b$. In fact, this also works if $a=b$. Try writing down all nine combinations of $a$ and $b$ and seeing that $2a-b$ is always a valid coloring for the third arc.
Now there’s nothing in our notion of coloring that relies on the number $3$. All we’re really using in this new condition is the fact that we have an abelian group. So take your favorite abelian group $G$ and try coloring knots with it, requiring that if an arc colored $b$ undercrosses an arc colored $a$, it comes out the other side colored $2a-b$.
Does this new form of $G$-coloring depend on the knot diagram we use, or only on the knot? Well, it turns out to depend just on the knot. To show this, we again use the Reidemeister moves.
In each of these diagrams, I've colored some of the ends and worked across the diagram, seeing what I'm required to color the other edges. In the first move, for example, if I color the top of the left side $a$, the bottom will also get $a$. If I color the top of the right side $a$, the arc crosses under itself so the bottom must be colored $2a-a$. But we see that $2a-a=a$, so this is the same end coloring. The same sort of argument works for the second and third moves.
So for every abelian group $G$, we have an invariant: the number of ways of $G$-coloring the arcs of any diagram of the knot. The colorings we did the last time were $\mathbb{Z}_3$-colorings.
But why does this rule work so well? It turns out that we’re not interested in $G$ as an abelian group. We take the underlying set of $G$ and equip it with the operation $a\triangleright b=2a-b$. This makes $G$ into an involutory quandle, called the “dihedral quandle” of $G$. The name comes from the fact that $b\mapsto2a-b$ is like “reflection through $a$“, and such reflections generate all symmetries of regular polygons in the plane. A polygon in the plane, of course, is a sort of degenerate polyhedron in space with only two sides: “dihedron”.
Anyhow, recall the axioms for an involutory quandle:
• $a\triangleright a=a$
• $a\triangleright(a\triangleright b)=b$
• $a\triangleright(b\triangleright c)=(a\triangleright b)\triangleright(a\triangleright c)$
First check that these axioms really do hold for the operation we defined on $G$. Then imagine using any other involutory quandle $Q$ to color knots. Go back to the Reidemeister diagrams and do just what we did for $G$-coloring, but use the quandle operation instead: if an arc colored $b$ undercrosses one colored $a$, it leaves colored $a\triangleright b$. Show that the number of $Q$-colorings is an invariant of the knot, not just of its diagrams.
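For the dihedral quandle, the brute-force version of that first check is short enough to write down (here for $\mathbb{Z}_n$ with small $n$):

```python
# Verify the three involutory quandle axioms for the dihedral quandle of
# Z_n, where a |> b is defined as 2a - b (mod n).
def is_dihedral_quandle(n):
    tr = lambda a, b: (2 * a - b) % n
    G = range(n)
    idempotent  = all(tr(a, a) == a for a in G)
    involutory  = all(tr(a, tr(a, b)) == b for a in G for b in G)
    distributes = all(tr(a, tr(b, c)) == tr(tr(a, b), tr(a, c))
                      for a in G for b in G for c in G)
    return idempotent and involutory and distributes

print(all(is_dihedral_quandle(n) for n in range(1, 13)))   # True
```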
Posted by John Armstrong | Knot theory
## 10 Comments »
1. [...] move on to More knot coloring by John Armstrong that extends the concept of knot colorings to arbitrary involutory quandles. Wow [...]
Pingback by | May 4, 2007 | Reply
3. You don’t need the quandle to be involutory to color knots and obtain an invariant. Just change the second axiom by “the operation b |–> a*b is a bijection for all a” and you get a quandle suitable for coloring as well.
Comment by Matias | May 14, 2007 | Reply
4. You’re right, though I’m currently just considering involutory quandles in the context of this approach.
Comment by | May 14, 2007 | Reply
5. [...] Fundamental Involutory Quandle As I discussed last time, coloring a knot with any abelian group is secretly using the dihedral quandle associated to that [...]
Pingback by | May 16, 2007 | Reply
6. How do you get that 2a-b is the third coloring?
Comment by Miguel | December 19, 2007 | Reply
7. Miguel, you mean in the two diagrams for the third Reidemeister move? On both sides, a strand colored $b$ passes under a strand colored $a$, and comes out colored $2a-b$. On the left it’s done, but on the right it goes on to cross over another strand. Since overcrossing strands keep their same colors, this doesn’t change anything.
Comment by | December 19, 2007 | Reply
10. [...] for the other arrow directions and orders of visiting the segments. The diagram comes from this article in John Armstrong’s blog “The Unapologetic Mathematician” which we keep in our [...]
Pingback by | January 13, 2012 | Reply
http://nrich.maths.org/31 | ### Prompt Cards
These two group activities use mathematical reasoning - one is numerical, one geometric.
### Exploring Wild & Wonderful Number Patterns
EWWNP means Exploring Wild and Wonderful Number Patterns Created by Yourself! Investigate what happens if we create number patterns using some simple rules.
### Worms
Place this "worm" on the 100 square and find the total of the four squares it covers. Keeping its head in the same place, what other totals can you make?
# Consecutive Numbers
##### Stage: 2 Challenge Level:
Well I wonder how often you have noticed that there are numbers around the place that follow one after another $1, 2, 3, \ldots$ etc.? Sometimes they appear in reverse order when a countdown is happening for a launch of a rocket. But usually they happen in an order going up, like when you read through a book and notice the page numbers. These kinds of numbers are called consecutive numbers, you may have heard of the word before - it simply means that they are whole numbers that follow one after another.
You can start anywhere [ $3, 4, 5, 6, \ldots$ etc. or $165, 166, 167, 168, \ldots$ etc.] and they can be explored in a number of different ways, when they are not counting anything particular. This investigation is about using the idea of consecutive numbers and gives us other numbers that we can explore much further and find out all kinds of things. You may very well discover things that NO ONE else has discovered or written about before, and that's GREAT!!
So this is how it starts. You need to choose any four consecutive numbers and place them in a row with a bit of a space between them, like this:
When you've chosen your consecutive numbers, stick with those same ones for quite a while, exploring ideas before you change them in any way. Now place $+$ and $-$ signs in between them, something like this:
4 + 5 - 6 + 7
4 - 5 + 6 + 7
and so on until you have found all the possibilities. You should include one using all $+$'s and one that includes all $-$'s.
Now work out the answers to all your calculations (e.g. 4 - 5 + 6 + 7 = 12 and so on). Are you sure you've got them all?
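If you want to be sure you have found every possibility, notice that there are three gaps between the four numbers and two choices of sign for each gap, so there are $2\times2\times2 = 8$ calculations in all. A few lines of Python list them (using $4, 5, 6, 7$ as the example):

```python
# List every way of placing + and - signs between four consecutive numbers.
from itertools import product

numbers = [4, 5, 6, 7]
for signs in product("+-", repeat=3):
    expression = str(numbers[0]) + "".join(s + str(n) for s, n in zip(signs, numbers[1:]))
    print(expression, "=", eval(expression))
```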
If so, try other sets of four consecutive numbers and look carefully at the sets of answers that you get each time. It is probably a good idea to write down what you notice. This can lead you to test some ideas out by starting with new sets of consecutive numbers and seeing if the same things happen in the same way.
You might now be doing some predictions that you can test out...
FINALLY, it is good to ask the question "I wonder what would happen if I ... ?"
You may have thought up your own questions to explore further. Here are some we thought of:
"What would happen if I took the consecutive numbers in an order going down, instead of up?"
"What would happen if I only used sets of 3 consecutive numbers?"
"What would happen if I used more consecutive numbers?"
"What would happen if I changed the rule and allowed consecutive numbers to include fractions or decimals?"
"What would happen if I allowed a $+$ or $-$ sign before the first number?"
This problem was chosen as a favourite for the NRICH 10th Anniversary website by Bernard Bagnall. Find out why Bernard selected it in the Notes
http://mathoverflow.net/questions/90789/noncommutative-fukaya-category | Noncommutative Fukaya category?
After reading through part of Victor Ginzburg's notes on Calabi-Yau algebras, I have a question about a principle in mirror symmetry. Let $(X,X')$ be a mirror pair of Calabi-Yau varieties; then mirror symmetry predicts a bijection

$$M_\mathbb{C}(X) \leftrightarrow M_K(X'),$$ where $M_{\mathbb{C}}(X)$ is the space of smooth CY-deformations of $X$ and $M_{K}(X')$ is the space of 'stringy' Kähler resolutions of $X'$. The homological mirror symmetry conjecture of Maxim Kontsevich predicts an equivalence of categories $D^b(coh(X_c))\simeq D(Fuk(X'_{c'}))$ where $c\leftrightarrow c'$ under the bijection on the moduli spaces. The problem is that there are singular Calabi-Yau varieties without any smooth deformations or any smooth crepant resolutions. So the homological mirror symmetry conjecture doesn't seem to have a lot of substance in these cases. An insight of Michael Van den Bergh is that every Calabi-Yau variety should have a noncommutative deformation or noncommutative crepant resolution. Under the bijection on moduli spaces it is then possible for a noncommutative deformation to map to a commutative Kähler resolution and vice versa. To extend the homological mirror symmetry conjecture to include these noncommutative spaces, it seems plausible to define the category of coherent sheaves on a noncommutative space using finitely generated projective modules. So here is my question:
The definition of the Fukaya category on a symplectic manifold uses techniques that only seem to be available in the geometric context, so is there a plausible definition of the `Fukaya category' of a noncommutative space in order to make the noncommutative homological mirror symmetry conjecture hold?
-
1st approximation: arxiv.org/pdf/math/9803140v1.pdf – Daniel Pomerleano Mar 10 2012 at 3:07
1 Answer
Since no one else has tried to answer, I'll take a shot. It seems to me that there are threads of ideas in this story that in the very distant future might be woven together to give a possible answer.
To begin, we should note that there seems to be a general idea, discussed in this mathoverflow question (http://mathoverflow.net/questions/37420/deformation-quantization-and-quantum-cohomology-or-fukaya-category-are-they), that one could define the Fukaya category as modules over a deformation quantization of $C^{\infty}(X)$ corresponding to the symplectic form $\omega$.
The basic idea is that in two naive respects this category of modules behaves a lot like the Fukaya category. Firstly, the Hochschild cohomology of the deformation quantization is almost by definition the Poisson cohomology of the symplectic form $\omega$, which in turn is known to be isomorphic to $H^*(X)((t))$. As an equation:
$$HH^*(A_\omega,A_\omega) \cong H^*(X)((t))$$
Second, one can define a reasonable notion of modules with support on a Lagrangian submanifold and for any Lagrangian L, produce canonical holonomic modules supported there. One can compute that $$Ext(M_L,M_L) \cong H^*(L)((t))$$ There is some hope that one can put in the instanton corrections in a formal algebraic way and a fair amount of work has been done in this direction.
This story works best so far for the Fukaya category of `$T^*X$` where the deformation quantization is roughly the algebra of differential operators. This is related to more work than I could competently summarize; I'll just mention work of Nadler and Zaslow, and of Tsygan and Tamarkin. This approach is used by Kapustin and Witten to incorporate co-isotropic branes into the Fukaya category in their famous study of the Geometric Langlands. There, they are after some enlargement of Nadler's infinitesimal Fukaya category of $T^*(X)$. Note however that this is not the same Fukaya category (the wrapped Fukaya category) that one studies in the context of mirror symmetry, but perhaps things will work better in the compact case if that is ever put on firm ground.
This was all a prelude to say that deformation quantization places you firmly in the land of non-commutative geometry anyways. Things like differential operators for non-commutative rings can make sense (see http://www.springerlink.com/content/r0rqguawu1960qxy/). I've never really looked at Van Den Bergh's work, but perhaps the passage from the sheaf of algebraic functions to the sheaf $C^\infty(X)$ is another stumbling point. One of Maxim Kontsevich's ideas (see his Lefschetz lecture notes http://www.ihes.fr/~maxim/TEXTS/Kontsevich-Lefschetz-Notes.pdf) is that for any saturated dg-algebra there should maybe exist some nuclear algebra which bears the same formal relationship as the algebra of algebraic functions and smooth functions.
-
http://math.stackexchange.com/questions/8547/parity-of-k-d-d-prime-1-vs-textgcdd1-d-prime-1-for-distinct/8548 | # Parity of $k = d d^{\prime} > 1$ vs $\text{gcd}(d+1,d^{\prime} +1)$ for distinct divisors $d$ and $d^{\prime}$
I've recently come upon the following (seemingly) simple observation:
Claim: A positive integer $k = d d^{\prime} > 1$ has the opposite parity of $\text{gcd}(d+1,d^{\prime}+1)$ for any pair of distinct divisors $d$ and $d^{\prime}$ of $k$.
There are substantially simpler proofs (see comments), and one proof follows from the formula: \begin{eqnarray} \text{gcd}(a,b) = a + b - ab + 2 \sum_{k=1}^{a-1} \lfloor \tfrac{kb}{a} \rfloor. \end{eqnarray}
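A quick numerical check of both the floor-sum identity and the parity claim (a hedged sketch of my own; the helper name `gcd_via_floor_sum` is not from the post):

```python
from math import gcd

def gcd_via_floor_sum(a, b):
    # gcd(a,b) = a + b - a*b + 2*sum_{k=1}^{a-1} floor(k*b/a)
    return a + b - a * b + 2 * sum((k * b) // a for k in range(1, a))

# The floor-sum identity on a range of pairs.
assert all(gcd_via_floor_sum(a, b) == gcd(a, b)
           for a in range(1, 40) for b in range(1, 40))

# The parity claim: for k = d*d' > 1 with d != d',
# k and gcd(d+1, d'+1) have opposite parity.
for k in range(2, 2000):
    for d in range(1, k):
        if k % d == 0 and k // d != d:
            assert (k + gcd(d + 1, k // d + 1)) % 2 == 1
```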
Corollary: Given a multiplicative arithmetic function $f \colon \mathbb{N} \to \mathbb{N}$ and a pair of coprime integers $a$ and $b$, \begin{eqnarray} f(ab) + 1 \equiv \text{gcd}(f(a)+1,f(b)+1) \mod 2. \end{eqnarray}
Question: Do either of these simple results have any neat use(s) or interesting application(s)?
(NB: The purpose of this post is not so much about finding simple (or simpler) proofs, but rather more about determining possible applications.)
Thanks!
-
## 1 Answer
I don't have an answer for the applications question (so feel free to vote me down, I guess?), but the proof is much simpler conceptually if you do it in cases:
Let $k = ab$, and if a is odd and b is even, then the product is even and the divisors swap parity when you add one, so they don't have a factor of two in common. If they're both even, then the product is even but $a+1$ and $b+1$ are odd, so the gcd is odd. If they're both odd, then the product is odd but $a+1$ and $b+1$ are both even so the gcd is even.
-
Conceptual proofs are always preferable! But, as you surmised, the question is about applications. – user02138 Nov 1 '10 at 20:08
http://math.stackexchange.com/questions/164899/generalized-laplacian-operator | # Generalized Laplacian operator?
Suppose a surface $S$ is endowed with a metric given by the matrix $$M=\begin{pmatrix} E&F\\F&G\end{pmatrix}$$
And $f,g$ are scalar functions defined on the surface. What then is the (geometric) significance of the scalar function given by ${1\over \sqrt{\det(M)}}{\partial \over \partial x_i}\left(f\sqrt{\det(M)} (M^{-1})_{ij} {\partial \over \partial x_j} g\right)$?
I have been told that if we set $f=1$, we get an operator equivalent to the Laplacian acting on the function $g$. Why does the Laplacian take this form? Is there an intuitive geometric explanation of what is going on?
Thank you.
-
## 2 Answers
Yes, there is a more intuitive geometric explanation, though it is kind of difficult to see if you just get to see this formula. In differential (Riemannian) geometry one looks at curved (in contrast to flat, like Euclidean space) surfaces or higher dimensional manifolds. The metric you are looking at (more precisely: the pointwise norm associated with the pointwise scalar product it defines) is kind of an infinitesimal measure for distances in these surfaces.
It turns out that in this context you can set up differential calculus, the basic operation of which, when applied to vector fields, is the so-called covariant differentiation. If you are looking at surfaces embedded in Euclidean 3-space this is basically a differentiation in that surrounding space followed by an orthogonal projection onto the tangent plane to the surface, but one can define this in an abstract manner, too (without an ambient manifold). If you then take a function and its gradient (a concept which is also to be defined and depends on the metric) and take the covariant derivative of this object, the trace of this object (wrt the metric) is the Laplacian of the function (as in Euclidean space, where the Laplacian is the trace of the Hessian). In local coordinates this happens to look like the object you wrote down (when $f=1$). While in this form it looks a bit arbitrary it turns out to have some interesting properties. In particular it is invariant under coordinate changes, that is, it is well defined as a differential operator on the surface.
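For a concrete check, here is a small SymPy sketch (my own illustration, not from the answer) that evaluates ${1\over \sqrt{\det(M)}}{\partial \over \partial x_i}\left(\sqrt{\det(M)} (M^{-1})_{ij} {\partial \over \partial x_j} g\right)$ for the flat metric written in polar coordinates; it reproduces the familiar polar-coordinate Laplacian, which is one way to see that the formula with $f=1$ really is the Laplacian in disguise.

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
g = sp.Function('g')(r, th)
coords = (r, th)

M = sp.Matrix([[1, 0], [0, r**2]])      # flat metric in polar coordinates
Minv = M.inv()
sqrt_det = sp.sqrt(M.det())

# (1/sqrt(det M)) * d_i( sqrt(det M) * (M^{-1})_{ij} * d_j g ), summed over i, j
lap = sum(sp.diff(sqrt_det * Minv[i, j] * sp.diff(g, coords[j]), coords[i])
          for i in range(2) for j in range(2)) / sqrt_det

polar_laplacian = sp.diff(g, r, 2) + sp.diff(g, r) / r + sp.diff(g, th, 2) / r**2
print(sp.simplify(lap - polar_laplacian))   # 0
```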
If you want to learn more about this you should fetch some basic textbook on Riemannian Geometry, like do Carmo.
-
Thank you, Thomas. I will look for a Riemannian Geometry textbook. If you don't mind, could you explain a little about the covariant derivative, or perhaps provide a link to a more visual explanation for that? Thanks again! – Brio Jun 30 '12 at 13:29
@Brio I added a link (in the answer) to a wikipedia page about covariant differentiation. – user20266 Jun 30 '12 at 13:41
Thank you, Thomas! – Brio Jun 30 '12 at 16:36
Thomas has already given a good answer. However it might be helpful to mention that the operator in the question (v1) with $f=1$ is in fact the 2D Laplace-Beltrami operator.
-
Thank you, Qmechanic! – Brio Jun 30 '12 at 16:37
http://mathoverflow.net/questions/26932/simple-examples-of-homotopy-colimits | ## Simple examples of homotopy colimits
I am following the explicit construction of homotopy colimits as described by Dugger in the paper: "Primer on homotopy colimits", which can be found here: http://www.uoregon.edu/~ddugger/hocolim.pdf As described in the appendix of Topological hypercovers and A1-realizations, Mathematische Zeitschrift 246 (2004) in the category of topological spaces no cofibrant-replacement functor is needed when computing the homotopy colimit of a small diagram in $\mathcal{T}op$.
For the index category $\mathcal{I} = \cdot \rightrightarrows \cdot$ and a small diagram $D: \mathcal{I} \rightarrow \mathcal{T}op$ with the image $X \rightrightarrows Y$ where $f, g: X \rightarrow Y$, this yields the space $T := (X \times \nabla^0 \amalg Y \times \nabla^0 \amalg X_g \times \nabla^1 \amalg X_f \times \nabla^1) / \sim$ where $\sim$ is given by: $(x, 1) \sim (x, (0,1)) \in X_f \times \nabla^1, X_g\times \nabla^1$, $(f(x), 1) \sim (x, (1,0)) \in X_f \times \nabla^1$ and $(g(x), 1) \sim (x, (1,0)) \in X_g \times \nabla^1$ for all $x \in X$.
Notation: $\nabla^n$ is the topological $n$-simplex, and $X_f$ and $X_g$ are just copies of $X$ indexed by a map in the diagram to keep track of all the identifications.
1) Are any requirements necessary for $T$ to be homotopy equivalent or weakly homotopy equivalent to $colim_{\mathcal{I}}{D}$?
2) What are the requirements for a homotopy pushout to be homotopy or weakly homotopy equivalent to the ordinary pushout?
3) What are the requirements for a homotopy colimit of a small diagram from the category obtained from the preorder $(\mathbb{N}, \leqslant)$ to $\mathcal{T}op$ to be homotopy or weakly homotopy equivalent to the infinite mapping telescope as described in Section 3F (page 312) in the book about algebraic topology by Hatcher?
Since I barely know any model category theory, I would appreciate any elementary answers to this! Thank you very much!!
-
Isn't the proper setting for all of this model category theory (or at least some sort of setup where you have categories with weak equivalences)? – Harry Gindi Jun 3 2010 at 16:03
Well, you are probably right, but my supervisor doesn't want me to use model category theory because we have not covered model categories in the lectures and thus it would take quite long to develop the language and cover everything that is needed. – unknown (google) Jun 3 2010 at 16:34
I'm guessing that by "$\nabla^n$", you mean the topological $n$-simplex (which people usually write $\Delta^n$). And that "$X_f$" and "$X_g$" are really just copies of $X$. Is that right? – Charles Rezk Jun 3 2010 at 16:35
Yeah, you're right! Thank you! – unknown (google) Jun 3 2010 at 20:17
I wouldn't be bothered by Harry; while model categories are one of the more general settings in which to understand homotopy colimits, the theory of homotopy colimits in model categories is inspired by the theory for topological spaces. So you are starting in historically the first place anyway. And an example (even a big example like this) is often a good way to get a handle on a general theory. – Allison Smith Jun 7 2010 at 19:50
## 2 Answers
Here's an answer to question 2: A sufficient condition for $\mathrm{colimit}(X \leftarrow A\rightarrow Y)$ to be weakly equivalent to the homotopy colimit is (a) for one of the maps (say $A\to X$) to have the homotopy extension property. Another sufficient condition is that the diagram $(X\leftarrow A\rightarrow Y)$ is (b) an "excisive triad". Take a look at Chapter 10, section 7 of May's "Concise Course", where (b) is proved to be such a sufficient condition. You can show that (a) is a sufficient condition by showing directly that it is homotopy equivalent to the double mapping cylinder $X\cup A\times \Delta^1\cup Y$, which can be re-analyzed as an "excisive triad".
-
Re homotopy pushouts: These are also analysed in the chapter on Cofibrations in my book Topology and Groupoids (2006).
These ideas of homotopy colimits are also useful in group theory. For example you can replace the trefoil group T which has presentation with generators $x,y$ and relation $x^2y^{-3}$ by the trefoil groupoid $T'$ which has two objects $0,1$, a generator $x$ at $0$, a generator $y$ at $1$, an arrow $\iota: 0 \to 1$ and a relation $y^3\iota = \iota x^2$. This corresponds to the fundamental groupoid on 2 base points of a double mapping cylinder of maps $S^1 \to S^1$, which is `better' than the ordinary pushout of these maps, as that is not Hausdorff.
-
A further point about the use of homotopy colimits in the extension from group to groupoid theory is that in the example given above of the trefoil groupoid, $T'$, the map $\{x,y\} \to T'$ is injective, just as in the double mapping cylinder topological model there are two 1-cells corresponding to the generators. This type of use of groupoids was pointed out to me a long time ago by Eldon Dyer. – Ronnie Brown May 3 2012 at 10:09
http://www.sagemath.org/doc/reference/hecke/sage/modular/hecke/module.html | # Hecke modules¶
class sage.modular.hecke.module.HeckeModule_free_module(base_ring, level, weight)¶
Bases: sage.modular.hecke.module.HeckeModule_generic
A Hecke module modeled on a free module over a commutative ring.
T(n)¶
Returns the $$n^{th}$$ Hecke operator $$T_n$$. This function is a synonym for hecke_operator().
EXAMPLE:
```sage: M = ModularSymbols(11,2)
sage: M.T(3)
Hecke operator T_3 on Modular Symbols ...
```
ambient()¶
Synonym for ambient_hecke_module. Return the ambient module associated to this module.
EXAMPLE:
```sage: CuspForms(1, 12).ambient()
Modular Forms space of dimension 2 for Modular Group SL(2,Z) of weight 12 over Rational Field
```
ambient_hecke_module()¶
Return the ambient module associated to this module. As this is an abstract base class, return NotImplementedError.
EXAMPLE:
```sage: sage.modular.hecke.module.HeckeModule_free_module(QQ, 10, 3).ambient_hecke_module()
Traceback (most recent call last):
...
NotImplementedError
```
ambient_module()¶
Synonym for ambient_hecke_module. Return the ambient module associated to this module.
EXAMPLE:
```sage: CuspForms(1, 12).ambient_module()
Modular Forms space of dimension 2 for Modular Group SL(2,Z) of weight 12 over Rational Field
sage: sage.modular.hecke.module.HeckeModule_free_module(QQ, 10, 3).ambient_module()
Traceback (most recent call last):
...
NotImplementedError
```
atkin_lehner_operator(d=None)¶
Return the Atkin-Lehner operator $$W_d$$ on this space, if defined, where $$d$$ is a divisor of the level $$N$$ such that $$N/d$$ and $$d$$ are coprime.
EXAMPLES:
```sage: M = ModularSymbols(11)
sage: w = M.atkin_lehner_operator()
sage: w
Hecke module morphism Atkin-Lehner operator W_11 defined by the matrix
[-1 0 0]
[ 0 -1 0]
[ 0 0 -1]
Domain: Modular Symbols space of dimension 3 for Gamma_0(11) of weight ...
Codomain: Modular Symbols space of dimension 3 for Gamma_0(11) of weight ...
sage: M = ModularSymbols(Gamma1(13))
sage: w = M.atkin_lehner_operator()
sage: w.fcp('x')
(x - 1)^7 * (x + 1)^8
```
```sage: M = ModularSymbols(33)
sage: S = M.cuspidal_submodule()
sage: S.atkin_lehner_operator()
Hecke module morphism Atkin-Lehner operator W_33 defined by the matrix
[ 0 -1 0 1 -1 0]
[ 0 -1 0 0 0 0]
[ 0 -1 0 0 -1 1]
[ 1 -1 0 0 -1 0]
[ 0 0 0 0 -1 0]
[ 0 -1 1 0 -1 0]
Domain: Modular Symbols subspace of dimension 6 of Modular Symbols space ...
Codomain: Modular Symbols subspace of dimension 6 of Modular Symbols space ...
```
```sage: S.atkin_lehner_operator(3)
Hecke module morphism Atkin-Lehner operator W_3 defined by the matrix
[ 0 1 0 -1 1 0]
[ 0 1 0 0 0 0]
[ 0 1 0 0 1 -1]
[-1 1 0 0 1 0]
[ 0 0 0 0 1 0]
[ 0 1 -1 0 1 0]
Domain: Modular Symbols subspace of dimension 6 of Modular Symbols space ...
Codomain: Modular Symbols subspace of dimension 6 of Modular Symbols space ...
```
```sage: N = M.new_submodule()
sage: N.atkin_lehner_operator()
Hecke module morphism Atkin-Lehner operator W_33 defined by the matrix
[ 1 2/5 4/5]
[ 0 -1 0]
[ 0 0 -1]
Domain: Modular Symbols subspace of dimension 3 of Modular Symbols space ...
Codomain: Modular Symbols subspace of dimension 3 of Modular Symbols space ...
```
basis()¶
Returns a basis for self.
EXAMPLES:
```sage: m = ModularSymbols(43)
sage: m.basis()
((1,0), (1,31), (1,32), (1,38), (1,39), (1,40), (1,41))
```
basis_matrix()¶
Return the matrix of the basis vectors of self (as vectors in some ambient module)
EXAMPLE:
```sage: CuspForms(1, 12).basis_matrix()
[1 0]
```
coordinate_vector(x)¶
Write x as a vector with respect to the basis given by self.basis().
EXAMPLES:
```sage: S = ModularSymbols(11,2).cuspidal_submodule()
sage: S.0
(1,8)
sage: S.basis()
((1,8), (1,9))
sage: S.coordinate_vector(S.0)
(1, 0)
```
decomposition(bound=None, anemic=True, height_guess=1, sort_by_basis=False, proof=None)¶
Returns the maximal decomposition of this Hecke module under the action of Hecke operators of index coprime to the level. This is the finest decomposition of self that we can obtain using factors obtained by taking kernels of Hecke operators.
Each factor in the decomposition is a Hecke submodule obtained as the kernel of $$f(T_n)^r$$ acting on self, where n is coprime to the level and $$r=1$$. If anemic is False, instead choose $$r$$ so that $$f(X)^r$$ exactly divides the characteristic polynomial.
INPUT:
• anemic - bool (default: True), if True, use only Hecke operators of index coprime to the level.
• bound - int or None, (default: None). If None, use all Hecke operators up to the Sturm bound, and hence obtain the same result as one would obtain by using every element of the Hecke ring. If a fixed integer, decompose using only Hecke operators $$T_p$$, with $$p$$ prime, up to bound.
• sort_by_basis - bool (default: False); If True the resulting decomposition will be sorted as if it was free modules, ignoring the Hecke module structure. This will save a lot of time.
OUTPUT:
• list - a list of subspaces of self.
EXAMPLES:
```sage: ModularSymbols(17,2).decomposition()
[
Modular Symbols subspace of dimension 1 of Modular Symbols space of dimension 3 for Gamma_0(17) of weight 2 with sign 0 over Rational Field,
Modular Symbols subspace of dimension 2 of Modular Symbols space of dimension 3 for Gamma_0(17) of weight 2 with sign 0 over Rational Field
]
sage: ModularSymbols(Gamma1(10),4).decomposition()
[
Modular Symbols subspace of dimension 2 of Modular Symbols space of dimension 18 for Gamma_1(10) of weight 4 with sign 0 and over Rational Field,
Modular Symbols subspace of dimension 2 of Modular Symbols space of dimension 18 for Gamma_1(10) of weight 4 with sign 0 and over Rational Field,
Modular Symbols subspace of dimension 2 of Modular Symbols space of dimension 18 for Gamma_1(10) of weight 4 with sign 0 and over Rational Field,
Modular Symbols subspace of dimension 4 of Modular Symbols space of dimension 18 for Gamma_1(10) of weight 4 with sign 0 and over Rational Field,
Modular Symbols subspace of dimension 4 of Modular Symbols space of dimension 18 for Gamma_1(10) of weight 4 with sign 0 and over Rational Field,
Modular Symbols subspace of dimension 4 of Modular Symbols space of dimension 18 for Gamma_1(10) of weight 4 with sign 0 and over Rational Field
]
sage: ModularSymbols(GammaH(12, [11])).decomposition()
[
Modular Symbols subspace of dimension 1 of Modular Symbols space of dimension 9 for Congruence Subgroup Gamma_H(12) with H generated by [11] of weight 2 with sign 0 and over Rational Field,
Modular Symbols subspace of dimension 1 of Modular Symbols space of dimension 9 for Congruence Subgroup Gamma_H(12) with H generated by [11] of weight 2 with sign 0 and over Rational Field,
Modular Symbols subspace of dimension 1 of Modular Symbols space of dimension 9 for Congruence Subgroup Gamma_H(12) with H generated by [11] of weight 2 with sign 0 and over Rational Field,
Modular Symbols subspace of dimension 1 of Modular Symbols space of dimension 9 for Congruence Subgroup Gamma_H(12) with H generated by [11] of weight 2 with sign 0 and over Rational Field,
Modular Symbols subspace of dimension 5 of Modular Symbols space of dimension 9 for Congruence Subgroup Gamma_H(12) with H generated by [11] of weight 2 with sign 0 and over Rational Field
]
```
TESTS:
```sage: M = ModularSymbols(1000,2,sign=1).new_subspace().cuspidal_subspace()
sage: M.decomposition(3, sort_by_basis = True)
[
Modular Symbols subspace of dimension 4 of Modular Symbols space of dimension 154 for Gamma_0(1000) of weight 2 with sign 1 over Rational Field,
Modular Symbols subspace of dimension 4 of Modular Symbols space of dimension 154 for Gamma_0(1000) of weight 2 with sign 1 over Rational Field,
Modular Symbols subspace of dimension 4 of Modular Symbols space of dimension 154 for Gamma_0(1000) of weight 2 with sign 1 over Rational Field,
Modular Symbols subspace of dimension 4 of Modular Symbols space of dimension 154 for Gamma_0(1000) of weight 2 with sign 1 over Rational Field,
Modular Symbols subspace of dimension 4 of Modular Symbols space of dimension 154 for Gamma_0(1000) of weight 2 with sign 1 over Rational Field,
Modular Symbols subspace of dimension 4 of Modular Symbols space of dimension 154 for Gamma_0(1000) of weight 2 with sign 1 over Rational Field
]
```
degree()¶
The degree of this Hecke module (i.e. the rank of the ambient free module)
EXAMPLE:
```sage: CuspForms(1, 12).degree()
2
```
diamond_bracket_matrix(d)¶
Return the matrix of the diamond bracket operator $$\langle d \rangle$$ on self.
EXAMPLES:
```sage: M = ModularSymbols(DirichletGroup(5).0, 3)
sage: M.diamond_bracket_matrix(3)
[-zeta4 0]
[ 0 -zeta4]
sage: ModularSymbols(Gamma1(5), 3).diamond_bracket_matrix(3)
[ 0 -1 0 0]
[ 1 0 0 0]
[ 0 0 0 1]
[ 0 0 -1 0]
```
diamond_bracket_operator(d)¶
Return the diamond bracket operator $$\langle d \rangle$$ on self.
EXAMPLES:
```sage: M = ModularSymbols(DirichletGroup(5).0, 3)
sage: M.diamond_bracket_operator(3)
Diamond bracket operator <3> on Modular Symbols space of dimension 2 and level 5, weight 3, character [zeta4], sign 0, over Cyclotomic Field of order 4 and degree 2
```
dual_eigenvector(names='alpha', lift=True, nz=None)¶
Return an eigenvector for the Hecke operators acting on the linear dual of this space. This eigenvector will have entries in an extension of the base ring of degree equal to the dimension of this space.
INPUT:
• name - print name of generator for eigenvalue field.
• lift - bool (default: True)
• nz - if not None, then normalize vector so dot product with this basis vector of ambient space is 1.
OUTPUT: A vector with entries possibly in an extension of the base ring. This vector is an eigenvector for all Hecke operators acting via their transpose.
If lift = False, instead return an eigenvector in the subspace for the Hecke operators on the dual space. I.e., this is an eigenvector for the restrictions of Hecke operators to the dual space.
Note
1. The answer is cached so subsequent calls always return the same vector. However, the algorithm is randomized, so calls during another session may yield a different eigenvector. This function is used mainly for computing systems of Hecke eigenvalues.
2. One can also view a dual eigenvector as defining (via dot product) a functional phi from the ambient space of modular symbols to a field. This functional phi is an eigenvector for the dual action of Hecke operators on functionals.
EXAMPLE:
```sage: ModularSymbols(14).cuspidal_subspace().simple_factors()[1].dual_eigenvector()
(0, 1, 0, 0, 0)
```
dual_hecke_matrix(n)¶
The matrix of the $$n^{th}$$ Hecke operator acting on the dual embedded representation of self.
EXAMPLE:
```sage: CuspForms(1, 24).dual_hecke_matrix(5)
[ 79345647584250/2796203 50530996976060416/763363419]
[ 195556757760000/2796203 124970165346810/2796203]
```
eigenvalue(n, name='alpha')¶
Assuming that self is a simple space, return the eigenvalue of the $$n^{th}$$ Hecke operator on self.
INPUT:
• n - index of Hecke operator
• name - print representation of generator of eigenvalue field
EXAMPLES:
```sage: A = ModularSymbols(125,sign=1).new_subspace()[0]
sage: A.eigenvalue(7)
-3
sage: A.eigenvalue(3)
-alpha - 2
sage: A.eigenvalue(3,'w')
-w - 2
sage: A.eigenvalue(3,'z').charpoly('x')
x^2 + 3*x + 1
sage: A.hecke_polynomial(3)
x^2 + 3*x + 1
```
```sage: M = ModularSymbols(Gamma1(17)).decomposition()[8].plus_submodule()
sage: M.eigenvalue(2,'a')
a
sage: M.eigenvalue(4,'a')
4/3*a^3 + 17/3*a^2 + 28/3*a + 8/3
```
Note
1. In fact there are $$d$$ systems of eigenvalues associated to self, where $$d$$ is the rank of self. Each of the systems of eigenvalues is conjugate over the base field. This function chooses one of the systems and consistently returns eigenvalues from that system. Thus these are the coefficients $$a_n$$ for $$n\geq 1$$ of a modular eigenform attached to self.
2. This function works even for Eisenstein subspaces, though it will not give the constant coefficient of one of the corresponding Eisenstein series (i.e., the generalized Bernoulli number).
factor_number()¶
If this Hecke module was computed via a decomposition of another Hecke module, this is the corresponding number. Otherwise return -1.
EXAMPLES:
```sage: ModularSymbols(23)[0].factor_number()
0
sage: ModularSymbols(23).factor_number()
-1
```
gen(n)¶
Return the nth basis vector of the space.
EXAMPLE:
```sage: ModularSymbols(23).gen(1)
(1,17)
```
hecke_matrix(n)¶
The matrix of the $$n^{th}$$ Hecke operator acting on given basis.
EXAMPLE:
```sage: C = CuspForms(1, 16)
sage: C.hecke_matrix(3)
[-3348]
```
hecke_operator(n)¶
Returns the $$n$$-th Hecke operator $$T_n$$.
INPUT:
• ModularSymbols self - Hecke equivariant space of modular symbols
• int n - an integer at least 1.
EXAMPLES:
```sage: M = ModularSymbols(11,2)
sage: T = M.hecke_operator(3) ; T
Hecke operator T_3 on Modular Symbols space of dimension 3 for Gamma_0(11) of weight 2 with sign 0 over Rational Field
sage: T.matrix()
[ 4 0 -1]
[ 0 -1 0]
[ 0 0 -1]
sage: T(M.0)
4*(1,0) - (1,9)
sage: S = M.cuspidal_submodule()
sage: T = S.hecke_operator(3) ; T
Hecke operator T_3 on Modular Symbols subspace of dimension 2 of Modular Symbols space of dimension 3 for Gamma_0(11) of weight 2 with sign 0 over Rational Field
sage: T.matrix()
[-1 0]
[ 0 -1]
sage: T(S.0)
-(1,8)
```
hecke_polynomial(n, var='x')¶
Return the characteristic polynomial of the $$n^{th}$$ Hecke operator acting on this space.
INPUT:
• n - integer
OUTPUT: a polynomial
EXAMPLE:
```sage: ModularSymbols(11,2).hecke_polynomial(3)
x^3 - 2*x^2 - 7*x - 4
```
is_simple()¶
Return True if this space is simple as a module for the corresponding Hecke algebra. Raises NotImplementedError, as this is an abstract base class.
EXAMPLE:
```sage: sage.modular.hecke.module.HeckeModule_free_module(QQ, 10, 3).is_simple()
Traceback (most recent call last):
...
NotImplementedError
```
is_splittable()¶
Returns True if and only if only it is possible to split off a nontrivial generalized eigenspace of self as the kernel of some Hecke operator (not necessarily prime to the level). Note that the direct sum of several copies of the same simple module is not splittable in this sense.
EXAMPLE:
```sage: M = ModularSymbols(Gamma0(64)).cuspidal_subspace()
sage: M.is_splittable()
True
sage: M.simple_factors()[0].is_splittable()
False
```
is_splittable_anemic()¶
Returns true if and only if only it is possible to split off a nontrivial generalized eigenspace of self as the kernel of some Hecke operator of index coprime to the level. Note that the direct sum of several copies of the same simple module is not splittable in this sense.
EXAMPLE:
```sage: M = ModularSymbols(Gamma0(64)).cuspidal_subspace()
sage: M.is_splittable_anemic()
True
sage: M.simple_factors()[0].is_splittable_anemic()
False
```
is_submodule(other)¶
Return True if self is a submodule of other.
EXAMPLES:
```sage: M = ModularSymbols(Gamma0(64))
sage: M[0].is_submodule(M)
True
sage: CuspForms(1, 24).is_submodule(ModularForms(1, 24))
True
sage: CuspForms(1, 12).is_submodule(CuspForms(3, 12))
False
```
ngens()¶
Number of generators of self (equal to the rank).
EXAMPLE:
```sage: ModularForms(1, 12).ngens()
2
```
projection()¶
Return the projection map from the ambient space to self.
ALGORITHM: Let $$B$$ be the matrix whose columns are obtained by concatenating together a basis for the factors of the ambient space. Then the projection matrix onto self is the submatrix of $$B^{-1}$$ obtained from the rows corresponding to self, i.e., if the basis vectors for self appear as columns $$n$$ through $$m$$ of $$B$$, then the projection matrix is got from rows $$n$$ through $$m$$ of $$B^{-1}$$. This is because projection with respect to the B basis is just given by an $$m-n+1$$ row slice $$P$$ of a diagonal matrix D with 1’s in the $$n$$ through $$m$$ positions, so projection with respect to the standard basis is given by $$P\cdot B^{-1}$$, which is just rows $$n$$ through $$m$$ of $$B^{-1}$$.
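The same recipe in plain NumPy (an illustrative sketch only, not Sage code; the matrices below are made up): concatenate the bases of the factors into $$B$$, invert, and slice out the rows belonging to the chosen factor.

```python
import numpy as np

# Toy ambient space of dimension 4 split into two factors with bases b1, b2.
b1 = np.array([[1., 0., 1., 0.],
               [0., 1., 0., 0.]])       # rows span factor 1
b2 = np.array([[0., 0., 1., 0.],
               [0., 0., 0., 1.]])       # rows span factor 2

B = np.vstack([b1, b2]).T               # columns are the concatenated basis vectors
P = np.linalg.inv(B)[:2, :]             # rows of B^{-1} corresponding to factor 1

v = np.array([3., -2., 5., 7.])         # arbitrary ambient vector
coords = P @ v                          # coordinates of v along factor 1
print(coords, b1.T @ coords)            # component of v lying in factor 1
```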
EXAMPLES:
```sage: e = EllipticCurve('34a')
sage: m = ModularSymbols(34); s = m.cuspidal_submodule()
sage: d = s.decomposition(7)
sage: d
[
Modular Symbols subspace of dimension 2 of Modular Symbols space of dimension 9 for Gamma_0(34) of weight 2 with sign 0 over Rational Field,
Modular Symbols subspace of dimension 4 of Modular Symbols space of dimension 9 for Gamma_0(34) of weight 2 with sign 0 over Rational Field
]
sage: a = d[0]; a
Modular Symbols subspace of dimension 2 of Modular Symbols space of dimension 9 for Gamma_0(34) of weight 2 with sign 0 over Rational Field
sage: pi = a.projection()
sage: pi(m([0,oo]))
-1/6*(2,7) + 1/6*(2,13) - 1/6*(2,31) + 1/6*(2,33)
sage: M = ModularSymbols(53,sign=1)
sage: S = M.cuspidal_subspace()[1] ; S
Modular Symbols subspace of dimension 3 of Modular Symbols space of dimension 5 for Gamma_0(53) of weight 2 with sign 1 over Rational Field
sage: p = S.projection()
sage: S.basis()
((1,33) - (1,37), (1,35), (1,49))
sage: [ p(x) for x in S.basis() ]
[(1,33) - (1,37), (1,35), (1,49)]
```
system_of_eigenvalues(n, name='alpha')¶
Assuming that self is a simple space of modular symbols, return the eigenvalues $$[a_1, \ldots, a_n]$$ of the Hecke operators on self. See self.eigenvalue(n) for more details.
INPUT:
• n - number of eigenvalues
• alpha - name of generate for eigenvalue field
EXAMPLES: These computations use pseudo-random numbers, so we set the seed for reproducible testing.
```sage: set_random_seed(0)
```
The computations also use cached results from other computations, so we clear the caches for reproducible testing.
```sage: ModularSymbols_clear_cache()
```
We compute eigenvalues for newforms of level 62.
```sage: M = ModularSymbols(62,2,sign=-1)
sage: S = M.cuspidal_submodule().new_submodule()
sage: [A.system_of_eigenvalues(3) for A in S.decomposition()]
[[1, 1, 0], [1, -1, 1/2*alpha + 1/2]]
```
Next we define a function that does the above:
```sage: def b(N,k=2):
... t=cputime()
... S = ModularSymbols(N,k,sign=-1).cuspidal_submodule().new_submodule()
... for A in S.decomposition():
... print N, A.system_of_eigenvalues(5)
```
```sage: b(63)
63 [1, 1, 0, -1, 2]
63 [1, alpha, 0, 1, -2*alpha]
```
This example illustrates finding field over which the eigenvalues are defined:
```sage: M = ModularSymbols(23,2,sign=1).cuspidal_submodule().new_submodule()
sage: v = M.system_of_eigenvalues(10); v
[1, alpha, -2*alpha - 1, -alpha - 1, 2*alpha, alpha - 2, 2*alpha + 2, -2*alpha - 1, 2, -2*alpha + 2]
sage: v[0].parent()
Number Field in alpha with defining polynomial x^2 + x - 1
```
This example illustrates setting the print name of the eigenvalue field.
```sage: A = ModularSymbols(125,sign=1).new_subspace()[0]
sage: A.system_of_eigenvalues(10)
[1, alpha, -alpha - 2, -alpha - 1, 0, -alpha - 1, -3, -2*alpha - 1, 3*alpha + 2, 0]
sage: A.system_of_eigenvalues(10,'x')
[1, x, -x - 2, -x - 1, 0, -x - 1, -3, -2*x - 1, 3*x + 2, 0]
```
weight()¶
Returns the weight of this Hecke module.
INPUT:
• self - an arbitrary Hecke module
OUTPUT:
• int - the weight
EXAMPLES:
```sage: m = ModularSymbols(20, weight=2)
sage: m.weight()
2
```
zero_submodule()¶
Return the zero submodule of self.
EXAMPLES:
```sage: ModularSymbols(11,4).zero_submodule()
Modular Symbols subspace of dimension 0 of Modular Symbols space of dimension 6 for Gamma_0(11) of weight 4 with sign 0 over Rational Field
sage: CuspForms(11,4).zero_submodule()
Modular Forms subspace of dimension 0 of Modular Forms space of dimension 4 for Congruence Subgroup Gamma0(11) of weight 4 over Rational Field
```
class sage.modular.hecke.module.HeckeModule_generic(base_ring, level, category=None)¶
Bases: sage.modules.module.Module_old
A very general base class for Hecke modules.
We define a Hecke module of weight $$k$$ to be a module over a commutative ring equipped with an action of operators $$T_m$$ for all positive integers $$m$$ coprime to some integer $$n$$ (the level), which satisfy $$T_r T_s = T_{rs}$$ for $$r,s$$ coprime, and for powers of a prime $$p$$, $$T_{p^r} = T_{p} T_{p^{r-1}} - \varepsilon(p) p^{k-1} T_{p^{r-2}}$$, where $$\varepsilon(p)$$ is some endomorphism of the module which commutes with the $$T_m$$.
We distinguish between full Hecke modules, which also have an action of operators $$T_m$$ for $$m$$ not assumed to be coprime to the level, and anemic Hecke modules, for which this does not hold.
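To see how these relations determine every $$T_m$$ from the $$T_p$$ alone, here is a small pure-Python illustration (my own sketch, not part of Sage; it assumes the trivial character $$\varepsilon(p)=1$$): given eigenvalues $$a_p$$ at primes and a weight $$k$$, it builds $$a_n$$ for all $$n$$ using $$a_{rs}=a_r a_s$$ for coprime $$r,s$$ and $$a_{p^r}=a_p a_{p^{r-1}} - p^{k-1} a_{p^{r-2}}$$.

```python
from sympy import factorint

def eigenvalue_sequence(ap, k, nmax):
    """a_n for n <= nmax from prime eigenvalues ap = {p: a_p},
    assuming trivial character and weight k."""
    def prime_power(p, r):
        if r == 0:
            return 1
        if r == 1:
            return ap[p]
        return ap[p] * prime_power(p, r - 1) - p**(k - 1) * prime_power(p, r - 2)

    a = {1: 1}
    for n in range(2, nmax + 1):
        a[n] = 1
        for p, r in factorint(n).items():
            a[n] *= prime_power(p, r)
    return a

# Sanity check against Ramanujan's tau (weight 12, level 1):
# tau(2) = -24, tau(3) = 252, tau(5) = 4830, tau(7) = -16744.
tau = eigenvalue_sequence({2: -24, 3: 252, 5: 4830, 7: -16744}, k=12, nmax=10)
print(tau[4], tau[6], tau[9])   # -1472 -6048 -113643
```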
anemic_hecke_algebra()¶
Return the Hecke algebra associated to this Hecke module.
EXAMPLES:
```sage: T = ModularSymbols(1,12).hecke_algebra()
sage: A = ModularSymbols(1,12).anemic_hecke_algebra()
sage: T == A
False
sage: A
Anemic Hecke algebra acting on Modular Symbols space of dimension 3 for Gamma_0(1) of weight 12 with sign 0 over Rational Field
sage: A.is_anemic()
True
```
character()¶
The character of this space. As this is an abstract base class, return None.
EXAMPLE:
```sage: sage.modular.hecke.module.HeckeModule_generic(QQ, 10).character() is None
True
```
dimension()¶
Synonym for rank.
EXAMPLE:
```sage: M = sage.modular.hecke.module.HeckeModule_generic(QQ, 10).dimension()
Traceback (most recent call last):
...
NotImplementedError: Derived subclasses must implement rank
```
hecke_algebra()¶
Return the Hecke algebra associated to this Hecke module.
EXAMPLES:
```sage: T = ModularSymbols(Gamma1(5),3).hecke_algebra()
sage: T
Full Hecke algebra acting on Modular Symbols space of dimension 4 for Gamma_1(5) of weight 3 with sign 0 and over Rational Field
sage: T.is_anemic()
False
```
```sage: M = ModularSymbols(37,sign=1)
sage: E, A, B = M.decomposition()
sage: A.hecke_algebra() == B.hecke_algebra()
False
```
is_full_hecke_module()¶
Return True if this space is invariant under all Hecke operators.
Since self is guaranteed to be an anemic Hecke module, the significance of this function is that it also ensures invariance under Hecke operators of index that divide the level.
EXAMPLES:
```sage: M = ModularSymbols(22); M.is_full_hecke_module()
True
sage: M.submodule(M.free_module().span([M.0.list()]), check=False).is_full_hecke_module()
False
```
is_hecke_invariant(n)¶
Return True if self is invariant under the Hecke operator $$T_n$$.
Since self is guaranteed to be an anemic Hecke module it is only interesting to call this function when $$n$$ is not coprime to the level.
EXAMPLES:
```sage: M = ModularSymbols(22).cuspidal_subspace()
sage: M.is_hecke_invariant(2)
True
```
We use check=False to create a nasty “module” that is not invariant under $$T_2$$:
```sage: S = M.submodule(M.free_module().span([M.0.list()]), check=False); S
Modular Symbols subspace of dimension 1 of Modular Symbols space of dimension 7 for Gamma_0(22) of weight 2 with sign 0 over Rational Field
sage: S.is_hecke_invariant(2)
False
sage: [n for n in range(1,12) if S.is_hecke_invariant(n)]
[1, 3, 5, 7, 9, 11]
```
is_zero()¶
Return True if this Hecke module has dimension 0.
EXAMPLES:
```sage: ModularSymbols(11).is_zero()
False
sage: ModularSymbols(11).old_submodule().is_zero()
True
sage: CuspForms(10).is_zero()
True
sage: CuspForms(1,12).is_zero()
False
```
level()¶
Returns the level of this modular symbols space.
INPUT:
• ModularSymbols self - an arbitrary space of modular symbols
OUTPUT:
• int - the level
EXAMPLES:
```sage: m = ModularSymbols(20)
sage: m.level()
20
```
rank()¶
Return the rank of this module over its base ring. Returns NotImplementedError, since this is an abstract base class.
EXAMPLES:
```sage: sage.modular.hecke.module.HeckeModule_generic(QQ, 10).rank()
Traceback (most recent call last):
...
NotImplementedError: Derived subclasses must implement rank
```
submodule(X)¶
Return the submodule of self corresponding to X. As this is an abstract base class, this raises a NotImplementedError.
EXAMPLES:
```sage: sage.modular.hecke.module.HeckeModule_generic(QQ, 10).submodule(0)
Traceback (most recent call last):
...
NotImplementedError: Derived subclasses should implement submodule
```
sage.modular.hecke.module.is_HeckeModule(x)¶
Return True if x is a Hecke module.
EXAMPLES:
```sage: from sage.modular.hecke.module import is_HeckeModule
sage: is_HeckeModule(ModularForms(Gamma0(7), 4))
True
sage: is_HeckeModule(QQ^3)
False
sage: is_HeckeModule(J0(37).homology())
True
```
http://math.stackexchange.com/questions/80872/moment-generating-function-and-exponentially-decaying-tails-of-probability-distr | # Moment generating function and exponentially decaying tails of probability distribution
This is a follow-up to this previous question.
Suppose I have a mean-zero symmetrically-distributed random variable $X$ over the support $\mathbb{R}$. If $X$ has a moment-generating function $M_X(t)$ that is smooth around 0, $X$ has an exponentially decaying tail probability, by Chernoff bound (Lemma 11.9.1 in Cover and Thomas's "Elements of Information Theory" 2nd edition).
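(For reference, the quoted tail bound is just Markov's inequality applied to $e^{sX}$; this is my own filling-in of the standard step, not a quote from Cover and Thomas: for any $s>0$ at which $M_X(s)$ is finite, $$\mathbf{P}(X \ge a) = \mathbf{P}\left(e^{sX} \ge e^{sa}\right) \le e^{-sa}\,\mathbf{E}\left[e^{sX}\right] = M_X(s)\,e^{-sa},$$ and the left tail is handled the same way with $M_X(-s)$; smoothness of $M_X$ in a neighborhood of $0$ guarantees such an $s$ exists.)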
Now, suppose that $X$ has an $M_X(t)$ that is not smooth around 0. Suppose that $\mathbf{E}[X^k]=\infty$ for all even $k>n$, where $n$ is a positive integer. Is there $X$ that has exponentially-decaying tail probability in that case? Or would the tail probability always be a power-law?
Also, what happens to the tail if $M_X(t)$ is not defined, i.e. the integral in the transform diverges?
EDITS: Clarified the question based on helpful comments from @cardinal.
-
A somewhat pedantic response to your question is that no such random variable $X$ can exist in the first place based on the set of conditions you've placed on it. Do you see why? (Hint: Consider $k > n$ where $k$ is odd.) – cardinal Nov 10 '11 at 16:42
Hmmm... I see what you are saying. But does this mean that any symmetric zero-mean $X$ has to have all finite moments? Or is there a symmetric zero-mean $X$ that has all the moments but for which $M_X(t)$ is not smooth around 0? Which condition should I weaken? – M.B.M. Nov 10 '11 at 17:07
No, quite the opposite. A symmetric distribution about zero need not have any (raw) odd moments at all, but all (raw) even moments will exist, even if they are not finite. – cardinal Nov 10 '11 at 17:16
I think I am confused about what it means by "not having moment $\mu_n$". I interpret that only as "$\mu_n=\infty$". That is, my interpretation of "having moments" means "having finite momemts, possibly equal to zero". Would removing "Suppose that $\mathbf{E}[X_k]=\infty$ for all $k>n$, where $n$ is a positive integer" put more sense into my question? I'm mainly interested in what happens for $X$ with $M_X(t)$ that is not smooth around zero. – M.B.M. Nov 10 '11 at 17:26
And I was confused by your first comment because Student's t has finite moments up to its degree of freedom and further even moments are infinite or the integral in the mgf diverges for odd ones. Anyway, my bad -- I've edited my question. – M.B.M. Nov 10 '11 at 17:33
## 1 Answer
I assume that "exponentially decaying tail probability" means that $P(|X| > t) \le C e^{-\epsilon t}$ for some $C, \epsilon$. Any such random variable has finite moments of all orders. This follows from the formula $$E[|X|^p] = \int_0^\infty p t^{p-1} P(|X| > t) dt$$ which you can prove with Fubini's theorem and the fundamental theorem of calculus.
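To spell out the last step (my own arithmetic, added for completeness): plugging the assumed tail bound into the formula gives $$E[|X|^p] \le \int_0^\infty p\, t^{p-1}\, C e^{-\epsilon t}\, dt = \frac{C\, p\, \Gamma(p)}{\epsilon^{p}} < \infty$$ for every $p > 0$, so all moments are finite.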
-
Thanks! That's the result I've been looking for. – M.B.M. Nov 10 '11 at 18:54
@Bullmoose: Note that this is the converse of the statement you originally quoted in another question. So, just be sure it actually is the result you were looking for. :) – cardinal Nov 10 '11 at 22:04
@cardinal Basically, I wanted to know whether not having finite moments of all orders implies that the tail probabilities do not decay exponentially. I think this answer answers it (as a contrapositive.) The statement I made in the comment to the other question states that if MGF is smooth around 0 then its tails decay exponentially, and it has all finite moments. I understand that having finite moments of all orders doesn't necessarily result in having exponential tails (the lognormal example), but for my problem it suffices to state that having a smooth MGF implies moments and exp-tails. – M.B.M. Nov 10 '11 at 22:49
@Bullmoose: I was referring to the following statement of yours in this question: I've heard somewhere that all finite moments of A with support R means that the tails of A decay exponentially. Is that true? – cardinal Nov 10 '11 at 23:20
Ahh, right. And that statement was shown to not be true. :) – M.B.M. Nov 11 '11 at 0:03
http://mathoverflow.net/revisions/75142/list | ## Return to Question
2 added 74 characters in body; edited title
# Rotational symmetry group of QxQ
What is the rotational symmetry group of $\mathbb{Q}\times \mathbb{Q}$, the subset of the real plane consisting of rational points? I.e., are there infinitely many rotations, and is this a named group?
1
# Symmetry group of QxQ
What is the symmetry group of $\mathbb{Q}\times \mathbb{Q}$, the subset of the real plane consisting of rational points?
http://math.stackexchange.com/questions/41720/understanding-the-schwarz-reflection-principle?answertab=votes | # Understanding the Schwarz reflection principle
I am currently reading Stein and Shakarchi's Complex Analysis, and I think there is something I am not quite understanding about the Schwarz reflection principle. Here is my problem:
Suppose $f$ is a holomorphic function on $\Omega^+$ (an open subset of the upper complex plane) that extends continuously to $I$ (a subset of $\mathbb{R}$). Let $\Omega^-$ be the reflection of $\Omega^+$ across the real axis.
Take $F(z) = f(z)$ if $z \in \Omega^+$ and $F(z) = f(\overline{z})$ if $z \in \Omega^-$. We can extend $F$ continuously to $I$. Why isn't the function $F$ holomorphic on $\Omega^+ \cup I \cup \Omega^-$?
I think there's some detail of a proof that I overlooked. My intuition tells me that $F$ isn't holomorphic for the same reason that a function defined on $\mathbb{R}^+$ isn't necessarily differentiable at zero if you extend it to be an even function.
-
## 2 Answers
$f(\bar{z})$ is not holomorphic (unless $f$ is constant), as $d(f(\bar{z}))=f'(\bar{z}) d\bar{z}$ is not a multiple of $dz$ (but rather of $d\bar{z}$). Perhaps more intuitively: holomorphic = conformal and orientation-preserving; $f(\bar{z})$ is conformal, but changes the orientation (due to the reflection $z\mapsto\bar{z}$). Hence your function $F$ is not holomorphic on $\Omega^-$.
On the other hand, $\overline{f(\bar{z})}$ is holomorphic, as there are two reflections. If $f$ is real on $I$ then by gluing $f(z)$ on $\Omega^+$ with $\overline{f(\bar{z})}$ on $\Omega^-$ you get a function continuous on $\Omega^+\cup I\cup\Omega^-$ and holomorphic on $\Omega^+\cup\Omega^-$. It is then holomorphic on $\Omega^+\cup I\cup\Omega^-$ e.g. by Morera's theorem.
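A quick symbolic sanity check with SymPy (my own illustration, using $f(z)=z^2$ as a test function, which is real on $\mathbb{R}$): the Cauchy-Riemann equations fail for $f(\bar z)$ but hold for $\overline{f(\bar z)}$.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y

def cauchy_riemann(w):
    """Return (u_x - v_y, u_y + v_x); both vanish iff w = u + i*v satisfies CR."""
    w = sp.expand(w)
    u, v = sp.re(w), sp.im(w)
    return (sp.simplify(sp.diff(u, x) - sp.diff(v, y)),
            sp.simplify(sp.diff(u, y) + sp.diff(v, x)))

f = lambda w: w**2                                        # sample f, real on the real axis

print(cauchy_riemann(f(sp.conjugate(z))))                 # (4*x, -4*y): not holomorphic
print(cauchy_riemann(sp.conjugate(f(sp.conjugate(z)))))   # (0, 0): holomorphic
```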
-
Thank you! This was exactly my problem! I had assumed that $f(\overline{z})$ was holomorphic because it was a pretty "nice and simple" function. I just started studying complex analysis, so I had the definition of holomorphic memorized without much intution, but I understand the idea a lot better now! – Alan C May 27 '11 at 23:49
You left out the condition that $f$ takes real values on $I.$ Pretend that $0 \in I,$ it changes nothing. Once we succeed in extending to a holomorphic $G$ on both sides of the real axis, this says that the power series of $G$ around $0$ has all real coefficients, if for no better reason than that all derivatives of $G$ at $0$ are real. In turn, this says that $$G( \bar{z}) = \overline{G(z)}$$
-
Thanks! That was the statement and construction given by the book, and while I could understand it, I didn't understand why my construction didn't work. – Alan C May 27 '11 at 23:51
http://mathematica.stackexchange.com/questions/10359/how-to-solve-or-linearsolve-a-i-matrix-equation/10361 | # How to Solve or LinearSolve $A = I$ matrix equation?
I'd like to solve the equation $A = B$, where $B = I$ (this amounts to 3 systems of 3 linear equations), for $a, b, c, d, e, f$, without writing `LinearSolve` 3 times. What is a simple way to accomplish this?
Input:
```` A = {
{a + b, -a + b, a + 2 b},
{c + d, -c + d, c + 2 d},
{e + f, -e + f, e + 2 b}
};
MatrixForm[A]
B = IdentityMatrix[3]
(*want to solve A == B, but probably wrong...*)
M = LinearSolve[A, IdentityMatrix[3]]
MatrixForm[M]
````
-
So you want to invert `A` ? – b.gatessucks Sep 8 '12 at 20:38
@b.gatessucks If `A == I` then also `A^-1 == I`. Thus he just wants to find `a,b,c,d,e,f` rather than `A^-1`. – Artes Sep 9 '12 at 0:44
@T.Webster Could you explain what you'd expected from an answer, if existing ones were not satisfactory ? – Artes Sep 15 '12 at 23:48
@Artes Sorry I have been away and thanks for your answer. – T. Webster Sep 17 '12 at 8:08
The question related to finding the left inverse of some matrix $A$. But a solution I found from my linear algebra course is to use Gaussian elim. (RowReduce) with the augmented matrix $[A I]$. – T. Webster Sep 17 '12 at 8:13
## 2 Answers
As the documentation says: `LinearSolve[m,b]` finds an `x` which solves the matrix equation `m.x == b`, i.e. in your case it finds `x` such that `A.x == B` (`Dot[A,x] == B`). However your task is to find `A` solving an adequate system of `9 linear equations` for 6 variables `a,b,c,d,e,f`, knowing that `B` is an `IdentityMatrix`. You are trying to solve an overdetermined system of linear equations, and solutions could exist only if certain compatibility conditions were satisfied.
For your task use simply `Solve` :
````Solve[A == B, {a, b, c, d, e, f}]
````
````{}
````
or
````Reduce[A == B, {a, b, c, d, e, f}]
````
````False
````
This means that there are no solutions, i.e. the above equation is contradictory. You could use `Variables[A]` instead of specifying variables `{a, b, c, d, e, f}`.
Consider a different matrix equation where we have `4` unknowns and `4` independent equations e.g. :
````A1 = {{a + b, a - 2 b}, {a - c, c + d}};
Solve[ A1 == IdentityMatrix[2], {a, b, c, d}]
````
````{{a -> 2/3, b -> 1/3, c -> 2/3, d -> 1/3}}
````
i.e. there is only one solution.
Edit
`Inverse[A]` could be a solution assuming that you wanted `x` in the matrix equation `A.x == IdentityMatrix[3]` when `A` was given. There exists an inverse matrix to `A` under this condition :
````Det[ A] != 0
````
i.e.
````-4 b^2 c + 4 a b d + 4 b c f - 4 a d f != 0
````
Neither `A` exists nor this assumption can be satisfied when `A` is defined as in your question and `B` is an identity matrix.
-
Use `Inverse` to invert a matrix :
````bigA = {{a + b, -a + b, a + 2 b}, {c + d, -c + d, c + 2 d}, {e + f, -e + f, e + 2 b}};
bigM=Inverse[bigA]
````
-
http://en.wikipedia.org/wiki/Distributivity | # Distributive property
In abstract algebra and logic, distributivity is a property of binary operations that generalizes the distributive law from elementary algebra. In propositional logic, distribution refers to two valid rules of replacement. The rules allow one to reformulate conjunctions and disjunctions within logical proofs.
For example, in arithmetic:
2 × (1 + 3) = (2 × 1) + (2 × 3) but 2 /(1 + 3) ≠ (2 / 1) + (2 / 3).
In the left-hand side of the first equation, the 2 multiplies the sum of 1 and 3; on the right-hand side, it multiplies the 1 and the 3 individually, with the products added afterwards. Because these give the same final answer (8), we say that multiplication by 2 distributes over addition of 1 and 3. Since we could have put any real numbers in place of 2, 1, and 3 above, and still have obtained a true equation, we say that multiplication of real numbers distributes over addition of real numbers.
## Definition
Given a set S and two binary operators · and + on S, we say that the operation ·
• is left-distributive over + if, given any elements x, y, and z of S,
x · (y + z) = (x · y) + (x · z);
• is right-distributive over + if, given any elements x, y, and z of S:
(y + z) · x = (y · x) + (z · x);
• is distributive over + if it is left- and right-distributive.[1]
Notice that when · is commutative, then the three above conditions are logically equivalent.
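As a concrete illustration (a small sketch of my own, not part of the article), the two conditions can be tested exhaustively for operations on a finite set of numbers:

```python
from itertools import product

def is_left_distributive(times, plus, elements):
    return all(times(x, plus(y, z)) == plus(times(x, y), times(x, z))
               for x, y, z in product(elements, repeat=3))

def is_right_distributive(times, plus, elements):
    return all(times(plus(y, z), x) == plus(times(y, x), times(z, x))
               for x, y, z in product(elements, repeat=3))

nums = range(-5, 6)
mul, add = (lambda a, b: a * b), (lambda a, b: a + b)
print(is_left_distributive(mul, add, nums), is_right_distributive(mul, add, nums))  # True True
print(is_left_distributive(max, min, nums), is_right_distributive(max, min, nums))  # True True
```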
## Propositional logic
### Rule of replacement
In standard truth-functional propositional logic, distribution[2][3][4] refers to two valid rules of replacement. The rules allow one to distribute certain logical connectives within logical expressions in logical proofs. The rules are:
$(P \land (Q \lor R)) \Leftrightarrow ((P \land Q) \lor (P \land R))$
and
$(P \lor (Q \land R)) \Leftrightarrow ((P \lor Q) \land (P \lor R))$
where "$\Leftrightarrow$" is a metalogical symbol representing "can be replaced in a proof with."
### Truth functional connectives
Distributivity is a property of some logical connectives of truth-functional propositional logic. The following logical equivalences demonstrate that distributivity is a property of particular connectives. The following are truth-functional tautologies.
Distribution of conjunction over conjunction
$(P \land (Q \land R)) \leftrightarrow ((P \land Q) \land (P \land R))$
Distribution of conjunction over disjunction[5]
$(P \land (Q \lor R)) \leftrightarrow ((P \land Q) \lor (P \land R))$
Distribution of disjunction over conjunction[6]
$(P \lor (Q \land R)) \leftrightarrow ((P \lor Q) \land (P \lor R))$
Distribution of disjunction over disjunction
$(P \lor (Q \lor R)) \leftrightarrow ((P \lor Q) \lor (P \lor R))$
Distribution of implication
$(P \to (Q \to R)) \leftrightarrow ((P \to Q) \to (P \to R))$
Distribution of implication over equivalence
$(P \to (Q \leftrightarrow R)) \leftrightarrow ((P \to Q) \leftrightarrow (P \to R))$
Distribution of disjunction over equivalence
$(P \lor (Q \leftrightarrow R)) \leftrightarrow ((P \lor Q) \leftrightarrow (P \lor R))$
Double distribution
$((P \land Q) \lor (R \land S)) \leftrightarrow (((P \lor R) \land (P \lor S)) \land ((Q \lor R) \land (Q \lor S)))$
$((P \lor Q) \land (R \lor S)) \leftrightarrow (((P \land R) \lor (P \land S)) \lor ((Q \land R) \lor (Q \land S)))$
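Each of these laws can be verified mechanically. As an illustration (not part of the original article), the two rules of replacement can be brute-force checked over all truth assignments in a few lines of Python:

```python
from itertools import product

def equivalent(f, g, n=3):
    """Check that two n-ary Boolean formulas agree on every truth assignment."""
    return all(f(*v) == g(*v) for v in product([False, True], repeat=n))

# distribution of conjunction over disjunction
print(equivalent(lambda p, q, r: p and (q or r),
                 lambda p, q, r: (p and q) or (p and r)))   # True

# distribution of disjunction over conjunction
print(equivalent(lambda p, q, r: p or (q and r),
                 lambda p, q, r: (p or q) and (p or r)))    # True
```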
## Examples
1. Multiplication of numbers is distributive over addition of numbers, for a broad class of different kinds of numbers ranging from natural numbers to complex numbers and cardinal numbers.
2. Multiplication of ordinal numbers, in contrast, is only left-distributive, not right-distributive.
3. The cross product is left- and right-distributive over vector addition, though not commutative.
4. Matrix multiplication is distributive over matrix addition, though also not commutative.
5. The union of sets is distributive over intersection, and intersection is distributive over union.
6. Logical disjunction ("or") is distributive over logical conjunction ("and"), and conjunction is distributive over disjunction.
7. For real numbers (and for any totally ordered set), the maximum operation is distributive over the minimum operation, and vice-versa: max(a,min(b,c)) = min(max(a,b),max(a,c)) and min(a,max(b,c)) = max(min(a,b),min(a,c)).
8. For integers, the greatest common divisor is distributive over the least common multiple, and vice-versa: gcd(a,lcm(b,c)) = lcm(gcd(a,b),gcd(a,c)) and lcm(a,gcd(b,c)) = gcd(lcm(a,b),lcm(a,c)).
9. For real numbers, addition distributes over the maximum operation, and also over the minimum operation: a + max(b,c) = max(a+b,a+c) and a + min(b,c) = min(a+b,a+c).
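Examples 7 and 8 above can be spot-checked numerically. The following small Python snippet is illustrative only (the values 12, 18, 30 are arbitrarily chosen, not from the article):

```python
from math import gcd

def lcm(x, y):
    return x * y // gcd(x, y)

a, b, c = 12, 18, 30
# example 8: gcd distributes over lcm, and lcm over gcd
print(gcd(a, lcm(b, c)) == lcm(gcd(a, b), gcd(a, c)))   # True
print(lcm(a, gcd(b, c)) == gcd(lcm(a, b), lcm(a, c)))   # True
# example 7: max distributes over min, and min over max
print(max(a, min(b, c)) == min(max(a, b), max(a, c)))   # True
print(min(a, max(b, c)) == max(min(a, b), min(a, c)))   # True
```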
## Distributivity and rounding
In practice, the distributive property of multiplication (and division) over addition may appear to be compromised or lost because of the limitations of arithmetic precision. For example, the identity ⅓ + ⅓ + ⅓ = (1+1+1)/3 appears to fail if the addition is conducted in decimal arithmetic; however, if many significant digits are used, the calculation will result in a closer approximation to the correct results. For example, if the arithmetical calculation takes the form: 0.33333+0.33333+0.33333 = 0.99999 ≠ 1, this result is a closer approximation than if fewer significant digits had been used. Even when fractional numbers can be represented exactly in arithmetical form, errors will be introduced if those arithmetical values are rounded or truncated. For example, buying two books, each priced at £14.99 before a tax of 17.5%, in two separate transactions will actually save £0.01, over buying them together: £14.99×1.175 = £17.61 to the nearest £0.01, giving a total expenditure of £35.22, but £29.98×1.175 = £35.23. Methods such as banker's rounding may help in some cases, as may increasing the precision used, but ultimately some calculation errors are inevitable.
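The ⅓ example in the preceding paragraph can be reproduced exactly with Python's `decimal` module; this is only an illustration of the rounding effect described above:

```python
from decimal import Decimal, getcontext

getcontext().prec = 5                    # keep only 5 significant digits
third = Decimal(1) / Decimal(3)          # 0.33333
print(third + third + third)             # 0.99999
print((Decimal(1) + 1 + 1) / 3)          # 1  -- adding first avoids the error
```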
## Distributivity in rings
Distributivity is most commonly found in rings and distributive lattices.
A ring has two binary operations (commonly called "+" and "*"), and one of the requirements of a ring is that * must distribute over +. Most kinds of numbers (example 1) and matrices (example 4) form rings. A lattice is another kind of algebraic structure with two binary operations, ∧ and ∨. If either of these operations (say ∧) distributes over the other (∨), then ∨ must also distribute over ∧, and the lattice is called distributive. See also the article on distributivity (order theory).
Examples 5 and 6 are Boolean algebras, which can be interpreted either as a special kind of ring (a Boolean ring) or a special kind of distributive lattice (a Boolean lattice). Each interpretation is responsible for different distributive laws in the Boolean algebra. Examples 7 and 8 are distributive lattices which are not Boolean algebras.
Failure of one of the two distributive laws brings about near-rings and near-fields instead of rings and division rings respectively. The operations are usually configured to have the near-ring or near-field distributive on the right but not on the left.
Rings and distributive lattices are both special kinds of rigs, certain generalizations of rings. Those numbers in example 1 that don't form rings at least form rigs. Near-rigs are a further generalization of rigs that are left-distributive but not right-distributive; example 2 is a near-rig.
## Generalizations of distributivity
In several mathematical areas, generalized distributivity laws are considered. This may involve the weakening of the above conditions or the extension to infinitary operations. Especially in order theory one finds numerous important variants of distributivity, some of which include infinitary operations, such as the infinite distributive law, while others are defined in the presence of only one binary operation. The corresponding definitions and their relations are given in the article distributivity (order theory). This also includes the notion of a completely distributive lattice.
In the presence of an ordering relation, one can also weaken the above equalities by replacing = by either ≤ or ≥. Naturally, this will lead to meaningful concepts only in some situations. An application of this principle is the notion of sub-distributivity as explained in the article on interval arithmetic.
In category theory, if (S, μ, η) and (S', μ', η') are monads on a category C, a distributive law S.S' → S'.S is a natural transformation λ : S.S' → S'.S such that (S' , λ) is a lax map of monads S → S and (S, λ) is a colax map of monads S' → S' . This is exactly the data needed to define a monad structure on S'.S: the multiplication map is S'μ.μ'S².S'λS and the unit map is η'S.η. See: distributive law between monads.
A generalized distributive law has also been proposed in the area of information theory.
## Notes
1. Moore and Parker
2. Copi and Cohen
3. Hurley
4. Russell and Whitehead,
5. Russell and Whitehead,
http://www.physicsforums.com/showpost.php?p=825156&postcount=6
Quote by George Jones Are you working with real or complex Lie algebras?
Apologies George - I should have said, the Lie Algebras are complex. Any hints (however vague) would be much appreciated. :)
Quote by matt grime Have you considered that there might be more than one basis you can think of?
Hmmm, ok - so at the moment I'm considering a basis of $$\mathfrak{so}(4)$$ of the form:
$$\left\{\left(\begin{array}{cccc}0&1&0&0\\-1&0&0&0\\0&0&0&0\\0&0&0&0\end{array}\right), \left(\begin{array}{cccc}0&0&1&0\\0&0&0&0\\-1&0&0&0\\0&0&0&0\end{array}\right), \quad \textrm{etc.}\right\}$$
But perhaps if I think about suitable combinations of these, I'll get something more like the form I'm looking for?
The help is much appreciated guys. :)
http://math.stackexchange.com/questions/89759/how-to-prove-that-2-arctan-sqrtx-arcsin-fracx-1x1-frac-pi2 | # How to prove that $2 \arctan\sqrt{x} = \arcsin \frac{x-1}{x+1} + \frac{\pi}{2}$
I want to prove that $$2 \arctan\sqrt{x} = \arcsin \frac{x-1}{x+1} + \frac{\pi}{2}, \qquad x\geq 0.$$
I started from the arcsin side and tried to reach the arctan side, but I failed.
Can anyone help me solve it?
-
## 3 Answers
Define $\displaystyle g(x)=2\arctan(\sqrt{x})-\arcsin\left(\frac{x-1}{x+1}\right)-\frac{\pi}{2}$. Differentiate to find that $g'(x)=0$. Since this holds on the connected set $[0,\infty)$ you can conclude that $g$ is constant. Note then that $g(0)=0$ to finish.
-
+1. A nice way to look at it but unnecessarily complicated. – user17762 Dec 9 '11 at 0:33
Let $\arctan{\sqrt{x}} = \theta$. Then we have $x = \tan^2 (\theta)$. Hence, $$\frac{x-1}{x+1} = \frac{\tan^2 (\theta)-1}{\tan^2 (\theta)+1} = \sin^2(\theta) - \cos^2(\theta) = - \cos(2 \theta)= \sin \left(2 \theta - \frac{\pi}{2} \right)$$ Hence, $$2 \arctan{\sqrt{x}} = 2 \theta = \arcsin \left(\frac{x-1}{x+1} \right) + \frac{\pi}{2}$$
-
Very nice.. One short comment: $\frac{\tan^2 (\theta)-1}{\tan^2 (\theta)+1} = - \cos(2 \theta)$ is just the standard $t =\tan(\frac{x}{2})$ substitution :) – N. S. Dec 9 '11 at 0:41
One way would be to notice that both functions have the same derivative and then find out what the "constant" is by plugging in $x=0$.
Here's another way. Look at $$\frac\pi2 - 2\arctan\sqrt{x} = 2\left( \frac\pi4 - \arctan\sqrt{x} \right) = 2\left( \arctan1-\arctan\sqrt{x} \right).$$ Now remember the identity for the difference of two arctangents: $$\arctan u - \arctan v = \arctan\frac{u-v}{1+uv}.$$ (This follows from the usual identity for the tangent of a sum.) The left side above becomes $$2\arctan\frac{1-\sqrt{x}}{1+\sqrt{x}}.$$ The double-angle formula for the sine says $\sin(2u)=2\sin u\cos u$. Apply that: $$\sin\left(2\arctan\frac{1-\sqrt{x}}{1+\sqrt{x}}\right) = 2 \sin\left(\arctan\frac{1-\sqrt{x}}{1+\sqrt{x}}\right)\cos\left(\arctan\frac{1-\sqrt{x}}{1+\sqrt{x}}\right)$$
Now remember that $\sin(\arctan u) = \dfrac{u}{\sqrt{1+u^2}}$ and $\cos(\arctan u) = \dfrac{1}{\sqrt{1+u^2}}$
Then use algebra: $$2\cdot\frac{\left(\frac{1-\sqrt{x}}{1+\sqrt{x}}\right)}{\sqrt{1+\left(\frac{1-\sqrt{x}}{1+\sqrt{x}}\right)^2}}\cdot \frac{1}{\sqrt{1+\left(\frac{1-\sqrt{x}}{1+\sqrt{x}}\right)^2}} = \frac{1-x}{1+x}.$$
-
OK, I hope I've now got everything except any remaining minor typos into shape in this posting. – Michael Hardy Dec 9 '11 at 0:20
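As a purely numerical sanity check of the identity (an illustrative snippet, not part of the original answers; the sample points are arbitrary):

```python
from math import atan, asin, sqrt, pi, isclose

for x in [0.0, 0.5, 1.0, 7.3, 100.0]:
    lhs = 2 * atan(sqrt(x))
    rhs = asin((x - 1) / (x + 1)) + pi / 2
    assert isclose(lhs, rhs, abs_tol=1e-9), (x, lhs, rhs)
print("identity holds numerically at the sample points")
```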
http://mathoverflow.net/revisions/32044/list
Let $d=\deg S$. If $d\ge 2$ and $S$ is non-degenerate near infinity, in the sense that $|\nabla S(x)|^2 \ge c|x|^{2d-2}$ for $|x|\ge R$ for some positive $R$ and $c$ then the integral converges in a sense close to what you propose. Namely, $$\lim_{r\rightarrow \infty} \int_{\mathbb R^n} \phi(|x|/r) e^{iS(x)} dx$$ exists where $\phi$ is any sufficiently smooth function with $\phi(t)\equiv 1$ for $0\le t\le 1$ and $\phi (t)\equiv 0$ for $t\ge 2$.
To prove this use integration by parts. Fix $r < r'$ and let $$I_{r,r'} = \int_{\mathbb R^n} \phi_{r,r'}(x) e^{i S(x)}dx,$$where $\phi_{r,r'}(x)=\phi(|x|/r)-\phi(|x|/r')$. Assume $r >R$. Multiply and divide by $|\nabla S(x)|^2$, noting that $$|\nabla S(x)|^2e^{i S(x)}= -i\nabla S(x)\cdot \nabla e^{i S(x)}.$$
So we have, after IBP, $$I_{r,r'} = -i \int_{\mathbb R^n} \left [ \nabla \cdot \left ( \frac{\phi_{r,r'}(x)}{|\nabla S(x)|^2} \nabla S(x) \right ) \right ] e^{i S(x)} d x.$$ The integrand is supported in the region $r \le |x| \le 2 r'$, and you can check that it is bounded in magnitude by some constant times $|x|^{1-d}$. (It is useful to note that an $m$-th derivative of $S$ is bounded from above by $|x|^{d-m}$ since it is a polynomial of degree $d-m$.) If $d$ is large enough this may be enough to control the integral. If not, repeat the procedure as many times as necessary to produce a factor that is integrable, which you can use to show that $I_{r,r'}$ is small for $r,r'$ sufficiently large. Basically, each time you integrate by parts after multiplying and dividing by $|\nabla S(x)|^2$ you produce an extra factor of size $|x|^{1-d}$.
All of the above can be extended to integrals with $S$ not necessarily a polynomial, but sufficiently smooth in a neighborhood of infinity and with derivatives that satisfy suitable estimates. The real difficulty with such integrals is not proving that they exist, but estimating their size. Here stationary phase is useful, when applicable, but I don't know of much else.
http://physics.stackexchange.com/questions/15107/heat-equation-and-bessels-function | # Heat equation and Bessel's function
Could someone please explain why, if the time-independent heat equation can (via a change of variables) be put into the form of Bessel's equation, $\sqrt\lambda$ should take the values of the zeros of $J_0$, as on slides 20–21 of this PowerPoint printout? Thank you. (Also, what is that $a$?)
-
The $a$ is the radius of the disc (see slide 18). The need for $\sqrt\lambda\, a$ to be a zero is because of that boundary condition ($u(a,t) = 0$; apparently the edge is in contact with a heat bath). – genneth Sep 26 '11 at 13:41
@genneth: Thanks! – Sillybilly Sep 27 '11 at 10:08
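For concreteness, here is a small SciPy illustration of the point made in the comments: with the boundary condition $u(a,t)=0$, the admissible $\lambda$ are those for which $\sqrt\lambda\,a$ is a zero of $J_0$. (Illustrative only; the radius `a = 1.0` is an arbitrary choice, not a value from the slides.)

```python
import numpy as np
from scipy.special import jn_zeros, jv

a = 1.0                          # disc radius (arbitrary for this illustration)
j0_zeros = jn_zeros(0, 5)        # first five positive zeros of J_0: ~2.405, 5.520, 8.654, 11.792, 14.931
lam = (j0_zeros / a) ** 2        # eigenvalues lambda_n = (j_{0,n} / a)^2
print(lam)
print(np.allclose(jv(0, np.sqrt(lam) * a), 0.0, atol=1e-9))   # True: J_0(sqrt(lambda)*a) = 0
```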
http://www.reference.com/browse/Ordinary+elliptic+curve
# Elliptic curve cryptography
Elliptic curve cryptography (ECC) is an approach to public-key cryptography based on the algebraic structure of elliptic curves over finite fields. The use of elliptic curves in cryptography was suggested independently by Neal Koblitz and Victor S. Miller in 1985.
Elliptic curves are also used in several integer factorization algorithms that have applications in cryptography, such as, Lenstra elliptic curve factorization, but this use of elliptic curves is not usually referred to as "elliptic curve cryptography."
## Introduction
Public key cryptography is based on the creation of mathematical puzzles that are difficult to solve without certain knowledge about how they were created, but easy to solve with that knowledge. The creator keeps that knowledge secret (the private key) and publishes the puzzle (the public key). The puzzle can then be used to scramble a message in a way that only the creator can unscramble. Early public key systems, such as the RSA algorithm, used the product of two large prime numbers as the puzzle: a user picks two large random primes as his private key, and publishes their product as his public key. While finding large primes and multiplying them together is computationally easy, reversing the process is thought to be hard (see RSA problem). However, due to recent progress in factoring integers (one way to solve the problem), FIPS 186-3 recommends that RSA public keys be at least 1024 bits long to provide adequate security.
Another class of puzzle involves solving the equation $a^b = c$ for $b$ when $a$ and $c$ are known. Such equations involving real or complex numbers are easily solved using logarithms (i.e., $b=\log(c)/\log(a)$). However, in some large finite groups, finding solutions to such equations is quite difficult and is known as the discrete logarithm problem.
An elliptic curve is a plane curve defined by an equation of the form
$y^2 = x^3 + ax + b$
The set of points on such a curve—all solutions of the above equation together with a point at infinity—form an abelian group, with the point at infinity as identity element. If the coordinates x and y are chosen from a finite field, the solutions form a finite abelian group. If the finite field is large, the discrete logarithm problem on such elliptic curve groups is believed to be more difficult than the corresponding problem in the underlying finite field's multiplicative group. Thus keys in elliptic curve cryptography can be chosen to be much shorter for a comparable level of security compared to integer-based methods. (See: cryptographic key length)
As for other popular public key cryptosystems, no mathematical proof of difficulty has been published for ECC as of 2006. However, the U.S. National Security Agency has endorsed ECC technology by including it in its Suite B set of recommended algorithms and allows their use for protecting information classified up to top secret with 384-bit keys. Although the RSA patent has expired, there are patents in force covering some aspects of ECC.
## Mathematical introduction
Elliptic curves used in cryptography are typically defined over two types of finite fields: fields of odd characteristic ($\mathbb{F}_p$, where $p > 3$ is a large prime number) and fields of characteristic two ($\mathbb{F}_{2^m}$). When the distinction is not important we denote both of them as $\mathbb{F}_q$, where $q=p$ or $q=2^m$. In $\mathbb{F}_p$ the elements are integers ($0 \le x < p$) which are combined using modular arithmetic. The case of $\mathbb{F}_{2^m}$ is slightly more complicated (see finite field arithmetic for details): one obtains different representations of the field elements as bitstrings for each choice of irreducible binary polynomial $f(x)$ of degree $m$.
The set of all pairs of affine coordinates $(x,y)$ for $x, y \in \mathbb{F}_q$ forms the affine plane $\mathbb{F}_q \times \mathbb{F}_q$. An elliptic curve is the locus of points in the affine plane whose coordinates satisfy a certain cubic equation, together with a point at infinity $O$ (the point at which the locus in the projective plane intersects the line at infinity). In the case of characteristic $p > 3$ the defining equation of $E(\mathbb{F}_p)$ can be written:
$y^2 = x^3 + a x + b,$
where $a \in \mathbb{F}_p$ and $b \in \mathbb{F}_p$ are constants such that $4 a^3 + 27 b^2 \ne 0$. In the binary case the defining equation of $E(\mathbb{F}_{2^m})$ can be written:
$y^2 + x y = x^3 + a x^2 + b,$
where $a \in \mathbb{F}_2$ and $b \in \mathbb{F}_{2^m}$ are constants and $b \ne 0$. Although the point at infinity $O$ has no affine coordinates, it is convenient to represent it using a pair of coordinates which do not satisfy the defining equation, for example, $O=(0,0)$ if $b \ne 0$ and $O=(0,1)$ otherwise. According to Hasse's theorem on elliptic curves the number of points on a curve is close to the size of the underlying field; more precisely: $(\sqrt q - 1)^2 \leq |E(\mathbb{F}_q)| \leq (\sqrt q + 1)^2$.
The points on an elliptic curve form an abelian group $(E(\mathbb{F}_q), +)$ with $O$, the distinguished point at infinity, playing the role of additive identity. In other words, given two points $P, Q \in E(\mathbb{F}_q)$, there is a third point, denoted by $P+Q$, on $E(\mathbb{F}_q)$, and the following relations hold for all $P, Q, R \in E(\mathbb{F}_q)$:
• $P+Q = Q+P$ (commutativity)
• $(P+Q)+R = P+(Q+R)$ (associativity)
• $P+O = O+P = P$ (existence of an identity element)
• there exists $(-P)$ such that $-P + P = P + (-P) = O$ (existence of inverses)
We already specified how $O$ is defined. If we define the negative of a point $P = (x,y)$ to be $-P = (x,-y)$ for $P \in E(\mathbb{F}_p)$ and $-P = (x,x+y)$ for $P \in E(\mathbb{F}_{2^m})$, we can define the addition operation as follows:
• if $Q = O$ then $P + Q = P$
• if $Q = -P$ then $P + Q = O$
• if $Q \ne P$ then $P + Q = R$, where
  • in the prime case $x_R = \lambda^2 - x_P - x_Q$, $y_R = \lambda(x_P - x_R) - y_P$, and $\lambda = \frac{y_Q-y_P}{x_Q-x_P}$, or
  • in the binary case $x_R = \lambda^2 + \lambda + x_P + x_Q + a$, $y_R = \lambda (x_P + x_R) + x_R + y_P$, and $\lambda = \frac{y_P + y_Q}{x_P + x_Q}$
(Geometrically, $P+Q$ is the inverse of the third point of intersection of the cubic with the line through $P$ and $Q$.)
• if $Q = P$ then $P + Q = R$, where
  • in the prime case $x_R = \lambda^2 - 2 x_P$, $y_R = \lambda(x_P - x_R) - y_P$, and $\lambda = \frac{3 x_P^2 + a}{2 y_P}$, or
  • in the binary case $x_R = \lambda^2 + \lambda + a$, $y_R = x_P^2 + (\lambda + 1) x_R$, and $\lambda = x_P + \frac{y_P}{x_P}$
(Geometrically, $2P$ is the inverse of the second point of intersection of the cubic with its tangent line at $P$.)
Certicom's Online ECC Tutorial contains a Java applet that can be used to experiment with addition in different EC groups.
We already described the underlying field $\mathbb{F}_q$ and the group of points of an elliptic curve $E(\mathbb{F}_q)$, but there is yet another mathematical structure commonly used in cryptography — a cyclic subgroup of $E(\mathbb{F}_q)$. For any point $G$ the set
$(O, G, G+G, G+G+G, G+G+G+G, \ldots)$
is a cyclic group. It is convenient to use the following notation: $0 G = O$, $1 G = G$, $2G = G+G$, $3G = G+G+G$, etc. The calculation of $k G$, where $k$ is an integer and $G$ is a point, is called scalar multiplication.
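The affine group law above is easy to prototype. The following Python sketch is purely illustrative (a toy curve $y^2 = x^3 + 2x + 2$ over $\mathbb{F}_{17}$ is assumed, modular inverses use `pow(x, -1, p)` from Python 3.8+, and the code is neither constant-time nor otherwise suitable for real cryptography); it implements the prime-case addition, doubling and double-and-add scalar multiplication exactly as in the formulas given:

```python
p, a, b = 17, 2, 2          # assumed toy curve y^2 = x^3 + 2x + 2 over F_17
O = None                    # the point at infinity

def on_curve(P):
    if P is O:
        return True
    x, y = P
    return (y * y - (x * x * x + a * x + b)) % p == 0

def add(P, Q):
    if P is O: return Q
    if Q is O: return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:          # Q = -P
        return O
    if P == Q:                                   # doubling: lambda = (3*x1^2 + a) / (2*y1)
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:                                        # addition: lambda = (y2 - y1) / (x2 - x1)
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def mul(k, P):                                   # scalar multiplication kP by double-and-add
    R = O
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

G = (5, 1)                                       # a point on the curve: 1^2 = 5^3 + 2*5 + 2 (mod 17)
for k in range(1, 6):
    Q = mul(k, G)
    assert on_curve(Q)
    print(k, Q)                                  # e.g. 2G = (6, 3)
```

A real implementation would also have to handle the binary-field formulas, projective coordinates and side-channel hardening discussed later in the article.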
## Cryptographic schemes
Since the (additive) cyclic group described above can be considered similar to the (multiplicative) group of powers of an integer $g$ modulo a prime $p$: $(g^0, g, g^2, g^3, g^4, \ldots)$, the problem of finding $k$ given the points $k G$ and $G$ is called the elliptic curve discrete logarithm problem (ECDLP). The assumed hardness of several problems related to the discrete logarithm in the subgroup of $E(\mathbb{F}_q)$ allows cryptographic use of elliptic curves. Most of the elliptic curve cryptographic schemes are related to the discrete logarithm schemes which were originally formulated for usual modular arithmetic:
• the Elliptic Curve Diffie-Hellman key agreement scheme is based on the Diffie-Hellman scheme,
• the Elliptic Curve Digital Signature Algorithm is based on the Digital Signature Algorithm,
• the ECMQV key agreement scheme is based on the MQV key agreement scheme.
Not all the DLP schemes should be ported to the elliptic curve domain. For example, the well known ElGamal encryption scheme was never standardized by official bodies and should not be directly used over an elliptic curve (the standard encryption scheme for ECC is called the Elliptic Curve Integrated Encryption Scheme). The main reason is that although it is straightforward to convert an arbitrary message (of limited length) to an integer modulo $p$, it is not that simple to convert a bitstring to a point of a curve (it is not true that for every $x$ there is a $y$ such that $(x,y) \in E(\mathbb{F}_q)$). (Another factor is that the ElGamal scheme is vulnerable to chosen-ciphertext attacks.)
Some believe that ECDLP-based cryptography is going to replace cryptography based on integer factorization (e.g., RSA) and finite-field cryptography (e.g., DSA). At the RSA Conference 2005, the National Security Agency (NSA) announced Suite B which exclusively uses ECC for digital signature generation and key exchange. The suite is intended to protect both classified and unclassified national security systems and information.
Recently, a large number of cryptographic primitives based on bilinear mappings on various elliptic curve groups (such as the Weil and Tate, eta and ate pairings) have been introduced. Schemes based on these primitives provide efficient identity-based encryption as well as pairing-based signatures, signcryption, key agreement, and proxy re-encryption (see The Pairing-Based Crypto Lounge as well as P1363.3).
## Implementation considerations
Although the details of each particular elliptic curve scheme are described in the article referenced above some common implementation considerations are discussed here.
### Domain parameters
To use ECC all parties must agree on all the elements defining the elliptic curve, that is, the domain parameters of the scheme. The field is defined by $p$ in the prime case and the pair of $m$ and $f$ in the binary case. The elliptic curve is defined by the constants $a$ and $b$ used in its defining equation. Finally, the cyclic subgroup is defined by its generator (aka. base point) $G$. For cryptographic applications the order of $G$, that is the smallest non-negative number $n$ such that $n G = O$, must be prime. Since $n$ is the size of a subgroup of $E(\mathbb{F}_q)$ it follows from Lagrange's theorem that the number $h = |E(\mathbb{F}_q)|/n$ is an integer. In cryptographic applications this number $h$, called the cofactor, must at least be small ($h \le 4$) and, preferably, $h = 1$. Let us summarize: in the prime case the domain parameters are $(p,a,b,G,n,h)$ and in the binary case they are $(m,f,a,b,G,n,h)$.
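Continuing the toy curve used in the earlier sketch, the order and cofactor can be found by brute force for such a tiny field (illustrative only — real parameter generation relies on point-counting algorithms of the kind listed below):

```python
p, a, b = 17, 2, 2                       # same assumed toy curve y^2 = x^3 + 2x + 2 over F_17
count = 1                                # start at 1 for the point at infinity O
for x in range(p):
    for y in range(p):
        if (y * y - (x * x * x + a * x + b)) % p == 0:
            count += 1
print(count)                             # 19 = |E(F_17)|; 19 is prime, so n = 19 and h = 1
```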
Unless there is an assurance that domain parameters were generated by a party trusted with respect to their use, the domain parameters must be validated before use.
The generation of domain parameters is not usually done by each participant since this involves counting the number of points on a curve which is time-consuming and troublesome to implement. As a result several standard bodies published domain parameters of elliptic curves for several common field sizes:
Test vectors are also available
If one nevertheless wants to construct one's own domain parameters, one should select the underlying field and then use one of the following methods to find a curve with an appropriate (i.e., near-prime) number of points:
• select a random curve and use a general point-counting algorithm, for example, Schoof's algorithm or Schoof-Elkies-Atkin algorithm,
• select a random curve from a family which allows easy calculation of the number of points (e.g., Koblitz curves), or
• select the number of points and generate a curve with this number of points using complex multiplication technique.
Several classes of curves are weak and shall be avoided:
• curves over $\mathbb{F}_{2^m}$ with non-prime $m$ are vulnerable to Weil descent attacks.
• curves such that $n$ divides $p^B-1$ (where $p$ is the characteristic of the field – $q$ for a prime field, or $2$ for a binary field) for sufficiently small $B$ are vulnerable to the MOV attack which applies the usual DLP in a small-degree extension field of $\mathbb{F}_p$ to solve ECDLP. The bound $B$ should be chosen so that discrete logarithms in the field $\mathbb{F}_{p^B}$ are at least as difficult to compute as discrete logs on the elliptic curve $E(\mathbb{F}_q)$.
• curves such that $|E(\mathbb{F}_q)| = q$ are vulnerable to the attack that maps the points on the curve to the additive group of $\mathbb{F}_q$
### Key sizes
Since all the fastest known algorithms that allow one to solve the ECDLP (baby-step giant-step, Pollard's rho, etc.) need $O(\sqrt{n})$ steps, it follows that the size of the underlying field should be roughly twice the security parameter. For example, for 128-bit security one needs a curve over $\mathbb{F}_q$, where $q \approx 2^{256}$. This can be contrasted with finite-field cryptography (e.g., DSA) which requires 3072-bit public keys and 256-bit private keys, and integer factorization cryptography (e.g., RSA) which requires 3072-bit public and private keys. The hardest ECC scheme (publicly) broken to date had a 109-bit key (that is about 55 bits of security). For the prime field case, it was broken near the beginning of 2003 using over 10,000 Pentium-class PCs running continuously for over 540 days. For the binary field case, it was broken in April 2004 using 2600 computers for 17 months.
### Projective coordinates
A close examination of the addition rules shows that in order to add two points one needs not only several additions and multiplications in $\mathbb{F}_q$ but also an inversion operation. The inversion (for given $x \in \mathbb{F}_q$ find $y \in \mathbb{F}_q$ such that $x y = 1$) is one to two orders of magnitude slower than multiplication. Fortunately, points on a curve can be represented in different coordinate systems which do not require an inversion operation to add two points. Several such systems were proposed: in the projective system each point is represented by three coordinates $(X,Y,Z)$ using the following relation: $x = \frac{X}{Z}$, $y = \frac{Y}{Z}$; in the Jacobian system a point is also represented with three coordinates $(X,Y,Z)$, but a different relation is used: $x = \frac{X}{Z^2}$, $y = \frac{Y}{Z^3}$; in the modified Jacobian system the same relations are used but four coordinates are stored and used for calculations $(X,Y,Z,aZ^4)$; and in the Chudnovsky Jacobian system five coordinates are used $(X,Y,Z,Z^2,Z^3)$. Note that there may be different naming conventions, for example, the IEEE P1363-2000 standard uses "projective coordinates" to refer to what is commonly called Jacobian coordinates. An additional speed-up is possible if mixed coordinates are used.
### Fast reduction (NIST curves)
Reduction modulo $p$ (which is needed for addition and multiplication) can be executed much faster if the prime $p$ is a pseudo-Mersenne prime, that is $p \approx 2^d$; for example, $p = 2^{521} - 1$ or $p = 2^{256} - 2^{32} - 2^9 - 2^8 - 2^7 - 2^6 - 2^4 - 1$. Compared to Barrett reduction there can be an order of magnitude speed-up. The curves over $\mathbb{F}_p$ with pseudo-Mersenne $p$ are recommended by NIST. Yet another advantage of the NIST curves is the fact that they use $a = -3$ which improves addition in Jacobian coordinates.
### NIST-Recommended Elliptic Curves
NIST recommends fifteen elliptic curves. Specifically, FIPS 186-2 has ten recommended finite fields. There are five prime fields $\mathbb{F}_p$ for primes $p$ of 192, 224, 256, 384 and 521 bits. For each of the prime fields one elliptic curve is recommended. There are five binary fields $\mathbb{F}_{2^m}$ of sizes $2^{163}$, $2^{233}$, $2^{283}$, $2^{409}$ and $2^{571}$. For each of the binary fields one elliptic curve and one Koblitz curve was selected. Thus there are five prime curves and ten binary curves. The curves were chosen for optimal security and implementation efficiency.
### Side-channel attacks
Unlike DLP systems (where it is possible to use the same procedure for squaring and multiplication) the EC addition is significantly different for doubling ($P = Q$) and general addition ($P \ne Q$) depending on the coordinate system used. Consequently, it is important to counteract side channel attacks (e.g., timing and simple power analysis attacks) using, for example, fixed pattern window (aka. comb) methods (note that this does not increase the computation time).
### Patents
At least one ECC scheme (ECMQV) and some implementation techniques are covered by patents. Uncertainty about the availability of unencumbered ECC has limited the acceptance of ECC.
## Implementations
### Proprietary/commercial
• MIRACL: Multiprecision Integer and Rational Arithmetic C/C++ Library
• CNG API in Windows Vista and Windows Server 2008 with managed wrappers for CNG in .NET Framework 3.5
• Sun Java System Web Server 7.0 and later
• Java SE 6
• Java Card
• Security Builder Crypto
## References
• Standards for Efficient Cryptography Group (SECG), SEC 1: Elliptic Curve Cryptography, Version 1.0, September 20, 2000.
• D. Hankerson, A. Menezes, and S.A. Vanstone, Guide to Elliptic Curve Cryptography, Springer-Verlag, 2004.
• I. Blake, G. Seroussi, and N. Smart, Elliptic Curves in Cryptography, London Mathematical Society 265, Cambridge University Press, 1999.
• I. Blake, G. Seroussi, and N. Smart, editors, Advances in Elliptic Curve Cryptography, London Mathematical Society 317, Cambridge University Press, 2005.
• L. Washington, Elliptic Curves: Number Theory and Cryptography, Chapman & Hall / CRC, 2003.
• Anoop MS, Elliptic Curve Cryptography -- An Implementation Tutorial, Tata Elxsi, India, January 5, 2007.
• The Case for Elliptic Curve Cryptography, National Security Agency
• Online Elliptic Curve Cryptography Tutorial, Certicom Corp.
• K. Malhotra, S. Gardner, and R. Patz, Implementation of Elliptic-Curve Cryptography on Mobile Healthcare Devices, Networking, Sensing and Control, 2007 IEEE International Conference on, London, 15-17 April 2007 Page(s):239 - 244
http://mathoverflow.net/questions/13987?sort=oldest | ## Distance measure on weighted directed graphs
There is a simple and well-defined distance measure on weighted undirected graphs, namely the least sum of edge weights on any (simple) path between two vertices.
Can one devise a meaningful distance metric for weighted directed graphs? (assume non-negative weights and that the distance must be symmetric and satisfy the triangle inequality)
-
Could you not do the same thing as with undirected graphs, which gives a non-symmetric metric D, and then symmetrize, i.e. define a metric d by d(x,y) = max{D(x,y), D(y,x)}? – Tom Leinster Feb 3 2010 at 15:58
Yes...I had considered d(x,y) = (D(x,y)+D(y,x))/2 but that just didn't feel right. Did you mean 'min'? If so that seems much more satisfying for edge distance. but...I don't think it'll work for paths. – Mitch Harris Feb 3 2010 at 16:07
Min does not satisfy the triangle inequality, only max does. – domotorp Feb 3 2010 at 16:23
Mitch, I meant max (and agree with domotorp). And yes, I also had it in mind that you could use + to symmetrize, rather than max. It sounds like you have some kind of design criteria in mind for how this distance should behave. Maybe you could add something to the question explaining them? (There's an "edit" button.) – Tom Leinster Feb 3 2010 at 17:24
That min doesn't work but max does is not obvious to me (and I can't seem to think through it). Is there a short counterexample? – Mitch Harris Feb 3 2010 at 20:24
## 1 Answer
Since you're asking for what is "meaningful", I think that one can then argue against some of the question.
The value of metrics on graphs in combinatorial geometry as an area of pure mathematics is more or less the same as their value in applied mathematics, for instance in direction-finding software in Garmin or Google Maps. In any such setting, the triangle inequality is essential for the metric interpretation, but the symmetry condition $d(x,y) = d(y,x)$ is not. An asymmetric metric captures the idea of one-way distance. You can have perfectly interesting geometry of many kinds without the symmetry condition. For instance, you can study asymmetric Banach norms such that $||\alpha x|| = \alpha ||x||$ for $\alpha > 0$, but $||x|| \ne ||-x||$, or the corresponding types of Finsler manifolds.
On the other hand, if you want to restore symmetry, you can. For instance, $$d'(x,y) = d(x,y) + d(y,x)$$ has a natural interpretation as round-trip distance, which again works equally well for pure and applied purposes. You can also use max, directly, or even min with a modified formula. Namely, you can define $d'$ to be the largest-distance metric such that $$d'(x,y) \le \min(d(x,y),d(y,x))$$ for all $x$ and $y$. All of these have natural interpretations. For instance, the min formula could make sense for streets that are one-way for cars but two-way for bicycles.
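To make the symmetrization concrete, here is a small illustrative Python sketch (a toy directed graph is assumed; it is not from the question): compute the one-way distances $D(x,y)$ with Floyd–Warshall and then form the max and round-trip symmetrizations mentioned above.

```python
import math

nodes = ["a", "b", "c"]
w = {("a", "b"): 1, ("b", "c"): 2, ("c", "a"): 10}      # directed edge weights (toy example)

INF = math.inf
D = {(u, v): (0 if u == v else w.get((u, v), INF)) for u in nodes for v in nodes}
for k in nodes:                                          # Floyd-Warshall: one-way shortest paths
    for u in nodes:
        for v in nodes:
            D[u, v] = min(D[u, v], D[u, k] + D[k, v])

d_max = {(u, v): max(D[u, v], D[v, u]) for u in nodes for v in nodes}   # max-symmetrization
d_sum = {(u, v): D[u, v] + D[v, u] for u in nodes for v in nodes}       # round-trip distance

print(D["a", "c"], D["c", "a"])          # 3 10  -- asymmetric one-way distances
print(d_max["a", "c"], d_sum["a", "c"])  # 10 13
```

Both `d_max` and `d_sum` satisfy the triangle inequality whenever the one-way distance $D$ does.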
http://math.stackexchange.com/questions/119043/reference-request-preparation-for-learning-a-little-smooth-infinitesimal-analys | # Reference request: preparation for learning a little smooth infinitesimal analysis?
I'm interested in learning a little smooth infinitesimal analysis. There is a free book by Kock: Smooth Differential Geometry, http://home.imf.au.dk/kock/ . As I dive into it, I feel that I'm not quite sufficiently prepared. He seems to be assuming some category theory and maybe also some knowledge of nonaristotelian logic.
I have copies of Lawvere and Rosebrugh, Sets for Mathematics, and Priest, An Introduction to Non-Classical Logic. It seems like I probably need to read the first $m$ chapters of Lawvere and the first $n$ chapters of Priest. Does this seem about right, and if so, what would be the values of $m$ and $n$? Does $n=0$? Amazon will let you see the table of contents of the two books with their "click to look inside" feature:
http://www.amazon.com/Sets-Mathematics-F-William-Lawvere/dp/0521010608/ref=sr_1_1?s=books&ie=UTF8&qid=1331505464&sr=1-1
http://www.amazon.com/Introduction-Non-Classical-Logic-Graham-Priest/dp/052179434X/ref=sr_1_1?s=books&ie=UTF8&qid=1331505449&sr=1-1
I have browsed the beginning of the Priest book and found it enjoyable, but haven't solved any of the problems. Category theory seemed dull and pointless to me, which is why I haven't really tackled much of Lawvere -- but maybe the light will dawn at some point and I'll realize why I should care about the subject.
If anyone wants to suggest replacing Kock, Lawvere, or Priest with some other book, that would be fine. I'm not hoping to become technically adept in smooth infinitesimal analysis, just to understand the basic ideas. Is there much of a distinction between smooth infinitesimal analysis, which is what I want to know about, and the subject of Kock's book, whose title says it's about smooth differential geometry?
Does this subject relate to topos theory?
[EDIT] It sounds like I may want to read Bell, A Primer of Infinitesimal Analysis rather than Kock.
-
– Michael Greinecker Mar 11 '12 at 23:12
You can learn the main ideas of synthetic differential geometry without going to much into category theory. For a very brief overview look at Mike Shulman's notes www.math.ucsd.edu/~mshulman/papers/sdg/pizza-seminar.pdf and then Bell's book recommended by Michael Greinecker. – Omar Mar 12 '12 at 4:27
## 1 Answer
Sets for Mathematics, nice as it is, is not sufficient preparation for studying the model theory of synthetic differential geometry. You will need to know topos theory to a level closer to, say, Chapter VI of Mac Lane and Moerdijk's Sheaves in Geometry and Logic.
But one does not need to dive in to the model theory straight away: after all, the point of synthetic differential geometry is that it can be presented axiomatically. In terms of logical requirements, you only need to know intuitionistic higher order logic – which is essentially the same as the logic of ordinary mathematics modulo the law of excluded middle. No knowledge of modal logic is required. (The internal logic of a topos is not modal!) It is good to do some exercises in intuitionistic logic to get a feel for what modes of reasoning remain valid. For example, you might want to try these:
1. Show that $\lnot (p \lor q)$ is equivalent to $\lnot p \land \lnot q$.
2. Show that $\lnot p \lor \lnot q$ implies $\lnot (p \land q)$.
3. Show that $\lnot (p \land q)$ does not imply $\lnot p \lor \lnot q$: find a Heyting algebra $\mathfrak{A}$ and a valuation $[-] : \{ p, q \} \to \mathfrak{A}$ so that $[\lnot (p \land q)] \nleq [ \lnot p \lor \lnot q ]$.
In much the same way as real analysis forms the foundations for differential geometry, smooth infinitesimal analysis forms the foundation on which synthetic differential geometry is built. So one might regard it as a branch of synthetic differential geometry.
Now, a word about why we should care about topos theory in synthetic differential geometry: it's all very well to work axiomatically, but at some stage one has to worry about whether the axioms are consistent. Because synthetic differential geometry is a non-classical higher-order theory, one has to look beyond ordinary model theory to find models for the axioms. Fortunately, topos theory provides a ready-made solution: the internal logic of a topos is intuitionistic higher-order logic, and so the axioms can be interpreted in any topos without any work; our problem is then reduced to finding a topos in which the axioms are satisfied. There are several constructions of varying complexity which provide models for synthetic differential geometry, but the most intriguing ones are the models which admit a nice embedding of the category of smooth manifolds: these models are called ‘well-adapted’, and the existence of well-adapted models tells us that reasoning in the framework of synthetic differential geometry is sound for doing ordinary differential geometry!
http://johncarlosbaez.wordpress.com/2012/01/29/a-quantum-hammersley-clifford-theorem/ | # Azimuth
## A Quantum Hammersley–Clifford Theorem
I’m at this workshop:
• Sydney Quantum Information Theory Workshop: Coogee 2012, 30 January – 2 February 2012, Coogee Bay Hotel, Coogee, Sydney, organized by Stephen Bartlett, Gavin Brennen, Andrew Doherty and Tom Stace.
Right now David Poulin is speaking about a quantum version of the Hammersley–Clifford theorem, which is a theorem about Markov networks. Let me quickly say a bit about what he proved! This will be a bit rough, since I’m doing it live…
The mutual information between two random variables is
$I(A:B) = S(A) + S(B) - S(A,B)$
The conditional mutual information between two random variables $A$ and $B$, given a third random variable $C$, is
$I(A:B|C) = \sum_c p(C=c) I(A:B|C=c)$
It’s the average amount of information about $B$ learned by measuring $A$ when you already knew $C.$
All this works for both classical (Shannon) and quantum (von Neumann) entropy. So, when we say 'random variable' above, we could mean it in the traditional classical sense or in the quantum sense.
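In the classical case the conditional mutual information can be computed directly from a joint distribution. Here is an illustrative NumPy sketch (the joint distribution is a random toy example, not anything from the talk):

```python
import numpy as np

def cond_mutual_info(p):
    """I(A:B|C) for a joint distribution p[a, b, c], in bits."""
    pc  = p.sum(axis=(0, 1))      # p(c)
    pac = p.sum(axis=1)           # p(a, c)
    pbc = p.sum(axis=0)           # p(b, c)
    val = 0.0
    for a in range(p.shape[0]):
        for b in range(p.shape[1]):
            for c in range(p.shape[2]):
                if p[a, b, c] > 0:
                    val += p[a, b, c] * np.log2(p[a, b, c] * pc[c] / (pac[a, c] * pbc[b, c]))
    return val

p = np.random.default_rng(0).random((2, 2, 2))
p /= p.sum()
print(cond_mutual_info(p))        # >= 0; equals 0 exactly when A -- C -- B is Markov
```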
If $I(A:B|C) = 0$ then $A, C, B$ has the following Markov property: if you know $C,$ learning $A$ tells you nothing new about $B.$ In condensed matter physics, say a spin system, we get (quantum) random variables from measuring what’s going on in regions, and we have short range entanglement if $I(A:B|C) = 0$ when $C$ corresponds to some sufficiently thick region separating the regions $A$ and $B.$ We’ll get this in any Gibbs state of a spin chain with a local Hamiltonian.
A Markov network is a graph with random variables at vertices (and thus subsets of vertices) such that $I(A:B|C) = 0$ whenever $C$ is a subset of vertices that completely 'shields' the subset $A$ from the subset $B$: any path from $A$ to $B$ goes through a vertex in $C.$
The Hammersley–Clifford theorem says that in the classical case we can get any Markov network from the Gibbs state
$\exp(-\beta H)$
of a local Hamiltonian $H,$ and vice versa. Here a Hamiltonian is local if it is a sum of terms, one depending on the degrees of freedom in each clique in the graph:
$H = \sum_{C \in \mathrm{cliques}} h_C$
Hayden, Jozsa, Petz and Winter gave a quantum generalization of one direction of this result to graphs that are just ‘chains’, like this:
o—o—o—o—o—o—o—o—o—o—o—o
Namely: for such graphs, any quantum Markov network is the Gibbs state of some local Hamiltonian. Now Poulin has shown the same for all graphs. But the converse is, in general, false. If the different terms $h_C$ in a local Hamiltonian all commute, its Gibbs state will have the Markov property. But otherwise, it may not.
For some related material, see:
• David Poulin, Quantum graphical models and belief propagation.
http://mathoverflow.net/questions/65678/lagrangian-submanifolds-in-deformation-quantization | ## Lagrangian Submanifolds in Deformation Quantization
Suppose I have a symplectic manifold $M$, and have a deformation quantization of it, i.e. an associative product $\ast:C(M)[[\hbar]]\otimes C(M)[[\hbar]]\to C(M)[[\hbar]]$ so that $f\ast g=fg+\{f,g\}\hbar+O(\hbar^2)$, where $\{\cdot,\cdot\}:C(M)\otimes C(M)\to C(M)$ is the Poisson bracket coming from the symplectic structure.
According to Lu (http://www.ams.org/mathscinet-getitem?mr=1244874 page 395), a Lagrangian submanifold $L\subseteq M$ should correspond to a left-module over $C(M)[[\hbar]]$, that is, we have a deformed module structure $\ast:C(M)[[\hbar]]\otimes C(L)[[\hbar]]\to C(L)[[\hbar]]$, which reduces to the standard module structure when $\hbar=0$, and is compatible with the star product on $C(M)[[\hbar]]$. (Actually, Lu thought of the left-ideal $I_L\subseteq C(M)[[\hbar]]$ so that $C(L)[[\hbar]]=C(M)[[\hbar]]/I_L$, but I prefer to think of the deformation of the module structure.)
Then we can write, for $f\in C(M)$ and $g\in C(L)$, that $f\ast g=fg+\{f,g\}\hbar+O(\hbar^2)$, for some brackets $\{\cdot,\cdot\}:C(M)\otimes C(L)\to C(L)$.
What are these brackets? How are the related to the symplectic structure?
It seems that in general the existence of a deformed module structure on $C(L)[[\hbar]]$ is nontrivial (perhaps not even known for arbitrary $L$?), but I am hoping that perhaps the deformation to first order is something that one can understand more easily.
-
## 2 Answers
Well, in the symplectic case, the situation is somehow much simpler than in the general Poisson case, where you can only speak about coisotropic submanifolds (there is no good meaning of minimal coisotropic as the rank may vary). In the symplectic case you have a theorem of Weinstein which states that there is a tubular neighbourhood of $L$ which is symplectomorphic to a neighbourhood of the zero section of $T^*L$. Thus the question of a module structure is reduced to the case of a cotangent bundle, since star products are local. For cotangent bundles there is a good understanding of whether you can have a module structure on the functions on the configuration space $L$: the characteristic class of $\star$ has to be trivial. In fact, together with Martin Bordemann and Nikolai Neumaier we constructed such module structures in a series of papers at the end of the nineties. Also Markus Pflaum has some papers on this. Thus the global statement is that on $L$ you have a module structure for $\star$ iff the characteristic class of $\star$ is trivial in a tubular neighbourhood of $L$.
The module structures have (for particular star products) a very nice interpretation as global symbol calculus for differential operators on $L$. Moreover, if the char. class is not trivial but at least integral (up to some $2\pi$'s) then there is a module structure on the sections of some line bundle over $L$, coming quite close to the functions on $L$. Physically, this is important for the quantization of Dirac's magnetic monopole.
As DamienC already said, the situation in the general Poisson case or even in the general coisotropic case on symplectic manifolds is much more involved. Here my answer to http://mathoverflow.net/questions/64452/in-the-dictionary-between-poisson-and-quantum-what-corresponds-to-coisotropic/64478#64478 might also be of interest for you.
Oh, I forgot: the first order term can be obtained as in the flat case, at least morally. On the configuration space $L$ you choose your favorite connection; then the first order term of the module structure is something like "half the Poisson bracket", which means that \begin{equation} f \bullet \psi = \iota^* f \psi + i \hbar \iota^* \frac{\partial f}{\partial p_i} \frac{\partial \psi}{\partial q^i} + \cdots \end{equation} (modulo some constants I forgot) where $q^i$ are coordinates on $L$ and $p_i$ are the corresponding fiber coordinates on the cotangent bundle. This has an intrinsic meaning. Here $\iota: L \longrightarrow T^*L$ is the zero section...
-
One has to choose an identification of a tubular neighborhood $U$ of $L$ in $M$ with a tubular neighborhood $V$ of the zero section of the normal bundle $NL$.
Once one has done this, it means that we have a projection $p:U\to L$. Then your bracket is "simply", for $f\in C^\infty(M)$ and $g\in C^\infty(L)$, `$\{f,g\}:=\{f_{|U},p^*(g)\}_{|L}$`.
But $C^\infty(L)$ is, in my opinion, not the right object to consider (see e.g. the work of Cattaneo-Felder).
Remark: another approach is via Tsygan's Oscillatory modules.
-
http://mathhelpforum.com/calculus/120724-help-optimizing-problems.html | # Thread:
1. ## Help! Optimizing problems!
A net enclosure for golf practice is open at one end....( the figure is like a batting cage... a rectangle on its long side with one square shaped end open)
find the dimensions that require the least amount of netting if the volume of the enclosure is to be 250/3 cubic meters.
2. Originally Posted by mishmash918
A net enclosure for golf practice is open at one end....( the figure is like a batting cage... a rectangle on its long side with one square shaped end open)
find the dimensions that require the least amount of netting if the volume of the enclosure is to be 250/3 cubic meters.
Let the width be $w$ and the length be $l$ , then from what you say above the height is also $w$.
The area of netting is:
$A=w^2+3 \times (w \times l)\ \ \text{m}^2$
(one end piece of $w \times w$ and two sides and a top of $w \times l$)
Also the volume:
$V=w^2 \times l=\frac{250}{3}\ \ \text{m}^3$
Now use the last equation above to eliminate either $w$ or $l$ from the first equation, which leaves you with an expression in a single variable for $A$, and you know how to find the minimum for such a problem.
CB
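If anyone wants to verify the final numbers, here is a quick sympy sketch (my own illustration, following CB's notation; the last line just confirms the calculus):

```python
import sympy as sp

w, l = sp.symbols('w l', positive=True)
A = w**2 + 3*w*l                                              # netting area
l_w = sp.solve(sp.Eq(w**2 * l, sp.Rational(250, 3)), l)[0]    # volume constraint
A_w = sp.simplify(A.subs(l, l_w))                             # A(w) = w**2 + 250/w
w_star = sp.solve(sp.Eq(sp.diff(A_w, w), 0), w)[0]
print(w_star, l_w.subs(w, w_star), A_w.subs(w, w_star))       # 5, 10/3, 75
```

So the minimum uses a 5 m by 5 m square end, a length of 10/3 m, and 75 m² of netting.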
http://mathoverflow.net/questions/8182?sort=votes | ## Is a polynomial with 1 very large coefficient irreducible?
I am asking for some sort of generalization to Perron's criterion which is not dependent on the index of the "large" coefficient. (the criterion says that for a polynomial $x^n+\sum_{k=0}^{n-1} a_kx^k\in \mathbb{Z}[x]$ if the condition $|a_{n-1}|>1+|a_0|+\cdots+|a_{n-2}|$ and $a_0\neq 0$ holds then it is irreducible.)
This would answer a second question about the existence of n+1-tuples $(a_0,\dots,a_n)$ of integers for which $\sum_{k=0}^n a_{\sigma(k)}x^k$ is always irreducible for any permutation $\sigma$. What happens if we restrict $|a_i| \le O(n)$ ? $|a_i| \le O(\log n)$ ?
-
As far as I checked (didn't check linear terms) Vladimir's example can be simplified to taking the coefficients $p, q, pq, pq, \dots, pq$ for primes $p\ne q$. – jp Dec 12 2009 at 17:59
## 4 Answers
OK, about your second question. Let's consider the polynomial $x^n+2(x^{n-1}+\ldots+x^2+x)+4$. I claim that this polynomial is almost good for your purposes: if we permute all coefficients except for the leading one, it remains irreducible. Proof: if the constant term becomes 2 after the permutation, use Eisenstein, if not, look at the Newton polygon of this polynomial mod 2 - you can observe that if it is irreducible, it has to have a linear factor, which is easily impossible.
[If we allow to permute all coefficients, I would expect that something like $9x^n+6(x^{n-1}+\ldots+x^2+x)+4$ would work for nearly the same reasons...]
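For what it's worth, here is a small computational sanity check of this construction (a sympy sketch of my own; it only tests one small degree, permuting everything below the leading coefficient):

```python
from itertools import permutations
from sympy import Poly, symbols

x = symbols('x')

def irreducible_under_permutations(coeffs):
    """coeffs = [a_n, ..., a_0]; permute all coefficients below the leading one."""
    lead, rest = coeffs[0], coeffs[1:]
    return all(Poly([lead, *perm], x).is_irreducible
               for perm in set(permutations(rest)))

n = 6                                   # a small degree, just to illustrate
coeffs = [1] + [2] * (n - 1) + [4]      # x^n + 2(x^{n-1} + ... + x) + 4
print(irreducible_under_permutations(coeffs))   # True for this small case
```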
-
Thanks! The suggestion works (I think) with (4,9,25,30,30...,30). The problem with (4,9,6...) is when a_k=4,a_{n-k}=9 for some k, which by looking at the Newton polygon mod2, mod3 gives the polynomial as a product fg where f is degree k and f,g are irreducible mod2, mod3. I couldn't find a contradiction in that case. When we introduce 25 such "symmetric" factorizations are ruled out. I haven't checked when 4,9 or 25 are leading yet (Eisenstein doesn't apply anymore). – Gjergji Zaimi Dec 9 2009 at 16:08
If someone needs a reference for the statement about the Newton polygon (like I did), take a look at the corollary at the bottom of page two in math.umn.edu/~garrett/m/number_theory/… – jp Dec 9 2009 at 16:19
@Gjergji: good point - I was thinking along the same line after I walked in the street having written that suggestion. However, what you say about how to mend my example seems to be a good idea. – Vladimir Dotsenko Dec 9 2009 at 18:11
@jp: sorry for not providing a reference - I was in a hurry. An excellent source on many a theorem about polynomials is Prasolov's book (for facts I am referring to, see page 53, Dumas' theorem (and a bit before this theorem), but there are lots of other useful things there). – Vladimir Dotsenko Dec 9 2009 at 18:12
okay, I don't know what happened to the long google books url in the previous comment, so let me re-type it in a better way: tinyurl.com/prasolov – Vladimir Dotsenko Dec 9 2009 at 18:15
Thinking of what Perron's criterion essentially means, I would not really expect anything similar to hold for other coefficient. Basically, if the absolute value of the k-th coefficient is greater than the sum of absolute values of other coefficients, it's easy to show that k zeros of the polynomial are strictly inside the unit circle, and $(n-k)$ --- strictly outside it (by Rouche's theorem). So, easy contradictions are to be expected only for $k=n-1$ (and $k=1$ if we invert $x$). Two other remarks supporting this comment: polynomials $x^n-N^n$ suggest that you should not expect anything for the constant term, and polynomials $(x^2-Nx+1)(x^2+Nx+1)=x^4+(2-N^2)x^2+1$ show that $k=2$ (or $n-2$) would not work...
As for the second question, it seems quite likely that such examples exist, even with rather restrictive bounds on coefficients.
-
To your 2nd question:
Taking n+1 different primes $p_0, p_1, \dots, p_n$ you can define $a_i := \prod_{j \ne i} p_j$. By a theorem of Eisenstein ("Eisenstein's irreducibility criterion"), you get that any permutation yields an irreducible polynomial.
-
Cute! But it badly fails the desired restriction $|a_i|=O(n)$. – Mark Meckes Dec 8 2009 at 17:59
Maybe this should have gone in the comments, but I couldn't see the button.
In any case, I'm wondering what you hope to be true. There are some obvious 'bad' examples (e.g. $10^{20} x^{2} - 1$, or $x^{2} - 10^{20}$, or $x^{2} - 2\cdot 10^{10} x + 10^{20}$) s.t. some coefficients can be arbitrarily larger than (any function of) the others, while the polynomial remains reducible. This isn't terrible (I can't come up with examples like this for all coefficients and all degrees), but certainly means you can't get a condition which just involves the largest coefficient, without regards to spacing.
There are also some 'nice' examples. In the same paper that he proves the criterion you mention above, Perron also proves that a polynomial is irreducible if $a_{n-2}$ is sufficiently larger than the rest.
The paper 'irreducibility of polynomials' by Dorwart (from the monthly in 1935 (!)) came up on a quick google search, and may be worth looking at.
For the last question, playing around with the various divisibility criteria (and Maple) seems to give many, many examples for moderate degree, but my algebra is not strong enough to turn this into a theorem. Of course, if you are only interested in infinitely many n (not all n, since this only works for n being a prime - 1), the cyclotomic polynomials seem like good examples, with all coefficients 1! Is there any reason that you believe a restriction on the size of the coefficients would do something?
-
Well, I was trying (1,1,..,k) for large enough k, restricting the height makes the search...harder. The cyclotomic polynomials are unfortunately the only family of arbitrarily large degree I know. Then there are values like (1,1,1,0,...,0,0) which for all n have a very small fraction of reducible polynomials. – Gjergji Zaimi Dec 8 2009 at 15:33
How about if the polynomial is monic and has constant term 1, and the total norm of all of the small coefficients is bounded by a constant. – Greg Kuperberg Dec 8 2009 at 19:33
http://mathhelpforum.com/algebra/211611-help-matrices-print.html | # Help with matrices
• January 19th 2013, 06:57 AM
Tutu
Help with matrices
Hi, I'm really close to the answer but do not know where I went wrong.
Consider the two equations 4x+8y=1 and 2x-ay=11 ( One is on top of the other) so in matrix form it is
$\begin{bmatrix} 4 & 8 \\2 & -a \end{bmatrix}$ $\begin{bmatrix} x \\y \end{bmatrix}$ = $\begin{bmatrix} 1 \\11 \end{bmatrix}$
For what values of a does the system have a unique solution?
Arranging in augmented matrix and using row operations, I reduced the matrix to
$\begin{bmatrix} 4 & 8 \\0 & 2a+8 \end{bmatrix}$ $\begin{bmatrix} x \\y \end{bmatrix}$ = $\begin{bmatrix} 1 \\-22 \end{bmatrix}$
I had to invert the matrix premultiplied to the matrix containing x and y, so I had to get the det, I got 8a+32
$\begin{bmatrix} x \\y \end{bmatrix}$ = $\frac{1}{8a+32}\begin{bmatrix} 2a+8 & -8 \\0 & 4 \end{bmatrix}\begin{bmatrix} 1\\-22\end{bmatrix}$
Then I multiplied the determinant into the inversed matrix, simplified, was left with
$\begin{bmatrix}\frac{a+4}{4a+16} &\frac{-1}{a+4} \\0 &\frac{1}{2a+8} \end{bmatrix}\begin{bmatrix} 1 \\-22 \end{bmatrix}$
multiplying them, I get
$\begin{bmatrix}\frac{a+4}{4a+16}+\frac{22}{a+4}\\0 +\frac{-22}{2a+8}\end{bmatrix}$
So my answer is x = $\frac{a+92}{4a+16}$.. and y = $\frac{-22}{2a+8}$
but it's wrong.. can someone show me where I went wrong?
Will really appreciate it, thank you so much!
• January 19th 2013, 08:16 AM
Soroban
Re: Help with matrices
Hello, Tutu!
First of all, you didn't answer the question.
Quote:
$\text{Given: }\:\begin{bmatrix} 4 & 8 \\2 & \text{-}a \end{bmatrix}$ $\begin{bmatrix} x \\y \end{bmatrix}$ = $\begin{bmatrix} 1 \\11 \end{bmatrix}$
$\text{For what values of }{\color{red}a}\text{ does the system have a unique solution?}$
The system does not have a unique solution if its determinant equals zero.
. . $D \:=\:\begin{vmatrix}4&8 \\ 2&\text{-}a \end{vmatrix} \;=\;(4)(\text{-}a) - (8)(2) \;=\;\text{-}4a - 16$
. . If $D = 0\!:\;\text{-}4a - 16 \:=\:0 \quad\Rightarrow\quad a \:=\:\text{-}4$
$\text{The system has a unique solution for all }a \ne \text{-}4.$
. . . . $a \,\in\,(\text{-}\infty,\text{-}4) \cup (\text{-}4,\infty)$
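For readers who also want the solution itself, a quick sympy sketch (purely illustrative) confirms both the determinant condition and the values of $x$ and $y$:

```python
import sympy as sp

a, x, y = sp.symbols('a x y')
M = sp.Matrix([[4, 8], [2, -a]])
print(M.det())    # -4*a - 16, which vanishes exactly at a = -4

sol = sp.solve([sp.Eq(4*x + 8*y, 1), sp.Eq(2*x - a*y, 11)], [x, y], dict=True)[0]
print(sp.simplify(sol[x]), sp.simplify(sol[y]))
# x = (a + 88)/(4a + 16), y = -21/(2a + 8), both undefined when a = -4
```

Comparing with the first post, this suggests the slip is the $-22$ on the right-hand side of the reduced system: the row operation $R_1 - 2R_2$ gives $1 - 2\cdot 11 = -21$, and with $-21$ the original method produces exactly these values.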
http://unapologetic.wordpress.com/2011/02/18/more-kostka-numbers/?like=1&source=post_flair&_wpnonce=f9365a8c97 | # The Unapologetic Mathematician
## More Kostka Numbers
First let’s mention a few more general results about Kostka numbers.
Among all the tableaux that partition $n$, it should be clear that $(n)\triangleright\mu$. Thus the Kostka number $K_{(n)\mu}$ is not automatically zero. In fact, I say that it’s always $1$. Indeed, the shape is a single row with $n$ entries, and the content $\mu$ gives us a list of numbers, possibly with some repeats. There’s exactly one way to arrange this list into weakly increasing order along the single row, giving $K_{(n)\mu}=1$.
On the other extreme, $\lambda\triangleright(1^n)$, so $K_{\lambda(1^n)}$ might be nonzero. The shape is given by $\lambda$, and the content $(1^n)$ gives one entry of each value from $1$ to $n$. There are no possible entries to repeat, and so any semistandard tableau with content $(1^n)$ is actually standard. Thus $K_{\lambda(1^n)}=f^\lambda$ — the number of standard tableaux of shape $\lambda$.
This means that we can decompose the module $M^{(1^n)}$:
$\displaystyle M^{(1^n)}=\bigoplus\limits_{\lambda}f^\lambda S^\lambda$
But $f^\lambda=\dim(S^\lambda)$, which means each irreducible $S_n$-module shows up here with a multiplicity equal to its dimension. That is, $M^{(1^n)}$ is always the left regular representation.
Okay, now let’s look at a full example for a single choice of $\mu$. Specifically, let $\mu=(2,2,1)$. That is, we’re looking for semistandard tableaux of various shapes, all with two entries of value $1$, two of value $2$, and one of value $3$. There are five shapes $\lambda$ with $\lambda\trianglerighteq\mu$. For each one, we will look for all the ways of filling it with the required content.
$\displaystyle\begin{array}{cccc}\lambda=(2,2,1)&\begin{array}{cc}\bullet&\bullet\\\bullet&\bullet\\\bullet&\end{array}&\begin{array}{cc}1&1\\2&2\\3&\end{array}&\\\hline\lambda=(3,1,1)&\begin{array}{ccc}\bullet&\bullet&\bullet\\\bullet&&\\\bullet&&\end{array}&\begin{array}{ccc}1&1&2\\2&&\\3&&\end{array}&\\\hline\lambda=(3,2)&\begin{array}{ccc}\bullet&\bullet&\bullet\\\bullet&\bullet&\end{array}&\begin{array}{ccc}1&1&2\\2&3&\end{array}&\begin{array}{ccc}1&1&3\\2&2&\end{array}\\\hline\lambda=(4,1)&\begin{array}{cccc}\bullet&\bullet&\bullet&\bullet\\\bullet&&&\end{array}&\begin{array}{cccc}1&1&2&2\\3&&&\end{array}&\begin{array}{cccc}1&1&2&3\\2&&&\end{array}\\\hline\lambda=(5)&\begin{array}{ccccc}\bullet&\bullet&\bullet&\bullet&\bullet\end{array}&\begin{array}{ccccc}1&1&2&2&3\end{array}&\end{array}$
Counting the semistandard tableaux on each row, we find the Kostka numbers. Thus we get the decomposition
$\displaystyle M^{(2,2,1)}=S^{(2,2,1)}\oplus S^{(3,1,1)}\oplus2S^{(3,2)}\oplus2S^{(4,1)}\oplus S^{(5)}$
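A brute-force check of these counts (an illustrative Python sketch of my own; it simply enumerates fillings, which is fine for examples this small):

```python
from itertools import permutations

def kostka(shape, content):
    """Count semistandard tableaux of the given shape and content by brute force."""
    entries = [v for v, m in enumerate(content, start=1) for _ in range(m)]
    count = 0
    for filling in set(permutations(entries)):
        rows, pos = [], 0
        for length in shape:
            rows.append(filling[pos:pos + length])
            pos += length
        weak_rows = all(r[i] <= r[i + 1] for r in rows for i in range(len(r) - 1))
        strict_cols = all(rows[j][c] < rows[j + 1][c]
                          for j in range(len(rows) - 1)
                          for c in range(len(rows[j + 1])))
        count += weak_rows and strict_cols
    return count

mu = (2, 2, 1)
print([kostka(lam, mu) for lam in [(2, 2, 1), (3, 1, 1), (3, 2), (4, 1), (5,)]])
# [1, 1, 2, 2, 1], matching the table above
```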
## 1 Comment »
1. Really good, complete: allows translation into programming (Mathematica).
Thanks, John.
Comment by | January 15, 2012
http://mathoverflow.net/questions/123887/fourier-transform-of-free-resolvent-kernel-in-three-dimensions | ## Fourier transform of free resolvent kernel in three dimensions
The free resolvent in $\mathbb{R}^3$ has this representation: $$(R_0(z)f)(x)=\int_{\mathbb{R}^3}\frac{e^{i\sqrt{z}|x-y|}}{4\pi|x-y|}f(y)dy$$ with $\Im\sqrt{z}>0$. So its integral kernel is $$K(x,y)=\frac{e^{i\sqrt{z}|x-y|}}{4\pi|x-y|}$$ What is its Fourier transform with respect to $x$? According to my calculation it should be $$\frac{e^{-iy\cdot\xi}}{|\xi|^2-z}$$ Is it right?
-
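For what it's worth, a quick sketch of the radial integral (assuming the non-unitary convention $\hat f(\xi)=\int e^{-ix\cdot\xi}f(x)\,dx$ and $\Im\sqrt{z}>0$, so the $r$-integral converges):
$$\int_{\mathbb{R}^3}\frac{e^{i\sqrt{z}|x|}}{4\pi|x|}\,e^{-ix\cdot\xi}\,dx=\frac{1}{|\xi|}\int_0^\infty e^{i\sqrt{z}\,r}\sin(|\xi| r)\,dr=\frac{1}{|\xi|^2-z},$$
and translating the kernel so that it is centred at $y$ multiplies this by $e^{-iy\cdot\xi}$, which agrees with the expression in the question for that Fourier convention.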
http://mathoverflow.net/revisions/20705/list | 2 edited title
# Is there a "natural" characterization of when $X \times \beta \mathbb{N}$ is normal?
As per a recent question of mine, $\omega_1 \times \beta \mathbb{N}$ is not normal. I'm wondering whether there's some sort of "natural" condition that describes when a space has a normal product with $\beta \mathbb{N}$, analogous to Dowker's characterisation that $X$ is countably paracompact iff $X \times [0, 1]$ is normal.
The context here is that I'm looking for something I can weaken to something sensible, as I have a property implied by $X \times \beta \mathbb{N}$ being normal which I am "surprised" isn't an equivalence ($\omega_1$ has this property) and would like to see if I can show it's equivalent to some slightly weaker topological condition.
http://math.stackexchange.com/questions/45288/non-repeating-random-number-generation-with-xi1-xi-increment-mod-m?answertab=oldest | # Non repeating random number generation with x(i+1) = x(i) + increment mod m
I have to generate millions of non-repeating random numbers and came across this equation: $x_{i+1} = x_i + c \pmod m$, where $c$ and $m$ are relatively prime and $m$ is at least the total number of values to be generated.
This works OK since I don't need good random numbers and don't have to memorize them. My question is: what is the name of this method, and how is it justified? I will have to write about it, but I can't find any information without knowing how to formulate my question.
-
this is a number generator but not a random number generator – miracle173 Jun 14 '11 at 11:50
4
– Jyrki Lahtonen Jun 14 '11 at 12:01
I added the random tag. Just in case. – Jyrki Lahtonen Jun 14 '11 at 12:24
## 2 Answers
Your method is called a "linear congruential generator".
Please have also a look at this question:
how to generate real random numbers
The linear congruential generators are commonly considered to be a bad choice, with much better algorithms available, but it will depend on your application which generator turns out to be good or bad.
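For illustration, here is a minimal sketch of the generator described in the question (my own code; it relies on the fact that an arithmetic progression mod $m$ with $\gcd(c,m)=1$ visits every residue exactly once per period):

```python
from math import gcd

def non_repeating(seed, c, m):
    """Yield m values with no repeats; requires gcd(c, m) == 1."""
    assert gcd(c, m) == 1
    x = seed % m
    for _ in range(m):
        yield x
        x = (x + c) % m

vals = list(non_repeating(seed=7, c=17, m=100))   # illustrative numbers
assert len(set(vals)) == 100                      # all distinct before any repeat
```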
-
http://math.stackexchange.com/questions/228676/finding-number-of-different-paths | # Finding number of different paths
Consider a finite grid. Suppose points a and b have positive x and y components, a has coordinates (x1, y1), b has coordinates (x2, y2), and we want to go from a to b. How many possible paths exist if our movement is just going up and going right?
-
actually this is not a HW question, this is part of my PHP programming for a graphing calculator – user1667315 Nov 4 '12 at 5:06
Thanks man, very helpful – user1667315 Nov 4 '12 at 5:08
is there a formula that I can plug the x, y coordinates of a and b into? – user1667315 Nov 4 '12 at 5:12
If these answers are helpful to you, they may be helpful to others as well. Please do not vandalize the question after it is answered. – Ross Millikan Nov 13 '12 at 19:05
rolled back edit to give answer context. – robjohn♦ Nov 13 '12 at 21:48
## 1 Answer
I assume you mean for this problem to deal only with integer coordinates and also $x_1 \leq x_2$ and $y_1 \leq y_2$ (otherwise, one cannot move only up and right).
Let $x = x_2 - x_1$ and $y = y_2 - y_1$ (the horizontal and vertical distance between the points). We can imagine a path between the two points as a binary string with exactly $x$ 0's and exactly $y$ 1's, where we interpret a 0 as "move right" and a 1 as "move up". Notice the string has total length $x+y$.
To form such a string, we need only pick the locations for the 0's and then fill the remaining locations with 1's. That is, we choose exactly $x$ locations (without order) out of the total $x+y$ locations, which can be done in $\binom{x+y}{x}$ ways.
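A quick illustrative check in Python (the function name is mine; it just evaluates the binomial coefficient above, assuming integer coordinates with $x_1 \le x_2$ and $y_1 \le y_2$):

```python
from math import comb

def count_paths(x1, y1, x2, y2):
    """Monotone (right/up only) lattice paths from (x1, y1) to (x2, y2)."""
    dx, dy = x2 - x1, y2 - y1
    if dx < 0 or dy < 0:
        return 0          # can't move left or down
    return comb(dx + dy, dx)

print(count_paths(0, 0, 3, 2))   # C(5, 3) = 10
```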
-
Thank you, I already wrote a program to calculate the answer using brute force but I couldn't find the formula from the results, thanks man – user1667315 Nov 4 '12 at 5:28
http://mathoverflow.net/questions/21839/induced-arrivals-in-an-m-m-1-queue | ## “Induced” arrivals in an M/M/1 queue?
I'm a newcomer to the realm of queueing theory, so please bear with me :)
I'd like to model web server traffic with a modified M/M/1 queue. In the simple case we have two parameters - $\lambda$ for the arrival rate and $\mu$ for the departure (or service) rate.
If I understand correctly, the general way to get the performance evaluation equations (average number of requests in the queue, for example) is to draw a flow diagram, and solve the equilibrium equation system, namely for the M/M/1 model:
$0 = -\lambda p_{0} + \mu p_{1}$
$0 = \lambda p_{n-1} - (\lambda + \mu) p_{n} +\mu p_{n+1}$, n = 1, 2, ...
I don't know how I could extend the model to fit the real-world scenario best. Each normal request induces a number of image requests, for example, let it be $u$ on average, and let its service rate be $\sigma$. How can I factor these into the equations?
-
## 1 Answer
Just introduce extra states. The total description of a state will include the length of the queue and the stage of service for the current customer. For instance, if each initial service may result in the second stage service with probability $q$ and the departure rate for this second stage service is $\sigma$, then you'll get 2 equations corresponding to 2 possible states with queue length $n$: $\mu p_n(1)+\lambda p_n(1)-\mu p_{n+1}(1)(1-q)-\sigma p_{n+1}(2)-\lambda p_{n-1}(1)=0$ and $\sigma p_n(2)+\lambda p_n(2)-\mu p_{n+1}(1)q-\lambda p_{n-1}(2)=0$ (Just look at how you can depart from the state and put the corresponding terms with plus and then look at how you can arrive to the state and put the corresponding terms with minus. For instance, the terms in the first equation correspond to having been served at stage 1, new arrival to the queue serving a stage 1 customer, completely finishing serving a stage 1 customer in a queue with $n+1$ customers, finishing serving a stage 2 customer in that queue, and arrival of a new customer to the queue of length $n-1$ serving a stage 1 customer).
Sometimes you can simplify resulting big systems to smaller ones but the general idea is always to start with the set of states that fully describes everything that may happen in the system, not just the parameters you want in the end.
-
Okay, I get the general idea, thanks. What is a bit misleading is that I don't need two "stages": the induced/generated requests should be the same "type" as a newly arrived one. If I understand correctly, you used p_{n}(1) and p_{n}(2) to essentially split it into two queues. – Zoltan Apr 20 2010 at 10:52
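For anyone who mainly wants numbers, here is a rough discrete-event sketch of the original web-server scenario, written for illustration only: it simulates the induced image requests directly rather than the two-stage balance equations above, and all parameter values are made up.

```python
import random

def simulate(lam=1.0, mu=4.0, sigma=20.0, u=3, horizon=200000, seed=1):
    """Single server; Poisson page arrivals (rate lam), page service rate mu;
    each finished page queues u image requests served at rate sigma."""
    random.seed(seed)
    t, next_arrival = 0.0, random.expovariate(lam)
    queue = []          # pending jobs: 'page' or 'image'
    in_service = None   # (kind, completion_time)
    area = 0.0          # time-integral of the number in the system
    while t < horizon:
        t_done = in_service[1] if in_service else float('inf')
        t_next = min(next_arrival, t_done)
        area += (len(queue) + (1 if in_service else 0)) * (t_next - t)
        t = t_next
        if t == next_arrival:
            queue.append('page')
            next_arrival = t + random.expovariate(lam)
        else:
            kind = in_service[0]
            in_service = None
            if kind == 'page':
                queue.extend(['image'] * u)   # induced requests join the queue
        if in_service is None and queue:
            kind = queue.pop(0)
            rate = mu if kind == 'page' else sigma
            in_service = (kind, t + random.expovariate(rate))
    return area / t

print("mean number in system ~", simulate())
```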
http://www.physicsforums.com/showthread.php?p=4164410 | Physics Forums
## Light Clock Problem
Quote by harrylin Oops I had overlooked that error in his "givens"! Thanks I'll also correct that for consistency.
Quote by altergnostic Please see my answer above and also my reply to Peter.
Your "givens" are according to SR selfcontradictory; when I corrected it, I showed this by two methods of calculation and I explained it with words. Note that both my approaches differed a little from that of Dalespam and PeterDonis; I only used your input.
Quote by harrylin It seems that we were talking past each other, so I'll first only address the first two or three points:
Yes, it is getting harder by the relative minute to keep up with so many posts.
Once more: there is no spatial rotation here. Nothing rotates at all. And once more, that was already explained in the link I gave, and of which you claimed to understand it (but evidently you don't): http://physicsforums.com/showthread.php?t=574757 Please explain it in your own words before discussing further.
The spatial rotation refers to the observer line of sight. I propose that he places his x axis in line with AB so he can give all the motion to beam and solve without knowledge of the clock's velocity.
From the link, I will comment only upon the same issue I have been discussing here: the beam leaves A and reaches B in the rest frame (the frame that is at rest relative to the clock) and moves at c as seen from a detector at B. From the perspective of an observer that sees the clock in relative motion, the beam brings no data and he can't say anything about its observed velocity, since it is not observed. By including a signal sent from B to A at the moment the beam reaches B, the observer will have the observed time for that event and may determine the time of the event as recorded on the detector by taking into consideration the observed time and the predetermined distance between him and the detector at B, just like the standard relative V implies a given distance/unit of time.
Yes that is what I wrote: you clarified that S' is the rest frame, in which the light clock is moving; and S is the moving frame, in which the light clock is in rest. If you agree, then there is no misunderstanding about this.*
Maybe my terminology is incorrect? The rest frame is meant to be the frame at rest relative to the clock. The moving frame is supposed to be the observer who sees the moving clock.
Still it may be that the problem comes from confusion between frames, as you next write that "This setup presents a novel configuration, which is light that doesn't reach the observer" - however light always only reaches light detectors!*
The beam never reaches the observer at A. Only the signal from B does. He then marks the observed time of event B and plots it against the given L. He can then find the time of event B as marked on the detector's clock by subtracting the time it takes light to cross the distance between the coordinates of the event at B from the observed time. From that, he can calculate the distance from source to receiver as seen from the clock itself by transforming the calculated speed of the beam into c (the speed of light as seen locally, which is the speed supported by a huge amount of evidence). So you see, we only need to know the predetermined distance from A to B, the given time of detection as seen from the clock and the observed time of event B, to figure everything else
It may also come from a misunderstanding of the light postulate, or a combination of both, as you next write: "The postulate of the speed of light strictly states that c is absolute for the source and the observer regardless of motion". That is certainly not the second postulate. Did you study the Michelson-Morley calculation? And if so, do you understand it? Then please explain it.
The interferometer was supposed to test the existence of the aether and find a variance of the speed of light relative to the ether. The setup was built so we had perpendicular light paths. As the apparatus (and the Earth) revolved, no fringe displacement was detected (actually, no significant displacement). So it was concluded that the speed of light was constant regardless of the orientation of the beams or the detectors relative to an absolute frame. Notice that in the equations applied to this experiment, V was the relative velocity wrt the aether, so there was only one possible V, as the aether was absolute. Since then, the speed of light was measured with ever increasing accuracy, always in the same manner: by noting the return time of light as it went forth and back a specified distance. The time of detection after emission is always proportional to c. This is exactly the premise of my setup: light can't be detected to move at any V other than c. Just notice that the beam is not detected by A, only the signal from B is. There's no return, no detection, nothing that relates the path of the beam with the experiments that tested the speed of light.
PS. This forum is meant to explain how SR works. It is not meant to "prove" a theory. As a matter of fact, such a thing is impossible!
This is no theory other than SR itself, since the same postulates apply to detected light, consistent with every experiment ever done. A correction to an incorrect or incomplete diagram or a new thought problem that does not contradict but expands theory is not a new theory, since it is built upon the same postulates. This is merely the outcome of a realisation that this thought problem has never been done before and that the postulate of the speed of light applies to source and receiver, but not necessarily to a non-receiver. This setup is meant to analyse this realisation, keeping all aspects of Einstein's original theory intact and remaining consistent with all available empirical data.
It is the preconception that the postulate of the speed of light also applies to undetected light that keeps you from acknowledging the possibility I intend to discuss here.
Under close inspection, you may realise that neither the postulate nor the evidence are in contradiction with my conclusions, but that it is a necessary outcome of relativity. It is only logical: if light reaches us at a constant speed, distance events will be seen at a later time than the time of the event determined locally (for an observer in close proximity and at rest relative to the event). Hence, if the observed time is delayed and distances are given, the observed speed must be smaller.
To realise this is consistent with SR may be difficult, but it must be, since the conclusion is strictly dependant upon the acceptance of the constancy of the speed of any received or detected light.
Quote by harrylin Your "givens" are according to SR selfcontradictory; when I corrected it, I showed this by two methods of calculation and I explained it with words. Note that both my approaches differed a little from that of Dalespam and PeterDonis; I only used your input.
Why are they self contradictory? If I walked from A to B and measured the distance, wouldn't this distance be the same you would measure by walking from B to A? This is as symmetrical as any relative velocity, as it must be. If there is no symmetric distance between frames, there can be no symmetric speed. Do you disagree? If so, please explain why and show me how to determine relative velocity without a equally symmetric relative distance.
Are there other givens you still disagree with?
A footnote: length contraction usually applies to the distance between two bodies in relative motion, meaning, it usually applies to a length that is changing with time. In the proposed setup, the distance is fixed and has been previously marked. It is the symmetric distance between A and B.
Quote by altergnostic h' is not being measured, it is given
I realize that, which is why I said that your givens are wrong. Givens can be wrong either by being inconsistent with themselves (e.g. given a right triangle with three equal sides) or by being inconsistent with the laws of physics (e.g. given a moving mass 2m which comes to rest after colliding elastically with a mass m initially at rest).
Quote by altergnostic it is supposed to be the constituent L of the relative V that is usually given in SR problems, which we are not given in this setup and is the unknown we seek.
I thought that v=0.5c was also a given, is it not? If v is not given then you can simply plug your other givens into $c^2 t^2 = y^2 + v^2 t^2$ where t=h'/c and then solve for v.
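A quick numerical illustration of that suggestion (my own sketch, taking the arm length y = 1 light-second and the diagonal h' = 1.12 light-seconds as quoted in the thread):

```python
import math

c = 1.0          # work in light-seconds and seconds, so c = 1
y = 1.0          # proper length of the clock arm
h = 1.12         # diagonal path length A -> B
t = h / c        # light-travel time along the diagonal in the observer frame

# c^2 t^2 = y^2 + v^2 t^2  ->  solve for the clock speed v
v = math.sqrt(c**2 * t**2 - y**2) / t
gamma = 1 / math.sqrt(1 - v**2 / c**2)

print(f"v = {v:.3f} c")          # ~0.450 c, not the 0.5 c stated as a given
print(f"gamma = {gamma:.3f}")    # ~1.12, consistent with t = gamma * (y/c)
print(f"diagonal / time = {math.hypot(y, v*t)/t:.3f} c")   # the beam speed stays c
```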
Quote by altergnostic A footnote: length contraction usually applies to the distance between two bodies in relative motion, meaning, it usually applies to a length that is changing with time.
This is an incorrect understanding of length contraction. I recommend starting with the Wikipedia article on length contraction. Length contraction has nothing to do with the distance between two bodies in motion. It also does not refer to a change in length over time, it refers to a difference in the same length as measured by two different frames.
Quote by DaleSpam I realize that, which is why I said that your givens are wrong. *Givens can be wrong either by being inconsistent with themselves (e.g. given a right triangle with three equal sides) or by being inconsistent with the laws of physics (e.g. given a moving mass 2m which comes to rest after colliding elastically with a mass m initially at rest). I thought that v=0.5c was also a given, is it not? *If v is not given then you can simply plug your other givens into $c^2 t^2 = y^2 + v^2 t^2$ where t=h'/c and then solve for v.
I have restated many times that you could ditch the speed of the light clock, since it is useless to this problem. What speed would you insert into gamma in the train and embankment problem? We would have to insert the speed of the projectile/beam and align our x axis with AB in my setup, do you see? If you were given the projectile speed and the times you could calculate the distance AB, this is simply the other way around: given the distance and times you can calculate the speed. The speed of the light clock is completely trivial and you can solve even if the observer at A doesn't know it, since he is given distances and knows all times. From my analysis you can actually derive both the speed of the beam and of the clock, since after finding the total speed for the beam, you can subtract c from that and find the speed of the clock.
Quote altergnostic
A footnote: length contraction usually applies to the distance between two bodies in relative motion, meaning, it usually applies to a length that is changing with time.
Quote by DaleSpam This is an incorrect understanding of length contraction. *I recommend starting with the Wikipedia article on length contraction. *Length contraction has nothing to do with the distance between two bodies in motion. *It also does not refer to a change in length over time, it refers to a difference in the same length as measured by two different frames.
Yes, but it is usually the difference measured by two frames in relative motion (you could have length contraction between two observers at rest also). But the main point is your final sentence: it is a difference between two distances being measured by different frames. The given distance h' is not being measured at all in my setup. It has been previously marked with lightsecond signs, remember? This distance is visual data, not vt (how could it be measured with no v, after all?).
Quote by altergnostic I prefer this analysis over the previous one, but the problem remains: you are not given any v
You are missing the point of this version of my analysis; I let v be unknown. It is the speed of the light clock relative to the observer, but I assumed no value for it. My whole point was to show that, even if you do not know the speed of the light clock relative to the observer, you can still show that the velocity of the "projectile" inside the clock is different relative to the observer than it is relative to the clock, *if* the "projectile" moves slower than light. But if the "projectile" is a light beam, then its speed is 1 (i.e., c) in both the clock frame and the observer frame. My understanding was that that was your point of confusion: you couldn't understand how a projectile's velocity could change from frame to frame if it moves slower than light, but yet the velocity of a light beam does *not* change from frame to frame. I've now given you an explicit formula that shows why that's true for the clock scenario.
You have said several times now that you can "ditch the speed of the light clock" in the analysis. I don't understand what this means. The light clock itself is part of the scenario, so you have to model its motion to correctly analyze the scenario. If you just mean that you can't assume you *know* what its speed is, that's fine; as I said above, I let that speed be unknown in my latest analysis, and showed how it doesn't matter for the question I thought you were interested in--why the projectile's velocity changes from frame to frame while the light beam's does not.
If you mean that we can somehow model the scenario without including the light clock at all, I don't see how. The motion of the parts of the light clock gives a critical constraint on the motion of the projectile/light beam inside the clock. Also, the light clock does not change direction in the scenario, so you can define a single inertial "clock frame"; but the projectile *does* change direction, so there's no way to define a single inertial "projectile frame". That means we can't just focus on the "velocity of the projectile", because that velocity *changes* during the scenario; it's not a "fixed point" that we can use as a reference.
As for "assuming" that the light beam travels at c, I have not assumed that. The only assumptions I have made are translation and rotation invariance, plus the principle of relativity, plus an assumption about how the light clock's "projectile" reflects off the mirror. It's probably futile at this point to walk through the chain of reasoning again, but I'll do it once more anyway. I'll focus on your "triangle diagram" since it's a good illustration of the spatial geometry in the unprimed frame.
We have a "projectile clock" consisting of a source/detector, which moves from A to C to D in the triangle diagram, and a mirror/reflector, which moves on a line parallel to the source/detector that passes through B at the same time (in the unprimed frame) that the source/detector passes through C. The clock as a whole moves at speed $v$ relative to the observer who remains motionless at A (in the unprimed frame).
The "projectile" within the clock moves at some speed $v_p$ in the unprimed frame (which we take to be unknown at this point), along the line A to B, then B to D. At the instant that this "projectile" (call it P1) reaches B, a second "projectile" (call it P2), moving at the *same* speed $v_p$, is emitted back towards A, to carry the information to the observer at A that the first projectile has reached B.
Here are some key facts about the geometry that follow from the above:
- Angle ABC equals angle CBD.
- Distance AB equals distance BD.
- Distance AC equals distance CD.
- Line BC is perpendicular to line AD.
We can also define the following times of interest (in the unprimed frame): T_AB = the time for P1 to travel from A to B; T_BD = the time for P1 to travel from B to D; T_BA = the time for P2 to travel from B to A. It is then easy to show from the above that all three of these times are equal: T_AB = T_BD = T_BA. We also have T_AC = the time for the light clock source/detector to travel from A to C, and T_CD = the time for the source/detector to travel from C to D. And we have T_AC = T_CD = T_AB = T_BD, because projectile P1 and the source/detector are co-located at A and D and both of their speeds are constant.
Thus, we have the following spacetime events:
A0 = D0 = the spacetime origin; the light clock source/detector passes the observer at A at the instant projectile P1 is emitted from the source/detector.
B1 = P1 reaches the mirror/reflector and bounces off; P2 is emitted back towards A.
C1 = the light clock source/detector passes point C.
D2 = P1 reaches the light clock source/detector and is detected.
A2 = P2 reaches the observer at A.
The above facts about the times and the motion of the light clock imply that B1 and C1 are simultaneous (in the unprimed frame), and D2 and A2 are simultaneous (in the unprimed frame as well).
Finally, we have formulas about the speeds:
$$v_p = \frac{AB}{T_{AB}} = \frac{BD}{T_{BD}} = \frac{AB}{T_{BA}}$$
$$v = \frac{AC}{T_{AC}} = \frac{CD}{T_{CD}} = \frac{AC}{AB} v_p$$
The last equality follows from the fact that T_AC = T_AB. Do you see what it says? It says that, if you know v_p, AC, and AB, you know v, the "light clock velocity". You stipulated v_p, AC, and AB in your statement of the problem; therefore you implicitly gave a value for v as well. But the value for v that you stated was a *different* v; it did not satisfy the above equation, which is enforced by the geometry that you yourself gave for the problem; that's why your "givens" were inconsistent.
I'll stop at this point since this post is getting long. What I'm trying to say is that the scenario you proposed has more constraints in it than you think it does; it already *contains* the information about how the projectile/light beam's velocity has to transform between frames.
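To make the inconsistency concrete, here is a tiny numerical sketch (my own illustration, using the numbers quoted in the thread: AB = 1.12 ls for the diagonal and BC = 1 ls for the arm, with the beam moving at c in the observer frame):

```python
import math

AB = 1.12      # diagonal A -> B in light-seconds
BC = 1.0       # the perpendicular arm, one light-second
v_p = 1.0      # the beam moves at c (= 1) in the observer frame

AC = math.sqrt(AB**2 - BC**2)        # ~0.504 ls, from the right triangle
v = (AC / AB) * v_p                  # the constraint v = (AC/AB) * v_p
print(f"v forced by the geometry: {v:.3f} c")   # ~0.45 c, not 0.5 c

# Conversely, insisting on v = 0.5 c with the same AB breaks the triangle:
AC_alt = 0.5 * AB / v_p
print(f"BC would have to be {math.sqrt(AB**2 - AC_alt**2):.3f} ls, not 1 ls")
```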
Quote by altergnostic I have reestaded many times that you could ditch the speed of the light clock, since it is useless to this problem.
Yes, you have restated many times many incorrect things in this thread. Obviously, whenever you are interested in the effect of speed on the operation of a clock then the speed of the clock is important.
However, if it is not a given in the problem then everything becomes easier. Simply plug the givens into the equation I posted above and we find that v = 0.45 c. At 0.45 c the time dilation factor is 1.12, so in 1.12 s the clock travels .5 ls and the light travels a distance of 1.12 ls. 1.12 ls / 1.12 s = c.
Quote by altergnostic given the distance and times you can calculate the speed.
Certainly. And as long as your givens are both self-consistent and consistent with physics then you will always get that light travels at c.
Quote by altergnostic (you could have length contraction between two observers at rest also).
No, you cannot.
Quote by altergnostic The given distance h' is not being measured at all in my setup. It has been previously marked with lightsecond signs
This is yet another self-contradictory statement. The lightsecond signs are a measuring device, and comparing anything to them is performing a measurement.
Quote by altergnostic Why are they self contradictory? [..]
I trust that that has been solved now; so we simply ditch v and find v=0.45 from your 1.12.
Quote by altergnostic [..] server line of sight. I propose that he places his x axis in line with AB so he can give all the motion to beam and solve without knowledge of the clock's velocity.
- spatial rotation is useless. Standard is to orient the system speed along X (don't you know that the Lorentz transformation uses that convention?)
- he can give *no* motion to the beam; and how he places his coordinates can have no effect on the beam! Seriously, nothing of that makes any sense to me.
From the link, [http://physicsforums.com/showthread.php?t=574757] I will comment only upon the same issue I have been discussing here: the beam leaves A and reaches B in the rest frame [..]
Sorry, the first thing you discussed here and which was not solved were the speed and direction - and those also appear in your last example, so you must be sure to get it right...
Maybe my terminology is incorrect? The rest frame is meant to be the frame at rest relative to the clock. The moving frame is supposed to be the observer who sees the moving clock.
That is exactly how I understand it; thus you disagree about the calculation method, which is identical to the method to explain the Michelson-Morley experiment.
The beam never reaches the observer at A. Only the signal from B does. He then marks the observed time of event B and plots it against the given L. He can then find the time of event B as marked on the detector's clock by subtracting the time it takes light to cross the distance between the coordinates of the event at B from the observed time. From that, he can calculate the distance from source to receiver as seen from the clock itself by transforming the calculated speed of the beam into c (the speed of light as seen locally, which is the speed supported by a huge amount of evidence). [..]
That is wrong: it appears that you heard some stuff about SR and some stuff about GR and mixed them up. There is nothing to "transform"; as you earlier claimed there is no transformation to make, as all distance and time measurements are made with the single reference system S', the "rest" system.
[MMX:] The interferometer was supposed to test the existance of the aether and find a variance of the speed of light relative to the ether. The setup was built so we had perdicular light paths.
Note: the essential point of the experiment is that the light paths in the ether are not perpendicular; I hope that that is clear.
As the apparatus (and the Earth) revolved, no fringe displacement was detected (actally, no significant displacement). So it was concluded that the speed of light was constant regardless of the orientation of the beams or the detectors relative to an absolute frame.
Not exactly, no... There was only one detector. And no effect was detected from changing orientation regardless of the velocity (although apparently Michelson only measured it at one velocity; others repeated it at other velocities).
Notice that in the equations applied to this experiment, V was the relative velocity wrt the aether, so there was only one possible V, as the aether was absolute. Since then, the speed if light was measured with ever increasing accuracy, always in the same manner: by noting the return time if light as it went forth and back a specified distance. The time of detection after emission is always proportional to c. This is exactly the premise of my setup: light can't be detected to move at any V other than c. Just notice that the beam is not detected by A, only the signal from B is. There's no return, no detection, nothing that relates the path of the beam with the experiments that tested the speed of light. [..]
I still can't make any sense of that last part. Sorry. But there are sufficient glitches in your description, and a sufficient lack of equations, to make me confident that before anything else it will be better to go through a special relativity calculation example of MMX. And this thread is already too long. Please start a new one on MMX. I do think it likely that this whole topic here will disappear after that exercise. Especially because:
This is merely the outcome of a realisation that this thought problem has never been done before and that the postulate of the speed of light applies to source and receiver, but not necessarily to a non-receiver. It is the preconception that the postulate of the speed of light also applies to undetected light that keeps you from aknowledging the possibility I intend to discuss here. [..] the conclusion is strictly dependant upon the acceptance of the constancy of the speed of any received or detected light.
No. Once more: there are detectors where you like, even non-detected light is assumed to go at c, and such experiments as the one you describe have been done, both in theory and in practice. As a matter of fact, radar uses that same set-up. And the basic calculations are similar as with MMX.
if light reaches us at a constant speed, distance events will be seen at a later time than the time of the event determined locally (for an observer in close proximity and at rest relative to the event). Hence, if the observed time is delayed and distances are given, the observed speed must be smaller.
Do you think that the traffic police will observe speeding cars further away as going slower - or that GPS will tell that you are going slower as the satellite is further away??
Peter, thanks for your patience and effort. I'll make a couple of observations regarding your reply. You misinterpret my assertion that we can ditch the velocity of the light clock to mean that it is an unknown that must enter the equations nonetheless. No, I mean that we don't need it to solve at all. To determine the speed of the beam all the observer needs to know is the distance travelled over the observed time - this will be the apparent velocity. Conversely, if you were given the relative velocity of the beam/projectile from A to B (the vector addition of the upward speed of the projectile and the perpendicular speed of the projectile-clock) you could find the distance AB. Given the distance AB you can directly calculate the observed velocity as seen from A, just plot it over the observed time.

You have to put yourself in the observer's shoes and really imagine how he calculates the speed of the beam. Since we are given the time of event B as measured by the clock itself, we can easily find the time that event is observed at a distance AB. Note this is the case for any scenario, we wouldn't need a beam travelling from A to B at all. If the blinker at B simply turns on at t=1s (as measured with the blinker's own clock), you just have to consider the spatial separation from the event and the observer to get the observed time. Assuming the event is caused by a beam going from A to B, he only has to plot the given distance over the observed time.

The mistake I keep pointing you to is the fact that you assumed the velocity of the clock enters the equations. But it would enter only if we were doing some kind of vector addition, but since the clock is going from A to D and the observer sees the event from B (at a time T), there's no reason to take that velocity into consideration, you are only looking for the time separation between events A and B to figure out at what speed something has to travel this distance to cause the observed times.

Later you state that T_AB = T_BA, but that is an assumption. The observer has to calculate T_AB from the observed time of event B, which is the local or proper time of event B plus the time it takes for that event to be seen at A. We know T_BA as seen from A from experiment, the speed of any directly detected light must be c. We don't know T_AB as seen from A because T_AB is only observed after T_BA, otherwise the observer doesn't see event B at all. T_AB must be calculated in this setup. Of course, if the observer at rest were to send a beam of light towards B, that would take the same time as a beam would need to come back from B to A. But notice the very important fact here that, if that was the case, the time of event B as seen from B would NOT be 1s (the time light takes to cross y'), but actually 1.12s (the time it would take for light to cross h').

Your last remark reassures me that you are assuming what you are trying to prove. The setup has no information on the beam, only on the events, which are good to calculate the speed of the beam in each frame.
Quote by altergnostic [..] To determine the speed of the beam all the observer needs to know is the distance travelled over the observed time - this will be the apparent velocity. [...]
Just one more remark: you are mixing up reference systems, just as we already discovered - in fact you try to use t instead of t' for S'. That doesn't work. This will become clearer when you discuss MMX, which is the "mother" of all such calculations.
Quote by altergnostic You misinterpret my assertion that we can ditch the velocity of the light clock to mean that it is an unknown that must enter the equations nonetheless. No, I mean that we don't need it to solve at all. To determine the speed of the beam all the observer needs to know is the distance travelled over the observed time - this will be the apparent velocity.
You are correct that you could set up the scenario so that the distances AB, AC, and BC were pre-determined; then you would have to control the speed of the light clock so that the beam actually hit the light clock's mirror at point B, instead of at some other point along the mirror's trajectory. That's fine, but it doesn't change anything about my analysis; my analysis is still correct, because even if you don't need the light clock's speed to do the analysis (which you actually do--see below--but for the moment I'll assume for the sake of argument that you don't), that is not the same as saying that an analysis which does use the light clock speed (even if it is left unknown) is incorrect.
Quote by altergnostic Conversely, if you were given the relative velocity of the beam/projectile from A to B (the vector addition of the upward speed of the projectile and the perpendicular speed of the projectile-clock) you could find the distance AB.
Yes, that's true, but you did not specify the upward speed of the projectile in the unprimed (observer) frame. You specified it in the primed (clock) frame. (Btw, I mis-stated this somewhat in my previous post; I said that you specified $v_p$, but I should have said that you specified $v'_p$. I can go back and continue the analysis I was doing in my last post with that corrected, but it may not be worth bothering.)
So before you do this vector addition, you have to first transform the upward speed of the projectile from the primed to the unprimed frame. The upward distance (AB in the primed frame; BC in the unprimed frame, since the clock is moving in that frame) does not change when you change frames, but the *time* does, because of time dilation, so the upward speed of the projectile (i.e., the upward *component* of its velocity) is different in the unprimed frame than in the primed frame. So you do need to know the relative velocity of the light clock and the observer; without that you can't transform the upward velocity in the primed frame to the upward velocity component in the unprimed frame.
Quote by altergnostic Given the distance AB you can directly calculate the observed velocity as seen from A, just plot it over the observed time. You have to put yourself in the observer's shoes and really imagine how he calculates the speed of the beam.
Yes, let's do that. We have a light beam traveling from A to B, and a second light beam (the one that is emitted at the instant the first one strikes the mirror) traveling from B to A. The round-trip travel time is measured by the observer at A, and he already knows the distance AB because he measured it beforehand (and then controlled the speed of the light clock to ensure that the mirror was just passing B at the instant the first beam hit it). So we have two light beams each covering the same distance; if we assume that both beams travel at the same speed in the unprimed frame (even if we don't assume that that speed is c), then we can simply divide the round-trip distance (2 * AB) by the round-trip time to get the beam speed. Fine. See below for further comment.
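In symbols (an editorial restatement of the procedure just described, not part of the original post):
$$c_{\text{measured}} = \frac{\text{round-trip distance}}{\text{round-trip time}} = \frac{2\,\overline{AB}}{T_{A\to B} + T_{B\to A}},$$
which is the two-way ("round-trip") speed that measurements of this kind actually yield.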
Quote by altergnostic Later you state that T_AB = T_BA, but that is an assumption.
Only in the sense that we assume that both light beams (the one from A to B and the one from B to A) travel at the same speed. Do you challenge that assumption? Both beams are "observed" in your sense--one endpoint of each beam is directly observed by the observer at A. It's impossible for *both* endpoints of either beam to be directly observed by the same observer, so if that's your criterion for a beam being "directly observed", then no beam is ever directly observed. But if you accept that *receiving* a beam counts as directly observing it, then *emitting* a beam should also count as directly observing it; either one gives the observer direct knowledge of one endpoint of the beam.
Quote by altergnostic The observer has to calculate T_AB from the observed time of event B, which is the local or proper time of event B plus the time it takes for that event to be seen at A.
But how do we know the time it takes for that event to be seen at A? Are you assuming that the beam from B to A travels at c? If so, then why not also assume that the beam from A to B travels at c? What makes a received beam any different from an emitted beam?
By contrast, I am only assuming that the two beams (A to B and B to A) travel at the *same* speed, *without* assuming what that speed is (we *calculate* that by dividing round-trip distance by round-trip time, as above). That seems like a much more reasonable approach, since it does not require assuming that there is any difference between an emitted beam and a received beam.
Quote by altergnostic We know T_BA as seen from A from experiment, the speed of any directly detected light must be c.
But only one endpoint of the light is directly detected. Why should received light count as directly detected but not emitted light?
Quote by altergnostic We don't know T_AB as seen from A because T_AB is only observed after T_BA
No, they are both "observed" (by any reasonable definition of "observed") at the same time, when the beam from B to A is received and its time of reception is observed. At that point the observer knows the round-trip travel time and the round-trip distance and can calculate the beam speed.
Quote by altergnostic T_AB must be calculated in this setup.
So must T_BA. The observer doesn't directly observe the emission of the beam from B to A, any more than he directly observes the reception of the beam from A to B. He has to calculate the times of both those events. The way he does that is to use the fact that both events occur at the same instant, by construction.
Quote by altergnostic Of course, if the observer at rest were to send a beam of light towards B, that would take the same time as a beam would need to come back from B to A.
Unbelievable; you now *admit* this, yet you were arguing that we could *not* assume this before.
Quote by altergnostic But notice the very important fact here that, if that was the case, the time of event B as seen from B would NOT be 1s (the time light takes to cross y'), but actually 1.12s (the time it would take for light to cross h').
Which it is; the time of event B, *in the unprimed frame*, *is* 1.12s (if we allow v, the velocity of the light clock relative to the observer, to be set appropriately to 0.45 instead of 0.5, per the comments of DaleSpam, harrylin, and myself). The time of event B, in the *primed* frame, is 1s; but that's not what the observer at A is interested in. He's interested in the time of event B in his frame, the unprimed frame, and that time is different from the time of event B in the primed frame because of time dilation. Which, of course, requires you to know the velocity of the light clock relative to the observer, contrary to your repeated erroneous claim that you don't.
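To make the arithmetic explicit (an editorial aside, using the clock speed of $v = 0.45c$ mentioned above):
$$\gamma = \frac{1}{\sqrt{1 - v^2/c^2}} = \frac{1}{\sqrt{1 - 0.45^2}} \approx 1.12, \qquad t = \gamma\, t' \approx 1.12 \times 1\,\text{s} = 1.12\,\text{s},$$
so the 1 s of the clock frame and the 1.12 s of the observer frame describe the same event in two different frames.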
Quote by altergnostic You misinterpret my assertion that we can ditch the velocity of the light clock to mean that it is an unknown that must enter the equations nonetheless. No, I mean that we don't need it to solve at all. To determine the speed of the beam all the observer needs to know is the distance travelled over the observed time
No matter how you try to get rid of it, it is there anyway since both the distance traveled and the observed time depend on the speed of the clock.
Giving 1.12 to the time it takes the beam to travel from A to B as seen from A is a mistake. That would only be true if the time of event B were also 1.12s. If you send a light beam from A to B, B would receive it at t'=1.12s. But, in the proposed setup, the time of this event in the clock's frame is t'=1s. Following what has been stated in this thread, the time of event B can be either 1s or 1.12s as seen from the clock's frame and in both cases the speed of the beam as seen from A would be c, but this is only possible if light crosses the distance h' (the hypotenuse) as seen in the clock's frame in one case and the distance y' in the other case, but it is clear that in this setup the beam crosses y' in the clock's frame.

I really don't know how else to state this, but as a last resort, I will ask you to ignore the beam completely and just analyse the events as seen from A, when x is in line with AB, as if no beam was causing the events at all:

A0 = A'0 = 0,0,0
B1 = B'1+X = 2.12,1.12,0

While B happens at t'=1s as recorded from B itself. What would be the presumed speed an object would need, as seen from A, to travel from event A0 to B1, if it was to leave A at t=0s and reach B at the time of event B (TB1 = 2.12s)?

TB1 = 2.12
X = 1.12
VAB = X/TB1 = 0.528

The observer can subtract the delay caused from BA (the time light takes to travel from B to A) to find the time of the event as measured by a clock placed at B:

T'B = TB - TBA = 2.12 - 1.12 = 1s

If you calculate the speed of the beam straight from the times of events and the given distances, that's what you would find, and as the observer standing at A only has access to the events themselves, that's how he would calculate it.

We can only disagree on two points, I think:
1: The given distance is not correct or useful or consistent with SR
2: The observed time for B1 as seen from A is not the local or proper time of the event plus the time separation between A and B.

Regarding 1, I remind you that any given V implies a given L, which L I'm giving as the distance from A to B (or B to A) measured locally - a proper distance? I don't see why we wouldn't be allowed to be given this length if we are given a relative V in other situations. Regarding 2, I can't see how this isn't the case. The only argument against this is that the primed time is not 1s but 1.12s, which is inconsistent with the light clock's own measurements.

This thread is way too long already and I think that if we haven't reached an agreement yet, we won't reach it anytime. Maybe there's an experiment out there that actually determines the speed of an undetected (or indirectly detected) light beam so we could check this, but I believe that this hasn't been done at all. Every experiment built to determine the speed of light that I know about works by measuring the return time or some other setup that relies on direct detection or emission.

I repeat that the observer at A is not the source nor the receiver of the beam. The source is the bottom mirror and the receiver is the top mirror. The observer at A is the receiver of the signal travelling at c from B1 to A0. The mirror at B is the receiver of the beam travelling at c from A'0 to B'1 (which equals CB). The postulate of the constancy of the speed of light necessarily applies to those light paths.
Quote by altergnostic Giving 1.12 to the time it takes the beam to travel from A to B as seen from A is a mistake. That would only be true if the time of event B were also 1.12s. If you send a light beam from A to B, B would receive it at t'=1.12s. But, in the proposed setup, the time of this event in the clock's frame is t'=1s.
You keep on mixing up quantities in different frames. When we say that the "time" for the beam to travel from A to B is 1.12s, we mean in the unprimed frame; i.e., t = 1.12s. That is perfectly consistent with that time being 1s in the primed frame; i.e., t' = 1s.
Quote by altergnostic Following what has been stated in this thread, the time of event B can be either 1s or 1.12s as seen from the clock's frame
No, it can't. Where are you getting that from? The time of event B (event B1 in my nomenclature) is 1.12s in the unprimed frame (observer's frame), and 1s in the primed frame (clock frame). No one has said that the time of event B1 is 1.12s in the primed frame.
Quote by altergnostic analyse the events as seen from A, when x is in line with AB, as if no beam was causing the events at all: A0 = A'0 = 0,0,0 B1 = B'1+X = 2.12,1.12,0 While B happens at t'=1s as recorded from B itself.
What does "recorded from B itself" mean? Who is doing the recording? If the observer at B doing the recording is at rest relative to the observer at A, then he will record t = 1.12s. To record t' = 1s, he would need to be moving with the light clock, i.e., at an angle to the x-axis with the orientation of axes you are using. That means such an observer is not "at B" except at the instant when he records the arrival of the light beam there.
Also, I don't understand your coordinate values. Is the first number supposed to be time? If so, where does 2.12 come from?
I can't even make sense of the rest of your analysis, because I don't understand where you're getting the initial numbers it's based on.
Quote by altergnostic I repeat that the observer at A is not the source nor the receiver of the beam. The source is the bottom mirror and the receiver is the top mirror.
But the observer at A is co-located with the bottom mirror when it emits the light beam, so he can directly observe its time of emission by any reasonable definition of "directly observe". By your extremely strict definition of "directly observe", practically no events are ever directly observed.
Quote by altergnostic The observer at A is the receiver of the signal travelling at c from B1 to A0. The mirror at B is the receiver of the beam travelling at c from A'0 to B'1 (which equals CB). The postulate of the constancy of the speed of light necessarily applies to those light paths.
And that, all by itself, is enough to show that *all* the other light beams in your scenario also move at c. That was one of the points of my various posts analyzing the scenario. But I think you are right that, if we haven't reached agreement on that point by now, we're not likely to.
http://mathhelpforum.com/advanced-statistics/206696-probability-space-sigma-algebras-intuition.html | # Thread:
1. ## Probability space/ Sigma Algebras intuition
I've got 2 questions regarding probability spaces/events/sigma algebras etc.
1. If you got the event {1,2,3}, you could see this as 1 or 2 or 3, right? Is this the standard notation for events?
How would you notate the event 1 and 2 and 3 ?
2. Why would you take the set of events to be smaller than all the subsets of the sample space?
In my book, they are talking about the properties of a sigma algebra field.
And if the set of events are all the subsets of the sample space this makes sense to me.
But I'm like, why would you bother about smaller sets?
For example:
If you throw a dice, and you got the sample space {1,2,3,4,5,6}.
I could just only put the zero set and the sample space in my set of events...
But I'm like, why would you only look at those events ?
And why can I look at some smaller sets, like the trivial one, but not at the set of the events {2,4,6} and {1,2,3,4,5,6}?
I mean, I can see that this is not a sigma field. But to me the trivial sigma field is as useless as set with the events {2,4,6} and {1,2,3,4,5,6}.
Anyone here who can help me out a little bit ?
2. ## Re: Probability space/ Sigma Algebras intuition
Originally Posted by kasper90
1. If you got the event {1,2,3}, you could see this as 1 or 2 or 3, right? Is this the standard notation for events?
How would you notate the event 1 and 2 and 3 ?
2. Why would you take the set of events to be smaller than all the subsets of the sample space?
In my book, they are talking about the properties of a sigma algebra field.
And if the set of events are all the subsets of the sample space this makes sense to me.
I have absolutely no idea what #1 could mean. Can you make it clearer?
#2
Do you know the properties of a sigma algebra?
3. ## Re: Probability space/ Sigma Algebras intuition
Sorry, to be confusing, let's put it in an other way:
We are throwing a dice one time. So the sample space is {1,2,3,4,5,6}.
How can I interpret the event {1,2,3}?
About #2, I know the properties. I can prove something is a sigma-algebra etc.
4. ## Re: Probability space/ Sigma Algebras intuition
Originally Posted by kasper90
Sorry, to be confusing, let's put it in an other way:
We are throwing a dice one time. So the sample space is {1,2,3,4,5,6}.
How can I interpret the event {1,2,3}?
I understand that you are Dutch. The singular of dice in English is die.
The event $\{1,2,3\}$ is tossing a die and getting a number less than four.
Or perhaps, getting a number that is at most three.
Please, I still do not know what you are asking in #2.
5. ## Re: Probability space/ Sigma Algebras intuition
Okay, so I can say that I can interpret the event {1,2,3} here as getting 1 or 2 or 3.
Can I say that the event {a,b,c} means getting a or b or c ?
Trying to answer my own #2 question:
Could I say that you normally always choose the set of events as power set of the sample space.
As you want to assign a probability to every event in the power set of the sample space.
But when the sample space becomes uncountably infinite, then there arise problems if you want to assign a probability to every event in the power set of the sample space.
I don't understand exactly why, but, in some cases you have to take a smaller set, because to some events you cannot assign a probability.
In those cases, you are looking at subsets of the power set of the sample space that is a sigma algebra.
... ? Something like that ?
Still a little bit confused as you can see, but this kind of the answer I got when searching on google.
6. ## Re: Probability space/ Sigma Algebras intuition
Originally Posted by kasper90
Okay, so I can say that I can interpret the event {1,2,3} here as getting 1 or 2 or 3.
Can I say that the event {a,b,c} means getting a or b or c ?
That has no meaning apart from a well defined experiment.
It could mean selecting the first three letters of the alphabet.
Or it could mean selecting three letters from the word "abacus"
Originally Posted by kasper90
Trying to answer my own #2 question:
Could I say that you normally always choose the set of events as power set of the sample space.
As you want to assign a probability to every event in the power set of the sample space.
Here the point is that we want the whole space to be an event with probability 1.
We want the emptyset to be an event with probability 0.
If $A$ is an event, we want $A^c$ to be an event with probability $1-\mathcal{P}(A)$.
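For reference (a standard formulation added editorially, not part of the original reply), these requirements are exactly what the $\sigma$-algebra axioms guarantee: a collection $\mathcal{F}$ of subsets of the sample space $\Omega$ is a $\sigma$-algebra when
$$\Omega \in \mathcal{F},\qquad A\in\mathcal{F}\ \Rightarrow\ A^c\in\mathcal{F},\qquad A_1,A_2,\ldots\in\mathcal{F}\ \Rightarrow\ \bigcup_{n=1}^{\infty}A_n\in\mathcal{F},$$
and a probability measure is then a countably additive function $P:\mathcal{F}\to[0,1]$ with $P(\Omega)=1$, from which $P(\emptyset)=0$ and $P(A^c)=1-P(A)$ follow.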
7. ## Re: Probability space/ Sigma Algebras intuition
Okay, I understand that.
Another question if I may:
Is it true that, if you deal with finite sample spaces you normally choose the "domain" of the probability function as the power set of the sample space ?
http://mathoverflow.net/questions/91889?sort=oldest | ## What is the smallest variety of algebras containing all fields?
A field is a ring whose nonzero elements form a commutative group under multiplication. A field is also a commutative inverse semigroup with respect to multiplication. The unique multiplicative inverse $y$ of an element $x$ (in the sense that $xyx=x$ and $yxy=y$) is $y=x^{-1}$ if $x \neq 0$ and $y=0$ if $x = 0$.
To simplify the discussion, define an inverse ring to be a ring which is an inverse semigroup with respect to multiplication. Denote the multiplicative inverse operation by $()^{-1}$. (Warning: The notion of an inverse ring doesn't exist outside of this question.) Both rings and inverse rings form a variety of algebras, i.e. they can be defined by a set of operations ($+$, $*$, $-()$, $()^{-1}$, $0$, $1$ in this case) together with set of identities satisfied by these operations. I think that the commutative inverse rings are the smallest variety of algebras containing all fields.
Question
A direct product of a family of fields is no longer a field. However, it is still a commutative inverse ring. My question is whether every commutative inverse ring is a subdirect product of a family of fields.
(Note that subdirect product here must refer to either rings or inverse rings, because the notion of subalgebra isn't defined otherwise. The answer to my question should be independent of which one we choose, but referring to inverse rings would make more sense to me.)
Note This question is identical to this question at math.stackexchange.com.
-
Can you give an example for your notion. For example, in a direct product, what is the multiplicative inverse of $(0,1)$ ? – Ralph Mar 22 2012 at 7:42
@Ralph The multiplicative inverse of $(0,1)$ is $(0,1)$. Each idempotent element is its own inverse. – Thomas Klimpel Mar 22 2012 at 9:50
## 3 Answers
If I'm not mistaken, the answer is 'yes': let $M(A)$ be the set of maximal ideals of your commutative inverse ring $A$. Then you have a map
$$A \rightarrow \prod_{\rho \in M(A)} A/\rho.$$
Each projection is surjective. The kernel of this map is the Jacobson radical $R$ of $A$.
So let $x$ be in $R$; then $1-xy$ is invertible for all $y$, in particular for $y$ the multiplicative inverse of $x$. But as $y(1-xy)=0$, automatically $y=0$, and hence $x=xyx=0$.
So the previous map is injective, and $A$ is a subdirect product of fields.
-
Commutative inverse semigroups are examples of completely regular semigroups, that is, semigroups where each element belongs to a subsemigroup which is a group. It is an old result that any ring whose multiplicative reduct is completely regular is a subdirect product of division rings. I don't have the reference off hand, but I can find it. Basically one shows the Jacobson radical is trivial and then one observes that a primitive ring which is completely regular must be a division ring. A key step is to show the idempotents are central and so you have a possibly noncommutative inverse ring.
Added. I haven't quite found the old reference yet, but here is the proof. If R is a ring whose multiplicative reduct is completely regular, then it is von Neumann regular and so its Jacobson radical is trivial. Thus it suffices to handle the primitive case, so assume R has a faithful simple module. Clearly 1 and 0 are then the only central idempotents of R. Now we show all idempotents of R are central. By Clifford's structure theorem for completely regular semigroups, R is a semilattice of completely simple semigroups. So it suffices to show these completely simple semigroups are groups and then R will be an inverse semigroup with central idempotents. For this it suffices to show $\mathcal R$-equivalent and $\mathcal L$-equivalent idempotents e,f are equal. By symmetry we assume eR=fR. Then $(e-f)^2=0$ and hence e-f=0 since R is completely regular.
Thus R is a completely regular inverse monoid whose only idempotents are 0,1 and so R-{0} is a group, i.e., R is a division ring.
I believe the old paper I can't find shows a ring R is completely regular iff it is what ring theorists call strongly regular.
-
It's good to know that this is a special case of an old result. It's also uplifting that completely regular multiplicative semigroups of rings turn out to be Clifford semigroups. (I really like Clifford semigroups, because of $ab=ba \Leftrightarrow a^{-1}b=ba^{-1}$.) – Thomas Klimpel Mar 22 2012 at 23:57
There is a notion of von Neumann regular rings. For commutative rings $A$, this notion has many equivalent definitions (for proofs see David F. Anderson, Zero-Dimensional Commutative Rings):
1) $A$ is zero-dimensional and reduced.
2) Every localization of $A$ is a field.
3) For every $x \in A$ we have $x^2 | x$.
4) For every $x \in A$ there is a unique $y \in A$ with $x = x^2 y$ and $y = y^2 x$; this $y$ is called the weak inverse of $x$.
So this is what you have called an inverse ring. The resulting variety is the smallest one containing all fields, since every reduced ring embeds into a product of fields.
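For a concrete illustration (an editorial example, not part of the original answer): in a product of fields $\prod_i K_i$ the weak inverse of $x=(x_i)$ is given componentwise by
$$y_i = \begin{cases} x_i^{-1} & \text{if } x_i \neq 0,\\ 0 & \text{if } x_i = 0,\end{cases}$$
which is exactly the multiplicative inverse operation $()^{-1}$ described in the question, since then $x = x^2y$ and $y = y^2x$ hold in every coordinate.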
-
Condition 4 in the non-commutative case is called strong regular. See my answer above. – Benjamin Steinberg Mar 23 2012 at 1:39
In a von Neumann regular ring, every element has at least one inverse, but the inverse elements are not unique, so the multiplicative inverse operation is not part of the definition. A strongly von Neumann regular ring is also an inverse ring (as I learned from the answer by Benjamin Steinberg), so one can define a multiplicative inverse operation there. That operation is important, because the resulting variety of algebras depends on the defined operations. ($\mathbb Z$ is a subalgebra of $\mathbb Q$ in the context of rings, but not in the context of inverse rings.) – Thomas Klimpel Mar 24 2012 at 22:08
Are you saying that 4) is not equivalent to being von Neumann regular? For commutative rings (and I only speak about commutative rings) I am pretty sure that the weak inverses are unique. $\mathbb{Z}$ is not von Neumann regular. – Martin Brandenburg Mar 25 2012 at 8:57
All I'm saying is that a multiplicative inverse operation is (implicitly) defined for a strongly von Neumann regular ring, but is not part of the definition of a von Neumann regular ring (and I tried to explain why). So defining "inverse rings" was easier than trying to work with "regular von Neumann rings". The wiki page tries to circumvent the issue by stating: "A commutative von Neumann regular ring $R$ is a subring of a product of fields closed under taking weak inverses". This is awkward, because of "...closed under...". Also, "subring of a product" is weaker than "subdirect product". – Thomas Klimpel Mar 25 2012 at 19:46
To clarify, what I'm trying to explain is that the statement: "So this is what you have called an inverse ring. The resulting variety is the smallest one containing all fields, since every reduced ring embeds into a product of fields." is incorrect in a subtle way. I introduced "inverse rings", because I needed a context where a multiplicative inverse operation is part of the definition of the algebra. In the context of rings, the variety of algebras containing all fields is larger, for example it contains $\mathbb Z$ (which I also tried to explain). – Thomas Klimpel Mar 25 2012 at 20:02
http://mathoverflow.net/questions/40828?sort=oldest | ## Obstruction for real subvariety to be embedded as complex subvariety
Let $X$ be a nonsingular complex projective variety. Suppose $X$ is embedded as a nonsingular real subvariety of complex projective space ${\mathbb{CP}}^n$.
When can we embed ${\mathbb{CP}}^n$ in some larger complex projective space ${\mathbb{CP}}^N$ such that the image of $X$ is now a nonsingular complex subvariety of this larger ${\mathbb{CP}}^N$?
-
Colin, do you mean to embed $\mathbb{CP}^n$ as a real submanifold? – Oleg Eroshkin Oct 2 2010 at 15:29
The fact that $\mathbb{CP}^n$ has a complex structure seems to be irrelevant for Colin's question. The question only uses the structure of real variety of $\mathbb{CP}^n$. – André Henriques Oct 2 2010 at 18:24
I mean embed both $X$ and ${\mathbb{CP}}^n$ as complex submanifolds of ${\mathbb{CP}}^N$. – Colin Tan Oct 3 2010 at 3:09
## 2 Answers
I am assuming from your comment that you are demanding that the embedding $X \to \mathbb{CP}^N$ preserves the complex structure that $X$ originally had (although it is still not completely clear). In this case, the real embedding $X \to \mathbb{CP}^n$ has to be a complex embedding. Otherwise, the actions of multiplication by $i$ on the tangent spaces of $X$ and $\mathbb{CP}^N$ will be incompatible.
For example, you can take $X = \mathbb{CP}^1$, with complex conjugation as the real embedding into $\mathbb{CP}^1$. This is a real-algebraic isomorphism, but orientation-reversing. Then, you will never have a compatible way to embed both varieties as complex subvarieties in $\mathbb{CP}^N$ by complex algebraic maps.
-
thanks scott. you answered my question (in this form well). this question still does not get at the situation i'm really considering. when i can think of how better to phrase my question I will ask it again on MO. – Colin Tan Oct 3 2010 at 6:38
Actually answer is no even when you allow real embedding of $\mathbb{CP}^n$. Consider any null-homotopic embedding of $X$ into $\mathbb{CP}^n$. Such embedding cannot be a pullback of a complex submanifold of $\mathbb{CP}^N$.
-
http://mathhelpforum.com/statistics/154389-solving-problem-involving-normal-distributions.html | Thread:
1. Solving problem involving normal distributions
Asiah found that the time taken to travel from her house to her office is normally distributed with a mean of 40 minutes and a variance of 64 minutes^2. If she leaves home at 7:30 am, find the probability that she will arrive after 8:15 am.
2. Transform it to the standard $Z$ distribution and look it up on your normal probability tables...
3. Originally Posted by mastermin346
Asiah found that the time taken to travel from her house to her office is normally distributed with a mean of 40 minutes and a variance of 64 minutes^2. If she leaves home at 7:30 am, find the probability that she will arrive after 8:15 am.
$\sigma^2=64\Rightarrow\sigma=+\sqrt{64}=8\ \text{minutes}$
You want the probability that the time taken is >45 minutes...
therefore, you require
$P\left(Z>\frac{x-\mu}{\sigma}\right)=P\left(Z>\frac{45-40}{8}\right)$
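For reference (an editorial completion using standard normal tables, not part of the original reply):
$$P\left(Z>\tfrac{5}{8}\right)=P(Z>0.625)\approx 1-0.734 = 0.266,$$
so the probability of arriving after 8:15 am is roughly 0.27.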
4. thanks archie meade
In standard sports of a school, all students took part in the 100m race. The time taken to finish the race by a Form 2 student is normally distributed with a mean of 15 seconds and a variance of 25 seconds^2. For a student who took less than 16 seconds, 1 mark was awarded and for one who took less than 14 seconds, 2 marks were awarded.
a)find the percentage of Form 2 students who got 1 mark each.
b)Find the number of Form 2 students who got 2 marks if 200 Form 2 students took part.
5. Originally Posted by mastermin346
thanks archie meade
In standard sports of a school, all students took part in the 100m race. The time taken to finish the race by a Form 2 student is normally distributed with a mean of 15 seconds and a variance of 25 seconds^2. For a student who took less than 16 seconds, 1 mark was awarded and for one who took less than 14 seconds, 2 marks were awarded.
a)find the percentage of Form 2 students who got 1 mark each.
b)Find the number of Form 2 students who got 2 marks if 200 Form 2 students took part.
(a)
Again, the question gives the variance, so the standard deviation is the square root of that...
$\sigma=+\sqrt{25}=5$
$\mu=15$
$Z=\displaystyle\frac{x-\mu}{\sigma}=\frac{16-15}{5}=0.2$
Therefore, you need to find
$P(Z<0.2)$
The percentage of students who got one mark is then 100 times the probability
of a student finishing the race in less than 16 seconds.
Try that and then maybe (b) is do-able.
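For reference (an editorial completion along the same lines, using standard normal tables):
$$\text{(a)}\ 100\,P(Z<0.2)\approx 57.9\%,\qquad \text{(b)}\ Z=\frac{14-15}{5}=-0.2,\quad 200\,P(Z<-0.2)\approx 200\times 0.4207\approx 84\ \text{students}.$$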
http://crypto.stackexchange.com/questions/3674/random-sequence-generator-function/3675 | # Random Sequence Generator function
By someone's suggestion, I am posting this question from math.stackexchange.com.
I want to find out a function or algorithm, whichever is suitable, which can provide me a random sequence. Like
Input: 3
Output: {1,2,3} or {1,3,2} or {2,1,3} or {2,3,1} or {3,1,2} or {3,2,1}
Same as if I will enter a number N, output will be a random permutation of the set {1,2,...N}
How can I write this type of algorithm? Actually I want to find out the logic behind it.
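One standard way to produce such a permutation directly is the Fisher–Yates shuffle (an editorial sketch, not from the thread; written in C# with names of my own choosing, and for cryptographic use the RNG would have to be a CSPRNG rather than `System.Random`):

```
using System;

class PermutationDemo
{
    // Returns a uniformly random permutation of {1, 2, ..., n},
    // assuming rng.Next is uniform (Fisher–Yates shuffle).
    static int[] RandomPermutation(int n, Random rng)
    {
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = i + 1;    // identity permutation

        for (int i = n - 1; i > 0; i--)
        {
            int j = rng.Next(i + 1);                 // uniform index in 0..i
            int tmp = a[i]; a[i] = a[j]; a[j] = tmp; // swap a[i] and a[j]
        }
        return a;
    }

    static void Main()
    {
        int[] perm = RandomPermutation(3, new Random());
        Console.WriteLine(string.Join(",", perm));   // e.g. 2,1,3
    }
}
```

Each of the $N!$ orderings is produced with equal probability, and the work is $O(N)$ time and memory, which is the limitation the block-cipher approach below is meant to avoid.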
-
No closed form function can describe a truly random permutation by definition. But if $N = 2^k$, then block ciphers come a significant fraction of the way. – Thomas Aug 29 '12 at 17:27
## 2 Answers
If for some reason the solution given by @poncho does not please you (e.g. you want $N$ to be on the magnitude of a few billions but you do not have a few gigabytes of RAM), then there are other solutions, in which you get the permutation as an evaluable procedure (in other words, a block cipher).
A practical solution is the Thorp shuffle. It is approximate, but the approximation can be made as good as needed by adding more rounds (except that, as a Feistel-derivative, it implements only even permutations, so if the attacker knows the output for $N-2$ inputs he can compute the last two outputs with 100% certainty). There is also a "perfect" solution but it involves some floating-point operations which needs potentially unbounded accuracy, so in practice it is very expensive.
-
Can't any even-permutations-only generating cipher be made "perfect" by a trivial postprocessing conditionally switching two outputs (with a 50% probability based on the key)? – maaartinus Aug 31 '12 at 0:23
@maaartinus: I tend to think so, but it would deserve some careful analysis. – Thomas Pornin Aug 31 '12 at 13:12
http://mathoverflow.net/questions/2795/why-are-characters-so-well-behaved/2819 | ## Why are characters so well-behaved?
Last year I attended a first course in the representation theory of finite groups, where everything was over C. I was struck, and somewhat puzzled, by the inexplicable perfection of characters as a tool for studying representations of a group; they classify modules up to isomorphism, the characters of irreducible modules form an orthonormal basis for the space of class functions, and other facts of that sort.
To me, characters seem an arbitrary object of study (why take the trace and not any other coefficient of the characteristic polynomial?), and I've never seen any intuition given for their development other than "we try this, and it works great". The proof of the orthogonality relations is elementary but, to me, casts no light; I do get the nice behaviour of characters with respect to direct sums and tensor products, and I understand its desirability, but that's clearly not all that's going on here. So, my question: is there a high-level reason why group characters are as magic as they are, or is it all just coincidence? Or am I simply unable to see how well-motivated the proof of orthogonality is?
-
There are two issues conflated here: why are representations of finite groups over $\mathbb C$ so nice, namely, a semisimple category, and then why can one study that representation ring so nicely with characters. Look into how ${\mathbb Z}/p$ acts on vector spaces over ${\mathbb F}_p$, using Sylow's theorem applied to $GL_n({\mathbb F}_p)$, and you'll begin to see that you're unfairly crediting characters with the nice behavior of the representations. – Allen Knutson Jun 29 2011 at 16:48
## 10 Answers
Orthogonality makes sense without character theory. There's an inner product on the space of representations given by $\dim \operatorname{Hom}(V, W)$. By Schur's lemma the irreps are an orthonormal basis. This is "character orthogonality" but without the characters.
How to recover the usual version from this conceptual version? Notice $\dim \operatorname{Hom}(V,W) = \dim \operatorname{Hom}(V \otimes W^*, 1)$ where $1$ is the trivial representation. So in order to make the theory more concrete you want to know how to pick off the trivial part of a representation. This is just given by the image of the projection $\frac{1}{\#G} \sum_g g$. The dimension of a space is the same as the trace of the projection onto that space, so
$$\dim \operatorname{Hom}(V \otimes W^*, 1) = \operatorname{tr}\Big(\frac{1}{\#G} \sum_g \rho_{V \otimes W^*}(g)\Big) = \frac{1}{\#G} \sum_g \chi_V(g)\, \chi_W(g^{-1}),$$
using the properties of trace under tensor product and duals.
-
The trace is about the strongest general way we have to linearly project a non-abelian situation (matrices) to an abelian situation (scalars): tr(AB)=tr(BA). By using the trace, the representation theory of non-abelian groups begins to resemble the representation theory of abelian groups, i.e. Fourier analysis. (Note though that the correspondence becomes less tight when considering triple products: tr(ABC) != tr(CBA). For related (though not identical) reasons, the theory of tensor products of representations is far richer in the nonabelian world (Littlewood-Richardson coefficients, etc.) than it is in the abelian world (convolution), and characters aren't always the best way to proceed here.)
This of course raises the question of why Fourier analysis is so miraculous, but I tend to take that as axiomatic. :-)
-
As to "why take the trace and not any other coefficient of the characteristic polynomial", note that for completely elementary reasons the trace of the whole representation still knows the characteristic polynomial of each individual element: for instance the second-from-top coefficient of the characteristic polynomial of rho(g) is 1/2(tr(rho(g))^2 - tr(rho(g^2))). Writing down the formula for subsequent coefficients is an exercise with symmetric functions. On the other hand, the higher coefficients of the characteristic polynomial do lose information -- e.g. non-isomorphic representations rather often have the same determinant.
-
Frobenius's analysis of the group determinant for a finite group is just another way of encoding $\det \sum f(g)\pi(g)$ in a uniform way for all functions $f : G \to \mathbb C$. Do you know if this is enough information to determine $\pi$ up to equivalence? – L Spice Apr 1 2010 at 2:06
Maybe it's silly to add this as a separate answer, since Noah basically said what I would have, but he didn't emphasize the one-sentence version of the answer: "the dimension of a subspace is the trace of the projection to that subspace."
That and linearity are what's special about trace.
-
Here is another answer: the representation ring of G and the center of k[G] are both commutative algebras. When your ground field is C, these are both isomorphic to each other, and are both isomorphic to C^{r}. The rows and columns of the character table give this isomorphism explicitly.
When doing representation theory in other contexts, most statements about characters turn into either statements about the representation ring or statements about the center of k[G]. But the lack of such an explicit presentation of the ring means that you have to work harder.
-
The representation ring as an abelian group is $\mathbb{Z}^r$, where $r$ is the number of conjugacy classes; I don't see how it could be isomorphic to $C^r$ where $C$ is a field. And btw tensoring $R(G)$ with a field loses a lot of information. – Pierre Jun 29 2011 at 19:25
A bit unrelated, but nonetheless interesting:
A finite von Neumann algebra is a von Neumann algebra together with a trace, i.e., a linear functional which is
• positive ($tr(a^*a)\geq 0$ for all $a$)
• faithful ($tr(a^*a)=0$ implies $a=0$)
• normalized ($tr(1)=1$)
• tracial ($tr(ab)=tr(ba)$)
This trace does almost everything for a finite von Neumann algebra. It gives us the standard representation of $M$ on $L^2(M)$, the predual $L^1(M)$, and the noncommutative $L^p$-spaces. In a finite factor (trivial center), it gives us a total ordering on projections (for a $II_1$-factor, it gives a "continuous" version of what Ben is saying). If we have an inclusion of finite von Neumann algebras $N\subset M$, we get a canonical conditional expectation $E\colon M\to N$, which is one of the basic tools. If they are subfactors, if we get a trace on $M_1$, we can iterate the basic construction (nicely that is). The list goes on...
So what does this all have to do with groups? Well von Neumann algebras are exactly the commutants of (unitary) group representations. Moreover, from a unitary group representation, we can form its left regular von Neumann algebra $L(G)$, which is the easiest way to construct an example of a $II_1$-factor (the group must have all infinite conjugacy classes other than that of the identity). Moreover, the study of $II_1$-subfactors generalizes the study of representation theory, including induction-restriction, etc. (see Kate's question).
Basically, every time I see an algebra, I wonder if it has a trace. So from my point of view, or with this hindsight I should say, it's no surprise to me that characters do everything!
-
For what it's worth, I had exactly the same question as you and worked out the proof that Noah sketches in some detail here, although I don't use the word "projection" explicitly.
-
More a question than an answer, but anyway: It's plain that if one wants to study representations up to equivalence, one would like to attach to every matrix an invariant under conjugation. One could then look for polynomial functions on End(V) or GL(V) (V a finite dimensional vector space), which display such an invariance. We are really looking for the fixed orbits of the linear action of GL(V) on P(End(V)) (which, incidentally, is itself an infinite dimensional representation of an algebraic group). In fact this is a subalgebra of P(End(V)) and maybe one can determine a set of generators. David Savitt's observation could lead to the question: does tr(A^n) suffice to generate our algebra?
-
Ok, the answer should be yes and it has been proved something analogous even for several matrices: kryakin.com/files/Invent_mat_%282_8%29/110/…. – Gian Maria Dall'Ara Oct 27 2009 at 15:36
To add to D. Savitt's comment: Given $g \in G$ ($G$ a finite group), the element $\rho(g) \in GL(V)$ is diagonalizable over the complex numbers since its minimal polynomial is separable. More specifically, it is a divisor of $x^n - 1$ for some $n$ -- once you have the minimal polynomial, you can compute the projection operators to the various eigenspaces as polynomials in $\rho(g)$, but generally this will require complex coefficients and working in a field which is not algebraically complete will only allow you to project to the kernel of irreducible factors.
Now, a diagonal matrix is determined up to conjugacy by its eigenvalues. And as was pointed out $\mbox{tr } \rho(g^k)$ gives the sum of $k$th powers of the eigenvalues. From these numbers, one can actually recover the set of eigenvalues of the matrix. You can prove this with symmetric functions, but you can also view the spectrum as a measure, and view $\mbox{tr } \rho(g^k)$ as the k'th (complex) moment / Fourier coefficient of the measure. These numbers, together with their complex conjugates give all the Fourier coefficients. They are periodic, so you do not need all of them (you could apply Stone Weierstrass, but that would be very wasteful). We already know the finitely many (say, $n$) candidate eigenvalues from the fact that they satisfy $z^n - 1 = 0$, so we can easily design a polynomial which vanishes on all but one of the eigenvalues and hence determine the multiplicity of each eigenvalue. Namely: $f(z) = \prod_{i \neq j} (z - \lambda_i)$. Then $\mbox{tr } f( \rho(g) )$ (the trace of the projection operator up to a constant) tells you the multiplicity of $\lambda_j$. (Is this exactly the proof through symmetric functions? If not, what is the relationship?)
Inspecting the above argument, it does not seem like the complex numbers play too large a role. But even for a more general unitary representation, the eigenvalues live on a circle and we can view them as a measure $\mu$. We can still obtain $\int z^k d\mu$ for all integers $k$. By Stone-Weierstrass or Fourier inversion, we get the whole spectrum in this way.
This is the extent to which I've thought about the above things so I'd be really happy to know if anyone can say more about it or give more points of view. For example, what very critical type of information do you miss out on by only considering one cyclic subgroup at a time? What about other unitary representations, possibly infinite groups or possibly infinite dimensional? (Should I start a separate thread?)
-
There are lots of good points in other answers, so I want to add one specific thing about why representations are uniquely defined by their characters.
Irreducible representations are uniquely determined by highest weight, therefore all representations can be described by functions `highest weight -> ZZ`. Now going from highest weight to the character and back is an invertible transformation on the space of such functions (moreover, the matrix of this transformation is uppertriangular).
This construction can be encountered in several generalizations, e.g. perverse sheaves in geometric representation theory, so it appears to be a fundamental representation-theoretical fact.
-
It is important to realise that you are talking here specifically about representations of compact or (which is much the same) complex Lie groups. For other groups, the representation theory can be much more subtle. – L Spice Mar 8 2010 at 0:44
http://math.stackexchange.com/questions/148836/uniformly-placed-copies-of-ell-1n | # Uniformly placed copies of $\ell_1^n$.
Let $\ell_1^n$ denote $\mathbb{C}^n$ endowed with the $\ell_1$-norm. Is it possible that a reflexive space contains isometric copies of all $\ell_1^n$s complemented by projections with norm bounded by some positive $\delta>0$? It seems unlikely but I don't know how to disprove it.
EDIT: Since the question has a trivial positive solution, I'd like to ask if the situation described above is possible in $\ell_p$-spaces for $p\in (1,\infty)$.
-
Why not take $X=\bigoplus_{n=1}^\infty \ell_1^n$ where the direct sum is given the $\ell_2$ norm? The dual is $X^*=\bigoplus_{n=1}^\infty \ell_{\infty}^n$, and the bidual is $X$ again. The projections are of norm 1. Am I missing something? – user31373 May 23 '12 at 16:26
Right, I was mistaken. – MarkNeuer May 23 '12 at 16:28
I don't quite understand the edit. Did you mean "$\mathcal L^p$-spaces"? – user31373 May 23 '12 at 16:34
The usual spaces $\ell_p$. – MarkNeuer May 23 '12 at 16:56
But those spaces are strictly convex, and so cannot contain an isometric copy of $\ell_1^2$. – user31373 May 23 '12 at 16:58
http://quant.stackexchange.com/questions/tagged/valuation+risk-neutral-measure | # Tagged Questions
### Pricing a Power Contract derivative security
I'm trying to price a "power contract" and would appreciate guidance on the next step. The payoff at time $T$ is $(S(T)/K)^\alpha$, where $K > 0$, $\alpha \in \mathbb{N}$, $T > 0$. $S$ is ...
http://mathhelpforum.com/trigonometry/194554-how-solve-equation.html | # Thread:
1. ## How to solve an equation
The equation is:
$$\cot X = \frac{\left(\sqrt{3}\cos 50^\circ + \sin 50^\circ\right)\left(1 - 2\sqrt{3}\cos 50^\circ + 2\sin 50^\circ\right)}{\sqrt{3}\sin 50^\circ - \cos 50^\circ}$$
Using a calculator, I find X = 50º.
How to develop the equation to find the solution?
Thanks.
2. ## Re: How to solve an equation
What do you mean by "develop" the equation? The right side is simply a number. Find that number and take its inverse cotangent.
3. ## Re: How to solve an equation
Doing what you propose with the calculator, we arrive at 50 degrees.
So, if X = 50, it would be possible to "develop" or simplify the trigonometric expression on the right-hand side of the equation to arrive at cotg(50).
Don't you think so, considering that in the expression we have only trigonometric functions of the same angle?
4. ## Re: How to solve an equation
Originally Posted by TOZE
Don't you think so, considering that in the expression we have only trigonometric functions of the same angle?
Consider any angle $\alpha$ instead of $50^\circ$ and expand $\cot \alpha$ in the following way:
$$\cot \alpha =\cot \left[30^\circ+(\alpha -30^\circ)\right]=\frac{\cot 30^\circ\cdot \cot (\alpha -30^\circ)-1}{\cot 30^\circ+\cot (\alpha -30^\circ)}=\frac{\sqrt{3}\cdot \dfrac{\cos (\alpha-30^\circ)}{\sin (\alpha-30^\circ)}-1}{\sqrt{3}+\dfrac{\cos (\alpha-30^\circ)}{\sin (\alpha-30^\circ)}}=\ldots$$
Let's see if you get the right side of the equation (with $\alpha$ instead of $50^\circ$).
P.S. I haven't checked it, it is only a proposal.
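An editorial check (not from the thread) that the right-hand side does reduce to $\cot 50^\circ$, using only the auxiliary-angle identities:
$$\sqrt{3}\cos 50^\circ+\sin 50^\circ = 2\cos 20^\circ,\qquad \sqrt{3}\sin 50^\circ-\cos 50^\circ = 2\sin 20^\circ,$$
$$1-2\sqrt{3}\cos 50^\circ+2\sin 50^\circ = 1-2\left(\sqrt{3}\cos 50^\circ-\sin 50^\circ\right) = 1-4\cos 80^\circ = 1-4\sin 10^\circ.$$
Hence the right-hand side equals
$$\frac{2\cos 20^\circ\,(1-4\sin 10^\circ)}{2\sin 20^\circ} = \frac{2\sin 10^\circ\,(1-\sin 10^\circ)}{2\sin 10^\circ\cos 10^\circ} = \frac{1-\sin 10^\circ}{\cos 10^\circ},$$
using $4\sin 10^\circ\cos 20^\circ = 1-2\sin 10^\circ$ and $\cos 20^\circ = 1-2\sin^2 10^\circ$ in the numerator and $\sin 20^\circ = 2\sin 10^\circ\cos 10^\circ$ in the denominator. Finally, $\frac{1-\sin 10^\circ}{\cos 10^\circ} = \cot 50^\circ$, since cross-multiplying reduces this to $\sin 50^\circ = \cos 10^\circ\cos 50^\circ + \sin 10^\circ\sin 50^\circ = \cos 40^\circ$, which holds.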
http://math.stackexchange.com/questions/159909/how-equal-distribute-scale-of-graph-for-min-and-max-value-of-scale | # How equal distribute Scale of graph for Min and Max value of Scale??
I have the min and max values of the scale and I want to divide the scale into N partitions.
Let's say I have `Min = 0 and Max = 200`.
I have tried to get the interval that gives an equal distance between two points as follows:
````void Main()
{
    double s = 012345678;         // start (min) of the scale
    double e = 123456789;         // end (max) of the scale
    double fx = (e - s) / 10;     // width of each of the 10 intervals
    double index = s;
    while (index <= e)            // print the division points s, s+fx, ..., e
    {
        Console.WriteLine(index);
        index += fx;              // note: floating-point rounding may skip the final point e
    }
}
````
This does the trick and somewhat divides it the way I want, but it is not a solid solution. Sometimes the Min and Max values are very large, e.g.:
````Max: 564094825.8374691 Min: 544792373.69823289
Interval = (Max/ Min)/20;
````
Now it is causing problems.
The resulting distribution of the scale should be as follows:
````Let xMin= 0 and xMax = 10, I want this scale to divide in 5 intervals
Then it should return
0,2,4,6,8,10
````
Min and Max values could also be negative, for example when drawing a graph with an x-scale from `-5 to 5` or from `-10 to -5`.
I went through this: Take set of values and change scale. Is it somewhat relevant to my problem?
Please suggest a way to achieve this.
Thanks in advance.
## 2 Answers
If your Min is $m$ and your Max is $M$ and you want to divide the range between them into $N$ equal pieces, then let $h=(M-m)/N$ and let your division points be $$m,m+h,m+2h,m+3h,\dots,m+(Nh)=M$$
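In code, that formula could look as follows; this is just an illustrative MATLAB sketch using the example values from the question (the same arithmetic carries over directly to C#):

````
m = 0; M = 10; N = 5;         % example from the question: 5 intervals
h = (M - m) / N;              % width of one piece
points = m + (0:N) * h;       % m, m+h, m+2h, ..., M
disp(points)                  % prints 0 2 4 6 8 10
% built-in equivalent: points = linspace(m, M, N + 1);
````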
+1 thanks for your answer, i will check and update you whether it solved my problem or not.. thanks again.. – Niranjan Kala Jun 19 '12 at 5:21
well it is same as i am doing, but little difference. thanks it works. – Niranjan Kala Jun 19 '12 at 5:48
Suppose that the minimum possible value is $a$, and the maximum possible value is $b$. We want to map the interval $[a,b]$ to the interval $[0,200]$.
There are a great many possibilities. One of the simplest is a linear mapping, where $x$ is mapped to $f(x)$, with $f(x)=px+q$.
We want $f(a)=0$, so $pa+q=0$. We want $f(b)=200$, so $pb+q=200$.
Now solve the system of equations $pa+q=0$, $\,pb+q=200$ for $p$ and $q$. We get $$p=\frac{200}{b-a},\qquad q=-\frac{200a}{b-a},$$ and therefore $$f(x)=\frac{200}{b-a}\left(x-a\right).$$ You may want to modify this so that the scaled values are integers. If so, then a sensible modification is to use, instead of $f(x)$, the function $g(x)$, where $g(x)$ is the nearest integer to $f(x)$.
Added: If you want to have only $N$ values, equally spaced with bottom at $0$ and top at $200$, then $N-1$ should divide $200$. Let $D=\frac{200}{N-1}$. Take the integer nearest to $\frac{f(x)}{D}$, and multiply the result by $D$. For example, if you want $26$ different values $0, 8, 16, 24, \dots, 200$, then $N=26$, so $N-1=25$ and $D=8$. We find the nearest integer to $\frac{f(x)}{8}$ and multiply the result by $8$.
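For instance, here is a small MATLAB sketch of this rounding scheme (the numeric values below are made up for illustration; $a$, $b$, $N$ and $D$ are as defined above):

````
a = 5;  b = 15;  N = 26;      % made-up range and number of levels
D = 200 / (N - 1);            % spacing between the N allowed values
x = 12.3;                     % value to rescale
f = 200 / (b - a) * (x - a);  % linear map of [a, b] onto [0, 200]
g = round(f / D) * D;         % nearest of the N equally spaced values
````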
Remark: One important real world example is when you want to convert the temperature $x$, in degrees Fahrenheit, to temperature in degrees Celsius. In the Fahrenheit scale, the freezing point of water is $32$ degrees, and the boiling point is $212$. In degrees Celsius it is $0$ and $100$. So we want to transform the interval $[32,212]$ to the interval $[0,100]$. The relevant $f(x)$ is equal to $\frac{100}{212-32}(x-32)$, which simplifies to $f(x)=\frac{5}{9}(x-32)$.
Thanks @Andre for your nice explanation. please review the updated question modification and suggest me about that.. i want equal intervals between min and max values. i know only min and max value, not mapping some external scale. i am not doing exact that done in link specified in my question. – Niranjan Kala Jun 19 '12 at 5:03
Min and max look like all you need. If you only want $N$ equally spaced values, like $N=11$ (we want $N-1$ to divide $200$), the final rounding should be a little different from the one I described. I will add something about that to the post. – André Nicolas Jun 19 '12 at 5:16
+1:thanks @Andre for reply. i will check and wait for your edits.. – Niranjan Kala Jun 19 '12 at 5:23
@NiranjanKala: The first time I wrote out the modification, I made a mistake. It is now right. – André Nicolas Jun 19 '12 at 5:29
well, it's alright i will try to check @Gerry answer and your will also help others.. have a nice time.. – Niranjan Kala Jun 19 '12 at 5:35
http://blog.smaga.ch/ | # CFA Level 2: Quantitative Methods – Autoregressive Processes
Hello again everybody,
We're getting towards the final stretch before the exam, and I will post here the content of all the little flash cards that I created for myself.
Starting back where I left off in the Quantitative Methods section, this post will be about Autoregressive Processes, sometimes denoted AR.
These processes are of the following form:
$$x_{t+1} = b_0 + b_1 x_t$$
Where $x_t$ is the value of process $X$ at time $t$, and $b_0$ and $b_1$ are the parameters we are trying to estimate.
To estimate the parameters $b_0$ and $b_1$, we proceed as follows:
1. Estimate the parameters using linear regression
2. Calculate the auto-correlations of the residuals
3. Test whether these auto-correlations are significant:
Note that we cannot use the Durbin-Watson test we used previously in this section of the CFA curriculum; we will be using a t-test that works this way:
$$t = \frac{\rho_{\epsilon_t, \epsilon_{t+k}}}{\frac{1}{\sqrt{T}}}=\rho_{\epsilon_t, \epsilon_{t+k}} \sqrt{T}$$
Where $\epsilon_t$ is the residual term of the regression at time $t$, and $T$ is the number of observations. The t-statistic has $T-2$ degrees of freedom. If the auto-correlations are statistically significant, then we cannot continue our analysis, for reasons I'll explain a bit later in the post.
With AR processes, you are actually trying to predict the next values of a given process using a linear relationship between successive values and by applying simple linear regression. The thing is, if you want to be able to trust your estimated $b_0$ and $b_1$ parameters, you need the process to be covariance-stationary.
Now, a bit of math. If a process has a finite mean-reverting level, then it is covariance-stationary. What is the mean-reverting level? Well, it is simply the value $x_t$ at which $x_{t+1}=x_t$. So, let's write this in an equation:
$$x_{t+1} = x_t = b_0 + b_1 x_t$$
$$(1-b_1) x_t = b_0 \iff x_t=\frac{b_0}{1-b_1}$$
So, $X$ has a finite mean-reverting level if $b_1 \neq 1$ (strictly speaking, covariance stationarity of an AR(1) process requires $|b_1| < 1$).
The test for auto-correlations we did in point 3 indicates that the process is covariance-stationary if the auto-correlations are not statistically significant.
What if the process X is not covariance-stationary? Well you create a new process $Y$ where:
$$y_t = x_t - x_{t-1}$$
So, you have a new model
$$y_t = b_0 + b_1 y_{t-1} + \epsilon_t$$
which models the next change in the process X; this differenced process is then covariance-stationary and you can use it for the analysis.
This little “trick” is called first differencing.
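To make these steps concrete, here is a small MATLAB sketch of the whole procedure; the simulated series, the sample size and testing only the lag-1 autocorrelation are my own choices for the example, not something imposed by the curriculum:

```
% Sketch: estimate an AR(1) model and test the lag-1 residual autocorrelation
T = 200;
x = zeros(T,1);
for t = 2:T
    x(t) = 1 + 0.7*x(t-1) + randn;   % assumed data-generating process
end

% 1) Estimate b0 and b1 by linear regression of x(t) on x(t-1)
X   = [ones(T-1,1) x(1:end-1)];
y   = x(2:end);
b   = X \ y;                         % least-squares estimates [b0; b1]
res = y - X*b;                       % residuals

% 2) Lag-1 autocorrelation of the residuals
C    = corrcoef(res(1:end-1), res(2:end));
rho1 = C(1,2);

% 3) t-test: t = rho * sqrt(T), with T-2 degrees of freedom
t_stat = rho1 * sqrt(length(res));

mrl = b(1) / (1 - b(2));             % mean-reverting level b0/(1-b1)
fprintf('b0=%.2f b1=%.2f rho1=%.3f t=%.2f MRL=%.2f\n', b(1), b(2), rho1, t_stat, mrl);
```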
That’s it, stay tuned for more soon!
# CFA Level II: Quantitative Methods, ANOVA Table
Good evening everyone,
Following my last post on multiple regression, I would like to talk about ANOVA tables as they are a very important part of the Level II curriculum on quantitative methods. ANOVA stands for ANAlysis Of VAriance; it helps to understand how well a model does at explaining the dependent variable.
First of all, recall that $Y=\{Y_i\},\ i=1,\dots,n$ denotes the observed values of the dependent variable and that $\hat{Y}=\{\hat{Y}_i\},\ i=1,\dots,n$ are the values estimated by the model. We define the following values:
Total Sum of Squares (SST) :
$$\text{SST}=\sum_{i=1}^n (Y_i - \bar{Y})^2$$
This is the total variation of the process $Y$, i.e. the sum of the squared deviations of $Y_i$ from the mean of the process, denoted $\bar{Y}$. With the regression, this total variation is what we are trying to reproduce.
Regression Sum of Squares (RSS):
$$\text{RSS}=\sum_{i=1}^n (\hat{Y}_i - \bar{Y})^2$$
This is the variation explained by the regression model. If the model fitted the dependent variable perfectly, we would have $\text{RSS}=\text{SST}$.
Sum of Squared Errors (SSE):
$$\text{SSE}=\sum_{i=1}^n (Y_i - \hat{Y}_i )^2=\sum_{i=1}^n \epsilon_i ^2$$
Finally, this is the unexplained variation: the sum of the squared differences between the observed values of the process $Y_i$ and the values estimated by the model $\hat{Y}_i$.
As expected, the total variation is equal to the sum of the explained variation and the unexplained variation:
$$\text{SST}=\text{RSS} + \text{SSE}$$
Note that the CFA does not require candidates to be able to compute these values (it would take too long), but I thought that having the definitions helps in understanding the concepts.
From these values, we can get the first important statistic we want to look at when discussing the quality of a regression model:
$$\text{R}^2=\frac{\text{RSS}}{\text{SST}}$$
The $\text{R}^2$ measures the part of the total variation that is being explained by the regression model. Its value is bounded from 0 to 1, and the closer it gets to 1 the better the model fits the real data.
Now we also want to compute the average of $\text{RSS}$ and $\text{SSE}$ (the mean sum of squares):
$$\text{MSR} = \frac{\text{RSS}}{k}$$
$$\text{MSE} = \frac{\text{SSE}}{n-k-1}$$
where $n$ is the size of the sample and $k$ is the number of independent variables used in the model. These values are "intermediary" computations and are used for several other statistics. First, we can compute the standard error of the error terms $\epsilon$ (SEE):
$$\text{SEE}=\sqrt{\text{MSE}}$$
Note that this is just the classic computation of a standard deviation, here with $n-k-1$ degrees of freedom since $k+1$ parameters are estimated. If the model fits the data well, then $\text{SEE}$ will be close to 0 (its lower bound).
Now, there is an important test in regression analysis which is based on the F-statistic. Basically, this test has the null hypothesis that all the slope coefficients of the regression are statistically insignificant: $H_0 : b_i=0 \ \forall i \in \{1,\dots,k\}$. It is computed as follows:
$$\text{F}=\frac{\text{MSR}}{\text{MSE}}$$
$\text{F}$ is a random variable that follows an F-distribution with $k$ degrees of freedom in the numerator and $n-k-1$ degrees of freedom in the denominator. The critical value of the variable can be found in the F-distribution table provided with the CFA exam. It is very important to understand that if you reject $H_0$, you say that at least one of the coefficients is statistically significant. This, by no means, implies that all of them are!
To sum up, you can look at the following table, known as the ANOVA table:

| Source of variation | Degrees of Freedom | Sum of Squares | Mean Sum of Squares |
|---------------------|--------------------|----------------|---------------------|
| Regression (explained) | k | RSS | MSR |
| Error (unexplained) | n-k-1 | SSE | MSE |
| Total | n-1 | SST | |
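As a quick illustration, here is a small MATLAB sketch that computes all of these quantities on simulated data (the data-generating process and the choice of $k=2$ independent variables are assumptions for the example):

```
% Sketch: ANOVA quantities for a regression with k = 2 independent variables
n = 100;  k = 2;
X1 = randn(n,1);  X2 = randn(n,1);
Y  = 1 + 2*X1 - 0.5*X2 + randn(n,1);   % assumed data-generating process

X    = [ones(n,1) X1 X2];
b    = X \ Y;                          % regression coefficients
Yhat = X * b;                          % fitted values

SST = sum((Y    - mean(Y)).^2);        % total variation
RSS = sum((Yhat - mean(Y)).^2);        % explained variation
SSE = sum((Y    - Yhat   ).^2);        % unexplained variation

R2  = RSS / SST;
MSR = RSS / k;
MSE = SSE / (n - k - 1);
SEE = sqrt(MSE);
F   = MSR / MSE;                       % F-statistic with (k, n-k-1) df
fprintf('R2=%.3f SEE=%.3f F=%.2f\n', R2, SEE, F);
```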
That's it for tonight, I hope you enjoyed the ride!
# CFA Level II: Quantitative Methods, Multiple Regression
Following my first post about the Level II and specifically correlation, I am now moving on to the main topic of this year's curriculum: multiple regression. Before I get started, I want to mention that the program talks about regression in 2 steps: it starts by discussing the method with 1 independent variable and then with multiple variables. I will only talk about the multiple-variable version, because it is the more general one. If you understand the general framework, you can answer any question about the specific 1-variable case.
Multiple Regression is all about describing a dependent variable $Y$ with a linear combination of $K$ variables $X_k$, $k = 1,\dots,K$. This is expressed mathematically as follows:
$$Y_i = b_0 + b_1 X_{1,i} + \dots + b_K X_{K,i} + \epsilon_i$$
Basically, you are trying to estimate the variable $Y_i$ with the values of the different $X_{k,i}$. The regression process consists in estimating the parameters $b_0, b_1, \dots, b_K$ with an optimization method over a sample of size $n$ (there are $n$ known values for each of the independent variables $X_k$ and for the dependent variable $Y$, represented by the $i$ index). When all $X_k=0$, $Y$ has a default value $b_0$ (called the intercept). The error term $\epsilon_i$ is there because the model will not be able to determine $Y_i$ exactly; there is hence a residual part of the value of $Y$ which is unexplained by the model, and which is assumed to be normally distributed with mean 0 and constant variance $\forall i$.
So to sum up, the inputs are:
• $n$ known values of $Y$
• $n$ known values of each $X_i$
and the outputs are:
• $b_0, b_1, … , b_k$
So, say you have a set of new values for each $X_k$, you can estimate the value for Y, denoted $\hat{Y}$ by doing:
$$\hat{Y} = b_0 + b_1 X_1 + \dots + b_K X_K$$
The most important part of this section is the enumeration of its underlying assumptions:
1. There is a linear relationship between the independent variable $X_k$ and the dependent variable $Y$.
2. The independent variables $X_k$ are not random and are not correlated with each other.
3. The expected value of $\epsilon_i$ is 0 for all $i$.
4. The variance of $\epsilon_i$ is constant for all $i$.
5. The variable $\epsilon$ is normally distributed.
6. The error terms $\epsilon_1, …, \epsilon_n$ are not correlated with each other.
If one of these assumptions is not verified for the sample being analyzed, then the model is misspecified, and we will see in a subsequent post how to detect this problem and how to handle it. Note that point 2 only mentions collinearity between the independent variables; there is no problem if $Y$ is correlated with one of the $X_k$, that is exactly what we are looking for.
Now, remember in my first post on the Quant Methods I said that we would have to compute the statistical significance of estimated parameters. This is exactly what we are going to do now. The thing is, the output parameters of a regression (the coefficients $b_0, b_1, \dots, b_K$) are only statistical estimates. As a matter of fact, there is uncertainty about this estimation. Therefore, the regression algorithm usually outputs the standard errors $s_{b_k}$ for each parameter $b_k$. This allows us to create a statistical test to determine whether the estimate $\hat{b}_k$ is statistically different from some hypothesized value $b_k$ with a level of confidence of $1-\alpha$. The null hypothesis $H_0$, which we want to reject, is that $\hat{b}_k=b_k$. The test goes as follows:
$$t=\frac{\hat{b}_k - b_k}{s_{b_k}}$$
If the null hypothesis $H_0$ is verified, the variable $t$ follows a t-distribution with $n-K-1$ degrees of freedom. So, you can simply look up the critical value $c_\alpha$ in the t-distribution table for the desired level of confidence. If $t>c_\alpha$ or if $t<-c_\alpha$, then $H_0$ can be rejected and we can conclude that $\hat{b}_k$ is statistically different from $b_k$.
Usually, we are asked to determine whether some estimate $\hat{b}_k$ is statistically significant. As explained in the previous post, this means that we want to test the null hypothesis $H_0: b_k = 0$. So, you can just run the same test as before with $b_k=0$:
$$t=\frac{\hat{b}_k}{s_{b_k}}$$
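Here is a small MATLAB sketch of this test; the simulated data and the choice of two independent variables are assumptions made only for the illustration:

```
% Sketch: multiple regression and t-statistics for H0: b_k = 0
n = 100;  K = 2;
X1 = randn(n,1);  X2 = randn(n,1);
Y  = 0.5 + 1.5*X1 + 0*X2 + randn(n,1);   % X2 is irrelevant by construction

X     = [ones(n,1) X1 X2];
b_hat = X \ Y;                            % estimated coefficients
res   = Y - X*b_hat;                      % residuals
s2    = sum(res.^2) / (n - K - 1);        % residual variance (MSE)
s_b   = sqrt(diag(s2 * inv(X'*X)));       % standard errors of the coefficients

t = b_hat ./ s_b;                         % t-stats, n-K-1 degrees of freedom
disp([b_hat s_b t])                       % the row for X2 should be insignificant
```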
That’s it for today. The concepts presented here are essential to succeed the Quantitative Method part of the CFA Level II exam. They are nonetheless quite easy to grasp and the formulas are very simple. Next, we will look at the method to analyze how well a regression model does at explaining the dependent variable.
# CFA Level II: Economics, Exchange Rates Basics
Good evening everyone,
My weekly task is to go through the Economics part of the Level II curriculum. I was a bit afraid of it, because it was clearly my weak point at Level I and because I think that this topic covers a lot of material compared to its allocated number of questions.
In this level, the first challenge is to take the bid-ask spread for currency exchange rates into account. Just as we saw for security markets at Level I, exchange rates do not trade at a single "value". That is, you cannot buy and sell a currency at the same price instantaneously. This is because you need to go through a dealer who has to make money for providing liquidity: this economic gain is provided by the bid-ask spread.
Let's go back to the basics by looking at the exchange rate $\frac{CHF}{EUR}$. The currency in the denominator is the base currency; it is the asset being traded. The currency in the numerator is the price currency; it is the currency used to price the underlying asset, which is in this case another currency. This is exactly as if you were trading a stock $S$. The price in CHF could be seen as the $\frac{CHF}{S}$ "exchange rate", i.e. the number of CHF being offered for one unit of $S$. Now, as mentioned before, exchange rates are quoted with bid and ask prices:
$$\frac{CHF}{EUR} = 1.21 \quad – \quad 1.22$$
This means that $\frac{CHF}{EUR}_\text{bid}$ is 1.21 and $\frac{CHF}{EUR}_\text{ask}$ is 1.22. Again, you are trading the base currency: here Euros.
• The bid price is the highest price at which you can sell it to the dealer.
• The ask price is the lowest price at which you can buy it from the dealer.
If you want to make sure you got it right, just check that you can't instantaneously buy the base currency at a given price (which you believe to be the ask) and sell it at a higher price (which you believe to be the bid). In this example, you can buy a EUR for 1.22 CHF and sell it instantaneously for 1.21 CHF, making a loss of 1.21 - 1.22 = -0.01 CHF. In fact, the loss can be seen as the price of liquidity, which is the service provided by the dealer, for which he has to be compensated. So the lower value is the bid, the higher value is the ask (also called the offer).
Recall from Level I that you could convert exchange rates by doing:
$$\frac{CHF}{EUR} = \frac{1}{\frac{EUR}{CHF}}$$
This is simple algebra and it works fine as long as you don’t have the bid-ask spread to take into account. The problem is that at Level II, you do. To invert the exchange rate with this higher level of complexity, you have to learn the following formula:
$$\frac{EUR}{CHF}_\text{bid} = \frac{1}{\frac{CHF}{EUR}_\text{ask}}$$
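Before the memory trick below, here is a quick numerical sanity check of this rule in MATLAB (the quotes are made up for the example):

```
% Sketch: invert a two-sided CHF/EUR quote into a EUR/CHF quote
chf_eur_bid = 1.21;  chf_eur_ask = 1.22;   % assumed example quotes

eur_chf_bid = 1 / chf_eur_ask;             % EUR/CHF bid = 1 / (CHF/EUR ask)
eur_chf_ask = 1 / chf_eur_bid;             % EUR/CHF ask = 1 / (CHF/EUR bid)

fprintf('EUR/CHF: %.4f - %.4f\n', eur_chf_bid, eur_chf_ask);
% The bid (about 0.8197) stays below the ask (about 0.8264), as it must.
```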
This inversion rule might look complicated at first, but I've got something in my bag to help you learn it. Just do the following steps:
• Define what you want on the left-hand side of the equation (currency in the numerator, currency in the denominator, bid or ask).
$$\frac{A}{B}_\text{side}$$
• On the right-hand side of the equation, write the inverse function:
$$\frac{A}{B}_\text{side}=\frac{1}{\cdot}$$
• On the right-hand side, replace the $\cdot$ by the inverted exchange rate:
$$\frac{A}{B}_\text{side}=\frac{1}{\frac{B}{A}_\cdot}$$
• Finally, replace the remaining dot on the right-hand side by the opposite side:
$$\frac{A}{B}_\text{side}=\frac{1}{\frac{B}{A}_\text{opp. side}}$$
Let's show how this works using our base example:
$$\frac{EUR}{CHF}_\text{bid}=\frac{1}{\frac{CHF}{EUR}_\text{ask}}$$
Simple. You can apply this method interchangeably to suit your needs. Actually, you might wonder what you need this for: computing cross rates, which will be the subject of another post. Until then, grasp the concepts presented here and stay tuned on this blog!
# CFA Level II, Quantitative Methods: Correlation
Good evening everyone,
So I'm finally getting started writing about the CFA Level II material, and as I proceed in the classic order of the curriculum, I will start with the Quantitative Methods part. If you have a bit of experience in quantitative finance, I believe this part is quite straightforward. Actually, it covers many different things and therefore I will make a separate post for each topic to keep each of them as short as possible. This will hopefully make it as readable and accessible as possible for all types of readers.
As a brief introductory note, I would say that this section is very much like the rest of the CFA Level II curriculum, it builds on the concepts learnt in the Level I. Let me write that again: the concepts – not essentially all the formulas. For the Quant Methods part, you have to be comfortable with the Hypothesis Testing part I recapped in this post.
So let's get started with the first post, which will be about correlation. This part is actually quite easy because you've pretty much seen everything at Level I. Let me restate the two simple definitions of the sample covariance and the sample correlation of two samples $X$ and $Y$:
$$\text{cov}_{X,Y} = \frac{\sum_{i=1}^n (X_i - \bar{X})(Y_i - \bar{Y})}{n-1}$$
$$r_{X,Y} = \frac{\text{cov}_{X,Y}}{s_X s_Y}$$
where $s_X$ and $s_Y$ are the standard deviation of the respective samples.
The problem with the sample covariance is that it doesn't really give you a good idea of the strength of the relationship between the two processes; it very much depends on each sample's variance. This is where the sample correlation is actually useful, because it is bounded: $r_{X,Y} \in [-1,1]$. Taking the simple example of using the same sample twice, we have $\text{cov}_{X,X} = s_X^2$ and $r_{X,X}=1$.
In general, I would say that the Quant Methods part of the Level II mainly focuses on understanding underlying models and their assumptions, not on learning and applying formulas (that was more the case in Level I). For correlation, there is one key thing to understand: it detects a linear relationship between two samples. This means that the correlation is only useful to detect a relationship of the kind $Y = aX+b+\epsilon$.
A good way to visualize whether there is a correlation between two samples is to look at a scatter plot. To create a few examples, I decided to use MATLAB, as we can generate random processes and scatter plots very easily. So I will create the following processes:
• A basic $X \sim \mathcal{N}(0,1)$ of size $n=1000$
• A process which is a linear combination of $X$.
• A process which is not a linear combination of $X$.
• And another process $Y \sim \mathcal{N}(0,1)$ of size $n=1000$ independent from $X$.
```%Parameter definition
n=1000;
%Process definitions
x=randn(n,1);
linear_x=5+2*x;
squared_x=x.^2;
y=randn(n,1);
%Plotting
figure();
scatter(x,y);
figure();
scatter(x,squared_x);
figure();
scatter(x,linear_x);
```
The script presented above generates 3 different scatter plots, which I will now present and comment on.
Scatter plot of independent processes
Now, let’s look at the scatter plot for the process which is a linear combination of X. Obviously, this is an example of two processes with positive correlation (actually, perfect correlation).
Scatter plot of perfectly correlated processes
Graphically, we can say that there is correlation between two samples if there is a line on the scatter plot. If the line has a positive slope (it goes from down-left to up-right), the correlation will be positive. If it has a negative slope (it goes from top-left to bottom-right), the correlation will be negative. The magnitude of the correlation is determined by how much the points lie on a straight line. In the example above, all the points perfectly line on a straight line. So a general framework to “estimate” correlation from a scatter plot would be the following:
• Do the points lie approximately on a straight line?
• If no, then there is no correlation, stop here.
• If yes, then there is correlation and continue:
• Is the slope of the line positive or negative?
• If positive, then the correlation $r \geq 0$
• If negative, then the correlation $r \leq 0$
• If there is no slope (vertical or horizontal straight line), then one of the samples is constant and there is no correlation, $r=0$.
• How much are the points on a straight line?
• The more they look to be on a straight line, the more $|r|$ is close to 1.
In the example above, the points look like a straight line, the slope is positive, and the points are very much on a straight line, so $r \simeq 1$.
Finally, let’s look at the third process which is simply the squared values of X, so the relationship is not linear:
Scatter plot of a squared process
If we apply the decision framework presented above, we can clearly see that the points do not lie on a straight line, and hence we conclude that there is no correlation between the samples.
Now let’s look at the values given by MATLAB for the correlations:
| Samples | Correlation |
|---------|-------------|
| Independent processes X and Y | 0.00 |
| X and Linear Combination of X | 1.00 |
| X and squared X | 0.02 |
As we can see, MATLAB confirms what we determined looking at the scatter plots.
I now want to explain a very important concept in the Level II curriculum: the statistical significance of an estimation. Quite simply, given the fact that we estimated some value $\hat{b}$, we say that this estimation is statistically significant with some confidence level $(1-\alpha)$, if we manage to reject the hypothesis $H_0 : b=0$ with that confidence level. To do so, we will perform a statistical test which will depend on the value we are trying to estimate.
For this post, we would like to determine whether the sample correlation $r$ we estimate is statistically significant. There is a simple test given by the CFA Institute:
$$t = \frac{r \sqrt{n-2}}{\sqrt{1-r^2}}$$
This variable $t$ follows a Student-t law with $n-2$ degrees of freedom. This means that, if you know $\alpha$, you can simply look up the critical value $t_c$ in the distribution table and reject the hypothesis $H_0 : r=0$ if $t < -t_c$ or $t > t_c$. Although you are supposed to know how to compute this test yourself, you can also be given the p-value of the estimated statistic. Recall from Level I that the p-value is the minimal $\alpha$ for which $H_0$ can be rejected. Quite simply, if the p-value is smaller than the $\alpha$ you consider for the test, you can reject the null hypothesis $H_0$ and conclude that the estimated value is statistically significant.
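For completeness, here is a small MATLAB sketch of this test, reusing the x and y simulated in the script above:

```
% Sketch: significance test for the sample correlation between x and y
n = length(x);
[R, P] = corrcoef(x, y);                  % correlation matrix and p-values
r = R(1,2);                               % sample correlation

t_stat = r * sqrt(n-2) / sqrt(1 - r^2);   % t-statistic, n-2 degrees of freedom
fprintf('r=%.3f t=%.2f p-value=%.3f\n', r, t_stat, P(1,2));
```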
Let’s look at the p-values MATLAB gives me for the correlation we discussed in the example:
| Samples | Correlation | P-Value |
|---------|-------------|---------|
| Independent processes X and Y | -0.01 | 0.77 |
| X and Linear Combination of X | 1.00 | 0.00 |
| X and squared X | 0.02 | 0.53 |
If we consider an $\alpha$ of 5%, we can see that we cannot reject the null hypothesis for the independent and squared processes, so those correlations are not statistically significant. On the other hand, the linear combination of X has a p-value close to 0, which means that it is statistically significant for virtually any $\alpha$.
One last word on the important points to keep in mind about correlation:
• Correlation detects only a linear relationship; not a nonlinear one.
• Correlation is very sensitive to outliers ("weird" points, often erroneous, included in the datasets).
• Correlation can be spurious; you can detect a statistically significant correlation even though there is no economic rationale behind it.
I hope you enjoyed this first post on the CFA Level II, and I'll be back shortly with more!
http://mathhelpforum.com/calculus/4288-practical-use-calculus.html | # Thread:
1. ## Practical use of Calculus
15.1
A rectangular container has the following dimensions (cm)
Length $3x$
Width $90-3x$
depth $\frac{x}{3}$
Determine
15.1.1 Volume V in terms of $x$
15.1.2 The value for $x$ for which it has a max volume
15.2 The sum of two positive numbers x and y is 48
15.2.1 find y in terms of x
15.2.2 Prove that the sum of their squares can be given by $2x^3-96x+2304$
15.2.3 Find the smallest possible value of the sum of their squares
I have no idea >< I made a couple of attempts but it doesn't look right. weh!
2. Originally Posted by NeF
15.1
A rectangular container has the following dimensions (cm)
Length $3x$
Width $90-3x$
depth $\frac{x}{3}$
Determine
15.1.1 Volume V in terms of $x$
15.1.2 The value for $x$ for which it has a max volume
15.2 The sum of two positive numbers x and y is 48
15.2.1 find y in terms of x
15.2.2 Prove that the sum of their squares can be given by $2x^3-96x+2304$
15.2.3 Find the smallest possible value of the sum of their squares
I have no idea >< I made a couple of attempts but it doesn't look right. weh!
Hello, NeF,
$V=l\cdot w\cdot d \Longrightarrow V(x)=3x\cdot (90-3x)\cdot \frac{x}{3}=-3x^3+90x^2$
You'll get the maximum value for V if $\frac{dV}{dx}=0$. Thus:
$\frac{dV}{dx}=-9x^2+180x$. Therefore: $-9x^2+180x=0$.
Solve this equation for x and you'll get x = 0 or x = 20. x = 0 obviously produces the minimum value of V, so at x = 20 V has its maximum.
to 15.2:
x + y = 48. Therefore:
y = 48-x
Sum of squares: $x^2+(48-x)^2=x^2+2304-96x+x^2=2x^2-96x+2304$. I assume that there is a typo in your text.
As you can see $s(x)=2x^2-96x+2304$ is the equation of a parabola opening upward. So the smallest value is at the vertex where the derivative of s is zero:
$\frac{ds}{dx}=4x-96$. This expression is zero if x = 24. That means if x = y = 24 you get the smallest sum of the squares.
Greetings
EB
http://www.wikiwaves.org/Free-Surface_Green_Function | # Free-Surface Green Function
From WikiWaves
The Free-Surface Green function is one of the most important objects in linear water wave theory. It forms the basis of many of the numerical solutions, especially for bodies of arbitrary geometry. It first appeared in John 1949 and John 1950. It is based on the Frequency Domain Problem. The exact form of the Green function depends on whether we assume the solution is proportional to $\exp(i\omega t)$ or $\exp(-i\omega t)$. It is the fundamental tool for the Green Function Solution Method. There are many different representations for the Green function.
## Equations for the Green function
The Free-Surface Green function is a function which satisfies the following equation (in Finite Depth)
$\nabla_{\mathbf{x}}^{2}G(\mathbf{x},\mathbf{\xi})=\delta(\mathbf{x}-\mathbf{\xi}), \, -h<z<0$
$\frac{\partial G}{\partial z}=0, \, z=-h,$
$\frac{\partial G}{\partial z} = \alpha G,\,z=0.$
where $\alpha$ is the wavenumber in Infinite Depth, which is given by $\alpha=\omega^2/g$ where $g$ is gravity. We also require a condition as $\mathbf{x} \to \infty$, which is the Sommerfeld Radiation Condition. This depends on whether we assume that the solution is proportional to $\exp(i\omega t)$ or $\exp(-i\omega t)$. We assume $\exp(i\omega t)$ throughout this page.
We define $\mathbf{x}=(x,y,z)$ and $\mathbf{\xi}=(a,b,c)$
## Two Dimensional Representations
Many expressions for the Green function have been given. We present here a derivation for finite depth based on an Eigenfunction Matching Method. We write the Green function as
$G(x) = \sum_{n=0}^\infty a_n(x)f_n(z)$
where
$f_n(z)=\frac{\cos(k_n(z+h))}{N_n}$
$k_n$ are the roots of the Dispersion Relation for a Free Surface
$\alpha +k_n\tan{(k_n h)}= 0\,$
with $k_0$ being purely imaginary with negative imaginary part and $k_n,$ $n\geq 1$ are purely real with positive real part ordered with increasing size. $N_n$ is chosen so that the eigenfunctions are orthonormal, i.e.,
$\int_{-h}^{0} f_m(z) f_n(z)\mathrm{d}z = \delta_{mn}.\,$
and are given by
$N_n = \sqrt{\frac{\cos(k_nh)\sin(k_nh)+k_nh}{2k_n}}$
The Green function as written needs to only satisfy the condition
$(\partial_x^2 + \partial_z^2 )G = \delta(x-a)\delta(z-c).$
We can expand the delta function as
$\delta(z-c)=\sum_{n=0}^\infty f_n(z)f_n(c).$
Therefore we can derive the equation
$\sum_{n=0}^\infty (\partial_x^2 - k_n^2 )a_n(x)f_n(z)= \delta(x-a)\sum_{n=0}^\infty f_n(z)f_n(c).$
so that we must solve
$(\partial_x^2 - k_n^2 )a_n(x) = \delta(x-a)f_n(c).$
This has solution
$a_n(x) = -\frac{e^{-|x-a|k_n}f_n(c)}{2 k_n}.$
The Green function can therefore be written as
$G(x,\xi) = \sum_{n=0}^\infty -\frac{e^{-|x-a|k_n}}{2 k_n}f_n(c)f_n(z)$
$= \sum_{n=0}^\infty -\frac{e^{-|x-a|k_n}}{2 k_n N_n^2} \cos(k_n(z+h))\cos(k_n(c+h))$
It can be written using the expression for $N_n$ as
$G(\mathbf{x},\mathbf{\xi}) = \sum_{n=0}^\infty -\frac{e^{-|x-a|k_n}}{\cos(k_nh)\sin(k_nh)+k_nh} \cos(k_n(z+h))\cos(k_n(c+h))$
We can use the Dispersion Relation for a Free Surface which the roots $k_n$ satisfy to show that $\alpha = - k_n\tan k_n h$ and $\alpha ^2+k_n^2 = k_n^2\sec^2k_n h$ so that we can write the Green function in the following forms
$G(\mathbf{x},\mathbf{\xi}) = \sum_{n=0}^\infty \frac{e^{-|x-a|k_n}}{k_n/\alpha \sin(k_nh) - k_nh} \cos(k_n(z+h))\cos(k_n(c+h))$
or
$G(\mathbf{x},\mathbf{\xi}) = \sum_{n=0}^\infty \frac{(\alpha ^2+k_n^2)e^{-|x-a|k_n}}{\alpha - (\alpha ^2+k_n^2)k_nh } \cos(k_n(z+h))\cos(k_n(c+h))$
There are some numerical advantages to these other forms. Note that the expression given in Mei 1983 and Wehausen and Laitone 1960 is incorrect (by a factor of -1).
### Incident at an angle
In some situations the potential may have a simple $e^{i k_y y}$ dependence (so that it is pseudo two-dimensional). This is used to allow waves to be incident at an angle. We require the Green function to satisfy the following equation
$\left(\partial_x^2 + \partial_z^2 - k_y^2\right) G(\mathbf{x},\mathbf{\xi})=\delta(\mathbf{x}-\mathbf{\xi}), \, -h<z<0$
$\frac{\partial G}{\partial z}=0, \, z=-h,$
$\frac{\partial G}{\partial z} = \alpha G,\,z=0.$
The Green function can be derived exactly as before except we have to include $k_y$
$G(\mathbf{x},\mathbf{\xi}) = \sum_{n=0}^\infty -\frac{k_n}{\sqrt{k_n^2+k_y^2}} \frac{e^{-|x-a|\sqrt{k_n^2+k_y^2}}}{\cos(k_nh)\sin(k_nh)+k_nh} \cos(k_n(z+h))\cos(k_n(c+h))$
### Infinite Depth
The Green function for infinite depth can be derived from the expression for finite depth by taking the limit as $h\to\infty$ and converting the sum to an integral using the Riemann sum. Alternatively, the expression can be derived using the Fourier Transform (Mei 1983).
### Solution for the singularity at the Free-Surface
We can also consider the following problem
$\nabla^{2} G=0, \, -h<z<0$
$\frac{\partial G}{\partial z}=0, \, z=-h,$
$-\frac{\partial G}{\partial z} + \alpha G = \delta(x-a),\,z=0.$
It turns out that the solution to this is nothing more than the Green function we found previously restricted to the free surface, i.e.
$G(\mathbf{x},\mathbf{\xi}) = \sum_{n=0}^\infty -\frac{e^{-|x-a|k_n}}{2 k_n N_n^2} \cos(k_n(z+h))\cos(k_n h) = \sum_{n=0}^\infty \frac{e^{-|x-a|k_n}}{2 \alpha N_n^2} \cos(k_n(z+h))\sin(k_n h)$
### Matlab code
Code to calculate the Green function in two dimensions, without an incident angle, for a point source and field point on the free surface can be found here: two_d_finite_depth_Green_surface.m
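The linked file is not reproduced here, but the sketch below shows roughly how such a routine can be organised in MATLAB; the parameter values, the series truncation $N$ and the root-finding brackets are assumptions for illustration and are not taken from the original code.

```
% Rough sketch: two-dimensional finite-depth Green function with both the
% source point and the field point on the free surface (z = c = 0).
alpha = 1;  h = 2;  xa = 0.5;  N = 50;     % alpha = omega^2/g, xa = |x-a|

% Propagating mode: k0 = -1i*k, where k solves alpha = k*tanh(k*h)
k  = fzero(@(q) alpha - q.*tanh(q*h), [1e-8, alpha + 10]);
kn = zeros(N+1,1);
kn(1) = -1i*k;

% Evanescent modes: real roots of alpha + kn*tan(kn*h) = 0,
% one in each interval ((n-1/2)*pi/h, n*pi/h)
for n = 1:N
    kn(n+1) = fzero(@(q) alpha + q.*tan(q*h), ...
                    [(n-1/2)*pi/h + 1e-6, n*pi/h]);
end

% Eigenfunction series above, evaluated at z = c = 0
terms = -exp(-xa*kn) .* cos(kn*h).^2 ./ (cos(kn*h).*sin(kn*h) + kn*h);
G = sum(terms);
disp(G)
```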
## Three Dimensional Representations
Let $(r,\theta)$ be cylindrical coordinates such that
$x - a = r \cos \theta,\,$
$y - b = r \sin \theta,\,$
and let $R_0$ and $R_1$ denote the distance from the source point $\mathbf{\xi} = (a,b,c)$ and the distance from the mirror source point $\bar{\mathbf{\xi}} = (a,b,-c)$ respectively, $R_0^2 = (x-a)^2 + (y-b)^2 + (z-c)^2$ and $R_1^2 = (x-a)^2 + (y-b)^2 + (z+c)^2$.
The most important representation of the finite depth free surface Green function is the eigenfunction expansion given by John 1950. He wrote the Green function in the following form
$\begin{matrix} G(\mathbf{x};\mathbf{\xi}) & = & \frac{i}{2} \, \frac{\alpha ^2-k^2}{(\alpha ^2-k^2)h-\alpha }\, \cosh k(z+h)\, \cosh k(c+h) \, H_0^{(1)}(k r) \\ & + & \frac{1}{\pi} \sum_{m=1}^{\infty} \frac{k_m^2+\alpha ^2}{(k_m^2+\alpha ^2)h-\alpha }\, \cos k_m(z+h)\, \cos k_m(c+h) \, K_0(k_m r), \end{matrix}$
where $H^{(1)}_0$ and $K_0$ denote the Hankel function of the first kind and the modified Bessel function of the second kind, both of order zero, as defined in Abramowitz and Stegun 1964, $k$ is the positive real solution to the Dispersion Relation for a Free Surface and $k_m$ are the imaginary parts of the solutions with positive imaginary part. This way of writing the equation was primarily to avoid complex values for the Bessel functions; however, most computer packages will calculate Bessel functions for complex argument, so it makes more sense to write the Green function in the following form
$G(\mathbf{x};\mathbf{\xi}) = \frac{1}{\pi} \sum_{m=0}^{\infty} \frac{k_m^2+\alpha ^2}{(k_m^2+\alpha ^2)h-\alpha }\, \cos k_m(z+h)\, \cos k_m(c+h) \, K_0(k_m r),$
where $k_m$ are as before except $k_0=ik$.
An expression where both variables are given in cylindrical polar coordinates is the following
$G(r,\theta,z;s,\varphi,c)= \frac{1}{\pi} \sum_{m=0}^{\infty} \frac{k_m^2+\alpha ^2}{h(k_m^2+\alpha ^2)-\alpha }\, \cos k_m(z+h) \cos k_m(c+h) \sum_{\nu=-\infty}^{\infty} K_\nu(k_m r_+) I_\nu(k_m r_-) \mathrm{e}^{\mathrm{i}\nu (\theta - \varphi)},$
where $r_+=\mathrm{max}\{r,s\}$, and $r_-=\mathrm{min}\{r,s\}$; this was given by Black 1975 and Fenton 1978 and can be derived by applying Graf's Addition Theorem to $K_0(k_m|r\mathrm{e}^{\mathrm{i}\theta}-s\mathrm{e}^{\mathrm{i}\varphi}|)$ in the definition of $G(\mathbf{x};\mathbf{\xi})$ above.
In three dimensions and infinite depth the Green function $G$, for $r>0$, was given by Havelock 1955 as
$\begin{matrix} G(\mathbf{x};\mathbf{\xi}) &= \frac{i \alpha }{2} e^{\alpha (z+c)} \, H_0^{(1)}(\alpha r) + \frac{1}{4 \pi R_0} + \frac{1}{4 \pi R_1} \\ & - \frac{1}{\pi^2} \int\limits_{0}^{\infty} \frac{\alpha }{\eta^2 + \alpha ^2} \big( \alpha \cos \eta (z+c) - \eta \sin \eta (z+c) \big) K_0(\eta r) \mathrm{d}\eta. \end{matrix}$
It should be noted that this Green function can also be written in the following closely related form,
$\begin{matrix} G(\mathbf{x};\mathbf{\xi}) & = \frac{i \alpha }{2} e^{\alpha (z+c)} \, H_0^{(1)}(\alpha r) + \frac{1}{4 \pi R_0} \\ & + \frac{1}{2 \pi^2} \int\limits_{0}^{\infty} \frac{(\eta^2 - \alpha ^2) \cos \eta (z+c) + 2 \eta \alpha \sin \eta (z+c)}{\eta^2 + \alpha ^2} K_0(\eta r) \mathrm{d}\eta \end{matrix}$
Linton and McIver 2001. An equivalent representation is due to Kim 1965 for $r>0$, although implicitly given in the work of Havelock 1955, and is given by
$G(\mathbf{x};\mathbf{\xi}) = \frac{1}{4 \pi R_0} + \frac{1}{4 \pi R_1} - \frac{\alpha }{4} e^{\alpha (z+c)} \Big(\mathbf{H}_0(\alpha r) + Y_0(\alpha r) - 2i J_0 (\alpha r) + \frac{2}{\pi} \int\limits_{z+c}^0 \frac{e^{-\alpha \eta}}{\sqrt{r^2 + \eta^2}} \mathrm{d}\eta \Big),$
where $J_0$ and $Y_0$ are the Bessel functions of order zero of the first and second kind and $\mathbf{H}_0$ is the Struve function of order zero.
The expression due to Peter and Meylan 2004 is
$G(\mathbf{x};\mathbf{\xi}) = \frac{i \alpha }{2} e^{\alpha (z+c)} H_0^{(1)}(\alpha r) + \frac{1}{\pi^2} \int\limits_0^{\infty} \Big( \cos \eta z + \frac{\alpha }{\eta} \sin \eta z \Big) \frac{\eta^2}{\eta^2+\alpha ^2} \Big( \cos \eta c + \frac{\alpha }{\eta} \sin \eta c \Big) K_0(\eta r) \mathrm{d}\eta.$