http://physics.stackexchange.com/questions/tagged/homework?page=2&sort=active&pagesize=15 | # Tagged Questions
Applies to questions of primarily educational value - not only questions that arise from actual homework assignments, but any question where it is preferable to guide the asker to the answer rather than giving it away outright.
1 answer
35 views
### Diagonal matrix in k-space
I'm having some trouble with an integration I hope you guys can help me with. I have that: ${{\mathbf{v}}_{i}}\left( \mathbf{k} \right)=\frac{\hbar {{\mathbf{k}}_{i}}}{m}$ and ...
2 answers
30 views
### Voltmeter forming a closed circuit
A battery is connected to a 10Ω resistor as shown in Figure 2. The emf (electromotive force) of the battery is 6.0 V. When the switch is open the voltmeter reads 6.0 V and when it is closed it ...
0 answers
44 views
### Air pressure in balloon
I have to calculate the air pressure inside a hot air balloon. After some searching I found out that I can use the ideal gas law: PV = nRT (from Wikipedia). So to get the pressure in the balloon I ...
1 answer
65 views
### Proof $\left[ {\hat H,{{\hat p}_i}} \right] = - \frac{\hbar }{i}\frac{{\partial \hat H}}{{\partial {{\hat q}_i}}}$ [closed]
I have a problem with the Hamiltonian; I can't think of anything to solve it! So could you give me some hints? Knowing that: \left[ {{{\hat p}_i},{{\hat q}_k}} \right] = \frac{\hbar }{i}{\delta ...
1 answer
41 views
### Center of mass of three particles of masses 1kg, 2kg, 3kg lies at the point (1,2,3) [closed]
Center of mass of three particles of masses 1kg, 2kg, 3kg lies at the point (1,2,3) and center of mass of another system of particles 3kg and 2kg lies at the point (-1,3,-2). Where should we put a ...
0 answers
56 views
### Relativistic canonical transformation
What is a relativistic canonical transformation? I need every piece of information about it. Does anyone know a reference or an article about relativistic canonical transformations? For example, in ...
1 answer
41 views
### How large of a solar sail would be needed to travel to mars in under a year?
I'm attempting to approach this using the identity $$F/A = I/c$$ I can solve for Area easily enough $$A = F(c/I)$$ and I know the distance $d$ is $$d=1/2(at^2)$$ But I'm having difficulty trying to ...
1 answer
202 views
### Potential due to a spherical surface charge
The potential at the surface of an insulating sphere (radius R) is given by $$V(R,\theta) = k \cos(3\theta)$$ where $k$ is a constant. Use separation of variables to find the potential inside the ...
1 answer
74 views
### Simple harmonic oscillator system and changes in its total energy
Suppose I have a body of mass $M$ connected to a spring (which is connected to a vertical wall) with a stiffness coefficient of $k$ on some frictionless surface. The body oscillates from point $C$ to ...
1 answer
77 views
### Killing vector argument gone awry?
What has gone wrong with this argument?! The original question: A space-time such that $$ds^2=-dt^2+t^2dx^2$$ has Killing vectors $(0,1)$, $(-\exp(x),\frac{\exp(x)}{t})$, ...
1 answer
198 views
### Electric field due to nonconducting sphere
I am calculating the electric field outside a nonconducting sphere with a hollow spherical cavity. When I use the rule (charge density $= dQ/dV$), I don't know exactly what $dV$ is: is the volume here ...
1 answer
209 views
### What is the general relativistic calculation of travel time to Proxima Centauri?
It has already been asked here how fast a probe would have to travel to reach Alpha Centauri within 60 years. NASA has done some research into a probe that would take 100 years to make the trip. But ...
2 answers
108 views
### Geodesic equations
I am having trouble understanding how the following statement (taken from some old notes) is true: For a 2 dimensional space such that $$ds^2=\frac{1}{u^2}(-du^2+dv^2)$$ the timelike geodesics ...
0 answers
49 views
### A slender rod with a ball at the end
A slender rod is attached to a block accelerating horizontally. The rod is free to rotate without friction. At the end of the rod is a ball. As the block accelerates, the slender rod will be deviated ...
1 answer
96 views
### Chain of balls on an inclined plane
Suppose we have some inclined plane, and there is some chain of balls of length $l$ and mass $m$ lying on it. No friction at all in the system. 1) What is $x_0$ (the vertical hanging part of the ...
2 answers
572 views
### Electric field and electric potential of a point charge in 2D and 1D
In 3D, the electric field of a point charge is inversely proportional to the square of the distance, while the potential is inversely proportional to the distance. We can derive this from Coulomb's law. However, I ...
0 answers
35 views
### Magnetic Field Lines predict? [closed]
Question: Magnetic field lines determine: (A) only the direction of the field (B) the relative strength of the field (C) both the relative strength and the direction of the field (D) only the ...
3 answers
2k views
### Finding Angular Acceleration of rod given radius and angle
A uniform rod is 2.0 m long. The rod is pivoted about a horizontal, frictionless pin through one end. The rod is released from rest at an angle of 30° above the horizontal. What is the angular ...
1 answer
53 views
### The second resonance of string?
What is the relationship between "the second resonance" of a string and the wavelength? As in this question: if the length of the string is 2 cm at the second resonance, then what is the wavelength?
2 answers
102 views
### Potential Inside Conducting Cube
A cubical box with sides of length L consists of six metal plates. Five sides of the box (the plates at $x=0, x=L, y=0, y=L, z=0$) are grounded. The top of the box (at z = L) is made of a separate ...
1 answer
65 views
### How is torque equal to moment of inertia times angular acceleration divided by g?
How is the following relation true $$\tau = \large\frac{I}{g} \times \alpha$$ where $\tau$ is torque, $I$ is moment of inertia, $g= 9.8ms^{-2}$, and $\alpha=$ angular acceleration.
0 answers
46 views
### What force does the seat exert on the rider at the top and bottom of the ride? [closed]
A 75 kg person rides a Ferris wheel which is rotating uniformly. The centripetal force acting on the person is 45 N. What force does the seat exert on the rider at the top and the bottom of the ride?
3 answers
71 views
### How to determine the direction of medium's displacement vectors of a standing wave?
Consider the following problem taken from a problem booklet. My questions are: What is a displacement vector? And how does one determine the direction of the displacement vector at a certain point? Where is the ...
1 answer
82 views
### Liquid benzene magnetic susceptibility
In a solid state physics problem, I'm asked to make a rough estimate of the contribution to the diamagnetic susceptibility of the outer electron of each carbon atom. The wavefunction of these ...
2 answers
144 views
### Why does the quantum Heisenberg model become the classical one when $S\to\infty$?
The Hamiltonian of the spin $S$ quantum Heisenberg model is $$H = J\sum_{<i,j>}\mathbf{S}_{i}\cdot\mathbf{S}_{j}$$ I have read that when the spin quantum number $S\to\infty$, quantum fluctuation ...
1 answer
86 views
### How can the derivative of this trace be constrained?
I am studying for my exam on relativity and I am going through some problem sets, including ones I was not very successful in, so I want to know how to do this problem. (Convergence of ...
3 answers
948 views
### “Find the net force the southern hemisphere of a uniformly charged sphere exerts on the northern hemisphere”
This is Griffiths, Introduction to Electrodynamics, 2.43, if you have the book. The problem states Find the net force that the southern hemisphere of a uniformly charged sphere exerts on the ...
1 answer
26 views
### How to find time taken for a spinning top to stop? [closed]
The angular position of a spinning top is given by $\theta = t^3 - 72t$, where $t$ is in seconds and $\theta$ is in radians.
1 answer
44 views
### A theoretical problem on Mechanics [closed]
Two particles with masses $m_1$ and $m_2$ are moving in 3D space with some Cartesian coordinate system. The laws of motion of these particles are known, i.e. the position vectors $\vec{r_1}(t)$ ...
3 answers
111 views
### Projectiles and escape velocity
Q: The escape velocity for a body projected vertically upwards from the surface of earth is 11 km/s. If the body is projected at an angle of $45^\circ$ with vertical, the escape velocity will be? ...
0 answers
26 views
### Assume the following vectors [closed]
Assume the following vectors: $a = \frac{1}{\sqrt{2}} [0,1,1]$ and $b = [1,2,3]$. (a) Calculate the scalar product $c = a \cdot b$ (b) Calculate the value of the vector $d$ where $d = b - ca$ (c) What is the ...
1 answer
46 views
### Forces on a particle moving in a vertical circle
In the diagram, a particle (A, mass 0.6 kg) is moving in a vertical circle. The question is: when it gets to the lowest point, what is the tension in the light rod that is between the center of the ...
0 answers
35 views
### Physical Optics [closed]
Monochromatic light is used to illuminate a pair of narrow slits 0.3 mm apart, and the interference pattern is observed on a screen 0.91 m away. The second dark band appears 3.0 mm from the center. ...
0 answers
23 views
### Absolute Viscosity of Water at certain temperatures
I just started my class on fluid mechanics. There's a problem that requires the absolute viscosity $\mu$ of water at 25°C. I looked up the table in my book and I only have it for water at 20°C and 30°C. I ...
4 answers
2k views
### Calculating impact force for a falling object?
Good evening, I'm trying to calculate what kind of impact force a falling object would have once it hit something. This is my attempt so far: Because $x= \frac{1}{2} at^2$, $t=\sqrt{2x/a}$ $v=at$, ...
2 answers
105 views
### Why is $r'/r^2 = -1/r$? [closed]
If $r=r(t)$, why is $\frac{r'(t)}{(r(t))^2} = -\frac{1}{r(t)}$, where $'$ denotes the derivative? I saw it in a lecture. Can you please explain?
1 answer
287 views
### Simple work and energy problem
I have the following problem: A man who weighs 50 kg runs up the stairs of a tower in Chicago that is 443 m tall. What is the power, measured in watts, if he arrives at the top of the tower ...
0 answers
45 views
### Work And Energy Question [closed]
$H = 3\text{ m}$, $m=2\text{ kg}$. The right side is rough. I want to figure out: what is the coefficient of friction $\mu$? How high up the right-hand plane does the body get before it returns? I know ...
1 answer
81 views
### When should angles be expressed in degrees vs. radians?
I am trying to calculate the albedo of a given latitude by following the methods of Brutsaert (1982), I have copied the formula below: 3.6 Shortwave and long-wave radiative fluxes Albedo ...
2 answers
48 views
### Constant of gravity in earth fixed coordinate system
I have this problem: If the constant of gravity is measured to be $g_0$ in an earth-fixed coordinate system, what is the difference $g-g_0$, where $g$ is the real constant of gravity, as ...
0 answers
49 views
### Calculating pressure in accelerated fluids in closed and open vessels?
The question asked was: "What should be the acceleration such that the pressure at both the points marked by thick dots is equal? The vessel is open and cubic with side 5 m." Initially I considered ...
1 answer
90 views
### Joule-Thomson effect of Van der Waals gas
I'm supposed to calculate the inversion pressure $p_i$ of a Van der Waals gas. The state equation of the Van der Waals gas is: $$(p + \frac{a}{V^2})(V-b) = RT.$$ To get a hold of the inversion ...
2 answers
51 views
### Electron in an infinite potential well
Does this problem make any sense? Suppose an electron is in an infinite well of length $0.5$ nm. The state of the system is the superposition of the ground state and the first excited state. Find the ...
0 answers
30 views
### Lagrangian of electromagnetic tensor in light cone coordinates? [closed]
I have the Lagrangian density of the electromagnetic field tensor in light-cone coordinates, using the d'Alembertian operator, and the Lagrangian density in Cartesian coordinates. I couldn't figure out a way to ...
0 answers
28 views
### Mercury's Orbital Precession in Special Relativity
I am researching Mercury's orbital precession. I have considered most perturbations and general relativity. I am still not satisfied. I need your help. I need a solution to Exercise 13, Chapter 6, in ...
4 answers
133 views
### Resistors in Parallel
From my book: "A length of wire is cut into five equal pieces. The five pieces are then connected in parallel, with the resulting resistance being 2.00 Ω. What was the resistance of the ...
2 answers
54 views
### Angle between acceleration and velocity
Problem: A particle is constrained to move in a circle with a 10-meter radius. At one instant, the particle's speed is 10 meters per second and is increasing at a rate of 10 meters per second ...
2 answers
66 views
### Is it possible to “add cold” or to “add heat” to systems?
Amanda just poured herself a cup of hot coffee to get her day started. She took her first sip and nearly burned her tongue. Since she didn't have much time to sit and wait for it to cool down, ...
4 answers
291 views
### Trace and adjoint representation of $SU(N)$
In the adjoint representation of $SU(N)$, the generators $t^a_G$ are chosen as $$(t^a_G)_{bc}=-if^{abc}$$ The following identity can be found in Taizo Muta's book "Foundations of Quantum ...
2 answers
43 views
### Simple conservation of momentum and frame of reference problem
I'm making a very simple physics engine based on momentum, and I'm solving what response to use for a collision from each involved object's frame of reference. However, something about how I'm ...
http://mathematica.stackexchange.com/questions/tagged/random+numerics | Tagged Questions
1 answer
150 views
RandomReal closed on left & open on right?
I have a number of algorithms that depend on uniform random reals in half-open intervals such as $[0,1)$. In particular, I need a (pseudo) random-number generator that produces machine-precision ...
2 answers
1k views
Efficient Langevin Equation Solver
This question is not about good algorithms for solving stochastic differential equations. It is about how to implement simple codes in Mathematica efficiently exploiting Mathematica's programming ...
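The excerpt above concerns Mathematica; purely as a language-neutral illustration of the scheme such questions usually revolve around, here is a minimal Euler-Maruyama integrator for an overdamped Langevin equation in Python. The harmonic drift, the noise scaling $\sqrt{2D\,dt}$, and all names are my own illustrative choices, not taken from the question.

```python
import numpy as np

def langevin_trajectory(x0, drift, diffusion, dt, n_steps, rng):
    """Euler-Maruyama for dx = drift(x) dt + sqrt(2 * diffusion) dW."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    # Pre-draw the Gaussian increments: each has variance 2 * D * dt.
    noise = rng.normal(0.0, np.sqrt(2.0 * diffusion * dt), size=n_steps)
    for i in range(n_steps):
        x[i + 1] = x[i] + drift(x[i]) * dt + noise[i]
    return x

rng = np.random.default_rng(0)
# Overdamped particle in a harmonic well U(x) = x^2 / 2, so drift = -x.
traj = langevin_trajectory(1.0, lambda x: -x, 0.5, 1e-3, 10_000, rng)
print(traj.mean(), traj.var())  # the variance should approach D = 0.5
```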
1 answer
229 views
How to fix errors in Gram-Schmidt process when using random vectors?
I first make a function to get a random vector on the unit sphere in a swath around the equator. That is what the parameter $\gamma$ controls; if $\gamma = 1/2$, the vectors can be chosen anywhere on the ...
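Since the excerpt is truncated, the following Python sketch only illustrates the usual numerical fix (modified Gram-Schmidt with a second orthogonalization pass) together with one conventional band-restricted sampler on the sphere; the exact role given to $\gamma$ here is an assumption, and all names are mine.

```python
import numpy as np

def random_band_vector(gamma, rng):
    """Uniform unit vector on S^2 with z restricted to [-gamma, gamma]."""
    z = rng.uniform(-gamma, gamma)          # uniform z gives uniform area
    phi = rng.uniform(0.0, 2.0 * np.pi)
    r = np.sqrt(1.0 - z * z)
    return np.array([r * np.cos(phi), r * np.sin(phi), z])

def gram_schmidt(vectors, tol=1e-12):
    """Modified Gram-Schmidt; orthogonalizing twice tames round-off error."""
    basis = []
    for v in vectors:
        w = v.astype(float)
        for _ in range(2):                  # the "twice is enough" heuristic
            for q in basis:
                w = w - (q @ w) * q
        norm = np.linalg.norm(w)
        if norm > tol:                      # drop numerically dependent vectors
            basis.append(w / norm)
    return np.array(basis)

rng = np.random.default_rng(1)
Q = gram_schmidt([random_band_vector(0.5, rng) for _ in range(3)])
print(np.abs(Q @ Q.T - np.eye(len(Q))).max())   # ~1e-16 deviation
```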
http://math.stackexchange.com/questions/144051/counterexample-to-cancellation-law-in-cardinals-addition | # Counterexample to cancellation law in cardinals addition
Charles C. Pinter, Set Theory
Let $a,b,c$ be any cardinal numbers.
Give a counterexample to the rule: $$a+b = a+c \implies b=c$$
Does there exist a counterexample?
-
What is an infinite cardinal plus a finite cardinal? – David Mitra May 11 '12 at 22:54
Why was this question downvoted? – MJD May 11 '12 at 23:40
## 3 Answers
$$\begin{align}&1.\quad\aleph_0=\aleph_0+0=\aleph_0+1\\&2.\quad 0\neq 1\end{align}$$
Where $\aleph_0$ is the cardinality of countably infinite sets, e.g. the non-negative integers, $\mathbb N$.
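Explicitly, the first equality is witnessed by the Hilbert-hotel bijection between $\mathbb N$ with one extra point and $\mathbb N$ itself, $$h\colon \{\ast\}\sqcup\mathbb N\to\mathbb N,\qquad h(\ast)=0,\quad h(n)=n+1,$$ which is exactly what $\aleph_0+1=\aleph_0$ means.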
-
This is certainly the very simplest counterexample. – Michael Hardy May 11 '12 at 23:59
or any infinite cardinal – Greg Martin May 12 '12 at 0:03
@Greg: Assuming the axiom of choice, yes. – Asaf Karagila May 12 '12 at 10:06
@AsafKaragila: aha, equivalently the well-ordering principle - interesting. Can one construct a counterexample if the axiom of choice isn't assumed? – Greg Martin May 12 '12 at 19:41
@Greg: Indeed. We say that A is a Dedekind-finite set if whenever $B\subsetneq A$, $|B|<|A|$. Equivalently this is to say that $|A|<|A|+1$. Every finite set is Dedekind-finite, and assuming the axiom of choice the opposite is also true: Dedekind-finite sets are finite sets. However it is consistent that without the axiom of choice there may be infinite Dedekind-finite sets. These sets may be used to construct such counterexamples. – Asaf Karagila May 12 '12 at 21:48
Here's one: $|\Bbb N| + |\{1\}| =|\Bbb N| + |\{2, 3\}|$, but $|\{1\}| \ne|\{2,3\}|$.
-
You will probably learn later that for any infinite cardinal $a$ the equality $$a+a=a\cdot a=a$$ holds. This implies that for any infinite cardinal $a$ you have a counterexample $a+a=a+0$.
More generally, if $b$, $c$ are infinite cardinals then $$b+c=b\cdot c=\max\{b,c\}.$$
The proof of this general result is not that simple; it requires the axiom of choice. See e.g. the following questions: About a paper of Zermelo and Simple cardinal arithmetic
However, showing the above result for some special cases, like $a=\aleph_0$ or $a=2^{\aleph_0}$ is not that difficult and it might be a useful exercise for someone learning basics of set theory and cardinal arithmetic.
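For instance, the special case $\aleph_0+\aleph_0=\aleph_0$ comes down to splitting $\mathbb N$ into evens and odds, $$\mathbb N=\{2n:n\in\mathbb N\}\sqcup\{2n+1:n\in\mathbb N\},$$ so that $n\mapsto 2n$ and $n\mapsto 2n+1$ assemble into a bijection $\mathbb N\sqcup\mathbb N\to\mathbb N$.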
-
http://math.stackexchange.com/questions/95192/can-you-approximate-a-vector-field/187189 | Can you approximate a vector field?
Say you have a physical simulation where "wind current" vectors are stored in a 2D space.
So you know that the vectors near each other will likely be similar in direction.
Can we capitalize on the "similarity" across the vector field, and use it to write an approximation to the vector field?
So is there an alternative way to represent a vector field (something like a Fourier Transform for vector fields?)
-
I disagree Zhen, I think the question is asking: given a collection of tangent vectors in $\mathbb{R}^2$, is there always a continuous vector field that extends it? – Zev Chonoles♦ Dec 30 '11 at 14:43
Look at the component functions. If the vector field is smooth, then you could perform a Taylor expansion (for instance) on the components, thus approximating the vector field. Sketch the case where one of the components is constant. – dls Dec 30 '11 at 15:50
Anything you can do to a scalar field, you can do to the components of a vector field... Well, not quite: anything linear in the scalar values can be applied to the components of a vector field, and the result will be sensible under change of basis of the vector values. So you can do bilinear interpolation, spline interpolation, componentwise Fourier transforms, and so on... – Rahul Narain Aug 26 '12 at 19:14
This is an essential problem in numerical fluid dynamics. It's not enough to come up with some smooth interpolation of the given data. One should also take care that "conservation laws" at work in these data are represented in the interpolation. Otherwise during numerical processing the system will be fueled, e.g., with extraneous energy not present in the situation on the ground. – Christian Blatter Feb 7 at 10:33
1 Answer
Your main point about vector fields is that vectors near each other are fairly similar. Now, assuming we have well-behaved functions that do follow that behavior, one way of "capitalizing on the similarity of a vector field" is to simply "zoom out" by taking the average of all vectors within a certain area and getting the average vector in the center of the area. This will yield a vector field with less vectors, which, because of our approximations, stand for a generalized behavior.
If, in the same scenario, we wished to emphasize the differences, then I'd suggest rather than taking an average vector for a rectangular area, you may want to divvy up your vector field such that there are regions defined by a 'central' vector and some range of tolerance from it.
However, what I think you are looking for is simply the divergence. Given your vector field $\vec F(x, y)$, making a contour plot of $D(x, y) = \nabla \cdot \vec F(x, y)$ and then choosing values of $D$ to be the contours, you can demonstrate how similar the direction and magnitude of a vector is by each curve's spacing. This also causes a loss of information, which is what you were looking for.
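As a minimal illustration of the two operations just described (block-averaging to "zoom out", and a finite-difference divergence map), here is a NumPy sketch; the sample field, the grid, and all names are illustrative choices of mine.

```python
import numpy as np

def block_average(field, k):
    """Average a (H, W, 2) sampled vector field over k x k blocks."""
    H, W, _ = field.shape
    H2, W2 = H // k, W // k
    f = field[:H2 * k, :W2 * k].reshape(H2, k, W2, k, 2)
    return f.mean(axis=(1, 3))

def divergence(field, dx=1.0, dy=1.0):
    """Central-difference divergence; u, v are the x and y components."""
    u, v = field[..., 0], field[..., 1]
    return np.gradient(u, dx, axis=1) + np.gradient(v, dy, axis=0)

ys, xs = np.mgrid[-1:1:64j, -1:1:64j]
wind = np.stack([-ys, xs], axis=-1)      # a solid-body "swirl" field
coarse = block_average(wind, 4)          # 16 x 16 summary of the trends
print(coarse.shape, np.abs(divergence(wind)).max())  # swirl is divergence-free
```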
-
What do you mean by "less vectors"? By definition, a vector field is a total relation on the state space. Do you mean the image is more restricted because of the averaging? – alancalvitti Oct 26 '12 at 14:51
Yes - there would be less information about the field in the image. For example, if you plot some vector field $\vec v$ on a grid with steps $\Delta y$ and $\Delta x$, then after averaging $\vec v'(x, y) = (v(x, y) + v(x+\Delta x, y) + v(x, y+ \Delta y) + v(x-\Delta x, y) + v(x, y- \Delta y))/5$ and plotting a vector on every point on the grid with steps $2\Delta y$ and $2\Delta x$, you'll end up with half the vectors you started with. – VF1 Oct 26 '12 at 17:06
So you're referring to a discrete grid where the number of vectors is a function of the grid subsample, as opposed to differentiable manifold, where your assertion is false? (Btw, in the example you gave above wouldn't there be 1/4 the number of vectors b/c it's quadratic in 2 directions?). – alancalvitti Oct 26 '12 at 21:37
It would be one half, because there is an overlap in the information being used for the averages. But the actual implementation of averaging the is immaterial - you could think of lots of ways to do so. Yes, I was referring to a discrete grid of vectors that represents the vector field - that was what your question was asking - how to show general trends - no? – VF1 Oct 27 '12 at 2:14
@alancalvitti: I suppose for a continuous field, one could take the mean over small regions and thus discretize. (As Mark Kac said:"Be wise, discretize!") – Raskolnikov May 10 at 6:10
http://math.stackexchange.com/questions/297088/dim-cx-mathbbr-infty-we-need-to-show-x-infty | # $\dim$ $C(X,\mathbb{R})<\infty$ we need to show $|X|<\infty$,
Let $X$ be a compact Hausdorff space such that $\dim C(X,\mathbb{R})<\infty$. We need to show $|X|<\infty$. I must say I have no idea how to prove this result. Please help. Thank you!
-
Can you show that you can always extend a function defined on a finite set of points on $X$ to a continuous function on all of $X$? – Zhen Lin Feb 7 at 11:51
## 1 Answer
Let $N$ be the dimension of $C(X,\Bbb R)$. Assume that there are $N+1$ distinct points $x_1,\dots,x_{N+1}$ in $X$. As $X$ is Hausdorff, by Urysohn's lemma we can find for each $j$ a continuous function $g_j\colon X\to\Bbb R$ such that $g_j(x_k)=0$ when $k\neq j$ and $g_j(x_j)=1$.
The family $\{g_j,1\leqslant j\leqslant N+1\}$ is necessarily linearly dependent, so we can assume that $g_{N+1}=\sum_{j=1}^Ng_j$. Evaluating this equality at $x_{N+1}$ yields a contradiction.
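Spelled out, the evaluation at $x_{N+1}$ reads $$1=g_{N+1}(x_{N+1})=\sum_{j=1}^{N}g_j(x_{N+1})=\sum_{j=1}^{N}0=0,$$ which is absurd; hence $X$ has at most $N$ points.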
This proves that $\dim C(X,\Bbb R)=\operatorname{card}(X)$.
The result is not necessarily true when $X$ is not assumed to be Hausdorff. For example, take $X$ an infinite set with the topology $\{\emptyset,X\}$. The only continuous functions are constant ones.
-
Perhaps it's worth pointing out where we used the compactness assumption... – Zhen Lin Feb 7 at 13:12
http://gravityandlevity.wordpress.com/2010/08/30/our-stability-is-but-balance-freeman-dyson-on-how-to-imagine-quantum-fields/ | ## A blog about the big ideas in physics, plus a few other things
In the last post I told the story of my own struggles with quantum field theory and what it is supposed to mean. In this post (as promised), I want to let someone much more intelligent and eloquent tell the story of quantum fields.
The following are excerpts from Freeman Dyson's beautiful essay "Field Theory", written in 1953, as presented in his book From Eros to Gaia. I have tried to copy the most essential and visual arguments from the essay and have made no attempt to keep things short or to extract pithy quotations. Everything below (except for bold section headings) is quoted, even though I have dropped the quotation marks. I also took some liberties with the paragraph breaks to make it easier to read in online format.
I hope that the picture he is painting is as wonderful and awe-inspiring to you as it was to me.
On the historical purpose of quantum field theory
Next, a remark about the purpose of the theory. … The point is that the theory is descriptive and not explanatory. It describes how elementary particles behave; it does not attempt to explain why they behave so. To draw an analogy from a familiar branch of science, the function of chemistry as it existed before 1900 was to describe precisely the properties of the chemical elements and their interactions. Chemistry described how the elements behave; it did not try to explain why a particular set of elements, each with its particular properties, exists.
… Looking backward, it is now clear that nineteenth-century chemists were right to concentrate on the how and to ignore the why. They did not have the tools to begin to discuss intelligently the reasons for the individualities of the elements. They had to spend a hundred years building up a good descriptive theory before they could go further. And the result of their labors … was not destroyed or superseded by the later insight that atomic physics gave.
… Our justification for concentrating attention on the existing theory, with its many arbitrary assumptions, is the belief that a working descriptive theory of elementary particles must be established before we can expect to reach a more complete understanding at a deeper level. The numerous attempts to by-pass the historical process, and to understand the particles on the basis of general principles without waiting for a descriptive theory, have been as unsuccessful as they were ambitious. The more ambitious they are, the more unsuccessful. These attempts seem to be on a level with the famous nineteenth-century attempts to explain atoms as “vortices in the ether”.
On how to think about classical fields
A classical field is a kind of tension or stress that can exist in empty space in the absence of matter. It reveals itself by producing forces, which act on material objects that happen to lie in the space the field occupies. The standard examples of classical fields are the electric and magnetic fields, which push and pull electrically charged objects and magnetized objects respectively.
… What, then, is the picture we have in mind when we try to visualize a classical field? Characteristically, modern physicists do not try to visualize the objects they discuss.
In the nineteenth century it was different. Then it seemed that the universe was built of solid mechanical objects, and that to understand an electric field it was necessary to visualize the field as a mechanical stress in a material substance. It was possible, indeed, to visualize electric and magnetic fields in this way. To do so, people imagined a material substance called the ether, which was supposed to fill the whole of space and carry the electric and magnetic stresses. But as the theory was developed, the properties of the ether became more and more implausible. Einstein in 1905 finally abandoned the ether and proposed a new and simple version of the Maxwell theory in which the ether was never mentioned. Since 1905 we gradually gave up the idea that everything in the universe should be visualized mechanically. We now know that mechanical objects are composed of atoms held together by electric fields, and therefore it makes no sense to try to explain electric fields in terms of mechanical objects.
… It is still convenient sometimes to make a mental picture of an electric field. For example, we may think of it as a flowing liquid which fills a given space and which at each point has a certain velocity and direction of flow. The velocity of the liquid is a model for the strength of the field.
But nobody nowadays imagines that the liquid really exists or that it explains the behavior of the field. The flowing liquid is just a model, a convenient way to express our knowledge about the field in concrete terms. It is a good model only so long as we remember not to take it seriously. … To a modern physicist the electric field is a fundamental concept which cannot be reduced to anything simpler. It is a unique something with a set of known properties, and that is all there is to it.
This being understood, the reader may safely think of a flowing liquid as a fairly accurate representation of what we mean by a classical electric field. The electric and magnetic fields must then be pictured as two different liquids, both filling the whole of space, moving separately and interpenetrating each other freely. At each point there are two velocities, representing the strengths of the electric and magnetic components of the total electromagnetic field.
On visualizing the quantum field
Unfortunately, the quantum field is even more difficult to visualize than the classical field. The basic axiom of quantum mechanics is the uncertainty principle. This says that the more closely we look at any object, the more the object is disturbed by our looking at it, and the less we can know about the subsequent state of the object. Another, less precise, way of expressing the same principle is to say that objects of atomic size fluctuate continually; they cannot maintain a precisely defined position for a finite length of time.
… At the risk of making some professional quantum theoreticians turn pale, I shall describe a mechanical model that may give some idea of the nature of a quantum field. Imagine the flowing liquid which served as a model for a classical electric field. But suppose that the flow, instead of being smooth, is turbulent, like the wake of an ocean liner. Superimposed on the steady average motion there is a tremendous confusion of eddies, of all sizes, overlapping and mingling with one another. In any small region of the liquid the velocity continually fluctuates, in a more or less random way. The smaller the region, the wilder and more rapid the fluctuations.
… The model does not describe correctly the detailed quantum-mechanical properties of a quantum field; no classical model can do that. But … the model makes clear that it is meaningless to speak about the velocity of the liquid at any one point. The fluctuations in the neighborhood of the point become infinitely large as the neighborhood becomes smaller. The velocity at the point itself has no meaning. The only quantities that have meaning are velocities averaged over regions of space and over intervals of time.
On particles, and on the physical world
It is not possible to explain in nontechnical language how particles arise mathematically out of the fluctuations of a field. It cannot be understood by thinking about a turbulent liquid or any other classical model. All I can say is that it happens. And it is the basic reason for believing that the concept of a quantum field is a valid concept and will survive any changes that may later be made in the details of the theory.
The picture of the world that we have finally reached is the following. Some ten or twenty different quantum fields exist. Each fills the whole of space and has its own particular properties. There is nothing else except these fields; the whole of the material universe is built of them. Between various pairs of fields there are various kinds of interaction. Each field manifests itself as a type of elementary particle. The number of particles of a given type is not fixed, for particles are constantly being created or annihilated or transmuted into one another.
On wonder
Even to a hardened theoretical physicist, it remains perpetually astonishing that our solid world of trees and stones can be built of quantum fields and nothing else. The quantum field seems far too fluid and insubstantial to be the basic stuff of the universe.
Yet we have learned gradually to accept the fact that the laws of quantum mechanics impose their own peculiar rigidity upon the fields they govern, a rigidity which is alien to our intuitive conceptions but which nonetheless effectively holds the earth in place. We have learned to apply, both to ourselves and to our subject, the words of Robert Bridges:
Our stability is but balance, and our wisdom lies
In masterful administration of the unforeseen.
8 Comments
1. August 31, 2010 12:10 am
It is this fact—that the knowing physicist cannot know and knows he cannot know—that bothers the layperson.
As an undergrad student of QM, I struggled with this for a long time. After all, everything up until that point made sense to one degree or another. Thermodynamics took much longer to understand, and in the understanding there had to be a certain period of suspension of disbelief, where I saw an equation, saw the experiment that gave us the equation, and didn’t bother asking why. But eventually, the “why” was produced and TD made sense. Or at least I thought so.
But QM was the crusher. There was nothing to make sense of. Many, many had tried, all of which were smarter than I could ever hope to be, and all had failed. So rather than waste time thinking of why, it’s better to just crunch some more equations and find whatever interesting things we could glean.
Even today, I sometimes stare at Schrodinger’s Equation and think I can see something like a dampened spring equation. But the details pop out and crush any hope I had of making sense of it.
Beyond QM, physicists just learn to stop asking “why” and only focus on the “how”. There is a short period of time where physicists enjoy talking about possible “whys”, but in the end, they grow up and move beyond that.
2. ExPhysicsStudent
August 31, 2010 2:16 am
Thank you for the beautiful and brilliant article! It explains in a loving way why it is so hard to talk about these matters.
It reminds me of a personal experience in college taking a quantum physics course. I initially thought I wanted to be a physics major, and took the physics courses. It was partly the quantum mechanics course that taught me physics was not for me (or more accurately: I was not meant for physics). I remember being just flabbergasted by this notion of “spin”: it was not spin in the sense of a rotating object, but rather some indescribable property of a particle. No one could explain to me what precisely was meant by spin or how to visualize the spin of a particle. The attitude was, just calculate and don’t ask such meaningless questions. That philosophy took a little while for me to absorb.
3. Ellison
November 18, 2011 11:09 am
This leads me back to my comment from your previous article, that the Universe is a giant computer monitor. Imagine if one of the pixels on the monitor somehow became self-aware: how would it interpret the Universe? Can a pixel be self-aware, or is it the AI running on the computer that's driving the monitor, controlling the pixel and making it appear to be self-aware?
4. Clinton
April 1, 2013 4:17 pm
Would they look like a grid of overlapping fields, or would they sit at a distance apart from each other? Also, a field represents a particle, so does the interaction turn a field into a particle...?
http://math.stackexchange.com/questions/tagged/axiom-of-choice+functions | # Tagged Questions
2 answers
602 views
### Is there a Cantor-Schroder-Bernstein statement about surjective maps?
Let $A,B$ be two sets. The Cantor-Schroder-Bernstein theorem states that if there is an injection $f\colon A\to B$ and an injection $g\colon B\to A$, then there exists a bijection $h\colon A\to B$. I was ...
3 answers
792 views
### What is a basis for the vector space of continuous functions?
A natural vector space is the set of continuous functions on $\mathbb{R}$. Is there a nice basis for this vector space? Or is this one of those situations where we're guaranteed a basis by invoking ...
3 answers
304 views
### What is the set-theoretic definition of a function?
I'm reading through Asaf Karagila's answer to the question What is the Axiom of Choice and Axiom of Determinacy, and while reading the explanation of Bertrand Russell's analogy ("The Axiom of Choice ...
http://mathoverflow.net/questions/19303/good-example-of-a-non-continuous-function-all-of-whose-partial-derivatives-exist/19470 | ## Good example of a non-continuous function all of whose partial derivatives exist
What's a good example to illustrate the fact that a function all of whose partial derivatives exist may not be continuous?
-
I removed the tag "math-education" (and replaced it by "examples"). Remember: this whole site is math-education in the sense that people are asking math questions and hoping to learn from the answers. The tag should be saved for questions with an explicit pedagogical component. – Pete L. Clark Mar 25 2010 at 14:43
## 4 Answers
The standard example I have seen is: $f(x,y)=\frac{2xy}{x^2+y^2}$.
-
But surely that's continuous on its domain of definition. – Dyke Acland Mar 25 2010 at 14:23
Define it to be 0 at (0,0) and it's discontinuous there, although the partial derivatives exist. – Mark Meckes Mar 25 2010 at 14:25
However you define f at the origin, it will be discontinuous, since its limit along the coordinate axes is zero, whereas it is 1 along the diagonal of $\mathbb R^2$. You get a wilder example by starting with any antisymmetric function on the unit circle (say non-measurable) and extending it linearly on all (vector) lines of the plane, i.e. defining $f(ra)=rf(a)$ for $r \in \mathbb R$ and $a$ on the circle. – Georges Elencwajg Mar 25 2010 at 15:58
In George's "wilder example", further conditions on f will be needed to make the partial derivatives exist everywhere (is that what the proposer wanted?) – Bjorn Poonen Mar 25 2010 at 16:36
Bjorn is right, of course: the example was only meant to show that a function can be quite pathological and yet have directional derivatives in all directions at the origin. The construction is the source of a few amusing exercises: e.g. the extended function is continuous at the origin iff the function on the circle is bounded. On the other hand even if you start with a $C^{\infty}$ function on the circle (seen as submanifold of $\mathbb R^2$) the extended function on the plane will NOT be differentiable at the origin in general ( think bump function on the circle). – Georges Elencwajg Mar 25 2010 at 18:13
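To summarize the discussion above in formulas (with $f(0,0):=0$): both partial derivatives at the origin exist, $$f_x(0,0)=\lim_{h\to0}\frac{f(h,0)-f(0,0)}{h}=\lim_{h\to0}\frac{0}{h}=0,\qquad f_y(0,0)=0\ \text{likewise},$$ yet along the line $y=mx$, $$f(x,mx)=\frac{2mx^2}{(1+m^2)x^2}=\frac{2m}{1+m^2},$$ so the limiting value at the origin depends on the direction of approach (0 along the axes, 1 along the diagonal), and $f$ is discontinuous there.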
I feel it's more informative to have the thought process that leads to the example rather than just be told some magic formula that works, however simple that formula might be.
Here is a way of explaining it. We'll decide that our function is going to be discontinuous at (0,0). In order to ensure that the partial derivatives there are defined we'll try the simplest thing and make the function zero on the two axes. How can we make sure that it is discontinuous at (0,0)? Well, a simple way might be to make the function equal to 1 on the line x=y. (If you object that that step was unmotivated, then read on -- it will become clear that I could have chosen pretty well any function and the argument still works.) How are we going to make sure that the function has partial derivatives everywhere? We could do it by trying to give it a nice formula everywhere. We care particularly about how the function behaves when you keep x or y constant, so let's see what happens if we try to choose the nicest possible dependence on y for each fixed x. If we do that then we'll be tempted to make the function linear. Since f(x,0)=0 and f(x,x)=1, this would tell us to choose f(x,y)=y/x.
Unfortunately, that doesn't work: we also need the function to tend to zero for fixed y as x tends to zero. But at least this has given us the idea that we would like to write f as a quotient. What properties would we need of g and h if we tried f(x,y)=g(x,y)/h(x,y)? We would want g(x,0)=g(0,y)=0, g(x,x)=h(x,x), and h(x,y) is never 0 (except that we don't mind what happens at (0,0)). We also want g and h to be nice so that the partial derivatives will obviously exist. The simplest function that vanishes only if (x,y)=(0,0) is $x^2+y^2$. The simplest function that vanishes when x=0 or y=0 but not when $x=y\ne 0$ is $xy$. Multiply that by 2 to get 1 down the line x=y and there we are.
Now suppose we had wanted the value to be, say, $e^{1/x}$ at (x,x), so that the function is wildly unbounded near (0,0). Then we could just multiply the previous function by $\exp((2/(x^2+y^2))^{1/2})$.
The main point I want to make is that we could just as easily have chosen many other functions. For example, $\sin(x)\sin(y)/(x^4+y^4)$ vanishes on the axes and clearly does not tend to zero down the line x=y (in fact it tends to infinity).
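A throwaway numeric check of the construction (my own, assuming $f(0,0)=0$):

```python
import numpy as np

def f(x, y):
    """The example above: 2xy / (x^2 + y^2) away from the origin, 0 at it."""
    r2 = x * x + y * y
    out = np.zeros_like(r2, dtype=float)
    np.divide(2 * x * y, r2, out=out, where=(r2 != 0))
    return out

t = 10.0 ** -np.arange(1, 8)   # points marching toward the origin
print(f(t, 0 * t))             # along the x-axis: all 0
print(f(t, t))                 # along y = x:      all 1
print(f(t, 2 * t))             # along y = 2x:     all 0.8 = 2*2/(1+4)
```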
-
I was travelling when this came up.
With the same results as Georges Elencwajg's comment and Marco's recent answer, take $$f(x,y) = \frac{2 x^2 y}{x^4 + y^2}$$ and set it to $0$ at the origin $(0,0).$ Along any line through the origin $x = a t, \; y = b t$ the limit is 0: for $b \neq 0$ we have $$| f(a t, b t) | \leq 2 a^2 | t / b | ,$$ while on the $x$-axis $f$ vanishes identically. However, along the parabola $y = x^2$ the value is 1, and along the parabola $y = - x^2$ the value is $-1.$
To get "directional derivative" 0 in every direction through the origin, switch to $$g(x,y) = \frac{2 x^3 y}{x^6 + y^2}$$ as $$| g(a t, b t) | \; \leq \; 2 t^2 \; | a^3 / b |$$ when $b \neq 0,$ but then $g = \pm 1$ along $y = \pm x^3$... The directional derivative generalizes to the Gateaux derivative in other settings.
-
Consider $f(x,y)=1$ on the set $\lbrace{(x,y):y=x^2}\rbrace\backslash\lbrace(0,0)\rbrace$ and $f(x,y)=0$ on any other point.
-
http://mathhelpforum.com/number-theory/197528-intermediate-value-theorem.html | # Thread:
1. ## Intermediate Value Theorem
Hey guys,
Having a little difficulty trying to understand what this question is trying to ask me.
The question says: "Suppose that f is continuous on [a,b] and f takes only rational values. What can be concluded about f?"
I know that the question is relating to the intermediate value theorem, as it is in that section of the text book, however I'm not sure what can be concluded from this.
My thinking so far has been this:
- f is on a closed, bounded interval, which implies a whole load of things such as continuity etc.
- I'm not sure if the intermediate value theorem works if the function only takes rational numbers, as the reals are dense with irrationals. But they are also dense with the rationals so maybe not?
I don't know how to go about the solution, as I'm sure that it is more than "there exists a value c such that a<c<b and f(c) is rational"
Any idea what the question might be asking for???
Thanks in advance,
Mark
2. ## Re: Intermediate Value Theorem
Originally Posted by MarkJacob
Hey guys,
Having a little difficulty trying to understand what this question is trying to ask me.
The question says: "Suppose that f is continuous on [a,b] and f takes only rational values. What can be concluded about f?"
I know that the question is relating to the intermediate value theorem, as it is in that section of the text book, however I'm not sure what can be concluded from this.
My thinking so far has been this:
- f is on a closed, bounded interval, which implies a whole load of things such as continuity etc.
- I'm not sure if the intermediate value theorem works if the function only takes rational numbers, as the reals are dense with irrationals. But they are also dense with the rationals so maybe not?
I don't know how to go about the solution, as I'm sure that it is more than "there exists a value c such that a<c<b and f(c) is rational"
Any idea what the question might be asking for???
Thanks in advance,
Mark
How could it possibly be continuous when it only takes on rational values? There are always irrationals in between any two rational numbers...
3. ## Re: Intermediate Value Theorem
Sorry, my bad.
Silly mistake.
4. ## Re: Intermediate Value Theorem
I don't see any mistake, silly or not. Obviously, there exists an irrational number between any two rationals, so if a function is continuous and takes on two different values, then it must take on all values between them, and so some irrational values.
Result: if f is continuous and takes on only rational values, then f can only have one value. What kind of function has that property?
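A sketch of that argument, for the record: if $f(p)\neq f(q)$ for some $p,q\in[a,b]$, pick an irrational $\gamma$ strictly between $f(p)$ and $f(q)$ (the irrationals are dense). By the intermediate value theorem there is a $c$ between $p$ and $q$ with $f(c)=\gamma\notin\mathbb{Q}$, contradicting the hypothesis; hence $f$ takes a single, rational, value.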
5. ## Re: Intermediate Value Theorem
Oh, this makes sense.
Are we to assume that $a \neq b$? For if $a = b$, would we be able to conclude anything about f at all? Or is there a part of the question that states this through a definition?
And does that mean that f must be of the form f(x) = a, where a is an element of the reals?
Is there a name for this kind of function?
Thanks
6. ## Re: Intermediate Value Theorem
Originally Posted by MarkJacob
Oh, this makes sense.
Are we to assume that $a \neq b$? For if $a = b$, would we be able to conclude anything about f at all? Or is there a part of the question that states this through a definition?
And does that mean that f must be of the form f(x) = a, where a is an element of the reals?
Is there a name for this kind of function?
Thanks
I think a should only be an element of the rationals. And this kind of function is called a constant function.
7. ## Re: Intermediate Value Theorem
Originally Posted by HallsofIvy
Result: if f is continuous and takes on only rational values, then f can only have one value. What kind of function has that property?
Is it $f(x)=\alpha \cdot \mathcal{X}_\mathbb{Q}(x)$ where $\mathcal{X}_\mathbb{Q}(x)=\begin{cases}1 & \text{if } x\in \mathbb{Q},\\ 0 & \text{otherwise},\end{cases}$ and $\alpha \in \mathbb{R}$?
Or is it possible at all for $f:\mathbb{Q}\to\mathbb{R}$ to be continuous over the interval $[a,b]$? There are too many points in the interval at which f is undefined, aren't there? Now I'm confused... totally.
http://physics.stackexchange.com/questions/234/how-should-a-physics-student-study-mathematics | # How should a physics student study mathematics? [closed]
Note: I will expand this question with more specific points when I have my own internet connection and more time (we're moving in, so I'm at a friend's house).
This question is broad, involved, and to some degree subjective.
(I started out as a physics-only student, but eventually decided to add a mathematics major. I am greatly interested in mathematics; the typical curriculum required for physics students is not deep or thorough enough; mathematics is more general (that means work!); and it only requires a few more classes. Naturally, I enjoy mathematics immensely.)
This question asks mainly of undergraduate-level study, but feel free to discuss graduate-level study if you like.
Please do not rush your answer or try to be comprehensive. I realize the StackOverflow model rewards quick answers, but I would rather wait for a thoughtful, thorough (on a point) answer than get a fast, cluttered one. (As you probably know, revision produces clear, useful writing; and a properly-done comprehensive answer would take more than a reasonable amount of time and effort.) If you think an overview is necessary, that is fine.
For a question this large, I think the best thing to do is focus on a specific area in each answer.
Update: To Sklivvz, Cedric, Noldorin and everyone else: I had to run off before I could finish, but I wanted to say I knew I would regret this; I was cranky and not thinking clearly, mainly from not eating enough during the day. I am sorry for my sharp responses and for not waiting for my reaction to pass. I apologize.
Re: Curricula:
Please note that I am not asking about choosing your own curriculum in college or university. I did not explicitly say that, but several people believed that was my meaning. I will ask more specific questions later, but the main idea is how a physics student should study mathematics (on his or her own, but also by choosing courses if available) to be a competent mathematician with a view to studying physics.
I merely mentioned adding a mathematics major to illustrate my conclusion that physics students need a deeper mathematical grounding than they typically receive.
And now I have to run off again.
-
"the typical curriculum required for physics students is not deep or thorough enough; mathematics is more general (that means work!); and it only requires a few more classes. " this seems a bit contradictory to me... – Cedric H. Nov 4 '10 at 20:48
@Mark C: the main problem I have with this question is that you are writing constantly things like "This question is broad, involved", "Please do not rush your answer ", "insulting", "promise" ... just ask your question and let people answer if they understand what you want. – Cedric H. Nov 4 '10 at 21:25
not all university systems allow one to choose the curricula. not all university systems have undergraduate/graduate separation. i don't even know exactly what "adding a major" means. this said, the question has merit, and can be saved. note that university specific stuff is also sort of off topic. the basic question which has merit is: what approach/topics in maths are useful to study physics (or mathematical physics)? the rest of the question basically confuses me... i don't know how your university works (nor should i care). – Sklivvz♦ Nov 4 '10 at 21:57
Also the "Please do not rush your answer or try to be comprehensive." is flamebait, or at least meta- material!? – Sklivvz♦ Nov 4 '10 at 21:59
@Mark C - whatever your interpretation of the stackoverflow system is "too localized" is actually a valid reason for closing. Note that i just voted to close, and not closed. very big difference there. finally, i think i gave an explanation. you didn't like it, but that doesn't mean i didn't give one. – Sklivvz♦ Nov 4 '10 at 22:01
## closed as not constructive by David Zaslavsky♦ Apr 14 at 3:31
## 9 Answers
I feel very strongly about this question. I believe that for an experimentalist, it's fine to not go very deeply into advanced mathematics whatsoever. Mostly experimentalists need to understand one particular experiment at a time extremely well, and there are so many skills an experimentalist needs to focus all of their time/energy on developing as students.
I believe experimentalists should derive their physical intuition from lots of time spent in the lab, whereas theoreticians should develop their physical intuition from a sense of "mathematical beauty" in the spirit of Dirac.
Theoreticians, in my opinion, should study mathematics like math majors, almost forgetting about physics for a time; this is the point I feel so strongly about. The thing is that math is such a big subject, and once you have the road map of what is important for theoretical physics, it really takes years of study to learn all the mathematics. I think it's so bad how many physics professors, who are themselves experimentalists, teach math improperly to young theoreticians. I personally had to unlearn many of the things I thought I knew about math, once I took a course based on Rudin's "Principles of Mathematical Analysis".
Obviously this is not a comprehensive list, and my aim is simply to give you a pointer to the basic material you need to cover early on. As you progress you may become more specialised and your field may have particular mathematical techniques and formalisms which are particular to it.
Much of the mathematics used in physics is continuous. This ranges from the elementary calculus used to solve simple Newtonian systems to the differential geometry used in general relativity. With this in mind, it is generally necessary to cover calculus in depth, real and complex analysis, Fourier analysis, etc.
Additionally, many physical transformations have very nice group structures, and so covering basic group theory is a very good idea.
Lastly, strong linear algebra is a prerequisite for many of the techniques used in the other areas I have mentioned above, and is also extremely important in the matrix formulation of quantum mechanics. Finding ground states of discrete systems (for example spin networks) means finding the minimum eigenvalue and corresponding eigenvector of the Hamiltonian.
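To make that last point concrete, here is a minimal sketch (my own illustration, not part of the original answer) that diagonalises a small, assumed example Hamiltonian, a 3-spin transverse-field Ising chain, and reads off the ground state from the minimum eigenvalue:

```python
# Minimal sketch (assumed example): exact diagonalisation of a 3-spin
# transverse-field Ising chain, H = -sum_i Z_i Z_{i+1} - g sum_i X_i.
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli X
Z = np.array([[1.0, 0.0], [0.0, -1.0]])  # Pauli Z
I = np.eye(2)

def kron_chain(ops):
    """Kronecker product of a list of single-site operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

n, g = 3, 1.0
H = np.zeros((2**n, 2**n))
for i in range(n - 1):              # nearest-neighbour ZZ couplings
    ops = [I] * n
    ops[i], ops[i + 1] = Z, Z
    H -= kron_chain(ops)
for i in range(n):                  # transverse field on each site
    ops = [I] * n
    ops[i] = X
    H -= g * kron_chain(ops)

energies, states = np.linalg.eigh(H)        # eigenvalues in ascending order
print("ground-state energy:", energies[0])  # minimum eigenvalue
print("ground state:", states[:, 0])        # corresponding eigenvector
```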
It is important when studying mathematics to do so with the following perspective
### Mathematicians Allow Useless Non-computable Fantasy Objects
Mathematicians often choose to live in a world where the axiom of choice is true for sets of size the continuum. This is idiotic for many reasons, even for them, but it is especially idiotic for physics. There are easy intuitive arguments that establish that every set has a volume, or Lebesgue measure, and they go like this:
Given any set S in a big box B, choose points randomly and consider when they land in S. In the limit of many throws, define the measure of S to be the volume of B times the fraction of points which land in S. When this works, and it always works, every set is measurable.
This definition is not allowed in mathematics, because the concept of randomly choosing a point requires taking a limit of the random process of choosing the digits at random. The limiting random process must be defined separately from the approximation processes within usual mathematics, even when the approximations almost always converge to a unique answer! The only reason for this is that there are axiom of choice constructions of non-measurable sets, so that the argument above cannot be allowed to go through. This leads to many cumbersome conventions which inhibit understanding.
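As a purely illustrative aside (mine, not the author's): the "random throws" definition above is exactly what a Monte Carlo estimate computes. A toy sketch, with the quarter disc as an assumed example set:

```python
# Toy version of the "random throws" definition of measure: estimate the
# measure of a set S inside the unit-square box B by uniform sampling.
import random

def estimate_measure(in_S, trials=10**6):
    hits = sum(in_S(random.random(), random.random()) for _ in range(trials))
    return hits / trials   # vol(B) = 1, so the hit fraction is the estimate

# Assumed example set S: the quarter disc x^2 + y^2 < 1, true measure pi/4.
print(estimate_measure(lambda u, v: u * u + v * v < 1.0))  # about 0.785
```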
If you read mathematics, keep in the back of your head that every set of real numbers is really measurable, that every ordinal is really countable (even the ones that pretend to be uncountable collapse to countable ones in actual models of set theory), and that all the fantasy results of mathematics come from mapping the real numbers to an ordinal. When you map the real numbers to an ordinal, you are pretending that some set theory model, which is secretly countable by the Skolem theorem, contains all the real numbers. This causes the set of real numbers to be secretly countable. This doesn't lead to a paradox if you don't allow yourself to choose real numbers at random, because all the real numbers you can make symbols for are countable, because there are only countably many symbols. But, if you reveal this countability by admitting a symbol which represents a one-to-one map between some ordinal and the real numbers, you get Vitali theorems about non-measurable sets. These theorems can never impact physics, because these "theorems" are false in every real interpretation, even within mathematics.
Because of this, you can basically ignore the following:
• Advanced point set topology--- the nontrivial results of point-set topology are useless, because they are often analyzing the choice structure of the continuum. The trivial results are just restating elementary continuity properties in set theoretic language. The whole field is bankrupt. The only useful thing in it is the study of topologies on discrete sets.
• Elementary measure theory: while advanced measure theory (probability) is very important, the elementary treatments of measure theory are basically concerning themselves with the fantasy that there are non-measurable sets. You should never prove a set is measurable, because all sets are measurable. Ignore this part of the book, and skip directly to the advanced parts.
### Discrete mathematics is important
This is a little difficult for physicists to understand at first, because they imagine that continuous mathematics is all that is required for physics. That's a bunch of nonsense. The real work in mathematics is in the discrete results, the continuous results are often just pale shadows of much deeper combinatorial relations.
The reason is that the continuum is defined by a limiting process, where you take some sort of discrete structure and complete it. You can take a lattice, and make it finer, or you can take the rationals and consider Dedekind cuts, or you can take decimal expansions, or Cauchy sequences, or whatever. It's always through a discrete structure which is completed.
This means that every relation on real numbers is really a relation on discrete structures which is true in the limit. For example, the solution to a differential equation
$${d^2x\over dt^2} = - x^2$$
is really an asymptotic relation for the solutions of the following discrete approximations (with time step $\epsilon$)
$$\Delta^2 X_n = -\epsilon^2 X_n^2$$
The point is, of course, that many different discrete approximations give the same exact continuum object. This is called "existence of a continuum limit" in mathematics, but in statistical physics, it's called "universality".
When studying differential equations, the discrete structures are too elementary for people to remember them. But in quantum field theory, there is no continuum definition right now. We must define the quantum field theory by some sort of lattice model explicitly (this will always be true, but in the future, people will disguise the underlying discrete structure to emphasize the universal asymptotic relations, as they do for differential equations). So keep in mind the translation between continuous and asymptotic discrete results, and that the discrete results are really the more fundamental ones.
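Here is a small sketch of the point (my own, assuming initial data $x(0)=1$, $x'(0)=0$): two different discrete rules for the equation above agree ever more closely as the step $\epsilon$ shrinks, which is the "universality" of the continuum limit:

```python
# Two different discrete rules for d^2x/dt^2 = -x^2; as eps -> 0 they
# converge to the same continuum value ("universality" of the limit).
def solve(T, eps, midpoint=False):
    x_prev = 1.0                           # x(0) = 1   (assumed data)
    x_curr = 1.0 - 0.5 * eps**2            # x(eps) from x'(0) = 0
    t = eps
    while t < T:
        f = -x_curr**2
        if midpoint:                       # a second, distinct discretisation
            f = -(0.5 * (x_curr + x_prev))**2
        x_prev, x_curr = x_curr, 2.0 * x_curr - x_prev + eps**2 * f
        t += eps
    return x_curr

for eps in (0.1, 0.01, 0.001):
    print(eps, solve(1.0, eps), solve(1.0, eps, midpoint=True))
```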
So do study, as much as possible:
• Graph theory: especially results associated with the Erdos school
• Discrete group theory: this is important too, although the advanced parts never come up.
• Combinatorics: the asymptotic results are essential.
• Probability: This is the hardest to recommend because the literature is so obfuscatory. But what can you do? You need it.
### Don't study mathematics versions of things that were first developed in physics
The mathematicians did not do a good job of translating mathematics developed in physics into mathematics. So the following fields of mathematics can be ignored:
• General relativity: Read the physicists, ignore the mathematicians. They have nothing to say.
• Stochastic processes: Read the physicists, ignore the mathematicians. They don't really understand path integrals, so they have nothing to say. The usefulness of this to finance has had a deleterious effect, where the books have become purposefully obfuscated in order to disguise elementary results. All the results are in the physics literature somewhere in most useful form.
• Quantum fields: Read the physicists, especially Wilson, Polyakov, Parisi, and that generation; they really solved the problem. The mathematicians are useless. Connes-Kreimer are an exception to this rule, but they are bringing back to life results of Zimmermann which I don't think anybody except Zimmermann ever understood. Atiyah/Segal on topological fields is also important, and Kac might as well be a physicist.
### Physics is the science of things that are dead. No logic.
There are many results in mathematics analyzing the general nature of a computation. These computations are alive, they can be as complex as you like. But physics is interested in the dead world, things that have a simple description in terms of a small computation. Things like the solar system, or a salt-crystal.
So there is no point to studying logic/computation/set-theory in physics, you won't even use it. But I think that this is short sighted, because logic is one of the most important fields of mathematics, and it is important for its own sake. Unfortunately, the logic literature is more opaque than any other, although Wikipedia and math-overflow do help.
• Logic/computation/set-theory: You will never use it, but study it anyway.
Ohh, it was really enjoyable to read! – Martin Gales Sep 7 '11 at 5:44
-1: pointlessly dogmatic, and mostly wrong. You've missed the main point of continuous methods, which is to make things easier, not harder. Your quest for discretized solutions to everything seems to have caused the omission of Lie groups, which play a central role in understanding symmetry, but are continuous objects with very few finite subgroups. Also, you are misinterpreting the Löwenheim-Skolem theorem. – Scott Carnahan Oct 21 '11 at 6:07
@Scott: I have no "quest" for discretized solutions--- you are misinterpreting. What I said is that you have to understand continuous results as limits of discrete ones, and be conscious of the limiting process. I agree that continuous methods make things easier, in those cases where you already know the continuum structure, but people tend to believe they have exhausted the continuum, and they haven't. The renormalization process gives new continuum structures which haven't been given a continuum description yet, but their discrete description exists, and the limit is hard. – Ron Maimon Oct 21 '11 at 18:12
@Scott: I did not forget about Lie groups, its just that everyone already knows these. I tried to focus only on things that not everyone knows already. I understand the Lowenheim Skolem theorem like the back of my hand, I am not misinterpreting it. It proves that any axiomatic system has a countable model. This countable model is the real thing one studies, sorry to disagree with 90% of working mathematicians (not 90% of logicians, however). The fact that mathematicians get this wrong all the time means that it needs to be said by me. – Ron Maimon Oct 21 '11 at 18:15
@Joseph f. Johnson: Thank goodness it's finally getting downvoted, I was worried that I was preaching to the choir. – Ron Maimon Jan 15 '12 at 0:00
The question is way too broad. Different areas of physics requires different level (and area) of maths.
One general list is here: Gerard ’t Hooft, Theoretical Physics as a Challenge.
Also, one approach is to learn maths as you encounter it in physics, making sure each time that you learn a bit more than is needed solely to understand the physics at hand.
Yes, sorry, I think today I will have time to put in some more specific questions. – Mark C Nov 16 '10 at 14:21
Read The Road to Reality: a complete guide to the laws of the universe by Roger Penrose. It provides a handy companion for undergraduate / first-year graduate students of physics. The first sixteen chapters provide, in outline form, all the mathematical material needed for an undergraduate major in (specifically theoretical) physics, written by a foremost theoretical physicist; i.e. it provides the "depth" that you wouldn't otherwise find in textbooks or other standardized reading materials.
In brief, she should study it casually for pleasure and intensely as needed.
I find that many of the people that I know in scientific and technological fields who try to study everything get sidetracked and end up studying only obscure and relatively less useful topics. While it might provide for some interesting correlations, I prefer the physician's approach: "When you hear hoofbeats, look for horses, not zebras." The fact is that the bread and butter calculus, algebra, trigonometry, and geometry will get the average scientist a very long way. If you are going on to more advanced fields, differential and linear equations are also very helpful. Learn these fields well enough to use them on a regular basis and learn just enough about other branches of math to be able to spot their utility should the need arise.
PS - If you are looking for recommendations on books, my favorite is Mathematical Methods for Physics and Engineering by Riley, Hobson, and Bence.
They should study maths as they would do physics.
For the same reason a literature student needs to learn English: you can't express yourself otherwise. Long gone are the times when you could describe physical phenomena using words; Faraday did so. At that time unexplained physical phenomena were at human scale and human language was enough. Today, the frontiers of physics are far beyond meters, kilograms, amps and a few eV. By chance, we have discovered that the Universe is much weirder than we could ever have imagined, and so we resort to the only expression of absolute meaning: mathematics. I do have the urge to elaborate on why mathematics is so efficient at painting reality, but I usually get accused of Platonic extremism and, having so little time, I'll refrain.
I will give a very general and brief answer to the question, How to study maths etc.
Skip the proofs but study the definitions carefully.
Now I will add a very general remark made to me by a wise woman once: she never learned anything by reading (a paper or a book) except when she was reading it in order to solve a problem she had. But I wouldn't want you to think this implies that you should never read something except when you have a problem in mind....
Relevant to the OP is: what is the difference between studying maths the way a mathematician would and the way a physicist would? I will just give two classic quotations. Nicolas Bourbaki (and André Weil) repeated the often-said proverb:
« Since the Greeks, whoever says mathematics says proof. »
But Dirac told Harish-Chandra
« I am not interested in proofs but only in what Nature does.»
Reading theorems and definitions without proofs is like reading the plaque without looking at the sculpture. – Ron Maimon Jan 15 '12 at 0:04
Well, that is Bourbaki's point of view too. The art thief, though, should concentrate on the plaques. I do agree to the very small extent that I would say that we lesser mortals should not imitate Dirac: what worked for the man who was able to re-invent spinors and distributions on his own would not work for me. But he did not even read mathematics papers as far as I know... I suggest physicists at least read them or take the course or something. – joseph f. johnson Jan 15 '12 at 2:52
you should either imitate Dirac, or not do physics. – Ron Maimon Jan 15 '12 at 4:00
http://mathoverflow.net/questions/4246?sort=votes

Why is Milnor K-theory not ad hoc?
When Milnor introduced in "Algebraic K-Theory and Quadratic Forms" the Milnor K-groups, he said that his definition is motivated by Matsumoto's presentation of algebraic $K_2$ for a field but is in the end purely ad hoc in higher degrees. My questions are:
1. What exactly could Milnor prove with these $K$-groups? What was his motivation except for Matsumoto's theorem?
2. Why did this ad hoc definition become so important? Why is it so natural?
6 Answers
Milnor K-theory gives a way to compute étale cohomology of fields (i.e. Galois cohomology): if E is a field of characteristic different from a prime l, there is a residue map from the nth Milnor K-group of E mod l to the nth étale cohomology group of E with coefficients in the nth tensor power of the sheaf of lth roots of unity (i.e. the sheaf tensored with itself n times). There is the Bloch-Kato conjecture, which predicts that these residue maps are bijective. It happens that the case l=2 was conjectured by Milnor (up to a reformulation I guess). The Milnor conjecture has been proved by Voevodsky (it was one of the first great achievements of the homotopy theory of schemes, which he initiated with Morel during the 90's), and he got his Fields medal in 2002 for this. Now Rost and Voevodsky claim they have a proof of the full Bloch-Kato conjecture for any prime l (which should appear some day, thanks to the work of quite a few people, among which Charles Weibel is not the least). Note also that the Bloch-Kato conjecture makes sense for l=p=char(E), but then you have to replace étale cohomology by de Rham-Witt cohomology (and this has also been proved by Bloch and Kato). Suslin and Voevodsky also proved that the Bloch-Kato conjecture implies the Beilinson-Lichtenbaum conjecture, which predicts the precise relationship between torsion motivic cohomology of varieties and torsion étale cohomology.
Milnor K-theory is related to motivic cohomology (i.e. higher Chow groups) in degree n and weight n, H^n(X,Z(n)): for X=Spec(E), H^n(X,Z(n)) is the nth Milnor K-group of E. This is how homotopy theory of schemes enters the picture (one of the main features introduced by Voevodsky to study motivic cohomology with finite coefficients is the theory of motivic Steenrod operations). On the other hand, Rost studied Milnor K-theory for itself: among a lot of other things, he proved that, if you consider it as a functor from the category of fields, with all its extra structures (residue maps interacting well), you can reconstruct higher Chow groups of schemes (over a field), via some Gersten complex.
Milnor K-theory is also a crucial ingredient in Kato's higher class field theory.
Did Milnor really know about this (in form of conjectures of course)? Why exactly are the cohomology groups of the group of l-th roots of unity interesting? And is the Bloch-Kato conjecture now proven in a single paper or is this a whole collection of papers? (Sorry, several questions at once...) – S1 Nov 5 2009 at 15:49
(of course I don't want to take a look at the proof if there is one) – S1 Nov 5 2009 at 15:59
I have to agree with Arminius, at least (1) was quite a different question. – Ilya Nikokoshev Nov 5 2009 at 16:01
Milnor explicitly formulated his conjecture using Grothendieck-Witt rings, and knew the relation with Galois cohomology (which was already rather well developed, after Tate and Bass): this is the subject of his paper "Algebraic K-theory and quadratic forms", Invent. Math. 9 (1970). The Bloch-Kato conjecture is proven in a whole collection of papers which you can find at the K-theory archives. I heard that Weibel planned to write a book on this. The proof of the Milnor conjecture (i.e. the case l=2) is published there: numdam.org/numdam-bin/… – Denis-Charles Cisinski Nov 5 2009 at 16:17
You can have look at the end of Milnor's paper. Otherwise, there are some lectures of Weibel here: math.rutgers.edu/~weibel/Kbook/trieste-8.dvi – Denis-Charles Cisinski Nov 5 2009 at 18:27
Also, about the motivations of Milnor, it is quite natural to try to understand the Witt ring of a field, classifying quadratic forms over this field (in char not 2). This ring has a natural filtration by the fundamental ideal, and it is natural to try to understand the associated graded ring, which is simpler than the Witt ring. One approach is to understand it by generators and relations. The relations defining Milnor's K-theory are elementary ones obviously satisfied in the graded Witt ring, and there are very few of them. Milnor's conjecture (now a theorem) says that Milnor K-theory mod 2 is isomorphic to that graded Witt ring. It is equivalent to the formulation with étale cohomology, but probably, an important part of Milnor's original motivation was about quadratic forms, as one can see in his original paper.
It is quite surprising that, a posteriori, this simple K-theory appears as such a fundamental object in intersection theory. There is a nice (and seminal) paper by Totaro explaining in rather elementary and geometric terms the simplest case of the connexion between Milnor K-theory and higher Chow groups, mentioned by Denis-Charles Cisinski.
Milnor K-theory is the Simplest Part of Algebraic K-Theory, K-theory 6, 177-189, 1992
To help answer Question 1, Milnor proved a local-global theorem for Witt rings of global fields. Recall that the Grothendieck-Witt ring $\widehat{W}(k)$ of a field $k$ is the ring obtained by starting with the free abelian group on isomorphism classes of quadratic modules and modding out by the ideal generated by symbols of the form $[M]+[N]-[M']-[N']$ whenever $M\oplus N\simeq M'\oplus N'$. The multiplication comes from the tensor product of quadratic modules. There is a special quadratic module $H$ given by the form $x^2-y^2$. This is the hyperbolic module. The Witt ring $W(k)$ of a field $k$ is the quotient of $\widehat{W}(k)$ by the ideal generated by $[H]$.
Now, the main theorem of Milnor's paper is that there is a split exact sequence $$0\rightarrow W(k)\rightarrow W(k(t))\rightarrow \oplus_\pi W(\overline{k(t)}_\pi)\rightarrow 0,$$ where $\pi$ runs over all irreducible monic polynomials in $k[t]$, and $\overline{k(t)}_\pi$ denotes the residue field of the completion of $k(t)$ at $\pi$.
The morphisms $W(k(t))\rightarrow W(\overline{k(t)}_\pi)$ come from, first, the map $W(k(t))\rightarrow W(k(t)_\pi)$. Then, there is a map $W(k(t)_\pi)\rightarrow W(\overline{k(t)}_\pi)$ that sends the quadratic module $u\pi x^2$ to $ux^2$, where $u$ is any unit of the local field.
Interestingly, Milnor $K$-theory is not used in the proof. However, the proof for Witt rings closely models the proof of a similar fact for Milnor $K$-theory: the sequence $$0\rightarrow K_n^M(k)\rightarrow K_n^M(k(t))\rightarrow\oplus_\pi K_{n-1}^M(\overline{k(t)}_\pi)\rightarrow 0.$$
The important new perspective is the formal symbolic perspective, which was already existent for lower $K$-groups, but is very fruitful for studying the Witt ring as well.
Help, I'm not sure why this is messed up. It looks fine in math preview. – Benjamin Antieau Nov 15 2009 at 16:25
Sorry about that. The problem is that Markdown gets its paws on the text before jsMath does, and it sometimes converts underscores to italics. When jsMath gets the text, it gets really confused by the italics. Until SE is out of beta, there isn't much we can do about this problem. Until then, a hacky workaround is to escape troublesome underscores with a backslash (write `\_` instead of `_`) so that Markdown leaves them alone for jsMath to process. – Anton Geraschenko♦ Nov 15 2009 at 18:55
Thanks for the fix. – Benjamin Antieau Nov 15 2009 at 19:34
To see a few places where $K_2$ shows up, consult arXiv:math/0311099v4.
Thank you! – S1 Dec 30 2009 at 22:11
As already mentioned above by Denis-Charles Cisinski, Rost has shown (see "Chow Groups with Coefficients") that some version of higher Chow groups can be constructed via Milnor K groups.
In fact, Gillet in his survey "K Theory and Intersection Theory" (googleable, I believe originally in the K-Theory Handbook) explains on page 24 and most importantly page 25 (middle) how one may even motivate the defining relations of Milnor K (i.e. the Steinberg relation) by intersection-theoretic ideas. Whether you find this explanation natural or not is your free choice, but there is some beauty in it.
Another application of Milnor K-groups:
1. The following are equivalent:
   - $\{a_1, \ldots, a_n\} = 0 \in K^M_n(K)/2$
   - $\langle\kern-0.2em\langle a_1, \ldots, a_n\rangle\kern-0.2em\rangle$ (Pfister form) is totally hyperbolic
   - $\langle\kern-0.2em\langle a_1, \ldots, a_n\rangle\kern-0.2em\rangle$ is isotropic
   - $a_n$ is represented by $\langle\kern-0.2em\langle a_1, \ldots, a_{n-1}\rangle\kern-0.2em\rangle$
2. Higher local class field theory: the class formation of an $n$-dimensional local field is $K^M_n(K)$. http://www.emis.de/journals/GT/ftp/main/m3/
http://mathhelpforum.com/advanced-algebra/192327-find-all-inner-products-such-x-tx-0-a-print.html

# Find all "Inner Products" such that <x, Tx> = 0.
• November 20th 2011, 10:57 AM
TaylorM0192
Find all "Inner Products" such that <x, Tx> = 0.
Hello,
I came across this problem while preparing for one of my midterms, and it irks me that I still can't find a solution.
Given the linear operator T: R^2 -> R^2 that rotates by pi/2 (i.e. T(x, y) = (-y, x)), find all inner products < , > such that for all vectors x in R^2, <x, Tx> = 0.
It is obvious that the standard inner (dot) product is one such inner product. A few others could certainly be imagined (i.e. scalar multiples of the dot product). But we are, of course, asked to find all possible cases.
I know that all inner products on finite dimensional inner product spaces are represented by a Hermitian matrix G (G = G*) which satisfies the additional condition [x]*G[x] > 0 for all nonzero vectors x in V with respect to some basis.
Conversely, when we have such a G satisfying the above, it always represents an inner product on V with respect to a certain basis B.
The inner product is of course given as <x, y> = [y]*[G][x], where [y]* is the conjugate transpose of the coordinate vector of y represented in the basis B, likewise for [x] without the conjugation/transpose.
In the question, the conjugation can be removed, so we have <x, y> = [y]^t G[x].
The condition imposed on the inner product is <x, Tx> = [Tx]^t G[x] = 0.
Thus, it seems to be, that to solve the problem we must find all possible G which satisfies this equation.
But the problem I face is that even if I was able to use this equation to find such G, how do I account for the representation of x and Tx in the corresponding basis that G represents an inner product? Needless to say, my attempts down this path of reasoning haven't led me to a solution.
If anyone could help extend this process to get the solution, or propose a different approach, I would appreciate it!
• November 20th 2011, 02:02 PM
Jose27
Re: Find all "Inner Products" such that <x, Tx> = 0.
For a positive (symmetric) operator $A$ on $V$, setting $\langle x, y \rangle_1 := \langle Ax, y \rangle$ characterizes all inner products as $A$ varies. Your hypothesis says $\langle Ax, Tx \rangle =0$ for all $x$, which means that $Ax$ must be orthogonal to the orthogonal complement of $x$, and since you're in two dimensions this is just $Ax=\lambda_x x$. By linearity of $A$ it's not difficult to see that $\lambda_x=\lambda_y$ for all $x,y\in \mathbb{R}^2$. Since $A$ must be positive, this means $\lambda >0$, and so $A=\lambda I$ and all inner products are multiples of the usual one.
• November 20th 2011, 02:21 PM
Deveno
Re: Find all "Inner Products" such that <x, Tx> = 0.
since we are dealing with R^2, G is a symmetric matrix. moreover, G must be positive-definite, that is, if (x,y) is not (0,0), and G =
[a b]
[b c],
$\begin{bmatrix}x&y\end{bmatrix} \begin{bmatrix}a&b\\b&c\end{bmatrix} \begin{bmatrix}x\\y\end{bmatrix} > 0$
so $ax^2 + 2bxy + cy^2 > 0$
since (x,0) and (0,y) are not (0,0) for x,y ≠ 0, we must have a,c > 0.
moreover, by "completing the squares" we see that:
$ax^2 + 2bxy + cy^2 = a\left(x+\frac{b}{a}y\right)^2 + \left(\frac{ac - b^2}{a}\right)y^2$,
so we must also have, in addition to a,c > 0, that $|b| < \sqrt{ac}$.
now, in addition, we require that <x,Tx> = 0, that is:
$x^TGTx = 0$ so that:
$(c - a)xy + b(x^2 - y^2) = 0$ for all x,y in R. if we choose x = 1, y = 0, we see that b = 0.
if we choose x = y, we see that a = c, so the only possible candidates are matrices of the form aI, for a > 0, that is to say: POSITIVE scalar multiples of the usual inner product.
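A quick symbolic check of the computation above (my verification, not part of the original post), using SymPy:

```python
# Verify: with G = [[a, b], [b, c]] and T the rotation by pi/2,
# x^T G (Tx) = (c - a)*x*y + b*(x**2 - y**2), forcing b = 0 and a = c.
from sympy import symbols, Matrix, expand

a, b, c, x, y = symbols('a b c x y', real=True)
G = Matrix([[a, b], [b, c]])
v = Matrix([x, y])
Tv = Matrix([-y, x])              # T(x, y) = (-y, x)
print(expand((v.T * G * Tv)[0]))  # equals (c - a)*x*y + b*(x**2 - y**2)
```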
• November 20th 2011, 07:55 PM
TaylorM0192
Re: Find all "Inner Products" such that <x, Tx> = 0.
http://mathoverflow.net/revisions/116742/list

## Return to Answer
Here's one idea. For every permutation $\pi$ of length $n$, there are $n^2+1$ permutations of length $n+1$ containing $\pi$. However, once you look at permutations of length $n+2$, this quantity depends on $\pi$. Ray and West gave a proof that for $\pi$ of length $n$ the number of permutations of length $n+2$ containing $\pi$ is $$(n^4+2n^3+n^2+4n+4-2j)/2,$$ where $0\le j\le k-1$ depends on $\pi$. Perhaps you could give a description of this statistic in terms of patterns of $\pi$?
References and a bit more discussion can be found in this paper: http://www.math.ufl.edu/~vatter/publications/pp2007-problems/
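These counts are small enough to verify by brute force. A quick script (my own, not from the answer) checks the $n^2+1$ count for length $n+1$ and shows the length-$(n+2)$ counts varying, here for $n=3$:

```python
# Brute-force pattern-containment counts for small n.
from itertools import combinations, permutations

def contains(perm, patt):
    """True if perm contains patt as a classical pattern."""
    k = len(patt)
    return any(
        all((sub[i] < sub[j]) == (patt[i] < patt[j])
            for i in range(k) for j in range(i + 1, k))
        for sub in ([perm[i] for i in idx]
                    for idx in combinations(range(len(perm)), k)))

def count_containing(patt, m):
    return sum(contains(p, patt) for p in permutations(range(m)))

n = 3
for patt in permutations(range(n)):
    print(patt, count_containing(patt, n + 1), count_containing(patt, n + 2))
# Every length-4 count equals n^2 + 1 = 10; the length-5 counts vary with patt.
```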
http://math.stackexchange.com/questions/110191/stopping-time-on-wiener-process

# Stopping time on Wiener Process
Let $W_t$ be a Wiener process and for $a\geq0$
$$\tau_a:=\inf \left\{ t\geq0: |W_t|=\sqrt{at+7} \right\}.$$
Is $\tau_a<\infty$ almost everywhere? What about $E(\tau_a)$ then?
By the law of the iterated logarithm, you should have $\tau_a<\infty$. – ShawnD Feb 17 '12 at 0:45
So I could use $\limsup_{t\rightarrow \infty} \frac{W_t}{\sqrt{2t\ln(\ln t)}}=1$ almost surely. But still it is $\sqrt{at+7} \geq \sqrt{2t\ln(\ln t)}$, for $a>2$, isn't it? – user25070 Feb 18 '12 at 12:39
Got something out of the answer? – Did Mar 8 '12 at 8:02
## 1 Answer
As explained by Shawn, the law of the iterated logarithm shows that $\tau_a$ is almost surely finite.
Since $\tau_a=\inf\{t\geqslant0\mid W_t^2=at+7\}$ and $\tau_a$ is almost surely finite, $W_{\tau_a}^2=a\tau_a+7$. Likewise, for every $t\geqslant0$, $\tau_a\wedge t\leqslant\tau_a$ almost surely hence $W_{\tau_a\wedge t}^2\leqslant a(\tau_a\wedge t)+7$ almost surely and in particular $\mathrm E(W_{\tau_a\wedge t}^2)\leqslant a\mathrm E(\tau_a\wedge t)+7$. On the other hand, $(W_t^2-t)_{t\geqslant0}$ is a martingale hence $\mathrm E(W_{\tau_a\wedge t}^2)=\mathrm E(\tau_a\wedge t)$ for every $t\geqslant0$. One sees that $(1-a)\mathrm E(\tau_a\wedge t)\leqslant7$.
If $a\lt1$, one gets $(1-a)\mathrm E(\tau_a)\leqslant7$ hence $\tau_a$ is integrable. Since $W_{\tau_a}^2=a\tau_a+7$, $W_{\tau_a}^2$ is integrable as well and $\mathrm E(W_{\tau_a}^2)=a\mathrm E(\tau_a)+7=\mathrm E(\tau_a)$, which shows that $$\mathrm E(\tau_a)=\frac{7}{1-a}.$$ If $a\geqslant1$, $\tau_a\geqslant\tau_b$ for every $b\lt1$, hence $\mathrm E(\tau_a)\geqslant\mathrm E(\tau_b)=7/(1-b)$ for every $b\lt1$, in particular $\mathrm E(\tau_a)$ is infinite.
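For what it's worth, here is a crude Monte Carlo check of $\mathrm E(\tau_a)=7/(1-a)$ (my own sketch; the Euler discretisation and the truncation both introduce a small bias):

```python
# Simulate tau_a = inf{t : |W_t| = sqrt(a*t + 7)} and compare the sample
# mean with 7/(1 - a) for a < 1.
import math
import random

def tau(a, dt=1e-3, t_max=1e4):
    t, w = 0.0, 0.0
    while abs(w) < math.sqrt(a * t + 7.0):
        w += random.gauss(0.0, math.sqrt(dt))  # Brownian increment
        t += dt
        if t > t_max:
            break                              # truncate rare long runs
    return t

a, trials = 0.5, 500
estimate = sum(tau(a) for _ in range(trials)) / trials
print(estimate, "vs predicted", 7.0 / (1.0 - a))  # both should be near 14
```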
http://cotpi.com/p/

Sunday, March 31, 2013
## Boolean financial advisors
Alex and Bob work as financial advisors for the same company. They draw equal salaries from the company. They behave well at the office. Both work on similar assignments. Each assignment requires a yes-no decision. The company uses the decisions made by them to make profits.
After the recession hit the company very badly, one of them has to be fired. Both Alex and Bob have worked on almost the same number of assignments in the last ten years. Alex has consistently made about 80% of his decisions correctly every year. Bob, on the other hand, has made only about 5% of his decisions correctly every year.
Assuming that the performances of Alex and Bob would remain the same in future, who should the company fire to maximize its profits in the years to come? Why?
[SOLVED]
Sunday, March 3, 2013
## Composite factorial plus one
How many positive integers $$n$$ are there such that $$n! + 1$$ is composite?
[SOLVED]
Sunday, January 20, 2013
## Average salary
A group of friends wants to know their average salary such that no individual salary can be deduced. There are at least three friends in the group. How can this problem be solved with a pen and paper?
[SOLVED]
Sunday, December 9, 2012
## Red and blue hats
There are 100 prisoners in a prison. The warden will set them free if they win a game involving red and blue hats. All the prisoners will be made to stand in a straight line. The warden will blindfold all the prisoners, then put either a blue hat or a red hat on each prisoner's head, and finally remove all the blindfolds. Each prisoner can then see the hats of all the prisoners in front of him but he cannot see his own hat or the hats of those behind him. If at least 99 prisoners can correctly declare the colour of his hat, the warden will set them free.
Once the game begins, each prisoner is allowed to utter "red" or "blue" only once to declare the colour of his hat. They will not be allowed to communicate in any other manner. The warden will give them one day to decide a strategy to win this game. What should their strategy be?
[SOLVED]
Sunday, November 4, 2012
## All horses are the same colour
All horses are the same colour; we can prove this by induction on the number of horses in a given set. Here's how: "If there's just one horse then it's the same colour as itself, so the basis is trivial. For the induction hypothesis, horses $$1$$ through $$n - 1$$ are the same colour, and similarly horses $$2$$ through $$n$$ are the same colour. But the middle horses, $$2$$ through $$n - 1$$, can't change colour when they're in different groups; these are horses, not chameleons. So horses $$1$$ and $$n$$ must be the same colour as well, by transitivity. Thus all $$n$$ horses are the same colour; QED." What, if anything, is wrong with this reasoning?
[SOLVED]
http://math.stackexchange.com/questions/296037/solve-recurrences-by-obtaining-a-bound-for-tn-given-that-t1-1/296139

# Solve recurrences by obtaining a θ bound for T(N) given that T(1) = θ(1)
$T(N) = N + T(N-3)$
This is what I got so far
$$\begin{align}&= T(N-6) + (N-3)+N\\ &= T(N-9) + (N-6) + (N-3)+N \\ &= T(N-12) + (N-9) + (N-6) + (N-3)+ N\end{align}$$
I think I should use $(n^2 + n) / 2$.
I'm not sure if I'm doing it right or not!
Thanks :)
$N,N-3,N-6,N-9,\ldots$ form an arithmetic progression. Use the formula for the sum of an AP – dexter04 Feb 6 at 5:12
## 1 Answer
Note: we generally express functions of integer arguments with subscripts, i.e. $T_n$ rather than $T(n)$.
This is a constant-coefficient difference equation. To specify the solution, you need three initial conditions, say, $T_1$, $T_2$, and $T_3$. In general, the solution takes the form
$$T_n = T_n^{(H)} + T_n^{(I)}$$
where $T_n^{(H)}$ is a homogeneous solution satisfying
$$T_n^{(H)} - T_{n-3}^{(H)} = 0$$
and the initial conditions, and $T_n^{(I)}$ is an inhomogeneous solution satisfying
$$T_n^{(I)} - T_{n-3}^{(I)} = n$$
The homogeneous piece takes the form $a r^n$ for some (potentially complex) value of $r$. When we plug this into the homogeneous equation, we get
$$r^3-1=0$$
which has solutions $r=1$, $r=\omega = e^{i 2 \pi/3}$, and $r=\omega^2 = e^{i 4 \pi/3}$. The homgeneous solution is then a linear combination of the solutions corresponding to these roots, i.e.,
$$T_n^{(H)} = A + B \omega^n + C \omega^{2 n}$$
where the constants $A$, $B$, and $C$ are determined by the initial conditions.
The inhomogeneous solution is determined by the factor of $n$, and because of the nature of the equation (a difference), we guess it takes the form $T_n^{(I)} = P n + Q n^2$. Plugging this into the equation, we see that
$$3 P + (6 n-9) Q = n$$
which implies that $Q = 1/6$ and $P = 1/2$. The general solution to the equation is then
$$T_n = A + B \omega^n + C \omega^{2 n} + \frac{1}{6} n (n+3)$$
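One can sanity-check this closed form numerically (my own check; since the problem only fixes $T(1)=\theta(1)$, I assume the initial data $T_1=T_2=T_3=1$):

```python
# Compare the recursion T_n = n + T_{n-3} with the closed form
# T_n = A + B*w**n + C*w**(2n) + n(n+3)/6, fitting A, B, C to T_1..T_3.
from functools import lru_cache
import numpy as np

T0 = {1: 1.0, 2: 1.0, 3: 1.0}       # assumed initial conditions

@lru_cache(maxsize=None)
def T(n):
    return T0[n] if n <= 3 else n + T(n - 3)

w = np.exp(2j * np.pi / 3)
M = np.array([[1, w,    w**2],
              [1, w**2, w**4],
              [1, w**3, w**6]])
rhs = np.array([T(k) - k * (k + 3) / 6 for k in (1, 2, 3)], dtype=complex)
A, B, C = np.linalg.solve(M, rhs)

for n in (10, 25, 100):
    closed = (A + B * w**n + C * w**(2 * n) + n * (n + 3) / 6).real
    print(n, T(n), round(closed, 9))  # the two values agree
```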
http://physics.stackexchange.com/questions/47987/topology-and-quantum-mechanics/48023

# Topology and Quantum mechanics
I have a very simple question. Can we know about the topology of the underlying space-time manifolds from quantum mechanics calculations? If the space-time is not simply connected, how can one measure the effects of this from calculating transition amplitudes in the path integral formalism?
Great question! I wonder if you wanted to generalize it slightly by saying "if the spacetime is not simply connected", since this also covers the interesting case where the spacetime is in one piece but has holes in it? – twistor59 Dec 31 '12 at 7:50
I think there have to be two cases here: (1) non-simply-connected spacetimes without closed timelike curves, and (2) those with. Transitioning from one to the other, you get a closed light-like curve, which according to Thorne causes bad things to happen. – Peter Shor Dec 31 '12 at 15:59
There is a great question IMO on this site, I can't seem to find it, where the OP asks if one can deduce the space-time structure by computing commutators a bunch of times by making measurements in a lab. Since the commutators vanish for space-like separation, one could deduce the light-cone. – kηives Dec 31 '12 at 20:43
## 2 Answers
Here is an experimentalist's answer:
One is continually checking calculations, using the full quantum mechanical toolkit, against the data, looking for deviations as hints for new physics beyond the Standard Model. The g-2 of the muon, for example, starts showing some difference between data and theory, not yet statistically significant.
The standard theoretical toolkit assumes a simply connected space-time. Thus, any deviations from "classical" calculations are open to interpretation by new theories, which could include a non-simply-connected space-time. These new models would have to incorporate all the present agreements and justify the putative new deviations from the SM.
Here I am answering your " Can we know " question. If we see deviations from the classical calculation we can know that something has to give. If it is connectivity of space time somebody has to calculate its effect.
Let us for simplicity consider non-relativistic quantum mechanics of a particle on a spatial, $3$-dimensional, connected (possibly curved, possibly non-simply connected) manifold $M$ in the path integral formalism. Then the Feynman propagator
$$\tag{1} K({\bf q}_f, t_f; {\bf q}_i, t_i)~\sim ~ \int_{{\bf q}(t_i)={\bf q}_i}^{{\bf q}(t_f)={\bf q}_f} \!{\cal D}{\bf q}~ e^{\frac{i}{\hbar}S[q]}$$
in principle carries detailed information about the full manifold $M$, since we should sum over all histories in the path integral (1). For instance, if the manifold $M$ is not simply connected, then the overlap (1) will depend on path-histories that are not homotopic.
How the information about the manifold $M$ can be extracted from only knowing the Feynman propagator $K({\bf q}_f, t_f; {\bf q}_i, t_i)$ is an example of an inverse scattering problem.$^1$
--
$^1$ Note that these hearing-the-shape-of-a-drum-type problems may not always have unique solutions for the manifold $M$.
http://mathoverflow.net/questions/77644?sort=votes

## Are there nonobvious cases where equations have finitely many algebraic integer solutions?
Let $X$ be a scheme of finite type over $\mathbb{Z}$. Let $R$ be the ring of algebraic integers. My intuition is that $X(R)$ is practically always infinite.
More specifically, suppose that $X$ is faithfully flat over $\mathbb{Z}$, of relative dimension $\geq 1$, and the generic fiber is geometrically irreducible. Is that enough to guarantee infinitely many algebraic integer points?
This question is inspired by this one; I have no application in mind.
I think this is of relevance, though it may not answer everything. Once you have local-global, I suspect what you desire, should follow. math.uga.edu/~rr/ArithAllAlgInt.pdf In this note we will establish two theorems about Diophantine equations over the ring of all algebraic integers $O$. Let $V$ be a geometrically irreducible affine variety over $K$. The first theorem is a general local-global principle: $V$ has points over $O$ iff it has points over all the local $O_v$. The second theorem applies the first to get: Hilbert's Tenth problem has a positive solution over $O$. – Junkie Oct 10 2011 at 4:05
If $X$ is flat over $\mathbf{Z}$ and the generic fibre is geometrically irreducible, does it follow that $X\to \mathrm{Spec} \mathbf{Z}$ is surjective (since $\mathrm{Spec} \mathbf{Z}$ is a Dedekind scheme)? Moreover, as you probably know, if $X$ is proper over $\mathbf{Z}$, $X(R) = X_K(K)$, where $R=O_K$. Now, I'm pretty sure that $X_K(K)$ is infinite for some field $K$, but maybe you can find a curve $X_K/K$ and field extensions $L_1,L_2,\ldots$ of $K$ of increasing degree with $X_K(L_i)$ finite. Why would this not be possible? Anyway, it's the non-proper case which probably interests you. – Taicho Oct 10 2011 at 6:36
@Taicho. NB: If X is a (say, projective, flat model of) a projective curve of genus $g\geq 2$ over $\mathbf Q$, then $X_K(K)$ is finite for any algebraic number fields, by Faltings's Theorem (Mordell conjecture). – ACL Oct 10 2011 at 7:10
Don't you want to assume that $X$ is affine (with maybe some extra conditions) ? Note that if $X$ is proper over $\bf Z$ then $X(R)=X(\bar{\bf Q})$ and clearly $X(\bar{\bf Q})$ is infinite if the dimension of the generic fibre of $X$ is positive. – Damian Rössler Oct 10 2011 at 11:04
@Taicho: perhaps $Spec(\mathbf{Z}[X,Y]/(2XY-1))$ is an illustrative example. If I didn't make a mistake, this is flat over $\mathbf{Z}$ but not faithfully flat, and clearly has no integral points. – Kevin Buzzard Oct 10 2011 at 20:07
## 1 Answer
This is basically true, in view of a density theorem due to Robert Rumely (Arithmetic over the ring of all algebraic integers, J. reine u. angew. Math. 368, 1986, p. 127-133). It relies on Rumely's capacity theory, and his extension of the theorem of Fekete-Szegö.
For a generalization, and an algebraic proof, see also Laurent Moret-Bailly, Groupes de Picard et problèmes de Skolem. II. Annales scientifiques de l'École Normale Supérieure, Sér. 4, 22 no. 2 (1989), p. 181-194. (Numdam, http://www.numdam.org/item?id=ASENS_1989_4_22_2_181_0)
The hypothesis of Moret-Bailly's Theorem is that $X$ be irreducible, surjective and of positive relative dimension over ${\rm Spec}\mathbf Z$, and that its generic fiber be geometrically irreducible. Then, he proves that $X$ has $\overline{\mathbf Z}$-points which can be chosen arbitrarily close to a given $p$-adic point (and even more...).
Antoine, I think $X$ should also be quasi-projective. – Qing Liu Oct 10 2011 at 12:00
No, this is not a necessary hypothesis in Moret-Bailly's Theorem. In fact, he later proved similar results for Artin stacks. (see Problèmes de Skolem sur les champs algébriques, Compositio Math., 125, 1-30 (2001). – ACL Oct 10 2011 at 13:59
http://math.stackexchange.com/questions/67328/about-cyclotomic-extensions-of-p-adic-fields/67352

# About cyclotomic extensions of $p$-adic fields
I've been working on the problem of finding the maximal abelian extension of $\mathbb{Q}_5$ that is killed by $5$. In other words, find the abelian extension of $\mathbb{Q}_5$ with Galois group isomorphic to $\mathbb{Q}_5^\times/\mathbb{Q}_5^{\times 5}\simeq C_5\times C_5$. Now
$$(\mathbb{Q}_5^\times :\mathbb{Q}_5^{\times 5})=25$$
so essentially it's enough to find two disjoint $C_5$ extensions of $\mathbb{Q}_5$ and take their compositum. $C_5$ extensions don't seem too easy to construct, so by the local Kronecker-Weber theorem one could just look for cyclotomic extensions of degree divisible by $5$ and try to identify a subextension of degree $5$.
I've tried to find a good source about cyclotomic extensions of $p$-adics, but this seems to be a topic missing from all field theory books. Books on algebraic number theory seem to omit this topic too and our graduate algebra class didn't cover it either.
Now my intuition would guide me as follows. Working in $\mathbb{Q}$ one would immediately find $\mathbb{Q}(\zeta_{11}+\zeta_{11}^{-1})$ as a cyclic degree $5$ extension. We also have that $[\mathbb{Q}(\zeta_{25}):\mathbb{Q}]=20$, so that $[\mathbb{Q}(\zeta_{25}+\zeta_{25}^{-1}):\mathbb{Q}]=10$ and with $(\zeta_{25}+\zeta_{25}^{-1})^2=\zeta_{25}^2+2+\zeta_{25}^{-2}$ one could expect that setting $\alpha=\zeta_{25}^2+\zeta_{25}^{-2}$, we would have
$$[\mathbb{Q}(\zeta_{25}+\zeta_{25}^{-1}):\mathbb{Q}(\alpha)]=2\Rightarrow [\mathbb{Q}(\alpha):\mathbb{Q}]=5$$
(Note, I haven't actually checked if this first extension has degree $2$)
Finally, $\mathbb{Q}(\zeta_{25})\cap \mathbb{Q}(\zeta_{11})=\mathbb{Q}$, so we would have that
$$\mathbb{Q}(\zeta_{11}+\zeta_{11}^{-1},\alpha)/\mathbb{Q}$$
is a $C_5\times C_5$ extension of $\mathbb{Q}$. My question is then that does this work if we replace $\mathbb{Q}$ with $\mathbb{Q}_5$? More specifically:
1. What would the Euler totient functions look like if defined as the order of the $n^\textrm{th}$ cyclotomic extension of $\mathbb{Q}_p$? If $p-1\mid n$, it would at least have to be different. Is $[\mathbb{Q}_5(\zeta_{25}):\mathbb{Q}_5]=[\mathbb{Q}(\zeta_{25}):\mathbb{Q}]$?
2. Can we still expect to have e.g. $\mathbb{Q}_p(\zeta_n)\cap \mathbb{Q}_p(\zeta_m)=\mathbb{Q}_p(\zeta_{\gcd(n,m)})$? What about other local fields of characteristic $0$?
I haven't thought about these really at all as I would rather just find a good reference for this topic.
## 2 Answers
It sounds as if you are a graduate student trying to learn some algebraic number theory. (I'm sorry if I've guessed wrong on this point.) In this case I would encourage you to try to solve this question yourself, rather than look for a reference.
One hint is to think about ramification:
You can make one (and only one) $C_5$-extension that is totally unramified. Such extensions are always cyclotomic extensions. (They are given by extensions of the corresponding residue fields, which are for finite fields are always cyclotomic.)
You can also find a $C_5$-extension which is totally ramified. This can also be taken to be cyclotomic. Which cyclotomic extensions will be totally ramified at $5$?
Thanks, you guessed right I'm a grad student. I did actually construct the unramified extension e.g. take a root of X^5-X+1 (I used the functorial correspondence with separable extensions of the residue field). I just hoped that cyclotomic fields would be easier to work with as that is the canonical thing to do over $\mathbb{Q}$. – Edvard F Sep 25 '11 at 14:21
@Edvard F: Dear Edvard, Using cyclotomic extensions is sensible. As I wrote in my answer, unramified extensions are cyclotomic (e.g. the degree $5$ extension of $\mathbb{F}_5$ is obtained by adjoining a $3124$th root of $1$). In general, as Ted wrote in his answer, unramified extensions are obtained by adjoining $\zeta_n$, for some appropriate choice of $n$ prime to $p$. To understand the extensions obtained by adjoining $\zeta_{p^k}$, you can use the Eisenstein arguments that Ted mentions, or ramification arguments. The latter are more general, flexible, and powerful, so if you are ... – Matt E Sep 25 '11 at 17:45
... interested in learning algebraic number theory, I recommend that you learn them. Regards, – Matt E Sep 25 '11 at 17:45
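Following up the comment thread: a quick SymPy check (my own, not from the answers) that $X^5-X+1$ is indeed irreducible over $\mathbb{F}_5$, so it cuts out the unramified degree-$5$ extension:

```python
# x^5 - x + 1 = x^p - x - a with a = -1 != 0 in F_5 (Artin-Schreier form),
# hence irreducible over F_5; SymPy confirms.
from sympy import Poly, factor, symbols

x = symbols('x')
print(Poly(x**5 - x + 1, x, modulus=5).is_irreducible)  # True
print(factor(x**5 - x + 1, modulus=5))   # a single irreducible factor
```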
Books on algebraic number theory should have this.
For $n$ not divisible by $p$, the Galois group of $\mathbb{Q}_p(\zeta_n)$ over $\mathbb{Q}_p$ is the same as the Galois group of $\mathbb{F}_p(\zeta_n)$ over $\mathbb{F}_p$, basically because of Hensel's Lemma. But finite extensions of finite fields are cyclic, so it's easy to calculate the latter Galois group: it's cyclic of degree equal to the multiplicative order of $p$ modulo $n$. The extension is also unramified.
For $n=p^k$, the Galois group of $\mathbb{Q}_p(\zeta_n)$ over $\mathbb{Q}_p$ is $(\mathbb{Z}/p^k \mathbb{Z})^{\times}$ , same as over $\mathbb{Q}$. For $k=1$, you can see directly that the cyclotomic polynomial $(x^p-1)/(x-1)$ is irreducible because it becomes an Eisenstein polynomial upon making the change of variable $x \mapsto x+1$. [Edit: This substitution doesn't always work for higher $k$; I'll have to think of another argument here.] The extension is totally ramified.
For general $n$, combine the previous 2 cases.
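As a worked instance of the above recipe, applied back to the original question: for $p=5$ and $n=25=5^2$ the extension $\mathbb{Q}_5(\zeta_{25})/\mathbb{Q}_5$ is totally ramified with Galois group $(\mathbb{Z}/25\mathbb{Z})^{\times}$, so $[\mathbb{Q}_5(\zeta_{25}):\mathbb{Q}_5]=\varphi(25)=20=[\mathbb{Q}(\zeta_{25}):\mathbb{Q}]$. For $n=11$, the multiplicative order of $5$ modulo $11$ is $5$ (since $5^5=3125=284\cdot 11+1$), so $[\mathbb{Q}_5(\zeta_{11}):\mathbb{Q}_5]=5$, strictly smaller than $[\mathbb{Q}(\zeta_{11}):\mathbb{Q}]=\varphi(11)=10$. So the "local totient" of question 1 agrees with $\varphi$ for $n$ a power of $p$, but can shrink for $n$ prime to $p$.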
Thanks. The $n=p^k$ case was what caused me a headache, so it seems like I need to think about it. I also didn't find explicit mention of cyclotomic extensions of $p$-adics in Lang's or Neukirch's Algebraic Number Theory. I might have to look at Serre's Local Fields. – Edvard F Sep 25 '11 at 15:27
http://math.stackexchange.com/questions/109273/demonstration-of-a-divisibility-rule | # Demonstration of a divisibility rule
A friend of mine who's studying mathematics challenged me to demonstrate that:
For given integer numbers $n$ and $m$, we can say
$$\left(\prod_{i=n}^m i\right)/{(m-n)!} =Z,$$
where $Z$ is some integer. In other words, the product $n(n+1)(n+2)\cdots m$ is divisible by the factorial of the difference.
That's not what your formula says, since there is a separate factor $(m-n)!$ for every choice of $i$. For instance for $n=1$ and $m=3$ your product is $\frac1{2!}\times\frac2{2!}\times\frac3{2!}=\frac68=\frac34$, which is not an integer. I'll now make the formula match your words. Oops, JavaMan already did that. – Marc van Leeuwen Feb 14 '12 at 14:20
@Marc: I incorrectly edited the post. That mistake belongs to me. I have since fixed the post. – JavaMan Feb 14 '12 at 14:22
So your response to his/her challenge was to ask someone else to do it? – Cam McLeman Feb 14 '12 at 14:59
I looked at some divisibility rules on Wikipedia and I got demotivated! – Tom Dwan Feb 14 '12 at 15:37
## 1 Answer
Since the quantity
$${m \choose n} = \frac{m!}{n!(m-n)!} = \frac{m(m-1)\dots (n+2)(n+1)}{(m-n)!}$$
is always an integer, it follows that $n$ times that quantity, which is exactly $\frac{n(n+1)\cdots m}{(m-n)!}$, is also an integer.
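For instance, a quick check with $n=3$ and $m=7$: the product is $3\cdot 4\cdot 5\cdot 6\cdot 7 = 2520$ and $(m-n)! = 4! = 24$, so the quotient is $2520/24 = 105 = 3\binom{7}{3}$, an integer as claimed.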
http://www.code-words.com/ | CodeWords
Code, in words.
Wednesday, October 26, 2011
Transition to New Site
This blog is now being retired in favor of the notes on my new site, which has an explanation of the transition. This blog will remain but will no longer be updated.
Wednesday, October 5, 2011
Steve Jobs, Rest in Peace
Today a great entrepreneur, one of the great American capitalists, Steve Jobs, passed away at the rather young age of 56. As others have said, we are all a bit poorer for it.
I didn't always agree with his style, I wasn't always a fan of his company's products, and I certainly didn't like the personality cult that surrounded him, but I have great respect for the man who believed in what he did, thought big, sought to empower people, and did it with style and aesthetics. He is an inspiration to us all.
Labels: apple, steve jobs
Monday, August 29, 2011
Finding Files with PowerShell, Part 2
This version (see Finding Files with PowerShell) gives the full path of the items:
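The embedded snippet has not survived in this copy of the post; a likely reconstruction (the file pattern is only an example) pipes each match through `ForEach-Object` to emit its `FullName`:

```powershell
# Like the previous version, but print the full path of each matching file
Get-ChildItem -Path . -Recurse -Filter *.log | ForEach-Object { $_.FullName }
```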
Labels: command line, powershell, windows
Tuesday, August 16, 2011
Finding Files with PowerShell
I miss the UNIX find command in Windows. I could get it with Cygwin or GNUWin32, but that requires littering my machine with extra stuff. So, PowerShell to the rescue:
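The embedded snippet has not survived in this copy of the post; a minimal PowerShell equivalent of `find . -name '*.log'` might look like this (the pattern is only an example):

```powershell
# Recursively list files under the current directory whose names match a pattern
Get-ChildItem -Path . -Recurse -Filter *.log
```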
Labels: command line, powershell, windows
Friday, July 15, 2011
Reddit's Unique Web Server
Reddit has quite a unique web server.
Labels: fun, humor, reddit, web
Monday, June 20, 2011
How Likely Are You to Be Dealt a Royal Flush?
In the game of poker, you have a royal flush when you have a 10, J, Q, K, and A all of the same suit. Assuming a uniform random distribution of cards in the deck, what is the probability of being dealt a royal flush?
A royal flush consists of a certain five cards of the same suit. Assume we are dealt one of these cards. Then the suit is determined, and we now need the remaining four cards of that suit. So, let's take the probability of being dealt one of the certain five cards (10, J, Q, K, A) of any suit. Since there are five such cards per suit, and there are four suits (the set of suits is $$\{ \diamondsuit, \heartsuit, \clubsuit, \spadesuit \}$$), then we can get any one of those cards to start building a royal flush. Therefore, the probability of getting a starting card is $P(\text{starting card}) = \frac{20}{52}.$ Once we have a starting card of a certain suit, our suit is restricted. That is, we may only get the remaining four cards from the same suit as our starting card. Then the probability of getting one of the four cards from the same suit is $P(\text{one of the four from the same suit}) = \frac{4}{51}.$ Likewise, after we get the second card, there are three left to get, so the probabilities are $$3/50$$, then $$2/49$$, and then $$1/48$$. What, then, is the probability that we will get all of the necessary cards? It is the product of all these probabilities: $P(\text{all}) = \frac{20}{52} \cdot \frac{4}{51} \cdot \frac{3}{50} \cdot \frac{2}{49} \cdot \frac{1}{48} = \frac{1}{649\,740} = 1.539\ldots \times 10^{-6} = 0.000\,153\,9\ldots \%.$ Not very good odds.
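As a cross-check, one can also count hands instead of ordered draws: exactly $$4$$ of the $$\binom{52}{5}$$ equally likely five-card hands are royal flushes (one per suit), so $P(\text{royal flush}) = \frac{4}{\binom{52}{5}} = \frac{4}{2\,598\,960} = \frac{1}{649\,740},$ in agreement with the product above.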
Labels: game theory, math, probability, statistics
Saturday, June 18, 2011
Calculating the Number of Trailing Zeros in a Factorial
The factorial function, denoted $$n!$$, where $$n \in \mathbb{Z}^*$$, is defined as follows: $n! = \begin{cases} 1 & \text{if $n = 0$,} \\ n(n - 1)! & \text{otherwise}. \end{cases}$ In the case of $$n = 10$$, $$10! = 3\,628\,800$$. Note that that value has two trailing zeros. When $$n = 20$$, $$20! = 2\,432\,902\,008\,176\,640\,000$$ and has four trailing zeros. Can we devise an algorithm to determine how many trailing zeroes there are in $$n!$$ without calculating $$n!$$?
Let us define a function $$z : \mathbb{N} \to \mathbb{N}$$ that accepts an argument $$n$$ that yields the number of trailing zeros in $$n!$$. So, using our two examples above, we know that $$z(10) = 2$$ and $$z(20) = 4$$. Now, what is $$z(100)$$?
An integer $$a$$ has $$k$$ trailing zeros iff $$a = b \cdot 10^k$$ for some integer $$b$$ not divisible by $$10$$. Now, the prime factors of $$10$$ are $$2$$ and $$5$$. So, what if we consider the prime factors of each factor in a factorial product? For example, consider the first six terms (after $$1$$) of $$100!$$ and their prime factors. We omit $$1$$ because, as the multiplicative identity, it contributes nothing: $\begin{align} 2 &= 2 \\ 3 &= 3 \\ 4 &= 2^2 \\ 5 &= 5 \\ 6 &= 2 \cdot 3 \\ 7 &= 7 \end{align}$ We see that $$7!$$ contains one $$5$$ and (more than) one $$2$$. Therefore, $$7!$$ contains a factor of $$10$$, so we should expect there to be one trailing zero in $$7!$$. And we see that $$7! = 5040$$, which does indeed have one trailing zero. This suggests that we could, for any integer $$n$$ in $$n!$$, cycle through the integers from $$2$$ to $$n$$, finding the prime factorization of each integer, and counting pairs of $$2$$ and $$5$$ that we find.
You might have noticed from our example of $$7!$$ that there are four $$2$$s and only one $$5$$. Consider that there are $$\lfloor n/2 \rfloor$$ even factors in $$n!$$ (that is, every other integer factor of $$n!$$, starting at $$2$$, is even), and each of these has at least one $$2$$ in its prime factorization, many of them more than one. Now consider how many $$5$$s appear in $$n!$$: the multiples of $$5$$ contribute $$\lfloor n/5 \rfloor$$ of them, but multiples of $$25$$ contribute a second $$5$$, multiples of $$125$$ a third, and so on, so the total count is $$\lfloor n/5 \rfloor + \lfloor n/25 \rfloor + \lfloor n/125 \rfloor + \cdots$$. Since the $$2$$s are far more plentiful, for every $$5$$ we find in $$n!$$ there is a $$2$$ we can match with it. This suggests that we need only count the $$5$$s and not the pairs of $$2$$ and $$5$$.
This now suggests an algorithm, presented here in Java:
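The embedded Java listing has not survived in this copy of the post; the following is a minimal sketch of the algorithm just described (class and method names are mine):

```java
public class TrailingZeros {
    // z(n): the number of trailing zeros in n!, found by counting factors
    // of 5 among 1..n (multiples of 5, 25, 125, ... each add one more).
    public static long z(long n) {
        long count = 0;
        for (long p = 5; p <= n; p *= 5) {
            count += n / p; // integer division: multiples of p that are <= n
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(z(10));  // 2
        System.out.println(z(20));  // 4
        System.out.println(z(100)); // 24
    }
}
```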
Using this algorithm, we find that $$z(100) = 24$$.
Labels: java, math, programming
http://mathhelpforum.com/advanced-algebra/68528-proving-matrix-non-singular.html
1. ## Proving a matrix is non-singular
Hi again, came across this matrix question which has really got me stuck.
A = $\left(\begin{array}{cc}q&3\\-2&q-1\end{array}\right)$
1. Find det A, worked this out to be $q(q-1)+6$.
2. Show that A is non-singular for all values of q.
If it is non-singular, then the det does not equal 0, I think? But how would I go about showing this? Solve the quadratic equation in terms of q?
Thanks in advance
2. Originally Posted by craig
A = $\left(\begin{array}{cc}q&3\\-2&q-1\end{array}\right)$
1. Find det A, worked this out to be $q(q-1)+6$.
2. Show that A is non-singular for all values of q.
Are there any real solutions for $q^2 -q + 6=0$?
3. Originally Posted by Plato
Are there any real solutions for $q^2 -q + 6=0$?
Just found the answers; in there they have:
$q^2 -q + 6=(q-\frac{1}{2})^2 + 5\frac{3}{4}$
>0 for all real q.
No idea where they have got this from though?
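For the record, the step being asked about is the standard completing-the-square computation: $q^2 -q + 6=\left(q^2-q+\tfrac{1}{4}\right)-\tfrac{1}{4}+6=\left(q-\tfrac{1}{2}\right)^2+\tfrac{23}{4}$, and $\tfrac{23}{4}=5\tfrac{3}{4}$, which is where the mixed number comes from. The replies below say the same thing.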
4. I'm sure you're familiar with the method of "completing the square"?
5. I think I've got it, I knew it was completing the square, but wasn't sure why he was doing it exactly. But the $(q-\frac{1}{2})^2 + 5\frac{3}{4}$ is basically saying that whatever number q is, whether it is +ve or -ve, once it is squared and then added to $5\frac{3}{4}$, the result will always be greater than 0, therefore non-singular.
Is this right?
Sorry it took this long to work this rather basic thing out, been revising since 11 this morning, mind just went blank.
Thanks for the help and patience
6. Originally Posted by craig
I think I've got it, I knew it was completing the square, but wasn't sure why he was doing it exactly. But the $(q-\frac{1}{2})^2 + 5\frac{3}{4}$ is basically saying that whatever number q is, whether it is +ve or -ve, once it is squared and then added to $5\frac{3}{4}$, the result will always be greater than 0, therefore non-singular.
Is this right?
Yes, that is right. You could also have solved the equation using the quadratic formula and observed that it has two complex roots and no real root, since the discriminant is $1-24=-23<0$.
Sorry it took this long to work this rather basic thing out, been revising since 11 this morning, mind just went blank.
Thanks for the help and patience
http://mathoverflow.net/questions/34980?sort=oldest | ## How can an approach to $P$ vs $NP$ based on descriptive complexity avoid being a natural proof in the sense of Razborov-Rudich?
EDIT: This question has been modified to make it a stand-alone question. Feel free to retract your votes for the previous version.
Here are Vinay Deolalikar's paper, and Richard Lipton's first post about it, and the wiki page on polymath site summarizing the discussions about it. His approach is based on descriptive complexity.
One of the famous barriers to separating $NP$ from $P$ is the Razborov-Rudich natural proofs barrier. Richard Lipton remarked about his paper and the natural proofs barrier that apparently "it exploits a uniform characterization of P that may not extend to give lower bounds against circuits". A question which is mentioned in one of the comments on Lipton's post is:
How essential is the uniformity of $P$ to his proof?
i.e., is the uniformity of $P$ used in such an essential way that the barrier will not apply to it? (By essential I mean that the proof does not work for the non-uniform version.)
So here are my questions:
Are there any previous computational complexity results based on descriptive complexity that avoid the Razborov-Rudich natural proofs barrier (because of being based on descriptive complexity)?
How can an approach to $P$ vs $NP$ based on descriptive complexity avoid being a natural proof in the sense of Razborov-Rudich?
A related question is:
What are the complexity results using uniformity in an essential way other than proofs by diagonalization?
Related closed MO posts:
https://mathoverflow.net/questions/34947/when-would-you-read-a-paper-claiming-to-have-settled-a-long-open-problem-like-p
https://mathoverflow.net/questions/34953/whats-wrong-with-this-proof-closed
Discussion on meta:
http://meta.mathoverflow.net/discussion/590/whats-wrong-with-this-proof/
Could you rephrase your question? – Ryan Budney Aug 9 2010 at 8:17
@Ryan: Sure. How should I rephrase it? Do you have any specific suggestion on how to improve it? – Kaveh Aug 9 2010 at 8:22
Have you read it yourself? – Will Jagy Aug 9 2010 at 8:28
Kaveh, here is my suggestion: Read the paper first and only ask a technical question if you come across a difficulty that, after thinking about it for sufficient time, you cannot overcome. Also, please, remember that, as exciting as this new development might be, MO is not a seminar on Vinay Deolalikar's paper. – Victor Protsak Aug 9 2010 at 9:16
I think it more polite to let experts and the usual course of events decide whether specific parts of an unpublished work is correct or not. I think that MO is a terrible place to do this, in particular because anonymous and pseudonymous comments are possible. – Olivier Aug 9 2010 at 11:51
## 1 Answer
His proof is wrong. Completely. It makes no sense whatsoever. Haven't you heard?
This is not a helpful answer. If you care to look, the question was asked on August 9, before anyone had a clear idea about the proof. – Victor Protsak Aug 27 2010 at 11:13
I think you're being rather hard on Deolalikar here. It took an expert in descriptive complexity to find the fatal flaw in the proof. – Peter Shor Aug 27 2010 at 16:51
http://mathhelpforum.com/calculus/68898-definite-integral-problem.html
1. ## Definite integral problem
Could someone show me the steps for a problem like this? I missed a class and you'd be saving my life.
$\displaystyle\int^{\pi}_0 \sec^2 \left(\frac{t}{3} \right) dt$
2. Originally Posted by Dana_Scully
Could someone show me the steps for a problem like this? I missed a class and you'd be saving my life.
$\displaystyle\int^{\pi}_0 \sec^2 \left(\frac{t}{3}\right) dt$
Note that $\frac{d [\tan (a t)]}{dt} = a \, \sec^2 (at)$.
3. Originally Posted by Dana_Scully
Could someone show me the steps for a problem like this? I missed a class and you'd be saving my life.
$\displaystyle\int^{\pi}_0 \sec ^2 \left(\frac{t}{3}\right) dt$
Make the substitution $u = \frac{t}{3}$ so $du = \frac{dt}{3}$. New limits of integration $t = 0 \; \Rightarrow \; u = 0, \; \; t = \pi \; \Rightarrow \; u = \frac{ \pi }{3}$
New problem
$3 \int_0^{\frac{\pi}{3}} \sec^2 u \,du$
You should recognize the antiderivative for this.
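For completeness, the computation finishes as $3 \int_0^{\frac{\pi}{3}} \sec^2 u \,du = 3\Big[\tan u\Big]_0^{\frac{\pi}{3}} = 3\left(\tan \tfrac{\pi}{3} - \tan 0\right) = 3\sqrt{3}$.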
4. Thank you so much, I've got it now!
http://gauravtiwari.org/tag/real-analysis/ | # MY DIGITAL NOTEBOOK
A Personal Blog On Mathematical Sciences and Technology
# Tag Archives: Real Analysis
## Proofs of Irrationality
Tuesday, February 14th, 2012 19:18 / 3 Comments
“Irrational numbers are those real numbers which are not rational numbers!”
Def.1: Rational Number
A rational number is a real number which can be expressed in the form $\frac{a}{b}$, where $a$ and $b$ are both integers relatively prime to each other and $b$ is non-zero.
The following two statements are equivalent to Definition 1.
1. $x=\frac{a}{b}$ is rational if and only if $a$ and $b$ are integers relatively prime to each other and $b$ does not equal zero.
2. $x=\frac{a}{b} \in \mathbb{Q} \iff \mathrm{g.c.d.} (a,b) =1, \ a \in \mathbb{Z}, \ b \in \mathbb{Z} \setminus \{0\}$.
(more…)
## The Area of a Disk
Friday, January 27th, 2012 16:22 / 1 Comment
[This post is under review.]
If you are aware of elementary facts of geometry, then you might know that the area of a disk with radius $R$ is $\pi R^2$.
The radius is actually the measure (length) of a line joining the center of the disk and any point on the circumference of the disk or any other circular lamina. The radius of a disk is always the same, irrespective of which point on the circumference you join to the center of the disk. The area of the disk is defined as the 'measure of surface' enclosed by the round edge (circumference) of the disk.
## Triangle Inequality
Friday, January 20th, 2012 10:57 / 5 Comments
The triangle inequality takes its name from the geometrical fact that the length of one side of a triangle can never be greater than the sum of the lengths of the other two sides of the triangle. If $a$, $b$ and $c$ are the three sides of a triangle, then neither can $a$ be greater than $b+c$, nor $b$ be greater than $c+a$, nor $c$ be greater than $a+b$.
[Figure: a triangle with sides $a$, $b$ and $c$]
Consider the triangle in the image: side $a$ can equal the sum of the other two sides $b$ and $c$ only if the triangle degenerates into a straight line. Thinking practically, one can say that one side is formed by joining the end points of the two other sides.
In modulus form, $|x+y|$ represents the side $a$ if $|x|$ represents side $b$ and $|y|$ represents side $c$. A modulus is nothing but the distance of a point on the number line from the point zero.
[Figure: visual representation of the triangle inequality on the number line]
For example, the distance of $5$ and $-5$ from $0$ on the initial line is $5$. So we may write that $|5|=|-5|=5$.
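In this notation the triangle inequality for real numbers reads $|x+y| \le |x|+|y|$. A quick sketch of why, added for completeness: since $-|x| \le x \le |x|$ and $-|y| \le y \le |y|$, adding the two chains gives $-(|x|+|y|) \le x+y \le |x|+|y|$, which is exactly $|x+y| \le |x|+|y|$.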
Triangle inequalities are not only valid for real numbers but also for complex numbers, vectors and in Euclidean spaces. In this article, I shall discuss them separately. (more…)
## A Trip to Mathematics: Part III Relations and Functions
Read these statements carefully:
‘Michelle is the wife of Barack Obama.’
‘John is the brother of Nick.’
‘Robert is the father of Marry.’
‘Ram is older than Laxman.’
‘Mac is the product of Apple Inc.’
After reading these statements, you will realize that the first 'noun' of each sentence is somehow related to the other. We say that each noun is in a RELATIONSHIP to the other. Michelle is related to Barack Obama, as wife. John is related to Nick, as brother. Robert is related to Marry, as father. Ram is related to Laxman in terms of age (seniority). Mac is related to Apple Inc. as a product. These relations are also used in mathematics, with a little variation: 'alphabets' or 'numbers' are used in place of nouns, and mathematical relations are used between them. Some good examples of relations are:
is less than
is greater than
is equal to
is an element of
belongs to
divides
etc. etc.
Some examples of regular mathematical statements which we encounter daily are:
4<6 : 4 is less than 6.
5=5 : 5 is equal to 5.
3|6 : 3 divides 6.
For a general use, we can represent a statement as:
”some x is related to y”
Here ‘is related to’ phrase is nothing but a particular mathematical relation. For mathematical convenience, we write ”x is related to y” as $x \rho y$. x and y are two objects in a certain order and they can also be used as ordered pairs (x,y).
$(x,y) \in \rho$ and $x \rho y$ are the same and will be treated as the same term in further readings. If $\rho$ represents the relation motherhood, then $\mathrm {(Jane, \ John)} \in \rho$ means that Jane is mother of John.
All the relations we discussed above were between two objects (x,y); thus they are called Binary Relations. $(x,y) \in \rho \Rightarrow \rho$ is a binary relation between x and y. Similarly, $(x,y,z) \in \rho \Rightarrow \rho$ is a ternary (3-nary) relation on the ordered triple (x,y,z). In general a relation working on an n-tuple $(x_1, x_2, \ldots x_n) \in \rho \Rightarrow \rho$ is an n-ary relation working on the n-tuple.
We shall now discuss binary relations more rigorously, since they have solid importance in the process of defining functions and also in higher studies. In a binary relation $(x,y) \in \rho$, the set of first objects of the ordered pairs is called the domain of the relation ρ and is defined by
$D_{\rho} := \{x| \mathrm{for \ some \ y, \ (x,y) \in \rho} \}$ and also the second object is called the range of the relation ρ and is defined by $R_{\rho} := \{y| \mathrm{for \ some \ x, \ (x,y) \in \rho} \}$.
There is one more thing to discuss about relations and that is about equivalence relation.
A relation is equivalence if it satisfies three properties, Symmetric, Reflexive and Transitive.
I mean to say that if a relation is symmetric, reflexive and transitive then the relation is equivalence. You might be thinking that what these terms (symmetric, reflexive and transitive) mean here. Let me explain them separately:
A relation is symmetric: Consider three sentences “Jen is the mother of John.”; “John is brother of Nick.” and “Jen, John and Nick live in a room altogether.”
In the first sentence Jen has a relationship of motherhood to John. But can John have the same relation to Jen? Can John be mother of Jen? The answer is obviously NO! This type of relation is not symmetric. Now consider the second statement. John has a brotherhood relationship with Nick. But can Nick have the same relation to John? Can Nick be brother of John? The answer is simply YES! Thus, both the sentences "John is the brother of Nick." and "Nick is the brother of John." are the same. We may say that both are symmetric sentences. And here the relation of 'brotherhood' is symmetric in nature. Again LIVING WITH is also symmetric (it's your take to understand how?).
Now let us try to write the above short discussion in general, mathematical form. Let X and Y be two objects (numbers or people or any living or non-living thing) and have a relation ρ between them. Then we write that X is related by a relation ρ to Y. Or X ρ Y.
And if ρ is a symmetric relation, we might say that Y is (also) related by a relation ρ to X. Or Y ρ X.
So, in one line; $X \rho Y \iff Y \rho X$ is true.
A relation is reflexive if X is related to itself by a relation. i.e., $X \rho X$. Consider the statement “Jen, John and Nick live in a house altogether.” once again. Is the relation of living reflexive? How to check? Ask like, Jen lives with Jen, true? Yes! Jen lives there.
A relation is transitive means that for some objects X, Y, Z, if X is related to Y by the relation and Y is related to Z by the relation, then X is also related to Z by the same relation.
i.e., $X \rho Y \wedge Y \rho Z \Rightarrow X \rho Z$. For example, the relationship of brotherhood is transitive. (Why?) Now we are able to define the equivalence relation.
We say that a relation ρ is an equivalence relation if following properties are satisfied: (i) $X \rho Y \iff Y \rho X$
(ii) $X \rho X$
(iii) $X \rho Y \wedge Y \rho Z \Rightarrow X \rho Z$.
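A standard example, added for illustration: fix an integer $n \ge 1$ and, for integers $x$ and $y$, let $x \rho y$ mean that $n$ divides $x-y$. This relation is symmetric ($n \mid x-y$ implies $n \mid y-x$), reflexive ($n \mid 0$) and transitive (if $n \mid x-y$ and $n \mid y-z$ then $n \mid x-z$, since $x-z=(x-y)+(y-z)$), so congruence modulo $n$ is an equivalence relation.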
Functions: Let f be a relation (we are using f at the place of earlier used ρ ) on an ordered pair $(x,y) : x \in X \ y \in Y$. We can write xfy, a relation. This relation is called a function if and only if for every x, there is always a single value of y. I mean to say that if $xfy_1$ is true and $xfy_2$ is also true, then always $y_1=y_2$. This definition is standard but there are some drawbacks of this definition, which we shall discuss in the beginning of Real Analysis .
Many synonyms for the word ‘function’ are used at various stages of mathematics, e.g. Transformation, Map or Mapping, Operator, Correspondence. As already said, in ordered pair (x,y), x is called the element of domain of the function (and X the domain of the function) and y is called the element in range or co-domain of the function (and Y the range of the function).
Here I will stop myself. I don’t want a post to be long (specially when writing on basic mathematics) that reader feel it boring. The intermediate mathematics of functions is planned to be discussed in Calculus and advanced part in functional analysis. Please note that I am regularly revising older articles and trying to maintain the accuracy and completeness. If you feel that there is any fault or incompleteness in a post then please make a comment on respective post. If you are interested in writing a guest article on this blog, then kindly email me at mdnb[at]live[dot]in.
## Study Notes Announcements and more…
Thursday, September 22nd, 2011 19:12 / 2 Comments
# Announcement
Hi all!
I know some friends, who don't know what mathematics really is, always blame me for the language of the blog: it is very complicated and detailed. I understand that it is. But MY DIGITAL NOTEBOOK is mainly prepared for my study and research on mathematical sciences, so I don't care what people say (SAID) about the content and how many hits my posts get. I feel happy in the sense that MY DIGITAL NOTEBOOK has satisfied me at the highest level. I would like to thank WordPress.com for their brilliant blogging tools, and my friends, teachers and classmates who always encourage me in my passion. For me the most important thing is my study: the more I learn, the further I will go. So today (I mean tonight) I have decided to write some lecture notes (say study notes, since I am not a lecturer) on MY DIGITAL NOTEBOOK. I have planned to write on Group Theory first and then on Real Analysis. This post is just to introduce some fundamental notations which will be used in those study notes.
# Notations
Conditionals and Operators
$r /; c$ : Relation $r$ holds under the condition $c$.
$a=b$ : The expression $a$ is mathematically identical to $b$.
$a \ne b$ : The expression a is mathematically different from $b$.
$x > y$ : The quantity $x$ is greater than quantity $y$.
$x \ge y$ : The quantity $x$ is greater than or equal to the quantity $y$.
$x < y$ : The quantity $x$ is less than quantity $y$.
$x \le y$ : The quantity $x$ is less than or equal to quantity $y$.
$P := Q$ : Statement $P$ defines statement $Q$.
$a \wedge b$ : a and b.
$a \vee b$ : a or b.
$\forall a$ : for all $a$.
$\exists$ : [there] exists.
$\iff$: If and only if.
Sets & Domains
$\{ a_1, a_2, \ldots, a_n \}$ : A finite set with some elements $a_1, a_2, \ldots, a_n$.
$\{ a_1, a_2, \ldots, a_n \ldots \}$ : An infinite set with elements $a_1, a_2, \ldots$
$\mathrm{\{ listElement /; domainSpecification\}}$ : A sequence of elements `listElement` with some `domainSpecifications` in the set. For example, $\{ x : x=\frac{p}{q} /; p \in \mathbb{Z}, q \in \mathbb{N^+}\}$
$a \in A$ : $a$ is an element of the set A.
$a \notin A$: a is not an element of the set A.
$x \in (a,b)$: The number x lies within the specified interval $(a,b)$.
$x \notin (a,b)$: The number x does not belong to the specified interval $(a,b)$.
Standard Set Notations
$\mathbb{N}$ : the set of natural numbers $\{0, 1, 2, \ldots \}$
$\mathbb{N}^+$: The set of positive natural numbers: $\{1, 2, 3, \ldots \}$
$\mathbb{Z}$ : The set of integers $\{ 0, \pm 1, \pm 2, \ldots\}$
$\mathbb{Q}$ : The set of rational numbers
$\mathbb{R}$: The set of real numbers
$\mathbb{C}$: The set of complex numbers
$\mathbb{P}$: The set of prime numbers.
$\{ \}$ : The empty set.
$\{ A \otimes B \}$ : The ordered set of sets $A$ and $B$.
$n!$ : Factorial of n: $n!=1\cdot 2 \cdot 3 \ldots (n-1) n /; n \in \mathbb{N}$
Other mathematical notations, constants and terms will be introduced as the need arises.
For Non-Mathematicians:
Don’t worry I have planned to post more fun. Let’s see how the time proceeds!
## Everywhere Continuous Non-derivable Function
Thursday, July 7th, 2011 13:12 / 2 Comments
Weierstrass had drawn attention to the fact that there exist functions which are continuous for every value of $x$ but do not possess a derivative for any value. We now consider the celebrated function given by Weierstrass to show this fact. It will be shown that if
$f(x)= \displaystyle{\sum_{n=0}^{\infty} } b^n \cos (a^n \pi x) \ \ldots (1) \\ = \cos \pi x +b \cos a \pi x + b^2 \cos a^2 \pi x+ \ldots$ where $a$ is an odd positive integer, $0 < b <1$ and $ab > 1+\frac{3}{2} \pi$, then the function $f$ is continuous $\forall x$ but not finitely derivable for any value of $x$.
G.H. Hardy improved this result to allow $ab \ge 1$.
We have $|b^n \cos (a^n \pi x)| \le b^n$ and $\sum b^n$ is convergent. Thus, by Weierstrass's $M$-test for uniform convergence, the series (1) is uniformly convergent in every interval. Hence $f$ is continuous $\forall x$.
Again, we have $\dfrac{f(x+h)-f(x)}{h} = \displaystyle{\sum_{n=0}^{\infty}} b^n \dfrac{\cos [a^n \pi (x+h)]-\cos a^n \pi x}{h} \ \ \ldots (2)$
Now let $m$ be any positive integer. Also let $S_m$ denote the sum of the first $m$ terms and $R_m$ the remainder after $m$ terms of the series (2), so that
$\displaystyle{\sum_{n=0}^{\infty}} b^n \dfrac{\cos [a^n \pi (x+h)]-\cos a^n \pi x}{h} = S_m+R_m$. By Lagrange’s mean value theorem, we have
$\dfrac{|\cos {[a^n \pi (x+h)]} -\cos {a^n \pi x|}}{|h|}=|a^n \pi h \sin {a^n \pi(x+\theta h)}| \le a^n \pi |h|$,
$|S_m| \le \displaystyle{\sum_{n=0}^{m-1}} b^n a^n \pi = \pi \dfrac {a^m b^m -1}{ab-1} < \pi \dfrac {a^m b^m}{ab-1}$. We shall now consider $R_m$.
So far we have taken $h$ as arbitrary, but we shall now choose it as follows:
We write $a^m x=\alpha_m+\xi_m$, where $\alpha_m$ is the integer nearest to $a^m x$ and $-1/2 \le \xi_m < 1/2$.
Therefore $a^m(x+h) = \alpha_m+\xi_m+ha^m$. We choose $h$ so that $\xi_m+ha^m=1$,
i.e., $h=\dfrac{1-\xi_m}{a^m}$, which $\to 0 \ \text{as} \ m \to \infty$, since $0< h \le \dfrac{3}{2a^m} \ \ldots (3)$
Now, $a^n \pi (x+h) = a^{n-m} \pi \, a^m (x+h) \\ \ =a^{n-m} \pi [(\alpha_m +\xi_m)+(1-\xi_m)] \\ \ =a^{n-m} \pi(\alpha_m+1)$
Thus $\cos[a^n \pi (x+h)] =\cos [a^{n-m} (\alpha_m+1) \pi] =(-1)^{\alpha_m+1}$.
$\cos (a^n \pi x) = \cos [a^{n-m} (a^m \pi x)] \\ \ =\cos [a^{n-m} (\alpha_m+\xi_m) \pi] \\ \ =\cos a^{n-m} \alpha_m \pi \cos a^{n-m} \xi_m \pi - \sin a^{n-m} \alpha_m \pi \sin a^{n-m} \xi_m \pi \\ \ = (-1)^{\alpha_m} \cos a^{n-m} \xi_m \pi$, since $a$ is an odd integer and $\alpha_m$ is an integer.
Therefore, $R_m =\dfrac{(-1)^{\alpha_m+1}}{h} \displaystyle{\sum_{n=m}^{\infty}} b^n [1+\cos (a^{n-m} \xi_m \pi)] \ \ldots (4)$
Now each term of the series in (4) is non-negative and, in particular, the first term is at least $1$ (because $|\xi_m| \le 1/2$ gives $\cos (\xi_m \pi) \ge 0$), so $|R_m| > \dfrac{b^m}{|h|} \ge \dfrac{2a^m b^m}{3} \ \ldots (5)$
Thus $\left| {\dfrac{f(x+h) -f(x)}{h}} \right| = |R_m +S_m| \\ \ \ge |R_m|-|S_m| > \left({\frac{2}{3} -\dfrac{\pi}{ab-1}} \right) a^mb^m$
As $ab > 1+\frac{3}{2}\pi$, we have $\dfrac{\pi}{ab-1} < \dfrac{2}{3}$, and therefore $\left({\dfrac{2}{3} -\dfrac{\pi}{ab-1}} \right)$ is positive.
Thus we see that when $m \to \infty$, so that $h \to 0$, the expression $\dfrac{f(x+h)-f(x)}{h}$ takes arbitrarily large values. Hence $f'(x)$ does not exist, or is at least not finite.
### Reference
Shanti Narayan and P. K. Mittal, A Course of Mathematical Analysis, S. Chand & Co.
## Dedekind’s Theory of Real Numbers
# Intro
Let $\mathbf{Q}$ be the set of rational numbers. It is well known that $\mathbf{Q}$ is an ordered field, and the set $\mathbf{Q}$ is equipped with a relation called "less than", which is an order relation. Between two rational numbers there exist infinitely many elements of $\mathbf{Q}$. Thus the system of rational numbers seems to be dense and so apparently complete. But it is quite easy to show that there exist some numbers (e.g., ${\sqrt{2}, \sqrt{3}, \ldots}$) which are not rational. For example, let us prove that $\sqrt{2}$ is not a rational number, or in other words, that there exists no rational number whose square is 2. Suppose, if possible, that $\sqrt{2}$ is a rational number. Then according to the definition of rational numbers $\sqrt{2}=\dfrac{p}{q}$, where p and q are relatively prime integers. Hence ${\left(\sqrt{2}\right)}^2=p^2/q^2$, or $p^2=2q^2$. This implies that p is even. Let $p=2m$; then $(2m)^2=2q^2$, or $q^2=2m^2$. Thus $q$ is also even. But since both are even, they are not relatively prime, which is a contradiction. Hence $\sqrt{2}$ is not a rational number and the proof is complete. Similar arguments show that the other numbers listed above are irrational. From this proof it is clear that the set $\mathbf{Q}$ is not complete and that there are gaps between the rational numbers, in the form of irrational numbers. This remark shows the necessity of forming a more comprehensive system of numbers other than the system of rational numbers. The elements of this extended set will be called real numbers. The following three approaches have been made for defining a real number.
1. Dedekind’s Theory
2. Cantor’s Theory
3. Method of Decimal Representation
The method due to R. Dedekind (1831-1916), known as Dedekind's Theory, will be discussed in this note. To discuss this theory we need the following definitions:
Rational number: A number which can be represented as $\dfrac{p}{q}$, where p is an integer and q is a non-zero integer, i.e., $p \in \mathbf{Z}$ and $q \in \mathbf{Z} \setminus \{0\}$, with p and q relatively prime, their greatest common divisor being 1, i.e., $\left(p,q\right) =1$.
Ordered Field: Here $\mathbf{Q}$ is an algebraic structure on which the operations of addition, subtraction, multiplication and division by a non-zero number can be carried out.
Least or Smallest Element: Let $A \subseteq Q$ and $a \in Q$. Then $a$ is said to be a least element of $A$ if (i) $a \in A$ and (ii) $a \le x$ for every $x \in A$.
Greatest or Largest Element: Let $A \subseteq Q$ and $b \in Q$. Then $b$ is said to be a greatest element of $A$ if (i) $b \in A$ and (ii) $x \le b$ for every $x \in A$.
## Dedekind’s Section (Cut) of the Set of All the Rational Numbers
Since the set of rational numbers is an ordered field, we may consider the rational numbers to be arranged in order on straight line from left to right. Now if we cut this line by some point $P$, then the set of rational numbers is divided into two classes $L$ and $U$. The rational numbers on the left, i.e. the rational numbers less than the number corresponding to the point of cut $P$ are all in $L$ and the rational numbers on the right, i.e. The rational number greater than the point are all in $U$. If the point $P$ is not a rational number then every rational number either belongs to $L$ or $U$. But if $P$ is a rational number, then it may be considered as an element of $U$.
### Def.: Real Numbers
Let $L \subset \mathbf{Q}$ satisfy the following conditions:
1. $L$ is non-empty proper subset of $\mathbf{Q}$.
2. If $a, b \in \mathbf{Q}$, $a < b$ and $b \in L$, then $a \in L$.
3. $L$ doesn’t have a greatest element.
Let $U=\mathbf{Q}-L$. Then the ordered pair $< L,U >$ is called a section or a cut of the set of rational numbers. This section of the set of rational numbers is called a real number.
Notation: The set of real numbers $\alpha, \beta, \gamma, \ldots$ is denoted by $\mathbf{R}$.
Let $\alpha = \langle L,U \rangle$ then $L$ and $U$ are called Lower and Upper Class of $\alpha$ respectively. These classes will be denoted by $L(\alpha)$ and $U(\alpha)$ respectively.
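A concrete example, added for illustration: the irrational number $\sqrt{2}$ corresponds to the cut $\langle L, U \rangle$ with $L=\{x \in \mathbf{Q} : x \le 0 \ \text{or} \ x^2<2\}$ and $U=\mathbf{Q}-L$. Indeed $L$ is a non-empty proper subset of $\mathbf{Q}$, it contains every rational smaller than any of its elements, and it has no greatest element: if $x \in L$ is positive, then $x'=\frac{2x+2}{x+2}$ is a larger rational which still satisfies $x'^2<2$.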
(more…)
http://math.stackexchange.com/questions/10741/quotient-geometries-known-in-popular-culture-such-as-flat-torus-asteroids-vi | # Quotient geometries known in popular culture, such as “flat torus = Asteroids video game”
In answering a question I mentioned the Asteroids video game as an example -- at one time, the canonical example -- of a locally flat geometry that is globally different from the Euclidean plane. It might be out of date in 2010. This raises its own question:
are there other real-life examples of geometries formed by identifications? We know the cylinder and Moebius strip and there are probably some interesting equivalents of those. Origami are coverings of a punctured torus and I heard that there are crocheted examples of complicated 2-d and 3-d objects. Are there simple, Asteroids-like conversational examples for flat surfaces formed as quotients? Punctures and orbifold points and higher dimensional examples all would be interesting, but I am looking less for mathematically sophisticated than conversationally relevant examples, such as a famous game or gadget that makes a cellphone function as a torus instead of a rectangle.
(edit: Pac-Man, board games such as Chutes and Ladders, or any game with magic portals that transport you between different locations, all illustrate identification of points or pieces of the space, but they lead to non-homogeneous geometries. The nice thing about Asteroids was that it was clearly the whole uniform geometry of the torus.
edit-2: the Flying Toasters screen-saver would have been an example of what I mean, except that video of it exists online and shows it to be a square window onto motion in the ordinary Euclidean plane.)
My students claimed to know what pac-man was. – Jack Schmidt Nov 17 '10 at 20:37
Also, a cylinder (rolled up piece of paper) is locally flat, but not globally the euclidean plane. – Jack Schmidt Nov 17 '10 at 20:39
I thought of Pac-man, but it is not evident that the whole screen is a torus rather than identifications of some parts or "portals". The cylinder and Moebius strip are good because they can be made from paper on the spot, but I was hoping for something "cool" and immediately recognizable like the video games. – T.. Nov 17 '10 at 20:40
## 5 Answers
You could view a kaleidoscope as a picture of a pattern in the quotient of the plane by a suitable group action.
Similarly, some Escher work is a picture of life inside hyperbolic surfaces -- quotients of the hyperbolic plane.
On the more extreme end, $\mathbb R^2$ is a quotient of $\mathbb R$, so presumably you have a stricter question in mind than the one you've actually written, as you can get all kinds of things as quotients. $SO_3$ (rotations in 3-space) is a quotient of the unit quaternions. This is used in computer graphics, among other things.
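To make the computer-graphics remark concrete: a unit quaternion and its antipode give the same rotation matrix, which is exactly the 2-to-1 quotient of the unit quaternions onto $SO_3$. A small self-contained Java sketch of this (all names are mine, added as an illustration):

```java
public class QuaternionRotation {
    // Standard rotation matrix of a unit quaternion (w, x, y, z).
    static double[][] toMatrix(double w, double x, double y, double z) {
        return new double[][] {
            {1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)},
            {2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)},
            {2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)}
        };
    }

    public static void main(String[] args) {
        double s = Math.sqrt(0.5); // a 90-degree rotation about the z-axis
        double[][] r1 = toMatrix(s, 0, 0, s);
        double[][] r2 = toMatrix(-s, 0, 0, -s); // the antipodal quaternion
        // Every matrix entry is built from products of two components, so
        // negating all four components leaves the matrix unchanged.
        System.out.println(java.util.Arrays.deepEquals(r1, r2)); // prints: true
    }
}
```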
For the kaleidoscope, you're really viewing the quotient as an orbifold with its natural geometric structure, rather than just as a topological space.
I have trouble imagining $\mathbb{R}^2$ as a quotient of $\mathbb{R}$. I see $\mathbb{R}$ as a quotient of $\mathbb{R}^2$. Am I reading it wrong, or did you miss-type? If not, It'd be great to have help seeing this. Thanks. – yasmar Nov 17 '10 at 23:26
@yasmar, yes it's a corker! First step is there is a space-filling curve, by that I mean an onto continuous function $[0,1] \to [0,1]^2$. The $\mathbb Z^2$-translates of $[0,1]^2$ fill $\mathbb R^2$ so you can put together countably-many space-filling curves to construct an onto continuous function $\mathbb R \to \mathbb R^2$ which is a quotient map. – Ryan Budney Nov 17 '10 at 23:39
Thanks. I figured it had to be either something mind bending like that, or a mistake. I'm glad I asked. – yasmar Nov 18 '10 at 1:46
Many role-playing games have toroidal worlds. Especially Dragon Quest and Final Fantasy games. It's a pity that few go through the trouble of implementing a true spherical topology. I don't mind other topologies if they fit the game world, but in RPGs it's mostly understood that the world should be Earth-like.
And then, there is also this marvelous site:
http://www.geometrygames.org/HyperbolicGames/
The original Sonic the Hedgehog games (on the Sega Genesis and similar consoles) provide good examples. Many of the levels wrap around vertically, so that there is no bottom or top. They are equivalent to cylinders, I suppose.
Here are some examples from Sonic 3:
http://info.sonicretro.org/File:Ic1map.PNG
http://info.sonicretro.org/File:Mg1map.PNG
The designers used this to make the giant hill at the beginning of the ice cap level that is many times taller than the level itself. Also, the Sonic 3 Special stages are toroidal, just like Asteroids, though in the game they are presented with an apparent spherical curvature, which had me terribly lost and confused until I googled it. In hindsight, I should have realized that you can't cover a sphere with squares, but Sonic doesn't exactly give you a lot of time for thinking.
http://info.sonicretro.org/File:S%26KSS1.png
http://nrich.maths.org/6222/solution | ### Summing Consecutive Numbers
Many numbers can be expressed as the sum of two or more consecutive integers. For example, 15=7+8 and 10=1+2+3+4. Can you say which numbers can be expressed in this way?
### Always the Same
Arrange the numbers 1 to 16 into a 4 by 4 array. Choose a number. Cross out the numbers on the same row and column. Repeat this process. Add up you four numbers. Why do they always add up to 34?
### Fibs
The well known Fibonacci sequence is 1 ,1, 2, 3, 5, 8, 13, 21.... How many Fibonacci sequences can you find containing the number 196 as one of the terms?
# Weekly Problem 43 - 2008
##### Stage: 3, Short Challenge Level
$b-3s$
This problem is taken from the UKMT Mathematical Challenges.
http://mathhelpforum.com/statistics/185255-probability-getting-least-one-correct-answer-2-out-5-correct-options-2-print.html | Probability of getting at least one correct answer, with 2 out of 5 correct options
• July 29th 2011, 01:39 AM
CaptainBlack
Re: Probability of getting at least one correct answer, with 2 out of 5 correct options
Quote:
Originally Posted by jbwtucker
This is a real world problem we're trying to solve, not a test question, so in a way, your help is all the more appreciated!
A student taking a test is presented with a question with five options. Among the 5 options presented, 2 of the 5 represent correct answers. (So, for example, if A, B, C, D and E are the options, it may be that B and E are correct.)
They may choose up to 2 options; they can opt to select only 1 option, but may select 2. They may not select more than 2. (Essentially, think of 5 check boxes: they may not select more than 2, but they do have the option of selecting only 1.)
We know that the probability of selecting both options correct (i.e., marking two check boxes, and both selections are correct) is represented by the following:
$\frac{2}{5}\times\frac{1}{4}=\frac{1}{10}$
Here's what we need help with. Please remember that they do have the option of only marking 1 out of the 5 check boxes.
1. What is the probability that they will get only 1 option correct (that either (a) they mark only 1 check box, and it is a correct one, or (b) they mark 2 check boxes, and 1 is right, but 1 is wrong)?
2. What is the probability that they will get at least 1 option correct (that either (a) they mark only 1 check box, and it is a correct one, (b) they mark 2 check boxes, and 1 of the 2 is correct, or also (c) they mark 2 check boxes, and both are correct)?
Essentially, we need to know the different probabilities for getting only 1 correct, versus getting at least 1 correct ... and we don't know how to account for their option to only mark 1 checkbox, not two.
Your question has no answer without further assumptions, for instance you seem to assume that the student knows nothing about the question and so will tick boxes at random. Is this realistic?
Also there is no decision problem in this; we need some information to make an informed model of how the student decides to tick zero, one or two boxes (or, for that matter, why not tick more, and what marking rule is then applied).
CB
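For what it's worth, under the simplest such model (the student knows nothing and always ticks exactly two of the five boxes, uniformly at random) the counts are straightforward, and they illustrate rather than replace the point above: there are $\binom{5}{2}=10$ equally likely pairs, of which $1$ is fully correct and $2 \times 3 = 6$ contain exactly one correct option, giving $P(\text{both})=\frac{1}{10}$, $P(\text{exactly one})=\frac{6}{10}$ and $P(\text{at least one})=\frac{7}{10}$. A student who instead ticks a single box at random is correct with probability $\frac{2}{5}$.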
http://physics.stackexchange.com/questions/39019/field-due-to-current-in-a-wire?answertab=votes | # Field due to current in a wire
Suppose a current flows in a straight cylindrical wire so that an electric field $\textbf{E}$ is maintained in the wire. Will there be an electric field just outside the wire..?
## 3 Answers
From your setup, it sounds like you have an E field directed along the wire that is driving the current (i.e. the wire has some finite conductivity). If that is the case, then just outside the wire there must also be an E field, because of Faraday's Law. The curl of E must be finite, and if you had a discontinuity of the tangential E field at the surface of the wire then the curl would be infinite.
I believe it is Faraday's law here rather than Ampere's law. – Alfred Centauri Oct 4 '12 at 18:08
You are right, Faraday's Law. – user1631 Oct 4 '12 at 20:59
The voltage difference in a steady state current is independent of path, and deforming the path out of the wire, you can see that the electric field must be continuous. The electric field not only extends outside the wire, in conjunction with the magnetic field surrounding the wire, it is carrying the bulk of the momentum of the current.
Ron Maimon, isn't there another problem: what if you take your path very far from the wire? I assume the integral $\int \mathbf{E} \cdot d\mathbf{s}$ is nonzero on that portion of the path which is parallel to the wire; then you will find that the electric field must be independent of distance from the wire to give the same voltage, and that seems nonphysical! Am I right? – Riza Apr 23 at 16:36
@richard: That's not a paradox. The field bends, so that you get a nonzero integral along the parts of the path relatively close to the wire, which will give the voltage difference between each point and infinity. – Ron Maimon Apr 23 at 18:46
why should the field bend in a symmetric configuration (long wire)? Actually I'm confused about this, because the few texts talking about it are not quite consistent. For example in the Feynman lectures the electric field is assumed parallel to the wire! – Riza Apr 23 at 19:39
@Richard, if it doesn't bend, E can't fall off with distance (imagine two opposite infinite plates of charge sourcing the current). The wire isn't infinite. – Ron Maimon Apr 24 at 14:21
When a steady current is flowing through the wire, the overall wire is charge neutral. In the presence of an external electric field $\textbf{E}$ the electrons are simply moving from one end of the wire to the other. The electron density at the end of the wire from which the electrons are leaving gets replenished by (say) the battery. An equal flux of electrons is obviously coming out of the other end of the wire and going into the battery. In each region of the wire the free (or conducting) electron density matches that of the positive ion cores. As a result, the wire is charge neutral. Therefore there should not be any electric field outside the wire (Gauss's law).
Also, because you said that "$\textbf{E}$ is maintained," the magnetic field produced by the wire (Biot-Savart law) will be time independent. Therefore any electric field outside the wire, due to induction, will also not be possible (Lenz's law).
This is wrong. The integral of E from one point to another is the same along any path, requiring an E field continuously extending outside the wire when a current is flowing. – Ron Maimon Oct 5 '12 at 7:32
http://mathoverflow.net/questions/83255/recognition-of-graph-families/83260 | ## Recognition of graph families.
Let F be a family of finite simple graphs, such as planar graphs, Cayley graphs, 4-colorable graphs, etc. The basic question is whether there exists a polynomial-time algorithm to decide whether a given graph G belongs to F. Do you know any summary (for example in the form of a table) of the state of the art for this problem: for which families F there is a polynomial algorithm (and what the fastest known algorithm is), for which families the problem is NP-complete, and for which families it is open?
-
3
This question is too vague. Any computational problem can be encoded as a family of graphs, so you are effectively asking for a listing of all known problems together with their computational complexity. – Emil Jeřábek Dec 12 2011 at 17:14
1
Have you tried asking on cstheory.stackexchange.com ? – Zsbán Ambrus Dec 13 2011 at 10:59
1
Actually, not so vague: the question is about INTERESTING families, and the answer below from David Eppstein is exactly what I wanted. OK then, may I ask a much more concrete question: can one efficiently test whether a given (uncolored) graph is the Cayley graph of some group G? – Bogdan Dec 14 2011 at 12:48
## 3 Answers
It's too big to fit into one table, but I believe what you want is the Information System on Graph Classes and their Inclusions. In particular, for each of over 1200 graph classes listed on this site, it includes the known results on the complexity of several important computational problems on graphs in that class, including recognition.
-
Thanks! This is exactly what I asked for. But, surprisingly, this long list of 1200 classes does not include Cayley graphs. So I still do not know whether one can efficiently test if a given (uncolored) graph is the Cayley graph of some group G. – Bogdan Dec 14 2011 at 12:46
Ok, I see now that this is open. Thanks again. – Bogdan Dec 14 2011 at 13:45
Permit me to direct you to the Wikipedia page on "Forbidden graph characterization," which contains a long table of graph classes that have forbidden subgraph characterizations. For example, chordal graphs may be characterized as those with no induced cycles of length $\ge 4$. Then the "Robertson–Seymour theorem" is relevant, for it implies that
for every minor-closed family $F$, there is polynomial time algorithm for testing whether a graph belongs to $F$.
And the R-S theorem itself says that every minor-closed family can be defined by a finite set of forbidden minors, e.g., $K_5$ and $K_{3,3}$ for planar graphs.
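As a concrete illustration, planar graphs form a minor-closed family, and planarity can in fact be tested in linear time. A minimal sketch (an editorial addition, not from the answers above, assuming Python with the networkx library):

```python
# K5 and K33, the forbidden minors for planarity, fail the test; K4 passes.
import networkx as nx

for name, G in [("K4", nx.complete_graph(4)),
                ("K5", nx.complete_graph(5)),
                ("K33", nx.complete_bipartite_graph(3, 3))]:
    is_planar, _ = nx.check_planarity(G)
    print(name, "planar:", is_planar)   # True, False, False
```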
-
Cubic time, in fact, if I remember correctly. – Will Sawin Dec 12 2011 at 19:45
@Will: Yes, cubic in the size of the graph, but with a constant superpolynomial in the minor size. – Joseph O'Rourke Dec 12 2011 at 20:41
4
The original algorithm of Robertson and Seymour ran in cubic time, but it has since been improved to quadratic by Kawarabayashi, Kobayashi and Reed. See research.nii.ac.jp/~k_keniti/quaddp1.pdf – Tony Huynh Dec 12 2011 at 22:02
Garey, Michael R. – Johnson, David S., Computers and intractability, a guide to the theory of NP-completeness, 1979. It's a nice book which includes many decision problems on graphs. It even gives a general sufficient condition that makes recognizing a class of graphs NP-complete. I don't remember the details, but I think it has to do with the class being monotone.
-
http://quant.stackexchange.com/questions/3390/good-reference-on-sample-autocorrelation

Good reference on sample autocorrelation?
I'm not a statistician, but I'm writing my thesis on mathematical finance and I think it would be neat to have a short section about independence of stock returns. I need to get a better understanding of some assumptions (see below) and find a good book to cite.
I have a model for stock prices $S$ in which the daily ($t_i - t_{i-1}=1$) log-returns
$$X_n = \ln\left(\frac{S(t_n)}{S(t_{n-1})}\right), \ \ n=1,...,N$$
are normally distributed with mean $\mu-\sigma^2/2$ and variance $\sigma^2$. The autocorrelation function with lag 1 is
$$r = \frac{Cov(X_1,X_2)}{Var(X_1)}$$
which I estimate by
$$\hat{r} = \frac{(n+1)\sum_{i=1}^{n-1} \bigl(X_i - \bar{X} \bigr)\bigl(X_{i+1} - \bar{X} \bigr)}{n \sum_{i=1}^{n}\bigl(X_i - \bar{X} \bigr)^2}$$
where
$$\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$$
Now I understand that under some assumptions it holds that
$$\lim_{n \rightarrow \infty} \sqrt{n}\hat{r} \in N(0,1)$$
I would be very glad if someone could point me towards a good book which I can cite in my thesis and read about these assumptions (I guess it has something to do with the central limit theorem).
Thank you in advance!
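For what it's worth, here is a minimal numerical sketch of the estimator above (an editorial addition, not part of the original question; Python with numpy is assumed, and the drift/volatility values are purely hypothetical):

```python
# Simulate i.i.d. normal daily log-returns from the stated model and compute
# the lag-1 estimator r_hat; sqrt(n)*r_hat should look like a N(0,1) draw.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
mu, sigma = 0.05 / 252, 0.20 / np.sqrt(252)   # hypothetical daily parameters
X = rng.normal(mu - sigma**2 / 2, sigma, n)   # log-returns X_1, ..., X_n

Xbar = X.mean()
r_hat = ((n + 1) * np.sum((X[:-1] - Xbar) * (X[1:] - Xbar))
         / (n * np.sum((X - Xbar) ** 2)))
print(r_hat, np.sqrt(n) * r_hat)
```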
Crossposting at:
Mathematics: http://math.stackexchange.com/questions/139408/good-reference-on-sample-autocorrelation
Statistics: http://stats.stackexchange.com/questions/27465/good-reference-on-sample-autocorrelation
-
1
– chrisaycock♦ May 1 '12 at 15:23
Your claim that lim_{n->inf} sqrt(n) r_hat is supposed to be an element of the normal distribution does not make sense mathematically. A distribution is not a set. If you mean its support, that is the real line, an element of which your limit surely is. But there is little value in that claim. Besides, regarding autocorrelation, basically autoregressive time series and fractional Brownian motion (and numerical approximations thereof) come to my mind. But I'm not sure whether either topic fits your background and time frame. – Konsta May 2 '12 at 21:37
1 Answer
Three good references are:
1. Asymptotic theory for econometricians, H. White
2. Stochastic Limit Theory, Davidson
3. Asymptotic Theory of Statistical Inference for Time Series, Taniguchi and Kakizawa
They are roughly in order of complexity.
The crux of the matter is to balance the requirement of finiteness of the higher moments of $X$ against its dependence structure, or, put differently, to balance the thickness of the tails against the memory of the process.
Don't peek into the abyss for too long.
-
http://physics.stackexchange.com/questions/4976/bose-einstein-condensate-in-1d/4984

# Bose-Einstein condensate in 1D
I've read that for a Bose-Einstein gas in 1D there's no condensation. Why does this happen? How can I prove it?
-
## 4 Answers
The claim is often that there is no condensation in $d<3$. The other answers are correct, but let's be clear, there are actually two assumptions present in the claim:
1. Assume you have $N$ noninteracting bosons in $d$-dimensions in a hypervolume $L^d$
2. Assume that these bosons have an energy-momentum relationship of $E(p) = Ap^s$.
Now, the way we calculate the critical temperature ($1/\beta_c$) for BEC requires satisfying the equation $$\int_0^\infty \frac{\rho(E)dE}{e^{\beta_c E}-1}=N$$
where $\rho(E)$ is the density of states. Whether this integral is convergent or not depends on the values of both $s$ and $d$. The details of the proof are up to you though. :)
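To make the convergence criterion concrete: for $E(p)=Ap^s$ in $d$ dimensions one has, up to constants, $\rho(E)\propto E^{d/s-1}$, and the integral converges only if $d/s>1$. A small numerical sketch of this (an editorial addition, assuming Python with numpy/scipy; $\beta_c$ is absorbed into $E$):

```python
# Truncate the Bose integral at a lower cutoff eps and watch what happens
# as eps -> 0: for d/s > 1 the values settle to a finite limit, otherwise
# they diverge, so no finite critical temperature exists.
import numpy as np
from scipy.integrate import quad

def bose_integral(d, s, eps):
    f = lambda E: E ** (d / s - 1) / np.expm1(E)
    val, _ = quad(f, eps, 50.0, limit=200)
    return val

for d in (1, 2, 3):
    print(d, [round(bose_integral(d, 2, 10.0 ** (-k)), 2) for k in (2, 4, 6)])
# d=3 converges; d=2 grows logarithmically; d=1 blows up like eps**(-1/2)
```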
-
@mbq and @wsc Thanks for your relevant and targeted answers. So, if I understand correctly, the condensate exists ONLY in three dimensions? Not in 1 or 2, or (if we imagine it's possible, as a mathematical speculation) in >3D ;) PS I voted +1 on your answers, but I can't select more than 1 accepted answer :( – Boy Simone Feb 11 '11 at 1:19
>3D is just fine, and in fact if you work it out carefully, you'll see that this common proof fails to deny the existence of a condensate in 2D if the bosons have a linear dispersion, E(p)~p. But this is just math. Zoran Hadzibabic does some truly beautiful experiments with quasi-2D BECs. – wsc Feb 11 '11 at 1:24
2
There is a little more to this; for the not so degenerate case of purely non-interacting bosons, a BEC is a kind of quantum condensation where a continuous ($U(1)$) symmetry is broken; in general, such a symmetry breaking will yield arbitrarily low energy (Goldstone) bosons. Below 3D, these bosons will, at any finite temperature, be infinite in number, signalling a failure of the theory, i.e. it is not actually symmetry broken. In 1D this is absolute; in 2D the divergence is only logarithmic, so for a small sample it is indistinguishable from a broken symmetry state. – genneth Feb 11 '11 at 9:41
This is obviously pure geometry: the density of states does not approach 0 at zero energy in $d<3$ (it behaves like $E^{\frac{d-2}{2}}$); probably the simplest proof is to show that this makes the number of particles diverge.
-
It is necessary to clarify that a uniform, non-interacting Bose gas (considered to be confined in a periodic box) in thermal equilibrium does not have a macroscopic occupation of the zero-momentum mode if $d<3$. This is not quite accurate for $d=2$, as macroscopic occupation is achieved at T=0; or rather, the critical temperature tends to zero in the limit $N \to \infty$, $V \to \infty$, $N/V = {\rm const}$.
This is however not the case if one has external potentials and makes no continuum approximation in the thermodynamics. Additionally, attractive condensates $(a_s < 0)$ can form stable, self-localised states (solitons) even without confinement in $d=1$. Such states satisfy the conditions of off-diagonal long-range order required for BEC.
-
Look at the derivation of the critical temperature of a Bose gas. In there, you should get nonsensical results for one dimension. This is because the density of states even of non-interacting particles depends on the dimension.
-
5
Although technically correct, I don't think this answer is very helpful. It doesn't really seem to answer the question, it just says, essentially, "look it up." – David Zaslavsky♦ Feb 11 '11 at 0:27
http://mathhelpforum.com/differential-geometry/85439-multiple-choice.html

# Thread:
1. ## Multiple Choice
Let $\{a_n\}$ be a sequence of positive terms. Which of the following statements is a deduction from:
the sequence $\{a_n\}$ is not summable:
(A) $\forall M > 0 \exists N \in \mathbb{N}$ such that $a_N > M$.
(B) $a_n \not\rightarrow 0$
(C) $\forall n \in \mathbb{N}: \frac{a_{n+1}}{a_n} \le 1$.
(D) $\forall M>0 \exists N \in \mathbb{N}$ such that $\sum_{i=1}^{N} a_i > M$.
(E) None of the above.
Any help would be appreciated - I don't think it can be B or C.
2. Does the sequence $\left\{ {1,\frac{1}{2},\frac{1}{3}, \cdots ,\frac{1}{n}, \cdots } \right\}$ meet the conditions of this question?
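An aside (not part of the original thread; plain Python): Plato's harmonic sequence makes the distinction concrete, since $1/n \to 0$ (so (B) is not forced) while its partial sums still exceed any $M$, which is exactly statement (D):

```python
# First index n whose partial sum 1 + 1/2 + ... + 1/n exceeds M = 10.
M = 10.0
s, n = 0.0, 0
while s <= M:
    n += 1
    s += 1.0 / n
print(n, s)   # n = 12367, the first harmonic partial sum above 10
```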
3. Is the answer D)?
http://physics.stackexchange.com/questions/17076/is-the-spring-constant-k-changed-when-you-divide-a-spring-into-parts?answertab=votes

Is the spring constant k changed when you divide a spring into parts?
I've always been taught that the spring constant $k$ is a constant — that is, for a given spring, $k$ will always be the same, regardless of what you do to the spring.
My friend's physics professor gave a practice problem in which a spring of length $L$ was cut into four parts of length $L/4$. He claimed that the spring constant in each of the new springs cut from the old spring ($k_\text{new}$) was therefore equal to $k_\text{orig}/4$.
Is this true? Every person I've asked seems to think that this is false, and that $k$ will be the same even if you cut the spring into parts. Is there a good explanation of whether $k$ will be the same after cutting the spring or not? It seems like if it's an inherent property of the spring it shouldn't change, so if it does, why?
-
2
– Qmechanic♦ Nov 16 '11 at 21:09
4 Answers
Well, the sentence
It seems like if it's an inherent property of the spring it shouldn't change, so if it does, why?
clearly isn't a valid argument to calculate the $k$ of the smaller springs. They're different springs than their large parent so they may have different values of an "inherent property": if a pizza is divided to 4 smaller pieces, the inherent property "mass" of the smaller pizzas is also different than the mass of the large one. ;-)
You may have meant that it is an "intensive" property (like a density or temperature) which wouldn't change after the cutting of a big spring, but you have offered no evidence that it's "intensive" in this sense. No surprise, this statement is incorrect as I'm going to show.
One may calculate the right answer in many ways. For example, we may consider the energy of the spring. It is equal to $k_{\rm big}x_{\rm big}^2/2$ where $x_{\rm big}$ is the deviation (distance) from the equilibrium position. We may also imagine that the big spring is a collection of 4 equal smaller springs attached to each other.
In this picture, each of the 4 springs has the deviation $x_{\rm small} = x_{\rm big}/4$ and the energy of each spring is $$E_{\rm small} = \frac{1}{2} k_{\rm small} x_{\rm small}^2 = \frac{1}{2} k_{\rm small} \frac{x_{\rm big}^2}{16}$$ Because we have 4 such small springs, the total energy is $$E_{\rm 4 \,small} = \frac{1}{2} k_{\rm small} \frac{x_{\rm big}^2}{4}$$ That must be equal to the potential energy of the single big spring because it's the same object $$= E_{\rm big} = \frac{1}{2} k_{\rm big} x_{\rm big}^2$$ which implies, after you divide the same factors on both sides, $$k_{\rm big} = \frac{k_{\rm small}}{4}$$ So the spring constant of the smaller springs is actually 4 times larger than the spring constant of the big spring.
You could get the same result via forces, too. The large spring has some forces $F=k_{\rm big}x_{\rm big}$ on both ends. When you divide it into four small springs, there are still the same forces $\pm F$ on each boundary of the smaller springs. They must be equal to $F=k_{\rm small} x_{\rm small}$ because the same formula holds for the smaller springs as well. Because $x_{\rm small} = x_{\rm big}/4$, you see that $k_{\rm small} = 4k_{\rm big}$. It's harder to change the length of the shorter spring because it's short to start with, so you need a 4 times larger force, which is why the spring constant of the small spring is 4 times higher.
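A tiny numeric sanity check of this (an editorial addition; plain Python), using the series-composition rule $1/k_{\rm eff}=\sum_i 1/k_i$:

```python
# Four identical springs in series behave as one spring that is 4x softer,
# i.e. k_big = k_small / 4.
def series(ks):
    return 1.0 / sum(1.0 / k for k in ks)

k_small = 8.0                     # arbitrary constant of one quarter-piece
k_big = series([k_small] * 4)
print(k_big, k_small / 4)         # both print 2.0
```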
-
For a given spring, $k$ is a constant, as long as you're talking about an ideal spring. In other words, the definition of an ideal spring is that it applies a force proportional to its deformation length (at both of its ends, of course).
I'm afraid both you and your professor are wrong. The correct formula should be:
$k_{new} = k_{orig}*4$
To show this, let's do the following gedankenexperiment. Suppose you have your original spring in tension: it is deformed by a length $L$ and applies the corresponding force $F$.
Now imagine that your spring is actually 4 springs of length $L/4$ connected in series. Each spring is at rest, which means that for every spring the forces applied at both of its ends are equal. Since all the springs are connected and apply forces on each other, all the forces applied at all the spring ends are the same, and obviously they equal $F$.
OTOH each spring is deformed by only $L/4$. Hence - their "constants" are 4 times higher
-
In other words, $k \times l = \text{constant}$, so $k$ varies as $1/l$.
So $K\times l=K' \times l/4$, which gives $K'=4K$, where $K$ is the original spring constant.
-
To supplement the answer by Luboš Motl, I will come to this problem from a Material Science point of view.
What you mean by the inherent property of the spring is not the spring constant; in fact, it is Young's modulus $E$, which is defined to be the same for a given material (and not only for a spring):
$$E = \frac{\text{tensile stress}}{\text{tensile strain}} = \frac{\sigma}{\varepsilon} = \frac{\text{force per area}}{\text{extension per length}} = \frac{F / A}{x / l} = \frac{F l }{x A}$$
Now use this definition to construct the Hooke's Law:
$$F = \frac{EA}{l} x = k x$$
where
$$k = \frac{EA}{l}$$
Now, when cutting the spring into four equal pieces, you are keeping $E$ and $A$ constant (the same material and the same cross-sectional area), but you decrease $l$ by a factor of $4$. This makes the new $k$ four times larger.
Also, $E$ is not strictly constant for a given type of material: it depends greatly on the micro-structure, such as tiny imperfections in the crystalline structure, which influence $E$ in different ways.
Note that a rigorous proof would involve the geometry of the spring as well, but that would only influence the effective size of the $EA$ product; the scaling with length would remain the same.
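A one-line symbolic check of this scaling (an editorial addition, assuming sympy is available):

```python
# k = E*A/l; substituting l -> l/4 multiplies the spring constant by 4.
import sympy as sp

E, A, l = sp.symbols("E A l", positive=True)
k = E * A / l
k_new = k.subs(l, l / 4)
print(sp.simplify(k_new / k))     # 4
```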
Some rambling on $E$
I thought I might talk about why $E$ is constant for a given type of material. We can imagine a slab constructed from lots of tiny springs, each obeying Hooke's law. (This is possible because of one approximation: that the atoms in the material do not move far from their equilibrium positions.)
From energy conservation we already know (see the answer by Luboš Motl) that cutting a spring changes the effective spring constant: each of the $n$ pieces of a spring of constant $k$ has
$$k_{new} = n \cdot k$$ where $n$ is the number of pieces.
This means that the fractional extension is the more sensible quantity to measure, i.e.: $$F = \text{const} \cdot x / l = \text{const} \cdot \varepsilon$$ using the previous definition.
Similarly, if we connect several springs in parallel, the effective $k$ scales with the number of springs we connect, and that number is proportional to the cross-sectional area of the spring: $$F = \text{const}\cdot A \cdot x/l$$ where we can identify the unknown constant with the Young's modulus.
-
http://mathoverflow.net/questions/55725/interesting-examples-of-flasque-sheaves/55728

## Interesting examples of flasque sheaves?
Does anyone know any interesting examples of flasque sheaves? Ideally, I would like to see one that both arises naturally and is geometric in some sense. On the other hand, I know so few examples other than direct products of stalks that I would be happy to see anything new.
-
The sheaf of rational functions on a variety. – a-fortiori Feb 17 2011 at 13:42
Skyscraper sheaves (and their direct products) are flasque. – Charles Staats Feb 17 2011 at 13:44
Another very common way you get flasque sheaves is as pushforwards of other flasques. Sometimes via other functors as well. – Karl Schwede Feb 17 2011 at 13:48
I probably should have been less vague in my statement. Other than direct products of stalks or constant sheaves on irreducible spaces, I don't know of any other examples of flasque sheaves. So I was hoping for something other than these. – AH Feb 17 2011 at 17:01
I am having a hard time thinking of unusual examples of flasque sheaves, but an easier time thinking of reasons why the usual examples are useful, hence perhaps "interesting". E.g. Karl's example reminds us that this fact, plus Serre's vanishing theorem, implies affine morphisms do not change cohomology, i.e. the affine pushforward of any sheaf still has the same cohomology. – roy smith Feb 18 2011 at 17:48
## 4 Answers
Dear Rex, the field of rational functions $\mathcal K_X$ on an integral scheme $X$ (for example an algebraic variety) is flasque and so is the sheaf of its invertible elements $\mathcal K^\ast_X$. This has the nice consequence that the divisor class group $Cl(X)$ of Cartier divisors on $X$ is isomorphic to the Picard group $Pic(X)$ of isomorphism classes of line bundles on $X$. Indeed we have an exact sequence of sheaves of abelian groups on $X$:
$$0\to \mathcal O^\ast_X \to \mathcal K^\ast_X \to \mathcal K^\ast_X/ \mathcal O^\ast_X \to0$$
Taking the associated long exact sequence of cohomology we get the portion
$$\Gamma (X,\mathcal K^\ast_X) \to \Gamma (X, \mathcal K^\ast_X/ \mathcal O^\ast_X ) \to H^1(X,\mathcal O^\ast_X) \to H^1(X,\mathcal K^\ast_X)$$
The cokernel of the first arrow is precisely $Cl(X)$, whereas the cohomology group $H^1(X,\mathcal O^\ast_X)$ is the Picard group $Pic(X)$. And now for the sting: $H^1(X,\mathcal K^\ast_X)=0$ because $\mathcal K^\ast_X$ is flasque, hence acyclic! And we have our isomorphism $Cl(X) \simeq Pic(X)$, the paraphrase of which is that every line bundle on $X$ comes from a Cartier divisor, unique up to linear equivalence.
-
Dear roy, I'm very happy to read you here, but your interesting example deserves a better fate than being just a comment to my answer! Why not post it as a new answer , so that I and others can show our appreciation by upvoting it? – Georges Elencwajg Feb 17 2011 at 22:47
Thank you Georges, I have moved it. I want to emphasize however this is just a brief account of the beautiful discussion in George Kempf's terrific book. Unfortunately this book has no major publisher so many have not seen it. – roy smith Feb 18 2011 at 2:43
An example of a flasque sheaf is the sheaf of hyperfunctions. It has important applications in the theory of D-modules.
-
This sounds interesting. Would you mind saying more? – AH Feb 18 2011 at 12:13
In the spirit of Georges' beautiful example, note that the usual computation of the cohomology of an invertible sheaf on a complete curve proceeds largely by means of the simplest types of flasque sheaves. Recall that, given L, the natural map from rational sections of L to the "principal parts" of those sections is a flasque resolution of L, so it computes H(L). I.e. the induced map on global sections is a linear map of infinite-dimensional spaces with finite-dimensional kernel and cokernel: H^0(L) and H^1(L). In particular, since this is a 2-step complex, H^r(L) = 0 if r > 1.
To compute more, one typically approximates this resolution by a smaller one. Restricting to a fixed divisor D, we get a subresolution L(D)-->L(D)|D with finite-dimensional global sections, hence more useful for computing H(L). Since L(D) restricted to D is also flasque, we get the formula chi(L(D)) - chi(L) = deg(D) for all L, D. Using the result in Georges' answer above, this includes weak Riemann-Roch: chi(L) - chi(O) = deg(L).
This is as far as we can go with only flasque sheaves since L(D) is not flasque. But if we choose D with H^1(L(D)) = 0, we have an acyclic subresolution L(D)-->L(D)|D, of the original flasque resolution of L. This gives a map of fairly explicit finite dimensional spaces with kernel and cokernel isomorphic to H^0(L), H^1(L).
Thus most of the standard theory of invertible sheaves on curves arises from these concrete examples of the simplest flasque sheaves, i.e. constant sheaves of rational sections as in Georges' answer, and direct sums of stalks, illustrating further how useful those apparently trivial cases can be. (see Kempf, Abelian Integrals.)
-
I don't think that they could be very geometric unless you regard spaces with Zariski-like topology as geometric. For example, sheaves naturally arising on smooth manifolds are rather soft, not flasque.
But on an irreducible topological space (e.g. an algebraic variety), there are examples. For example, any locally constant sheaf is flasque.
A useful example is that of injective modules. Assume your space $X$ is endowed with a sheaf of local rings $\mathcal{O}_X$. Then any injective $\mathcal{O}_X$-module is flasque. I don't think that this is geometric though, because injective modules are artificial monsters used to define derived functors rather than naturally arising objects.
Edit. Concerning the title of your question, I think of a flasque sheaf as a synonym for a very very uninteresting sheaf (i.e. for which most statements become trivial).
-
http://mathhelpforum.com/advanced-algebra/53235-full-rank-matrix-problem.html

# Thread:
1. ## full-rank matrix problem
I'm having trouble understanding one part of the proof I'm studying.
The matrix $\left[ \begin{array}{ccc} a & b & c \\ d & e & f \end{array} \right]$ is full-rank.
Now we suppose that this
$\left[ \begin{array}{cc} a & b \\ d & e \end{array} \right]$
is the full-rank part.
Why can we do this?
Thank you for your help, this is really confusing me.
2. Saying that the matrix has full rank means that it is of rank 2, so there are two columns which are independent. You don't really know which two, but for most proofs it doesn't matter. So you say that, without loss of generality, the first two columns are independent and therefore $\left[ \begin{array}{cc} a & b \\ d & e \end{array} \right]$ is of rank 2. The proof would probably work just as well for the 1st and 3rd columns if they are independent, etc.
By the way, you might have a 2x3 matrix in which every two columns are independent, yet it is still of rank 2; for example
$\left[ \begin{array}{ccc} 1 & 0 & 1 \\ 0 & 1 & 1 \end{array} \right]$
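One can verify both claims numerically (an editorial addition, not part of the original thread, assuming numpy):

```python
# The 2x3 example has rank 2, and every pair of its columns is independent,
# yet a 2x3 matrix can never exceed rank 2.
import numpy as np
from itertools import combinations

M = np.array([[1, 0, 1],
              [0, 1, 1]])
print(np.linalg.matrix_rank(M))                          # 2
for i, j in combinations(range(3), 2):
    print((i, j), np.linalg.matrix_rank(M[:, [i, j]]))   # each pair: 2
```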
3. Thank you!
http://physics.stackexchange.com/tags/resistors/new

# Tag Info
## New answers tagged resistors
1
### Finding current using EMF & internal resistance
Hey, you're getting this wrong: the equivalent emf (voltage) of the system will be 16V - 8V, because the two cells have opposite poles facing each other, so there will be a net flow of current (-ve to +ve) according to the cell of greater emf (the 16V cell). Then your total resistance is $$5+1.6+1.4 = 8 \Omega$$ (all are in series). I (current) = E (e.m.f. or voltage)/R (resistance) = 8/8 = ...
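Completing the series-circuit arithmetic as a quick check (an editorial addition; plain Python):

```python
# Opposing EMFs subtract; all resistances add in series.
net_emf = 16.0 - 8.0           # V, opposite poles facing each other
R_total = 5.0 + 1.6 + 1.4      # ohm, external plus both internal resistances
print(net_emf / R_total)       # 1.0 A
```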
1
### Resistors in Parallel
Let the resistance of the original wire be R. Then R = ρ(L/A). Each piece has l = L/5, so R’ = ρ(L/5A), i.e. R’ = R/5. Now 5 resistors of R’ are connected in parallel: 1/R(net) = 5/R + 5/R + 5/R + 5/R + 5/R = 25/R. With the given net resistance of 2 Ω, 1/2 = 25/R, so R = 50 Ω.
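A quick numerical check of this derivation (an editorial addition; plain Python):

```python
# Cut a 50-ohm wire into 5 equal pieces (10 ohm each) and combine in parallel.
R = 50.0
R_piece = R / 5
R_parallel = 1.0 / sum(1.0 / R_piece for _ in range(5))
print(R_piece, R_parallel)     # 10.0, 2.0
```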
-2
### Resistors in Parallel
The wires have resistance. You had 5 resistors in series. If connected in parallel, they give a known parallel resistance.
1
### Resistors in Parallel
A resistor is anything that poses resistance to the flow of charges in a circuit. Its resistance is $$R=\rho \dfrac lA$$ where $\rho$ is a material property, $l$ is the length and $A$ is the cross-sectional area. (The original answer shows a figure of five resistors in parallel and the corresponding circuit diagram.) In a parallel configuration the voltage drop across each ...
2
### Resistors in Parallel
Seems like you're not making the connection between the actual physical setup and the equations. So, here's a translation: 1) A length of wire is a resistor. By resistor what we mean is that when we apply a potential difference $V$ between the two ends (like from a battery) the resulting current $I$ is given by $V = IR$ where $R$ is the resistance. 2) The ...
1
### Confused on Calculating Resistance Distance Matrix
$\Gamma_{ii}$ is the $i$-th entry on the diagonal of $\Gamma = L^+ = (D-A)^+$, $\Gamma_{jj}$ is the $j$-th entry, and $\Gamma_{ij}$ is the entry located at row $i$, column $j$. Thus $\Omega_{ij}$ is a scalar, but you could assemble all such values into a matrix $\Omega$ that gives the resistances between all pairs of vertices.
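As a sketch of how one might compute this (an editorial addition, assuming Python with numpy and networkx; the graph is an arbitrary example with unit resistances):

```python
# Resistance distance: Omega_ij = Gamma_ii + Gamma_jj - 2*Gamma_ij,
# where Gamma is the Moore-Penrose pseudoinverse of the Laplacian L = D - A.
import numpy as np
import networkx as nx

G = nx.cycle_graph(4)                       # 4-cycle of unit resistors
L = nx.laplacian_matrix(G).toarray().astype(float)
Gamma = np.linalg.pinv(L)
diag = np.diag(Gamma)
Omega = diag[:, None] + diag[None, :] - 2 * Gamma
print(np.round(Omega, 3))   # adjacent vertices: 0.75 ohm; opposite: 1.0 ohm
```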
http://physics.stackexchange.com/questions/tagged/continuum-mechanics

# Tagged Questions
Continuum mechanics is a branch of mechanics that deals with the analysis of the kinematics and the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles.
learn more… | top users | synonyms
1answer
76 views
### Why is this thought experiment flawed: A vast lever rotating faster than the speed of light [duplicate]
If there were a vast lever floating in free space, a rigid body with length greater than the width of a galaxy, made of a hypothetical material that could endure unlimited internal stress, and this ...
3answers
82 views
### Why is the (nonrelativistic) stress tensor linear and symmetric?
From wikipedia: "...the stress vector $T$ across a surface will always be a linear function of the surface's normal vector $n$, the unit-length vector that is perpendicular to it. ...The linear ...
1answer
91 views
### What is Relativistic Navier-Stokes Equation Through Einstein Notation?
Navier-Stokes equation is non-relativistic, what is relativistic Navier-Stokes equation through Einstein notation?
0answers
93 views
### 2-D Turbulence - how does it look like?
Consider parallel flow in the X direction over a 2D semi infinite flat plate. If turbulence is 2-D, in which axes should we expect the vortices to form. Also, are there any experimental/visualization ...
3answers
73 views
### Hooke's law limitation question
Let's consider a spring. I am a strong man (well, let's assume) and I am pulling the spring. The work I do is stored in the spring in the form of its elastic potential energy. Then suddenly, ...
1answer
49 views
### Equivalence of turbulence in solid materials
The governing equations for a fluid and a solid are effectively the same and many times analysis can be done for a solid using the Navier-Stokes equations with the equation of state and/or the stress ...
1answer
42 views
### References on wave solutions in continuum mechanics [closed]
I am interested in literature on known wave solutions in continnum mechanics, precisely the following mechanical equation: $$\rho\partial_t^2u_i = C_{ijkl}\nabla_j\nabla_ku_{l}$$ My interest is spread ...
1answer
89 views
### Dispersion relation in continuum mechanics
I'm looking at the vibration of a solid having a lattice structure, they obey the following equation: $$\rho\partial_t^2u_i = C_{ijkl}\nabla_j\nabla_ku_{l}$$ with $u(\vec{x},t)$ the displacement to ...
1answer
100 views
### Normal modes of a flexible rod clamped at only one point
I am interested in the vibrations of a thin, flexible rod that would only be clamped at one point, properly I'd like to calculate its eigenvalue. But the way I learned it in wave mechanics doesn't ...
1answer
428 views
### Calculation of a bending moment
I'd like to calculate the bending moment of a cantilever, fixed at its base, and submitted to a certain stress on a specific spot, but I can't find the proper definition of this bending moment (first ...
2answers
89 views
### How local is the stress tensor?
I am confused by the definition of the stress tensor in a crystal (let's say a semi-conductor), I don't see how it could be "more local" than over an unit cell. I know that in field theory the stress ...
0answers
150 views
### How to solve fixed-fixed beam with finite difference method?
What equations to use on this system to form a matrix $A$ with dimensions $[n,n]$ and load vector $q$ with dimension $[n]$ ? I am trying to get vertical displacement $w$. $$w = A^{-1}\times q$$ ...
0answers
50 views
### Continuum mechanics and effects of stress
Going to word this question a bit more straightforward than I may have before. Also, I'm trying to use baby formulas so I can grasp exactly what's going on. Object A has an elasticity of ...
2answers
181 views
### Shape of wall's deformation wave caused by baseball's impact
Clicking through this year's top sports pictures, I stumbled upon this one. I was wondering about the shape the baseball is leaving on the wall. What phenomenon causes this peculiar shape? Why is ...
1answer
112 views
### (Botanical) branch bending under gravity
I'm a PhD student in maths, and attended my last physics class some 15 years ago, so you can imagine my competence in the field. My supervisor (also not a mechanist) can't tell me how to proceed ...
3answers
418 views
### Why are Navier-Stokes equations needed?
Can't we picture air or water molecules individually? Then, why are Navier-Stokes equations needed, after all? Can't we just aggregate individual ones? Or is it computationally difficult, or ...
1answer
98 views
### How wide does a wall of ice need to be to stay in place?
Let us say that we have unlimited manpower to construct a huge wall of water ice e.g. 200 m tall (700 feet). -and that the wall is placed in a climate, where the temperature never (for your purpose) ...
1answer
170 views
### A differential equation of Buckling Rod
I tried to solve a differential equation, but unfortunately got stuck at some point. The problem is to solve the diff. eq. of hard clamped on both ends rod. And the force compresses the rod at both ...
1answer
160 views
### What is the two dimensional equivalent of a spring?
I'm trying to model isotropic linear elastic deformation in two dimensions. In one dimension, I know that a linear elastic material can be thought of as a spring which obeys Hooke's law \$F=-k\Delta ...
1answer
191 views
### Tensors: relations between physics and linear algebra
In continuum mechanics we use finite deformation tensors to express deformations at a point. The 9 components of the tensor (in reality 6 because of its symmetry) are defined as ...
1answer
64 views
### Difference between using displacement and current configuration as unknown?
We could use either the current configuration $x$ or the displacement $u$ as unknown while solving for the deformation, for example, of a solid object. I want to know what's the difference between ...
1answer
110 views
### Decomposition of deformation into bend, stretch and twist?
I'm wondering is there any way to decompose the deformation of an object into different components? For example, into stretching, bending and twisting part respectively? The decomposition could be ...
2answers
91 views
### A problem of approximation [duplicate]
Possible Duplicate: Why are continuum fluid mechanics accurate when constituents are discrete objects of finite size? When we apply differentiation on charge being conducted with respect to ...
1answer
359 views
### Continuity equation for compressible fluid
A question is given as Consider a fluid of density $\rho(x, y, z, t)$ which moves with velocity $v(x, y, z, t)$ without sources or sinks. Show that $\nabla \cdot \vec J + \frac{\partial \rho ...
1answer
398 views
### A conceptual problem with Euler-Bernoulli beam theory and Euler buckling
Euler-Bernoulli beam theory states that in static conditions the deflection $w(x)$ of a beam relative to its axis $x$ satisfies $$EI\frac{\partial^4}{\partial x^4}w(x)=q(x)\ \ \ \ (1)$$ where $E$ is ...
1answer
215 views
### Boundary conditions of Navier-Cauchy equation
I'm having difficulties with Neumann boundary conditions in Navier-Cauchy equations (a.k.a. the elastostatic equations). The trouble is that if I rotate a body then Neumann boundary condition should ...
0answers
345 views
### Interpretation of Stiffness Matrix and Mass Matrix in Finite Element Method
I would like to have a general interpretation of the coefficients of the stiffness matrix that appears in FEM. For instance, if we are solving a linear elasticity problem and we model the relation ...
0answers
61 views
### Can a wave propagate in an elastic fluid in the absence of volume forces?
A motion (wave) $\mathbf{x}: \mathcal{B}_0 \times [t_0,t_1] \to \mathcal{E}:$ such that $q-o = \mathbf{x}(p,t)=(p-o)+\mathbf{a}_0 cos(\mathbf{k}_0\cdot(p-o) - \omega_0 t)$ can propagate in an elastic ...
0answers
69 views
### Physics for taffy pulling?
I am creating a simulation and am interested in pulling stretchy things and when they break, like taffy. I imagine this is a bit tougher then a simple equation like gravity, but I have no idea. Is ...
2answers
300 views
### Why are continuum fluid mechanics accurate when constituents are discrete objects of finite size?
Suppose we view fluids classically, i.e., as a collection of molecules (with some finite size) interacting via e&m and gravitational forces. Presumably we model fluids as continuous objects that ...
2answers
729 views
### Calculation of the maximum load to the bar
Looking for a way of calculating the maximum weight (W) to the rod with the given length (L) where the rod did not break and that only bend for (b) mm. Need only approximative solution (read: ...
1answer
55 views
### what is a difference in the width of the spinning bar?
The bar with length l, density r, diameter d, Young's modulus E, Poisson's ratio mu, is spinning around the cross-section; what is the change in the width of this bar?
0answers
141 views
### Does a thermally expanding torus experience internal stress?
I'm trying to learn continuum mechanics and thermo-mechanics. As we know, heating an object increases the mean atomic distance $a_0$ of the atoms in a rigid body. Let's assume it is a linear elastic ...
1answer
257 views
### Explain $\rho_{0}\dot{e} - \bf{P}^{T} : \bf{\dot{F}}+\nabla_{0} \cdot \bf{q} -\rho_{0}S = 0$
I am trying to understand the balance-of-energy law from continuum mechanics, the fourth law here. Could someone break this down a bit to help me understand it? From chemistry, I can recall dU = \partial Q ...
1answer
116 views
### In continuum mechanics, what is work potential in the context of total potential energy?
I'm reading a book on the finite element method. Specifically I'm looking at the background material where they are discussing potential energy, equilibrium, and the Rayleigh-Ritz method. The book ...
3answers
277 views
### 2d soft body physics mathematics [duplicate]
Possible Duplicates: Modern references for continuum mechanics Good books on elasticity The definition of rigid body in Box2d is A chunk of matter that is so strong that the distance ...
0answers
52 views
### stress work of uniformly deforming continuum
I have a volume which is deforming (using explicit time-integration scheme) uniformly with velocity gradient $L$ and stress tensor $\sigma$. I would like to determine work done by the volume ...
2answers
558 views
### Calculate the weight a simple plank can support
I'd like to build a simple desk; just a single plank of wood (or a few side-by-side) with solid supports on each end of the desk. What I'm trying to figure out is how thick a plank I want to use for ...
1answer
218 views
### Can we have non continuous models of reality? Why don't we have them?
This question is about Gödel's theorem, continuity of reality and the Löwenheim-Skolem theorem. I know that all leading physical theories assume reality is continuous. These are my questions: 1) Is ...
2answers
621 views
### Good books on elasticity
Can someone suggest good books/textbooks/treatises/etc on elasticity? Thanks.
1answer
738 views
### water flow in a sink
When one turns on the tap in the kitchen, a circle is observable in the water flowing in the sink. The circle is the boundary between laminar and turbulent flow of the water (maybe this is the wrong ...
2answers
741 views
### Stress tensor in a cube with shear forces
I want to calculate stress matrix in a cube with two faces parallel to x axis and perpendicular to z axis (sorry I don't know how can I put a picture in this post). There are two force uniform ...
7answers
920 views
### Rotate a long bar in space and get close to (or even beyond) the speed of light $c$
Imagine a bar spinning like a helicopter propeller, At $\omega$ rad/s because the extremes of the bar goes at speed $$V = \omega * r$$ then we can reach near $c$ (speed of light) applying some ...
8answers
824 views
### Modern references for continuum mechanics
I'm wondering what some standard, modern references might be for continuum mechanics. I imagine most references are probably more used by mechanical engineers than physicists but it's still a ...
http://wiki.math.toronto.edu/TorontoMathWiki/index.php/2009_2010_Dispersive_PDE_Seminar

# 2009 2010 Dispersive PDE Seminar
This page contains information about an informal seminar on Dispersive PDEs during 2009-2010. The seminar is organized by J. Colliander and Hiro Oh.
Of related interest in Toronto (and sometimes cross-listed):
2009_Fall_Dispersive_PDE_Seminar_Resources
## July 27, Zworski, 13:10-14:00 @BA6183
Speaker: Maciej Zworski [1] (Berkeley)
Title: Quantized Poincare maps in chaotic scattering
Abstract: ...
arXiv | Notes: 2010_07_27_Zworski_Notes
## July 26, Pocovnicu, 14:10-15:00 @BA6183
Speaker: Oana Pocovnicu [2] (Université Paris-Sud, Orsay)
Title: Soliton resolution and action-angle coordinates for the Szego equation in the case of rational fraction initial data
Abstract: The Szego equation is a model of a non-dispersive Hamiltonian equation. Like the 1-d cubic Schrodinger equation and KdV, it is known to be completely integrable in the sense that it enjoys a Lax pair structure. (The main operator in this Lax pair is the Hankel operator.) It turns out that a whole class of finite dimensional manifolds, consisting of rational fractions, is invariant under the flow of the Szego equation. In this talk, we consider the Szego equation on the real line. First, we show that solutions with generic rational fraction initial data can be decomposed, as time tends to infinity, into a sum of solitons plus a remainder. ("Soliton resolution".) Unlike the case of KdV, this remainder does not disperse, confirming the non-dispersive character of the Szego equation. To prove this result, we solve the inverse spectral problem for the Hankel operator in the finite dimensional case and find an explicit formula for solutions. Then, we use this result to introduce action-angle coordinates on the manifolds of generic rational fractions. In particular, this shows that the trajectories live on Lagrangian toroidal cylinders which are non-compact generalizations of the Lagrangian tori in the Liouville-Arnold theorem.
Notes: 2010_07_26_Pocovnicu_Notes
## July 26, Pocovnicu, 11:10-12:00 @BA6183
Speaker: Oana Pocovnicu [3] (Université Paris-Sud, Orsay)
Title: Liouville-Arnold Theorem and action-angle coordinates (expository)
Abstract: In this talk, we consider finite dimensional integrable systems. In particular, we discuss the Liouville-Arnold Theorem, the construction of the action-angle coordinates, and their application in perturbation theory.
Notes: 2010_07_26_Pocovnicu_Notes
## June 22, Oh, 13:10-14:00 @BA6183
Speaker: Tadahiro Oh [4] (U. Toronto)
Title: Normal form and I-method
Abstract: In this talk, I will discuss the main ideas and steps in Bourgain's paper: A remark on normal forms and the "$I$-method" for periodic NLS, J. Anal. Math. 94 (2004), 125--157.
Notes: 2010_06_22_Oh_Notes
## June 18, Oh, 13:10-14:00 @BA6183
Speaker: Tadahiro Oh [5] (U. Toronto)
Title: Unconditional Well-Posedness of mKdV on T
Abstract: In 1993, by introducing what is now known as the Bourgain space, Bourgain proved local-in-time well-posedness of the periodic modified KdV (mKdV) in $H^s, s \geq 1/2$. In this result, the uniqueness holds in $C([0, T]; H^s)$ intersected with this Bourgain space, since solutions in his construction necessarily belong to this auxiliary function space. In this talk, we present a simple proof of local well-posedness of mKdV for $s \geq 1/2$ based on differentiation by parts, which shows that the uniqueness holds unconditionally in $C([0, T]; H^s)$ (i.e. without intersecting with any auxiliary function space). This is a joint work with Soonsik Kwon.
Notes: 2010_06_18_Oh_Notes
## June 16, Czubak, 13:10-14:00 @BA6183
Speaker: Magdalena Czubak [6] ([7])
Title: Introduction to Gauge Theory
Notes: 2010_06_16_Czubak_Notes
## June 02, Richards, 13:10-14:00 @BA6183
Speaker: Geordie Richards (TMW) ([8])
Title: Invariant Measures for Hamiltonian PDE
Abstract: We discuss the existence of invariant measures for Hamiltonian PDE, by interpreting these measures as the weak limit of finite dimensional measures in frequency space. As an example, we prove invariance of the Gibbs measure for the KdV equation.
Notes: 2010_06_02_Richards_Notes
## May 25, Muñoz, 14:10-15:00 @BA6183
Speaker: Claudio Muñoz [9] (Versailles)
Title: On the soliton dynamics under a slowly varying medium for generalized KdV equations
Abstract: We consider the problem of soliton propagation, in a slowly varying medium, for generalized Korteweg-de Vries (gKdV) equations. We study the effects of inhomogeneities on the dynamics of a standard soliton. We prove that slowly varying media induce on the soliton solution large dispersive effects at large time. Moreover, unlike the standard gKdV equations, we prove that there is no pure-soliton solution in this regime.
arXiv | Notes: 2010_05_25_Muñoz_Notes
## May 19, Selvitella, 13:10-14:00 @BA6183
Speaker: Alessandro Selvitella (TMW) (SISSA)
Title: Introduction to Birkhoff Normal Form
Abstract: Some background and discussion of the Birkhoff Normal Form aimed at looking at recent developments to extend these ideas to the PDE setting.
Notes: 2010_05_19_Selvitella_Notes
## May 12, Colliander, 13:10-14:00 @BA6183
| | | | | | | |
|-------------------------------------------------------------------------------------------------------------|-----------------------------|-----------|--------|-------------|--------|------------|
| James Colliander (TMW) (Toronto) | Dispersive PDE Seminar | Wednesday | May 12 | 13:10-14:00 | BA6183 | |
| Title: Organizational meeting for Summer | | | | | | |
| arXiv | 2010_05_12_Colliander_Notes | | | | | 2010_05_12 |
## December 10, Anapolitanos, 1:30pm-3:00pm
Speaker: Ioannis Anapolitanos
Title: A simple derivation of mean-field limits for many-body quantum systems.
Abstract: In this talk we will discuss a new strategy for handling mean-field limits of quantum mechanical systems. The strategy is developed in a series of papers by Pickl and Knowles-Pickl. The result roughly says that if the initial state of the many-body system is a multiple product of the same one-particle state, then its evolution remains close to a multiple product of a one-particle state that evolves according to the Hartree equation. The closeness is in the sense of expectation values of a certain class of quantum observables. The method is based on "counting" the particles that fail to be in the state that evolves according to the Hartree equation. We will discuss a simple version of the general results, where we will see how we can control the number of such particles. We will also discuss how the latter implies closeness in the topology of expectations of observables.
Notes from the talk.
## October 29, Czubak, 1:30pm-3:00pm
Speaker: Magdalena Czubak
Title: Local conservation laws, virial identities, and interaction Morawetz estimates.
Abstract: Since the pioneering work of C. S. Morawetz on the nonlinear Klein-Gordon equation, Morawetz-type estimates have become an important tool in the study of NLS. In particular, the interaction Morawetz estimate in 3D is fundamental for the theory of GWP and scattering for both the energy critical and subcritical NLS. We will outline two proofs of this estimate: one based on the original averaging argument of CKSTT, and the second based on the tensor product idea of A. Hassell. Local conservation laws and the generalized virial identity of Lin and Strauss are a starting point for the two methods. Time permitting, we will discuss possible extensions to other equations.
Remarks:
Extensions of these ideas have been obtained by Colliander-Grillakis-Tzirakis and Planchon-Vega. Here are some notes from a talk on the CGTz work. A nice survey of these developments has been written by Ginibre-Velo.
## October 22, Richards, 1:30pm-3:00pm
Speaker: Geordie Richards
Title: Critical local well-posedness and perturbation theory.
Abstract: The proof of global well-posedness and scattering for the quintic defocusing NLS on $\mathbb{R}^{3}$ relies on the stability of $L^{10}_{t\in I,x\in\mathbb{R}^{3}}$ spacetime bounds (on the solution) under any perturbation of the initial data whose linear evolution is sufficiently small in $L^{10}_{t\in I}\dot{W}^{1,30/13}_{x\in\mathbb{R}^{3}}$. (Here $I$ is a compact interval.) This stability holds even for large energy perturbations of so-called near solutions. We discuss these issues and establish this stability by proving two perturbation Lemmas.
Remarks: See Lemmas 3.9 and 3.10 from Section 3 of CKSTT Energy Critical Paper.
## October 15, Oh, 1:30pm-3:00pm
Speaker: Hiro Oh
Title: Basic $H^1$-subcritical scattering theory: defocusing cubic NLS on $\mathbb{R}^3$.
Abstract: In this talk, we discuss the basic scattering theory for the $H^1$-subcritical setting. First, we review the local-in-time Cauchy theory in $H^1(\mathbb{R}^3)$ in both subcritical and critical settings. Then, we discuss the issue of the existence of the wave operator as well as the asymptotic completeness for the defocusing cubic NLS. We prove the existence of the wave operator by considering the Cauchy problem from $t = +\infty$ to $t = 0$. Then, we prove the asymptotic completeness in several steps. We first show the asymptotic completeness from the strong space-time bound. Then, we show that such a strong bound follows from a weak space-time bound. Obtaining such a weak space-time bound (called the Morawetz inequality) is one of the main topics of the talk on Oct. 29 by M. Czubak.
Remarks: This talk will expose aspects of the paper posted here. Some related notes are posted here. Hiro's notes are posted here.
## October 8, Colliander, 1:30pm-3:00pm
Speaker: J. Colliander
Title: Overview of the energy critical defocusing NLS
Abstract: I will give an overview of the main pieces of the proof of scattering for the 3d energy critical quintic defocusing NLS. A pictorial overview of the topics is posted here. Subtopics will be developed in more detail later in the semester. Two talks on the same topic are posted here and here.
## October 1, Oh, 1:30pm-3:00pm
Speaker: Hiro Oh
Title: Cauchy problem of the cubic NLS on $\mathbb{T}$ (Part 2)
Abstract: I will discuss the ill-posedness issues as well as the weak continuity of the solution map.
Hiro's Lecture Notes
## September 24, Oh, 1:30pm-3:00pm
Speaker: Hiro Oh
Title: Cauchy problem of the cubic NLS on $\mathbb{T}$ (Part 1)
Abstract: In this (plus alpha) talk, I will discuss the basic well/ill-posedness issues for the cubic NLS on $\mathbb{T}$. The topics include:
1. Basic properties of $X^{s,b}$ spaces: definition, transference principle, linear estimates, time localization, etc.
2. Strichartz estimates: on $\mathbb{R}^d$: idea of proof, dimension counting for admissible pairs. On $\mathbb{T}$: $L^4$-Strichartz (Zygmund, '74), $L^6$ and $L^4$-Strichartz (Bourgain '93). This includes the number-theoretic counting.
3. Well-posedness in $L^2(\mathbb{T})$.
4. Ill-posedness below $L^2(\mathbb{T})$: failure of smoothness of the solution map (Bourgain '97), failure of uniform continuity (Burq-Gerard-Tzvetkov '02), discontinuity as a result of failure of weak continuity in $L^2(\mathbb{T})$ (Molinet '09).
If time permits, I may discuss the weak continuity of the $L^2$-subcritical NLS on $\mathbb{R}$ (Cui-Kenig '09) as well as the well-posedness issue of the periodic (Wick ordered) cubic NLS outside $L^2(\mathbb{T})$.
Hiro's Lecture Notes | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 16, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8553093075752258, "perplexity_flag": "middle"} |
http://physics.stackexchange.com/questions/37622/negatve-mass-inside-a-black-hole?answertab=votes | # Negatve mass inside a black hole
In Hawking radiation, one half of a virtual pair falls through the horizon, and this particle has negative energy.
What would an observer inside the horizon see when encountering these negative-energy particles? How do they interact with ordinary matter?
-
## 2 Answers
The Schwarzschild solution for a neutral black hole is $$c^2 {d \tau}^{2} = \left(1 - \frac{r_s}{r} \right) c^2 dt^2 - \left(1-\frac{r_s}{r}\right)^{-1} dr^2 - r^2 \left(d\theta^2 + \sin^2\theta \, d\varphi^2\right)$$ You see that the terms $dt^2$ and $dr^2$ are multiplied by $(1-r_s/r)$ or its inverse where $r$ is the radial coordinate and $r_s$ is a constant, the Schwarzschild radius.
An important subtlety of this $(1-r_s/r)$ is that it becomes negative for $r\lt r_s$; this is true for any black hole beneath its event horizon. That's why in the black hole interior, the changes of the coordinate $r$ are actually timelike and the changes of the coordinate $t$ are spacelike; the roles of space and time are interchanged in the interior relative to the exterior!
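As a quick sanity check of the sign (a worked evaluation added here, not part of the original answer): $$1-\frac{r_s}{r}\bigg|_{r=2r_s}=\frac{1}{2}>0,\qquad 1-\frac{r_s}{r}\bigg|_{r=r_s/2}=-1<0,$$ so beneath the horizon the coefficient of $c^2dt^2$ flips sign: $t$-displacements become spacelike and $r$-displacements timelike, exactly as stated above.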
When we say that the outgoing/infalling particles from the pair have positive/negative energy, we are talking about the energy that generates translations of the $t$ coordinate. However, the $t$ coordinate is really spacelike in the black hole interior so the $t$-component of the energy-momentum vector is interpreted as a spatial component of the energy-momentum vector by observers inside.
It means that from the viewpoint of the internal observers who are capable of observing the infalling particle (and not all of them can!), it is just an ordinary particle with some value of the momentum $p_t$, which is a spatial component and may unsurprisingly be both positive and negative. There is certainly no "new kind of matter" occurring in general relativity right beneath the horizon. All the local physics obeys the same laws as it does outside the black hole.
This general assertion is a special example of the fact that the event horizon is a coordinate singularity, not an actual singularity. When one chooses more appropriate, Minkowski-like coordinates for the region of the spacetime near the horizon, it looks almost flat, as seen from the fact that the Riemann curvature tensor has very small values, at least if the black hole is large enough. So an observer crossing the event horizon of a large enough black hole doesn't feel anything special at all. His life continues for some time before he approaches the singularity, and this is where the curvature becomes intense and where he's inevitably killed by extreme phenomena. But life may continue fine near the horizon and even beneath it.
Note that the energy is the only conserved component of the energy-momentum vector on the Schwarzschild background, because this solution is time-translationally invariant but surely not space-translationally invariant. Due to the intense curvature caused by the black hole, one must be careful and not interpret individual components of vectors "directly physically". We saw an example that what looks like a temporal component of a vector, namely energy, from the viewpoint of the observer at infinity, can really be a completely different, spatial, component from the viewpoint of coordinates appropriate for a different observer, one who is inside.
-
Let me try to explain first what is understood in this context by a negative energy particle. Then we will see that it behaves exactly as a normal particle, and in fact it is normal.
When the pair of virtual particles is created, its total energy is $0$. If one of the particles moves faster, its kinetic energy is larger, and that particle is the one having positive energy (relative to the event horizon). The potential energies are initially the same, since the particles are created at the same position. The potential energy is negative for both particles, since you have to give energy to the particles to move them far from the black hole.
The only difference between the negative energy particle and the positive energy one is that the negative energy particle has small velocity (w.r.t. the event horizon), so that its rest plus kinetic energy is smaller than the modulus of its potential energy. If the positive energy particle moves fast enough, it may escape the black hole, but the negative energy particle will fall in the black hole.
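Schematically (a hedged bookkeeping sketch added here, in the same Newtonian spirit as this answer, with $T_\pm$ denoting the rest-plus-kinetic energies of the two particles and $U<0$ their common potential energy at the creation point): $$E_\pm=T_\pm+U,\qquad E_+ + E_-=0,$$ so the slower member has $E_-=T_-+U<0$ precisely when $T_-<|U|$, even though nothing about it is locally unusual.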
According to the principle of equivalence, it doesn't matter if the reference frame is near the event horizon of a large black hole, or far from any strong gravitational field. Locally, the physics near the event horizon, and even inside the black hole, is the same as anywhere. So, the fact that the particle is very close to the source of gravity doesn't make it behave differently. The fact that it has smaller kinetic energy also doesn't matter, since the kinetic energy depends on the reference frame. So, nothing will make the negative energy particle behave differently than a normal particle. In particular, it interacts like a normal particle.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9293445348739624, "perplexity_flag": "head"} |
http://amathew.wordpress.com/bibliography/ | # Climbing Mount Bourbaki
Thoughts on mathematics
## Bibliography
Here is a (periodically updated) list of books and sources that I have referred to, or plan to in the future, sorted by field:
Algebra
• Representation Theory: A First Course, by William Fulton and Joseph Harris
• Algebra, by Serge Lang
• Commutative Algebra: with a View Toward Algebraic Geometry, by David Eisenbud.
• Commutative Algebra, by Nicolas Bourbaki
• An introduction to Homological Algebra, by Charles Weibel
• Introduction to Commutative Algebra, by Michael Atiyah and Ian Macdonald
• Linear Representations of Finite Groups, by Jean-Pierre Serre
• Lie Groups and Lie Algebras, by Jean-Pierre Serre
• Introduction to Lie Algebras and Representation Theory, by James Humphreys
• Complex Semisimple Lie Algebras, by Jean-Pierre Serre
Algebraic geometry
• Algebraic Geometry, by Robin Hartshorne
• Éléments de Géométrie Algébrique, by Alexandre Grothendieck and Jean Dieudonné
• FGA Explained, by Barbara Fantechi
• The Geometry of Schemes, by David Eisenbud and Joe Harris
• Basic Algebraic Geometry, by Igor Shafarevich
• Topologie algébrique et théorie des faisceaux, by Roger Godement
• Algebraic curves and Riemann surfaces, by Rick Miranda
• Introduction to Algebraic Geometry and Algebraic Groups, by Michel Demazure and Peter Gabriel
Differential geometry
• Differential Geometry, Lie Groups, and Symmetric Spaces, by Sigurdur Helgason
• Morse Theory, by John Milnor
• A Comprehensive Introduction to Differential Geometry, by Michael Spivak
• Foundations of Differential Geometry, by Shoshichi Kobayashi and Katsumi Nomizu
• Riemannian Geometry, by Manfredo do Carmo
Analysis
• Functional Analysis, by Peter Lax
• Real and Complex Analysis, by Walter Rudin
• Singular Integrals and Differentiability Properties of Functions, by Elias Stein
• Riemann Surfaces, by Hershel Farkas and Irwin Kra
• Partial Differential Equations, by Michael Taylor
• Introduction to Partial Differential Equations, by Gerald Folland
Number theory
• Algebraic Number Theory, by Serge Lang
• Local Fields, by Jean-Pierre Serre
• Algebraic Number Theory, by John Cassels and Albrecht Fröhlich, eds.
• The Arithmetic of Elliptic Curves, by Joseph Silverman
• A Course in Arithmetic, by Jean-Pierre Serre
• Introduction to Cyclotomic Fields, by Lawrence Washington
Logic and computer science
• Introduction to the Theory of Computation, by Michael Sipser
• Introduction to the Metamathematics of Algebra, by Abraham Robinson
• Lectures on the Hyperreals, by Robert Goldblatt
• Mathematical Logic, by H.D. Ebbinghaus, J. Flum, and W. Thomas
• Model Theory, an Introduction, by David Marker
• Computational Complexity, by Christos Papadimitriou
Dynamical systems and ergodic theory
• Introduction to Ergodic Theory, by Peter Walters
• Ergodic Theory, by Paul Halmos
• Introduction to the Modern Theory of Dynamical Systems, by Anatole Katok and Boris Hasselblatt
Topology
• Elements of Algebraic Topology, by James Munkres
• Algebraic Topology, by Allen Hatcher
• Algebraic Topology, by Edwin Spanier
• Algebraic Topology: Homotopy and Homology, by Robert Switzer
• Topology and Geometry, by Glen Bredon
• Topology, by James Dugundji
• General Topology, by Nicolas Bourbaki
• A Concise Course in Algebraic Topology, by Peter May
• Spectral sequences in Algebraic Topology, by Allen Hatcher
• Homotopy theory, by Sze-Tsen Hu
• Stable homotopy theory and generalized homology, by J. Frank Adams
• Characteristic Classes, by John Milnor
Online sources
Here are some online sources:
• A Course in Riemannian Geometry, by David Wilkins
• Introduction to Representation Theory, by Pavel Etingof et al.
• James Milne’s notes on a variety of subjects (but aiming towards algebra and number theory)
• Liviu Nicolaescu’s notes on a variety of subjects in geometry and topology
• Allen Hatcher has notes and online textbooks on algebraic topology
• Ana Cannas da Silva has lecture notes on symplectic geometry, among other things
• Shlomo Sternberg’s online book on Lie algebras
• David Mumford and Tadao Oda’s book on algebraic geometry
Here is a more current reading list:
• SGA 1, 4, 4 1/2; EGA
• Kashiwara and Schapira, Categories and Sheaves
• Lurie, Higher Topos Theory
• Tamme, Introduction to Etale Cohomology
### 44 Responses to “Bibliography”
1. Arnav Says:
August 25, 2010 at 8:56 pm
Very impressive! I wish you the best at Harvard; please feel free to contact me (contact information generally available through the harvard facebook) if you have any questions about Harvard, or math more generally.
1. Akhil Mathew Says:
August 26, 2010 at 9:45 pm
Thanks! I suppose I’ll see you around campus.
2. Tom Says:
September 15, 2010 at 1:05 am
Akhil, you wrote that you read Real and Complex Analysis. How did you find the exercises? I've had a crack at that book, and the exercises in the first 2 chapters were pretty OK, as was the 4th chapter, but the third chapter was ridiculously difficult. If you've done it, could you offer me some hints on how to do the exercises in chapter 3, specifically exercise 5? Also, what was your insight on the Riesz-Fischer theorem?
Thanks!
1. Akhil Mathew Says:
September 26, 2010 at 9:03 am
Sorry that your comment took so long to appear—it got caught in the spam filter, and I only just saw it.
I think that, in general, Rudin’s book has nice exercises. For the one you asked about, geometrically the condition $\phi((x+y)/2) \leq (\phi(x)+\phi(y))/2$ says that on the graph of $\phi$, given two points $(x, \phi(x)), (y, \phi(y))$, the point corresponding to the midpoint $(s+y)/2$ lies below the line drawn through the two initial points. Repeating this inductively, one can get the desired inequality on $\phi(tx+(1-t)y)$ for $t$ a dyadic fraction, and continuity then implies it for all $t$ in the unit interval. If I am not mistaken, this is the idea of the argument (which can also be done formally).
The Riesz-Fischer theorem essentially says that each Hilbert space is isomorphic to some $\ell^2(A)$ for some $A$ (which is determined up to cardinality, and can be taken e.g. as an orthonormal basis). Some people call different things by the same name, though.
1. Tom Says:
September 29, 2010 at 11:10 pm
Hi Akhil,
Sorry I think I wasn’t clear. Exercise 5 is the one about a measure mu with mu(X) = 1 and the question asks to show that the L^r norm is <= L^s norm when r<s. Do you have any ideas on how to do it?
BTW, how far have you gotten into the book? Would you recommend the complex analysis portion or should I go to another book for that? Also would you recommend Rudin's functional analysis as the next step in analysis or are there better functional analysis books on the market?
Thanks!
1. Akhil Mathew Says:
September 29, 2010 at 11:21 pm
My mistake. For this one, I believe you can apply the Hölder inequality with one of the functions being $f$ and the other being 1. The condition of measure one is necessary for one of the factors to be one (in general, this argument shows that on a space of *finite* measure, $L^r \subset L^s$ when $r>s$, and the inclusion is continuous).
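Spelled out (a short worked version of that hint, added here): with $\mu(X)=1$ and $r<s$, Hölder with conjugate exponents $s/r$ and $(1-r/s)^{-1}$ gives $$\|f\|_r^r=\int_X |f|^r\cdot 1\,d\mu\le\left(\int_X |f|^s\,d\mu\right)^{r/s}\left(\int_X 1\,d\mu\right)^{1-r/s}=\|f\|_s^r,$$ hence $\|f\|_r\le\|f\|_s$.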
I went through all of Rudin’s book at some point. I actually think the complex analysis part of the book is extremely cleanly done. It gets into some interesting material (e.g. nontangential estimates on Poisson integrals, Hardy spaces), but also does the basic material (e.g. Cauchy’s theorem) very thoroughly and rigorously (in that case, by giving an argument which is actually rather recent).
Rudin’s functional analysis book seemed comparatively dry to me, but it is thorough. I have enjoyed the book by Lax. Anyway, I would recommend talking to someone who actually knows analysis for such advice, though.
1. Tom Says:
September 30, 2010 at 3:05 am
Hi Akhil,
Firstly, sorry. I use a shared computer and had your website open and I think someone posted some kind of vulgarity on your website using the same computer (and I think the vulgarity came under my name too). Hopefully the spam filter caught it or something but in case it didn’t, please ignore it.
Anyway, may I ask how long you took to do Rudin’s real and complex analysis? It’s taking me a while. I don’t want to be spending too much time on it since it might not be worth the investment but would something like 1 year be about par? Or should I be finishing the book quicker? What was your experiences?
When you did the book, could you do all the exercises and were they do-able? I mean, did it take you “not so long” to do most or all of them? Just interested. I’m kind of hoping I’m not alone in my attacks at the problems that seem to come to no end! I know some very important results are there in the exercises so I want to do ‘em all so to speak but I don’t want to be spending too much time on them. What’s your advice on how to approach them and how hard they are? I know they’re interesting but are there any exercises whose proofs go for a page or longer?
Thanks Akhil!
Tom
3. Tom Says:
September 30, 2010 at 3:08 am
BTW, you wrote “I’d recommend talking to someone who actually knows analysis for such advice, though”. Surely this is modesty! I mean I’ve heard professors in mathematics say that completing Rudin’s book is like more than what an average math student learns in his entire PhD in analysis. And I’ve also heard that once you’ve done Rudin’s real and complex analysis, you could well begin reading books that would get you to a point of research in the area. Is this talk of Rudin’s book being “so great” just over-emphasising the point or are you just being modest?
1. Akhil Mathew Says:
September 30, 2010 at 8:57 am
I’m pretty sure a PhD student (especially in analysis) needs to and should know much more than what’s in Rudin, which is very much an introductory textbook. I’m afraid I can’t evaluate the last claim, because I haven’t read too many research papers in analysis.
I usually read and re-read books, so I can’t answer your question on length. I also work on the exercises in a rather piecemeal fashion. I never took a formal course on this material, but a more systematic approach would probably have been better. Some of the exercises are quite difficult.
1. Tom Says:
October 2, 2010 at 4:41 am
At Harvard, the requirement for PhD students seems to be only a knowledge of Rudin’s text for the analysis portion of the qual. exams. I get that PhD students in analysis need to know more but would it be fair to say Rudin is only an introductory book in analysis? I mean just by way of comparison and nothing more, no university in the world requires a knowledge of analysis for the general qual. exams more than what’s covered in Rudin.
You say Rudin’s functional analysis is a good book. Would that be research level in functional analysis or would you need to know more to do research in functional analysis?
I’m just wondering, surely there’s some standard of research. I mean reading Rudin takes a while and you don’t have 10 years to do a PhD! So there must be some finite value of textbooks one should be reading in analysis to get to a point of research and considering that not all undergraduates are as nearly well-prepared as yourself.
I understand that you’re still a student but you seem to have done a bit of mathematics in several directions. Which direction are you most experienced in and can you tell how much one should know before doing research in that direction for example? There’s this dilemma nearly all PhD students have: It’s near impossible to do research in topic X unless you have the “right” background but what is the right background? What’s your experiences with this and how much someone should know to do research? Thanks Akhil …
1. Akhil Mathew Says:
October 2, 2010 at 8:29 am
I believe Rudin is considered an introductory book in analysis in the same sense that Hartshorne is an introductory algebraic geometry book, or Spanier an intro algebraic topology book. I.e., the material is foundational, but the prior acquaintance assumed is at the level of an intro-level undergraduate course (e.g. point-set topology, elementary real analysis); "introductory" does not necessarily mean "easy reading."
I’m afraid that I simply don’t know enough to answer your question about the background in functional analysis to do research. My current interests tend to be reflected in the choice of topics (i.e., algebraic topology as I write this), but I have not done research in it, and even had I done a little I would not be qualified to comment. Perhaps you might find this MO thread interesting, though.
1. Tom Says:
October 3, 2010 at 1:40 am
OK, I understand. That seems to make sense. But can you please clarify about Hartshorne? I mean, R&C and algebraic topology don't assume much background, but Hartshorne does. I've heard you need to have read the tome Commutative Algebra by Eisenbud. And surely that's grad. level background.
Actually that brings me to an interesting question which I'd like to ask you. I know you read Hartshorne earlier (or started reading it). Can you tell me, based on what you've read so far, how much background is necessary to read Hartshorne? I mean, is Atiyah and Macdonald enough for the comm. algebra, or do you need more, like Eisenbud? Basically I'm interested in the comm. algebra background necessary, but it'd also be nice if you could tell me the necessary background in the other areas.
Thanks!
1. Akhil Mathew Says:
October 3, 2010 at 8:40 am
There are definitely points in Hartshorne that require more than what’s covered in Atiyah-Macdonald (e.g. the section on Kahler differentials). I’m pretty sure that Eisenbud includes everything you need, though. On the other hand, if you’re willing to accept the properties of Kahler differentials without proof, you can just read Hartshorne straight away I suppose. You also need to know basic general topology and homological algebra (e.g. some familiarity with derived functors). (Note that Hartshorne is generally *not* used as a first course on algebraic geometry, despite the fact that it opens with a chapter on varieties over alg. closed fields.)
4. Wei Yu Says:
October 3, 2010 at 1:57 am
I am a Chinese student and would like to ask you a few questions; my English is not very good, so I may not be able to express what I mean. On differential geometry: 1. how to find a Griffiths-positive metric on an ample vector bundle; 2. on moduli spaces, for example the moduli space of Calabi-Yau manifolds, how to find new abstract structures on it. I do not know if you have interest in this area; which areas of mathematics are you interested in? Are you interested in physics, for example string theory, quantum field theory, mirror symmetry? I would like you to suggest a way to learn. I am very worried about not properly expressing what I mean, and hope to communicate with you more about mathematics or physics. I am striving to learn English now, and hope to be friends with you.
1. Akhil Mathew Says:
October 3, 2010 at 8:35 am
Dear Wei, my background is insufficient for me to answer your questions. Perhaps you should try asking on MathOverflow.
5. Wei Yu Says:
October 3, 2010 at 3:42 am
Why does every pure sheaf have a unique Harder-Narasimhan filtration, while a semi-stable sheaf need not have a unique Jordan-Holder filtration? The geometry of the moduli space of sheaves is very difficult. Can one prove the existence of a symplectic structure on the moduli space of quasi-coherent sheaves? Are you interested in hyperbolic geometric flows? For example, hyperbolic Ricci flow, hyperbolic Kahler-Ricci flow, hyperbolic mean curvature flow, hyperbolic Calabi flow, and so on.
6. Tom Says:
October 3, 2010 at 8:57 pm
(previous thread was getting skinny!)
Thanks Akhil. I appreciate your advice. But can you tell me what book you used for a first course in alg. geo.? If you search up "alg. geo. first course" you end up with this book with that title by some guy called "Joe Harris". How did you find this book by Joe Harris? Did you read it, and if so how much background do you need to tackle it?
Also, is Maclane a good book for homological algebra? I see you've used Weibel, but how's Saunders Maclane's book? Both look good but I'd appreciate your views on which is good in which respects.
Finally, did you read A&M and then Hartshorne, or did you read A&M, Eisenbud and then Hartshorne? I'm sometimes uncomfortable with accepting facts as true without knowing their proofs. Did you do this with Hartshorne, or did you know all the necessary comm. algebra before reading it? If not, how did you find accepting facts without proofs in Hartshorne?
Thanks Akhil.
1. Akhil Mathew Says:
October 3, 2010 at 9:59 pm
I think I have used Shafarevich and James Milne’s notes (cf. the link on the page) as intro sources. I haven’t read Harris’s book, I’m afraid, but it’s probably good.
Maclane’s book is on general category theory, not homological algebra (I don’t think it covers derived functors, for instance). I didn’t really read this books in any kind of order, but it probably makes sense to start at least with A/M before trying Hartshorne (or any other book on schemes). I don’t think reading all of Eisenbud (which is a long book) is necessary before one starts Hartshorne; you can always refer back as necessary.
1. Tom Says:
October 4, 2010 at 7:56 am
Sorry, I think I wasn’t clear. I was referring to “Maclane’s Homology” book
in the grundlehren math series. I’m pretty sure this book covers hom.
algebra. But I think you’re right about Maclane’s “Categories for the working
mathematician” which is about category theory.
Algebraic geometry is such a interconnecting discipline. Do you need to
know things like algebraic topology or complex analysis (or even number theory) to read Hartshorne? Or are the prerequisites just restricted to comm.
algebra and a graduate algebra course?
I heard of this excellent book called “Principles of Algebraic Geometry” by Harris and Griffiths. Have you attacked this book yet? It looks like a real killer in terms of background math necessary but it also looks like the holy bible of alg. geo. I don’t think a knowledge of schemes is necessary for this book but it looks pretty good on the classical side of things. You might want to have a look at it if you haven’t already. I think it might supplement Hartshorne well since you’ve already done diff. geometry complex analysis and the like which are the only prereqs. for the book.
Thanks for being patient with me and sorry for asking so many questions! I’m
very much a beginner in alg. geo. I’m kind of scratching my head. The subject is like this vast theory of theories and it seems like there’s no clear path to take. How do you plan to approach alg. geo.?
1. Akhil Mathew Says:
October 4, 2010 at 9:01 am
Ah. Right, I haven’t read _Homology._ Ditto for Griffiths and Harris. Hartshorne doesn’t invoke any algebraic topology, but Griffiths and Harris do, as well as some several complex variables (which is sketched at the beginning). Lack of prerequisites prevented me from reading G+H in the past, but I should take another look sometime soon. I understand it’s standard.
For algebraic geometry, I’m currently not actively studying the subject outside of class, because I am distracted with algebraic topology. If I were, though, (and I probably will be soon) I would probably be trying to a) read Hartshorne and EGA and b) looking at other scheme-theoretic books, e.g. Ueno’s and Liu’s.
For how to learn algebraic geometry, cf. my question on MO: http://mathoverflow.net/questions/1291/a-learning-roadmap-for-algebraic-geometry
You will find answers from people immensely more qualified than I.
7. jujahju Says:
October 15, 2010 at 11:50 pm
Akhil, how do you work efficiently? I hear harvard have tonnes of work to do so how do you manage your time? Do you get any sleep?
1. Akhil Mathew Says:
October 17, 2010 at 10:03 pm
I wish I knew a good answer to that question!
8. gffsjr Says:
November 23, 2010 at 3:53 pm
Hi Akhil, I’m a undergraduate student at UFPI Brazil,and I want to study complex algebraic geometry, but first I know that I should learn the basic of algebraic geometry, what good references could you give to me for a first read?I’m in the second year and have a background in elementary algebra(I studied Dummit’s book until module theory) , topology and Real analysis.By the way I’m building a blog too,I hope see your comments there! Thanks !
1. Akhil Mathew Says:
November 23, 2010 at 4:21 pm
Shafarevich’s book is what we are using for algebraic geometry in my (introductory) course, though I have not read it all that carefully. I can also vouch for Fulton’s _Algebraic curves_ (available at people.reed.edu/~davidp/332/CurveBook.pdf), which I found useful in preparing for a reading course on elliptic curves some time back. Also, I recommend Milne’s online notes on algebraic geometry.
1. gffsjr Says:
November 23, 2010 at 8:29 pm
Thanks for the references. By the way, could you give me a good introductory text in complexity theory? Thanks again.
1. Akhil Mathew Says:
November 24, 2010 at 12:33 am
I know very little complexity theory. But my understanding was that Papadimitriou was the standard introduction.
9. David Says:
February 10, 2011 at 9:47 pm
Akhil, have you heard of the book “Partial differential equations” by Lawrence Evans? It’s a really good book on PDE’s and I noticed you listed some books on the topic above, so if you’re interested in PDE’s, you might wish to have a look at this book.
Also might be worthwhile taking a look at Gunning and Rossi’s “Several Complex Variables” at some point because of its connections with algebraic geometry. The book is very good too.
1. Akhil Mathew Says:
February 11, 2011 at 9:38 am
Dear David, thanks! I do intend to read those books at some point in my undergraduate education, perhaps this summer: the problem is, during the academic year, I seem to get very little time for sustained reading on a given topic because of coursework. (I tried to read Gunning-Rossi a while back but was unsuccessful; my current plan is to start with “Coherent analytic sheaves,” which seemed more accessible when I flipped through it.)
10. David Says:
February 10, 2011 at 9:49 pm
I got the wrong email address in my previous comment.
11. Max Muller Says:
April 18, 2011 at 11:09 am
Hi Akhil, that’s one impressive list…
I was wondering how you obtained all of these books. Did you actually buy them all? It seems to me that’s quite expensive to do so. Or did you borrow them at a library?
Thanks,
Max
1. Akhil Mathew Says:
April 18, 2011 at 11:20 am
Hi Max,
I was rather fortunate to be living, when in high school, close to several university libraries.
12. David Says:
April 25, 2011 at 10:30 pm
Another suggestion regarding analysis. Have you looked at Loukas Grafakos' Classical Fourier Analysis and Modern Fourier Analysis? These are comprehensive and the material is presented in a more "textbook" style, unlike Stein's monographs (which are also excellent, of course). You might already be familiar with the first book of Grafakos, in which case the second book is ideal. (I'm not sure if you've seen the proof of the Carleson-Hunt theorem, but if you haven't, Grafakos' Modern Fourier Analysis is great.)
But as you say, these are good books to read when you're not busy with your courses …
1. Akhil Mathew Says:
April 26, 2011 at 5:09 pm
Thanks for the recommendation! I’ve never read Grafakos (or, for that matter, thought about analysis anytime recently). I’ve never actually worked through the proof of Carleson-Hunt, but would like to someday — probably I’ll have the time to catch up on things like this once I don’t have problem sets due.
13. Marcus Says:
May 11, 2011 at 10:56 pm
Dear Akhil,
I really liked reading your algebraic geometry notes. But I unfortunately didn't have the background in category theory, so I didn't appreciate the categorical language. (I pretended everything was for abelian groups, rings, etc.) Can you please recommend some books on category theory that you think provide sufficient background to understand your AG notes and maybe EGA later on?
Thanks,
Marcus
1. Akhil Mathew Says:
May 12, 2011 at 8:47 am
Dear Marcus, EGA 0 provides a fair bit of introductory material on categories (starting with Yoneda’s lemma, fibered products, etc.). I found it very helpful. Other sources on categories that might be helpful are MacLane’s “Categories for the Working Mathematician” and Kashiwara-Schapira’s “Categories and Sheaves.” (I recommend the last one in particular.) At least for me, though, it was easier to learn basic category theory by seeing lots of examples, and thinking about it in the context of algebraic geometry (and algebraic topology).
For EGA, you’ll also need basic sheaf theory. (There are even points in EGA III where Grothendieck uses the Cech-to-derived-functor spectral sequence.) You can find this in Godement’s “Theorie des faisceaux,” or Chapter II of Hartshorne. Though I should note that you really have to do things differently for general sites (in which case you don’t have stalks to reason with, and you have to define e.g. sheafification in a more general way). Cf. for instance Tamme’s book “Introduction to Etale Cohomology.”
1. Marcus Says:
May 16, 2011 at 7:58 pm
Thank you Akhil! I wanted to ask you another question about Hartshorne's book: I hear from a lot of people that Hartshorne's book is very difficult to read and that the exercises are also very difficult. How much truth is there in this? I know you can only speak for yourself, but I found the exercises in Chapter 2 of Hartshorne very routine (at least for the first few sections). Is it that these exercises are much easier than the ones in the other chapters? For example, how are the Chapter 1, 4 and 5 exercises? It seems plausible that these could be harder since they are on geometry, whereas exercises on sheaves, for example, are routine but tedious. What did you make of Hartshorne and his exercises?
Marcus
1. Marcus Says:
May 16, 2011 at 7:58 pm
Also thanks for recommending “Categories and Sheaves”. This looks like a very good book!
2. Akhil Mathew Says:
May 16, 2011 at 10:13 pm
I’m not sure if I’d describe most of Hartshorne’s exercises as routine! I think they get a lot harder after the first few sections of chapter II and general sheaf theory. I should mention that sheaf theory on general sites feels a bit different (because you don’t have stalks, for instance) and the proofs (e.g. that the category of abelian sheaves has enough injectives) have to appeal to general principles. I’ve never really looked at chapters 4 or 5 (except for Riemann-Roch). Also, a lot of them turn out to be special cases of (often nicer) results in EGA. (One example: in II.7, he asks you to prove using divisors that for a regular noetherian scheme $X$ and a vector bundle $E$ on $X$, the Picard group of the projectivization is that of $X$ times $\mathbb{Z}$; but it’s possible to do this very cleanly using the formal function theorem as Grothendieck points out (without a full proof) in III.4, which I might blog about sometime.) Anyway, regardless of this, I’ve certainly learned a lot from the exercises, which strike me as one of the best features of the book.
On the other hand, I still think that Hartshorne leaves out much (possibly because it's such a short book) far too often: he omits the functorial characterization of smoothness/etaleness/unramifiedness (the latter two of which figure in only briefly as exercises), the quasi-finite form of Zariski's Main Theorem, the Leray spectral sequence, etc. I think this is actually one of the reasons it is such a hard book; EGA, where things are developed both more leisurely and in a much more general (and, at least to me, natural) setting, gives a very different picture, and is certainly easier reading (at least if you don't try to read it linearly!).
I would also note that there are plenty of successful algebraic geometers who have never read EGA. I suppose it’s a matter of taste, and as I am a beginning student, what I say should not be taken too seriously.
14. nehsb Says:
November 25, 2012 at 6:50 pm
Hi,
What are your thoughts on reading etale cohomology? I’m not sure as to whether it’s worth reading it thoroughly, as I’ve heard the advice that it’s better to just black-box all the etale cohomology theorems at the start. Do you have any thoughts on this, and what are the standard references? I’ve heard SGA 4/4.5, which is workable, but I’m at least an hour away (by car) from any library which has them, and I find it somewhat hard to read the scanned copies on a computer screen.
Thanks.
1. Akhil Mathew Says:
November 30, 2012 at 10:38 pm
Hi,
Sorry for the slow response — it’s been a busy past week. I’m certainly no expert on etale cohomology. I did a reading course about it once, which was a lot of fun, but I don’t know a lot about number theory. I do think that it definitely makes sense to black box a lot of it at the start. Tamme’s “Introduction to Etale Cohomology” is by far the friendliest introduction I’ve seen. SGA 4.5 is very helpful though terse, while SGA 4 seems to go on to no end (but which also has a lot of really important and useful foundational stuff on topoi which I’ve never properly learned). Freitag-Kiehl’s book on etale cohomology and Kiehl-Weissauer “Weil conjectures, perverse sheaves, and the l-adic Fourier transform” are pretty inspirational (and hard) stuff. You might look at Emerton’s advice at
http://terrytao.wordpress.com/career-advice/learn-and-relearn-your-field/
Something I’m trying to learn more about recently — and which I didn’t study at all when I took the reading course, because I didn’t know homotopy theory — is the refinement of etale cohomology to etale homotopy theory. Instead of getting cohomology groups (with torsion or l-adic coefficients), you get a pro-homotopy type, which is a lot more information. I’ve heard things to the effect that you can get Poincar\’e duality in etale cohomology (which is very difficult to prove directly) using homotopy-theoretic methods, although I do not know the details. (I could suggest people who know a lot about this that you could contact if you’re interested.)
Anyway, one reason it may make sense to black-box things is that modern technology not available in the days of SGA 4 enables simpler proofs!
Incidentally, I saw your blog — very impressive! You should, of course, feel free to contact me if you have further questions about math or other things (e.g., college applications).
1. nehsb Says:
December 1, 2012 at 8:37 am
Thanks for the suggestions!
I unfortunately don’t really know normal topological homotopy theory. It seems to motivate a lot of the recent work on algebraic geometry, so I should probably get to reading it at some point.
15. Toussaint Says:
January 2, 2013 at 1:55 pm
Hello Mr Akhil,
I'm seeking advice, and given your knowledge, your advice will really be useful.
I am new to abstract mathematics. By abstract I mean proving theorems and understanding what they mean, in contrast to the computational side of mathematics.
I have read the book "How to Prove It: A Structured Approach" and the "Book of Proof", and now I want to pick one branch of mathematics and practice theorem proving. Which one would you advise? Analysis, algebra, topology? I have tried analysis with the Rudin book, but it seems hard to me. So what would you advise, and which book(s)?
I equally hope your answer will be useful to someone else in the future.
Regards
1. Akhil Mathew Says:
January 3, 2013 at 12:05 am
Dear Toussaint,
Usually when starting out with math the goal is not so much to prove theorems, but to understand how mathematical proofs work and to practice with exercises, which you’ll find in any textbook, and in any of the above three fields. I would only mention that learning general topology without knowing a little about metric spaces and real analysis is likely risky.
You might try math.stackexchange.com to look for lists of suggested textbooks (I myself found Rudin’s book very helpful, and I think Herstein’s algebra textbook; I’ve also heard that Munkres’s book is a good introductory topology textbook but have never read it). The book “Proofs from the book” is a lot of fun but is written more for enjoyment than serious study. Terence Tao’s blog (under “career advice”) is another helpful resource.
1. Toussaint Says:
January 3, 2013 at 1:21 am
Dear Mr Akhil!
Thank you for the helpful advice! I was just about to give up Rudin's book in desperation because I could not prove the theorems. But now it is clear one must first emphasize understanding the proofs and doing the exercises (for the sake of repetition).
Indeed, I am on math.stackexchange.com and it is a wonderful resource for exercises, alternative proofs and references. And I did read Terence Tao's advice as well!
Regards
http://mathoverflow.net/questions/39593?sort=oldest | ## A question on the construction of finite W-algebras
In a well known construction of finite W-algebras, one first constructs a certain nilpotent subalgebra $\mathfrak{m}$ along with a character $\chi:\mathfrak{m}\rightarrow \mathbb{C}$. Then one defines
$$U(\mathfrak{g},e)=(U(\mathfrak{g})/U(\mathfrak{g})\mathfrak{m}_{\chi})^\mathfrak{m}$$
where $\mathfrak{m}_\chi$ is the set of all $m-\chi(m)$ and $\mathfrak{m}$ acts on $U(\mathfrak{g})$ by derivations, extending the adjoint action on $\mathfrak{g}$. Is this the same as
$$U(\mathfrak{g})^{\mathfrak{m}}/(U(\mathfrak{g})\mathfrak{m}_{\chi})^\mathfrak{m}?$$
Of course one can reformulate this question and ask if the following cohomology group vanishes:
$$H^1(\mathfrak{m},U(\mathfrak{g})\mathfrak{m}_{\chi})=0?$$ Maybe this follows from some Lynch-style vanishing, but I am not very familiar with these theorems.
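(Added for context, using only standard Lie algebra cohomology: taking $\mathfrak{m}$-invariants of the short exact sequence $0\to U(\mathfrak{g})\mathfrak{m}_{\chi}\to U(\mathfrak{g})\to U(\mathfrak{g})/U(\mathfrak{g})\mathfrak{m}_{\chi}\to 0$ gives the long exact sequence $$0\to (U(\mathfrak{g})\mathfrak{m}_{\chi})^{\mathfrak{m}}\to U(\mathfrak{g})^{\mathfrak{m}}\to (U(\mathfrak{g})/U(\mathfrak{g})\mathfrak{m}_{\chi})^{\mathfrak{m}}\to H^1(\mathfrak{m},U(\mathfrak{g})\mathfrak{m}_{\chi})\to\cdots,$$ so the vanishing of $H^1(\mathfrak{m},U(\mathfrak{g})\mathfrak{m}_{\chi})$ is exactly what makes the two descriptions agree.)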
-
Two very minor edits. Maybe lie-algebras would also be a useful tag? – Jim Humphreys Sep 22 2010 at 16:46
Thanks, added lie-algebras tag. – Jan Weidner Sep 22 2010 at 18:31
## 1 Answer
Look at Propositions 5.1 and 5.2 of Gan and Ginzburg's paper Quantization of Slodowy slices. The "reason" behind the vanishing is its identification with the algebraic de Rham cohomology of an affine space.
-
Thanks, unfortunately I do not yet quite see how to use this in my situation. Do you suggest showing that $U(\mathfrak{g})\mathfrak{m}_\chi=\mathbb{C}[M]\otimes_\mathbb{C} W$ for some trivial rep $W$, and then concluding $H^i(\mathfrak{m},U(\mathfrak{g})\mathfrak{m}_\chi)=H^i(\mathfrak{m},\mathbb{C}[M])\otimes W=0$? Or did I get this completely wrong? – Jan Weidner Sep 23 2010 at 9:01
I suggest you read the paper, where the statements you want are propositions cited, and the proof is given. – Ben Webster♦ Sep 23 2010 at 20:14
I already read this paper, quite thoroughly. What Gan and Ginzburg show in these propositions is that the higher cohomology groups $H^i(\mathfrak{m},U(\mathfrak{g})/U(\mathfrak{g})\mathfrak{m}_\chi)$ vanish. What I want is that $H^i(\mathfrak{m},U(\mathfrak{g})\mathfrak{m}_\chi)$ vanishes. I don't see how their propositions imply my statement, nor how to adapt their proof. Maybe I miss something obvious. – Jan Weidner Sep 24 2010 at 7:40
Their proof goes as follows. By a spectral sequence argument they show that it is enough to check that $H^i(\mathfrak{m},\mathrm{gr}(U(\mathfrak{g})/U(\mathfrak{g})\mathfrak{m}_\chi))$ vanishes. Now they already showed that there is an equivariant isomorphism $\mathrm{gr}(U(\mathfrak{g})/U(\mathfrak{g})\mathfrak{m}_\chi)=\mathbb{C}[M]\otimes \mathbb{C}[S]$, where the action on the second factor is trivial. Then they deduce $H^i(\mathfrak{m},\mathrm{gr}(U(\mathfrak{g})/U(\mathfrak{g})\mathfrak{m}_\chi))=H^i(\mathfrak{m},\mathbb{C}[M])\otimes \mathbb{C}[S]$. Now as you already remarked, $H^i(\mathfrak{m},\mathbb{C}[M])$ is the algebraic de Rham cohomology of $M\cong\mathbb{A}^n$, which vanishes for $i>0$. – Jan Weidner Sep 24 2010 at 7:43
http://physics.stackexchange.com/questions/41903/parabolic-motion-and-air-drag | # Parabolic motion and air drag
Are these equations correct for calculating the parabolic motion of an arrow, including the drag from the air?
$$\begin{cases} x(t)=\left(v_0-\frac{C_DA\rho v_0^2}{2m}t\right)\cos(\theta)t\\ y(t)=\left(v_0-\frac{C_DA\rho v_0^2}{2m}t\right)\sin(\theta)t-\frac{1}{2}gt^2+h \end{cases}$$
Update: correction.
$$\vec{r}= \begin{vmatrix} \left(v_0-\frac{C_DA\rho v_0^2}{4m}t\right)\cos(\theta)t \\ \left(v_0-\frac{C_DA\rho v_0^2}{4m}t\right)\sin(\theta)t-\frac{1}{2} g t^2+h \\ 0 \end{vmatrix}$$
$$\vec{a} = \ddot{\vec{r}} = \begin{vmatrix} -\frac{C_DA\rho v_0^2}{2m}\cos(\theta) \\ -\frac{C_DA\rho v_0^2}{2m}\sin(\theta) -g \end{vmatrix}$$
Update for AlanSE:
\begin{equation} \begin{split} m\frac{d^2 x(t)}{d t^2}&=-\frac{C_DA\rho}{2}\sqrt{\left(\frac{dx(t)}{dt}\right)^2+\left(\frac{dy(t)}{dt}\right)^2}\frac{dx(t)}{dt},\\ m\frac{d^2 y(t)}{dt^2}&=-mg-\frac{C_DA\rho}{2}\sqrt{\left(\frac{dx(t)}{dt}\right)^2+\left(\frac{dy(t)}{dt}\right)^2}\frac{d y(t)}{dt}. \end{split} \end{equation}
Related Cauchy problem (with the launch data $v_0$, $\theta$ supplying the velocity initial conditions the second-order system needs):
\begin{equation} \begin{cases} \displaystyle x(0)=0, \quad y(0)=h,\\ \displaystyle \dot{x}(0)=v_0\cos(\theta), \quad \dot{y}(0)=v_0\sin(\theta). \end{cases} \end{equation}
-
It depends. Is the air friction term linear or quadratic? Also, how did you get those? In which approximation do you need them? Do you have to track an actual arrow, or is it for homework? – Ferdinando Randisi Oct 28 '12 at 14:49
2
First things first... You shouldn't ask users to check your home-made set of equations :-) – Ϛѓăʑɏ βµԂԃϔ Oct 28 '12 at 14:53
Get rid of that $v_0^2$ in the drag terms, you don't need it. The rest of the terms preceding the square root is what I had termed $k$. The equations you now have are basically the ones in my answer to your question, replacing $v_x$ and $v_y$ with $dx/dt$ and $dy/dt$. – Jaime Nov 2 '12 at 3:07
@Jaime Corrected. Sure, I have rewritten it only to be sure I have understood correctly what you have explained to me... – FormlessCloud Nov 2 '12 at 14:33
## 2 Answers
OK, let's see. Differentiate the position twice to arrive at the acceleration vector and see if it obeys Newton's laws.
$$\vec{r} = \begin{vmatrix} \left(v_0-\frac{C_DA\rho v_0^2}{2 m}t\right)\cos(\theta)t \\ \left(v_0-\frac{C_DA\rho v_0^2}{2 m}t\right)\sin(\theta)t-\frac{1}{2} g t^2+h \\ 0 \end{vmatrix}$$
$$\vec{a} = \ddot{\vec{r}} = \begin{vmatrix} -\frac{C_DA\rho v_0^2}{m}\cos(\theta) \\ -\frac{C_DA\rho v_0^2}{m}\sin(\theta) -g \end{vmatrix}$$
So 1) the acceleration does not depend on the instantaneous velocity, only the initial velocity. 2) The drag force is missing the $\frac{1}{2}$ coefficient.
So the answer is no.
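For anyone who wants to re-run this check, here is a short symbolic verification (my addition, not part of the answer); the symbol `c` is shorthand for the asker's constant $\frac{C_DA\rho v_0^2}{2m}$, so $-2c\cos\theta$ is exactly the $x$-acceleration written above:

```python
# A quick symbolic check of the differentiation above, using SymPy.
import sympy as sp

t, v0, c, th, g, h = sp.symbols('t v_0 c theta g h', positive=True)
# c stands for C_D*A*rho*v0**2/(2*m), the constant appearing in the position.
x = (v0 - c*t) * sp.cos(th) * t
y = (v0 - c*t) * sp.sin(th) * t - sp.Rational(1, 2)*g*t**2 + h

print(sp.simplify(sp.diff(x, t, 2)))   # -> -2*c*cos(theta)
print(sp.simplify(sp.diff(y, t, 2)))   # -> -2*c*sin(theta) - g
```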
-
Please look at my updated question with your corrections; have I understood? And why have you said "the acceleration does not depend on the instantaneous velocity, only the initial velocity"? I don't see the link. – FormlessCloud Nov 1 '12 at 13:11
@FormlessCloud, only $v_0$ is used in $\vec{a}$, not $v(t)$. Drag force changes with time as speed changes. – ja72 Nov 1 '12 at 18:37
As ja72 points out, the formulas you have produced are, apart from a missing $\frac{1}{2}$ factor, what you would get if drag was proportional to the square of the initial velocity, not the instantaneous one.
For quadratic drag, you need to solve the following pair of equations:
$$m \dot{v_x} = -k\sqrt{v_x^2 + v_y^2} v_x,$$ $$m \dot{v_y} = -mg-k\sqrt{v_x^2 + v_y^2} v_y.$$
As far as I know, unless $v_x=0$, or $g=0$, the above pair of differential equations cannot be solved analytically in terms of elementary functions. So your formulas are not correct, mostly because there are no such formulas.
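Since no closed form exists, the system is usually integrated numerically. Here is a minimal sketch of that (my addition, not part of Jaime's answer), using SciPy's `solve_ivp`; the mass, drag constant $k = C_D A \rho/2$, launch speed, angle and height below are illustrative assumptions, not values from the question:

```python
# Minimal sketch: integrate m*dv/dt = -k*|v|*v - m*g*e_y numerically.
# All physical values below are assumed for illustration only.
import numpy as np
from scipy.integrate import solve_ivp

m, k, g = 0.02, 1e-4, 9.81                    # kg, kg/m, m/s^2 (assumed)
v0, theta, h = 60.0, np.radians(45.0), 1.5    # m/s, rad, m (assumed)

def rhs(t, s):
    """State s = (x, y, vx, vy); returns its time derivative."""
    x, y, vx, vy = s
    speed = np.hypot(vx, vy)
    return [vx, vy, -(k / m) * speed * vx, -g - (k / m) * speed * vy]

def hit_ground(t, s):          # event: projectile reaches y = 0
    return s[1]
hit_ground.terminal = True     # stop the integration there
hit_ground.direction = -1      # trigger only while y is decreasing

sol = solve_ivp(rhs, (0.0, 60.0),
                [0.0, h, v0 * np.cos(theta), v0 * np.sin(theta)],
                events=hit_ground, max_step=0.01)
print(f"range = {sol.y[0, -1]:.2f} m, flight time = {sol.t[-1]:.2f} s")
```

A hand-rolled Euler or Runge-Kutta loop would do equally well; `solve_ivp` is used here only for its step-size control and the ground-hit event.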
-
Could you explain the expressions that you have written? For example, what are $k$, $v_x$ and $v_y$? – FormlessCloud Nov 1 '12 at 9:48
@FormlessCloud The $v_x$ and $v_y$ are the velocities, which are the derivative of your $x(t)$ and $y(t)$. To be a complete set of differential equations we need to write $dx/dt=v_x$ etc. You also should formally write your initial conditions in order to have a mathematically complete problem. – AlanSE Nov 1 '12 at 13:20
@AlanSE Look at my updated question; have I understood correctly? What is the name of the equations suggested by Jaime? – FormlessCloud Nov 1 '12 at 21:41
http://www.physicsforums.com/showthread.php?p=3802900 | Physics Forums
## Does conservation of energy for unsteady ideal fluid hold?
My text was able to show that for an ideal (incompressible and inviscid) and steady fluid in a gravitational field, the energy density $E=\frac{1}{2}\rho u^2 + \rho\chi+P$ is constant for any fluid element, where $\chi$ is the gravitational potential. That is
$$\frac{DE}{Dt}=\frac{\partial E}{\partial t} + \mathbf{u}\cdot\nabla E=0.$$ Does this hold for an unsteady ideal fluid? If not, what causes the change in the mechanical energy of the fluid element?
Hello dEdt, beware: the energy density is $\epsilon = \frac{1}{2}\rho u^2 + \rho\chi$. Pressure does not contribute to the energy density. For example, consider an incompressible liquid; there can be great pressure in it, but no work is required to produce it, since the liquid does not change its volume. The equation you wrote is supposed to mean that the quantity $E$ is constant along a streamline, even if the fluid is moving. This is true for an incompressible liquid without friction. For a compressible liquid, the enthalpy density appears there instead of the pressure. Jano
Sure, but the force on a fluid element of volume $\delta V$ due to pressure is $-\nabla P \,\delta V$. So for that fluid element, it's possible to regard $P$ as a potential energy (per unit volume). It might not be true energy, but I still think it's valid to regard it as a form of potential energy, at least for the purposes of calculations, no?
It is not correct to regard the pressure of a liquid as an energy density. You would get the wrong total energy of the liquid.
Consider water in a tank.
The buoyant force on a small volume of fluid is indeed $\mathbf F = -\frac{\nabla p}{\rho} \Delta m$, so you can view $p/\rho$ as a potential of the buoyant force. But you should not ascribe this potential energy to the fluid element, because then if you sum up these energies, you will get $\int_V pdV = \int \rho g h dV$, which however is already accounted for by the potential energy in gravitational field. So counting pressure as energy actually accounts for this energy twice, which is wrong.
Quote by Jano L. But you should not ascribe this potential energy to the fluid element, because then if you sum up these energies, you will get $\int_V pdV = \int \rho g h dV$, which however is already accounted for by the potential energy in gravitational field. So counting pressure as energy actually accounts for this energy twice, which is wrong.
I don't see the problem with recognizing two sources of potential energy, one from gravity and the other from pressure. In your example, $K+\int{\rho ghdV} + \int{\rho ghdV}$ is conserved if $K+\int{\rho ghdV}$ is conserved, because h doesn't change. In other situations, such as in unsteady flow, where the total gravitational potential energy changes, it is only the first expression that is conserved, because although the walls of the container (ie the source of the pressure) don't do any work, pressure does do work when the fluid expands, which is the only time gravitational potential energy will ever change.
At any rate, you can frame my original question this way: in steady, ideal flow, there's a mysterious quantity $E=\frac{1}{2}\rho u^2+\rho\chi+P$ which is conserved for any fluid element. Is this quantity still conserved in unsteady ideal flow?
Think of it this way. Energy is defined in terms of work. How much work do you need to fill a tank of height $h$ and base area $S$ with water (density $\rho$), if the water is initially in the lake at the same level as the base of the tank? This work defines the energy of the system. You can calculate it by adding up the small amounts of work needed to raise the level of the water by $\Delta h$. You will see the work is $\frac{1}{2}\rho g S h^2$, which defines the total energy of the system. There is no trace of pressure energy, precisely because the liquid is incompressible and it does not take any work to increase the pressure in the liquid. To your question: in nonstationary flows the expression is not constant. If the nonstationary flow is potential (meaning $\mathbf v = \nabla\phi$), the equation reads $$\frac{\partial \phi}{\partial t} + \frac{1}{2}v^2 +\chi + p/\rho = f(t)$$ for some function $f(t)$.
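(To spell out the filling integral above: each new slab of water has mass $\rho S\,\mathrm{d}h'$ and must be raised to height $h'$, so
$$W=\int_0^h \rho g S\, h'\,\mathrm{d}h' = \tfrac{1}{2}\rho g S h^2,$$
which is just $M g h_{\mathrm{cm}}$ with total mass $M=\rho S h$ and center of mass at $h_{\mathrm{cm}}=h/2$.)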
Okay, I've understood. Thanks for taking the time to explain.
Glad to be of help, Jano
http://nrich.maths.org/104 | ### Pies
Grandma found her pie balanced on the scale with two weights and a quarter of a pie. So how heavy was each pie?
### Red Balloons, Blue Balloons
Katie and Will have some balloons. Will's balloon burst at exactly the same size as Katie's at the beginning of a puff. How many puffs had Will done before his balloon burst?
### In the Money
There are a number of coins on a table. One quarter of the coins show heads. If I turn over 2 coins, then one third show heads. How many coins are there altogether?
# Pizza Portions
##### Stage: 2 Challenge Level:
My friends and I love pizza.
This is great because we always know what kind of food to make. There is a small problem. We often end up arguing over how much each person has eaten. We like to be fair and share out the pizzas equally.
Can you help us?
If I had $1$ pizza and wanted to share it equally between $2$ of us, how much pizza would we get each?
How much would each of us have if we had $1$ pizza to share equally between 3 people?
Sometimes, we have some pizza left over which we reheat the next day.
If I have $1/2$ a pizza and I want to split it between $2$ of us, what fraction of the whole pizza do we each get?
If there were $1/3$ of a pizza left and $2$ of us to share it, what fraction of the whole pizza would we get now?
How about if there were $3$ of us and we had $1/4$ of a pizza to share?
Look back at what you've done so far. Can you write down how you worked out your answers using numbers and maths symbols?
Can you see any patterns in the numbers that you've used?
Today, each of us would like $1/2$ a pizza. If I have $1$ pizza, how many people can have $1/2$ a pizza?
If I had $2$ pizzas altogether how many people can have $1/2$ a pizza each now?
Can you think of a way to write this using numbers and symbols?
Can you see any patterns in the numbers this time?
Sometimes, we may be a little hungrier so we'd like $2/3$ of a pizza each.
How many pizzas would I need to buy for $3$ people?
How many for $4$ of us? Would there be any left over?
How could you write this using numbers and symbols?
http://mathoverflow.net/questions/98947/quermassintegrals-as-mean-curvature-integrals | ## Quermassintegrals as mean curvature integrals
It is well-known that the quermassintegrals $W_{i}$ of a convex body $K\subset \mathbb{R}^{n}$ can be written as a mean curvature integral $$W_{i}(K)=n^{-1}\int_{\partial K}H_{i-1} d\mathcal{H}^{n-1},$$ where $H_{i-1}$ is the $i$th elementary symmetric polynomial in the $n-1$ principal curvatures. This can for example be found in Schneider's book on convex bodies in (4.2.28) on p.210 for convex bodies whose boundary is a regular $C^{2}$ surface (here regular means all principal curvatures are positive). Other authors allow for a larger class, for example in Santaló's book on integral geometry III.13.6, p.224 only $C^{2}$ is assumed, although it seems to me that he also needed positive principal curvatures in the proof. So, I guess, my question is the following: Is this result true for convex $C^{2}$ surfaces without assuming that the surface is regular? If this is not the case, is there an easy counterexample?
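(As a sanity check of the normalization, which may be useful since conventions differ as to whether $H_j$ denotes the elementary symmetric function $e_j$ or its normalized version $\binom{n-1}{j}^{-1}e_j$: take $K=B_r^n$, the ball of radius $r$, where every principal curvature equals $1/r$. With the normalized convention, $H_{i-1}=r^{-(i-1)}$ and
$$W_{i}(B_r^n)=n^{-1}\,r^{-(i-1)}\,\mathcal{H}^{n-1}(\partial B_r^n)=n^{-1}\,r^{-(i-1)}\,n\omega_n r^{n-1}=\omega_n r^{n-i},$$
where $\omega_n$ is the volume of the unit ball; this is consistent with the Steiner formula $\operatorname{vol}(B_r+\varepsilon B^n)=\omega_n(r+\varepsilon)^n=\sum_{i=0}^n\binom{n}{i}\omega_n r^{n-i}\varepsilon^i$.)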
## 1 Answer
These things hold in amazing generality, but you must dig into a bit of geometric measure theory, which is the right language for this. Here is the paper that probably started the industry in this direction:
http://www.ams.org/journals/tran/1959-093-03/S0002-9947-1959-0110078-1/S0002-9947-1959-0110078-1.pdf
There are many works in this direction by Schneider, Fu, and Bernig.
Dear alvarezpaiva, thank you for your answer! Before I start with Federer's paper, I would be interested to know whether the result I asked about is really contained in that paper and its successors; what makes me somewhat doubtful about this is that Schneider knew about these results and even contributed, but does not comment on this specific representation of $W_{i}$ for a more general class, although he has quite extensive notes in his book. – Sebastian Scholtes Jun 7 at 6:56
"5. The curvature measures. In this section several versions of Steiner's formula are derived by a modification of the classical method of [W]; the main innovation is the use of the algebra A**(£). By means of Steiner's formula the curvature measures corresponding to a set with positive reach are defined, and their basic properties are established. The proofs of the cartesian product formula and of the generalized Gauss-Bonnet Theorem were partly suggested by [H, 6.1.9] and by [A; FEl]." – alvarezpaiva Jun 7 at 9:27
Federer is never an easy read. On the other hand, this is the right language for many things in integral geometry so if you're working on this, you may as well learn it. – alvarezpaiva Jun 7 at 9:30
Actually I only need a citation for this result and the only one that I could find (and understand immediately (-; ) has a proof that is in my opinion flawed. I spent some time looking through Federer's work and could not find anything resembling the mean curvature integral I'm looking for (but maybe if one knows what one is looking for it is quite easy to spot). Still I find it quite curious that Schneider would restrict the surfaces unnecessarily to be regular if he knows that he does not need to. Are you sure that for these surfaces the results of Federer give exactly the formula above for – Sebastian Scholtes Jun 7 at 14:36
the quermassintegrals? Sorry for being annoying, I appreciate your help! – Sebastian Scholtes Jun 7 at 14:37
http://physics.stackexchange.com/questions/3004/how-to-calculate-the-density-of-relic-neutrinos?answertab=active | # How to calculate the density of relic neutrinos?
Maybe not neutrinos, but antineutrinos? Or both types? In the last case, why didn't they annihilate, and what is the ratio of relic neutrinos to relic antineutrinos? Is that ratio somehow related to the baryon asymmetry?
For reference: Relic neutrinos or cosmic neutrino background
Like the cosmic microwave background radiation (CMB), the cosmic neutrino background is a relic of the big bang
What are "relic neutrinos"? I'm not familiar with the term, so if you could add a link to some more information, that would be quite helpful. – David Zaslavsky♦ Jan 16 '11 at 9:22
@David, I have added the link – voix Jan 16 '11 at 9:57
Do anti-neutrinos exist? I thought the general belief was that neutrinos were Majorana particles, and so are their own anti-particle. – Peter Shor Jan 16 '11 at 15:56
"Relic neutrinos" are those that were coupled to the hot early universe (before temperatures dropped below the Z mass) and have since cooled to ridiculously low energies. They have very low cross-sections for interaction with ordinary matter by neutrino physics standards, and such interactions would be lost in the thermal noise anyway. – dmckee♦ Jan 16 '11 at 18:12
– voix Jan 16 '11 at 20:12
## 1 Answer
Back when I was in graduate school in the 1990s, the standard reference for this sort of thing was Kolb and Turner's book The Early Universe. Even after all these years, that book's treatment of this subject is probably still a good place to look.
Even if there's no asymmetry-producing process for neutrinos (like baryogenesis), you still expect a relic neutrino background that's a thermal (Fermi-Dirac) distribution of both neutrinos and antineutrinos, with a temperature of about 2 K. The reason is that, at a certain time in the evolution of the Universe, the density dropped low enough that the neutrino number "froze out": interactions that could change the number of neutrinos (primarily $e^- e^+ \leftrightarrow \nu_e\ \bar\nu_e$) became so rare that the time for any given particle to undergo such a reaction grew much longer than a Hubble time.
It's been a long time since I looked at baryogenesis models with any care, but as I recall some models would be expected to produce an asymmetry in the neutrino sector as well. But in practice I don't think that would change the prediction much. The reason is that baryogenesis only has to produce a one part in $10^9$ asymmetry (a billion and one protons for every billion antiprotons). That produces very noticeable effects today, because there was essentially complete annihilation of the antiprotons. But neutrino freeze-out occurs much earlier, while neutrinos are still relativistic, so we don't think that that massive annihilation happened for neutrinos. So even if there is a neutrino-antineutrino asymmetry comparable to the asymmetry produced by baryogenesis, it should only result in a tiny difference in the number of neutrinos over antineutrinos.
Let me put that another way. At early times, (temperature much greater than the proton mass), there were comparable numbers of photons, neutrinos, and protons. Baryogenesis resulted in an asymmetry of protons over antiprotons at that time. After that, nearly all of the protons and antiprotons annihilated, leaving the observed result that today there are a billion photons for every proton. But we expect the number of relic neutrinos to be of the same order as the number of photons, not protons, so a baryogenesis-level neutrino asymmetry won't be noticeable.
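For concreteness, the standard numbers behind this answer can be reproduced in a few lines (this is a sketch of the textbook calculation, not from the original answer; it assumes the usual $T_\nu=(4/11)^{1/3}T_\gamma$ photon-heating ratio from $e^+e^-$ annihilation):

```python
import math

T_gamma = 2.725                                   # CMB temperature today, K
T_nu = (4.0 / 11.0) ** (1.0 / 3.0) * T_gamma
print("T_nu = %.3f K" % T_nu)                     # ~1.95 K

# Photon number density n_gamma = (2 zeta(3)/pi^2) (k_B T / hbar c)^3
zeta3 = 1.2020569
k_B, hbar, c = 1.380649e-23, 1.054572e-34, 2.99792458e8   # SI units
n_gamma = 2 * zeta3 / math.pi**2 * (k_B * T_gamma / (hbar * c)) ** 3

# Fermi-Dirac vs Bose-Einstein factor 3/4 times (T_nu/T_gamma)^3 = 4/11 gives
# n_nu = (3/11) n_gamma per flavour (neutrinos plus antineutrinos together).
n_nu = 3.0 / 11.0 * n_gamma
print("n_gamma = %.0f per cm^3" % (n_gamma * 1e-6))        # ~411
print("n_nu = %.0f per cm^3 per flavour" % (n_nu * 1e-6))  # ~112
```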
It would be interesting to know the experimental cross-section of the collider reaction "electron-positron --> neutrino-antineutrino", where the electron and positron just "disappear". – voix Jan 16 '11 at 20:33
@voix: How would you distinguish that case from non-interaction? You would have to be able to measure the individual particles removed from the beams at a precision sufficient to detect weak interactions. That's not in the cards at this time. – dmckee♦ Jan 16 '11 at 21:02
http://physics.stackexchange.com/questions/54728/what-is-the-willmore-energy-of-the-earth-or-the-geoid?answertab=oldest | # What is the Willmore energy of the Earth (or the geoid)?
Wikipedia defines the Willmore energy as:
$$e[{\mathcal{M}}]=\frac{1}{2} \int_{\mathcal{M}} H^2\, \mathrm{d}A,$$
where $H$ stands for the mean curvature of the manifold $\mathcal{M}$.
What is the Willmore energy of the Earth, or the geoid?
## 1 Answer
Let's idealize the system for a moment :)
Say the earth is a sphere. Its mean curvature is $R^{-1}$ with $R$ the radius. Then the integral becomes
$$e=\frac{1}{2}\int_{\text{spherical surface}} R^{-2}\, R^2\, \mathrm{d}\Omega=\frac{1}{2}\,\big(\text{solid angle of a sphere}\big)=\frac{1}{2}\cdot 4\pi=2\pi,$$
which is exactly the minimum allowed by Willmore's inequality ($\int H^2\,\mathrm{d}A\ge 4\pi$, i.e. $e\ge 2\pi$ with the factor $\frac{1}{2}$ above), as it should be. I guess this can somehow be nicely generalized to a geoid but I don't see how at the moment. Hope it still helps though!
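For the geoid part of the question one can at least experiment numerically. Below is a rough sketch (the parametrization, the grid sizes, and the Earth-like flattening value $c\approx 0.99665$ are illustrative assumptions, not geodetic data): it computes $e=\frac{1}{2}\int H^2\,\mathrm{d}A$ for a spheroid from the first and second fundamental forms, reproducing $2\pi$ for the round sphere and values slightly above $2\pi$ for flattened ones.

```python
import numpy as np
import sympy as sp

# Parametrize a spheroid: x = a cos u cos v, y = a sin u cos v, z = c sin v.
u, v = sp.symbols('u v', real=True)
a, c = sp.symbols('a c', positive=True)
r = sp.Matrix([a*sp.cos(u)*sp.cos(v), a*sp.sin(u)*sp.cos(v), c*sp.sin(v)])
ru, rv = r.diff(u), r.diff(v)
E, F, G = ru.dot(ru), ru.dot(rv), rv.dot(rv)       # first fundamental form
n = ru.cross(rv)
n = n / sp.sqrt(n.dot(n))                          # unit normal
L, M, N = r.diff(u, 2).dot(n), r.diff(u, v).dot(n), r.diff(v, 2).dot(n)
H = (E*N - 2*F*M + G*L) / (2*(E*G - F**2))         # mean curvature
integrand = sp.simplify(H**2 * sp.sqrt(E*G - F**2))
f = sp.lambdify((u, v, a, c), integrand, 'numpy')

def willmore(aa, cc, nu=400, nv=400):
    # midpoint rule; cell centers avoid the coordinate poles at v = +-pi/2
    uu = (np.arange(nu) + 0.5) * 2 * np.pi / nu
    vv = -np.pi / 2 + (np.arange(nv) + 0.5) * np.pi / nv
    U, V = np.meshgrid(uu, vv)
    return 0.5 * np.sum(f(U, V, aa, cc)) * (2 * np.pi / nu) * (np.pi / nv)

print("sphere     e =", willmore(1.0, 1.0))      # ~2*pi = 6.2832
print("Earth-like e =", willmore(1.0, 0.99665))  # barely above 2*pi
print("2:1 oblate e =", willmore(1.0, 0.5))      # strictly above 2*pi
```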
http://amathew.wordpress.com/2012/07/14/understanding-the-derived-infinity-category/ | # Climbing Mount Bourbaki
Thoughts on mathematics
July 14, 2012
## Understanding the derived infinity-category
Posted by Akhil Mathew under algebra, category theory | Tags: derived category, infinity-categories |
The next thing I’d like to do on this blog is to understand the derived ${\infty}$-category of an abelian category.
Given an abelian category ${\mathcal{A}}$ with enough projectives, this is a stable ${\infty}$-category ${D^-(\mathcal{A})}$ with a special universal property. This universal property is specific to the ${\infty}$-categorical case: in the ordinary derived category of an abelian category (which is the homotopy category of ${D^-(\mathcal{A})}$), forming cofibers is not quite the natural process it is in ${D^-(\mathcal{A})}$ (in which it is a type of colimit), and one cannot expect the same results.
For instance, given a triangulated category ${\mathcal{T}}$ and a functor ${\mathcal{A} \rightarrow \mathcal{T}}$ taking exact sequences in ${\mathcal{A}}$ to triangles in ${\mathcal{T}}$, we might want there to be an extended functor
$\displaystyle D_{ord}^b(\mathcal{A}) \rightarrow \mathcal{T},$
where ${D_{ord}^b(\mathcal{A})}$ is the ordinary (1-categorical) bounded derived category of ${\mathcal{A}}$. We might expect this by the following rough intuition: given an object ${X}$ of ${D_{ord}^b(\mathcal{A})}$ we can represent it as obtained from objects ${A_1, \dots, A_n}$ in ${\mathcal{A}}$ by taking a finite number of cofibers and shifts. As such, we should take the image of ${X}$ to be the appropriate combination of cofibers and shifts in ${\mathcal{T}}$ of the images of ${A_1, \dots, A_n}$. Unfortunately, this does not determine a functor because cofibers are not functorial or unique up to unique isomorphism at the level of a triangulated category.
The derived ${\infty}$-category, though, has a universal property which, among other things, makes very apparent the existence of derived functors, and which makes it very easy to map out of it. One formulation of it is specific to the nonnegative case: ${D_{\geq 0}(\mathcal{A})}$ is obtained from the category of projective objects in ${\mathcal{A}}$ by freely adjoining geometric realizations. In other words:
Theorem 1 (Lurie) Let ${\mathcal{A}}$ be an abelian category with enough projectives, which form a subcategory ${\mathcal{P}}$. Then ${D_{\geq 0}(\mathcal{A})}$ has the following property. Let ${\mathcal{C}}$ be any ${\infty}$-category with geometric realizations; then there is an equivalence
$\displaystyle \mathrm{Fun}(\mathcal{P}, \mathcal{C}) \simeq \mathrm{Fun}'( D_{\geq 0}(\mathcal{A}), \mathcal{C})$
between the ${\infty}$-categories of functors ${\mathcal{P} \rightarrow \mathcal{C}}$ and geometric realization-preserving functors ${D_{\geq 0}(\mathcal{A}) \rightarrow \mathcal{C}}$.
This is a somewhat strange (and non-abelian) universal property at first sight (though, for what it’s worth, there is another more natural one to be discussed later). I’d like to spend the next couple of posts understanding why this is such a natural universal property (and, for one thing, why projective objects make an appearance); the answer is that it is an expression of the Dold-Kan correspondence. First, we’ll need to spend some time on the actual definition of this category.
1. The ${\infty}$-category of chain complexes
Let ${\mathcal{A}}$ be an abelian category, or more generally an additive category. Then one can contemplate a category ${\mathrm{Ch}(\mathcal{A})}$ of ${\mathcal{A}}$-valued chain complexes. Observe that given ${A_\bullet, B_\bullet \in \mathrm{Ch}(\mathcal{A})}$, there is a natural chain complex (in abelian groups) of maps
$\displaystyle \underline{Hom}(A_\bullet, B_\bullet),$
such that ${\underline{Hom}(A_\bullet, B_\bullet)_n = \prod \hom_{\mathcal{A}}(A_i, B_{i+n})}$. In other words, ${\mathrm{Ch}(\mathcal{A})}$ naturally acquires the structure of a differential graded category: there is a chain complex, rather than simply a set, of maps between any two objects. One can recover the set of maps between any two objects by taking ${\pi_0}$ of this chain complex.
A differential graded category can be used to manufacture a simplicial category. Namely, we have an equivalence of categories
$\displaystyle \mathrm{DK}: \mathrm{Ch}_{\geq 0}( \mathbf{Ab}) \simeq \mathrm{Fun}(\Delta^{op}, \mathbf{Ab})$
between nonnegatively graded chain complexes of abelian groups and simplicial abelian groups. (This is the Dold-Kan correspondence in the classical form.) This construction is lax monoidal via the Alexander-Whitney maps (though not actually monoidal), and consequently, given a category enriched over ${\mathrm{Ch}_{\geq 0}(\mathbf{Ab})}$, we can get a category enriched over simplicial abelian groups (in particular, over Kan complexes).
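For concreteness, one direction of this equivalence can be written down explicitly (up to standard sign conventions). The inverse functor is the normalized chain complex

$\displaystyle N(A)_n = \bigcap_{i=0}^{n-1} \ker\left( d_i: A_n \rightarrow A_{n-1}\right), \qquad \partial = (-1)^n d_n,$

while the functor ${\mathrm{DK}}$ itself sends a complex ${C_\bullet}$ to the simplicial abelian group with

$\displaystyle \mathrm{DK}(C)_n \cong \bigoplus_{[n] \twoheadrightarrow [k]} C_k,$

one summand for each surjection in ${\Delta}$.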
It thus follows that, for any additive category ${\mathcal{A}}$, we can make ${\mathrm{Ch}(\mathcal{A})}$ enriched over ${\mathrm{Ch}(\mathbf{Ab})}$, and applying the truncation functor ${\tau_{\geq 0}}$, we can make ${\mathrm{Ch}(\mathcal{A})}$ enriched over ${\mathrm{Ch}_{\geq 0}(\mathbf{Ab})}$. In view of the Dold-Kan equivalence, we find:
Proposition 2 There is a natural structure on ${\mathrm{Ch}(\mathcal{A})}$ of a simplicial category.
This is exactly what we want from an ${\infty}$-category: we want a mapping space (i.e., a Kan complex or topological space) between any two objects. So we can take the homotopy coherent nerve of this, to get an ${\infty}$-category of chain complexes in ${\mathcal{A}}$, which I’ll write as ${\mathbf{N}( \mathrm{Ch}(\mathcal{A}))}$ to avoid confusion.
We can say a little more.
Proposition 3 ${\mathbf{N}(\mathrm{Ch}(\mathcal{A}))}$ is a stable ${\infty}$-category, whose suspension functor is given by shifting by ${1}$.
This is somewhat interesting—we get the classical shift functor out of the ${\infty}$-categorical context. Also, we get the classical fact that the homotopy category of chain complexes is a triangulated category. (After localizing this at the quasi-isomorphisms, one gets to the classical derived category.)
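To see concretely why the shift deserves to be called suspension, recall the mapping cone (one common sign convention):

$\displaystyle \mathrm{Cone}(f)_n = A_{n-1} \oplus B_n, \qquad d(a,b) = (-d a,\; f(a) + d b).$

For ${f: A_\bullet \rightarrow 0}$ the cone is ${A_\bullet[1]}$ (with differential ${-d}$), and since the cone computes the cofiber, the suspension ${\Sigma A_\bullet = \mathrm{cofib}(A_\bullet \rightarrow 0)}$ is exactly the shift ${A_\bullet[1]}$.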
How might we prove such a result? We’re going to try to understand what pushouts and pull-backs look like in ${\mathbf{N}(\mathrm{Ch}(\mathcal{A}))}$. We will make a simplifying assumption: ${\mathcal{A}}$ is a full subcategory of an abelian category ${\mathcal{T}}$, and ${\mathcal{A}}$ is idempotent complete and contains pull-backs under ${\mathcal{T}}$-epimorphisms. The particular examples we have in mind are either an abelian category or the category of projective (or injective) objects in an abelian category, so we could just restrict to those cases.
First, observe that chain homotopy equivalences are actually equivalences in ${\mathbf{N}(\mathrm{Ch}(\mathcal{A}))}$—in fact, we can think of the 2-morphisms in ${\mathbf{N}(\mathrm{Ch}(\mathcal{A}))}$ as being given by chain homotopies (between the 1-morphisms, which are ordinary morphisms of chain complexes).
So let’s consider a square in ${\mathbf{N}(\mathrm{Ch}(\mathcal{A}))}$.
This is, in particular, a homotopy commutative square of chain complexes, and by replacing it with a weakly equivalent square we may assume that it is commutative on the nose. We want to show that it is a homotopy push-out if and only if it is a homotopy pull-back. If we replace ${B_\bullet \rightarrow D_\bullet}$ by something up to homotopy, we can assume that it is degreewise split (e.g. replace it by the mapping cylinder). In this case, we have to show that the square is homotopy cartesian if and only if it is homotopy cocartesian.
However, the point is that being homotopy cartesian in an ${\infty}$-category can be tested on the level of hom-spaces. That is, the square of mapping spaces obtained by applying ${\hom(T_\bullet, -)}$ to our square needs to be a homotopy pull-back of topological spaces for any ${T_\bullet \in \mathrm{Ch}_{\geq 0}}$. (Here ${\hom}$ means the simplicial hom.) Since ${B_\bullet \rightarrow D_\bullet}$ is a degreewise split surjection, one can show that the map ${\hom(T_\bullet, B_\bullet) \rightarrow \hom(T_\bullet, D_\bullet)}$ is a Kan fibration: this corresponds to the fact that a surjection of simplicial abelian groups is a Kan fibration. So another way of saying this is that we have a homotopy equivalence of Kan complexes
$\displaystyle \hom(T_\bullet, A_\bullet) \simeq\hom(T_\bullet, C_\bullet) \times_{\hom(T_\bullet, D_\bullet)} \hom(T_\bullet, B_\bullet),$
for every ${T_\bullet}$. This is equivalent to saying that the chain complexes of morphisms
$\displaystyle \underline{Hom}(T_\bullet, A_\bullet) \rightarrow \underline{Hom}(T_\bullet, C_\bullet) \times_{\underline{Hom}(T_\bullet, D_\bullet)} \underline{Hom}(T_\bullet, B_\bullet)$
is a weak equivalence (of chain complexes of abelian groups), for every ${T_\bullet}$: that is, by the Dold-Kan correspondence in nonnegative homological degrees, and by shifting in general.
Since ${\mathcal{A}}$ admits pull-backs under surjections, this is equivalent to the condition that the map
$\displaystyle A_\bullet \rightarrow C_\bullet \times_{D_\bullet} B_\bullet$
be a homotopy equivalence (i.e., an equivalence in this ${\infty}$-category). In other words, we find that ${C_\bullet \times_{D_\bullet} B_\bullet}$ is the homotopy pull-back. Replacing the square by an equivalent one, we may assume that this is true up to isomorphism: that is, that ${A_\bullet \simeq C_\bullet \times_{D_\bullet} B_\bullet}$.
The condition that the analogous square be a push-out is that
$\displaystyle B_\bullet \sqcup_{A_\bullet} C_\bullet \rightarrow D_\bullet$
be a homotopy equivalence. But if ${B_\bullet \rightarrow D_\bullet}$ is surjective and ${A_\bullet \rightarrow C_\bullet \times_{D_\bullet} B_\bullet}$ is an isomorphism, then ${B_\bullet \sqcup_{A_\bullet} C_\bullet \rightarrow D_\bullet }$ is an isomorphism. So we find that a pull-back square is a push-out square. The converse is similar.
Technically, we should also show the existence of finite limits and colimits, and pointedness. But this reduces to the existence of coproducts and products (which is straightforward—they are as in the 1-categorical case) and the zero object is the zero complex.
2. Towards the derived ${\infty}$-category
The classical derived category of an abelian category ${\mathcal{A}}$ is usually constructed in two steps. First, one constructs the homotopy category of chain complexes in ${\mathcal{A}}$: the objects are chain complexes and morphisms are homotopy classes of maps. This is already a triangulated category, but it’s not quite what we want. Then, one inverts the quasi-isomorphisms to get the derived category, and shows that this localization process preserves the triangulatedness.
That’s not the best approach in this context: while formally adjoining inverses can be done in ${\infty}$-categories, the philosophy that seems to predominate is that we should find localizations as subcategories. This is the philosophy of Bousfield localization, in which one localizes the stable homotopy category at morphisms inducing isomorphisms in ${E}$-homology for some spectrum ${E}$. This localization is equivalent to the subcategory of “${E}$-local objects” in the original stable homotopy category.
More to the point, there is an alternative description of the classical derived category, valid when there are enough projectives:
Description: The derived category (bounded-below) of ${\mathcal{A}}$ is the homotopy category of the category of chain complexes of projectives in ${\mathcal{A}}$.
This is nice, because a quasi-isomorphism of projectives is automatically a homotopy equivalence. So there’s no “formal inversion” of morphisms necessary: one just restricts to a subcategory. This motivates the following definition:
Definition 4 (Lurie) Given an abelian category ${\mathcal{A}}$ with enough projectives, let ${\mathcal{P}}$ be the subcategory of projectives. Then we define the derived ${\infty}$-category
$\displaystyle D^-(\mathcal{A}) = \mathbf{N}( \mathrm{Ch}_{\gg - \infty}(\mathcal{P}))$

to be the nerve of the subcategory of ${\mathbf{N}( \mathrm{Ch}(\mathcal{P}))}$ consisting of bounded-below complexes of projectives.
It is evident from this definition (and unraveling of what the morphism spaces are in ${\mathbf{N}( \mathrm{Ch}(\mathcal{A}))}$) that the homotopy category of this coincides with the second description of the classical derived category.
The derived ${\infty}$-category, as stated above, has a powerful universal property, which makes it much easier to map out of than its 1-categorical shadow, the classical derived category. In the next post, we'll see how this works.
http://unapologetic.wordpress.com/2007/12/03/the-topological-field-of-real-numbers/?like=1&source=post_flair&_wpnonce=78d053c4d8 | # The Unapologetic Mathematician
## The Topological Field of Real Numbers
We’ve defined the topological space we call the real number line $\mathbb{R}$ as the completion of the rational numbers $\mathbb{Q}$ as a uniform space. But we want to be able to do things like arithmetic on it. That is, we want to put the structure of a field on this set. And because we’ve also got the structure of a topological space, we want the field operations to be continuous maps. Then we’ll have a topological field, or a “field object” (analogous to a group object) in the category $\mathbf{Top}$ of topological spaces.
Not only do we want the field operations to be continuous, we want them to agree with those on the rational numbers. And since $\mathbb{Q}$ is dense in $\mathbb{R}$ (and similarly $\mathbb{Q}\times\mathbb{Q}$ is dense in $\mathbb{R}\times\mathbb{R}$), we will get unique continuous maps to extend our field operations. In fact the uniqueness is the easy part, due to the following general property of dense subsets.
Consider a topological space $X$ with a dense subset $D\subseteq X$. Then every point $x\in X$ has a sequence $x_n\in D$ with $\lim x_n=x$. Now if $f:X\rightarrow Y$ and $g:X\rightarrow Y$ are two continuous functions which agree for every point in $D$, then they agree for all points in $X$. Indeed, picking a sequence in $D$ converging to $x$ we have
$f(x)=f(\lim x_n)=\lim f(x_n)=\lim g(x_n)=g(\lim x_n)=g(x)$.
So if we can show the existence of a continuous extension of, say, addition of rational numbers to all real numbers, then the extension is unique. In fact, the continuity will be enough to tell us what the extension should look like. Let’s take real numbers $x$ and $y$, and sequences of rational numbers $x_n$ and $y_n$ converging to $x$ and $y$, respectively. We should have
$s(x,y)=s(\lim x_n,\lim y_n)=s(\lim(x_n,y_n))=\lim x_n+y_n$
but how do we know that the limit on the right exists? Well if we can show that the sequence $x_n+y_n$ is a Cauchy sequence of rational numbers, then it must converge because $\mathbb{R}$ is complete.
Given a rational number $r$ we must show that there exists a natural number $N$ so that $\left|(x_m+y_m)-(x_n+y_n)\right|<r$ for all $m,n\geq N$. But we know that there’s a number $N_x$ so that $\left|x_m-x_n\right|<\frac{r}{2}$ for $m,n\geq N_x$, and a number $N_y$ so that $\left|y_m-y_n\right|<\frac{r}{2}$ for $m,n\geq N_y$. Then we can choose $N$ to be the larger of $N_x$ and $N_y$ and find
$\left|(x_m-x_n)+(y_m-y_n)\right|\leq\left|x_m-x_n\right|+\left|y_m-y_n\right|<\frac{r}{2}+\frac{r}{2}=r$
So the sequence of sums is Cauchy, and thus converges.
What if we chose different sequences $x'_n$ and $y'_n$ converging to $x$ and $y$? Then we get another Cauchy sequence $x'_n+y'_n$ of rational numbers. To show that addition of real numbers is well-defined, we need to show that it’s equivalent to the sequence $x_n+y_n$. So given a rational number $r$ does there exist an $N$ so that $\left|(x_n+y_n)-(x'_n+y'_n)\right|<r$ for all $n\geq N$? This is almost exactly the same as the above argument that each sequence is Cauchy! As such, I’ll leave it to you.
So we’ve got a continuous function taking two real numbers and giving back another one, and which agrees with addition of rational numbers. Does it define an Abelian group? The uniqueness property for functions defined on dense subspaces will come to our rescue! We can write down two functions from $\mathbb{R}\times\mathbb{R}\times\mathbb{R}$ to $\mathbb{R}$ defined by $s(s(x,y),z)$ and $s(x,s(y,z))$. Since $s$ agrees with addition on rational numbers, and since triples of rational numbers are dense in the set of triples of real numbers, these two functions agree on a dense subset of their domains, and so must be equal. If we take the ${0}$ from $\mathbb{Q}$ as the additive identity we can also verify that it acts as an identity for real number addition. We can also find the negative of a real number $x$ by negating each term of a Cauchy sequence converging to $x$, and verify that this behaves as an additive inverse, and we can show this addition to be commutative, all using the same techniques as above. From here we’ll just write $x+y$ for the sum of real numbers $x$ and $y$.
What about the multiplication? Again, we’ll want to choose rational sequences $x_n$ and $y_n$ converging to $x$ and $y$, and define our function by
$m(x,y)=m(\lim x_n,\lim y_n)=m(\lim(x_n,y_n))=\lim x_ny_n$
so it will be continuous and agree with rational number multiplication. Now we must show that for every rational number $r$ there is an $N$ so that $\left|x_my_m-x_ny_n\right|<r$ for all $m,n\geq N$. This will be a bit clearer if we start by noting that for each rational $r_x$ there is an $N_x$ so that $\left|x_m-x_n\right|<r_x$ for all $m,n\geq N_x$. In particular, for sufficiently large $n$ we have $\left|x_n\right|<\left|x_N\right|+r_x$, so the sequence $\left|x_n\right|$ is bounded above by some $b_x$. Similarly, given $r_y$ we can pick $N_y$ so that $\left|y_m-y_n\right|<r_y$ for $m,n\geq N_y$ and get an upper bound $b_y\geq \left|y_n\right|$ for all $n$. Then choosing $N$ to be the larger of $N_x$ and $N_y$ we will have
$\left|x_my_m-x_ny_n\right|=\left|(x_m-x_n)y_m+x_n(y_m-y_n)\right|\leq r_xb_y+b_xr_y$
for $m,n\geq N$. Now given a rational $r$ we can (with a little work) find $r_x$ and $r_y$ so that the expression on the right will be less than $r$, and so the sequence is Cauchy, as desired.
Then, as for addition, it turns out that a similar proof will show that this definition doesn’t depend on the choice of sequences converging to $x$ and $y$, so we get a multiplication. Again, we can use the density of the rational numbers to show that it’s associative and commutative, that $1\in\mathbb{Q}$ serves as its unit, and that multiplication distributes over addition. We’ll just write $xy$ for the product of real numbers $x$ and $y$ from here on.
To show that $\mathbb{R}$ is a field we need a multiplicative inverse for each nonzero real number. That is, for each Cauchy sequence of rational numbers $x_n$ that doesn’t converge to ${0}$, we would like to consider the sequence $\frac{1}{x_n}$, but some of the $x_n$ might equal zero and thus throw us off. However, there can only be a finite number of zeroes in the sequence or else ${0}$ would be an accumulation point of the sequence and it would either converge to ${0}$ or fail to be Cauchy. So we can just change each of those to some nonzero rational number without breaking the Cauchy property or changing the real number it converges to. Then another argument similar to that for multiplication shows that this defines a function from the nonzero reals to themselves which acts as a multiplicative inverse.
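As a playful aside, the proofs above are effective enough to run. Here is a toy Python sketch (not from the original post) representing a real number as a rule producing a rational within $2^{-n}$, with addition and multiplication requesting exactly the extra precision the estimates above demand:

```python
from fractions import Fraction
import math

# A "real" is a rule approx(n) returning a rational within 2**-n of it,
# i.e. a Cauchy sequence with an explicit modulus of convergence.
class Real:
    def __init__(self, approx):
        self.approx = approx          # approx(n) -> Fraction, error <= 2**-n

    def __add__(self, other):
        # |x_{n+1} + y_{n+1} - (x+y)| <= 2**-(n+1) + 2**-(n+1) = 2**-n
        return Real(lambda n: self.approx(n + 1) + other.approx(n + 1))

    def __mul__(self, other):
        def approx(n):
            Bx = abs(self.approx(0)) + 1        # Bx >= |x|
            By = abs(other.approx(0)) + 1       # By >= |y|
            # error <= (Bx + By + 1) * 2**-m, so pad the precision:
            m = n + (int(Bx + By) + 2).bit_length()
            return self.approx(m) * other.approx(m)
        return Real(approx)

# sqrt(2) via integer square roots: |isqrt(2*4**n)/2**n - sqrt(2)| <= 2**-n
sqrt2 = Real(lambda n: Fraction(math.isqrt(2 * 4**n), 2**n))
print(float((sqrt2 * sqrt2).approx(40)))   # 2.0 (up to ~2**-40)
print(float((sqrt2 + sqrt2).approx(40)))   # 2.8284271...
```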
## 13 Comments »
1. [...] Order on the Real Numbers We’ve defined the real numbers as a topological field by completing the rational numbers as a uniform space, and then extending [...]
Pingback by | December 4, 2007 | Reply
2. [...] Spaces and continuity of real-valued functions Now that we’ve got the real numbers which correspond to our usual notion of magnitudes like distances, let’s refine our concept [...]
Pingback by | December 10, 2007 | Reply
3. [...] in Series I As we’ve said before, the real numbers are a topological field. The fact that it’s a field means, among other things, that it comes equipped with an [...]
Pingback by | May 6, 2008 | Reply
4. Given that the theory of fields is not purely algebraic, what exactly constitutes a field object? That is, what would correspond to the condition that reciprocation’s domain should be precisely the non-zero elements? Do we need to work in a category with a notion of subobject complement or something like that?
Comment by Sridhar Ramesh | July 7, 2008 | Reply
5. I’m not sure offhand what you mean about the theory of fields not being algebraic. The theory of topological fields isn’t…
Comment by | July 7, 2008 | Reply
6. Sure, I understand what a topological field is; I’m just curious what a field object would amount to in general, in other categories.
Sorry, what I meant by “the theory of fields is not purely algebraic” was that, as a result of the stipulation “Reciprocation is defined for all and only the non-zero elements”, it wasn’t specifiable purely in terms of universal equations over some language of (total) operators, in the manner of universal algebra, the way groups, rings, vector spaces, etc., are; as a result, my simple understanding of “X object” in such cases [as a monoidal functor from a particular monoidal category whose structure is determined by those identities] couldn’t work as is.
Comment by Sridhar Ramesh | July 7, 2008 | Reply
7. Just to make my question clear, it is this: what exactly is the definition of a field object (where this definition is presumably interpretable in contexts beyond simply Set and Top)?
I would take it to be something like a pair of objects F and F* in a monoidal category, corresponding to the whole field and its multiplicative group, along with morphisms corresponding to the various field operators, satisfying the field identities _and_ satisfying some property along the lines of “F* should act as though it is all and only the nonzero elements of F”. The category theoretic-version of the property in quotes is what I can’t figure out.
Comment by Sridhar Ramesh | July 7, 2008 | Reply
8. Whoops, I shouldn’t have listed “vector spaces” above as an example of case where I already understand what “X object” would mean. Replace it with “modules”.
Comment by Sridhar Ramesh | July 7, 2008 | Reply
9. Ramesh, as you point out, the notion of field is not algebraic (i.e., is not given by a Lawvere algebraic theory, or for that matter by a suitable monad). To define the notion of a field in a category C, you therefore need to assume more of C than that it just has cartesian products.
A more or less satisfactory solution is to assume that C can support a certain amount of first-order logic, e.g., C is a pretopos or logos (see here for a quick sketch of the relevant notions), or more strongly, a topos. A lot of it boils down to certain assumptions on the structure of subobject lattices (for example, in a logos the subobject lattices form Heyting algebras, so that one can interpret operations like implication and negation), and on operations between them (pulling back or “substitution” operations, and pushing forward or “quantification” operations adjoint to pulling back, e.g., direct image operations as left adjoint to pulling back).
Probably a good place to begin learning about internalizing first-order logic in a category is by reading the book on topos theory by Mac Lane and Moerdijk.
Even so, there are subtleties because in the typical applications, pretoposes or logoses or toposes tend to be “intuitionistic”, i.e., the subobject lattices are not generally Boolean algebras, but Heyting algebras. For example, we could define a field by interpreting
forall x (not x = 0) => exists_y xy = 1
or by interpreting say
forall x (x = 0) or (exists_y xy = 1)
but in intuitionistic logic, the latter formulation will be strictly stronger than the former. Then one has to decide which is more “appropriate” for the application at hand (which I won’t get into here).
Comment by Todd Trimble | July 8, 2008 | Reply
10. Ah, alright. I have some familiarity with topos theory (and no qualms with intuitionistic logic, which has pretty much became my default manner of thinking), but I was wondering if something less heavy-duty would suffice.
It looks from your link as though a prelogos should be more than able to handle something like “for all x. (x = 0) or (the inverse of x is defined)”, since it seems to have the right structure to deal with disjunctions. I guess a logos would be able to go further and also handle “forall x. (not x = 0) iff (the inverse of x is defined)” [my preferred formulation], since Lawvere-style generalized universal quantification and the empty disjunction should let us express “the object of non-zero elements” fairly directly.
It’s a shame that this all seems still quite a bit heavier than “monoidal categories” or “categories with finite products”; I suppose one could pare down the structure of a logos into only what is needed in this case, but perhaps it wouldn’t be very natural to do so. Ah well.
Comment by Sridhar Ramesh | July 8, 2008 | Reply
11. I had written a bi-implication operator above whose angle brackets seem to have doomed it to be parsed as some kind of HTML tag instead. The second quote in the second paragraph above should read “forall x. (not x = 0) iff (the inverse of x is defined)”.
Comment by Sridhar Ramesh | July 8, 2008 | Reply
12. Looking back on this discussion, it occurs to me that Top is none of the kinds of logic-interpreting categories mentioned above (as it is not regular). But its subobject lattices will have pseudocomplements, I suppose, which should be enough for defining field objects.
Comment by Sridhar Ramesh | August 1, 2008 | Reply
13. [...] real numbers have a topology. In fact, that’s really their main characteristic. The rational numbers have a topology too, [...]
Pingback by | August 26, 2008 | Reply
http://mathhelpforum.com/advanced-algebra/197725-self-dual-matrix.html | Thread:
1. Self Dual matrix
Hi, I was wondering if anyone could help me solve this question from a past paper:
Let C be the code with parity check matrix:
$\begin{pmatrix} 0&0&0&1&1&1&1&0 \\ 0&1&1&0&0&1&1&0 \\ 1&0&1&0&1&0&1&0 \\ 1&1&1&1&1&1&1&1 \end{pmatrix}$
Prove that C is self dual. (You can assume that the dual of an $(n,k)$-code is an $(n,n-k)$-code and that …)
What is d for this code?
I have worked out that the matrix multiplied by its own transpose equals $0$, which shows the rows are mutually orthogonal, but I am unsure where to go from here.
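A quick computational check of both claims (not a proof, but useful for guessing $d$), assuming arithmetic over GF(2):

```python
import itertools
import numpy as np

H = np.array([[0, 0, 0, 1, 1, 1, 1, 0],
              [0, 1, 1, 0, 0, 1, 1, 0],
              [1, 0, 1, 0, 1, 0, 1, 0],
              [1, 1, 1, 1, 1, 1, 1, 1]])

# H H^T = 0 mod 2 means every row of H is orthogonal to every row (itself
# included), so the dual code rowspace(H) = C^perp sits inside C = ker H.
assert not (H @ H.T % 2).any()

# Enumerate rowspace(H); 16 distinct words means rank(H) = 4, hence
# dim C^perp = 4 = 8 - 4 = dim C, and therefore C^perp = C: self-dual.
words = {tuple(np.dot(c, H) % 2) for c in itertools.product([0, 1], repeat=4)}
assert len(words) == 16

d = min(sum(w) for w in words if any(w))
print("minimum distance d =", d)   # prints 4 (the extended Hamming [8,4,4])
```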
http://math.stackexchange.com/questions/295337/gelfands-formula-rt-lim-n-to-infty-sqrtn-tn/295349 | # Gelfand's Formula. $r(T)=\lim_{n \to\infty}\sqrt[n]{\|T^{n}\|}$
Can you point me to a reference where I can find the proof of Gelfand's formula? I heard that there is a proof using polynomials.
Gelfand's Formula :
If $T \in B(X)$ then: $$r(T)=\lim_{n \to\infty}\sqrt[n]{\|T^{n}\|}$$
## 1 Answer
I do not know of any proof using just polynomials. All the proofs I have seen involve complex analysis to a certain extent, either Laurent series expansions, Liouville's theorem, or Hadamard's Theorem.
I am pretty sure that most books on functional analysis which cover Banach spaces and/or Banach algebras will contain a proof, for example:
G. F. Simmons - Introduction to Topology and Modern Analysis - page 312 in my (old) edition.
Riesz and Nagy - Functional Analysis - section 149 (page 425 in my Dover edition).
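A small finite-dimensional example shows why the limit, and not $\|T\|$ itself, computes the spectral radius. Take the nilpotent matrix
$$T=\begin{pmatrix}0&1\\0&0\end{pmatrix}\in B(\mathbb{C}^2).$$
Then $\|T\|=1$, but $T^n=0$ for $n\ge 2$, so $\sqrt[n]{\|T^n\|}=0$ eventually and $r(T)=0$, matching the fact that the spectrum of $T$ is $\{0\}$.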
http://unapologetic.wordpress.com/2010/12/03/dual-frobenius-reciprocity/?like=1&source=post_flair&_wpnonce=9a71b83933 | # The Unapologetic Mathematician
## Dual Frobenius Reciprocity
Our proof of Frobenius reciprocity shows that induction is a left-adjoint to restriction. In fact, we could use this to define induction in the first place; show that restriction functor must have a left adjoint and let that be induction. The downside is that we wouldn’t get an explicit construction for free like we have.
One interesting thing about this approach, though, is that we can also show that restriction must have a right adjoint, which we might call “coinduction”. But it turns out that induction and coinduction are naturally isomorphic! That is, we can show that
$\displaystyle\hom_H(W\!\!\downarrow^G_H,V)\cong\hom_G(W,V\!\!\uparrow_H^G)$
Indeed, we can use the duality on hom spaces and apply it to yesterday’s Frobenius adjunction:
$\displaystyle\begin{aligned}\hom_H(W\!\!\downarrow^G_H,V)&\cong\hom_H(V,W\!\!\downarrow^G_H)^*\\&\cong\hom_G(V\!\!\uparrow_H^G,W)^*\\&\cong\hom_G(W,V\!\!\uparrow_H^G)\end{aligned}$
Sometimes when two functors are both left and right adjoints of each other, we say that they are a “Frobenius pair”.
Now let’s take this relation and apply our “decategorifying” correspondence that passes from representations down to characters. If the representation $V$ has character $\chi$ and $W$ has character $\psi$, then hom-spaces become inner products, and (natural) isomorphisms become equalities. We find:
$\displaystyle\langle\psi\!\!\downarrow^G_H,\chi\rangle_H=\langle\psi,\chi\!\!\uparrow_H^G\rangle_G$
which is our “fake” Frobenius reciprocity relation.
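As a small sanity check of this relation, take ${G=S_3}$ and ${H=A_3\cong\mathbb{Z}/3}$, let ${\psi}$ be the two-dimensional irreducible character of ${S_3}$, and let ${\chi}$ be a nontrivial (linear) character of ${A_3}$. Restricting, ${\psi\!\!\downarrow^G_H}$ is the sum of the two nontrivial characters of ${A_3}$, so

$\displaystyle \langle\psi\!\!\downarrow^G_H,\chi\rangle_H=1.$

Inducing, ${\chi\!\!\uparrow_H^G}$ has degree ${[G:H]\cdot 1=2}$ and is in fact ${\psi}$ itself, so

$\displaystyle \langle\psi,\chi\!\!\uparrow_H^G\rangle_G=1$

as well, and the two sides agree.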
## 2 Comments »
The last chi is missing its /.
Very interesting posts and well-written stuff, btw. Thanks.
Comment by Greg Simon | December 4, 2010 | Reply
• Thanks, and thanks.
Comment by | December 4, 2010 | Reply
http://blog.assafrinot.com/?p=318 | papers, preprints, slides and expository
# A topological reflection principle equivalent to Shelah’s strong hypothesis
Posted on September 30, 2011.
Abstract: We notice that Shelah’s Strong Hypothesis (SSH) is equivalent to the following reflection principle:
Suppose $\mathbb X$ is an (infinite) first-countable space whose density is a regular cardinal, $\kappa$.
If every separable subspace of $\mathbb X$ is of cardinality at most $\kappa$, then the cardinality of $\mathbb X$ is $\kappa$.
Citation information:
A. Rinot, A topological reflection principle equivalent to Shelah’s Strong Hypothesis, Proc. Amer. Math. Soc., 136(12): 4413-4416, 2008.
Update:
In a more recent paper, the arguments of the above paper were pushed further to show that SSH is also equivalent to the following:
Suppose $\mathbb X$ is a countably tight space whose density is a regular cardinal, $\kappa$.
If every separable subspace of $\mathbb X$ is countable, then the cardinality of $\mathbb X$ is $\kappa$.
http://math.stackexchange.com/questions/62048/determining-number-of-positive-integer-solutions-to-ax-by-cz-dw-z | # Determining number of positive integer solutions to Ax + By + Cz + Dw < Z ?
I would like a method to determine the number of solutions in non-negative integers of a linear inequality of the form:

$Ax + By + \dots < Z$, given integers $A, B, \dots, Z$ and integers $x, y, z, w \ge 0$.
For example, there are 11 solutions to $3x + 5y < 15$
I know this is similar to the existing question (Count the number of positive solutions for a linear diophantine equation). However, I am unclear about how to extend it to cover the inequality: do I need to apply the formula for each value $0, \dots, Z$? Also, it seems difficult to go from even $Ax + By = N$ to $y + z = n$ while remaining in integers.
Your $2$-dimensional question could be reformulated as counting the number of lattice points in the triangle with corners $(x,y) = \{(0,0), (5,0), (0,3)\}$, but without counting the points on the long diagonal. In that case it is one less than half of the number of lattice points in the rectangle formed by those points and $(5,3)$, which contains $24$ points, so then it is $11$. – TMM Sep 5 '11 at 18:49
In $2$ dimensions you could generalize this for arbitrary $A, B, Z$, but for higher dimensions it becomes harder I think. – TMM Sep 5 '11 at 18:53
## 1 Answer
The number of positive solutions of $3x+5y<15$ is the same as the number of positive solutions of $3x+5y+z=15$. The same "trick" works in general. Thus (apart from an increase of $1$ in the number of variables), there is no great difference between $<K$ and $= K$.
The real problems, whether one is dealing with equality or inequality, come from the implicit congruential restrictions.
There are a number of ideas that one can use, none very pleasant. It is useful to reformulate the problem so that we are looking for non-negative solutions. Then we can use generating functions to obtain an explicit $F(t)$ such that, for any $n$, the number of solutions with right-hand side equal to $n$ is the coefficient of $t^n$ in the power series expansion of $F(t)$. That is unfortunately not necessarily a practical computational tool for finding an exact answer.
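To make that concrete on the question's example (a worked instance added here, not in the original answer): the non-negative solutions of $3x+5y<15$ are exactly the non-negative solutions of $3x+5y+z=14$, so their number is the coefficient of $t^{14}$ in $$F(t)=\frac{1}{(1-t^3)(1-t^5)(1-t)},$$ which is indeed $11$.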
"come from the implicit congruential restrictions": Are you talking about the equality case here? Can you give a concrete example? – Srivatsan Sep 6 '11 at 5:39
@Srivatsan Narayanan: Consider $2x+3y \le n$ or equivalently $2x+3y+z=n$, non-negative solutions. The number of solutions is not too hard to compute. But we get $6$ slightly different "formulas," for $n$ congruent to $0$, $1$, $\dots$, $5$ (modulo $6$). – André Nicolas Sep 6 '11 at 10:43
http://math.stackexchange.com/questions/222663/understanding-the-derivative-geometrically?answertab=votes | # Understanding the derivative geometrically
I have always seen the derivative of a function $y=f(x)$, $\frac{dy}{dx}$ at $x_1$, as the slope of the line tangent to the curve $y=f(x)$ at the point $(x_1, f(x_1))$. But I often fail to appreciate this when $\frac{dy}{dx}=0$ at some point $x_1$.
Can anyone please tell me the geometrical significance of $\frac{dy}{dx}=0$
or
draw an analogy which would apply to the above-mentioned case?
(In fact, analytically, what is a tangent to a curve?)
Sorry for so many weird questions. I hope I am not being too incoherent.
## 3 Answers
When the derivative of a function $f$ is $0$ at, say, $(x_0,f(x_0))$, it means that the tangent line to the graph of the function at the point $(x_0,f(x_0))$ is horizontal. For example, $f(x)=x^2$ has $f'(0)=0$, and its tangent at the origin is the horizontal line $y=0$.
I would comment, but lack reputation. http://upload.wikimedia.org/wikipedia/commons/7/7a/Graph_of_sliding_derivative_line.gif may help intuitively: $\frac{dy}{dx}=0$ corresponds to when the tangent in the graphic is black (i.e. horizontal).
As to your first question: mathematically, the tangent to $f(x)$ at $a$ is $y=f(a)+f'(a)(x-a)$, which pops out naturally from the definition of the derivative, $f'(a)=\lim_{x \rightarrow a} \frac{f(x)-f(a)}{x-a}$, since a tangent is (I think) the linear approximation to a function at a point (a more accurate approximation the nearer you are to the point).
Just to add, more informally: the derivative, like a slope, shows how much $y$ changes for a "small" change in $x$. For a straight line, this ratio doesn't change even if the change in $x$ is not small, hence the slope, i.e. the derivative, is constant. For differentiable curves in general, this ratio becomes unique as you zoom in on the curve as much as you want. In other words, a differentiable function has, at any point $a$, other points infinitely close to it on the left, $a^-$, and on the right, $a^+$, such that all three of them, $a^-, a, a^+$, lie on the straight line which is tangent to the curve at this point $a$.
http://math.stackexchange.com/questions/134335/the-digit-base-and-the-ntt-convolution?answertab=votes | # The digit base and the NTT convolution
Suppose I'm using a number theoretic transform (NTT) in an integer field $GF(p)$. I assume that a $2n$-th root of unity exists for such a $p$, and I want to compute a convolution of two $n$-length numbers with digit base $BASE$, thereby obtaining the product of the numbers.
However, these two things concern me the most:
1. What about overflows during the computation of $a = NTT(\bar{x_1})$, $b = NTT(\bar{x_2})$ and $c = INTT(a\times b)$? I am concerned with this because I'd rather use primitive-typed (e.g. long) arrays than arbitrary-precision numeric types.
2. Even if I use arbitrary precision arithmetic for each coefficient... How big can operands be in terms of $n$ and $BASE$ in order to obtain a correct cyclic convolution, with no information loss during modular reduction in the computation of $a, b$ and $c$, so that $c$, after carries, is a correct product? Is it enough that $BASE^2$ is less than $p$ (something in my memory tells me that the numbers in the convolution can be as large as $BASE^2$, it may be incorrect...). Or are there other nuances?
I suppose the worst-case scenario is when both numbers are $BASE^k - 1$ for some integer $k$, so clarifications for this scenario are most appreciated...
Thank you very much in advance!
## 1 Answer
I found a good explanation in:
H. J. Nussbaumer, "Overflow Detection in the Computation of Convolutions by Some Number Theoretic Transforms," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-26, no. 1, p. 108, February 1978.
I cannot send you the paper because of copyright.
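As a quick sanity bound on point 2 (my own arithmetic, not taken from the paper): each coefficient of a length-$n$ cyclic convolution is a sum of $n$ digit products, each at most $(BASE-1)^2$, so every coefficient satisfies $c_i \le n\,(BASE-1)^2$. Avoiding wrap-around modulo $p$ therefore requires $p > n\,(BASE-1)^2$; the condition $BASE^2 < p$ alone is not enough once $n > 1$.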
http://mathhelpforum.com/geometry/175026-co-ordinate-geometry.html | Thread:
1. Co-ordinate geometry!
http://www.haeseandharris.com.au/sam...a11gt-5_02.pdf
Can you go to this link and go to chapter 2, page 89, question 7? I really really need help with this question. I know how to do parts a and b, but I need help with c!!
THANKS!
xx
2. What did you get in b? The correct answer is $b$, i.e., the same as the x-coordinate of R. This means that RS is perpendicular to OB.
If you did not get the answer $b$ in b, then what are the equations of the perpendicular bisectors you got in a?
3. No, I got x=b in b
But it is c that I am having trouble with, how you need to show that RS is perpendicular to OB.
I am not sure how to show that.
4. The x-coordinate of both R and S is b. This means that RS is vertical, i.e., perpendicular to OB. If this is not what you think is expected, then could you explain the definition of "perpendicular" that you use and the way you showed that some lines are perpendicular in your course?
5. co-ordinate geometry
Hello Tessarina,
S is the intersection of the perpendicular bisectors of two sides. It is therefore the circumcenter of the circumscribed circle of the triangle. All distances from S to the vertices are equal, so S lies on the perpendicular bisector of the base. M, the midpoint of the base, lies on the same line.
bjh
http://mathoverflow.net/revisions/19261/list | # Drawing (graphs) by numbers: a minimality question
Every simple graph $G$ can be represented ("drawn") by numbers in the following way:
1. Assign to each vertex $v_i$ a number $n_i$ such that all $n_i$, $n_j$ are coprime whenever $i\neq j$. Let $V$ be the set of numbers thus assigned.
2. Assign to each maximal clique $C_j$ a unique prime number $p_j$ which is coprime to every number in $V$.
3. Assign to each vertex $v_i$ the product $N_i$ of its number $n_i$ and the prime numbers $p_k$ of the maximal cliques it belongs to.
Then $v_i$, $v_j$ are adjacent iff $N_i$ and $N_j$ are not coprime,
i.e. there is a (maximal) clique they both belong to. Edit: It's enough to assign $n_i = 1$ when $v_i$ is not isolated and does not share all of its cliques with another vertex.
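For a concrete illustration (a worked example added here, not part of the original question): take the path $v_1 - v_2 - v_3$. Its maximal cliques are $C_1=\lbrace v_1,v_2\rbrace$ and $C_2=\lbrace v_2,v_3\rbrace$. Choosing $n_1=2$, $n_2=3$, $n_3=5$ and $p_1=7$, $p_2=11$ yields $N_1 = 2\cdot 7 = 14$, $N_2 = 3\cdot 7\cdot 11 = 231$, $N_3 = 5\cdot 11 = 55$: indeed $\gcd(N_1,N_2)=7$ and $\gcd(N_2,N_3)=11$ witness the two edges, while $\gcd(N_1,N_3)=1$ reflects that $v_1$ and $v_3$ are not adjacent.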
Being free in assigning the numbers $n_i$ and $p_j$ opens up a lot of possibilities, but also raises the following question:
QUESTION
Can the numbers be assigned systematically such that the greatest $N_i$ is minimal (among all that do the job) — and if so: how?
It is obvious that the $n_i$ in the first step have to be primes for the greatest $N_i$ to be minimal. I have taken the more general approach for other - partly answered - questions like "Can the numbers be assigned such that the set $\lbrace N_i \rbrace_{i=1,..,n}$ fulfills such-and-such conditions?"
http://mathhelpforum.com/discrete-math/28478-prove.html | # Thread:
1. ## prove
I'm looking for some help on this proof...
Let U be a set and let A and B be subsets of U.
Prove: (A\B) complement = A complement union B
thanks
2. $\left( {A\backslash B} \right)^c = \left( {A \cap B^c } \right)^c$
You should be able to carry it forward and finish.
3. would anybody be able to help work this out? I'm new at this, and the more info that could be included, the more it would help me out. thanks, i appreciate it
4. Originally Posted by rodemich
would anybody be able to help work this out? I'm new at this, and the more info that could be included, the more it would help me out. thanks, i appreciate it
Show us what you can do with Plato's hint.
-Dan
5. well i know to prove the equivalence, i need to prove that the statement is true both ways, right?
6. so i gotta prove both
(A/B)* is a subset of A* U B
and
A* U B is a subset of (A/B)*
7. Originally Posted by rodemich
well i know to prove the equivalence, i need to prove that the statement is true both ways, right?
I don't understand. If you can show $\left ( A \backslash B \right ) ^c$ can be transformed into $\left ( A \cap B ^c \right )^c$ then you are done.
-Dan
8. Originally Posted by rodemich
so i gotta prove both
(A/B)* is a subset of A* U B
and
A* U B is a subset of (A/B)*
that's the hard way. continue from where Plato left off, and apply one of DeMorgan's laws
(do you know DeMorgan's laws?)
9. im fairly familiar with De Morgans Laws, but not enough to use it on my own yet
10. Originally Posted by rodemich
im fairly familiar with De Morgans Laws, but not enough to use it on my own yet
if $X$ and $Y$ are sets, what would DeMorgan's laws transform $(X \cap Y)^c$ into?
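(Completing the chain the hints point to, since the thread stops at this question: DeMorgan gives $(X \cap Y)^c = X^c \cup Y^c$, hence $\left( A\backslash B \right)^c = \left( A \cap B^c \right)^c = A^c \cup (B^c)^c = A^c \cup B$.)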
http://mathhelpforum.com/calculus/204403-nature-critical-points-function.html
# Thread:
1. ## The nature of critical points of a function
Can anyone help with determining the nature of the critical points for the function:
f(x,y) = x^3 - x + y^2 - 2xy?
By setting the first partial derivatives to zero, I determined the critical points to be (1,1) and (-1/3,-1/3).
To determine the nature of these critical points I calculated the second partial derivatives and the discriminant, finding that (1,1) is a local minimum, which appears to agree with the 3d plot of the function.
However, my problem is at (-1/3,-1/3): the discriminant is less than 0, which implies a saddle point, but I believe it should be a local maximum from looking at the plot.
Can anyone assist me in seeing the error of my ways? Any suggestions would be greatly appreciated.
2. ## Re: The nature of critical points of a function
First I'll recap your work (since I have to do it anyway), then I'll explain the issue:
$\frac{\partial f}{\partial x} = 3x^2 -1 - 2y. \ \frac{\partial^2 f}{\partial x^2} = 6x. \ \frac{\partial^2 f}{\partial x \partial y} = -2.$
$\frac{\partial f}{\partial y} = 2y - 2x. \ \frac{\partial^2 f}{\partial y \partial x} = -2. \ \frac{\partial^2 f}{\partial y ^2} = 2.$
Stationary points are where $\frac{\partial f}{\partial x} = 0$ and $\frac{\partial f}{\partial y} = 0$, so where $3x^2 -1 - 2y = 0$ and $2y - 2x = 0$.
The 2nd equation gives $y = x$, so $3x^2 - 2x - 1= 0$, so
$x \ (= y) = \frac{-(-2) \pm \sqrt{(-2)^2 - 4(3)(-1)}}{2(3)} = \frac{2 \pm \sqrt{16}}{6} \in \{1, -1/3\}$.
The Hessian is
$H(f) = \left(\begin{matrix} \frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x \partial y} \\ \frac{\partial^2 f}{\partial y \partial x} & \frac{\partial^2 f}{\partial y ^2} \end{matrix}\right) = \left(\begin{matrix} 6x & -2 \\ -2 & 2 \end{matrix}\right)$.
So $Det(H(f)) = 12x - 4$, which is only 0 when x = 1/3. Thus both of those stationary points are non-degenerate.
Now the question, for each of the two stationary points (1, 1, -1) and (-1/3, -1/3, 5/27), is if H(f) positive definite (local min), negative definite(local max), or indefinite(saddle point)?
!!This is *not* entirely knowable by looking at the sign of the determinant of the Hessian. Ex: $\left(\begin{matrix} -1& 0 \\ 0 & -1 \end{matrix}\right)$ has determinant 1, but is negative definite!
Local max/min/saddle for f at a critical point are determined by the eigenvalues of the Hessian, not its determinant. All positive means local min, all negative means local max, and mixed (some positive, some negative) means saddle point. None are zero, since "non-degenerate stationary point iff Hessian has non-zero determinant iff it has no zero eigenvalues."
In the special case of a 2x2 matrix, since there are only two eigenvalues (counting multiplicity), and neither of them is zero, and the determinant is their product, if the determinant is positive, then either they're both positive or both negative. If the determinant is negative, then one is positive and the other is negative.
Thus *only* in the special case of a 2x2 matrix, you can say this:
If f has a nondegenerate stationary point p, and Det(H(f)) < 0 at p, then p is a saddle point of f, and if Det(H(f)) > 0 at p, then p is either a local maximum or local minimum.
Remember, that's *only* in the special case of a 2x2 matrix. In higher dimensions, the first part (Det(H(f)) < 0 implies saddle point) generalizes to even dimensions, and the second part (Det(H(f)) > 0 implies it's either a local max or a local min) simply fails in general.
(Note: real symmetric matrices always have all real eigenvalues (counting multiplicity)).
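(A shortcut worth noting here: for a $2 \times 2$ Hessian with positive determinant you can skip the eigenvalue computation entirely, because $Det(H) = \lambda_1 \lambda_2$ and $Tr(H) = \lambda_1 + \lambda_2$. So $Det(H) > 0$ with $Tr(H) > 0$ gives a local minimum, and $Det(H) > 0$ with $Tr(H) < 0$ gives a local maximum. In this problem, at (1, 1, -1) we have $Det(H) = 8 > 0$ and $Tr(H) = 6 + 2 = 8 > 0$, confirming the local minimum directly.)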
OK, so for this problem, we need to examine the eigenvalues of H(f) at (1, 1, -1) and (-1/3, -1/3, 5/27).
$H(f)|_{(-1/3, -1/3, 5/27)}= \left(\begin{matrix} 6(-1/3) & -2 \\ -2 & 2 \end{matrix}\right)= \left(\begin{matrix} -2 & -2 \\ -2 & 2 \end{matrix}\right)= -2\left(\begin{matrix} 1 & 1 \\ 1 & -1\end{matrix}\right)$.
Get characteristic polynomial first, then eigenvalues:
$det\left( \ \left(\begin{matrix} 1 & 1 \\ 1 & -1\end{matrix}\right) - \lambda \left(\begin{matrix} 1 & 0 \\ 0 & 1\end{matrix}\right) \ \right) = det\left( \ \left(\begin{matrix} 1-\lambda & 1 \\ 1 & -1-\lambda \end{matrix}\right) \ \right)$
$= (1-\lambda)(-1-\lambda) - (1)(1) = -(1-\lambda)(1+\lambda) - 1 = -(1-\lambda^2) -1 = \lambda^2-2$.
Thus the eigenvalues of $\left(\begin{matrix} 1 & 1 \\ 1 & -1\end{matrix}\right)$ are $\pm \sqrt{2}$, so the eigenvalues of H(f) at (-1/3, -1/3, 5/27) are $\mp 2\sqrt{2}$: one positive and one negative.
Therefore f has a saddle point at (-1/3, -1/3, 5/27).
Now for (1, 1, -1): $H(f)|_{(1, 1, -1)}= \left(\begin{matrix} 6(1) & -2 \\ -2 & 2 \end{matrix}\right)= 2 \left(\begin{matrix} 3 & -1 \\ -1 & 1 \end{matrix}\right)$. You get:
$det\left( \ \left(\begin{matrix} 3 & -1 \\ -1 & 1\end{matrix}\right) - \lambda \left(\begin{matrix} 1 & 0 \\ 0 & 1\end{matrix}\right) \ \right) = det\left( \ \left(\begin{matrix} 3-\lambda & -1 \\ -1 & 1-\lambda \end{matrix}\right) \ \right)$
$= (3-\lambda)(1-\lambda) - (-1)(-1) = (\lambda^2 - 4 \lambda + 3) - (1) = \lambda^2 - 4 \lambda + 2$
$= \lambda^2 - 4 \lambda + 4 - 2 = (\lambda - 2)^2 - 2$.
Thus the eigenvalues of $\left(\begin{matrix} 3 & -1 \\ -1 & 1\end{matrix}\right)$ are $2 \pm \sqrt{2}$, both positive; the eigenvalues of H(f) there are twice these, so H(f) is positive definite at that point.
Therefore f has a local minimum at (1, 1, -1).
3. ## Re: The nature of critical points of a function
Thank you very much johnsomeone!!
That has made it clearer. I appreciate the time taken to set up this answer.
4. ## Re: The nature of critical points of a function
Nicely done, johnsomeone.
- Hollywood
http://unapologetic.wordpress.com/2007/02/23/conjugacy-classes-in-symmetric-groups/ | The Unapologetic Mathematician
Conjugacy classes in symmetric groups
Let’s work out how symmetric groups act on themselves by conjugation. As I’m writing I notice that what I said before about composition of permutations is sort of backwards. It’s one of those annoying conventions that doesn’t really change anything, but can still be a bit confusing. From here on when we write permutations in cycle notation we compose by reading the cycles from right to left. That is, $(1\,2)(1\,3)=(1\,3\,2)$. Before I was reading them left to right. The new way behaves more like group actions. The exposition comes after the jump.
First of all, it’s useful to have a quick way of inverting a permutation. All we have to do is write it down in cycle notation, then reverse all the cycles. The inverse of $(1\,2\,3)$ is $(3\,2\,1)$. The inverse of $(1\,4)(2\,5\,3)$ is $(4\,1)(3\,5\,2)$.
Now let’s work out an example. Let $(1\,2\,3)$ act on $(1\,2)$ by conjugation. We calculate $(1\,2\,3)(1\,2)(3\,2\,1)=(2\,3)$. What about $(1\,3)$ acting on $(1\,2)$? We find $(1\,3)(1\,2)(3\,1)=(2\,3)$.
More generally, say $g$ is a permutation in $S_n$, and that $(a_1\,...a_k)$ is a $k$-cycle. Then we have the conjugation $h=g(a_1\,...\,a_k)g^{-1}$. Let’s see what it does to the symbol $x$.
Either $x$ is $g(a_i)$ for some $a_i$ in the cycle or not. If it is, then $h$ first sends $g(a_i)$ to $a_i$; then it sends that to $a_{i+1}$; then it sends that to $g(a_{i+1})$. If $x$ isn’t of this form, $h$ sends it back to itself. That is, $h$ is another $k$-cycle: $(g(a_1)\,...g(a_k))$.
For a product of disjoint cycles the answer is the same. Conjugation by $g$ replaces every $x$ in the cycle notation with $g(x)$. In particular, conjugation preserves the cycle structure of the permutation. On the other hand, given two permutations with the same cycle structure we can find a conjugation between them by writing them one above the other and sending a letter on the top to the letter just below it. If we have the two permutations
$(1\,4)(3\,5\,2)$
$(5\,2)(1\,3\,4)$
they are conjugate by $(1\,5\,3)(2\,4)$. Indeed we can check that $(1\,5\,3)(2\,4)(1\,4)(3\,5\,2)(3\,5\,1)(4\,2)=(1\,3\,4)(2\,5)$.
This is big. Permutations are conjugate if and only if they have the same cycle structure.
So what sort of cycle structures are there? The cycle notation for a permutation of $n$ letters breaks those letters into a bunch of different collections. There’s one cycle structure for every way of writing $n$ as the sum of a bunch of smaller numbers like this. We call such a way of adding up to $n$ a “partition” of $n$. For example, for $n=5$ we have
| Partition | Cycle Structure |
| --- | --- |
| $5$ | $(1\,2\,3\,4\,5)$ |
| $4+1$ | $(1\,2\,3\,4)(5)$ |
| $3+2$ | $(1\,2\,3)(4\,5)$ |
| $3+1+1$ | $(1\,2\,3)(4)(5)$ |
| $2+2+1$ | $(1\,2)(3\,4)(5)$ |
| $2+1+1+1$ | $(1\,2)(3)(4)(5)$ |
| $1+1+1+1+1$ | $(1)(2)(3)(4)(5)$ |
How many permutations are in each class? Let’s say we’re looking at a partition of $n$ into $n_1+...+n_k$. We shuffle around all $n$ letters in $n!$ ways and then take the first $n_1$ of them, then the next $n_2$, and so on until only $n_k$ are left.
But now we’ve overcounted. If we rotate the letters in a cycle around we have the same permutation: $(1\,2\,3\,4)=(2\,3\,4\,1)$. For the cycle of $n_i$ letters there are $n_i$ choices here that all give the same permutation, so we have to divide $n!$ by each $n_i$.
We’ve still overcounted! If there are two cycles of the same length we don’t care if we do first one and then the other since they share no letters in common. We have to further divide by $m_j!$ where $m_j$ is the number of $j$‘s in the partition. Then we’re right. Let’s do this for $n=5$.
| Cycle Structure | Size of Conjugacy Class |
| --- | --- |
| $(1\,2\,3\,4\,5)$ | $\frac{5!}{5}=24$ |
| $(1\,2\,3\,4)(5)$ | $\frac{5!}{4}=30$ |
| $(1\,2\,3)(4\,5)$ | $\frac{5!}{3\cdot 2}=20$ |
| $(1\,2\,3)(4)(5)$ | $\frac{5!}{3\cdot 2!}=20$ |
| $(1\,2)(3\,4)(5)$ | $\frac{5!}{2\cdot 2\cdot 2!}=15$ |
| $(1\,2)(3)(4)(5)$ | $\frac{5!}{2\cdot 3!}=10$ |
| $(1)(2)(3)(4)(5)$ | $\frac{5!}{5!}=1$ |
We can check that these numbers add up to $5!=120$, as they should. They also square with Mark Dominus’ post.
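The counting rule is mechanical enough to transcribe directly; here is a small Haskell sketch (an addition of this revision, with names of my own choosing) that reproduces the table:

```haskell
import Data.List (group, sort)

-- Size of the conjugacy class in S_n for a given partition of n:
-- n! divided by each cycle length and by m! for each multiplicity m.
classSize :: [Integer] -> Integer
classSize partition = factorial n `div` (product partition * product mults)
  where
    n     = sum partition
    mults = [factorial (fromIntegral (length g)) | g <- group (sort partition)]
    factorial k = product [1..k]
```

For instance, `classSize [2,2,1]` gives `15` and `classSize [5]` gives `24`, matching the table.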
So how can we say this in terms of group actions? The group $S_n$ acts on itself by conjugation. There is one orbit for each partition of $n$. We can calculate the size of the orbit corresponding to a given partition as above. If we watch closely, we’ve also found the isotropy subgroup of a given permutation: it’s the subgroup generated by permutations that rotate cycles and those that swap cycles of different types. In fact, the size of this group is exactly what we use to calculate the size of a conjugacy class! The number of permutations conjugate to a given permutation $g$ is the number of permutations ($n!$) divided by the size of the isotropy subgroup of $g$.
Pay attention to these things, they get even more interesting.
Posted by John Armstrong | Algebra, Group Actions, Group theory
http://lukepalmer.wordpress.com/tag/algorithms/ | # Luke Palmer
Functional programming and mathematical philosophy with musical interludes
# DI Breakdown
By Luke Palmer on January 15, 2013 | 6 Comments
I’m having a philosophical breakdown of the software engineering variety. I’m writing a register allocation library for my current project at work, referencing a not-too-complex algorithm which, however, has many degrees of freedom. Throughout the paper they talk about making various modifications to achieve different effects — tying variables to specific registers, brazenly pretending that a node is colorable when it looks like it isn’t (because it might work out in its favor), heuristics for choosing which nodes to simplify first, categorizing all the move instructions in the program to select from a smart, small set when the time comes to try to eliminate them. I’m trying to characterize the algorithm so that those different selections can be made easily, and it is a wonderful puzzle.
I also feel aesthetically stuck. I am feeling too many choices in Haskell — do I take this option as a parameter, or do I stuff it in a reader monad? Similarly, do I characterize this computation as living in the Cont monad, or do I simply take a continuation as a parameter? When expressing a piece of a computation, do I return the “simplest” type which provides the necessary data, do I return a functor which informs how the piece is to be used, or do I just go ahead and traverse the final data structure right there? What if the simplest type that gives the necessary information is vacuous, and all the information is in how it is used?
You might be thinking to yourself, “yes, Luke, you are just designing software.” But it feels more arbitrary than that — I have everything I want to say and I know how it fits together. My physics professor always used to say “now we have to make a choice — which is bad, because we’re about to make the wrong one”. He would manipulate the problem until every decision was forced. I need a way to constrain my decisions, to find what might be seen as the unique most general type of each piece of this algorithm. There are too many ways to say everything.
# Searchable Data Types
By Luke Palmer on November 17, 2010 | 14 Comments
A few years ago, Martín Escardó wrote an article about a seemingly-impossible program that can exhaustively search the uncountably infinite "Cantor space" (infinite streams of bits). He then showed that spaces that can be thus searched form a monad (which I threw onto hackage), and wrote a paper about the mathematical foundations which is seemingly impossible to understand.
Anyway, I thought I would give a different perspective on what is going on here. This is a more operational account, however some of the underlying topology might peek out for a few seconds. I’m no topologist, so it will be brief and possibly incorrect.
Let’s start with a simpler infinite space to search. The lazy naturals, or the one-point compactification of the naturals:
````data Nat = Zero | Succ Nat
deriving (Eq,Ord,Show)
infinity = Succ infinity
````
Let’s partially instantiate `Num` just so we have something to play with.
````instance Num Nat where
Zero + y = y
Succ x + y = Succ (x + y)
Zero * y = Zero
Succ x * y = y + (x * y)
fromInteger 0 = Zero
fromInteger n = Succ (fromInteger (n-1))
````
We wish to construct this function:
````search :: (Nat -> Bool) -> Maybe Nat
````
Which returns a `Nat` satisfying the criterion if there is one, otherwise `Nothing`. We assume that the given criterion is total, that is, it always returns, even if it is given `infinity`. We’re not trying to solve the halting problem. :-)
Let’s try to write this in direct style:
````search f | f Zero = Just Zero
| otherwise = Succ <$> search (f . Succ)
````
That is, if the predicate worked for `Zero`, then `Zero` is our guy. Otherwise, see if there is an `x` such that `f (Succ x)` matches the predicate, and if so, return `Succ x`. Make sense?
And it seems to work.
````ghci> search (\x -> x + 1 == 2)
Just (Succ Zero)
ghci> search (\x -> x*x == 16)
Just (Succ (Succ (Succ (Succ Zero))))
````
Er, almost.
````ghci> search (\x -> x*x == 15)
(infinite loop)
````
We want it to return `Nothing` in this last case. It’s no surprise that it didn’t — there is no condition under which `search` is capable of returning `Nothing`, that definition would pass if `Maybe` were defined `data Maybe a = Just a`.
It is not at all clear that it is even possible to get what we want. But one of Escardó’s insights showed that it is: make a variant of `search` that is allowed to lie.
````-- lyingSearch f returns a Nat n such that f n, but if there is none, then
-- it returns a Nat anyway.
lyingSearch :: (Nat -> Bool) -> Nat
lyingSearch f | f Zero = Zero
| otherwise = Succ (lyingSearch (f . Succ))
````
And then we can define our regular `search` in terms of it:
````search' f | f possibleMatch = Just possibleMatch
| otherwise = Nothing
where
possibleMatch = lyingSearch f
````
Let’s try.
````ghci> search' (\x -> x*x == 16) -- as before
Just (Succ (Succ (Succ (Succ Zero))))
ghci> search' (\x -> x*x == 15)
Nothing
````
Woah! How the heck did it know that? Let’s see what happened.
````let f = \x -> x*x == 15
lyingSearch f
0*0 /= 15
Succ (lyingSearch (f . Succ))
1*1 /= 15
Succ (Succ (lyingSearch (f . Succ . Succ)))
2*2 /= 15
...
````
That condition is never going to pass, so `lyingSearch` is going to keep taking the `Succ` branch forever, thus returning `infinity`. Inspection of the definition of `*` reveals that `infinity * infinity = infinity`, and `infinity` differs from `15` once you peel off 15 `Succ`s, thus `f infinity = False`.
With this example in mind, the correctness of `search'` is fairly apparent. Exercise for the readers who are smarter than me: prove it formally.
Since a proper `Maybe`-returning search is trivial to construct given one of these lying functions, the question becomes: for which data types can we implement a lying search function? It is a challenging but approachable exercise to show that it can be done for every recursive polynomial type, and I recommend it to anyone interested in the subject.
Hint: begin with a `Search` data type:
````newtype Search a = S ((a -> Bool) -> a)
````
Implement its Functor instance, and then implement the following combinators:
````searchUnit :: Search ()
searchEither :: Search a -> Search b -> Search (Either a b)
searchPair :: Search a -> Search b -> Search (a,b)
newtype Nu f = Roll { unroll :: f (Nu f) }
searchNu :: (forall a. Search a -> Search (f a)) -> Search (Nu f)
````
More Hint: is `searchPair` giving you trouble? To construct `(a,b)`, first find `a` such that there exists a `y` that makes `(a,y)` match the predicate. Then construct `b` using your newly found `a`.
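For readers who want to check their work, here is one possible fill-in for two of these (mine, so treat it as a spoiler rather than the canonical answer; `searchBool` is the evident base case used by the Cantor example below):

```haskell
searchBool :: Search Bool
searchBool = S (\p -> if p True then True else False)

searchPair :: Search a -> Search b -> Search (a,b)
searchPair (S fa) (S fb) = S $ \p ->
    -- find an 'a' for which some 'b' could satisfy p, then find that 'b'
    let a = fa (\a' -> p (a', fb (\b' -> p (a', b'))))
        b = fb (\b' -> p (a, b'))
    in (a, b)
```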
The aforementioned Cantor space is a recursive polynomial type, so we already have its search function.
````type Cantor = Nu ((,) Bool)
searchCantor = searchNu (searchPair searchBool)
ghci> take 30 . show $ searchCantor (not . fst . unroll . snd . unroll)
"(True,(False,(True,(True,(True"
````
We can’t expect to construct a reasonable `Search Integer`. We could encode in the bits of an `Integer` the execution trace of a Turing machine, as in the proof of the undecidability of the Post correspondence problem. We could write a total function `validTrace :: Integer -> Bool` that returns `True` if and only if the given integer represents a valid trace that ends in a halting state. And we could also write a function `initialState :: Integer -> MachineState` that extracts the first state of the machine. Then the function `\machine -> searchInteger (\x -> initialState x == machine && validTrace x)` would solve the halting problem.
The reason this argument doesn’t work for `Nat` is because the `validTrace` function would loop on `infinity`, thus violating the precondition that our predicates must be total.
I hear the following question begging to be explored next: are there any searchable types that are not recursive polynomials?
# Collision Detection with Enneatrees
By Luke Palmer on November 8, 2009 | 12 Comments
Many of my games boil down to, at some level, a large collection of circles (or spheres) interacting with each other. I use circles for collision detection, and otherwise whenever I can for organization, because the results look more natural than using boxes. If you organize using boxes, your simulation will tend to “align” to the axes, which nature surely does not do.
So naturally, detecting when two of these circles are touching each other is one of the larger computational problems that my games face. And bewilderingly, I have not found anywhere a good solution that covers circles in a wide variety of sizes. That is what this article is about.
If your circles are all about the same size, and in particular fairly small in comparison to the needed accuracy of the algorithm, the obvious and de-facto solution is to use a quadtree. That is, start with the whole screen and divide it up into 4 regions, and do so recursively until you have a nice spatial arrangement of your objects, categorizing objects by which region their center lies in.
When you have a circle and want to see what other circles are touching it, you just look for other objects in the smallest region. Well, except you are not a point, you are a circle, so you might penetrate the edge of the region, in which case you need to look at the adjacent region as well. Well, except other circles might penetrate the edges of their regions, so you actually need to look at all adjacent regions. Well, except you might be really big, so you need to look in all the regions that you touch. Well, except other objects might be really big, so… you need to look in all the regions.
A quadtree has not bought us anything if circles can be arbitrary sizes; collision detection is still linear. You could say things like the number of very big circles is usually small (because they would collide away their competition), so you can put an upper bound on the size of circles in the tree then just do a linear check on the big ones. But that is an awful hack with a tuning parameter and doesn’t generalize. Shouldn’t the right solution work without hacks like that (for the fellow game developers in my audience: despite your instincts, the answer is yes :-).
I have worked with quadtrees before, and saw this problem a mile away for my game with circles of many different sizes. The first variation I tried was to put circles into regions only if they fit entirely. Circles on the boundaries would be added to the lowest branch that they could fit into. Then to check collisions on an object, you look at where it is in the tree, check all subregions, and then check all superregions (but not the superregions’ subregions). You will get small things in your vicinity on your way down, and large things on your way up. And it’s still logarithmic time… right?
Wrong. Suppose that every object in the world lay on the horizontal and vertical center lines. Even very small objects would accumulate in the top node, and no region subdividing would ever occur. This degenerate situation causes linear behavior, even though just by shifting the center of the region a little bit you get a sane subdividing. It’s not just a “let’s hope that doesn’t happen” thing: there are a lot of bad situations nearby it, too. Any object lying on those center lines will be checked when any other object is testing for its collisions, because it is on the way up. This did, in fact, become a major issue in my 10,000 particle simulation.
But if we could somehow salvage the original idea of this solution without the degenerate case, then we would get logarithmic behavior (when the field size is large relative to the radii of the particles).
To understand this solution, I would like you to turn your attention to one dimensional simulations. Here the previous solution would be subdividing the real line into a binary tree.
But the new solution divides intervals into not two, not four, but three child intervals. However, not in thirds like you would expect. (0,1) is divided into (0,1/2), (1/4, 3/4), and (1/2,1) — the intervals overlap. Any “circle” (in 1-D, so it’s another interval) with diameter less than 1/4 must fit into at least one of these subregions.
We use that property to organize the circles so that their size corresponds to their depth in the tree. If you know the diameter of the circle, you know exactly how deep it will be. So now when we are looking downward, we really are looking for small circles, and when we are looking upward we are looking for large ones, and there is no possibility of an accumulation of small irrelevant circles above us that are really far away.
However, on your way up, you also have to check back down again. If you are in the left portion of the (1/4,3/4) interval, you might intersect something from the right portion of the (0,1/2) interval, so you have to descend into it. But you can prune your search on the way down, cutting out “most” (all but a logarithm) of it. For example, you don’t need to check the interval (1/16,1/8) and its children even though you are checking its parent (0, 1/2) — it is too far away.
When you are deciding which region to put a circle in and you have a choice, i.e. the circle fits in multiple subregions, always pick the center one. When the circles are in motion, this gets rid of the degenerate case where a circle bounces back and forth across a deep, expensive boundary.
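Here is a minimal transcription of the 1-D placement rule just described (a Haskell sketch added in this revision; the game's actual implementation was in C#, and all names here are mine). An object is an interval given by center and radius; a region is an interval `(lo, hi)`:

```haskell
data Child = LeftC | CenterC | RightC deriving Show

-- Pick a child region for an object, preferring the center child;
-- Nothing means the object is too big and stays at this node.
place :: (Double, Double) -> (Double, Double) -> Maybe Child
place (c, r) (lo, hi)
  | fits (lo + len/4) (hi - len/4) = Just CenterC
  | fits lo (lo + len/2)           = Just LeftC
  | fits (hi - len/2) hi           = Just RightC
  | otherwise                      = Nothing
  where
    len = hi - lo
    fits a b = a <= c - r && c + r <= b
```

Each level halves the child length, so an object of diameter d settles at depth roughly log2(fieldSize / d), which is exactly the "size corresponds to depth" property described above.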
If you generalize to 2D, you end up cutting a region into 9 subregions (thus the name enneatree). The way I did it was to alternate cutting horizontally and vertically, so that I didn’t have to have 9-way if-else chains. That had more subtleties than I had expected. I have yet to find a straightforward, elegant implementation.
The algorithm seems to work well — I am able to support 5,000 densely packed circles in C# without optimizations.
The inspiration for this algorithm comes from the field of computable real numbers (infinite — not arbitrary, but infinite precision numbers). You run into trouble if you try to represent computable reals infinite streams of bits, because some interactions might be “non-local”; i.e. you might have to look infinitely far down a stream to find out whether a bit should be 0 or 1. Those guys solve the problem in the same way I did, by dividing intervals into 3 overlapping subintervals, and using this 3-symbol system as their number representation.
I see a glimmer of a connection between the two problems, but I can’t see it formally. Maybe it would become clearer if I considered infinite collections of infinitesimal objects needing collision detection.
# Lazy Partial Evaluation
By Luke Palmer on May 4, 2009 | 10 Comments
Inspired by Dan Piponi’s latest post, I have been looking into partial evaluation. In particular, I thought that a language which emphasizes currying really ought be good at partial evaluation. In this post I describe some ideas regarding partial evaluation in functional languages, and later sketch a partial evaluation machine I devised.
Supercombinator reduction machines, like GHC, do a limited form of partial evaluation. I.e. when you compile a program to supercombinators, you are optimizing it for specialization from left to right. So if f is defined in pointful form, “let a = f x in (a z, a z)” might be a better program than “(f x z, f x z)”. This is a nice property of combinator reduction. Unfortunately, it doesn’t generalize: “let a = flip f x in (a y, a y)” will never be better than “(f y x, f y x)”, because functions only specialize from left to right. I conjecture that this missing feature is more important than we realize.
Mogensen gives a very elegant partial evaluator in pure lambda calculus, which optimize as expected with the Futamura projections (see Dan’s post). This partial evaluator works on higher order abstract syntax, taking and returning descriptions of terms rather than the terms themselves. Essentially all it is is (very simple) machinery describing how to evaluate under a lambda.
The system in that paper takes many precautions to avoid unfolding dynamic arguments, because otherwise the partial evaluator might not terminate. Apparently he is not well-versed in our Haskell School of the Infinite, because the evaluator is compositional. So what he means by “not terminate” is “return an infinite program”. But an infinite program is fine if you interpret/compile it lazily!
In fact, I believe (I am intuiting — I have done no formal checking) that the simplest-minded of lazy partial evaluators is perfect: it maximally unfolds its static arguments, there is no need for the type inference machinery in that paper, and it will have the same termination properties as the program. I attribute the ease of this task with the built-in metacircularity of lambda calculus.
Cool as a self-embedded partial evaluator is, to do efficient partial evaluation you need to keep quotations of your programs everywhere, then compile them before you actually use them. Lambda calculus is hinting at something more: that simply by applying one of several arguments to a curried function, you are specializing it. Wouldn’t it be great if every time you did that, the program were maximally specialized automatically?
### A partial evaluation reduction strategy
It turns out that such an ambitious wish is nothing more than an evaluation order for the lambda calculus. Admittedly, it’s not a very common one. You try to reduce the leftmost application, even under lambdas. We would also like to combine this with call-by-need, so when an argument is duplicated it is only reduced once.
Here’s an example program I’ve been working with, with the standard definitions of the combinators Ix = x and Kxy = x.
``` flip (\x y. y I x) K K
```
It’s not much, but it gets the point across. Let’s look at it in call-by-name:
```[1] (\f a b. f b a) (\x y. y I x) K K
[2] (\a b. (\x y. y I x) b a) K K
[3] (\b. (\x y. y I x) b K) K
[4] (\x y. y I x) K K
[5] (\y. y I K) K
[6] K I K
[6'] (\m n. m) I K
[7] (\n. I) K
[8] I
```
Notice by the step [4] that we have lost the structure of flip (\x y. y I x) K, so any work we do from then on we will have to redo on subsequent applications of that function. Contrast this with the partial evaluation strategy:
```[1] (\f a b. f b a) (\x y. y I x) K K
[2] (\a b. (\x y. y I x) b a) K K
[3] (\a b. (\y. y I b) a) K K
[4] (\a b. a I b) K K
[5] (\b. K I b) K
[5'] (\b. (\m n. m) I b) K
[6] (\b. (\n. I) b) K
[7] (\b. I) K
[8] I
```
We got the function all the way down to a constant function before it was finally applied.
One thing that’s interesting to notice about this strategy is that it seems stricter than call-by-name. That is, if you have a nonterminating term N, then reducing the application (\x y. Ny) z will loop, whereas it won’t in CBN. However, recall that in the domain theory, (\x. ⊥) = ⊥. The only thing you can do to a function to observe it is apply it, and whenever you apply this function you will loop. So you are bound to loop anyway if you are evaluating this application.
### The lazy partial evaluation machine (sketch)
Here is a sketch of an efficient low-level machine for this evaluation strategy. It is simple stack machine code (“which I pull out of my bum for now, but I don’t think an algorithm to generate it will be any trouble”). The only tricky bit about it is that pieces are sometimes removed from the middle of the stack, so it can’t necessarily be represented linearly in memory. A singly linked representation should work (I realize this costs performance).
The values in this language look like [v1,v2,...] { instr1 ; instr2 ; … }. They are closures, where the vn are pushed on to the stack in that order before the instructions are executed. Values can also be “indirections”, which point to an absolute position in the stack. Indirections represent a logical reordering of the stack, and are used to model bound variables. When indirections are executed, they remove themselves and execute (and remove) the thing they point to. The instructions are as follows.
``` pop n -- delete the entry at position n
dup n -- push the entry at position n on the top of the stack
(other standard stack ops here)
abs n -- push an indirection pointing to position n
-- (skipping other indirections) on top of the stack
exec n -- pop and execute the closure at position n
closure [n1,n2,...] { instr1 ; instr2 }
-- remove positions n1,n2,... and add them
-- to the closure with the given code, and push it
```
dup is the only instruction which duplicates a value; this is where laziness will be encoded.
Let’s look at the code for each piece of our example program:
``` I: [] { exec 0 }
K: [] { pop 1; exec 0 }
```
These are pretty straightforward. They both receive their arguments on the stack (the first argument in position 0, and downwards from there), reduce and continue execution. Recall the code is what happens when a value is forced, so every value ends with an exec to continue the execution.
``` (\x y. y I x): [] { push I; exec 2 }
```
This one might be a little trickier. The stack comes in like this (position 0 on the left): x y. After `push I`, it looks like "I x y", so y's incoming stack will be "I x" just as it wants. Finally, the interesting one:
``` (\f a b. f b a): [] { abs 2; exec 2 }
```
abs 2 pushes an indirection to the argument b onto the stack before calling f. This is how evaluation is pulled under binders; when an indirection is forced, it reduces the application at the appropriate level, and all below it. I am still not totally sure when to introduce an abs; my guess is you do it whenever a function would otherwise reorder its arguments. An example may demonstrate what I mean (but perhaps not; I haven’t totally wrapped my head around it myself).
Here’s an execution trace for the example above, with the stack growing to the left. I start with the instruction pointer on the first function and the three arguments on the stack. The stack shown is the state before the instruction on each row. An identifier followed by a colon marks a position in the stack for indirections:
| step | instruction | stack (before executing it) |
|----|-----------|---------------------------|
| 1 | abs 1 | (\x y. y I x) K K |
| 2 | abs 3 | a (\x y. y I x) a:K K |
| 3 | exec 2 | b a (\x y. y I x) a:K b:K |
| 4 | closure I | b a a:K b:K |
| 5 | exec 2 | I b a a:K b:K |
| 6 | pop 1 | I b b:K |
| 7 | exec 0 | I |
When an indirection is executed, as in step 5, that is evaluation under a lambda.
This machine still doesn’t support laziness (though it didn’t matter in this example). We can achieve laziness by allocating a thunk when we dup. To evaluate the thunk, we put a mark on the stack. Whenever an instruction tries to reach beyond the mark, we capture the current stack and instruction pointer and jam it in a closure, then write that closure to the thunk. Indirections get replaced by their offsets; i.e. the “abs” commands that would create them. After we have done that, remove the mark (point it to where it was previously) and continue where we left off.
There you have it: my nifty partial evaluation machine. I’m reasonably confident that it’s correct, but I’m still not totally happy with the implementation of indirections — mostly the fact that you have to skip other indirections when you are pushing them. I wonder if there is a better way to get the same effect.
Comments/criticisms requested! :-)
# Certificate Design Pattern
By Luke Palmer on March 24, 2009 | 9 Comments
When working on the latest incarnation of my System IG compiler, I used a thingy which I now realize ought to be characterized as a design pattern. It substantially changed the way I was thinking about the code, which is what makes it interesting.
Summary: separate an algorithm into certificate constructors and a search algorithm.
A large class of algorithms can be considered, in some way, as search algorithms. It is given a problem and searches for a solution to that problem. For example, typically you wouldn’t phrase the quadratic formula as a search algorithm, but it is—it’s just a very smart, fast one. It is given a, b, and c and searches for a solution to the equation ax^2 + bx + c = 0.
The certificate design pattern separates the algorithm into two modules: the certificate module and the algorithm. The certificate module provides constructors for solutions to the problem. For each correct solution, it is possible to construct a certificate, and it is impossible to construct a certificate for an incorrect solution. The certificate module for the quadratic formula algorithm might look like this:
```module Certificate (Certificate, certify, solution) where
data Certificate = Certificate Double Double Double Double
certify :: Double -> Double -> Double -> Double -> Maybe Certificate
certify a b c x | a*x^2 + b*x + c == 0 = Just (Certificate a b c x)
| otherwise = Nothing
solution :: Certificate -> (Double,Double,Double,Double)
solution (Certificate a b c x) = (a,b,c,x)
```
There is only one way to construct a Certificate, and that is to pass it a solution to the quadratic equation. If it is not actually a solution, a certificate cannot be constructed for it. This module is very easy to verify. The algorithm module is obvious:
```module Algorithm (solve) where
import Certificate
import Data.Maybe (fromJust)
solve :: Double -> Double -> Double -> Certificate
solve a b c = fromJust $ certify a b c ((-b + sqrt (b^2 - 4*a*c)) / (2*a))
```
Here, we use the quadratic formula and construct a certificate of its correctness. If we made a typo in the formula, then certify would return Nothing and we would get an error when we fromJust it (an error is justified in this case, rather than an exception, because we made a mistake when programming — it’s like an assert).
The client to the algorithm gets a certificate back from solve, and can extract its solution. All the information needed to verify that the certificate is a correct certificate for the given problem should be provided. For example, if Certificate had only contained x instead of a,b,c,x, then we could have implemented solve like:
```solve a b c = certify 0 0 0 0
```
Because that is a valid solution, but we have not solved the problem. The client needs to be able to inspect that a,b,c match the input values. Maximally untrusting client code might look like this:
```unsafeSolve a b c =
let (a',b',c',x) = solution (solve a b c) in assert (a == a' && b == b' && c == c') x
where
assert True x = x
assert False _ = error "Assertion failed"
```
Here we can give any function whatsoever for solve, and we will never report an incorrect answer (replacing the incorrectness with a runtime error).
This is certainly overkill for this example, but in the System IG compiler it makes a lot of sense. I have a small set of rules which form well-typed programs, and have put in much effort to prove this set of rules is consistent and complete. But I want to experiment with different interfaces, different inference algorithms, different optimizations, etc.
So my Certificate implements combinators for each of the rules in my system, and all the different algorithms plug into that set of rules. So whenever I write a typechecker algorithm, if it finds a solution, the solution is correct by construction. This gives me a lot of freedom to play with different techniques.
Verification rules can be more involved than this single function that constructs a certificate. In the System IG compiler, there are 12 construction rules, most of them taking other certificates as arguments (which would make them certificate “combinators”). I’ll show an example of more complex certificate constructors later.
What is interesting about this pattern, aside from the added correctness and verification guarantees, is that it changed the way I thought while I was implementing the algorithm. Instead of being master of the computer, telling it what to do, it was more like a puzzle I had to solve. In some ways it was harder, but I attribute that to redistributing the workload: it’s harder because I am forced to write code that is correct from the get-go, instead of accidentally introducing bugs and thinking I’m done.
The other interesting mental change was that it often guided my solution. I would look at the certificate I’m trying to create, and see which constructors could create it. This gave me an idea of the information I was after. This information is the information necessary to convince the client that my solution is correct; I cannot proceed without it.
Theoretically, the algorithm part could be completely generic. It might just do a generic search algorithm like Dijkstra. If it finds a certificate, then it has solved your problem correctly. Solutions for free! (But this will not be practical in most cases — it might not yield a correct algorithm by other criteria, such as “always halts”).
Here’s an example of a more complex certificate. The domain is SK combinator calculus, and a Conversion is a certificate that holds two terms. If a Conversion can be constructed, then the two terms are convertible.
```module Conversion ( Term(..), Conversion
, convId, convCompose, convFlip
, convS, convK, convApp, convTerms)
where
infixl 9 :*
data Term = S | K | Term :* Term deriving (Eq)
data Conversion = Term :<-> Term
convTerms (a :<-> b) = (a,b)
convId t = t :<-> t
convCompose (a :<-> b) (b' :<-> c)
| b == b' = Just $ a :<-> c
| otherwise = Nothing
convFlip (a :<-> b) = b :<-> a
convS (S :* x :* y :* z) = Just $ (S :* x :* y :* z) :<-> (x :* z :* (y :* z))
convS _ = Nothing
convK (K :* x :* y) = Just $ (K :* x :* y) :<-> x
convK _ = Nothing
convApp (a :<-> b) (c :<-> d) = (a :* c) :<-> (b :* d)
```
The export list is key. If we had exported the (:<->) constructor, then it would be possible to create invalid conversions. The correctness of a certificate module is all about what it doesn’t export.
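To illustrate, here is a small client sketch (my own example, assuming the module above is in scope): a proof that S K K x converts to x, i.e. that S K K acts as the identity combinator.
```import Conversion

skk :: Term -> Maybe Conversion
skk x = do
  s1 <- convS (S :* K :* K :* x)     -- S K K x    <->  K x (K x)
  s2 <- convK (K :* x :* (K :* x))   -- K x (K x)  <->  x
  convCompose s1 s2                  -- S K K x    <->  x
```
If any step were wrong (say, a typo in the S-reduction), the corresponding constructor would return Nothing rather than an invalid Conversion.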
I’m wondering what the best way to present this as an object-oriented pattern is, so I can insert it into popular CS folklore (assuming it’s not already there ;-).
# Enumerating a context-free language
Here is a familiar context-free grammar for arithmetic expressions:
```S ::= add
add ::= mul | add + mul
mul ::= term | mul * term
term ::= number | ( S )
number ::= digit | digit number
digit ::= 0 | 1 | ... | 9
```
I have a challenge for you: write a program (in a language of your choice) which enumerates all strings accepted by this grammar. That is, your program runs forever, but if I have a string this grammar accepts, then your program should output it in a finite amount of time.
(This really is a fun challenge, so I encourage interested readers to try it themselves)
This is a tricky problem. Indeed it is sufficiently tricky that I cannot think of a clean imperative solution to it (which I’m sure is also related to being very out-of-practice in imperative programming). I’m interested to see any such solutions that people came up with. (And I’ll try it myself in the meantime)
But I’m going to present a functional solution using a neat little monad called `Omega` which I just uploaded to Hackage.
Let’s step back and consider a simpler motivating example. Consider the following list comprehension (and recall that list comprehensions are just the list monad behind a little bit of sugar):
```pairs = [ (x,y) | x <- [0..], y <- [0..] ]
```
This looks like it generates all pairs of naturals, but it does not. It generates the list `[(0,0), (0,1), (0,2), ...]`, so the first element of the pair will never get to 1. If Haskell allowed us to use ordinal numbers as indices then we could: `pairs !! ω == (1,0)`. :-)
Conceptually what we have is a lattice of pairs that we’re “flattening” poorly:
```(0,0) (0,1) (0,2) (0,3) ...
(1,0) (1,1) (1,2) (1,3) ...
(2,0) (2,1) (2,2) (2,3) ...
. . . .
```
We’re flattening it by taking the first row, then the second row, and so on. That’s what `concat` does. But anybody who’s had a brief introduction to set theory knows that that’s not how you enumerate lattices! You take the positive diagonals: `(0,0), (1,0), (0,1), (2,0), (1,1), (0,2), ...`. That way you hit every element in a finite amount of time. This is the trick used to show that there are only countably many rational numbers.
`Omega` is the monad that comes from this concept. We define `Omega`’s `join` to be this “diagonal” function. What we get back is a monad with a nice property:
If x occurs at a finite position in xs and y occurs at a finite position in f x, then y occurs at a finite position in f =<< xs
It’s hard to know what that means without knowing what =<< is supposed to mean. It means if you have a (possibly infinite) list of items, and to each one you apply a function which generates another (possibly infinite) list, and you merge them all together, you’ll be able to reach every result in a finite time.
More intuitively, it means if you write a multiple branching recursive function where each branch recurses infinitely (generating values), you will not get stuck in the first branch, but rather generate values from all branches.
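To make this concrete, here is a minimal sketch of how such a monad can be built around a diagonal function (the real Omega package may differ in details; each and runOmega below are the names used later in this post):
```import Control.Monad (ap)

newtype Omega a = Omega { runOmega :: [a] }

each :: [a] -> Omega a
each = Omega

-- Flatten a lattice of lists along its anti-diagonals, so every
-- element ends up at a finite position in the result.
diagonal :: [[a]] -> [a]
diagonal = concat . stripe
  where
    stripe []             = []
    stripe ([] : xss)     = stripe xss
    stripe ((x:xs) : xss) = [x] : zipCons xs (stripe xss)
    zipCons xs     []     = map (:[]) xs
    zipCons []     ys     = ys
    zipCons (x:xs) (y:ys) = (x : y) : zipCons xs ys

instance Functor Omega where
  fmap f (Omega xs) = Omega (map f xs)

instance Applicative Omega where
  pure x = Omega [x]
  (<*>)  = ap

instance Monad Omega where
  Omega xs >>= f = Omega (diagonal (map (runOmega . f) xs))
```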
And that is exactly what we need to write the context-free enumerator. Given a simple data structure representing a context-free grammar:
```data Symbol a
= Terminal a
| Nonterminal [[Symbol a]] -- a disjunction of juxtapositions
```
Then we can write an enumerate function in a straightforward depth-first way:
```enumerate (Terminal a) = return [a]
enumerate (Nonterminal alts) = do
alt <- each alts -- for each alternative
-- (each is the Omega constructor :: [a] -> Omega a)
rep <- mapM enumerate alt -- enumerate each symbol in the sequence
return $ concat rep -- and concatenate the results
```
But the `Omega` monad will do some heavy lifting and stagger those generators for us. Defining the grammar above:
```arithGrammar = s
where
s = Nonterminal [[add]]
add = Nonterminal [[mul], [add, Terminal '+', mul]]
mul = Nonterminal [[term], [mul, Terminal '*', term]]
term = Nonterminal [[number], [Terminal '(', s, Terminal ')']]
digit = Nonterminal $ map (map Terminal . show) [0..9]
number = Nonterminal [[digit], [digit, number]]
```
And then running our enumerator:
```runOmega $ enumerate arithGrammar
```
We get a very encouraging list of results:
```0
1
0+0
0*0
0+1
(0)
1+0
0*1
0+0*0
00
...
```
Notice how each type of node is represented in that initial prefix. That’s a good sign that there won’t be degenerate repetitive behavior. But of course we know there won’t be by the guarantee `Omega` makes (unless I implemented it wrong).
# Set Selectors
I am writing a poker game, and I got mildly annoyed when I went to write the hand classification functions. There was a disparity between the specification and implementation of poker hands; I had to come up with an algorithm to match each type. I didn’t like this, I want the code to match the specification more directly.
This is quite a geeky post. The types of poker hands are very unlikely to change, and the amount of time I’ve spent thinking about this problem already is many times that of solving it directly in the first place. I.e. it would be stupid for someone to pay me by the hour to solve it this way. Still, it gives rise to an interesting generalization that could be very useful.
I decided that the way I would like to specify these things is with “set selectors” over logical expressions. That is, given a finite set U, find a subset R of U such that some logical expression holds in R (i.e. all quantifiers are bounded on R).
This has a straightforward exponential time solution. I’m trying to do better.
I started by classifying logical expressions. In the following, let P(…) be quantifier-free.
• $\exists x P(x)$ is straightforward $O(n)$.
• More generally, $\exists x_1 \exists x_2 \ldots \exists x_k P(x_1, x_2, \ldots x_k)$ is straightforward $O(n^k)$.
• $\forall x P(x)$ is also $O(n)$ to find the largest solution (because the empty set would satisfy it, but that’s not very interesting).
• $\exists x \forall y P(x,y)$ has an obvious solution, same as $\exists x. P(x,x)$. There is no unique largest solution, but there is a unique largest for each x which can be found in $O(n^2)$ time. It’s unclear what the library should do in this scenario.
• $\forall x \forall y P(x,y)$ is called the Clique problem and is NP-complete! Damn!
But the most interesting one so far is the case: $\forall x \exists y P(x,y)$. It turns out that there is a unique largest solution for this, and here is an algorithm that finds it:
Given a finite set U, find the largest subset R such that $\forall x \! \in \! R \, \exists y \! \in \! R \, P(x,y)$.
Let $r_0 = U, r_{n+1} = \{ x \in r_n | \exists y\!\in\!r_n \, P(x,y) \}$. That is, iteratively remove x’s from r that don’t have corresponding y’s. Then define the result $R = \bigcap_i r_i$.
Lemma. There is a natural number $n_f$ such that $r_{n_f} = R$.
Proof. Follows from finite U and $r_{n+1} \subseteq r_n$.
Theorem. $\forall x \! \in \! R \, \exists y \! \in \! R \, P(x,y)$.
Proof. Given $x \in R = r_{n_f} = r_{n_f + 1}.$ Thus there exists $y \in r_{n_f} = R$ such that P(x,y), by the definition of $r_{n_f+1}$.
Theorem. If $R^\prime \subseteq U$ and $\forall x \! \in \! R^\prime \, \exists y \! \in \! R^\prime \, P(x,y)$, then $R^\prime \subseteq R$.
Proof. Pick the least n such that $R^\prime \not\subseteq r_n$. There is an $x \in R^\prime$ with $x \not\in r_n$. The only way that could have happened is if there were no y in $r_{n-1}$ with P(x,y). But there is a y in $R^\prime$ with P(x,y), so $R^\prime \not\subseteq r_{n-1}$, contradicting n’s minimality.
The time complexity of this algorithm can be made at least as good as $O(n^2 \log n)$, maybe better.
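Here is a direct Haskell transcription of that iteration (a sketch only; this naive version re-scans the whole set each round, so it is $O(n^3)$ rather than the $O(n^2 \log n)$ just mentioned):
```-- Largest R of U such that every x in R has a witness y in R with p x y.
largest :: (a -> a -> Bool) -> [a] -> [a]
largest p = go
  where
    go u | length u' == length u = u        -- fixed point reached: u = R
         | otherwise             = go u'
      where u' = [ x | x <- u, any (p x) u ]
```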
While that was interesting, it doesn’t really help in solving the general problem (which, I remind myself, is related to poker, where all quantifiers will be existential anyway!). The above algorithm generalizes to statements of the form $\exists w_1 \exists w_2 \ldots \exists w_j \forall x \exists y_1 \exists y_2 \ldots \exists y_k \, P(w_1,w_2,\ldots w_j,x,y_1,y_2, \ldots y_k)$. Each existential before the universal adds an order of magnitude, and I think each one after does too, but I haven’t thought it through.
In fact, I think that, because of the clique problem, any statement with two universal quantifiers will take exponential time, which means I’m “done” (in a disappointing sort of way).
Back to the real world, I don’t like that finding a flush in a hand will take $O(n^5)$ time (16,807 checks for a 7-card hand, yikes), when my hand-written algorithm could do it in $O(n)$. I’m still open to ideas for specifying poker hands without the use of set selectors. Any ideas?
# Call-by-Future
I was reading about evaluation strategies for lambda calculus on Wikipedia, and one caught my eye: “call-by-future”. The idea behind this strategy is that you evaluate the function and its argument in parallel. It’s like lazy evaluation, except you start evaluating earlier. Call-by-future semantics are non-strict, so I figured I could coerce Haskell into doing it.
Template Haskell to the rescue again! I wrote a module which has this function:
```import Control.Parallel (par)

parApp :: (a -> b) -> (a -> b)
parApp f x = x `par` f x
```
par (from Control.Parallel) is a special function which says to start evaluating its first argument in parallel and then return its second argument. The Template Haskell code then walks the syntax tree, replacing applications with applications of parApp, like:
``` fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)
```
Becomes:
``` fib 0 = 0
fib 1 = 1
fib n = parApp (parApp (+) (parApp fib (parApp (parApp (-) n) 1))
(parApp (parApp (-) n) 2))
```
Pretty, ain’t it? :-p
For the above program (test.hs) computing fib 40, when compiled with -threaded -O2 I get the following results on a dual quad-core machine:
``` # command time speedup incr.speedup
./test +RTS -N1 # 20.508s 1.00 N/A
./test +RTS -N2 # 18.445s 1.11 0.56
./test +RTS -N3 # 15.944s 1.29 0.77
./test +RTS -N4 # 12.690s 1.58 0.94
./test +RTS -N5 # 11.305s 1.81 0.90
./test +RTS -N6 # 9.663s 2.12 0.97
./test +RTS -N7 # 8.964s 2.29 0.92
./test +RTS -N8 # 8.541s 2.40 0.92
```
The number after -N is the number of hardware threads used, the speedup is the ratio of the single-threaded time to the multicore time (in an ideal world this would match the number of hardware threads), and the incremental speedup is $(t_{n-1}N_{n-1})/(t_n N_n)$, i.e. the fraction of what we gained over the previous run versus what we should have gained in an ideal world. As long as this is near one, our time is decreasing linearly. As we can see, we pay a lot of overhead mostly at 2 and 3 processors, and after that there is little additional overhead. There is too little data here to see what the large-scale trend is, though.
Suppose that the incremental speedup goes to a constant $p < 1$. Then $t_{n+1} = \frac{n}{n+1} \cdot \frac{t_n}{p}$. Doing a little algebra, we see that the time levels off at $n = p/(1-p)$. So, for example, if $p$ were 0.92 as it looks like it’s going to be, then $n = 11.5$. That is, adding a 12th processor is not going to gain you anything. The long term goal for parallelization is to get solutions where $p$ is very close to 1, because processors are going to start being cheap like hard drives, and I actually wouldn’t be surprised if in 24 years, 64-core machines were common (calculated using Moore’s law).
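Spelling that algebra out (with the incremental speedup $p$ assumed constant): the time stops decreasing once $t_{n+1} \ge t_n$, i.e.

$$\frac{n}{(n+1)p} \ge 1 \iff n \ge (n+1)p \iff n(1-p) \ge p \iff n \ge \frac{p}{1-p},$$

which gives $n = 0.92/0.08 = 11.5$ for $p = 0.92$.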
So, 8 processors, 2.4x speedup. We could certainly ask for better. But it ain’t bad considering you didn’t have to know anything about your program at all to get it there :-).
# No Total Function from Infinite Sequences of Booleans onto the Integers
Yesterday I found out about a remarkable algorithm which can take any total predicate (function returning true or false) on infinite sequences of bits and find an infinite sequence which satisfies it (and implemented it). It’s a counterintuitive result, since you’d think that you can’t search the (uncountably) infinite space of bit sequences in a finite amount of time, but you can.
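For reference, here is a minimal sketch of that search, after Escardó’s construction (infinite bit sequences are represented as lazy Bool lists; the predicate must be total, i.e. inspect only finitely much of its input):
```-- find p produces a bit sequence satisfying p whenever one exists;
-- forsome p decides whether any sequence satisfies p.
find :: ([Bool] -> Bool) -> [Bool]
find p
  | forsome (\s -> p (True : s)) = True  : find (\s -> p (True : s))
  | otherwise                    = False : find (\s -> p (False : s))

forsome :: ([Bool] -> Bool) -> Bool
forsome p = p (find p)
```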
This implies some results about how much total functions can encode. For example, there must not be any total function from infinite sequences of bits onto the integers (such that any integer is reachable using a suitable infinite sequence). If there were, you could use this algorithm to decide any predicate on the integers, which would be a far-too-powerful mathematical tool.
But I couldn’t prove it, not without citing the algorithm. Why must there not be any such function, besides that it would make the algorithm incredibly powerful (so powerful that you could prove its impossibility)? That style of argument was not good enough for me, because it could have been that the algorithm had an error.
The proof was actually hiding in the proof on sigfpe’s blog, but I couldn’t see it at first. So I’ll give a more explicit version here.
Let f be a total function from sequences of bits onto the integers (f : (Z -> Bool) -> Z). Let df(xs) denote the greatest index that f reads when given the argument xs. Now, for example, choose an xs such that f(xs) is 0. df(xs) is the greatest index that f reads, so f doesn’t depend on anything greater (for this argument), so every sequence with the prefix xs[0..df(xs)] has the value zero.
Now make a binary tree corresponding to the input sequence, where a node is a leaf iff the sequence leading to the node is the shortest prefix that completely determines f’s output. Every possible extension of the sequence leading up to the leaf maps to the same value, so we informally say that the leaf itself maps to the value (even though the leaf represents a finite sequence and f takes infinite sequences).
This tree has infinitely many leaves, since there must be (at least) one leaf for each integer. And now, just as sigfpe did, invoke the König lemma, “every finitely branching tree with infinitely many nodes has at least one infinite path”. That infinite path represents an input to f for which it will not halt. That is to say, f was not total after all!
# Abstract State Machines for Game Rule Specification
Jude has been working on the design of a general board game programming engine. We brainstormed a bit, and I remembered something I had read about a little while ago: Abstract state machines.
An ASM is really nothing more than a formalization of a simultaneous chain of if-then statements. A common example is Conway’s life: given a lattice and a countNeighbors function you could define the rules for Conway’s life as an ASM as follows:
```letDie(cell, n) =
if status(cell) = Alive and (n < 2 or n > 3) then
status(cell) := Dead
letLive(cell, n) =
if status(cell) = Dead and n = 3 then
status(cell) := Alive
gameOfLife =
forall cell in cells do
letDie(cell, countNeighbors(cell))
letLive(cell, countNeighbors(cell))
```
The only thing this is getting us over any other type of program specification is the parallelism. But the idea of a list of rules, each of which has a condition and an action, is the right level of abstraction to make a board game engine.
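As a concrete rendering of that level of abstraction, here is a Haskell sketch of one simultaneous ASM step for the game of life (the grid representation is invented for illustration; the point is that every rule reads the old state and all updates land at once):
```import qualified Data.Map as M

data Status = Alive | Dead deriving (Eq, Show)
type Grid = M.Map (Int, Int) Status

step :: Grid -> Grid
step g = M.mapWithKey fire g          -- every rule reads the OLD grid g
  where
    neighbors (x, y) =
      length [ () | dx <- [-1, 0, 1], dy <- [-1, 0, 1]
                  , (dx, dy) /= (0, 0)
                  , M.lookup (x + dx, y + dy) g == Just Alive ]
    fire c s
      | s == Alive && (n < 2 || n > 3) = Dead    -- letDie
      | s == Dead  && n == 3           = Alive   -- letLive
      | otherwise                      = s
      where n = neighbors c
```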
We have to tweak a few things to get it just right. For example, you could have must and may combinators, which specify which moves are available to the player.
Let’s look at Pente. First, we assume a linked-list style lattice, where each cell has eight pointers indexed by direction (they could be abstract or just numbers or whatever), so that a link(direction, cell) function gives us the neighboring cell in that direction. Then we could specify Pente something like this (Haskellesque syntax now):
```move s player =
if color s == Empty then may $ do
color s := player
mapM_ capture directions
where
capture d = do
let oneAway = link d s
let twoAway = link d oneAway
let threeAway = link d twoAway
if color oneAway == otherPlayer player && color twoAway == otherPlayer player && color threeAway == player then do
color oneAway := Empty
color twoAway := Empty
```
There are still a few issues to solve as far as expressing “you must do either A or B, then you must do C”. I actually used very little of the ASM model (just one may combinator) in expressing pente. Okay, idea on the backburner, let’s see what else comes.
http://mathhelpforum.com/advanced-statistics/131799-question-about-mean-value-theorem-multiple-integrals.html | # Thread:
1. ## A question about Mean-Value Theorem for multiple integrals
In page 401 of Apostol's "Mathematical Analysis"(see figure below),
the underlined statement seems to use the Intermediate Value Theorem to assure the existence of ${\bf x}_0$. But the Intermediate Value Theorem is applicable only when m and M are values of f at some points of S, which, however, cannot be guaranteed by the conditions of the theorem and the NOTE. If we additionally assume S is compact, the conclusion does hold. Is there a mistake here, or is there some other means to prove the existence of ${\bf x}_0$? Thanks!
2. sorry for posting in a wrong subforum due to carelessness, please move to analysis subforum. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9208181500434875, "perplexity_flag": "head"} |
http://physics.stackexchange.com/questions/4243/what-does-it-mean-to-say-gravity-is-the-weakest-of-the-forces | # What does it mean to say “Gravity is the weakest of the forces”?
I can understand that on small scales (within an atom/molecule), the other forces are much stronger, but on larger scales, it seems that gravity is a far stronger force; e.g. planets are held to the sun by gravity. So what does it mean to say that "gravity is the weakest of the forces" when in some cases, it seems far stronger?
## 7 Answers
When we ask "how strong is this force?" what we mean in this context is "How much stuff do I need to get a significant amount of force?" Richard Feynman summarized this the best in comparing the strength of gravity - which is generated by the entire mass of the Earth - versus a relatively tiny amount of electric charge:
And all matter is a mixture of positive protons and negative electrons which are attracting and repelling with this great force. So perfect is the balance however, that when you stand near someone else you don't feel any force at all. If there were even a little bit of unbalance you would know it. If you were standing at arm's length from someone and each of you had one percent more electrons than protons, the repelling force would be incredible. How great? Enough to lift the Empire State building? No! To lift Mount Everest? No! The repulsion would be enough to lift a "weight" equal to that of the entire earth!
Another way to think about it is this: a proton has both charge and mass. If I hold another proton a centimeter away, how strong is the gravitational attraction? It's about $10^{-60}$ newtons. How strong is the electric repulsion? It's about $10^{-24}$ newtons. How much stronger is the electric force than the gravitational? We find that it's $10^{36}$ times stronger, as in 1,000,000,000,000,000,000,000,000,000,000,000,000 times more powerful!
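For the record, the arithmetic behind those numbers goes as follows (standard constants; note the ratio is independent of the separation, since both forces fall off as $1/r^2$):

$$\frac{F_e}{F_g} = \frac{e^2/(4\pi\varepsilon_0)}{G m_p^2} = \frac{(8.99\times 10^{9})\,(1.60\times 10^{-19})^2}{(6.67\times 10^{-11})\,(1.67\times 10^{-27})^2} \approx 1.2\times 10^{36}.$$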
This answer might be useful in response to a different question. Here it does not seem to jive with what the OP is asking. Then again, after seeing some of the OP's comments it is not clear to me what he was asking. – user346 Jan 31 '11 at 19:48
When we say that gravity is much weaker then the other forces we mean that its coupling constant is much smaller than the coupling constants of other forces.
Think about a coupling constant as a parameter that says how much energy there will be per "unit of interacting stuff". This is a very rough definition but it will serve our purpose.
If you determine the coupling constants of all different forces, you discover that, in decreasing order, strong, eletromagnetic and weak forces are much, much stronger than gravity.
You need around $10^{32}$ (that is 100,000,000,000,000,000,000,000,000,000,000) times more "stuff interacting" to get around the same energy scale with gravity if you compare it with the weak force. Moreover, the difference between strong, weak and electromagnetic forces among themselves isn't nearly as extreme as the difference between gravity and the other forces.
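To put rough numbers on this, the dimensionless couplings built from the proton mass and the elementary charge are

$$\alpha_G = \frac{G m_p^2}{\hbar c} \approx 6\times 10^{-39}, \qquad \alpha_{\mathrm{EM}} = \frac{e^2}{4\pi\varepsilon_0 \hbar c} \approx \frac{1}{137}.$$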
+1, this is basically the answer I would have posted if I hadn't already seen it here. (I think it would not be inappropriate to include a small bit of mathematical detail, though.) – David Zaslavsky♦ Jan 31 '11 at 2:33
Yes, it answers what it means, but doesn't offer an explanation. I guess, though, he didn't ask for one... – Gordon Jan 31 '11 at 2:55
Actually the Hierarchy problem is about the radiative corrections in the Higgs propagator. The large discrepancy of coupling constants is not a real problem. – Leandro Seixas Jan 31 '11 at 3:41
I stand corrected then! :) – Rafael S. Calsaverini Jan 31 '11 at 3:59
@Leandro: that is just one particular hierarchy problem. In general @space_cadet is right, the problem in general is with understanding the magnitude of coupling constants which usually requires some fine-tuning or new physics. The actual mechanism which explains it (in your case either fine-tuned cancellation of bare mass by quadratic radiative corrections or the usual SUSY solution) is only secondary. By the way, another instance of a famous hierarchy problem is the one of the smallness of cosmological constant. – Marek Feb 3 '11 at 21:23
Gravity seems stronger because it's always attractive. Of the other 3 interactions:
• Electromagnetism has positive and negative charges, so it only manifests macroscopically when there is a charge imbalance.
• The weak and strong interactions are intrinsically short-ranged.
Though color (i.e. strong force related) charges come in 3 (plus 3 anti) flavors they share with electric charges the ability to form "neutral" bodies (indeed confinement requires this), the phenomena called "color transparency" (known in meson systems and theorized in baryons) takes advantage of this even inside the range of the strong force. In principle bodies could be assembled that are on aggregate "weak neutral", though not using a small integer number of bits. – dmckee♦ Jan 31 '11 at 16:13
The Randall-Sundrum model explains it. The other forces are confined to the brane which we consider to be our universe. The brane is embedded in a higher-dimensional space where some of the dimensions may be compactified, but others could be larger or even infinite (a 5-dimensional anti-de Sitter space in which a (3+1)-dimensional brane is embedded; all particles except the graviton are bound to the brane). The higher-dimensional space is called the bulk. If gravity is not confined to our brane and can penetrate into the bulk, that would explain its weakness. The problem with the extreme difference in strengths of the forces is termed the hierarchy problem (weak force $\approx 10^{32}\times$ gravitational force). There are other explanations involving supersymmetry.
To explain the down-vote: the question isn't about why gravity appears weaker. It's the fact that, in my mind, gravity doesn't appear weaker. – Smashery Jan 31 '11 at 4:04
Dear @Smashery please quit down-voting answers based on perfectly good physical arguments. Otherwise it appears you not looking for answers but only an affirmation of your pre-existing beliefs. – user346 Jan 31 '11 at 19:41
@Smashery: Your headline question seems to ask why gravity is considered the weakest force. In your elaboration you seem to be asking why you think it should be stronger. Arrghh, I am not a telepath. An obvious reason would be that we evolved brains on one massive mother of a planet and gravity is accretive. Any number of other explanations as to why you think gravity should be stronger come to mind, but none of them are flattering. – Gordon Feb 1 '11 at 6:10
@space_cadet: Thanks for the support. I thought he wanted to learn something, not that I am the Amazing Kreskin. – Gordon Feb 1 '11 at 6:12
My apologies if you took the downvote personally; like I said elsewhere, you are obviously knowledgeable about the topic; but the gap in my knowledge was more that "to my intuition, gravity doesn't seem weak at all." Sorry if that wasn't clear in the question. I've since upvoted another of your excellent answers. – Smashery Feb 1 '11 at 6:20
This is indeed something one has to be careful about, because, after all, gravity scales with the mass of the particles in question, whereas the other forces scale with the electric charge or the magnetic moment. It appears that one compares apples with pears.
However, I believe the declaration that gravity is the "weakest" of the forces stems indeed solely from its irrelevancy on the scale of particle physics.
other forces don't scale ? sure ? – iamgopal Jan 31 '11 at 12:14
Gravity is weak because the masses of elementary particles are so small. Gravity has a natural mass unit, $m_p~=~\sqrt{\hbar c/G}$, the Planck mass, which is about $10^{-5}$ g. The proton is about $19$ orders of magnitude less massive. So the stuff which makes up the world is elementary particle “styrofoam stuff” which gravity couples to.
This can be seen as well with IIA strings and their S-dual heterotic strings. Those heterotic strings just do not like to stay on our brane, which they have no end points to form Chan-Paton factors or Dirichlet boundary conditions on the brane with. They slip through our brane as if nothing is there. Their S-dual strings are open strings on the brane, but with puny masses --- far less than the Planck mass or the mass corresponding to the string tension.
To explain the down-vote: the question isn't about why gravity appears weaker. It's the fact that, in my mind, gravity doesn't appear weaker. – Smashery Jan 31 '11 at 4:03
Then look for answers on a psychology site, not a physics site. We're supposed to be answering physics questions, not explaining how your mind works. – pho Jan 31 '11 at 14:33
@Smashery this is the best answer to your question. To repeat myself, please do not down-vote unless you actually have a good reason to do so. And if you do do so, don't proclaim it. It hurts your credibility. – user346 Jan 31 '11 at 19:42
@space_cadet Not sure why this is such a crazy request by @Smashery. He wants an explanation of how to reconcile everyday phenomena with commonly cited theory: "Gravity seems to be the most important force in my life, yet people say it's the weakest. Explain." Do you see how talking about Chan-Paton factors might not be a satisfactory response? – kharybdis Jan 31 '11 at 20:25
Hi @Spencer. If you were to ask a physicist today the question "why is gravity the weakest force" then of the many ways of stating the problem, one way is to note (as @Lawrence did) the tremendous difference between the natural mass scale of gravity and that of the standard model. This is what is referred to as the "hierarchy problem". Ask a postdoc or a professor about it and I'm sure they would be delighted to explain. The tidbit about S-duality is only one of many ways to find a resolution to this question, another one being the Randall-Sundrum model as @Gordon mentions in his answer. – user346 Jan 31 '11 at 21:02
Gravity is the weakest force as its coupling constant is small in value. Gravity cannot be felt by us in daily life because of the huge universe surrounding us. Electromagnetic force is undoubtedly stronger as it deals with microscopic particles (electrons, protons). Gravity is always attractive in nature. It is a long range force among all other interactions in nature.
""gravity cannot felt by us in daily life "" Rofl I recommend You lay down under an apple tree for some hours! – Georg Aug 3 '11 at 9:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9512394070625305, "perplexity_flag": "middle"} |
http://math.stackexchange.com/questions/8426/elliptic-integrals-with-parameter-outside-0m1/8427 | # Elliptic integrals with parameter outside 0<m<1
I'm attempting to implement an equation (for calculating magnetic forces between coils, eqs (22–24) in the linked paper) that requires the use of elliptic integrals.
Unfortunately these equations require the evaluation of the elliptic integrals far outside their standard parameter range of $0\le m\le 1$ and the numerical implementations I have available to evaluate them give inconsistent results.
I believe that Mathematica is correct in its answer:
EllipticF[ArcSin[Sqrt[1/c5]], c5] //. c5 -> 817.327
=> 0.054961 - 1.17196*10^-17 i
Whereas Matlab's MuPad engine gives:
mfun('EllipticF',asin(sqrt(1/817.327)),sqrt(817.327))
=> 0.054961 - 0.000707i
(Mma's function takes parameter $m=k^2$ whereas MuPad takes modulus $k$, explaining the sqrt) While I can use Mathematica for my own work, my colleagues only have Matlab available and I'd like them to be able to use this code.
I'm a pretty unfamiliar with the theory behind elliptic integrals, but Baker's ‘Elliptic Functions’ says
We shall see later on that the quantity $k^2$ [...] can always be considered real and less than unity.
Which leads me to ask: can the arguments to these elliptic integrals be re-stated in terms of an input of $k>1$, such as I seem to require above?
Lest people become confused with terminology: $k$ is a modulus, while $m$ is a parameter. – J. M. Oct 31 '10 at 8:10
I had a look at that paper you linked to; a lot of their elliptic integral expressions are a baroque mess, and I suspect they only wrote what Mathematica spat out without even pausing to look at what can be simplified. – J. M. Oct 31 '10 at 10:40
Thanks for the nudge on getting my naming straight. I edited the question slightly to improve my terminology there. Regarding the elliptic integrals, I'm sure you're right unfortunately; I'm actually having some further issues (my question only addresses one term of the equations, as you will have seen in the paper) but I need to get my head straight before I keep asking for help. Thanks again in the mean time! – Will Robertson Oct 31 '10 at 14:18
## 1 Answer
What you seem to require here are the so-called "reciprocal-modulus transformations".
In your specific case, upon applying the reciprocal-modulus transformation, your elliptic integral can actually be simplified to
$$\frac1{\sqrt{c_5}}F\left(\frac{\pi}{2}\vert \frac1{c_5}\right)=\frac1{\sqrt{c_5}}K\left(\frac1{c_5}\right)$$
where $K(m)$ is the complete elliptic integral of the first kind with parameter $m$ (though I understand MuPAD uses $k=\sqrt{m}$ as argument, in which case you seem to know the conversion formulae already).
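Explicitly, the transformation used is (stated here in the parameter convention; see the reciprocal-modulus formulas in the DLMF)

$$F(\phi \,\vert\, m) = \frac{1}{\sqrt{m}}\, F(\theta \,\vert\, m^{-1}), \qquad \sin\theta = \sqrt{m}\,\sin\phi,$$

and with $\sin\phi = 1/\sqrt{c_5}$ this gives $\sin\theta = 1$, i.e. $\theta = \pi/2$, which is why your incomplete integral collapses to a complete one.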
There are similar formulae for the elliptic integrals of the second and third kinds, and I direct you to the DLMF link I gave above for them.
For a Mathematica demonstration: With[{c5 = 817.327}, {EllipticK[1/c5]/Sqrt[c5], EllipticF[ArcSin[1/Sqrt[c5]], c5]}] // Chop – J. M. Oct 31 '10 at 3:54
Fantastic, thanks! This both fixes my Matlab code and makes the equation nicer. One of the downsides of equation derivation by a CAS, I suppose. I suspect there are some more simplifications that can then be made in the overall equation. – Will Robertson Oct 31 '10 at 4:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9058256149291992, "perplexity_flag": "middle"} |
http://mathhelpforum.com/calculus/113058-finding-distance-between-two-points.html | # Thread:
1. ## Finding the distance between two points
Let Q = (0,5) and R = (10,6) be given points in the plane. We want to find the point P = (x,0) on the x axis such that the sum of distances PQ+PR is as small as possible.
To solve this problem, we need to minimize the following function of x: f(x) = ?
over the closed interval [A,B] where A = ? and B = ?
2. Originally Posted by derekjonathon
Let Q = (0,5) and R = (10,6) be given points in the plane. We want to find the point P = (x,0) on the x axis such that the sum of distances PQ+PR is as small as possible.
To solve this problem, we need to minimize the following function of x: f(x) = ?
over the closed interval [A,B] where A = ? and B = ?
$PQ = d_1$
$d_1 = \sqrt{(x-0)^2 + (0-5)^2}$
$PR = d_2$
$d_2 = \sqrt{(x-10)^2 + (0-6)^2}$
$d_1 + d_2 = S$
find $\frac{dS}{dx}$ and minimize
3. ## Still not getting it...
Thank you for your help, I really appreciate it...
But I am still not getting what the technique is here to answer the question.
I understand finding distance (d1) and distance (d2)
but the derivative part has me thrown off. Also, I don't get what function is supposed to be minimized...
How do I go step by step through this?
4. Originally Posted by derekjonathon
Thank you for your help, I really appreciate it...
But I am still not getting what the technique is here to answer the question.
I understand finding distance (d1) and distance (d2)
but the derivative part has me thrown off. Also, I don't get what function is supposed to be minimized...
How do I go step by step through this?
minimize the function $S = \sqrt{x^2+25} + \sqrt{x^2-20x+136}$
start by finding $\frac{dS}{dx}$, set the result equal to 0, and solve for the value of x that minimizes S. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9498882293701172, "perplexity_flag": "head"} |
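For completeness, here is where that leads (a sketch; $A = 0$ and $B = 10$ are the natural endpoints, since the minimizer must lie between the x-coordinates of Q and R):

$$\frac{dS}{dx} = \frac{x}{\sqrt{x^2+25}} + \frac{x-10}{\sqrt{x^2-20x+136}} = 0.$$

Moving the second term across and squaring gives $x^2(x^2-20x+136) = (10-x)^2(x^2+25)$, which simplifies to $11x^2 + 500x - 2500 = 0$, so $x = \frac{50}{11}$ (the other root, $x = -50$, does not satisfy the original equation).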
http://mathoverflow.net/questions/65837?sort=oldest | ## What are the models of Peano Arithmetic plus the negation of the corresponding Gödel sentence like?
Since the Gödel sentence for PA (G henceforth) is independent of PA, if PA is consistent, so is PA plus not-G. Thus, since PA is a first-order theory, if PA is consistent, PA plus not-G has some models. My question concerns what these models are like. I've read a couple of times that these are 'non-standard models', but I have the following query.

Presumably PA is part of the theory of arithmetic -- call it 'Teo(N)' -- (the whole set of sentences of the language of PA that are true in the standard model), and so is G. Using compactness and downward LS, we show that Teo(N) has models that are non-standard (models containing elements that cannot be reached from 0 by a finite number of applications of the successor function; in fact, these models contain denumerably many galaxies of such elements). Call these models 'NSM1'. If G is part of Teo(N), G is true in NSM1.

Thus, if PA plus not-G has models, these must be distinct (non-isomorphic) from NSM1. So, it seems, there must be something specific about these models, not just that they're non-standard. What is it (or is my reasoning flawed somewhere)? Thanks!
## 2 Answers
While there is only one standard model, there are indeed many distinct (and elementarily nonequivalent) nonstandard models. As for distinguishing those that satisfy G and those that do not, I’m afraid there is no better answer than Tarski’s definition of satisfaction. All countable nonstandard models look superficially alike in that they have isomorphic order relation: the standard natural numbers followed by countably many copies of $\mathbb Z$ arranged in a densely ordered way (i.e., $\mathbb N+\mathbb Q\times_{\mathrm{Lex}}\mathbb Z$). However, the structure of $+$ and especially $\cdot$ in the models are much more complicated (for instance, no countable nonstandard model of PA has computable $+$ or $\cdot$) and next to impossible to classify.
The Gödel sentence is $\Pi^0_1$: its negation is equivalent to a formula $\exists x\,\theta(x)$ where $\theta$ is bounded, and therefore absolute in end-extensions. Thus, the most important feature of a model of PA + ¬G is a witness to the existential quantifier in the above formula, which is necessarily nonstandard (in the usual construction of the Gödel formula, the witness will be a Gödel number of a proof of G). That’s more or less everything you can say in general about the model.
Here is the most "tangible" distinctive feature of models of $PA$ that satisfy the negation of Gödel's true but unprovable sentence.
Let $\phi$ be a true $\Pi^0_1$ arithmetical sentence that is not provable from $PA$, e.g., $\phi$ can be chosen as the sentence expressing "I am unprovable from $PA$" [as in the first incompleteness theorem], or the sentence "$PA$ is consistent" [as in the second incompleteness theorem].
Thanks to a remarkable theorem of Matiyasevich-Robinson-Davis-Putnam, known as the MRDP theorem [see here], there is a diophantine equation $D_{\phi}$ that has no solutions in $\Bbb{N}$, but has the property that for any model $M$ of $PA$, $D_{\phi}$ has a solution in $M$ iff $M$ satisfies the negation of $\phi$.
[note: $D_{\phi}$ is of the form $E=0$, where $E$ is a polynomial in several variables that is allowed to have negative coefficients, but $D_{\phi}$ can be re-expressed as an equation of the form $P=Q$, where $P$ and $Q$ are polynomials [of several variables] with coefficients in $\Bbb{N}$; hence it makes perfectly good sense to talk about $D_{\phi}$ having a solution in $M$].
Let me close by recommending two good sources for the study of nonstandard models of $PA$.
Richard Kaye, Models of Peano arithmetic. Oxford Logic Guides, 15. Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1991.
Roman Kossak and James H. Schmerl, The structure of models of Peano arithmetic, Oxford Logic Guides, 50. Oxford Science Publications. The Clarendon Press, Oxford University Press, Oxford, 2006.
http://math.stackexchange.com/questions/240912/can-someone-check-if-my-proof-is-sufficient-enough | # Can someone check if my proof is sufficient enough?
I've just started my undergraduate mathematics degree, and I have to say, proving things isn't very intuitive. I used to be very good at proofs in high school, but only when it comes to algebraic and logical reasoning of the informal kind; now I just feel it's not sufficient. Anyway, I hope to get back on track and see if I've got the hang of things, hopefully with your help. I appreciate anyone's consideration. :)
1) Prove that the sequence $S(n) = (n^2)[(-1)^n]$ is unbounded for all natural numbers.
Answer: Consider $n=2k$ for all k in natural numbers, $\implies S(n)=n^2$
Suppose there exists $M$ in the natural numbers such that $S(n)<M$ for all $n$ in the natural numbers $\implies n^2<M$, therefore $n<\sqrt{M}$. Consider the value $n_0=\lceil\sqrt{M}\,\rceil + 1 > \sqrt{M}$, which is in the natural numbers. This implies there exists a value of $S(n)$ which is greater than $M$. Therefore $S(n)$ is unbounded.
Consider $n=2k+1$ for all $k$ in natural numbers, $\implies S(n)=-(n^2)$
Suppose there exists $M$ in natural numbers such that $S(n)>M$ for all $n$ in natural numbers $\implies -n^2>M \implies n^2<M$, therefore $n<\sqrt{M}$. It has been shown above that in this situation, there exists no such value of $n$ such that $M>S(n)$ for all values $n,M$ in natural numbers.
END PROOF
Would it suffice to say that, since the set containing all natural numbers is unbounded then n^2 is unbounded, or are these steps necessary? And most importantly, am I doing it right? What are the things that I have to consider when conducting a proof? Any advice?
Thanks!
In English at least, the question would stop with "is unbounded." We show that for any $M$, there is an $n$ such that $S(n)\gt M$. If $M$ is $\le 0$, pick $n=2$. If $M\gt 0$, let $n$ be any even integer $\gt \sqrt{M}$, like $2\lceil \sqrt{M}\rceil$. This settles unbounded, under the usual meaning of "not bounded." It is not necessary to show the sequence is not bounded below. – André Nicolas Nov 19 '12 at 22:46
## 2 Answers
"What are the things that I have to consider when conducting a proof? Any advice?" is awfully open-ended. Pretty much the definition of rigour, though, is that the proof must establish the truth of the theorem given the truth of certain other statements (ie. axioms or other theorems derived from them). In other words, you need to be clear what truths you are starting with and exactly how your proof gets you from those truths to your destination.
Personally, I find questions like this one a little annoying. They tend to be posed as kind of "warm up" questions, as if they are particularly easy, since what you are being asked to prove is "obviously" true. However, to construct a real proof of such a statement would require that you start from something even more "basic", and in this case that presumably means some choice of a set of axioms for the natural numbers. However, the chances are that you haven't been introduced to, for example, the Peano Axioms. If you haven't, then it beats me what kind of "proof" you are expected to produce.
Anyway, rather than laboriously prove this from the Peano axioms or, say, the axioms of the real numbers, here's a proof based on the following statement (which is hardly more self-evident than the theorem itself):
If $N$ and $M$ are both natural numbers, $N \gt 1$ and $M \gt 0$ then $NM \gt M$.
We also need to know what $(-1)^n$ means, so negative numbers are involved. However, the theorem can be proved without reference to the negative terms so I'm going to ignore that except to assume that:
$(-1)^{2N} = 1$, for any natural number $N$.
Strictly speaking, you also need some other stuff about arithmetic operations and ordering on the natural numbers, but I've already almost lost the will to live.
Given this, you could prove that $S_n$ is unbounded using something like:
Take any natural number $N > 1$ and consider $S_{2N}$:
$S_{2N} = (2N)^2(-1)^{2N} = (2N)^2 = 4 \times N \times N \gt N \times N \gt N$.
So there is no natural number $N$ such that $S_n \le N$ for all $n$, i.e. $(S_n)$ is unbounded.
The problem with the question is that it's so simple that in your search to find something even more basic to prove it from you punch straight through the notion of "basic" itself and find yourself rummaging around in some vaguely foundations-of-maths stuff: Peano arithmetic and set theory. It's not that what's needed is necessarily difficult, it's just that a real, honest-to-goodness proof of something like this is a more tedious business than you might think. Perhaps realising that is the point of the question! :-)
I wouldn't worry about it, if I were you. I just tend to shrug at questions like this and move on to something more interesting. You'll get a better sense of what a proof entails from something a bit more meaty.
There's a lovely and relevant quote from Bertrand Russell's Principia Mathematica:
"From this proposition it will follow, when arithmetical addition has been defined, that 1+1=2." —Volume I, 1st edition, page 379 (page 362 in 2nd edition; page 360 in abridged version). (The proof is actually completed in Volume II, 1st edition, page 86, accompanied by the comment, "The above proposition is occasionally useful.") (wikipedia)
When you dig away at the "obvious" you find the rabbit hole goes very deep indeed. Infinitely deep, I guess...
-
Removed the mention of latex, since you've now formatted it more legibly. – Tom Nov 19 '12 at 23:45
Thank you very much. :) Yes, I thought very much the same regarding the difficulty of the proofs. It's kind of distracting proving something that seems very obvious, and I think that's where part of the difficulty comes from - the fact that I lack initiative because it doesn't seem "challenging" enough. Unfortunately, it seems the first year of my degree solely compromises of such proofs. Oh well, I'll keep your advice in mind. :) Thanks very much for the help! – Sanyia Saidova Nov 20 '12 at 0:00
The case $n=2k$ is almost correct but the case $n=2k+1$ is not (of course the case $n=2k+1$ is unnecessary, since from the case $n=2k$ you already deduced that $S_n$ is unbounded).
In case $n=2k+1$ you want to assume that $S_n$ is bounded i.e. for some $M>0, \ |S_n|<M \Rightarrow n^2<M \ldots$
Instead you assumed that $S_n>M$.
Assume that $S_n$ is bounded. Then there exists an $M>0$ such that $|S_n|<M$ for all $n \in \mathbb{N}$. This means that in the particular case $n=2k, \ |S_{2k}|=(2k)^2<M \Rightarrow k<\dfrac{\sqrt{M}}{2}, \ \forall k \in \mathbb{N}$. But for $k=\lceil \dfrac{\sqrt{M}}{2}\rceil +1 \in \mathbb{N}$ this is not true. Thus $S_n$ cannot be bounded. (Or, as Hendrik Jan suggested, there is no need to separate even and odd cases: instead of $k=\lceil \dfrac{\sqrt{M}}{2}\rceil +1$ you can simply take $n=M$.)
Personally I would do without the contradiction, avoid the odd/even case by taking absolute value, and do without the square root: For each $M>1$ the value $|S(M)| = M^2$ is larger than $M$, so $S(n)$ cannot be bounded. – Hendrik Jan Nov 19 '12 at 23:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 67, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9677314758300781, "perplexity_flag": "head"} |
http://mathhelpforum.com/algebra/2329-slope.html | # Thread:
1. ## Slope
Please let me know if this is correct:
Find the equation of the line which satisfies the point through the point (0,5) with slope =-2
y-y1 = m(x-1)=
y-5=-2(x-1)=
y=-2x+-7
2. Perhaps you meant it, but it should be: $y - y_1 = m\left( {x - x_1 } \right)$.
Now, with m = -2 and the point being (0,5) we get:
$y - 5 = - 2\left( {x - 0} \right) \Leftrightarrow y - 5 = - 2x \Leftrightarrow y = - 2x + 5$
The slope is clearly fine and you can easily check whether the line goes through the given point by filling it in to see if it satisfies the equation.
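For instance, substituting $x = 0$ gives $y = -2(0) + 5 = 5$, so the line indeed passes through $(0,5)$.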
3. ## Thanks!!
Thank you.
4. You're welcome | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8716785907745361, "perplexity_flag": "middle"} |
http://mathhelpforum.com/calculus/71024-vector-geometry-work-done.html | # Thread:
1. ## Vector geometry: Work done
A constant force moves an object along a straight line from point (7, -4, 6) to point (-6, -2, 8). Find the work done if the distance is measured in meters and the magnitude of the force is measured in newtons.
2. Originally Posted by wvlilgurl
A constant force moves an object along a straight line from point (7, -4, 6) to point (-6, -2, 8). Find the work done if the distance is measured in meters and the magnitude of the force is measured in newtons.
Let $\bold{D}$ be the vector from (7,-4,6) to (-6,-2,8), that is, $\bold{D} = \left< -13, 2, 2 \right>$
Now the work is given by
$W = \bold{F} \cdot \bold{D}$
of course, $\cdot$ here means dot product | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.915945827960968, "perplexity_flag": "head"} |
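For instance, if the force were $\bold{F} = \left< 2, 1, 3 \right>$ (a hypothetical value, since the problem statement above omits the actual force vector), then $W = \bold{F} \cdot \bold{D} = (2)(-13) + (1)(2) + (3)(2) = -18$ joules.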
http://en.m.wikipedia.org/wiki/Heat | # Heat
For other uses, see Heat (disambiguation).
Nuclear fusion in the Sun converts nuclear potential energy into available internal energy and keeps the temperature of the Sun very high. Consequently, heat is transported to Earth as electromagnetic radiation. This is the main source of energy for life on Earth.
In physics and chemistry, heat is energy transferred from one body to another by thermal interactions.[1][2] The transfer of energy can occur in a variety of ways, among them conduction,[3] radiation,[4] and convection. Heat is not a property of a system or body, but instead is always associated with a process of some kind, and is synonymous with heat flow and heat transfer.
Heat flow from hotter to colder systems occurs spontaneously, and is always accompanied by an increase in entropy. In a heat engine, internal energy of bodies is harnessed to provide useful work. The second law of thermodynamics states the principle that heat cannot flow directly from cold to hot systems, but with the aid of a heat pump external work can be used to transport internal energy indirectly from a cold to a hot body.
Transfers of energy as heat are macroscopic processes. The origin and properties of heat can be understood through the statistical mechanics of microscopic constituents such as molecules and photons. For instance, heat flow can occur when the rapidly vibrating molecules in a high temperature body transfer some of their energy (by direct contact, radiation exchange, or other mechanisms) to the more slowly vibrating molecules in a lower temperature body.
The SI unit of heat is the joule. Heat can be measured by calorimetry,[5] or determined indirectly by calculations based on other quantities, relying for instance on the first law of thermodynamics. In calorimetry, the concepts of latent heat and of sensible heat are used. Latent heat produces changes of state without temperature change, while sensible heat produces temperature change.
## Overview
Heat may flow across the boundary of the system and thus change its internal energy.
Heat in physics is defined as energy transferred by thermal interactions. Heat flows spontaneously from hotter to colder systems. When two systems come into thermal contact, they exchange energy through the microscopic interactions of their particles. When the systems are at different temperatures, the result is a spontaneous net flow of energy that continues until the temperatures are equal. At that point the net flow of energy is zero, and the systems are said to be in thermal equilibrium. Spontaneous heat transfer is an irreversible process.
The first law of thermodynamics states that the internal energy of an isolated system is conserved. To change the internal energy of a system, energy must be transferred to or from the system. For a closed system, heat and work are the mechanisms by which energy can be transferred. For an open system, internal energy can be changed also by transfer of matter.[6] Work performed by a body is, by definition, an energy transfer from the body that is due to a change to external or mechanical parameters of the body, such as the volume, magnetization, and location of center of mass in a gravitational field.[7][8][9][10][11]
When energy is transferred to a body purely as heat, its internal energy increases. This additional energy is stored as kinetic and potential energy of the atoms and molecules in the body.[12] Heat itself is not stored within a body. Like work, it exists only as energy in transit from one body to another or between a body and its surroundings.
## Microscopic origin of heat
Heat characterizes macroscopic systems and processes, but like other thermodynamic quantities it has a fundamental origin in statistical mechanics — the physics of the underlying microscopic degrees of freedom.
For example, within a range of temperature set by quantum effects, the temperature of a gas is proportional (via Boltzmann's constant kB) to the average kinetic energy of its molecules.[13] Heat transfer between a low and high temperature gas brought into contact arises due to the exchange of kinetic and potential energy in molecular collisions. As more and more molecules undergo collisions, their kinetic energy equilibrates to a distribution that corresponds to an intermediate temperature somewhere between the low and high initial temperatures of the two gases. An early and vague expression of this was by Francis Bacon.[14][15] Precise and detailed versions of it were developed in the nineteenth century.[16]
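As a concrete illustration of the proportionality just described, here is a short Python sketch for a monatomic ideal gas (helium's atomic mass is assumed; the 3/2 factor is the equipartition result for three translational degrees of freedom):

```python
# <KE> = (3/2) k_B T per molecule for a monatomic ideal gas.
from math import sqrt

k_B = 1.380649e-23        # Boltzmann constant, J/K
m_He = 6.646e-27          # mass of a helium atom, kg

def mean_kinetic_energy(T):
    return 1.5 * k_B * T          # joules per molecule

def v_rms(T, m):
    return sqrt(3 * k_B * T / m)  # root-mean-square speed, m/s

print(mean_kinetic_energy(300))   # ~6.2e-21 J at room temperature
print(v_rms(300, m_He))           # ~1370 m/s for helium
```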
For solids, conduction of heat occurs through collective motions of microscopic particles, such as phonons, or through the motion of mobile particles like conduction band electrons.[17] As these excitations move around inside the solid and interact with it and each other, they transfer energy from higher to lower temperature regions, eventually leading to thermal equilibrium.
## History
Scottish physicist James Clerk Maxwell, in his 1871 classic Theory of Heat, was one of many who began to build on the already established idea that heat has something to do with matter in motion. This was the same idea put forth by Sir Benjamin Thompson in 1798, who said he was only following up on the work of many others. One of Maxwell's recommended books was Heat as a Mode of Motion, by John Tyndall. Maxwell outlined four stipulations for the definition of heat:
• It is something which may be transferred from one body to another, according to the second law of thermodynamics.
• It is a measurable quantity, and thus treated mathematically.
• It cannot be treated as a substance, because it may be transformed into something that is not a substance, e.g., mechanical work.
• Heat is one of the forms of energy.
From empirically based ideas of heat, and from other empirical observations, the notions of internal energy and of entropy can be derived, so as to lead to the recognition of the first and second laws of thermodynamics.[18] This was the way of the historical pioneers of thermodynamics.[19][20]
## Transfers of energy between closed systems
### Adiabatic transfer of energy as work between two bodies
A body can be connected to its surroundings by links that allow transfer of energy only as work, not as heat, because the body is adiabatically isolated. Such transfer can be of two pure kinds: volume work and isochoric work. Volume work means that the initial volume and the final volume of the body are different, and that mechanical work is transferred through the forces that cause the changes in the deformation parameters. Isochoric work is done on the body by the surroundings when the initial and final volumes and all deformation parameters of the body are unchanged. For example, the surroundings can do work through a changing magnetic field that rotates a magnetic stirrer within the body. Another example is 'shaft work', in which an externally driven shaft rotates fan- or paddle-blades within the body. Another example is rubbing, considered as tangential motion of a wall that contains the body. Stirring and rubbing were the main forms of work in Joule's experiments.[21][22][23][24][25]
### Transfers of energy as heat between two bodies
Referring to conduction, Partington writes: "If a hot body is brought in conducting contact with a cold body, the temperature of the hot body falls and that of the cold body rises, and it is said that a quantity of heat has passed from the hot body to the cold body."[26]
Referring to radiation, Maxwell writes: "In Radiation, the hotter body loses heat, and the colder body receives heat by means of a process occurring in some intervening medium which does not itself thereby become hot."[27]
### Diathermal wall
In considering transfer of energy between two bodies, it is customary to allow the existence of a wall between them which is permeable only to heat. This allowance is a presupposition of thermodynamics.[28] Such a wall is called diathermal. It is usual for many theoretical discussions to allow that the wall itself has negligibly small internal energy. Then the only property of interest in the wall is that it allows conduction and radiation of heat between the two bodies of interest. While walls are readily found which are permeable only to heat, and impermeable to matter, it is very exceptional indeed to find walls which are permeable to matter but not to entropy. Such a rare exception is a wall penetrated by fine capillaries, which allow the passage of the superfluid of helium II but not of the normal fluid.[29] Transfer of energy as heat is uniquely defined only between closed systems, while for open systems, 'transfer of energy as heat through a wall that allows transfer of matter' is not uniquely defined; such a wall, however, does allow transfer of internal energy.[30][31][32]
It is sometimes allowed that the diathermal wall is substantial and has properties of its own including a temperature of its own, and it is considered that the wall is a body in its own right, but is still considered as a closed system not permitted to exchange matter. Then when there is thermal equilibrium between the two bodies of interest, there is also thermal equilibrium between each of them and the wall, and all three have the same temperature. Then from the viewpoint of states of thermal equilibrium, all diathermal walls are equivalent; this is proposed as a possible statement of the zeroth law of thermodynamics.[33] If heat is considered with respect to diathermal walls, then, because all diathermal walls are equivalent, all heat is of the same kind.[34]
When the two bodies are initially separate and not connected by a substantial wall, and at different temperatures, and are then connected with the substantial wall as connecting medium, the properties and state of the wall need to be taken into account for the process of transfer of energy as heat. If the substantial wall contains fluid, and there is a gravitational field, then transfer of energy as heat between the two bodies of interest may involve convective circulation within the wall, with no transfer of matter into or out of the wall. In this sense, it can be said that convective circulation is a mechanism of transfer of energy as heat, but in this case, the transfer of energy as heat is complex, because of change of internal energy and temperature of the wall. Thermal convective circulation is always opposed by friction, and consequently occurs only above a threshold of thermal difference between source and destination of the transferred energy.[35] Thermal equilibrium between source and destination is therefore finally reached by a non-convective stage, of conduction and radiation. In discussions of thermodynamics, such a diathermal wall and process of transfer of energy as heat is not usually intended unless they are explicitly expressed.
### Transfers of energy involving more than two bodies
#### Heat engine
In classical thermodynamics, a commonly considered model is the heat engine. It consists of four bodies: the working body, the hot reservoir, the cold reservoir, and the work reservoir. A cyclic process leaves the working body in an unchanged state, and is envisaged as being repeated indefinitely often. Work transfers between the working body and the work reservoir are envisaged as reversible, and thus only one work reservoir is needed. But two thermal reservoirs are needed, because transfer of energy as heat is irreversible. A single cycle sees energy taken by the working body from the hot reservoir and sent to the two other reservoirs, the work reservoir and the cold reservoir. The hot reservoir always and only supplies energy and the cold reservoir always and only receives energy. The second law of thermodynamics requires that no cycle can occur in which no energy is received by the cold reservoir.
#### Convective transfer of energy
Convective transfer of energy involves three or more systems, which may be closed or open. A process of convection takes some finite amount of time, because it involves three steps at least. The simplest kind of convection has a hot reservoir, a cold reservoir, and a carrier body. In this simplest kind of convection, the carrier body exchanges heat successively with the respective thermal reservoirs. The second law of thermodynamics requires the carrier body to be initially colder than the hot reservoir and finally warmer than the cold reservoir. For convection in general, the transfers of energy can be of more general kinds. For example, for convection between open systems, the transfers may be more conveniently described in terms of internal energy, or of enthalpy, or of some other quantity of energy. Here a convenient model is described by internal energy. First, the carrier body increases its internal energy by taking internal energy from the source reservoir. Then it moves through space and carries its internal energy from the location of the source reservoir to that of the destination reservoir; this step is characteristic of convection, and is sometimes called advection. Then it decreases its internal energy by giving energy to the destination reservoir. Convection can transfer internal energy as latent heat, and can be from a source at a lower temperature to a destination at a higher one, work being provided to drive the transfer.
## Notation and units
As a form of energy, heat has the unit joule (J) in the International System of Units (SI). However, in many applied fields in engineering the British thermal unit (BTU) and the calorie are often used. The standard unit for the rate of heat transfer is the watt (W), defined as joules per second.
The total amount of energy transferred as heat is conventionally written as Q for algebraic purposes. Heat released by a system into its surroundings is by convention a negative quantity (Q < 0); when a system absorbs heat from its surroundings, it is positive (Q > 0). Heat transfer rate, or heat flow per unit time, is denoted by $\dot{Q}$. This should not be confused with a time derivative of a function of state (which can also be written with the dot notation) since heat is not a function of state. Heat flux is defined as rate of heat transfer per unit cross-sectional area, resulting in the unit watts per square metre.
## Estimation of quantity of heat
Quantity of heat transferred can be measured by calorimetry, or determined through calculations based on other quantities.
Calorimetry is the empirical basis of the idea of quantity of heat transferred in a process. The transferred heat is measured by changes in a body of known properties, for example, temperature rise, change in volume or length, or phase change, such as melting of ice.[36][37]
A calculation of quantity of heat transferred can rely on a hypothetical quantity of energy transferred as adiabatic work and on the first law of thermodynamics. Such calculation is the primary approach of many theoretical studies of quantity of heat transferred.[28][38][39]
## Internal energy and enthalpy
For a closed system (a system from which no matter can enter or exit), the first law of thermodynamics states that the change in internal energy ΔU of the system is equal to the amount of heat Q supplied to the system minus the amount of work W done by the system on its surroundings. [note 1]
$\Delta U = Q - W \quad{\rm{(first\,\,law)}}.$
N.B. The ' = ' sign here has its ordinary mathematical meaning: Q and W are both amounts of energy, merely transferred in different forms (heat and work), and the first law is an exact statement of energy conservation. The familiar fact that conversion between heat and work is never 100% efficient is a consequence of the second law, not a qualification of this equation. Rearranging for the heat supplied:
$Q = \Delta U + W.$
The work done by the system includes boundary work (when the system increases its volume against an external force, such as that exerted by a piston) and other work (e.g. shaft work performed by a compressor fan), which is called isochoric work:
$Q = \Delta U + W_\text{boundary} + W_\text{other}.$
In this section we will neglect the "other work" contribution.
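A minimal Python sketch of the sign bookkeeping in these equations (conventions only, no physics library assumed): Q > 0 means heat absorbed by the system, W > 0 means work done by the system on its surroundings.

```python
def heat_supplied(delta_U, W_boundary, W_other=0.0):
    # Q = dU + W_boundary + W_other, per the first law as written above.
    return delta_U + W_boundary + W_other

# A gas gains 300 J of internal energy while expanding and doing 200 J of
# boundary work on a piston: it must have taken in 500 J as heat.
print(heat_supplied(delta_U=300.0, W_boundary=200.0))   # 500.0
```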
The internal energy, U, is a state function. In cyclical processes, such as the operation of a heat engine, state functions return to their initial values after completing one cycle. Then the differential, or infinitesimal increment, for the internal energy in an infinitesimal process is an exact differential dU. The symbol for exact differentials is the lowercase letter d.
In contrast, neither of the infinitesimal increments δQ and δW in an infinitesimal process represents the state of the system. Thus, infinitesimal increments of heat and work are inexact differentials. The lowercase Greek letter delta, δ, is the symbol for inexact differentials. The integral of any inexact differential over the time it takes for a system to leave and return to the same thermodynamic state does not necessarily equal zero.
The second law of thermodynamics observes that if heat is supplied to a system in which no irreversible processes take place and which has a well-defined temperature T, the increment of heat δQ and the temperature T form the exact differential
$\mathrm{d}S =\frac{\delta Q}{T},$
and that S, the entropy of the working body, is a function of state. Likewise, with a well-defined pressure, P, behind the moving boundary, the work differential, δW, and the pressure, P, combine to form the exact differential
$\mathrm{d}V =\frac{\delta W}{P},$
with V the volume of the system, which is a state variable. In general, for homogeneous systems,
$\mathrm{d}U = T\mathrm{d}S - P\mathrm{d}V.$
Associated with this differential equation is that the internal energy may be considered to be a function U (S,V) of its natural variables S and V. The internal energy representation of the fundamental thermodynamic relation is written
$U=U(S,V).$[40][41]
If V is constant
$T\mathrm{d}S=\mathrm{d}U\,\,\,\,\,\,\,\,\,\,\,\,(V\,\, \text{constant)}$
and if P is constant
$T\mathrm{d}S=\mathrm{d}H\,\,\,\,\,\,\,\,\,\,\,\,(P\,\, \text{constant)}$
with H the enthalpy defined by
$H=U+PV.$
The enthalpy may be considered to be a function H (S,P) of its natural variables S and P. The enthalpy representation of the fundamental thermodynamic relation is written
$H=H(S,P).$[41][42]
The internal energy representation and the enthalpy representation are partial Legendre transforms of one another. They contain the same physical information, written in different ways.[42][43]
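As a numerical illustration of the constant-pressure relation TdS = dH above, the following Python sketch checks ΔH = ΔU + PΔV for a constant-pressure process, assuming a monatomic ideal gas (so U = (3/2)nRT and PV = nRT):

```python
R = 8.314            # gas constant, J/(mol K)
n = 1.0              # moles

def U(T): return 1.5 * n * R * T     # internal energy of a monatomic ideal gas
def H(T): return 2.5 * n * R * T     # enthalpy: H = U + PV = U + nRT

T1, T2 = 300.0, 400.0
dU = U(T2) - U(T1)                    # ~1247 J
P_dV = n * R * (T2 - T1)              # boundary work at constant pressure
dH = H(T2) - H(T1)                    # ~2079 J, the heat supplied at constant P
assert abs(dH - (dU + P_dV)) < 1e-9   # dH = dU + P dV at constant pressure
```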
### Chemical reactions
For a closed system in which a chemical reaction is of interest, the extent of reaction, denoted by ξ, states the degree of advancement of the reaction and is included as a further natural variable for internal energy and for enthalpy. This is written
$U = U(S,V,\xi)\,\,\,\mathrm{and}\,\,\, H = H(S,P,\xi)\,.$
In practice, chemists often use tables of a special but unnamed thermodynamic potential that is not the enthalpy expressed in its natural variables; instead they use the enthalpy expressed as a function of temperature instead of entropy. This special potential is related to the natural form of the enthalpy H (S,P,ξ) by another partial Legendre transform, that makes its natural variables T, P, and ξ. The special unnamed potential is still usually called the enthalpy. It can be written
$H=H(T,P,\xi)\,.$
This enthalpy is used to report the enthalpy change of reaction, also called the heat of reaction.[44][45]
## Latent and sensible heat
Joseph Black
In an 1847 lecture entitled On Matter, Living Force, and Heat, James Prescott Joule characterized the terms latent heat and sensible heat as components of heat each affecting distinct physical phenomena, namely the potential and kinetic energy of particles, respectively.[46] He described latent energy as the energy possessed via a distancing of particles, where the attraction was over a greater distance, i.e. a form of potential energy, and the sensible heat as an energy involving the motion of particles, or what was known as a living force. At the time of Joule, kinetic energy either held 'invisibly' internally or held 'visibly' externally was known as a living force.
Latent heat is the heat released or absorbed by a chemical substance or a thermodynamic system during a change of state that occurs without a change in temperature. Such a process may be a phase transition, such as the melting of ice or the boiling of water.[47][48] The term was introduced around 1750 by Joseph Black as derived from the Latin latere (to lie hidden), characterizing its effect as not being directly measurable with a thermometer.
Sensible heat, in contrast to latent heat, is the heat exchanged by a thermodynamic system that has as its sole effect a change of temperature.[49] Sensible heat therefore only increases the thermal energy of a system.
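A short Python sketch contrasting the two (textbook property values for water are assumed): melting ice absorbs latent heat with no temperature change, while warming the meltwater absorbs sensible heat.

```python
L_fusion = 334e3     # latent heat of fusion of water, J/kg
c_water  = 4186.0    # specific heat of liquid water, J/(kg K)

m = 1.0                              # kg of ice at 0 degrees C
Q_latent   = m * L_fusion            # melts the ice; temperature stays at 0 C
Q_sensible = m * c_water * (20 - 0)  # warms the meltwater by 20 K
print(Q_latent, Q_sensible)          # 334000.0 J and 83720.0 J
```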
Consequences of Black's distinction between sensible and latent heat are examined in the Wikipedia article on calorimetry.
## Specific heat
Specific heat, also called specific heat capacity, is defined as the amount of energy that has to be transferred to or from one unit of mass (kilogram) or amount of substance (mole) to change the system temperature by one degree. Specific heat is a physical property, which means that it depends on the substance under consideration and its state as specified by its properties.
The specific heats of monatomic gases (e.g., helium) are nearly constant with temperature. Diatomic gases such as hydrogen display some temperature dependence, and triatomic gases (e.g., carbon dioxide) still more.
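The trend can be made concrete with equipartition estimates (a Python sketch; vibrational contributions are ignored, so these are approximations valid away from high temperatures):

```python
R = 8.314  # gas constant, J/(mol K)

C_v = {  # molar heat capacity at constant volume from equipartition
    "monatomic (He)":            1.5 * R,  # 3 translational degrees of freedom
    "diatomic (H2)":             2.5 * R,  # + 2 rotational degrees of freedom
    "triatomic, bent (H2O gas)": 3.0 * R,  # + 3 rotational degrees of freedom
}
for gas, c in C_v.items():
    print(f"{gas}: C_v ~ {c:.1f} J/(mol K)")
```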
## Rigorous mathematical definition of quantity of energy transferred as heat
It is sometimes convenient to have a very rigorous mathematically stated definition of quantity of energy transferred as heat. Such a definition is customarily based on the work of Carathéodory (1909), referring to processes in a closed system, as follows.[28][50][51][52][53][54]
The internal energy UX of a body in an arbitrary state X can be determined by amounts of work adiabatically performed by the body on its surroundings when it starts from a reference state O, allowing that sometimes the amount of work is calculated by assuming that some adiabatic process is virtually though not actually reversible. Adiabatic work is defined in terms of adiabatic walls, which allow the frictionless performance of work but no other transfer, of energy or matter. In particular they do not allow the passage of energy as heat. According to Carathéodory (1909), passage of energy as heat is allowed, by walls which are "permeable only to heat".
For the definition of quantity of energy transferred as heat, it is customarily envisaged that an arbitrary state of interest Y is reached from state O by a process with two components, one adiabatic and the other not adiabatic. For convenience one may say that the adiabatic component was the sum of the work done by the body through volume change (by movement of the walls, with the non-adiabatic partition excluded) and of isochoric adiabatic work. Then the non-adiabatic component is a process of energy transfer through the wall that passes only heat, newly made accessible for the purpose of this transfer, from the surroundings to the body. The change in internal energy to reach the state Y from the state O is the difference of the two amounts of energy transferred.
Although Carathéodory himself did not state such a definition, following his work it is customary in theoretical studies to define the quantity of energy transferred as heat, Q, to the body from its surroundings, in the combined process of change to state Y from the state O, as the change in internal energy, ΔUY, minus the amount of work, W, done by the body on its surroundings by the adiabatic process, so that Q = ΔUY − W.
In this definition, for the sake of mathematical rigour, the quantity of energy transferred as heat is not specified directly in terms of the non-adiabatic process. It is defined through knowledge of precisely two variables, the change of internal energy and the amount of adiabatic work done, for the combined process of change from the reference state O to the arbitrary state Y. It is important that this does not explicitly involve the amount of energy transferred in the non-adiabatic component of the combined process. It is assumed here that the amount of energy required to pass from state O to state Y, the change of internal energy, is known, independently of the combined process, by a determination through a purely adiabatic process, like that for the determination of the internal energy of state X above. The mathematical rigour that is prized in this definition is that there is one and only one kind of energy transfer admitted as fundamental: energy transferred as work. Energy transfer as heat is considered as a derived quantity. The uniqueness of work in this scheme is considered to provide purity of conception, which is considered as guaranteeing mathematical rigour. The conceptual purity of this definition, based on the concept of energy transferred as work as an ideal notion, relies on the idea that some frictionless and otherwise non-dissipative processes of energy transfer can be realized in physical actuality. The second law of thermodynamics, on the other hand, assures us that such processes are not found in nature.
## Heat, temperature, and thermal equilibrium regarded as jointly primitive notions
Before the rigorous mathematical definition of heat based on Carathéodory's 1909 paper, recounted just above, historically, heat, temperature, and thermal equilibrium were presented in thermodynamics textbooks as jointly primitive notions.[55] Carathéodory introduced his 1909 paper thus: "The proposition that the discipline of thermodynamics can be justified without recourse to any hypothesis that cannot be verified experimentally must be regarded as one of the most noteworthy results of the research in thermodynamics that was accomplished during the last century." Referring to the "point of view adopted by most authors who were active in the last fifty years", Carathéodory wrote: "There exists a physical quantity called heat that is not identical with the mechanical quantities (mass, force, pressure, etc.) and whose variations can be determined by calorimetric measurements." James Serrin introduces an account of the theory of thermodynamics thus: "In the following section, we shall use the classical notions of heat, work, and hotness as primitive elements, ... That heat is an appropriate and natural primitive for thermodynamics was already accepted by Carnot. Its continued validity as a primitive element of thermodynamical structure is due to the fact that it synthesizes an essential physical concept, as well as to its successful use in recent work to unify different constitutive theories."[56][57] This traditional kind of presentation of the basis of thermodynamics includes ideas that may be summarized by the statement that heat transfer is purely due to spatial non-uniformity of temperature, and is by conduction and radiation, from hotter to colder bodies. It is sometimes proposed that this traditional kind of presentation necessarily rests on "circular reasoning"; against this proposal, there stands the rigorously logical mathematical development of the theory presented by Truesdell and Bharatha (1977).[58]
This alternative approach to the definition of quantity of energy transferred as heat differs in logical structure from that of Carathéodory, recounted just above.
This alternative approach admits calorimetry as a primary or direct way to measure quantity of energy transferred as heat. It relies on temperature as one of its primitive concepts, and used in calorimetry.[59] It is presupposed that enough processes exist physically to allow measurement of differences in internal energies. Such processes are not restricted to adiabatic transfers of energy as work. They include calorimetry, which is the commonest practical way of finding internal energy differences.[60] The needed temperature can be either empirical or absolute thermodynamic.
In contrast, the Carathéodory way recounted just above does not use calorimetry or temperature in its primary definition of quantity of energy transferred as heat. The Carathéodory way regards calorimetry only as a secondary or indirect way of measuring quantity of energy transferred as heat. As recounted in more detail just above, the Carathéodory way regards quantity of energy transferred as heat in a process as primarily or directly defined as a residual quantity. It is calculated from the difference of the internal energies of the initial and final states of the system, and from the actual work done by the system during the process. That internal energy difference is supposed to have been measured in advance through processes of purely adiabatic transfer of energy as work, processes that take the system between the initial and final states. By the Carathéodory way it is presupposed as known from experiment that there actually physically exist enough such adiabatic processes, so that there need be no recourse to calorimetry for measurement of quantity of energy transferred as heat. This presupposition is essential but is explicitly labeled neither as a law of thermodynamics nor as an axiom of the Carathéodory way. In fact, the actual physical existence of such adiabatic processes is indeed mostly supposition, and those supposed processes have in most cases not been actually verified empirically to exist.[61]
## Entropy
Main article: Entropy
Rudolf Clausius
In 1856, German physicist Rudolf Clausius defined the second fundamental theorem (the second law of thermodynamics) in the mechanical theory of heat (thermodynamics): "if two transformations which, without necessitating any other permanent change, can mutually replace one another, be called equivalent, then the generations of the quantity of heat Q from work at the temperature T, has the equivalence-value:"[62][63]
$\frac{Q}{T}.$
In 1865, he came to define the entropy, symbolized by S, such that, due to the supply of the amount of heat Q at temperature T, the entropy of the system is increased by
$\Delta S = \frac {Q}{T}$
and thus, for small changes, quantities of heat δQ (an inexact differential) are defined as quantities of TdS, with dS an exact differential:
$\delta Q = T \mathrm{d}S .$
This equality is only valid for a closed system and if no irreversible processes take place inside the system while the heat δQ is applied. If, in contrast, irreversible processes are involved, e.g. some sort of friction, then there is entropy production and, instead of the above equation, one has
$\delta Q \leq T \mathrm{d}S \quad{\rm{(second\,\,law)}}\,.$
This is the second law of thermodynamics for closed systems.
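A small Python sketch of the entropy bookkeeping for spontaneous heat flow between two large reservoirs (temperatures taken as effectively constant): the hot side loses Q/T_hot, the cold side gains Q/T_cold, and the total change is positive, as the second law requires.

```python
def total_entropy_change(Q, T_hot, T_cold):
    dS_hot  = -Q / T_hot    # entropy leaves the hot reservoir
    dS_cold = +Q / T_cold   # more entropy arrives at the cold reservoir
    return dS_hot + dS_cold

print(total_entropy_change(Q=1000.0, T_hot=400.0, T_cold=300.0))  # ~+0.833 J/K > 0
```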
## Heat transfer in engineering
A red-hot iron rod from which heat transfer to the surrounding environment will be primarily through radiation.
The discipline of heat transfer, typically considered an aspect of mechanical engineering and chemical engineering, deals with specific applied methods by which thermal energy in a system is generated, or converted, or transferred to another system. Although the definition of heat implicitly means the transfer of energy, the term heat transfer encompasses this traditional usage in many engineering disciplines and in lay language.
Heat transfer includes the mechanisms of heat conduction, thermal radiation, and mass transfer.
In engineering, the term convective heat transfer is used to describe the combined effects of conduction and fluid flow. From the thermodynamic point of view, heat flows into a fluid by diffusion to increase its energy, the fluid then transfers (advects) this increased internal energy (not heat) from one location to another, and this is then followed by a second thermal interaction which transfers heat to a second body or system, again by diffusion. This entire process is often regarded as an additional mechanism of heat transfer, although technically, "heat transfer" and thus heating and cooling occur only at either end of such a conductive flow, but not as a result of the flow itself. Thus, convection can be said to "transfer" heat only as a net result of the process, and may not do so at every moment within the complicated convective process.
Although distinct physical laws may describe the behavior of each of these methods, real systems often exhibit complicated combinations of them, which are described in practice by a variety of complex mathematical methods.
## Practical applications
In accordance with the first law for closed systems, energy transferred as heat enters one body and leaves another, changing the internal energies of each. Transfer, between bodies, of energy as work is a complementary way of changing internal energies. Though it is not logically rigorous from the viewpoint of strict physical concepts, a common form of words that expresses this is to say that heat and work are interconvertible.
Heat engines operate by converting heat flow from a high-temperature reservoir to a low-temperature reservoir into work. One example is the steam engine, where the high-temperature reservoir is steam generated by boiling water. The flow of heat from the hot steam to the cooler condenser water is partially converted into mechanical work via a turbine or piston. Heat engines achieve high efficiency when the difference between initial and final temperatures is large.
Heat pumps, by contrast, use work to cause thermal energy to flow from low to high temperature, the opposite of the direction in which heat would flow spontaneously. An example is a refrigerator or air conditioner, where electric power is used to cool a low-temperature system (the interior of the refrigerator) while heating a higher-temperature environment (the exterior). High efficiency is achieved when the temperature difference is small.
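The following Python sketch quantifies both remarks with the ideal Carnot limits: engine efficiency grows with the temperature gap, while a heat pump's coefficient of performance grows as the gap shrinks.

```python
def carnot_efficiency(T_hot, T_cold):
    # maximum fraction of the heat taken in that can become work
    return 1 - T_cold / T_hot

def carnot_cop_cooling(T_hot, T_cold):
    # maximum heat moved out of the cold space per unit of work (ideal fridge)
    return T_cold / (T_hot - T_cold)

print(carnot_efficiency(600.0, 300.0))   # 0.5   (large gap: good engine)
print(carnot_cop_cooling(300.0, 275.0))  # 11.0  (small gap: good fridge)
```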
## Usage of words
The strictly defined physical term 'quantity of energy transferred as heat' has a resonance with the ordinary language noun 'heat' and the ordinary language verb 'heat'. This can lead to confusion if ordinary language is muddled with strictly defined physical language. In the strict terminology of physics, heat is defined as a word that refers to a process, not to a state of a system. In ordinary language one can speak of a process that increases the temperature of a body as 'heating' it, ignoring the nature of the process, which could be one of adiabatic transfer of energy as work. But in strict physical terms, a process is admitted as heating only when what is meant is transfer of energy as heat. Such a process does not necessarily increase the temperature of the heated body, which may instead change its phase, for example by melting. In the strict physical sense, heat cannot be 'produced', because the usage 'production of heat' misleadingly seems to refer to a state variable. Thus, it would be physically improper to speak of 'heat production by friction', or of 'heating by adiabatic compression on descent of an air parcel' or of 'heat production by chemical reaction'; instead, proper physical usage speaks of conversion of kinetic energy of bulk flow, or of potential energy of bulk matter,[64] or of chemical potential energy, into internal energy, and of transfer of energy as heat. Occasionally a present-day author, especially when referring to history, writes of "adiabatic heating", though this is a contradiction in terms of present day physics.[65] Historically, before the concept of internal energy became clear over the period 1850 to 1869, physicists spoke of "heat production" where nowadays one speaks of conversion of other forms of energy into internal energy.[66]
## Notes
## References
1. Reif, F. (1965), pp. 67, 73.
2. Kittel, C. Kroemer, H. (1980). Thermal Physics, second edition, W.H. Freeman, San Francisco, ISBN 0-7167-1088-9, p. 227.
3. Planck. M. (1914). The Theory of Heat Radiation, a translation by Masius, M. of the second German edition, P. Blakiston's Son & Co., Philadelphia.
4. Crawford, F.H. (1963). Heat, Thermodynamics, and Statistical Physics, Rupert Hart-Davis, Harcourt, Brace & World, p. 98.
5. Reif, F. (1965), p. 73.
6. Gislason, E.A., Craig, N.C. (2005). Cementing the foundations of thermodynamics: Comparison of system-based and surroundings-based definitions of heat and work, J. Chem. Thermodynamics, 37: 954–966.
7. Anacleto, J., Ferreira, J.M. (2008). Surroundings-based and system-based heat and work definitions: Which one is most suitable?, J. Chem. Thermodynamics, 40: 134–135.
8. Smith, J.M., Van Ness, H.C., Abbot, M.M. (2005). Introduction to Chemical Engineering Thermodynamics. McGraw-Hill. ISBN 0073104450.
9. Fowler, R., Guggenheim, E.A. (1939/1965). Statistical Thermodynamics. A version of Statistical Mechanics for Students of Physics and Chemistry, Cambridge University Press, Cambridge UK, pp. 70, 255.
10. Bacon, F. (1620). Novum Organum Scientiarum, translated by Devey, J., P.F. Collier & Son, New York, 1902.
11. Partington, J.R. (1949), page 131.
12. Partington, J.R. (1949), pages 132–136.
13. Kittel, C. (1953/1980). Introduction to Solid State Physics, (first edition 1953), fifth edition 1980, John Wiley & Sons, New York, ISBN 0-471-49024-5, Chapters 4, 5.
14. Planck, M. (1903).
15. Partington, J.R. (1949).
16. Truesdell, C. (1980), page 15.
17. Buchdahl H.A. (1966), pp. 6–7.
18. Wilson, A.H. (1957), p. 7.
19. Reif, F. (1965), p. 72.
20. Münster, A. (1970), p. 45.
21. C. Carathéodory (1909). "Untersuchungen über die Grundlagen der Thermodynamik". Mathematische Annalen 67: 355–386. doi:10.1007/BF01450409. A mostly reliable translation is to be found at Kestin, J. (1976). The Second Law of Thermodynamics, Dowden, Hutchinson & Ross, Stroudsburg PA.
22. Tisza, L. (1966), pp. 139–140.
23. Fitts, D.D. (1962). Nonequilibrium Thermodynamics. Phenomenological Theory of Irreversible Processes in Fluid Systems, McGraw-Hill, New York, p. 28.
24. Münster, A. (1970), pp. 50–51.
25. Denbigh, K. (1954/1971). The Principles of Chemical Equilibrium. With Applications in Chemistry and Chemical Engineering, third edition, Cambridge University Press, Cambridge UK, pp. 81–82.
26. Bailyn, M. (1994), pp. 22–23.
27. Chandrasekhar, S. (1961). Hydrodynamic and Hydromagnetic Stability, Oxford University Press, Oxford UK.
28. Maxwell J.C. (1872), p. 54.
29. Planck (1927), Chapter 3.
30. Bryan, G.H. (1907), p. 47.
31. Adkins, C.J. (1983), p. 101.
33. Adkins, C.J. (1983), pp. 100–104.
34. Prigogine, I.; Defay, R. (1954). Chemical Thermodynamics. translated by D.H. Everett. Longmans, Green & Co., London, Section 2-6, pp. 29–31.
35. J. P. Joule (1884), The Scientific Paper of James Prescott Joule, The Physical Society of London, p. 274, "Heat must therefore consist of either living force or of attraction through space. In the former case we can conceive the constituent particles of heated bodies to be, either in whole or in part, in a state of motion. In the latter we may suppose the particles to be removed by the process of heating, so as to exert attraction through greater space. I am inclined to believe that both of these hypotheses will be found to hold good,—that in some instances, particularly in the case of sensible heat, or such as is indicated by the thermometer, heat will be found to consist in the living force of the particles of the bodies in which it is induced; whilst in others, particularly in the case of latent heat, the phenomena are produced by the separation of particle from particle, so as to cause them to attract one another through a greater space." , Lecture on Matter, Living Force, and Heat. May 5 and 12, 1847
36. Perrot, Pierre (1998). A to Z of Thermodynamics. Oxford University Press. ISBN 0-19-856552-6.
37. Clark, John, O.E. (2004). The Essential Dictionary of Science. Barnes & Noble Books. ISBN 0-7607-4616-8.
39. Adkins, C.J. (1968/1983).
40. Münster, A. (1970).
41. Pippard, A.B. (1957).
42. Fowler, R., Guggenheim, E.A. (1939).
43. Buchdahl, H.A. (1966).
44. Lieb, E.H., Yngvason, J. (1999). The physics and mathematics of the second law of thermodynamics, Physics Reports, 314: 1–96, p. 10.
45. Serrin, J. (1986). Chapter 1, 'An Outline of Thermodynamical Structure', pages 3-32, in New Perspectives in Thermodynamics, edited by J. Serrin, Springer, Berlin, ISBN 3-540-15931-2, p. 5 .
46. Owen, D.R. (1984), pp. 43–45.
47. Truesdell, C., Bharatha, S. (1977). The Concepts and Logic of Classical Thermodynamics as a Theory of Heat Engines, Rigorously Constructed upon the Foundation Laid by S. Carnot and F. Reech, Springer, New York, ISBN 0-387-07971-8.
48. Atkins, P., de Paula, J. (1978/2010). Physical Chemistry, (first edition 1978), ninth edition 2010, Oxford University Press, Oxford UK, ISBN 978-0-19-954337-3, p. 54.
49. Published in Poggendorff's Annalen, Dec. 1854, vol. xciii. p. 481; translated in the Journal de Mathematiques, vol. xx. Paris, 1855, and in the Philosophical Magazine, August 1856, s. 4. vol. xii, p. 81
50. Clausius, R. (1865). The Mechanical Theory of Heat –with its Applications to the Steam Engine and to Physical Properties of Bodies. London: John van Voorst, 1 Paternoster Row. MDCCCLXVII.
51. Iribarne, J.V., Godson, W.L. (1973/1989). Atmospheric thermodynamics, second edition, reprinted 1989, Kluwer Academic Publishers, Dordrecht, ISBN 90-277-1296-4, p. 136.
52. Bailyn, M. (1994), Part D, pp. 50–56.
53. For example, Clausius, R. (1857). Über die Art der Bewegung, welche wir Wärme nennen, Annalen der Physik und Chemie, 100: 353–380. Translated as On the nature of the motion which we call heat, Phil. Mag. series 4, 14: 108–128.
### Bibliography
• Adkins, C.J. (1968/1983). Equilibrium Thermodynamics, (1st edition 1968), third edition 1983, Cambridge University Press, Cambridge UK, ISBN 0-521-25445-0.
• Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, ISBN 0-88318-797-3.
• Beattie, J.A., Oppenheim, I. (1979). Principles of Thermodynamics, Elsevier, Amsterdam, ISBN 0-444-41806-7.
• Bryan, G.H. (1907). Thermodynamics. An Introductory Treatise dealing mainly with First Principles and their Direct Applications, B.G. Teubner, Leipzig.
• Buchdahl, H.A. (1966), The Concepts of Classical Thermodynamics, Cambridge University Press, London.
• Callen, H.B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (1st edition 1960) 2nd edition 1985, Wiley, New York, ISBN 0-471-86256-8.
• Fowler, R., Guggenheim, E.A. (1939/1965). Statistical Thermodynamics. A version of Statistical Mechanics for Students of Physics and Chemistry, Cambridge University Press, Cambridge UK.
• Guggenheim, E.A. (1967) [1949], Thermodynamics. An Advanced Treatment for Chemists and Physicists (fifth ed.), Amsterdam: North-Holland Publishing Company.
• Haase, R. (1971). Survey of Fundamental Laws, chapter 1 of Thermodynamics, pages 1–97 of volume 1, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, lcn 73–117081.
• Kondepudi, D. (2008), Introduction to Modern Thermodynamics, Chichester UK: Wiley, ISBN 978-0-470-01598-8.
• Kondepudi, D., Prigogine, I. (1998). Modern Thermodynamics: From Heat Engines to Dissipative Structures, John Wiley & Sons, Chichester, ISBN 0-471-97393-9.
• Maxwell, J.C. (1871), Theory of Heat (first ed.), London: Longmans, Green and Co.
• Münster, A. (1970), Classical Thermodynamics, translated by E.S. Halberstadt, Wiley–Interscience, London, ISBN 0-471-62430-6.
• Owen, D.R. (1984). A First Course in the Mathematical Foundations of Thermodynamics, Springer, New York, ISBN 0-387-90897-8.
• Partington, J.R. (1949), An Advanced Treatise on Physical Chemistry., volume 1, Fundamental Principles. The Properties of Gases, London: Longmans, Green and Co.
• Pippard, A.B. (1957/1966). Elements of Classical Thermodynamics for Advanced Students of Physics, original publication 1957, reprint 1966, Cambridge University Press, Cambridge UK.
• Planck, M., (1897/1903). Treatise on Thermodynamics, translated by A. Ogg, first English edition, Longmans, Green and Co., London.
• Planck, M., (1923/1927). Treatise on Thermodynamics, translated by A. Ogg, third English edition, Longmans, Green and Co., London.
• Reif, F. (1965). Fundamentals of Statistical and Thermal Physics. New York: McGraw-Hill, Inc.
• Tisza, L. (1966). Generalized Thermodynamics, M.I.T. Press, Cambridge MA.
• Truesdell, C. (1980). The Tragicomical History of Thermodynamics 1822-1854, Springer, New York, ISBN 0-387-90403-4.
• Wilson, A.H. (1957). Thermodynamics and Statistical Mechanics, Cambridge University Press, London.
http://nrich.maths.org/7324/solution

The Olympic Torch Tour
Stage: 4
Niharika was the only one to send us a solution this time round:
The possible routes are:
1. London-Cambridge-Bath-Coventry-London
2. London-Cambridge-Coventry-Bath-London
3. London-Bath-Cambridge-Coventry-London
4. London-Bath-Coventry-Cambridge-London
5. London-Coventry-Bath-Cambridge-London
6. London-Coventry-Cambridge-Bath-London.
I worked out the distances for each route, and they came to:
1. 336 miles
2. 296 miles
3. 372 miles
4. 296 miles
5. 336 miles
6. 372 miles
The shortest routes are routes 2 and 4.
Now imagine that each city along our route is a box. We have five boxes to line up. The first and last box must be 'London', so there is only 1 way to fill those boxes. The second box can be filled in one of 3 ways. Once we've used that city up, the third box can be filled in 2 ways. Finally when we come to the fourth box there is only 1 way to fill it in. So there are 1*3*2*1*1 = 6 routes.
I guessed that, when extending to 5 cities, we should probably start with the shortest route for 4 cities and then add the extra city in. In this case, we should start with routes 2 and 4, and add Oxford in all the different places, and see which distance is smallest. In each case, least distance is covered if we add Oxford in between Coventry and Bath. In this case there are 6 boxes and 1*4*3*2*1*1 = 24 routes.
When there are n cities, there are n+1 boxes, and so there are $1\times (n-1)\times (n-2)\times \dots \times 2 \times 1\times 1 = (n-1)!$ routes - just think of how many ways there are to fill in each box. But if you know the shortest routes for the case of n-1 cities, my guess is that you should be able to add the n-th city to those.
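Niharika's "boxes" argument is easy to turn into a brute-force search. The Python sketch below fixes London at both ends and tries all (n-1)! orderings of the remaining cities; the mileages are made up for illustration, not the actual figures from the problem.

```python
from itertools import permutations

dist = {  # symmetric placeholder distances in miles (assumed, not from the problem)
    ("London", "Cambridge"): 60, ("London", "Bath"): 115,
    ("London", "Coventry"): 100, ("Cambridge", "Bath"): 140,
    ("Cambridge", "Coventry"): 70, ("Bath", "Coventry"): 95,
}
def d(a, b):
    return dist.get((a, b)) or dist[(b, a)]

def shortest_tour(cities, start="London"):
    # try every ordering of the middle boxes; first and last box stay London
    best = min(permutations(cities),
               key=lambda mid: d(start, mid[0])
                               + sum(d(a, b) for a, b in zip(mid, mid[1:]))
                               + d(mid[-1], start))
    return (start, *best, start)

print(shortest_tour(["Cambridge", "Bath", "Coventry"]))
```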
Great - thanks, Niharika!
http://math.stackexchange.com/questions/54153/does-this-class-of-cipher-have-a-name-what-weaknesses-does-it-have?answertab=oldest

# Does this class of cipher have a name? What weaknesses does it have?
Some Background
In October I have been asked by the school I teach at to organise and lead 'a hands-on cryptography session' for a bright group of 13 year olds to follow a talk on Enigma by an outside speaker.
The plan is to start with some affine shifting in ones and twos, then hit them with a task creating and breaking monoalphabetic substitutions in teams. After that I'd like at least to show them something harder, ideally interpolating somewhere in between monoalphabetics and Enigma (all the better if I can recycle the sweet macro code I've written to help with the first two tasks - better still if I can recycle the kids' ciphers). Which has all led me to 'inventing' a particular class of cipher.
The Ciphers
Given a monoalphabetic substitution $g \in S_{26}$, we evaluate the ciphertext of plaintext character '$x$', $n$ characters into the string, as $C(x,n)= g^n(x)$, where $S_{26}$ is the permutation group on 26 elements and $g(x)$ is the natural action on the alphabet.
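For concreteness, here is a naive Python sketch of the scheme, using the 26-letter key that appears in the answer below. Whether position counting starts at n = 0 or n = 1 is left as a parameter, since the definition above is ambiguous on that point, and the sketch assumes uppercase-letters-only plaintext.

```python
import string

ALPHABET = string.ascii_uppercase

def encrypt(plaintext, g, n0=1):
    """g is a 26-letter string giving the image of A..Z; counting starts at n0."""
    table = {a: b for a, b in zip(ALPHABET, g)}
    out = []
    for n, ch in enumerate(plaintext.upper(), start=n0):
        for _ in range(n):          # apply g a total of n times: O(len(text)^2) overall
            ch = table[ch]
        out.append(ch)
    return "".join(out)

print(encrypt("HELLO", "QKLWDVEOSUZYGMFXACNPHJIBRT"))  # -> OWCLH
```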
My Question(s)
Clearly I am not the first person to think of this. But after going through Wikipedia's list of classical encryptions to no avail, as someone who had never done any cryptography before being tasked with this, I'm stumped. What the dickens is this class of encryption called?!
Given that someone has probably thought of this before, someone has probably also tried to break it, and has probably found weaknesses. Going through ciphertext and looking at weaknesses, even if this is all they could accomplish, would I think still be a good activity to end on, as it represents a nod toward the real activities at Bletchley Park. But so far the only weakness I've been able to find without a reference is that the 26771144400th character (26771144400 = lcm(1, 2, ..., 26), after which many applications every $g \in S_{26}$ has returned to the identity) must be the same in ciphertext as in plaintext. This is not helpful. What are the weaknesses of this class of encryption?
And finally, since I may as well ask, having given you the background, has anyone got any better ideas?
Can you explain the notations $S_{26}$ and $g^{n}(x)$? – Srivatsan Jul 27 '11 at 22:39
@Srivatsan I've edited the question to incorporate an explanation. – Tom Boardman Jul 27 '11 at 22:45
– Ross Millikan Jul 27 '11 at 22:49
While it has not yet reached public beta, in the future this type of question would be perfect for the Cryptography.SE site. – Brandon Carter Jul 28 '11 at 0:16
I wouldn't feel comfortable encrypting two plaintexts with the same $g$. – jug Jul 28 '11 at 10:46
## 3 Answers
I'm not sure if that particular kind of polyalphabetic substitution cipher has a specific name.
A naive application of it, encrypting the $n$-th letter $n$ times, sounds rather laborious: $O(n^2)$ to encrypt an $n$-letter message. I guess it becomes a lot easier if you decompose $g$ into cycles first, though.
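Here is a sketch of that cycle-based speed-up in Python (uppercase-letters-only plaintext assumed): once g is decomposed into cycles, $g^n(x)$ is just a jump of n mod (cycle length) within the cycle containing x.

```python
import string

def cycles_of(g, alphabet=string.ascii_uppercase):
    table = dict(zip(alphabet, g))
    seen, cycles = set(), []
    for a in alphabet:
        if a in seen:
            continue
        cyc, x = [], a
        while x not in seen:        # walk the cycle starting at a
            seen.add(x)
            cyc.append(x)
            x = table[x]
        cycles.append(cyc)
    return cycles

def encrypt_fast(plaintext, g, n0=1):
    pos = {}                        # letter -> (its cycle, index within it)
    for cyc in cycles_of(g):
        for i, a in enumerate(cyc):
            pos[a] = (cyc, i)
    out = []
    for n, ch in enumerate(plaintext.upper(), start=n0):
        cyc, i = pos[ch]
        out.append(cyc[(i + n) % len(cyc)])   # O(1) per character
    return "".join(out)

print(encrypt_fast("HELLO", "QKLWDVEOSUZYGMFXACNPHJIBRT"))  # -> OWCLH, as before
```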
I guess one potential weakness would be the fact that each letter always belongs to the same cycle. English has quite a few double letters, and thus the ciphertext ought to contain a larger proportion of bigrams of the form $(x, g(x))$ than one would expect by chance.
Another approach might be to treat it like the Vigenère cipher: if $g$ contains a cycle of length $k$, then the letters in that cycle will be more likely than usual to repeat $k$ position apart in the ciphertext. One could even generalize these two methods: if the letters $x$ and $y$ are $j$ steps apart in a $k$-letter cycle (i.e. $g^j(x) = y$, $g^k(x) = x$), then $x$ and $y$ may be more likely than expected to be separated by $ak+j$ positions in the ciphertext, where $a \in \mathbb Z$.
## Addendum
Here's a fairly easy way to crack this cipher, given a sufficiently long ciphertext sample (or many shorter samples). I've illustrated this method with a concrete example (Pride and Prejudice by Jane Austen, courtesy of Project Gutenberg, encrypted with the key QKLWDVEOSUZYGMFXACNPHJIBRT) below:
Step 1: First, you want to identify the cycles of $g$. To do this, simply count the frequencies of each letter in the ciphertext and plot them in ascending order. The plot should look something like this:
T: 11757 #######################
B: 11827 #######################
K: 11895 #######################
Z: 11898 #######################
P: 11927 #######################
X: 12048 #######################
H: 18161 ###################################
O: 18296 ###################################
V: 18370 ###################################
J: 18518 ####################################
F: 18570 ####################################
U: 18748 ####################################
R: 20536 ########################################
Y: 20547 ########################################
L: 20660 ########################################
C: 20889 ########################################
Q: 21642 ##########################################
A: 21691 ##########################################
E: 30275 ###########################################################
G: 30340 ###########################################################
S: 30346 ###########################################################
N: 30473 ###########################################################
I: 30516 ###########################################################
W: 30564 ###########################################################
D: 30564 ###########################################################
M: 30638 ############################################################
Notice that the plot looks like a staircase: each step corresponds to a single cycle of $g$. For this example, I deliberately picked a slightly ambiguous case; it's fairly clear that there are two 6-cycles ({T,B,K,Z,P,X} and {H,O,V,J,F,U}) and one 8-cycle ({E,G,S,N,I,W,D,M}), but it's not quite obvious whether the remaining letters ({R,Y,L,C,Q,A}) form a single 6-cycle, two 3-cycles, or a 4-cycle and a 2-cycle. Fortunately, we can just try them all in the next step.
(Incidentally, even if you didn't know what cipher was used, this kind of staircase-shaped frequency plot would be a good hint.)
Step 2: Once you've identified the cycles, you still need to determine the order of the letters in each of them. One quick way to do this is simply to count the occurrences of each letter of the cycle at positions $n \equiv i \pmod k$ in the ciphertext for each $i = 0, \ldots, k-1$, where $k$ is the length of the cycle. If, for each $i$, you then sort the letters by their frequency at that position, you'll hopefully see something like this (for the 8-cycle {E,G,S,N,I,W,D,M}):
0: E > N > I > S > D > M > W > G
1: D > M > S > N > W > G > I > E
2: W > G > N > M > I > E > S > D
3: I > M > E > G > S > D > N > W
4: S > G > D > E > N > W > M > I
5: N > W > E > D > M > I > G > S
6: M > D > I > W > G > S > E > N
7: G > W > S > I > E > N > D > M
Note that there are no repeats in the leftmost column; this is because one of the letters in the cycle (presumably E) is sufficiently more common than the others that its encryption under $g^i$ dominates the row. From this, we can guess that the order of the letters in this cycle is, in fact, (E→D→W→I→S→N→M→G(→E)).
Note, though, that the next two columns in the table above are mixed up, presumably because the letters N and I are close enough in frequency that their order varies by chance. All the other columns, however, are simply shifted versions of the first, which confirms that we almost surely have the correct order.
If we try to do the same with {R,Y,L,C,Q,A}, however, the output looks different:
0: A > L > R > Y > C > Q
1: Q > C > Y > L > R > A
2: A > R > L > C > Y > Q
3: Q > Y > C > L > R > A
4: A > L > R > C > Y > Q
5: Q > C > Y > R > L > A
From this, we can guess that {R,Y,L,C,Q,A} is not, in fact, a cycle. Instead, {A,Q} seems a likely 2-cycle, which would leave {R,Y,L,C} as a 4-cycle. Indeed, with these guesses the output looks much nicer:
0: A > Q
1: Q > A
0: R > L > C > Y
1: C > Y > L > R
2: L > R > Y > C
3: Y > C > R > L
It's pretty clear that (A→Q(→A)) and (R→C→L→Y(→R)) are cycles. Applying the same technique to the remaining two candidate cycles gives
0: O > H > U > F > V > J
1: F > O > H > V > J > U
2: V > F > O > J > U > H
3: J > V > F > U > H > O
4: U > J > V > H > O > F
5: H > U > J > O > F > V
and
0: T > B > P > K > Z > X
1: P > K > X > Z > B > T
2: X > Z > B > T > K > P
3: B > T > K > P > X > Z
4: K > P > Z > X > B > T
5: Z > X > T > B > K > P
from which we can correctly deduce that $g$ equals (EDWISNMG)(AQ)(RCLY)(OFVJUH)(TPXBKZ) = QKLWDVEOSUZYGMFXACNPHJIBRT. Looking at the resulting decryption output (which we didn't even have to do to guess the key!) confirms that it indeed makes sense.
Note that the simplistic frequency analysis I used in step 2 can fail if a cycle consists entirely of uncommon letters with similar frequencies. However, as long as we can decrypt most of the ciphertext correctly, it shouldn't be too hard to sort out such minor cycles manually. A bigger issue with this technique is that it also tends to fail if the amount of ciphertext available is too small for the statistical frequency differences to stand out of the noise. On the other hand, it is otherwise remarkably robust: it makes no assumptions about the actual letter frequency distribution of the plaintext, except that it is not too close to uniform.
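To assemble the deduced cycles into a substitution key mechanically, something like the following works (the printed result matches the key above):

```python
import string

def key_from_cycles(cycles):
    # Start from the identity; letters outside every cycle stay fixed.
    g = {c: c for c in string.ascii_uppercase}
    for cycle in cycles:
        # Each letter maps to its successor, wrapping around at the end.
        for a, b in zip(cycle, cycle[1:] + cycle[:1]):
            g[a] = b
    # Position A..Z of the output holds g(A)..g(Z).
    return "".join(g[c] for c in string.ascii_uppercase)

print(key_from_cycles(["EDWISNMG", "AQ", "RCLY", "OFVJUH", "TPXBKZ"]))
# QKLWDVEOSUZYGMFXACNPHJIBRT
```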
Thanks @Ilmari! That's totally ace! Now to think of a way to explain it to 13 year olds... :) – Tom Boardman Jul 29 '11 at 16:53
This is not an encryption algorithm that I have come across before, but it seems to me that it's ideal for your purposes. It's significantly harder to crack than a simple substitution cipher, but -- as others have pointed out -- it may be susceptible to attack, given a sufficiently long ciphertext. You got yourself a case study!
That's a nice cipher between mono-alphabetic and enigma (apart from the calculation ballooning).
If the mono-alphabetic substitution has no cycle of period less than 26 (i.e. repeated application to a character goes through all the other characters before returning you to the original character after 26 applications), then you've got a good chance of breaking a longer message (frequency analysis on the n*26+1 characters being the quickest).
If you want to give them a hint, telling them the first part of the message always helps!
http://www.physicsforums.com/showpost.php?p=3099038&postcount=6 | Thread: 1/cos x dx help needed
Quote by Riazy Mark44: We don't use sec x in Sweden, so no I don't
LOL
╔(σ_σ)╝: Hmm, I could need some more elaboration on that method. I really need to solve this, but I won't be able to get in touch with my professor at the moment, and I live 60 kilometers from my uni :/
Well,...
$$\int \frac{1}{\cos x} \cdot \frac{\sin x}{\sin x} \, dx$$
Technically speaking, this is not a good idea since sin x or cos x could be zero and you'd have problems with division by zero. But I assume that the region in question doesn't cause this type of problem.
Now use the substitution u = sin x and show me what you get.
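For the record, here is one way the suggested substitution plays out (a sketch, glossing over the sign issues near the zeros of $\cos x$): with $u = \sin x$, $du = \cos x \, dx$, so $dx = du/\cos x$ and

$$\int \frac{dx}{\cos x} = \int \frac{du}{\cos^2 x} = \int \frac{du}{1-u^2} = \frac{1}{2}\ln\left|\frac{1+u}{1-u}\right| + C = \frac{1}{2}\ln\left|\frac{1+\sin x}{1-\sin x}\right| + C.$$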
http://en.wikipedia.org/wiki/Bijective_proof | # Bijective proof
In combinatorics, bijective proof is a proof technique that finds a bijective function f : A → B between two finite sets A and B, or a size-preserving bijective function between two combinatorial classes, thus proving that they have the same number of elements, |A| = |B|. One place the technique is useful is where we wish to know the size of A, but can find no direct way of counting its elements. Then establishing a bijection from A to some B solves the problem in the case when B is more easily countable. Another useful feature of the technique is that the nature of the bijection itself often provides powerful insights into each or both of the sets.
## Basic examples
### Proving the symmetry of the binomial coefficients
The symmetry of the binomial coefficients states that
${n \choose k} = {n \choose n-k}.$
This means there are exactly as many combinations of k in a set of n as there are combinations of n − k in a set of n.
#### The bijective proof
More abstractly and generally, we note that the two quantities asserted to be equal count the subsets of size k and n − k, respectively, of any n-element set S. There is a simple bijection between the two families Fk and Fn − k of subsets of S: it associates every k-element subset with its complement, which contains precisely the remaining n − k elements of S. Since Fk and Fn − k have the same number of elements, the corresponding binomial coefficients must be equal.
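As a small sanity check of the complement map, say for n = 6 and k = 2 (plain Python, standard library only):

```python
from itertools import combinations

n, k = 6, 2
S = set(range(n))
k_subsets = [set(c) for c in combinations(S, k)]
# The bijection: send each k-subset to its complement in S.
complements = [S - A for A in k_subsets]
nk_subsets = [set(c) for c in combinations(S, n - k)]
# Same family either way, so C(6,2) = C(6,4) = 15.
assert sorted(map(sorted, complements)) == sorted(map(sorted, nk_subsets))
print(len(k_subsets), len(nk_subsets))  # 15 15
```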
### Pascal's triangle recurrence relation
${n \choose k} = {n-1 \choose k-1} + {n-1 \choose k}\text{ for }1 \le k \le n - 1.$
#### The bijective proof
Proof. We count the number of ways to choose k elements from an n-set. Again, by definition, the left hand side of the equation is the number of ways to choose k from n. Since 1 ≤ k ≤ n − 1, we can pick a fixed element e from the n-set so that the remaining subset is not empty. For each k-set, if e is chosen, there are
${n-1 \choose k-1}$
ways to choose the remaining k − 1 elements among the remaining n − 1 choices; otherwise, there are
${n-1 \choose k}$
ways to choose the remaining k elements among the remaining n − 1 choices. Thus, there are
${n-1 \choose k-1} + {n-1 \choose k}$
ways to choose k elements depending on whether e is included in each selection, as in the right hand side expression. $\Box$
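A concrete instance of the argument, with $n = 5$, $k = 2$, and a fixed element $e$: the 2-subsets containing $e$ are counted by the first term and those avoiding $e$ by the second, so

$${5 \choose 2} = {4 \choose 1} + {4 \choose 2} = 4 + 6 = 10.$$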
## Other examples
Problems that admit combinatorial proofs are not limited to binomial coefficient identities. As the complexity of the problem increases, a combinatorial proof can become very sophisticated. This technique is particularly useful in areas of discrete mathematics such as combinatorics, graph theory, and number theory.
The most classical examples of bijective proofs in combinatorics include:
• Prüfer sequence, giving a proof of Cayley's formula for the number of labeled trees.
• Robinson-Schensted algorithm, giving a proof of Burnside's formula for the symmetric group.
• Conjugation of Young diagrams, giving a proof of a classical result on the number of certain integer partitions.
• Bijective proofs of the pentagonal number theorem.
• Bijective proofs of the formula for the Catalan numbers.
http://mathhelpforum.com/algebra/135799-min-max-value-unusual-way-phrase-print.html | # Min/max value, is this an unusual way to phrase it?
• March 26th 2010, 08:40 AM
anywhere
Min/max value, is this an unusual way to phrase it?
In my math 132 (college algebra and trigonometry) class when answering questions where we're supposed to identify the minimum/maximum, we're asked to identify where the min/max "occurs" (what they really want is the X value) and where the "value" is (the Y value).
IMO this makes no sense, and I'm pretty annoyed because on a test I had no idea what they meant by "occurs" and "value" and answered in (x, y) form, and even though I got it right it was marked wrong. I tried explaining to my teacher that if you look at the terminology, "occurs at" and "value of" could both be logically applied to either the X value or the Y value, but she seems to think that what is meant is obvious and doesn't see a problem.
Is the "occurs" and "value" thing a pretty common format and I should suck it up and memorize what they really want when they say that, or does it make as little sense as I think it does and I should ask for at least partial credit? Thanks! (I hope this is in the right spot!)
• March 26th 2010, 08:57 AM
masters
Quote:
Originally Posted by anywhere
In my math 132 (college algebra and trigonometry) class when answering questions where we're supposed to identify the minimum/maximum, we're asked to identify where the min/max "occurs" (what they really want is the X value) and where the "value" is (the Y value).
IMO this makes no sense, and I'm pretty annoyed because on a test I had no idea what they meant by "occurs" and "value" and answered in (x, y) form, and even though I got it right it was marked wrong. I tried explaining to my teacher that if you look at the terminology, "occurs at" and "value of" could both be logically applied to either the X value or the Y value, but she seems to think that what is meant is obvious and doesn't see a problem.
Is the "occurs" and "value" thing a pretty common format and I should suck it up and memorize what they really want when they say that, or does it make as little sense as I think it does and I should ask for at least partial credit? Thanks! (I hope this is in the right spot!)
Hi anywhere,
Let me give you one quick example. There are lots of others, but...
Suppose you toss a projectile in the air. The quadratic model might be something like:
$h(t)=-16t^2+50t+4$
The height (h) is a function of time (t).
Question: When does the maximum "occur" and what is the "value" at that time?
You need to find the vertex which is your "maximum point".
The x-coordinate of the vertex will tell you the time (t) when the maximum "occurred"
and the y-coordinate will tell you the "value" or height (h) at that time (t).
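Concretely, for this model the vertex formula gives

$$t = -\frac{b}{2a} = -\frac{50}{2(-16)} = \frac{25}{16} \approx 1.56, \qquad h\left(\frac{25}{16}\right) = \frac{689}{16} \approx 43.06,$$

so the maximum "occurs" at $t \approx 1.56$ and its "value" is about $43.06$.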
Maybe that helps. Hope so.
• March 26th 2010, 01:16 PM
HallsofIvy
Quote:
Originally Posted by anywhere
In my math 132 (college algebra and trigonometry) class when answering questions where we're supposed to identify the minimum/maximum, we're asked to identify where the min/max "occurs" (what they really want is the X value) and where the "value" is (the Y value).
IMO this makes no sense, and I'm pretty annoyed because on a test I had no idea what they meant by "occurs" and "value" and answered in (x, y) form, and even though I got it right it was marked wrong. I tried explaining to my teacher that if you look at the terminology, "occurs at" and "value of" could both be logically applied to either the X value or the Y value, but she seems to think that what is meant is obvious and doesn't see a problem.
Is the "occurs" and "value" thing a pretty common format and I should suck it up and memorize what they really want when they say that, or does it make as little sense as I think it does and I should ask for at least partial credit? Thanks! (I hope this is in the right spot!)
Yes, that is pretty common usage and is perfectly good English. I have no idea why you would think it makes "little sense".
http://nrich.maths.org/398/clue | nrich enriching mathematicsSkip over navigation
### Shades of Fermat's Last Theorem
The familiar Pythagorean 3-4-5 triple gives one solution to (x-1)^n + x^n = (x+1)^n so what about other solutions for x an integer and n= 2, 3, 4 or 5?
### Exhaustion
Find the positive integer solutions of the equation (1+1/a)(1+1/b)(1+1/c) = 2
### Code to Zero
Find all 3 digit numbers such that by adding the first digit, the square of the second and the cube of the third you get the original number, for example 1 + 3^2 + 5^3 = 135.
# Power Up
##### Stage: 5 Challenge Level:
As a hint, try comparing the '$7$'s' inequality to a similar one for $8$. For the '$4$'s' inequality use the fact that any root of $4$ is greater than $1$.
To sketch the graph, find the derivative for $x=0$ and then consider where the derivative is positive, where it is negative and if it tends to a limit as $x$ increases.
http://ams.org/bookstore?fn=20&arg1=survseries&ikey=SURV-113-S | New Titles | FAQ | Keep Informed | Review Cart | Contact Us Quick Search (Advanced Search ) Browse by Subject General Interest Logic & Foundations Number Theory Algebra & Algebraic Geometry Discrete Math & Combinatorics Analysis Differential Equations Geometry & Topology Probability & Statistics Applications Mathematical Physics Math Education
Homotopy Limit Functors on Model Categories and Homotopical Categories
William G. Dwyer, University of Notre Dame, IN, Philip S. Hirschhorn, Wellesley College, MA, Daniel M. Kan, Massachusetts Institute of Technology, Cambridge, MA, and Jeffrey H. Smith, Purdue University, West Lafayette, IN
Mathematical Surveys and Monographs
2004; 181 pp; softcover
Volume: 113
Reprint/Revision History:
reprinted 2005
ISBN-10: 0-8218-3975-6
ISBN-13: 978-0-8218-3975-1
List Price: US$65
Member Price: US$52
Order Code: SURV/113.S
The purpose of this monograph, which is aimed at the graduate level and beyond, is to obtain a deeper understanding of Quillen's model categories. A model category is a category together with three distinguished classes of maps, called weak equivalences, cofibrations, and fibrations. Model categories have become a standard tool in algebraic topology and homological algebra and, increasingly, in other fields where homotopy theoretic ideas are becoming important, such as algebraic $K$-theory and algebraic geometry.
The authors' approach is to define the notion of a homotopical category, which is more general than that of a model category, and to consider model categories as special cases of this. A homotopical category is a category with only a single distinguished class of maps, called weak equivalences, subject to an appropriate axiom. This enables one to define "homotopical" versions of such basic categorical notions as initial and terminal objects, colimit and limit functors, cocompleteness and completeness, adjunctions, Kan extensions, and universal properties.
There are two essentially self-contained parts, and part II logically precedes part I. Part II defines and develops the notion of a homotopical category and can be considered as the beginnings of a kind of "relative" category theory. The results of part II are used in part I to obtain a deeper understanding of model categories. The authors show in particular that model categories are homotopically cocomplete and complete in a sense stronger than just the requirement of the existence of small homotopy colimit and limit functors.
A reader of part II is assumed to have only some familiarity with the above-mentioned categorical notions. Those who read part I, and especially its introductory chapter, should also know something about model categories.
Readership
Graduate students and research mathematicians interested in algebraic topology.
Model categories
• An overview
• Model categories and their homotopy categories
• Quillen functors
• Homotopical cocompleteness and completeness of model categories
Homotopical categories
• Summary of part II
• Homotopical categories and homotopical functors
• Deformable functors and their approximations
• Homotopy colimit and limit functors and homotopical ones
• Index
• Bibliography
http://stats.stackexchange.com/questions/8347/is-bayesian-statistics-genuinely-an-improvement-over-traditional-frequentist-s | # Is Bayesian statistics genuinely an improvement over traditional (frequentist) statistics for behavioral research?
At conferences I have attended, there has been a bit of a push by advocates of Bayesian statistics for assessing the results of experiments. It is vaunted as more sensitive, appropriate, and selective towards genuine findings (fewer false positives) than frequentist statistics.
I have explored the topic somewhat, and I am left unconvinced so far of the benefits to using Bayesian statistics. Bayesian analyses were used to refute Daryl Bem's research supporting precognition, however, so I remain cautiously curious about how Bayesian analyses might benefit even my own research.
So I am curious about the following:
• Power in a Bayesian analysis vs. a frequentist analysis
• Susceptibility to Type 1 error in each type of analysis
• The trade-off in complexity of the analysis (Bayesian seems more complicated) vs. the benefits gained. Traditional statistical analyses are straightforward, with well-established guidelines for drawing conclusions. The simplicity could be viewed as a benefit. Is that worth giving up?
Thanks for any insight!
Bayesian statistics is traditional statistics - can you give a concrete example of what you mean by traditional statistics? – Ophir Yoktan Mar 11 '11 at 19:54
@OphirYoktan: He's talking about frequency probability versus Bayesian probability. It's even mentioned in the question's title. – Borror0 Mar 11 '11 at 19:57
I think this question can potentially have a "good" or "correct" answer. E.g. if someone could say "for every frequentist test with type 1 error $\alpha$ and type 2 error $\beta$, there exists a Bayesian test with type 1 error $\alpha$ and type 2 error $\beta - x$", this would be a good answer. Or something like "every frequentist test is equivalent to a Bayesian test with uninformative prior". I.e. this doesn't have to be a religious war between frequentists and bayesians. I'm only arguing because I don't understand how the replies relate to the specific questions in OP. – SheldonCooper Mar 16 '11 at 23:56
## 3 Answers
A quick response to the bulleted content:
1) Power / Type 1 error in a Bayesian analysis vs. a frequentist analysis
Asking about Type 1 and power (i.e. one minus the probability of Type 2 error) implies that you can put your inference problem into a repeated sampling framework. Can you? If you can't then there isn't much choice but to move away from frequentist inference tools. If you can, and if the behavior of your estimator over many such samples is of relevance, and if you are not particularly interested in making probability statements about particular events, then there's no strong reason to move.
The argument here is not that such situations never arise - certainly they do - but that they typically don't arise in the fields where the methods are applied.
2) The trade-off in complexity of the analysis (Bayesian seems more complicated) vs. the benefits gained.
It is important to ask where the complexity goes. In frequentist procedures the implementation may be very simple, e.g. minimize the sum of squares, but the principles may be arbitrarily complex, typically revolving around which estimator(s) to choose, how to find the right test(s), and what to think when they disagree. For an example, see the still lively discussion, picked up in this forum, of different confidence intervals for a proportion!
In Bayesian procedures the implementation can be arbitrarily complex even in models that look like they 'ought' to be simple, usually because of difficult integrals, but the principles are extremely simple. It rather depends where you'd like the messiness to be.
3) Traditional statistical analyses are straightforward, with well-established guidelines for drawing conclusions.
Personally I can no longer remember, but certainly my students never found these straightforward, mostly due to the principle proliferation described above. But the question is not really whether a procedure is straightforward, but whether it is closer to being right given the structure of the problem.
Finally, I strongly disagree that there are "well-established guidelines for drawing conclusions" in either paradigm. And I think that's a good thing. Sure, "find p<.05" is a clear guideline, but for what model, with what corrections, etc.? And what do I do when my tests disagree? Scientific or engineering judgement is needed here, as it is elsewhere.
I'm not sure that asking about type 1/type 2 errors implies anything about a repeated sampling framework. It seems that even if my null hypothesis cannot be sampled repeatedly, it is still meaningful to ask about the probability of type 1 error. The probability in this case, of course, is not over all the possible hypotheses, but rather over all possible samples from my single hypothesis. – SheldonCooper Mar 16 '11 at 15:24
It seems to me that the general argument is this: although making a type 1 (or 2) error can be defined for a 'one shot' inference (Type 1 vs 2 is just part of a typology of mistakes I can make) unless my making this mistake is embedded in repeated trials neither error type can have a frequentist probability. – conjugateprior Mar 16 '11 at 15:53
What I'm saying is that making a type 1 (or 2) error is always embedded in repeated trials. Each trial is sampling a set of observations from the null hypothesis. So even if it is difficult to imagine sampling a different hypothesis, repeated trials are still there because it is easy to imagine sampling a different set of observations from that same hypothesis. – SheldonCooper Mar 16 '11 at 15:59
You are, I think backing off to from N-P to Fisher when you ask whether it doesn't still make sense to think of the p value as simultaneously a measure of divergence from some null, as a way to define a type 1 error, and as something that doesn't require a sampling framework. I'm not sure how to have all these at once, but perhaps I'm missing something. – conjugateprior Mar 16 '11 at 16:01
oops, we overlapped. In the terms of my original response to the question you're saying that you can imagine embedding in a repeated framework. OK, then Type 1 and perhaps Type 2 makes good sense. – conjugateprior Mar 16 '11 at 16:04
Bayesian statistics can be derived from a few logical principles. Try searching for "probability as extended logic" and you will find more in-depth analyses of the fundamentals. But basically, Bayesian statistics rests on three basic "desiderata" or normative principles:
1. The plausibility of a proposition is to be represented by a single real number
2. The plausibility of a proposition is to have qualitative correspondence with "common sense". If given initial plausibility $p(A|C^{(0)})$, we then change from $C^{(0)}\rightarrow C^{(1)}$ such that $p(A|C^{(1)})>p(A|C^{(0)})$ (A becomes more plausible) and also $p(B|A C^{(0)})=p(B|AC^{(1)})$ (given A, B remains just as plausible), then we must have $p(AB| C^{(0)})\leq p(AB|C^{(1)})$ (A and B must be at least as plausible) and $p(\overline{A}|C^{(1)})<p(\overline{A}|C^{(0)})$ (not A must become less plausible).
3. The plausibility of a proposition is to be calculated consistently. This means a) if a plausibility can be reasoned in more than one way, all answers must be equal; b) in two problems where we are presented with the same information, we must assign the same plausibilities; and c) we must take account of all the information that is available. We must not add information that isn't there, and we must not ignore information which we do have.
These three desiderata (along with the rules of logic and set theory) uniquely determine the sum and product rules of probability theory. Thus, if you would like to reason according to the above three desiderata, then you must adopt a Bayesian approach. You do not have to adopt the "Bayesian philosophy", but you must adopt the numerical results. The first three chapters of this book describe these in more detail, and provide the proof.
And last but not least, the "Bayesian machinery" is the most powerful data processing tool you have. This is mainly because of desideratum 3c): using all the information you have (this also explains why Bayes can be more complicated than non-Bayes). It can be quite difficult to decide "what is relevant" using your intuition. Bayes' theorem does this for you (and it does it without adding in arbitrary assumptions, also due to 3c).
EDIT: to address the question more directly (as suggested in the comment), suppose you have two hypotheses $H_0$ and $H_1$. You have a "false positive" loss $L_1$ (reject $H_0$ when it is true: type 1 error) and a "false negative" loss $L_2$ (accept $H_0$ when it is false: type 2 error). Probability theory says you should:
1. Calculate $P(H_0|E_1,E_2,\dots)$, where $E_i$ is all the pieces of evidence related to the test: data, prior information, whatever you want the calculation to incorporate into the analysis
2. Calculate $P(H_1|E_1,E_2,\dots)$
3. Calculate the odds $O=\frac{P(H_0|E_1,E_2,\dots)}{P(H_1|E_1,E_2,\dots)}$
4. Accept $H_0$ if $O > \frac{L_2}{L_1}$
Although you don't really need to introduce the losses. If you just look at the odds, you will get one of three results: i) definitely $H_0$, $O>>1$, ii) definitely $H_1$, $O<<1$, or iii) "inconclusive" $O\approx 1$.
Now if the calculation becomes "too hard", then you must either approximate the numbers, or ignore some information.
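As a toy illustration of steps 1-4 (my own example, not from the answer above: two simple binomial hypotheses for a coin's bias, with made-up losses; a sketch, not a general implementation):

```python
from math import comb

def posterior_odds(heads, tosses, p0, p1, prior_odds=1.0):
    """O = P(H0 | data) / P(H1 | data) for two simple hypotheses
    H0: bias p0 and H1: bias p1, with binomial likelihoods."""
    like0 = comb(tosses, heads) * p0**heads * (1 - p0)**(tosses - heads)
    like1 = comb(tosses, heads) * p1**heads * (1 - p1)**(tosses - heads)
    return prior_odds * like0 / like1

L1, L2 = 1.0, 5.0  # assumed losses for type 1 / type 2 errors
O = posterior_odds(heads=62, tosses=100, p0=0.5, p1=0.7)
print(O, "-> accept H0" if O > L2 / L1 else "-> reject H0")
```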
For an actual example with worked out numbers, see my answer to this question.
I'm not sure how this answers the question. Frequentists of course disagree with desideratum 1 from this list, so the rest of the argument doesn't apply to them. It also doesn't answer any of the specific questions in the OP, such as "is Bayesian analysis more powerful or less error-prone than a frequentist analysis". – SheldonCooper Mar 16 '11 at 15:18
@sheldoncooper - if a frequentist disagrees with desideratum 1, then on what basis can they construct a 95% confidence interval? They must require an additional number. – probabilityislogic Mar 16 '11 at 22:32
@sheldoncooper - and further, sampling probabilities would have to be re-defined, because they too are only 1 number. A frequentist cannot reject desideratum 1 without rejecting their own theory – probabilityislogic Mar 16 '11 at 23:12
I'm not sure what additional number you are referring to. I'm also not sure what is the computation of $p(H_1|...)$ that you've introduced. The procedure in standard statistical tests is to compute $p(E_1, E_2, ... | H_0)$. If this probability is low, $H_0$ can be rejected; otherwise it cannot. – SheldonCooper Mar 16 '11 at 23:21
"they cannot reject desideratum 1 without rejecting their own theory" -- what do you mean by that? Frequentists have no notion of "plausibility". They have a notion of "frequency of occurrence in repeated trials". This frequency satisfies conditions similar to your three desiderata and thus happens to follow similar rules. Thus for anything for which the notion of frequency is defined, you can use laws of probability without any problem. – SheldonCooper Mar 16 '11 at 23:25
I am not familiar with Bayesian statistics myself, but I do know that Skeptics Guide to the Universe Episode 294 has an interview with Eric-Jan Wagenmakers where they discuss Bayesian statistics. Here is a link to the podcast: http://www.theskepticsguide.org/archive/podcastinfo.aspx?mid=1&pid=294
http://math.stackexchange.com/questions/74272/solving-this-algebra-question | # solving this algebra question
If $(7+4\sqrt 3)^{x^2-8}+(7-4\sqrt 3)^{x^2-8}=14,\quad x=?$
How do I solve this algebra question?
## 3 Answers
Let $a=(7+4\sqrt{3})^{x^2-8}$ then $\dfrac{1}{a}=(7-4\sqrt{3})^{x^2-8}$
So the given equation will be $a+\dfrac{1}{a}=14\implies a^2-14a+1=0$
Solving the quadratic gives $a=7+4\sqrt{3}$ or $a=7-4\sqrt{3}\implies x^2-8=1$ or $-1$
therefore possible values of $x=3,-3,\sqrt{7},-\sqrt{7}$
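As a quick check: for $x=3$ the exponent is $x^2-8=1$, and indeed $(7+4\sqrt{3})+(7-4\sqrt{3})=14$; for $x=\sqrt{7}$ the exponent is $-1$, and since $(7+4\sqrt{3})(7-4\sqrt{3})=1$ (see the hint below), taking reciprocals just swaps the two terms, again giving $14$.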
Hint: if $a$ is the first term on the LHS, then the second term is $\frac{1}{a}$.
Note that $$(7+4 \sqrt{3}) \cdot (7-4\sqrt{3}) = 49-48 =1$$
http://mathoverflow.net/questions/123567?sort=newest | ## for what arguments the function reaches maximum?
Hi,
What is the maximum of the following function?:
$f(x_i,w_i)=\frac { \sum w_i}{ \sum \frac {w_i}{x_i} } - \frac{ 1 - \prod \left ( 1 - w_{i}\right )}{ 1 - \prod \left ( 1 - \frac{w_{i}}{ x_i}\right )}$
given:
$i = 1,..,n$
$1 < w_i$
$0 < x_i < 1$
The generalization of the inequality of arithmetic and geometric means may indicate that the maximum is reached when all $x_i$ are equal (the minuend is a weighted harmonic mean) but I haven't managed to prove it.
EDIT: The above question was badly constructed. The answer I am looking for is: for what values of $w_i$ and $x_i$, for any $i=1,..,n$, does the following function reach its maximum?
EDIT2: I am looking for the optimum distribution of $w_i$ and $x_i$. The specific values of $w_i$ and $x_i$ are not needed. The problem needs to be solved, knowing that the minuend is a set number: $S=\frac { \sum w_i}{ \sum \frac {w_i}{x_i} }$ . So actually the question can be redefined: for given S, what is the optimum distribution of $w_i$ and $x_i$ so $f(x_i,w_i)$ reaches maximum.
## 1 Answer
For $n=1$ you get $f=0$ for all values.
For $n=2$ you get for $w_1 = w_2 \approx 1$ and $x_1 = x_2 = 1/2$ division by zero in the second term, and so $f= \infty$.
EDIT:
For every even $n$ you get division by zero for $w_1 = \ldots = w_n \approx 1$ and $x_1 = \ldots = x_n = 1/2$, and so $f= \infty$.
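Spelling out the $n=2$ case, with $w_1=w_2=w\to 1^{+}$ and $x_1=x_2=\tfrac{1}{2}$:

$$\frac{\sum w_i}{\sum w_i/x_i}=\frac{2w}{4w}=\frac{1}{2},\qquad 1-\prod\left(1-w_i\right)=1-(1-w)^2\to 1,\qquad 1-\prod\left(1-\frac{w_i}{x_i}\right)=1-(1-2w)^2\to 0,$$

so the numerator of the second term stays near $1$ while its denominator vanishes.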
Hi Hans, thank you for your answer. However, I asked a wrong question (will edit it now). I am not searching for the maximum value - I am searching for the conditions (for what values of $x_i$ and $w_i$, for any $i=1,..,n$) under which the maximum is reached. That is why I mentioned the inequality of arithmetic and geometric means, as it may indicate that the maximum is reached when all $x_i$ are equal, but I don't know how to prove it. – Chris Mar 6 at 7:49
@Chris: but your problem seems ill-posed: already for $n=2$ you get $f=\infty$ for the values I state above. This can already be guessed, as your domain is not compact, and so $f$ need not attain a maximum. - Please leave the original post as it is, and write your reformulation as an "EDIT: ..." addition to your present post. – Hans Mar 6 at 10:08
@Hans: I returned to the original version of the question adding the EDIT part, as you requested. I agree, for $n=2$, when $w_1 = w_2$ and $x_1 = x_2$ the function f reaches infinity, but can it be proven that that for any $i=1,..,n$ keeping $w_1 = ... = w_n$ and $x_1 = ... = x_n$ the function f will always reach maximum? – Chris Mar 6 at 10:33
@Chris: your edit has not changed the message of your post at all. This I have answered. Your last comment says you have only two parameters, namely $w$ (= $w_i$ for all $i$) and $w/x$ (= $w_i/x_i$ for all $i$), which you could easily discuss by yourself. – Hans Mar 6 at 12:34
@Chris: see my edit. – Hans Mar 6 at 13:06
http://www.physicsforums.com/showthread.php?p=3960768 | Physics Forums
## The Should I Become a Mathematician? Thread
I'm sure you've heard this before, but my grades suffer from "dumb mistakes." I don't know how to stop making them, and I don't know if they are something that is eventually going to be ironed out or if I have to find another way to fix this. I really do take my time with everything, but they still seem to crop up.
I am a master at making dumb mistakes. That's part of why I did so much better when I got past high school math and lower division math. In the long run, it doesn't matter that much, as long as the mistakes are inadvertent ones. In "real world" situations (including research), you can check your work 20 times if you want to get it right.
From childhood I was passionate about mathematics but I noticed I can not afford to become a mathematician.
Anyone can afford to be a mathematician to some extent. In America, all you have to do is do really well in high school and you can get a scholarship. Then, in grad school, you usually get paid. Even if you don't go to college, you can still teach yourself quite a bit on your own.
Recognitions: Homework Help Science Advisor you might try becoming a mathematician who spends more time with her family. you could start a trend.
Quote by mathwonk you might try becoming a mathematician who spends more time with her family. you could start a trend.
I recently started getting invited to gatherings with our math department, and it was funny to start finding out how many of the professors were married to each other. I had no idea, because most of the women kept their last names. So, I guess that's one way!
-Dave K
Recognitions: Homework Help Science Advisor there are at least 5 couples in our department such that both spouses are either professors or instructors.
Good to know. That was my intended course of action. (go outside the "syllabus" if I feel like it but then when there's exams, I focus on those) A lot of what motivated my initial question was that I had some ~12 exams within the span of 3-4 weeks and they were all exams that are much in the vein of the usual standardised testing...

Does anyone here have any experience with the Jerry Shurman (at Reed College) notes on single variable calculus? I'm currently checking out Apostol and Spivak using the free previews available on Google Books and Amazon, before choosing which of the two to buy. Shurman says that he learned from them, Courant and Rudin.

Mathwonk, I read on another post that you used Sternberg and Loomis after Spivak back in the day. What do you think about this course compared to the modern alternatives - Apostol's second volume, I guess? Would one be correct in assuming that the current MATH 55 course at Harvard assumes (equivalent?) knowledge of both that book and Spivak?
Quote by Mariogs379 @mathwonk, Bit of a specific question but I thought you might be a good source of advice. Here's my background/question: Went to ivy undergrad, did some math and was planning on majoring in it but, long story short, family circumstances intervened and I had to spend significant time away from campus/not doing school-work. So I did philosophy but have taken the following classes: Calc II (A) Calc III (A) Linear Alg. (B+) ODE's (A) Decision Theory (pass) Intro to Logic (A-) Anyway, I did some mathy finance stuff for a year or so but realized it wasn't for me. I'm now going to take classes at Columbia in their post-bac program but wanted to get your advice on how best to approach this. They have two terms so I'm taking Real Analysis I in the first term and, depending on how that goes, Real Analysis II in the second term. I'm planning on taking classes in the fall semester as a non-degree student and was thinking of taking: Abstract Algebra Probability (some type of non-euclidean geometry) Anyway, here are my questions: 1) What do you think of my tentative course selection above? 2) How much do you think talent matters as far as being able to hack it if I ended up wanting to do grad school in math? 3) I'm also having a hard time figuring out whether math is a fit for me. By that, I just mean that I really like math, I'm reading Rudin / Herstein in my free time, but I've spoken with other kids from undergrad and it's clear that they're several cuts above both ability and interest-wise. Any thoughts on how to figure this out? Thanks in advance for your help, much appreciated, Mariogs
Thought I'd update. This 6 week real analysis class covers the first 6 chapters of Rudin. I'm finding the homework hard but we have a midterm on Monday; he showed us the one from last year and it looks *relatively* easy (definitely compared to the HW). Anyway, thinking I'm gonna take RA II, and some classes in the fall, decide about applying to grad school the following year.
In short, the material's harder than I appreciated but also much more interesting. I think I'll enjoy it even more once I get more comfortable with some of the concepts (I feel like I spend a lot of time trying to understand Rudin's language/terminology/general technical writing even when he's conveying a *relatively* basic idea. A good example is his def. of convergence; easy now, but it was a bit confusing at first.) Tho I think once I'm able to get the ideas more easily, it'll be even more rewarding.
Thoughts?
Quote by Mariogs379 Thought I'd update. This 6 week real analysis class covers the first 6 chapters of Rudin. I'm finding the homework hard but we have a midterm on Monday; he showed us the one from last year and it looks *relatively* easy (definitely compared to the HW). Anyway, thinking I'm gonna take RA II, and some classes in the fall, decide about applying to grad school the following year. In short, material's harder than I appreciated but also much more interesting. I think I'll enjoy it even more once I get more comfortable with some of the concepts (I feel like I spend a lot of time trying to understand Rudin's language/terminology/general technical writing even when he's conveying a *relatively* basic idea. A good example is his def. of convergence; easy now, but was a bit confusing at first. Tho I think once I'm able to get the ideas more easily, it'll be even more rewarding. Thoughts?
Wowza. Six chapters of Rudin in six weeks? How many times do you meet every week?
I'm not sure what you meant by "thoughts?", so I'll take it that you ask how to understand the material quickly. I don't think there's a tried and true method to expedite one's understanding other than practice over time. I'll also add that if you manage to understand the ideas in Rudin in 6 weeks, then you're doing fine. Also, this stuff takes a lot of time to understand. With that being said, try the following:
Write definitions, proofs, concepts, whatever you see fit really, in your own words. By explaining the ideas to yourself, you'll start figuring out how you understand things, and how to approach them. So next time you read a definitions or a proof, you'll be faster.
Get a few more books from your library. Sometimes Rudin is terse, and sometimes those proofs are hard. Other authors expand on the material more than Rudin. It'll be worth it to look some stuff up in those books. I recommend Charles Chapman Pugh's Real Mathematical Analysis. It has the same breadth and depth as Rudin, although sometimes the author does things with less generality.
Read about some of this stuff on Wikipedia. I tried to avoid Wikipedia for a long time, because I was afraid that I'd read an entry that was edited by some crank. But all the entries I've encountered were nicely written, explained the ideas in depth, and have a nice way of tying things together (how one theorem relates to another, why it's important, generalizations, etc.)
Good luck!
Especially if your first course in upper-level math is analysis from Rudin. Rudin isn't a bad book, and in fact I like it quite a bit; however, it's a little hard for beginners.
In fact, I think that practice and time will help you understand things more quickly
Recognitions: Homework Help Science Advisor in my opinion loomis and sternberg is a show offy book (my book is harder than yours) and the two volumes of apostol or the two volumes of spivak, or of courant, are much better.
Quote by Mépris ^ Sounds awesome! Post here to tell us how things pan out. What is "summer B" though? A summer class for business students? --- Does anyone have experience with the math departments at these colleges: - Berea College - Carleton College - Reed College - UChicago - Colorado College -Grinnell College - University of South Florida These are a few places I'm considering applying for next year. I don't know much about any of them except for what is found on their website and that a number of them are in cold, bleak places. And that they're quite selective...at least, for people who're non-US citizens requiring aid!
Just to let you know, it's MUCH harder to get into Berea than Harvard. G'luck! Out of the "foreign students" pool of accepted students, only 30 aspiring applicants can be chosen, out of thousands. I'd still apply, if I were you. Just cross your fingers for good outcomes, from crazy probabilities. They usually prefer to accept "brilliant" foreign applicants who are living under crisis conditions, really deserve going to college, and/or won't ever have a chance at it; like that talented math-wiz living in Homs, Syria right now.
Either way, it's a great liberal arts school. In my opinion, you could get a great mathematics education there because it seems that their mathematics students graduate with broad knowledge, ranging from pure and applied mathematics to statistics/probability, which is ideal, I think. Check out their mathematics courses! The only problem is, though, that they don't offer much variety in mathematics courses :b
And have you considered the best one of them all for math (in general), the University of Waterloo? It's in a town close to Toronto, Canada. I'd go there, if I didn't mind getting into debt; "Lulz."
By the way, unless you want to be choking in debt after you graduate, don't go to Colorado College! I'm infatuated with their block plan and great academic programs; and the MAGNIFICENT LOCATION; but it's totally not worth graduating with $130,000+ in debt.
Lol
Quote by grendle7 Just to let you know, it's MUCH harder to get into Berea, than Harvard. G'luck! Out of the "foreign students" pool of accepted students, only 30 aspiring applicants can be chosen, out of thousands.
Coincidence I came back to see this post. I read it before it was edited.
I think my grades may actually be just good enough to get me into Waterloo but it's really not worth the money...that I don't have. I don't know much about Colorado; it looked nice and has financial aid on offer, but it's very limited, as with most liberal arts colleges. I probably won't apply there. There's also the issue of limited coursework but few math/physics majors mean that one can try get some "independent study" thing going on. It doesn't mean grad-level courses, though.
Yeah, I read that about Berea. It's definitely going to be competitive but I believe it's free to apply, so I might as well give it a shot. There's also a list of those "free to apply to" colleges, somewhere on CollegeConfidential. It's easy to find - in case you can't find it, lemme know and I'll try dig it up.
Another thing about liberal arts colleges is that bar a few (Amherst and Williams, being one of those), there just isn't much money to give to international students, which makes the competition even fiercer. It makes more sense to apply to larger colleges. Casting too wide a net is also not a very good idea. Too many essays, too much money on application fees, etc but some people can manage that just fine. ;)
This looks interesting:
http://en.wikibooks.org/wiki/Ring_Th...rties_of_rings
Quote by mathwonk in my opinion loomis and sternberg is a show offy book (my book is harder than yours) and the two volumes of apostol or the two volumes of spivak, or of courant, are much better.
It's the post below, on another thread, that made me ask the question. I had also, per chance, stumbled upon the book, which is available for free on Sternberg's website.
In spite of its "show offy" nature, is the book any good? As for Spivak, are you referring to "Calculus on Manifolds" or is there another text which comes after "Calculus"?
Quote by mathwonk In the old days, the progression was roughly: rigorous one variable (Spivak) calculus, Abstract algebra (Birkhoff and Maclane), rigorous advanced calculus (Loomis and Sternberg), introductory real and complex analysis via metric spaces as in Mackey's complex analysis book, general analysis as in Royden, (big) Rudin, or Halmos and Ahlfors, algebra as in Lang, and algebraic topology as in Spanier. Then you specialize.
Recognitions: Homework Help Science Advisor

It depends on your definition of "good". I have already stated that I think it is not as good as the other three I named. Of course Loomis-Sternberg is very authoritative and correct and deep and well written. But the show offy aspect refers to very little attempt to make it accessible to anything like an average student, or to cover what is really needed by that student.

Differential calculus is done in a Banach space, possibly infinite dimensional, essentially the last case anyone will ever need. Most people will benefit far more from a careful treatment of calculus in 2 and 3 dimensions instead. E.g. after giving all the definitions of differentiation in infinite dimensions, most applications are to finite dimensions. Even the brief discussion of calculus of variations is apparently influenced by Courant, who devotes a chapter to it. The treatment of the inverse function theorem, again in Banach space, is overkill, and gives little intuition that is actually needed in everyday practice. The implicit function theorem should be understood first for single valued functions of two variables.

Loomis is an abstract harmonic analyst. His own personal preference is to render everything as elegant as possible, not as useful or understandable. But make up your own mind. These books are available in many libraries. Just because my course of lectures from Loomis left me feeling very disappointed, with little intuition, and almost deceived as to what is important in calculus, does not mean it may not help you.

If you read Loomis and Sternberg at least you will learn that a derivative is a linear map. That's a lot right there. Indeed that's about all I got from Loomis, but it has been very helpful. But I recommend Fleming, Calculus of Several Variables, more highly. Loomis used that book officially in his course, before writing his own.

If you want a very high powered book that also does things in Banach space, but manages to be very useful, in my opinion, there is Dieudonné's Foundations of Modern Analysis. He also perversely adheres to a credo of making life harder for the reader by banishing all illustrations from his book, but it is a good book with a lot of useful high level information, not easy to find elsewhere. He explicitly states however that one should not approach his book until after mastering a more traditional course (e.g. Courant).

Another book Loomis used that I do not recommend either is the super show offy book by Steenrod, Spencer and Nickerson. As one reviewer put it roughly, this book is more about the ride than the destination. However I do have all these books on my shelf, I just don't look at them all very much nor with the same pleasure.

Your last quote from me above is a historical account of life at Harvard in the 1960's, not a personal recommendation, indeed to some extent the opposite. Spivak's second recommended book is indeed Calculus on Manifolds, an excellent place to learn the most basic several variable calculus topics.

Now that I reflect, I am not familiar so much with Sternberg's (second) half of the LS book. I only heard him lecture once and was quite impressed with his down to earth and insightful approach. Maybe that half of the book would suit me more, but I'm not much into physics.

In my opinion you are spending more than enough time here asking for advice, i.e. "dancing around the fire", and need to get to work in the library reading some of these books.
Recognitions: Homework Help Science Advisor

Well, you provoked me to go look at LS, and I did in fact like Sternberg's chapter 12 on integration.

This whole discussion is beginning to remind me of a friend telling me that his brother warned him off of reading a famous algebra book, so I myself also avoided it for years. Finally I was required to read some of it and found it wonderfully clear. When I went back and asked my friend's brother, he said he never said it was bad, just "tedious", by which he seems to have meant overly detailed, just what I appreciated about it. So please take what we have said with a grain of salt and try to get a good look at these books yourself.

Even Loomis' half of the book helped me in the section on "infinitesimals" and his slick proof of the chain rule. But the abstract implicit function theorem in terms of projections from a product of Banach spaces left me wondering what Mumford even meant when he said the theorem simply says you can solve for some of the variables in terms of the others.
Recognitions: Homework Help Science Advisor oh, also the intro to LS says plainly that apostol, spivak, and courant are suitable prerequisites for their book. if that includes both volumes of those books, i would agree.
Hi PrinceRhaegar,

Quote:
my second semester of college as a mechanical engineering major, but I'm thinking about switching to math. The reasons are simple; recently I've found that I'm better at math than any other subject (especially physics)... I just think math is cooler than any other subject I've seen so far. The reason I'm really hesitant to do so is because firstly, I have no idea what I'd do with my degree after I graduate, and secondly, and this may seem a bit shallow, I know that I'll likely be making more money as an engineer... In a perfect world I'd major in math and get a job as an engineer (or at least in an engineering company). This is because I love math and I feel like I'd get a TON of satisfaction out of doing useful stuff for the world while also doing what I love...

-------

Honestly, it sounds like the ideal path is to do both, and just take that extra one or two years for your B.Sc and do a double major. There are people out there that sound a lot like you, and they do things like get a Mechanical Engineering degree and double it with a Physics degree... If you really wish to slow it down, and you have zero problems with the textbooks, you can almost accomplish it all: think of engineering as a hobby and math as a hobby, and then think of the engineering stuff as your income...

-------

Some bizarre and brilliant souls end up, in 5-6 years, with the satisfying feat of four Bachelor degrees [maybe 6-7 years for ordinary mortals with the same goal]:

a. Mechanical/Aerodynamic Engineering
b. Physics
c. Math
d. Electronics Engineering

since there is considerable overlap, and one such person's future goals worked out such that he used most all of it in his career... though he wasn't as deep as some that just took one and only one path. But you can be 65%-80% fluent in two courses with a Double Degree, so there is a LOT you can accomplish with an extra 1.5 years of your life; these sorts of things are possible.

The hardest thing is knowing how to self-study and how much effort to put into things, and not fearing failure or exams anymore... The second dilemma is what really makes you happy, and a career may or may not conflict, if you just put some extra time into things. But sure, if you go up the ladder in academia you do tend to end up stuck there: there are people who get a physics degree when they almost wanted to go to grad school in pure math, and they find they *had* to pick one or the other. But if it's a hobby, or circumstances are right, you can sometimes slide into both worlds... It all depends how happy you are, and whether you like the results.
Or perhaps just start with one small significant goal, like learning calculus, and do it well. Then go further.
http://mathoverflow.net/questions/56304/looking-for-interesting-actions-that-are-not-representations/56305 | ## Looking for interesting actions that are not representations
As a person interested in group theory and all things related, I'd like to deepen my knowledge of group actions.
The typical (and indeed the most prominent) example of an action is that of a representation. In this case the target space has so much structure that one can deduce a huge number of properties of a given group just by working out some linear algebra (to put it bluntly).
Now, I am wondering whether there exist other structures that provide interesting classes of actions, either for the study of the given group or as an application to solve some interesting problems.
I realize my question is probably a bit naive and ignorant of what is probably standard knowledge, but I actually can't think of that many useful actions, and the Wikipedia article on them doesn't provide many examples (at least not very interesting, non-linear ones). Not that I can't think of anything at all. Coming from physics, I am aware of things such as gauge symmetries (free transitive fiber-wise actions on fiber bundles) and various flows (whether for time evolution, or as a symmetry orbit). And I am also aware of the usual Lie theory, left/right translations, etc. But I am looking for more.
Note: feel free to generalize the above to any action. I'd certainly also be interested in actions of algebras, rings, etc.
Projective representations are pretty useful... – Mariano Suárez-Alvarez Feb 22 2011 at 18:25
Making a group act freely (or almost freely in some sense) on some contractible topological space is a standard way of studying the group. I am thinking especially of torsion-free groups, where the space has a chance of being a manifold or at least finite-dimensional. Some keywords: geometric group theory, classifying space, Teichmueller theory. – Tom Goodwillie Feb 22 2011 at 18:53
The point of looking for actions on vector spaces is that vector spaces are particularly well-behaved, and so you can study a group by studying its linear representations. But groups most naturally act just on spaces, and Lie groups, being groups of manifolds, naturally act on manifolds. SO(n), for example, by construction acts on the (n-1)-dimensional metrized sphere. That you can extend this to an n-dimensional linear representation is almost an accident (but sure does make studying it easier!). (continued) – Theo Johnson-Freyd Feb 22 2011 at 18:57
(continuation) On the other hand, its action through PSO(n) on the metrized real projective space (sphere mod antipodal identification) does not extend well to a linear representation, and gives a nice example of Mariano's comment that projective representations are cool. – Theo Johnson-Freyd Feb 22 2011 at 18:59
@Theo: Actually, the action of PSO(n) on real projective space does extend quite naturally to a linear action: If you let SO(n) act by conjugation on the vector space of symmetric $n$-by-$n$ matrices, then $\mathbb{RP}^{n-1}$ is identified with the set of symmetric matrices $p$ that satisfy $p^2=p$ and have trace equal to $1$. One identifies $p$ with its $+1$ eigenspace, which is a line in $\mathbb{R}^n$. – Robert Bryant Dec 13 2011 at 15:55
## 10 Answers
Finite group actions on compact Riemann surfaces are a classical subject, and the related literature is huge.
It is well known that if a finite group $G$ acts as a group of automorphisms on a compact Riemann surface of genus $g \geq 2$, then necessarily
$|G| \leq 84(g-1)$.
This is an old result of Hurwitz, and if equality holds then the group $G$ is called a Hurwitz group in genus $g$. The classification of Hurwitz groups is not yet complete; it is known that a Hurwitz group exists for infinitely many values of $g$, and that no Hurwitz group exists for infinitely many values of $g$ as well.
Moreover, any Hurwitz group $G$ is a quotient of the infinite triangle group
$T_{2,3,7}=\langle x, y | x^2=y^3=(xy)^7=1 \rangle$.
There exist no Hurwitz groups in genus $2$, and exactly one in genus $3$: the group $G=PSL(2, \mathbb{F}_7)$, the unique simple group of order $168$. The corresponding Riemann surface can be realized as a particular curve of degree $4$ in $\mathbb{P}^2(\mathbb{C})$, the so-called Klein quartic.
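As a quick numerical sanity check on these numbers (a minimal sketch; the standard order formula $|PSL(2,\mathbb{F}_q)| = q(q^2-1)/\gcd(2,q-1)$ is assumed):

```python
# Verify that PSL(2, F_7) attains the Hurwitz bound 84(g - 1) in genus 3.
q = 7
order = q * (q**2 - 1) // 2   # |PSL(2, F_q)| for odd q
g = 3
print(order, 84 * (g - 1))    # both print 168
```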
On the other hand, one of the key tools for understanding / classifying the action of $G$ on a compact Riemann surface $X$ is via its associated representation on the space $H^0(X,\Omega^1)$ of global holomorphic differentials. So the representation theory is there lurking just beneath the surface. (Which is a good thing, IMO...) – Pete L. Clark Feb 23 2011 at 14:06
A related example comes from the theory of K3 surfaces. Here, their automorphism groups can be infinite and are not "linearisable", by which I mean do not extend to an automorphism of projective space. However, one can understand this group action by studying its representation on the 2nd cohomology group. – Daniel Loughran Feb 23 2011 at 14:32
Technical note: only for $g\geq 2$. – Will Sawin Dec 13 2011 at 20:39
@Will: you are right, thank you. – Francesco Polizzi Dec 14 2011 at 9:45
One of the most important ways of getting Lie transformation groups (the one that motivated Sophus Lie in the first place) is to look at the group of symmetries of a smooth manifold with some "extra structure". For example, the group $G=\mathrm{Isom}(M)$ of isometries of a complete Riemannian manifold $M$ is always a Lie group, and the associated Lie algebra consists of the Killing vector fields on $M$ with the usual bracket for vector fields. If $M$ is compact, then $G$ is also compact. A lot of the deeper theory of Lie groups and their homogeneous spaces $G/K$ comes from this class of examples. Perhaps the most beautiful class are the so-called symmetric spaces of Cartan. These are the complete Riemannian manifolds $M$ such that at each point $p$ there exists an isometry that fixes $p$ and reverses all the geodesics through $p$. There are loads of great books on this subject, for example Helgason's "Differential Geometry, Lie Groups, and Symmetric Spaces".
Was that his motivation? I would have imagined that looking at those groups was a way of constructing examples, and that he was more interested in symmetries of PDEs and other such things (which have much less structure, in a way...) – Mariano Suárez-Alvarez Feb 22 2011 at 18:58
@Mariano: In fact, Lie studied germs of analytic groups acting on germs of analytic manifolds; the concept of a global manifold and a global group manifold did not exist in the middle and late 1800s when he did his pioneering work. And yes, you are correct that the "extra structure" that Lie dealt with in his "Geometrie der Berührungstransformationen" was primarily differential equations. – Dick Palais Feb 22 2011 at 19:07
Good point. Although symmetries of a (pseudo)Riemannian manifold are a daily bread of a theoretical physicist :) But I actually never studied homogeneous spaces and I guess I should fix that. – Marek Feb 22 2011 at 19:28
Bass-Serre theory studies groups via their action on trees. This is a combinatorial version of groups acting on simply connected spaces and leads to a very nice theory; one treats the quotient as a kind of orbifold and deduces information about the group from its structure.
Bruhat-Tits trees are a natural example of trees equipped with group actions, but I can't say much more about this.
'This is a combinatorial version of groups acting on simply connected spaces...'. I'm a little mystified by this. It's a special case of a group acting on a simply connected space! See Scott and Wall's classic article 'Topological methods in group theory' for a geometric point of view on Bass--Serre theory. – HW Feb 22 2011 at 21:30
Why are you mystified, HW? It is hardly news that special cases are often of great value! – Mariano Suárez-Alvarez Dec 13 2011 at 15:47
Finite group actions on sets have important applications to combinatorics, e.g., the Polya enumeration theorem.
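To make this concrete, here is a minimal sketch of the Burnside (Cauchy-Frobenius) count behind Polya enumeration, for necklaces under the cyclic group: the rotation by $i$ positions fixes exactly $k^{\gcd(i,n)}$ of the $k^n$ colorings, so the number of orbits is the average of these counts.

```python
from math import gcd

def necklaces(n, k):
    # Burnside: number of orbits = average number of colorings
    # fixed by a group element; rotation by i fixes k**gcd(i, n) colorings.
    return sum(k ** gcd(i, n) for i in range(n)) // n

print(necklaces(6, 2))  # 14 distinct 2-colored necklaces with 6 beads
```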
Well, I only had continuous groups in mind but this is actually very neat. – Marek Feb 22 2011 at 19:31
Zimmer's program is about continuous (or differentiable) actions of groups on manifolds. Roughly, it predicts that a lattice in a rank $r$ semi-simple Lie group cannot act non-trivially on a manifold of dimension $< r$. This result is known for the circle (see "Actions de réseaux sur le cercle" by Étienne Ghys, Inventiones 99) but, to my knowledge, is still open even for surfaces.
More generally, many geometric, topological and dynamical problems are about group actions.
I highly recommend David Fisher's recent survey on the Zimmer program : arxiv.org/abs/0809.4849. I also recommend Zimmer-Morris's older survey people.uleth.ca/~dave.morris/books/…. – Andy Putman Feb 23 2011 at 15:30
In addition to what has been said already: I think that everywhere in Mathematics when you speak of symmetries you mean "group plus action" and not just the group itself.
Thoughts about symmetry are probably of a geometric nature: asking for symmetry means asking for the symmetry of a geometric object (we already have the examples of Riemannian manifolds, but there are many more). In differential geometry you can ask for "symmetries" of all kinds of structures: metrics, but also symplectic forms or Poisson tensors. In this case you enter the realm of dynamical systems with symmetries. The symmetries usually help to simplify the dynamical system by using "conserved quantities" to eliminate degrees of freedom. You may remember this from your first mechanics courses when dealing with the Kepler problem...
But symmetries in crystals give yet another example, not related to Lie groups or to some action inherited from a linear action: treating a crystal as an abstract lattice with colored edges and vertices, one may well ask for its symmetries and arrive at discrete groups acting in a much more combinatorial way. The original possibility that the lattice can be embedded into some Euclidean space is no longer relevant.
In addition, symmetries arise in much more abstract concepts that these geometric ones. A prominent example is perhaps the question of solving polynomial equations. Here the symmetries of the polynomial might allow for general formulas or not. This is the beginning of Galois theory in field theory, where not Lie groups but discrete groups are acting.
From my own field, a statement which I would like to understand better: the Grothendieck-Teichmueller group acts on the set of Drinfeld associators. Not a linear action at all :(
On the other hand: One reason why linear actions are so omnipresent is perhaps that (besides being the simplest type of action) all types of geometric actions dualize to a linear action on the spaces of reasonable functions on the geometric spaces. Hence even a group action on some geometric object (manifold, lattice, ...) can be studied by means of representation theory when one looks at the induced action (via pull-back) on the functions on it. However, this is typically quite complicated, as the representation spaces are typically infinite-dimensional.
I am quite familiar with all the stuff you've said (except for those Drinfeld associators, those left me baffled) but it's good to be reminded of it. Also, the last remark is particularly valid, I think. I am only acquainted with Peter-Weyl theorem for compact groups but I appreciate the idea that one can get a lot just by looking at the regular representation, or group algebra or some similar canonically associated object. – Marek Feb 22 2011 at 20:07
Another important example is given by groups acting on graphs, especially (but not only) in finite group theory. Quite a few of the sporadic finite simple groups have actually been discovered as automorphism groups of graphs, e.g. the Hall-Janko group $J_2$ or the Higman-Sims group $HS$.
A related class of examples with more "structure" are so-called incidence geometries (combinatorial objects with geometric structure), and the most prominent example of those are the (Tits) buildings, introduced by Jacques Tits in the early 70's. (In fact, the example of Bruhat-Tits trees mentioned by Qiaochu Yuan is a very specific example of this situation; these are buildings of type $\tilde A_1$.)
In number theory there are countless examples. Off the top of my head, here are three:
1. The group SL$_2({\mathbb Z})$ acts on the space of binary quadratic forms $ax^2 + bxy + cy^2$. The set of equivalence classes can be shown to form a group called the class group (a small computational sketch of this action appears after the list). By the way, the same group SL$_2$, even with coefficients from the reals, acts on the solution space of the Riccati equation $y' = a(x) + b(x)y + c(x)y^2$ and can be used to reduce the equation to some kind of "normal form".
2. The group of rational points on an elliptic curve $E$ acts on certain curves of genus $1$ and makes them into principal homogeneous spaces. These are a most important tool for studying the group of rational points on $E$.
3. The units of a quadratic number field act on the generators of principal ideals. In more archaic terms: solutions of the Pell equation $x^2 - dy^2 = 1$ act on the representations of a number $n$ as $n = x^2 - dy^2$. This kind of investigation leads to Dirichlet's class number formula, which is related to example 1.
4. Galois groups tend to act on almost everything, but on the other hand have a habit of leading to representations and so belong to the list of examples you're less interested in.
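Here is the promised sketch of the action in example 1 (a minimal illustration; the substitution convention $(x,y) \mapsto (px+qy,\, rx+sy)$ used below is one of several conventions in use). The invariant of the action is the discriminant $b^2 - 4ac$.

```python
def act(M, form):
    # (a, b, c) stands for a*x^2 + b*x*y + c*y^2; M = ((p, q), (r, s)) in SL_2(Z).
    (p, q), (r, s) = M
    a, b, c = form
    return (a*p*p + b*p*r + c*r*r,
            2*a*p*q + b*(p*s + q*r) + 2*c*q*r,
            a*q*q + b*q*s + c*s*s)

def disc(f):
    a, b, c = f
    return b*b - 4*a*c

f = (1, 1, 6)                    # discriminant -23
g = act(((1, 1), (0, 1)), f)     # substitute x -> x + y
print(g, disc(f), disc(g))       # (1, 3, 8) -23 -23
```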
For a number theorist... you count very badly! :D – Mariano Suárez-Alvarez Dec 13 2011 at 15:46
Groups can act on categories in ways that may be relevant to some physicists.
1. One may consider a group acting on the derived category of coherent sheaves (also called the category of B-branes) of a complex manifold by exact autoequivalences. I think that if the manifold is an elliptic curve, the exact automorphism group contains the braid group $B_3$, which is substantially larger than the group of geometric automorphisms - there is an explanation in Polishchuk's book on Abelian varieties. I guess on the A side you can look for $A_\infty$-equivalences of Fukaya categories, but I don't know anything about that.
2. In the geometric local Langlands program, a loop group $G((t))$ acts on categories of $\mathfrak{g}((t))$-modules attached to opers, where $G$ is a linear algebraic group. (An oper is a kind of $G$-connection on a curve with some extra structure - see E. Frenkel's book Langlands for loop groups).
3. More concretely, if a group acts by automorphisms on an algebra $A$ over the complex numbers, then it also acts on the category of $A$-modules. By Schur's lemma, an irreducible $A$-module then inherits an action of a central extension of its objectwise stabilizer.
4. A manifestation of the previous example that is close to my heart is the case when the monster simple group acts by automorphisms on the monster vertex algebra (which isn't quite an algebra, but the same idea applies), and hence on the categories of twisted modules. We naturally get projective actions of large finite groups on irreducible twisted modules.
One has to be a little careful about what one means by an action of a group on a category, since there is the question of whether the composition of functors $F_g \circ F_h$ is equal to $F_{gh}$ or just naturally isomorphic, and whether associativity holds on the nose or up to some other system of isomorphisms that satisfies a pentagon identity. The things that naturally act on categories are called 2-groups, and groups that we see acting are a sort of "shadow" or truncation of them.
Minor correction: The categories in number 2 are made out of projective $\mathfrak{g}((t))$-modules at the critical level. By "critical level", one means the central line of the central extension acts in a way that gives a completion of the enveloping algebra a very large center (isomorphic to the coordinate ring of a space of opers). – S. Carnahan♦ Feb 23 2011 at 16:46
There are interesting actions, called affine isometric actions, that are related to orthogonal representations (roughly they are "perturbations" of orthogonal representations by certain cocycles into the representation space), and arise in an essential way in the study of Kazhdan's property (T).
This is treated in depth in the following fantastic book on Property (T):
http://perso.univ-rennes1.fr/bachir.bekka/KazhdanTotal.pdf
http://www.physicsforums.com/showthread.php?s=849006b8d6178ebb8e112a46d6f2da69&p=4280127 | Physics Forums
## Laplace’s equation inside a semi-infinite strip
Hello everyone,
can anyone help me with solving Laplace's equation inside a semi-infinite strip?
Are there specific steps to follow? I'm going to give an example, and I would be really grateful
if someone explained it to me.
Solve Laplace's equation ∇²u = 0 inside a semi-infinite strip (0 < x < ∞, 0 < y < H) with the following boundary conditions:
u(x,0) = 0, u(x,H) = 0, u(0,y) = f(y).
I missed the classes and I feel lost.
Quote by Hio: Solve Laplace's equation ∇²u = 0 inside a semi-infinite strip (0 < x < ∞, 0 < y < H) with boundary conditions u(x,0) = 0, u(x,H) = 0, u(0,y) = f(y).
You will want to solve this using separation of variables. The idea is to take a linear combination of solutions of Laplace's equation of the form $X(x)Y(y)$. Then
[tex]
\nabla^2 (XY) = X''(x)Y(y) + X(x)Y''(y) = 0
[/tex]
so that
[tex]
\frac{X''}{X} + \frac{Y''}{Y} = 0.
[/tex]
The first term on the left is a function only of x and the second a function only of y. The only way this equation can hold for all x and y is if each term is constant. Hence
[tex]X'' = CX \\
Y'' = -CY
[/tex]
for some real constant $C$ (known as a separation constant). The values of $C$ we need to take depend on the boundary conditions, which are:
[tex]
X(0) = 1,\qquad \lim_{x \to \infty} X(x) = 0 \\
Y(0) = Y(h) = 0
[/tex]
with $Y(y)$ not identically zero (actually all that's required is $X(0) \neq 0$, but it is convenient to specify $X(0) = 1$).
The easiest boundary condition to satisfy is that $X(x) \to 0$ as $x \to \infty$. We must have $X(x) = e^{-kx}$ for some $k > 0$. This means that $C = k^2$ so that
[tex]
Y'' = -k^2 Y
[/tex]
subject to $Y(0) = Y(h) = 0$ but with $Y(y)$ not identically zero. That can be done if we take $k = (n\pi)/h$ for some positive integer $n$ with
[tex]
Y(y) = B\sin \left(\frac{n\pi y}{h}\right)
[/tex]
where the constant $B$ cannot be determined from the boundary conditions on $Y$. But given the next stage of the solution we may as well take $B= 1$.
Putting this together, we have, for each positive integer $n$, an eigenfunction
[tex]
X_n(x) Y_n(y) = \exp\left(-\frac{n\pi x}{h}\right) \sin\left(\frac{n\pi y}{h}\right)
[/tex]
and the natural thing to do is to take a linear combination of these,
[tex]
u(x,y) = \sum_{n=1}^{\infty} a_n \exp\left(-\frac{n\pi x}{h}\right) \sin\left(\frac{n\pi y}{h}\right),[/tex]
and choose the coefficients $a_n$ to satisfy the boundary condition $u(0,y) = f(y)$. We then have
[tex]f(y) = u(0,y) = \sum_{n=1}^{\infty} a_n \sin\left(\frac{n\pi y}{h}\right)
[/tex]
which is the fourier sine series for $f(y)$ on the interval $0 \leq y \leq h$. Thus
[tex]
a_n = \frac{2}{h} \int_0^h f(y) \sin\left(\frac{n\pi y}{h}\right)\,\mathrm{d}y.[/tex]
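If you want to check this numerically, here is a minimal sketch in Python (the strip height is written $h$ as above, taking $h = 1$ for concreteness, and the boundary data $f(y) = y(h-y)$ is just a sample choice):

```python
import numpy as np

h = 1.0
f = lambda y: y * (h - y)          # sample boundary data at x = 0

y = np.linspace(0.0, h, 2001)
dy = y[1] - y[0]

def a(n):
    # a_n = (2/h) * integral_0^h f(y) sin(n pi y / h) dy  (simple Riemann sum)
    return (2.0 / h) * np.sum(f(y) * np.sin(n * np.pi * y / h)) * dy

def u(x, yy, terms=60):
    return sum(a(n) * np.exp(-n * np.pi * x / h) * np.sin(n * np.pi * yy / h)
               for n in range(1, terms + 1))

print(u(0.0, 0.25), f(0.25))       # series at x = 0 should reproduce f(0.25)
print(u(5.0, 0.25))                # decays toward 0 far down the strip
```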
http://www.physicsforums.com/showthread.php?t=233286 | Physics Forums
## Another ant riddle
Let me come back with a totally different ant riddle
I hope this one has not been posted already. As far as I know, it should be attributed originally to Martin Gardner.
This time, let us put a (point-like) ant at one end of a (mathematically idealized) one-meter-long elastic band. The ant walks at a constant speed of 1 cm/s towards the other end of the band. Every second, the elastic band is stretched one meter longer.
Will the ant reach the other end?
Does the midpoint remain fixed in position relatively to the ground?
Really good question. I'd say the elastic band would eventually reach its elastic limit and burst, killing the ant in the process and giving the observer a nasty snap. So the ant wouldn't reach the end... alive, anyway. That, or: I think no. A centimeter is 1/100th of a meter. Assuming the band is fixed at one end and stretched from the other, each section stretches relative to its position from the fixed end. This would allow the ant to make it only to the 1/100th mark at any given time, which is not even halfway there.
Quote by humanino: This time, let us put a (point-like) ant at one end of a (mathematically idealized) one-meter-long elastic band. The ant walks at a constant speed of 1 cm/s towards the other end. Every second, the elastic band is stretched one meter longer. Will the ant reach the other end?
This is poorly worded. The ant's progress is stated to be at a constant speed, but the stretching is not. So for instance, it may be that the band is stretched by 1 meter nearly instantaneously at the end of each second. Also, we may interpret the words '1 cm/s towards the other end of the band' to mean that each second brings the ant 1 cm closer to the end point regardless of the stretching, that is, 1 meter and 1 cm per second, and so will finish the course in 100 seconds.
I think what he means is that the band is stretched uniformly to its new length, and the "fractional position" (the new position of the ant over the new length of the band) remains the same before and after stretching, provided the ant hasn't moved during the stretching.

So suppose the ant is at 0.01 m of the band (after having moved 1 cm). The band is then stretched by 1 metre uniformly, i.e. every single part of the band expands. The new length of the band is 2 m. The position of the ant after it has been stretched once is $$\frac{1}{100} \times (100+100)$$ (in centimetres).

This may be generalised as follows. Let $$L_{k}$$ be the length of the band at any one time and $$x_{k}$$ the position of the ant at any one time. Then $$L_{k+1} = L_{k} + 100$$ and $$x_{k+1} = \frac{x_{k}}{L_{k}}(L_{k+1}) + 1.$$ The +1 term for $$x_{k}$$ is because the ant moves 1 cm each second.

OK, so I was thinking of writing it out as a matrix and then seeing whether it's diagonalisable. But the expression for $$x_{k+1}$$ isn't linear; it has $$L_{k}$$ as its denominator. Ugh, I'm stuck. There also appears to be a problem with my formulation of the length and position of the band as given above: given that the band is stretched continuously and uniformly while the ant is walking, the recurrence relation should be continuous rather than discrete as I have given it. But I don't see how to do it that way...
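The recurrence above is easy to iterate numerically. A caveat: with the original numbers the loop would need on the order of $e^{100}$ iterations, so the toy sketch below speeds the ant up to 20 cm/s; the point is only that the fractional position gains roughly $20/(100k)$ during second $k$, a harmonic series, so the loop terminates.

```python
# Toy version of the recurrence (lengths in cm); the ant walks 20 cm per
# second instead of 1 cm so that the simulation finishes quickly.
L, x, k = 100.0, 0.0, 0
while x < L:
    x += 20.0                # the ant walks
    x *= (L + 100.0) / L     # uniform stretch preserves the fractional position
    L += 100.0               # the band grows by one metre
    k += 1
print(k)                     # finite; with 1 cm/s it would be ~ e^100 seconds
```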
I retract my answer. The ant will reach the end. I can't put it on formal grounds yet; something along the lines of its always making progress. I sort of visualized it; I'll try to formalize this tomorrow.
Quote by Werg22 Does the midpoint remain fixed in position relatively to the ground?
Not necessarily. You may suppose it does if that helps you. The problem of the ant walking on an elastic band is very similar to the conceptual problems one encounters when first dealing with general-relativity-like concepts. The elastic band may be thought of as a space in itself, independent of any embedding.
Quote by jimmysnyder This is poorly worded.
Apologies if this is the case; I certainly cannot do justice to Martin Gardner. Buy his books!
On the other hand, you may want to try to word riddles in French, and I'll tell you how good it is.
The ant's progress is stated to be at a constant speed, but the stretching is not.
If this is the reason you complain about my wording, let me make it clear that this is exactly what I meant.
So for instance, it may be that the band is stretched by 1 meter nearly instantaneously at the end of each second.
Yes, this is math rather than physics, if you will.
Also, we may interpret the words '1 cm/s towards the other end of the band' to mean that each second brings the ant 1 cm closer to the end point regardless of the stretching, that is, 1 meter and 1 cm per second, and so will finish the course in 100 seconds.
No, I do not see any possible confusion: the ant walks at a constant speed of 1 cm/s locally with respect to the elastic band.
Quote by Defennnder I think what he means is that the band is stretched uniformly to its new length and the "fractional position" (the new position of the ant over the new length of the band) remains the same before and after stretching provided the ant hasn't moved during the stretching.
Very good observation !
Quote by jimmysnyder So for instance, it may be that the band is stretched by 1 meter nearly instantaneously at the end of each second.
This is physical, I put the word 'nearly' in there for a reason. I have googled the answer so I have no more to say about this puzzle. However, you have some nerve distinguishing between what is and what is not physical in my post.
Quote by jimmysnyder This is physical, I put the word 'nearly' in there for a reason. I have googled the answer so I have no more to say about this puzzle. However, you have some nerve distinguishing between what is and what is not physical in my post.
Every time you have posted about the riddles I cared to share with people on PF (for the simple reason that I love riddles and enjoy them), you have either thrown out a solution without explaining it (which is not very fair or interesting; anybody can google the solutions to 99.99% of the riddles posted here anyway, and what is interesting is how you got your own), or you have posted negative comments. Thank you.
I get e^100 steps before he hits the other side, thanks to Mathematica, some fitting, and some simple numerical methods. Edit: I think I'm off by a factor somewhere, but at least I got the idea.
Quote by humanino what is interesting is how you got your own
I definitely have difficulties with English. At this point, I really meant that I am not interested in the solution but in how to get it. I never meant to imply anybody would google the problem and throw out the solution here later, which would be completely pointless.
I publicly apologize to Jimmysnyder for my own difficulties of communication, and want to state that I know he is an excellent puzzle-solver on this forum. I really meant that I am frustrated if somebody gives me a solution without explaining how he got it, especially if I think this person really has an alternative solution to the one I know.
Quote by K.J.Healey I get e^100 steps before he hits the other side. Thanks to mathematica, some fitting, and soem simple numerical methods. edit: i think im off by a factor somewhere, but at least i got the idea.
How did you arrive at that answer?
This is a relatively straightforward analogue of a simple GR sort of problem. To solve it, it's convenient to put down a coordinate system on the band that stretches with it. That way, we can always uniquely specify a point, without having to know at what time we're looking. So, let x run from 0 (the point where the ant starts) to 1 (the point the ant is trying to get to).

At the initial time, x=0 and x=1 are separated by a distance of 1 m. To keep this independent of units, for the moment, call the initial length of the band $$l_0$$. Then, the distance at the initial time from the starting end to a point x is $$l(x)=l_0x$$. The length of the band increases by 1 meter every second; call this rate of stretching s. Then, the length of the band at time t is $$l(t)=l_0+st$$, so the distance from the starting end to point x at time t is $$l(x,t)=(l_0+st)x$$. At this time, the distance from point x to point x+dx will be $$dl=(l_0+st)dx$$.

Now, if the ant moves from x to x+dx in a time dt sufficiently short that the stretching is negligible, we get the relation between physical and coordinate speeds: $$\frac{dl}{dt} = (l_0 + st)\frac{dx}{dt}$$

We're given that the ant's physical speed is constant; call it v. So, the ant's coordinate speed at time t must be: $$\frac{dx}{dt} = \frac{v}{l_0 + st}$$

This can be integrated rather simply and, given that the ant is defined to be at x=0 at t=0, we find that $$x(t)=\frac{v}{s} \ln \left(1 + \frac{st}{l_0}\right)$$

Since ln(x) has no upper limit, the ant must eventually reach x=1. We can solve for the time at which this will happen: $$t_1 = \frac{l_0}{s} \left(e^{\frac{s}{v}} - 1 \right)$$

Plugging in numbers, this gives $$t_1 = e^{100} - 1\ \mathrm{s}$$.
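A crude forward-Euler integration of the coordinate-speed equation above confirms the closed form. This is a sketch with a sped-up ant (v = 0.2 m/s rather than 0.01 m/s), so that the predicted crossing time $(l_0/s)(e^{s/v}-1) = e^5 - 1 \approx 147.4$ s is small enough to reach:

```python
import math

l0, s, v = 1.0, 1.0, 0.2          # initial length, stretch rate, ant speed
t, x, dt = 0.0, 0.0, 1e-4
while x < 1.0:
    x += (v / (l0 + s * t)) * dt  # dx/dt = v / (l0 + s*t)
    t += dt
print(t, l0 / s * (math.exp(s / v) - 1.0))   # both close to 147.4
```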
Quote by Parlyne Now, we're given that the ant's physical speed is constant - call it v. So, the ant's coordinate speed at time t must be: $$\frac{dx}{dt} = \frac{v}{l_0 + st}$$
I don't understand this part. Your formulation assumes that x=0 is always the start and x=1 is always the end. But where did this expression come from?
Quote by Defennnder I don't understand this part. Your formulation assumes that x=0 is always the start and x=1 is always the end. But where did this expression come from?
This just comes from recognizing that $$\frac{dl}{dt}$$ is the physical velocity relative to the elastic band, which is given to be constant, and which I've called v for convenience. I then use the expression above to solve for the coordinate velocity.
Took me some time, but I think I got it, thanks.
This is basically Zeno's paradox with an ant and an elastic band: before the ant can reach the end he will have to reach the middle, but before he reaches the middle he has to reach the 1/4 point, and he will never reach any of those points because the elastic is constantly stretching at a speed greater than the ant's.
http://mathoverflow.net/questions/31464/can-i-derive-the-boltzmann-distribution-by-an-invariance-argument/31469 | ## Can I derive the Boltzmann distribution by an invariance argument?
In statistical mechanics, the Boltzmann distribution gives the probability of a system being in state $i$ as
$$\displaystyle \frac{e^{- \beta E_i}}{\sum_i e^{-\beta E_i}}$$
where $E_i$ is the energy of state $i$. I have generally seen this demonstrated, starting with some reasonable physical assumptions, via a heat bath argument (as exposited e.g. by Terence Tao) involving interactions between the system and a larger external system. For me, an unsatisfying aspect of the heat bath argument is that it doesn't give me a strong reason to expect that a fundamental function like the exponential should appear at the end.
Here is what I think could be an argument which accomplishes that. By inspection, the Boltzmann distribution only depends on the relative energies of the different states. Under some mild assumptions this actually characterizes the Boltzmann distribution. Let us suppose there is a non-negative function $f(E)$ such that WLOG $f(0) = 1$ and such that the probability of a system being in state $i$ is
$$\displaystyle \frac{f(E_i)}{\sum_i f(E_i)}.$$
Let us suppose that the system has two states. Then the statement that the Boltzmann distribution only depends on the relative energies turns out to be equivalent to the functional equation $f(x + y) = f(x) f(y)$, which under any kind of continuity assumption whatsoever gives $f(x) = e^{ax}$ for some constant $a$.
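For completeness, here is the standard continuity argument, sketched, that the functional equation forces an exponential. From $f(x+y) = f(x)f(y)$ and $f(0) = 1$ we get

$$f(x) = f(x/2)^2 \geq 0 \qquad \text{and} \qquad f(x)f(-x) = f(0) = 1,$$

so $f$ is strictly positive and $g := \log f$ satisfies Cauchy's equation $g(x+y) = g(x) + g(y)$. Additivity gives $g(qx) = q\,g(x)$ for rational $q$, and continuity extends this to all real scalars, so $g(x) = g(1)\,x =: ax$ and hence $f(x) = e^{ax}$.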
Question 1: How can this argument be fleshed out? In particular, what physical principle would suggest that the Boltzmann distribution only depends on the relative energies of the states? (I seem to recall from my high-school physics lessons that energies are only well-defined up to an additive constant, but I would really appreciate some clarification on this issue.)
Question 2: How does this argument relate to the heat bath argument or the combinatorial argument given, for example, at Wikipedia?
(Motivation: some important functions in mathematics, like the Jones polynomial and various zeta functions, can be interpreted as partition functions of certain statistical-mechanical systems, and I am trying to sharpen my physical intuition about these constructions.)
Hi Qiaochu, have you read about the variational method before? I liked it. I wrote a post about that question because of a question from a friend of mine. Unfortunately it is written in Portuguese. If you want to take a look at this approach, the link is: leandromat.wordpress.com/2010/07/04/… It is a very basic text and was written with the help of these books: Thermodynamic Formalism - David Ruelle; Entropy, Large Deviations, and Statistical Mechanics - Richard Ellis; Equilibrium States in Ergodic Theory - Gerhard Keller. – Leandro Jul 11 2010 at 21:40
## 5 Answers
Like Andreas, I find a maximum entropy argument to be intellectually appealing. However, he says the solution can be found by Lagrange multipliers and I don't know the justification for using Lagrange multipliers. That is, in the space of all probability distributions on the particles, how do you know the maximum entropy solution is really accessible to variational methods?
For a derivation not using Lagrange multipliers, see the bottom of page 9 through page 11 at http://www.math.uconn.edu/~kconrad/blurbs/analysis/entropypost.pdf.
Hi KConrad, is your question about non-finite state spaces? – Leandro Jul 11 2010 at 22:01
Thanks! That paper was very helpful. Is it correct to say that the dependence on relative energies comes from the mean-energy constraint? – Qiaochu Yuan Jul 11 2010 at 22:24
The question is in part about the non-finite case, but even in the finite case how do you know in advance that the max. entropy distr. does not lie on the boundary of the convex space of prob. distr., where one of the particle probabilities is 0? You need to rule out the answer being located there to know that the answer by variational methods is max. over all the possibilities. (In fact the max. entropy distr. is on the boundary for a finite state space if you want the avg. energy to be min or max of the $E_i$'s, so there is something to show in the other "non-degenerate" situations.) – KConrad Jul 11 2010 at 22:32
Qiaochu, yes if you read Theorem 4.9 you will see there is a mean-energy constraint $\sum q_jE_j = \langle E\rangle$. That is the only condition imposed, along with the necessary condition that your choice of $\langle E\rangle$ has to lie in the closed interval between the min and max of the $E_i$'s. (If you want $\langle E\rangle$ to be outside that range then of course there's no answer.) – KConrad Jul 11 2010 at 22:36
Since I asked a question in my answer about why the variational method is justifiable even though the space of prob. distributions has a boundary, I should add that the variational method does have the virtue of telling us what form the answer ought to be! A downside to the nonvariational proof in the link I give is that it doesn't explain where the family of Boltzmann distr. comes from. I see two parts: (1) variational methods tell us what kind of answer to expect and (2) we then need a proof taking the whole space, incl. the boundary, into account. I don't know how to do (2) variationally. – KConrad Jul 11 2010 at 22:49
I sketched this here: http://blog.eqnets.com/2009/09/09/the-fundamental-law-of-statistical-physics/
Thanks, Steve. I don't think I have a clear understanding of why energy is only defined up to an additive constant. Do you know anywhere this issue is clarified? – Qiaochu Yuan Jul 12 2010 at 1:02
The equations of motion are always invariant under the transformation $U \mapsto U + const$ of any potential. This is a fancy way of talking about the work-energy theorem. – Steve Huntsman Jul 12 2010 at 1:22
BTW, I always thought it was funny that Feynman (not to mention anyone else) never did this, especially given his observation about this invariance in his statistical physics lectures. See the footnote on page 3: books.google.com/… – Steve Huntsman Jul 12 2010 at 1:44
I wondered the same thing when I read that footnote, actually. (It's mildly annoying that Feynman didn't state a continuity hypothesis - I guess he didn't know about pathological solutions to the Cauchy functional equation.) – Qiaochu Yuan Jul 12 2010 at 1:56
Even if Feynman had known about them, he probably wouldn't have mentioned them—not really his style (even compared to other physicists) to let mathematical pathologies derail physical reasoning. – Steve Huntsman Jul 12 2010 at 2:34
For me the clearest derivation of the Boltzmann distribution is by maximizing the entropy $-\sum_i n_i \ln(n_i)$ under the constraints of constant total energy $\sum_i n_i E_i = \text{const.}$ and constant total particle number $\sum_i n_i = \text{const.}$ The Lagrange multiplier for the first constraint gives $\beta$. You can immediately see that a shift of the energies does not change the distribution.
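The shift invariance mentioned at the end is also easy to check numerically; a minimal sketch (subtracting the maximum energy before exponentiating is the standard floating-point trick, and is itself an instance of the invariance):

```python
import numpy as np

def boltzmann(E, beta):
    w = np.exp(-beta * (E - E.max()))   # shift-invariant, numerically stable
    return w / w.sum()

E = np.array([0.0, 1.0, 3.0])
p = boltzmann(E, beta=0.7)
p_shifted = boltzmann(E + 5.0, beta=0.7)
print(np.allclose(p, p_shifted))        # True: only energy differences matter
```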
That shifts the source of my confusion to what the rationale behind the definition of entropy is! – Qiaochu Yuan Jul 11 2010 at 21:34
Qiaochu, see Theorem 5.1 of the link I put in my answer for a justification of the formula for entropy (on finite sample spaces). Section 6 may also be interesting to you in terms of the relation between maximum entropy and invariance. – KConrad Jul 11 2010 at 21:43
This answer is just an expanded version of KConrad's answer. I am posting it here because this argument supports the variational method for finite state spaces and also touches on the observation made by KConrad about a technical issue concerning boundary values in the variational approach.
Proposition: Suppose $\Omega$ is a non-empty finite set and let $\mathcal M$ denote the set of probability measures on $\Omega$. Then $$\sup_{\mu\in\mathcal M} \left[ h(\mu)-\int_{\Omega} U \, d\mu \right]=\log Z,$$ and moreover the supremum is attained for the measure $\mu$ given by $$\mu(\{\omega\})=\frac{1}{Z}e^{-U(\omega)}.$$

Proof: Let $n$ be the cardinality of $\Omega$. Define the function $f:\mathbb R_+^n\to\mathbb R$ by $$f(x_1,\ldots,x_n)=-\sum_{i=1}^n \Big[x_i\log x_i +K_ix_i\Big],$$ where $K_i\in\mathbb R$ for all $i\in\{1,\ldots,n\}$. Consider the function $g:\mathbb R_+^n\to\mathbb R$ given by $$g(x_1,\ldots,x_n)=\sum_{i=1}^n x_i.$$ We fix an enumeration of $\Omega$ and let $K_i=U(\omega_i)$. Then the optimization problem $$\sup_{\mu\in\mathcal M} \left[ h(\mu)-\int_{\Omega} U \, d\mu \right]$$ can be solved by finding a maximum of $f$ restricted to $g^{-1}(1)$.

Note that for any critical point $(x_1,\ldots,x_n)$ of $f$ in $(0,\infty)^n\cap g^{-1}(1)$, it follows from the Lagrange multiplier theorem that $$\nabla f(x_1,\ldots,x_n)=\lambda \nabla g(x_1,\ldots,x_n)$$ for some $\lambda\in\mathbb R$, i.e., $$-(\log x_i +1+K_i)=\lambda, \ \ \ \text{for all}\ i=1,\ldots,n.$$ So for any pair of indices $i,j\in\{1,\ldots,n\}$ we have $$\log x_i +K_i=\log x_j+K_j,$$ and taking exponentials it follows that $$x_ie^{K_i}=x_je^{K_j}.$$ Using that $\sum_{i=1}^nx_i=1$ and the above identities, we have $$x_ie^{-K_i}=\left[1-\sum_{j\in \{1,\ldots,n\}\backslash\{i\}}x_j\right]e^{-K_i}=e^{-K_i}-\sum_{j\in \{1,\ldots,n\}\backslash\{i\}}x_je^{-K_i}.$$ So $$x_ie^{-K_i}=e^{-K_i}-x_i\sum_{j\in \{1,\ldots,n\}\backslash\{i\}}e^{-K_j}.$$ Solving for $x_i$, we see that there is exactly one critical point of $f$ in $(0,\infty)^n\cap g^{-1}(1)$, given by $$x_i=\frac{e^{-K_i}}{\sum_{j=1}^ne^{-K_j}}.$$ The value of $f$ at this point is $$-\sum_{i=1}^n \left[\left(\frac{e^{-K_i}}{\sum_{j=1}^ne^{-K_j}}\right)\log \left(\frac{e^{-K_i}}{\sum_{j=1}^ne^{-K_j}}\right) +K_i\left(\frac{e^{-K_i}}{\sum_{j=1}^ne^{-K_j}}\right)\right] = \log\left(\sum_{j=1}^ne^{-K_j}\right).$$ To see that $(x_1,\ldots,x_n)$ is a local maximum, we can compute the Hessian and check that it is negative definite at this point.
To show that the point is a global maximum, we can compare the value of $f$ at this point with the value of $f$ at any point of the set $\partial (0,\infty)^n\cap g^{-1}(1)$. The restriction of $f$ to this set is given by $$f(x_1,\ldots,x_n)=-\sum_{i\in\{1,\ldots,n\}\backslash I}\Big[x_i\log x_i +K_ix_i\Big],$$ where $I\subset \{1,\ldots,n\}$ is an index set such that $|I|\geq 1$ and $x_i=0$ for all $i\in I$. This defines a function $f_I$ of $n-|I|$ variables. Its maximum can be determined in the same way, and therefore the maximum of $f_I$ is $$\log\left(\sum_{j\in\{1,\ldots,n\}\backslash I}e^{-K_j}\right),$$ which is less than $$\log\left(\sum_{j=1}^ne^{-K_j}\right).$$ Repeating this argument at most $n$ times, we conclude that the maximum of $f$ restricted to $g^{-1}(1)\cap \mathbb R^n_+$ is not attained on the boundary.
Thanks for posting this with the discussion of the boundary case. If the set $\Omega$ is countably infinite, is this method still complete? I don't know about justifications of Lagrange multipliers in that situation. – KConrad Jul 12 2010 at 1:36
@KConrad, you are welcome. About your question, unfortunately I do not know the answer. – Leandro Jul 12 2010 at 3:26
I'd like to chime in here, as someone with a physics background.
I absolutely love the derivation given by Landau in volume 5 on statistical physics, chapter 1. The basic idea is that since the log of the probability distribution function (i.e. the entropy) is an additive constant of the motion, it can be expressed as a linear combination of the 7 fundamental additive constants of the motion, namely the three components of momentum, the three components of angular momentum, and the energy. But since the momentum/angular momentum components can be reduced to zero with an appropriate frame of reference, the log of the distribution function depends only on some multiple of the energy, which turns out to be 1/T. We obtain the partition function naturally by normalizing the probability distribution.
I think this answers your question 1 from a physics point of view.
EDIT:
In view of the comments below, I should point out that the probability distribution I am referring to gives the probability of finding a system of $N$ particles, obeying the laws of classical mechanics, in the state for which the $n$th particle is at position $r_n$ and moving with velocity $v_n$.
I think this argument uses too many properties specific to systems of particles. Keith Conrad's answer shows that the underlying principles here are information-theoretic in nature and don't depend on the specific details of the physical system. – Qiaochu Yuan Jul 12 2010 at 2:07
To add to Qiaochu's comment, the physicist Edwin Jaynes (sorry, I don't know how well-known he is in physics, so maybe this looks as dumb as speaking of "the mathematician Frobenius"?) promoted the information-theoretic approach to explaining the Boltzmann distribution. See bayes.wustl.edu/etj/articles/theory.1.pdf. – KConrad Jul 12 2010 at 3:57
Re: Jaynes: blog.eqnets.com/2009/09/21/… – Steve Huntsman Jul 12 2010 at 5:12
If you would forgive me for protesting your comment Qiaochu, then I would say that I would not know how to state a "physical principle" to show "that the Boltzmann distribution only depends on the relative energies of the states" without a reference to the energy of particles. The information theoretic approach is useful for quantum mechanics, but,in my opinion, if we want a clear picture in our head of why the Boltzmann distribution is related to the relative energy of states, we must resort to an analogy with the classical mechanics of systems of extremely large numbers of particles. – Matt Jul 12 2010 at 5:51
@Matt—see my answer for a derivation that does not rely on any of that stuff. – Steve Huntsman Jul 12 2010 at 6:14
http://mathoverflow.net/questions/44397/what-is-a-good-example-of-a-complete-but-not-model-complete-theory-and-why | ## What is a good example of a complete but not model-complete theory, and why?
The standard examples of complete but not model-complete theories seem to be:
- Dense linear orders with endpoints.
- The full theory $\mathrm{Th}(\mathcal{M})$ of $\mathcal{M}$, where $\mathcal{M} = (\mathbb{N}, >)$ is the structure of natural numbers equipped with the relation $>$ (and nothing else, i.e. no addition etc).
Can anyone explain or give a reference showing why either of these two theories is not model-complete, or give another example altogether of a complete but not model-complete theory (with explanation)?
## 1 Answer
For the second example, let $M$ be the natural numbers and let $N$ be the integers greater than or equal to $-1$. Then $M$ is a substructure of $N$, but $M\models$ "0 is the least element", while this is false in $N$. Thus the theory is not model complete.
The first example is similar: let $M$ be $[0,1]$ and let $N$ be $[-1,1]$. Again $M\models$ "0 is the least element", but the extension $N$ does not.
One equivalent formulation of model completeness is that every formula is equivalent to an existential formula. So theories like true arithmetic, the theory of the natural numbers in the language {$+,\cdot,0,1$}, are far from model complete.
http://unapologetic.wordpress.com/2010/06/11/using-the-dominated-convergence-theorem/?like=1&source=post_flair&_wpnonce=c1c9c39881 | # The Unapologetic Mathematician
## Using the Dominated Convergence Theorem
Now that we’ve established Lebesgue’s dominated convergence theorem, we can put it to good use.
If $f$ is a measurable function and $g$ is an integrable function so that $\lvert f\rvert\leq g$ a.e., then $f$ is integrable. Indeed, we can break any function into positive and negative parts $f^+$ and $f^-$, which themselves must satisfy $f^\pm\leq g$ a.e., and which are both nonnegative. So if we can establish the proposition for nonnegative functions the general case will follow.
If $f$ is simple, then it has to be integrable or else $g$ couldn’t be. In general, there is an increasing sequence $\{f_n\}$ of nonnegative simple functions converging pointwise to $f$. Each of the $f_n$ is itself less than $g$, and thus is integrable; we find ourselves with a sequence of integrable functions, dominated by the integrable function $g$, converging pointwise to $f$. The dominated convergence theorem then tells us that $f$ is integrable.
Next, a measurable function $f$ is integrable if and only if its absolute value $\lvert f\rvert$ is integrable. In fact, we already know that $f$ being integrable implies that $\lvert f\rvert$ is, but we can now go the other way. But then $f$ is a measurable function and $\lvert f\rvert$ an integrable function with $\lvert f\rvert\leq\lvert f\rvert$ everywhere. The previous result implies that $f$ is integrable.
If $f$ is integrable and $g$ is an essentially bounded measurable function, then the product $fg$ is integrable. Indeed, if $\lvert g\rvert\leq c$ a.e., then $\lvert fg\rvert\leq c\lvert f\rvert$ a.e. as well. Since $c\lvert f\rvert$ is integrable, our first result tells us that $\lvert fg\rvert$ is integrable, and our second result then tells us that $fg$ is integrable.
Finally, for today, if $f$ is an essentially bounded measurable function and $E$ is a measurable set of finite measure, then $f$ is integrable over $E$. Since $E$ has finite measure, its characteristic function $\chi_E$ is integrable. Then our previous result tells us that $f\chi_E$ is integrable, which is what it means for $f$ to be integrable over $E$.
Posted by John Armstrong | Analysis, Measure Theory
http://math.stackexchange.com/questions/251665/how-can-i-introduce-complex-numbers-to-precalculus-students/251860 | # How can I introduce complex numbers to precalculus students?
I teach a precalculus course almost every semester, and over these semesters I've found various things that work quite well. For example, when talking about polynomials and rational functions, in particular "zeroes" and "vertical asymptotes", I introduce them as the same thing, only the asymptotes are "points at infinity". This (projective plane) model helps the students in understanding what multiplicity of a zero/asymptote really means. I later use the same projective plane model to show how all conic sections (circle, ellipse, parabola, hyperbola, lines) are related. I get very nice feedback on this, and the students seem to really enjoy it.
I have not, however, been able to find similar motivating examples for introducing complex numbers. I know there must be similar (pictorial!) arguments to engage the students and pique their curiosity, but I haven't found it yet. Simply saying "all polynomials have a zero over the complex numbers" doesn't really do it for them (again, the more pictures involved, the better).
Are there "neat" and "cool" ways of talking about complex numbers for the first time, that are understandable by beginning precalculus students, but also interesting enough to capture their attention and provoke thought?
-
I cannot for the life of me figure out how to make this community wiki! – Steve D Dec 5 '12 at 16:48
Flag it and request a mod to do it, or just edit it ten times. :) – Alistair Buxton Dec 5 '12 at 17:28
@SteveD: It is now CW. – robjohn♦ Dec 5 '12 at 18:56
I always imagine complex numbers as a plane, only with one axis as the imaginary numbers. I thought it was the only way to do it, really! – Anurag Kalia Dec 5 '12 at 19:08
## 12 Answers
Edan Maor provides an answer on this post, which gives a nice explanation:
A way to solve polynomials
We came up with equations like $x - 5 = 0$, what is $x$?, and the naturals solved them (easily). Then we asked, "wait, what about $x + 5 = 0$?" So we invented negative numbers. Then we asked "wait, what about $2x = 1$?" So we invented rational numbers. Then we asked "wait, what about $x^2 = 2$?" so we invented irrational numbers.
Finally, we asked, "wait, what about $x^2 = -1$?" This is the only question that was left, so we decided to invent the complex numbers, in particular "imaginary" numbers, to solve it. All the other numbers, at some point, didn't exist and didn't seem "real", but now they're fine. Now that we have complex numbers, we can solve every polynomial, so it makes sense that that's the last place to stop.
Pairs of numbers
This explanation goes the route of redefinition. Tell the listener to forget everything he knows about imaginary numbers. You're defining a new number system, only now there are always pairs of numbers. Why? For fun. Then go through explaining how addition/multiplication work. Try and find a good "realistic" use of pairs of numbers (many exist).
Then, show that in this system, $(0,1) * (0,1) = (-1,0)$, in other words, we've defined a new system, under which it makes sense to say that $\sqrt{-1} = i$, when $i=(0,1)$. And that's really all there is to imaginary numbers: a definition of a new number system, which makes sense to use in most places. And under that system, there is an answer to $\sqrt{-1}$.
• Along these lines, see André Nicolas's answer to this post.
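To make the pairs arithmetic concrete, here is a minimal Python sketch (the helper names are mine, not part of the answer):

````
def mul(p, q):
    # the "strange" multiplication rule for pairs: (a,b)(c,d) = (ac - bd, ad + bc)
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)

i = (0, 1)
print(mul(i, i))            # (-1, 0): the pair that plays the role of -1
print(mul((2, 3), (4, 5)))  # (-7, 22), matching (2 + 3i)(4 + 5i)
````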
The historical explanation
Explain the history of the imaginary numbers. Showing that mathematicians also fought against them for a long time helps people understand the mathematical process, i.e., that it's all definitions in the end.
I'm a little rusty, but I think there were certain equations that kept having parts of them which used $\sqrt{-1}$, and the mathematicians kept throwing out the equations since there is no such thing.
Then, one mathematician decided to just "roll with it", and kept working, and found out that all those square roots cancelled each other out.
Amazingly, the answer that was left was the correct answer (he was working on finding roots of polynomials, I think). Which led him to think that there was a valid reason to use $\sqrt{-1}$, even if it took a long time to understand it.
Another idea comes from Byron Schmuland's answer on this Math Overflow thread:
Euclidean Geometry
Use complex numbers to explain Ptolemy's Theorem. For a cyclic quadrilateral with vertices $A,B,C,D$ we have $$|AC|\cdot|BD| = |AB|\cdot|CD|+|BC|\cdot|AD|$$
• Thanks to André Nicolas, whom I'll quote: that mathematician "was Bombelli, showing that for cubics with three distinct real roots, the Cardano Formula involved square roots of negative numbers." (See André Nicolas's comment below.)
-
I like the (macro-)historical perspective in the section "A way to solve polynomials" :-D – user1551 Dec 5 '12 at 17:44
It was Bombelli, showing that for cubics with three distinct real roots, the Cardano Formula involved square roots of negative numbers. It turns out that one can solve this "real" problem algebraically only by travelling through the complex numbers. – André Nicolas Dec 5 '12 at 18:32
I would say calling them "imaginary" numbers is evil. It throws off the learner and gives them the impression that we are talking about some fairytale world and that these numbers have nothing to do with the actual world. Why would some numbers be imaginary? They are as "real" as R, and are used to solve very real problems – Midhat Dec 6 '12 at 8:19
I really liked the explanation given by the site BetterExplained. Basically, you think of multiplication of real numbers as a geometric transformation.
Take any real number, and represent it as an arrow on the real line, starting from $0$. If you multiply it by a positive number, you change the arrow's length. If you multiply it by $-1$, you make the arrow point the other way. And multiplication by negative numbers other than $-1$ is just a reflection together with a scaling.
Thinking about multiplication this way, if you start with $1$ and multiply twice by $-1$, you reflect twice and therefore you get back to where you started. This makes sense, since $(-1)^2 = 1$. Now, is there an operation that, applied twice to $1$, gives $-1$? It's clear that no real number will do. But what if you escape the real line for a bit and rotate $1$ twice by $90^\circ$? You get $-1$. So now it turns out that the reflections that negative numbers were responsible for were just rotations by $180^\circ$, but we couldn't tell because we were restricted to just a line.
And now we could say that this rotation by $90^\circ$ deserves a name, so let's call it $i$. Of course, a rotation by $90^\circ$ in the other direction will also do, so we just pick one of them and call it $i$, and the other will be $-i$, because they're mirror images of each other.
The last step is realizing that now we have the whole plane to work with. We can use what we know about vectors and say that any arrow starting at $0$ and ending somewhere in the plane can be represented as a linear combination of $1$ and $i$.
Thinking this way, De Moivre's formula becomes a definition, which makes it clear what complex number multiplication really is (absolute values are multiplied while angles are added), because that's how we started! We wanted $i$ to represent a rotation by $90^\circ$, not some abstract solution of $x^2 = -1$.
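To watch the multiply-lengths/add-angles rule in action, here is a quick check with Python's standard `cmath` module:

````
import cmath

z = cmath.rect(2.0, cmath.pi / 6)   # length 2, angle 30 degrees
w = cmath.rect(3.0, cmath.pi / 3)   # length 3, angle 60 degrees
zw = z * w
print(abs(zw), cmath.phase(zw))     # ~6.0 and ~pi/2: lengths multiply, angles add
````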
-
I love this explanation because it is so intuitive, but it always makes me think, hey, it was so easy to escape from the real number line into two dimensions, let's just use that trick again to extend it to three dimensions... afaik it does not work (although 4 or more dimensions are possible), but I've never seen an equally intuitive explanation of why. (But maybe that's a topic for another question.) – Alistair Buxton Dec 5 '12 at 20:27
@AlistairBuxton: Consider which of the following i represents: Half of a reflection along the real line, an axis in 2D space, or a direction of rotation in 2D space? Now, is there any "trick" that will similarly represent half of a complex conjugation, an axis in 3D space, and a direction of rotation in 3D space (recall that you need a scalar component here to get a magnitude of 1!)? – camccann Dec 5 '12 at 22:02
This kinda blew my mind. – JonoRR Dec 6 '12 at 5:25
One of my favorite introductions to the complex numbers is to not define them with respect to the square root of $-1$! In fact, introduce the complex numbers and the operations therein as just an extension of those of the reals that they all know and love. Then, for instance, define addition and how this works in the plane (as vectors). Then, define multiplication, albeit in the necessarily "strange" way, saying that we need to be able to multiply these 2-tuples. Then proceed to blow their mind that actually $(0,1)$ is the square root of $(-1,0)$! It seems that the typical introduction to the complex number system merely by talking about the square root of $-1$ makes students think that complex numbers don't "exist" or are not important because it addresses a seemingly trivial problem.
-
I like this alot, its only deficiency (if there is one!) being there are not many pictures. :) – Steve D Dec 5 '12 at 17:19
@GregL I'm confused by the point you are making. – Jebruho Dec 5 '12 at 17:32
You might get some nice pictures by linking this 'strange' new way of multiplying pairs of 2-vectors to the geometric interpretation of complex multiplication as a rotation and a dilation/contraction. For example, draw some points, lines, contour plots in the $x-y$ plane and then look at the effect under (say) multiplication by a constant $(x,y)\to(x,y)*(a,b)=(ax-by,bx+ay)$. – user12477 Dec 5 '12 at 17:47
Is the ! a factorial, or are you really excited to explain math? :) – alexy13 Dec 6 '12 at 1:07
@alexy13 I'm just really excited to explain math! – Jebruho Dec 6 '12 at 1:08
Note: the following is commonly known and you can find several websites that give the following example.
I have taught complex numbers several times and I do agree with you that it can be hard to motivate. While I agree that pictures often can be helpful in motivating and understanding a problem, I also don't think that one has to be afraid of staying abstract.
I have done this with some success in the past: First recall how we have the formula that solves a quadratic equation. And note how nice it is, that one doesn't even have to think, one just has to remember a formula. Then mention that historically there was an interest in solving the general cubic equation. Without actually showing the general formula (or maybe you would want to do this), mention that at some point people considered this special cubic equation (in fact all cubics can be brought into this form, but if we only allow positive coefficients, since they are nicer, consider just the following): $$x^{3} = px + q,$$ where $p$ and $q$ are positive real numbers. Now the way this was solved was by letting $x = u + v$. Then $$x^3 - px = u^3 + v^3 + 3uv(u + v) - p(u + v) = q.$$ Choosing $u$ and $v$ so that $3uv = p$, the middle terms cancel, and so $$\begin{align} u^3 + v^3 &= q \\ u^3v^3 &= \left(\frac{p}{3}\right)^3. \end{align}$$ So to solve the cubic, one could try to find $u$ and $v$ because $x = u+v$, that is: find two numbers ($u^3$ and $v^3$) whose sum is $q$ and whose product is $(p/3)^3$. That sounds like a simple problem and the solution is (think about this for a moment) $$\begin{align} u &= \sqrt[3]{\frac{1}{2}q + w} \\ v &= \sqrt[3]{\frac{1}{2}q - w} \end{align}$$ where $$w = \sqrt{\left(\frac{1}{2}q\right)^2 - \left(\frac{1}{3}p\right)^3}.$$ Now try to apply this to the equation: $$x^3 = 15x + 4$$ Then $w = \sqrt{-121}$. Oops. I guess you have to stop now since we can't take a square root of a negative number. Hmm... maybe if we just continue ignoring that for a moment (the rebels that we are) we see that we actually get $$x = u + v = \sqrt[3]{2 + \sqrt{-121}} + \sqrt[3]{2 - \sqrt{-121}}$$ Magic happens and we actually get $$x = 4.$$ So the point is (IMO) that ignoring that we can't really take a square root of a negative number lets us find a real solution to a cubic equation. Of course complex numbers have a life outside being used to find real zeros, but at least it shows how some people started thinking about them. Then you can say that now you are going to make it formal what complex numbers are and then continue from there.
Obviously this way of introducing complex numbers will take some time and there is some "hard" algebra involved, but usually I get the feeling that it does get the point across. I like the method because the final example is easy to understand. You can get the calculator involved (or you can do the algebra if you would like) in the last step where you find the value of $x$. There is some history also. Just the basic sub-problem of finding two numbers whose sum is $4$ and whose product is $125$ can be interesting in itself.
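If you do want to get the calculator (or a computer) involved, here is a short check with Python's standard `cmath` module; the principal cube roots happen to be $2\pm i$:

````
import cmath

# Cardano's recipe for x^3 = 15x + 4, pushing straight through sqrt(-121):
w = cmath.sqrt(-121)        # 11i
u = (2 + w) ** (1 / 3)      # principal cube root of 2 + 11i, namely 2 + i
v = (2 - w) ** (1 / 3)      # principal cube root of 2 - 11i, namely 2 - i
print(u + v)                # (4+0j) up to floating-point noise
````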
-
I was never satisfied with the "magic happens" bit because finding the cube root of $2+ \sqrt{-121}$ is algebraically as hard as solving the original equation. Suddenly saying "but this is a cube root of this" is as magical as saying "but $4$ is a solution to the original equation!" So you are reducing hard equations to solving hard cube roots of numbers that aren't even supposed to exist. – mercio Dec 5 '12 at 17:48
@mercio: I think that one can do this example as an introduction or one can do it after having covered complex numbers. In the first case one would indeed have to simplify certain things. So as an introduction one could maybe just show on a calculator how x indeed is $4$. A point being that one never actually sees a complex number. Then later after you have covered complex numbers, you can go back to that "magic moment" and show how this is done by hand. No matter what, hopefully the point gets across that one can abstractly just work with $\sqrt{-1}$ without worrying too much about what it is – Thomas Dec 5 '12 at 17:51
(cont.) and it will actually work as the example above shows. So as an introduction this works well (IMO) because it shows how complex numbers can be used to solve problems that we can understand from just knowing about real numbers. – Thomas Dec 5 '12 at 17:53
I don't have the rep for such a minor edit, but one of your equations is missing an exponent. `u^ + v^3` => `u^3 + v^3` – xan Dec 5 '12 at 20:48
@xan: Thank you for catching that! – Thomas Dec 5 '12 at 22:27
You want something neat and visual? Try showing solutions to $x^5=3$ on the complex plane:
-
Here's a visual way to explain multiplication of complex numbers:
http://www.math.umn.edu/~hardy/1031/handouts/March.3.pdf
-
Not a very mathematical answer (sorry), but if you're looking for something visually interesting to introduce complex numbers, M. C. Escher has got you covered. I won't try to explain it, because I'll do a bad job. I'll let the real mathematicians do that:
edit: see also: http://www.ams.org/notices/200304/fea-escher.pdf
http://www.math.uconn.edu/~kconrad/blurbs/grouptheory/CstarqZ.pdf
-
Unfortunately all the "cool" stuff connected with complex numbers is probably over the students' heads right now.
It's nice that you can find the roots of any polynomial and all, but all the "real action" is elsewhere.
Here is something that may be easy to explain. Because complex numbers are two-dimensional, a function with a complex domain and range has a geometric interpretation: it maps some two-dimensional space to another. So it can represent a geometric mapping, which morphs one space to another. That has not only obvious applications in computer-based visualization, but in problem solving: e.g. translating some problem with boundaries that are some oddly-shaped space into one over a rectangle.
Here is another. Because complex numbers are two-dimensional, a complex-valued function can map each point in a two-dimensional space to a vector. In other words, complex functions can represent 2D fields. Properties of a field, such as being "fluxless" and "irrotational", can be connected to certain properties of the underlying functions which are nicely expressed using complex numbers.
And of course complex numbers have applications in signal processing, because signals exhibit frequency and phase, which give rise to two dimensions. The Fourier Transform uses complex functions, allowing a representation of a signal in the time domain to be taken into the frequency domain, with phase information. The magnitude of the complex number at each frequency tells us the amplitude of the signal at that frequency, and the angle of the complex number with respect to the real axis (the "argument") gives us the phase.
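As a small illustration of that last point, here is a hedged numpy sketch (the signal parameters are made up) recovering a tone's amplitude and phase from a single complex FFT coefficient:

````
import numpy as np

# One pure tone: the FFT bin at its frequency is a single complex number
# whose magnitude gives the amplitude and whose angle gives the phase.
n, f, amp, phase = 256, 5, 1.5, 0.7
t = np.arange(n)
x = amp * np.cos(2 * np.pi * f * t / n + phase)
c = np.fft.fft(x)[f]
print(2 * abs(c) / n, np.angle(c))  # ~1.5 and ~0.7
````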
-
At least in the US, I think most students learn complex numbers before they learn about "fluxless" or "irrotational" vector fields. – Jesse Madnick Dec 6 '12 at 5:55
Yes, well I think I touch on that point. See above: all the "cool" stuff connected with complex numbers is probably over the students' heads right now. However, some of these topics could be introduced in a very superficial way as "applications of complex numbers". – Kaz Dec 6 '12 at 21:59
While targeted at web developers, this presentation is mostly about how to present abstract maths in an easy way for students to understand.
Links to the tools he uses to create the animated graphs and the slides itself are included in the video.
Thought it was pretty cool.
-
This is like Javier and Izkata's answers combined. Go to this site that plots complex function graphs and type in $$f(z)=z^2+1$$ then look at the top view. You will see the solutions are not along the horizontal center line. You could produce several of these graphs and ask students in groups to find out how they relate to the roots of the function.
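If you don't have the site handy, a rough numpy substitute for the "top view" (grid bounds and resolution chosen arbitrarily):

````
import numpy as np

# Where does |f(z)| = |z^2 + 1| vanish? Scan a grid over the complex plane:
# the minima sit at +/- i, off the real axis.
xs = np.linspace(-2, 2, 401)
X, Y = np.meshgrid(xs, xs)
Z = X + 1j * Y
mag = np.abs(Z ** 2 + 1)
print(Z.flat[np.argmin(mag)])  # approximately -1j (or 1j); never a real number
````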
-
You can say some polynomials, e.g. $x^2+1$, don't have a (real) root, and that we would like to be able to solve equations like $x^2+1=0$. So let's add a number, call it $i$, s.t. $i^2+1=0$, i.e. $i^2=-1$.
I would say this is a good motivation, though without advanced knowledge it seems artificial.
-
The problem with this approach (which is how I do it now) is that it is indeed artificial, and students who mostly will not do much more math in their adult lives rarely find time to care about imaginary numbers! There are ways around this (as an example, drawing nice graphs of the parabola $x^2+1$, and showing where its "zeroes" went), but I don't know how to do that in a convincing and interesting way! – Steve D Dec 5 '12 at 17:07
@SteveD: So, where did the zeroes go? – user1551 Dec 5 '12 at 17:29
@user1551: I am talking here of the viewpoint of "complexifying" the curve. This is something I have already thought about introducing (it explains why hyperbolas appear disconnected, while circles do not), but I find it highly nontrivial to do this in an elementary and convincing way. – Steve D Dec 5 '12 at 18:04
@SteveD: I see. You want to echo the OP's projective plane model approach. Thanks. That's interesting. – user1551 Dec 5 '12 at 18:06
@user1551 I would hope so, I am the OP! :) – Steve D Dec 5 '12 at 18:11
Similar to Thomas's answer, there are problems that can be solved without complex numbers, but if you solve for $x$ it is complex.
E.g.: given $x^3+1/x^3=1$, evaluate $x^{999}+1/x^{999}$.
(You can see $x$ is actually complex here, and WolframAlpha also shows the solution here, but I've forgotten the non-complex solution :-( )
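For completeness, one algebra-only route to the value: put $z=x^3$, so $z+1/z=1$, i.e. $z^2-z+1=0$. Multiplying by $z+1$ gives $z^3+1=0$, so $z^3=-1$, and hence $$x^{999}+\frac{1}{x^{999}}=z^{333}+\frac{1}{z^{333}}=(z^3)^{111}+\frac{1}{(z^3)^{111}}=(-1)+(-1)=-2.$$ (Of course $z^2-z+1=0$ has no real roots, which is the point of the example: $x$ itself must be complex.)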
-
http://physics.stackexchange.com/questions/tagged/measurement+electron | # Tagged Questions
2answers
84 views
### Electron mass changes with website
When a particle's mass can be changed by changing the website, how does one calculate with confidence? For example: Google: electron mass = $9.10938188 \times 10^{-31}$ kilograms; Wikipedia: electron mass ...
1answer
293 views
### Measuring the magnitude of the magnetic field of a single electron due to its spin
Is it possible to measure the magnitude of the magnetic field of a single electron due to its spin? The electron's intrinsic magnetic field is not dependent upon the amount of energy it has, does it? ...
3answers
510 views
### Measuring the spin of a single electron
Is it possible to measure the spin of a single electron? What papers have been published on answering this question? Would the measurement require a super sensitive SQUID, Superconductive Quantum ...
http://math.stackexchange.com/questions/142154/is-there-a-language-agnostic-way-to-formularize-if-conditions | # Is there a language-agnostic way to formularize “if”-conditions?
How can you formularize "if"-conditions in a language-agnostic way?
$$x=\begin{cases} 2,&\text{ if } n \le 2 \lor m=3\\ 3,&\text{ if } n \gt 2 \land m=n \end{cases}$$ Of course, everybody in the world will understand the "if", but is there a way to write it completely without using any "real world" language whatsoever?
-
There's no need to use such a word, at least not in your example, AFAICT. – Gigili May 7 '12 at 8:59
It is a common mistake to think that mathematics is written in pure symbols. There is a lot of text around and it just makes things clearer. – Asaf Karagila May 7 '12 at 9:01
If you're going to use "if", you may as well use "and" and "or", and I assure you it looks much better that way. – Zhen Lin May 7 '12 at 9:01
To add on to Asaf's point, symbols are used because sometimes they make things clearer to express, not because they are inherently better than words. For example, it's easier to understand "$y = ax + b$" than to understand "$y$ is the sum of $b$ and the product of $a$ and $x$", but it is easier to understand "$A$ is a nonsingular $3\times3$ matrix" than to understand "$A \in M_{3\times3} \wedge \det A \ne 0$". – Rahul Narain May 7 '12 at 9:13
Asaf has told you about how words are a lot of times preferred in mathematical writing (and personally I'm not at all fond of $\land$ and $\lor$); that being said, look into Iverson brackets. – J. M. May 7 '12 at 9:15
## 2 Answers
If you absolutely insist, you can use Iverson brackets:
$$x=\begin{cases} 2,&\text{ if } n \le 2 \lor m=3\\ 3,&\text{ if } n \gt 2 \land m=n \end{cases}$$
is equivalent to
$$x=2+[n>2][m=n]\;.$$
But the first version is easier to read, and while I don't at all mind $\lor$ and $\land$, $\text{or}$ and $\text{and}$ are preferable for most audiences.
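As an aside, most programming languages realize the Iverson bracket as a boolean coerced to $0$ or $1$, so the bracketed form is a one-liner; a tiny Python illustration (the values of $n$ and $m$ are arbitrary):

````
n, m = 5, 5
x = 2 + (n > 2) * (m == n)  # True/False coerce to 1/0, mirroring the Iverson form
print(x)                    # 3
````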
-
I was taught that "if ... then" is written in the following form:
$$x>0 \Rightarrow y=2x$$
which means if $x$ is greater than 0 then assign $2x$ to $y$.
-
http://mathoverflow.net/questions/43627?sort=votes | ## Can subgradient infer convexity?
Recall the definition: if $f:U\to R$ is a real-valued convex function defined on a convex open set $U$ in the Euclidean space $R^n$, a vector $v$ in that space is called a subgradient of $f$ at a point $x_0\in U$ if for any $x\in U$ one has $f(x)-f(x_0)\geq v\cdot(x-x_0)$.
What if, for a function $f$ (not assumed convex), at every $x_0$ I can find a $v$ such that $f(x)-f(x_0)\geq v\cdot(x-x_0)$ for all $x$; does this show that $f$ is convex?
-
## 1 Answer
Yes. Let $x, y \in U$. Let $z = \lambda x + (1 - \lambda) y$, for $\lambda \in [0,1]$. Let $v_z$ be a subgradient for $z$.
Then $$f(x) \geq f(z) + v_z \cdot (x - z) = f(z) + v_z \cdot \left(x - ( \lambda x + (1 - \lambda) y)\right)$$ $$= f(z) + (1 - \lambda) v_z \cdot (x - y).$$ Similarly, $$f(y) \geq f(z) - \lambda v_z \cdot (x - y).$$
Multiplying the first inequality by $\lambda$ and the second by $1 - \lambda$ and adding the two, we obtain
$$\lambda f(x) + (1 - \lambda)f(y) \geq f(z),$$ proving that $f$ is convex.
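For those who like to stress-test such arguments numerically, here is a quick Python sanity check of the displayed inequalities with the nonsmooth convex function $f=|\cdot|$ on $R$; the subgradient choice $v_z=\operatorname{sign}(z)$ is mine:

````
import random

f = abs
sgn = lambda t: (t > 0) - (t < 0)
for _ in range(10000):
    x, y, lam = random.uniform(-5, 5), random.uniform(-5, 5), random.random()
    z = lam * x + (1 - lam) * y
    v = sgn(z)                                   # a subgradient of abs at z
    assert f(x) >= f(z) + (1 - lam) * v * (x - y) - 1e-9
    assert f(y) >= f(z) - lam * v * (x - y) - 1e-9
    assert lam * f(x) + (1 - lam) * f(y) >= f(z) - 1e-9
````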
-
http://mathoverflow.net/questions/67989/optimizing-directly-on-the-eigenspectrum-of-a-matrix/68007 | ## Optimizing directly on the eigenspectrum of a matrix
I have an application where I want the eigenvalues of a graph to be involved in the objective and constraints in a flexible way (more so than just the nuclear or Frobenius norm). What's a good survey or introductory source for this sort of optimization?
-
## 2 Answers
Naturally, the answer very much depends on the function you'd like to optimize. I recommend looking at:
1. Proposition 4.2.1 in Lectures on Modern Convex Optimization by Ben-Tal and Nemirovski. It describes a large set of eigenvalue optimization problems which can be written as semidefinite programs. Specifically, if $g(x_1,\ldots,x_n)$ is a symmetric function such that the set $t \geq g(x_1,\ldots,x_n)$ has a semidefinite representation, then so does the set $t \geq g(\lambda(X))$, where $\lambda(X)$ is a vector of eigenvalues of a symmetric matrix $X$ (a concrete sketch follows this list).
2. Section 4.2 in the same book, which gives some other examples of functions of eigenvalues that can be written in this way (for example, sums of $k$ largest eigenvalues of a symmetric matrix).
3. On the other hand, these types of problems can quickly become NP-hard. The paper Maximum algebraic connectivity augmentation is NP-hard by Damon Mosk-Aoyama shows that the problem of adding a prespecified number of edges to the graph to maximize the second-smallest eigenvalue of the Laplacian is NP-hard.
4. The papers Eigenvalue Optimization by Lewis and Overton and The Mathematics of Eigenvalue Optimization by Lewis.
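To make item 1 concrete, here is a hedged sketch (it assumes the `cvxpy` package; the graph and all names are illustrative, not taken from the references above). Maximizing the algebraic connectivity $\lambda_2$ of a weighted graph Laplacian is an SDP, because for a Laplacian $L$ the condition $\lambda_2(L)\geq t$ is equivalent to the linear matrix inequality $L \succeq t(I - \mathbf{1}\mathbf{1}^T/n)$:

````
import cvxpy as cp
import numpy as np

# Allocate unit total weight over the edges of a path graph so as to
# maximize lambda_2 of the weighted Laplacian (algebraic connectivity).
n = 4
edges = [(0, 1), (1, 2), (2, 3)]
I, J = np.eye(n), np.ones((n, n))
w = cp.Variable(len(edges), nonneg=True)
t = cp.Variable()
L = 0
for k, (i, j) in enumerate(edges):
    e = I[i] - I[j]                     # incidence vector of edge (i, j)
    L = L + w[k] * np.outer(e, e)
prob = cp.Problem(cp.Maximize(t),
                  [cp.sum(w) == 1, L - t * (I - J / n) >> 0])
prob.solve()
print(t.value, w.value)
````

Note the contrast with item 3: this weighted version is convex, while choosing a prespecified number of unweighted edges to add is the NP-hard problem.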
-
Many (but not all) problems involving the eigenvalues of a graph are convex optimization problems that can be formulated as semidefinite programming problems. There are a number of "tricks" that you need to learn in order to formulate problems as SDP's. Once you've got an SDP, there are a number of software packages that can be used to solve the SDP.
You should check out the SIAM Review paper on semidefinite programming by Vandenberghe and Boyd:
L. Vandenberghe and S. Boyd. Semidefinite Programming. SIAM Review, 38(1): 49-95, March 1996.
http://stanford.edu/~boyd/papers/sdp.html
Boyd and Vandenberghe also have a textbook on convex optimization; you can read the PDF online for free. See
http://www.stanford.edu/~boyd/cvxbook/
Unfortunately, there are lots of eigenvalue optimization problems that cannot be formulated as convex optimization problems. These are much harder (if not practically impossible) to solve.
-
http://math.stackexchange.com/questions/239557/one-simple-question-in-ring-theory | # one simple question in ring theory
Suppose $R$ is a ring (possibly noncommutative), $I$ is a minimal left ideal in it, and $I^2\neq 0$; show that $I=Re$ for some idempotent $e$.
It is easy to show that we can find some $x\in I$ such that $I=Ix$, so $I=Rx=Rx^n=Ix^n$ for all $n\geq 1$, but how does one construct the idempotent $e$ using $x$? I guess I must have failed to realize some key point.
-
## 1 Answer
As $I^2\neq 0$, there is an element $x\in I$ with $Ix\neq 0$, and hence $Ix=I$ by minimality. In particular there is an element $e\in I$ with $ex=x$ and $e\neq 0$; then $e^2x=ex=x$, so $(e^2-e)x=0$. Right multiplication by $x$ maps $I$ onto $Ix=I$, and its kernel is a left ideal contained in $I$ which cannot be all of $I$ (since $Ix\neq 0$), so by minimality the kernel is $0$ and the map is an isomorphism. Hence $e^2-e=0$, i.e. $e^2=e$.
-
Thanks! Now I see the point, further use the minimal condition. – ougao Nov 18 '12 at 2:23
http://mathoverflow.net/questions/121549?sort=votes | ## Complexity of winning strategies for open games (for open player)
If $G\subseteq\omega^{<\omega}$ is a computable clopen game, then $G$ has a winning strategy which is hyperarithmetic $(\Delta^1_1)$, by an inductive ranking process. The key observation here is that the length of this induction is bounded above by the length of the Kleene-Brouwer ordering $G_{KB}$, which is a computable ordinal and hence $<\omega_1^{CK}$, and that each successive stage of the induction can be achieved by one application of the jump operator, so there is a winning strategy with complexity at most $0^{(\vert G_{KB}\vert)}$.
(An annoying subtlety here is that the theory $\Delta^1_1-CA_0$, which amounts to closure under hyperarithmeticity, does not prove determinacy of clopen games, since there are games which are not actually clopen but have no hyperarithmetic witnesses to their ill-foundedness.)
My question is whether a version of this result is also true for open games. Specifically, let $T\subseteq\omega^{<\omega}$ be an open game in which the "Open" player (i.e., the player trying to fall off the tree) has a winning strategy; do they necessarily have a winning strategy hyperarithmetic in $T$?
I'm asking this question because I was looking through my notes from a previous class, and I ran across the assertion that "a similar ranking argument" shows that the answer is 'yes;' however, I can't reconstruct this argument, and I'm wondering whether I (or the lecturer) was simply incorrect; or whether there's a basic argument I'm not seeing.
-
This was proved by Andreas Blass in A. Blass, Complexity of winning strategies, Discrete Math. 3 (1972), 295–300. – Liang Yu Feb 12 at 5:23
The rough idea is if the open player has a winning strategy, then for any node $\sigma$ with an even length in the tree $T$, we may define a partial function $f(\sigma)=\inf_n\sup_m f(\sigma ^{\smallfrown} n ^{\smallfrown}m)$ and ensure $f(\emptyset)$ always exists. – Liang Yu Feb 12 at 5:36
Liang, yes, but you are missing a +1 in your expression---it should be $f(\sigma\frown n\frown m)+1$---without it you don't get non-zero values. This $f$ is exactly the game value. – Joel David Hamkins Feb 12 at 13:52
Joel, you are right. There should be +1 there. – Liang Yu Apr 3 at 15:00
## 1 Answer
The answer is yes.
The point is that if there is any winning strategy for a designated player from a given position, then there is in a sense a canonical winning strategy, which is to make the first move that minimizes the game value of the resulting position, and for a given winning position in a fixed game, I claim that this strategy will have at worst hyperarithmetic complexity.
To explain, consider how the ordinal game values arise. We fix the tree of all possible finite plays. We assign value $0$ to any position in which the designated open player has already won. We assign value $\alpha+1$ to a position with the open player to move, if $\alpha$ is least among the values of the positions to which he or she can legally play. If it is the opponent's move and every move by the opponent has a value, then the value of the position is the supremum of these values. Thus, the open player seeks to reduce value, and wins when the value hits zero. The opposing player seeks to maintain the value as undefined or as high as possible.
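For intuition, here is the same recursion run on a made-up finite tree in Python (all values are finite here, and `None` plays the role of "undefined"):

````
tree = {"": ["a", "b"], "a": ["a0"], "a0": [], "b": ["b0", "b1"],
        "b0": [], "b1": []}
won = {"b0", "b1"}                  # positions at which Open has already won

def value(pos, open_to_move=True):
    if pos in won:
        return 0
    kids = tree[pos]
    if not kids:
        return None                 # dead end with no win: no value
    vals = [value(k, not open_to_move) for k in kids]
    if open_to_move:                # one more than the least attainable value
        defined = [v for v in vals if v is not None]
        return min(defined) + 1 if defined else None
    return max(vals) if None not in vals else None  # sup over opponent's moves

print(value(""))  # 1: Open moves to "b", and every reply lands in a won position
````

Playing to a child of least value decreases the value at each of Open's turns, which is exactly why the value-reducing strategy wins.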
Since playing according to the value-reducing strategy reduces value at every move for the open player, it follows that the tree $T_p$ of all positions arising from the value-reducing strategy is well-founded, and the value of $p$ is precisely the rank of the well-founded tree $T_p$, if one should consider only the positions where it is the opponent's turn to play.
Note that the assertion, "position $p$ in tree $T$ has value $\alpha$" is $\Sigma_1$ expressible in any admissible structure containing $T$ and $\alpha$, since this is equivalent to the assertion that there is an ordinal assignment fulfilling the recursive definition of game value, which gives $p$ value $\alpha$ in that tree. It follows that there can be no position in $T$ with value $\omega_1^{T}$, since otherwise we would get a $\Sigma_1$-definable map unbounded in $\omega_1^{T}$. So the value of a position in $T$ is a $T$-computable ordinal or undefined.
If a player has a winning strategy from a position $p$, then because the ordinal game value assignment is unique and all relevant values in the game proceeding from $p$ will be bounded by the fixed value $\beta_p$ of $p$, it follows that the value-reducing strategy from $p$ is $\Delta^1_1(T)$ definable and hence hyperarithmetic in $T$.
Basically, the way I think about it is that once you know a code for the ordinal value of the initial position, then the strategy only cares about positions with value less than that, and you can bound the ordinals that arise in the recursive definition of game value. Since the ordinal game value assignment is unique, this allows the strategy to become $\Delta^1_1$ in a code for the initial value, which is bounded by the ordinal value of the well-founded tree.
-
I apologize for my long-winded and redundant answer. – Joel David Hamkins Feb 12 at 3:13
See also Andreas Blass's monotone fixed point argument in the comments of mathoverflow.net/questions/63423/…. Basically, one can view my argument above as an unraveling of that way of thinking. – Joel David Hamkins Feb 12 at 3:16
It's not long-winded or redundant - I'm really happy to have an answer that's so explicit. Thanks! – Noah S Feb 12 at 7:40
http://crypto.stackexchange.com/questions/3977/hash-collision-resistance-requirements-for-lamport-signatures?answertab=votes | # Hash collision resistance requirements for Lamport signatures
According to the original paper, Lamport one-time signature scheme uses two one-way functions: $F$ and $G$. The former one, $F$, is used to create a public key by hashing elements of the private key (and also for hashing signature elements when verifying it); the latter, $G$, is used to hash a message when signing and verifying to get the number of bits corresponding to the number of pairs in keys.
As far as I understand, the requirement for $F$ is that it must be resistant to preimage attacks, and $G$, in addition to this, must also have collision resistance.
Given a standard hash function that satisfies all these requirements, I assume that $F$ and $G$ can be the same, e.g. SHA-256, which provides 256-bit security against preimage attacks and 128-bit security against collision attacks. Using it, we have the following key and signature sizes (described as dimensional arrays):
````
Private key: [256][2][32]byte (16 KiB)
Public key:  [256][2][32]byte (16 KiB)
Signature:   [256][32]byte    (8 KiB)
````
My question is: if we're aiming for overall 128-bit security, can we reduce the output of $F$ to 128 bits instead of 256 bits, leaving $G$ as is? Does $F$ require collision resistance?
For example, if we use SHA-256/128 for $F$ and SHA-256 for $G$, this will give us the following sizes:
````
Private key: [256][2][16]byte (8 KiB)
Public key:  [256][2][16]byte (8 KiB)
Signature:   [256][16]byte    (4 KiB)
````
Will this give us 128-bit security?
The Wikipedia article on Lamport signatures says:

> For each private key $y_{i,j}$ and its corresponding $z_{i,j}$ public key pair, the private key length must be selected so performing a preimage attack on the length of the input is not faster than performing a preimage attack on the length of the output. For example, in a degenerate case, if each private key $y_{i,j}$ element was only 16 bits in length, it is trivial to exhaustively search all $2^{16}$ possible private key combinations in $2^{15}$ operations to find a match with the output, irrespective of the message digest length. Therefore a balanced system design ensures both lengths are approximately equal.
However, I fail to understand it: what are the input and the output, and what is the message digest length (the output length of $G$, of $F$, or both)? Which part refers to the number of elements in the keys, and which part refers to the length of a single element? I'd appreciate a better explanation. Thanks!
-
## 1 Answer
Yes, it makes sense to truncate the hash to 128 bits. The security proof actually says that if finding a preimage for $F$ requires effort $2^n$, then breaking the Lamport signature scheme with $G$ having $k$-bit digests requires effort $2^n/(2k)$. So strictly speaking, with $F$ truncated to 128 bits and $G$ having 256 bits ($2k = 512 = 2^9$), you will have $128-9=119$ bits of security.
This is not an artifact of the security proof: since there are $256 \cdot 2 = 512$ public key hashes, an attacker has 512 times more chances to find a preimage compared to a normal, single preimage attack.
In the Merkle signature scheme, the result is the same but you would multiply $k$ by the number of messages that can be signed. E.g., for 1 million ($2^{20}$) messages, the security level is $128-9-20=99$ bits.
The Wikipedia article is about $F$ only: the "input" is one of the private keys, the "output" is the corresponding public key, and "message digest size" is the output size of $F$. I think that it just attempts to say that when the private key is very short then, regardless of the output size of $F$, it is trivial to find a preimage.
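For concreteness, here is a toy sketch of the truncated-$F$ parameter set from the question, in Python (standard-library `hashlib` and `secrets` only; this is illustrative code, not a hardened implementation, and the bit-indexing convention is mine):

````
import hashlib
import secrets

G = lambda m: hashlib.sha256(m).digest()        # 256-bit message digest
F = lambda x: hashlib.sha256(x).digest()[:16]   # one-way function truncated to 128 bits

def keygen():
    sk = [[secrets.token_bytes(16), secrets.token_bytes(16)] for _ in range(256)]
    pk = [[F(y0), F(y1)] for y0, y1 in sk]      # 256 x 2 x 16 bytes = 8 KiB each
    return sk, pk

def sign(sk, msg):
    bits = int.from_bytes(G(msg), "big")
    return [sk[i][(bits >> (255 - i)) & 1] for i in range(256)]   # 4 KiB

def verify(pk, msg, sig):
    bits = int.from_bytes(G(msg), "big")
    return all(F(sig[i]) == pk[i][(bits >> (255 - i)) & 1] for i in range(256))

sk, pk = keygen()
sig = sign(sk, b"sign me exactly once")
print(verify(pk, b"sign me exactly once", sig))  # True
````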
-
Thanks! So, in general, for Lamport signatures with ~s-bit security for both collisions and preimages, k=2s, n=s+log2(4s), which for 128-bit security is 256 and 137. – dchest Oct 8 '12 at 17:14
http://climateaudit.org/2006/05/31/more-on-mbh98-figure-7/ | by Steve McIntyre
More on MBH98 Figure 7
There’s an interesting knock-on effect from the collapse of MBH98 Figure 7 (see here and here).
We’ve spent a lot of time arguing about RE statistics versus r2 statistics. Now think about this dispute in the context of Figure 7. Mann "verifies" his reconstruction by claiming that it has a high RE statistic. In his case, this is calculated based on a 1902-1980 calibration period and a 1854-1901 verification period. The solar coefficients in Figure 7 were an implicit further vindication in the sense that the correlations of the Mann index to solar were shown to be positive with a particularly high correlation in the 19th century, so that this knit tightly to the verification periods.
But when you re-examine Mann's solar coefficients, shown again below, in a 100-year window, a period closer in size to the calibration and verification periods, the 19th century solar coefficient collapses and we have a negative correlation between solar and the Mann index. If there's a strong negative correlation between solar and the Mann index in the verification period, then maybe there's something wrong with the Mann index in the verification period. I don't view this as an incidental problem. A process of statistical "verification" is at the heart of Mann's methodology, and a figure showing negative correlations would have called that verification process into question.
There’s another interesting point when one re-examines the solar forcing graphic on the right. I’ve marked the average post-1950 solar level and the average pre-1900 solar level. Levitus and Hansen have been getting excited about a build-up of 0.2 wm-2 in the oceans going on for many years and attributed this to CO2. Multiply this by 4 to deal with sphere factors and you need 0.8 wm-2 radiance equivalent. Looks to me like 0.8 wm-2 is there with plenty to spare.
I know that there are lots of issues and much else. Here I'm really just reacting to information published by Mann in Nature and used to draw conclusions about forcing. I haven't re-read Levitus or Hansen to see how they attribute the 0.2 W/m² build-up to CO2 rather than solar, but simply looking at the forcing data used by Mann, I would have thought that it would be extremely difficult to exclude high late 20th century solar, leading to a build-up in the oceans, as a driving mechanism in late 20th century warmth. In a sense, the build-up in the ocean is more favorable to this view as opposed to less favorable.
None of this "matters" to Figure 7. It’s toast regardless. I’m just musing about solar because it’s a blog and the solar correlations are on the table.
UC adds
With window length of 201 I got bit-true emulation of Fig 7 correlations. Code in here. Seems to be OLS with everything standardized (is there a name for this?), not partial correlations. These can quite easily be larger than one.
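For anyone who wants to poke at this, here is a minimal numpy sketch of the procedure UC describes (sliding-window OLS with the reconstruction and each forcing standardized inside the window; the shapes and names are illustrative, not UC's actual code):

````
import numpy as np

def windowed_std_ols(y, X, window=201):
    """Sliding-window OLS coefficients with y and every column of X
    standardized (zero mean, unit variance) within each window.
    y: (n,) reconstruction; X: (n, k) forcings, e.g. solar, volcanic, CO2."""
    n, k = X.shape
    out = np.full((n - window + 1, k), np.nan)
    for s in range(n - window + 1):
        yw = (y[s:s + window] - y[s:s + window].mean()) / y[s:s + window].std()
        Xw = X[s:s + window]
        Xw = (Xw - Xw.mean(axis=0)) / Xw.std(axis=0)
        beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
        out[s] = beta
    return out
````

Because the regressors are standardized but mutually correlated, these coefficients are not partial correlations and can exceed one in absolute value, which matches the remark above; rerunning with `window=100` against `window=201` is exactly the robustness check at issue.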
The code includes a non-Monte Carlo way to compute the '90%, 95%, 99% significance levels'. The scaling part still needs help from CA statisticians, but I suspect that the MBH98 statement 'The associated confidence limits are approximately constant between sliding 200-year windows' is there to add some HS-ness to the CO2 in the bottom panel:
[figure: emulated MBH98 Figure 7 sliding-window correlations with significance levels; CO2 in the bottom panel]
This might be an outdated topic (nostalgia isn't what it used to be!). But in this kind of statistical attribution exercise I see a large gap between the attributions ("natural factors cannot explain the recent warming!") and the ability to predict the future.
181 Comments
1. TCO
Leaving aside the issue of Mann’s wordings and claims wrt a 100 year window, the 200 year window makes more sense than a 100 year one since it includes more data.
2. John A
Leaving aside the issues of Mann’s incorrect methodology, cherry-picking, false claims of robustness, misleading descriptions, bad data, abysmal statistical control, his paragraphing and punctuation are excellent.
[sarcasm OFF]
The data is bad, TCO. It doesn’t matter how wide the windows are.
3. TCO
Sure. But that is a separate criticism. If you read Chefen (or McK, can't remember), he makes the specific point that this is an error of methodology. Don't do that weaving thing that Steve does sometimes when I want to examine the criticisms one by one.
4. Posted May 31, 2006 at 1:42 PM
Changing from a 200 to 100 year window should not cause the correlations to collapse like that though. If the correlations are real then the change should be minor, at least until you get down to the different signal regime below 10 years-ish. That the correlations are so dependent on window choice indicates that either the correlation method is wrong or the signals are in fact not correlated.
You can say “it adds more data”, but that is neither a positive nor negative act without a good justification. After all, why not just correlate the *entire* data set then?
Here’s a question, what sort of signal will correlate randomly well or badly with a given other signal depending on the choice of window length?
5. Dave B
#1, #3, TCO…
the simple linear logic of your statement in #1 then makes an argument for a 500 year window, or a 1000 year window, or 10,000…simply because it “contains more data” does NOT mean it is optimal. robustness may fall apart at 10 or 20 year windows (decadal fluctuations, anyone?), but 100 years should be sufficient if the data are any good.
7. Steve McIntyre
TCO, I don’t think that the main issue is even whether a 100-year window or 200-year window is better. That’s something that properly informed readers could have decided. But he can’t say that the conclusions are “robust” to the window selection, if they aren’t. Maybe people would have looked at why there was a different relationship in different windows. Imagine doing this stuff in a prospectus.
It’s the same as verification statisitcs. Wahl and Ammann argue now that verification r2 is not a good measure for low-frequency reconstructions. I think their argument is a pile of junk and any real statistician would laugh at it. But they’re entitled to argue it. Tell the readers the bad news in the first place and let them decide. Don’t withhold the information.
8. Mark
The “window” is a moving average filter, right?
Mark
9. TCO
#7: Steve, read the first clause of my comment. I’m amazed how people here think that I am excusing something, when I didn’t say that I was excusing it. Even without the caveat, my point stands as an item for discussion on its own merits. But WITH IT, there is a need to remind me of what I’ve already noted?
I make a point saying, "leaving aside A, B has such and such properties", and the response is "but wait, A, A, A!". How can you have a sophisticated discussion? RC does this to an extreme extent. But people here do it too. It's wrong. It shows a tendency to view discussion in terms of advocacy rather than curiosity and truth seeking.
10. TCO
Chefen, Mark and Dave: Thanks for responding to the issue raised. Although I want to think a bit more about this, I still think the 200 years is better than the 100 (and 500 even better, but I'm open to being convinced of the opposite; for instance there are issues of seeing a change rapidly enough). Actually this whole thing is very similar to the m, l windows in VS06 (the real VS06, not what Steve calls VS06). (For instance look at all the criticism we've had of the too-short verification/calibration periods–it's amusing to see someone making the opposite point now.) As far as how much of a difference in window causes a difference in result and what that tells us, I think we need to think about different shapes of curves and to have some quantitative reason for saying that 200 to 100 needs to have similar behavior, but 200 to 10 different is ok–how do we better express this?
Steve and John: boo, hiss.
11. John Adams
TCO, nobody cares which window you think is better; the MBH claim was that window size does not matter ("robust"), but it does matter. Therefore the claim does not hold.
12. Ed Snack
I disagree TCO, because it is a “moving window”, I don’t think that the size of the window is nearly as important as the congruence between results from different window sizes. Correct me if I am wrong, but you are viewing the data in the window over certain time periods, and any significant variance between the results in, say, the 200 and 100 year windows does require explanation. The variance we apparently see suggests very strongly that either different processes are at work over the different time scales, or that no real correlation exists.
I know you are not supporting the 200 year window exclusively, but I do think you are de-emphasising the startling divergence of the results.
Now where is Tim Lambert when you want him; I hope he observes this issue and adds it to the list of errors (being polite) made by Mann et al. His post on this is going to get seriously large!
13. TCO
Ed: I’m just struggling to understand this intuitively. Is there a window size that makes more than another one? How much should 100-200 year windows agree, numerically? Is there perhaps a better way to describe the fragility of the conclusions than by susceptibility to changed window size? Some other metric or test? Intuitively, I think that Mann’s attribution efforts are very weak and sketchy and rest on some tissue paper. Yet, also intutitively, I support larger data sets for correlation studies than smaller ones. The only case in which this does not make sense is where one wants to really narrow into correlation as a function of time (really you want the instantaneous correlation in that case and the less averaging the better). But it’s a trade-off between accuracy of looking at the system changing wrt time, versus having more data. I’m just thinking here…
I guess in a perfect multiple correlation model, we would expect to have an equation that describes the behavior over time and that the correlation coefficients would not themselves be functions of time. That’s more what I’m used to seeing in response mapping in manufacturing. Might need a couple squared terms. But there are some good DOS programs that let you play with making a good model that minimizes remaining variance and you can play with how many parameters to use and how many df are left. Really, this is a very classic six sigma/DOE/sociology type problem. I wonder what one of them would make of the correlation studies here.
14. jae
Why in the hell do you want Tim Lambert's opinion? He doesn't even understand this thread!
15. Steve Sadlov
RE: #14. Well it is entertaining to watch him put his foot in it.
16. Posted May 31, 2006 at 9:51 PM
Re: 13. I would vary it from 1 to 300 and plot the change in correlations. How do you know 100 is not cherry picking?
17. Posted May 31, 2006 at 11:25 PM
TCO, regardless of exactly what features you're trying to zoom in on, the fact remains that the correlations should be fairly stable at similar time scales *until* you run into a feature of one of the signals. All the features of significance lie around the 10-year mark: there is the 11-year sunspot cycle in the solar data and the 10-year-ish switch-over in power law behaviour of the temperature data. There isn't really anything else of significance, particularly on the 100+ year level. So while the correlations may alter somewhat in going from a 100 to 200 to 300 year correlation, they definitely shouldn't dramatically switch sign if the signals *truly* are correlated that well at 200 years. If they are actually uncorrelated then the behaviour would be expected to vary arbitrarily with window size, without any further knowledge.
You can reasonably suggest that the true dependence is non-linear and that is stuffing things up. But then where does the paper’s claim of “robustness” with window size come from?
18. Peter Hartley
TCO, as Chefen’s post makes clear, the “optimal” window size depends on the theory you are trying to test. I don’t think it is something that can be decided on statistical grounds alone. For example, if you theorized that the “true” relationship was linear with constant coefficients you would want as long a data set as possible to get the best fix on the values of the coefficients. On the other hand, if you thought the coefficients varied over time intervals as short as 10 years you would want narrow windows to pick up that variation. If the theory implies the relationship is non-linear, you shouldn’t be estimating linear models in the first place, and so on. The point here, however, is that these results were presented as “robust” evidence that (1) the temperature reconstruction makes sense and (2) that the relative importance of CO2 and solar, in particualr, for explaining that temperature time series varied over time in a way that supported the AGW hypothesis. We have seen from the earlier analysis that the purported temperature reconstruction cannot be supported as such — the supposed evidence that it tracked temperature well in the verification period does not hold up, the index is dominated by bristlecones although the evidence is that they are not a good temperature proxy, there were simple mistakes in data collection and collation that affected the reconstruction etc. Now we also find that the supposed final piece of “robust evidence” linking the reconstruction to CO2 and solar fluctuations is also bogus. One can get many alternative correlations that do not fit any knwon theory depending on how the statistical analysis is done, and no reason is given for the optimality of the 200-year window. Just a bogus claim that the window size does not matter — one gets the same “comforting” results for smaller window sizes too.
19. Posted Jun 1, 2006 at 8:21 AM | Permalink | Reply
The thing is, this looks like the tip of the attribution iceberg. Taking just one major attribution study, Crowley, Science 2000, "Causes of Climate Change over the Past 2000 Years", where CO2 is attributed by removing other effects, you find Mann's reconstruction underneath. So much for "it doesn't matter".
It's been argued that greater variance in the past 1000 years leads to higher CO2 sensitivity. But that is only if the variance is attributed to CO2. If the variance is attributed to solar, then that explains more of the present variance, and CO2 less.
20. Jean S
The end of Peter Hartley's post (#18) is such a nice summary of these results that I want to give it a little more emphasis. So for anyone struggling to understand the meaning of these latest results, and their relation to Steve's earlier findings, I suggest you start from this:
The point here, however, is that these results were presented as "robust" evidence that (1) the temperature reconstruction makes sense and (2) that the relative importance of CO2 and solar, in particular, for explaining that temperature time series varied over time in a way that supported the AGW hypothesis. We have seen from the earlier analysis that the purported temperature reconstruction cannot be supported as such — the supposed evidence that it tracked temperature well in the verification period does not hold up, the index is dominated by bristlecones although the evidence is that they are not a good temperature proxy, there were simple mistakes in data collection and collation that affected the reconstruction etc. Now we also find that the supposed final piece of "robust evidence" linking the reconstruction to CO2 and solar fluctuations is also bogus. One can get many alternative correlations that do not fit any known theory depending on how the statistical analysis is done, and no reason is given for the optimality of the 200-year window. Just a bogus claim that the window size does not matter — one gets the same "comforting" results for smaller window sizes too.
21. Steve McIntyre
#19. Take a look at my previous note on Hegerl et al 2003, one of the cornerstones. I haven’t explored this in detail, but noticed that it was hard to reconcile the claimed explained variance with the illustrated residuals. We discussed the confidence interval calculations in Hegerl et al Nature 2006 recently.
Now that we’ve diagnosed one attribution study, the decoding of the next one should be simpler. The trick is always that you have to look for elementary methods under inflated language.
We haven't really unpacked Hegerl et al 2006 yet – it would be worth checking out.
22. Posted Jun 1, 2006 at 10:36 AM | Permalink | Reply
#21. Yes, the Hegerl study started me on a few regressions. I had some interesting initial observations, like very high solar correlation and low to non-existent GHG correlations with their reconstructions. One hurdle to replication is expertise with these complex models they use, even though they are probably governed by elementary assumptions and relationships. Another thing you have to wade through is the (mis)representation. In Hegerl, because the pdfs are skewed, there is a huge difference between the climate sensitivity as determined by the mode, the mean and the 95th percentile. The mode for climate sensitivity, i.e. the most likely value for 2XCO2, is very low, but not much is made of it.
23. Posted Jun 1, 2006 at 11:56 AM | Permalink | Reply
Any comments on the implications of Michael Mann and Kerry Emanuel's upcoming article in the American Geophysical Union's EOS? Science Daily has an article about it at http://www.sciencedaily.com/releases/2006/05/060530175230.htm
Some key quotes include:
Anthropogenic factors are likely responsible for long-term trends in tropical Atlantic warmth and tropical cyclone activity
and
To determine the contributions of sea surface warming, the AMO and any other factors to increased hurricane activity, the researchers used a statistical method that allows them to subtract the effect of variables they know have influence to see what is left.
and
When Mann and Emanuel use both global temperature trends and the enhanced regional cooling impact of the pollutants, they are able to explain the observed trends in both tropical Atlantic temperatures and hurricane numbers, without any need to invoke the role of a natural oscillation such as the AMO.
and
Absent the mitigating cooling trend, tropical sea surface temperatures are rising. If the AMO, a regional effect, is not contributing significantly to the increase, then the increase must come from general global warming, which most researchers attribute to human actions.
24. Steve McIntyre
#22. The skew was one reason for plotting the volcanics shown above. That distribution hardly meets regression normality assumptions.
25. Posted Jun 1, 2006 at 12:28 PM | Permalink | Reply
#24. Yes, and the quantification of warmth attributed to GHGs arises from a small remainder after subtracting out much larger quantities, over a relatively short time frame.
26. Steve Bloom
Re #23: With all eyes here focused on hockey sticks, occasionally in the distance a leviathan is seen to breach. Pay no attention.
27. Posted Jun 1, 2006 at 1:45 PM | Permalink | Reply
In addition to the CO2-solar-temperature discussion here, Mann referred to Gerber et al. in reaction to my remarks on RC about the range of variation (0.2-0.8 K) in the different millennium reconstructions. The Gerber climate-carbon model calculates changes in CO2 in response to temperature changes.
From different Antarctic ice cores, the change in CO2 level between MWP and LIA was some 10 ppmv. This should correspond to a variation in temperature over the same time span of ~0.8-1 K. The “standard” model Gerber used (2.5 K for 2xCO2, low solar forcing) underestimates the temperature variation over the last century, which points to either too low climate sensitivity (in general) or too high forcing for aerosols (according to the authors). Or too low sensitivity and/or forcing for solar (IMHO).
The MBH99 reconstruction fits well with the temperature trend of the standard model. But… all model runs (with low, standard, high climate sensitivity) show (very) low CO2 changes. Other experiments where solar forcing is increased (up to 2.6 times the minimum, which is the range of different estimates from solar reconstructions) do fit the CO2 change quite well (8.0-10.6 ppmv) for standard and high climate sensitivity of the model. But that also implies that the change in temperature between MWP and LIA (according to the model) is 0.74-1.0 K; the Esper (and later: Moberg, Huang) reconstructions fit the model results quite well, but the MBH98 (and Jones '98) trends show (too) low variation…
As there was some variation in CO2 levels between different ice cores, there is a range of possible good results. The range was chosen by the authors to include every result at a 4 sigma level (12 ppmv) of CO2. That excludes an experiment with 5 times solar, but includes MBH99 (marginally!).
Further remarks:
The Taylor Dome ice core was not included; its CO2 change over the MWP-LIA is ~9 ppmv, which should reduce the sigma level further. That makes MBH99 and Jones98 outliers…
Of course everything within the constraints of the model and the accuracy of ice core CO2 data (stomata data show much larger CO2 variations…).
Final note: all comments on the Gerber et al. paper were deleted as "nonsense" by the RC moderator (whoever that may be)…
28. John A
Final note: all comments on the Gerber et al. paper were deleted as "nonsense" by the RC moderator (whoever that may be)…
Which leaves what, exactly?
29. Posted Jun 1, 2006 at 2:10 PM | Permalink | Reply
Re #23
Jason, according to the article:
Because of prevailing winds and air currents, pollutants from North American and Europe move into the area above the tropical Atlantic. The impact is greatest during the late summer when the reflection of sunlight by these pollutants is greatest, exactly at the time of highest hurricane activity.
Have a look at the IPCC graphs d) and h) for the areas of direct and indirect effect of sulfate aerosols from Europe and North America. The area of hurricane birth is marginally affected by North American aerosols and not by the European aerosols at all, due to prevailing Southwestern winds…
Further, as Doug Hoyt also pointed out before, the places with the highest ocean warming (not only at the surface) are the places with the highest insolation, which increased by 2 W/m2 in only 15 years due to changes in cloud cover. That points to natural (solar induced?) variations which are an order of magnitude larger than any changes caused by GHGs in the same period. See the comment of Wielicki and Chen and the following pages…
30. Posted Jun 1, 2006 at 2:16 PM | Permalink | Reply
Re #27,
The (global) temperature / ice core CO2 feedback was nicely discussed by Raypierre.
31. BradH
Steve,
MBH98 has been carved up piece by piece here. In fact, there have been so many comments regarding its errors that it’s nigh on impossible to get a feel for the number and extent of the problems.
Ed's comments, to the effect that Tim Lambert's list of MBH errors must be getting substantial, got me thinking: wouldn't it be nice to have a post with a two-column table, with the MBH claim in one column and a link to the Climate Audit post(s) refuting it in the other? That would be a really powerful way to summarise the body of work on this blog relating to that paper.
You could call it, “Is MBH98 Dead Yet?”, or maybe, “Come back here and take what’s coming to you! I’ll bite your legs off!”
32. Steve McIntyre
#31. I guess you're referring to the famous Monty Python scene where the Black Knight has all his arms and legs cut off and still keeps threatening. David Pannell wrote about a year ago and suggested the simile for the then status of this engagement. I guess it's in the eye of the beholder, because I recall William Connolley or some such using the same comparison but reversing the roles.
There’s a lot of things I should do.
33. BradH
Yep, that’s the one. Did William Connolley really do that? Honest?
34. TCO
1. If the model shows correlation coefficients of factors varying as a function of time, then we’re not doing a good job of specifying the forcings in the system.
2. I still want some numerical feel for why and by how much 100-200 may not differ, and the reverse for 200-10. If you can't give one, then maybe the differences are not significant.
3. Hartley: but A! but A!
35. Posted Jun 2, 2006 at 5:11 AM | Permalink | Reply
TCO, as regards 1. and 2. you need to look back at comments #17 and #18. The answers are right there and can’t be made much simpler. You need to consider what exactly correlation coefficients are and think about how you’d expect them to behave given the properties of the data you have.
36. Paul
Here is a question for those with the technical ability, software, time and inclination (of which I have one of the four – I’ll leave you to guess which).
This problem appears to be asking for the application of the Johansen technique.
We have multiple time series and we have an unknown number of cointegrating relationships. Johansen would help us discover which ones matter, if any, and with what sort of causality. At the same time it was specifically designed to be used with non-stationary time series, so it will easily stride over the problems of finding correlated relationships in stochastically trended series such as these.
Steve, I am sure Ross would be familiar with Johansen and could do some analysis.
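For concreteness, a minimal sketch of what the first step would look like in MATLAB, assuming the Econometrics Toolbox is available (its jcitest routine implements the Johansen test); the series names here are hypothetical placeholders for aligned series in levels:

```
% Johansen cointegration test on the levels of the series.
% nh_temp, co2, solar: hypothetical T-by-1 vectors on the same years.
Y = [nh_temp co2 solar];             % T-by-3 matrix of (non-stationary) levels
[h, pValue] = jcitest(Y, 'lags', 2); % trace test for cointegrating rank
disp(h);                             % h for rank r: 1 means "reject rank <= r"
disp(pValue);
```

The number of rejections tells you how many cointegrating relationships the data support, which is exactly the question at issue here.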
37. TCO
Chefen: Disagree. The answer to my question 2 is non-numeric. Give me something better. Something more Kelvin-like. More Box, Hunter, Hunter like. On 1, think about it for a second. If I have a jackhammer feeding into a harmonic oscillator of springs and dampers, does that change the relationship F = ma over time? No. If you understand the system, you can give a relationship that describes the behavior. Sure, the solar cycle as a FORCING may be changing over time. But if you have modeled system behavior properly, that just feeds through your equation into output behavior.
38. John Creighton
If CO2 is more highly correlated with a 200-year window than a 100-year window, that could suggest that the climatic response to CO2 is slow. That is, there is a time lag. However, the slower the response of the climate to CO2, the less information there is to estimate that correlation. The rise in the correlation for the 200-year window towards the end of the millennium is, I think, problematic because it suggests that the response to CO2 might not be linear with the log of the CO2. However, this is all conjecture; Mann's results have been shown to be weak.
39. John Creighton
Something confuses me about the CO2 graph. It says it is plotting the log of the CO2, but the curve looks exponential. If CO2 is rising exponentially, shouldn't the log of it give a straight line? If that is the log of CO2 then CO2 must be rising by e^((t-to)^4) or so. Has anyone verified this data?
40. Jean S
re #39: John C, it does indeed say "log CO2″ in the figure, but it is actually the direct CO2 concentration (ppm) that is used (without logging) for the correlation calculations, as Steve notes here. Don't get confused so easily; the whole of MBH98 is full of these "small" differences between what is said and what is actually done.
41. John Creighton
I don't know if anyone is interested in this, but with all the criticisms of Mann's work I have been thinking about whether I could devise a more robust method of determining the correlation coefficients. My method starts by assuming that the signal is entirely noise. I use the estimate of the noise to whiten the regression problem (a.k.a. weighted least squares).
I then get an estimate of the regression coefficients and an estimate of the covariance of those regression coefficients. I then try to estimate the autocorrelation of the noise by assuming the error in the residual is independent of the error in the temperature due to a random error in the regression parameters. With this assumption I estimate the correlation.
This gives me a new estimate of the error autocorrelation which I can use in the next iteration of the algorithm. This gave me the correlation coefficients:
```
0.3522 //co2
0.2262 //solar
-0.0901 //volcanic
```
With the covariance matrix
```
0.0431 -0.0247 0.0008
-0.0247 0.0363 -0.0002
0.0008 -0.0002 0.0045
```
Here the correlation coefficients give the standard-deviation change in temperature caused by a one-standard-deviation change in the parameter, with the standard deviations measured over the instrumental records.
The algorithm took about 3 iterations to converge. Using the normal distribution the 99% confidence intervals are given by:
```
0.2413 0.4631 //co2
0.1326 0.3197 //solar
-0.1017 -0.0784 //volcanic
```
0.1732 standard deviations
I am not sure how this method relates to standard statistical techniques. If anyone knows of a standard technique that is similar or better, please tell me. One point of concern is that I was not able to use the MATLAB function xcorr to calculate the autocorrelation, as it resulted in a non-positive-definite estimate of the covariance matrix. I am not sure if my method of calculating the correlation is more numerically robust, but I know it is not as computationally efficient as the MATLAB method, since the MATLAB method probably uses the fft.
Also, I found most of my noise was very high frequency. I would have expected more low-frequency noise. I plan to mathematically derive an expression for the confidence intervals for signal and noise with identical poles forming a single complex pair. I think this is a good worst-case consideration, and it is convenient because the bandwidth of both the signal and noise is well defined.
The bandwidth is important because it is related to the correlation length which tells us how many independent verifications of the signal we have. I could also do some numerical tests to verify this observation.
```
function RX=JW_xcorr(a,b,varargin)
% Cross-correlation of a and b via convolution with a time-reversed a.
L=length(a);
RX=conv(a(end:-1:1),b);
adjust=zeros(size(RX));
adjust(:)=(abs((1:length(RX))-L)+1);  % per-lag factor: |lag|+1, damps distant lags
RX=RX./adjust/RX(L);                  % divide by the raw zero-lag so RX(L) = 1

function y=normalConf(mu,P,conf)
% Two-sided confidence intervals for normal estimates mu.
% Note: P(i,i) is used directly as a spread (standard deviation), not a variance.
if sum(size(P)>1)==1; P=diag(P); end  % accept a vector of spreads as well
y=zeros(length(mu),2);
for i=1:length(mu)
    low=(1-conf)/2;
    delta=sqrt(2)*P(i,i)*erfinv(2*low-1);  % normal quantile offset from the mean
    y(i,:)=[(mu(i)-abs(delta)) (mu(i)+abs(delta))];
end
```
```
%%%%%%%%%%%%% Script to process the data below %%%%%%%%%%%
%%%%%%%%%%%%% Part I: load the data %%%%%%%%%%%%%%%%%%%%%%
clear all;
nhmean;          % load the mean temperature data
fig7_co2;        % load the CO2 data
m_fig7_solar;    % load the solar data
m_fig7_volcanic; % load the volcanic data
tspan=[1611 1980]; % the range of overlapping temperature values
t=tspan(1):tspan(2);
```
```
%%%%%%%%% Part II: normalize and center the data %%%%%%%%%
[I1 J1]=find((m_co2>=tspan(1))&(m_co2=tspan(1))&(mtemp=tspan(1))&(m_solar=tspan(1))&(m_volcanic
```
42. John Creighton
Hmmm…..All my code wasn’t pasted. The rest of the script:
```
%%%%%%%%%%%%%%%% Part II: normalize and center the data %%%%%%%%%%%%%%%%
[I1 J1]=find((m_co2>=tspan(1))&(m_co2=tspan(1))&(mtemp=tspan(1))&(m_solar=tspan(1))&(m_volcanic
```
43. John Creighton
grrr……. looks like I can't paste my code. I guess you'll have to wait until I put it up on Geocities.
44. Steve McIntyre
John C, I think that WordPress considers the slash sign as an operator and gets mixed up with the code.
45. John Creighton
My code is now posted here:
http://geocities.com/s243a/warm/
I’ll upload the figures in a second.
46. John Creighton
Okay, the figures are now added. The figures were generated by MATLAB
47. TCO
John A, fix the sidebar.
48. TCO
The problem is not people posting long links, the problem is your sidebar. Fix the computer program, not the user behavior.
49. TCO
This is the only blog with this problem, that I have seen.
50. TCO
It’s not Microsoft’s fault either and Internet Explorer is NOT an obscure application.
51. John Creighton
Originally I was wondering why Michael Mann was plotting the correlation vs time, since if the system is stationary the correlation should be constant with time and you should get a better estimate using more data. I know Michael Mann's argument is that for most of the data there is too much noise to show any significant correlation, but I don't buy this argument. What Steve has shown is that Mann's correlation estimate is not robust to low-frequency noise.
I think it is an interesting question whether the correlation changes with time, but I think a more important question is why. If we are just interested in the correlation between temperature and carbon dioxide we may have an expression like:
t(i) = c(i)(1 + s(i) + v(i) + t(i)) a(i)
t temperature
c carbon dioxide
s solar
v volcanic
where a(i) is the linear correlation coefficient and (1+s(i)+v(i)+t(i))a(i) is our nonlinear coefficient, which we use to estimate how the correlation varies with time. Once we have a suitable estimate of the correlation with time, along with the error bounds, we can make better claims as to how reasonable the proposition of a tipping point is.
52. Jean S
John C., IMHO the real problem in MBH98 is not how the correlation is estimated; the real problem is that the correlations are more or less spurious (MBH98 is not representative of the temperature history). So I don't fully get where you are heading.
In any case, it might be useful (if you have some spare time) to calculate the correlations with respect to different "spaghetti reconstructions"… it would be interesting to see if they share anything in common! Another interesting test would be to correlate the individual proxies with those forcings. This might give some indication of the degree to which they serve as whatever type of proxy.
53. TCO
I pointed this out a while ago. Correlations changing with time imply that we don't have a good understanding of the system. There is some other factor driving the changes. I don't really think it's acceptable or meaningful to have correlations change over time.
54. Mark T.
Correlations changing with time imply that we don’t have a good understanding of the system.
Not at all. It implies that the statistics are really that: non-stationary. I.e. the forcings change over time. We can understand a system just fine in the face of non-stationary statistics, but that does not mean we can track it.
Mark
55. jae
re 52
if they share anything in common! Another interesting test would be to correlate the individual proxies with those forcings. This might give some indication of the degree to which they serve as whatever type of proxy.
Exactly. This has bugged me for over a year now. If tree rings (or any other proxy series) are to be used as “thermometers,” then EACH damn series should correlate with temperatures IN THE SAME GRIDCELL. When only one or two selected groups of trees correlate only with “global temperature” (e.g., certain bristlecone trees), then something is really amiss. How can these guys keep a straight face?
56. Mark T.
The rest of the world is not capable of understanding the implications of a lack of correlation to local temperatures vs. global temperatures.
Mark
57. TCO
52. Which means something else is changing in the system. There is some variable that is not being accounted for. A non-stationary system becomes stationary once we specify all of the things that drive the behavior.
58. jae
Like, a whole bunch of variables, in the case of tree ring data. Precipitation, sunlight/shade, nutrients, CO2, to name a few. Can you ever claim that living things will exhibit stationarity?
59. TCO
I think you need to do things like age-normalizing and the like. I think there are a lot of confounding variables. In the long term, I would not lose hope that we can find ways to get the information by combining different proxies mathematically (more equations help solve more unknowns) or by finding new proxies (better forensics, in a sense). Sometimes I get the impression that people here don't want to know what the behavior was in the past. Just because Mannian methods are not sufficient, or have been overplayed, I would not lose hope that we can figure something out. Look at all the advances that science has made in the last 100 years: isotopes, thermoluminescence, computer analysis and statistical methods, DNA, etc. etc. We may find a way to answer the questions.
60. jae
TCO: I agree. I like the treeline approach.
61. John Creighton
#52 (Jean) I think we can resolve spurious correlations, and once we are able to do that, either with some standard method or one that we derive, then we can go further and try to answer the question of whether the statistics are stationary and whether we can identify the nonlinearities that result in non-stationary statistics.
I am not sure whether standard statistical methods exist, but I think it is a good learning exercise for me to derive my own. In the link I posted above I calculated the autocorrelation wrong, but it was a good learning experience. I have just hypothesized that complex poles and non-positive poles in the autocorrelation indicate fewer degrees of freedom in the measurements.
Or in other words: given a time series Y of length N whose autocorrelation has non-positive real poles, there exists no linear transformation T such that:
W=T Y
and W is a white noise time series of length N.
Thus to whiten the signal we must find a T that reduces the dimensionality of Y. Intuitively this makes sense, as the more deterministic the signal is, the narrower in bandwidth it will be; or equivalently, the closer the poles will be to the unit circle in the discrete case and to the positive real axis in the continuous case.
Statistics cannot be used to prove how well a deterministic model fits a signal; it can only be used to estimate a likelihood region where the poles of the signal can be found. This is because only random independent measurements give information, and there is no way to separate a purely deterministic signal into random independent measurements.
62. Mark T.
re # 57:
Which means something else is changing in the system. There is some variable that is not being accounted for. A non-stationary system becomes stationary once we specify all of the things that drive the behavior.
Uh, not necessarily true. If we can actually back out all the variables and remove their impact, then perhaps you can create stationary data out of non-stationary data. I suppose that's what you mean when you say "There is some variable that is not being accounted for"? Unfortunately, this cannot work if there are non-linearities, i.e. even if you know what all the variables are, you can't back out non-linear effects and the data remain non-stationary.
I think it is hampered even further by interdependence between variables as well (which I think will manifest as a non-linearity).
Mark
63. Mark T.
I think you need to do things like age-normalizing and the like.
That's a good point. Also, what happens if a tree gets damaged for a while, say during an extreme drought? How long does it take before the tree is again "normal"?
Mark
64. TCO
Interdependence is handled by joint factors.
65. John Creighton
#52 Good points. At first I wondered if what you said was always true, but then I realized that what is of primary importance is not whether our measurements have stationary statistics, but whether the model parameters we are trying to estimate have stationary statistics. If the statistics of our measurements are stationary, that just makes the job easier and, I guess, implies the system is likely linear.
If we assume the model is linear, then non-stationary regression coefficients imply the model is different at different times. The less stationary the regression coefficients are, the less the model at one time has to do with the model at another point in time. This has many implications, including:
-there is less information to identify the model since it is only valid over a short period of time
-identification of model parameters at one time may have nothing to do with model parameters at another time. For example, in the case of tree rings, proxies identified in the instrumental period may have nothing to do with temperatures 1000 years ago.
66. mark
Interdependence is handled by joint factors.
TCO, joint factors are not separable using PCA, MCA or ICA.
Mark
67. TCO
So what? Did he (or does he need to) do PCA for the 3-forcing analysis? Just do basic multiple correlation. Joint factors (x1 times x2) can be handled just like an x3 or an x28. It's a confounding factor either way. Go read Box, Hunter, Hunter for guidance on when introduction of additional factors is warranted (it is a trade-off between degrees of freedom and fitting all the variance). This stuff is like super-basic. Minitab will do this in a jiffy.
Someone here (Steve M?) made some remark about Mann not having a polynomial least squares ability when I talked about looking at the higher order factors. I was like, duh. Just put x in one column of Excel, y in the other, and make a column for x squared. Do the damn linear regression of xsq versus y. Any moron with Excel can do that.
This stuff is like college undergraduate design of experiments stuff. Oh…and if you had modeled a system of forcings in any basic engineering class or in a manufacturing plant that does six sigma and gone ahead and blithely and unconcernedly just made the correlations for all the forcings be functions of time, people would laugh at you! That that was your result. That it didn’t even bother you. You don’t know the system then! You don’t even know it phenomenologically.
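The same three-line trick in MATLAB, for anyone who prefers it to Excel (x and y here are made-up column vectors of data, not anything from the paper):

```
% Quadratic fit as plain linear regression: x.^2 is just another column.
X = [ones(size(x)) x x.^2];   % design matrix: intercept, x, x squared
b = X \ y;                    % least squares coefficients
yhat = X * b;                 % fitted values; residual df = length(y) - 3
```

Adding a joint factor like x1.*x2 works the same way: append it as one more column and re-run the regression.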
68. TCO
63. If you read the tree lit, there are individual studies where they look at previous-year effects on this year (this moves towards understanding the damage). It's not a new thought at all. I think part of the issue comes, though, if you use data in a Mannian approach that just mushes everything together. However, if you use individual reconstructions (that have already taken this effect into consideration), rather than more basic data, this may resolve some of that issue.
69. mark
So what? Did he (or does he need to) do PCA for the 3 forcing analysis?
My point, TCO, is that you cannot separate them if you do not have a-priori information. I.e. you can pull out the joint (dependent) vars, but you cannot discern the difference between them.
Of course, PCA does not tell you which source is which anyway. FWIW, I’m finding ICA to be more interesting. Higher order statistics are cool.
Mark
70. TCO
You can look at the joint var as well as each independent factor. Adding x1x2 into the modeling (and leaving x1 and x2 there) is handled the same way as if you had added in an x3. This is a very normal process in DOE.
71. John Creighton
I’ve been banging my head over the estimation problem trying to derive an expression for the probability function that includes both the error in the estimate of the mean and the error in the estimate of the autocorrelation values.
Then it occurred to me that they are not independent. You can map N estimates of the error in the mean to N estimates of the error in the higher-order statistics. Consequently you can get a probability distribution in terms of the error in the mean or in terms of the error in the higher-order statistics, but not both simultaneously.
Since we generally assume the error in the mean is Gaussian, we express the probability distribution in terms of the error in the mean, as opposed to the error in the higher-order statistics, since it is a much more mathematically tractable estimation problem. This brings me back to the Gaussian likelihood function:
http://www.ece.ualberta.ca/~hcdc/Library/RandProcClass/JGaussian.pdf
P(x|y,r(k delta_t)) = (2*pi)^(-n/2) * det(P)^(-1/2) * exp(-(1/2)*(y-Ax)' inv(P) (y-Ax))
P[i,j]=r(delta_t*(i-j))
r(delta_t*(i-j)) is the discrete autocorrelation function of the error in the mean
Notice I wrote the likelihood function in terms of a conditional probability function. The actual probability function is given by the chain rule:
P(x,y, r(k delta_t))=P(x|y,r(k delta_t))P(y,r(k delta_t))
The idea of maximum likelihood is that the actual solution (y, r(k delta_t)) is near the maximum of the probability distribution. The likelihood function is an estimate of the probability distribution. Given no knowledge of the mean and autocorrelation, a reasonable assumption (prior information) is that P(y,r(k delta_t)) is a uniform distribution, in which case the maximum of the probability distribution is given by the maximum of P(x|y,r(k delta_t)). The use of prior information P(y,r(k delta_t)) in an estimation problem is known as Bayesian estimation.
An interesting question is how robust maximum likelihood estimation is to changes in P(y,r(k delta_t)). It is my opinion that if P(x|y,r(k delta_t)) is narrow, and the estimate using P(x|y,r(k delta_t)) lies within a few standard deviations of the maximum of P(y,r(k delta_t)), then the estimate will be robust. I think that most reasonable assumptions about P(y,r(k delta_t)) will yield a robust maximum likelihood estimator of P(x,y, r(k delta_t)).
The only likely problematic case I can think of is a probability distribution P(y,r(k delta_t)) that has two maxima which are a much greater distance apart than the width of the likelihood function. However, I am still not sure if even this case is problematic, because if the conditional probability function is sufficiently narrow then the weighting of P(y,r(k delta_t)) may not affect the maximum. I am going to try out maximum likelihood and see if my computer can handle it. I am then going to see if I can use it to compute the confidence intervals of the correlation coefficients of temperature drivers over the period of instrumental data. To compute the confidence intervals I am going to assume that P(y,r(k delta_t)) is a uniform distribution. I may later try to refine this knowing the distribution properties of the higher-order statistics.
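For anyone playing along, evaluating that Gaussian log-likelihood in MATLAB is only a few lines once the covariance is built from the autocorrelation. This is a hypothetical sketch (r, y, A, x and n are placeholders matching the formula above), and I work with the log to avoid overflowing det(P) on long series:

```
% Log of the Gaussian likelihood with Toeplitz covariance built from r.
% r(k): assumed autocovariance at lag k-1; y, A, x as in the formula above.
P = toeplitz(r(1:n));                 % n-by-n covariance matrix
L = chol(P, 'lower');                 % factor: P = L*L'
z = L \ (y - A*x);                    % whitened residual
logLik = -0.5*(n*log(2*pi) + 2*sum(log(diag(L))) + z'*z);
```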
72. John Creighton
CO2 Negative Correlation w/ Temp! I tried two different techniques on Mann's data for MBH98. With regular least squares I got positive correlation coefficients for CO2 and solar:
0.3187 //CO2
0.2676 //Solar
-0.1131 //volcanic
I then used an iterative technique where I iteratively estimated the error covariance of the fit from the autocorrelation of the residual. At convergence, the power spectrum of one iteration was the same as the iteration before it. I got the following correlation coefficients:
0.4667 //CO2
-0.1077 //Solar
-0.0797 //volcanic
Clearly this latter result is wrong, but why should it give a different answer than the previous technique? If you look at the plots of solar and CO2, the CO2 curve looks like a smoothed version of the solar curve. Since the CO2 curve is less noisy and the higher-frequency parts of the solar curve don't appear to match the temperature curve well, the regression prefers to fit the low-frequency "hockey stick like" shape of the CO2 rather than the low-frequency hockey stick like shape of the solar forcing curve. If the CO2 gives too much of a hockey stick and the solar gives not enough of a hockey stick, the regression can fit the hockey stick by making the solar correlation negative and the CO2 correlation extra positive.
What I think is wrong is this: since the low-frequency noise is easily fit by both the CO2 curve and the low-frequency parts of the solar curve, the residual underestimates the error in the low-frequency part of the spectrum. As a consequence, in the next iteration the regression algorithm overweights the importance of the low-frequency parts of the signal in distinguishing between the two curves. Since the low-frequency parts of the CO2 curve and solar curve are nearly collinear, the estimation is particularly sensitive to low-frequency noise. I think this shows some of the pitfalls of trying to estimate the error from the residual, and perhaps gives a clue as to why the MBH98 estimate of the correlation is not robust.
73. UC
re 55
Proxies are more accurate than thermometers. That's the only explanation for the 2-sigmas of MBH99. Maybe it is possible to use the same proxies to obtain all the other unknowns (other than temperature) for 1000-1980.
74. John Creighton
Estimating w/ Unknown Noise
So that I don't get too ad hoc and incoherent, I decided to search the web for estimation techniques that make no assumptions about the noise. This link is the best I've found:
http://cnx.org/content/m11285/latest/
It looks promising but I don’t fully understand it. I think when possible these techniques should be tried.
75. John Creighton
Negative Power Spectrum :0
I've been trying to estimate the autocorrelation of the residual and I noticed that it leads to a negative power spectrum. I don't understand why, but there seems to be a known solution to the problem:
http://www.engineering.usu.edu/classes/ece/7680/lecture13/node1.html
The idea is to maximize the entropy of the signal and it doesn’t even assume Gaussian noise. I noticed Steve has discussed the power spectrum of the residual with respect to some of Mann’s papers. I wonder if Steve or the opposing Hockey stick team have tried using these techniques to estimate the power spectrum.
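If anyone wants to try it, MATLAB's Signal Processing Toolbox already has a Burg (maximum entropy) spectral estimator, pburg. A minimal sketch (resid is a hypothetical residual series, and the AR order is a guess that would need justifying):

```
% Maximum-entropy (Burg) PSD estimate of a residual series.
resid = y - A*x;                     % hypothetical regression residual
order = 6;                           % assumed AR model order
[Pxx, w] = pburg(resid - mean(resid), order);
plot(w, 10*log10(Pxx));              % Burg spectra are nonnegative by construction
xlabel('normalized frequency (rad/sample)'); ylabel('power (dB)');
```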
76. John Creighton
GENERALIZED MAXIMUM ENTROPY ESTIMATION OF SPATIAL AUTOREGRESSIVE MODELS
http://www.spatialstatistics.info/webposting/mittelhammer_marsh/spatial_entropy_marsh_mittelhammer.pdf
“Abstract: We formulate generalized maximum entropy estimators for the general linear model and the censored regression model when there is first order spatial autoregression in the dependent variable and residuals. Monte Carlo experiments are provided to compare the performance of spatial entropy estimators in small and medium sized samples relative to classical estimators. Finally, the estimators are applied to a model allocating agricultural disaster payments across regions.”
I have a hunch that this paper will tell me what I need to know to properly apply regression in the case where the statistics of the error are not known in advance. I'll say more once I am done reading it.
77. UC
Are Solar and DVI based on proxies as well? And NH is proxy + instrumental. Lots of issues, indeed..
78. John Creighton
#77 I suppose all the non-temperature components of the proxies are supposed to cancel out when you fit the data to the proxies. Since many of the drivers are correlated, I don't think this is guaranteed to happen.
79. UC
45 :
Your method is a bit hard to follow, maybe because people tend to have different definitions for signal and noise. Is the MBH98 'multivariate regression method' the LS solution for the model
mtemp = a*CO2 + b*solar + c*volcanic + noise ?
i.e. your 'no noise assumption' fit. And what is the time-dependent correlation of MBH98? Sometimes the Earth responds to solar and sometimes to CO2?
80. John Creighton
#79 Oh gosh, given the lack of responses on this thread I didn't think anyone was following it. Anyway, you have the right definition of the temperature signal plus noise. Sometimes people try to fit deterministic signals with a best fit using a least squares criterion. This is effectively the same as fitting a stochastic signal but assuming white noise with a standard deviation of one. There are several ways of deriving weighted least squares and they all give the same solution. The standard routes are: the Cramer-Rao lower bound, maximizing the likelihood, or whitening the least squares error equation and then applying the normal least squares solution to the whitened equation.
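In MATLAB the whitening route is only a few lines; here is a bare-bones sketch (P, A and y are hypothetical: an error covariance estimate, a design matrix, and the observations):

```
% One pass of whitened (generalized) least squares.
S  = chol(P, 'lower');     % factor the error covariance: P = S*S'
Aw = S \ A;                % whitened design matrix
yw = S \ y;                % whitened observations
x  = Aw \ yw;              % ordinary least squares on the whitened problem
% Iterating: re-estimate P from the autocorrelation of y - A*x and
% repeat until x stops changing (feasible generalized least squares).
```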
I haven’t updated that link since I posted it because I haven’t been satisfied with the results. I’ll take a look at it now and respond further.
81. John Creighton
Later today I'll clean up the code in the link somewhat; there will be new figures, and this will be my introduction:
“The climate scientists that make headlines often use ad hoc statistical methods in order to justify their presupposed views, without doing the necessary statistical analysis to either test their hypotheses or justify their methods. One assumption used by Michael Mann in one of his papers is that the error can be estimated from the residual. In the following we will iteratively use this assumption to try to improve the estimator and show how it leads to an erroneous result. By this proof by contradiction we will conclude that Mann's assumption is invalid.
There have been several techniques in statistics that attempt to find solutions to estimation without knowledge of prior statistics. They even point out that estimation of the true autocorrelation via the autocorrelation of the measurement data is inefficient and can lead to biased results. Worse than that, the estimation of the power spectrum can lead to negative power values as a result of windowing. This is a consequence of the window not exceeding the coherence length of the noise.
I will address these techniques later, once I have read more, but I believe that entropy may be a neglected constraint. I conjecture that if we fix the entropy based on the total noise power and then try to find the optimal estimate of the noise under this constraint, then absurd conclusions like a negative power spectrum will be resolved. Entropy is a measure of how random a system is. Truly random systems will have a flat power spectrum and therefore a high entropy relative to the power.
“
82. John Creighton
Thinking more about coherence length, I recalled that the coherence length is inversely proportional to the width of a power spectrum peak; thus narrow peaks cannot be estimated over a limited window. This tells me that a good estimate of the power spectrum will not have overly narrow peaks. Thus, for the iterative technique to work, it must constrain the power spectrum to certain smoothness properties based on the window length.
83. Kevin
Re 81: Right on! Though perhaps a bit too diplomatic. Many Climate Scientists, brilliant though they may be, inhabit the closed form world of Physics. In this world, if the equations say it happened, it happened and there is no need for empirical verification. Statistics also are not required; “We’re beyond the need for that” seems to be the attitude of some modelers at least. End of discussion. This sort of mindset does not tend to produce good researchers. (And the good ones doing the solid research don’t produce headlines.)
Will be very busy at work but will try to revisit this blog and thread as much as I can. Thanks for your comment, John.
84. Steve McIntyre
#81. John Cr, I haven’t been keeping up for obvious reasons, but intend to do so.
85. bruce
Re #81: Good stuff John. However, you might want to address the spelling errors, word omissions etc if you don’t want to be exposed to the related credibility risk.
86. John Creighton
#85 I'm not worried. I'm far from ready to publish. I am just at the brainstorming stage and have much to learn about estimation when the statistics of the noise aren't known. I am curious, though, about how Mann estimates the power spectrum of the residual, as I am finding out it is not a trivial problem.
I guess I won't be cleaning up the link I posted above (with corrections to the code, new figures, and a new introduction) today. I have instead been fiddling a bit with the power spectrum and autocorrelation estimation. I want to see if I can do better at estimating the true autocorrelation than just autocorrelating the data.
My current approach involves autocorrelating the data, taking the fft, taking the magnitude of each frequency value, taking the inverse fft, then smoothing in the frequency domain by multiplying by a sinc function in the time domain. It sounds ad hoc, so I want to compare the result with just autocorrelating the data.
I know that iteratively trying to improve your estimate by estimating the noise from the autocorrelation of the data yields erroneous results. I am not sure if you can improve the estimate by using the smoothing feature I proposed above inside the iterative loop. I want to compare the results out of curiosity. Whether I take it to the next level of statistical testing through Monte Carlo analysis will depend on whether I believe it is a better use of my time to test and validate my ad hoc technique or to read about established techniques, as I posted above.
http://www.spatialstatistics.info/webposting/mittelhammer_marsh/spatial_entropy_marsh_mittelhammer.pdf
87. Steve McIntyre
#86. John Cr, I’ve not attempted to replicate Mann’s power spectrum of residuals, but he has written on spectra outside of MBH98 in earlier work and I presume that he uses such methods – see Mann and Park 1995. I would guess that he calculates 6 spectra using orthogonal windows (6 different Slepian tapers); then he calculates eigenspectra.
In the calibration period there is already a very high degree of overfitting – his method is close to being a regression of temperature on 22-112 near-orthogonal series – and the starting series are highly autocorrelated. Thus simple tests for white noise are not necessarily very powerful.
88. UC
One way to test the algorithm is to simulate the NH_temp data, e.g.:
```
noise=randn(386,1)/10;        % white noise, sd = 0.1
co2s=(co2(:,2)-290)/180;      % crude rescaling of the CO2 series
solars=(solar(:,2)-1365)/12;  % crude rescaling of the solar series
s_temp=co2s+solars+noise;     % synthetic temperature with known coefficients
```
and use s_temp instead of NH_temp in the algorithm. The MBH98 algorithm seems to show changing correlation even in this case: a normalize/standardize problem.
89. John Creighton
#88 I agree, simulation is a good test method. What kind of noise did you assume in your simulation? Anyway, it occurred to me today that in weighted least squares there is a tradeoff between the bias introduced by the non-modeled parameters (e.g. un-modeled red noise) and estimator efficiency (via a whitening filter). The correlation length of the noise tells us how many independent measurements we have. We can try to remove the correlation via a whitening filter to improve the efficiency, but there is an uncertainty in the initial states of the noise dynamics. The longer the correlation length due to those states (time constant), the more measurements are biased by this initial estimate. Bias errors are bad because they add up linearly in the whitening filter instead of in quadrature. We can try to remove these bias errors by modeling them as noise states, but this also reduces our degrees of freedom and thus may not improve our effective number of independent measurements.
90. bender
JC,
It’s not that people don’t want to follow this thread. It’s just that the language is sufficiently opaque that you’re not going to get a lot of bites. I understand the need to be brief. But if you’re too brief, the writing can end up so dense as to be impenetrable to anyone but the narrowest of audiences. If the purpose of these posts is simply to document a brainstorm, where you are more or less your own audience (at least for awhile), maybe label it as such, and be patient in waiting for a reply? Just a thought. Otherwise … keep at it!
Will try to follow, but it will have to be in pulses, as time permits. Meanwhile, maximum clarity will ensure maximum breadth of response, maximum information exchange, and a rapid rate of progress.
Consider posting an R script, where the imprecision of English language packaging is stripped away, leaving only unambiguous mathematical expressions. A turnkey script is something anyone can contribute to. It is something that can be readily audited. And if we can't improve upon it, at least we might be able to learn something from it.
91. John Creighton
Hopefully it will be clearer soon. I am making progress on estimating the correlation of CO2. My iterative estimation of the noise seems to be working, because the peaks in the power spectrum stay in the same location after several iterations. It takes about 5 iterations to converge. The regression coefficient of the solar seems to decrease to half of what it was under non-weighted regression, while the CO2 seems to stay about the same. I expected the solar regression coefficient to catch up to the CO2. However, the power spectrum has three dominant peaks, and if I model those with an autoregressive model the situation may change. I also haven't tried working out the confidence intervals.
Keep in mind I am using Mann’s data and my techniques can’t correct for errors in the data.
92. John Creighton
I just tried fitting the temperature data to an ARMA model. Since there were 3 pole pairs, I used 6 delays for the autoregressive part. I used 2 delays for each input. My thinking was to keep the transfer function proper in the case that each complex pole pair was due to each input. More than likely, though, the poles are due to the sunspot cycles or weather patterns.
The matrix was near singular, so in my initial iteration I added a constant to each diagonal element so that the condition number didn't exceed 100. This first iteration gave me a nearly exact fit. The power of the peaks in the power spectrum of the error was less than 10^-3. As a comparison, non-weighted least squares without modeling the noise dynamics gives a power for the peaks of 30.
Therefore the ARMA model significantly reduced the error. Unfortunately, when I tried to use this new estimate of the error in the next iteration, the algorithm diverged, giving peaks in the error power spectrum of 10^5. I think I can solve this problem by using an optimization loop to find the optimal value to add to the diagonal of the matrix I invert in my computation of the pseudo-inverse.
For those curious, the pseudo-inverse is computed as follows:
inv(A' A) A'
where ' denotes transpose, and A is the data matrix. For instance, one column is the value of carbon dioxide and another column represents the solar drivers; other columns represent the autoregressive parts and others the moving average part.
Adding to the diagonal effectively treats the columns of the data matrix as signal plus white noise, and the diagonal matrix added to A'A represents the white noise in the columns of A. Although the values of the columns of A may be known precisely, the noise could represent uncertainty in our model. The effect of adding a diagonal matrix to A'A is to smooth the response. This technique of adding a constant to the diagonal has many applications. For instance, it is used in iterative inverse solvers such as those in the program PSpice. It is also a common technique in model predictive control.
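In MATLAB the whole thing is a couple of lines; here is a sketch of the damped pseudo-inverse (this is Tikhonov, or ridge, regularization; the starting lambda and the condition-number target of 100 are just the knobs described above):

```
% Damped pseudo-inverse: grow lambda until A'A is well-conditioned.
k = size(A,2);
lambda = 1e-6;
while cond(A'*A + lambda*eye(k)) > 100
    lambda = lambda * 10;
end
x = (A'*A + lambda*eye(k)) \ (A'*y);   % damped least squares solution
```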
93. UC
cont. #79:
It seems that inv(A'*A)*A'*nh replicates the MBH98 results more accurately than the http://data.climateaudit.org/scripts/chefen.jeans.txt code (no divergence at the end of the data). I tried to remove CO2 using this model and the whole data set; the solar data does not explain the residual very well. Climate science is fun. I think I'll join the Team.
#86: just take the fft and multiply by its conjugate; there's the PSD. It gets too complex otherwise.
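That is, in MATLAB (resid being a hypothetical demeaned residual series):

```
% Raw periodogram: noisy but dead simple, and never negative.
F = fft(resid - mean(resid));
psd = (F .* conj(F)) / length(resid);
plot(psd(1:floor(end/2)));        % keep the positive frequencies
```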
94. Steve McIntyre
At some point, this topic needs to be pulled together in 500 words and submitted to Nature.
95. TCO
Yeah, and when you do that, take a careful Sean Connery look to make sure that you don't go a bridge too far. Chefen already did. But he had the Co-Jones to admit it.
96. UC
Here is the code if someone is interested:
http://www.geocities.com/uc_edit/full_fig7.txt
97. bender
Are the input files also posted somewhere? i.e.:
fig7-co2.txt
fig7-corrs.txt
fig7-nh.txt
fig7-solar.txt
fig7-volcanic.txt
98. Steve McIntyre
99. bender
What?! You openly allow people to access data, and you share your scripts?! Don’t you know that that’s a threat to your monopoly on the scientific process? You’re mad! You’re setting a dangerous precedent!
But seriously. I am surprised how much good material there is buried here in these threads. Do you have a system for collating or scanning through them all? I’ve used the search tool; I’m wondering if there’s some other more systematic way to view what all is here. e.g. I have a hard time remembering under which thread a particular comment was posted. And once a thread is off the ‘hot list’ (happens often under heavy posting), then I have a hard time sifting through the older threads. Any advice?
(You may delete 2nd para, unless you think other readers will find the answer useful.)
100. Steve McIntyre
The categories are useful and I use them.
If you google climateaudit+kenya or something like that, you can usually find a relevant post. I've been lazy about including hyperlinks as I go, which is frustrating now that I didn't do it at the time.
As an admin, there are search functions for posts and comments, which I often use to track things down. But that doesn’t seem to be available to readers or at least I don’t know how to accommodate it.
There are probably some ways of improving the indexing, but I’ll have to check with John A on that.
101. ET SidViscous
Steve
Setting up an FTP site would probably be the best/easiest.
Usually if you point the browser to the directory of the FTP site it will list everything letting people find it themselves.
I'm sure you are aware of this, but probably didn't make the connection here. The necessary neurons are probably taken up with the memory of a Lil Kim video.
102. bender
I am not sure whether standard statistical methods exist, but I think it is a good learning exercise for me to derive my own.
JC, I definitely do not want to discourage you from carrying on with your innovative work. In fact, I think I will start paying a little closer attention to what you are doing. It is starting to look interesting (as I slowly wade my way through this thread).
I just want to point out that this attitude can be somewhat dangerous. It validates my earlier remark when I described the Mannomatic as an ad hoc invention (and MBH98 as the unlikely product of a blind watch-maker). Innovating is great fun. Until you get burned. So, if you can, be careful.
In general, CA must be careful that a “good learning exercise” for one does not turn out to be a hard lesson for all. We have seen how innovation by a novice statistician can lead to a serious credibility loss. Let THAT be the lesson we learn from.
That being said, JC is obviously a MATLAB whiz, and that is a very good thing. Keep it up JC. (Can we start him a scratch-pad thread off to the side where he can feel free to brainstorm?)
103. Steve McIntyre
Sure.
John Creighton, if you want to collate some of your posts and email them to me, I'll post it up as a topic, so that people can keep better track.
104. John Creighton
Steve, I might check tonight to see what I might want to move.
Bender, not to worry. I think my thought process is converging on something standard.
http://www.csc.fi/cschelp/sovellukset/stat/sas/sasdoc/sashtml/ets/chap8/sect3.htm
In the link above, the error and the regression parameters are simultaneously estimated. How do you do this? Well, here is my guess. (When I read it I'll see if I have the same method, a better method, or a worse method.) Treat each error as another measurement in the regression problem:
[y]=[A I][ x]
[0]=[0 I][e]
With an expected residual covariance of:
[I 0]
[0 I]
This can be solved using weighted least squares.
Once x is estimated, we estimate
P=E[(Y-AX) (Y-AX)']
and compute the Cholesky factorization of P:
S S' = P
This gives us a new regression problem:
[y]=[A S][ x]
[0]=[0 S][e2]
With an expected covariance in the residual of:
[P 0]
[0 P]
Subsequent iterations go as follows:
P(n)=E[(Y - A X(n)) (Y - A X(n))']
S(n)S(n)’=P(n)
[y]=[A S(n)][ x(n+1)]
[0]=[0 S(n)][e2(n+1)]
With an expected error in the residual of:
[P(n) 0 ]
[0 P(n)]
We keep repeating with weighted least squares until P(n) and x(n) converge. Hopefully by this weekend I can show some results, if I don't play too much Tony Hawk Pro Skater. Lol. Well, I've got to go rollerblading now.
Welcome to the team, UC. You can estimate the power spectrum the way you suggest, but it will give a noisier estimate unless you average several windows. It can also give a biased estimate. In the data Mann used for estimating the correlations, the number of measurements isn't so large that I can't compute the autocorrelation directly by definition, but the optimization you suggest should be used when computer time is more critical.
105. bender
Re #104 multiple regression with autocorrelated error
In the link above the error and the regression parameters are simultaneously estimated. How do you do this?
JC, you’re precious. And I’m not kidding. But there is a difference between “how you do this” and “how a statistician would write the code in MATLAB so that a normal human could do this”.
Your link suggests you want to do multiple regression with autocorrelated error. If that's all you want to do, you don't need to reinvent the wheel. Use the arima() function in R. The xreg parameter in arima() is used to specify the matrix of regressors.
What we are after are turnkey scripts that TCO could run, preferably using software that anybody can download for free, and whose routines are scrupulously verified to the point that they don’t raise an eyebrow in peer review. That’s R.
I still have not digested all you’ve written, so be patient. My comment pertains only to the link you provided. (I do enjoy seeing your approach as it develops.)
106. John Creighton
#105 I typed help arima in my version of MATLAB and I couldn't find any documentation for this routine. I searched arima+MATLAB on Google and didn't find any code right away. Maybe I'm reinventing the wheel, but I still think it is a good learning exercise. I learn best when I can anticipate what I am about to learn. If I get it working in MATLAB I can try writing a Visual Basic macro for it so people can use it in Excel. Or better yet, maybe I can find some free code someone has already written.
107. UC
# 104: IMO, if the noise part is strongly correlated over time, we need to change the model. And the simple-fft method in my code shows that the residuals have strong low-freq components (you are right, it can be noisy, but it fits this purpose). That is, a linear model with i.i.d. driving noise can be rejected. Maybe there is an additional unknown forcing, or the responses to CO2 and solar are non-linear and delayed. My guess is that the NH reconstruction has large low-frequency errors.
108. bender
#107 You already know there are other forcings, intermediary regulatory processes, lag effects, threshold effects & possible nonlinearities. So you know a priori your model is mis-specified. [This model does not include heat stored in the oceans. I'm no climatologist, but I presume this could be a significant part of the equation.] So you know there is going to be low-frequency signal that is going to be lumped in with the residuals and assumed to be part of the noise. So it’s a fair bet your final statement is correct.
Re #41
My method starts by assuming that the signal is entirely noise
Given above, why would you start by assuming THAT? (I know: because it’s helpful to your learning process. … Carry on.)
109. UC
#108 : My final statement is independent of the former statements. Other issues imply that the NH reconstruction has no 'statistical skill', and that usually leads to low-frequency errors.
It seems to me that fig-7corrs are estimates of linear regression parameters (LS), not partial correlations. And that explains the divergence when http://data.climateaudit.org/scripts/chefen.jeans.txt is used.
110. Jean S
re #109, and others: Thanks for the hard work, UC, JC & bender! I'll check those once I'm back from the holidays. If you feel the need to keep busy on research, I suggest you take a look at the MBH99 confidence intervals. I'm sure it is (again) something very simple, but what exactly?!??! The key would be to understand how those "ignore these columns" are obtained… try reading Mann's descriptions to see if they give you some ideas… Notice also that there is a third (!!!) set of "confidence intervals" for the same reconstruction (post 1400), given (with another description) in:
Gerber, S., Joos, F., Bruegger, P.P., Stocker, T.F., Mann, M.E., Sitch, S., Constraining Temperature Variations over the last Millennium by Comparing Simulated and Observed Atmospheric CO2, Climate Dynamics, 20, 281-299, 2003.
It seems to me that fig-7corrs are estimates of linear regression parameters (LS), not partial correlations.
You may well be right on this issue. I thought they were partial correlations, as those gave a pretty good fit compared to ordinary correlations. Based on Mann's description, they may well be regression parameters, or actually almost anything:
We estimate the response of the climate to the three forcings based on an evolving multivariate regression method (Fig. 7). This time-dependent correlation approach generalizes on previous studies of (fixed) correlations between long-term Northern Hemisphere temperature records and possible forcing agents. Normalized regression (that is, correlation) coefficients r are simultaneously estimated between each of the three forcing series and the NH series from 1610 to 1995 in a 200-year moving window.
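As far as I can tell, that "evolving multivariate regression" amounts to a few lines of MATLAB. This is a sketch only (nh, co2, solar, volc stand in for the loaded series as column vectors, and the coefficients may not match Mann's exact scaling):

win  = 200;                              % 200-year moving window
stdz = @(v) (v - mean(v)) ./ std(v);     % standardize: mean 0, std 1
T    = numel(nh) - win + 1;
r    = zeros(T, 3);
for t = 1:T
    idx = t:t+win-1;
    Z   = [stdz(co2(idx)) stdz(solar(idx)) stdz(volc(idx))];
    r(t,:) = ((Z'*Z) \ (Z'*stdz(nh(idx))))';  % "normalized regression" coefficients
end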
111. bender
A brief side comment on how uncertainty is portrayed in EVERY ONE of these reconstructions. The graphs are deceptive, possibly intentionally so. And many, many good people are being deceived, whether they know it or not. I think most people are looking at the thread of a curve in the centre of the interval, thinking that IS the reconstruction. It is NOT. It is only one part of it. What you should be looking at are the margins of the confidence envelope, thinking about the myriad possibilities that could fit inside that envelope.
An honest way of portraying this reality would be to have the mean curve (the one running through the centre of the envelope) getting fainter and fainter as the confidence interval widens the further you go back in time. That way your eye is drawn, as it should be, to the envelope margins.
North specifically made a passing reference to this problem in the proceedings, when he pointed out that the background color on one of the reconstruction graphics was intentionally made darker and darker as you go back in time. (I don't recall the name of the report.) That addresses the visual problem, but in a non-quantitative way. I think the darkness of the centreline should be proportional to the width of the confidence envelope at each point in time. THEN you will see the convergent truth that is represented at the intersection of all these reconstructions: a great fat band of uncertainty.
112. UC
#110: Yep, hard to track what is going on; the story continues:
The partial correlation with CO2 indeed dominates over that of solar irradiance for the most recent 200-year interval, as increases in temperature and CO2 simultaneously accelerate through to the end of 1995, while solar irradiance levels off after the mid-twentieth century.
113. UC
Now, as we have the model, we can use the years 1610-1760 to calibrate the system and reconstruct the whole 1610-present period (CO2, Solar and Volcanic as proxies). Calibration residuals look OK. http://www.geocities.com/uc_edit/rc1.jpg
114. John Creighton
“‘My method starts by assuming that the signal is entirely noise’
Given above, why would you start by assuming THAT? (I know: because it’s helpful to your learning process. … Carry on.) ”
With the algorithm I am envisioning
http://www.climateaudit.org/?p=692#comment-39007
I don't think that is a necessary assumption. I expect it to converge to the same value regardless of the initial assumption about the noise. But for robustness I can try initializing it with the assumption that the signal is all noise, and with the assumption that the noise is white with a standard deviation of one, and see if I get the same results.
115. bender
See, this is the problem with posting every detailed thought. I thought you were suggesting it was somehow a helpful assumption to start with. Maybe not “necessary”, but somehow “better” in a substantive way. My attitude is: if it’s “not a necessary assumption”, why tell us about it? I guess because you write stream-of-consciousness style? Fine, but it makes it hard for us to separate wheat from chaff.
Never mind. It’s really a minor issue. Use the blog as you will. Looking forward to the result.
116. John Creighton
#115, I am not sure a useful algorithm can't be derived on the basis of that assumption. However, I abandoned that approach. Anyway, I was thinking about error bars today and what they mean if you are simultaneously estimating the natural response, the forced response due to error, and the forced response due to known forcing effects.
Jointly Gaussian variables are such that the surfaces of constant pdf value are hyperellipses. The shape of the ellipse is defined by the eigenvectors and eigenvalues of the covariance matrix. The N-standard-deviation ellipse, in the coordinate space defined by linear combinations of the eigenvectors and centered about the mean value of the estimate vector (the estimate vector is made up of the estimates of the parameters and errors), is defined as follows:

$N^2=v_1^2/\lambda_1+\cdots+v_n^2/\lambda_n$

where $v_n$ is the amount we deviate from the mean (the mean is the estimate vector) in the direction of the nth eigenvector, and $\lambda_n$ is the nth eigenvalue.
There exists a transformation that takes our space of possible estimates (including estimates of the parameters and errors) to this eigen-coordinate space. There exists another transformation that takes our space of possible estimates to our measurement space (the actual measurements augmented with the expected value of the error, which is zero).
We define the N-standard-deviation error bars for a given coordinate in our measurement space as the maximum and minimum possible estimates of that coordinate that lie on the N-standard-deviation hyperellipse given by:

$N^2=v_1^2/\lambda_1+\cdots+v_n^2/\lambda_n$
I suspect none of the following are always at the center of these error bars: the fit including initial conditions, known forcing and estimated error forcing; the fit including only known forcing, centered vertically within the error bars as best as possible; the actual temperature measurements.

Another question is: are these error bars smoother than the temperature measurements and any of the fits described in the last paragraph? I conjecture they are.
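For a linear readout of the estimate vector, those extremes have a closed form, which saves searching the ellipse numerically. A sketch, assuming an estimate vector mu with covariance P and a row vector a' mapping estimates to the fitted value at one time step (names illustrative):

% Over the N-sigma hyperellipse, the extreme values of the linear readout
% a'*z are a'*mu +/- N*sqrt(a'*P*a), the same answer the eigen-coordinate
% argument above gives.
band = N * sqrt(a'*P*a);
hi   = a'*mu + band;
lo   = a'*mu - band;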
117. John Creighton
On the subject of gain. The correlation coefficients are only a really good measure of gain if the output equals a constant multiplied by the input. Other types of gain include the DC gain, the maximum peak frequency gain, the matched input gain, and the describing function gain.
I will define the describing function gain as follows:
the time average of $E[y\,\bar{y}]/\sigma_x$ over the interval of interest,

where $y$ is the actual output, $\bar{y}$ is the output response due to input $x$, and $\sigma_x$ is the standard deviation of input $x$. I may try to come up with a better definition later.
118. UC
#116 Sorry, again hard to follow; maybe it's just me. But I'd like to know as well what those error bars actually mean. Multinormal distributions are out: try plotting CO2 vs. NH or Solar vs. NH and you'll see why. IMO 'forcing' refers to a functional relationship, not to a statistical relationship.
119. bender
'Forcing' refers to a functional relationship, not to a statistical relationship
Re #118. Yes. Forcing by factors A, B, C on process X just means the effects a, b, c are independent. When modeling the effects in a statistical model, one would start by assuming the forcings are additive: X=A+B+C, but obviously that is the simplest possible model: no lags, nonlinearities, interactions, non-stationarities, etc.
Not sure how your comment relates to #116, but then again not sure where #116 is going either. (But you gotta admit: it is going.)
120. John Creighton
#118 The error bars represent the range of values that can be obtained with plausible model fits (fits to parameters and errors). That is, we consider a confidence region of fits based on the statistics and we plot the supremum (least upper bound) and infimum (greatest lower bound) of all these fits.
This gives us an idea of how much of our model prediction we can rely on and how much is due to chance.
121. UC
# 119 , # 120 Don’t worry, we’ll get in tune soon. One more missing link: how are the fig7-corrs.dat confidence intervals computed? Is there a script somewhere?
122. John Creighton
You mean how did Mann do it?
Steve has a few posts on that. Click on the link in the sidebar that says Statistics. You should see a few topics on it as you go through the pages.
123. Steve McIntyre
I’ve discussed MBH confidence intervals for the reconstruction, try the sidebar MBH – Replication and scroll down. So far, Jean S and I have been unable to decode MBH99 confidence intervals nor has von Storch nor anyone else. We didn’t look at confidence intervals for the regression. Given the collinearity of the solar and CO2 forcing and the instability of correlation by window size, one would assume that the confidence intervals for Mann’s correlation coefficients should be enormous – but no one’s checked this point yet to my knowledge. Why don’t you see what you turn up?
124. UC
# 123
Sure, I can try. Not exactly the numbers in MBH98 but close:
http://www.geocities.com/uc_edit/mbh98/confidence.html
125. John Creighton
I am not sure if I am on the way to anything good yet. In the ARMA model I used a four-pole model and three zeros for each input. In the figure below, the solid line is the fit using my method, the dotted line is the fit using regression on a simple linear model (no lags), and the dots are the temperature measurements. My fit is better, but it is not a fair comparison because I am using a higher-order model.
http://www.geocities.com/s243a/warm/index.htm
Another interesting plot is a plot I did of the DC gain:
The model seems to say that the equilibrium response of the climate to a DC input (0 frequency, a constant signal) of either solar or CO2 of one standard deviation would change the temperature by an amount that is only about 10% of the amount we have seen the climate vary over the past 1000 years.
This is only very preliminary work and I need to spend a lot of time looking for bugs, analyzing the code, trying to better understand the theory, and hopefully eventually doing some Monte Carlo testing. I'll post the code in a sec. One thing that doesn't quite look right is that I was expecting the error parameters to have a standard deviation of one.
126. John Creighton
The code can be found here:
http://www.geocities.com/s243a/warm/code/
To get it to work in MATLAB, copy all the files into your working directory and change the .txt extension to a .m extension. Then run script2. In the yes/no question it asks, type 'y' your first time through. Don't forget to put in the single quotes. On your second time through you can type 'n'. This user input was added because MATLAB seems to take a while to initialize the memory it needs for arrays. This can really slow a person down if they are working on code.
127. fFreddy
Bump
John Creighton, if you put a full link in the first few lines of your post, it messes up the front page for Internet Explorer users. If you use the link button, you can put a link text instead of the URL, and the problem does not arise.
128. fFreddy
And again …
129. fFreddy
Avast ye, landlubbers …
130. fFreddy
The devil damn thee black, thou cream faced URL …
131. fFreddy
where got thou that goose look …
132. UC
# 124 cont.
How should we interpret these confidence intervals? Here's my suggestion: it is very unlikely that the NH reconstruction is a realization of an AR1 process with p… Meanwhile, when I run the code I get:

??? In an assignment A(:,matrix) = B, the number of elements in the subscript of A and the number of columns in B must be the same.
Error in ==> Z:\script2.m On line 148 ==> Ry(:,k-1)=Rx3(LE:end); %We keep track of the autocorrelation at each iteration
133. John Creighton
UC, I don’t get that error. To help me find the bug once you get the error message type:
size(Ry),size(Rx3),size(LE)
I suspect that Ry might be initialized to the wrong size.
134. John Creighton
Here is my suggestion. On line 85 change
RYs=zeros(L,L,N);
To
Ry=zeros(LE,N);
I think that the version of MATLAB I am using (version 7) initializes Ry in the assignment statement where you got the error. I’ll go make this change in my code now.
135. UC
Thanks, it seems to work now. I'm still not quite sure what it does, though; I have a fear of overfitting.
136. John Creighton
Glad it is working. I've been thinking about where to go with it, and it bothered me that the noise parameters were not white. Well, they looked white except for the standard deviation not being one. I think this is caused by me weighting the estimated error too much, and the residual too little, in the estimation of the error autocorrelation.
I am going to add some adaptation into the algorithm as follows. The initial assumption is that the measurement part of the residual is white noise. My first estimate of the error autocorrelation will be done by taking the residual error plus a weight times the estimated error, and then autocorrelating that weighted sum. I will choose the weight so that, after I do my frequency-domain smoothing, the value of the autocorrelation at zero equals the expected variance of the residual.
This gives me an estimate of the error covariance, which I use in the regression to get a new estimate of the error parameters. To get my new estimate of the measurement residual standard deviation, I will multiply what I previously thought it was by the standard deviation of the error parameters. The idea is to force the standard deviation of the error parameters to one. Then, hopefully, at convergence all of the residual covariance error can be explained by the mapping of the error parameters onto the measurements via the square root of the covariance matrix. (fingers crossed)
137. UC
#136
So your final goal is to explain the NH-temp series using only forcings and linear functions of them, in such a way that the residual will be white with std 1? Sorry if I'm completely lost here!
138. John Creighton
Forcings include the error and the known inputs. The estimation of the error should help make up for model uncertainty. The optimal solution, in a statistical sense, of the linear equation

$y=Ax+e$

is equal to the least-squares solution of the whitened equation

$R^{-T}y=R^{-T}Ax+R^{-T}e$

where R is the Cholesky factor of the covariance of the noise (e=y-Ax):

$E[ee^T]=P=R^TR$

(P is the covariance matrix of the noise, i.e. of the residual). So the optimal (generalized least squares) solution is:

$\hat{x}=(A^TP^{-1}A)^{-1}A^TP^{-1}y$
The problem is we don't know the noise, so how do we find the most efficient estimator? Consider the equation

$\begin{bmatrix}y\\0\end{bmatrix}=\begin{bmatrix}A&S\\0&I\end{bmatrix}\begin{bmatrix}x\\e\end{bmatrix}$

where e is a white noise vector. We want to estimate e and x, and we can do this by weighted regression; but if we want the most efficient estimate we have to whiten our regression problem as we did above. This can be done by multiplying the top block of the matrix equation by $S^{-1}$ (here S plays the role of R above) to get:

$\begin{bmatrix}S^{-1}y\\0\end{bmatrix}=\begin{bmatrix}S^{-1}A&I\\0&I\end{bmatrix}\begin{bmatrix}x\\e\end{bmatrix}$

And the residual error

$\begin{bmatrix}S^{-1}y\\0\end{bmatrix}-\begin{bmatrix}S^{-1}A&I\\0&I\end{bmatrix}\begin{bmatrix}\hat{x}\\\hat{e}\end{bmatrix}$

is white noise. Thus, although we don't know S, we know what the residual should look like if we choose S correctly. One of my conjectures is that if we choose an S that gives a residual error that looks white, then we have a good estimate. My second conjecture is that if we pick a well-conditioned S and estimate the residual based on that, we can get successively better estimates of S until we are close enough to get a good estimate. I conjecture we will know we have a good estimate when the error looks like white noise (flat spectrum, standard deviation of one).
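A quick diagnostic for that conjecture, as a sketch (it assumes the fitted x and the current S are in hand):

e_hat = S\(y - A*x);               % implied white-noise estimate
% A white, unit-variance e_hat should have std near 1 and near-zero
% lag-1 autocorrelation.
fprintf('std(e_hat)     = %.3f (want ~1)\n', std(e_hat));
rho1 = (e_hat(1:end-1)'*e_hat(2:end)) / (e_hat'*e_hat);
fprintf('lag-1 autocorr = %.3f (want ~0)\n', rho1);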
139. UC
Let's see if I got it now. So, you assume y=Ax+S*e, where A is the [co2 solar volc] data and e is white noise. Due to the matrix S, we have a correlated noise process (S should be lower triangular to keep the system causal, I think). If we know S, we can use weighted least squares to solve for x. But you are trying to solve for both x and S? If this is the case, it would be good to see how the algorithm works with simulated measurements, i.e. choose x and S, generate e using MATLAB's randn command, and use y_s=A*x+S*e as an input to your program.
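Something like this, say (all values illustrative, and your_estimator is a placeholder for the routine under test):

randn('state', 0);                % ver-7-era seeding, for repeatability
N     = 386;                      % e.g. years 1610-1995
x     = [0.4; 0.2; -0.1];         % chosen 'true' sensitivities
e     = randn(N, 1);              % white driving noise
y_s   = A*x + S*e;                % A = [co2 solar volc], S chosen by hand
x_hat = your_estimator(A, y_s);   % does it recover x (and S)?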
140. John Creighton
#139 (UC) you got it exactly. Thanks for the lower-triangular idea. I am not sure it is necessary in the algorithm for S to be lower triangular, but for simulation purposes I think it is a good idea. You can have non-causal filters (smoothing), so I am not sure if non-causality is that bad. For simulation purposes I am not sure what S you should choose, as I think some choices may be difficult to solve for. The choices I think would be difficult are those where S results in non-stationary error statistics.
On another note, I was thinking about the collinearity of proxies and how best to handle this in a regression problem. It occurred to me that we can add extra error terms to introduce error into the cost function if the regression proxies are not collinear. We do this by introducing an error (residual) measurement that is the difference between two proxies.
141. UC
A physical model should be causal; the present is independent of the future (just my opinion..). And actually, in this case, weighted least squares does the smoothing anyway; it uses all the data at once. S for AR1 is easy; that's a good way to start.
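For the record, the AR1 case takes three lines (the coefficient here is illustrative):

% For e_t = phi*e_(t-1) + w_t with unit-variance white w, the covariance
% is P(i,j) = phi^|i-j|/(1-phi^2), and its lower Cholesky factor is a
% causal (lower triangular) S with P = S*S'.
phi = 0.5;  N = 100;
P = toeplitz(phi.^(0:N-1)) / (1 - phi^2);
S = chol(P, 'lower');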
142. John Creighton
It's been a while since I posted in this thread. I had some thinking to do about my results, and as a result I made some code changes. The ideas are the same, but I thought of a much better way, numerically, to converge on the results I had in mind. Is the algorithm an efficient and accurate way to get the fit? I am not sure, but:
1. The fit looks really good.
2. The noise parameters look like white noise with standard deviation one, as I expected.
3. The estimate of the error attributes more error to low-frequency noise.
4. All the parameters appear to converge and all the measures of the error appear to converge.
5. The predicted DC gain is not that far off from a basic regression fit with a simple linear model (no autoregressive or moving-average lags).
The figures in my post 125
http://www.climateaudit.org/?p=692#comment-39708
are replaced by my new figures, and all the code in the link given in that post is replaced. Unfortunately, at this time I have no expression for the uncertainty. On the plus side, the algorithm estimates the noise model as well as the system model, so several simulations can be done with different noise to see the effect the noise has on the estimation procedure and on the temperature trends. The algorithm is interesting because, unlike most fits that use a large number of parameters, only a few of the parameters are attributed to the system and the majority of the parameters are attributed to the error.
Thus, although I do not have a measure of uncertainty, my procedure tells me that 60% of the variance in the signal is attributed to the noise model. While this does not give us error bars, it tells us that, from the perspective of an ARMA model with 3 zeros and 4 poles and a noise model of similar order, the majority of the signal is due to noise and not to the external drivers CO2, solar and volcanism.
143. John Creighton
#89 I think a good part of the temperature variation cannot be explained with simple linear models driven by the combined forcing agents solar, volcanic and CO2. I believe the dynamics of the earth induce randomness in the earth's climate system. I've tried doing regression with a simultaneous identification of the noise here:
http://www.climateaudit.org/?p=692#comment-39708
I don't think the results are that different from standard multiple regression for the estimates of the deterministic coefficients, but it does show that the system can be fit very well to an ARMA model plus a colored noise input. Regardless of what regression technique you use, a large part of the temperature variance is not explained by the standard three forcing agents alone. Possible other forcing agents (sources of noise) could be convection, evaporation, clouds, jet streams and ocean currents.
144. John Creighton
Oops, I did my HTML wrong:
145. John Creighton
Please ignore the above two posts, I meant to post here:
http://www.climateaudit.org/?p=796#comment-43600
146. Posted Aug 30, 2006 at 12:39 PM
Regardless of what regression technique you use a large part of the temperature variance is not explained by the standard three forcing agents alone
That is true, but I would put it like this: 'a large part of the MBH98-reconstructed NH temperature variance is not explained by the given three forcing agents alone'. I don't think that the MBH98 NH recon is exact.
147. John Creighton
I made some changes to the file script2.txt
http://www.geocities.com/s243a/warm/code/script2.txt
The algorithm wasn't stable for a 20-parameter fit; now it is stable for at least a 20-parameter fit. I enhanced figure 2 so you can compare it to a standard regression fit of the same order. I will do the same for the plot of the DC gains and the bode plot.
Oh yeah, I added a bode plot for the parameters identified by my algorithm. Not too interesting. The biggest observation is that the earth seems to have a cutoff frequency of about 0.2 radians per year, or 0.06 cycles per year. In other words, inputs with a period of less than about 16 years are attenuated by roughly 2 dB:
http://www.phys.unsw.edu.au/~jw/dB.html
148. Willis Eschenbach
Re #143, you say:
I think a good part of temperature variation cannot be explained with simple linear models as a result of the combined forcing agents, solar, volcanic and CO2.
Let me suggest that you might be looking at the wrong forcing agents. You might therefore be interested in LONG TERM VARIATIONS IN SOLAR MAGNETIC FIELD, GEOMAGNETIC FIELD AND CLIMATE, Silvia Duhau, Physics Department, Buenos Aires University.
In it, she shows that the temperature can be explained quite well using the geomagnetic index (SSC), total solar irradiation (TSI) and the excess of length of day (LOD). Look at the October 29, 2005, entry at http://www.nuclear.com/environment/climate_policy/default.html for more info.
w.
149. John Creighton
Willis, those are some very interesting articles at the site you gave. The mechanisms sound very plausible, and it is disappointing that the figures don't seem to contain much high-frequency information. Of course, perhaps a lot of the high-frequency components of global average temperatures are due to the limited number of weather stations over the globe.
150. John Creighton
Here is an interesting section from the link Willis gave:
…On the sources of climate changes during the last century
Besides CMEs, which impact in the Earth’s environment may be measured by the SSC index introduced by Duhau (2003a), other sources of climate changes of natural origin are variations in solar radiative output which strength is measured by solar total irradiation index (TSI) (see e.g. Lean and Rind, 1999) and changes in the Earth’s rotation rate which is measured by the excess of length of day variations (LOD) (Lambek and Cazenave, 1976; Duhau, 2005).
The best fitting to the long-term trend in NH surface anomaly that includes the superposition of the effect of the above three variables is given by the equation (Duhau, 2003b):
NHT(t) - NHT(t0) = 0.0157[SSC(t) - SSC(t0)] + 0.103[TSI(t) - TSI(t0)] - 0.022[LOD(t) - LOD(t0)],
where X(t0) is the long-term trend in the variable X at t0 = 1900. This particular time was chosen as prior to the industrialization process. Since the data to compute SSC start at 1868 and end at 1993, the three terms of the above equation (figure 7) and their superposition (figure 8) have been computed over this period.
http://www.nuclear.com/environment/climate_policy/default.html
I’ll see if I can find some data.
151. John Creighton
hmmm more driver data sets (ACI)
George Vangengeim, the founder of ACI, is a well-known Russian climatologist. The Vangengeim-Girs classification is the basis of the modern Russian climatological school of thought. According to this system, all observable variation in atmospheric circulation is classified into three basic types by direction of the air mass transfer: Meridional (C); Western (W), and Eastern (E). Each of the above-mentioned forms is calculated from the daily atmospheric pressure charts over northern Atlantic-Eurasian region. General direction of the transfer of cyclonic and anticylonic air masses is known to depend on the distribution of atmospheric pressure over the Atlantic-Eurasian region (the atmosphere topography).
http://www.fao.org/docrep/005/Y2787E/y2787e03.htm
152. John Creighton
Another interesting climate factor:
http://www.nasa.gov/centers/goddard/news/topstory/2003/0210rotation.html
153. John Creighton
I found some data on the earth's rotation:
http://www.zetatalk.com/theword/tworx385.htm
It seems to have a hockey-stick shape from 1700 to the year 2000. From the link I posted earlier, the slowing of the earth's rotation would have a cooling effect, so this produces an anti-hockey-stick effect. This makes sense because it would increase the standard deviation of the temperature on earth, thereby increasing the cooling, since heat flux is proportional to the fourth power of the temperature.
154. Posted Oct 20, 2006 at 2:48 AM
This attribution stuff is quite interesting. I used to think that the methods 'they' use are fairly simple, but maybe they aren't.
I've been following the topical RC discussion, where 'mike' refers to Waple et al Appendix A, which is a bit confusing. Especially Eq. 9 is tricky. I would put it this way:
$\hat{s}_f=s_f+\frac{\langle FN \rangle}{\langle F^2\rangle}$
i.e. the last term is the error in the sensitivity estimate. This shows right away that a high noise level (other forcings, reconstruction error, 'internal climate noise') makes the sensitivity estimate noisy as well. The same applies if the amplitude of the forcing is low during the chosen time window. That is one explanation for #113 (CO2 does not vary much during 1610-1760).
But maybe I’m simplifying too much.. See these:
For the sake of our foregoing discussions, we will make the conventional assumption that the large-scale response of the climate system to radiative forcing can reasonably be approximated, at least to first order, by the behavior of the linearized system.
(Waple et al) I thought the first-order approx. is the linear model..
These issues just aren’t as simple as non-experts often like to think they are. -mike
(RC)
155. Posted Oct 20, 2006 at 2:52 AM
I see; the eq. should be 'estimate of s equals s + E(FN)/E(F*F)'. Hope you'll get the idea.
156. Steve McIntyre
UC, is my LaTeX edit in 154 OK?
157. Posted Oct 20, 2006 at 4:52 AM
Yes, thanks. The original paper uses a prime, and that didn't get through. I think they mean 'estimate of s'. But on the other hand, they say s (without the prime) is the 'true estimate of sensitivity'. This is amazing: it takes only 4 simple-looking equations and I'm confused. These issues just aren't as simple as non-experts like me like to think.
158. Jean S
I thought the first-order approx. is the linear model..
Obviously you don't have proper training in the Mann School of Higher Understanding
159. Posted Oct 20, 2006 at 1:01 PM
# 158
I see robustly estimated medians, true estimates, etc. I need some practice.. Seriously, I think MBH98 fig 7 and MBH99 fig 2 are the ones people should really look into.
But when I really think about it, this is my favourite
All of the extra CO2 in the atmosphere is anthropogenic
(-gavin)
160. Posted Oct 21, 2006 at 4:12 AM
I know people here don't like RC, but I think this topic is a good introduction to MBH98 fig 7 problems (and even more general ones).
Some highlights:
None of your analyses take account of the internal variability which is an important factor in the earlier years and would show that most of the differences in trends over short periods are not significant. Of course, if you have a scheme to detect the difference between intrinsic decadal variability and forced variability, we’d love to see it. -gavin (#94)
gavin started to read CA? If there is no scheme to detect the difference between 'internal variability' and forcings, then we don't know how large the last term in the #154 eq. is, right?
The big problem with Moberg (or with any scheme that separates out the low frequency component) is that it is very very difficult to calibrate that component against observed instrumental data while still holding enough data back to allow for validation of that low frequency component. – gavin (#94)
Hey, how about MBH99 then? Or is this different issue?
Following up Gavin’s comment, it has indeed already been shown- based on experiments with synthetic proxy data derived from a long climate model simulation (see Figure 5 herein)- -that the calibration method used by Moberg et al is prone to artificially inflating low-frequency variability. -mike (#94)
experiments, synthetic, simulation, mike hasn’t started to read CA.
161. TCO
The discussion between Rasmus and Scafetta is a hoot over at RC. Rasmus persists in complaining about methods to get a linear relationship based on delta(y)/delta(x). He has the bizarre stance of saying that because the method is in danger with low differentials (which only Rasmus tries, not Scafetta), the method is inherently wrong. I guess Rasmus thinks Ohm's Law is impossible to prove as well! Poor Scafetta, he just doesn't know what a moron he is dealing with.
Gavin, on the other hand, is more artful in his obfuscation, but doesn't bother putting Rasmus out of his misery (nor does he support his silly failure to understand). All that said, I think Moberg is a very suspect recon, so examinations based on it are less interesting. Scafetta does make the cogent point that D&A studies and modeling studies vary widely based on what they mimic (Moberg or Mann), so that given the uncertainty in the recon, there is uncertainty in the models.
162. Posted Oct 21, 2006 at 10:21 AM
I think Gavin has good points, but they are in conflict with MBH9x. Dialogue between Scafetta and Gavin might lead to interesting conclusions.
163. Posted Oct 22, 2006 at 9:24 AM
The distribution of the last term in the #154 eq is very important. And not so hard to estimate; it is just simple linear regression. Thus, if the terms in N are mutually uncorrelated (which does not hold, but let's make the assumption here), then the variance of that error term is
$\sigma ^2(X^TX)^{-1}$
and X is simply a vector that contains the CO2 (or any other forcing in question) values, and $\sigma ^2$ is the variance of N. So the variance of N is important, but so is the variance (and amplitude) of the forcing! I think we should think about MBH98 figure 7 from this point of view, and forget those significance levels they use..
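Numerically, the point takes only a few lines of MATLAB (a sketch; idx marks the chosen window and co2 the forcing series, both assumed loaded):

sigma2  = 1;                          % variance of the noise N (illustrative)
F       = co2(idx) - mean(co2(idx));  % centered forcing inside the window
var_err = sigma2 / (F'*F);            % variance of the last term in #154's eq.
% Shorter windows or flatter forcing shrink F'*F and inflate var_err.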
164. Posted Oct 23, 2006 at 8:05 AM
I think that the error variance, as shown in #163, explains a lot. Shorter window, more error. More noise (N), more error. The variability of the forcings within the window does matter. And finally, if (for example) solar and CO2 are positively correlated within the window, the errors will be negatively correlated. I tried this with the MBH98 fig 7 supplementary material; it makes sense to me.
And I think these are the issues Ferdinand Engelbeen points out in that RC topic (# 71). And Mike says that it is nonsense.
165. Posted Oct 24, 2006 at 8:38 AM
Here's the MATLAB script, if someone wants to try/correct/simulate/etc.
166. bender
UC, could you make the script turn-key so that the figures load correctly from the ftp site?
167. Posted Oct 24, 2006 at 9:45 AM
bender, I don't know how to force MATLAB to load directly from ftp (or was that the question?)
http://ftp.ncdc.noaa.gov is down, but http://www.nature.com/nature/journal/v430/n6995/extref/nature02478-s1.htm seems to work. 'Save as' to the working directory and then it should work (my version-5 MATLAB changes fig7-co2.txt to fig7_co2 when loaded; not sure what the new versions do)
168. Jean S
re #167: worked fine on my ver 6.5. The command to use is mget, but I think it was introduced in ver 7
169. bender
Re #167
That was precisely the question. Thx.
170. bender
Re #168
But you have the files already in a local directory, correct? I want it to run without having to download anything.
171. Jean S
re #170: Yes, and the only solution I know for Matlab to do that is with the wget-command, which, I think, was introduced in ver 7…
But, hey, check the Hegerl et al paper I linked… it has an attribution section …
172. John Creighton
Yikes! I'm looking back at what I did and I'm scratching my head. I think I need to document it a little better. No wonder there wasn't more discussion.
173. John Creighton
Looking back, I see what I've done and I am wondering why I didn't use a state-space model. The temperature, carbon dioxide and volcanic dust should all be states. Perhaps to see the contribution each driver has on temperature you don't need a state-space model, but a state-space model will help to identify the relationship between CO2 and temperature. It will also allow predictions and perhaps even the identification of a continuous model.
174. Jean S
175. John Creighton
#174 that could be of interest. I have, though, been developing my own Kalman filter software. I lost interest in it for a while, but this could reinvigorate my interest. The software I developed before can deal with discontinuous nonlinearities. What was causing me trouble was trying to simplify it.
176. John Creighton
Michael Mann tried to fit the northern hemisphere mean temperature to a number of drivers using linear regression in MBH98. To the untrained eye the northern hemisphere temperatures look quite random. As a consequence it is hard to tell how good the fit is. One can apply various statistical tests to determine the goodness of fit, but many statistical tests rely on some knowledge of the actual noise in the process.
I had become frustrated that I could not determine what a good fit was for the given data. I decided to try to break the signal into a bunch of frequency bands, because I was curious how the regression coefficients of the drivers depended on the frequency band. I decided to graph each band first before trying any fitting technique.
The three bands were: a low-frequency band, which results from about a 20-year smoothing; a mid-frequency band, covering periods between 5 and 20 years; and a high-frequency band, with periods of less than 5 years.
Each filter is almost linear phase. The filters produce signals that are quite orthogonal and highly linearly independent. In figure 1-A the solid line is the original northern hemisphere temperature signal and the dotted line is the sum of the three temperature signals produced by the filters. As you can see, the sum of the three bands adds up almost perfectly to the original signal.
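A sketch of that decomposition (the moving-average smoothers here are illustrative stand-ins, not the actual near-linear-phase filters behind the figures; by construction the three bands sum back to the original):

function [lo, mid, hi] = three_bands(nh)
% Split the NH series into three bands that add back up to the original.
lo  = ma(nh, 20);        % ~20-year smoothing: low-frequency band
mid = ma(nh, 5) - lo;    % periods roughly between 5 and 20 years
hi  = nh - lo - mid;     % periods shorter than ~5 years
end

function s = ma(x, w)
% Centered moving average (linear phase) via convolution.
k = ones(w, 1) / w;
s = conv(x, k, 'same');
end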
What is clear from the graphs is that the low-frequency signal looks much more like the graph of the length of day than like the carbon dioxide signal. If carbon dioxide were the principal driver, then the dip in temperature between 1750 and 1920 would be very difficult to explain. Conversely, the length of day fell between 1750 and 1820 and then started to rise again after 1900. One wonders why the length of day would impact the earth much more than carbon dioxide, but the evidence is hard to ignore.
The mid-frequency band looks like it has frequency components similar to the sunspot cycle, but the relationship does not appear to be linear. The high-frequency band looks difficult to explain. My best bet is cloud cover.
my web page
177. John Creighton
The code to make the above graphs can be found here:
http://www.geocities.com/s243a/warm-band/mbh98_bands_files/code/
178. John Creighton
My new suspicion is that volcanoes are the cause of global warming. In Mann's graph, if you look at the rise in temperature from 1920 to 1950 and then look down below at his measure of volcanic activity, you will notice that before 1920 volcanoes were quite frequent, but afterwards they almost stopped in terms of intensity and frequency. Additionally, if CO2 were the cause, you would expect the temperatures to continue to rise, but the rise in temperature between 1920 and 1940 far surpasses any temperature rise after that. Mann's data for the northern temperature mean looks like a first-order response to a step change. It does not look like a response to an exponentially increasing input driver.
179. Posted Nov 16, 2007 at 4:43 PM
Re #176 John, it's intriguing that the low-frequency variation in NH temperature seems to match better (visually at least) with length-of-day variation than with CO2.
Length of day on a decadal scale, per this article, seems to be related to activity in Earth's core, which in turn could be forced by internal dynamics and/or solar system activity and/or surface events. How that ties to surface temperature, though, is unclear.
Perhaps an alternate factor affecting length of day is that warming oceans expand, which would move them farther from Earth’s center, which would slow the planet’s rotation.
Something to ponder.
180. Posted Nov 16, 2007 at 4:57 PM
If Earth’s rotation (length of day) is affected by solar factors, and if John’s work above shows a correlation between length of day and NH temperature reconstructions for recent centuries, then maybe there’s a clue somewhere in this about a solar / climate relationship.
181. UC
With a window length of 201 I got a bit-true emulation of the Fig 7 correlations. Code is here. It seems to be OLS with everything standardized (is there a name for this?), not partial correlations. These can quite easily be larger than one.
The code includes a non-Monte Carlo way to compute the '90%, 95%, 99% significance levels'. The scaling part still needs help from CA statisticians, but I suspect that the MBH98 statement 'The associated confidence limits are approximately constant between sliding 200-year windows' is there to add some HS-ness to the CO2 in the bottom panel.
This might be an outdated topic (nostalgia isn't what it used to be!). But in these kinds of statistical attribution exercises I see a large gap between the attributions (natural factors cannot explain the recent warming!) and the ability to predict the future.
http://unapologetic.wordpress.com/2011/07/26/cartans-formula/?like=1&source=post_flair&_wpnonce=f9365a8c97 | # The Unapologetic Mathematician
## Cartan’s Formula
It turns out that there is a fantastic relationship between the interior product, the exterior derivative, and the Lie derivative.
It starts with the observation that for a function $f$ and a vector field $X$, the Lie derivative is $L_Xf=Xf$ and the exterior derivative evaluated at $X$ is $\iota_X(df)=df(X)=Xf$. That is, $L_X=\iota_X\circ d$ on functions.
Next we consider the differential $df$ of a function. If we apply $\iota_X\circ d$ to it, the nilpotency of the exterior derivative tells us that we automatically get zero. On the other hand, if we apply $d\circ\iota_X$, we get $d(\iota_X(df))=d(Xf)$, which it turns out is $L_X(df)$. To see this, we calculate
$\displaystyle\begin{aligned}\left[L_X(df)\right](Y)&=\lim\limits_{t\to0}\frac{1}{t}\left(\left[(\Phi_t)^*(df)\right](Y)-df(Y)\right)\\&=\lim\limits_{t\to0}\frac{1}{t}\left(df((\Phi_t)_*Y)-df(Y)\right)\\&=\frac{\partial}{\partial t}\left[(\Phi_t)_*Y\right](f)\bigg\vert_{t=0}\\&=\frac{\partial}{\partial t}Y(f\circ\Phi_t)\bigg\vert_{t=0}\\&=YXf\end{aligned}$
just as if we took $d(Xf)$ and applied it to $Y$.
So on exact $1$-forms, $\iota_X\circ d$ gives zero while $d\circ\iota_X$ gives $L_X$. And on functions $\iota_X\circ d$ gives $L_X$, while $d\circ\iota_X$ gives zero. In both cases we find that
$\displaystyle L_X=d\circ\iota_X+\iota_X\circ d$
and in fact this holds for all differential forms, which follows from these two base cases by a straightforward induction. This is Cartan’s formula, and it’s the natural extension to all differential forms of the basic identity $L_X(f)=\iota_X(df)$ on functions.
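As a quick sanity check (this verification is my addition, not part of the original argument), we can confirm the formula directly on a decomposable $1$-form $f\,dg$, using only $L_X(dg)=d(Xg)$ from above and the Leibniz rules:

$\displaystyle\begin{aligned}\iota_Xd(f\,dg)+d\iota_X(f\,dg)&=\iota_X(df\wedge dg)+d(f\,Xg)\\&=(Xf)\,dg-(Xg)\,df+(Xg)\,df+f\,d(Xg)\\&=(Xf)\,dg+f\,d(Xg)\\&=(L_Xf)\,dg+f\,L_X(dg)=L_X(f\,dg)\end{aligned}$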
Posted by John Armstrong | Differential Topology, Topology
## 1 Comment »
1. [...] Cartan’s formula in hand we can show that the Lie derivative is a chain map . That is, it commutes with the exterior [...]
Pingback by | July 28, 2011
http://mathoverflow.net/questions/4700/whats-the-right-object-to-categorify-a-braided-tensor-category/4744 | ## What’s the right object to categorify a braided tensor category?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
The yoga of categorification has gained a lot of popularity in recent years, and some techniques for it have made a lot of progress. It's well-understood that, for example, a ring is probably categorified by a monoidal abelian (or triangulated) category.
But I get a little more confused another step up. If I have a braided tensor category, what sort of 2-category should I expect to categorify it?
Edit: I realized this wasn't the right question to ask; what I really wanted to know is What structure on a monoidal category would make its 2-category of module categories monoidal and braided?
When you say "a ring is probably categorified by a monoidal category" I assume you mean by a monoidal additive category? Plain monoidal categories just categorify monoids. – Mike Shulman Nov 9 2009 at 2:01
Ring ~ monoidal category is close to the kind of categorification I am most interested in, but unfortunately I have no idea how to continue to the next step. – Reid Barton Nov 9 2009 at 2:26
Perhaps the more interesting question is, given a braided 2-category and a braided category, how does one justify the claim that one is a categorification of the other? – S. Carnahan♦ Nov 9 2009 at 16:52
## 4 Answers
I'd guess you want a braided monoidal (i.e. 2-monoidal) 2-category, based on the guess that taking the Grothendieck group moves you one step left on the Baez-Dolan periodic table. That is, you want a 4-category with only one object and only one morphism.
As Noah mentioned, you want a braided 2-category, or roughly equivalently, a 4-category with one object (i.e., one 0-morphism) and one 1-morphism. Under this translation, objects in the braided category become 2-endomorphisms of the 1-morphism, and the braided monoidal structure is given by the composition law for 2-endomorphisms, which should admit a canonical action of the E[2]-operad (assuming you chose a good model for n-categories).
I think the usual decategorification process involves removing all non-invertible morphisms above level n, and (if you don't believe in infinity-categories) collapsing isomorphism classes of n-morphisms to points. I may be missing some subtleties in the abelian/enriched situation, as it's been a while since I've read Baez-Dolan. In your case, n=3, so you can repeat the first sentence in this paragraph with the appropriate substitution. In the braided language (shifting down by 2), this means you remove the non-invertible 2-morphisms and contract the isomorphism classes of maps to be single maps. If your braided category was weak, this strictifies composition of 1-morphisms.
The free braided monoidal 2-category with duals on one self-dual object generator is the 2-category of 2-tangles. The original proof of Fisher had a gap, in part because he relied on the original movie-move theorem. In our abstract there was a slight mis-statement about the meaning of "movie" in that context. The error was fixed in CRS, and then Baez-Langford gave a precise characterization of a free braided monoidal 2-category with duals.
Ben's question might be along the lines of using the Lauda-Khovanov construction to build a representation theory of the categorification of $u^+(sl(2))$, for example. Aaron spoke about this in Riverside on Saturday morning. According to Aaron's slides, there was hope that such representations can give a braided monoidal 2-category with duals.
Is anyone working on giving an explicit "other" braided monoidal 2-category with duals?
That's exactly what I'm interested in, though I think in retrospect this was the wrong question to ask. – Ben Webster♦ Nov 9 2009 at 23:52
I have a lot I would like to say about this, but it's not really in any coherent form in my head, so I'll content myself to one comment without justification: as the first step, I believe you will need to repeat the process that takes us from the natural numbers to the integers and from the category of R-modules to its derived category, applying it to the category of [whatever you think of as categorifying an R-module]s.