http://mathoverflow.net/questions/90524/how-many-iterations-are-required-for-the-lanczos-algorithm-to-converge | ## How many iterations are required for the Lanczos algorithm to converge?
I am trying to find the n smallest eigenvalues and eigenvectors of an $N\times N$ SPD matrix using the Lanczos method. What is the number of iterations usually required? I mean, does it scale as $O(N)$ or $O(\sqrt{N})$, assuming $n \ll N$ ($n$ is usually $\le 10$ and $N \sim 10^6$ or $10^7$)?
-
See my short note: mathoverflow.net/questions/75370/… – S. Sra Mar 8 2012 at 0:52
Thanks Suvrit. From "Estimating the Largest Eigenvalue by the Power and Lanczos Algorithms with a Random Start by J. Kuczyński and H. Woźniakowski", it looks like for a fixed error, the number of iterations for the Lanczos method scales as $O(ln(N))$ for finding the eigenvalues. Is there a difference if I need to calculate the eigenvectors too? I assume the complexity is not different, but would like to know if that is not so. – unknown (yahoo) Mar 9 2012 at 1:20
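As a purely illustrative aside (my own addition, not part of the thread above): the kind of computation being discussed, extracting a handful of the smallest eigenvalues of a large sparse SPD matrix with a Lanczos-based solver, might look as follows. The matrix here is just a stand-in, and the SciPy call assumes shift-invert mode.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Stand-in SPD matrix: the 1D discrete Laplacian (tridiagonal, sparse).
N = 100_000
A = sp.diags([-np.ones(N - 1), 2 * np.ones(N), -np.ones(N - 1)],
             offsets=[-1, 0, 1], format="csc")

# Shift-invert around sigma=0 targets the eigenvalues closest to zero,
# i.e. the smallest ones for an SPD matrix; k plays the role of n <= 10.
vals, vecs = eigsh(A, k=10, sigma=0, which="LM")
print(np.sort(vals))
```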
http://www.physicsforums.com/showthread.php?t=149565&page=2 | Physics Forums
## Windows graphing calculator software
Hi,
I was looking for some Windows software that did the job a graphing calculator does, but on the computer screen. I've been using the calculator that comes with Microsoft Student, but it lacks some features that would be quite useful (especially when it comes to complex functions).
Which software should I use?
Thank you very much.
Mathematica (pricy) or Maxima (freeware).
If you want ONLY a graphing software, then use gnuplot. Maxima, which uses gnuplot for graphing, is a full-fledged (at least among oss/freeware) CAS.
Aren't there any 'easier' ones? I mean, not requiring you to code your queries.
I used this a couple of years ago...you've got to just enter equations, although I'm not sure if it handled complex functions. http://www.graphcalc.com/
The web site www.webgraphing.com plots complex functions with calculus analysis. Also, it is pretty easy to use.
I wasn't able to plot complex functions in GraphCalc or WebGraphing, but I'm giving Mathematica a try. However, I'm not able to rotate 3D graphics yet. How can I do it? Thanks.
It would be helpful if you gave an example of what you call a complex function. The term can be used in a variety of ways, so how you use it is important. For example, if you mean a complex-valued function of a real variable, then there are no graphing calculators that will graph it; you will need to create your own programs. On the other hand, if you mean, say, real-valued trig functions of a real variable, just about any graphing calculator will do, including the ones you tried and did not get what you wanted. If you mean 3D graphing, there are many graphing calculators that will do the job. It all depends...
Quote by springo I wasn't able to plot complex functions in GraphCalc or WebGraphing, but I'm giving Mathematica a try. However, I'm not able to rotate 3D graphics yet. How can I do it? Thanks.
You're finding it easier to enter codes in Mathematica than in Maxima!
You might also want to look into emulator software, which will let you run a "virtual" Texas Instruments or HP hand-held graphing calculator on your PC screen. http://www.ticalc.org/basics/calculators/ti-89.html#10 - Warren
Quote by calcwiz It would be helpful if you gave an example of what you call a complex function.
I meant something like:
$$f(z) = \frac{1}{z}$$
(with z being any number from the complex plane)
Quote by neutrino You're finding it easier to enter codes in Mathematica than in Maxima!
I didn't mean to say that. I didn't try Maxima, I just tried the website and GraphCalc. As I couldn't see how to use them for my purposes, I decided to try using Mathematica. So, you mean Maxima is easier than Mathematica (but has the same functionality)?
Quote by chroot You might also want to look into emulator software
I don't have a TI, but a Casio. I'll look for my "virtual" one.
Thank you all for your answers!
So, you want to graph complex-valued functions of a complex variable. That involves a domain of two dimensions (x+iy) and a range of two dimensions (u+iv). This is not for the faint of heart!
If you are serious, you might check out the book "Complex Analysis with Mathematica" by William T. Shaw, copyright 2006, Cambridge University Press. There you will find various ways to plot complex valued functions of a complex variable.
Quote by calcwiz If you are serious, you might check out the book "Complex Analysis with Mathematica" by William T. Shaw, copyright 2006, Cambridge University Press. There you will find various ways to plot complex valued functions of a complex variable.
Thanks, I'll check that book as soon as I can.
Since we're talking about Mathematica, please let me ask again: how can I rotate a 3D graph?
Thank you.
The quickest way is to use any of several Interactive 3D Graphing Calculators on WebGraphing.com. You can check out some examples of 3D function graphs at: http://www.webgraphing.com/examples_graph3d.jsp and follow the instructions to rotate, zoom in/out, spin, etc.
Thanks! I had tried that before and it's great. However I'd still like to know if it's possible to do the same in Mathematica.
Sure. Check out the web site for LiveGraphics3D: http://www.vis.uni-stuttgart.de/~kraus/LiveGraphics3D/ That is a source for software that works with Mathematica to produce 3D Interactive Graphs.
http://physics.stackexchange.com/questions/37570/state-dependent-diffusions-ficks-law-vs-fokker-plancks-which-and-why?answertab=oldest | # State-dependent diffusions: Fick's law vs. Fokker-Planck's, which and why?
Consider a "state-dependent diffusion": a diffusion process for which the diffusion coefficient $D(x)$ depends on the (stochastic) state $x$ of the system. (An example is provided by the diffusion of tracers in a spatially inhomogeneous bath, e.g. with varying viscosity or temperature.) What is the associated flux? If $c(x)$ is the probability to find the system in the state $x$ at a given time, should the flux read $j(x)=-D(x)\nabla c(x)$, as in Fick's law? Or rather $j(x)=-\nabla(Dc)(x)$, as in the Fokker-Planck equation? (I'm using the terms "Fick" and "Fokker-Planck" here only as tags for each alternative.)
It is well known that this question does not have a definite answer, and should be answered case by case. My actual question is threefold:
1. Do you know examples of physically relevant state-dependent diffusions? What category do they fall in? Fick, or Fokker-Planck?
2. Do you have an intuition for the physics underlying this alternative?
3. The Fokker-Planck case has the peculiarity to lead to equilibrium distributions which are not of the Boltzmann-Gibbs form (the steady-state $c(x)$ depends on $D(x)$ and not just on the state's energy). What does this tell us about the foundations of statistical mechanics?
-
## 2 Answers
I don't agree with the premise of the question (that there is some mysterious disagreement between hydrodynamics, Fick's law, and the Fokker-Planck equation). I am not entirely certain what you mean by "state-dependent". I will assume that the system is in local thermodynamic equilibrium. This is the basic assumption in hydrodynamics, and the Fokker-Planck equation is a possible microscopic model of how equilibrium is reached. In this case the diffusion constant is a function of $x$ only through its dependence on thermodynamic variables. In a simple fluid with a single type of impurity these are $T(x)$ and $\mu(x)$.
Note that $j(x)=-D(\mu(x),T(x))\nabla c(x)$ is not the most general statement of Fick's law. In general, there is also thermal diffusion, and there are extra terms in the presence of an external potential. This is explained in standard texts on fluid dynamics (like Landau). The RHS of the diffusion equation is $\nabla [D(\mu(x),T(x))\nabla c(x)]$.
I think that this is indeed the form one obtains from a stochastic model, see for example equ.(312) in Chandrasekhar's review http://rmp.aps.org/abstract/RMP/v15/i1/p1_1 (he calls this the Smoluchowski equation), or equ. (4.19) in these lecture notes http://www.ks.uiuc.edu/~kosztin/PHYCS498NSM/LectureNotes/chp4.pdf . There is an extra term in Fick's law, but this term is related to the external potential, $j(x)\sim Dc/T\nabla V$. I also think that this had to be the case. As long as I consider the most general hydro equation any stochastic model that relaxes to local thermodynamic equilibrium should reduce to this equation in the appropriate limit.
-
I didn't mention any mysterious disagreement, and do not think there is one. My point is, contrary to what many people think, not all diffusion processes are consistent with Fick's law, even with a drift term. I'd like to understand this phenomenon better. – Matteo Smerlak Oct 4 '12 at 21:11
A simple experiment that anybody can make (I did) is the following: prepare two equal volumes of (i) water and (ii) a water+gelatine mixture. Dissolve the same quantity of food coloring in both, and put them in contact without mixing them (typically the gel at the bottom of a tube, the water on top). Initially the food coloring is uniformly distributed. After a day, not so: the coloring has accumulated in the gel. No temperature gradient, no potential, and yet the equilibrium c(x) is not homogeneous: Fick's law does not apply. Credit: [B Ph van Milligen et al 2005 Eur. J. Phys. 26 913] – Matteo Smerlak Oct 4 '12 at 21:16
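To make the "Fokker-Planck" alternative of the question concrete, here is a small numerical sketch (my own addition, not part of the original exchange; the choice of $D(x)$ is arbitrary). For the Itô equation $dX_t=\sqrt{2D(X_t)}\,dW_t$ with reflecting boundaries, the density obeys $\partial_t c=\partial_x^2\,[D(x)c]$, whose zero-flux steady state is $c(x)\propto 1/D(x)$ rather than uniform, which is exactly the kind of non-Boltzmann equilibrium raised in point 3.

```python
import numpy as np

rng = np.random.default_rng(0)

def D(x):
    return 0.1 + x                      # arbitrary state-dependent diffusion coefficient on [0, 1]

x = rng.random(20_000)                  # ensemble of walkers, uniform initial condition
dt = 2e-4
for _ in range(20_000):                 # naive Euler-Maruyama steps with reflection at 0 and 1
    x = x + np.sqrt(2.0 * D(x) * dt) * rng.normal(size=x.size)
    x = np.abs(x)
    x = 1.0 - np.abs(1.0 - x)

hist, edges = np.histogram(x, bins=20, range=(0.0, 1.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
target = (1.0 / D(centers)) / np.trapz(1.0 / D(centers), centers)
print(np.column_stack([centers, hist, target]))   # histogram tracks 1/D(x), not a flat profile
```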
http://mathhelpforum.com/discrete-math/8436-counting-pathways.html | # Thread:
1. ## Counting pathways
Hello
I need to count the number of ways, from start to end, without passing through the red spots.
You can move one step at a time: forward or to the right.
thanks.
Attached Thumbnails
2. There are 49 paths.
The numbers in each box of the excel spreadsheet in the attachments shows how many paths can go through each point. Do you see how it was made? Or would you like an explanation.
Attached Files
• PathWays.xls (13.5 KB, 75 views)
3. I would appreciate it if you could give me an explanation, in a combinatorial way, i.e. n choose k
thanks
4. Originally Posted by parallel
I would appreciate it if you could give me an explanation, in a combinatorial way, i.e. n choose k
thanks
You're going to have to ask someone else
5. Say $p(x,\ y)$ is the number of paths to a specific square. For the bottom left square, $x = 0$ and $y = 0$, and for the upper right square, $x = 4$ and $y = 4$. Then
$p(x,\ y)\ =\ \left\{\begin{array}{l} 0,\text{ if square } (x,\ y) \text{ cannot be entered}\\ 1,\text{ if } (x,\ y) \text{ is the start square}\\ \text{else }p(x\!-\!1,\ y) + p(x,\ y\!-\!1) \end{array}\right.$
You can get to a square from only two other squares. The sum of the number of paths to those squares is the number of paths to that square.
Then, if you turn it all $135^\circ$ clockwise, you will see it's starting to look like Pascal's triangle. And if it hadn't been for those two blocked squares $p(x,\ y)$ would have been $= {{x+y}\choose{y}}$. Because ${x+y}\choose{y}$ has almost the same properties as $p(x,\ y)$ has in this case. You do know that ${{x}\choose{y}}\ =\ {{x-1}\choose{y-1}}\ +\ {{x-1}\choose{y}}$? Let's say that if (x, y) is outside the triangle, ${x\choose{y}}\ =\ 0$. I don't know if I'm the best to explain why, but in this case
$p(x,\ y)\ =\ {{x+y}\choose{y}}\ -\ {{1+3}\choose{3}}\cdot{{(x-1) + (y-3)}\choose{y-3}}\ -\ {{4+1}\choose{1}}\cdot{{(x-4) + (y-1)}\choose{y-1}}\ =$
$=\ {{x+y}\choose{y}}\ -\ {4\choose{3}}\cdot{{x+y-4}\choose{y-3}}\ -\ {5\choose{1}}\cdot{{x+y-5}\choose{y-1}}\ =$
$=\ {{x+y}\choose{y}}\ -\ 4\cdot{{x+y-4}\choose{y-3}}\ -\ 5\cdot{{x+y-5}\choose{y-1}}$
Since $x=4$ and $y=4$ in the top right square,
$p(x,\ y)\ =\ {8\choose{4}}\ -\ 4\cdot{{8-4}\choose{4-3}}\ -\ 5\cdot{{8-5}\choose{4-1}}\ =$
$=\ 70\ -\ 4\cdot{4\choose{1}}\ -\ 5\cdot{3\choose{3}}\ =$
$=\ 70\ -\ 4\cdot 4\ -\ 5\cdot 1\ =$
$=\ 49$
Now I got it right, I forgot to use zero indexing.
6. I'm sorry, but it's getting too complicated for me; we just started with this.
the first question was to solve this without any limitation on the squares.
so the way I did it was like that:
I can go RRRRRUUUUU from the start to the end.
(R-right, U-up)
so now I just computed the number of ways to arrange the letters.
could you try and explain it to me, with this kind of thinking (sorry for the trouble)
7. Originally Posted by parallel
I'm sorry, but it's getting too complicated for me; we just started with this.
the first question was to solve this without any limitation on the squares.
so the way I did it was like that:
I can go RRRRRUUUUU from the start to the end.
(R-right, U-up)
so now I just computed the number of ways to arrange the letters.
could you try and explain it to me, with this kind of thinking (sorry for the trouble)
Take a look at Quick's excel diagram. What it does is that it simulates the number of paths by calculating the number square by square. If you want to use a mathematical non-iterating formula, you will have to use the combination function as I just did in my last post.
8. I will have to get back on this one. I don't seem to get the formula right.
9. I don't blame ya
10. I believe this is a graph theory related question (but because I am barely familiar with it I cannot help you).
Basically you want to find all the possible paths (that is what graph theorists call them). And not walks (going over the same edge).
So you set up a graph with 25 vertices. And you draw an edge between any two between which you are allowed to move. (That means the pink dots are isolated from the rest of the graph). And there is a special technique for solving this problem.*)
*) One way is the adjacency matrix, but it gets huge.
**) Another way: there are certain algorithms related to graphs that enable one to find the solution. (Again, not familiar with them).
11. I hate to be a pain, but there are an infinite number of paths. The problem doesn't specify a maximum path length, nor that you can't, for example, take a backward step. I'm assuming either one or both of the two previous thoughts are correct, but the problem statement should have a mention of some kind of limitation of this kind.
-Dan
12. I think he did:
Originally Posted by parallel
you can move one step at a time:forward or to the right.
Forward in this case I believe means "up".
13. Originally Posted by TriKri
I think he did:
Forward in this case I believe means "up".
No, it was a joke. Really. (Ahem! I'm going to curl up in a little ball in this corner over here for a while, if nobody minds...)
-Dan
14. There are 70 ways to arrange 4-R’s & 4-U’s. That is the total number of paths from start to finish without regard to the forbidden squares.
There are 4 ways to go from start to the leftmost forbidden square and 4 ways to go from that square to finish. Thus, there are 16 ways to go from start to finish passing through the first forbidden square.
Now likewise, there are 5 ways to go from start to finish passing through the rightmost forbidden square.
Now note that 70-(16+5)=49. That takes the forbidden paths from the total.
15. Originally Posted by topsquark
No, it was a joke. Really. (Ahem! I'm going to curl up in a little ball in this corner over here for a while, if nobody minds...)
Oh, haha. hehe. I know, I can be quite seriously committed to things sometimes.
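For completeness, here is a short dynamic-programming check of the count discussed in this thread (my own addition; since the attached diagram is not reproduced here, the positions of the two forbidden squares are inferred from the counting argument in post 14). It fills the grid square by square, exactly as in the spreadsheet from post 2, and reproduces 70 - (16 + 5) = 49.

```python
# 5x5 grid of squares, start at (0, 0), finish at (4, 4); moves are right (+x) or up (+y).
# Forbidden squares inferred from the counting argument: (1, 3) and (4, 1).
blocked = {(1, 3), (4, 1)}

paths = [[0] * 5 for _ in range(5)]
for x in range(5):
    for y in range(5):
        if (x, y) in blocked:
            continue                       # no path may pass through a forbidden square
        if (x, y) == (0, 0):
            paths[x][y] = 1
            continue
        left = paths[x - 1][y] if x > 0 else 0
        below = paths[x][y - 1] if y > 0 else 0
        paths[x][y] = left + below

print(paths[4][4])   # 49
```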
http://www.physicsforums.com/showthread.php?p=501872 | Physics Forums
Fourier analysis and prob. distributions?
Ok, this might seem like either a really idiotic question or a really profound one.
Consider a probability distribution. I'm picturing a normal distribution, is it meaningful to be able to build up a final probability distribution from a set of narrower probability distributions?
Ok, that seems like it came out really poorly so I'll say a few of my thoughts. In quantum mechanics we use $\Psi(r,t)$ to represent the wave function for very small particles. Then we square this to get $|\Psi(r)|^2$ which is the probability density. This, I believe, would then give me a probability distribution, which in a lot of physics examples is just some multiple of a sine wave. Now, it seems to me (being a novice at both probability and physics) that it may be possible to build up a probability distribution of this sort from several smaller probability distributions through simple interference plotting or Fourier analysis or the like.
However, I can't resolve to myself why this would be a meaningul thing to do. For instance, multiple probability distributions might imply multiple wave functions and hence multiple particles. And multiple particles would interact usually; thus changing the original wave functions and doing something funky.
Can anyone comment on this?
I guess profundity is ruled out.
I have no idea what you intend to do. You do realize that we can do all sorts of arithmetic on random variables, right? For instance, we can add them, multiply them, square them, divide them, take their logarithm, etc...
Now, it seems to me(being a novice at both probability and physics) that it may be possible to build up a probability distribution of this sort from several smaller probability distributions through simple interference plotting or fourier analysis or the like. However, I can't resolve to myself why this would be a meaningul thing to do. For instance, multiple probability distributions might imply multiple wave functions and hence multiple particles. And multiple particles would interact usually; thus changing the original wave functions and doing something funky.
Indeed. This is just the superposition of wavefunctions. An obvious example is the interference pattern observed in double-slit electron diffraction experiments.
Thanks for the reply. I'll play with it a little and see what I can get out of it.
You can easily write a single distribution as the sum of two or more. An example is given below. It’s called probability mixing. I suppose you could do the same thing for the magnitude of the wave function. exp(-pi*x^2)=(1/2)*exp(-pi*x^2)+(1/2)*exp(-pi*x^2)
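To make the mixing idea concrete, here is a small sampling sketch (my own addition, in Python, not something from the thread): draw from a 50/50 mixture of two normal components by first picking a component and then sampling from it; the empirical moments then match those of the mixture density $\tfrac12 f_1+\tfrac12 f_2$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Mixture 0.5*N(-1, 0.5^2) + 0.5*N(+1, 0.5^2): pick a component, then draw from it.
component = rng.integers(0, 2, size=n)
means = np.where(component == 0, -1.0, 1.0)
samples = rng.normal(loc=means, scale=0.5)

# Theoretical moments of the mixture: mean 0, variance 0.5**2 + 1 = 1.25.
print(samples.mean(), samples.var())
```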
http://math.stackexchange.com/questions/239250/closed-surjective-map?answertab=active | # Closed surjective map
How to prove that every closed surjective map is open? (Exercise from book Borisovich "General topology")
Thank you very much!
-
## 2 Answers
I will show a counterexample showing that the assertion is false. Let us consider the following function $h:[0,1]\rightarrow [0,1],$
$$h(x) = \begin{cases} \frac{3}{2}x & \text{if } x\in [0,\frac{1}{3}], \\ \frac{1}{2} & \text{if } x\in [\frac{1}{3},\frac{2}{3}], \\ \frac{3}{2}x-\frac{1}{2} & \text{if } x\in [\frac{2}{3},1]. \end{cases}$$
Here we consider $[0,1]$ to be a topological space with the topology generated by the metric $d\left( x,y\right) =|x-y|$. Now the function is clearly continuous and surjective. It is also closed, which follows from compactness. On the other hand the interval $\left( \frac{1}{3},\frac{2}{3}\right)$ is an open set in $[0,1]$, and $h[\left( \frac{1}{3},\frac{2}{3}\right) ]=\left\{ \frac{1}{2}\right\}$ which is not open in $[0,1]$.
-
Let $f:X\to Y$ be closed and surjective, and assume we have given a $U\subseteq X$ open subset. Use complement, and prove that $f(U)$ is open.
Update: It is indeed not that trivial, moreover, not even true (see comments below). The problem is that, though $f(U)\cup f(X\setminus U)=Y$ by surjectivity, these 2 sets may intersect, so we cannot simply conclude $f(X\setminus U)=Y\setminus f(U)$.
-
This is exactly what I did, but I don't see how it can be trivially deduced from this. I see that the image of $U$ in union with the image of its complement must give the whole $Y$. But they can intersect, because we are not given that the map is injective. Could you, please, give a more detailed proof. – Sergey Finsky Nov 17 '12 at 14:46
Ah, now I see your point. – Berci Nov 17 '12 at 14:50
It turns out that the wiki says: "surjective closed map is not necessarily an open map". So your explanation is wrong. I assume there is just a typo in the book. They don't really use it a lot, it just matters in one theorem. (They also give in the beginning that the map is continuous (but don't use it in the sub-statement); maybe with this restriction it's true, or do you have a counterexample?) – Sergey Finsky Nov 17 '12 at 14:56
Yes, now I was just trying to find a counterexample for the original exercise. Hmm.. seemed so trivial anyway.. – Berci Nov 17 '12 at 14:58
http://mathematica.stackexchange.com/questions/2889/randomvariate-with-a-discrete-distribution?answertab=votes | # RandomVariate with a Discrete Distribution
Nature has provided me with a random variable $Z$ taking on the values $0, 1, 2, \ldots$, with probabilities $z_0, z_1, \cdots$. I can sample from the distribution of $Z$ reasonably efficiently (I have done so $2^{28}$ times), and so I have estimates $\hat{z_0},\hat{z_1},\ldots$ of $z_0,z_1,\ldots$.
I am interested in the number $f(z_0,z_1,\ldots,z_8)$, where $f$ is an explicit but involved function. Obviously, a good estimate is $f(\hat{z_0}, \hat{z_1}, \ldots,\hat{z_8})$, and a standard way to build a confidence interval is bootstrapping. To do this, I need to sample from the discrete distribution with probabilities $\hat{z_0}, \hat{z_1},\dots,\hat{z_8},q$, ($q$ chosen to make the probabilities add up to 1) and I need to sample $2^{28}$ times (I don't need the samples, just the number of times 1 comes up, 2 comes up, ...), and this process needs to be repeated, say, 1000 times.
That's a lot of sampling, and it's going way too slowly. The first law of fast Mathematica code is to use built-in functions. Is there a way to get RandomVariate to work with an arbitrary discrete distribution? Any other suggestions for rapidly sampling from an arbitrary distribution (again, I only need the Tally of the sampling, not the sample itself)?
-
Kevin, maybe instead change each $z_j$ into a smallish interval around the estimated value? Depending on the nature of $f$ you might get something not unreasonable. This assumes $f$ can handle Interval input, which may not be the case. – Daniel Lichtblau Mar 13 '12 at 20:53
No time to answer right now, but perhaps EmpiricalDistribution is what you're looking for. It can be used with RandomVariate. If this works, I'll write an answer later in the day. – rm -rf♦ Mar 13 '12 at 21:12
You don't specify how you count the occurrences of each possible value of Z. Perhaps your bottleneck may be not the sampling but the counting? – Sjoerd C. de Vries Mar 13 '12 at 23:10
## 1 Answer
This is a multinomial distribution. Obtain your bootstrap sample quickly as in this example:
z = RandomReal[{0, 1}, 10];
z = z / (Plus @@ z) (* Generate an example set of values for z0^, ..., z8^, q *)
f = MultinomialDistribution[2^28, z];
Timing[RandomVariate[f, 1000];]
(1.342 seconds).
-
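As an aside (my own addition, not part of the answer above): outside Mathematica, the same bootstrap tallies can be drawn with NumPy's multinomial sampler, which likewise returns only the counts.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.random(10)
z /= z.sum()                                    # stand-in for the estimated probabilities z0^, ..., z8^, q

counts = rng.multinomial(2**28, z, size=1000)   # 1000 bootstrap tallies of 2^28 draws, shape (1000, 10)
print(counts[0])
```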
Assuming the observed counts of $0$ through $8$ are not too small (bigger than $30$ or so each)--you could do just fine using a Normal approximation to the distribution of the $\widehat{z}_i$ and assuming (slightly incorrectly) that they are independent. This would be about a thousand times faster. But that would be an important consideration only if you needed orders of magnitude more bootstrap draws. – whuber Mar 13 '12 at 21:25
I assume that since the variance of a multinomial can be described analytically the confidence interval for f could be described analytically as well; so, no need for bootstrapping? – Sjoerd C. de Vries Mar 13 '12 at 23:13
Good thought. It depends in part on the nature of $f$: we don't even know whether it's differentiable. Typically, one resorts to bootstrapping precisely when an analytic solution might not be trustworthy. – whuber Mar 13 '12 at 23:16
Well, now I feel like a dope. The answer to my Mathematica question is "EmpiricalDistribution" (why is this not on the DiscreteDistribution page?), but this answer completely sidesteps what I thought the issue was. – Kevin O'Bryant Mar 15 '12 at 1:31
http://math.stackexchange.com/questions/253303/quantifier-elimination?answertab=active | # Quantifier Elimination
I want to prove the following: The structure $(\mathbb{Z},\equiv,0)$ has QE (with $\equiv$ a relation such that for all $m,n\in\mathbb{Z}$: $m\equiv n$ iff $m-n$ is even). I thought about this in the following way: Suppose you have a formula $\phi$ in the language of the structure. Then bring this formula to disjunctive normal form. The basic formulas in the language are: $$a=b,\neg(a=b),a=0,\neg(a=0),(a\equiv b),\neg(a\equiv b),(a\equiv 0),\neg(a\equiv 0)$$ and also combinations with $\wedge$. But then I have to find, for all possible combinations, a quantifier-free formula. How do I do this?
What shall I do if I want to prove that a given structure has no QE?
Thank you ;)
-
## 1 Answer
1. First, notice that $a\equiv b$ is equivalent to $(a\equiv 0\wedge b\equiv 0)\vee (a\not\equiv 0\wedge b\not\equiv 0)$, and similarly for $a\not\equiv b$, so you can assume that there are none of those at the very beginning (before taking the normal form).
2. If you have a formula of the form $\varphi=\exists x \varphi'(x,\overline y)$ with $\varphi$ a conjunction of literals, one of which is $x=y_j$, then $\varphi$ is equivalent to $\varphi'(y_j,\overline y)$.
3. All other $\varphi$ are necessarily contradictory or tautological, which I will leave to you to prove.
Quantifier elimination is equivalent to substructural completeness, that is, a theory $T$ has q.e. iff for any model $M\models T$ and a nonempty subset $A$ of its universe, $T$ with the atomic diagram of $A$ is complete (or, equivalently, that any two models containing $A$ are elementarily equivalent), so you can show that a theory doesn't have q.e. by exhibiting a counterexample to substructural completeness (which may still be quite hard, as you need to find a suitable $A$ and show that some theory is not complete).
• For example, the theory of abelian groups does not have q.e., because any set of integers can be extended to, among others, the integers themselves, or the rationals, but the two are not elementarily equivalent.
• For a complete (and model complete!) example, the theory of real closed fields (in the language $\{0,1,+,\cdot\}$) does not have q.e., because if we take some two algebraically independent real numbers $a,b$, then their atomic diagram will always be the same, but one is bigger than the other ($(\exists x) a+x^2=b$ or $(\exists x) b+x^2=a$), so we can't decide which.
-
http://math.stackexchange.com/questions/104366/equivalence-class-for-abstract-algebra-class/104369 | # Equivalence Class for Abstract Algebra Class
Let $$R_3= \{(a,b)\mid a,b \in \mathbb{Z}\text{ and there exists }k \in \mathbb{Z} \text{ such that }a-b=3k\}.$$
I know there is an equivalence relation but I'm not 100% on what it means to be an equivalence class for this problem. In class we got 3: $\{0,3,6,9,\ldots\}$ and $\{1,4,7,10,-2,-5,\ldots\}$ and $\{2, 5, 8, 11, -1, -4,\ldots\}$.
I don't understand where these cells came from. Help?
-
## 3 Answers
I'll try to put it this way:
Define a relation $\sim$ on $\mathbb Z$, such that $a \sim b \iff \exists k \in \mathbb Z ~~ \text{such that}~~~~a-b=3k$
What does this say?
Integers $a$ and $b$ are related if and only if their difference is a multiple of $3$. Since the remainder when $a-b$ is divided by $3$ is the difference of the remainders when $a$ and $b$ are divided by $3$ (all taken $\bmod 3$), integers $a$ and $b$ are related if and only if they leave the same remainder when divided by $3$.
Now try to put all those numbers that are related to each other in the same "cell" and those that are not related in different "cells".
But, now notice that the number of distinct cells you'll need for the purpose is no more than $3$ and no less! (Why?)
Construct these "cells" to see how they coincide with what you have written down in your class.
And, now call these cells "equivalence classes".
-
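As a concrete illustration of the three cells (my own addition, not taken from any of the answers), one can group a range of integers by their remainder on division by 3:

```python
from collections import defaultdict

cells = defaultdict(list)
for n in range(-6, 12):
    cells[n % 3].append(n)      # n % 3 is always 0, 1 or 2, also for negative n in Python

for r in sorted(cells):
    print(r, cells[r])
# 0 [-6, -3, 0, 3, 6, 9]
# 1 [-5, -2, 1, 4, 7, 10]
# 2 [-4, -1, 2, 5, 8, 11]
```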
This question is old, but I'd like to share my view of it. I hope this offends no one.
What you are dealing with are equivalence classes modulo an integer. That is, you are dealing with equivalence classes defined by $[a]=\{b \in \mathbb{Z}:b \cong a \pmod 3\}$ for an element $a$.
In the case of the relation of congruence modulo $n$, there are $n$ equivalence classes: $[0], [1], [2], \dots, [n-1]$. You can see that supposing $[n]$ or $[n+1]$ is silly, since $b \cong n \pmod n$ reduces to $b \cong 0 \pmod n$ and $b \cong n+1 \pmod n$ reduces to $b \cong 1 \pmod n$ (and, hence, $[n]=[0]$ and $[n+1]=[1]$). In other words, $[0], [1], [2],\dots, [n-1]$ are all of the equivalence classes.
For congruence modulo $3$, we thus have $[0], [1],$ and $[2]$. That is, \begin{align} [0]&=\{b \in \mathbb{Z}: b\cong 0 \pmod 3\}\\ &=\{\dots,-3,0,3,6,9,\dots\}.\\ [1]&=\{b \in \mathbb{Z}: b\cong 1 \pmod 3\}\\ &=\{\dots,-2,1,4,7,10,\dots\}.\\ [2]&=\{b \in \mathbb{Z}: b\cong 2 \pmod 3\}\\ &=\{\dots,-1,2,5,8,11,\dots\}. \end{align}
In other words, the three equivalence classes are the numbers that leave remainder $0$ when divided by $3$, the numbers that leave remainder $1$, and the numbers that leave remainder $2$.
This is the beauty of equivalence classes: They partition the set they are defined on into disjoint sets which as a whole form the entire set. i.e. For any given equivalence relation $E$ on set $S$ and the equivalence classes $[a]=\{b \in S: aEb\}$, $$\bigcup_{a \in S}[a]=S.$$
-
An exercise in set theory (if $k$ runs through $\mathbb Z$, then so does $-k$):
$[a] = \\ \{b \in \mathbb{Z} \text{ such that there exists }k \in \mathbb{Z} \text{ such that }a-b=3k\}=\\ \{b \in \mathbb{Z} \text{ such that there exists }k \in \mathbb{Z} \text{ such that }b=a-3k\}=\\ \{b \in \mathbb{Z} \text{ such that there exists }k \in \mathbb{Z} \text{ such that }b=a+3(-k)\}=\\ \{a+3k | k \in \mathbb{Z}\}=\\ \{\dots,a-6,a-3,a,a+3,a+6,\dots\}$
You should have written $\{\ldots, -5, -2, 1, 4, 7, 10,\ldots\}$ instead of $\{1,4,7,10,-2,-5,\ldots\}$
-
I agree with you. A matter of notation, changing which we would have lost some info. +1. (Maybe I'll add it to my answer later =)) – user21436 Jan 31 '12 at 21:08
http://physics.aps.org/articles/v5/5 | # Viewpoint: Rydberg Atoms Jump in Bunches
Thomas Pohl, Max Planck Institute for the Physics of Complex Systems, 01187 Dresden, Germany
Published January 9, 2012 | Physics 5, 5 (2012) | DOI: 10.1103/Physics.5.5
Theorists uncover a new way that interactions between highly excited atoms can affect the way they radiate.
#### Collective Quantum Jumps of Rydberg Atoms
Tony E. Lee, H. Häffner, and M. C. Cross
Published January 9, 2012 | PDF (free)
When an atom is excited to a high-lying electronic state, virtually all of its properties become vastly exaggerated. The size of these so-called Rydberg atoms scales quadratically with the principal quantum number $n$, while the electron motion slows down by the same fraction, drastically increasing the time it takes for the atom to radiatively decay. As a result, Rydberg atoms feel a strong van der Waals attraction or repulsion to one another that scales as $n^{11}$; a tenfold increase of the atom’s excitation level therefore enhances the potential energy between two Rydberg atoms by $11$ orders of magnitude. These strong interactions shift the energy levels of the atoms and are sufficient to affect optical Rydberg excitation over distances as large as several micrometers—an effect [1] that forms the basis for a number of fascinating phenomena and promising applications. In a theoretical paper appearing in Physical Review Letters, Tony Lee at the California Institute of Technology, Pasadena, and his colleagues now show that this effect, which is known to alter the way an ensemble of Rydberg atoms coherently interacts with light [1], can also dramatically change how such atoms radiate [2].
While highly excited atoms have long been of interest since the early days of atomic physics, the advent of techniques to trap and cool atoms down to a few microkelvin or less has given birth to a subfield of Rydberg atom physics [3]. At ultralow temperatures, Rydberg atoms are a model building block for making a neutral-atom quantum computer [1]: the atoms have long radiative lifetimes (good for preserving quantum information) and the interactions between atoms are strong (good for manipulating information rapidly). This capability was recently demonstrated experimentally for two atoms [4] (see 8 January 2010 Synopsis). Systems containing many Rydberg atoms act as an effective spin ensemble, where the ground and Rydberg state of each atom represents the down and up states, respectively, of a fictitious spin. The local dynamics of such spins can be controlled by the intensity and frequency of a laser tuned to the excitation energy of the Rydberg atom, while coupling between spins is mediated by the long-range interactions between Rydberg atoms [Fig. 1(a)]. These ensembles can be used as quantum simulators of exotic Hamiltonians [5], for engineering long-range interactions in Bose-Einstein condensates [6] or to prepare entangled, i.e., quantum correlated, many-body states [7, 8].
Most of these schemes rely on the long radiative lifetime of highly excited atoms and therefore can typically operate no longer than a few microseconds. On the other hand, Lee et al. turn this limitation into a feature. Investigating the excitation dynamics of cold Rydberg gases over much longer times, they have uncovered a connection between entanglement among multiple Rydberg atoms and the way these excited states lose energy, which leads them to an unexpected result. If they start with a small fraction of excited atoms, the radiative decay of one atom tends to increase the probability for finding the remaining atoms in a Rydberg state. This enhances the subsequent decay of other atoms and culminates in an avalanchelike population of a large fraction of excited states [Fig. 1(b)]. This behavior comes as a surprise, as one might reasonably think that excited-state decay decreases, rather than increases, the number of excited atoms.
Lee et al. explain this seemingly counterintuitive finding using the notion of so-called quantum jumps: photons emitted from an atom during its spontaneous decay can, in principle, be measured (by an external observer) such that the atom is continuously monitored by its environment. Each emission, consequently, abruptly projects the respective atom into its ground state. For two atoms, this quantum jump projects the doubly excited state containing two Rydberg excitations to the singly excited state with one excitation, and so on. Without interactions, the atoms decay independently of each other and the corresponding projection can only decrease the fraction of excited atoms, as expected. However, this restriction no longer holds in the presence of entanglement, i.e., with the atoms interacting. Consider, for example, a simple entangled state, which contains mostly the two-atom ground state plus a small amplitude $p≪1$ of the doubly excited, two-atom Rydberg state, corresponding to a probability $p≪1$ for finding a particular atom in the excited state. Once an atom decays, the other atom has to be in the Rydberg state, which increases its excitation probability to unity and thereby greatly enhances its subsequent decay. Lee et al. show that such a population inversion and the resulting collective decay can indeed be observed for a properly chosen intensity and frequency of the excitation laser.
This simple configuration illustrates the sometimes weird effects that arise from quantum correlations. It also captures the essential physics behind the dynamics of larger systems driven by the coherent build-up of many-body entanglement and incoherent quantum jumps. Lee et al. describe this more complex situation by approximating the Rydberg-Rydberg atom interaction by constant level shifts that are identical for all Rydberg atom pairs, irrespective of their mutual distances. Their analysis starts from a mean-field treatment, which neglects atom-atom correlations and instead mimics atomic interactions by an average potential, related to the mean number of Rydberg excitations in the sample. Using this approximation, the many-body system can be described by a single set of optical Bloch equations, plus a nonlinearity arising from the Rydberg-Rydberg interactions. This nonlinearity makes the steady state of these equations bistable for certain laser parameters, meaning the system can eventually settle into one of two stable fixed-points. Lee et al. find that they correspond to two distinct states—one with a low and one with a high fraction of Rydberg excitations—and are closely linked to the collective dynamics of the ensemble.
Such bistabilities are common in classical nonlinear systems, which are known to switch spontaneously between two stable points upon stochastic driving. Here, Lee et al. discover similar behavior, but with the all-important difference that the switches are nonclassical in nature and induced by the interplay of quantum entanglement and spontaneous single-atom decay. They demonstrate this point via numerical simulations of the fully correlated quantum dynamics of small atomic ensembles. Their results show that, under conditions of classical bistability, the Rydberg atom number indeed jumps between two distinct values, which correspond precisely to the two stable fixed points of the mean-field model.
The close agreement between the mean-field prediction and the exact quantum dynamics is surprising. On the one hand, the mean-field approach neglects any atomic correlations, producing a fictitious nonlinearity that gives rise to the observed bistability. On the other hand, the collective quantum jumps rest entirely on quantum correlations that emerge from the linear time-evolution of interacting atoms.
However, mean-field approximations are known to work best for the infinitely ranged interactions used by Lee et al. Their success may well motivate future work to elucidate the importance of more realistic interaction potentials or establish links between classical bistabilities and collective quantum jumps in other systems, such as cavity arrays [9].
Related work [10] has suggested that collective quantum jumps of Rydberg atoms could be a source of correlated, nonclassical light—a core objective of quantum optics research. In this context, the authors’ findings will likely foster new ideas for current efforts to utilize cold Rydberg gases for deterministic few-photon entanglement [8, 11] and as highly nonlinear optical media [12]. The constructive interplay between coherent and—often disregarded—dissipative processes points up interesting new directions for the still emerging field of cold Rydberg atom physics.
### References
1. D. Jaksch et al., Phys. Rev. Lett. 85, 2208 (2000); M. D. Lukin et al., 87, 037901 (2001).
2. T. E. Lee, H. Häffner, and M. C. Cross, Phys. Rev. Lett. 108, 023602 (2012).
3. M. Saffman, T. G. Walker, and K. Mølmer, Rev. Mod. Phys. 82, 2313 (2010); D. Comparat and P. Pillet, J. Opt. Soc. Am. B 27, A208 (2010); T. Pohl, C. S. Adams, and H. R. Sadeghpour, J. Phys. B 44, 180201 (2011).
4. T. Wilk et al., Phys. Rev. Lett. 104, 010502 (2010); L. Isenhower et al., 104, 010503 (2010).
5. H. Weimer et al., Nature Phys. 6, 382 (2010).
6. N. Henkel, R. Nath, and T. Pohl, Phys. Rev. Lett. 104, 195302 (2010); F. Cinti et al., 105, 135301 (2010); F. Maucher et al., 106, 170401 (2011).
7. B. Olmos, R. González-Férez, and I. Lesanovsky, Phys. Rev. Lett. 103, 185302 (2009).
8. T. Pohl, E. Demler, and M. D. Lukin, Phys. Rev. Lett. 104, 043002 (2010).
9. A. D. Greentree, C. Tahan, J. H. Cole, and L. C. L. Hollenberg, Nature Phys. 2, 856 (2006).
10. J. D. Pritchard, C. S. Adams, and K. Mølmer, arXiv:1108.5165.
11. I Friedler, D. Petrosyan, M. Fleischhauer, and G. Kurizki, Phys. Rev. A 72, 043803 (2005); A. V. Gorshkov et al., Phys. Rev. Lett. 107, 133602 (2011).
12. J. D. Pritchard et al., Phys. Rev. Lett. 105, 193603 (2010); D. Petrosyan, J. Otterbach, and M. Fleischhauer, 107, 213601 (2011); S. Sevinçli, N. Henkel, C. Ates, and T. Pohl, 107, 153001 (2011).
### About the Author: Thomas Pohl
Thomas Pohl is a group leader at the Max Planck Institute for the Physics of Complex Systems in Dresden, Germany. In 2005 he obtained his Ph.D. at the MPIPKS, Germany, and subsequently was awarded the ITAMP postdoctoral fellowship (2005–2008) of the Institute for Theoretical Atomic, Molecular and Optical Physics at the Harvard-Smithsonian Center for Astrophysics. He is recipient of the Otto Hahn Medal of the Max Planck Society and the Gustav Hertz Award of the German Physical Society. His group conducts research on atomic physics, quantum and nonlinear optics, many-body quantum dynamics, and ultracold plasma physics.
http://mathoverflow.net/revisions/46604/list |
It's worth noting this special case: if $G$ is isomorphic to $(\mathbb{Z}/p)^r$ then it can be regarded as a vector space of dimension $r$ over the field $\mathbb{Z}/p$, and the subgroups are just the subspaces. The number of linearly independent lists of length $n$ is
$(p^r-1)(p^r-p)\dotsb(p^r-p^{n-1})$
(choose a nonzero vector $v_1$, then a vector $v_2$ not in the one-dimensional space spanned by $v_1$, then a vector $v_3$ not in the 2-dimensional space spanned by $v_1$ and $v_2$, and so on). For any subspace $V$ of dimension $n$, the number of bases is
$(p^n-1)(p^n-p)\dotsb(p^n-p^{n-1})$
by the same argument. It follows that the number $N_n$ of subspaces of dimension $n$ is
$N_n = \frac{(p^r-1)\dotsb(p^r-p^{n-1})}{(p^n-1)\dotsb(p^n-p^{n-1})} = \frac{(p^r-1)(p^{r-1}-1)\dotsb(p-1)}{(p^n-1)(p^{n-1}-1)\dotsb(p-1)}$
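A quick numerical check of this count (my own addition; the function below is just the product formula written out in Python):

```python
def num_subspaces(p, r, n):
    """Number of n-dimensional subspaces of (Z/p)^r, i.e. the Gaussian binomial coefficient."""
    num = den = 1
    for i in range(n):
        num *= p**r - p**i
        den *= p**n - p**i
    return num // den

print(num_subspaces(2, 3, 1))   # 7 lines through the origin in (Z/2)^3
print(num_subspaces(3, 4, 2))   # 130
```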
http://mathoverflow.net/questions/52448?sort=oldest | Time integrals of diffusion processes
I was wondering if someone could recommend a reference that deals with time integrals of diffusion processes.
Suppose $X$ is an Ito diffusion process with dynamics $dX_t = \mu(X_t)dt + \sigma(X_t)dW_t$. The process I'm interested in is $Y_t = \int_0^t X_s ds$. I haven't seen any treatment of the properties of $Y$ in the better-known texts on stochastic analysis - perhaps someone on MO can help.
I'll give a simple example to try to explain part of the reason I'm interested. Suppose $dX^{(1)} = dW_t^{(1)}$ and $dX^{(2)} = \sigma dW_t^{(2)}$, where $W^{(1)}$ and $W^{(2)}$ are independent Brownian motions. $X^{(1)}$ has quadratic variation $t$ almost surely, and $X^{(2)}$ has quadratic variation $\sigma^2 t$. Thus, for $\sigma \neq 1$ the process laws are not equivalent.
I'm wondering what this implies for the laws of $\int^t X^{(1)}_s ds$ and $\int^t X^{(2)}_s ds$. Intuitively, integration should "hide" the small oscillations of the sample paths. Is it possible that the integrated processes have equivalent laws?
-
Does any of the answers below correspond to what you were asking for? – Didier Piau Apr 11 2011 at 17:11
None of the references were as systematic as I would have liked, but I largely stopped pursuing this line of inquiry when you pointed out that the laws of the integrated processes are still singular. – Simon Lyons Apr 11 2011 at 23:46
4 Answers
One can adapt the argument used to show that, for a standard Brownian motion $W$, the laws of $W$ and $\sigma W$ on any interval $[0,t]$ with $t > 0$ and $\sigma^2\ne1$ are singular.
For every positive $v$, let $E_v$ denote the space of $C^1$ real valued functions defined on $[0,t]$ such that the quadratic variation of their first derivative on $[0,t]$ exists and equals $v$. Let $X=(X_s)_{0\le s\le t}$ with $X_s=\displaystyle\int_0^sW_u\mathrm{d}u$. Then $[X\in E_{t}]$ and $[\sigma X\in E_{\sigma^2t}]$ are both almost sure events but $E_t$ and $E_{\sigma^2t}$ are disjoint hence the laws of $X$ and $\sigma X$ are singular.
-
Yes, you're absolutely right. Thanks for pointing that out. It's strange that processes of this type do not appear much in the literature though. – Simon Lyons Jan 19 2011 at 13:13
Hmmm... A keyword which might help you here is Langevin processes. Bertoin published at least two papers about the influence of reflecting boundaries on these processes. You could also have a look at the recent paper by E. Jacob (Annales IHP Probab. Stat.) dealing with their excursions. And I should have mentioned explicitly in my answer these two related MO questions: mathoverflow.net/questions/51103 and mathoverflow.net/questions/51090. – Didier Piau Jan 19 2011 at 13:57
I believe you can use integration by parts to express $\int X_s\,ds$ as $-\int s\,dX_s$ plus boundary terms.
This is then a stochastic integral of the type commonly dealt with.
Edit: I am not 100% sure that what I suggested is correct (though I swear I saw something like this in a class)... but now I would like to make sure I get the correct understanding in my mind.
My understanding at the moment is:
To make proper sense of a Riemann-Stieltjes integral of the form $\int f\, dg$, you need that one of $f$ and $g$ be continuous and the other be of bounded variation. Which is which doesn't matter because you can define the other from the first via integration by parts.
Since $W_t$ is continuous a.s., you can then define $\int h(t)\, dW_t$ omega-wise a.s. provided $h(t)$ is of bounded variation. This way of defining "stochastic integration" fails however for $\int W_t\, dW_t$ since neither of the pieces is of bounded variation... hence the need for more advanced notions of stochastic integration.
However, I believe that the different notions of stochastic integration coincide when the integrand is of bounded variation. And so, provided your integrand was of bounded variation, you could think in terms of RS-integration. And therefore the integration-by-parts I suggested would be legitimate.
Zhoraster's observation does raise some concern (although it could be that the sum of two non-abs continuous functions is abs continuous) so now I am curious if my mental picture is wrong.
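For what it is worth, the integration-by-parts identity $\int_0^t W_s\,ds = tW_t - \int_0^t s\,dW_s$ is easy to check pathwise in a simulation (an illustration only; the grid, the seed and the left-point approximation of the stochastic integral are my own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
t, n = 1.0, 100_000
dt = t / n
s = np.linspace(0.0, t, n + 1)

dW = rng.normal(0.0, np.sqrt(dt), n)
W = np.concatenate(([0.0], np.cumsum(dW)))

lhs = t * W[-1] - np.sum(s[:-1] * dW)   # t*W_t - int_0^t s dW_s (left-point sum)
rhs = np.sum(W[:-1] * dt)               # int_0^t W_s ds (Riemann sum)
print(lhs, rhs)                         # the two agree up to discretization error
```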
-
This is not very helpful. $\int_0^t X_s ds$ is absolutely continuous, while both $t X_t$ and $\int_0^t s dX_s$, which one gets by ibp, are not. So they in fact say nothing about the initial integral. – zhoraster Jan 19 2011 at 7:05
This is not really a full answer, but depending on your needs, can be somewhat helpful. The time integral of a Brownian motion has been studied, for the purposes of a specific problem, in the following paper:
Ya. G. Sinai Statistics of shocks in solutions of inviscid Burgers equation Commun. Math. Phys. 148 (1992) 601-621
A heuristic, physicist's summary of Sinai's arguments can be found here:
M. Vergassola, B. Dubrulle, U. Frisch, and A. Noullez Burgers' equation, Devil's staircases and the mass distribution for large-scale structures Astron. Astrophys. 289:2 (1994) 325-356
Yet later there was a series of Toufic Suidan's papers on the subject, you can search for this name on the arXiv. Look also for other citations of Sinai's paper.
-
My Stochastic calculus professor always used to say "When in doubt use Ito"
So let $f(t,x) = t x$ and compute $\partial_t f(t,x) = x$, $\partial_x f(t,x) = t$ and $\partial_{xx} f(t,x) = 0$
Now the Ito lemma says for $f$ twice differentiable with respect to $x$ and once differentiable with respect to $t$ then the following formula holds for any Ito process:
$f(t, X_t) = f(0,X_0) + \int_{0}^{t}\partial_s f(s,X_s) ds + \int_{0}^{t}\partial_x f(s,X_s) dX_s$ $+ \frac{1}{2}\int_{0}^{t}\partial_{xx} f(s,X_s) d \left< X,X \right>_s$
So applying the above fact to the function $f(t,x) = tx$ gives:
$t X_t = 0 + \int_{0}^{t}X_s ds + \int_{0}^{t} s dX_s + 0$ or to give you a starting answer $\int_{0}^{t}X_s ds = t X_t - \int_{0}^{t} s dX_s$
Edit 3: The following only holds if $X_t$ is a Gaussian process, which is not true in general... So in vague words we have that the (Riemann) integral of an Ito process is equal to the difference of two Gaussian processes, which should again be Gaussian .... (if all this logic is correct it should then suffice to characterize the process's covariance structure in order to have a complete understanding of the law of the process.)
For example one can compute the variance if $X_t$ is standard Brownian motion:
$\mathbb{E}[(\int_{0}^{t}X_s ds)^2] = t^2 \mathbb{E}[(X_t)^2] -2t \mathbb{E}[X_t \int_{0}^{t} s dX_s] + \mathbb{E}[(\int_{0}^{t}sdX_s)^2]$
By the Ito isometry we have $\mathbb{E}[(\int_{0}^{t}sdX_s)^2] = \int_{0}^{t}s^2ds = t^{3}/3$.
To compute $\mathbb{E}[X_t \int_{0}^{t} s dX_s]$ notice first that the Ito integral of a deterministic function is always a Gaussian process. EDIT: Shai has given that $\mathbb{E}[X_t \int_{0}^{t} s dX_s] = t^2/2$
$\mathbb{E}[(\int_{0}^{t}X_s ds)^2] = t^3 - t^3 + \frac{t^{3}}{3} = \frac{t^3}{3}$
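A quick Monte Carlo sanity check of the value $t^3/3$ (illustrative only; the number of paths and the step size are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(2)
t, n_steps, n_paths = 1.0, 1_000, 20_000
dt = t / n_steps

dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
W = np.cumsum(dW, axis=1)
integrals = W.sum(axis=1) * dt    # int_0^t W_s ds for each path (Riemann sum)

print(integrals.var())            # close to t**3 / 3 = 0.333...
```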
Computing the covariance $\mathbb{E}[\int_{0}^{t}X_s ds \int_{0}^{u}X_s ds]$ involves dealing with terms $\mathbb{E}[\int_{0}^{t}s dX_s \int_{0}^{u}s dX_s]$ and $\mathbb{E}[X_t X_u]$ which are again probably well known in certain cases (the second term is obviously equal to $min(t,u)$ when $X$ is b.m.) but may be difficult to handle in your general case.
Edit 2: To give an approach to answer the question "Is it possible that the integrated processes have equivalent laws?"
Since $\int_{0}^{t}X_{s}^{(1)}ds$ and $\int_{0}^{t}X_{s}^{(2)}ds$ are Gaussian processes (we proved this using Ito) it suffices to check if their covariance functions $g_{1}(t,u)=\mathbb{E}[\int_{0}^{t}X_{s}^{(1)}ds \int_{0}^{u}X_{s}^{(1)}ds]$ and $g_{2}(t,u) = \mathbb{E}[\int_{0}^{t}X_{s}^{(2)}ds \int_{0}^{u}X_{s}^{(2)}ds]$ are equal for all $t,u >0$ to show that the two processes have equivalent laws.
Now applying the result we got above from the ito calculation lets us start computing the covariance:
$\mathbb{E}[\int_{0}^{t}X_s ds \int_{0}^{u}X_s ds]$ = $\mathbb{E}[( t X_t - \int_{0}^{t} s dX_s)( u X_u - \int_{0}^{u} s dX_s)]$ $= t u \mathbb{E}[ X_t X_u ] - t \mathbb{E}[X_t \int_{0}^{u} s dX_s ] -u \mathbb{E}[X_u \int_{0}^{t} s dX_s ]$ $+\mathbb{E}[\int_{0}^{t}s dX_s \int_{0}^{u}s dX_s]$
I refer to my above example on ways to deal with the terms in this expression given certain assumptions on $\mu$ and $\sigma$. Edit 3: Again this is just a way to start and obviously the calculations involving standard Brownian motion are trivial but the point is that the laws of $Y^{(1)}$ and $Y^{(2)}$ are equivalent (as opposed to equal) as soon as you show $g_1(t,u) = g_2(t,u)$ for all $t,u>0$.
-
I have partially gone over your answer. It should be noted that ${\rm E}[X_t \int_0^t {s\,{\rm d}X_s } ] = t^2 /2$, and ${\rm E}[(\int_0^t {X_s \,{\rm d}s} )^2 ] = t^3 /3$ (as is well known, $\int_0^t {X_s \,{\rm d}s} \sim {\rm N}(0,t^3/3)$). – Shai Covo Jan 19 2011 at 21:31
Thank you for the help I will make the appropriate edits. – jzadeh Jan 19 2011 at 22:48
I don't see the connection with the initial question. What is your point exactly ? – The Bridge Jan 20 2011 at 7:16
Thanks for the downvote The Bridge.... I refer you to my above passage "it should then suffice to characterize the processes covariance structure in order to have a complete understanding of the law of the processes". Since that is obviously to vague I have elaborated a little more in edit 2 and I refer you to one of the excellent texts by Robert Adler for the theorems I am citing on Gaussian processes. # R.J. Adler, (1990), , An Introduction to Continuity, Extrema, and Related Topics for General Gaussian Processes, IMS Lecture Notes-Monograph Series, Vol 12, vii + 160 – jzadeh 0 secs ago – jzadeh Jan 20 2011 at 10:16
+1 for The Bridge's question. Re jzadeh's second edit: indeed, $E(Y^{(1)}_tY^{(1)}_s)=E(Y^{(2)}_tY^{(2)}_s)$ for every $(t,s)$ iff $E(X^{(1)}_tX^{(1)}_s)=E(X^{(2)}_tX^{(2)}_s)$ for every $(t,s)$. This is obvious and general--but not the point. The OP asks for cases when the laws of $Y^{(1)}$ and $Y^{(2)}$ are equivalent (as opposed to equal). – Didier Piau Jan 20 2011 at 10:28
http://mathhelpforum.com/statistics/143767-if-die-assumed-fair-what-most-likely-number-sides-has.html | # Thread:
1. ## If the die is assumed to be fair, what is the most likely number of sides it has.
Suppose the following experiment is conducted : A die with s sides marked 1 through s is tossed n times and the number of times a 1 is tossed is recorded. After many repetitions of the experiment, it is found that the number of 1's tossed has a mean of 33 and a standard deviation of 5.5.
a). If the die is assumed to be fair, what is the most likely number of sides it has ?
Thank you.
2. Originally Posted by sahip
Suppose the following experiment is conducted : A die with s sides marked 1 through s is tossed n times and the number of times a 1 is tossed is recorded. After many repetitions of the experiment, it is found that the number of 1's tossed has a mean of 33 and a standard deviation of 5.5.
a). If the die is assumed to be fair, what is the most likely number of sides it has ?
Thank you.
Perhaps I'm missing some complexity here, but for a 6-sided die, you'd expect each face value to appear close to one sixth of the time, so I would think that you could simply write
$\frac{n}{s} \approx 33$
and use the nearest integer function when solving, without needing to think about standard deviation.
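If $n$ is not known either, the reported standard deviation does pin both unknowns down, since the count of 1's is binomial with mean $n/s$ and variance $\frac{n}{s}\left(1-\frac{1}{s}\right)$. A small sketch of that calculation (my own illustration, not part of the reply above):

```python
mean, var = 33.0, 5.5 ** 2          # observed mean and variance of the count of 1's
one_minus_inv_s = var / mean        # (1 - 1/s) = 30.25 / 33
s = 1.0 / (1.0 - one_minus_inv_s)   # number of sides
n = mean * s                        # number of tosses
print(round(s), round(n))           # -> 12 396, i.e. a 12-sided die tossed 396 times
```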
http://mathhelpforum.com/calculus/90207-integration-separation-variables.html | # Thread:
1. ## Integration by separation of variables
$\rho \frac {du}{d \rho}=1-e^u$
Use separation of variables to obtain:
$e^u=\frac 1 {1-\frac{c_2} \rho}$
When I separate the variables I get:
$\frac{du}{1-e^u}=\frac{d \rho}\rho$
But I can't see how to integrate the LHS or manipulate it to give the required form.
I have tried:
$(1+e^u+e^{2u}+e^{3u}+e^{4u}+...)du=\frac{d \rho} \rho$
$\therefore u-1+1+e^u+\frac12e^{2u} +\frac13e^{3u}+...=ln \rho+c$
But that doesn't seem to be getting me far.
2. Originally Posted by Kiwi_Dave
$\rho \frac {du}{d \rho}=1-e^u$
Use separation of variables to obtain:
$e^u=\frac 1 {1-\frac{c_2} \rho}$
When I separate the variables I get:
$\frac{du}{1-e^u}=\frac{d \rho}\rho$
But I can't see how to integrate the LHS or manipulate it to give the required form.
I have tried:
$(1+e^u+e^{2u}+e^{3u}+e^{4u}+...)du=\frac{d \rho} \rho$
$\therefore u-1+1+e^u+\frac12e^{2u} +\frac13e^{3u}+...=ln \rho+c$
But that doesn't seem to be getting me far.
$\frac{1}{1 - e^u} = \frac{1 - e^u + e^u}{1 - e^u} = \frac{1 - e^u}{1 - e^u} - \frac{-e^u}{1 - e^u} = 1 - \frac{-e^u}{1 - e^u}$
integrate the last expression
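For completeness, here is one way the integration can be finished (a sketch; the constants $c_1$ and $c$ are names I introduce, with $c_2=-1/c$):

$\int\left(1-\frac{-e^u}{1-e^u}\right)du=\int\frac{d\rho}\rho \quad\Rightarrow\quad u-\ln|1-e^u|=\ln\rho+c_1$

Exponentiating gives $\frac{e^u}{1-e^u}=c\rho$, so

$e^u=\frac{c\rho}{1+c\rho}=\frac 1{1+\frac 1{c\rho}}=\frac 1{1-\frac{c_2}\rho}$

which is the required form.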
http://www.physicsforums.com/showthread.php?p=4201130 | Physics Forums
## Entropy is a measure of energy available for work ????
"Entropy is a measure of energy availiable for work". Can someone explain this to me? Give some examples that show in what sense it is true. It has to come with a lot of caveats, proviso's etc. because its simply not true on its face.
I mean, if I have a container of gas at some temperature above 0 K, then I can extract all of its internal energy as work, just let it quasistatically expand to infinity.
I mean, if I have a container of gas at some temperature above 0 K, then I can extract all of its internal energy as work, just let it quasistatically expand to infinity.
Such a container has internal pressure P.
Expanding from P into a vacuum does no work.
Expanding from P against an external pressure P' < P does work, but as this happens P diminishes until P = P' when the system is in equilibrium.
How would you proceed from this equilibrium to infinity, where P = 0 ?
Quote by Studiot Such a container has internal pressure P. Expanding from P into a vacuum does no work. Expanding from P against an external pressure P' < P does work, but as this happens P diminishes until P = P' when the system is in equilibrium. How would you proceed from this equilibrium to infinity, where P = 0 ?
Yes, you would need initial pressure (P) greater than zero, and ambient pressure (P') equal to zero, i.e. in outer space. But the point remains, the statement that "entropy is a measure of energy unavailable for work" is contradicted by this example.
Expanding from P into a vacuum does no work.
This is fundamental, pushing against something that offers no resistance does no work.
Quote by Rap "Entropy is a measure of energy availiable for work".
Where did you get that quote? It is wrong: it is missing the word "not"!
Quote by wiki Entropy is ... the measure of a system's thermal energy per unit temperature that is unavailable for doing useful work.
http://en.wikipedia.org/wiki/Entropy
Where did you get that quote? It is wrong: it is missing the word "not"!
Agreed - I think that was a typo -, but we also need to dispel the misconception in the proposed counterexample that follows.
Once that is done it is easy to explain the correct reasoning.
Yes, sorry, I misquoted it. It should be something like "Entropy is a measure of the energy NOT available for useful work". I also see the point that the example I gave is not good. In order for the expansion to be slow, there has to be an opposing force almost equal to the pressure force, and that force has to diminish as the volume increases (and pressure decreases). This opposing force would be the mechanism by which work was done on the environment. Something like a mass in a gravitational field, in which the mass is slowly being reduced by removal. I found a web site http://web.mit.edu/16.unified/www/SP...es/node48.html which seems to give an explanation, I will have to look at it closely. Also the Wikipedia quote says it is "energy per unit temperature" not available for work, which I cannot immediately decipher.
Looking at the above site, it seems to me what it is saying is that the amount of energy unavailable for useful work in a Carnot cycle is equal to the entropy extracted from the hot reservoir (which is equal to the entropy deposited in the cold reservoir) times the temperature of the cold reservoir. How you get from this to the idea that "Entropy is a measure of the energy unavailable for work" still eludes me. Entropy of what? I tried assuming that the hot reservoir was actually a hot system with a finite amount of energy and entropy. Again, you can prove that if you extract entropy ∆S from the hot body, the amount of unavailable energy is Tc ∆S where Tc is the temperature of the cold reservoir. But you can only extract so much entropy from the hot body using a working body at the cold reservoir temperature. If their temperatures are very close, you can extract very little entropy and energy. The great majority of the internal energy of the hot body is unavailable for work in this case. If you set the cold reservoir to zero degrees, then the amount of energy unavailable for work is zero. You could extract all of the internal energy of the hot body as work. (right?). I still don't get it.
Quote by Rap Looking at the above site, it seems to me what it is saying is that the amount of energy unavailable for useful work in a Carnot cycle is equal to the entropy extracted from the hot reservoir (which is equal to the entropy deposited in the cold reservoir) times the temperature of the cold reservoir. How you get from this to the idea that "Entropy is a measure of the energy unavailable for work" still eludes me. Entropy of what? I tried assuming that the hot reservoir was actually a hot system with a finite amount of energy and entropy. Again, you can prove that if you extract entropy ∆S from the hot body, the amount of unavailable energy is Tc ∆S where Tc is the temperature of the cold reservoir. But you can only extract so much entropy from the hot body using a working body at the cold reservoir temperature. If their temperatures are very close, you can extract very little entropy and energy. The great majority of the internal energy of the hot body is unavailable for work in this case. If you set the cold reservoir to zero degrees, then the amount of energy unavailable for work is zero. You could extract all of the internal energy of the hot body as work. (right?). I still don't get it.
We have hashed this over before. The problem with "entropy is a measure of the energy unavailable for work" is that standing alone it is not clear and open to a number of interpretations. It requires a special definition of "energy unavailable for work" as the explanation shows. Without that explanation it can easily lead to misinterpretation.
For example, between two temperatures, Th and Tc, the heat flow Q is capable of producing an amount of work, W = Q(1-Tc/Th), i.e. with a Carnot engine. So there is energy E = QTc/Th that is, in that sense, "unavailable" for doing work. Yet, as we know, ΔS = 0 for a Carnot engine. So one could well ask, how is 0 a measure of QTc/Th? The answer is: "well, that is not what we mean by 'energy unavailable to do work'. We really mean 'lost work' which is the potential work that could be extracted minus the work that was actually extracted, or the amount of additional work required to restore the system and surroundings to their original state if you saved the output work and used it to drive the process in reverse." Hence the confusion.
So, as I have said before, this particular statement should not be used to introduce the concept of entropy. By itself it explains nothing and leads to great confusion.
AM
Quote by Andrew Mason We have hashed this over before. The problem with "entropy is a measure of the energy unavailable for work" is that standing alone it is not clear and open to a number of interpretations. It requires a special definition of "energy unavailable for work" as the explanation shows. Without that explanation it can easily lead to misinterpretation. For example, between two temperatures, Th and Tc, the heat flow Q is capable of producing an amount of work, W = Q(1-Tc/Th) ie. with a Carnot engine. So there is energy E = QTc/Th energy that is, in that sense, "unavailable" for doing work. Yet, as we know, ΔS = 0 for a Carnot engine. So one could well ask, how is 0 a measure of QTc/Th? The answer is: "well, that is not what we mean by 'energy unavailable to do work'. We really mean 'lost work' which is the potential work that could be extracted minus the work that was actually extracted, or the amount of additional work required to restore the system and surroundings to their original state if you saved the output work and used it to drive the process in reverse." Hence the confusion. So, as I have said before, this particular statement should not be used to introduce the concept of entropy. By itself it explains nothing and leads to great confusion. AM
Well, I kind of thought that was case, but there was also the possibility I was missing something. Thanks for the clarification.
The time for hand waving is over, here is some mathematics.

Consider a universe consisting of a system contained in a heat bath or reservoir at uniform constant temperature T. Consider changes in the function Z = entropy of bath plus entropy of the system = entropy of this universe.

dZ = dSb + dSs .................1

where b refers to the bath and s refers to the system. If the system absorbs heat dq, the same amount of heat is lost by the bath, so the entropy change is

dSb = -dq/T ...................2

Substituting this into equation 1,

dZ = dSs - dq/T ..............3

Now consider a change of state of the system from state A to state B. By the first law,

dUs = dq - dw ..................4

Combining equations 3 and 4 and rearranging,

dSs = (dUs + dw)/T + dZ

Re-arranging,

dw = TdSs - dUs - TdZ

Since TdZ is always a positive quantity or zero,

dw ≤ TdSs - dUs

where dw is the work done by the system. This is the principle of maximum work and calculates the maximum work that can be obtained from the system. As you can see it has two components, viz. from the entropy created and from the change in internal energy, and these act in opposite directions, so in this sense the TdSs term reduces the amount of work obtainable from the internal energy of the system and accounts for unavailable energy, since TdSs has the dimensions of energy.

Note the usual caveat: the inequality refers to irreversible processes, the equality to reversible ones.
Quote by Rap "Entropy is a measure of energy availiable for work". Can someone explain this to me? Give some examples that show in what sense it is true. It has to come with a lot of caveats, proviso's etc. because its simply not true on its face. I mean, if I have a container of gas at some temperature above 0 K, then I can extract all of its internal energy as work, just let it quasistatically expand to infinity.
Quote by Rap "Entropy is a measure of energy availiable for work". Can someone explain this to me? Give some examples that show in what sense it is true. It has to come with a lot of caveats, proviso's etc. because its simply not true on its face. I mean, if I have a container of gas at some temperature above 0 K, then I can extract all of its internal energy as work, just let it quasistatically expand to infinity.
What you said is only possible if the gas expands both adiabatically and reversibly. In an adiabatic and reversible expansion, the change in entropy of the gas is zero. Under that condition, one could turn all the internal energy into work.
Any deviation from the conditions of adiabatic and reversible would result in some internal energy not being turned to work.
First, I prove that one can extract all the internal energy from a monatomic ideal gas using an expansion that is BOTH adiabatic and reversible.
Suppose one were to take an ideal gas in a closed chamber and expand it both adiabatically and slowly, so that it is in a state near thermal equilibrium at all times. No entropy goes in or out of the chamber.
At the end of the expansion, even in the limit of infinite volume, you would end up with a gas of finite temperature.
The ideal gas law is:
1) PV=nRT
where P is the pressure of the gas, V is the volume of the chamber, n is the molarity of the gas, R is the gas constant and T is the temperature.
The internal energy of the ideal gas is:
2) U=(3/2)nRT
where U is the internal energy of the gas and everything else is the same.
Substituting equation 2 into equation 1:
3)U=(3/2)PV
Before the gas starts expanding, let P=P0, V=V0, T=T0, and U=U0. The chamber is closed, so "n" is constant the entire time. There are three degrees of freedom for each atom in a mono atomic gas. Therefore, for a mono atomic gas:
3) U0=1.5 P0 V0
The adiabatic expansion of an mono-atomic gas is:
4) P0 V0 ^(5/3)= P V^(1.5)
Therefore,
5) P = P0 (V0/V)^(5/3)
The work, W, done by the gas is
6) W = ∫[V0→∞] P dV
Substituting equation 5 into equation 6:
7) W = (P0 V0^(5/3)) ∫[V0→∞] V^(-5/3) dV
Evaluating the integral in equation 7:
8) W = (3/2)(P0 V0^(5/3)) V0^(-2/3)
9) W=(1.5) P0 V0
The expression for W in equation 9 is the same as the expression for U0 in equation 3. Therefore, the internal energy has been taken out completely.
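A quick symbolic check of steps 6 through 9 (just an illustration of the arithmetic; the SymPy phrasing is mine, not part of the original post):

```python
import sympy as sp

P0, V0, V = sp.symbols('P_0 V_0 V', positive=True)
P = P0 * (V0 / V) ** sp.Rational(5, 3)      # adiabat through (P0, V0) with gamma = 5/3
W = sp.integrate(P, (V, V0, sp.oo))         # work done expanding to infinite volume
print(sp.simplify(W))                       # 3*P_0*V_0/2, i.e. all of U0 = (3/2) P0 V0
```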
I’ll solve the problem later (a week or so) for an expansion with sliding friction. There, the increase in entropy characterizes the amount of internal energy not turned into work. However, I will set up the problem. I will show the two equations that makes the case with sliding friction different from the reversible condition.
One removes the reversibility condition by adding sliding friction,
10) Wf = ∫[V0→Vf] (P-Pf) dV
where Pf is the pressure due to sliding friction and Vf is the volume when the P=Pf. The chamber stops expanding when P=Pf.
However, another expression is necessary to describe how much entropy is created.
11) dQ = Pf dV
Equation 11 merely says that the energy used up by the sliding friction causes entropy to be created. The heat energy, dQ, is the energy used up by friction.
I don’t have time now, so I leave it as an exercise. Honest, moderator, I promise to get back to it. However, he wants an example where the creation of entropy limits the work that can be extracted. This is a good one.
Spoiler
Wf<W. Not all the internal energy is turned into work with sliding friction included. Let Q be the work done by the sliding force alone. The increase in entropy is enough to explain why the internal energy is not being turned into work.
Any deviation from the conditions of adiabatic and reversible would result in some internal energy not being turned to work.
Do you not agree that the maximum possible work is extracted in a reversible isothermal expansion?
The adiabatic expansion of an mono-atomic gas is: 4) P0 V0 ^(5/3)= P V^(1.5)
Are you sure you mean this: you have different values of gamma on each side?
Quote by Studiot Do you not agree that the maximum possible work is extracted in a reversible isothermal expansion?
In an isothermal expansion, energy is entering the gas from a hot reservoir. Therefore, one can't say that one is extracting the energy from the ideal gas. Most of the energy is being extracted from the hot reservoir, not from the ideal gas.
In the corresponding isothermal expansion, the ideal gas is acting like a conduit for energy and entropy. The ideal gas is not acting as a storage matrix for the energy.
The OP was saying that the work was being extracted from the internal energy that was initially embedded in the ideal gas. All work energy comes from the container of gas, not outside reservoirs. So the walls of the container have to be thermal insulators.
If you allow heat energy to conduct through the walls of the container, then the work may exceed the initial value of the internal energy. The hot reservoir can keep supplying energy long after the internal energy is used up.
So I stick to my guns with regards to the specificity of the OPs hypothesis. He was unconsciously assuming that the expansion is both adiabatic and reversible.
I disagree with those people who said that the gas would remain the same temperature during the expansion, and that not all the energy could be extracted from the internal energy. I showed that the internal energy could be entirely extracted for an ideal monatomic gas under adiabatic and reversible conditions.
The big problem that I have with the OP's question is with the word "quasistatic". I conjecture that the OP thought that "quasistatic" meant "both adiabatic and reversible".
A quasistatic process can be both nonadiabatic and irreversible. However, "quasistatic" is a useful hypothesis. The word "quasistatic" implies spatial uniformity. In this case, the word quasistatic implies that the temperature and the pressure of the ideal gas is uniform in the chamber.
Quasistatic implies that enough time has passed between steps that both temperature and pressure are effectively constant in space. Thus, temperature is not a function of position. Pressure is not a function of position.
Quote by Studiot Are you sure you mean this: you have different values of gamma on each side?
I think this is correct. I didn't spend much time checking my work. If you see an arithmetic blunder, feel free to correct me.
Also, I specified a specific case to simplify the problem. So even if I did it correctly, the gamma value that I used was atypical. Next time, I will let gamma be a parameter of arbitrary value.
I specified a monatomic gas. The gas is comprised of individual atoms. There are no internal degrees of freedom in these atoms. Any correlation between coefficients may be due to my choice.
There are many ways an expansion can be irreversible. I think the one most people think of is where the expansion is not quasistatic. Suppose the gas is allowed to expand freely, so that temperature and pressure are not uniform. This is irreversible. However, the mathematics is way beyond my level of expertise.
The problem is tractable if the process is quasistatic. So, I think the best thing would be to show how it works with a quasistatic but irreversible process. For instance, what happens if one turns on the sliding friction and the static friction in this expansion. That would result in a process where entropy is created. In other words, friction would result in an irreversible expansion even under quasistatic conditions.
I don't have time now. I will post a solution to that later. For now, just remember that not all quasistatic processes are reversible.
I wanted to be more subtle and polite but
$${P_1}{V_1}^\gamma = {P_2}{V_2}^\gamma$$
Whereas you have
$${P_0}{V_0}^{\frac{5}{3}} = {P_1}{V_1}^{\frac{3}{2}}$$
I disagree with those people who said that the gas would remain the same temperature during the expansion
Is this a rejection of Joule's experiment and the definition of an ideal gas?
If so you should make it clear that your view is not mainstream physics.
It should be noted that Joule's experiment was both adiabatic and isothermal and has been repeated successfully many times.
You are correct in observing that during an isothermal expansion neither q nor w are zero.
However what makes you think the internal energy is the same at the beginning and end, in the light of your above statement?
One definition (or property derivable from an equivalent definition) of an ideal gas is that its internal energy is a function of temperature alone so if the temperature changes the internal energy changes. To remove all the internal energy you would have to remove all the kinetic energy of all its molecules.
Quote by Studiot I wanted to be more subtle and polite but $${P_1}{V_1}^\gamma = {P_2}{V_2}^\gamma$$ Whereas you have $${P_0}{V_0}^{\frac{5}{3}} = {P_1}{V_1}^{\frac{3}{2}}$$
I think it was a typo. Darwin did write: P = P0 (V0/V)^(5/3) a little farther down.
Is this a rejection of Joule's experiment and the definition of an ideal gas? If so you should make it clear that your view is not mainstream physics.
I am not sure that you are both talking about the same process. Darwin was referring to a quasi-static adiabatic expansion of an ideal gas. Temperature is given by the adiabatic condition:
$$TV^{(\gamma - 1)} = \text{constant}$$
So, if volume changes this cannot be isothermal. Temperature has to change.
I think you (Studiot) may be talking about free expansion, not quasi-static expansion, in which case T is constant for an ideal gas.
AM
Studiot - I agree with your derivation, but with regard to the OP, the correct statement would then be: "an infinitesimal change in entropy is a measure of the minimum infinitesimal amount of energy unavailable for work given a particular ambient temperature."

Darwin123 - Thank you for clarifying the muddled OP. Depending on conditions, all, some, or none of a system's internal energy can be converted to work, and so the statement "Entropy is a measure of the energy unavailable for work" is ambiguous at best, wrong at worst. You are right, I was assuming adiabatic and reversible. Adiabatic or else you are potentially using energy from somewhere else to do the work. Reversible, because it will give the minimum amount of work unavailable. I should have said that instead of quasistatic. I take quasistatic to mean, by definition, that a process can be described as a continuum of equilibrium states.
http://mathoverflow.net/questions/28784?sort=newest | ## Example of an algebra finite over a commutative subalgebra with infinite dimensional simple modules
Let $A$ be an algebra over an algebraically closed field $k.$ Recall that if $A$ is a finitely generated module over its center, and if its center is a finitely generated algebra over $k,$ then by the Schur's lemma all simple $A$-modules are finite dimensional over $k.$
Motivated by the above, I would like an example of a $k$-algebra $A,$ such that:
1) $A$ has a simple module of infinite dimension over $k,$
2) $A$ contains a commutative finitely generated subalgebra over which $A$ is a finitely generated left and right module.
Thanks in advance.
-
Is this a homework problem? – S. Carnahan♦ Jun 19 2010 at 23:36
No, but you could use it as a homework problem if you wish. – Bedini Jun 20 2010 at 0:25
Is there a reference for the statement in the first paragraph? And are there familiar (noncommutative) infinite dimensional algebras meeting these conditions? The motivation here needs some reinforcement. For me the interesting examples are universal enveloping algebras of finite dimensional Lie algebras in prime characteristic, where Schur's lemma isn't enough to prove finite dimensionality of all simple modules. (Ditto for quantized enveloping algebras or function algebras at a root of unity.) – Jim Humphreys Jun 20 2010 at 12:51
@Jim: I don't know the exact reference, but one could prove it as follows. Let V be a simple A-module. Without loss of generality we may assume that the annihilator of V is 0. Let Z denote the center of A. It follows that V is f.g. Z-module and any nonzero element of Z acts invertibly on V (Schur's lemma), so Nakayama's lemma implies that Z is a field. But by the assumption Z is a finitely generated algebra over an algebraically closed field k. Therefore Z=k and V is finite over k. – Bedini Jun 20 2010 at 20:53
I can't follow the last steps in your sketch and would prefer a reference. A textbook version I recall (following Jacobson's original line of proof) treats only universal enveloping algebras over a field of prime characteristic, combining Schur's Lemma with a generalized version of Nakayama's Lemma and the Hilbert Nullstellensatz. Is there a simpler version? (And other natural examples besides enveloping algebras where the same hypotheses are satisfied?) – Jim Humphreys Jun 20 2010 at 22:46
## 1 Answer
Doc, this is a stinker. Your condition (2) forces your algebra to be finitely generated PI, and every little hare knows that simple modules over such algebras are finite-dimensional. See 13.4.9 and 13.10.3 of McConnell-Robson...
-
Very good! Thanks. – Bedini Jun 21 2010 at 17:37
http://mathoverflow.net/questions/58483?sort=votes | ## Looking for a simple proof that R^2 has only one smooth structure
So not so long ago, I asked for a simple proof that $\mathbf{R}$ has only one smooth structure. A proof that was communicated to me by Ryan Budney was the following:
So let me recall his argument: So let $X$ be a real line endowed with a "potentially" exotic smooth structure. We know that $X$ is Hausdorff and paracompact so for every open covering $\mathcal{U}$ of $X$ we have a partition of unity dominated by $\mathcal{U}$. Using this we can endow $\mathbf{R}$ with a Riemanian metric $ds^2$ (choose your favorite open covering which is locally finite!). Let $x_0$ be a point of $X$ so that $X-x_0=X^+\bigcup X^{-}$ is the disjoint union of the two components. Finally, note that one may integrate this metric against the Haar measure of $X$ using the velocity vectors $1$ and $-1$ in the fiber above $x_0$ to get two bijections
$f^+:X^+\rightarrow\mathbf{R}_{>0}$
and
$f^-:X^-\rightarrow\mathbf{R}_{<0}$.
Since the metric $ds^2$ is smooth we see that $f^+$ and $f^-$ are smooth and that they glue in a smooth way. So basically, the proof works because we can think of $\mathbf{R}$ as the union of two geodesics.
Q: Is there somekind of similar argument for $\mathbf{R}^2$ and $\mathbf{R}^3$ ?
Any simple proof along different lines is welcome!
-
It would be strange (or interesting...) that a similar argument worked for $n=2$ and $n=3$, because nothing similar can work for $n=4$ :) – Mariano Suárez-Alvarez Mar 14 2011 at 21:55
A different way to phrase that argument is: we can pick a complete Riemannian metric on our $1$-dimensional smooth manifold $M$, see e.g. mathoverflow.net/questions/18844; the geodesic through one of its points is a surjective map $\mathbb R\to M$, according to Hopf-Rinow. The map is locally a diffeo by the inverse function theorem, and if it were not injective it would be periodic, by uniqueness of geodesics---but in that case $M$ would be compact. – Mariano Suárez-Alvarez Mar 14 2011 at 22:06
Ken, I think there's a confusion regarding the meaning of "smooth structure". One meaning is a maximal atlas of smooth charts. Another is a diffeomorphism class of such maximal atlases. Your example gives two distinct maximal atlases of smooth charts on $\mathbb{R}$. But these two maximal atlases give rise to diffeomorphic manifolds (note, though, that the diffeomorphism is not given by the identity map). I think this is explained in the first chapter of Spivak's book. – Dan Ramras Mar 14 2011 at 23:22
Hi Ken, the example you gave, say $X$, is equivalent to the usual smooth structure on $\mathbf{R}$. Indeed the map $\sqrt[3]{\,\cdot\,}:X\rightarrow\mathbf{R}$ is a diffeomorphism! – Hugo Chapdelaine Mar 14 2011 at 23:35
It seems to me that some of the modern proofs of the Uniformisation theorem have few to no topological prerequisites. I wouldn't be surprised if some of them did prove that any smooth structure on R2 can be refined to a complex structure equivalent to C or the unit disk, and is therefore diffeomorphic to R2. – Maxime Bourrigan Mar 15 2011 at 1:02
## 1 Answer
Some comments alluded to the possibility to show this using the Riemann uniformization theorem (by paracompactness, any oriented $2$-manifold has an almost complex structure, which is integrable by Newlander-Nirenberg and by the uniformization theorem, it will be biholomorphically equivalent to the plane or the unit disc, hence diffeomorphic to $R^2$). This is not circular, but to claim that this is "simple" would be utterly absurd. The complete proof of the uniformization theorem is one of the hardest mathematical achievements of the early 20th century; the proof uses a lot of analysis and also a bit of algebraic topology.
Using Morse theory, you can argue as follows: let $U$ be a connected noncompact surface, pick a Morse function $f: U \to \mathbb{R}$. One can modify $f$ so that it has no critical points of index $2$ and precisely one critical point of index $0$, so let us assume that $f$ has this property. This is the most basic case of the handle-cancellation technique.
Now let $C_{\ast}(f)$ be the chain complex of the Morse function. $C_k (f)$ has the critical points of index $k$ as a basis. If $f$ is as above, it follows that $C_0 (f)=Z$, $C_k (f)=0$ if $k \geq 2$. The differential $C_1 \to C_0$ will be zero and so $H_1 (U)= C_1 (f)$.
If $H_1 (U)=0$, we see that there is a Morse function $f:U \to \mathbb{R}$ with precisely one minimum. Use the flow lines of $f$ to cook up a diffeomorphism $f: U \to \mathbb{R}^2$.
I don't think you get this result much cheaper.
-
Thanks Johannes, I think that your argument fulfils my requirement! And yes indeed, the proof of the uniformization theorem of simply connected Riemann surfaces is far from trivial and extremely deep! – Hugo Chapdelaine Mar 15 2011 at 11:40
It is definitely far from simple and indeed was the source of great development in the XIXth and XXth century (if you allow me some advertisement pro domo, this book explains a part of this story: amazon.fr/…) but some modern proofs of the whole package aren't that long... – Maxime Bourrigan Mar 15 2011 at 12:49
BTW, your proof confuses me, wouldn't the square of the norm be a Morse function with exactly one minimum on a small exotic R4 ? How do you "cook up" your diffeomorphism? – Maxime Bourrigan Mar 15 2011 at 13:06
Assume that the minimum of $f$ is at $p$ and $f(p)=0$. For any $b>a >0$, the manifold $f^{-1}[a,b]$ is diffeomorphic to $[a,b] \times f^{-1}(a)$, see Milnor, Morse theory, Thm 3.1. Moreover, $f$ corresponds to the projection onto $R$ under this diffeomorphism (the phrase "use the flow lines" corresponds to this). Do this for many intervals; you get a diffeo $f^{-1}[a,\infty) \cong f^{-1}(a) \times [a,\infty)$. By the Morse lemma, $f^{-1}[0,a]$ is a disc when $a$ is small enough. These things can be glued together to give the asserted diffeo. – Johannes Ebert Mar 15 2011 at 14:12
If I remember correctly, the first step in the most standard proof of the uniformzation theorem is to show that a simply connected surface can be exhausted by relatively compact simply connected subsets (maybe with nice boundary). For that, a proper function and some algebraic topology is needed, in other words: quite a bit of what I sketched above. Only after this is done, the geometric analysis part of the proof begins. – Johannes Ebert Mar 15 2011 at 19:11
http://quant.stackexchange.com/questions/529/how-to-quickly-estimate-a-lower-bound-on-correlation-for-a-large-number-of-stock | # How to quickly estimate a lower bound on correlation for a large number of stocks?
I would like to find stock pairs that exhibit low correlation. If the correlation between A and B is 0.9 and the correlation between A and C is 0.9 is there a minimum possible correlation for B and C? I'd like to save on search time so if I know that it is mathematically impossible for B and C to have a correlation below some arbitrary level based on A to B and A to C's correlations I obviously wouldn't have to waste time calculating the correlation of B and C.
Is there such a "law"? If not, what are other methods of decreasing the search time?
-
## 2 Answers
Yes, there is such a rule and it is not too hard to grasp. Consider the 3-element correlation matrix
$$\left(\begin{matrix} 1 & r & \rho \\ r & 1 & c \\ \rho & c & 1 \end{matrix}\right)$$
which must be positive semidefinite. In simpler terms, that means all its eigenvalues must be nonnegative.
Assuming that $\rho$ and $r$ are known positive values, we find that the eigenvalues of this matrix go negative when
\begin{equation} c<\rho r-\sqrt{1-\rho ^2+\rho ^2 r^2-r^2}. \end{equation}
Therefore the right hand side of this expression is the lower bound for the AC correlation $c$ that you seek, with $\rho$ being the AB correlation and $r$ being the BC correlation.
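As a concrete sketch (the function name and the eigenvalue check below are my own illustration, not part of the answer), note that $1-\rho^2+\rho^2r^2-r^2=(1-\rho^2)(1-r^2)$, so both bounds and the positive-semidefiniteness condition can be verified numerically:

```python
import numpy as np

def bc_correlation_bounds(r_ab, r_ac):
    """Range of corr(B, C) keeping the 3x3 correlation matrix positive semidefinite."""
    root = np.sqrt((1.0 - r_ab ** 2) * (1.0 - r_ac ** 2))
    return r_ab * r_ac - root, r_ab * r_ac + root

lo, hi = bc_correlation_bounds(0.9, 0.9)
print(lo, hi)                      # about (0.62, 1.0) for the example in the question

m = np.array([[1.0, 0.9, 0.9],     # correlation matrix at the lower bound
              [0.9, 1.0, lo],
              [0.9, lo, 1.0]])
print(np.linalg.eigvalsh(m).min()) # close to 0: the smallest eigenvalue sits at the boundary
```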
-
Thanks, that is exactly what I was looking for! – Joshua Chance Feb 15 '11 at 22:49
i think it has a name, sometimes, like "law of the triangle" or something similar. look at fx volatility, it has exactly the same problem all the times, since you correlate currency pairs, those consistency conditions appear naturally – nicolas Mar 13 '12 at 7:52
What is the upper bound on that correlation ? – Qbik Apr 24 '12 at 23:59
An upper bound, in the general case, can be obtained in the same way -- compute the determinant of the $3\times3$ matrix and solve for $c$: one of the roots is a lower bound, the other an upper bound. The formula is the same, except for the sign in front of the square root. – Vincent Zoonekynd Jun 7 '12 at 8:07
The upper bound on BC correlation would be 1 for the example given. B=C would correlate to 1. If AB and AC are different, I don't know off the top of my head.
-
Is this supposed to be a joke? The upper bound for correlation is 1? – chrisaycock♦ Jun 7 '12 at 11:12
http://mathoverflow.net/questions/26446/references-for-complex-analytic-geometry/26501 | ## References for complex analytic geometry?
I'm looking for references on the "algebraic geometry" side of complex analytis, i.e. on complex spaces, morphisms of those spaces, coherent sheaves, flat morphisms, direct image sheaves etc. A textbook would be nice, but every little helps.
Grauert and Remmert's "Coherent analytic sheaves" seems to contain what I want, but it is very dense reading. You could say I'm looking for sources to read on the side as I work through G&R, to get different points of views and examples. For example, B. and L. Kaup's "Holomorphic functions of several variables" talks about the basics of complex analytic geometry, but doesn't go into much detail.
My motivation is twofold. First, I'm studying deformation theory, which necessarily makes use of complex spaces, both as moduli spaces and objects of deformations, so while I can avoid using complex spaces at the moment they're certain to come in handy later. Second, I want to be able to talk to the algebraic geometers in my lab, so I should know what their schemes and morphisms translate to in the analytic case. I like reading as much as I can about what I'm trying to learn, so:
### Do you know of other sources (anything: textbooks, lecture notes, survey articles, historical overviews, comparisons with algebraic geometry ...) that talk about complex spaces and their geometry?
-
The G&R book doesn't really address flatness (but handles everything else); they use a notion of "active germs" (non-zero-divisors in a suitable sense) to get around it. The series by Gunning & Rossi treats the case of analytic spaces which are reduced. Assuming reducedness is rather restrictive (e.g., cannot make fibers products, even for analytic fibers of a branched covering of Riemann surfaces), but it may be a good way of easing into the general case and becoming more comfortable with sheaf-theoretic reasoning. – BCnrd May 30 2010 at 13:30
Also, the old Seminaire Cartan lectures by C. Houzel on analytic spaces give a nice treatment of the local structure of analytic spaces and properties of local rings on them, especially the henselian property of the local rings and local structure of analytic maps with isolated point in a fiber. You may find that a useful prelude to Grauert-Remmert (if not already covered by Gunning & Rossi). – BCnrd May 30 2010 at 13:50
Thanks for the tips, I'll look both Gunning & Rossi and Houzel up tomorrow. Some random googling also turned up "Complex analytic geometry" by G. Fischer in Lecture notes in Mathematics. The index is available on SpringerLink (springerlink.com/content/l37102231p72/…), it looks like it talks about similar things as Grauert and Remmert do. – Gunnar Magnusson May 30 2010 at 18:22
## 5 Answers
Two books that I like a lot:
1) Joseph Taylor's Several complex variables with connections to algebraic geometry and Lie groups.
2) Constantin Banica and Octavian Stanasila's "Algebraic methods in the global theory of complex spaces", Wiley (1976)
-
Tony, the B&S book is certainly the ultimate reference post-Grauert-Remmert for "algebraic" aspects (flatness, $S_n$ loci, etc.). I didn't know that it was published in a language other than French & Romanian. – BCnrd May 30 2010 at 22:50
It exists in English - published by John Wiley and Sons in 1976. Unfortunately our library doesn't have it but one can find it through inter library loan. – Tony Pantev May 31 2010 at 2:26
Excellent! Thank you Tony, references like these are exactly what I was hoping for. – Gunnar Magnusson May 31 2010 at 10:58
For complex geometry, which really is fundamental in analytic deformation theory, I strongly suggest 2 sources besides the classical source by Griffiths and Harris: Complex Geometry: An Introduction by Daniel Huybrechts, which has rapidly become the standard text on the subject, and the online text draft of a comprehensive work by Demailly. The Demailly text is much more comprehensive and more advanced, with an emphasis on algebraic and differential geometry. But you may find it more helpful as it contains a great deal more near the research level. It can be found here: http://www-fourier.ujf-grenoble.fr/~demailly/manuscripts/agbook.pdf
-
Huybrechts' book is very good indeed, but like Griffiths and Harris almost exclusively discusses the manifold case. Demailly is my thesis advisor so I'm pretty familiar with his book. He does go into more detail, especially in Chapters 2 and 4, and eventually 9 when he'll have time to finish writing it, but his focus really is on the smooth case, the complex spaces being somewhat treated as digressions. And since we're on the subject of books on smooth complex manifolds, Complex differential geometry by Fangyang Zheng is an absolute dream. – Gunnar Magnusson May 31 2010 at 10:49
When I needed to understand a little bit of complex algebraic geometry to study a complex geometry problem, I used Griffiths & Harris' book. It was quite easy to learn from and to extract just the information I needed.
-
Thanks Benoit. Griffiths and Harris is very nice, but treats almost exclusively complex manifolds (they do mention analytic sets at the beginning). I'm looking for something more like an analytic version of Harthshorne, where the spaces in question are ringed spaces locally isomorphic to analytic sets. – Gunnar Magnusson May 30 2010 at 13:39
Since your interest is in deformation theory I would advise you to have a look at
"Introduction to singularities and deformations" by Greul, Lossen and Shushtin.
The first part of the book treats complex analytic geometry (complex space germs) and the second their deformation theory.
There's also a survey paper by Palamodov "Deformations of complex spaces" in Encyclopedia of Mathematics (Springer) which treats some foundational material as well.
Good luck!
-
Thanks Daniel, I'll take a look at both of those. Is the Palamodov paper the same one as appeared in "Russian Mathematical Surveys" in 1976? I've been looking for that one, but it's hard to find copies of that journal. Incidentally, there's a huge list of references and a historical overview of deformation theory by Doran here: www.math.columbia.edu/~doran/Hist%20Ann%20Bib.pdf - It's been extremely useful for finding insightful papers and articles. – Gunnar Magnusson May 30 2010 at 18:07
For deformation theory and complex manifolds, I'm a fan of Manetti's lecture notes.
-
I actually have those on my desk. The examples are very nice, and the historical surveys are a good touch. – Gunnar Magnusson May 30 2010 at 16:28
I like these notes a lot, too. If you like Manetti's notes, I recommend also looking at Kontsevich's and Gerstenhaber's work on deformation theory for a more general picture. – Kevin Lin May 30 2010 at 19:27
I think at some point, I'm going to begin a thread on general deformation theory where we need to distinguish carefully between algebraic and analytic deformation theory, as well as between classical algebraic deformation theory (Gerstenhaber, Schlessinger) and the modern theory (Hartshorne, Lurie). They all appear quite different, but they are clearly connected. Those interconnections aren't really clear to the beginner or even someone with a classical background like me. – Andrew L May 30 2010 at 20:29
Andrew, have you seen mathoverflow.net/questions/385/… ? I believe that differential graded Lie algebras are the bridge between the "classical" and the "modern" theory. – Kevin Lin May 31 2010 at 5:30
http://unapologetic.wordpress.com/2010/12/01/induction-and-restriction-are-additive-functors/
## Induction and Restriction are Additive Functors
Before we can prove the full version of Frobenius reciprocity, we need to see that induction and restriction are actually additive functors.
First of all, functoriality of restriction is easy. Any intertwinor $f:V\to W$ between $G$-modules is immediately an intertwinor between the restrictions $V\!\!\downarrow^G_H$ and $W\!\!\downarrow^G_H$. Indeed, all it has to do is commute with the action of each $h\in H\subseteq G$ on the exact same spaces.
Functoriality of induction is similarly easy. If we have an intertwinor $f:V\to W$ between $H$-modules, we need to come up with one between $\mathbb{C}[G]\otimes_HV$ and $\mathbb{C}[G]\otimes_HW$. But the tensor product is a functor on each variable, so it’s straightforward to come up with $1_{\mathbb{C}[G]}\otimes f$. The catch is that since we’re taking the tensor product over $H$ in the middle, we have to worry about this map being well-defined. The tensor $s\otimes v\in\mathbb{C}[G]\otimes V$ is equivalent to $sh^{-1}\otimes hv$. The first gets sent to $s\otimes f(v)$, while the second gets sent to $sh^{-1}\otimes f(hv)=sh^{-1}\otimes hf(v)$. But these are equivalent in $\mathbb{C}[G]\otimes_HW$, so the map is well-defined.
Next: additivity of restriction. If $V$ and $W$ are $G$-modules, then so is $V\oplus W$. The restriction $(V\oplus W)\!\!\downarrow^G_H$ is just the restriction of this direct sum to $H$, which is clearly the direct sum of the restrictions $V\!\!\downarrow^G_H\oplus W\!\!\downarrow^G_H$.
Finally we must check that induction is additive. Here, the induced matrices will come in handy. If $X$ and $Y$ are matrix representations of $H$, then the direct sum is the matrix representation
$\displaystyle\left[X\oplus Y\right](h)=\left(\begin{array}{cc}X(h)&0\\{0}&Y(h)\end{array}\right)$
And then the induced matrix looks like:
$\displaystyle \left[X\oplus Y\right]\!\!\uparrow_H^G(g)=\left(\begin{array}{ccccccc}X(t_1^{-1}gt_1)&0&X(t_1^{-1}gt_2)&0&\cdots&X(t_1^{-1}gt_n)&0\\{0}&Y(t_1^{-1}gt_1)&0&Y(t_1^{-1}gt_2)&\cdots&0&Y(t_1^{-1}gt_n)\\X(t_2^{-1}gt_1)&0&X(t_2^{-1}gt_2)&0&\cdots&X(t_2^{-1}gt_n)&0\\{0}&Y(t_2^{-1}gt_1)&0&Y(t_2^{-1}gt_2)&\cdots&0&Y(t_2^{-1}gt_n)\\\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\X(t_n^{-1}gt_1)&0&X(t_n^{-1}gt_2)&0&\cdots&X(t_n^{-1}gt_n)&0\\{0}&Y(t_n^{-1}gt_1)&0&Y(t_n^{-1}gt_2)&\cdots&0&Y(t_n^{-1}gt_n)\end{array}\right)$
Now, it’s not hard to see that we can rearrange the basis to make the matrix look like this:
$\displaystyle\left(\begin{array}{cc}\begin{array}{cccc}X(t_1^{-1}gt_1)&X(t_1^{-1}gt_2)&\cdots&X(t_1^{-1}gt_n)\\X(t_2^{-1}gt_1)&X(t_2^{-1}gt_2)&\cdots&X(t_2^{-1}gt_n)\\\vdots&\vdots&\ddots&\vdots\\X(t_n^{-1}gt_1)&X(t_n^{-1}gt_2)&\cdots&X(t_n^{-1}gt_n)\end{array}&0\\{0}&\begin{array}{cccc}Y(t_1^{-1}gt_1)&Y(t_1^{-1}gt_2)&\cdots&Y(t_1^{-1}gt_n)\\Y(t_2^{-1}gt_1)&Y(t_2^{-1}gt_2)&\cdots&Y(t_2^{-1}gt_n)\\\vdots&\vdots&\ddots&\vdots\\Y(t_n^{-1}gt_1)&Y(t_n^{-1}gt_2)&\cdots&Y(t_n^{-1}gt_n)\end{array}\end{array}\right)$
There’s no complicated mixing up of basis elements amongst each other; just rearranging their order is enough. And this is just the direct sum $X\!\!\uparrow_H^G\oplus Y\!\!\uparrow_H^G$.
http://physics.stackexchange.com/questions/44931/does-earths-rotation-affect-its-shape?answertab=active | # Does Earth's Rotation Affect Its Shape?
The question I am working on is, "Consider the following.
(a) Find the angular speed of Earth's rotation about its axis. rad/s
(b) How does this rotation affect the shape of Earth?"
I am fully capable of solving part (a); however, I am not sure how to describe the effect of Earth's rotation on its shape. I tried to search my textbook for the answer, but could not find anything. Is there an actual effect on the shape?
-
Possible duplicates: physics.stackexchange.com/q/8074/2451 and physics.stackexchange.com/q/10670/2451 – Qmechanic♦ Nov 23 '12 at 16:43
1
The answer is Yes. – Serg Nov 23 '12 at 17:34
1
@EMACK, definitely the rotation gives the Earth the shape of a disk, rather than a sphere. The distance to the Earth center is less on the poles than it's on the equator. It also affects the gravitation `g` that is measured at sea level on poles or equator. – Serg Nov 23 '12 at 17:40
2
@EMACK, The disk shape is a trade-off between gravitational force to the Earth's center, and centrifugal force due to rotation. The centrifugal force is zero on poles and the largest on the equator. But you can find much more detailed and numeric answer on the link posted by Qmechanic. – Serg Nov 23 '12 at 18:11
1
@EMACK: I suggest updating your title to reflect exactly what you're asking. Your current title is ambiguous; you could be asking whether the Earth's rotation affects its shape, or how it does so, or why it does so. – Keith Thompson Nov 23 '12 at 19:27
## 2 Answers
The Earth is mostly fluid. This may seem a strange claim but the rock in the mantle behaves like an extremely viscous fluid, which is why continental drift can happen.
Anyhow, if you imagine a stationary drop of liquid it will form a sphere. This is a bit of a cheat because small drops form spheres due to surface tension, not gravity, but the end results are similar. If you start the drop rotating, the water at the "equator" is going to feel an outwards force due to the rotation, so the drop will change shape and get bigger around the equator while the poles flatten. This shape is known as an oblate spheroid, and indeed it's the shape of the Earth because the Earth behaves like a rotating fluid drop.
To try and calculate the change of shape is a little messy, but luckily someone has done all the hard work for you and you can find the results here.
-
Is this outwards force due to rotation perhaps the centrifugal force? – Mack Nov 23 '12 at 18:24
1
Yes, it is the centrifugal force. – Emilio Pisanty Nov 23 '12 at 19:14
It is a very simple problem, it is not hard at all! Just add the gravitational and centrifugal potential and equal it to a constant. – Eduardo Guerras Valera Nov 24 '12 at 7:33
Whoever downvoted, please would you explain why you thought the answer wasn't helpful. I'm not complaining about the downvote, but I can't improve my answers if you won't tell me what's wrong with them. – John Rennie Nov 24 '12 at 14:24
I don't think it is difficult to derive analytically the shape of the Earth. Simply look for the shape of the surfaces of equal potential.
The geometrical symmetry reduces the calculation to a 2-dimensional problem. Assume the rotation axis is vertical. The potential is the sum of the gravitational plus centrifugal:
$\Phi=\Phi_{g}+\Phi_{c}=-\frac{GM_{(x,y)}}{\sqrt{x^2+y^2}}+\frac{\omega^{2}}{2}x^{2}=-\frac{GM_{r}}{r}+\frac{\omega^{2}}{2}r^{2} \cos^{2} l$
The angle $l$ is the same as the latitude, and $M_{r}$ is the mass enclosed by a spherical surface (but please see footnote) at the point, i.e $M_{r}=\frac{4}{3}\pi \rho r^{3}$ by assuming a constant density model. Therefore,
$\Phi= -\frac{4}{3} G \pi \rho r^{2} +\frac{\omega^{2}}{2}r^{2} \cos^{2} l = r^{2}(\frac{\omega^{2}}{2}\cos^{2} l -\frac{4}{3} G \pi \rho)$
Thus, the family of curves of constant (negative) potential $\Phi=-C^{2}$ is:
$-C^{2} = r^{2}(\frac{\omega^{2}}{2}\cos^{2} l -\frac{4}{3} G \pi \rho) = r^{2}(A^{2} \cos^{2} l -B^{2})$
Let's go back to rectangular coordinates, to see that this is indeed an ellipse:
$C^{2} = r^{2}(B^{2} - A^{2} \cos^{2} l) = (x^{2}+y^{2})(B^{2} - A^{2} \frac{x^{2}}{x^{2}+y^{2}}) = (x^{2}+y^{2})B^{2} - A^{2} x^{2}$
$C^{2} = (B^{2} - A^{2}) x^{2} + B^{2} y^{2}$
For that equation to be an ellipse, $B^{2} - A^{2}$ must be positive. This is natural, otherwise (see how we defined $A$ and $B$) the angular speed $\omega$ would make the centrifugal force stronger than the gravitational force. The semi-axes are then $C/B$ in the vertical direction, and $C/\sqrt{B^{2} - A^{2}}$, i.e. bigger, in the horizontal direction. Note too, that $A=0$ for $\omega = 0$, that is, you recover the spherical shape if there is no rotation.
Thus, an Earth with constant density that rotates as a rigid solid can be approximated by an ellipsoid shape, whose dimension along the rotation axis is smaller.
Additionally, we probably don't need the interior of the Earth to be molten for the hydrostatic equilibrium assumption to be valid. It could be completely cold and solid and the model would still hold, because at those size scales, relatively small deviations of the matter distribution from the constant-potential surfaces give rise to enormous shear stresses that rocks, no matter how hard and solid, cannot resist. That is why the liquid model is a valid approximation (but I have not done any numbers on this).
NOTE: We have assumed that any point belongs to a spherical surface that is completely full of matter, therefore the potential gravitational energy is the same as if all matter inside that sphere were located at the Earth centre. If the Earth were much more flattened, this approximation would not be valid.
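To put rough numbers on this, here is a minimal sketch using the standard first-order result for a slowly rotating homogeneous body, flattening $f=(a-c)/a\approx\tfrac54\,\omega^2a^3/(GM)$ (Newton's classical homogeneous-Earth estimate); the constants below are standard textbook values:

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
M = 5.972e24           # kg, Earth mass
a = 6.378e6            # m, equatorial radius
T_sidereal = 86164.0   # s, one rotation relative to the stars

omega = 2 * math.pi / T_sidereal
m = omega**2 * a**3 / (G * M)   # centrifugal / gravitational acceleration ratio at the equator
f = 1.25 * m                    # first-order flattening of a homogeneous rotating body

print(f"angular speed omega ~ {omega:.3e} rad/s")
print(f"m = w^2 a^3 / GM    ~ 1/{1/m:.0f}")
print(f"flattening f        ~ 1/{1/f:.0f}  (the observed value is about 1/298)")
```

The homogeneous-density model overestimates the flattening (about 1/230 versus the observed 1/298) precisely because the real Earth's mass is concentrated toward its centre.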
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.934421956539154, "perplexity_flag": "middle"} |
http://math.stackexchange.com/questions/236543/proving-a-reduction-formula-for-the-antiderivative-of-cosnx/236547 | # Proving a reduction formula for the antiderivative of $\cos^n(x)$
I want to show that for all $n\ge 2$, it holds that $$\int \cos^n x\ dx = \frac{1}{n} \cos^{n-1} x \sin x + \frac{n-1}{n}\int \cos^{n-2} x\ dx.$$ I'm not even getting the result for the induction base $(n=2)$: Using integration by parts, I only get $$\int \cos^2 x\ dx = \cos x \sin x + \int \sin^2 x\ dx.$$ I'm suspecting that I need to use some trigonometric identity here.
-
2
– Joe Nov 13 '12 at 17:40
## 2 Answers
You only need that $$\sin^2 x+\cos^2 x=1$$
Let $$\varphi(n)=\int \cos^n x dx$$
Integrate by parts
$$\begin{cases}\cos^{n-1} x =u\\ \cos x dx =dv\end{cases}$$
Then
$$\begin{align}\varphi(n)&=\int \cos^n x dx\\ &=uv-\int v du\\&=\cos^{n-1}x\sin x+\int (n-1)\cos^{n-2}x \sin ^2x d x\end{align}$$
But $\sin^2 x=1-\cos^2 x$
$$\begin{align}&=\cos^{n-1}x\sin x+\int (n-1)\cos^{n-2}x \sin ^2x \, d x \\&=\cos^{n-1}x\sin x+\int (n-1)\cos^{n-2}x \left(1-\cos^2 x\right) d x \\ &=\cos^{n-1}x\sin x+\int (n-1)\cos^{n-2}x \, dx-\int (n-1)\cos^{n-2}x \cos^2 x \, d x \\&=\cos^{n-1}x\sin x+(n-1)\int \cos^{n-2}x \, dx-(n-1)\int \cos^{n }x \, dx \\&=\cos^{n-1}x\sin x+(n-1)\int \cos^{n-2}x\, dx -(n-1)\varphi(n)\end{align}$$
Now solve for $\varphi(n)$.
-
thanks, great answer! :) – somebody Nov 13 '12 at 18:23
For the base case, use that $\sin^2 x = 1-\cos^2 x$, write $1=\cos^{0} x$ and you get:
$$\int \cos^2 x \ dx = \cos x \sin x + \int \cos^{0} x\ dx- \int \cos^2 x \ dx$$
So: $$2\int \cos^2 x dx = \sin x \cos x + \int \cos^0 x \ dx$$
Dividing by $2$ gives your result.
Now try for $n>2$. It's basically the same argument, where you replace $\sin^2 x=1-\cos^2 x$ and then solve for the integral.
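As a quick symbolic sanity check of the reduction formula itself, here is a minimal SymPy sketch (the two antiderivatives may differ by a constant, so it compares derivatives):

```python
import sympy as sp

x = sp.symbols('x')
for n in range(2, 7):
    lhs = sp.integrate(sp.cos(x)**n, x)
    rhs = sp.cos(x)**(n - 1) * sp.sin(x) / n + sp.Rational(n - 1, n) * sp.integrate(sp.cos(x)**(n - 2), x)
    print(n, sp.simplify(sp.diff(lhs - rhs, x)) == 0)   # True for each n
```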
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8839758634567261, "perplexity_flag": "middle"} |
http://chempaths.chemeddl.org/services/chempaths/?q=book/General%20Chemistry%20Textbook/Nuclear%20Chemistry/1540/bombardment-positive-ions | # Bombardment with Positive Ions
Submitted by ChemPRIME Staff on Thu, 12/16/2010 - 15:57
A powerful method of artificially inducing nuclear reactions is the bombardment of a sample of matter with ions. When the bombarding particle is positively charged, which is usually the case, it must have a very high kinetic energy to overcome the coulombic repulsion of the nucleus being bombarded. This is particularly necessary if the nucleus has a high nuclear charge. To give these charged particles the necessary energy, an accelerator or “atom smasher” such as a cyclotron must be used. The cyclotron was developed mainly by E. O. Lawrence (1901 to 1958) at the University of California. A schematic diagram of a cyclotron is shown in Fig. 1. Two hollow D-shaped plates (dees) are enclosed in an evacuated chamber between the poles of a powerful electromagnet. The two dees are connected to a source of high-frequency alternating current, so that when one is positive, the other is negative. Ions are introduced at the center and are accelerated because of their alternate attraction to the left- and right-hand dees. Since the magnetic field would make ions traveling at constant speed move in a circle, the net result is that they follow a spiral path as they accelerate until they finally emerge at the outer edge of one of the dees.
Figure 1 A cyclotron. The spiral path of the ions is shown in color.
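To get a feel for the numbers, here is a minimal sketch of the two quantities that characterize a simple cyclotron: the resonance frequency of the alternating voltage, $f = qB/(2\pi m)$, and the kinetic energy of an ion leaving at the outer radius $R$, $E = (qBR)^2/(2m)$. The field strength and dee radius below are assumed example values, not figures from the text:

```python
import math

q = 1.602e-19   # C, proton charge
m = 1.673e-27   # kg, proton mass
B = 1.5         # T, assumed magnetic field
R = 0.5         # m, assumed dee radius

f = q * B / (2 * math.pi * m)
E = (q * B * R) ** 2 / (2 * m)

print(f"resonance frequency ~ {f / 1e6:.1f} MHz")
print(f"exit kinetic energy ~ {E / 1.602e-13:.1f} MeV")   # 1 MeV = 1.602e-13 J
```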
Some examples of the kinds of nuclear reactions which are possible with the aid of an accelerator are as follows:
${}_{\text{12}}^{\text{24}}\text{Mg + }{}_{\text{1}}^{\text{1}}\text{H}\to \text{ }{}_{\text{11}}^{\text{21}}\text{Na + }{}_{\text{2}}^{\text{4}}\text{He }$ (1)
${}_{\text{3}}^{\text{7}}\text{Li + }{}_{\text{1}}^{\text{2}}\text{H}\to \text{ }{}_{\text{4}}^{\text{8}}\text{Be + }{}_{\text{0}}^{\text{1}}n\text{ }$ (2)
${}_{\text{46}}^{\text{106}}\text{Pd + }{}_{\text{2}}^{\text{4}}\text{He}\to \text{ }{}_{\text{47}}^{\text{109}}\text{Ag + }{}_{\text{1}}^{\text{1}}\text{H}$ (3)
A particularly interesting type of nuclear reaction performed in an accelerator is the production of the transuranium elements. These elements have atomic numbers greater than that of uranium (92) and are too unstable to persist for long in nature. The heaviest of them can be prepared by bombarding nuclei which are already heavy with some of the lighter nuclei:
${}_{\text{92}}^{\text{238}}\text{U + }{}_{\text{6}}^{\text{12}}\text{C}\to \text{ }{}_{\text{98}}^{\text{246}}\text{Cf + 4}{}_{\text{0}}^{\text{1}}n$ (4)
${}_{\text{92}}^{\text{238}}\text{U + }{}_{\text{7}}^{\text{14}}\text{N}\to \text{ }{}_{\text{99}}^{\text{247}}\text{Es + 5}{}_{\text{0}}^{\text{1}}n\text{ }$ (5)
${}_{\text{98}}^{\text{252}}\text{Cf + }{}_{\text{5}}^{\text{10}}\text{B}\to \text{ }{}_{\text{103}}^{\text{257}}\text{Lr + 5}{}_{\text{0}}^{\text{1}}n\text{ }$ (6)
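A quick bookkeeping check: each of the six reactions above conserves both the total charge number Z and the total mass number A. A minimal sketch (the (Z, A) pairs are transcribed from equations (1)–(6)):

```python
reactions = [
    # ([(Z, A) of reactants], [(Z, A) of products])
    ([(12, 24), (1, 1)],   [(11, 21), (2, 4)]),            # (1) Mg-24 + p  -> Na-21 + alpha
    ([(3, 7),   (1, 2)],   [(4, 8),   (0, 1)]),            # (2) Li-7  + d  -> Be-8  + n
    ([(46, 106), (2, 4)],  [(47, 109), (1, 1)]),           # (3) Pd-106 + alpha -> Ag-109 + p
    ([(92, 238), (6, 12)], [(98, 246)] + [(0, 1)] * 4),    # (4) U-238 + C-12 -> Cf-246 + 4n
    ([(92, 238), (7, 14)], [(99, 247)] + [(0, 1)] * 5),    # (5) U-238 + N-14 -> Es-247 + 5n
    ([(98, 252), (5, 10)], [(103, 257)] + [(0, 1)] * 5),   # (6) Cf-252 + B-10 -> Lr-257 + 5n
]
for reactants, products in reactions:
    assert sum(z for z, a in reactants) == sum(z for z, a in products)   # charge number
    assert sum(a for z, a in reactants) == sum(a for z, a in products)   # mass number
print("All six reactions balance in Z and A.")
```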
http://mathhelpforum.com/pre-calculus/34661-help-math-project.html | # Thread:
1. ## help in this math project
Help would be appreciated. If you can solve it all I will be very thankful, and I would like some hints too.
Attached Thumbnails
2. What have you done so far? Where are you stuck? What formulas do you know about circles?
3. The equation of a circle is $(x-a)^2 + (y-b)^2 = r^2$
where $(a,b)$ is the centre of the circle and r the radius.
4. $x^2 + y^2 - 2x - 4y - 4 = 0$
$(x^2 - 2x) + (y^2 - 4y) = 4$
Complete the square for both:
$(x^2 - 2x + 1) + (y^2 - 4y + 4) = 9$
Now we factor:
$(x - 1)^2 + (y - 2)^2 = 3^2$
Therefore, r = 3 and the center is $(1, 2)$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9305103421211243, "perplexity_flag": "middle"} |
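A quick SymPy check of the algebra above (a minimal sketch):

```python
import sympy as sp

x, y = sp.symbols('x y')
original  = x**2 + y**2 - 2*x - 4*y - 4   # left-hand side of the given equation
completed = (x - 1)**2 + (y - 2)**2 - 9   # the completed-square form, moved to one side
assert sp.expand(original - completed) == 0
print("Same curve: centre (1, 2), radius 3.")
```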
http://math.stackexchange.com/questions/154715/if-m-and-n-are-graded-modules-what-is-the-graded-structure-on-operatornam | # If $M$ and $N$ are graded modules, what is the graded structure on $\operatorname{Hom}(M,N)$?
Let $A$ be a graded ring. Note that the grading of $A$ may not be $\mathbb{N}$, for example, the grading of $A$ could be $\mathbb{Z}^n$. Actually, my question comes from the paper of Tamafumi's "On Equivariant Vector Bundles On An Almost Homogeneous Variety" (it can be downloaded freely in http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.nmj/1118795362). Proposition 3.4. And I translate this proposition to modern language:
Let $A = \mathbb{C}[\sigma^{\vee} \cap \mathbb{Z}^n]$. If $M$ is a finitely generated $\mathbb{Z}^n$-graded $A$-projective module of rank $r$, then there exists $u_1,u_2,\dots,u_r$ in $\mathbb{Z}^n$ such that \begin{eqnarray*} M \simeq A(-u_1) \oplus A(-u_2)\oplus \dots \oplus A(-u_r) \end{eqnarray*} as $\mathbb{Z}^n$-graded $A$-module. In particular, $M$ is an $A$-free module.
(page 71) He says that since $\operatorname{Hom}(\widetilde{M},\widetilde{F})$ is a $T$-linearized vector bundle, $\operatorname{Hom}(M,F)$ is a $\mathbb{Z}^n$-graded $A$-module.
My Questions: I don't know what this statement means. I know why we need to show that $\operatorname{Hom}(M,F)$ is a $\mathbb{Z}^n$-graded A-module, but I don't understand his reason. What is the "grading" of "$\operatorname{Hom}(M,F)$"? What does "$\operatorname{Hom}(M,F)$" mean? "$\operatorname{Hom}_A(M,F)$", "$\operatorname{Hom}_{\mathbb{Z}}(M,F)$" or what? I think it is just a purely algebraic question, so why do we need to use a "vector bundle"? I feel uncomfortable about this question.
-
## 2 Answers
If $A$ is a (commutative) $\newcommand\ZZ{\mathbb Z}\ZZ^n$-graded ring and $M$ and $N$ are $\mathbb Z^n$-graded $A$-modules, we can consider the $A$-module $\hom_A(M,N)$ of all $A$-linear maps. For each $g\in\ZZ^n$ we can look at the subset $\hom_A(M,N)_g$ of all $A$-linear maps $f:M\to N$ such that $f(M_h)\subseteq N_{h+g}$ for all $h\in\ZZ^n$: we call the elements of $\hom_A(M,N)_g$ the homogeneous $A$-linear maps of degree $g$.
It is easy to see that the sum $\hom_A(M,N)_{\text{homog}}=\bigoplus_{g\in\ZZ^n}\hom_A(M,N)_g$ is a direct sum, giving a $A$-submodule of $\hom_A(M,N)$.
If $M$ is a finitely generated $A$-module, then we have $\hom_A(M,N)_{\text{homog}}=\hom_A(M,N)$. In general, though, $\hom_A(M,N)_{\text{homog}}$ is a proper submodule of $\hom_A(M,N)$.
In what you are reading, it is most likely that $\hom(M,N)$ denotes $\hom_A(M,N)$.
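To make the grading concrete, here is a minimal example (not taken from the paper under discussion): take $A=\mathbb{C}[x]$ with its usual $\mathbb{Z}$-grading and $M=A(-a)$, $N=A(-b)$. An $A$-linear map $f:M\to N$ is multiplication by the single polynomial $p=f(1)$, and since the generator $1$ of $A(-a)$ sits in degree $a$, the map is homogeneous of degree $g$ exactly when $p$ is homogeneous of degree $a-b+g$. Hence $\hom_A(A(-a),A(-b))_g\cong A_{a-b+g}$, i.e. $\hom_A(A(-a),A(-b))\cong A(a-b)$ as graded modules; and because $A(-a)$ is finitely generated, every $A$-linear map here is a finite sum of homogeneous ones, as in the statement above.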
-
Thank you very much! I guess I understand your argument, and I need to go bed to sleep now. (Taiwan times). I will check some details you tell to me. I really appreciate your help. – Peter Hu Jun 6 '12 at 16:52
2
Don't dream of graded modules! – Mariano Suárez-Alvarez♦ Jun 6 '12 at 16:58
– Peter Hu Jun 7 '12 at 12:10
I don't know if this will help, but: let $A$ be a graded ring, $N$ a graded $A$-module, and $M$ an $A$-module. The submodules $Hom_A(M,N_n)$ of $Hom_A(M,N)$ define on $Hom_A(M, N)$ a graded module structure.
-
Please ignore this, it is just a comment and not an answer. I am new to this and accidentally put it here, instead of just making it as a comment. – messi Jun 6 '12 at 16:46
1
But your answer also help me to think this problem! Thank you very much! – Peter Hu Jun 6 '12 at 16:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 65, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.885431170463562, "perplexity_flag": "head"} |
http://physics.stackexchange.com/questions/44566/does-the-exact-string-theory-s-matrix-describe-all-physics-there-is | # Does the exact string theory $S$-matrix describe all physics there is?
Suppose someone manages to evaluate the string theory $S$-matrix to all orders for any and all vertex operator insertions including non-perturbative contributions from world-sheet instantons and re-sum the whole series to obtain the exact non-perturbative string theory $S$-matrix for any combination of in- and out-states. Suppose further that the analytic result is compact, tractable, and easily amenable to numerical evaluations (say, some special function). Would such a result tell us "what string theory is"? Would it be enough in principle to answer all sensible questions about physics described by string theory? If not, what else is there we should care about?
-
2
Are you excluding cosmological questions? One must be clear that there are many different string S-matrices, which are linked by non-S-matrix operations, which involve turning on moduli and such, and an infinite number of particles. So you shouldn't say "the" string S-matrix, but "the string S-matrix for a flat-version of our vacuum". – Ron Maimon Nov 19 '12 at 4:51
Of course not, I asked about "all sensible questions about physics" which certainly includes cosmology, doesn't it? If you know the S-matrix "for any and all vertex operator insertions", as I supposed, that should allow for arbitrary moduli and geometries, no? If not, please explain why not. – Udo Kamilla Nov 19 '12 at 4:55
It's not enough because the S-matrix is for a finite number of particles--- it doesn't even describe what happens when you move a charged particle from one momentum to another, since this involves an infinite number of soft photons, let alone change moduli over a region where the cosmology changes. – Ron Maimon Nov 19 '12 at 5:09
Perhaps you have a point, that's what I'm trying to find out. But you are not giving a coherent argument. Changing the momentum of just one particle is not a physically allowed process. At least you'd have to construct a physically possible process described in terms of actual observables (--> presumably S-matrix elements?!) and use that to show that there is something more to be learned... – Udo Kamilla Nov 19 '12 at 5:58
1
You can't describe electron-electron scattering, as this is infrared divergent (you find the same infrared divergences in string theory--- you fix them by adding a soft classical changing background--- this is the dirty secret). You can't describe T-duality, as this is condensation of an infinite number of zero mass particles. But you are absolutely right in your insistence that the S-matrix is complete for a given background, so I don't want to give fuel to string critics by saying "there's more than S-matrix", because it is more correct to say there isn't. – Ron Maimon Nov 20 '12 at 15:44
show 2 more comments
## 2 Answers
Well, for starters, the scattering-matrix picture of interactions does not include the dynamics of spacetime; spacetime is instead assumed as a background where everything happens.
Even string theory is just classical general relativity in a more fair description such that it can be quantized in a way that gives finite results for measurable quantities: it assumes that string modes contribute to $T_{\mu \nu}$ and as such, produce a curvature. The curvature of coherent excitations of a closed string has been proved to be equivalent to a small perturbation of the metric (see this question for details) and this gives string theorists confidence that such excitations describe gravitons.
But the picture of space-time is still classical, and a proper nonperturbative formulation of quantum spacetime is a revolution that still has not happened. Until that happens, no scattering matrix description can hope to be complete.
-
3
That's obviously not true as there are matrix operators for the graviton which perturb the metric and hence the spacetime. That's the whole point why people got excited about strings, they include quantum gravitons in a dynamic theory. In principle it should be possible to obtain any dynamic spacetime background by appropriate vertex insertions, no? If this is not the case then I would be interested in hearing a rational argument why not. – Udo Kamilla Nov 19 '12 at 5:04
No: first, because string theory is based on a number of approximations/assumptions, and second, because not every physical question can be answered assuming that processes take an infinite time and involve infinitely separated objects, as is assumed in the S-matrix approach.
The S-matrix approach is excellent for particle physics, which deals with few particles (usually two or three) in a large mostly empty volume and only considers initial and final states of free particles. The S-matrix approach fails when you start to study many-body motion in condensed phases. This is the reason why chemists have developed other theories beyond the S-matrix formalism for the study of chemical reactions, for instance.
-
3
String theory is obviously not based on any approximation whatsoever. I did not ask what an S-matrix (in QFT) is usually used for. My question is conceptual and concerns string theory. – Udo Kamilla Nov 20 '12 at 2:10
1
Evidently string theory is based on a number of approximations and this is why the string S-matrix fails to explain even the most elementary chemical reactions in condensed phases. Moreover, even in its supposed strong point (as 'candidate' for a quantum gravity theory) the string theory approach is based on a set of gross assumptions and this is why generalizations to string theory are under active research. – juanrga Nov 20 '12 at 15:55
http://www.physicsforums.com/showthread.php?t=614744 | Physics Forums
## Thermal expansion olympiad problem
1. The problem statement, all variables and given/known data
I only know volumetric expansion and linear expansion which require the coefficient constants that are not given in the problem. I also read somewhere that the mass remains constant during thermal expansion or is this false?
The problem is attached pls take a look at it.
2. Relevant equations
ΔV = β V_i ΔT
3. The attempt at a solution
I tried finding the total energy but could not do anything after that.
Attached Thumbnails
Your question has nothing to do with thermal expansion.
Mentor This problem has nothing to do with thermal expansion. Thermal expansion, as you've noted, results in an increase in volume, not mass. It's the tendency of things to get bigger with increasing temperature. So if not thermal expansion, then what is this question about? My hint to you would be: think "mass-energy equivalence" and Einstein's famous equation. You can easily compute from the given data how much light energy is absorbed by the surface of the Earth over the course of a day. Then you just have to convert that into an equivalent mass value.
## Thermal expansion olympiad problem
Wow, it uses E=mc²? I didn't think of it like that! Thanks for the help!!
I actually haven't learnt how to use that equation, but when I use it by dividing E/c² = m, I got the wrong answer (E); the answer given was (D)
Quote by zabachi I actually haven't learnt how to use that equation, but when I use it by dividing E/c² = m, I got the wrong answer (E); the answer given was (D)
show the details of your calculation.
Mentor Pay attention, in particular, to the geometry.
Surface area of earth = 4πr² ≈ 5.1×10^14 m². Then 5.1×10^14 × 1500 W/m² = 7.65×10^17 W (= J/s), and 7.65×10^17 × 60 × 60 × 24 ≈ 6.61×10^22 J = total energy in one day. E = mc², thus m = E/c² = $\frac{6.61\times10^{22}}{(3.0\times10^{8})^2}$ ≈ 7.34×10^5 kg
Answers D and E differ by the same factor as the area of a great circle to the total area of a sphere. It is my opinion that answer D is the correct answer.
Why do we take it to be a circle instead of a sphere?
Because the incident light delivers power equal to the product of the intensity to the perpendicular area. What is the shape perpendicular to Sun's rays (assuming they're parallel)?
Mentor A way to visualize what Dickfore said. Say you have a whole "wall" of parallel rays coming from the sun. What is the effective area of this "beam" that is blocked out by Earth? In other words, what's the shape of the Earth's shadow? Code: ```-------------------------->
-------------------------->
-------------------------->
-------------------------->
------------>O
-------------------------->
-------------------------->
-------------------------->
-------------------------->``` It's a poor diagram, but the idea is that the "beam" of photons that is intercepted and absorbed is a circular cylinder whose radius is equal to the radius of the Earth. Therefore the cross sectional area of this cylinder is the area of the circle: πR2, where R is the radius of the Earth. So the incident power absorbed has to be equal to the flux (in W/m2) multiplied by this perpendicular cross-sectional area over which the light is blocked out. Now, you might argue that the incident rays impact over a hemispherical area of 2πR2 on the surface of the Earth. That's true, but the rays are not perpendicular to the surface of the Earth at every point. The angle of incidence may be perpendicular at the equator, but closer to the poles it's a very oblique angle. This means that the flux received is not 1.5 kW/m2 everywhere on the surface of the Earth. There are some places at higher latitudes where the 1.5 kW gets spread out over an area larger than 1 m2 because of this oblique incidence. I think that if you worked out the power received over the entire hemisphere, (which might be mathematically tricky at your level and probably requires integration), you'd find the same answer as we gave using the above argument about just what portion of the incident light is ultimately blocked out or "removed" from the beam. In other words, effectively, the power absorbed from the incident light by this hemisphere, which is not perpendicular to the incident light at every point, is the same as the power that would have been absorbed by a flat circular disc of the same radius that WAS perpendicular to the incident light at every point, because from the point of view of the "stream" of photons, they have the same cross-section.
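Putting numbers to this picture, a minimal sketch (the flux is the value quoted in the problem; the radius and speed of light are standard values):

```python
import math

flux = 1500.0    # W/m^2, solar flux given in the problem
R = 6.371e6      # m, Earth radius
c = 2.998e8      # m/s

area = math.pi * R**2            # cross-section the Earth presents to the (parallel) rays
power = flux * area              # W absorbed
energy_per_day = power * 86400   # J in one day
mass = energy_per_day / c**2     # kg, via E = mc^2

print(f"intercepted power ~ {power:.2e} W")
print(f"energy in one day ~ {energy_per_day:.2e} J")
print(f"equivalent mass   ~ {mass:.2e} kg")   # about a quarter of the 4*pi*R^2 figure above
```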
http://mathematica.stackexchange.com/questions/tagged/list-manipulation+programming | # Tagged Questions
0answers
33 views
### Generating partitions of a set with a specified size of the parts [duplicate]
I tried the following (inspired by the answer here) myList = {a, b, c}; Needs["Combinatorica`"]; SetPartitions[myList] and I got this answer, ...
3answers
221 views
### Accessing list elements by name
First, a bit of a long introduction to my problem: I only have a few weeks of Mathematica experience. I am creating a mathematica application that calculates some material properties of steel based ...
3answers
376 views
### What is Mathematica's equivalent to MATLAB's filter function?
The MATLAB code filter(0.5,[1, -0.5], [1:10]) is equivalent to Rest@FoldList[(#1 + #2)/2. &, 0, Range[10]] I don't ...
2answers
123 views
### Select rows from table by keys
I am looking for a way to iteratively select all sublists with the same ID (my 1st column, residual columns are AbsoluteTime entries). First, I obtained the list of ...
0answers
62 views
### Keeping the length of vectors fixed
b and d are two arrays that are given. I create aa and ...
1answer
198 views
### Plotting Energies vs. m for all values of R with the colors of the levels indexed by R [closed]
I have two lists like this: ...
1answer
61 views
### Collecting roots of different equations and create a list
I solve two equations and have two solutions one by each equation. I want to create list of these roots. Could anyone please help me? Appreciate it. m02R150 = FindRoot[P1 == 0, {E1, 0.07, 0.1}] ...
1answer
98 views
### Where do those nulls come from? [closed]
I have seen discussions of unwanted nulls in the output in the context of building lists with conditions on the elements, but that is not involved here. I would like to know where the nulls come from ...
4answers
271 views
### Best way to modify values in a list of rules?
Recently I had to solve a problem similar to this: Let's say I have a list of list of rules ...
4answers
290 views
### Passing large list by reference
I have the following problem: I would like to control evaluation of a variable that points to a list. For example, frequently in the code I have functions of the form that are supposed to work on ...
1answer
133 views
### Find Roots in Do loop
Task: Finding roots in loop t = List[1, 2, 3, 4, 5] fx[x_] := a*x^2 - 5 List[Do[Print[FindRoot[fx[k] == 1, {a, 1}]], {k, 0, 5}]] Output: Currently the output is ...
3answers
225 views
### Delete elements from a list really fast
I have this bit of code that works, but it's very slow when there are 600k elements in the list: ...
8answers
378 views
### Any built-in function to generate successive sublists from a list?
Given lst = {a, b, c, d} I'd like to generate {{a}, {a, b}, {a, b, c}, {a, b, c, d}} but using built-in functions only, ...
3answers
211 views
### How to create functions of arbitrary number of variables?
In the following code what would be the simplest way to generalize it to say some $N_f$ number of $z$ instead of just $z_1$ and $z_2$? ...
3answers
198 views
### Filtering elements from a list
Suppose that I have a list {{{2, 1}, {4, 3}, {2, 4}}, {{2, 1}, {4, 3}, {3, 1}}, {{2, 1}, {2, 4}, {3, 1}}, {{4, 3}, {2, 4}, {1, 2}}} I want to make a new list ...
1answer
113 views
### Computing closest set of points to each point in large set — running out of memory (arrays unpacking)
I have a large set of points in 3D and I'm trying to identify all the points that lie within a certain distance of each point. Then using this data store the vector between the pairs of points. I ...
1answer
192 views
### speed up iteration with conditionals plus optimize memory usage
Given list1 and list2 whose elements are vectors of a certain (fixed) dimension, I am interested in the behaviour of a scalar ...
4answers
173 views
### clean functional way to get first n rows that yield maximum rank
I have a matrix A and want the matrix consisting of the first n rows of A, having the same rank as A, where n is minimal. More generally, I want the shortest "start piece" of a list, such that some ...
2answers
406 views
### Optimize inner loops
I'm Mathematica newbie so please be gentle :) I have this, heavily non-optimized part of code, which I would like to speed up. I have put all matrices to be RandomReal, but in my code they take ...
1answer
123 views
### Why is MapIndexed better than mapping over a range?
Background I'm working on an application in which I need to create and control two sets of locators. I know from reading the Mathematica documentation and certain posts on Mathematica.SE that this ...
3answers
203 views
### Restricted accumulation of values
Please consider the following list data. I was trying to accumulate data until the result turns positive the first time and ...
3answers
278 views
### Generate a Combination of letters by a number
I'm trying to write a function f. example: ...
2answers
190 views
### How can I use Max[] in a function that is passed a list not find the max of the list
For most functions in Mathematica, passing them a list will call the function on each element of the list. For example: ...
3answers
321 views
### How can I regroup elements in a list into a tree based on their values?
I have a list of elements in an outline, here is an example that is only 3 levels deep: ...
11answers
639 views
### Generating an ordered list of pairs of elements from ordered lists
I have a pair of ordered lists. I want to generate a new ordered list (using the same ordering) of length n by applying a binary operator to pairs of elements, one from each list, along with the index ...
2answers
270 views
### find subsequences of constant increase
A list like l = {0, 1, 2, 3, 4, 5, 7, 9, 12, 13, 18, 19} may have subsequences of constant increase, $a_{n+1} = a_n + k$. For example: ...
3answers
267 views
### Generating Linear Extensions of a Partial Order
Given a set $S$ and a partial order $\prec$ over $S$, I'm looking for a way to "efficiently" generate a list of linear extensions of $\prec$. Suppose the partial order is given by a ...
6answers
811 views
### Mathematica Destructuring
Context I'm writing a function that look something like: ...
1answer
223 views
### How to monitor the progress of Map?
I have a function doSomethingComplicated[...] that takes about 10s on average to evaluate. My list, listOfArgs has about 10000 ...
3answers
2k views
### K-means clustering
In MATLAB, there is a command kmeans() that divides an array into $k$ clusters and calculates the centroid of each cluster. Is there any command in Mathematica to ...
4answers
365 views
### Implementing a function which generalizes the merging step in merge sort
One of the key steps in merge sort is the merging step. Given two sorted lists sorted1={2,6,10,13,16,17,19}; sorted2={1,3,4,5,7,8,9,11,12,14,15,18,20}; of ...
1answer
273 views
### Modifying a List in a function in place
An example will be most specific: func[list_, column_] := list[[All, column]] = Map[#*2 &, list[[All, column]]]; This throws errors. I want to avoid doing ...
1answer
250 views
### Path queries for tree-structured data
Can anyone suggest documentation or tutorials for developing path queries and indices for (XML-like) tree-structured data? Suppose data is organized hierarchically in key->value pairs, eg: ...
1answer
220 views
### Are there advantages to using additional arguments of common functions rather than alternative ways of calculating with lists?
(Apologies for the long question title.) One of the interesting, if sometimes confusing, things about Mathematica is that there is always more than one way to do things. Even intermediate users can ...
7answers
279 views
### How to efficiently Append a result of an operation on each element of a list to itself
I'm looking for the best function to apply the product of the last two elements of sublist elements to each element: Example: ...
5answers
309 views
### How to distinguish between lists and values?
I have a (hopefully small) problem with some numerical integration algorithm, more specifically I want to integrate the imaginary part of a complex valued function, e.g. ...
5answers
957 views
### Finding all elements within a certain range in a sorted list
Suppose we have a sorted list of values. Let's use list = Sort@RandomReal[1, 1000000]; for this example. I need a fast function ...
6answers
555 views
### Splitting up delimited data in lists
One task that I frequently find myself doing in Mathematica is splitting lists into lists of sublists, using specific elements to define the break-points. This is particularly useful with imported ...
9answers
411 views
### Generating a matrix using sublists A and B n times
I want to write a function that generates a square matrix from sublists. My sublists are a = Range[0, x, 0.5]; b = Range[0.25, x + 0.25, 0.5]; Suppose x=2, then I ... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8943488597869873, "perplexity_flag": "middle"} |
http://mathhelpforum.com/number-theory/196427-prove-n-p-k-even.html | # Thread:
1. ## prove that φ(n/p^K) is even
So in this problem I know that gcd(a,n)=1, and the question I need to answer is: let p be any prime factor of n and let k be the number of times that p appears in the prime factorization of n. Prove that φ(n/p^k) is even.
So far I have:
If p is prime, then φ(p^k) = p^(k-1)(p-1). So, if p is odd, then p-1 is even, hence
φ(p^k) is even.
There is an odd prime q, such that q^k appears in the prime factorization of n.
Then (q - 1) | φ(q^k) | φ(n). Hence φ(n) is even.
How can I finish answering the question?
2. ## Re: Prove that φ(n/p^K) is even
What is the full question? You need to tell us at least what $n$ is. It can't just be any positive integer; for example, the statement is false if $n=p^k.$
3. ## Re: prove that φ(n/p^K) is even
Well, it also says that we should let n be an integer that has at least two distinct odd prime factors, and that there are no primitive roots of n.
4. ## Re: Prove that φ(n/p^K) is even
Okay. Hint: For any positive integer $m,$ if $\varphi(m)$ is odd, then $m=1$ or $m=2.$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9000692963600159, "perplexity_flag": "middle"} |
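A quick empirical check of the claim with SymPy (a minimal sketch; it tests every prime power p^k exactly dividing each n that has at least two distinct odd prime factors):

```python
from sympy import totient, factorint

def check(n):
    factors = factorint(n)                                   # {p: k}
    if sum(1 for p in factors if p % 2 == 1) < 2:
        return True                                          # hypothesis not satisfied; nothing to check
    return all(totient(n // p**k) % 2 == 0 for p, k in factors.items())

assert all(check(n) for n in range(3, 10000))
print("phi(n / p^k) is even in every tested case below 10000.")
```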
http://mathoverflow.net/questions/55600/eigenvalues-of-a-parametrized-family-of-linear-functions | ## Eigenvalues of a Parametrized Family of Linear Functions
Suppose that we have a family of linear functions $L(\alpha) : \mathbb{R}^n \rightarrow \mathbb{R}^n$, where $\alpha$ is a positive real number.
For each $\alpha$, it is given that $L(\alpha)$ is a symmetric matrix, and so it has a basis of eigenvectors, and $n$ eigenvalues (not necessarily distinct). We define a function $f(\alpha)$ to be the smallest eigenvalue of $L(\alpha)$.
It is also given that the elements of $L(\alpha)$ are smooth functions of $\alpha$ for each $\alpha$. Therefore, as the eigenvalues are the roots of a polynomial whose coefficients are also smooth functions of $\alpha$, $f(\alpha)$ is a smooth function of $\alpha$ for all but a few values of $\alpha$ (this follows from the implicit function theorem).
The problem I am having is to try to find the global minimum of $f(\alpha)$. Usually what one would do is to take its derivative with respect to alpha, and set that equal to 0, but that is actually not possible in the present case because there is no way to get an explicit formula for $f(\alpha)$.
A possible idea that I had was to get some kind of sequence which converges to the smallest eigenvalue of $L(\alpha)$, and hope that this sequence converges uniformly in $\alpha$, but I have not yet found such a thing.
-
1
I don't find a question here. – Denis Serre Feb 16 2011 at 9:27
I believe he wants a method to minimize $f(\alpha)$. Is it clear that this minimum exists? – András Bátkai Feb 16 2011 at 9:32
over a compact set of $\alpha$'s. Otherwise certainly not: $\alpha \mapsto L (\alpha) = \alpha$ viewed as a $1\times1$ matrix does not have a minimum... – Stefan Waldmann Feb 16 2011 at 9:51
Educated guess: The search is for the global minimum of the absolute value of the smallest eigenvalue. – Tim van Beek Feb 16 2011 at 10:13
I did not say this in the above, but it is implicit that I am assuming the existence of a global minimum. Just like you can find the global minimum of a function f: R -> R in principle by solving f'=0 should it exist, is there a practical way to in principle find the global minimum of this function $f(\alpha)$ given that it exists? – Eric Haengel Feb 16 2011 at 12:44
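One practical route, as a minimal sketch (the family $L(\alpha)$ below is made up purely for illustration): evaluate $f(\alpha)$ numerically as the smallest eigenvalue, scan a compact range of $\alpha$ on a grid, and polish the best grid point with a bounded local optimizer.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def L(alpha):
    # Hypothetical smooth symmetric family, standing in for the real one.
    return np.array([[2 + np.sin(alpha), alpha],
                     [alpha, 1 + alpha**2]])

def f(alpha):
    return np.linalg.eigvalsh(L(alpha))[0]   # eigvalsh returns eigenvalues in ascending order

alphas = np.linspace(0.01, 10.0, 2000)       # compact search interval of positive alpha
best = alphas[np.argmin([f(a) for a in alphas])]
res = minimize_scalar(f, bounds=(max(best - 0.1, 1e-3), best + 0.1), method='bounded')
print(f"approximate minimizer alpha ~ {res.x:.4f}, smallest eigenvalue ~ {res.fun:.4f}")
```

This only locates the global minimum up to the resolution of the grid, which matches the difficulty raised in the question: without more structure on $L(\alpha)$ there is no closed-form stationarity condition to solve.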
http://mathhelpforum.com/pre-calculus/25831-few-pre-calc-guidance-needed.html | # Thread:
1. ## A few pre-calc guidance needed
Hi again. I have a few problems where I feel like I'm right at the finish line but I just can't cross it... They are the following:
FIRSTLY
Let a and b be real numbers.
f(x)=a sin(x)+ b ∛x+4
If f(log(log(10,3))) = 5,
what is the value of f(log(log3))?
For this, what I did was evaluate the log of 10 base 3, and then evaluate that... Anyway, I evaluated everything and then got the sine and cube root of it, or x. I got the following:
.0056a+.685b=1 or f(.3214)=5
What's next? I can evaluate log log 3 to get -.3214, so does that mean the answer is just -5? Thanks.
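For what it's worth, a minimal numerical sketch of problem 1 (assuming the intended function is f(x) = a·sin(x) + b·∛x + 4 and that "log" means log base 10):

```python
import numpy as np

t = np.log10(np.log(10) / np.log(3))   # log(log_3 10); note log_3 10 = 1 / log_10 3
s = np.log10(np.log10(3))              # log(log 3)
print(np.isclose(s, -t))               # True: the two arguments are negatives of each other

f = lambda x, a, b: a * np.sin(x) + b * np.cbrt(x) + 4
for a, b in [(1.3, -0.7), (2.0, 5.0), (-4.0, 0.25)]:
    print(f(t, a, b) + f(-t, a, b))    # always 8.0, since a*sin(x) + b*cbrt(x) is an odd function
```

So under that reading, f(log(log(10,3))) = 5 forces f(log(log 3)) = 8 - 5 = 3 rather than -5.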
SECOND
The graphs of (x/4)+(y/3)=1 and ((x^2)/16)+((y^2)/9)=1 intersect at points A and B. Find the coordinates of a third point C on the graph of the second curve such that the area of triangle ABC is equal to 3
So I graphed them (isolated y for the first equation) and I found a circle being intersected in two points. I found A to be (0,3) and B to be (4,0). Now, point C obviously has to be in the northeastern part of the ellipse within the range of the line. I have no idea how to find it and get an area of 3 though. Maybe use the midpoint? (2,1.5)?
Thanks
This is basically it for now. There are two other ones that I've solved but I'm not too sure about.
2. Originally Posted by supercheddarcheese
SECOND
The graphs of (x/4)+(y/3)=1 and ((x^2)/16)+((y^2)/9)=1 intersect at points A and B. Find the coordinates of a third point C on the graph of the second curve such that the area of triangle ABC is equal to 3
... I found A to be (0,3) and B to be (4,0). ........Correct
Hello,
I've attached a sketch of the ellipse and the line.
Consider the distance between (0, 3) and (4, 0) to be the base of the triangle you are looking for. The base has a length of 5. Thus you need the height of the triangle so that the area is 3:
$A = \frac12 \cdot 5 \cdot h = 3~\iff~ h = \frac65$
Therefore you need a parallel to the given line with the distance of $\frac65$. Calculate the coordinates of a point on the y-axis which has a distance of $\frac65$ from the given line:
C(0, a)
Use the distance formula:
$\pm \frac65=\frac{3 \cdot 0 + 4 \cdot a -12}{\sqrt{3^2+4^2}} =\frac{4a-12}{5}~\implies~a = \frac92~\vee~a=\frac32$
Only $a = \frac32$ is a valid value (otherwise the line will not intersect with the ellipse)
The parallel to the given line has the equation: $y = -\frac34 \cdot x + \frac32$
To get the coordinates of the points R and V calculate the intersection points of the ellipse and the parallel:
$\frac{x^2}{16}+\frac{\left( -\frac34 \cdot x + \frac32 \right)^2}{9}=1~\implies~x = 1-\sqrt{7}~\vee~x=1+\sqrt{7}$
Plug in these x-values into the equation of the parallel to get the y-coordinates of the points R and V.
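A quick SymPy verification of the construction (a minimal sketch):

```python
import sympy as sp

Ax, Ay = 4, 0
Bx, By = 0, 3
on_line = lambda t: -sp.Rational(3, 4) * t + sp.Rational(3, 2)      # the parallel found above
for Cx in (1 - sp.sqrt(7), 1 + sp.sqrt(7)):
    Cy = on_line(Cx)
    assert sp.simplify(Cx**2 / 16 + Cy**2 / 9 - 1) == 0             # C lies on the ellipse
    area = sp.Rational(1, 2) * sp.Abs(Ax*(By - Cy) + Bx*(Cy - Ay) + Cx*(Ay - By))
    assert sp.simplify(area) == 3                                   # triangle ABC has area 3
print("Both intersection points give a triangle of area 3.")
```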
Attached Thumbnails | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9327799081802368, "perplexity_flag": "head"} |
http://mathhelpforum.com/trigonometry/4984-trig-help.html | Thread:
1. Trig HELP
y=2sin(4x+π)+3
I need the period, the domain, the range...and i have no idea to get the period...ahh!!!!
2. Originally Posted by Luckyjoshua
y=2sin(4x+π)+3
I need the period, the domain, the range...and i have no idea to get the period...ahh!!!!
The period is $2\pi$ divided by the coefficient of $x$, which in this case is 4. Thus,
$T=\frac{2\pi}{4}=\frac{\pi}{2}$.
The domain is all real numbers, because you can calculate sine for any value.
The range is trickier. You need to find the amplitude first, which is 2. Then the highest value the function takes is 3+2=5 and the smallest is 3-2=1. Thus the range is,
$1\leq y\leq 5$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8779987692832947, "perplexity_flag": "middle"} |
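A quick numerical sanity check of the period and range (a minimal sketch):

```python
import numpy as np

f = lambda x: 2 * np.sin(4 * x + np.pi) + 3
x = np.linspace(0, 2 * np.pi, 100001)
print("range:", round(f(x).min(), 6), "to", round(f(x).max(), 6))   # ~1 to ~5
print("period pi/2:", np.allclose(f(x), f(x + np.pi / 2)))          # True
```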
http://cs.stackexchange.com/questions/tagged/stack | # Tagged Questions
The stack tag has no wiki summary.
0 answers, 52 views
### procedural representation for the stack? (LIFO structure)
I'm trying to do exercise 2.12 of the book Essentials of Programming Languages, 3rd edition. They ask me to do a procedural representation for a stack, like they did in the example on page 40 with ...
1 answer, 141 views
### Efficient algorithm for a modified stack to pop the smallest element
I was practicing the following problem: There are a total of $N$ operations. At each operation, you can either add an element to the top or remove several elements as described below. ...
1 answer, 49 views
### A puzzle in Permutation
There are two stacks A and B. A: a, b, c, d ('a' is on top and 'd' is at the bottom of the stack). B: (empty). There are two rules. ...
2 answers, 70 views
### Using Queues for a Stack and Stacks for a Queue
I was asked a question on how to use a pair of Queues to create a Stack and how to use a pair of Stacks to create a Queue. Any thoughts on how I would do this? Right now I don't even know where to ...
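For the first half of that question, one standard approach is sketched below (the class and method names are made up, and this is only one of several possible designs): push onto one queue, and to pop, rotate all but the most recently added element into a second queue.

```python
from collections import deque

class QueueBackedStack:
    """Stack implemented with two FIFO queues (illustrative sketch only)."""
    def __init__(self):
        self.main, self.spare = deque(), deque()

    def push(self, item):
        self.main.append(item)

    def pop(self):
        # move everything except the most recent item to the spare queue
        while len(self.main) > 1:
            self.spare.append(self.main.popleft())
        top = self.main.popleft()
        self.main, self.spare = self.spare, self.main
        return top

s = QueueBackedStack()
for v in [1, 2, 3]:
    s.push(v)
print(s.pop(), s.pop(), s.pop())   # 3 2 1
```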
1 answer, 176 views
### How to detect stack order?
We take the sequence of integers from $1$ to $n$, and we push them onto a stack one by one in order. Between each push, we can choose to pop any number of items from the stack (from 0 to the current ...
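Assuming the question (cut off above) asks whether a given output sequence can be produced by such an interleaving of pushes and pops, the usual test is a greedy simulation; the sketch and its names below are illustrative, not taken from the question:

```python
def is_stack_permutation(output):
    """Can pushing 1..n in order, with pops allowed between pushes, produce `output`?"""
    n = len(output)
    stack, next_to_push = [], 1
    for wanted in output:
        # keep pushing until the wanted value is on top (or we run out of values)
        while next_to_push <= n and (not stack or stack[-1] != wanted):
            stack.append(next_to_push)
            next_to_push += 1
        if not stack or stack[-1] != wanted:
            return False
        stack.pop()
    return True

print(is_stack_permutation([2, 1, 3]))   # True
print(is_stack_permutation([3, 1, 2]))   # False
```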
0 answers, 430 views
### Is there a 'string stack' data structure that supports these string operations?
I'm looking for a data structure that stores a set of strings over a character set $\Sigma$, capable of performing the following operations. We denote $\mathcal{D}(S)$ as the data structure storing ...
http://mathhelpforum.com/calculus/64840-help-optimization-problem.html | # Thread:
1. ## help!!!- optimization problem
I don't know how to start on this problem. It would be nice if someone could help me.
A water line runs east-west. A town wants to connect two new housing developments to the line by running lines from a single point on the existing line to the developments. One is 3 miles south of the existing line; the other is 4 miles south of the existing line and 5 miles east of the first development. Find the place on the existing line, relative to the 2 developments, to make the connection that minimizes the total length of the new line.
2. Originally Posted by 52090
I don't know how to start on this problem. It would be nice if someone could help me.
A water line runs east-west. A town wants to connect two new housing developments to the line by running lines from a single point on the existing line to the developments. One is 3 miles south of the existing line; the other is 4 miles south of the existing line and 5 miles east of the first development. Find the place on the existing line, relative to the 2 developments, to make the connection that minimizes the total length of the new line.
By Pythagoras' theorem, if $x$ is the distance along the existing line from the point closest to the first development, the length of the line connecting the town 3 miles south of the existing line is $\sqrt{x^2 + 3^2}$ and the length of the line connecting the town 4 miles south of the existing line is $\sqrt{4^2 + (5 - x)^2}$. You want to minimize the sum of these two lengths.
hint: once you set the derivative of the sum to zero, the square roots can be cleared by cross-multiplying and squaring. (Be careful not to simply drop them from the start: minimizing the sum of the squares gives a different answer. Reflecting one development across the line gives a slick non-calculus solution.)
3. Hello, 52090!
If we must use Calculus, the approach and set-up are quite ugly.
A water line runs east-west. A town wants to connect two new housing developments
to the line by running lines from a single point on the existing line to the developments.
One is 3 miles north of the existing line; the other is 4 miles north of the existing line
and 5 miles east of the first development.
Find the place on the existing line, relative to the two developments,
to make the connection that minimizes the total length of the new line.
Code:
* B
* |
A * * |
| * * | 4
3 | * * |
| * * |
* - - - * - - - - - *
C x P 5-x D
Development $A$ is 3 miles from the line: . $AC = 3$
Development $B$ is 4 miles from the line: . $BD = 4$
$CD = 5$
Let $P$ be a point on $CD$.
Let $x \,=\, CP$, then $5-x \,=\, PD$
We want to minimize the distance: $AP + PB$
In right triangle $ACP\!:\;AP \:=\:\sqrt{x^2+3^2}$
In right triangle $BDP\!:\;PB \:=\:\sqrt{(5-x)^2 + 4^2}$
The total distance is: . $D(x) \;=\;\left(x^2+9\right)^{\frac{1}{2}} + \left(x^2-10x+41\right)^{\frac{1}{2}}$
. . and that is the function we must minimize.
I'll wait in the car . . .
.
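To finish the calculation numerically (a small sketch, not part of the original thread), one can simply minimise $D(x)$ on $[0,5]$; the minimiser turns out to be $x = 15/7$ with total length $\sqrt{74}$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

D = lambda x: np.sqrt(x**2 + 9) + np.sqrt((5 - x)**2 + 16)   # total length of new line
res = minimize_scalar(D, bounds=(0, 5), method='bounded')

print(res.x)     # ~2.142857, i.e. 15/7 miles from C towards D
print(res.fun)   # ~8.602325, i.e. sqrt(74) miles of new line in total
```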
http://physics.stackexchange.com/questions/39161/what-happens-to-an-embedded-magnetic-field-when-a-black-hole-is-formed-from-rota | # What happens to an embedded magnetic field when a black hole is formed from rotating charged dust?
Black holes have no hair, so they are uniquely specified by a mass, charge and angular momentum. Imagine a cloud of charged rotating dust. There will be a magnetic field associated with the current of charge that is rotating. As this cloud collapses to form a black hole, how is the magnetic field excluded from the region of the black hole?
These three questions are similar but I think the answers will be different for each one:
What happens to an embedded magnetic field when a black hole is formed from rotating charged dust? It seems to me a rotating charged black hole must have a dipole magnetic field. But the strength of the dipole field seems like an extra parameter that black holes are forbidden to have by the no-hair theorem.
If a magnetic monopole falls into a Schwarzschild black hole, what happens to the magnetic field? Here there would be only radial magnetic field lines leaving from the event horizon to infinity. So if magnetic charge is counted as charge, this should be no problem. But if the black hole were rotating, wouldn't that produce an electric dipole field?
When a neutral star with a magnetic field collapses to form a black hole, what happens to the magnetic field? Here there is no charge so how can there be a magnetic field associated with a black hole? That would definitely violate the no-hair theorem.
-
this is a very, very good question. I am almost tempted to believe that implying a zero magnetic field for a rotating charged black hole is evidence of internal inconsistency in classical GR, but i'll refrain of commenting on this since it is just a baseless hunch – lurscher Oct 5 '12 at 21:45
@Qmechanic, the two related questions did not discuss magnetic fields as far as I can tell... – FrankH Oct 6 '12 at 0:45
## 1 Answer
The solution for a black hole with non-zero spin and non-zero charge is completed with the vector potential associated with the electromagnetic field. If both spin and charge are nonzero, the vector potential will have $A_{i}$ that depend on the spatial coordinates, and thus, the black hole will have a nonzero magnetic field.
In spheroidal coordinates, the vector potential comes out to:
$\begin{equation} A_{a} = -\frac{er}{r^{2}+a^{2}\cos^{2}\theta}dt_{a} + \frac{era\sin^{2}\theta}{r^{2}+a^{2}\cos^{2}\theta}d\phi_{a} \end{equation}$
Taking the curl of the spatial part (thus assuming that we're going to calculate the magnetic field observed by someone who is not moving relative to $r,\theta,\phi$, for instance), we find the relevant two components of the Maxwell tensor:
$\begin{align} F_{r\phi}&=\frac{(a^{2}\cos^{2}\theta - r^{2})ea\sin^{2}\theta}{(r^{2}+a^{2}\cos^{2}\theta)^{2}}\\ F_{\theta\phi}&=\frac{(r^{2}+a^{2})era\sin(2\theta)}{(r^{2}+a^{2}\cos^{2}\theta)^{2}} \end{align}$
We know that $F_{r\phi}$ is proportional to $B_{\theta}$, while $F_{\theta\phi}$ is proportional to $B_{r}$ (the exact factors require calculating the determinant of the metric tensor, and I don't think calculating these terms exactly is the point of this exercise). The angular dependence of this field, though, should make it clear that the field is different from that of a true dipole. The question about the field lines crossing the horizon is a trickier one, since these coordinates are singular on the horizon, and that calculation would have to be carried out in a coordinate system that is non-singular there. But I would expect there to be a normal component of the magnetic field to the horizon, since there is nothing in these $F_{ab}$ terms that is singular on the horizon.
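If it helps, the two components quoted above can be checked symbolically from the potential; a minimal sympy sketch (not part of the original answer, with symbols named to match it):

```python
import sympy as sp

r, theta, a, e = sp.symbols('r theta a e', positive=True)
Sigma = r**2 + a**2 * sp.cos(theta)**2

A_phi = e * r * a * sp.sin(theta)**2 / Sigma      # the d(phi) part of the potential above

F_r_phi     = sp.diff(A_phi, r)                   # F_{r phi}     = d_r A_phi
F_theta_phi = sp.diff(A_phi, theta)               # F_{theta phi} = d_theta A_phi

target_r     = (a**2 * sp.cos(theta)**2 - r**2) * e * a * sp.sin(theta)**2 / Sigma**2
target_theta = (r**2 + a**2) * e * r * a * sp.sin(2 * theta) / Sigma**2

print(sp.simplify(F_r_phi - target_r))                            # 0
print(sp.simplify(sp.expand_trig(F_theta_phi - target_theta)))    # 0
```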
-
This doesn't answer the question, because the astrophysical magnetic field generating objects that collapse are net neutral. – Ron Maimon Oct 5 '12 at 23:41
@RonMaimon: "charged rotating dust" sounds pretty not net neutral to me. – Jerry Schirmer Oct 6 '12 at 0:29
I really did want to know about real astrophysical bodies with magnetic fields. Should I edit this question to include that or ask another question? - I decided to just ask a second question rather than invalidate this answer... – FrankH Oct 6 '12 at 0:43
@FrankH: That's probably better, but even the charged rotating dust will not have a Kerr-Newman compatible magnetic field and will expel its field in a blast of EM radiation, like other bodies with a magnetic field. – Ron Maimon Oct 6 '12 at 1:59
@JerrySchirmer - I will accept your answer if you can tell me what the magnetic field looks like? Is it a dipole magnetic field? Do the magnetic field lines intersect the event horizon? Wouldn't the dipole moment be an extra parameter for the black hole that would violate the no-hair theorem? – FrankH Oct 6 '12 at 4:58
http://physics.stackexchange.com/questions/28546/plotting-the-cmb-power-spectrum-why-c-ell-ell-ell1-rather-than-only-c | # Plotting the CMB power spectrum - Why $C_\ell \ell (\ell+1)$ rather than only $C_\ell$?
I can't find any convincing answer to the following question:
Why do we always (or often) plot the CMB power spectrum in this way?
I mean the y axis is $C_\ell \ell (\ell+1)$ and not only $C_\ell$. Why?
I know it's because of the scale invariance, but why do we absolutely want to show the flat line at low $\ell$? And I do not understand why the power spectrum is flat on this scale.
Thanks for any answer. :)
-
## 1 Answer
I haven't got a great answer for this, but since no-one else has answered ...
As you mention, for the Sachs-Wolfe effect the $C_{\ell}$ values drop off approximately as $1/[\ell(\ell + 1)]$, so plotting $C_{\ell}\ell(\ell + 1)$ on the $y$ axis gives an approximately horizontal line and this makes it easy to see deviations from Sachs-Wolfe behaviour. However I suspect the main reason the graphs are drawn this way is that it nicely highlights the doppler peaks. If you just plotted $C_{\ell}$ you'd need to use a log axis and that would make all the peaks look smaller.
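A toy illustration of the plotting convention (a sketch using a made-up Sachs-Wolfe-like spectrum rather than real CMB data): a spectrum falling as $1/[\ell(\ell+1)]$ looks like a steep decay when $C_\ell$ is plotted, but becomes a flat line when $\ell(\ell+1)C_\ell$ is plotted.

```python
import numpy as np
import matplotlib.pyplot as plt

ell = np.arange(2, 1500)
C = 1.0 / (ell * (ell + 1.0))            # toy Sachs-Wolfe-like spectrum, arbitrary units

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.loglog(ell, C)
ax1.set(xlabel=r'$\ell$', ylabel=r'$C_\ell$')                 # steep power-law decay
ax2.semilogx(ell, ell * (ell + 1) * C)
ax2.set(xlabel=r'$\ell$', ylabel=r'$\ell(\ell+1)C_\ell$')     # flat line by construction
plt.tight_layout()
plt.show()
```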
-
Thanks for your answer. I think you're quite close to the aim of plotting the CMB PS in this way. – Bagheera May 19 '12 at 8:40
Well, because we're really plotting the anisotropy i.e. variations. So they're the Fourier modes not of the temperature $T$ itself but its Laplacian over the sphere, $\Delta T$, and the Laplacian has a simple well-defined effect on the component $C_l$ which is multiplied by a spherical harmonic function $Y_{lm}$: it just multiplies the spherical harmonic function by $-l(l+1)$. That's why $l(l+1)$ may be identified with the (minus) Laplacian. It's more natural to insert the Laplacian rather than not to really measure "variations" and to get rid of the huge constant term prop. to $Y_{00}$, too. – Luboš Motl May 19 '12 at 8:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9537245035171509, "perplexity_flag": "head"} |
http://mathoverflow.net/revisions/83857/list | ## Return to Answer
1 [made Community Wiki]
Probably, Johannes Ebert is right: (almost) all natural mathematical objects may be characterized by a universal property. The question is now what exactly we understand by the claim that the universal property is hiding within the concrete, habitual definition.
More concretely, let us consider the usual definition of a factor structure, say a factor group (of $G$ modulo a normal subgroup $H$). There is also a universal one: a factor group is (up to isomorphism) an epimorphism (i.e. a surjective group homomorphism) $G\to G'$. Is the second definition hiding within the first? I really don't know!
Another example: given two $R$-modules, $M$ a right module and $N$ a left module, one may define the tensor product as a quotient of the free abelian group on the Cartesian product $M\times N$ by the relations which enforce bilinearity. Secondly, we may define the functor $M\otimes_R-$ as the left adjoint of a Hom functor $Hom(M,-)$, a definition which may be extended to $M$ in a cocomplete abelian category. This time the possibility of changing the setting, leading to a more general definition, stands as an argument that the universal definition is not hiding within the usual one.
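For reference, and stated in the simplest case of a commutative ring $R$ (so as not to worry about bimodule structures), the adjunction alluded to above reads
$$\mathrm{Hom}_R(M\otimes_R N,\,P)\;\cong\;\mathrm{Hom}_R\big(N,\,\mathrm{Hom}_R(M,P)\big),$$
naturally in $N$ and $P$; this is exactly the statement that $M\otimes_R-$ is left adjoint to $\mathrm{Hom}_R(M,-)$.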
http://physics.stackexchange.com/questions/41424/what-does-the-n-of-a-group-mean | # What does “the N of a group” mean?
In the context of group theory (in my case, applications to physics), I frequently come across the phrase "the N" of a group, for example "a 24 of SU(5)" or "the 1" (the integer is usually typeset in bold).
My knowledge of group theory is pretty limited. I know the basics, like what properties constitute a group, and I'm familiar with simple cases that occur in physics (e.g. rotation groups SO(2), SO(3), the Lorentz group, SU(2) with the Pauli matrices as a representation), but not much more. I've got a couple of related questions:
• What is meant by "N of a group"?
• Is it just shorthand for an N representation? If so, what exactly is an N representation of a given group? :-)
• How can I work out / write down such a representation concretely, like the Pauli matrices for SU(2)? I'd be grateful for a simple example.
• What does it mean when something "transforms like the N"?
-
You should be asking physicists, not mathematicians, so I'm migrating. But I suspect that this means an $N$-dimensional representation. – Qiaochu Yuan Oct 22 '12 at 14:34
Oh well, I thought this was a standard thing, and not just another piece of particle physics jargon. Thanks for moving! – jdm Oct 22 '12 at 14:45
The dimension of SU(5) is 24. – MBN Oct 22 '12 at 14:45
"Standard thing" depends on who you're talking to. Physicists use jargon for talking about Lie groups that isn't standard among mathematicians and vice versa. – Qiaochu Yuan Oct 22 '12 at 14:47
## 2 Answers
1) Physicists are referring to an irreducible representation (irrep) of whatever group $G$ we are talking about. The point is that irreps are so rare that they are often uniquely specified by their dimension (up to isomorphism). (This is not quite true in general, and physicists then start to decorate the bold-faced dimension symbol with other ornaments, e.g. ${\bf 3}$ and $\bar{\bf 3}$, or e.g. ${\bf 8}_v$ and ${\bf 8}_s$ and ${\bf 8}_c$, etc., to distinguish.)
2) By the way, concerning a group representation $\rho: G \to GL(V,\mathbb{F})$, where $G$ is a group, where $\mathbb{F}$ is a field (typically $\mathbb{F}=\mathbb{R}$ or $\mathbb{F}=\mathbb{C}$), where $V$ is a $\mathbb{F}$-vector space, and where $\rho$ is a group homomorphism; be aware that physicists refer to both the map $\rho$ and the vector space $V$ as "a representation".
-
''the $N$ of a group $G$'' refers to an $N$-dimensional irreducible (projective) representation of the (typically semisimple) group $G$. A representation is a homomorphism $U$ from $G$ to the space of linear self-mappings of a vector space $V$ (in the projective case acting on the rays); it is irreducible if there is no basis in which all $U(g)$ are block triangular. The dimension of the representation is the dimension of $V$.
For example, the representation theory of $SO(3)$ implies that there is precisely one irreducible projective representation of every dimension $N$. The 2-dimensional representation is the spinor representation, the three-dimensional one the ordinary vector representation.
If an object $x$ transforms like an $N$ then $x$ is a generic element from an $N$-dimensional space with the representation $N$, and hence transforms under a group element $g$ by means of $x\to U(g)x$. For example in the case of $SO(3)$, if $x$ transforms as a $2$ then it is a spinor, if it transforms like a $3$ then it is a vector, etc.
In many cases, the dimension determines the representation up to isomorphism, hence the jargon. (Otherwise, representations may be called $N$ and $\overline N$, etc., to distinguish them.) For example, the dimension of SU(5) is 24, and the 24 characterizes the adjoint representation (which has dimension 24).
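As a concrete, if trivial, illustration of labelling irreps by their dimension, here is a small numerical sketch (not from the answer above): the 2 and the 3 of $su(2)$, i.e. the spin-1/2 and spin-1 generators, both satisfying $[J_x, J_y] = i J_z$.

```python
import numpy as np

# spin-1/2 generators ("the 2"): half the Pauli matrices
J2 = [np.array([[0, 1], [1, 0]]) / 2,
      np.array([[0, -1j], [1j, 0]]) / 2,
      np.array([[1, 0], [0, -1]]) / 2]

# spin-1 generators ("the 3") in the |m = 1, 0, -1> basis
J3 = [np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2),
      np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2),
      np.diag([1.0, 0.0, -1.0])]

def satisfies_su2(J):
    # check the defining commutator [J_x, J_y] = i J_z
    comm = J[0] @ J[1] - J[1] @ J[0]
    return np.allclose(comm, 1j * J[2])

print(satisfies_su2(J2), satisfies_su2(J3))   # True True
```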
-
Maybe you meant $so(3)$ instead of $SO(3)$. The Lie group $SO(3)$ has irreps of only odd dimension but its Lie algebra $so(3) = su(2)$ has irreps of every dimension. – Eric Oct 22 '12 at 15:20
@Eric: Thanks. I edited the answer to make clear that I meant projective representations. These are the relevant ones in quantum mechanics. – Arnold Neumaier Oct 22 '12 at 16:38
http://www.anthonysmith.me.uk/research/2010/04/ | Surveying the Universe
# Archive for April, 2010
## Bayesian number counts
30 Apr 2010
Posted by Anthony in Research
No comments
Here's a simple bit of statistics for a Friday lunchtime. You count the number of galaxies in a certain area on the sky (with the galaxies satisfying some specific properties, if you like). What is the true number density? Let the expected number be $\lambda$ (the true number density multiplied by the area on the sky) and the measured number be $k$. Then, in true Bayesian fashion, what we want is
$$P(\lambda|k) = \frac{P(k|\lambda)\,P(\lambda)}{P(k)}.$$
Now, for the prior, $P(\lambda)$, we assume a prior which is flat on a logarithmic scale. That is, we guess (before making the observation) that the expected number is as likely to lie between 1 and 10 as it is to lie between 1000 and 10,000. (The alternative, a flat prior on a linear scale, would mean that we guess the true density is just as likely to lie between 10,001 and 10,010 as it is to lie between 1 and 10, which is ridiculous.) So $P(\lambda) \propto 1/\lambda$. The likelihood, $P(k|\lambda)$, is given by the Poisson distribution. So, ignoring the normalizing factor of $P(k)$,
$$P(\lambda|k) \;\propto\; \frac{e^{-\lambda}\lambda^{k}}{k!}\cdot\frac{1}{\lambda} \;\propto\; \lambda^{k-1}e^{-\lambda}.$$
And this is the Gamma distribution. Easy peasy.
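A quick numerical sanity check (a sketch with an arbitrary observed count $k$, not from the original post): the unnormalised posterior, Poisson likelihood times the $1/\lambda$ prior, agrees with the $\mathrm{Gamma}(k, 1)$ density.

```python
import numpy as np
from scipy import stats

k = 12                                   # made-up observed count, just for illustration
lam = np.linspace(1e-3, 40, 4000)

# unnormalised posterior: Poisson likelihood times the 1/lambda prior
post = stats.poisson.pmf(k, lam) / lam
post /= np.trapz(post, lam)              # normalise numerically

# compare with the Gamma(k, scale=1) density
print(np.max(np.abs(post - stats.gamma.pdf(lam, a=k))))   # ~0: same curve up to grid error
```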
http://math.stackexchange.com/questions/28246/why-are-gauge-integrals-not-more-popular | # Why are gauge integrals not more popular?
A recent answer reminded me of the gauge integral, which you can read about here.
It seems like the gauge integral is more general than the Lebesgue integral, e.g. if a function is Lebesgue integrable, it is gauge integrable. (EDIT - as Qiaochu Yuan points out, I should clarify this to mean that the set of Lebesgue integrable functions is a proper subset of gauge integrable functions.)
My question is this: What mathematical properties, if any, make the gauge integral (aka the Henstock–Kurzweil integral) less useful than the Lebesgue or Riemann integrals?
I have just a cursory overview of the properties that make Lebesgue integration more useful than Riemann in certain situations and vice versa. I was wondering if any corresponding overview could be given for the gauge integral, since I don't quite have the background to tackle textbooks or articles on the subject.
-
## 5 Answers
I would have written this as a comment, but for lack of reputation this has become an answer. Not long ago I posed the same question to a group of analysts and they gave me more or less these answers:
1) The gauge integral is only defined for (subsets of) $\mathbb R^n$. It can easily be extended to manifolds but not to a more general class of spaces. It is therefore not of use in (general) harmonic analysis and other fields.
2) It lacks a lot of very nice properties the Lebesgue integral has. For example, $f \in \mathcal L^1 \Rightarrow |f| \in \mathcal L^1$ obviously has no analogue for the gauge integral.
3) And probably most important: as far as I know (also according to Wikipedia), there is no known natural topology for the space of gauge integrable functions.
-
And on top of that, the space of gauge integrable functions is not a Banach space. – Damien L Mar 20 at 10:41
In mathematics, there is a general philosophy (I think due to Grothendieck) that one should work not with a bad category containing nice objects, but with a nice category containing bad objects. (The category of schemes over a base scheme is perhaps one example, as is the category of presheaves on a category.) The statement applied to analysis is perhaps that working with a nice space is more important than the objects in it. For instance, the spaces one may obtain from the Lebesgue integral--namely, the $L^p$ spaces--are wonderful from the point of view of analysis; they are Banach spaces (Hilbert if $p=2$), and with very weak hypothesis have the duality property $(L^p)^* = L^q$ for $p, q$ conjugate exponents (and $p \neq \infty$). By contrast, the Henstock-Kurzweil integral does not lend itself to such nice spaces: to define a norm on some subspace of the space of integrable functions, you would presumably need to consider the integral of the absolute value. But functions $f$ such that $|f|$ is HK-integrable are in fact Lebesgue integrable! So no new information is gained. The fact that the Lebesgue integral can't handle the derivative of every differentiable function is made up for by the niceness of the resulting function spaces.
(If I remember correctly, one can make the space of all HK-integrable functions into a topological vector space, but it's not anywhere near as nice as the Banach spaces that one obtains via the Lebesgue integral.)
Another reason, which has already been given above, is that the Lebesgue integral is fantastically general. The fact that it can integrate functions on euclidean space is only a very limited and special case of its power; it will work on any measure space. To give one example, the Haar integral (one can obtain a translation-invariant measure on a locally compact group; the Haar measure is the Lebesgue integral with respect to this) is frequently used: for instance, in number theory, one wishes to integrate functions over topological groups such as $K^*$ for $K$ a local field, and the Haar integral is the natural way to do this. (Though, as Matt E observes in the comments, in practice one may integrate over $K^*$ or more generally algebraic groups over local fields, fairly explicitly, without need of actually constructing a measure formally (except in the case $K = \mathbb{R}, \mathbb{C}$, where the usual integral on euclidean space suffices).)
-
+1. Thanks for the answer; this is a very interesting way of looking at it. – Joseph Mar 22 '11 at 1:33
Dear Akhil, This is a minor point, but for $K^{\times}$, or more generally $G(K)$ where $G$ is a linear algebraic group, one doesn't need any abstract theory to construct the integral: in the case when $K$ is $\mathbb R$ or $\mathbb C$, linear groups embed into Euclidean space in a very explicit way, so one quickly reduces to Lebesgue measure on Euclidean space, while for non-archimedean local fields, the Haar measure is essentially a counting measure. This is not to discount your general point, but just to say that Haar measure on local-field valued points of a linear algebraic group ... – Matt E Mar 22 '11 at 5:21
... is not a case where much abstract theory is required. (Of course, the existence of this abstract theory is a source of comfort even in these more explicit settings, and provides moral justification for various considerations. And perhaps more importantly it suggests ways of thinking and points of view that are not as obvious from a more explicit perspective.) Regards, – Matt E Mar 22 '11 at 5:23
@Matt: Dear Matt, that's certainly true; thanks for pointing it out! – Akhil Mathew Mar 22 '11 at 13:50
Dear Akhil, Wikipedia tells me that $f$ is Lebesgue integrable if and only if both $f$ and $|f|$ are HK-integrable. – Damien L Mar 28 at 7:47
This is more of a comment than an answer, but it is too long to be a comment. Take my opinion with a grain of salt.
You ask what properties make the gauge integral "less useful" than the others. This is really several questions: "useful" in terms of pedagogy? in terms of how its ideas generalize? in terms of the theorems it provides in the context it was designed for (I think, standard calculus in one variable or in $\mathbb{R}^n$)? And "usefulness" is such a fuzzy and contentious notion... but we don't need to go there.
In the context for which it was designed, it is certainly less well known, less widely documented in textbooks, and certainly less taught than the Riemann or Lebesgue integrals, and it seems to me that your "usefulness" question is really about this. You're guessing, quite reasonably, that there must be some mathematical or pedagogical reason why Calculus I/II/III or Analysis I/II/III at University X is almost guaranteed to study another integral and not the gauge integral.
Well, I don't think there is any logical reason for this.
But there isn't a pressing need for the gauge integral, either. Its benefits, whether technical (in getting theorems with fewer hypotheses) or pedagogical (some feel it is easier to learn or to teach), do not seem to be enough to outweigh historical tradition. And it is really just a question of tradition.
It's a bit like asking why the USA doesn't use the metric system. In each case, there isn't any abstract reason why people don't do it, and if history could be replayed with different initial conditions, things might have been done differently.
The strength of tradition would be more of a surprise if the "technical advantages" of the gauge integral were more pronounced in relation to what is already in wide use. A huge factor in the adoption of the Lebesgue integral was all of its nice properties (e.g. completeness of the $L^p$ spaces, the dominated convergence theorem) that made analysis involving integrals possible, and on a firm logical footing, in ways that it really wasn't before. Speaking informally, I think 99% of what humanity wants out of integrals was taken care of with these advances. Of course the next thing that comes along can't offer as dramatic a change, so nobody cares to switch.
The chief "added advantage" of the gauge integral (or at least the one most often mentioned) is its more general "fundamental theorem of calculus." IMHO this sounds much better than it actually is. I'm an analyst and every time I've ever used the FTC, theorems about older integrals were enough; there was simply no need for more.
Unless you feel "having the best FTC possible" is a key property an integral should have... but analysis is full of statements that are easy theorems under restrictive hypotheses, and harder theorems under more general hypotheses, and often there is an epsilon of stuff left over, not covered by the standard tricks, where the statement is still true. Statements about interchanging limiting operations (e.g. differentiation under the integral) are classic examples; the truth boundary is so often unknown--- or it is so difficult or unrewarding to formulate useful "if and only if" conditions under which the statement is a theorem, that nobody bothers to do it. From this view, it's not surprising that there is a better FTC than the Lebesgue FTC, and it's not surprising that there is not a movement to correct this "defect".
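To make the "better FTC" concrete, here is the standard example usually quoted (it does not appear in the original posts): let
$$F(x)=x^{2}\sin\frac{1}{x^{2}}\quad(x\neq0),\qquad F(0)=0 .$$
Then $F$ is differentiable at every point of $[0,1]$, so the gauge-integral form of the fundamental theorem gives $\int_0^1 F'\,dx=F(1)-F(0)=\sin 1$ with no further hypotheses, while $F'$ is not Lebesgue integrable on $[0,1]$, because the term $(2/x)\cos(1/x^{2})$ in $F'$ makes $\int_0^1|F'|\,dx=\infty$.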
So that's my take: people don't use the gauge integral because it is our heritage to use other integrals. These other integrals do most of the same things, and the gauge integral isn't so much better that people switch. And once we know why most experts don't use it... we know why it's generally not taught to anybody. [The pedagogical case for or against any integral of your choosing is a completely separate issue.]
-
My understanding is that the gauge integral, also known as the Henstock-Kurzweil integral, is actually not more general than the Lebesgue integral in the sense that it does not generalize to spaces more exotic than subsets of $\mathbb{R}^n$. The Lebesgue integral, on the other hand, generalizes to the integration of a measurable function with respect to a measure on a measure space, which is enormously general and useful.
Keep in mind that part of the reason we prefer the Lebesgue integral to the Riemann integral is not just that we can integrate more functions, it's that the theorems are nicer.
-
Alexander contradicts you concerning generalizations to higher dimensions. – Yuval Filmus Mar 21 '11 at 6:32
Fair enough. Corrected. – Qiaochu Yuan Mar 21 '11 at 6:41
I didn't say anything about whether dominated convergence holds for the Henstock-Kurzweil integral because I didn't know. All I said is that the ability to integrate more functions is not the only reason people prefer the Lebesgue integral to the Riemann integral. – Qiaochu Yuan Mar 21 '11 at 15:27
Ah, sorry. I just wanted to say: besides integrating more functions we should indeed care about theorems, and also about difficulty of proofs. If you want to integrate over R then HK integral wins. Fundamental thm of calculus (a limited form) is a difficult theorem in Lebesgue's theory (LT). In the HK theory it's trivial. Dominated convergence thm is easy in LT, and it's also easy in HK theory. Definition of the Lebesgue's measure is not easy, definition of the HK integral is very easy. I fully agree though that measure theory is much more important than the HK integral. (...) – user8268 Mar 21 '11 at 20:04
(...) Both because spaces other than $R^n$ and because of the unity of the subject: all "reasonable" measure spaces (of the same total measure) are isomorphic (e.g. Wiener measure is isomorphic to Lebesgue's on $[0,1]$). HK integral should not be a replacement of Lebesgue's theory. The best suggestion I know is, rather, to replace Riemann integral by HK integral. The theory is the same and theorems much better (often with easier proofs). Sorry for this long comment :) – user8268 Mar 21 '11 at 20:11
The gauge integral is as general as the Lebesgue integral: it can be defined over arbitrary spaces. In fact it is absolutely necessary when we simplify calculus. There are simple proofs of Hake's theorem, so the study of improper integrals need not be carried out separately. Feynman path integrals can be defined more naturally and easily using gauge integration rather than Lebesgue integration, and it is also easier to use the gauge integral for Wiener integration. The questions regarding the space of gauge integrable functions not being useful are irrelevant, as one can always use the space $L^1$ for absolutely convergent gauge integrals; it is the same $L^1$ space. But gauge integrals give you the added benefits of a better fundamental theorem of calculus, a simpler proof of Stokes' theorem, and a simpler and more natural proof of Fubini's theorem. Further, though the monotone convergence theorem has a simpler proof in the Lebesgue theory, the simplicity is deceiving, as one first has to extend the measure to a sigma algebra; in the Henstock theory the measure is generated automatically. It is simply the human inertia of a generation.
-
To improve the readability I recommend reading your answer through before you post it. – AndreasS Nov 16 '12 at 18:00
http://publications.ias.edu/rpl/section/21 | Bibliography [ pdf ]
# Functoriality
written in 1967, to appear 2011
[ handwritten.pdf ] , [ weil1.pdf ]
In January of 1967, while he was at Princeton University, Langlands wrote a letter of 17 hand-written pages to Andre Weil outlining what quickly became known as the 'Langlands conjectures'. This letter even today is worth reading carefully, although its notation is by present standards somewhat clumsy. It was in this letter that what later became known as the 'L-group' first made its appearance, like Gargantua, surprisingly mature. Because of its historic importance, we give here two versions of this letter, as well as a pair of supplementary notes accompanying it. A typed copy of this letter, made at Weil's request for easier reading, circulated widely among specialists in the late 'sixties and 'seventies. The covering note from Harish-Chandra has been helpful in establishing a date for the letter, which is itself undated.
Langlands Comments: The letter to Weil is undated. However, thanks to David Lieberman, I was able to discover that Chern's talk in the IDA Mathematics Colloquium was held on January 6, 1967. Thus the letter was written between then and the date January 16 that appears in the note of Harish-Chandra.
In order to make it easier for Weil to read, the handwritten note was typed some days later. The four footnotes were then added and one or two phrases were modified for the sake of clarity. These modifications are incorporated into the present version. Otherwise the letter has been allowed to stand as it was. Even unfortunate grammatical errors have not been corrected.
The emphasis on explicit, concrete reciprocity laws may surprise the reader. The note A little bit of number theory will clarify what I had in mind.
1967
[ langlands-hc-ps.pdf ] , [ l0.jpg ]
Written in 2010
Langlands comments: This note Funktorialität in der Theorie der automorphen Formen: Ihre Entdeckung und ihre Ziele was written as commentary to accompany the original letter in a collection of documents on reciprocity laws and algebraic number theory, to appear shortly.
Lectures in modern analysis and applications III, Lecture Notes in Mathematics 170 1970
The conjectures made in the 1967 letter to Weil were explained here more fully. This appeared originally as a Yale University preprint, later in the published proceedings of a conference in Washington, D.C., in honor of Solomon Bochner: Lectures in modern analysis and applications III, Lecture Notes in Mathematics 170, Springer-Verlag, 1970.
Langlands Comments: The lecture in Washington, D. C. on which these notes were based (they were presumably written shortly thereafter) was, I surmise, delivered sometime in 1969, thus more than two years after the letter to Weil. They were the first published account of the conjectures made in the letter. In the meantime, a certain amount of evidence had accumulated.
The letter had been written, I believe, only a few days or at most weeks after the discoveries it describes. They were not mature. The local implications appear not to have been formulated, and the emphasis is not on the reciprocity laws as a means to establish the analytic continuation of Artin L-functions but on concrete, elementary laws, for which groups other than GL(n) are important because they admit anistropic R-forms. The coefficients of automorphic L-functions attached to groups anisotropic over R can be interpreted in an elementary way as in A little bit of number theory. In addition, I was not aware of Weil's paper on the Hecke theory or of the Taniyama conjecture. Indeed, not being a number theorist by training (and perhaps not even by inclination) I was well informed neither about Hasse-Weil L-functions nor about elliptic curves.
After the letter had been transmitted, I learned from Weil himself both about his paper and about the Weil group. This is implicit in the lecture and accounts in part for its greater maturity. First of all, encouraged by Weil's re-examination of the Hecke theory, Jacquet and I had developed a theory for GL(2) with some claims to completeness both locally and globally, although at both levels the major questions about reciprocity remained unanswered. With the local theory for GL(2) came ε-factors and the correspondence of the letter then required that such factors also exist for Artin L-functions. One achievement of a year spent in Turkey was the proof that these ε-factors exist. One achievement of the following year, accomplished in collaboration with Jacquet, was a complete proof of the correspondence between automorphic forms on GL(2) and on quaternion algebras. This correspondence had, of course, already appeared classically. Our achievement was, I believe, local precision, in particular the understanding that there were local phenomena of importance, and generality.
Although specific attention is drawn in the lecture to the case that G' is trivial and the automorphic L-functions attached to it therefore nothing but Artin L-functions, it is not at all stressed that functoriality entails the analytic continuation of the Artin L-functions. It is of course evident, but I had not yet learnt the advantages of underlining the obvious. The other examples of functoriality may or may not appear well chosen to a number theorist in 1998. In 1967, however, it was rather agreeable to see the recently established analytic theory of Eisenstein series fitting so comfortably into a conjectural framework with much deeper arithmetical implications.
The question about elliptic curves appearing toward the end of §7 is nothing but a supplement to the conjecture of Taniyama-Shimura-Weil, but a useful one: a precise local form of the conjecture, that is now available, thanks to Carayol and earlier authors, whenever the conjecture itself is. At the time, what was most fascinating was, as mentioned in the comments on the letter to Serre, the relation between the special representation and the l-adic representations attached to elliptic curves with nonintegral j-invariant.
The observation about L-functions and Ramanujan's conjecture has, I believe, proved useful.
Pacific Journal of Mathematics 61 (1998)
This first appeared in mimeographed notes dated 1968 available from the Mathematics Department of Yale University. It was reprinted in the issue of the Pacific Journal of Mathematics dedicated to the memory of Olga Taussky-Todd (volume 61 (1998), pp. 231-250).
This is a short note written to illustrate some examples of how the conjectures worked out in very explicit examples.
Langlands Comments: I am not sure exactly when this text was written. Internal evidence and memory together suggest that it was early in 1973. The internal evidence cannot be interpreted literally, as I was unlikely to be sure even in 1973 exactly when the letter to Weil was written.
The examples are of the type I had in mind when writing that letter. I had not, however, at that time formulated any precise statements. Indeed, not being aware of the Shimura-Taniyama conjecture and not having any more precise concept of what is now known as the Jacquet-Langlands correspondence than that implicit in the letter, I was in no position to provide the examples of the present text, some of which exploit results that had become available in the intervening years. The formulas are as in the original text. I did not repeat the calculations that lead to them.
I have never found anyone else who found the type of theorem provided by the examples persuasive, but, apart from the quadratic reciprocity law over the rationals, explicit reciprocity laws have never had a wide appeal, neither the higher reciprocity laws over cyclotomic fields nor simple reciprocity laws over other number fields (Dedekind: Über die Anzahl der Idealklassen in reinen kubischen Zahlkörpern).
The conjecture referred to in the text as the Weil conjecture is now usually referred to as the Shimura-Taniyama conjecture.
Proceedings of the Gibbs symposium 1989
Edinburgh conference on automorphic forms, AMS 1997
# Letters
## Letter to Serre
March 15, 1968
Langlands spent 1967-68 visiting in Ankara, Turkey, and while there wrote this letter to Serre. In it occurs for the first time the question of how to account for 'special' representations of the Galois group, such as at primes where an elliptic curve has unstable bad reduction, corresponding to special representations of GL(2). This correspondence was later expanded to the Deligne-Langlands conjecture, proven eventually by Kazhdan and Lusztig.
Langlands Comments: This letter is a response to a question of Serre about the gamma-factors appearing in the functional equations of automorphic L-functions. Fortunately Serre's letter to me was accompanied by several reprints, among them apparently the paper Groupes de Lie l-adiques attachés aux courbes elliptiques that appeared in the volume Les tendances géométriques en algèbre et théorie des nombres.
Although the letter promised in the last line was never written, it is clear what I had in mind. Sometime soon after writing the letter to Weil, perhaps even at the time of writing, I was puzzled by the role of the special representations. The solution of the puzzle was immediately apparent on reading Serre's paper which treated the l-adic representations associated to elliptic curves whose j-invariant was not integral in the pertinent local field. The special representations of GL(2) corresponded to these l-adic representations. The connection between non-semisimple l-adic representations and various kinds of special representations is now generally accepted. The theorem of Kazhdan-Lusztig is a striking example.
## Letter to Deligne
March 31, 1974
Langlands Comments: Comments on this letter, as on many of the others about functoriality and related matters, are at the moment (2009) necessarily provisional, for I hope we shall soon arrive at a stage where we can weigh with more confidence the significance of a number of specific contributions to the theory of automorphic forms and its connections to Galois theory.
Taking, for the purposes of these personal comments, as the subject's beginning the letter to Weil in January, 1967, there are up to the time of this letter three periods or phases, the third emerging, in part, from the second. In the second two quite distinct currents appeared, sometimes merged, sometimes remaining quite separate. During the first, initial period the local consequences or analogues of the global notions were formulated, and in the case of GL(2) collated, developed, and compared with the available material. Moreover, some significant, specific features of general and long-term importance were discovered: the existence of the ε-factor; the role of the special representation, at first for GL(2), but implicitly in general (see the 1967 letter to Serre in Group 5) although there was no urgent need for general formulations; a first use of the trace formula to establish a significant case of functoriality, the correspondence, local and global, between representations of GL(2) and representations of its inner forms.
The second period began, for me, with two matters. The first was the introduction, as recorded in the letters to Lang of December, 1970, of the Galois representations of general Shimura varieties as an enlargement of the conceptual frame, partially implicit, partially explicit in the letter to Weil of January, 1967. The letters were written a couple of months before the Bourbaki talk of Deligne in which he gave a uniform reformulation of Shimura's theory that was tremendously valuable to me and, indeed, to every student of Shimura's papers. If that reformulation had been available the letters might have been briefer, but it was perhaps more amusing to discover the existence of the necessary representations of the L-group experimentally. It was also perhaps instructive to follow, at least at first, the longer road traversed by Shimura. I began fairly soon to use the designation Shimura variety and introduced it formally, I believe for the first time, in the 1974 paper Some contemporary problems with origins in the Jugendtraum.
The second matter was what came to be called endoscopy, a term created by Avner Ash in response to an appeal of Diana Shelstad. As is apparent from the discussion in the first letter to Lang in which the Blattner conjecture is invoked, the ideas of that letter led immediately to complications caused by the unequal multiplicities of what I referred to as L-indistinguishable representations, a term improved on by the that of L-packet introduced, I believe, by Borel later. The problems raised could be attacked in two different contexts: real groups, where Harish-Chandra's theory was available; and SL(2) where they were of an elementary nature. Shelstad, in her thesis and later, clarified the real theory completely. I undertook, jointly with Labesse to whom I had described the problems, the study of the group SL(2), much more elementary but also very illuminating. Endoscopy is now, thanks to Ngo Bao Chau, Laumon, Kottwitz and others a subject of independent interest and importance, although still central to the theory of automorphic forms. I tried to underline the significance of Kottwitz's contributions to endoscopy and Shimura varieties in the comments on the paper Representation theory and arithmetic that appears in this section.
The third phase began for me with the paper on base change Base change for GL(2), but it is well to remember that base change had an earlier history that is mentioned already in this letter, namely the work of Doi and Naganuma, as well as that of Jacquet, on what would now be called quadratic base change for GL(2). The subsequent work by Saito and by Shintani, who were influenced by the results for quadratic base change had of course a decisive and direct influence on me. Base change for GL(2) together with later work by Arthur and Clozel for GL(n) has played a major role in the fusion of the general theory of automorphic forms on one hand and the study of l-adic Galois representations on the other. The decisive factor was the role played by base change in the proof of some previously inaccessible cases of the Artin conjecture that were invoked by Wiles in his proof of Fermat's Last Theorem. Later, similar ideas were used by Richard Taylor and collaborators in the proof of the Sato-Tate conjecture, but as I have already intimated elsewhere -- as indeed the present letter to Deligne foreshadows -- I do not believe this is where long-term importance of Wiles's ideas lies. Indeed, in this letter in the first of the "two vague problems", I already describe why and how the Sato-Tate conjecture, in the particular form in which it first appeared and in a general form, is to be regarded as an immediate consequence of functoriality. The vagueness is considerably reduced if functoriality is regarded as including "Arthur's conjectures", appearing in the two papers Unipotent automorphic representations: conjectures (1989) and Unipotent automorphic representations: global motivation (1990).
The uncertainty, on the one hand, in the present status of functoriality, or rather of its status in the next few years, and, on the other, of the relation of the theory of automorphic forms to algebraic geometry -- in the sense of the intimations of Grothendieck -- and to Galois representations makes any precise speculations about the development of these subjects rash. There are more grounds for confidence in general expectations than there were in 1974 but circumspection in specifics is still wise. My own hope is that we shall soon be on the road to a proof of functoriality and by methods of (nonabelian) harmonic analysis. So the largest unknown may become after not too many years the relation between arithmetic (thus motives!) and the analytic theory.
The questions in the letter were callow but not premature. The question for function fields was not much to the point. The work of Laurent Lafforgue, Chtoucas de Drinfeld et correspondance de Langlands, clarifies considerably what to expect, but I do not believe much has yet been done beyond the group GL(n). Whatever is true, the question formulated for function fields towards the end of this letter does not seem to be now particularly perceptive. Whatever is valid along the lines of the not adroitly formulated question would presumably be a consequence of Lafforgue's work and functoriality.
For number fields, the "obvious guess" at the end of the letter is still very much just that. The question of showing that the automorphic π over a number field that correspond to motives are characterized by their infinite components remains as before and not much has been done, beyond some cases of Artin's conjecture for two-dimensional representations and, of course, the theorem of Deligne-Serre, to clarify it. It remains a central question.
It is not, however, the primary problem. This is to show that every motive over a number field corresponds to an automorphic form. I have since reflected in various ways on the question, first in the paper Automorphic representations, motives, and Shimura varieties. ein Märchen, in which the Taniyama group was introduced. In particular there was shown to be a canonical homomorphism from the Weil group to this algebraic group, or rather to its finite-dimensional quotients. The Weil group can be said, as a consequence of the paper Representations of abelian algebraic groups to be the "Galois group" for automorphic forms on tori. Thus a complex homomorphism of the Taniyama group to the L-group of a torus defines on composing it with the homomorphism from the Weil group to the Taniyama an automorphic form on the torus. Although the remarks in the introduction to the Märchen suggest that I was aware, in some sense, while writing it that the Taniyama group was related to motives of CM-type, I do not think my ideas were very precise. A precise theorem was formulated and proved by Deligne. (See the book Hodge cycles, motives, and Shimura varieties with papers by Deligne, Milne, Ogus, and Shih.) Using a provisional, but acceptable and, so far as I can see, logically impeccable, notion of motive, Deligne proves that the Taniyama group is isomorphic to the "Galois group" for motives of potentially CM-type. Any theorem for motives in general will have to be compatible with this result.
In the Märchen, I was too strongly influenced by the categorical constructions found for example in the book Catégories Tannakiennes of Saavedra Rivano, a theory explained again, with mistakes corrected, by Deligne and Milne in the collection of papers mentioned. For example, I introduced, for automorphic forms on GL(n), direct sums, which exist thanks to the theory of Eisenstein series, and products, whose existence is, so far as I know, still only partially established. That was fine, but I was also attached to the notion of a fiber functor for automorphic representations. I am now inclined to suppose that this was misguided. As I explained in the article Reflexions on receiving the Shaw Prize in section 12, it may be simplest, once functoriality has been established along the lines of Beyond endoscopy, if that is possible, to construct the "automorphic Galois group" by hand by patching together groups corresponding to the "thick representations" of Reflexions. This would be a large group, involving inverse limits of reductive groups, but in fact any concrete meaning it had would undoubtedly be at a finite level. The groups would only be defined over C.
I observe in passing that although the adjective "thick", as in "thick description" has met with considerable success among historians and social scientists, mathematicians seem reluctant to employ it. An alternative would be "hadronic", taken from the Greek, well-known from elementary particle theory, and meaning exactly "thick".
The Märchen was written in the late 1970's when I was still relatively young and impressionable. Having now lived for some decades with various ideas that were new to me then and having had many more years to reflect on the theory of automorphic representations and related matters, I am now inclined to think that although Tannakian categories may ultimately be the appropriate tool to describe the basic objects of algebraic geometry, automorphic representations have a different structure, best expressed by functoriality, in which of course statements formulated in terms of the finite-dimensional representations of the L-group are central. Among the less well-known, but still striking, statements of this kind are predictions of the multiplicity with which a given automorphic representation appears in the space of functions L^2(G(F)\G(A_F)). Some examples to which I draw the reader's attention have been found by Song Wang, Dimension data and local versus global conjugacy in reductive groups.
One expects the global correspondence to define and to be defined by a homomorphism of the "automorphic Galois group" GA onto the "motivic Galois group" GM, but only as a group over C. Of course we are still a long distance from the global correspondence, as the contributions to the footnote to the review in section 12 of Hida's book p-adic automorphic forms on Shimura varieties make clear. Nevertheless the possibility of such a correspondence and such a homomorphism influence, often in a very concrete way, the thinking of many mathematicians. The existence of the Tate motives defined by projective spaces means that there has to be a homomorphism of the "motivic Galois group" to GL(1); the existence of a "degree" or "weight" for motives, just as there is for cohomology, would mean that there was a homomorphism of GL(1) to the motivic Galois group. Something similar is available for automorphic representations: the automorphic representations of GL(1), especially those defined by powers of the norm, correspond to the Tate motives. Thus there may be something like a Tate-twisting. There may also be an analogue of the weight, or, more generally of one-dimensional motives. For automorphic forms these are best thought of as characters of the group of idele classes over the ground field, so that we expect that a homomorphism of this group, or some modification of it, perhaps as an inverse limit, into the "automorphic Galois group" GA exists. These two homomorphisms, of an abelian group TA into GA and of GA to TA allow the introduction of various twistings of any homomorphism of the automorphic Galois group GA to the motivic Galois group GM.
That the homomorphism of the "automorphic Galois group" to the "motivic Galois group" can only be defined over a field large enough for the definition of both, thus, perhaps, only over C troubles some specialists. As I observed in the comments on the letter to A. Gee, some are also troubled by the circumstance that in the theory of Shimura varieties the automorphic representation that defines the cohomology groups from which the Galois representation is constructed is not the automorphic representation to which it corresponds by the (Langlands) correspondence -- at least not if one uses the local correspondence introduced by me, a correspondence with, in my view, much to be said for it. There are even those who would like to modify the definition, by isolating a collection of automorphic representations that define the image of the motives and that, in contrast to a larger collection of automorphic representations, permit, perhaps after a twisting of the kind just described, the introduction of an "automorphic Galois group" over, say, the field of algebraic numbers. L. Clozel has, in the paper Motifs et formes automorphes examined the question carefully.
Only motives over number fields can correspond to automorphic representations. Nevertheless we are certainly hoping to establish sooner or later a theory for general motives, say over C. One is very quickly led to ask, what will be the function of a richer theory over Q, in which the relation with automorphic representations appears, to the theory over C. I do not know of any papers in which this question has been broached, say in relation to the Hodge conjecture.
The letter does not indulge in much real mathematics and, when it does, the explanations are obscure to the point of incomprehensibility. My explanation of the expected local correspondence, in the section labelled l-adic motivation, is certainly somewhat embarrassing! I was trying to express whatever understanding I had of the l-adic Galois representations associated to algebraic varieties -- not knowledge, which I lacked almost completely. An informed, current survey can be found in the first section of R. Taylor's article Galois representations, in Annales de la Faculté des Sciences de Toulouse 13 (2004), 73-119. The embarrassing discussion in the letter is, however, purely local, a matter of explaining how appropriate l-adic representations correspond to a pair (ψ,Y).
The loose phrase "such that ψ(σ) is semi-simple if σ projects to the Frobenius" is inappropriate and (surely?) not what was intended. Its presence in the letter is presumably a result of haste and carelessness, not to speak of some real ignorance. I have left it only for the sake of historical veracity. The assumption should be, and was implicitly, that m(σ) is semi-simple. I neglected, moreover, to state that the residual characteristic of the local field F is supposed prime to l. Anyhow, the purpose is clearly to explain to myself --- not to the recipient --- how an l-adic representation leads to a pair φ, Y. The argument is not only brief and hurried, but also fundamentally incomplete, although not fundamentally incorrect. It is clear from the discussion of de Rham representations in Taylor's report that in 1974, at least, I was in no position to explain adequately what I had in mind. There is, none the less, something to be said for the use of the Jacobson-Morosow theorem, implicitly invoked by the phrase "as you know", for it leads to the replacement of the unipotent parameter by an imbedding of SL(2) in the L-group.
I was, by the way, surprisingly -- and inappropriately -- optimistic in the letter about the local conjectures, although serious inroads were made within twenty-five years, so that the time scale of five to ten years was not completely out of order. My suspicion now is that the decisive insights, namely for all groups and complete from the point of view of harmonic analysis, will appear for the global and local problems simultaneously.
I add finally that the list of problems suggested by Dieudonné did appear, but so far as I can tell this letter did not influence it in any way.
March 24, 1974
## Letter to Roger Howe
February 23, 1975
http://mathoverflow.net/questions/9667/what-are-some-results-in-mathematics-that-have-snappy-proofs-using-model-theory/13610 | ## What are some results in mathematics that have snappy proofs using model theory?
I am preparing to teach a short course on "applied model theory" at UGA this summer. To draw people in, I am looking to create a BIG LIST of results in mathematics that have nice proofs using model theory. (I do not require that model theory be the first or only proof of the result in question.)
I will begin with some examples of my own (the attribution is for the model-theoretic proof, not the result itself).
1) An injective regular map from a complex variety to itself is surjective (Ax).
2) The projection of a constructible set is constructible (Tarski).
3) Solution of Hilbert's 17th problem (Tarski?).
4) p-adic fields are "almost C_2" (Ax-Kochen).
5) "Almost" every rationally connected variety over Q_p^{unr} has a rational point (Duesler-Knecht).
6) Mordell-Lang in positive characteristic (Hrushovski).
7) Nonstandard analysis (Robinson).
[But better would be: a particular result in analysis which has a snappy nonstandard proof.]
Added: The course was given in July of 2010. So far as I am concerned, it went well. If you are interested, the notes are available at
http://www.math.uga.edu/~pete/MATH8900.html
Thanks to everyone who answered the question. I enjoyed and learned from all of the answers, even though (unsurprisingly) many of them could not be included in this introductory half-course. I am still interested in hearing about snappy applications of model theory, so further answers are most welcome.
-
The last chapter of the new edition of Courant and Robbins' "What is Mathematics?" has an appendix in which Ian Stewart gives a sketch of nonstandard analysis and spectacular applications to proofs in analysis. – Anweshi Jan 16 2010 at 6:09
## 18 Answers
Hilbert's Nullstellensatz is a consequence of the model completeness of algebraically closed fields.
Edit: I don't have a reference, but I can sketch the proof. Suppose you have some polynomial equations that don't have a solution over ${\mathbb C}$. Extend ${\mathbb C}$ by a formal solution, and then algebraically close to get a field $K$. The field $K$ obviously contains a solution, but by model completeness of algebraically closed fields, a first-order statement is true in an algebraically closed field only if it is true in every algebraically closed field. The existence of a solution to a finite set of polynomial equations is a first-order statement and ${\mathbb C}$ is algebraically closed. QED.
-
Again, could you provide a reference? Thanks. – Pete L. Clark Dec 26 2009 at 14:43
2
A good, brief reference is the very first pages of David Marker's notes on Model Theory of Fields. See math.uic.edu/~marker/mtf-reading.html Also, Kevin, I'm not really happy with your proof summary: as you note, it is a consequence of model completeness of ACF. But then you say that "a first-order statement is true in an algebraically closed field only if it is true in every algebraically closed field", and this property is NOT model completeness, it is simply completeness. (And moreover, ACF is model complete but not complete, ACF_p is complete though.) – Dan Petersen Dec 27 2009 at 11:43
2
Let me briefly explain why I like this example best (for now!). First, the application is to a theorem which is important and mainstream (especially here in Athens, where algebraic geometry is popular). Second, the model-theoretic argument seems insightful and in some ways more geometric than the standard proofs: it is essentially a generic points argument. I plan to use it in my lectures, and I'll post notes...several months from now. – Pete L. Clark Dec 28 2009 at 14:20
The conclusion of the proof sketch is incorrect as stated since the coefficients of the given polynomials are not arbitrary. However, model completeness does show that if polynomial equations with coefficients in C have a common solution in some extension K, then they must have a common solution in C, since C is an elementary submodel of K; this gives a correct conclusion. – François G. Dorais♦ Feb 18 2010 at 4:12
I'm a little confused. Isn't char(K)=p a first-order statement which is true of some algebraically-closed fields but not others? – Greg Muller Feb 18 2010 at 15:42
Plane geometry is decidable. That is, we have a computable algorithm that will tell us the truth or falsity of any geometrical statement in the cartesian plane.
This is a consequence of Tarski's theorem showing that the theory of real closed fields admits elimination of quantifiers. The elimination algorithm is effective and so the theory is decidable. Thus, we have a computable procedure to determine the truth of any first order statement in the structure (R,+,.,0,1,<). The point is that all the classical concepts of plane geometry, in any finite dimension, are expressible in this language.
Personally, I find the fact that plane geometry has been proven decidable to be a profound human achievement. After all, for millennia mathematicians have struggled with geometry, and we now have developed a computable algorithm that will in principle answer any question.
I admit that I have been guilty, however, of grandiose over-statement of the situation---when I taught my first logic course at UC Berkeley, after I explained the theorem some of my students proceeded to their next class, a geometry class with Charles Pugh, and a little while later he came knocking on my door, asking what I meant by telling the students "geometry is finished!". So I was embarrassed.
Of course, the algorithm is not feasible--it's double-exponential time. Nevertheless, the fact that there is an algorithm at all seems amazing to me. To be sure, I am even more surprised that geometers so often seem unaware of the fact that they are studying a decidable theory.
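To give a sense of the translation (just an illustrative example, not Tarski's own axiomatization): the geometric primitive "the points $(a_1,a_2)$, $(b_1,b_2)$, $(c_1,c_2)$ are collinear" becomes the polynomial condition
$$(b_1-a_1)(c_2-a_2)-(b_2-a_2)(c_1-a_1)=0,$$
and incidence with circles and conics is handled similarly; so any assertion built from finitely many such conditions with Boolean connectives and quantifiers over the coordinates of points is a first-order statement about (R,+,.,0,1,<) and falls under the decision procedure.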
-
1
This is doubtless rather subjective, but I would call the completeness of geometry a theorem of meta-mathematics. The trouble is, as you point out, that it does not seem to lead to a quicker or better understanding of any particular geometric fact. – Pete L. Clark Dec 24 2009 at 18:53
1
I am not sure exactly how you would like to divide model theory and meta-mathematics, but surely this topic fits naturally into a discussion of elimination of quantifiers, which would seem to be an important part of model theory, no? About your second comment, alas, it is true. Nevertheless, I shall wax poetic about the significance of our having come to a great enough understanding of geometry that we have a decision procedure. – Joel David Hamkins Dec 26 2009 at 2:55
2
@Kevin Lin: Tarski proved that there is an algorithm to decide the truth or falsity of any first order statement in the real-closed field (R,+,.,0,1,<). The concepts of points, lines, planes, circles, conics, spheres, paraboloids, etc. are all expressible in this language, using the usual polynomials. Thus, we have a decision procedure for Cartesian (as opposed to Euclidean) geometry. And the algorithm handles any R^n simply by working with coordinates. – Joel David Hamkins Dec 26 2009 at 13:28
3
Tarski's concept of plane geometry is both more and less than Euclid's concept. More, because it includes curves other than lines and circles. Less, because it doesn't admit integer variables. A famous open problem implicit in Euclid is: for which n is the regular n-gon constructible? – John Stillwell Feb 1 2010 at 2:15
1
@John: I agree. Once you allow quantification over integers, then Tarski's theorem breaks down completely, and undecidability reigns again. Indeed, one get undecidability even for assertions having just a single integer quantifier, since this is enough to express the halting problem. – Joel David Hamkins Feb 1 2010 at 13:53
Not sure if this is what you are looking for but Tychonoff's theorem has a snappy two line proof in the non-standard setting once the non-standard characterization of compact is established. The non-standard characterization of compactness says that $X$ is compact if and only if every $x\in X^*$ is infinitely close to some standard point in $X$ and infinitely close is defined in terms of the monads of the topology.
Edit: (Tychonoff) $X := \prod_{i\in I}X_i$ is compact if and only if each $X_i$ is compact.
The forward direction is easy and does not require any non-standard analysis. Simply use the projection maps.
Now suppose all the $X_i$ are compact and let $y\in X^*$ then $y(i^*)\in (X_i)^*$. Since $X_i$ is compact there is some $x(i)\in st(y(i^*))$ and we can take $x\in X$ to be the product of the points $x(i)$. By construction $x^*\approx y$ and this establishes the backward direction.
The above theorem along with its non-standard proof can be found in "Nonstandard Analysis" by Martin Väth on page 166 but I'm sure any other book on the subject will include a proof of the theorem using pretty much the same terminology and concepts.
Notation: $(-)^*$ is the extension map, $st(-)$ is the standard part map, and $\approx$ is the relation defined in terms of the monads of the topology.
-
Could you give some more specifics and/or a reference? – Pete L. Clark Dec 24 2009 at 10:40
The `usual' proof via ultrafilters is very similar, so I do not think that you gain anything here by using non-standard analysis. – Carsten Schultz Aug 11 2010 at 10:00
Also, the 'usual' proof that uses nets instead of ultrafilters is even more similar to this one... – David FernandezBreton Jun 24 2011 at 13:10
The forward direction only holds when all $X_i$ are non-empty. – Martin Brandenburg Apr 12 at 13:28
There are many results in Banach space theory that are proved via ultraproducts or non standard hulls, and most books on the subject contain a few. One nice one that is easy to state is that a Banach space that is uniformly homeomorphic to a Hilbert space is linearly homeomorphic to a Hilbert space.
-
You can find lots of other applications just by browsing the titles of the papers on the MODNET preprint server (follow this link and look under "Publications" on the left side of the page). For example:
1. "The monomorphism problem in free groups", by L. Ciobanu and A. Ould Houcine (in which they show it is decidable);
2. "An invitation to model-theoretic Galois theory," by A. Medvedev and Ramin Takloo-Bighash -- expository paper explaining how the Galois correspondence can be explained using model theoretic tools;
3. "On algebraic closure in pseudofinite fields," by O. Beyarslan and E. Hrushovski;
etc.
Also, there is a recent book Model Theory with Applications to Algebra and Analysis: v. 1 (LMS Lecture Note series v. 349, Cambridge, 2008) which would probably be very relevant (judging by the table of contents -- unfortunately I haven't had a chance to read it yet).
-
The link seems to be broken. – david karapetyan Dec 25 2009 at 0:38
@david karapetyan: Thanks, it's fixed. For some reason, the URL of the preprint server as displayed in my browser had a space in the middle of a directory name! Very strange. – John Goodrick Dec 26 2009 at 5:57
Don't forget the beautiful theory of motivic integration, initiated by Maxim Kontsevich at an Orsay seminar, and developed by Jan Denef, François Loeser, Raf Cluckers, Julien Sebag and many others. The theory is becoming increasingly important.
The crucial, first paper by Denef and Loeser:
Denef & Loeser,
"Germs of arcs on singular algebraic varieties and motivic integration" (Inventiones Mathematicae 1999)
Another standard reference which should be readable - "In writing the paper we tried our best keeping it accessible to a wide audience including algebraic geometers and model theorists":
Cluckers & Loeser,
"Constructible motivic functions and motivic integration" (Inventiones Mathematicae 2008)
A very nice overview of some applications:
Loeser's slides for his ECM plenary lecture
-
@AS: I am looking for particular results with relatively elementary statements. I am epsilon familiar with the work of Denef and Loeser but am concerned that it is too advanced/technical to draw out a result which is accessible to a general graduate student audience (perhaps somewhat slanted towards algebra / algebraic geometry / number theory). Can you extract a particular result from one of these papers which meets these requirements? – Pete L. Clark Feb 1 2010 at 9:07
Hm, you could mention the recent applications to the fundamental lemma (Cluckers-Loeser-Hales). The result might be understandable (clearly the proof won't). Or you could mention the rationality result for Poincaré series (see Denef's 1984 paper - this is not motivic integration, strictly speaking, but uses quantifier elimination) or Batyrev's result that birational Calabi-Yau varieties have equal Betti numbers, a big motivation for the development of the theory. – Wanderer Feb 1 2010 at 10:50
You will find a very clear and understandable overview in Loeser's slides for his ECM plenary lecture: dma.ens.fr/~loeser/ecm_fl_final.pdf – Wanderer Feb 1 2010 at 10:51
@AS: The rationality result for Poincare series and the birational invariance of Betti numbers of Calabi-Yaus are both within the scope of understanding of portions of my target audience. Thanks very much for suggesting them. – Pete L. Clark Feb 1 2010 at 14:38
You can prove Gromov's Theorem on groups of polynomial growth. See van den Dries and Wilkie, Gromov's theorem on groups of polynomial growth and elementary logic, J Algebra, 89 (1984), 391-396.
-
I guess this one is a bit too late for this summer, but the theory of o-minimal structures greatly simplified proofs in real algebraic and subanalytic geometry, by emphasizing concepts over nitty-gritty details (at the cost of a loss of constructivity, of course, but what can you expect?).
It is worth noting that these two structures are the ones for which the o-minimal property was known before the notion itself was formalized. Many more have been discovered since then, the most noteworthy being probably the real exponential field (Wilkie).
-
Alan Dow and others have explored the use of elementary submodels in Topology. See e.g. his introductory paper here. One application: the theorem by Arhangel'skii that a Hausdorff Lindelöf first countable space has size at most the continuum. There is a technical proof using transfinite recursion (the standard one), but also a slick one using elementary submodels (of sufficiently large countable models of ZFC).
-
This one http://arxiv.org/abs/0909.2190 is brand new. ".... For a simple linear group G, we show that a finite subset X with $|X X^{-1} X|/|X|$ bounded is close to a finite subgroup, or else to a subset of a proper algebraic subgroup of G...." I understand that people in additive combinatorics consider it useful.
-
1. For a particular result in analysis with a snappy non-standard proof, there's this proof of the Bieri-Groves Theorem on tropical amoebas. (Does this count as "analysis"? I'm not sure.)
2. There are some nice applications of model theory to differential Galois theory, partly due to the fact that the complete theory of differentially closed fields of characteristic zero happens to have extremely nice model theoretic properties (it's omega-stable, hence there are nice rank functions on definable sets and unique-up-to-isomorphism prime models over any set of parameters).
-
Here's a link to a paper on differential Galois theory (for some reason it wasn't letting me add the link to my answer): math.uic.edu/~marker/sev.pdf – John Goodrick Dec 24 2009 at 19:35
It uses the Fraïssé limit construction to produce a family of topological groups with interesting universal minimal flows.
-
Hilbert's Seventeenth Problem: Let $f \in \mathbb{R}(x_1, x_2, \ldots, x_n)$ be a rational function which is positive everywhere on $\mathbb{R}^n$. Then there exist rational functions $g_1$, $g_2$, ..., $g_N$ such that $f=\sum g_i^2$.
Proof sketch: Let $K$ be the field $\mathbb{R}(x_1, x_2, \ldots, x_n)$. If $f$ is not a sum of squares, then there is a total ordering of $K$ where $f$ is negative. Let $L$ be the real closure of $K$ with respect to that ordering. Then the rational function $f$, evaluated at the point $(x_1, x_2, \ldots, x_n) \in L^n$, is negative. But the theory of real closed fields is model complete, so $\mathbb{R} \preceq L$, and $f$ must be negative somewhere in $\mathbb{R}^n$, contrary to hypothesis.
See Jacobson Basic Algebra II for a detailed exposition.
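One point that is easy to gloss over: since $f$ is a rational function, "$f$ is negative at a point" should be read with denominators cleared. Writing $f = g/h$ with $g, h \in \mathbb{R}[x_1, \ldots, x_n]$, the relevant first-order statement (with the coefficients of $g$ and $h$ as parameters) is
$$\exists x_1 \cdots \exists x_n \quad g(x_1,\ldots,x_n)\, h(x_1,\ldots,x_n) < 0,$$
which holds in $L$ and hence, by $\mathbb{R} \preceq L$, in $\mathbb{R}$, giving a point where $h \neq 0$ and $f < 0$.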
-
David -- yes, it's a nice application. But please look again at the question... – Pete L. Clark Feb 18 2010 at 3:45
3
I searched the page for "seventeenth" and "sum of squares". Are you telling me I have to read it with my eyes as well? :) – David Speyer Feb 18 2010 at 22:40
There is a proof of a p-adic birational version of Grothendieck's Section Conjecture by Jochen Koenigsmann using Model theory: http://arxiv.org/abs/math.AG/0305226 I'm not entirely sure, but I think there is no known (published?) proof of this result not using model theory.
-
Two more examples of applications:
The paper by Hrushovski arXiv:math/0406514v1 gives an application of Model Theory to a conjecture by Jacobi on difference equations.
The paper by Cluckers, Hales and Loeser arXiv:0712.0708 gives an application of Model Theory to the Fundamental Lemma in Langlands Theory.
-
1. I think the Manin-Mumford Conjecture proofs first used some methods of Model theory, before it was finally proved by Raynaud. I am not able to dig up references, though.
2. Yuri Manin and people who worked in relation with him posted many papers on arXiv relating to the problem of getting Mordell-Weil type theorems in higher dimensions, i.e., on cubic surfaces etc. One of the references is http://arxiv.org/abs/1001.0223 ...
-
Try looking at a nice short introduction to "modern" "applied" model theory. Perhaps it could be used to prepare to teach a short course on "applied model theory" at UGA this summer.
-
Recently Pillay and his student Nagloo tackled transcendence questions about the famous Painlevé equations. They entirely use model theory:
http://arxiv.org/pdf/1112.2916.pdf
http://arxiv.org/pdf/1209.1562.pdf
-
http://mathoverflow.net/questions/38351/are-vertex-and-edge-transitive-graphs-determined-by-their-spectrum/38431 | ## Are vertex and edge-transitive graphs determined by their spectrum?
A graph is called vertex and edge transitive if the automorphism group is transitive on both vertices and edges.
The spectrum of a graph is the collection (with multiplicities) of eigenvalues of the adjacency matrix.
Supposedly, it is conjectured that almost all graphs have the property that they are the unique graph with their spectrum (at least, according to MathWorld).
If $\Gamma_1,\Gamma_2$ are two vertex and edge transitive graphs, with the same valence, which are isospectral (have the same spectrum) then does it follow that $\Gamma_1\cong \Gamma_2$?
-
Half transitive graphs also have the property that they are not arc transitive (=symmetric). If you are only interested in vertex+edge transitive graphs then there are small counterexamples of such cospectral graphs. There are conjectures that all regular and cospectral graphs have a very particular form, but I think your question is still open. – Gjergji Zaimi Sep 11 2010 at 14:58
Ah, I'm not a graph theorist in any way, shape or form, and was unaware. I don't want to rule out arc-transitive graphs, then. If they are cospectral, regular of the same degree, and both are vertex and edge symmetric, is anything known? – Charles Siegel Sep 11 2010 at 15:06
The rook graph on a 4x4 board and mathworld.wolfram.com/ShrikhandeGraph.html form a cospectral pair, but they are both arc-transitive. There is some work on cospectral regular graphs, but I don't think there are any deep results on symmetric ones. By the way, does your question come from some finitary statement about cospectral manifolds? – Gjergji Zaimi Sep 11 2010 at 15:18
It doesn't, actually. It comes from algebraic geometry and studying certain incidence relations, and trying to show that there is a unique incidence satisfying some properties, to construct an action of its automorphism group on a certain moduli space. – Charles Siegel Sep 11 2010 at 17:32
## 2 Answers
Actually a graph is called half-transitive if it is vertex and edge transitive, but not arc-transitive. I am going to assume here that the term means what you chose it to mean.
Van Dam and Koolen construct distance-regular graphs with the same parameters (and hence the same spectrum) as the Grassmann graphs. They show that their graphs are not vertex transitive. The Grassmann graphs are distance transitive, and hence both arc and edge transitive. (Remark: the vertices of the Grassmann graph $G_q(v,k)$ are the $k$-dimensional subspaces of a vector space of dimension $v$ over the field of order $q$, two subspaces are adjacent if their intersection has dimension $k-1$.) If you google on Van Dam and Koolen, you'll easily find their paper.
For a second class of examples, there is a family of arc-transitive self-complementary graphs due to Peisert, which are strongly regular with the same parameters as the Paley graphs.
I do not know of examples of cospectral graphs which are half-transitive in the usual sense of the term. I am confident that there will be such things, but none may be known.
-
Ah, until Gjergji pointed it out, I didn't know I was misusing the term half-transitive (I don't know the standard terminology here). Are Paley graphs vertex and edge transitive? And if not, is spectrum a complete invariant among vertex and edge transitive graphs? I'm rewriting the question, to make it more precise and correct language – Charles Siegel Sep 11 2010 at 17:35
The Paley graphs are vertex and edge transitive, so graphs that are vertex and edge transitive are not characterized by their spectrum. – Chris Godsil Sep 13 2010 at 4:06
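To make the cospectral-but-nonisomorphic phenomenon concrete, here is a small numerical check of the pair mentioned in the comments on the question, the $4\times 4$ rook graph and the Shrikhande graph; both are built below as Cayley graphs on $\mathbb{Z}_4\times\mathbb{Z}_4$, which is one standard construction (the code is only meant as an illustration):

```python
import numpy as np
from itertools import product

def cayley_adjacency(connection):
    """Adjacency matrix of the Cayley graph of Z_4 x Z_4 with the given connection set."""
    vertices = list(product(range(4), repeat=2))
    index = {v: i for i, v in enumerate(vertices)}
    A = np.zeros((16, 16))
    for (a, b) in vertices:
        for (s, t) in connection:
            A[index[(a, b)], index[((a + s) % 4, (b + t) % 4)]] = 1
    return A

# 4x4 rook graph: two cells are adjacent iff they share a row or a column
rook = [(i, 0) for i in range(1, 4)] + [(0, j) for j in range(1, 4)]
# Shrikhande graph: connection set {+-(1,0), +-(0,1), +-(1,1)}
shrikhande = [(1, 0), (3, 0), (0, 1), (0, 3), (1, 1), (3, 3)]

for name, conn in [("rook", rook), ("Shrikhande", shrikhande)]:
    spectrum = np.sort(np.linalg.eigvalsh(cayley_adjacency(conn)))
    print(name, np.round(spectrum, 6))
# Both should print the spectrum 6, 2 (six times), -2 (nine times); the graphs are
# nevertheless non-isomorphic: in the rook graph every vertex neighbourhood is two
# disjoint triangles, while in the Shrikhande graph it is a 6-cycle.
```

(As Gjergji notes in the comments above, both graphs are arc-transitive, so this pair already gives a negative answer to the question as posed.)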
As Chris said, the answer is most probably no, and most likely there are known such examples. There are many examples and constructions of non-isomorphic graphs with the same spectrum and some of these examples are Cayley graphs and other very symmetric graphs. Cayley graphs are always vertex transitive and quite often, for a suitable choice of generators, also edge transitive. Some of these examples are based on a famous paper of Sunada. Sunada's method was originally for creating isospectral manifolds but it can be applied (and is even easier) to create isospectral graphs. The following paper by Bob Brooks "Isospectral graphs and isospectral surfaces" is a good starting point.
-
http://mathhelpforum.com/calculus/55302-function-integrable.html | # Thread:
1. ## Is this function integrable?
If f is a bounded function from [a,b] into the real numbers that is
integrable on [a,b], then for any c such that a < c < b, is f
integrable on [a,c] and is f integrable on [c,b]?
I want to say no, since if I pick $f(x)= \frac {1}{x}$, then f is integrable from [-2,2] but not if you move the boundary to 0.
Is that right? Thanks.
2. I have also observed the same question in my book.
3. So I guess this theorem is true then?
Here is my proof so far:
Let $\epsilon > 0$ be given. Since f is integrable, then there exists a partition $P= \{ x_0 = a, x_1, x_2 , . . . , x_{N-1}, x_N = b \}$ on [a,b] such that $U(f,P)-L(f,P)< \epsilon$
Define $M_j=\sup \{ f(x) : x \in [x_j,x_{j+1}] \}$ and $m_j=\inf \{ f(x) : x \in [x_j,x_{j+1}] \}$, so we have $U(f,P)-L(f,P)= \sum ^{N-1}_{j=0}(M_j)(x_{j+1}-x_{j}) - \sum ^{N-1}_{j=0}(m_j)(x_{j+1}-x_{j}) =$ $\sum ^{N-1}_{j=0}(M_j-m_j)(x_{j+1}-x_{j})$
Now suppose $c=x_k$ for some $k$ with $0<k<N$ (refining $P$ to include $c$ if necessary; passing to a refinement does not increase $U(f,P)-L(f,P)$), then $U(f,P)-L(f,P)= \sum ^{N-1}_{j=0}(M_j-m_j)(x_{j+1}-x_{j}) =$
$\sum ^{k-1}_{j=0}(M_j-m_j)(x_{j+1}-x_{j}) + \sum ^{N-1}_{j=k}(M_j-m_j)(x_{j+1}-x_{j}) < \epsilon$
Define $P_1 = \{ x_0, x_1 , . . . , x_k = c \}$ and $P_2 = \{ c = x_k, x_{k+1}, . . . , x_N \}$
But we know that
$\sum ^{k-1}_{j=0}(M_j-m_j)(x_{j+1}-x_{j}) + \sum ^{N-1}_{j=k}(M_j-m_j)(x_{j+1}-x_{j}) =$
$\sum ^{k-1}_{j=0}(M_j)(x_{j+1}-x_{j}) - \sum ^{k-1}_{j=0}(m_j)(x_{j+1}-x_{j}) +$ $\sum ^{N-1}_{j=k}(M_j)(x_{j+1}-x_{j}) - \sum ^{N-1}_{j=k}(m_j)(x_{j+1}-x_{j}) =$
$U(P_1,f)-L(P_1,f)+U(P_2,f)-L(P_2,f) < \epsilon$
I need to show that $U(P_1,f)-L(P_1,f) < \epsilon$ and $U(P_2,f)-L(P_2,f) < \epsilon$
Any hints? Thanks.
4. Note that the theorem says that the function is bounded.
I don’t know what definition of integrable you are using.
However, if you are using some variation of subdivisions we can always include c as an endpoint in a refinement of the subdivision.
5. Originally Posted by tttcomrader
If f is a bounded function from [a,b] into the real numbers that is
integrable on [a,b], then for any c such that a < c < b, is f
integrable on [a,c] and is f integrable on [c,b]?
If $f$ is integrable on $[a,b]$ then $f$ is integrable on $[c,d]\subset [a,b]$.
This is a known result.
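To spell out the step the original poster was asking about (a sketch, combining the two replies above): first refine $P$ to $P^* = P \cup \{ c \}$. A refinement never increases the difference between upper and lower sums, so $U(f,P^*)-L(f,P^*) < \epsilon$ still holds and now $c$ is a partition point. Splitting $P^*$ at $c$ into $P_1$ (the points up to $c$) and $P_2$ (the points from $c$ on), every summand $(M_j-m_j)(x_{j+1}-x_j)$ is nonnegative, hence
$U(f,P_1)-L(f,P_1) \le U(f,P^*)-L(f,P^*) < \epsilon$ and $U(f,P_2)-L(f,P_2) \le U(f,P^*)-L(f,P^*) < \epsilon$,
so $f$ satisfies the Riemann criterion on both $[a,c]$ and $[c,b]$.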
http://math.stackexchange.com/questions/213530/evaluating-a-double-integral-over-a-possible-unbounded-region | # Evaluating a double integral over a possible unbounded region
Evaluate the (double) integral of $\frac{xy^2}{(4x^2 + y^2)^2}$ over the finite region enclosed by $y= x^2$ and $y = 2x$. My question is: I have tried this by the method of iterated integrals but then I noticed that at $(0,0)$ the function is undefined.
How would you go about solving this? Also is the region here unbounded?
Many thanks
-
2
– joriki Oct 14 '12 at 9:07
The title talks about a double integral, but the question contains no integral. Perhaps you mean "integrate" instead of "evaluate"? – joriki Oct 14 '12 at 9:11
Yes, sorry, evaluate the integral of the function over the given region. – CAF Oct 14 '12 at 9:12
The function isn't discontinuous at the origin; it's undefined there. Please edit any clarifications into the question itself; people shouldn't have to delve into the comments to understand the question. There's an edit button underneath the question. – joriki Oct 14 '12 at 9:14
You edited the question, but now it still doesn't mention any integral? – joriki Oct 14 '12 at 9:26
## 1 Answer
On the first question: The integrand grows like $1/r$ at the origin, but the width of your region also decreases as $r$, so you should be OK. I'd integrate over $x$ first, since the numerator contains the inner derivative of the denominator.
On the second question: The problem explicitly says to integrate over the finite region enclosed by the curves. "Bounded" is just the more formal term for what they call "finite", so no, the region is not unbounded.
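If you want to sanity-check the number at the end, here is a quick symbolic computation along the lines suggested above; the bounds $y/2 \le x \le \sqrt{y}$, $0 \le y \le 4$ are just one way of describing the region, and the snippet is only a scratch check, not part of the argument:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = x*y**2 / (4*x**2 + y**2)**2

# integrate over x first: the numerator supplies the inner derivative of 4x^2 + y^2
inner = sp.integrate(f, (x, y/2, sp.sqrt(y)))
value = sp.integrate(sp.simplify(inner), (y, 0, 4))
print(sp.simplify(value))  # should give log(2)/2 - 1/4, roughly 0.097 > 0
```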
-
Exactly what I did. I integrated wrt x and then wrt y and got a positive answer, which is as I expected since the function is positive over the region. So the discontinuity at x=0 is ok, for the reasons you described? (I also recall from Fubini's theorem that if there is a small number of discontinuities then iterated integrals are still valid) – CAF Oct 14 '12 at 9:18
@CAF: As I wrote under the question, this is not a discontinuity. You can interpret this integral as the limit as you extend the region ever closer to the origin. The integrals over all the finite regions are all unproblematic, so the question is just whether the results converge as the region approaches the origin, which they do. – joriki Oct 14 '12 at 9:22
Can I ask what you mean by 'the results converge as the region approaches the origin'. I think the limit is non existent at x=0, does this not imply a discontinuity? – CAF Oct 14 '12 at 9:29
The limit of what is not existent at $x=0$? Imply a discontinuity in what? Please take more care to use language precisely; it makes communication a lot more efficient. – joriki Oct 14 '12 at 9:30
Apologies. The limit of the function as x tends to 0 does not exist and so would this not imply that the graph of the function has a discontinuity there? – CAF Oct 14 '12 at 9:33
http://www.physicsforums.com/showthread.php?s=218f3d51557334fa02459601bd664159&p=4289765 | Physics Forums
Recognitions:
Science Advisor
## Papers by D. Carfi: extended spectral theory of distributions.
Over in the Quantum Physics forums, we occasionally have threads involving rigged Hilbert space -- a.k.a. Gel'fand triple: ##\Omega \subset H \subset \Omega'## where ##H## is a Hilbert space, ##\Omega## a dense subspace thereof such that certain unbounded continuous-spectrum operators are well-defined everywhere thereon, and ##\Omega'## is its topological dual. A recent thread of this kind is: http://www.physicsforums.com/showthread.php?t=668013
With certain extensions of the meaning of "self-adjoint operator" to the space ##\Omega'##, some treatments of QM rely on the so-called nuclear spectral theorem (cf. Gelfand & Vilenkin vol 4) which basically assures us that the generalized eigenvectors of a self-adjoint operator ##A## in ##\Omega'## span ##\Omega##, and that operators of the form ##f(A)##, for analytic functions ##f##, make sense. This is basically a generalization of the usual spectral theorem for unbounded operators on Hilbert space to the distributional context of rigged Hilbert space.
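(Schematically, and sweeping the functional-analytic fine print under the rug: for such an ##A## with generalized eigenvectors ##|\lambda\rangle \in \Omega'## one gets a spectral measure ##\mu## with
$$\langle\phi|\psi\rangle = \int d\mu(\lambda)\, \langle\phi|\lambda\rangle \langle\lambda|\psi\rangle , \qquad \langle\phi|A\psi\rangle = \int d\mu(\lambda)\, \lambda\, \langle\phi|\lambda\rangle \langle\lambda|\psi\rangle ,$$
valid for all ##\phi,\psi \in \Omega##; as stated, nothing is asserted for general elements of ##\Omega'##.)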
I've always felt it unsatisfactory that arbitrary elements of the dual space ##\Omega'## are not also covered by such a theorem. Recently, David Carfi put a series of papers on the arXiv claiming to do just this (and he confirmed to me in brief private correspondence that this is indeed his intent). In reverse time order the papers are:
http://arxiv.org/abs/1104.4660
http://arxiv.org/abs/1104.4651
http://arxiv.org/abs/1104.3908
http://arxiv.org/abs/1104.3380
http://arxiv.org/abs/1104.3324
http://arxiv.org/abs/1104.3647
Unfortunately, my abilities in Functional Analysis, etc, are inadequate to form a reliable opinion about these papers. They have not been published in peer-reviewed journals (afaict), but David Carfi seems to have published elsewhere in financial and economics mathematics.
So... I'm hoping that the FA experts here can spare a little time to look through Carfi's papers and figure out whether he does indeed achieve what I described above, i.e., establish a sensible spectral expansion for all elements of the distribution space ##\Omega'##, and not merely the well-behaved elements from ##\Omega##.
Blog Entries: 8
Recognitions:
Gold Member
Science Advisor
Staff Emeritus

I got to admit that these papers look very interesting. I think I'm going to waste the coming days trying to work through them.

If you take a look at the last paper: http://arxiv.org/abs/1104.3647
The Theorem on page 3 really does seem something you want. But that's on a very first glance.

I'll read through the papers and post my impressions here. Maybe it would be nice if you could do the same!
Recognitions:
Science Advisor
Quote by micromass I'll read through the papers and post my impressions here.
Thank you! Of course, I was kinda hoping all along that you would!
Maybe it would be nice if you could do the same!
I already looked through them when they first appeared on the arXiv -- that was why I emailed David to ask him a couple of questions. But... I had to admit defeat, due to my inadequate FA abilities. (You were not active in the QM forums back then.)
I will, of course, now try again...
Blog Entries: 8
Recognitions:
Gold Member
Science Advisor
Staff Emeritus
## Papers by D. Carfi: extended spectral theory of distributions.
So, I've read the first paper: http://arxiv.org/abs/1104.4651
The paper introduces families of distributions. So let $\mathcal{S}_n^\prime$ be the dual of the space of Schwartz functions (thus the tempered distributions). We are interested in functions $v:I\rightarrow \mathcal{S}_n^\prime$.
This notion can be used to generalize quite some good things. Most important, there is a bijective correspondence between such functions and operators between Schwartz classes. This is easy to see in finite dimensions. Let $A$ be a linear map from $\mathbb{R}^n$ to $\mathbb{R}^m$. Then we can of course express $A$ as a matrix. Now we can look at
$$\{1,...,m\}\rightarrow (\mathbb{R}^m)^*: i\rightarrow \text{Row}_i$$
So we have a family that sends a number i to the i'th row. Of course, the rows of a matrix are easily seen to be functionals.
We want to generalize this notion to infinite-dimensional spaces. So we want to talk about rows of infinitary matrices. But for this, we need to find some kind of "basis" for the Schwartz class. Now, it turns out that such a "basis" for $\mathcal{S}_n$ is given by $\delta_x,~x\in \mathbb{R}^n$, the Dirac delta distributions. Using this, we can now construct an isomorphism
$$\mathcal{B}(\mathcal{S}_n,\mathcal{S}_m) \leftrightarrow \mathcal{S}(\mathbb{R}^n,\mathcal{S}_m^\prime)$$
The set on the left side consists of the bounded functions between the n-dimensional and the m-dimensional Schwartz classes. The set on the right side consists of the families $\mathbb{R}^n\rightarrow \mathcal{S}_m^\prime$ which satisfy some continuity property.
Then there is a notion of "summable". I think it is based on the following. Consider a fixed real sequence $v=(x_n)_n$. We can consider this to be "summable" if the series $\sum_n x_n$ exists.
Now, let's define some functions. Let $A\subseteq \mathbb{N}$ be finite. Define
$$\hat{v}(A) = (y_n)_n, \qquad \text{where}~y_n=x_n~\text{if}~n\in A~\text{and}~y_n=0~\text{otherwise}.$$
Then define
$$a(\hat{v}(A)) = \sum y_n = \sum_{n\in A} x_n$$
This is a finite sum. Now, we can define $(x_n)_n$ to be summable if the function
$$u(A) = a(\hat{v}(A))$$
is a convergent net.
Now, in the continuous case, we have a family $v$ of Schwartz functions instead of a sequence. But the definition remains quite the same thing.
The paper then gives some criteria of summability and some equivalence notions. For example, he gives criteria with which we can detect summability of $v$ for more "algebraic" properties such as the existence of a transpose.
I think most of these notions can be generalized a great deal. The paper only shows things for the Schwartz class, but I'm sure they hold more generally too. Maybe we can show versions of arbitrary nuclear spaces? I think that his other papers do things more generally though...
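Just to have something concrete to play with, the finite-dimensional version of the "operators are families of row functionals" dictionary can be checked in a few lines (a toy example of my own, nothing from the papers):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
A = rng.standard_normal((m, n))   # an arbitrary linear map R^n -> R^m
x = rng.standard_normal(n)

# the "family" sends the index i to the i-th row, viewed as a functional on R^n;
# applying A to x is the same as pairing x with each member of the family
rows = [A[i] for i in range(m)]
print(np.allclose(A @ x, np.array([r @ x for r in rows])))   # True
```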
Recognitions:
Science Advisor
Quote by micromass If you take a look at the last paper: http://arxiv.org/abs/1104.3647 The Theorem on page 3 really does seem something you want. But that's on a very first glance.
I've just read through that paper again. But... either I don't properly understand what he's doing, or something important is missing.
The statement of the theorem of ##^S##spectral expansion of an operator ##A\in L(S_n')## seems like it only covers the tempered distributions ##u## in the ##^S##linear hull ##^S##span##(v)## of the eigenfamily ##v##.
But this seems a bit trivial to me. Ordinary spectral theorems show that any vector can be expanded in terms of the eigenvectors of a self-adjoint operator, and that (almost) any other operator can be expanded in terms of those eigenvectors. This seems rather more general (and more difficult) than what Carfi is doing in arXiv:1104.3647.
So, I've read the first paper: http://arxiv.org/abs/1104.4651 [...explanation...]
However, I still struggle to understand the connection (if any) between the way he "expands" distributions in terms of Dirac deltas, and the spectrum of a self-adjoint operator. He talks about transposability, but that's just a way of extending the operator from ##S## to ##S'##, isn't it? To get a genuine spectral theorem more powerful than the usual nuclear spectral theorem, one needs more than that, or so I would have thought.
We need to express the expansion of any tempered distribution, or operator on ##S'##, in terms of the generalized eigenvectors of any self-adjoint operator (or some generalization or modification thereof).
Blog Entries: 8
Recognitions:
Gold Member
Science Advisor
Staff Emeritus
Quote by strangerep I've just read through that paper again. But... either I don't properly understand what he's doing, or something important is missing. The statement of the theorem of ##^S##spectral expansion of an operator ##A\in L(S_n')## seems like it only covers the tempered distributions ##u## in the ##^S##linear hull ##^S##span##(v)## of the eigenfamily ##v##. But this seems a bit trivial to me. Ordinary spectral theorems show that any vector can be expanded in terms of the eigenvectors of a self-adjoint operator, and that (almost) any other operator can be expanded in terms of those eigenvectors. This seems rather more general (and more difficult) than what Carfi is doing in arXiv:1104.3647.
Yes. But he never demands self-adjoint anywhere. He has given an integral representation of arbitrary operators. Somewhere, there should be a theorem that the $^S$-span of a self-adjoint operator is the entire space.
Recognitions:
Science Advisor
Quote by micromass [...] he never demands self-adjoint anywhere. He has given an integral representation of arbitrary operators. Somewhere, there should be a theorem that the $^S$-span of a self-adjoint operator is the entire space.
I've read through the other papers again, but I didn't notice anything like that.
No doubt you read the papers with more insight than me, but if you also don't find something like that soon, then we should probably just forget about it.
Blog Entries: 9
Recognitions:
Homework Help
Science Advisor

Just a personal note: I think these articles are highly irrelevant to a (mathematical) physicist and it only remains to assert their value for the field of pure mathematics.
http://mathschallenge.net/view/time_loses_integrity | # mathschallenge.net
## Time Loses Integrity
#### Problem
The velocity of a body increases with constant acceleration. Its motion is recorded between two points, $A$ and $B$, that are one metre apart, with $M$ being the midpoint of $AB$.
The body takes $p$ seconds to travel from $A$ to $M$ and $q$ seconds to travel from $M$ to $B$.
Given that the change in speed from $A$ to $B$, measured in metres per second, is an integer, prove that $q$ cannot be an integer.
Problem ID: 272 (16 Feb 2006) Difficulty: 4 Star
http://physics.stackexchange.com/questions/6541/gravity-theories-with-the-equivalence-principle-but-different-from-gr/6543 | # Gravity theories with the equivalence principle but different from GR
Einstein's general relativity assumes the equivalence of acceleration and gravitation. Is there a general class of gravity theories that have this property but disagree with general relativity? Will such theories automatically satisfy any of the tests of general relativity such as the precession of mercury or the bending of light?
-
@Daniel, we are not here to judge answer value, especially in case of such questions. Downvote, not flag. – mbq♦ Mar 10 '11 at 15:47
## 8 Answers
Dear Carl, my overall moral answer is the opposite one to the first two answers, so let me write a separate answer. My overall message is that with the right minimal extra assumptions, a correct theory respecting the equivalence principle has to agree with GR at cosmological distances.
There are various modifications or competitors of GR, see e.g. this list:
http://en.wikipedia.org/wiki/Classical_theories_of_gravitation#Articles_on_specific_classical_field_theories_of_gravitation
First of all, most of the theories tend to be given by an action. This is needed to preserve the Noether conservation laws for symmetries, and so on. No interesting (viable enough and new enough) non-action theory is known, as far as I know.
The equivalence principle requires that local physics must know how it's mapped to the flat Minkowski space - so there must exist the metric tensor at each point. However, there may also exist other fields or degrees of freedom.
One class you find in the list above is Brans-Dicke theory. It's an example of a broader class of theories that contain new scalar or tensor fields, besides the metric tensor. Torsion is a popular, frequently discussed addition by the people working on "competition to GR".
If such fields are massless, they may modify the long-distance physics. But they effectively destroy the equivalence principle, too. If the extra fields are massless (like moduli, massless scalar fields), their values should be viewed as a part of the gravitational field but these values influence local physics. That's really forbidden.
So when understood sufficiently strictly, only the metric tensor should form the spectrum of massless (or light) fields.
Then, you may have many theories which are GR plus additional fields. The additional fields are then treated as "matter fields". They can be anything. An interesting subclass consists of Kaluza-Klein theories. In them, many new massive fields may be understood as coming from a higher-dimensional general relativity.
With the equivalence principle fully respected, we really deal with theories defined by an action where the metric tensor is the only massless "gravitational" field. The action for the other fields - matter - is a separate question and there are of course many choices. The action for the metric tensor must be diffeomorphism invariant.
One may prove that such an action, to have the invariance, must be a function of the Riemann tensor and its covariant derivatives, with properly contracted indices. For example, terms like $$\nabla_\alpha R_{\beta\gamma\delta\epsilon} \nabla^\alpha R^{\beta\gamma\delta\epsilon}$$ are tolerable. However, all such terms contain either extra covariant derivatives or extra powers of the Riemann tensor, relative to the Ricci scalar $R$. For dimensional reasons, all such extra terms have to be multiplied by a positive power of a distance $L$ and it seems likely that all such distances $L$ that could occur are microscopic or ultramicroscopic. So all such terms become negligible at cosmological (or astrophysical) distances. In this long-distance limit, the gravitational theory has to be given by the Einstein-Hilbert action, i.e., Einstein's equations, and the predictions for Mercury and bending of light are inevitable.
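Schematically, and suppressing the precise numerical coefficients, the kind of action this argument constrains is
$$S = \frac{1}{16\pi G}\int d^4x\, \sqrt{-g}\,\Big( R + L^2\, R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma} + L^4\, \nabla_\alpha R_{\beta\gamma\delta\epsilon}\,\nabla^\alpha R^{\beta\gamma\delta\epsilon} + \dots \Big),$$
where, on dimensional grounds, every correction relative to the Einstein-Hilbert term $R$ carries positive powers of some length $L$; for microscopic $L$ these corrections are utterly negligible at Solar-system or cosmological curvatures.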
It wasn't an accident that Einstein ended up with the right theory. He had to - and many of his "heuristic" arguments (especially "minimality") may be justified by the more valid RG-like $L$-argument above.
-
2
@Luboš Motl - Your answer is very interesting although somehow not correct. Do you say that GR theory agrees with data on cosmological distances? That is why 90% of energy and mass has to be postulated as dark and possibly not observed up to now? ;-) I am not against GR of course, but there is no cosmological data which agree with GR, but rather local data from the Solar system and local observations performed on Earth (clocks in a gravitational field). Even for galaxies GR cannot give the correct velocity distribution. In cosmological scales probably only deflection of light from distant stars agree wi – kakaz Mar 9 '11 at 9:29
@kakaz Comments are restricted to about 600 chars... And well, I hope you'll get your comment privilege soon ;-) – mbq♦ Mar 9 '11 at 13:32
How is this opposite to Moshe's answer? Every theory you cite is a metric theory of gravity. If you require other things beyond the EEP, yes you are going to need the theory to come form an action principle and other things, but if you're just looking to enforce the EEP, then all you need is a metric theory of gravity. – Jerry Schirmer Mar 9 '11 at 17:00
@mbq- thank You;-) here I am;-) – kakaz Mar 9 '11 at 20:38
Dear Moshe, I agree that you may write many diff-invariant theories etc. but the sense of the original question could have been that "GR is just one among many theories respecting the EP and there's no good reason to be sure that it's right" which is a vague but wrong assertion, or at least, it's a conclusion one may naturally make out of your strictly correct but oversimplified answer. ;-) – Luboš Motl Mar 10 '11 at 5:31
The principle of equivalence is built on the observation that motion of matter due to gravity is independent of any of its specific characteristics, it is universal. Contrast that with the electric force, where different particles with different charges will move along different trajectories. Gravity is different because of the equality of the gravitational and inertial masses.
Since gravity affects everything in a universal way, you can declare it to be a property of spacetime instead of being a force. In Einstein's gravity the effect of gravity is summarized in the geometry or more specifically the metric of spacetime. A priori you could imagine that gravity can be manifested in some other universal property of spacetime, but metric is the only choice I know of.
But, once you choose the metric to encode the force on test particles due to gravity, I think the principle of equivalence doesn't give any more information. For example, it does not tell you how the metric responds to matter sources - how much spacetime curvature is created by some source of energy-momentum. There could be many answers to this question, all of which would satisfy the principle of equivalence, most of which will differ (potentially dramatically) from GR. For example, if you replace the Einstein-Hilbert action by some more general function of curvature invariants, you'd get a modification of GR which is generally covariant and satisfies the equivalence principle.
-
This is exactly right. Any metric theory of gravity satisfies the equivalence principle, but GR is the only metric theory satisfying Einstein's equation--there are a large range of possible metric theories, most of which have been ruled out or severely constrained by experiment/observation. – Jerry Schirmer Mar 9 '11 at 4:45
The real question here is the meaning of the Equivalence Principle: I like to think of it in terms of the Frobenius Theorem.
In this sense, you could modify the Action by powers of the scalar curvature (Ricci scalar) and get equations of motion different from those of GR — but the Equivalence Principle would still be satisfied (to some approximation). See, for example, $f(R)$ Gravity.
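For concreteness, a minimal sketch of such a modification (the standard metric-formalism $f(R)$ action and its field equations, with the function $f$ left unspecified): $$S = \frac{1}{16\pi G}\int d^4x\,\sqrt{-g}\,f(R) + S_{\rm matter}, \qquad f'(R)\,R_{\mu\nu} - \tfrac{1}{2}f(R)\,g_{\mu\nu} + \left(g_{\mu\nu}\Box - \nabla_\mu \nabla_\nu\right)f'(R) = 8\pi G\,T_{\mu\nu}.$$ Choosing $f(R)=R$ gives back Einstein's equations, while for any $f$ test particles still move on geodesics of the single metric $g_{\mu\nu}$, which is why the Equivalence Principle survives the modification.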
-
I would like to pick up one class of theories touched upon by Luboš, namely the Torsion extensions of General Relativity. The classic one being the Einstein-Cartan Theory.
This theory introduces some form of spin coupling in addition to (metric) gravity. The Torsion tensor is zero in GR and this theory maps Torsion onto a spin connection (both antisymmetric). All this will also cause the Stress-Energy Tensor to not be symmetric (unlike in GR). Note that these spin components are somewhat classical concepts.
So strictly this theory does not obey the Equivalence Principle (I think) as "spin" components could result in a different measure in a gravitational field. However the Wikipedia article states that any deviation is at $10^{-15}$ and so undetectable (at least on Earth). Some quoted arguments point out that the theory is valid around rotating Black Holes, so effects might be noticeable there. Otherwise the Einstein-Cartan theory is not ruled out by observations.
Wikipedia also tells us that these ideas formed part of the basis of Loop Quantum Gravity (Ashtekar Variables); and introducing Torsion does provide some element of a mathematical completion to GR. (I believe that Einstein was not aware of the Torsion concept in 1918 and so it is zero in GR simply because he didn't know of it, rather than that it was explicitly considered and rejected. Once Einstein learned of Torsion he became excited by it and even wrote popular news articles explaining it.)
-
Don't those theories correspond to gravity coupled to specific form of matter (anti-symmetric tensors)? The separation is meaningful because the metric and the a.s tensor transform in different representations of the Lorentz group and can be discussed independently. Of course, if you artificially bundle other forces into gravity the hybrid will no longer obey the equivalence principle. – user566 Mar 9 '11 at 21:00
– Roy Simpson Mar 9 '11 at 21:22
Steve Weinberg wrote about this on Physics Today at some point. The separation between geometry and matter is a bit arbitrary, but it is true that the a.s. tensor transforms independently under local Lorentz transformations. Therefore all its couplings are independent of the gravity couplings, generally things don't couple to it universally, and it makes more sense to think about it as a matter field rather than part of spacetime geometry. Note that other matter fields (e.g. gauge fields) are also naturally geometrical objects, but are not associated with the geometry of spacetime. – user566 Mar 9 '11 at 21:50
@Moshe R. : Just to add that with Torsion the Connection now becomes the primary geometric object from which Torsion and metric might be derivable. There is some non-uniqueness, but it could make the theory look more like a Gauge Theory (of Space-Time). The early proponents might not have cared for this, but Weinberg might have identified this too. So I will have to search for the article to see exactly what is said, but it is getting late in my timezone. – Roy Simpson Mar 9 '11 at 23:14
While it is a trivial example, any set of equations in the form of the GR equations with different constants would satisfy the equivalence principle while not being confirmed experimentally. For example, if one took G to be 42 (in conventional units) and the speed of light to be 200 meters per hour, you would have a logically consistent set of equations that met the equivalence principle but did not reproduce GR.
A less trivial example is that the value of the cosmological constant is not constrained tightly by anything else in GR. Other than Hubble constant scale astronomy observations, changing it really doesn't change anything else in GR.
Bekenstein's TeVeS theory is an example of a non-GR set of equations that satisfy the equivalence principle while producing effects similar to those observed and attributed to dark matter in the weak field, and that does reproduce the classic GR experimental tests. (To be clear, I am not stating that TeVeS is the correct theory, particularly after the Bullet Cluster, merely that it has the mathematical character we are discussing.) More generally, Bekenstein's effort suggests that it is possible as a general matter to craft equations that reduce to GR in most cases and meet the equivalence principle, but differ from it in a systemic way in either the strong field case, or the weak field case, or both.
Similarly, there is no obvious reason in my mind that one couldn't have a set of equivalence principle compliant, non-GR equations in which gravity was also, for example, a function of overall electro-weak charge, which is neutral in all known cases where GR is observed in astronomy.
-
great answer! +1 for mentioning TeVeS not necessarily as an actual alternative, but as a mechanism to produce families of alternatives – lurscher Mar 16 '11 at 20:18
Something along this line was what I was thinking; Take GR, add some sort of dark matter (which depends only on the distribution of regular energy / matter) and what you end up with is a new theory of gravity, different from GR, but with the equivalence principle. – Carl Brannen Mar 17 '11 at 0:26
There has long existed the Relativistic Theory of Gravity (RTG) by A. A. Logunov and co-authors, where gravity is a physical field in the Minkowski space-time acting on matter as an effective metric of an effective Riemann space-time. It carries energy-momentum and is different from pure geometry. In this theory there are additive conservation laws. It describes all known experimental data but it does not predict black holes (there are no singularities in it). Instead there are heavy objects of finite size. Some of Logunov's papers are now available on arXiv.
-
– Carl Brannen Mar 9 '11 at 23:39
Polarizable vacuum representation of general relativity by Puthoff, 1999
Abstract: Standard pedagogy treats topics in general relativity (GR) in terms of tensor formulations in curved space-time. Although mathematically straightforward, the curved space-time approach can seem abstruse to beginning students due to the degree of mathematical sophistication required. As a heuristic tool to provide insight into what is meant by a curved metric, we present a polarizable-vacuum (PV) representation of GR derived from a model by Dicke and related to the "THεµ" formalism used in comparative studies of gravitational theories
page 11: Comparison of the Metric Tensors in the GR and PV Approaches
Having shown by specific calculation that the PV approach to the three classical tests of GR reproduces the traditional GR results
(the gravitational redshift, the bending of light and the advance of the perihelion of Mercury)
-
I don't know that there's any general class of theories that compete with GR. If a competitor doesn't agree with experimental tests to date it wouldn't be viable.
The vast majority of tests of GR have been tests of the Schwarzschild metric. It's possible to tweak that metric so that it's compatible with quantum mechanics (unlike GR) yet is still confirmed by every experimental test of the Schwarzschild metric to date, including the anomalous perihelion precession of Mercury. Also the black hole information loss paradox vanishes. Tweaking the metric, as long as it's still a smooth curvature, doesn't necessarily deny compatibility with the equivalence principle.
-
http://unapologetic.wordpress.com/2011/03/24/germs-of-functions/ | # The Unapologetic Mathematician
## Germs of Functions
Let’s take the structure sheaves we defined last time and consider the stalks at a point $p\in M$. It turns out that since we’re working with sheaves of $\mathbb{R}$-algebras, we can sort of shortcut the messy limit process.
As before, given some open neighborhood $U$ of $p$, we let $\mathcal{O}_M(U)$ be the algebra of smooth functions — as smooth as $M$ is itself — on $U$. Now we define $\mathcal{Z}_{M,p}(U)$ to be the ideal of those functions which vanish on some neighborhood of $p$. Then we define the quotient
$\displaystyle\mathcal{O}_{M,p}(U)=\mathcal{O}_M(U)/\mathcal{Z}_{M,p}(U)$
Notice that we have effectively pushed our limiting process into the definition of the ideal $\mathcal{Z}_{M,p}(U)$, where for each open neighborhood $V\subseteq U$ of $p$ we get an ideal of functions vanishing on $V$. The ideal we care about is the union over all such neighborhoods $V$, and the process of taking this union is effectively a limit.
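Spelled out in the standard notation (this is just a restatement of the paragraph above, not new material):

$\displaystyle\mathcal{Z}_{M,p}(U)=\bigcup_{p\in V\subseteq U}\left\{f\in\mathcal{O}_M(U):f|_V=0\right\},\qquad\mathcal{O}_{M,p}=\varinjlim_{U\ni p}\mathcal{O}_M(U)$

where the union runs over open neighborhoods $V$ of $p$ contained in $U$, and the direct limit on the right is the usual definition of the stalk that the quotient construction is shortcutting.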
Anyhow, there’s still the possibility that this depends on the $U$ from which we started. But this is actually not the case; we get a uniquely defined algebra $\mathcal{O}_{M,p}=\mathcal{O}_{M,p}(U)$ no matter which neighborhood $U$ of $p$ we start from.
Indeed, I say that there is an isomorphism $\mathcal{O}_{M,p}(M)\to\mathcal{O}_{M,p}(U)$. In the one direction, this is simply induced by the restriction map $\mathcal{O}_M(M)\to\mathcal{O}_M(U)$ — if two functions are equal on some neighborhood of $p$ in $M$, then they’re certainly equal on some neighborhood of $p$ in $U$. And this restriction is just as clearly injective, since if two functions are equivalent in $\mathcal{O}_{M,p}(U)$ then they must agree on some neighborhood of $p$, which means they were already equivalent in $\mathcal{O}_{M,p}(M)$.
The harder part is showing that this map is surjective, and thus an isomorphism. But given $U$ and a function $f\in\mathcal{O}_M(U)$ representing a class in $\mathcal{O}_{M,p}(U)$, let $V$ be an open neighborhood of $p$ whose closure is contained in $U$ — we can find one since $U$ must contain a neighborhood of $p$ homeomorphic to a ball in $\mathbb{R}^n$, and we can certainly find $V$ within such a neighborhood. Anyhow, we know that there exists a bump function $\phi$ which is identically $1$ on $V$ and supported within $U$. We can thus define a smooth function $g\in\mathcal{O}_M(M)$ on all of $M$ by setting $g(q)=\phi(q)f(q)$ inside $U$ and $g(q)=0$ elsewhere. Since $f$ and $g$ agree on the neighborhood $V$ of $p$, they are equivalent in $\mathcal{O}_{M,p}(U)$, and thus every equivalence class in $\mathcal{O}_{M,p}(U)$ has a representative coming from $\mathcal{O}_{M,p}(M)$.
We write the stalk as $\mathcal{O}_{M,p}$, or sometimes $\mathcal{O}_p$ if the manifold $M$ is clear from context, and we call the equivalence classes of functions in this algebra "germs" of functions. Thus a germ subsumes not just the value of a function at a point $p$, but its behavior in an "infinitesimal neighborhood" around $p$. Some authors even call the structure sheaf of a manifold — especially a complex analytic manifold (which we haven't really discussed yet) — the "sheaf of germs" of functions on the manifold, which is a little misleading since the germs properly belong to the stalks of the sheaf. Luckily, this language is somewhat outmoded.
Posted by John Armstrong | Differential Topology, Topology
## 5 Comments »
1. Are you missing an equals or defined to be sign in the first displayed formula (the quotient)?
Comment by Robert | March 25, 2011 | Reply
2. sorry, fixed
Comment by | March 26, 2011 | Reply
3. Just checking that ‘algebra’ in par. 2 is something meeting this definition: http://en.wikipedia.org/wiki/Associative_algebra
which seems to fit, but as things get more complicated, uncertainties multiply …
Comment by Avery D Andrews | March 27, 2011 | Reply
4. Yes: like a ring, but built on a vector space, not just a set.
Comment by | March 27, 2011 | Reply
5. [...] we take a manifold $M$ with structure sheaf $\mathcal{O}_M$. We pick some point $p$ and get the stalk $\mathcal{O}_{M,p}$ of germs of functions at $p$. This is a real algebra, and we define a "tangent vector at [...]
Pingback by | March 29, 2011 | Reply
http://crypto.stackexchange.com/questions/2935/regarding-matsuis-paper-on-linear-cryptanalysis-of-des?answertab=active | # Regarding Matsui's Paper on Linear Cryptanalysis of DES
I was going through the paper by Matsui on Linear Cryptanalysis of DES. In it he says
$NS_{5}(16,15)=12$
And then in the next paragraph he says that, considering the expansion and permutation phases, the following equation holds:
$X[15]\oplus F(X,K)[7,18,24,29]=K[22]$. Can somebody help me understand this? I can't find a relation between the first equation and the second one.
-
Please find the paper at www.cs.bgu.ac.il/~crp042/Handouts/Matsui.pdf – Malice Jun 17 '12 at 20:56
## 1 Answer
Let us denote by $x = x_5x_4x_3x_2x_1x_0$, where $x_i \in \{0, 1\}$ the input of a DES S-box and by $y = y_3y_2y_1y_0$, with $y_i \in \{0, 1\}$ its output. Basically, $\mathrm{NS}_5(16, 15) = 12$ means that for S-box #5, the relation $x_4 = y_3 \oplus y_2 \oplus y_1 \oplus y_0$, where $y = S_5(x)$, holds with probability $\frac{\mathrm{NS}_5(16, 15)}{64} = \frac{12}{64} = \frac{1}{2}-\frac{5}{16}$, which is a fairly biased value (one would expect this value to be very close to $\frac{1}{2}$ in an ideal situation).
Starting from this observation, Matsui builds the 1-round linear approximation $X_{15} \oplus F(X, K)_7 \oplus F(X, K)_{18} \oplus F(X, K)_{24} \oplus F(X, K)_{29} = K_{22}$ that holds with the same probability $\frac{1}{2}-\frac{5}{16}$. Note that, for S-box #5, $x_4$ can be traced back to $X_{15} \oplus K_{22}$, taking into account the $\mathrm{EP}(.)$ transformation as well as the key-schedule algorithm, and $y_3y_2y_1y_0$ can be propagated to bits 7, 18, 24 and 29 of the output of the round function, this time taking into account the effect of the bit permutation $\mathrm{P}(.)$.
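If it helps to see where a number like $\mathrm{NS}_5(16,15)=12$ comes from, here is a small sketch in Python of the count that defines $\mathrm{NS}_a(\alpha,\beta)$. The 64-entry table `SBOX5` (the outputs of DES S-box 5 indexed by the 6-bit input $x_5x_4x_3x_2x_1x_0$) is assumed to be supplied by you, e.g. transcribed from the DES standard; it is not reproduced here.

```python
def parity(v):
    # XOR of all bits of v: 1 if v has an odd number of set bits, else 0
    p = 0
    while v:
        p ^= v & 1
        v >>= 1
    return p

def NS(sbox, alpha, beta):
    """Number of 6-bit inputs x for which the XOR of the input bits selected
    by the mask alpha equals the XOR of the output bits selected by beta."""
    return sum(1 for x in range(64)
               if parity(x & alpha) == parity(sbox[x] & beta))

# SBOX5 = [...]  # 64 outputs of DES S-box 5, indexed by the 6-bit input
# print(NS(SBOX5, 16, 15))             # Matsui's table gives 12
# print(NS(SBOX5, 16, 15) / 64 - 0.5)  # the bias, -5/16 = -0.3125
```

The probability quoted above is simply this count divided by 64.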
-
Exactly . But I'm looking for is how did he manage the 1 round linear approximation from the $NS_{5}(16,15)$ equation . He did not describe that in the paper and nor can i find it anywhere on internet – Malice Jun 20 '12 at 12:04
And please note that it's $NS_{5}(16,15)$ not $NS_{5}(16,5)$ . Typo . I've edited the Question – Malice Jun 20 '12 at 12:07
I have corrected and expanded my answer, hope this is more clear for you now :-) – cryptopathe Jun 20 '12 at 15:52
Oh My mistake . I's counting bits starting from left handside and was getting a differnt expression,should start counting from right hand side . Spent hours trying to find this .A bucketfull of thanks – Malice Jun 20 '12 at 17:33
While expanding this Equation to a single round of DES, the Output of the function f is XORed with plain text . How does this equation we're talking about hold true because plain text is of random content but Matsui says this equation holds true $X_{2}[7,8,24,29]\bigoplus P_H[7,18,24,29]\bigoplus P_{L}[15]=K_{1}[22]$ – Malice Jun 20 '12 at 17:58
http://mathhelpforum.com/calculus/140869-upper-lower-expression-f-x-sinx.html | # Thread:
1. ## Upper and Lower expression for f(x)=Sinx
What are the expressions of Un and Ln (n is the number of small rectangles) for f(x)=sinx on [0,π/2]?
(NO need to evaluate these sums)
Thanks
2. Originally Posted by ShaXar
What are the expressions of Un and Ln (n is the number of small rectangles) for f(x)=sinx on [0,π/2]?
(NO need to evaluate these sums)
Thanks
Do you understand what Un and Ln mean? The rest is just arithmetic. Divide the interval from 0 to $\pi/2$ into n subintervals of equal length, so that each has length $\pi/(2n)$. Since sin(x) is an increasing function on $[0, \pi/2]$, the lowest value on each subinterval is at the left endpoint, $i\pi/(2n)$, and the highest value is at the right endpoint, $(i+1)\pi/(2n)$, with i going from 0 to n-1.
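Spelling that out (one natural way to write the two sums): $L_n = \sum_{i=0}^{n-1}\frac{\pi}{2n}\,\sin\!\left(\frac{i\pi}{2n}\right)$ and $U_n = \sum_{i=1}^{n}\frac{\pi}{2n}\,\sin\!\left(\frac{i\pi}{2n}\right)$, i.e. the common width $\pi/(2n)$ times the smallest (respectively largest) value of sin(x) on each subinterval.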
3. Originally Posted by HallsofIvy
Do you understand what Un and Ln mean? The rest is just arithmetic. Divide the interval from 0 to $\pi/2$ into n subintervals of equal length, so that each has length $\pi/(2n)$. Since sin(x) is an increasing function on $[0, \pi/2]$, the lowest value on each subinterval is at the left endpoint, $i\pi/(2n)$, and the highest value is at the right endpoint, $(i+1)\pi/(2n)$, with i going from 0 to n-1.
Riemann sums is my guess.
CB
4. Yes, I was assuming that.
http://en.wikipedia.org/wiki/Imaginary_unit | # Imaginary unit
i in the complex or cartesian plane. Real numbers lie on the horizontal axis, and imaginary numbers lie on the vertical axis
The imaginary unit or unit imaginary number, denoted as i, is a mathematical concept which extends the real number system ℝ to the complex number system ℂ, which in turn provides at least one root for every polynomial P(x) (see algebraic closure and fundamental theorem of algebra). The imaginary unit's core property is that i² = −1. The term "imaginary" is used because there is no real number having a negative square.
There are in fact two complex square roots of −1, namely i and −i, just as there are two complex square roots of every other real number, except zero, which has one double square root.
In contexts where i is ambiguous or problematic, j or the Greek ι (see alternative notations) is sometimes used. In the disciplines of electrical engineering and control systems engineering, the imaginary unit is often denoted by j instead of i, because i is commonly used to denote electric current in these disciplines.
For a history of the imaginary unit, see Complex number: History.
## Definition
The powers of i return cyclic values:
$\ldots$ (the pattern repeats)
$i^{-3} = i\,$
$i^{-2} = -1\,$
$i^{-1} = -i\,$
$i^0 = 1\,$
$i^1 = i\,$
$i^2 = -1\,$
$i^3 = -i\,$
$i^4 = 1\,$
$i^5 = i\,$
$i^6 = -1\,$
$\ldots$ (the pattern repeats)
The imaginary number i is defined solely by the property that its square is −1:
$i^2 = -1 \ .$
With i defined this way, it follows directly from algebra that i and −i are both square roots of −1.
Although the construction is called "imaginary", and although the concept of an imaginary number may be intuitively more difficult to grasp than that of a real number, the construction is perfectly valid from a mathematical standpoint. Real number operations can be extended to imaginary and complex numbers by treating i as an unknown quantity while manipulating an expression, and then using the definition to replace any occurrence of i² with −1. Higher integral powers of i can also be replaced with −i, 1, i, or −1:
$i^3 = i^2 i = (-1) i = -i \,$
$i^4 = i^3 i = (-i) i = -(i^2) = -(-1) = 1 \,$
$i^5 = i^4 i = (1) i = i \,$
Similarly, as with any non-zero real number:
$i^0 = i^{1-1} = \frac{i}{i} = 1 \,$
## i and −i
Being a quadratic polynomial with no multiple root, the defining equation x² = −1 has two distinct solutions, which are equally valid and which happen to be additive and multiplicative inverses of each other. More precisely, once a solution i of the equation has been fixed, the value −i, which is distinct from i, is also a solution. Since the equation is the only definition of i, it appears that the definition is ambiguous (more precisely, not well-defined). However, no ambiguity results as long as one of the solutions is chosen and fixed as the "positive i". This is because, although −i and i are not quantitatively equivalent (they are negatives of each other), there is no algebraic difference between i and −i. Both imaginary numbers have equal claim to being the number whose square is −1. If all mathematical textbooks and published literature referring to imaginary or complex numbers were rewritten with −i replacing every occurrence of +i (and therefore every occurrence of −i replaced by −(−i) = +i), all facts and theorems would continue to be equivalently valid. The distinction between the two roots x of x² + 1 = 0 with one of them as "positive" is purely a notational relic; neither root can be said to be more primary or fundamental than the other.
The issue can be a subtle one. The most precise explanation is to say that although the complex field, defined as ℝ[x]/(x2 + 1), (see complex number) is unique up to isomorphism, it is not unique up to a unique isomorphism — there are exactly 2 field automorphisms of ℝ[x]/(x2 + 1) which keep each real number fixed: the identity and the automorphism sending x to −x. See also Complex conjugate and Galois group.
A similar issue arises if the complex numbers are interpreted as 2 × 2 real matrices (see matrix representation of complex numbers), because then both
$X = \begin{pmatrix} 0 & -1 \\ 1 & \;\;0 \end{pmatrix}$ and $X = \begin{pmatrix} \;\;0 & 1 \\ -1 & 0 \end{pmatrix}$
are solutions to the matrix equation
$X^2 = -I = - \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} -1 & \;\;0 \\ \;\;0 & -1 \end{pmatrix}. \$
In this case, the ambiguity results from the geometric choice of which "direction" around the unit circle is "positive" rotation. A more precise explanation is to say that the automorphism group of the special orthogonal group SO (2, ℝ) has exactly 2 elements — the identity and the automorphism which exchanges "CW" (clockwise) and "CCW" (counter-clockwise) rotations. See orthogonal group.
All these ambiguities can be solved by adopting a more rigorous definition of complex number, and explicitly choosing one of the solutions to the equation to be the imaginary unit. For example, the ordered pair (0, 1), in the usual construction of the complex numbers with two-dimensional vectors.
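As a quick numerical sanity check of the matrix picture (a sketch using NumPy; which of the two matrices one chooses to call "i" is precisely the conventional choice discussed above):

```python
import numpy as np

I2 = np.eye(2)
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])    # one of the two matrix square roots of -I

# Both J and -J solve X @ X == -I:
print(np.allclose(J @ J, -I2), np.allclose((-J) @ (-J), -I2))   # True True

# Identifying a + bi with a*I2 + b*J reproduces complex multiplication:
def as_matrix(z):
    return z.real * I2 + z.imag * J

z, w = 2 + 3j, -1 + 0.5j
print(np.allclose(as_matrix(z) @ as_matrix(w), as_matrix(z * w)))  # True
```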
## Proper use
The imaginary unit is sometimes written √−1 in advanced mathematics contexts (as well as in less advanced popular texts). However, great care needs to be taken when manipulating formulas involving radicals. The notation √x is reserved either for the principal square root function, which is only defined for real x ≥ 0, or for the principal branch of the complex square root function. Attempting to apply the calculation rules of the principal (real) square root function to manipulate the principal branch of the complex square root function will produce false results:
$-1 = i \cdot i = \sqrt{-1} \cdot \sqrt{-1} = \sqrt{(-1) \cdot (-1)} = \sqrt{1} = 1$ (incorrect).
Attempting to correct the calculation by specifying both the positive and negative roots only produces ambiguous results:
$-1 = i \cdot i = \pm \sqrt{-1} \cdot \pm \sqrt{-1} = \pm \sqrt{(-1) \cdot (-1)} = \pm \sqrt{1} = \pm 1$ (ambiguous).
Similarly:
$\frac{1}{i} = \frac{\sqrt{1}}{\sqrt{-1}} = \sqrt{\frac{1}{-1}} = \sqrt{\frac{-1}{1}} = \sqrt{-1} = i$ (incorrect).
The calculation rules
$\sqrt{a} \cdot \sqrt{b} = \sqrt{a \cdot b}$
and
$\frac{\sqrt{a}} {\sqrt{b}} = \sqrt{\frac{a}{b}}$
are only valid for real, non-negative values of a and b.
These problems are avoided by writing and manipulating i√7, rather than expressions like √−7. For a more thorough discussion, see Square root and Branch point.
## Properties
### Square roots
The two square roots of i in the complex plane
The square root of i can be expressed as either of two complex numbers[nb 1]
$\pm \sqrt{i} = \pm \left( \frac{\sqrt{2}}{2} + \frac{\sqrt{2}}{2}i \right) = \pm \frac{\sqrt{2}}2 (1 + i).$
Indeed, squaring the right-hand side gives
$\begin{align} \left( \pm \frac{\sqrt{2}}2 (1 + i) \right)^2 \ & = \left( \pm \frac{\sqrt{2}}2 \right)^2 (1 + i)^2 \ \\ & = \frac{1}{2} (1 + 2i + i^2) \\ & = \frac{1}{2} (1 + 2i - 1) \ \\ & = i. \ \\ \end{align}$
This result can also be derived with Euler's formula
$e^{ix} = \cos(x) + i\sin(x) \,$
by substituting x = π/2, giving
$e^{i(\pi/2)} = \cos(\pi/2) + i\sin(\pi/2) = 0 + i1 = i\,\! .$
Taking the square root of both sides gives
$\pm \sqrt{i} = e^{i(\pi/4)} \,\! ,$
which, through application of Euler's formula to x = π/4, gives
$\begin{align} \pm \sqrt{i} & = \cos(\pi/4) + i\sin(\pi/4) \\ & = \frac{1}{\pm \sqrt{2}} + \frac{i}{\pm \sqrt{2}}\\ & = \frac{1+i}{\pm \sqrt{2}}\\ & = \pm \frac{\sqrt{2}}2 (1 + i).\\ \end{align}$
Similarly, the square root of −i can be expressed as either of two complex numbers using Euler's formula:
$e^{ix} = \cos(x) + i\sin(x) \,$
by substituting x = 3π/2, giving
$e^{i(3\pi/2)} = \cos(3\pi/2) + i\sin(3\pi/2) = 0 - i1 = -i\,\! .$
Taking the square root of both sides gives
$\pm \sqrt{-i} = e^{i(3\pi/4)} \,\! ,$
which, through application of Euler's formula to x = 3π/4, gives
$\begin{align} \pm \sqrt{-i} & = \cos(3\pi/4) + i\sin(3\pi/4) \\ & = -\frac{1}{\pm \sqrt{2}} + i\frac{1}{\pm \sqrt{2}}\\ & = \frac{-1 + i}{\pm \sqrt{2}}\\ & = \pm \frac{\sqrt{2}}2 (i - 1).\\ \end{align}$
Multiplying the square root of i by i also gives:
$\begin{align} \pm \sqrt{-i} = (i)\cdot (\pm\frac{1}{\sqrt{2}}(1 + i)) \\ & = \pm\frac{1}{\sqrt{2}}(1i + i^{2})\\ & = \pm\frac{\sqrt{2}}{2}(i - 1)\\ \end{align}$
### Multiplication and division
Multiplying a complex number by i gives:
$i\,(a + bi) = ai + bi^2 = -b + ai.$
(This is equivalent to a 90° counter-clockwise rotation of a vector about the origin in the complex plane.)
Dividing by i is equivalent to multiplying by the reciprocal of i:
$\frac{1}{i} = \frac{1}{i} \cdot \frac{i}{i} = \frac{i}{i^2} = \frac{i}{-1} = -i.$
Using this identity to generalize division by i to all complex numbers gives:
$\frac{a + bi}{i} = -i\,(a + bi) = -ai - bi^2 = b - ai.$
(This is equivalent to a 90° clockwise rotation of a vector about the origin in the complex plane.)
### Powers
The powers of i repeat in a cycle expressible with the following pattern, where n is any integer:
$i^{4n} = 1\,$
$i^{4n+1} = i\,$
$i^{4n+2} = -1\,$
$i^{4n+3} = -i.\,$
This leads to the conclusion that
$i^n = i^{n \bmod 4}\,$
where mod represents the modulo operation. Equivalently:
$i^n = \cos(n\pi/2)+i\sin(n\pi/2)$
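The cycle is easy to verify numerically; here is a small Python sketch (the imaginary unit is spelled `1j` in Python):

```python
# The four values of i**n, indexed by n mod 4
cycle = [1, 1j, -1, -1j]

for n in range(-4, 9):
    direct = 1j ** n            # built-in complex arithmetic
    via_mod = cycle[n % 4]      # i**n = i**(n mod 4)
    assert abs(direct - via_mod) < 1e-12, (n, direct, via_mod)
print("i**n equals i**(n mod 4) for n = -4 .. 8")
```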
#### i raised to the power of i
Making use of Euler's formula, ii is
$i^i = \left( e^{i (2k \pi + \pi/2)} \right)^i = e^{i^2 (2k \pi + \pi/2)} = e^{- (2k \pi + \pi/2)}$ where $k \in \mathbb{Z}$, the set of integers.
The principal value (for k = 0) is e−π/2 or approximately 0.207879576...[1]
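This value is easy to reproduce numerically, since Python's complex power also uses the principal branch:

```python
import math
print(1j ** 1j)                 # (0.20787957635076193+0j)
print(math.exp(-math.pi / 2))   # 0.20787957635076193
```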
### Factorial
The factorial of the imaginary unit i is most often given in terms of the gamma function evaluated at 1 + i:
$i! = \Gamma(1+i) \approx 0.4980 - 0.1549i.$
Also,
$|i!| = \sqrt{\pi \over \sinh(\pi)} \approx 0.521564... .$[2]
### Other operations
Many mathematical operations that can be carried out with real numbers can also be carried out with i, such as exponentiation, roots, logarithms, and trigonometric functions.
A number raised to the ni power is:
$\!\ x^{ni} = \cos(\ln(x^n)) + i \sin(\ln(x^n)).$
The nith root of a number is:
$\!\ \sqrt[ni]{x} = \cos(\ln(\sqrt[n]{x})) - i \sin(\ln(\sqrt[n]{x})).$
The imaginary-base logarithm of a number is:
$\log_i(x) = {{2 \ln(x)} \over i\pi}.$
As with any complex logarithm, the log base i is not uniquely defined.
The cosine of i is a real number:
$\cos(i) = \cosh(1) = {{e + 1/e} \over 2} = {{e^2 + 1} \over 2e} \approx 1.54308064... .$
And the sine of i is purely imaginary:
$\sin(i) = \sinh(1) \, i = {{e - 1/e} \over 2} \, i = {{e^2 - 1} \over 2e} \, i \approx 1.17520119 \, i... .$
## Alternative notations
• In electrical engineering and related fields, the imaginary unit is often denoted by j to avoid confusion with electrical current as a function of time, traditionally denoted by i(t) or just i. The Python programming language also uses j to denote the imaginary unit. MATLAB associates both i and j with the imaginary unit, although 1i or 1j is preferable, for speed and improved robustness.[3]
• Some sources define[citation needed] j = −i, in particular with regard to traveling waves (e.g., a right traveling plane wave in the x direction) $e^{ i (kx - \omega t)} = e^{ j (\omega t-kx)} \,$.
• Some texts use the Greek letter iota (ι) for the imaginary unit, to avoid confusion, esp. with index and subscripts.
• Each of i, j, and k is an imaginary unit in the quaternions. In bivectors and biquaternions an additional imaginary unit h is used.
## Matrices
When 2 × 2 real matrices m are used for a source, and the number one (1) is identified with the identity matrix, and minus one (−1) with the negative of the identity matrix, then there are many solutions to m² = −1. In fact, there are many solutions to m² = +1 and m² = 0 also. Any such m can be taken as a basis vector, along with 1, to form a planar algebra.
## Notes
1. To find such a number, one can solve the equations
(x + iy)² = i
x² + 2ixy − y² = i
Because the real and imaginary parts are always separate, we regroup the terms:
x² − y² + 2ixy = 0 + i
and get a system of two equations:
x² − y² = 0
2xy = 1
Substituting y = 1/(2x) into the first equation, we get
x² − 1/(4x²) = 0
x² = 1/(4x²)
4x⁴ = 1
Because x is a real number, this equation has two real solutions for x: x = 1/√2 and x = −1/√2. Substituting both of these results into the equation 2xy = 1 in turn, we will get the same results for y. Thus, the square roots of i are the numbers 1/√2 + i/√2 and −1/√2 − i/√2. (University of Toronto Mathematics Network: What is the square root of i? URL retrieved March 26, 2007.)
## References
1. "The Penguin Dictionary of Curious and Interesting Numbers" by David Wells, Page 26.
2. "abs(i!)", WolframAlpha.
3.
## Further reading
• Nahin, Paul J. (1998). An Imaginary Tale: The Story of √−1. Chichester: Princeton University Press. ISBN 0-691-02795-1.
http://physics.stackexchange.com/questions/tagged/photons?page=5&sort=newest&pagesize=15 | # Tagged Questions
Photons are electromagnetic waves. They exhibit particle-like qualities in many situations and have zero rest mass.
3answers
208 views
### Hydrogen transition and photon behavior
consider a transition for an electron in the Hydrogen atom from the ground state to the 1st excited state. Let's say this transition occurs through absorption of a photon of exactly the energy ...
2answers
279 views
### Photon statistics of an incandescent light source
We usually calibrate the cameras on our microscopes by capturing 20 images of a blurry (not sharp) fluorescent particle. For each pixel in this stack of 20 images we calculate the intensity variance. ...
1answer
419 views
### Can a photon exiting from a gravity well ever reach a frequency of zero / wavelength of $\infty$?
In reading another question about gravity's effects on a photon, I wondered if it were possible for a photon to ever be redshifted to zero wavelength. I know that black holes have a gravity field ...
1answer
144 views
### How is the mechanism of greenhouse gases interacting with IR radiation?
How does atmospheric CO2 and other Greenhousgases (GHG) affect the incoming (from sun) and outgoing (from earth) radiation. I understand that at certain wavenumbers (or areas of wavenumbers) in the ...
2answers
497 views
### Tachyon and Photons
Is there a particle called "tachyons" that can travel faster than light? If so, would Einstein's relativity be wrong? According to Einstein no particle can travel faster than light.
2answers
111 views
### Photon absorption
[sorry, this way below the level of this forum -- flames are most welcome] When a photon is absorbed by a piece of matter that does not reflect it -- where does the photon "go"? Eg, one shines light ...
3answers
129 views
### Distant bodies emitting photons
This comes from a discussion forum, where a friend of mine asked the following: We can see objects in space billion of light years away, right? I started wondering about that. If you take 2 ...
2answers
235 views
### Photon emission from excited atoms
the answer given by classical quantum mechanic to the for atomica levels does not provide that an electron in an excited levels can radiate a photon and move to a lower level. How QED justifies this ...
4answers
672 views
### Why photons transfer to electrons perpendicular momentum?
Linear antenna directed along z, photons (EM waves) propagate along x. Momentum of photons have only x component. Why electrons in antenna have z component of momentum?
1answer
185 views
### If 100% of the energy from the sun is reflected back into space
100% of the energy from the sun is reflected back into space, it's just shifted from a low-entropy state to a high-entropy state, and from a high frequency (ultraviolet) to a low frequency (infrared). ...
3answers
243 views
### Action - Reaction pair, through photons
Here's an example to describe the issue Supossed a high power laser (eg a 100 kW laser, ie, electromagnetic weapons) is fired to a target, then it will receive energy and move. (and likely to burn or ...
5answers
527 views
### Particles, waves and parallel wire filters. Transmission formula?
If I think of a photon as a particle, I think a parallel wire filter should transmit proportionally to the uncovered area. (and reflect proportionally to the covered area). Obviously polarization ...
2answers
171 views
### self-antiparticles and broken symmetries
certain particles (i.e: certain bosons like the photon) do not have an anti-particle, or rather, they are they own anti-particles. lets assume that such symmetry is only approximate and these ...
3answers
631 views
### Radio waves and frequency of photon
How radio waves create the current in antenna in terms of photons? If it is Compton scattering then why is not changed the freuency of photons?
3answers
3k views
### What do ants see?
After watching some ants in my garden today, and then looking at this very illuminating demonstration, I got to wondering, about what they would see. Not specifically ants (I understand their ...
1answer
727 views
### Light waves and Schrödinger probability waves
Ok, bearing in mind that I only have a brief understanding of quantum mechanics (no formal education, only from reading about concepts in books), so I could be way off here, I have a question ...
3answers
522 views
### Nature of Photons
Why is it that photons are emitted in bundles? My physics teacher's answer was "it's complicated"...
2answers
216 views
### Have CMB photons “cooled” or been “stretched”?
Introductory texts and popular accounts of why we see the "once hot" CMB as microwaves nearly always say something about the photons "cooling" since the Big Bang. But isn't that misleading? Don't ...
2answers
324 views
### Is energy exchange quantized?
In the photoelectric effect there is a threshold frequency that must be exceeded, to observe any electron emission, I have two questions about this. I) Lower than threshold: What happen with lesser ...
2answers
381 views
### Is spin is a conserved entity?
Suppose that an electron with spin up emits a photon in the field of an ion (bremsstrahlung). What is the spin of the emitted photon? Is it correct to say that the photon is circularly polarized if ...
1answer
383 views
### How does a particle of light reach the max speed of light? [duplicate]
Possible Duplicate: How can a photon have no mass and still travel at the speed of light? First of all I am not a professional physicist. I was curious as to how a particle of light can ...
3answers
1k views
### Photon hitting an atom with higher energy than needed to ionize
Suppose we have an atom with several energy levels (e.g. an hydrogen), and it is hit by photons. I know that in order to have the atom change energy levels, the photon must have an energy level ...
3answers
584 views
### Kinetic Energy vs. Potential energy with regards to creating particles
So I know that when you collide particles with high enough kinetic energy, (kinetic energy = at least the rest mass of the particles you are making), you get particles. How come potential energy ...
3answers
293 views
### Could a bubble of photons make a spaceship massless?
I'm not sure how theoretically possible this is but my question is... If we could somehow make a perfect bubble of photons (a massless bubble) and put a spaceship inside it, could it therefore ...
0answers
242 views
### Maxwell's Demon - laser cooling
There’s an interesting article on Scientific American that tells how to cool individual atoms can bee cooled to within a very tiny fraction of Absolute Zero. It uses two laser beams acting on a very ...
5answers
2k views
### Does a photon interfere only with itself?
I sometimes hear statements like that: Quantum-mechanically, interference pattern occurs due to quantum interference of wavefunction of a photon. Wavefunction of a single photon only interferes ...
2answers
696 views
### Does $E = mc^2$ apply to photons?
Photons are massless, but if $m = 0$ and $E = mc^2$ then $E = 0c^2 = 0$. This would say that photons have no energy, which is not true. However, given the formula $E = ℎf$, a photon does have energy ...
4answers
426 views
### When lasers are used to cool atoms or ions, etc where does the heat go?
According to the first law of thermodynamics, sourced from wikipedia "In any process in an isolated system, the total energy remains the same." So when lasers are used for cooling in traps, similar ...
3answers
578 views
### Anti-laser: How sure we are that energy is transported?
Reading this PE question can-we-transport-energy-over-infinite-distances-through-vacuum-using-light, a related question arises naturally: Is energy transported (by light)? -- (I did believed in this ...
1answer
334 views
### Does the potential energy for a given photon increase or decrease in quanta?
As a photon leaves a strong gravitational field, it loses energy and redshifts. Is the exchange in potential energy of a photon characterized by energy quanta?
1answer
247 views
### What happend with the light from all the galaxies visibles from an earth telescope?
Supposing it's possible to see some distant galaxies with an earth telescope, then, at the tip of the telescope lens there are photons comming from the distant galaxy... So, if I extend my hand in a ...
1answer
184 views
### Radio waves and frequency of photon
Is 89MHZ station emitting photons of 89MHZ frequency? (I mean $\nu$ in $E=h\nu$).
2answers
820 views
### How to count photons
How are photons counted? What is the experimental setup used to count photons from a laser or even a lamp? Of course, in the case of the lamp, I would be able to count only the photons that pass ...
5answers
2k views
### Why can't photons have a mass
Why can't photons have a mass? Could you explain this to me in a short and mathematical way?
3answers
675 views
### Reconciling refraction with particle theory and wave theory
I have searched the web for good answers to why refraction occurs when light moves from one medium to another with different density. I have limited background in physics and want to know if there is ...
7answers
10k views
### How can a photon have no mass and still travel at the speed of light?
I've read a number of the helpful Q&As on photons that mention the mass/mass-less issue. Do I understand correctly that the idea of mass-less (a rest mass of 0) may be just a convention to make ...
1answer
200 views
### Observing photons
Your replies to my question about being able to see a photon, from the side (answer, unanimous, “no”) have raised in me some additional questions. Would it reasonable to think that in consequence, ...
2answers
361 views
### Is Time Significant in the Double Slit Experiment
When doing the classic double slit experiment is the time between emitting photons significant at all? Say, a single photon is emitted, the scientist waits T seconds, then emits another photon. Are ...
4answers
388 views
### movement of photons
In a typical photon experiment the photon is depicted as moving across the page, say from right to left. Suppose we were actually able to witness such an experiment, from the side (to position of ...
4answers
1k views
### Do mirrors increase the amount of light in a room?
So if you have a light bulb in a room, and you had a tool to measure the amount of light that's in the room, then let's assume the amount of light only caused by the bulb is "1" If you place a mirror ...
3answers
2k views
### Properties of the photon: Electric and Magnetic field components
Consider an electromagnetic wave of frequency $\nu$ interacting with a stationary charge placed at point $x$. My question concerns the consistency of two equally valid quantum-mechanical descriptions ...
3answers
852 views
### How is electromagnetic wave variation distributed in space?
Imagine an electromagnetic wave (a monochromatic one for example) The electric field amplitude, and its variations travel in the propagation direction. So, if there really exists a propagation ...
8answers
13k views
### If photons have no mass, how can they have momentum?
As an explanation of why a large gravitational field (such as a black hole) can bend light, I have heard that light has momentum. This is given as a solution to the problem of only massive objects ...
5answers
2k views
### Do photons gain mass when they travel through glass?
Please correct me if I'm wrong, but I believe that photons slow down when travelling through glass. Does this mean they gain mass? Otherwise, what happens to extra kinetic energy? I understand now ...
1answer
2k views
### How many photons per second is one Lumen?
Also the side question is how many Joules is one photon (any between 450-660nm). Thank you P.S. I am asking because I want to estimate how much thermal energy should be dissipated by LED when part of ...
2answers
654 views
### What is up and down conversion in photonics?
I have heard the terms up and down conversion in photonics/photovoltaics articles. What do the terms mean?
4answers
423 views
### What is the mechanism behind the slowdown of light/photons in a transparent medium?
So light travels slower in glass (for example) than in a vacuum. What causes light to slow down? Or: How does it slow down? If light passes through the medium, is it not essentially traveling in the ...
5answers
957 views
### How are forces “mediated”?
I hope this is the right word to use. To me, these forces seem kind of fanciful (except for General Relativity and Gravity, which have a geometric interpretation). For example, how do two charged ...
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aos/1069362732 | ### On nonparametric estimation of density level sets
A. B. Tsybakov
Source: Ann. Statist. Volume 25, Number 3 (1997), 948-969.
#### Abstract
Let $X_1, \dots, X_n$ be independent identically distributed observations from an unknown probability density $f(\cdot)$. Consider the problem of estimating the level set $G = G_f(\lambda) = \{x \in \mathbb{R}^2 : f(x) \geq \lambda\}$ from the sample $X_1, \dots, X_n$, under the assumption that the boundary of G has a certain smoothness. We propose piecewise-polynomial estimators of G based on the maximization of local empirical excess masses. We show that the estimators have optimal rates of convergence in the asymptotically minimax sense within the studied classes of densities. We find also the optimal convergence rates for estimation of convex level sets. A generalization to the N-dimensional case, where $N > 2$, is given.
Primary Subjects: 62G05, 62G20
Full-text: Open access
Permanent link to this document: http://projecteuclid.org/euclid.aos/1069362732
Mathematical Reviews number (MathSciNet): MR1447735
Digital Object Identifier: doi:10.1214/aos/1069362732
Zentralblatt MATH identifier: 0881.62039
2013 © Institute of Mathematical Statistics
http://motls.blogspot.com.au/2012/09/albert-einstein-1911-12-1922-23.html | # The Reference Frame
## Wednesday, September 26, 2012
### Albert Einstein 1911-12, 1922-23
Several events related to Albert Einstein's life occurred in recent days and months. If you consider yourself a sort of Einstein fan, you may let me mention some of them.
First, you may finally buy Einstein's brain for \$9.99 (or \$13.99 now?), no strings or hair attached. See Google News or go to the iTunes AppStore.
Second, Caltech and Princeton University Presses teamed up and released the 13th volume of Einstein papers. They cover January 1922-March 1923 and you may already buy the book for \$125 at the PUP website: it has over 900 pages. Einstein is travelling a lot, is ahead of time and already (or still) speaks Hebrew in British Palestine ;-), and doesn't even mention his Nobel prize. Wired about the new book.
Third, there was a conference of Einstein fans among relativists three months ago in Prague. It was a centennial one because Einstein was a full professor (for the first time!) at the local Charles University (German section: then named Karl-Ferdinands Universität) between 1911 and 1912. He left after the third semester, in July 1912, almost exactly 100 years ago.
You may want to read some of the dozens of presentations. I will recommend you the presentation by Prof Jiří Bičák, the main organizer and my undergraduate ex-instructor of general relativity:
Einstein’s Days and Works in Prague: Relativity Then and Now
It's a fun PDF file that shows a lot about the social context as well as the state of his thoughts about general relativity – which wasn't complete yet – at that time.
Source (EN).
He would work in the 7 (at that time: 3) Viničná Street (name related to wineries: Street View) which is a building of the Faculty of Natural Sciences these days. Of course, it was four decades before the Faculty of Mathematics and Physics was established but I know the place well because it's less than 500 meters (direct line) from the "Karlov" building of the Faculty of Maths of Physics, my Alma Mater.
In April 1911, he wrote this to his friend Grossmann:
I have a magnificent institute here in which I work very comfortably. Otherwise it is less homey (Czech language, bedbugs, awful water, etc.). By the way, Czechs are more harmless than one thinks.
As big a compliment to the Czechs as you can get. ;-) Two weeks later, he added this in a letter to M. Besso:
The city of Prague is very fine, so beautiful that it is worth a long journey for itself.
Bičák adds lots of fun stuff about the local reaction to Einstein's presence. But there's quite some physics he did in Prague – essential developments that had to occur for the general theory of relativity to be born. Einstein himself summarized the importance of his year in Prague rather concisely, in a foreword to the Czech translation of a 1923 book explaining relativity:
I am pleased that this small book is finally released in the mother tongue of the country in which I finally found enough concentration that was needed for the basic idea of general relativity, one that I have been aware of since 1908, to be gradually dressed up into less fuzzy clothes. In the silent rooms of the Institute for Theoretical Physics of Prague's German University in Viničná Street, I discovered that the equivalence principle implied the bending of light near the Sun and that it was large enough to be observable, even though I was unaware that 100 years earlier, a similar corollary was extracted from Newton's mechanics and his theory of light emission. In Prague, I also discovered the consequence of the principles that says that spectral lines are shifted towards the red end, a consequence that hasn't been perfectly experimentally validated yet.
Well, be sure that as of today, it's been validated for half a century – in an experiment that took place in another building where I have worked for 6 years (and yes, the Wikipedia picture of the building is mine, too).
The Czech translation I used to translate it to modern Lumo English was probably obtained by translating a German original and you will surely forgive me some improvements.
Note that it was a pretty powerful combination of insights: gravitational red shift as well as the light bending (observed during the 1919 solar eclipse) were discovered in Prague. It was years before Einstein had the final form of the equations of general relativity, Einstein's equations.
Today, people – including people who consider themselves knowledgeable about physics – often fail to understand that many insights may be physically deduced even if one doesn't possess the final exact form of the equations. Principles imply a lot. They may be logically processed to derive new insights. At the beginning of the 20th century, people like Einstein could do such things very well. Many people today would almost try to outlaw such reasoning – principled reasoning that used to define good theoretical physicists. They would love to outlaw principles themselves. They would love to claim it is illegal to believe any principles, it is illegal to be convinced that any principles are true.
Albert Einstein and Johanna Fantová, his Czech yachting buddy while in the U.S.
The derivation of the bending of light is a somewhat annoying argument and the right numerical factor may only be obtained if you're careful about the equations of GR. So while I was not sure whether Einstein got the right numerical coefficient in 1911-12, I feel that I wouldn't trust it, anyway. Up to the numerical coefficient, the answer may be calculated from Newton's mechanics. (Well, later I searched for the answer and Einstein's numerical coefficient in 1911 was indeed wrong, the same as the Newtonian one, i.e. one-half of the full GR result.)
Just imagine that you shoot a bullet whose speed is the speed of light – Newton's light corpuscle – around the Sun so that the bullet barely touches the Sun and you calculate how much the bullet's trajectory is bent towards the Sun due to the star's gravitational attraction. You integrate the appropriate component of the acceleration to find out the change of the velocity, take the ratio of this perpendicular velocity to the original component of the velocity, and that's the angle.
General relativity just happens to give you a result that is almost the same: well, it's exactly 2 times larger.
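To get a feeling for the size of the effect – this is just a back-of-the-envelope aside using the standard solar values, not part of Einstein's 1911 argument – the Newtonian corpuscle estimate for a ray grazing the Sun gives \[
\delta\phi_{\rm Newton} = \frac{2GM_\odot}{c^2 R_\odot} \approx 0.87'',
\] while the full general-relativistic deflection is twice as large, \[
\delta\phi_{\rm GR} = \frac{4GM_\odot}{c^2 R_\odot} \approx 1.75'',
\] and it was the latter value that the 1919 eclipse expeditions went after.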
It's perhaps more insightful to discuss the derivation of the gravitational red shift where one may reasonably trust even the numerical coefficient and it is indeed right. His argument went like this (optimization by your humble correspondent).
Consider a carousel rotating by the angular speed $\omega$. Objects standing at the carousel at distance $R$ from the axis of rotation will feel the centrifugal acceleration $R\omega^2$. They will also move by the speed $v=R\omega$. Special relativity guarantees that their clocks will tick slower (time dilation), by the factor of \[
\frac{t_\text{at carousel}}{t_\text{at the center}} =
\sqrt{1-\frac{v^2}{c^2}} = \sqrt{ 1-\frac{R^2\omega^2}{c^2}} \approx 1 - \frac{R^2\omega^2}{2c^2} .
\] Observers in the rotating frame of the carousel will interpret the centrifugal force as a gravitational field. And because from the rotating frame's viewpoint, the velocity of all objects standing on the carousel is zero so the special relativistic time dilation doesn't exist in this frame, the slowing down of time must be a consequence of the gravitational field.
The coefficient determining how much the time is slowed down only depends on $R\omega$. How is this quantity related to some observables describing the gravitational field? Well, the gravitational acceleration is $R\omega^2$, as mentioned previously, and it may be integrated to get the gravitational potential:\[
\Phi = -\int_0^R \dd\rho \,\rho \omega^2 = -\frac{R^2\omega^2}{2}.
\] Note that the gravitational potential is negative away from $R=0$ because the gravitational (=centrifugal) force is directed outwards so outwards corresponds to "down" in the analogy with the usual Earth's gravitational field. Now, we see that the gravitational potential depends on $R\omega$ only as well so it's the right quantity that determines the gravitational slowdown of the time, i.e. the gravitational redshift. Substituting this formula for $\Phi$, we see that\[
\frac{t_\text{at carousel}}{t_\text{at the center}} = \dots \approx 1+ \frac{\Phi}{c^2}.
\] So the gravitational potential $\Phi$ is really related to the "rate of time" which we would call $\sqrt{g_{00}}$ these days. Einstein could realize this fact several years before he could write down the equations of the general theory of relativity.
Because special relativity guaranteed that motion unexpectedly influences the flow of time and because by 1911-12, he fully appreciated that the accelerated motion and the inertial forces it causes may be physically indistinguishable from a gravitational field, he could see that the gravitational field influences the flow of time, too. And he could even extract the right formula in the Newtonian limit.
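To attach a rough number to the effect – again an aside with textbook figures rather than anything computed in Prague – two clocks separated by a height $h$ in the Earth's field differ in rate by about \[
\frac{\Delta t}{t} \approx \frac{gh}{c^2},
\] which for the roughly 22.5-meter tower used in the classic Pound–Rebka experiment comes out near $2.5\times 10^{-15}$: a tiny shift, but one that was measurable thanks to the Mössbauer effect.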
Of course, it's safer to work with the full theory. However, such "semi-heuristic" derivations are valid very frequently and you shouldn't just dismiss them, especially if the author of such arguments seems to know what he or she is doing.
And that's the memo.
P.S. Joseph sent me a link to a rather fascinating 1912 or so notebook of Einstein with handwritten puzzles and trivial tensor calculus that Einstein was just learning or co-discovering.
Posted by Luboš Motl
Other texts on similar topics: philosophy of science, science and society
#### snail feedback (4)
reader Benjamin said...
In the equation after 'Substituting this formula...', I think you have an extra 2.
reader Luboš Motl said...
I erased the factor while proofreading, 20 seconds before I saw your comment. ;-)
reader Dilaton said...
Nice explanation, I always need and appreciate step by step derivations such as the one in this article for example ... ;-)
reader publius said...
A recent book about Einstein's attempts at unification, by Jeroen van Dongen, that seems interesting is
http://www.amazon.com/Einsteins-Unification-Jeroen-van-Dongen/dp/0521883466
I have just seen the Amazon and Google Books previews so far, as the book is quite expensive, but those look good. Also, Isaacson's very good biography draws on Van Dongen's thesis on that topic, which seems to be the basis of the book.
http://mathematica.stackexchange.com/questions/18025/how-do-i-remove-the-flat-parts-that-are-not-part-of-the-function-in-plot3d | # How do I remove the flat parts that are not part of the function in Plot3D?
````GraphicsRow[
{Plot3D[x^2 + y^2, {x, -6, 6}, {y, -6, 6}, PlotRange -> Automatic],
Plot3D[x^2 + y^2, {x, -6, 6}, {y, -6, 6},
PlotRange -> {{-6, 6}, {-6, 6}, {0, 30}}],
Plot3D[x^2 + y^2, {x, -6, 6}, {y, -6, 6},
PlotRange -> {Full, Full, {0, 30}}]}, ImageSize -> 700]
````
yields
How do I get the full "bowl" look without the extra flat part filling out the plane at $z=30$? I must be missing something simple...
-
## 1 Answer
Use the `ClippingStyle` option to control the appearance of the cut off part.
````Plot3D[x^2 + y^2, {x, -6, 6}, {y, -6, 6},
PlotRange -> {{-6, 6}, {-6, 6}, {0, 30}}, ClippingStyle -> None]
````
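If you would rather see where the surface has been cut instead of hiding it, `ClippingStyle` also accepts a style directive – e.g. `ClippingStyle -> Opacity[0.3]` should draw the clipped part as a translucent cap rather than removing it.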
-
http://mathoverflow.net/revisions/25727/list
Theoretical computer science or combinatorics or algorithmics: The diamond lemma.
Some days ago, while giving a seminar talk about Clifford algebras, I realized that Lawson–Michelsohn has a flawed proof that the canonical inclusion of a vector space in its own Clifford algebra is indeed injective (unfortunately, not until I had written this proof on the board). Most other literature gives ugly proofs using orthogonalization. In fact, this injectivity works in a much more general context (namely, it works for any module over a commutative ring with $1$), where of course there need not be any orthogonalization. And it is easily proven using the diamond lemma. A similar assertion for Weyl algebras is also clear from the diamond lemma, and so is the Poincaré–Birkhoff–Witt theorem (which is proven in intricate and opaque ways in most of the literature). Maybe the problem is that geometers don't know enough computer science?
http://mathhelpforum.com/calculus/136250-solved-gaussian-integral-polynomial-term.html | # Thread:
1. ## [SOLVED] Gaussian Integral with Polynomial Term
Any help on how to calculate the following would be highly appreciated (I am familiar with the "standard" way of calculating the Gaussian integral in polar coordinates):
$\int_{-\infty}^{\infty} e^{-\frac{1}{2}\left(x^{2}+\frac{1}{x^{2}}\right)}\,dx$
Apologies for a rather straightforward question and thanks in advance.
2. YES I GOT IT!
Sorry... This has been boggling my mind for a wee while.
This integral involves the following lemma...
$\int_{-\infty}^{\infty} F(u) dx = \int_{-\infty}^{\infty} F(x) dx$ where $u = x-\frac{1}{x}$.
So we have...
$\int_{-\infty}^{\infty} e^{-\tfrac{1}{2}\left(x^2 + \tfrac{1}{x^2}\right)}dx$
$=\frac{1}{e} \int_{-\infty}^{\infty} e^{-\tfrac{1}{2}\left(x - \tfrac{1}{x}\right)^2}dx$
WHY..?
Because $e^{-\tfrac{1}{2}\left(x-\tfrac{1}{x}\right)^2} = e^{-\tfrac{1}{2}\left(x^2 - 2 + \tfrac{1}{x^2}\right)} = e^{-\tfrac{1}{2}\left(x^2 + \tfrac{1}{x^2}\right)}e$
So we need an extra $e^{-1}$ outside the integral!
So continuing we get...
$=\frac{1}{e} \int_{-\infty}^{\infty} e^{-\tfrac{1}{2}\left(x - \tfrac{1}{x} \right)^2}dx$
$=\frac{1}{e} \int_{-\infty}^{\infty} e^{-\tfrac{1}{2}x^2}dx$ using Lemma!
$=\frac{1}{e} \sqrt{2} \sqrt{\pi}$
$= \frac{\sqrt{2\pi}}{e}$
Epic Win!
Additional calculations
Showing that
$=\frac{1}{e} \int_{-\infty}^{\infty} e^{-\tfrac{1}{2}x^2}dx = \frac{1}{e} \sqrt{2} \sqrt{\pi}$
Let $z = \frac{x}{\sqrt{2}}$, then $dx = \sqrt{2}dz$.
Hence the integral becomes...
$\frac{\sqrt{2}}{e} \int_{-\infty}^{\infty} e^{-z^2}dz$
$=\frac{\sqrt{2 \pi}}{e}$
3. Thanks, Deadstar. Any chance you could expand a bit/point me towards the proof of the lemma?
4. EDIT: HAHAHA I just realized you were after the proof, not an explanation of how I used it... I'll post it up next post, I don't understand it fully though...
Sure. Bear in mind I just learned this lemma so I hope I'm describing it correctly!
So, the idea is, if we have an integral of the form $\int_{-\infty}^{\infty} F(u) dx$ where $u=x - \frac{1}{x}$
This is equal to the integral $\int_{-\infty}^{\infty} F(x) dx$.
Where we replace the expression $x - \frac{1}{x}$ by just x.
So an example... (I think)
$\int_{-\infty}^{\infty} \frac{1}{x - \frac{1}{x}} dx = \int_{-\infty}^{\infty} \frac{1}{x} dx$.
As by the lemma... We can replace the expression $x - \frac{1}{x}$ by just $x$.
So... In the example you posted... I just rewrote $x^2 + \frac{1}{x^2}$ as $\left(x - \frac{1}{x}\right)^2$ and included an $e^{-1}$ outside the integral to cancel out the extra $e$ created.
So...
$e^{-\frac{1}{2}\left(x^2 + \frac{1}{x^2}\right)}$
$= \frac{e}{e}e^{-\frac{1}{2}x^2}e^{-\frac{1}{2x^2}}$
$= \frac{1}{e}e^{-\frac{1}{2}x^2} e^1 e^{-\frac{1}{2x^2}}$
$= \frac{1}{e}e^{-\frac{1}{2}x^2 + 1 - \frac{1}{2x^2}}$
$= \frac{1}{e}e^{-\frac{1}{2}\left(x^2 - 2 + \frac{1}{x^2}\right)}$
$= \frac{1}{e}e^{-\frac{1}{2}\left(x - \frac{1}{x}\right)^2}$
However, I did it the opposite way round and just multiplied the integral by the inverse of the extra term I had.
But anyway... So we have the integral
$\int_{-\infty}^{\infty} \frac{1}{e}e^{-\frac{1}{2} \left(x - \frac{1}{x}\right)^2} = \frac{1}{e}\int_{-\infty}^{\infty} e^{-\frac{1}{2}\left(x - \frac{1}{x}\right)^2}$
and by the Lemma, we can replace the $x - \frac{1}{x}$ term by $x$.
Hence we are left with...
$\frac{1}{e} \int_{-\infty}^{\infty} e^{-\frac{1}{2}x^2}$ which is much more workable as shown in my additional calculations.
This clear things up? I'm still learning about it as I say so hopefully one of the mods could step in and say whether I'm doing things right.
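For completeness, here is a sketch of why the lemma holds (my own attempt, assuming $F$ decays fast enough that all the integrals below converge):
Split the integral at $0$ and substitute $u = x - \frac{1}{x}$ on each half-line. For $x>0$ the inverse branch is $x = \frac{u + \sqrt{u^2+4}}{2}$, while for $x<0$ it is $x = \frac{u - \sqrt{u^2+4}}{2}$; in both cases $u$ runs over all of $(-\infty, \infty)$ as $x$ runs over the half-line. Adding the two Jacobians gives
$\frac{dx_1}{du} + \frac{dx_2}{du} = \frac{1}{2}\left(1 + \frac{u}{\sqrt{u^2+4}}\right) + \frac{1}{2}\left(1 - \frac{u}{\sqrt{u^2+4}}\right) = 1$
so the two pieces together contribute exactly $\int_{-\infty}^{\infty} F(u) \, du$, which is the statement of the lemma.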
5. Almost a year ago, someone posted it as a challenge. I think this thread has passed into oblivion ...
http://www.mathhelpforum.com/math-he...ation-1-a.html
6. Thanks for the link, simplependulum!
And in general - really valuable input guys - appreciate it.
http://stochastix.wordpress.com/tag/functionology/ | # Rod Carvalho
## Posts Tagged ‘Functionology’
### A measure-theoretic definition of synergy?
September 14, 2012
Alice and Bob are fruit-pickers at an orange orchard.
Alice can pick 6 baskets of oranges in one hour. In contrast, Bob can pick 7 baskets of oranges in the same period of time. However, if Alice and Bob work together, then they can pick a total of 15 baskets in one hour. As part of a team:
• Alice is now picking 7.5 baskets per hour, a most impressive increase in productivity of 25%.
• Bob is now picking 7.5 baskets per hour as well, a modest increase in productivity of approximately 7%.
Both Alice and Bob benefit if they work together (though Alice benefits more). We thus have an example of synergy. To use a cliché: “the whole is greater than the sum of its parts”. If Alice and Bob work alone, they can only pick 13 baskets per hour in total, but if they work together they can pick 15 baskets per hour. Everyone is happy.
Let $S := \{\text{Alice}, \text{Bob}\}$, let $2^S$ denote the power set of $S$, and let $\mathbb{R}_0^{+}$ denote the set of nonnegative real numbers. We introduce a productivity function $\pi : 2^S \to \mathbb{R}_0^{+}$, enumeratively defined as follows
• $\pi (\emptyset) = 0$
• $\pi (\{\text{Alice}\}) = 6$
• $\pi (\{\text{Bob}\}) = 7$
• $\pi (\{\text{Alice}, \text{Bob}\}) = 15$
where $\pi (\emptyset) = 0$ because the productivity of the “empty team” is zero. Since we have that
$\pi (\{\text{Alice}, \text{Bob}\}) > \pi (\{\text{Alice}\}) + \pi (\{\text{Bob}\})$
we conclude that we have synergy. Note that the existence of a synergistic or synergetic effect is a property of the productivity function $\pi$. We could attempt to study such property in a more general setting.
__________
Superadditive measures
We now introduce a definition
Definition: Given a finite set $S := \{s_1, s_2, \dots, s_n\}$ and a function $\mu : 2^S \to \mathbb{R}_0^{+}$, if the following conditions are satisfied
• $\mu (\emptyset) = 0$
• $\mu (X \cup Y) \geq \mu (X) + \mu (Y)$ for all sets $X, Y \in 2^S$ such that $X \cap Y = \emptyset$
we say that $\mu$ is a superadditive measure [1].
Using the superadditivity property recursively, one can conclude that
$\mu (X) \geq \displaystyle\sum_{x \in X} \mu (\{x\})$
for every $X \in 2^S$. For example, if $S := \{s_1, s_2, s_3, s_4\}$, then we have that the measure of $X :=\{s_1, s_2, s_3\}$ is
$\begin{array}{rl} \mu (\{s_1, s_2, s_3\}) &= \mu (\{s_1, s_2\} \cup \{s_3\})\\\\ &\geq \mu (\{s_1, s_2\}) + \mu (\{s_3\})\\\\ &\geq \left( \mu (\{s_1\}) + \mu (\{s_2\}) \right) + \mu (\{s_3\})\\\\ &= \mu (\{s_1\}) + \mu (\{s_2\}) + \mu (\{s_3\})\end{array}$
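Since the Alice-and-Bob example lives on a tiny finite set, one can also check the definition by brute force: enumerate every pair of disjoint subsets and test the inequality. Here is a small Haskell sketch of such a check (the names `prod` and `isSuperadditive` are mine, not from any library):

```
import Data.List (subsequences, intersect, union)

-- the productivity function from the example above
prod :: [String] -> Double
prod s = case ("Alice" `elem` s, "Bob" `elem` s) of
           (False, False) -> 0
           (True,  False) -> 6
           (False, True ) -> 7
           (True,  True ) -> 15

-- mu is superadditive iff mu (union of X and Y) >= mu X + mu Y for all disjoint X, Y
isSuperadditive :: Eq a => ([a] -> Double) -> [a] -> Bool
isSuperadditive mu s =
  and [ mu (x `union` y) >= mu x + mu y
      | x <- subsequences s
      , y <- subsequences s
      , null (x `intersect` y) ]
```

Evaluating `isSuperadditive prod ["Alice", "Bob"]` returns `True`, which is just the synergy observed above.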
Frankly, I have (accidentally) opened a can of worms. I started writing this post thinking about synergy and productivity, and I am now drowning in Measure Theory! As it turns out, the union of all my knowledge of Measure Theory is a set of measure zero ;-) Hence, I will abruptly finish this post with a passage from Wang & Klir [1]:
Observe that superadditive measures are capable of expressing a cooperative action or synergy between sets in terms of the measured property, while subadditive measures are capable of expressing inhibitory effects or incompatibility between sets in terms of the measured property. Additive measures, on the other hands, are not able to express either of these interactive effects. They are applicable only to situations in which there is no interaction between sets as far as the measured property is concerned.
I may return to this topic if I happen to have any interesting ideas.
__________
References
[1] Zhenyuan Wang, George J. Klir, Generalized Measure Theory, Springer, 2009.
Tags:Functionology, Measure Theory, Superadditivity, Synergy
Posted in Mathematics | 5 Comments »
### Deciding the bijectivity of finite functions
March 31, 2012
Consider a function $f : \mathcal{X} \to \mathcal{Y}$. Function $f$ is bijective if and only if it is both injective and surjective. We say that $f$ is a finite function if and only if both $\mathcal{X}$ and $\mathcal{Y}$ are finite sets.
In this post, we will restrict our attention to finite functions. For finite functions, we already know how to decide injectivity and surjectivity. Hence, deciding the bijectivity of finite functions is now trivial. In Haskell, we create the following three predicates:
```isInjective :: (Eq a, Eq b) => (a -> b) -> [a] -> Bool
isInjective f xs = all phi [(x1,x2) | x1 <- xs, x2 <- xs]
where phi (x1,x2) = (f x1 /= f x2) || (x1 == x2)
isSurjective :: Eq b => (a -> b) -> [a] -> [b] -> Bool
isSurjective f xs ys = all psi ys
where psi y = or [(f x == y) | x <- xs]
isBijective :: (Eq a, Eq b) => (a -> b) -> [a] -> [b] -> Bool
isBijective f xs ys = (isInjective f xs) && (isSurjective f xs ys)```
Note that predicate isInjective takes as inputs a function and a list (the domain), whereas predicate isSurjective takes as inputs a function and two lists (the domain and the codomain). Both return either True or False. Predicate isBijective is defined using the conjunction of the other two predicates. In this implementation, we separated the decision procedure from the definition of $f$.
__________
Example #1
Let us define $\mathbb{Z}_4 := \{0, 1, 2, 3\}$. Consider the finite function
$\begin{array}{rl} f : \mathbb{Z}_4 &\to \mathbb{Z}_4\\ n &\mapsto 3 n \pmod{4}\end{array}$
where $\pmod{4}$ denotes modulo 4 arithmetic. We know that this function is both injective and surjective. Hence, it is bijective. Here’s a GHCi session (after running the Haskell script above):
```*Main> let f n = (3 * n) `mod` 4
*Main> isInjective f [0..3]
True
*Main> isSurjective f [0..3] [0..3]
True
*Main> isBijective f [0..3] [0..3]
True```
__________
Example #2
Consider now the following finite function
$\begin{array}{rl} g : \mathbb{Z}_4 &\to \mathbb{Z}_4\\ n &\mapsto 2 n \pmod{4}\end{array}$
We already know that function $g$ is neither injective nor surjective. Hence, it is not bijective. Here’s a GHCi session:
```*Main> let g n = (2 * n) `mod` 4
*Main> isInjective g [0..3]
False
*Main> isSurjective g [0..3] [0..3]
False
*Main> isBijective g [0..3] [0..3]
False```
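As one more (purely illustrative) check of how the three predicates differ, we can feed them a map between sets of different sizes, e.g. the inclusion of $\mathbb{Z}_2 := \{0, 1\}$ into $\mathbb{Z}_4$, which is injective but not surjective and hence not bijective:

```
*Main> let h n = n `mod` 4
*Main> isInjective h [0..1]
True
*Main> isSurjective h [0..1] [0..3]
False
*Main> isBijective h [0..1] [0..3]
False
```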
Tags:Bijectivity, Decision Problems, Decision Procedures, Finite Functions, Functionology, Haskell, Injectivity, Surjectivity
### Deciding the surjectivity of finite functions
March 19, 2012
Consider a function $f : \mathcal{X} \to \mathcal{Y}$. We say that $f$ is a finite function if and only if both $\mathcal{X}$ and $\mathcal{Y}$ are finite sets. As mentioned two weeks ago, function $f$ is surjective if and only if
$\forall y \exists x \left( f (x) = y \right)$
where $x$ and $y$ range over sets $\mathcal{X}$ and $\mathcal{Y}$, respectively. We now introduce the surjectivity predicate $\varphi : \mathcal{X} \times \mathcal{Y} \to \{\text{True}, \text{False}\}$, defined by $\varphi (x, y) =\left( f (x) = y\right)$ so that function $f$ is surjective if and only if $\forall y \exists x \, \varphi (x, y)$ evaluates to $\text{True}$.
Let $m := |\mathcal{X}|$ and $n := |\mathcal{Y}|$ denote the cardinalities of sets $\mathcal{X}$ and $\mathcal{Y}$, respectively. Deciding $\forall y \exists x \, \varphi (x, y)$ should then require a total of $m n$ evaluations of predicate $\varphi$. Let us write $\mathcal{X} = \{x_1, x_2, \dots, x_m\}$ and $\mathcal{Y} = \{y_1, y_2, \dots, y_n\}$, and introduce $\Phi \in \{\text{True}, \text{False}\}^{m \times n}$, a Boolean matrix defined as follows
$\Phi = \left[\begin{array}{cccc} \varphi (x_1,y_1) & \varphi (x_1,y_2) & \ldots & \varphi (x_1,y_n)\\ \varphi (x_2,y_1) & \varphi (x_2,y_2) & \ldots & \varphi (x_2,y_n)\\ \vdots & \vdots & \ddots & \vdots\\ \varphi (x_m,y_1) & \varphi (x_m,y_2) & \ldots & \varphi (x_m,y_n)\\\end{array}\right]$
Stating that $f$ is surjective, i.e., $\forall y \exists x \, \varphi (x, y)$ evaluates to $\text{True}$, is equivalent to saying that every column of matrix $\Phi$ contains at least one entry that evaluates to $\text{True}$. We now introduce a new predicate, $\psi : \mathcal{Y} \to \{\text{True}, \text{False}\}$, defined by
$\psi (y) = \exists x \, \varphi (x,y)$
so that $\forall y \exists x \, \varphi (x, y)$ can be rewritten in the form $\forall y \, \psi (y)$, where $y$ ranges over set $\mathcal{Y}$. Note that $\psi (y)$ can be viewed as the disjunction of the $m$ atoms $\varphi (x_1, y), \varphi (x_2, y), \dots, \varphi (x_m, y)$, i.e.,
$\psi (y) = \displaystyle\bigvee_{i =1}^m \varphi (x_i, y)$.
Note also that $\forall y \, \psi (y)$ can be viewed as the conjunction of the $n$ atoms $\psi (y_1), \psi (y_2), \dots, \psi (y_n)$. Thus, we can decide surjectivity using the following procedure:
1. Compute $\psi (y) = \exists x \, \varphi (x,y)$ for all $y \in \mathcal{Y}$.
2. Compute the conjunction $\bigwedge_{j=1}^n \psi (y_j)$. If its truth value is $\text{True}$, then the function is surjective.
Let us now consider two simple examples and implement this decision procedure in Haskell.
__________
Example #1
Let us define $\mathbb{Z}_4 := \{0, 1, 2, 3\}$. Consider the finite function
$\begin{array}{rl} f : \mathbb{Z}_4 &\to \mathbb{Z}_4\\ n &\mapsto 3 n \pmod{4}\end{array}$
where $\pmod{4}$ denotes modulo 4 arithmetic. Note that $f (0) = 0$, $f (1) = 3$, $f (2) = 2$, and $f (3) = 1$. Hence, we easily conclude that $f$ is both injective and surjective (and, thus, is bijective). In Haskell:
```Prelude> let f n = (3 * n) `mod` 4
Prelude> let xs = [0..3]
Prelude> let ys = [0..3]
Prelude> let psi y = or [(f x == y) | x <- xs]
Prelude> all psi ys
True```
which allows us to conclude that $f$ is surjective. Note that we used function or to perform disjunction, and function all to perform universal quantification.
__________
Example #2
Consider now the following finite function
$\begin{array}{rl} g : \mathbb{Z}_4 &\to \mathbb{Z}_4\\ n &\mapsto 2 n \pmod{4}\end{array}$
Note that $g (0) = g (2) = 0$ and $g (1) = g (3) = 2$. Hence, function $g$ is neither injective nor surjective. In Haskell, we have the following:
```Prelude> let g n = (2 * n) `mod` 4
Prelude> let xs = [0..3]
Prelude> let ys = [0..3]
Prelude> let psi y = or [(g x == y) | x <- xs]
Prelude> all psi ys
False```
which allows us to conclude that $g$ is not surjective. Why? The following code returns a list of lists of Booleans where each sublist is a column of matrix $\Phi$ (where each entry is $\Phi_{ij} = (g(x_i) = y_j)$)
```Prelude> [[(g x == y) | x <- xs] | y <- ys]
[[True,False,True,False],[False,False,False,False],
[False,True,False,True],[False,False,False,False]]```
Note that the 2nd and 4th sublists contain only $\text{False}$ truth values. Hence, it is not the case that every column of matrix $\Phi$ contains at least one entry that evaluates to $\text{True}$. Thus, $g$ is not surjective.
Tags:Decision Problems, Decision Procedures, Finite Functions, Functionology, Haskell, Logic, Predicate Logic, Surjectivity
### Deciding the injectivity of finite functions II
March 17, 2012
Last week we considered the following finite function
$\begin{array}{rl} f : \mathbb{Z}_4 &\to \mathbb{Z}_4\\ n &\mapsto 3 n \pmod{4}\end{array}$
where $\mathbb{Z}_4 := \{0, 1, 2, 3\}$ and $\pmod{4}$ denotes modulo 4 arithmetic. We know that function $f$ is injective if and only if
$\forall x_1 \forall x_2 \left( f (x_1) = f (x_2) \implies x_1 = x_2\right)$
where $x_1$ and $x_2$ range over set $\mathbb{Z}_4$. We introduce the injectivity predicate $\varphi : \mathbb{Z}_4 \times \mathbb{Z}_4 \to \{\text{True}, \text{False}\}$, defined by
$\varphi (x_1, x_2) =\left( f (x_1) = f (x_2) \implies x_1 = x_2\right)$
so that function $f$ is injective if and only if the universally-quantified sentence $\forall x_1 \forall x_2 \, \varphi (x_1, x_2)$ evaluates to $\text{True}$.
__________
A naive decision procedure
We can determine the truth value of $\forall x_1 \forall x_2 \, \varphi (x_1, x_2)$ by evaluating the predicate $\varphi$ at all $(x_1, x_2) \in \mathbb{Z}_4 \times \mathbb{Z}_4$, which will require a total of $|\mathbb{Z}_4|^2 = 4^2 = 16$ evaluations. We implemented this decision procedure in Haskell last week. Here’s a rather brief GHCi session:
```Prelude> let f n = (3 * n) `mod` 4
Prelude> let phi (x1,x2) = (f x1 /= f x2) || (x1 == x2)
Prelude> all phi [(x1,x2) | x1 <- [0..3], x2 <- [0..3]]
True```
which allows us to conclude that $f$ is injective. Note that we used the following equivalence
$\left( f (x_1) = f (x_2) \implies x_1 = x_2\right) \equiv \left( f (x_1) \neq f (x_2) \lor x_1 = x_2\right)$
to implement $\varphi$. Can we do better than this? In fact, we can.
__________
A better decision procedure
We introduce matrix $\Phi \in \{\text{True}, \text{False}\}^{4 \times 4}$, defined by
$\Phi = \left[\begin{array}{cccc} \varphi (0,0) & \varphi (0,1) & \varphi (0,2) & \varphi (0,3)\\ \varphi (1,0) & \varphi (1,1) & \varphi (1,2) & \varphi (1,3)\\ \varphi (2,0) & \varphi (2,1) & \varphi (2,2) & \varphi (2,3)\\ \varphi (3,0) & \varphi (3,1) & \varphi (3,2) & \varphi (3,3)\\\end{array}\right]$
Stating that $f$ is injective, i.e., $\forall x_1 \forall x_2 \, \varphi (x_1, x_2)$ evaluates to $\text{True}$, is equivalent to saying that all the $4^2 = 16$ entries of matrix $\Phi$ evaluate to $\text{True}$. Note, however, that the four entries on the main diagonal of $\Phi$ will always evaluate to $\text{True}$, as the implication
$f (x) = f (x) \implies x = x$
evaluates to $\text{True}$ for every $x$. Moreover, from the equivalence
$\left( f (x_1) = f (x_2) \implies x_1 = x_2\right) \equiv \left( f (x_2) = f (x_1) \implies x_2 = x_1\right)$
we can conclude that $\varphi (x_1, x_2) = \varphi (x_2, x_1)$ for all $x_1$ and $x_2$, which tells us that matrix $\Phi$ is symmetric (i.e., $\Phi = \Phi^T$). Therefore, we do not need to evaluate all entries of matrix $\Phi$, only the ones above the main diagonal, as illustrated below
$\left[\begin{array}{cccc} \ast & \varphi (0,1) & \varphi (0,2) & \varphi (0,3)\\ \ast & \ast & \varphi (1,2) & \varphi (1,3)\\ \ast & \ast & \ast & \varphi (2,3)\\ \ast & \ast & \ast & \ast\\\end{array}\right]$
where the symbol $\ast$ denotes “don’t care”, i.e., entries we do not need to evaluate. Hence, instead of evaluating the predicate $\varphi$ a total of $4^2 = 16$ times, we now only need to evaluate it ${4 \choose 2} = 6$ times, which is $37.5 \%$ of the original total. We can generate all pairs $(x_1, x_2)$ with $x_2 > x_1$ using the following Haskell code:
```Prelude> [(x1,x2) | x1 <- [0..3], x2 <- [(x1+1)..3]]
[(0,1),(0,2),(0,3),(1,2),(1,3),(2,3)]```
We thus implement a more efficient decision procedure as follows:
```Prelude> let f n = (3 * n) `mod` 4
Prelude> let phi (x1,x2) = (f x1 /= f x2) || (x1 == x2)
Prelude> all phi [(x1,x2) | x1 <- [0..3], x2 <- [(x1+1)..3]]
True```
which requires less than half of the number of evaluations required by the previous (naive) procedure.
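If we want to reuse the improved enumeration elsewhere, it can be packaged as a predicate in the same style as the other predicates on this blog (a sketch of mine, assuming the domain list contains no repeated elements):

```
import Data.List (tails)

-- injectivity test that evaluates each unordered pair of domain elements once
isInjective' :: (Eq a, Eq b) => (a -> b) -> [a] -> Bool
isInjective' f xs = and [ f x1 /= f x2 | (x1:rest) <- tails xs, x2 <- rest ]
```

The generator `(x1:rest) <- tails xs` pairs each element only with the elements that come after it, so for a domain of size $n$ the comparison `f x1 /= f x2` is made exactly ${n \choose 2}$ times, as in the improved procedure above.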
__________
Conclusions
Given a finite function $f : \mathcal{X} \to \mathcal{Y}$, we introduce the injectivity predicate $\varphi : \mathcal{X} \times \mathcal{X} \to \{\text{True}, \text{False}\}$, defined by
$\varphi (x_1, x_2) =\left( f (x_1) = f (x_2) \implies x_1 = x_2\right)$
so that (finite) function $f$ is injective if and only if $\forall x_1 \forall x_2 \, \varphi (x_1, x_2)$ evaluates to $\text{True}$. Let $n := |\mathcal{X}|$ denote the cardinality of set $\mathcal{X}$. Deciding $\forall x_1 \forall x_2 \, \varphi (x_1, x_2)$ using the naive procedure will require a total of $n^2$ evaluations of the predicate $\varphi$. Since $\varphi (x,x) = \text{True}$ for every $x$, and taking into account that $\varphi (x_1,x_2) = \varphi (x_2,x_1)$, using the improved procedure we can decide injectivity at a cost of only ${n \choose 2} = \frac{n (n-1)}{2}$ evaluations of the predicate $\varphi$. As $n$ approaches infinity, the ratio ${n \choose 2} / n^2$ approaches $1/2$. Hence, for “large” problems, the improved procedure will require approximately half the number of evaluations required by the naive procedure.
Tags:Decision Problems, Decision Procedures, Finite Functions, Functionology, Haskell, Injectivity, Logic, Predicate Logic
### Deciding the injectivity of finite functions
March 7, 2012
Consider a function $f : \mathcal{X} \to \mathcal{Y}$. We say that $f$ is a finite function if and only if both $\mathcal{X}$ and $\mathcal{Y}$ are finite sets. As mentioned last week, function $f$ is injective if and only if
$\forall x_1 \forall x_2 \left( f (x_1) = f (x_2) \implies x_1 = x_2\right)$
where $x_1$ and $x_2$ range over set $\mathcal{X}$. We now introduce the injectivity predicate $\varphi : \mathcal{X} \times \mathcal{X} \to \{\text{True}, \text{False}\}$, defined by
$\varphi (x_1, x_2) =\left( f (x_1) = f (x_2) \implies x_1 = x_2\right)$
so that function $f$ is injective if and only if $\forall x_1 \forall x_2 \, \varphi (x_1, x_2)$ is true. In this post we will restrict our attention to finite functions, for which it is fairly straightforward to devise a decision procedure to decide injectivity. Let $n := |\mathcal{X}|$ denote the cardinality of set $\mathcal{X}$. Then, determining the truth value of $\forall x_1 \forall x_2 \, \varphi (x_1, x_2)$ will require a total of $n^2$ evaluations of the predicate $\varphi$.
__________
Example
Let us define $\mathbb{Z}_4 := \{0, 1, 2, 3\}$. Consider the finite function
$\begin{array}{rl} f : \mathbb{Z}_4 &\to \mathbb{Z}_4\\ n &\mapsto 3 n \pmod{4}\end{array}$
where $\pmod{4}$ denotes modulo 4 arithmetic. Is $f$ injective? Note that $f (0) = 0$, $f (1) = 3$, $f (2) = 2$, and $f (3) = 1$. Hence, we easily conclude that $f$ is injective. Suppose that we do not want to enumerate the images of the elements of $\mathbb{Z}_4$ under $f$; in that case, we can use the following Haskell script to decide that $f$ is injective:
```-- define function f
f :: Integral a => a -> a
f n = (3 * n) `mod` 4
-- define disjunction
(\/) :: Bool -> Bool -> Bool
p \/ q = p || q
-- define implication
(==>) :: Bool -> Bool -> Bool
p ==> q = (not p) \/ q
-- define injectivity predicate
phi :: Integral a => (a,a) -> Bool
phi (x1,x2) = (f x1 == f x2) ==> (x1 == x2)```
where we used the fact that $p \implies q$ is semantically equivalent to $\neg p \lor q$. Let us carry out some testing:
```*Main> -- test function f
*Main> [ f n | n <- [0..3]]
[0,3,2,1]
*Main> -- test disjunction
*Main> [ p \/ q | p <-[True, False], q <- [True, False]]
[True,True,True,False]
*Main> -- test implication
*Main> [ p ==> q | p <-[True, False], q <- [True, False]]
[True,False,True,True]```
So far, so good. Finally, we can determine the truth value of the sentence $\forall x_1 \forall x_2 \, \varphi (x_1, x_2)$ using function all, as follows:
```*Main> all phi [(x1,x2) | x1 <- [0..3], x2 <- [0..3]]
True```
which allows us to conclude that $f$ is injective.
An alternative procedure would be to use function nub from library Data.List to remove duplicates from a given list. Note that $f$ is injective if and only if the list of the images of the elements of $\mathbb{Z}_4$ under $f$ contains no duplicates. Hence, we obtain:
```*Main> let images = [ f n | n <- [0..3]]
*Main> import Data.List
*Main Data.List> length (nub images) == length images
True```
What a hack! This second procedure might seem much terser than the first one. But if we do away with type declarations and refrain from re-defining disjunction and defining implication, the first procedure can be made quite succinct as well:
```Prelude> let f n = (3 * n) `mod` 4
Prelude> let phi (x1,x2) = not (f x1 == f x2) || (x1 == x2)
Prelude> all phi [(x1,x2) | x1 <- [0..3], x2 <- [0..3]]
True```
which includes the definition of function $f$. Three lines only! Three!
Tags:Data.List, Decision Problems, Decision Procedures, Finite Functions, Functionology, Haskell, Injectivity, Logic, Predicate Logic
Posted in Haskell, Logic, Mathematics | 5 Comments » | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 208, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.886616587638855, "perplexity_flag": "head"} |
http://nrich.maths.org/7499&part=note
# Fill Me up Too
### Why do this problem?
This problem follows on from Fill Me Up, and gives students the opportunity to use volume scale factors of enlargement to work out the relationship between the volume and the height of a cone.
### Possible approach
Perhaps start by asking students to sketch the graphs from the problem Fill Me Up. Here is a worksheet showing the containers.
"Imagine we wanted to plot the graphs accurately by working out the equations linking height to volume. Some parts of the containers will be easier to work out than others - which will be easiest? Which will be hardest?"
Take time to discuss students' ideas, relating it back to the graphs sketched in the first problem.
"Let's try to analyse how the height changes as the Pint Glass is filled."
"The Pint Glass can be thought of as part of a cone (a frustum), so I'd like you to consider a cone filling with water first."
Give students this worksheet to work on in groups of 3 or 4. These roles may be useful for students who are not used to working collaboratively on a problem. Make it clear that your expectation is for all students in the group to be able to explain their thinking clearly and that anyone might be chosen to present the group's conclusions at the end of the lesson.
Finally, allow time at the end of the lesson (or two lessons) for groups to present their thinking to the rest of the class.
### Key questions
What happens to the volume of a cone when I enlarge it by a scale factor of 2, 3, 4, 5... k?
If the volume of water is $10$cm$^3$ when the height of the water is $1$cm, what will the volume be when the height is $2, 3, 4...x$cm?
How could this be represented graphically?
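A possible worked summary of the intended answer (for a cone filling up from its apex): the water always forms a cone similar to the full one, so multiplying the height by $x$ multiplies the volume by $x^3$. With the numbers in the key question this gives
$$V(x) = 10x^3 \ \text{cm}^3, \qquad \text{or equivalently} \qquad h(V) = \left(\frac{V}{10}\right)^{1/3}\text{cm},$$
so the height-against-volume graph rises steeply at first and then levels off.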
### Possible extension
There are two extension tasks suggested in the problem: analysing the inverted cone is a reasonably straightforward extension, but analysing the spherical flask is much much more challenging.
Immersion and Brimful both offer extension possibilities for considering functional relationships relating to volume.
### Possible support
Growing Rectangles offers a good introduction to proportional relationships between length, area and volume.
http://mathhelpforum.com/advanced-algebra/186768-mathbb-q-frac-b-1-b-mathbb-z-b-neq-0-group.html | # Thread:
1. ## (\mathbb{Q}, +) by |\frac{a}{b}| < 1 a,b\in\mathbb{Z} and b\neq 0 is this a group
$(\mathbb{Q}, +)$ by $\left |\frac{a}{b}\right | < 1, \ a,b\in\mathbb{Z}$ and $b\neq 0$ is this a group.
Identity is 0.
Associative is true.
Inverse (I am not sure but I don't think this works).
If we let $-\frac{a}{b}$ be the inverse, we have a problem since
$\displaystyle\left |\frac{a}{b}\right |=\begin{cases}\frac{a}{b}, \ \frac{a}{b}\geq 0\\-\frac{a}{b}, \ \frac{a}{b}< 0\end{cases}$
So is this not a group or is this approach to the inverse wrong?
2. ## Re: (\mathbb{Q}, +) by |\frac{a}{b}| < 1 a,b\in\mathbb{Z} and b\neq 0 is this a group
Originally Posted by dwsmith
$(\mathbb{Q}, +)$ by $\left |\frac{a}{b}\right | < 1, \ a,b\in\mathbb{Z}$ and $b\neq 0$ is this a group.
Identity is 0.
Associative is true.
Inverse (I am not sure but I don't think this works).
If we let $-\frac{a}{b}$ be the inverse, we have a problem since
$\displaystyle\left |\frac{a}{b}\right |=\begin{cases}\frac{a}{b}, \ \frac{a}{b}\geq 0\\-\frac{a}{b}, \ \frac{a}{b}< 0\end{cases}$
So is this not a group or is this approach to the inverse wrong?
Are you asking whether the set $A=\mathbb{Q}\cap(-1,1)$ is a subgroup of $\mathbb{Q}$? Obviously not, since $.5+.5=1\notin A$.
3. ## Re: (\mathbb{Q}, +) by |\frac{a}{b}| < 1 a,b\in\mathbb{Z} and b\neq 0 is this a group
Originally Posted by Drexel28
Are you asking whether the set $A=\mathbb{Q}\cap(-1,1)$ is a subgroup of $\mathbb{Q}$? Obviously not, since $.5+.5=1\notin A$.
I am trying to prove if it is a group or not.
4. ## Re: (\mathbb{Q}, +) by |\frac{a}{b}| < 1 a,b\in\mathbb{Z} and b\neq 0 is this a group
Originally Posted by dwsmith
I am trying to prove if it is a group or not.
And, it's not, since it isn't closed under addition. (The inverses are actually fine: if $\left|\frac{a}{b}\right|<1$ then $\left|-\frac{a}{b}\right|=\left|\frac{a}{b}\right|<1$ as well; closure is the axiom that fails, e.g. $\frac{1}{2}+\frac{1}{2}=1$.)
http://math.stackexchange.com/questions/225990/finding-a-differentiable-function?answertab=votes | # Finding a differentiable function
I need to find an infinitely differentiable function from $\mathbb{R}$ to $\mathbb{R}$ which is zero for all negative values and nonzero fo all positive values.
Thank you in advance
-
A standard example is $f(x)=e^{-1/x}$ if $x>0$ and $f(x)=0$ otherwise. – wj32 Oct 31 '12 at 11:22
## 1 Answer
$$f(x)=\left\{\begin{array}{rcl} 0 &\mbox{if} & x\leq 0 \\ \exp\left(-\frac{1}{x^2}\right)&\mbox{if}&x>0\end{array}\right.$$ is a good candidate. For any $n\in\mathbb{N}$ we have: $$\frac{d^n}{dx^n}\exp\left(-1/x^2\right) = p(1/x) \exp\left(-1/x^2\right)$$ where $p$ is a polynomial, so: $$\lim_{x\to 0^+} \frac{d^n}{dx^n}\exp\left(-1/x^2\right) = \lim_{z\to +\infty} p(z)\,e^{-z^2} = 0.$$
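To spell out the inductive step behind that claim (a sketch): if for $x>0$ we have $\frac{d^{n}}{dx^{n}}e^{-1/x^{2}} = p_n(1/x)\,e^{-1/x^{2}}$ for some polynomial $p_n$, then differentiating once more gives $$\frac{d^{n+1}}{dx^{n+1}}e^{-1/x^{2}} = \left(-\frac{1}{x^{2}}\,p_n'(1/x) + \frac{2}{x^{3}}\,p_n(1/x)\right)e^{-1/x^{2}} = p_{n+1}(1/x)\,e^{-1/x^{2}},$$ where $p_{n+1}(t) = -t^{2}p_n'(t) + 2t^{3}p_n(t)$ is again a polynomial. Combined with the limit above (and the analogous limit $\lim_{x\to 0^+} f^{(n)}(x)/x = 0$, which gives $f^{(n+1)}(0)$ directly from the difference quotient), this shows by induction that every derivative exists and vanishes at $0$, so $f$ is infinitely differentiable.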
-
Is there a way to show that this function is infinitely differentiable ? – user43418 Oct 31 '12 at 11:23
http://math.stackexchange.com/questions/160554/a-lemma-on-the-integral-closure-of-a-noetherian-domain-of-dimension-1 | # A lemma on the integral closure of a Noetherian domain of dimension 1
I need to prove the following lemma(?) which is motivated by this and this.
Lemma Let $A$ be a Noetherian domain of dimension 1. Let $K$ be the field of fractions of $A$. Let $B$ be the integral closure of $A$ in $K$. Suppose $B$ is finitely generated as an $A$-module. Let $\mathfrak{p}$ be a maximal ideal of $A$. Let $B_{\mathfrak{p}}$ be the localization of B with respect to the multiplicative subset $A - \mathfrak{p}$. Then $K^*/(B_{\mathfrak{p}})^*$ is isomorphic to $\bigoplus K^*/(B_{\mathfrak{P}})^*$, where $\mathfrak{P}$ runs over all the maximal ideals of $B$ lying over $\mathfrak{p}$.
EDIT[Jun 26, 2012] Using this lemma, we can prove the following result.
Proposition Let $A$ be a Noetherian domain of dimension 1. Let $K$ be the field of fractions of $A$. Let $B$ be the integral closure of $A$ in $K$. Suppose $B$ is finitely generated as an $A$-module. Let $I(B)$ be the group of invertible fractional ideals of $B$. Then $I(B)$ is canonically isomorphic to $\bigoplus_{\mathfrak{p}} K^*/(B_{\mathfrak{p}})^*$. Here, $\mathfrak{p}$ runs over all the maximal ideals of $A$.
Proof: Since $B$ is a Noetherian domain of dimension 1, by this, $I(B)$ is canonically isomorphic to $\bigoplus_{\mathfrak{P}} I(B_{\mathfrak{P}})$, where ${\mathfrak{P}}$ runs over all the maximal ideals of $B$. Since $B_{\mathfrak{P}}$ is a local domain, by this, $I(B_{\mathfrak{P}})$ is the group of principal fractional ideals of $B_{\mathfrak{P}}$. Hence $I(B_{\mathfrak{P}})$ is canonically isomorphic to $K^*/(B_{\mathfrak{P}})^*$. Hence $I(B)$ is canonically isomorphic to $\bigoplus_{\mathfrak{P}} K^*/(B_{\mathfrak{P}})^*$. Hence by the above lemma, $I(B)$ is canonically isomorphic to $\bigoplus_{\mathfrak{p}} K^*/(B_{\mathfrak{p}})^*$. QED
-
## 1 Answer
Since $B_{\mathfrak{p}}$ is an integrally closed Noetherian domain of dimension 1, it is a Dedekind domain. Since it has only finitely many maximal ideals, it is a PID. Let $R = B_{\mathfrak{p}}$. By this, $K^*/R^*$ is canonically isomorphic to $\bigoplus_M K^*/(R_M)^*$, where $M$ runs over all the maximal ideals of $R$. Since $M$ is of the form $\mathfrak{P}R$, where $\mathfrak{P}$ is a maximal ideal of $B$ lying over $\mathfrak{p}$, $R_M$ is canonically isomorphic to $B_{\mathfrak{P}}$. Taking the direct sum over these finitely many $\mathfrak{P}$ gives $K^*/(B_{\mathfrak{p}})^* \cong \bigoplus_{\mathfrak{P}} K^*/(B_{\mathfrak{P}})^*$, which is the claim of the lemma.
-
http://nrich.maths.org/5805 | # What Are Numbers?
##### Stage: 2, 3, 4 and 5
Article by Toni Beardon
The Question
What are numbers? Most people, even five year olds, can answer that question to their own satisfaction. Many different answers are given to this question, all more or less acceptable within the discourse taking place. Although negative, fractional and irrational numbers were accepted by scholars in Europe in the sixteenth century, and earlier in ancient civilisations in other parts of the world, until the nineteenth century negative numbers and complex numbers were often disparagingly referred to as absurd numbers and imaginary numbers. These numbers now play an essential part in mathematics, even school mathematics, and schoolchildren learn about them.
This brief descriptive article is intended as light reading for the general reader. Here we shall explore the question in a very informal way. We shall discuss different sets of numbers, including quaternions and a brief mention of Clifford Algebras, starting with counting numbers and meeting new sets of numbers, each set containing within it the set of numbers discussed so far. Quaternions are explored in more detail in the NRICH problem Two and Four Dimensional Numbers
In order to understand why there are different sorts of numbers, and what they are, we need to consider how young people meet the familiar number systems and how we broaden our ideas of number as we learn more arithmetic. Many small children are proud of themselves when they can count to one hundred and a little later they have an experience of awe and wonder when they first appreciate that counting goes on for ever with no end. These children have a familiarity with counting and the natural numbers, even with the concept of infinity, before they meet a number system. Roughly speaking a number system is a set of entities that can be combined, according to agreed rules, by the operations of addition, subtraction, multiplication and division, always producing answers that are in the set.
Rules of Arithmetic
Adding and sharing are transactions that we engage in early in our lives and they take us beyond simple counting into the realm of arithmetic. So we should define numbers to be more than labels for naming and recording the size of collections of objects that we have counted. We need to think of numbers as entities that can be combined according to an agreed set of rules which we call arithmetic. The more we know about and use this arithmetic, the more we appreciate that it can be generalised and refined into more and more useful mathematical tools for solving human problems of all sorts.
If we use numbers to describe the size of a collection of objects we need a number for a set that has nothing in it. So we need to expand our concept of number to include zero. While the use of numbers, including place value, dates back at least five thousand years, scholars in Europe were still debating whether zero could be a number as recently as five hundred years ago.
Inverses
Once we have an operation which combines two numbers to give another number, can we undo that process? If we can add 7 what is the operation on the answer which restores it to the original number?
Subtraction is another natural idea based on concrete experience, not of increasing the size of a collection of objects by combining two collections, but of reducing the size of a collection by removing some of the objects. If we have $5$ coins and we need $12$ to buy something we can ask how many more coins do we need, which number do we add to five to make $12$; this is equivalent to finding the answer to $12 - 5$, but what number do we add to $13$ to make $9$ or what is the answer to $9 - 13$? If we say that there is no answer to such subtractions then it is only because we do not know about negative numbers. Once negative numbers come onto the scene we have many uses for them.
We can do arithmetic without knowing the mathematical language and nothing so far is beyond the experience of a small child learning to read a thermometer on a wintry day.
We think of any subtraction as simply the addition of two integers so subtraction is not essential to recording arithmetic operations. For example $9 - 13 = 9 + (-13) = -4$. The integer $-13$ is called the inverse of the $+13$ because $(+13) + (-13) = 0$. Every integer has an inverse such that the number added to its inverse gives zero. We are already working with the structure and using the rules which define the arithmetic involved in adding integers. This is an example of a mathematical structure called a group .
From the Natural Numbers to the Integers
Another way to explain the evolution of thinking that extends ideas of counting and addition is to say that if we want to be able to solve all equations of the form $a + x = b$, where $a$ and $b$ are given and we have to find $x$, then for all such equations to have solutions we need to work in the set of integers. For example $9 + x = 5$ has no solution within the set of counting numbers.
Rational Numbers
In the same way as horizons are extended to include negative numbers it is also everyone's experience to learn that fractions are also numbers. Mathematicians call these numbers rational numbers . Children are interested in 'fair shares' even before they start school so division is also based on concrete experience.
Addition and subtraction are inverse operations inextricably connected. Similarly multiplication and division are inverse operations in the sense that $5 \times 4 = 20$ and $20 \div 4 = 5$.
It is not until we can work with the set of rational numbers that every addition, subtraction, multiplication and division of two numbers in the set gives an answer that is also a number in the set giving a 'self contained' number system. The rules for the arithmetic of rational numbers are simple. This set of rules defines what mathematicians call a field.
(1) The set of rational numbers is closed under addition, and associative, the rational number zero is the additive identity and every rational number has an additive inverse. We say rational numbers form a commutative group under addition.
(2) The set of rational numbers, leaving out the number zero, is closed under multiplication, and associative, the rational number one is the multiplicative identity and every rational number in this set has a multiplicative inverse. We say the rational numbers, leaving out the number zero, form a commutative group under multiplication.
(3) When we add and multiply rational numbers we use the distributive law. For example $$3\times (4 + 5) = 3 \times 4 + 3 \times 5 = 27.$$
We need to extend our ideas of the arithmetic of whole numbers to include fractions (rational numbers) because we cannot solve all equations of the form $ax = b$, where $a$, $b$ and $x$ are integers, $a$ and $b$ are given (with $a \neq 0$) and we have to find $x$. If $a, b$ and $x$ are rational numbers then all such equations have solutions.
Irrational Numbers
Many people use only rational numbers because, even though no rational number will give exact measurements of even simple shapes, exact measurements can be approximated to a high degree of accuracy by rational numbers. The length of the diagonal of a unit square is $\sqrt 2$ and this is an irrational number, one that cannot be written as the quotient of two integers. It is approximately $1.414$ but it cannot be given exactly however many decimal places we use. See the interactive proof sorter for the proof that $\sqrt 2$ is irrational.
Real Numbers
The rational and irrational numbers together make up the real numbers. Each real number corresponds to exactly one point on a line and all the points on that line are represented by real numbers. We call this line the real line . The real numbers are equivalent to one dimensional vectors and, together with addition and multiplication, form a field. We have now discussed two different examples of fields of numbers, the rationals and the reals.
Complex Numbers
In the field of real numbers we can solve the equation $x^2 = a$ only when $a$ is positive or zero, but not when $a$ is negative. Real numbers are good enough for many mathematical purposes but clearly they have limitations. It is necessary to recognise the existence of two dimensional numbers, the complex numbers, in order to solve all quadratic equations. See Root Tracker and Cubic Tracker.
In 1799 Gauss proved the Fundamental Theorem of Algebra: every polynomial equation over the complex numbers has a full set of complex solutions. Then, and finally, complex numbers were completely accepted as 'proper' numbers. This theorem means that every quadratic equation has two solutions, every cubic has three and so on.
What else can we do with complex numbers? Complex numbers are two dimensional; whereas real numbers correspond to points on a line, complex numbers correspond to points in the plane. The complex number written as $x+yi$ corresponds to the point in the plane with coordinates $(x,y)$. Let us examine the significance of this mysterious $i$ referred to as an 'imaginary' number. What role does it play?
Complex Numbers and Rotations
Think about taking a real number and finding its additive inverse, say $5$ and $-5$. To move from any real number to its additive inverse we must multiply by $-1$, or to think of it another way, we must move from the positive real axis to the negative real axis, a half turn about the origin so the point $(5, 0)$ moves to $(-5,0)$, that is the complex number $5+ [0\times i]$ moves to $-5+[0\times i]$. No obvious clue there as to the role of $i$ but let's probe a bit further and think more about rotations. Two quarter turns make a half turn so what happens when we rotate the plane by a quarter turn about the origin? The point $(5, 0)$ moves to $(0,5)$, that is the complex number $5 + [0\times i]$ moves to the complex number $0 + 5 i$ which appears to be equivalent to multiplying by $i$, that is $i(5 + [0\times i] ) = 0 + 5i$. (It is not necessary to write in $[0\times i]$ here but we do so to make clear how the mappings of the complex numbers correspond to the mappings of the points in the plane including the points on the real line which correspond to real numbers.)
We have seen that a real number is mapped to its additive inverse by multiplying by $-1$. So if a quarter turn of the complex plane is equivalent to multiplying by i then a half turn (that is two quarter turns) must be equivalent to multiplying by $i$ twice which must be the same as multiplying by -1 and this tells us that $i^2 = -1$. Taking $i^2 = -1$ this fits in with moving $(5, 0)$ to $(-5, 0)$, or correspondingly, $5 + [0\times i]$ to $i^2(5 + [0\times i]) = -5 + [0\times i]$. A quarter turn moves $(0,5)$ to $(-5,0)$ or equivalently $0 + 5i$ to $-5+ [0\times i]$ and this time multiplying by $i$ gives $i(0 + 5 i) = 5i^2 + [0\times i] = -5 + [0\times i]$.
All this works beautifully because $i^2 = -1$. This little complex number, corresponding to the point $(0,1)$, not only allows all polynomial equations to have solutions but gives a powerful tool for working with rotations. In the form $x+yi$ complex numbers can be added, subtracted, multiplied and divided according to the same rules as elementary arithmetic, that is the complex numbers form a field. We now have three fields of numbers, the complex numbers, the reals and the rationals.
Another glimpse of the beauty of complex numbers is seen in the formula $$e^{i\theta}=\cos \theta + i\sin \theta$$ linking geometry, trigonometry and analysis. This formula involves the important real number $e$ as well as the complex number $i$. Students usually meet this formula and use it in their last year in school if they are preparing to study mathematics, physics or engineering in higher education. In this formula the angle $\theta$ is given in radians and not in degrees but the conversion is a simple matter because $\pi$ radians is $180^o$. If we put $\theta = \pi$ we have the very beautiful result $$e^{i\pi} = \cos \pi + i\sin \pi$$ which connects the important numbers $e$, $i$, $\pi$ and -1 in the simple little formula $$e^{i\pi} = -1.$$ If we put $\theta = {\pi \over 2}$ we have $$e^{i{\pi \over 2}} = \cos {\pi \over 2} + i\sin {\pi \over 2} = i$$ which suggests that multiplying by $$e^{i\theta}=\cos \theta + i\sin \theta$$ might be equivalent to rotating the complex plane by an angle $\theta$ ( as this works for quarter turns and $i$) and this is indeed the case.
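A small computational illustration of these rotations (my addition, not part of the original article): multiplying a complex number by $i$ gives a quarter turn about the origin, and multiplying by $e^{i\theta}$ gives a turn through the angle $\theta$.

```python
# Illustration only: rotations of the complex plane by multiplication.
import cmath

z = 5 + 0j                        # the point (5, 0)

quarter_turn = 1j * z             # multiply by i: (5, 0) -> (0, 5)
half_turn = 1j * (1j * z)         # multiply by i twice, i.e. by -1: (5, 0) -> (-5, 0)

theta = cmath.pi / 3              # a turn of 60 degrees, given in radians
rotated = cmath.exp(1j * theta) * z

print(quarter_turn)               # 5j
print(half_turn)                  # (-5+0j)
print(rotated)                    # (2.5000...+4.3301...j) = 5(cos 60° + i sin 60°)
```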
If one dimensional real numbers can be generalised to two dimensional complex numbers and both systems form fields, the obvious question is "what about higher dimensional numbers?"
Three dimensional numbers do not exist
Three dimensional vectors are of fundamental importance in applied mathematics. They can be added and subtracted but, although there are two different types of vector multiplication, multiplicative inverses do not exist and so the set of three dimensional vectors does not form a field and cannot be a set of numbers. To understand the significance of the two alternative definitions of vector multiplication it is necessary to know about four dimensional quaternions.
Quaternions
If we think about rotations of the plane, and $i$ as the key to understanding the essence of complex numbers, what about rotations of 3 dimensional space? For rotations of the plane that map the plane to itself there is only one possible axis of rotation which must be perpendicular to the plane. [Without loss of generality we can take the centre of rotation to be at the origin.] However in 3 dimensional space there are infinitely many possible axes of rotation through the origin. If we want to specify an axis of rotation we need the three coordinates for one other point on the axis. Whereas in the complex plane we only need one parameter to specify a rotation (the angle of rotation), in 3 dimensional space we need four parameters (three to specify the axis of rotation and one to specify the angle of rotation). This takes us to four dimensions and explains why there are two and four dimensional numbers, but not three dimensional numbers, and why quaternions provide a very efficient way to work with rotations of 3 dimensional space.
Quaternions, discovered by the Irish mathematician Sir William Rowan Hamilton in 1843, have all the properties of a field except that multiplication is not commutative. Moreover quaternions incorporate three dimensional vectors and much of vector algebra and provide simple equations for reflections and rotations in three dimensional space.
As applied mathematics and physics regularly deal with motion in space, quaternions are very useful. It is a quirk of history that this was not perhaps fully appreciated at first and vector algebra was invented as a tool to work with motion in 3 dimensional space and concentrate attention on only 3 dimensions. However a lot of the simplicity of the equations involving quaternions was lost as well as sight of the underlying reasons for defining scalar and vector multiplication in the way they are defined. Nowadays quaternions have come into their own again as an important tool frequently used in applied mathematics and theoretical physics. Quaternions are also now widely used in programming computer graphics because the quaternion algebra involved in transformations in 3 dimensions is so simple.
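A minimal sketch (my addition, not from the original article) of how a unit quaternion $q$ rotates a 3-dimensional vector $v$ through the product $q\,v\,q^*$ — the computation that makes quaternions so convenient in computer graphics:

```python
import math

def quat_mul(q, r):
    """Hamilton product of quaternions written as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(vector, axis, angle):
    """Rotate a 3-d vector by `angle` radians about the unit vector `axis`, using q v q*."""
    half = angle / 2.0
    q = (math.cos(half),) + tuple(math.sin(half) * c for c in axis)
    q_conj = (q[0], -q[1], -q[2], -q[3])
    v = (0.0,) + tuple(vector)                      # the vector embedded as a "pure" quaternion
    w, x, y, z = quat_mul(quat_mul(q, v), q_conj)
    return (x, y, z)

# A quarter turn about the z-axis sends (1, 0, 0) to (0, 1, 0), up to rounding error.
print(rotate((1, 0, 0), (0, 0, 1), math.pi / 2))
```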
See the NRICH problems Quaternions and Reflections and Quaternions and Rotations.
Higher Dimensional Numbers
What about higher dimensional numbers? Number theorists work with Clifford Algebras, named after William Clifford (1845-1879), which generalise complex numbers and quaternions to dimensions 2, 4, 8, 16... and higher dimensions (all powers of 2). Some of the properties of a field are lost, for example quaternions are not commutative under multiplication, but Clifford algebras are associative. Clifford algebras have important applications in a variety of areas including geometry and theoretical physics. Other generalisations are studied for which multiplication is not associative.
This 'big picture' discourse has ranged, without getting too technical, from kindergarten mathematics to the fringe of research into analysis and applications of number. There is a wealth of literature to take the reader further at every level and the links below may provide a useful start on such a journey of discovery.
http://math.stackexchange.com/questions/36756/need-help-using-the-inclusion-exclusion-principle-to-count-the-overlaps-between | # Need help using the Inclusion-Exclusion Principle to count the overlaps between 4 sets?
In a class, 18 students like to play chess, 23 like to play soccer, 21 like biking, and 17 like jogging. The number of those who like to play both chess and soccer is 9. We also know that 7 students like chess and biking, 6 students like chess and jogging, 12 like soccer and biking, 9 like soccer and jogging, and finally 12 students like biking and jogging. There are 4 students who like chess, soccer, and biking, 3 who like chess, soccer, and jogging, 5 who like chess, biking, and jogging, and 7 who like soccer, biking, and jogging. Finally, there are 3 students who like all four activities. In addition, we know that every student likes at least one of these activities. How many students are there in the class?
I know this is a case of using the inclusion-exclusion principle, but I'm a little overwhelmed, given that there are 4 sets. Can someone please explain this to me? Thanks!
-
This is absolutely not something to be tagged under set-theory. At most it might fit under elementary-set-theory. I would still protest, as this is a question about discrete mathematics more than it is about set theory. – Asaf Karagila May 3 '11 at 18:42
Oh that was my mistake. I started filling in a tag about sets and clicked the wrong one. Sorry! – Chloe May 3 '11 at 18:43
It's fine, just try not to be so harsh on the Save Edits the next time. :-) – Asaf Karagila May 3 '11 at 18:44
## 2 Answers
Take $S$ to be the set of kids who play soccer; $C$ of kids who play chess; $B$ for biking; and $J$ for jogging.
If you just add up $|S|+|C|+|J|+|B|$, then you are overcounting: any kid who likes more than one sport is getting counted as many times as there are sports he likes. So you need to compensate for that.
You can compensate for those who like exactly two sports by subtracting the six pairwise intersections: $|S\cap C|$, $|S\cap J|$, $|S\cap B|$, $|C\cap J|$, $|C\cap B|$, and $|J\cap B|$. Now, you've counted everyone who likes just one sport once, everyone who likes exactly two sports once (you counted them twice to begin with, and have subtracted them once now).
But you've overcompensated for kids who like three or more sports: for kids who like three sports, you counted them three times when you counted $|S|+|C|+|J|+|B|$, and then you subtracted them three times when you subtracted the pairwise intersections (since they are in all three pairwise intersections). For kids who like all four sports, you counted them 4 times to begin with but now you subtracted them six times... we'll deal with those later...
To compensate for kids who like exactly three sports, which you have now not counted at all, we just need to add the four 3-fold intersections: $|S\cap C\cap J|$, $|S\cap C\cap B|$, $|S\cap J\cap B|$ and $|C\cap J\cap B|$. We counted them 3 times first, then subtracted them three times, now add them once. Great.
But now what about the kids who like all four sports? You counted them four times at first; then you subtracted them six times when you dealt with pairs; and now you've added them four times when you counted 3-fold intersections; that means that you've counted them 8 times and subtracted them six times, so they are still overcounted. You need to count how many kids like all four sports and subtract them to get the right total. So you need to subtract $|S\cap C\cap J\cap B|$.
-
Thank you. I really just needed it laid out like that. Working on solving it now. – Chloe May 3 '11 at 18:57
@Chloe: Inclusion-Exclusion just goes like that. First add the 1-folds; then subtract the 2-folds; then add the 3-folds; then subtract the 4-folds; then add the 5-folds; then subtract the 6-folds. Etc. The number of $k$-folds when you have $n$ categories to begin with is $\binom{n}{k} = \frac{n!}{k!(n-k)!}$, the number of ways of choosing $k$ out of $n$ possibilities. – Arturo Magidin May 3 '11 at 19:01
+1: This is a great answer. It's funny how the math works out so that you just go +/- down the line. – Michael Chen May 3 '11 at 23:47
Given sets $X_1, X_2, X_3, X_4$ we have $$|\cup X_i|=\sum_i|X_i|-\sum_{i<j}|X_i\cap X_j|+\sum_{i<j<k}|X_i\cap X_j\cap X_k|-|\cap X_i|$$ so in your example this is $$(18+23+21+17)-(9+7+6+12+9+12)+(4+3+5+7)-3=40$$
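A quick arithmetic check of this formula (a small Python sketch I have added, not part of the original answer):

```python
# Check the inclusion-exclusion count with the numbers from the question.
singles = [18, 23, 21, 17]        # chess, soccer, biking, jogging
pairs   = [9, 7, 6, 12, 9, 12]    # the six pairwise intersections
triples = [4, 3, 5, 7]            # the four triple intersections
quad    = 3                       # students who like all four activities

total = sum(singles) - sum(pairs) + sum(triples) - quad
print(total)                      # 40 students in the class
```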
-
http://math.stackexchange.com/questions/37005/manipulating-formal-power-series-help-please | # Manipulating formal power series help please
$\displaystyle{\frac{18}{(1+3x)^3}}$ $=\sum_{n=0}^\infty n(n-1)(-3)^n x^{n-2}$
If I got up to this, how could I get $\displaystyle{\frac{(1-2x)}{(1+3x)^3}}$?
When I tried to multiply both sides, some people say to use $n-m$ and some say $n+m$ for the $x^m$ term.
Could someone kindly show me the working out please?
My working out:
$\displaystyle{\frac{(1-2x)}{(1+3x)^3}}$ $=\sum_{n=0}^\infty [((n)(n-1)(-3)^n)/18] x^{(n-2)}$
Multiplying by (1-2x) on both sides I got
= $\sum_{n=0}^\infty [((n)(n-1)(-3)^n)/18] x^{(n-2)}$ - $2\sum_{n=0}^\infty [((n-1)(n-2)(-3)^{(n-1)}))/18] x^{(n-2)}$
$=\sum_{n=0}^\infty [(5n-4)(n-1)(-3)^n /54 ] x^{(n-2)}$
-
It is not clear what $m$ is. Multiply both sides and show us your calculation. – Phira May 4 '11 at 18:51
It is rather hard to see what you mean by your third sentence "When i tried...". – Mariano Suárez-Alvarez♦ May 4 '11 at 18:53
Working out shown – Jono May 4 '11 at 19:01
You are not taking care of the start of the sum range. I strongly suggest to not use a formula "n+m", but to actually multiply by $x$ and then shift the summation range by substituting $k-1$ for $n$. This will show you how to modify the summation range and the terms. – Phira May 4 '11 at 19:12
@Shai It is possible to interpret the first equation properly, but I agree that this is one more instance of not paying attention to the summation range. – Phira May 4 '11 at 19:14
## 1 Answer
So we are given the (interesting) equality $$\frac{{18}}{{(1 + 3x)^3 }} = \sum\limits_{n = 2}^\infty {n(n - 1)( - 3)^n x^{n - 2} } ,$$ for $x$ in a neighborhood of $0$. Hence, $$\frac{{18}}{{(1 + 3x)^3 }} = \sum\limits_{n = 0}^\infty {(n + 2)(n + 1)( - 3)^{n + 2} x^n } ,$$ and in turn $$\frac{1}{{(1 + 3x)^3 }} = \sum\limits_{n = 0}^\infty {\frac{{(n + 2)(n + 1)( - 3)^n }}{2}x^n } .$$ Thus, $$\frac{{ - 2x}}{{(1 + 3x)^3 }} = \sum\limits_{n = 0}^\infty {\frac{{ - 2(n + 2)(n + 1)( - 3)^n }}{2}x^{n + 1} } = \sum\limits_{n = 1}^\infty {\frac{{ - 2(n + 1)n( - 3)^{n - 1} }}{2}x^n } .$$ Therefore, $$\frac{{1 - 2x}}{{(1 + 3x)^3 }} = 1 + \sum\limits_{n = 1}^\infty {\frac{{( - 3)^n }}{2}x^n \bigg[(n + 2)(n + 1) + \frac{2}{3}(n + 1)n \bigg]},$$ or $$\frac{{1 - 2x}}{{(1 + 3x)^3 }} = 1 + \sum\limits_{n = 1}^\infty {\frac{{( - 3)^n }}{2}\bigg[\frac{5}{3}n^2 + \frac{{11n}}{3} + 2\bigg]x^n }$$ (confirmed numerically).
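A verification sketch (my addition, not part of the original answer): sympy's Taylor expansion of $\frac{1-2x}{(1+3x)^3}$ matches the closed form for the coefficients derived above.

```python
import sympy as sp

x, n = sp.symbols('x n')

f = (1 - 2*x) / (1 + 3*x)**3
coeff = sp.Rational(1, 2) * (-3)**n * (sp.Rational(5, 3)*n**2 + sp.Rational(11, 3)*n + 2)

# Taylor coefficients of f up to x^5, constant term first
series_coeffs = sp.Poly(sp.series(f, x, 0, 6).removeO(), x).all_coeffs()[::-1]

for k in range(1, 6):
    assert series_coeffs[k] == coeff.subs(n, k)

print(series_coeffs)   # [1, -11, 72, -378, 1755, -7533]
```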
-
Oh ~ that 2nd line makes alot more sense now. – Jono May 4 '11 at 20:50
http://quant.stackexchange.com/questions/tagged/mean | # Tagged Questions
The mean tag has no wiki summary.
0 answers
61 views
### Mean-variance minimizer
I am working on a project that involves pricing european call options in incomplete markets. Now I need to find a unique measure $Q^*$ such that $Q^* = \min_{M_e} E_Q [V(T)-F(w)]^2 = \min_{u} E_Q$ ...
3 answers
988 views
### R code for Ornstein-Uhlenbeck process
Can anyone help me with some R code to run an Ornstein-Uhlenbeck process? (a minimal simulation sketch follows at the end of this listing)
1 answer
784 views
### Mean Reversion Time Frame
I am running a mean reversion strategy. I have a question with regards to half-life; I have heard of the OU process to determine the half-life but it's not giving me the kind of result. Can anyone ...
2 answers
1k views
### Mean reverting Indicator
I'm looking for an indicator which tells me if it's a good time to use mean reverting type quantitative trading strategies. In order to do so I look at the market (the few hundred stocks I trade) and ...
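The "R code for Ornstein-Uhlenbeck process" question above asks for a simulation; here is a minimal Euler–Maruyama sketch (my addition — written in Python rather than R, with made-up parameter values, but it translates line for line):

```python
import numpy as np

# dX = theta*(mu - X) dt + sigma dW, discretised with Euler-Maruyama
theta, mu, sigma = 2.0, 0.0, 0.3      # demo parameters only
dt, n_steps, x0 = 1.0 / 252, 1000, 1.0

rng = np.random.default_rng(0)
x = np.empty(n_steps + 1)
x[0] = x0
for i in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt))          # Brownian increment
    x[i + 1] = x[i] + theta * (mu - x[i]) * dt + sigma * dw

print(x[:5])
print("half-life of mean reversion:", np.log(2) / theta)   # relevant to the half-life question above
```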
http://math.stackexchange.com/questions/267651/remainders-of-primes?answertab=oldest | # Remainders of primes
Maybe an idiot question but I can't find any info! We divide successive prime numbers by some fixed prime number $n$ (e.g. 7 or 17). We'll get some remainders $r[i] = 1..n-1$. Is there any law or theorem about their distribution? It seems Fermat's Little Theorem and the Chinese Remainder Theorem don't work. I've tried it in Mathematica and it seems the remainders are chaotic. But a "Poincaré 3D view" $\{ r[i],r[i-1],r[i-2]\}$ shows some lines and nets.
UPD Thanks to everybody, esp. TonyK! It seems the answer is:
Let $\mathbb{P}(d)$ be the probability that the distance between successive primes is $d$. (It depends on the value of the "first" number and is known only numerically.) If some prime number $p_1$ has remainder $r_1$ when divided by $n$, then the probability for the next prime $p_2$ to have remainder $r_2$ is: $$\sum_{k=0}^{\infty} \mathbb{P}(r_2-r_1+k \cdot n)$$
-
$0$ will appear at most once. – Henry Dec 30 '12 at 14:47
## 1 Answer
The Wikipedia article Dirichlet's theorem on arithmetic progressions answers your question:
...different arithmetic progressions with the same modulus have approximately the same proportions of primes. Equivalently, the primes are evenly distributed (asymptotically) among each congruence class modulo d.
In other words, for any $k$ with $0 < k < n$, the proportion of integers $i$ such that $r[i] = k$ is equal to $1/(n-1)$ (since $n$ is prime).
More formally: Given an integer $N > 0$, let $p_N(k)$ be the proportion of integers $i$ with $0 < i \le N$ such that $r[i] = k$. Then $p_N(k)$ tends to $1/(n-1)$ as $N$ tends to $\infty$.
It is possible to make a more precise statement concerning the rate of convergence of $p_N(k)$, but I don't expect that any more concrete result is possible.
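An empirical illustration (my addition, not part of the answer): tabulating the primes below $2\times 10^5$ modulo $n = 7$ shows every nonzero remainder occurring with a proportion close to $1/6$.

```python
from collections import Counter
from sympy import primerange

n = 7
remainders = [p % n for p in primerange(2, 200_000) if p != n]

counts = Counter(remainders)
total = len(remainders)
for r in range(1, n):
    print(r, round(counts[r] / total, 4))   # each value is close to 1/6 ~ 0.1667
```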
-
Aha.. Many thanks! But for small $N$ some regularity appears. Is there some ideas about it? – lesobrod Dec 30 '12 at 15:59
Perhaps you are just imagining patterns in random data? People do that a lot, you know :-) (And do you mean small $N$, or small $n$?) – TonyK Dec 30 '12 at 16:11
Oh, $n$, sorry.. What about patterns.. The remainders themselves are equally distributed, no doubt. But now I'm starting to analyse pairs and triads 'cause it seems they have big correlations... – lesobrod Dec 30 '12 at 16:37
Good luck with that... – TonyK Dec 30 '12 at 16:39
@lesobrod Primes get more 'rare' as you get to large numbers. Any kind of 'regularity' that you see in small cases goes out the window. For example, there exist strings of composite numbers of arbitrary length, since $N!+2, N!+3, \ldots, N!+N$ are all composite. – Calvin Lin Dec 30 '12 at 16:45
http://cstheory.stackexchange.com/questions/2003/analysis-of-variables-of-varying-numbers | # Analysis of variables of varying numbers
I work with amino acid sequences and I want to use a self-made model to tell me something about them; let's call it f(seq). Now I want to know the contribution of every position in the sequence to the model. E.g. my question is: what is the importance/effect of amino acid A occurring at position I in the sequence with respect to the model?
How do I visualize something like that?
I want to use my model also on several sequences of differing lengths. Somehow this throws a monkey wrench into my plans of using a neural net...
My question is simple yet I did not find anything about it. Pointers would be appreciated. Or any comment you might have. This whole idea of mine is pretty unfinished and I don't really know yet what I want. So feel free to criticize; I will update the question accordingly.
Ah and if this is the wrong place to put this here please tell me also (:
cheers and thanks
-
What kind of model do you have, in particular deterministic or probabilistic? How would you like to measure contribution? I can imagine using an independence measure if every position and $f(seq)$ are treated as random variables. If $f(seq)$ is independent of $seq[i]$ the $i$th symbol does not yield a contribution. – Raphael Oct 7 '10 at 9:53
the best thing would be to keep the model abstract. but it will be a probabilistic one probably. I will look into independence measures (: However i think the sequence positions should not be regarded as independent. I would like to find some pattern or sth that has a bigger contribution than a single amino acid. – tarrasch Oct 7 '10 at 10:36
i will repost the question at stats.stackexchange and pool the answers. thanks – tarrasch Oct 7 '10 at 10:36
Question is too vague to be answerable. – Warren Schudy Oct 7 '10 at 20:20
## 1 Answer
"My question is what is the importance/effect of amino acid A occuring at position I in the sequence with respect to the model?"
I suggest Monotony and Surprise by Apostolico for the modeling of importance of words appearing at certain positions, or in certain patterns. Beyond that, I'm not sure what you're looking for.
-
thank you will try that. – tarrasch Oct 8 '10 at 7:57
http://math.stackexchange.com/questions/292463/on-the-convergence-of-a-specific-sequence-of-integrable-functions | # On the convergence of a specific sequence of integrable functions
Let $\{f_n\}$ be a sequence of measurable non-negative functions on $\mathbb{R}$ converging point-wise on $\mathbb{R}$ to $f$, and let $f$ be integrable over $\mathbb{R}$. If $\displaystyle \int_{\mathbb{R}} f = \lim_{n \to \infty} \int_{\mathbb{R}}f_n$ then show that $\displaystyle \int_{E} f = \lim_{n \to \infty} \int_{E}f_n$ for any measurable set $E$.
One side of the inequality is trivial by Fatou's lemma. I am seeking to prove that $\displaystyle \int_{E}f \ge \lim_{n \to \infty} \int_{E}f_n$.
Any suggestions? It does seem that I will need to use some convergence theorem to derive this. Am I wrong?
-
The sequence of functions $f_n 1_E$ converges pointwise to $f 1_E$, and is bounded by $f$ which is integrable. Use the dominated convergence theorem. – copper.hat Feb 1 at 23:47
@copper.hat How do you deduce the $f_n$ are dominated by $f$ (particularly on the set $E$)? – David Mitra Feb 1 at 23:55
Dominated convergence might not work, but Fatou's lemma does! Thanks for the hint @copper.hat! – user44069 Feb 1 at 23:59
My apologies. I misread. – copper.hat Feb 2 at 0:23
## 1 Answer
Hint:
Fatou's Lemma gives you that $$\tag{1}\liminf\limits_{n\rightarrow\infty} \int_{E} f_n\ \ge\ \int_E f.$$
You don't know that $\lim\limits_{n\rightarrow\infty} \int_{E} f_n$ exists yet.
But, by writing
$$\int_E f =\int_{\Bbb R} f -\int_{E^C} f,$$ use Fatou's Lemma again to show that the right hand side of $(1)$ is no smaller than $\limsup\limits_{n\rightarrow\infty}\int_E f_n.$
You will then have
$$\liminf\limits_{n\rightarrow\infty} \int_{E} f_n\ \ge\ \int_E f\ \ge\ \limsup\limits_{n\rightarrow\infty}\int_E f_n,$$ and since the left side can never exceed the right, all three quantities are equal; so $\lim\limits_{n\rightarrow\infty}\int_E f_n$ exists and equals $\int_E f$.
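For completeness, here is one way (my sketch, not part of the original hint) to carry out the second application of Fatou's Lemma: apply it on $E^C$ to get $\liminf\limits_{n\rightarrow\infty}\int_{E^C} f_n \ge \int_{E^C} f$, and then, since $\int_{\Bbb R} f_n \to \int_{\Bbb R} f$ and $\int_{E^C} f$ is finite,
$$\limsup\limits_{n\rightarrow\infty}\int_E f_n = \limsup\limits_{n\rightarrow\infty}\left(\int_{\Bbb R} f_n - \int_{E^C} f_n\right) = \int_{\Bbb R} f - \liminf\limits_{n\rightarrow\infty}\int_{E^C} f_n \ \le\ \int_{\Bbb R} f - \int_{E^C} f = \int_E f.$$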
http://physics.stackexchange.com/questions/48332/why-geometrically-four-acceleration-is-a-curvature-vector-of-a-world-line-and-w?answertab=oldest | # Why geometrically four acceleration is a curvature vector of a world line? And what is proper acceleration?
1. Why geometrically four acceleration is a curvature vector of a world line?
Geometrically, four-acceleration is a curvature vector of a world line. Therefore, the magnitude of the four-acceleration (which is an invariant scalar) is equal to the proper acceleration that a moving particle "feels" moving along a world line. The world lines having constant magnitude of four-acceleration are Minkowski-circles. (Wikipedia)
2. And what is proper acceleration?
-
## 3 Answers
This is why the four-acceleration is, geometrically, the curvature of the worldline:
$$A^{\mu}=\frac {d^2x^{\mu}}{ds^2}+\Gamma_{\alpha\beta}^{\mu}\frac {dx^{\alpha}}{ds}\frac {dx^{\beta}}{ds},$$
and a worldline is a geodesic (i.e. "straight", unaccelerated) exactly when $A^{\mu}=0$.
-
An object moving on an inertial path has a straight worldline in special relativity. The four-acceleration then measures how non-straight the worldline is.
When gravity is involved, an inertial (free-fall) path may be curved due to gravity. Four-acceleration still measures the acceleration relative to such a path, and proper acceleration is basically just that four-acceleration converted to a three-vector. For example, a free-fall path in the vicinity of Earth would be, well, one in which you freely fall towards Earth's center. Such a path has zero proper acceleration in the absence of other forces. If instead you're standing on Earth's surface, your proper acceleration is radially directed away from the center of the Earth, with magnitude $g$.
-
Curvature of a plane curve is defined as the (magnitude) rate of change of the unit tangent vector over the length of the curve:
$$\kappa =\left | \frac{d\mathbf{T}}{ds} \right |$$
This is a natural definition, because for example the curvature of a circle is just $1/r$, so when the circle is small it has large curvature and when it's big it has small curvature. The unit tangent vector, as you know, is just the velocity vector divided by its magnitude:
$$\mathbf{T}=\frac{\mathbf{u}}{|\mathbf{u}|}$$
Do a bit of calculus and you find that $d\mathbf{T}/ds =\kappa (s) \mathbf{N}(s)$, where $\mathbf{N}$ is the unit normal vector. So you can define $d\mathbf{T}/ds$ as the "curvature vector" of the curve which points normal to the curve and is scaled by the curvature.
In SR, the four-velocity $U^\mu$ is defined such that its magnitude is always $c$. Also, we tend to work in units where $c=1$, so the four-velocity is actually a unit tangent vector to the worldline. The four-acceleration, which is defined as $A^\mu = dU^\mu /ds$, is therefore the curvature vector of the worldline, the magnitude of which is the curvature.
Proper acceleration is just a fancy way of saying "the acceleration that you would be able to measure with an accelerometer." The magnitude of four-acceleration is always proper acceleration, so geometrically the proper acceleration of an observer is the curvature of his worldline.
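A small symbolic check (my addition, not from the answers): for the textbook worldline of constant proper acceleration $a$ (units with $c=1$), the four-acceleration really does have constant magnitude $a$ — these are the "Minkowski-circles" of constant worldline curvature mentioned in the question.

```python
import sympy as sp

tau, a = sp.symbols('tau a', positive=True)

# Hyperbolic worldline of constant proper acceleration (c = 1):
t = sp.sinh(a * tau) / a
x = sp.cosh(a * tau) / a

U = sp.Matrix([sp.diff(t, tau), sp.diff(x, tau)])   # four-velocity (unit tangent)
A = sp.diff(U, tau)                                 # four-acceleration (curvature vector)

# Minkowski norms with signature (+, -):
print(sp.simplify(U[0]**2 - U[1]**2))   # 1      -> U is a unit tangent vector
print(sp.simplify(A[0]**2 - A[1]**2))   # -a**2  -> |A| = a, the constant proper acceleration
```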
-
http://crypto.stackexchange.com/questions/3837/discrete-log-problem-when-we-have-many-examples?answertab=votes | # Discrete log problem, when we have many examples
Suppose I have many instances of the discrete log problem, all using the same unknown exponent. Is this problem easier than the standard discrete log problem?
Oh, heck, I should be more precise. Let $p$ be a large prime, chosen to be large enough that the discrete log problem modulo $p$ is (presumably) hard. Everything from here on will be in the multiplicative group of integers modulo $p$. Suppose we are given $a=(a_1,\dots,a_n)$ and $b=(b_1,\dots,b_n)$, where $b=a^k$, and we want to find $k$. In other words, $b_i = a_i^k$ for all $i$ (the same integer exponent for each of the $n$ instances), and we want to find the exponent $k$.
What is the hardness of finding $k$? Is it significantly easier to find $k$, given these $n$ instances, than it would be given only one instance? (say, is a greater than $n$-fold speedup available?)
$\newcommand{\Z}{\mathbb{Z}}$ I suspect it makes sense to focus in particular on three cases, depending upon how the $a$'s are chosen:
1. Random choice. In this variation, the $a_i$'s are uniformly and independently distributed on $(\Z/p\Z)^*$.
2. Non-adaptive adversarial choice. In this variation, the adversary chooses all of the $a_i$'s in advance, before seeing any of the $b_i$'s.
3. Adaptive adversarial choice. Finally, we can consider an adaptive variant, where the attacker chooses $a_1$, gets to see $b_1=a_1^k$, then the attacker can choose $a_2$, see $b_2=a_2^k$, and so on.
Does it help the adversary significantly to see $n>1$ pairs $(a_i,b_i)$? Of course, when $n=1$ this just reduces to the standard discrete log problem. When $n>1$, is this problem ever significantly easier than the basic discrete log problem?
-
## 3 Answers
I can answer the question for the first of the three cases:
1. Random choice. In this case, seeing $n$ instances cannot help the adversary very much (not more than a $n$-fold speedup). The problem still remains hard, for suitably large $p$.
Justification: By a simple reduction. Suppose we had a clever algorithm to solve the problem for $n>1$. I will show how to use that clever technique to solve the standard discrete log problem. Assume we are given $x,y$ with $y=x^k$ and we want to find $k$. Assume that $x$ is a generator. Then we pick random numbers $r_1,\dots,r_n$, set $a_i = x^{r_i}$, and set $b_i = y^{r_i}$. Then the entries $a_i$ are all uniformly and independently random, so this has the right distribution. Now we send $a=(a_1,\dots,a_n)$ and $b=(b_1,\dots,b_n)$ to our clever algorithm; by assumption it gives us back $k$, which solves the standard discrete log problem. So if the standard discrete log problem is hard, then the $n$-instance version must be hard too.
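A toy illustration of this re-randomisation (my addition; the prime, generator, and exponent below are made-up demo values, far too small for cryptographic use):

```python
import random

p = 1019                  # toy prime; 2 generates the multiplicative group mod 1019
g, k = 2, 345             # k is the exponent the adversary is trying to find

x = g
y = pow(x, k, p)          # one standard discrete-log instance (x, y = x^k)

# Build n "random choice" instances that all hide the same exponent k.
n = 5
instances = []
for _ in range(n):
    r = random.randrange(p - 1)
    instances.append((pow(x, r, p), pow(y, r, p)))   # (a_i, b_i) with b_i = a_i^k

# Sanity check: every pair really satisfies b_i = a_i^k.
assert all(pow(a_i, k, p) == b_i for a_i, b_i in instances)
print(instances)
```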
However I do not know what happens for the other two cases. The simple reduction above does not extend to those two cases.
(I can almost smell the hint of some relationship to the Diffie-Hellman problem. Maybe if DDH is hard then maybe we can apply similar techniques to show that $n$ instances can't help the adversary too much? I dunno.)
-
In practise the view would be that no, it does not get any easier. Indeed many popular deployed schemes depend on it. For example the Trusted Authority in the Boneh-Franklin IBE scheme has a master secret s and issues private keys to users in the form s.ID_i, where ID_i is a point on an elliptic curve, and ID_i is related to the identity of the i-th user. It is a standard DL problem to find s from such a private key. Nevertheless the Boneh-Franklin scheme is assumed strong against a conspiracy of multiple users who can pool their secrets in an attempt to find the master secret s.
-
On the third case, I have a comment. The third oracle may help the adversary using Cheon's algorithm for the DL problem.
Let $q$ be a prime order of the subgroup $\mathbb{G}$ of $(\mathbb{Z}/p\mathbb{Z})^{\times}$.
In the third case, the adversary has an oracle $a \mapsto a^k$ for any $a$. Hence, it can obtain $g^{k^i}$ from $g^{k^{i-1}}$ and so on. When $d \mid q-1$, Cheon's algorithm solves the augmented DL problem $(g,g^k,g^{k^d})$ over the group $\mathbb{G}$ of order $q$ with $O(\sqrt{q/d} + \sqrt{d})$ arithmetic computations. If there exists a divisor $d = O(q^{1/3})$ of $q-1$, the adversary can solve the DL problem with $O(\sqrt{q^{2/3}}+\sqrt{q^{1/3}}+q^{1/3}) = O(q^{1/3})$ arithmetic computations.
The third oracle enhances the adversary's power if the $O(\sqrt{q})$-time algorithm is the best attack for the DL problem.
-
http://physics.stackexchange.com/questions/5567/quantum-shot-noise-and-the-fluctuation-dissipation-theorem | # Quantum shot-noise and the fluctuation dissipation theorem
Classically, shot noise observed in the signal generated by a laser incident on a photodiode is explained as being due to the quantization of light into photons, giving rise to a Poisson process. In quantum optics, on the other hand, the shot noise is said to arise from interference with the vacuum field, which leaks in at points of optical loss (absorption).
Meanwhile, mechanical oscillators are subject to the fluctuation-dissipation theorem, which says, roughly, that the thermal excitement of the various mechanical modes of the oscillator are proportional to the dissipation in that mode. In the context of electronics, Johnson noise is an example of this effect.
I recently read a thought-provoking statement (in this paper) that these effects are really "the same"; just as thermal motion arises at points of mechanical dissipation, shot noise arises at points of optical dissipation.
Is this analogy physically/theoretically significant?
-
## 2 Answers
The principle is the same in both pictures. I'm not sure how to answer "is this analogy significant?", since I don't think it's an analogy at all. It's the same phenomena, as explained below.
Any time you have a mechanism for dissipation (a coupling between the "system" and a "bath"), that coupling mechanism will give rise to back-action of the bath on the system (i.e. fluctuations/noise). This is true whether the bath is in a thermal state or a vacuum state. Quantum optics can treat both cases; the usual fluctuation-dissipation formula only works in the limit that the bath is well approximated by a classical thermal state (i.e. kT >> the energy of 1 quantum of excitation).
And just to clarify one point: when you say "the shot noise is said to arise from interference with the vacuum field", that's not always the whole story. Noise can come from lots of places. It can be in the original state of your optical field, in which case it will be observed even if you don't have lossy mirrors/detectors. But the idea is that if you DO have loss, then even if you start with a sub-shot-noise beam, you'll pretty quickly approach the shot noise limit.
-
Thanks for the response! Can you show how to treat a lossy mechanical oscillator using the quantum optics formalism? – nibot Feb 22 '11 at 16:13
For that specific case (damped oscillator vs. quantum electromagnetic field), the QM treatment of the simple harmonic oscillator is mathematically equivalent to the QM treatment of a single mode of the EM field. Can I show how? Maybe, but that's more work and typing than I care do do. Look at whatever section of your quantum optics book quantizes the field. I'd bet money they're using photon creation and annihilation operators with the same commutation relations as the raising and lowering operators of the harmonic oscillator. – Anonymous Coward Feb 22 '11 at 17:08
Nibot,
I strongly suggest you read Noise and Fluctuations by D.K.C. MacDonald. It has lots of great discussions related to thermal noise. That's where most of this answer comes from.
You are probably used to the Fluctuation Dissipation Theorem (FDT) written in a form similar to the way Nyquist derived Johnson noise on a resistor: $$<\delta V_f^2> = 4RkT df$$ which is the variance in the voltage squared, in terms of $R$ the resistance, $kT$ the temperature, and $df$ the measurement bandwidth. (Alternatively divide both sides by $df$ and the quantity is the power spectral density).
But there is another form of Nyquist's theorem for when $hf \approx kT$, i.e. valid in the quantum regime. $$<\delta V_f^2> = 4R\left( \frac{hf}{2}+\frac{hf}{e^{hf/kT}-1}\right) df$$ You should be able to convince yourself that this reduces to the standard Nyquist formula in the appropriate limit.
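A quick symbolic sketch of that limit (my addition): expanding the bracket for $hf \ll kT$ recovers $kT$, and hence the classical $4RkT\,df$.

```python
import sympy as sp

hf, kT = sp.symbols('hf kT', positive=True)   # shorthand for h*f and k*T

bracket = hf / 2 + hf / (sp.exp(hf / kT) - 1)

# Leading behaviour for small hf/kT: the zero-point term cancels the -hf/2 of the
# Planck factor, leaving kT plus corrections of order (hf)^2 / kT.
print(sp.series(bracket, hf, 0, 3))   # kT + hf**2/(12*kT) + O(hf**3)
```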
Using this form of the theorem, and considering a charged particle which oscillates in vacuum, there is a damping back-reaction of the electromagnetic field given by the Larmour formula: $$\vec{E} = -\frac{(2\pi f)^2}{6\pi \epsilon_0 c^3} \dot{ \vec{p}},$$ for the electric field $\vec{E}$ and dipole $\vec{p}$. So with analogy to the Nyquist formula, $<\delta V^2_f>$ describes the electric field fluctuations, and $R=\frac{(2\pi f)^2}{6 \pi \epsilon_0 c^3}$. Surprisingly, plugging this into the quantum Nyquist theorem reproduces the blackbody radiation spectrum! The FDT never ceases to amaze!
Note that my quantum FDT includes a zero point energy term, which is a bit controversial, because it also predicts a blackbody spectrum which has a zero point energy term, which can't be observed directly.
Now, I must admit I tried and failed to derive the shot noise formula from the blackbody spectrum with the zero point energy term added, but because shot noise is often attributed to zero point energy fluctuations of the EM field, it feels like this represents the same thing physically. I think my math skillz just weren't cutting it.
I guess one thing to realize is that these optical measurments are working in the $hf\gg kT$ limit while usually thermal noise is concerned with the opposite limit. But imagine an interferometer working with 10 $\mu$m light, where the room temperature thermal spectrum is large. This interferometer would be primarily concerned with thermal fluctuations entering ports, rather than quantum ones!
-
http://en.wikisource.org/wiki/Absence_of_Effects_of_Motion_through_the_Aether | # Absence of Effects of Motion through the Aether
From Wikisource
On the ascertained Absence of Effects of Motion through the Aether, in relation to the Constitution of Matter, and on the FitzGerald-Lorentz Hypothesis (1904) by Joseph Larmor
Philosophical Magazine, 1904, S. 6, Vol. 7, No. 42, June 1904: 621-625. Communicated by the Physical Society: read May 27, 1904.
IN a recent paper by Prof. D. B. Brace (Phil. Mag. April 1904, p. 318) the author removes by very refined experimenting all trace of doubt from Lord Rayleigh's conclusion that motion of transparent solids through the aether does not induce any double refraction, even to the second order of the ratio of the velocity of the translation to that of radiation; but he infers from this the non-existence of the second-order deformation of the solid due to its translation, suggested by FitzGerald and by H. A. Lorentz to account for Michelson's earlier demonstrated absence of effect on optical interferences over long paths in free aether. As he remarks, it had previously been suggested by Lord Rayleigh that such an inference might possibly follow from this result. The object of this note is to explain that the inference in question is the opposite to that which I still hold to be the natural result of the theory of the motion of molecular aggregates through aether, as hitherto developed.[1]
The argument of Prof. Brace proceeds on the basis that the whole effect of the convection through the aether is to introduce new forces between the molecules, causing the shrinkage aforesaid along the direction of convection; and it can be readily granted that if this were all, double refraction must result. But both the line of argument suggested as probable by Lorentz,[2] and the molecular analysis offered by me some years later,[3] proceed by comparing a system shrunk in the FitzGerald-Lorentz manner and convected through the aether, with the same system unshrunk and at rest, and finding a complete correspondence between them as regards the states and activities of the individual molecules. As the argument is somewhat complex and has been misunderstood, a brief re-statement of the result may prove useful.
We are to compare the field of physical activity of a system of molecules at rest, with the field of the identically same configuration of molecules in uniform translatory motion through aether. If small quantities of the order of the square of the ratio of the velocity of convection to that of radiation (v/c) are neglected, the Maxwellian physical equations for the second system, referred of course to axes of co-ordinates moving along with it, can be reduced to the form belonging to the same system at rest, by the transformation first developed by Lorentz: namely, each point in space is to have its own origin from which time is measured, its "local time" in Lorentz's phraseology, and then the values of the electric and magnetic vectors
(f, g, h) and (a, b, c)
at all points in the aether between the molecules in the system at rest, are the same as those of the vectors
$\left(f,\ g-\frac{v}{4\pi c^{2}}c,\ h+\frac{v}{4\pi c^{2}}b\right)$ and (a, b + 4πvh, c-4πvg)
at the corresponding points in the convected system at the same local times. This correspondence can, in fact, be shown to locate the electrons at corresponding points in the two systems, and to make them equal; if, then, they are held in rigid connexion, or more generally if their states of orbital motion in the molecules are conserved, the effect of translatory motion of the system with velocity v is to transform the aethereal field around them and between them as here specified. The fields of aethereal activity are not identical, but where one vanishes at any point so does the other at the same point. This conclusion was reached by Lorentz, who pointed out that it carried with it a null result for all recognizable optical tests of convection in the system, up to the first order, with the one exception of the Doppler effect which is involved in the "local" time measurements, and which is only a partial exception because it refers to radiation coming from outside the system.
Does, however, the system of electrons need to be constrained in order to prevent change of configuration when being convected? The force acting on an individual electron e is thereby changed from
$4\pi c^{2}e\left(f,\ g-\frac{v}{4\pi c^{2}}c,\ h+\frac{v}{4\pi c^{2}}b\ \right)$ to 4πc²e(f, g, h).
If there is a magnetic field (a, b, c) there will thus be alteration: if there is no sensible average magnetic field, even among the molecules, we may perhaps fairly assume, with Lorentz, that no constraint is needed in order to prevent change in molecular configuration in the system due to convection. Anyhow, the absence of recognizable optical result to the first order is certain, as the physical constants of the system in bulk must be unaltered to that order.
But the brilliant experimenting of Michelson and Morley had already led to the recognition of absence of optical result up to the second order of the ratio of the velocities. Thus the question was suggested whether the above correspondence between the resting and convected systems can be effectively extended up to the second order. It is, in fact, found that the Maxwellian circuital equations of aethereal activity, in the ambient aether, referred to axes moving along with the uniform velocity of convection v, can be reduced to the same form as for axes at rest, up to and including (v/c)², but not (v/c)³, by adopting a local time $\epsilon^{-1/2}(t - vx/c^2)$ as before, but with a new unit $\epsilon^{-1/2}$, and also a reduced unit of length parallel to x equal to $\epsilon^{-1/2}$, where here and in what follows ε represents 1+v²/c², the units of length along y and z remaining unaltered. It is found that for two aether-fields, one referred to fixed axes and the other to moving axes, standing in this mutual correlation, the electrons, or poles, in approaching which the aethereal electric vector becomes infinite as $er^{-1}$, are situated at corresponding points and are of equal values: the relation, exact to the second order, is now that
(f, g, h) and (a, b, c)
in the field belonging to the fixed system of poles correspond to
$\epsilon^{\frac{1}{2}}\left(\epsilon^{-\frac{1}{2}}f,\ g-\frac{v}{4\pi c^{2}}c,\ h+\frac{v}{4\pi c^{2}}b\right)$
and
$\epsilon^{\frac{1}{2}}\left(\epsilon^{-\frac{1}{2}}a,\ b+4\pi vh,\ c-4\pi vg\right)$
for the field belonging to the convected system; where ε is 1 + v²/c², as above, the factor $\epsilon^{1/2}$ being needed to make corresponding poles equal in value instead of merely proportional.
If each pole or electron is connected with a molecule possessing extraneous mass, and it may be having an extraneous field of gravitational and other force of its own, and thereby interacting with other molecules, we shall want to know the forces exerted on that molecule by the surrounding aether, in order to form its own equations of motion, which must be combined with those of the aether-field around it in order to constitute a complete system. But if such other forces are molecularly insignificant, or better, if the electron is a mere passive pole — nucleus of beknottedness in some way — in the aether, conditioned and controlled entirely by the aether around it, just as a vortex ring is conditioned by the fluid in which it subsists and is also carried along thereby, then, as in the familiar hydrodynamics of vortices, the motion of the aether determines the motion of the entirely passive electrons, and the idea of force acting between them and the aether is dispensed with.
If, then, matter is for physical purposes a purely aethereal system, if it is constituted of simple polar singularities or electrons, positive and negative, in the Maxwellian aether, the nuclei of which may be either practically points or else small regions of aether with internal connexions of pure constraint, the propositions above stated for the first order are extended to the second order of v/c, with the single addition of the FitzGerald-Lorentz shrinkage in the scale of space, and an equal one in the scale of time, which, being isotropic, is unrecognizable.
On such a theory as this the criticism presents itself, and was in fact at once made, that one hypothesis is needed to annul optical effects to the first order; that when these were found to be actually null to the second order another hypothesis had to be added; and that another hypothesis would be required for the third order, while in fact there was no reason to believe that they were not exactly null to all order. Such a train of remarks indicates that the nature of the hypotheses has been overlooked. And if indeed it could be proved that the optical effect is null up to the third order, that circumstance would not demolish the theory, but would rather point to some finer adjustments than it provides for: needless to say the attempt would indefinitely transcend existing experimental possibilities.
As, then, the theory contains no further power of immediate adaptation, what are the hypotheses on which it rests, and how far are they gratuitous hypotheses introduced for this purpose alone? Up to the first order the electron hypothesis, that electricity is atomic, suffices by itself, as Lorentz was the first to show. Yet, even if the nature of the particles of the cathode discharge had never been made out, and the Zeeman effect had never been discovered, the facts known to Ampere and Faraday were sufficient to demonstrate that no other conception of electricity than the atomic one is logically self-consistent.[4]
Up to the second order the hypothesis that matter is constituted electrically — of electrons — is required in addition. For this there is no independent evidence except perhaps the general simplicity of the correlations of physical law. The circumstance that positive electrons have not yet been isolated naturally counts considerably on the other side; yet the theory puts no limit to the size and inertia and complexity of an electron, it only prescribes that it must be a collocation of aether poles connected together by some sort of pure constraint, but with no extraneous activities.
Any rival theory must on the threshold give an account of the Michelson null optical result, of Trouton's null electric result for convection of a charged condenser,[5] and of Rayleigh's absence of double refraction now rendered thoroughly secure by Brace.[6]
As electrons are already held to be a reality on various grounds, theoretical and experimental, it would appear therefore that there is much to be said for a benevolent attitude to the proposition that all the interactions of matter, so far as the laws of physics and chemistry extend, are to be described as phenomena occurring in and through the aether, and thus differentiated from the more recondite world of vital growth and change which they make manifest to our senses. This principle does not yet, so far as one can see, stand in the way of any other branch of physical science, while it accounts for the very remarkable absence of influence of the earth's motion through space on the most sensitive phenomena, and is almost led up to thereby.
It is pertinent to the present subject to refer to Mr. Sutherland's recent remarks (Phil. Mag. April, p. 406) on the magnetic effect of electric convection, in relation to the mysterious action of a dielectric varnish that has been announced by Crémieu and Pender. The discrepancy in the conservation of energy, there described, applied to the domain of electric polarization, is too startling to have been overlooked by the current theory;[7] and accordingly closer consideration gets rid of the difficulty. When an electron e is transferred in an electric field from a place where the potential is $V_1$ to a place where it is $V_2$, the force acting on it, being e multiplied by the gradient of V, does work equal to $e(V_1-V_2)$. When, however, the electron is embedded in a piece of dielectric matter which is so transferred, the force acting on the electron itself is diminished by the presence of the surrounding polarized matter, and so the work done on the electron is less than before: but now the electric polarization induced by the electron in this surrounding matter is also acted on by the electric field, and if we add the work done on it during the movement, we shall get the same total work as before for the system that is moved, and there will be no discrepancy to be otherwise explained.
Cambridge, April 7, 1904.
1. Loc. cit.
2. Cf. 'AEther and Matter,' p. 337.
3. Phil. Trans. 1903.
4. The null influence on optical rotation, observed by Rayleigh, counts here as a first-order effect.
This work is in the public domain in the United States because it was published before January 1, 1923. The author died in 1942, so this work is also in the public domain in countries and areas where the copyright term is the author's life plus 70 years or less. This work may also be in the public domain in countries and areas with longer native copyright terms that apply the rule of the shorter term to foreign works.
http://mathoverflow.net/questions/7374/the-jouanolou-trick/7602 | ## The Jouanolou trick
In Une suite exacte de Mayer-Vietoris en K-théorie algébrique (1972) Jouanolou proves that for any quasi-projective variety $X$ there is an affine variety $Y$ which maps surjectively to $X$ with fibers being affine spaces. This was used e.g. by D. Arapura to (re)prove that the Leray spectral sequence of any morphism of quasi-projective varieties is equipped from the second term on with a natural mixed Hodge structure.
Here is a proof when $X$ is $\mathbf{P}^n$ over a field $k$: take $Y$ to be the affine variety formed by all $(n+1) \times (n+1)$ matrices which are idempotent and have rank 1. This is indeed affine, since it is cut out by the equation $A^2=A$ together with the condition that the characteristic polynomial of $A$ is $x^n(x-1)$. Moreover, $Y$ is mapped to $\mathbf{P}^n(k)$ by taking a matrix to its image. The preimage of a point of $\mathbf{P}^n(k)$ is "the set of all hyperplanes not containing a given line", which is isomorphic to an affine space.
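As a small sanity check on this construction (my own sketch, not from Jouanolou's paper; the choice $n=1$ and the base point $[1:0]$ are illustrative assumptions), one can verify symbolically that the fibre over a point is an affine line:

```python
import sympy as sp

t = sp.symbols('t')

# Jouanolou's construction for P^1: Y = {2x2 matrices A with A^2 = A and rank A = 1},
# mapped to P^1 by sending A to its image.  Matrices whose image is the line spanned
# by e_1 = (1,0)^T have the form [[a, b], [0, 0]]; imposing A^2 = A and rank 1 forces
# a = 1, leaving a single free affine parameter b = t.
A = sp.Matrix([[1, t], [0, 0]])

assert sp.simplify(A * A - A) == sp.zeros(2, 2)   # idempotent for every value of t
assert A.rank() == 1                              # rank 1, image is the point [1:0]
print("fibre over [1:0] is the affine line parametrised by t:", A.tolist())
```

The same pattern (one free row above a zero block) exhibits the fibre over each coordinate point of $\mathbf{P}^n$ as an affine $n$-space.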
The general (quasi-projective) case follows easily from the above. However, it is not clear how to generalize Jouanolou's trick for arbitrary varieties. Nor is it clear (to me) that this is impossible.
1. Is there an analogue of the Jouanolou lemma for arbitrary (not necessarily quasi-projective) varieties (i.e. reduced separated schemes of finite type over say an algebraically closed field)?
2. (weaker version of 1 over complex numbers) Is there, given a complex algebraic variety $X$, an affine variety $Y$ that maps surjectively to $X$ and such that all fibers are contractible in the complex topology? A negative answer would be especially interesting.
3. (the following question is a bit vague, but if it has a reasonable answer, then it would probably imply a positive answer to 2.) Is there a quasi-projective analog of the topological join of two projective spaces? I.e., if $P_1$ and $P_2$ are two complex projective spaces, is there a quasi-projective variety $X$ which "contains the disjoint union of $P_1$ and $P_2$ and is formed by all affine lines joining a point in $P_1$ with a point in $P_2$"?
Edit 1: in 1. and 2. the varieties are required to be connected (meaning that the set of closed points is connected in the Zariski topology; in 2 one could use the complex topology instead).
Edit 2: as Vanya Cheltsov explained to me, the answer to question 3 is most likely no.
## 4 Answers
Jouanolou's trick has been extended to schemes with an "ample family of line bundles" by Thomason; see Weibel: Homotopy Algebraic K-theory, Proposition 4.4. This includes all smooth varieties and more generally all varieties with torsion local class groups. However, there exist (positive dimensional) proper varieties with no non-trivial line bundles on them; it seems possible that on such varieties there are no affine bundles with affine total space.
Thanks, unknown, this is indeed helpful! One remark though: requiring the bundle to be affine may be a bit too much. I'd be more than happy with all fibers being affine spaces. – algori Apr 16 2010 at 18:06
Jouanolou's trick is great isn't it? Off the top of my head, I don't know anything similar for non quasiprojective varieties or schemes. One can certainly use Chow's lemma to reduce to the quasiprojective case, but it's a lot messier ...
Thanks, Donu! Re: Jouanolou's trick is great isn't it? Indeed it is! And it is true that replacing e.g. a complete variety by a cubical projective one results in a mess. So my question can be rephrased: is this mess avoidable or not? For example, does there exist a complete variety so bad that its cohomology (seen as an algebra equipped with a mixed Hodge structure) is different from the cohomology of any affine variety? – algori Apr 14 2010 at 19:59
To be clear, I think your question(s) is(are) interesting but difficult. – Donu Arapura Apr 15 2010 at 11:50
Donu -- the moment I can handle all interesting and difficult questions, I'll just start posting easy and boring ones;) – algori Apr 16 2010 at 17:44
Well, as no one else has said anything, I'll say the only thing that's come to mind, which is fairly elementary. It should be enough to do this for complete varieties, so restrict to that case. The only other thing I've got is that by Chow's lemma, there's $\bar{X}\to X$ a projective variety over $X$ and a birational map. So this tells us that over an open set, everything should work, so we can reduce to the exceptional locus. It will itself be an open subset of a complete variety, so we can at least get something like this for a stratification of an arbitrary variety, so if we're willing to cheat horribly, we can use the disjoint union of these varieties, to do it, though my thought is that the dimension of the affine space over a point will be semicontinuous rather than constant, so it's of much less use. I don't see immediately how to get an irreducible one.
Thanks, Charles! You are right, I should have added the connectedness condition in my posting: the point of all this is to find an affine variety which has the same homotopy type as a given variety (in the complex case). So the dimension of the fiber may vary, but the affine variety should be connected. – algori Dec 2 2009 at 1:34
Is connected good enough? Or do you want irreducible? It seems to be that irreducible is the natural thing to hope for...is this known not to be true? – Charles Siegel Dec 2 2009 at 12:59
Yes, connected is good enough. But for all I know, there are no reasons which would prevent one from finding an irreducible one, once $X$ is irreducible. – algori Dec 2 2009 at 15:06
I might be missing something about question 3. Here's a simple construction:
Consider a projective space $P$ of dimension $\text{dim}\, P_1 + \text{dim}\, P_2 + 1$ that contains both $P_1$ and $P_2$ in general position. Then each point of $P\setminus(P_1\cup P_2)$ lies on exactly one line connecting a point $x\in P_1$ with a point $y\in P_2$. Is this the kind of join you're looking for?
About question 2, I have a simpler thing that isn't clear to me (now posted as a question):
is a complex algebraic variety which is topologically contractible necessarily affine?
You most definitely mean affine space there. – Charles Siegel Dec 2 2009 at 19:34
Yes, thanks! Something got into my mind. – Ilya Nikokoshev Dec 2 2009 at 19:39
Thanks, Ilya! In the topological join points are joined with segments, which are contractible. So if we want to mimic this, the points should be joined with $\mathbf{A}^1$'s, not $\mathbf{P}^1$'s. Re your second question: there are many contractible affine varieties: take any affine cone i.e. a projective cone minus a hyperplane section not passing through the vertex. I don't know off hand whether there are any non-affine examples. – algori Dec 2 2009 at 20:19
http://math.stackexchange.com/questions/107186/find-a-closed-form-for-the-generating-function-for-the-number-of-partitions-of-n | # Find a closed form for the generating function for the number of partitions of n into 3 parts
Find a closed form for the generating function for the number of partitions of n into 3 parts.
Do they have to be distinct parts? Also, don't just transcribe the question like that - then it makes it look like you're commanding us as if you're our teacher or at least don't have any intention of talking with us. Say, "I have this problem, `problem`, and here are my thoughts." – anon Feb 8 '12 at 20:09
Also, if this is homework, there's nothing wrong with that, but please add the homework tag. – Gerry Myerson Feb 8 '12 at 23:57
## 1 Answer
Note that any $k$-part partition's conjugate (obtained by flipping the Ferrers diagram) is a partition with parts of sizes less than or equal to $k$. Of course, this means we may write the conjugates as
$$a_1\cdot 1+a_2\cdot 2+\cdots+a_k\cdot k=n,$$
with the $a_j$'s nonnegative integers counting the number of $j$'s in the conjugate partition of $n$. According to the diagram, there is at least one $k$ in the conjugate partition, so $a_k\ge1$. From this we may deduce that, since $k$-part partitions of $n$ are in bijection with partitions of $n$ whose largest part is exactly $k$,
$$\sum_{n=1}^\infty p(k,n)x^n=\left(\sum_{a_1=0}^\infty x^{a_1}\right)\left(\sum_{a_2=0}^\infty x^{2a_2}\right)\cdots\left(\sum_{a_k=1}^\infty x^{k a_k}\right)=$$
$$=\frac{1}{1-x}\,\frac{1}{1-x^2}\cdots\frac{1}{1-x^{k-1}}\,\frac{x^k}{1-x^k}=\frac{x^k}{(1-x)(1-x^2)\cdots(1-x^k)}.$$
Of course here we are looking at $k=3$.
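A quick computational check of this (my addition, not part of the original answer), comparing the series coefficients of $x^3/((1-x)(1-x^2)(1-x^3))$ with a brute-force count of 3-part partitions:

```python
from sympy import symbols

def partitions_into_3_parts(n):
    """Brute-force count of triples a >= b >= c >= 1 with a + b + c = n."""
    return sum(1 for c in range(1, n + 1)
                 for b in range(c, n + 1)
                 for a in range(b, n + 1)
                 if a + b + c == n)

x = symbols('x')
N = 20
gf = x**3 / ((1 - x) * (1 - x**2) * (1 - x**3))
poly = gf.series(x, 0, N + 1).removeO()            # truncated power series of the closed form

for n in range(3, N + 1):
    assert poly.coeff(x, n) == partitions_into_3_parts(n)
print("coefficients agree with the partition counts up to n =", N)
```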
Huh? There are a lot of geometric series with no term equal to 1, e.g., $2+6+18+54+\cdots$, or $1/2+3/2+9/2+27/2+\cdots$, and the ones like these that are just numbers don't have any coefficients, even if you raise them to the $k$th power. You are far too subtle for me. But I have upvoted your comment - OP has to do better. – Gerry Myerson Feb 8 '12 at 23:55
@Gerry: You're right, it wasn't helpful like I felt while posting it. I've taken a different tack instead. – anon Feb 9 '12 at 0:33
@Gerry: By ‘no 1 term’ anon meant ‘no constant term’. (I see that the answer has now been expanded to make this clear.) – Brian M. Scott Feb 9 '12 at 0:36
This version’s very clear (and much more helpful). – Brian M. Scott Feb 9 '12 at 0:37
Why do the sums start at 1, and not at zero? I guess $a_3$ has to be positive (for $k=3$), but why $a_1$ and $a_2$? – Gerry Myerson Feb 9 '12 at 2:45
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Convolution | All Science Fair Projects
Science Fair Project Encyclopedia for Schools!
Search Browse Forum Coach Links Editor Help Tell-a-Friend Encyclopedia Dictionary
Science Fair Project Encyclopedia
For information on any area of science that interests you,
enter a keyword (eg. scientific method, molecule, cloud, carbohydrate etc.).
Or else, you can start by choosing any of the categories below.
Convolution
In mathematics and in particular, functional analysis, convolution is a mathematical operator which takes two functions f and g and produces a third function that in a sense represents the amount of overlap between f and a reversed and translated version of g. A convolution is a kind of very general moving average, as one can see by taking one of the functions to be an indicator function of an interval.
Uses
Convolution and related operations are found in many applications of engineering and mathematics.
• In statistics, as noted above, a weighted moving average is a convolution.
• In statistics, the probability distribution of the sum of two random variables is the convolution of each of their distributions.
• In optics, many kinds of "blur" are described by convolutions. A shadow (e.g. the shadow on the table when you hold your hand between the table and a light source) is the convolution of the shape of the light source that is casting the shadow and the object whose shadow is being cast. An out-of-focus photograph is the convolution of the sharp image with the blur circle formed by the iris diaphragm.
• In acoustics, an echo is the convolution of the original sound with a function representing the various objects that are reflecting it.
• In electrical engineering and other disciplines, the output (response) of a (stationary, or time- or space-invariant) linear system is the convolution of the input (excitation) with the system's response to an impulse or Dirac delta function. See LTI system theory.
• In time-resolved fluorescence spectroscopy, the excitation signal can be treated as a chain of delta pulses, and the measured fluorescence is sum of exponential decays from each delta pulse.
• In physics, wherever there is a linear system with a "superposition" principle, a convolution operation makes an appearance.
Definition
The convolution of f and g is written f * g. It is defined as the integral of the product of the two functions after one is reversed and shifted.
$(f * g )(t) = \int f(\tau) g(t - \tau)\, d\tau$
The integration range depends on the domain on which the functions are defined. In the case of a finite integration range, f and g are often considered to extend periodically in both directions, so that the term g(t − τ) does not imply a range violation. This use of periodic domains is sometimes called a cyclic, circular or periodic convolution. Of course, extension with zeros is also possible. Using zero-extended or infinite domains is sometimes called a linear convolution, especially in the discrete case below.
If X and Y are two independent random variables with probability densities f and g, respectively, then the probability density of the sum X + Y is given by the convolution f * g.
For discrete functions, one can use a discrete version of the convolution. It is then given by
$(f * g)(m) = \sum_n {f(n) g(m - n)} \,$
When multiplying two polynomials, the coefficients of the product are given by the convolution of the original coefficient sequences, in this sense (using extension with zeros as mentioned above).
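For instance (an illustrative example added here, not from the original article), the discrete convolution of the coefficient sequences of $1+2x+3x^2$ and $4+5x$ gives the coefficients of their product; NumPy's convolve function computes exactly this zero-extended ("linear") convolution:

```python
import numpy as np

f = np.array([1, 2, 3])          # coefficients of 1 + 2x + 3x^2
g = np.array([4, 5])             # coefficients of 4 + 5x

# Linear (zero-extended) convolution, (f*g)(m) = sum_n f(n) g(m - n):
print(np.convolve(f, g))         # [ 4 13 22 15]

# This matches multiplying out the polynomials by hand:
# (1 + 2x + 3x^2)(4 + 5x) = 4 + 13x + 22x^2 + 15x^3
```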
Generalizing the above cases, the convolution can be defined for any two integrable functions defined on a locally compact topological group. A different generalization is the convolution of distributions.
Properties
The various convolution operators all satisfy the following properties:
Commutativity:
$f * g = g * f \,$
Associativity:
$f * (g * h) = (f * g) * h \,$
Distributivity:
$f * (g + h) = (f * g) + (f * h) \,$
Associativity with scalar multiplication:
$a (f * g) = (a f) * g = f * (a g) \,$
for any real (or complex) number a.
Differentiation rule:
$\mathcal{D}(f * g) = \mathcal{D}f * g = f * \mathcal{D}g \,$
where Df denotes the derivative of f or, in the discrete case, the difference operator
Df(n) = f(n+1) - f(n).
Convolution theorem:
$\mathcal{F}(f * g) = \sqrt{2\pi} (\mathcal{F} f) \cdot (\mathcal{F} g)$
where F f denotes the Fourier transform of f. Versions of this theorem also hold for the Laplace transform, two-sided Laplace transform and Mellin transform.
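A discrete analogue can be checked numerically (again an added illustration, using the DFT convention where no $\sqrt{2\pi}$ factor appears and the convolution is the circular one):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(8)
g = rng.standard_normal(8)

# Circular convolution computed directly from the definition ...
direct = np.array([sum(f[n] * g[(m - n) % 8] for n in range(8)) for m in range(8)])

# ... and via the discrete convolution theorem: DFT(f * g) = DFT(f) . DFT(g)
via_fft = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))

assert np.allclose(direct, via_fft)
print("convolution theorem verified on a length-8 example")
```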
Convolutions on groups
If G is a suitable group endowed with a measure m (for instance, a locally compact Hausdorff topological group with the Haar measure) and if f and g are real or complex valued m-integrable functions of G, then we can define their convolution by
$(f * g)(x) = \int_G f(y)g(xy^{-1})\,dm(y) \,$
In this case, it is also possible to give, for instance, a Convolution Theorem, however it is much more difficult to phrase and requires representation theory for these types of groups and the Peter-Weyl theorem of Harmonic analysis. It is very difficult to do these calculations without more structure, and Lie groups turn out to be the setting in which these things are done.
http://mathhelpforum.com/trigonometry/129568-so-confused-domain-range-composite-trig-functions.html | Thread:
1. So confused..... domain and range of composite trig functions
How can I identify the domain and range of problems such as sin(cos^-1(2x)) without using a calculator???
Other problems would be sin(sin^-1(x-1/2)) or cos^-1(2sin(x)).
Please help asap because I have a math test tomorrow and this is the only concept I do not know.
Thanks.
2. Hello HubridNoxx
Welcome to Math Help Forum!
Originally Posted by HubridNoxx
How can I identify the domain and range of problems such as sin(cos^-1(2x)) without using a calculator???
Other problems would be sin(sin^-1(x-1/2)) or cos^-1(2sin(x)).
Please help asap because I have a math test tomorrow and this is the only concept I do not know.
Thanks.
I assume that by $\sin^{-1}(x)$, you mean $\arcsin(x)$, etc, in which case my advice is to get rid of the inverse functions as soon as you can. They're nasty things!
Then, in the first one, let:
$y = \arccos(2x)$
So the domain is $-1\le 2x \le 1$, or $-\tfrac12 \le x \le \tfrac12$. And, assuming we're just taking the principal values, the range of values of $y$ is $0\le y \le\pi$.
Now if we re-write the above equation, we get:
$2x = \cos y$
$\Rightarrow \sin(\arccos(2x)) = \sin y=\sqrt{1-\cos^2y} = \sqrt{1-4x^2}$
noting that we only need the positive square root, because for $0\le y \le \pi,\; \sin y \ge 0$.
Therefore range of values of $\sin(\arccos(2x))\;( =\sqrt{1-4x^2})$ for $-\tfrac12 \le x \le \tfrac12$ is $[0, 1]$.
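(A quick numerical spot-check of this identity, added here and not part of the original reply, using NumPy:)

```python
import numpy as np

# sin(arccos(2x)) should equal sqrt(1 - 4x^2) on the domain -1/2 <= x <= 1/2
x = np.linspace(-0.5, 0.5, 101)
lhs = np.sin(np.arccos(2 * x))
rhs = np.sqrt(1 - 4 * x**2)

assert np.allclose(lhs, rhs)
print(lhs.min(), lhs.max())      # 0.0 and 1.0, matching the claimed range [0, 1]
```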
See if you can work out the others in the same way.
Grandad
3. Thanks, this should help me on my test.
http://math.stackexchange.com/questions/268693/is-there-a-more-elegant-way-of-computing-int-frac1-sinxdx-and-int-f | # Is there a more elegant way of computing $\int \frac{1}{\sin(x)}dx$ and $\int \frac{1}{\cos(x)}dx$?
Both integrals can be solved by substitution, and while I am comfortable with that, in both cases I find the method unbearably ugly, mostly because there are hundreds of seemingly feasible substitutions (and corresponding factors by which to multiply the numerator and denominator) when you look at the integral for the very first time, and so the one that happens to work must be memorised, either by rote or through experience of using it.
Is there a faster or more aesthetically appealing method of computing these (types of) integrals that 'forces the answer upon you' to a greater extent so that the solution does not require bursts of insight or previous experience, and can be applied generally to many types of awkward trigonometric integrals? Something using complex analysis maybe?
Or am I asking mathematics to be a little too easy on me?
When you talk about substitution, do you mean the trick of multiplying $\sec x$ by $$\frac{\sec x+\tan x}{\sec x+\tan x}$$ and the corresponding trick for $\csc x$? – Brian M. Scott Jan 1 at 17:04
Yes: by 'bursts of insight', I didn't just mean substitution, but multiplying by $1$ written differently. – Alyosha Jan 1 at 17:08
– experimentX Jan 1 at 17:09
That's the sort of thing I'm trying to avoid, as you need a unique epiphany for each integrand. – Alyosha Jan 1 at 17:11
definitely you would need experience of patterns to calculate integrals. The easiest way ... memorize the results. differentiation of the result will always give you a way which you will never miss. – experimentX Jan 1 at 17:13
## 4 Answers
There is a standard trick for a number of trigonometric integrals: substitute $z = \tan(x/2)$. By grinding through the trigonometric identities, you can obtain facts like
• $2\,dz = (1 + z^2)\, dx$
• $\cos(x) = 2/(1 + z^2) - 1$
and such. This converts your integral into a rational function, and you can apply standard methods to those. (e.g. partial fractions)
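A short SymPy sketch of this substitution for $\int \frac{dx}{\cos x}$ (my own illustration; the printed forms are typical SymPy output and may differ slightly between versions):

```python
import sympy as sp

x, t = sp.symbols('x t')

# After z = tan(x/2):  dx = 2 dz/(1+z^2)  and  cos x = (1 - z^2)/(1 + z^2),
# so the integrand (1/cos x) dx becomes a rational function of z (here called t):
rational = sp.simplify((1 + t**2) / (1 - t**2) * 2 / (1 + t**2))
print(rational)                    # 2/(1 - t**2), up to how SymPy arranges the signs

# Partial fractions and integration are now routine:
print(sp.apart(rational, t))       # 1/(1 - t) + 1/(1 + t), again up to sign conventions
print(sp.integrate(rational, t))   # a difference of logarithms in t = tan(x/2)

# For comparison, SymPy's direct answer:
print(sp.integrate(1 / sp.cos(x), x))
```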
isn't this Weierstrass subs? – experimentX Jan 1 at 17:18
@exp: Yes: I had long forgotten the name. I've added a link to my answer. – Hurkyl Jan 1 at 17:19
This is really the standard method, if I don't want to memorize the result as @experimentX commented above. – Babak S. Jan 1 at 17:24
I think the simplest way is$$\int\frac{dx}{\cos x}=\int\frac{\cos x dx}{\cos^2 x}=\int\frac{d(\sin x)}{1-\sin^2 x}=\int\frac{dz}{1-z^2}=\frac{1}{2}\left(\int\frac{dz}{1-z}+\int\frac{dz}{1+z}\right)$$ Now these are easy to calculate. The same method works for the other integral also. Generally if you have an integral like $$\int\frac{dx}{\cos x\cdot F(\sin x)}$$ where $F$ is some nice polynomial (like a product of linear and quadratic factors), then you can use the same trick to convert the integral into the form$$\int\frac{dz}{(1-z^2)\cdot F(z)}$$ and try to solve it using the method of partial fractions.
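For completeness (this last step is my addition, not the answerer's), evaluating the two logarithmic integrals and substituting back $z=\sin x$ gives

$$\int\frac{dx}{\cos x}=\frac{1}{2}\ln\left|\frac{1+\sin x}{1-\sin x}\right|+C=\ln\left|\sec x+\tan x\right|+C.$$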
Another approach may be:
Suppose we have $\int R(\sin(x),\cos(x))~dx$ wherein $R$ is a rational function with respect to $\sin(x), \cos(x)$. Then we have the following substitutions also: $$R(-\sin(x),\cos(x))\equiv -R(\sin(x),\cos(x))\Longrightarrow t=\cos(x)\\ R(\sin(x),-\cos(x))\equiv -R(\sin(x),\cos(x))\Longrightarrow t=\sin(x)$$
For other and general cases you can use what @Hurkyl suggested.
Antiderivatives of $1/\sin x$ and $1/\cos x$ are known and usually provided in a table of antiderivatives. See here for instance: http://en.wikipedia.org/wiki/List_of_integrals_of_trigonometric_functions
http://matthewkahle.wordpress.com/2011/08/23/analyzing-card-shuffling-machines/?like=1&_wpnonce=13170383b7 | # Analyzing card shuffling machines
Diaconis, Fulman, and Holmes have uploaded a preprint titled, “Analysis of Casino Shelf Shuffling Machines.” The paper provides a brief overview of the venerable history of mixing time of card shuffling, all the way back to early results by Markov and Poincaré, and their main point is to analyze a model of shuffle that had not been studied previously. What I found most interesting, though, was their account of successfully convincing people in the business of making card shuffling machines that their machines weren’t adequately mixing up the cards. They gave the manufacturers one mathematical argument, based on total variation distance, that they didn’t accept, and then another argument, based on a card guessing game, that they did.
I’ll describe the card guessing game. I flip through a deck of 52 cards, one card at a time, and before I flip a card you try to guess what it will be. Let’s say you have a perfect memory for every card that’s already been flipped, so you obviously won’t guess those. On the other hand, if the cards are in a truly random order to start out, you obviously don’t have any better strategy than to guess uniformly among the remaining cards. An easy analysis gives that your best possible expected number of correct guesses is ${1 \over 52} + {1 \over 51} + \dots + { 1 \over 1} \approx 4.5$. On the other hand, the authors described a strategy (conjectured to be best possible) that allows one to guess an average of $9.5$ cards correctly, on a totally ordered deck run through the shelf shuffling machine only once. This suggests strongly that the cards are not sufficiently random.
This analysis convinced the company to have the shelf shuffling machine make two passes through the deck, rather than one as they had initially hoped. The president of the company told them that “We are not pleased with your conclusions, but we believe them and that’s what we hired you for.”
http://www.physicsforums.com/showthread.php?t=371302&page=3 | Physics Forums
## How do we alleviate the shortage of qualified physics teachers?
Quote by physics girl phd Show me a university with an organized course like that for some science camp (supported by physics, chem, bio, etc. departments along with teacher ed), and I'd show you a program that could recruit and train excited, qualified teachers that can teach at a K-12 level.
While not exactly that, I can't say enough about the Physics Education program at Illinois State. ISU is known as mostly an education school, and as far as the special ed and primary ed programs go I'm sure that's true, but from the little I saw of the secondary ed program, I hated it. However, most of the secondary ed training takes place inside the content area itself. Even though my degree was Physics Education the whole way through, I took a bunch of ed classes and still was only 1 semester away from a full physics degree, so content knowledge was no problem.
As far as teaching experiences, we were only required 80 hours of observation by law, but our program required 140, a large portion of which was hands on experience in the classroom. We also took part in several teaching activities at local science museums, and went to teach a couple lessons at the Juvenile Detention center in town. While not a part of the program itself, the department runs something called "Physics on the Road" where we take physics equipment and demos around to schools that either lack the money for equipment, or the student numbers for physics courses and teach concepts and lessons. We also took part in a "Physics Night" each month at the children's discovery museum.
Even though nothing can really make you 100% ready for teaching until you get through your first year, this program really gave us as much experience as possible, with as much content knowledge as possible.
Quote by Birkeland While not exactly that, I can't say enough about the Physics Education program at Illinois State.
Thank you! I knew there were some out there -- your alma mater's well-established "physics on the road" and "physics night" programs sound like great ways to get participants in the field. As Andy points out, the "inertia" of some institutions can be hard to overcome. People think $ is hard to come by, and then get possessive about their stuff / turf and are afraid to let undergrads have at it (especially if this undergrad doesn't compare favorably to that one ten years ago).
I'm glad to see from your profile you're still an active teacher / success story (and I agree about that first year being the true test point -- but our training programs certainly need to provide as much good prep for that first year as possible!).
Quote by physics girl phd People think $ is hard to come by, and then get possessive about their stuff / turf and are afraid to let undergrads have at it (especially if this undergrad doesn't compare favorably to that one ten years ago).
You know, I never really thought about it, but I think part of the reason the program might have been as successful as it is might be because the school does not have a grad program. As I understand it, the university has tried to encourage the development of a grad program, but the department has fought it, preferring to focus on undergrad. Personally I loved it, because it meant that the undergrads did research with the professors, and I got to publish a couple of papers, and I got to TA lab sections of gen ed classes, all of which gave me experiences I might not have otherwise had.
Then again, I could be completely wrong and it could be very common for a large portion of undergrads to do research.
Thanks for the comments everyone. Yes, I am teaching algebra-based physics. Some years we get into a little bit of trig, but I don't remember enough calculus to introduce that to my classes, plus only about half of my students take calculus at the same time they are taking physics. We only teach one year of physics at my school. We have an honors physics class and a general physics class offered, which students have the opportunity to take their senior year.

For physics labs I have access to the modeling curriculum, but it's very sequential and I find it hard to pull out things here and there... it seems like an all or nothing sort of program. The worksheets are very oriented to their activities and I do not have time to follow it completely, so I do not use very many of their materials. It sounds like there are new materials available, but if I understand correctly they are only given out to workshop participants. I also have a copy of PRISMS, which is a nice physics lab set that follows the learning cycle nicely. However, I find my students can handle a few of the learning cycles and then get tired of doing it over and over again. I have the same problems with modeling. Students get really tired of doing worksheet after worksheet (even though they are very hands-on labs).

Some of the problem is our school's culture. I swear our students really do prefer a lecture-style class with concrete problems and definite right or wrong answers. They get very upset if they get anything counted wrong with their lab grades, saying that "those were the results we got!" even though if they had actually done the lab diligently and followed directions they couldn't have gotten those results.

Oh well... my biggest gripe is not actually about the labs available but the other support materials. Just the other day I had to create a (rather pathetic) worksheet so students could practice drawing circuits using schematic diagrams. The worksheets I found were more about calculations than just practice drawing. I find it helpful to have students do the drawings before they begin the calculations. I had hoped to use problems and resources from our book more extensively this year (Holt Physics) but it's just not working out.

I'm sure I'll get better as I grow in my physics knowledge, but I know I'll be spending a lot of my summer trying to get my physics classes organized for next year. I will have three honors physics classes and one general physics class, along with two very low level freshman physical science classes. My problem is I can't always tell which areas the students need extra practice in until after I give them a worksheet or lab that is over their heads. Then I need to figure out an in-between step to get them ready for the main event. It is when I go looking for that in-between worksheet or practice problems that I am floundering. They don't seem to exist.

Zodea
Salaries are an issue, but the teacher still needs to have the desire to teach. I'm teaching because that's what I want to do, even though I could make (and have made) a lot more doing other things. There's no performance bonus or anything like that. The days are long, at least the hours I put in.

A lot of the kids are brats and think they deserve an A for walking in the door, and the parents are worse. The students expect to have a "review sheet" that has the exact problems (they are ok if the values are a little different) that will be on the test. The students throw a fit if you put any problem on a test where they have to use previous knowledge and common sense to solve instead of just memorizing the answers. The students are severely lacking in math skills, even rearranging a simple algebraic formula.

The elementary and middle school teachers have a poor grasp of science. I heard a 3rd grade teacher telling the kids an open circuit lets the electricity flow just like you open a faucet to let the water flow. They split social studies time with science time, and if the administration doesn't rob that time for an assembly or such, the elementary teacher spends more time on social studies because that's where the teacher is more comfortable.

Finally, the State of Texas is in the process of dumbing down physics to what was once called physical science. The intentions were good -- every student needs four years of science and four years of math to graduate, but all this does is force kids who aren't ready to show up at my door. Then I have to spend a month on significant figures and algebra review so they don't get a 40 on their first report card.

Blah. Ok, I finished. But in spite of all of that, my administration is great and I do enjoy what I'm doing.
Quote by jamesnb The students are severely lacking in math skills, even rearranging a simple algebraic formula. The elementary and middle school teachers have a poor grasp of science.
The high school (in Texas also) I went to ended up replacing almost the entire math department after the first year it opened. I think most of them couldn't take the utter lack of basic math skills; I think there was a major breakdown in math education at the middle school level.
Quote by jamesnb I heard a 3rd grade teacher telling the kids an open circuit lets the electricity flow just like you open a faucet to let the water flow.
What?? That's almost as bad as this.
Quote by jamesnb Finally, The State of Texas is in the process of dumbing down physics to what was once called physical science. The intentions were good -- every student needs four years of science and four years of math to graduate but all this does is force kids who aren't ready to show up at my door.
I looked over the new end of course exam topics for physics and was horrified.
Finally, The State of Texas is in the process of dumbing down physics to what was once called physical science. The intentions were good -- every student needs four years of science and four years of math to graduate but all this does is force kids who aren't ready to show up at my door. Then I have to spend a month on significant figures and algebra review so they don't get a 40 on their first report card.
Yeah, as bad as that is I might be able to top it. My school is now requiring that all 11th-grade students must take physics. This is a low-income community where 45% of the school has free lunch. Some kids come in at a 3rd grade reading level. As a result of all this, I was put in charge of the committee to write the new curriculum. I also happen to be the only teacher with any real physics training. Despite this we will have 3 teachers teaching physics who have never taken a physics class in their life. On top of this, I wanted to make the lowest level course (where the kids who have never passed a math class are) a conceptual course. I was told that it had to be algebra and trig based (even though some kids can't add) because "that's the model we are following".
But that’s ok, what follows is a paraphrased conversation the year we were told this was happening.
ADMIN: We will require all juniors to take physics. If you look at your data I gave you, you’ll see that students who have taken physics score higher on the ACT. As a result, we have decided to drop Earth Science for Physics so that they will score better.
ME: But, physics is currently optional.
ADMIN: So?
ME: So the students taking it are more likely self motivated, are planning on college, or have parents involved in their life pushing them to take the course. That’s the three groups of students predisposed to do well on the ACT!
ADMIN: But the numbers say physics good!
ME: You’ve never taken a stats class have you?
Oh well, job security I guess.
Quote by jamesnb A lot of the kids are brats and think they deserve an A for walking in the door and the parents are worse. The students expect to have a "review sheet" that has the exact problems (they are ok if the values are a little different) that will be on the test. The students throw a fit if you put any problem on a test where they have to use previous knowledge and common sense to solve instead of just memorizing the answers. The students are severely lacking in math skills, even rearranging a simple algebraic formula. The elementary and middle school teachers have a poor grasp of science. I heard a 3rd grade teacher telling the kids an open circuit lets the electricity flow just like you open a faucet to let the water flow. They split social studies time with science time and if the administration doesn't rob that time for an assembly or such, the elementary teach spends more time on social studies because that's where the teacher is more comfortable. Finally, The State of Texas is in the process of dumbing down physics to what was once called physical science. The intentions were good -- every student needs four years of science and four years of math to graduate but all this does is force kids who aren't ready to show up at my door. then I have to spend a month on significant figures and algebra review so they don't get a 40 on their first report card. Blah. Ok, I finished. But in spite of all of that, my administration is great and I do enjoy what I'm doing.
Quote by jhae2.718 The high school (in Texas also) I went to ended up replacing almost the entire math department after the first year it opened. I think most of them couldn't take the utter lack of basic math skills; I think there was a major breakdown in math education at the middle school level.
Quote by Birkeland Yeah, as bad as that is I might be able to top it.
Ugh. I empathize with you all. The way standardized tests have impacted the US school system is horrifying and even worse, there is still a push to *increase* the role standardized tests have on the educational system, by tying student test scores to teacher salaries and promotions.
Quote by Andy Resnick Ugh. I empathize with you all. The way standardized tests have impacted the US school system is horrifying and even worse, there is still a push to *increase* the role standardized tests have on the educational system, by tying student test scores to teacher salaries and promotions.
With the caveat that I am not an educator, but merely a survivor of the horrors of the system, I must say I agree with you. Also, I apologize in advance for the following rant...
<rant>
The amount of ridiculous and, quite frankly, idiotic things we were forced to do to "prepare" for standardized tests was horrifying. The brainwave of the science curriculum administrators at the district I went to for preparing for standardized exams (in Texas, the infamous TAKS test) was "foldables". These were different papers that the students would put together and would fill in notes dictated by the teacher. In other words, it was essentially a kindergarten level activity applied to high schoolers, leading to no benefit at all.
When I took the algebra-based physics course my high school offered (prior to taking a proper AP Physics course; this course was labeled 'Pre-AP'), I was shocked and horrified at just how bad the course was. Some of the general crimes against mathematics and physics committed were:
• Claims that the kinematic equation d = (1/2)at^2 + vt* could not be solved because the "t was in two places, and one was squared", lest we have the horror of having to explain how to select the correct solution to a problem.
• The density of water being described as 1 kg/m3, because apparently the teacher or whomever developed the curriculum could not do the simple conversion of units.
• Problems were entirely formulaic and were either "one-step" or "two-step". These required no critical thinking at all, reducing the exercises of the class to simply "plug and chug".
• Free-body diagrams were only glanced over, and the entirety of exercises consisted of drawing an FBD for a block. The use of FBDs as a problem solving tool was not covered. How does one do physics without an understanding of how to draw a free-body diagram?
• Vectors were barely covered, and in a way that had no usefulness. (I realize a high school physics class should not necessarily be at a college level, but surely they don't need to be dumbed down this much?)
• One of the worst was the claim that work could only be done horizontally, in the "x" direction. I don't know how they got this idea, as it makes no sense at all.
We never got to electromagnetism, but I don't even want to think of the horror inflicted there. It scares me to think of the people coming out of that class with such misconceptions of physics. Even worse, however, was the total lack of problem solving skills or critical thinking. This has to be one of the worst aspects of the current education system, that students are no longer made to think critically.
I think the science education in the K12 system is pretty awful; I think a lot of teachers aren't qualified to teach hard sciences. At the very least, to teach a science one should possess a degree in that science. Of course, that's easy to say, but it brings us to the macroscopic subject of the thread, namely the lack of qualified physics (and I think science in general) teachers.
And on this, I'm not sure how we can resolve the issue. I only know that it is imperative that we do.
(On another note, one of the best classes I took in high school was a Pre-AP Biology class. The teacher was extremely knowledgeable on the subject (shockingly she actually had a master's degree in biology, and not something like a masters in education; she also was a part time lecturer at a nearby university) and was demanding of the students. But of more interest from a pedagogical side was a program she helped start whereby local high school teachers would do research in their subject in labs at that particular university over the summer. I think that it is a great idea, and should be more widespread. I'm not sure if the program survived; she ended up retiring after the school administration kept complaining that she made students work too hard and wanted her to tone down the course.)
*We couldn't have the horror of writing it as a function of time, could we?
</rant>
I'm lucky in that I have pretty good students and most of the parents care and like I said before, the administration is supportive. I really hate this idea the students have that the homework problems should be the same test questions. They start whining and literally crying on test day because they've "never seen this stuff before". They also think they are entitled to a review sheet with the exact problems that will be on the test. I try to structure the assessments so if they do the homework and try, they can make a C. If they can apply some of what they learn they can make a B and to get an A+, they need to be able to do some higher level thinking and apply previous knowledge. Sorry but I had to vent because it was another round of test whining. Plus, it's mid-April and some of them are acting like they've never heard of vectors or the relationship between velocity and acceleration. Thanks for the gravity magnet, jhae; that was funny.
In second grade, my son was being taught the states of matter: solid, liquid, gas. One day at a bookstore, he happened upon an Encyclopedia of Science. Paging through, he stopped and ran to me across the store exclaiming that there was a FOURTH state of matter - Plasma! It was, in his mind, the best state because it occurs so rarely and the conditions have to be just right. We were both thrilled at his "new" discovery. I had him write down some details of his discovery to show his teacher. He is dyslexic, and although this was just emerging, his dysgraphia made his effort difficult. Still, I encouraged him, and he took his paper bouncing on his feet to school in anticipation of sharing this great news. The teacher's response? "We won't be studying that here, you'll learn about it in a few years." That was the first time this teacher crushed his enthusiasm for science and exploration.

The second time, the class was told to make Mother's Day cards. He drew himself and me on the moon happily waving (I was in the pink space suit). His teacher took the card, told him it wasn't acceptable, had him throw it away and try again. To my everlasting joy, he retrieved the card and gave it to me anyway because, "I knew you'd understand mom, and like it."

Just two examples. Kids can learn so much more than they are offered. We chose to home school. We found pattern cut-outs of T4 Bacteriophages online from a Colorado university for a science class. We printed and constructed the project and he learned all about DNA, proteins, cell cycles...age 8. His IQ was tested at only about 128 so he wasn't necessarily "gifted", but innately curious. He also needed psychotherapy due to the cruelty of teachers who not only did not catch signs of a learning disorder, but punished him because he could not read as fast as others, complete assignments as quickly, or write in cursive.

Physics, mathematics, reasoning, learning to learn conceptually or differently begins in elementary school and often before. You cannot alleviate the shortage of "qualified" physics teachers without thinking back to how they 'germinate' and then construct a system that nurtures curiosity.
Get the government to pay for my house/petrol/food/bills/uni fees, then you have 1 more high school physics teacher in the making! Sorry must have fell asleep for a second there :)
Sorry, a bit of a rant as well coming from the student/parent perspective. So many caveats to teaching and learning. Still I believe children can be fascinated by seemingly complex ideas at very young ages - and that is where we must start.
Regarding the Mother's Day card -- unless the teacher gave specific instructions that he didn't follow, what the teacher did was awful. Even if he didn't follow the teacher's instructions, a teacher should never crumple up a student's work and throw it in the trash. That whole incident sounds bizarre.

Regarding the 4th state of matter -- I usually have an in-depth discussion about plasma, ionizing gasses and electron configuration with the high school physics students. They ask why they haven't been taught this before; all the other teachers say is just that plasma is the 4th state of matter. My standard answer is most teachers generally don't understand what a plasma is. I know students don't have the best memory but I really believe it's just mentioned and not explained.

Having said all that, it is difficult to cover what the state mandates to cover and prepare the students for the standardized tests and wedge additional material in. It's hard to fight the urge to go to the depth you think they really need and cover additional topics that you know the students should know and will need to know if they plan to study science in college. This is made even more difficult by the number of instructional days lost to merely administering the standardized tests. We lost 18 instructional days due to testing this year and next year will be the same.

We had an in-service a couple of months ago and it included some motivational type speaker that actually said "what's wrong with teaching to the test?" "Get rid of all the extra stuff you think the kids need to know and just teach them what they need to know for the test." After I made sure he wasn't being sarcastic, I asked what about higher level thinking, problem solving and teaching the students to be independent thinkers. He ducked the question and I realized whatever we paid him, we were being robbed. I pretty much tuned him out after that and started thinking about some labs I could do on electricity.

@EMFSmith-- more money would be a step in the right direction.
Hi all, another problem is that if you apply as a high school physics teacher, you may only teach physics as a small percentage of the total teaching load. The other 90% of your load (say) could be teaching some area you're not familiar with, such as biology etc. That's what can happen in Australia, for example.
Quote by zodea I realize this thread has been going on forever, but I would like to make a couple of comments. I teach high school physics. I doubt that I'm really considered qualified. After I have taught physics a few more years I might be getting close. One of my biggest complaints is lack of student friendly materials. Let's say I want my students to do some additional practice on motion graphs. If I spend some time searching the Internet for potential worksheets/practice problems I might come up with 2-5 things that might possibly work with my students. However, if I want to find something to help my biology students practice identifying parts of a cell and their functions (or any other biology topic) I will easily finds 100's of things that could be useful and it will take me much less time. So if you want to make physics teachers "better" supply them with the necessary materials. I would seriously like to see some honest to goodness drill and practice sheets for physics. The materials that come with the text books are not what I need. They often only have one or two of each type of problem and if students need extra reinforcement it's not there. Even if you have a "qualified" physics teacher, they probably get fed up with creating all their own materials and leave for something else. I'm to the point I'd rather teach any other subject than physics because there is so much more support for the other subjects. But I'm basically stuck teaching physics because I am the only teacher in my school who has the certification. Zodea
I agree!
http://www.physicsforums.com/showthread.php?t=583052 | Physics Forums
## Gamma coincidence
So we know gamma decays are directionally symmetric, but assume we have two detectors and we want to know the likelihood of two γs hitting each detector at the same time as a function of the angle between the line connecting the first detector and the source and the line connecting the second detector and the source. Assume the detectors are equidistant from the source. Obviously, the likelihood has to be related to the size of the detector and to the rate at which the γs are being produced, since if the detectors were infinitesimally sized the only coincidences would be either a result of two separate decays or would occur at $\theta = \pi$ radians. How might we go about doing this?
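Not an answer to the correlation question itself, but as a baseline one can estimate the purely geometric coincidence probability by Monte Carlo. The sketch below is my own simplification and assumes the two γs of each decay are emitted independently and isotropically (i.e. no angular correlation) into two circular-aperture detectors of half-angle α; in that case the coincidence fraction is just (Ω1/4π)(Ω2/4π), independent of the angle between the detectors, and any real cascade correlation would show up as a deviation from this flat baseline:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_directions(n):
    """n isotropically distributed unit vectors."""
    v = rng.standard_normal((n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def coincidence_fraction(theta_between, half_angle, n=200_000):
    """Fraction of decays whose two (uncorrelated) gammas hit both detectors."""
    d1 = np.array([0.0, 0.0, 1.0])
    d2 = np.array([np.sin(theta_between), 0.0, np.cos(theta_between)])
    g1 = random_directions(n)                   # direction of the first gamma
    g2 = random_directions(n)                   # second gamma, independent of the first
    hit1 = g1 @ d1 > np.cos(half_angle)
    hit2 = g2 @ d2 > np.cos(half_angle)
    return np.mean(hit1 & hit2)

alpha = np.radians(30)                          # detector half-angle (assumed geometry)
omega_fraction = (1 - np.cos(alpha)) / 2        # Omega / 4pi for a circular aperture
for theta in np.radians([30, 90, 180]):
    print(np.degrees(theta), coincidence_fraction(theta, alpha), omega_fraction**2)
```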
Check out Perturbed Angular Correlation Spectroscopy (PAC). I'll post a good link when I find one... Oh, I forgot: When you orient the nuclei, the gamma emissions are not isotropic (directionally symmetric) anymore. But you need very low (milliKelvin) temperatures for that.
To observe PAC you don't necessarily need very low temperatures. That you need only if you want to observe an asymmetry of the uncorrelated emissions. This one is quite good, but you need a bit of background knowledge in condensed matter physics. http://physik2.uni-goettingen.de/res...fs/methods/pac This one give more of the gory details. http://www.ias.ac.in/pramana/v70/p835/fulltext.pdf
http://mathoverflow.net/questions/23877/a-question-on-a-davis-complex-of-a-coxeter-group | ## A question on a Davis complex of a Coxeter group
Let us have a look at p. 64 of M. Davis's book "The Geometry and Topology of Coxeter Groups". The discussion preceding Definition 5.1.3 shows that $\mathcal{U}(G, X)/G$ is homeomorphic to $X$. Theorem 7.2.4 says that $\mathcal{U}(W, K)$ is $W$-equivariantly homeomorphic to the Davis complex $\Sigma$. So, $\Sigma/W$ is homeomorphic to $K$. $K$ is the cone on the barycentric subdivision of the nerve $L$. $L$ can have the topological type of any polyhedron. So $K$ can be a cone on any polyhedron (up to homeomorphism). But the action of $W$ on $\Sigma$ is cocompact (p. 4, bottom). So $\Sigma/W$ is compact, i.e., $K$ is compact. So a cone on any polyhedron is compact. What's wrong?
-
Without having looked closely (I don't have the book to hand), I guess: the action of W on $\Sigma$ is cocompact if and only if W is finitely generated, which is true if and only if L is compact (which is true if and only if the cone on L is compact). – HW May 7 2010 at 17:11
Actually, my first statement (the action of W on $\Sigma$ is cocompact if and only if W is fg) is probably only true for freely indecomposable W. What I'm trying to say is that several of these statements probably implicitly assume that W is finitely generated. – HW May 7 2010 at 17:19
OK, I stand by what I said first time round. The action of W on $\Sigma$ is cocompact if and only if W is fg. (Otherwise, $\Sigma$ isn't even locally compact.) – HW May 7 2010 at 17:47
But doesn't $W$ act simply transitively on the vertex set of $\Sigma$ and hence collapses some vertices of $K$ into a point in $\Sigma/W$? Why is $K$ then a strict fundamental domain? – Kestutis Cesnavicius May 7 2010 at 18:30
No, I don't think W collapses any vertices of K. Here's the example I have in mind. Let L be a disjoint union of countably many points. Then W is the free product of countably many copies of Z/2, and $\Sigma$ is a locally countably-infinite tree. Each generator is a reflection in an edge of $\Sigma$. Topologically, $\Sigma/W$ is indeed the cone on L; from another point of view, it's the union of a vertex of $\Sigma$ with all the incident half-edges - in other words, a fundamental domain. Does that help? – HW May 7 2010 at 18:39
http://math.stackexchange.com/questions/45776/closest-pair-of-points-algorithm/45780
# “Closest pair of points” algorithm
I'm having a problem understanding why I just have to consider the next 7 points in the closest pair of points algorithm.
Can someone explain it in greater detail?
-
## 3 Answers
I've simply included the diagrams (here they're larger than on the webpage you linked us to!), along with the caption below the figure: "Key concepts".
Together, with thoroughly reading the "Correctness" (and proof of algorithm) section of the exercise, hopefully you'll be able to answer your question. If not, feel free to edit your post and specify, in much greater detail than you first provided us, what it is exactly that you're confused about. We'll be in a much better position that way to satisfactorily answer your question(s).
Figure #33.11: Key concepts in the proof that the closest-pair algorithm needs to check only $7$ points following each point in the array Y'.
(a) If $p_L \in P_L$ and $p_R \in P_R$ are less than $\delta$ units apart, they must reside within a $\delta \times 2\delta$ rectangle centered at line $1$.
(b) How $4$ points that are pairwise at least $\delta$ units apart can all reside within a $\delta \times \delta$ square. On the left are $4$ points in $P_L$, and on the right are $4$ points in $P_R$. There can be $8$ points in the $\delta \times 2\delta$ rectangle if the points shown on line $1$ are actually pairs of coincident points with one point in $P_L$ and one in $P_R$.
This last point may be the key, in terms of understanding why there can be no more than $8$ points in a $\delta \times 2\delta$ rectangle. The maximal number of points in a $\delta \times \delta$ square, such that any pair of points is at least $\delta$ units of distance apart, is at most $4$.
Why? Try dividing a $\delta \times \delta$ square into $4$ squares (each of dimension $\delta/2 \times \delta/2$). The maximum distance between any two points in a given subdivided square is the length of its diameter: $\sqrt{2}\delta/2$, which is less than $\delta$. In this way, each such subdivided square can contain at most one point, and, thus, at most $4$ points can be contained in the $\delta \times \delta$ square.
-
Thank you. I now understand. ;) My problem was that I couldn't understand why the δ × 2δ rectangle has at most 8 points. But now I see that there will be a contradiction if there is more than 8 points. – user11775 Jun 16 '11 at 22:18
@user11775: Great! It isn't the easiest of webpages to read (tiny print), and the diagrams were pretty small (certainly hard to make out deltas and such!), but they seemed helpful to understanding the "key concepts" and "correctness" explanation. – amWhy Jun 16 '11 at 22:56
It's explained below the headline "Correctness". At most 8 points may reside in the $\delta \times 2\delta$ rectangle because at most 4 may be found in the left square and at most 4 may be found in the right square. With 8 points in total, it is enough to check for 7 points in the array $Y'$. The background of the proof is complicated and is described on the page you mentioned.
-
I know. I'm confused about the part: "it is easy to see that we need only check the 7 points following each point in the array Y′." – user11775 Jun 16 '11 at 19:58
There are 2 things to understand:
(1) The first is described well in the text you linked to: After finding the closest pair of points on the left side (denote the distance between them by $\delta_L$), and the closest pair of points on the right side (denote their distance by $\delta_R$), we are only interested in knowing whether there is an even closer pair of points, of which one lies in the left side and the other in the right side. So we're only interested in such pairs $(p_L, p_R)$ which are less than $\delta=\min(\delta_L, \delta_R)$ units of distance apart. So the distance between $p_L$ and the line $l$ (see figure $(a)$) must not be more than $\delta$ and the distance between $p_R$ and the line $l$ must not be more than $\delta$. That is, it's enough to only consider the points lying in the infinite vertical strip shown in figure $(a)$. Further, the vertical distance between $p_L$ and $p_R$ must not be more than $\delta$ (this is shown in figure $(b)$). So, we can first filter out all points lying outside the vertical strip (figure $(a)$). Then, remembering that we have the points sorted according to their $y$ coordinate, we see that it's enough to consider, for each point, all the points above it until we encounter a point more than $\delta$ units of distance apart vertically. How many points might we check until this happens? This is bounded from above by twice the maximal number of points in a $\delta \times \delta$ box, under the constraint that no two points are less than $\delta$ units of distance apart (remember that each of these two $\delta \times \delta$ boxes is entirely contained either in the left side or the right side, and recall how $\delta$ was defined). So, how many points is this?
This is the second part to understand, which is actually not really proved in the text you linked to: (2) The maximal number of points in a $\delta \times \delta$ box, such that any pair of points is at least $\delta$ units of distance apart, is at most 4. The proof is easy: Divide the box into 4 boxes, each of size $\delta/2 \times \delta/2$. The diameter of each such box is $\sqrt{2}\delta/2$ which is less than $\delta$. Therefore, in each such small box you can have at most one point and hence there are at most 4 points in the $\delta \times \delta$ box.
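For readers who want to see the pieces fit together, here is a minimal Python sketch of the divide-and-conquer algorithm the answers describe; the names and structure are mine, not from the linked notes, and it assumes the input points are distinct.

```python
# Minimal sketch of the closest-pair divide-and-conquer algorithm.
import math

def closest_pair(points):
    """points: list of distinct (x, y) tuples; returns the smallest pairwise distance."""
    px = sorted(points)                      # sorted by x
    py = sorted(points, key=lambda p: p[1])  # sorted by y
    return _closest(px, py)

def _dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def _closest(px, py):
    n = len(px)
    if n <= 3:                               # brute force the small cases
        return min((_dist(p, q) for i, p in enumerate(px) for q in px[i + 1:]),
                   default=float('inf'))
    mid = n // 2
    mid_x = px[mid][0]
    left_px, right_px = px[:mid], px[mid:]
    left_set = set(left_px)
    left_py = [p for p in py if p in left_set]
    right_py = [p for p in py if p not in left_set]
    delta = min(_closest(left_px, left_py), _closest(right_px, right_py))

    # Points within delta of the dividing line, still sorted by y.
    strip = [p for p in py if abs(p[0] - mid_x) < delta]
    best = delta
    for i, p in enumerate(strip):
        # By the packing argument above, only the next 7 points can be closer than delta.
        for q in strip[i + 1:i + 8]:
            if q[1] - p[1] >= best:
                break
            best = min(best, _dist(p, q))
    return best
```

The inner loop never looks at more than 7 candidates, which is exactly the $\delta \times 2\delta$ packing bound discussed in the answers.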
-
http://mathoverflow.net/questions/53548/motivation-of-filtered-colimits/53576 | ## motivation of filtered colimits
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
I am trying to move in categorical algebra beyond the basics. A Lawvere theory L is a small category with finite products. (I know that there also is a functor $(skeleton(FinSet))^{op}\to L$, which restricts a number of sorts in the algebraic theory to 1. Let's drop that requirement for now.) How to convert some variety to a Lawvere theory is pretty clear to me. The link (varieties ↦ Lawvere theories) is clear in some elementary operations, like
• mapping an algebra by some functor F ↦ postcomposing F;
• underlying functor ↦ precomposition of a functor between Lawvere theories.
Then filtered colimits come. Let's take for reference “Adámek, A Categorical Introduction to General Algebra.” Chapter 2 “Sifted and filtered colimits” and chapter 3 “Reflexive coequalizers” make no mention of varieties. Why is the definition of a filtered colimit the way it is? I suppose there should be more concrete explanations involving algebraic operations; this is called “algebra” after all. Google suggests few texts on this subject, but they are abstract too. Any references?
The claim “an arbitrary algebra is a filtered colimit of finitely generated algebras” is needed to construct the left adjoint to an underlying functor. Can anyone refer me to its proof? (Update 2011-01-29. Also I want a precise proof constructing that left adjoint.) (Update 2011-01-29. Thank you all for insightful answers and comments. I suspect that there is no direct link between filtered colimits and traditional algebra, i.e. it is an abstract thing that is needed for another abstract thing… I need to think it through to formulate further questions.)
-
Filtered colimits are particularly well-behaved colimits (they have pretty much the same properties as increasing unions). The definition of filtered is like it is because experience has shown that it captures the usefulness and good properties of increasing unions. As for finitely generated subalgebras of an algebra: show that the set of finitely generated subalgebras is directed by inclusion, and then show that the colimit of the tautological functor defined on that set is the algebra you started with. – Mariano Suárez-Alvarez Jan 27 2011 at 23:12
It is probably me, but I find the text of your question very confusing. Maybe you could make it more evident what exactly it is that you are asking. – Mariano Suárez-Alvarez Jan 27 2011 at 23:27
It is confused, because I am confused. Is there a separate website for confused question? ;) – beroal Jan 28 2011 at 21:49
## 5 Answers
To expand on one of the points in David's answer, the absolutely crucial property of filtered colimits is that
Finite limits commute with filtered colimits in Set.
It's probably more important to know this than to know the definition of filtered colimit. In fact, you can use it as a definition, in the following sense:
Theorem Let $J$ be a small category. Then the following are equivalent:
• $J$ is filtered
• colimits over $J$ commute with finite limits in Set.
One weak point of the wikipedia article is that it gives the very concrete definition of filtered category, but it doesn't mention the following more natural-seeming formulation: a category $J$ is filtered if and only if every finite diagram in $J$ admits a cocone.
(A finite diagram in $J$ is a functor $D: K \to J$ where $K$ is a finite category. A cocone on $D$ is an object $j$ of $J$ together with a natural transformation from $D$ to the constant functor on $j$. The three conditions stated in the Wikipedia article correspond to three particular values of $K$.)
If the last couple of paragraphs have helped you, you can balance your karma by incorporating them into the Wikipedia page :-)
-
Good points, Tom. But I don't think that Wikipedia should be a source for this kind of math anyway. Some years ago I edited some articles, but now I think it's worthless. – Martin Brandenburg Jan 28 2011 at 9:17
it's even a little better than Tom says: if $J$ is filtered, and $D:K\to J$ is a finite diagram in $J$, the category of cocones over $D$ is not just non-empty but connected. – Steve Lack Jan 28 2011 at 10:13
…and why are finite limits in Set important? – beroal Jan 28 2011 at 20:49
Finite products are generally assumed when talking about models for a Lawvere theory, or finite limits more generally for essentially algebraic theories. – David Roberts Jan 28 2011 at 22:14
@Tom Leinster “every finite diagram in $J$ admits a cocone” I guess your definition is an analogue of logical equivalences “the category has colimits ↔ the category has equalizers and products ↔ the category has pullbacks and a terminal object” and “the subset is directed ↔ every 2 elements in the subset has an upper bound and the subset has the least element”. That is a useful technical detail worth mentioning in an encyclopedia, but it is not problem-breaking. – beroal Jan 29 2011 at 8:01
About the proof that every algebra is a filtered colim of finitely presented ones: Every algebra has a presentation by generators and relations. You can just build your colim diagram by gathering finitely many generators at each stage and dividing out by the relations between those. Then every generator will occur in the diagram, hence in the colim, and since each relation is only between finitely many generators they all are introduced at some place in the diagram, too. A thorough proof is in Adamek/Rosicky's "Locally presentable and Accessible Categories"
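To add a concrete instance of the kind of filtered colimit this produces (my example, not from the book): as an abelian group, $$\mathbb{Q} \;=\; \operatorname{colim}\Bigl(\mathbb{Z}\subset \tfrac{1}{2!}\mathbb{Z}\subset \tfrac{1}{3!}\mathbb{Z}\subset\cdots\Bigr),$$ a filtered (indeed chain-indexed) colimit of finitely generated (in fact cyclic) subgroups: every element of $\mathbb{Q}$ lives in some stage, and every relation among finitely many elements already holds at some stage.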
-
Here are two further ways that one might motivate filtered colimits. I'll put them in a different answer from my previous one, since they're separate thoughts, although they're still along the lines of "think about what filtered colimits do rather than what the definition is".
First motivation
The functors from Set to Set that appear in universal algebra often have the property that they are "determined" by their values on finite sets. To be more precise: given any functor FinSet$\to$Set, there is a canonical way of extending it to a functor Set$\to$Set (namely, left Kan extension). A functor Set$\to$Set is called finitary if when you restrict down to FinSet and then extend back up to Set again, you get back the functor that you started with.
For example, the free group functor $T:$Set$\to$Set, sending a set $X$ to the set $T(X)$ of words in $X$, is finitary. Informally, this is because the theory of groups involves only finitary operations: each operation takes only finitely many arguments. Thus, each element of the free group on $X$ touches only finitely many elements of $X$. The same is true for any other finitary algebraic theory: rings, lattices, Lie algebras, etc.
So finitary functors are useful. Now the key fact is that a functor from Set to Set is finitary if and only if it preserves filtered colimits. This immediately suggests that filtered colimits are interesting. This fact is also rather useful: for example, it tells us that the class of finitary functors is closed under composition, which wasn't obvious from the definition.
Second motivation
It's not a bad approximation to think of the class of filtered colimits as the complement of the class of finite colimits.
For example, every category with both finite and filtered colimits has all (small) colimits. Similarly, every functor preserving both finite and filtered colimits preserves all colimits. Moreover, the two classes are in some sense disjoint: there are very few colimits that are both filtered and finite.
(One way to make this precise is the following: given a small category A, if you freely adjoin finite colimits to A and then freely adjoin filtered colimits to that, the end result is the same as if you'd freely adjoined all small colimits to A.)
-
The functor, sending a set $X$ to the set of words in $X$, is the free monoid functor? – beroal Jan 28 2011 at 20:01
@beroal - Yes it is – David Roberts Jan 28 2011 at 20:22
@beroal and David: "words" is ambiguous (my fault). In the context I was using it, I meant "words" in the sense of groups. For instance, $x y^{-2} z^3$ is a word-in-the-sense-of-groups in the letters x, y, z. But you can use "word" for any algebraic theory. In particular, you can use it for the theory of monoids, and in that setting, a word in a set X is simply a finite sequence of elements of X. For any algebraic theory, you can say that the free algebra functor sends a set X to the set of words in X (as long as you interpret the meaning of "words" correctly). – Tom Leinster Jan 31 2011 at 14:36
Might I suggest http://ncatlab.org/nlab/show/monadicity+theorem and links there, where various theorems for a functor to be monadic (roughly, a forgetful functor from a category of algebras) are stated. This is essential for an understanding of categorical algebra. The crude monadicity theorem has a requirement on reflexive coequalisers, which could be why the text you are using mentions them so prominently.
Also, filtered colimits commute, in $Set$, with finite limits, so this is a very natural class of colimits to consider when dealing with Lawvere theories. Also, a monadic functor $C \to Set$ is the forgetful functor for (the algebras for) a Lawvere theory if and only if it preserves filtered colimits.
Edit: the link between finitary monads and Lawvere theories is explained here. The categories of finitary monads and Lawvere theories are equivalent.
-
That's too much, I did not ask about categorical algebra in its entirety, that's too much. :) The definition of a monadic functor refers to the Eilenberg–Moore category, I do not know any connection with Lawvere theories. It seems to sit aside of my question. Except that bit about finite limits in Set, which I will comment on the answer by Tom Leinster. – beroal Jan 28 2011 at 20:48
I wasn't trying to heap a huge amount of information on you, beroal, but just give you a taste of some results for which filtered colimits are fairly central. My last sentence is not enough about Lawvere theories? – David Roberts Jan 28 2011 at 22:16
Your edit of your comment is sufficient. – beroal Jan 29 2011 at 8:35
The reason that Grothendieck originally considered filtered colimits and what is now known as the theory of accessible and locally presentable categories (I think named by Makkai and Paré) is as follows:
Let $x$ be an object of a category $C$ such that $hom_C(x,-)$ preserves $\alpha$-filtered colimits. Then, given any morphism $x\to colim F$ where $F:D\to C$ is an $\alpha$-filtered diagram, the morphism $x\to colim F$ factors through at least one $F(d)$ for some $d$ in $D$, and given any two factorizations through $F(d)$ and $F(d')$, there exists a majorant factorization through $F(d'')$ where $d''\geq d'$ and $d''\geq d$ extending the other two factorizations.
I leave this as an exercise (it is, if you will, proof by introspection) (Hint: Use the corresponding statement for sets (which are valid because the statements hold for hom-sets) to perform the necessary manipulations).
This very powerful technique is used, for instance, in the modern generalizations of the small object argument and in situations regarding Bousfield localizations (the notion of accessibility is absolutely essential for results like Jeff Smith's theorem, for instance).
See Clark Barwick's paper for a fairly detailed treatment with regard to its use in homotopy theory. I would also suggest taking a look at Appendices 1 and 2 of Lurie's Higher topos theory as well as the book of Makkai-Paré, and also the standard modern reference on the subject by Adamek and Rosicky.
-
I should note that most of the inspiration for the modern theory appears in SGA4.1.i (which is why I noted Grothendieck's involvement). – Harry Gindi Jan 29 2011 at 0:03
Locally presentable categories were first introduced by Gabriel-Ulmer in '70 and accessible categories were introduced by Lair in '81 under the name "catégories modelables" and baptized accessible by Makkai-Paré in their '89 book. I think it would be fairer to mention Tohoku instead of SGA in connection with Grothendieck, and for that one main motivation was the need of extending Cartan-Eilenberg (derived functors) to sheaves. To achieve this, G. needed to prove that there are enough injectives. It turns out that local presentability of Grothendieck abelian categories is the crucial point. – Theo Buehler Jan 29 2011 at 1:34
But this is just a bit of nitpicking, your point remains valid, of course. – Theo Buehler Jan 29 2011 at 1:34
Interesting! Thanks! – Harry Gindi Jan 29 2011 at 1:51
http://mathoverflow.net/questions/70690/endpoint-strichartz-estimates-for-the-schrodinger-equation/70726 | ## Endpoint Strichartz Estimates for the Schrödinger Equation
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
The non-endpoint Strichartz estimates for the (linear) Schrödinger equation: $$\|e^{i t \Delta/2} u_0 \|_{L^q_t L^r_x(\mathbb{R}\times \mathbb{R}^d)} \lesssim \|u_0\|_{L^2_x(\mathbb{R}^d)}$$ $$2 \leq q,r \leq \infty,\;\frac{2}{q}+\frac{d}{r} = \frac{d}{2},\; (q,r,d) \neq (2,\infty,2),\; q\neq 2$$ are easily obtained using (mainly) the Hardy-Littlewood-Sobolev inequality, the endpoint case $q = 2$ is however much harder (see Keel-Tao for example.)
Playing around with the Fourier transform one sees that estimates for the restriction operator sometimes give estimates similar to Strichartz's. For example, the Tomas-Stein restriction theorem for the paraboloid gives: $$\|e^{i t \Delta/2} u_0\|_{L^{2(d+2)/d}_t L^{2(d+2)/d}_x} \lesssim \|u_0\|_{L^2_x},$$ which, interpolating with the easy bound $$\|e^{i t \Delta/2} u_0\|_{L^{\infty}_t L^{2}_x} \lesssim \|u_0\|_{L^2_x},$$ gives precisely Strichartz's inequality but restricted to the range $$2 \leq r \leq 2\frac{d+2}{d} \leq q \leq \infty.$$
As far as I know, the Tomas-Stein theorem (for the whole paraboloid) gives the restriction estimate $R_S^*(q'\to p')$ for $q' = \bigl(\frac{dp'}{d+2}\bigr)'$ (this $q$ is different from the one above), so I'm guessing that this cannot be strengthened (?).
So my question is: what's the intuition of what goes wrong when trying to prove Strichartz's estimates all the way down to the endpoints using only Fourier restriction theory?
-
## 1 Answer
From my less-than-expert (where's Terry when you need him?) point of view, a possible reason seems to be the following (I wouldn't call it something going wrong or even a difficulty):
The statement of restriction estimates only give you estimates where the left hand side is an isotropic Lebesgue space, in the sense that you get an estimate $L^q_tL^r_x$ with $q = r$. This naturally excludes the end-point, which requires $r > q$.
Why is this? The reason is that the restriction theorems only care about the local geometry of the hypersurface, and not its global geometry. (For example, the versions given in Stein's Harmonic Analysis requires either the hypersurface to have non-vanishing Gaussian curvature for a weaker version, or that the hypersurface to be finite type for a slightly stronger version. Both of these conditions are assumptions on the geometry of the hypersurface locally as a graph over a tangent plane.) Now, on each local piece, you do have something more similar to the classical dispersive estimates with $r > q$, which is derived using the method of oscillatory integrals (see, for example, Chapter IX of Stein's book; the dispersive estimate (15) [which has, morally speaking $q = r = \infty$ but with a weight "in $t$", so actually implies something with $q < \infty$] is used to prove Theorem 1, which is then used to derive the restriction theorem). But once you try to piece together the various "local" estimates to get an estimate on the whole function, you have no guarantee of what the "normal direction" is over the entire surface. (The normal direction, in the case of the application to PDEs, is the direction of the Fourier conjugate of the "time" variable.) So in the context of the restriction theorem, it is most natural to write the theorem using the $q = r$ version, since in the more general context of restriction theorems, there is no guarantee that you would have a globally preferred direction $t$.
(Note that Keel-Tao's contribution is not in picking out that time direction: that Strichartz estimates can be obtained from interpolation of a dispersive inequality and energy conservation is well known, and quite a bit of the non-endpoint cases are already available as intermediate consequences of the proof of restriction theorems. The main contribution is a refined interpolation method to pick out the end-point exponents.)
-
http://physics.stackexchange.com/questions/17893/why-are-anticommutators-needed-in-quantization-of-dirac-fields
# Why are anticommutators needed in quantization of Dirac fields?
Why is the anticommutator actually needed in the canonical quantization of the free Dirac field?
-
Do you mean to ask why we use anticommutators instead of commutators? – David Zaslavsky♦ Dec 6 '11 at 4:29
There is a discussion of this question in Peskin and Schroeder, An Introduction to QFT, section 3.5. – Qmechanic♦ Dec 6 '11 at 10:41
Short answer: in this way Dirac particles follow the Pauli-Fermi exclusion rule – wiso Dec 7 '11 at 12:09
This version "Why are anticommutators needed..." is the most fluent, "What is the necessity" is awkward and foreign, and overly formal. – Ron Maimon Dec 26 '11 at 3:15
## 2 Answers
The most elementary reason is that the Dirac field Hamiltonian is bounded below only when you use anticommutation relations on the creation/annihilation operators instead of commutators. A free quantum field theory with energy unbounded below has no stable vacuum.
It is easiest to demonstrate this in two dimensions, where there are no polarization issues.
### Instructive 2d example
In two dimensions (one space one time), there is a nice dimensionally reduced analog, which is the right-moving (necessarily massless) Majorana-Weyl Fermion (the argument also works with 2d Dirac fermions with two components, but this is the simplest case). This is a single component field $\psi$ which obeys the equation of motion
$$(\partial_t -\partial_x) \psi = 0$$
This simple equation is derived from the 2d Dirac equation using the (real convention, explicitly real) 2d Dirac matrices (0,1;1,0) and (0,1;-1,0), which are $\gamma_0 = \sigma_x$ and $\gamma_1 = i\sigma_y$. They square to 1 and -1 respectively, and they anticommute, so they reproduce the 1+1 dimensional metric tensor. The $\gamma_5$ analog, which I'll call $\Gamma$ to accommodate different dimensions, is diagonal in this explicit representation, and $\Gamma=\sigma_z$.
The two eigenvectors of $\Gamma$ propagate independently by the 2d massless equation of motion
$$\gamma_i \partial_i \psi = 0$$
And further, because the $\gamma$ matrices are real, this is a Majorana representation (most physicists write the dirac equation with an i factor in front of the derivative, so that the Dirac matrices for a Majorana representation are purely imaginary. I'm using a mathematician's convention for this, because I like the equations of motion to be real. Others like the k-space propagator to not have factors of i in the k part. Unfortunately, physicists never settled on a unique sensible convention--- everyone has their own preferred way to write Dirac matrices). So it is sensible in the equation of motion to restrict $\psi$ to be Hermitian, since its Hermitian conjugate obeys the exact same equation.
So that the field has a k decomposition
$$\psi(x) = \int a_k e^{ikx - ikt }dk$$
And the reality condition (Hermiticity) tells you that $a^{\dagger}(-k) = a(k)$ (one should say that the normalization of the expansion in the $a$ operators is not completely conceptually trivial--- the $a$'s are both relativistically and nonrelativistically normalized, because the spinor polarization $\sqrt{\omega}$ factor cancels the mass-shell hyperbola factor, so that the $dk$ integration is not weighted by anything, it's just the normal calculus integral with uniform measure)
An operator with definite frequency, which (Heisenberg picture) evolves in time according to
$$\partial_t O = i\omega O$$
Has the property that it is a raising operator--- acting with this operator adds $\omega$ to the energy. If $\omega$ is negative, $a$ is an annihilation operator. The condition that the vacuum is stable says that all the annihilation operators give 0 when acting on the vacuum state.
But notice that the frequency in the expansion of $\psi$ changes sign at $k=0$. This came from the linearity of the Dirac Hamiltonian in the momenta. It means that the operator $a_k$ acts to raise the energy for $k>0$, but acts to lower the energy for $k<0$. This means that the $k>0$ operators create and the $k<0$ operators annihilate, so that, in view of $a^{\dagger}(-k) = a(k)$, the $k>0$ operators are creation operators while the $k<0$ operators are annihilation operators.
The energy operator counts the number of particles of momentum k, and multiplies by their energy:
$$H = \int_{k>0} k a^{\dagger}(k) a(k) dk$$
And this is manifestly not a local operator; it is defined only by an integral over $k>0$. To make it a local operator, you need to extend the integration to all $k$, but then the negative-$k$ and positive-$k$ contributions have opposite sign, and they need to be equal. To arrange this, you must take anticommutation relations
$$\{ a^{\dagger}(k),a(k')\} = \delta(k-k')$$
And then
$$H = {1\over 2} \int k\, a^{\dagger}(k) a(k)\, dk = \int \psi(x)\, i \partial_x \psi(x)\, dx$$
Note that this looks like it is a perfect derivative, and it would be if $\psi$ weren't an anticommuting quantity. For anticommuting quantities,
$$\partial_x \psi^2 = \psi \partial_x \psi + \partial_x \psi \psi$$
Which is zero, because of the anticommutation.
### Deeper reasons
Although this looks like an accidental property, that the energy was negative without anticommutators, it is not. The deeper reason is explained with Euclidean field theory using a Feynman-Schwinger formalism, but this requires understanding of the Euclidean and path integral versions of anticommuting fields, which requires being comfortable with anticommuting quantities, which requires a motivation. So it is best to learn the shallow reason first.
-
Assuming the question is about why anticommutators rather than commutators, as per David's comment:
If I denote the Dirac particle properties by a general index $\alpha$ - these can include spins, momenta - then if I create a two-particle state, with particle 1 having $\alpha_1$ and particle 2 having $\alpha_2$, the state is given by applying creation operators in the order $$|\alpha_{1} \alpha_{2}\rangle = b_{\alpha_1}^{\dagger}b_{\alpha_2}^{\dagger}|0\rangle$$ If we create them the other way round: $$|\alpha_{2} \alpha_{1}\rangle = b_{\alpha_2}^{\dagger}b_{\alpha_1}^{\dagger}|0\rangle$$ then if the b's obey anticommutation relations, it is easy to see that $$|\alpha_{2} \alpha_{1}\rangle = -|\alpha_1\alpha_2\rangle$$ i.e. the theory naturally reproduces Fermi statistics, as you would want for spin 1/2 Dirac particles.
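A quick numerical illustration of this sign (my own sketch, not part of the answer above): build two fermionic modes as explicit matrices via a standard Jordan-Wigner-type construction, and check both the anticommutation relations and the antisymmetry of the two-particle state. All names here are mine.

```python
# Two fermionic modes as explicit 4x4 matrices, used only to verify the algebra above.
import numpy as np

I2 = np.eye(2)
sz = np.diag([1.0, -1.0])
a  = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-mode annihilator: |1> -> |0>

b1 = np.kron(a, I2)        # annihilates mode 1
b2 = np.kron(sz, a)        # annihilates mode 2 (the sigma_z "string" keeps the signs right)

def anti(x, y):
    return x @ y + y @ x

# {b_i, b_j^dagger} = delta_ij  and  {b_i, b_j} = 0
assert np.allclose(anti(b1, b1.T), np.eye(4))
assert np.allclose(anti(b2, b2.T), np.eye(4))
assert np.allclose(anti(b1, b2.T), 0)
assert np.allclose(anti(b1, b2), 0)

vac = np.array([1.0, 0.0, 0.0, 0.0])       # |00>
state_12 = b1.T @ b2.T @ vac               # b1† b2† |0>
state_21 = b2.T @ b1.T @ vac               # b2† b1† |0>
print(state_12, state_21)                  # same state, opposite overall sign
assert np.allclose(state_12, -state_21)
```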
-
Yes, this is true, but I think the OP wanted a proof of spin-statistics. I gave the proof in the simplest case, 2d majorana Weyl spinors. – Ron Maimon Dec 7 '11 at 6:33
Yes I wasn't sure what level the question was aimed at - I opted for "why anticommutators rather than commutators". You answered the harder question (+1 for a simple proof) – twistor59 Dec 7 '11 at 7:32
Ok, +1 to you too. – Ron Maimon Dec 7 '11 at 9:32
http://unapologetic.wordpress.com/2007/04/20/polynomials-take-2/
# The Unapologetic Mathematician
## Polynomials, take 2
As I said before, if we take the free commutative monoid on $n$ generators, then build the semigroup ring from that, the result is the ring of polynomials in $n$ variables. I hinted at a noncommutative analogue, which today I’ll construct from the other side.
Instead of starting with a set of $n$ generators and getting a monoid, let’s start by building the free abelian group $\mathbb{Z}^n$. This consists of ordered $n$-tuples of integers, and we add them component by component. We can pick out the $n$ generators $x_k=(0,...,0,1,0,...,0)$, where the $1$ shows up in slot $k$. Then every element can be written $a_1x_1+a_2x_2+...+a_nx_n$, where the $a_k$ is entry $k$ in the $n$-tuple form of the element.
So how do we build the tensor product $\mathbb{Z}^n\otimes\mathbb{Z}^n$? First we take all pairs
$(a_1x_1+...+a_nx_n,b_1x_1+...+b_nx_n)$
and use them to generate a free abelian group. Then we impose the linearity relations $(a+a',b)=(a,b)+(a',b)$ and $(a,b+b')=(a,b)+(a,b')$. What does that mean here? Well for one thing we can apply it to the collection of pairs:
$(a_1x_1+...+a_nx_n,b_1x_1+...+b_nx_n)=$
$a_1(x_1,b_1x_1+...+b_nx_n)+...+a_n(x_n,b_1x_1+...+b_nx_n)=$
$a_1b_1(x_1,x_1)+...+a_1b_n(x_1,x_n)+a_2b_1(x_2,x_1)+...a_nb_n(x_n,x_n)$
So we could just as well write the tensor product as the group generated by $x_i\otimes x_j=(x_i,x_j)$.
This same argument goes through as we tensor in more and more copies of $\mathbb{Z}^n$. The tensor power $(\mathbb{Z}^n)^{\otimes d}$ is the free abelian group generated by the elements $x_{i_1}\otimes...\otimes x_{i_d}$, where each index runs from $1$ to $n$.
Now we take all of these tensor powers and throw them together. We get formal linear combinations
$\sum\limits_{d=1}^\infty\sum\limits_{i_1,...,i_d=1}^n a_{i_1,...,i_d}x_{i_1}\otimes...\otimes x_{i_d}$
where all but finitely many of the “coefficients” $a_{i_1,...,i_d}$ are zero. These look an awful lot like polynomials, don’t they? In fact, if we only had a commutative property that $x_i\otimes x_j=x_j\otimes x_i$ then these would be exactly (isomorphic to) the polynomials we came up with last time.
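Incidentally (this is an editorial aside, not part of the original post), one can poke at the difference between the two rings with sympy's noncommutative symbols: in degree $d$ there are $n^d$ distinct words, but fewer commutative monomials.

```python
# Editorial sketch: basis elements of the degree-d piece of the noncommutative
# polynomial ring versus the commutative one.
from functools import reduce
from itertools import product
from operator import mul

from sympy import symbols

n, d = 2, 2
xs = symbols(f'x1:{n + 1}', commutative=False)   # x1, x2, noncommuting
ys = symbols(f'y1:{n + 1}')                      # commuting counterparts

words = {reduce(mul, w) for w in product(xs, repeat=d)}   # x_{i_1} * ... * x_{i_d}
monos = {reduce(mul, w) for w in product(ys, repeat=d)}

print(sorted(map(str, words)))   # ['x1**2', 'x1*x2', 'x2*x1', 'x2**2']  -> n**d = 4 words
print(sorted(map(str, monos)))   # ['y1**2', 'y1*y2', 'y2**2']           -> only 3 monomials
```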
To be explicit about the universal properties, any function from the $n$ generators to the underlying abelian group of a ring $R$ with unit has a unique extension to a linear function from $\mathbb{Z}^n$ to $R$. Then this has a unique extension to a ring homomorphism from $R(\mathbb{Z}^n)$ to $R$. From the other side, there is a unique extension of the original function to a monoid homomorphism from the free monoid $M_n$ to the underlying monoid of $R$. Then this has a unique extension to a ring homomorphism from $\mathbb{Z}[M_n]$ to $R$. Since both $R(\mathbb{Z}^n)$ and $\mathbb{Z}[M_n]$ satisfy this same universal property they must be isomorphic. We commonly write this universal ring as $\mathbb{Z}\{x_1,...,x_n\}$, and call it the ring of noncommutative polynomials in $n$ variables.
Posted by John Armstrong | Ring theory
http://physics.stackexchange.com/questions/6890/the-speed-of-light-also-applies-for-distance-materials
# The speed of light also applies for 'distance' materials?
The question is hard for me to put into one sentence so please try to completely read the example:
If I had a stick that is 1000 km long and I pushed it forward by 1 millimeter in, let's say, 10 ms (I have no idea how long it would actually take), and that push flipped a switch that is 1000 km + 1 millimeter away from me, how long would it take for me to see the light next to the switch turn on? Basically the answer would be a calculation based on the distance the light needs to travel in order to reach me, plus those 10 ms. But I cannot understand why that calculation does not consider anything about the stick being pushed: if the stick were 1 meter long and I performed the same action, the calculation would be the same. That would mean the stick can have any length, and no matter how long it is, it would still turn on that light as fast as a 1 meter stick would. Something in my way of thinking tells me that the stick must be slowed down, based on the speed of light.
If you can provide an answer to the next example I would fully understand this paradox:
If I have a stick that is 1000 km long, and pushing it forward just a little would cause the light in my tower to turn on, and 1000 km further away someone is waiting for my push with the end of the stick resting on the switch of his light tower: what would that person see happening when I push the stick? There are 3 possible answers:
1. He (the second person) will see his light turn on first and then the other.
2. He would see both lights turn on simultaneously. (my answer, considering the speed-of-light delay on the moving stick)
3. He would see the other light turn on first and then his. (very unlikely)
The answer I need is either 1 or 2. I do not understand why it would be 1, but I would love to accept it; in order to accept it I need someone to explain to me why 1 is the answer.
Thank you so much for reading my question and/or leaving a comment or answer !
Diede.
-
– Anixx Mar 28 '11 at 14:46
## 3 Answers
There is no such thing as an incompressible stick. When you push on your end, a compression wave travels down the stick at the speed of sound, which is much slower than the speed of light. The other end of the stick does not move until this compression wave reaches it.
I think the rest of your problems disappear without the existence of an incompressible stick. If you still have any questions remaining, I'm happy to edit this answer.
-
Here's the answer to your first example:
Though it might take you only 10 ms to push your end of the stick, it would still take a much longer time for the effect of that to reach the other end. This is because there are no perfect rigid bodies. An impulse at one end has to travel at the speed of sound in the material, which is always less than the speed of light. So it will take $l/v_s$ + $l/c$ + 10 ms + 10 ms for you to see the light turn on, where $v_s$ is the speed of sound and $l$ is the length of your stick.
I don't fully understand your second example, but I'm guessing the speed of sound limitation should also make things sensible here.
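Plugging rough numbers into this formula (my own arithmetic; the ~5 km/s sound speed is an assumed value for a steel rod, since the asker never specifies the material):

```python
# Back-of-the-envelope numbers for the 1000 km stick (assumed steel rod).
c   = 3.0e8    # m/s, speed of light
v_s = 5.0e3    # m/s, rough longitudinal sound speed in steel (assumption)
l   = 1.0e6    # m, 1000 km

print(f"compression wave reaches the far end after ~{l / v_s:.0f} s "
      f"(~{l / v_s / 60:.1f} minutes)")
print(f"light crosses the same 1000 km in ~{l / c * 1e3:.1f} ms")
```

So the push arrives roughly 200 s later, tens of thousands of times slower than a light signal, which is consistent with David's comment below that choice 3 is what actually happens in the second example.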
-
+1 good answer :) @Diede, let me add that the answer to your second example is in fact choice 3. The reason is the same as the explanation dbrane has given for your first example, namely that your push travels down the stick at the speed of sound, which is slower than the light from your tower. So the light signal reaches the other person before the push on the stick does, and he sees your light before his own light turns on. – David Zaslavsky♦ Mar 14 '11 at 17:34
Thank you sooooo much for these answers, this enlightened me so much! Thanks! – user2547 Mar 14 '11 at 17:59
Does "ACOUSTIMAGNETOELECTRICISM!" by William J. Beaty answer your question?
-
That's a fun link! I don't agree with precisely every detail he says, but the passion and novelty he brings to the question is admirable. – Andrew Mar 21 '11 at 1:46
http://math.stackexchange.com/questions/tagged/probability+measure-theory
# Tagged Questions
1answer
21 views
Conditional expectation is square-integrable
I am given the following definition: Let $(G_i:i\in I )$ be a countable family of disjoint events, whose union is the probability space $\Omega$. Let the $\sigma$-algebra generated by these events ...
1answer
24 views
Showing that $\mathbb{P}[X\geq a]\leq \exp[-ta]\mathbb{E}[\exp[tX]]$
The problem is to show that $\mathbb{P}[X\geq a]\leq \exp[-ta]\mathbb{E}(\exp[tX])$ given $\exp(tX)<\infty$ for $t\in \mathbb{R}$ where $X$ is a random variable. Then to show that ...
3answers
64 views
$E[X]$ finite iff $\sum\limits_{n} P(X>an)$ converges
Show that: \sum\limits_{n \in N } P(X>an) < \infty\ \text{for some}\ a > 0 \Rightarrow E[X] < \infty \Rightarrow \sum\limits_{n \in N } P(X>an) < \infty\ \text{for every}\ a > ...
4answers
90 views
What exactly is a probability measure in simple words?
Can someone explain probability measure in simple words? This term has been haunting me all my life. Today I came across Kullback-Leibler divergence. The KL divergence between probability measure P ...
1answer
36 views
Proof of conditional expectation
Suppose that we have three integrable random variables $x,y,z$ on a probability space $(X,\Sigma, \mathbb{P})$ such that $x$ and $z$ are independent, and $y$ and $z$ are also independent. Show that ...
0answers
48 views
Conditional expectation as a random variable
We have three random variables $x,y,z$. Is the condition "$y$ and $z$ are independent" enough to guarantee that "$\mathbb{E}(x\,|\,y)$ and $z$ are independent"? Would anyone give me a brief proof or ...
1answer
34 views
About independence and conditional expectation
Can anyone give me a little hint on a the following question? Many thanks!! The question is: If we know that $x$ and $z$ are independent, and $y$ and $z$ are independent, is it true that ...
0answers
31 views
Tail $\sigma$-algebra
With a simple symmetric random walk such that $S_n=\sum\limits_{k=1}^n X_k$ and $\mathbb{P}[X_i=\pm1]=1/2$ with $S_0=0$ like in this post: Tail events and exchangeable events where Did answered some ...
2answers
51 views
About conditional expectation
Can someone give me some hints on the following problem? Many thanks!! Let $x$, $y$, and $z$ be integrable random variables on a probability space $(X,\Sigma, \mathbb{P})$. Show that if both $x$ and ...
1answer
18 views
If $P$ is a statistically complete set of distributions, the only sufficient subfield is the trivial one
In this thread i solved a claim stated without proof by Bahadur that if $P=\left\{p\right\}$ is the set of all probability measures on the measurable space $\left(\Omega,\mathcal{A}\right)$, ...
2answers
36 views
densities being absolutely continuous wrt Lebesgue measure
I'm reading an article with an assumption similar to: "The density $f(.)$ exists and is absolutely continuous with respect to Lebesgue measure". I don't understand this assumption because $f$ is not ...
1answer
56 views
Probability of two events are indepedent
Given a probability space $(\Omega, \mathscr {B}, P)$, then $\sigma : \mathscr{B}\times \mathscr{B} \to [0,1]^2$ is defined as, for any $A, B \in \mathscr{B}$ $$(A,B) \mapsto (P(A),P(B))$$ Now take ...
1answer
55 views
Counterexamples for Borel-Cantelli
Our teacher mentioned constructing two counterexamples for Borel-Cantelli in the following ways. (a) Construct an example with $\sum_{i=1}^{\infty}\mathbb P(A_i)=\infty$ where \$\mathbb ...
0answers
26 views
Exponential Order Statistics Independence
Are the order statistics from the $n$-sample with $X_i\sim \text{Exp}(\lambda)$ (taking, without loss of generality, $\lambda=1$) $\Delta_{(k)}X=X_{(k)}-X_{(k-1)}$ independent? Can show that for an ...
2answers
60 views
Conditional Expectation of Exponential Order Statistic $\text{E}(X_{(2)} \mid X_{(1)}=r_1)$
Having already worked out the distributions of $\Delta_{(2)}X=X_{(2)}-X_{(1)}\sim\text{Exp}(\lambda)$ and of $\Delta_{(1)}X=X_{(1)}\sim\text{Exp}(2\lambda)$ where $X_{(i)}$ are the $i$th order ...
1answer
79 views
Exponential Distribution Function
If $X\sim \text{Exp}(\lambda)$ then for all positive $a$ and $b$, $P(X>a+b\mid X>a)=P(X>b).$ So given independent random variables $X \sim \text{Exp}(\lambda)$, $Y \sim\text{Exp}(\mu)$ we would ...
0answers
28 views
Gap distribution independence proof
I have a question bout the proof of the independence of gap RVs. Given the independent exponentially distributed random variables $\xi_1$, $\xi_2$ ~ $\text{Exp}(\lambda)$, and a corresponding order ...
0answers
28 views
Cylindrical sigma algebra answers countable questions only.
I got a missing link in some in the following (standard) textbook question: Show that the cylindrical sigma algebra $\mathcal{F}_T$ on $\mathbb{R}^T$ (equals \$\bigotimes_{t\in ...
1answer
42 views
How can a $\sigma$-algebra be “treated” or computed? Example
My question is: I have a random variable $X:\Omega \rightarrow \mathbb{R}$, the $\sigma$-algebra generated by $X$ is: $\sigma(X) := \{X^{-1}(B), B\in \mathcal{B}(\mathbb{R})\}$. But, imagine now that ...
1answer
64 views
Motivation for Measure Theory example
I was taking a look at this book while trying to pick a book for learning some rigorous probability theory. I have been totally stumped by the motivating eg. on the first page. Specifically, I am ...
1answer
63 views
Is Cesaro convergence still weaker in measure?
I've encountered a question I couldn't answer, and I would appreciate any help: Is it true that $f_n \xrightarrow{m}0$ $\Rightarrow$ $\frac{1}{n} \sum_{k=1}^{n}f_k \xrightarrow{m}0$? Where ...
1answer
18 views
If a function is $L^p$ small, is its expectation with respect to a $\sigma$-algebra $L^p$ small?
This came up in my homework, but isn't strictly my homework. I've just gotten very curious, and I keep going in circles trying to prove it. Consider a probability measure space $(X,\Sigma,\mu)$ and ...
1answer
67 views
Random Walk on Z
Let $S_n$ be the symmetric random walk on $\mathbb{Z}$. How do i calculate $P(\limsup_{n\rightarrow\infty} S_n=\infty)$? I already know that the probability is 1 but I don't really know how to start? ...
1answer
28 views
Scheffe's lemma with dominated convergence
Suppose that $f_n, f$ are non-negative measurable functions with $\mu(f_n)$ and $\mu (f)$ finite and such that $f_n\to f \text{ a.e.}$, Then $\mu(|f_n-f|)\to 0 \iff \mu(f_n)\to\mu(f)$. For ...
1answer
31 views
$\mathbb{P}(\{X>a\}) = 1 \Rightarrow \mathbb{E}(X)>a$
The implication $$\mathbb{P}(\{X>a\}) = 1 \Rightarrow \mathbb{E}(X)>a$$ seems obviously true to me, but I can't nail a half-way rigorous proof of it. (Coming up with a counterexample seems to ...
2answers
98 views
Proof on Fubini's Theorem
The Fubini's Theorem states that for any two $\sigma$-finite measure spaces $(S,\mathcal{S},\mu)$ and $(T,\mathcal{T},\upsilon)$, there exists a unique measure \$(\mu \otimes \upsilon)(A\times B)=\mu A ...
1answer
30 views
The conditional expectation of a random variable
The conditional expectation of a random variable $\xi$ given $B$ is defined as its expectation with respect to the conditional probability measure given $B$: \$\Bbb{E}[\xi|B]=\int\xi(\omega) ...
1answer
61 views
Stopping time computations via martingales
I'm studying probability, and having trouble with the following problem (from this exam). Suppose $X_j$ are i.i.d. random variables with $P(X_i=1) = P(X_i = -1) = 1/2$. Let $S_0=0$ and \$S_n = X_1 + ...
1answer
41 views
Find the distribution of a random variable
Let $\Omega=[0,1]$, $\mathcal{F}=\mathcal{B} \cap [0,1]$, and $P$ be the Lebesgue measure restricted to $[0,1]$. Let $\Phi_{\mu,\sigma^2}(x)=\mathcal{N}_{\mu,\sigma^2}((-\infty,x])$. Then it is clear ...
0answers
30 views
Total variation norm against uniform metric in $\mathbb{R}^n$
Let's consider probability functions $G(\mathbf{x}, a)$ and $F(\mathbf{x}, a)$ of 2 continuous random vectors. $a$ is a parameter. Let the convergence ...
2answers
50 views
Definition of atomic $\sigma$-field.
Reading an article in probability theory I faced with phrase atomic $\sigma$-field. I tried to search for the definition, but google doesn't give any meaningful result. As a result I'm looking for the ...
1answer
42 views
Equivalence of measures and $L^1$ functions
Suppose we have two probability measures $\mu$ and $\delta$ on $(X, \mathcal{B})$ such that $\delta <<\mu << \delta$. How can I prove that $f \in L^1(X,\mathcal{B}, \mu)$ iff \$f \in ...
1answer
54 views
Total variation norm in $\mathbb{R}^n$ [duplicate]
Let's consider total variation norm ρ( , ) on $(\mathbb{R}^n, \mathcal{B}(\mathbb{R}^n)),$ where $\mathcal{B}(\mathbb{R}^n)$ is a Borel $\sigma$-algebra. Is it true that for probabiblity measures $P$ ...
1answer
82 views
Measure on a separable Hilbert space
Let $H$ be a real separable Hilbert space. Is it true that there exist a probability space $(\Omega, \mu)$ and a measurable function $\pi\colon \Omega \to H$ such that for any $h \in H$ we have ...
2answers
52 views
Show $\displaystyle \mathbb{E}\left[\frac{X^a}{Y^a}\right] \geq \frac{\mathbb{E}[X^a]}{\mathbb{E}[Y^a]}$
Given independent RVs $X$ and $Y$, with $Y>0$, $\mathbb{E}[Y^a]< \infty$ and $\mathbb{E}[X^a]< \infty$ for some real $a\geq 0$. I need to show that \$\displaystyle ...
0answers
28 views
Question about probability measures on the real line [closed]
http://www2.imperial.ac.uk/~boz/M34P6/P11_2.pdf Dear comrades. I am struggling with Ex 1.4(i) on here. I think that I have proved that the measures $\mu$, $\nu$, $\lambda$ are all equivalent, in the ...
1answer
54 views
sigma algebra problem [duplicate]
Let $f$ be any function from $\Omega$ to $X$ and $\mathcal{C}$ an arbitrary nonempty collection of subsets of $X$. Show $$\sigma(f^{-1}(\mathcal{C}))=f^{-1}(\sigma(\mathcal{C}))$$ I already know how ...
2answers
30 views
What is meant by closed under complementation?
I was going through the probability and measure chapter of testing of hypothesis book by L.H. Lehman, where I found this "A class of sets that contains Z and is closed under complementation and ...
4answers
209 views
What is the probability you guess the number I am thinking of?
Probability is defined as the likely number of outcomes over all total outcomes. In this case, 1 over infinity; which would equate to zero. But, there is a chance you can guess the number I am ...
1answer
29 views
Almost surely constant [duplicate]
Let $x$ and $y$ be two independent random variables on a probability space $(X,\Sigma,\mathbb{P})$ such that $x+y$ is almost surely constant. Show that both $x$ and $y$ must be almost surely constant. ...
2answers
97 views
Almost sure convergence of random variable
I see a lot of examples of limit theorems in terms of functions, and sequences of functions. But I think the transition from the general measure space to the probability space ...
1answer
40 views
Double Jumps of a Poisson Process
If $N_t$ be a Poisson Process with rate $\lambda>0$, surely for any prescribed $t>0$, the probability that $N_t$ "jumps (at least) twice" at $t$ is zero, i.e. ...
1answer
59 views
Measurable Maps
Let $X$, $M$ be two metric spaces and $\nu:X\rightarrow \mathcal{M}_1(M)$, $x\mapsto \nu_x$ a map, where $\mathcal{M}_1(M)$ is the space of all probabilities over $M$ with the Borel $\sigma$-algebra, ...
1answer
88 views
Probability Curiosity
Let $X_{1},X_{2},\ldots$ be i.i.d. random variables with $E\left[X_{1}\right]=0$ and $0<Var\left(X_{1}\right)=\sigma^{2}<\infty.$. Let $S_{n}=\sum_{j=1}^{n}X_{n}$. Consider now ...
1answer
168 views
Show that if $f_n \leq g$ for all $n$ and $g$ is integrable, then $\{f_n\}$ is uniformly integrable
A sequence {$f_n$} of measurable functions is called uniformly integrable if $$\lim_{M \to \infty} \sup_{n} \int_{[|f| >= M]} |f_n|\ \mathrm{d}\mu = 0$$ Show that if $|f_n| \leq g$ for all $n$ ...
1answer
77 views
Does the sum of sequence of measurable functions converge outside a set of measure zero?
Let {$f_n$} be a sequence of measurable functions defined on a probability space, such that: $$P(f_n = 1/n) = 1 - P(f_n = 0) = 1/(n^2)$$ Does $\sum_{n=1}^{\infty}{f_n}$ converge outside a set of ...
0answers
43 views
If one finite measure is less than another on a field, then the same holds for the generated sigma-field??
Suppose $μ_1$ and $μ_2$ are finite measures on $F = \sigma(F_0)$, where $F_0$ is a field. If $μ_1 \leq μ_2$ on $F_0$, then show that $μ_1 \leq μ_2$ on $F$. This is an exam review question and the ...
1answer
45 views
Behavior of the tail of a cdf.
If X is an integrable real random variable it is true that $$\lim_{x \to \infty} x P(X > x) = 0 \, ?$$ I know it is true for the $L^2$ case since it can be derived easily from Chebyshev ...
0answers
54 views
Is there a canonical probability measure on smooth curves?
For continuous curves, we have Brownian motion giving the most natural probability measure. However, the sample paths of Brownian motion are almost surely terribly behaved (not of bounded variation, ...
1answer
89 views
Uniform measure on the rationals between 0 and 1
I am trying to think of a probability measure on the set of rationals between 0 and 1 ($X:=\mathbb{Q}\cap[0,1]$). I want to achieve something like a uniform measure, i.e. every number should have the ...
http://mathoverflow.net/questions/34645/euler-characteristic-of-general-linear-group

## Euler Characteristic of General Linear Group
(Edited) How can I find the Euler–Poincaré index with compact support of the general linear group over $\mathbb{R}$? For example, let $A$ be a locally closed subset of a manifold $X$; then $\chi_c(A)=\chi(R\Gamma(X;\mathbb{R}_A))=\chi(R\Gamma_c(A;\mathbb{R}_A))$,
which, in the smooth case, is the same as the alternating sum of the Betti numbers of de Rham cohomology with compact support. Thank you.
-
What do you mean by the Euler characteristic with compact support? Do you mean the Euler-Poincaré characteristic of the compactly supported de Rham complex? – José Figueroa-O'Farrill Aug 5 2010 at 16:13
I'm guessing that's what is meant. In which case, use Poincare duality to rewrite it as the ordinary Euler characteristic up to sign. This can be computed in the usual fashion... At this point it might help to tell us what part of the story is familiar to you (I'm addressing Karl). – Donu Arapura Aug 5 2010 at 16:28
To expand on what Donu said: please provide some motivation (why do you want to know?) and background (what do you already know? what have you already tried?). – Andrew Stacey Aug 5 2010 at 17:16
## 2 Answers
I'm going to assume that "Euler characteristic with compact support" means
"(Euler characteristic of the one point compactification) - 1".
Let me assume that n>1.
The space in question, namely $GL ( n,R) _ +$ , has a circle action given by any $S ^ 1$ subgroup of $GL(n,R)$. This action is free on $GL(n,R)$, and fixes the point at infinity. $S ^ 1$-orbits contribute zero to the euler characteristic, and the point at infinity contributes 1. So $\chi ( GL (n,R) _ +) = 1$, and the Euler characteristic with compact support is zero.
To make the above argument precise, you need to pick a cell decomposition of $( GL ( n,R)/S ^ 1 ) _ +$, and use it to construct a cell decomposition of $GL ( n,R)$. Above every n-cell of the quotient space, you put a pair of cells of $GL ( n,R) _ +$, one of dimension n and one of dimension n+1 (except for the 0-cell corresponding to the point at infinity). This might fail to be a CW-complex, but you can nevertheless compute the Euler characteristic as the alternating sum of the numbers of cells in given dimensions.
-
For a complete answer, you should mention that $GL(0,\mathbb R)$ consists of a single point, or is empty, depending on the convention, and that $\chi(GL(1,\mathbb R)) = -2$. Note that the Euler characteristic you are using is the correct one --- it's additive on disjoint unions --- but is not a homotopy invariant. – Theo Johnson-Freyd Aug 6 2010 at 21:24
Thank you all. My idea was to use Poincaré duality for $n>1$, and then a homotopy equivalence of $GL(n)$ and $SL(n)$. Now, since the Euler characteristic of the compact Lie group $SL(n)$ is zero for $n>1$, we will have $\chi_c(GL(n))=0$, which coincides with the above answers. – Karl Aug 7 2010 at 10:33
Karl: I realize now that you had thought it through and just wanted confirmation. Sorry if my comment seemed a little blunt. I also got zero using the same process. I guess you meant to write $SO(n)$ rather than $SL(n)$. – Donu Arapura Aug 7 2010 at 12:53
The group $GL(n,\mathbb{R})$ is homotopic to $O(n)$ so these two spaces have the same Euler characteristic. For $n\geq 2$, $O(n)$ is a compact smooth manifold of positive dimension with trivial tangent bundle. Hence its Euler class is trivial, and so is its Euler characteristic.
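A small sanity check, added here and not taken from either answer: $O(2)$ is diffeomorphic to two disjoint circles, so $\chi(O(2))=0$, in line with the trivial-tangent-bundle argument; and in the excluded case $n=1$, $GL(1,\mathbb{R})=\mathbb{R}\setminus\{0\}$ consists of two open intervals, each with $\chi_c=-1$, so $\chi_c(GL(1,\mathbb{R}))=-2$, matching the value quoted in the comments on the first answer.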
-
http://unapologetic.wordpress.com/2011/10/08/the-hodge-star-in-coordinates/?like=1&_wpnonce=81a70c627a

# The Unapologetic Mathematician
## The Hodge Star in Coordinates
It will be useful to be able to write down the Hodge star in a local coordinate system. So let’s say that we’re in an oriented coordinate patch $(U,x)$ of an oriented Riemannian manifold $M$, which means that we have a canonical volume form that locally looks like
$\displaystyle\omega=\sqrt{\lvert g_{ij}\rvert}dx^1\wedge\dots\wedge dx^n$
Now, we know that any $k$-form on $U$ can be written out as a sum of functions times $k$-fold wedges:
$\displaystyle\eta=\sum\limits_{1\leq i_1<\dots<i_k\leq n}\eta_{i_1\dots i_k}dx^{i_1}\wedge\dots\wedge dx^{i_k}$
Since the star operation is linear, we just need to figure out what its value is on the $k$-fold wedges. And for these the key condition is that for every $k$-form $\zeta$ we have
$\displaystyle\zeta\wedge*(dx^{i_1}\wedge\dots\wedge dx^{i_k})=\langle\zeta,dx^{i_1}\wedge\dots\wedge dx^{i_k}\rangle\omega$
Since both sides of this condition are linear in $\zeta$, we also only need to consider values of $\zeta$ which are $k$-fold wedges. If $\zeta$ is not the same wedge as $\eta$, then the inner product is zero, while if $\zeta=\eta$ then
$\displaystyle\begin{aligned}(dx^{i_1}\wedge\dots\wedge dx^{i_k})\wedge*(dx^{i_1}\wedge\dots\wedge dx^{i_k})&=\langle dx^{i_1}\wedge\dots\wedge dx^{i_k},dx^{i_1}\wedge\dots\wedge dx^{i_k}\rangle\omega\\&=\det\left(\langle dx^{i_j},dx^{i_k}\rangle\right)\omega\\&=\det\left(\delta^{jk}\right)\omega\\&=\sqrt{\lvert g_{ij}\rvert}dx^1\wedge\dots\wedge dx^n\end{aligned}$
And so $*(dx^{i_1}\wedge\dots\wedge dx^{i_k})$ must be $\pm\sqrt{\lvert g_{ij}\rvert}$ times the $n-k$-fold wedge made up of all the $dx^i$ that do not show up in $\eta$. The positive or negative sign is decided by which order gives us an even permutation of all the $dx^i$ on the left-hand side of the above equation.
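As a quick worked example (my addition, not part of the original post): take $M=\mathbb{R}^3$ with the Euclidean metric and the standard orientation, so that $\sqrt{\lvert g_{ij}\rvert}=1$ and $\omega=dx^1\wedge dx^2\wedge dx^3$. The recipe above then gives

$\displaystyle *dx^1=dx^2\wedge dx^3,\qquad *dx^2=dx^3\wedge dx^1,\qquad *dx^3=dx^1\wedge dx^2$

and, for instance, $*(dx^1\wedge dx^3)=-dx^2$, the minus sign appearing because $dx^1\wedge dx^3\wedge dx^2$ is an odd permutation of $dx^1\wedge dx^2\wedge dx^3$.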
Posted by John Armstrong | Differential Geometry, Geometry
http://math.stackexchange.com/questions/9868/computing-degree-of-map

# Computing degree of map
Suppose given two manifolds $X$ and $Y$, both orientable of dimension $n$, and a map $f:X\to Y$. Is there a relationship between the degree of $f$ calculated with respect to homology (by which I mean the induced map on the top homology groups) and the degree of $f$ calculated with respect to cohomology (by which I mean the induced map on the top cohomology groups)? Thanks!
-
They are the same. – Matt E Nov 11 '10 at 17:52
Yes, they are equal. I'm sure this follows from the universal coefficient theorem. – Robin Chapman Nov 11 '10 at 18:42
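A hedged sketch of why the two degrees coincide (my own addition, assuming $X$ and $Y$ are closed, connected and oriented): by naturality of the Kronecker pairing, $\langle f^*\alpha,[X]\rangle=\langle\alpha,f_*[X]\rangle$ for every $\alpha\in H^n(Y;\mathbb{Z})$. If $f_*[X]=d\,[Y]$ defines the homological degree $d$ and $\alpha$ is the generator dual to $[Y]$, this reads $\langle f^*\alpha,[X]\rangle=d$; since pairing against $[X]$ identifies $H^n(X;\mathbb{Z})\cong\mathbb{Z}$ (the Ext term of the universal coefficient theorem vanishes for a closed oriented manifold), $f^*$ is multiplication by the same integer $d$.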
I assume this requires integral (co)homology, otherwise you only get a $G$-valued degree. But then what happens with non-orientable manifolds, e.g. $S^2 \rightarrow \mathbb{RP}^2$? – Aaron Mazel-Gee Nov 11 '10 at 20:43
http://stats.stackexchange.com/questions/tagged/variance?sort=unanswered&pagesize=50

# Tagged Questions
The expected squared deviation of a random variable from its mean; or, the average squared deviation of data about their mean.
0answers
489 views
### Variance on the sum of predicted values from a mixed effect model on a timeseries
I have a mixed effect model (in fact a generalized additive mixed model) that gives me predictions for a timeseries. To counter the autocorrelation, I use a corCAR1 model, given the fact I have ...
0answers
1k views
### How to estimate variance components with lmer for models with random effects and compare them with lme results
I performed an experiment where I raised different families coming from two different source populations, where each family was split up into a different treatments. After the experiment I measured ...
0answers
133 views
### Variance of the Kaplan-Meier estimate for dependent observations
Can someone help me find a way to estimate the variance of the Kaplan-Meier estimate with dependent observations? Specifically, I have failure time data from patients with several different ...
0answers
127 views
### How to test equality of variances with circular data
I am interested in comparing the amount of variability within 8 different samples (each from a different population). I am aware that this can be done by several methods with ratio data: F-test ...
0answers
224 views
### How does pooling and resampling affect variance of sample mean?
Suppose I have $N$ independent random variables $X_n$. I draw a sample of predetermined size $K_n$ from each of them. Denote the average of each sample $\bar{\hat{X}}_n$, and the total number of ...
0answers
63 views
### k-subset with maximal variance
I have two versions of the same question: Given a list of numbers (with possible duplicates), how to find a k-subset (with possible duplicates) that maximize the variance? is there a more efficient ...
0answers
168 views
### How does number of observations supporting alternate hypothesis on a test of a variance have to scale so that null is rejected?
Informal explanation: In the course of my research I've run into the following problem: I am observing a machine that outputs random numbers. Most (if not all) of these random numbers come from the ...
0answers
50 views
### Bounds for the population variance?
Suppose we have i.i.d. samples $x_1$, $\ldots$, $x_n$ for a (potentially non-normal) random variable $X$ with finite moments. We can use these samples to construct an unbiased estimates of the ...
0answers
501 views
### How do I interpret the covariance matrix from a curve fit?
I'm not too great at statistics, so apologies if this is a simplistic question. I am fitting a curve to some data, and sometimes my data best fits a negative exponential in the form $a * e^{(-b * x)} ...
0answers
184 views
### Link between variance and pairwise distances within a variable
Please, prove that if we have two variables (equal sample size) $X$ and $Y$ and the variance in $X$ is greater than in $Y$, then the sum of squared differences (i.e., squared euclidean distances) ...
0answers
39 views
### Can the OLS residual variance suggest a polynomial relationship?
I am trying to figure out whether from the following graph of the OLS residuals that the linear relationship does not hold, and that probably a cubic relationship would do better? Since both in the ...
0answers
32 views
### Bound on the variance for [0,1] RVs as a function of the mean
I noticed that if $X$ is a RV in $[0,1]$ then $V[X] \leq E[X](1-E[X])$, which also implies that the bernoulli distribution maximizes variance (one of many solutions). For interest's sake consider ...
0answers
54 views
### Finding correlation coefficient
if I have A and B with the following known variables: with $E[A]$, $E[B]$ , $\sigma_{A}$ , $\sigma_B$ and correlation coefficient: $\rho_{AB}$ (assign numbers if you like) Say: $C=0.6A+0.4B$ Then ...
0answers
50 views
### Combined variance following multiple imputation with survival model
I have created 5 imputations of a dataset and have fit a survival model to them all in R. I want to combine the estimates of the coefficients and the standard errors of the coefficients. To do this I ...
0answers
57 views
### How to assess stability of daily time series in sentiment analysis?
I developed a measure of "sentiment" and I have time based data and used the measure to derive a daily sentiment time series. I am looking for some way to establish reliability or maybe stability. For ...
0answers
41 views
### An investment and variance question for monthly payments
I have a question regarding a financial/statistical problem. How do you calculate the variance of the outcome of an investment in a stock, when the investment is so called time diversified, i.e. ...
0answers
72 views
### Can we approximate the distribution of S?
I want to understand how the sampling distribution of the whole covariance matrix behaves for large $n$. I am trying to use the delta method and multivariate CLT. I am trying to show that when the ...
0answers
106 views
### Do we need an unbiased estimator of the variance?
"Although it is nice to have an unbiased estimator of the variance, we do not really need it to understand the relation between our independent variable and our dependent variable. Why?" I think I ...
0answers
126 views
### How to fix the constant variance assumption?
We have a project where we have to find the best model using a large set of data. In our current model there are 10 variables, some quantitative, and a few that are qualitative. When we first do ...
0answers
114 views
### What prior distributions could/should be used for the variance in a hierarchical bayesisan model when the mean variance is of interest?
In his widely cited paper Prior distributions for variance parameters in hierarchical models (916 citation so far on Google Scholar) Gelman proposes that good non-informative prior distributions for ...
0answers
177 views
### What is point-wise variance?
While reading The Elements of Statistical Learning, I've encountered the term "point-wise variance" several times. While I have a vague idea of what it likely means, I'd be grateful to know How is ...
0answers
109 views
### Correct variance for minimum detectable difference
I have a question regarding variance, paired testing and minimum detectable difference (MDD). Paired samples: $$MDD (δ) = \sqrt{ \frac{σ^2}{n} (t_{(α/2,n-1)} + t_{(1-β, n-1)})}$$ I have a set of ...
0answers
255 views
### R limma voom function mean-variance trend
I am using the limma package in R to do some analysis on a count data matrix. I use the voom function and that normally creates a plot with the mean variance trend line in it. Now I created also a ...
0answers
163 views
### Univariate- Variance preserving, order reversing transformation
This is a soft question: How can the order of a sample univariate data be reversed while preserving the variance?
0answers
89 views
### Generalized Linear Models and Curse of Dimensionality
I was wondering what happens to bias and variance of GLM estimates as dimensionality approaches the number of training data points? Specifically in Linear Regression and Poisson Regression? I know ...
0answers
174 views
### Does pooled variance correct for/protect from unequal variance when calculating effect size?
This may be a lame question, but I got stuck and can't get my head around it. I am running a gene expression analysis, comparing ~10000 genes between two groups, n=6 samples per group. My pipeline ...
0answers
161 views
### Bootstrap variance of squared sample mean
The following is question 8 of chapter 8 in Wasserman's All of Statistics: Let $T_n = \overline{X}_n^2$, $\mu = \mathbb{E}(X_1)$, $\alpha_k = \int|x - \mu|^kdF(x)$, and $\hat{\alpha}_k = ...
0answers
134 views
### Cramer-Rao type bound for Information Gain
I am interested in the Bayes risk of some distribution $\pi$ $$r(\pi) = \mathbb{E}_{\pi(x)}[ \mathbb{E}_{\Pr(y|d,x)}[L(x,\hat x(y|d))]],$$ where $L$ is some loss function and $\hat x$ is the ...
0answers
44 views
### Looking for formalization of the idea of low-variance predictors
In baseball, Bill James suggested using, to predict next season's winning percentage runs_scored^2/(runs_scored^2 + runs_allowed^2), rather than this season's ...
0answers
220 views
### How to create a ratio distribution from samples?
Ok, let's try again. The context of the original question is given below, but perhaps it helps to focus on the statistical aspect to get an answer. What I got is a number of measurements in unit t. ...
0answers
383 views
### How can this series pass the structural change test?
I have this series: ...
0answers
95 views
### How to show that the variances of 2 sets of 3D points are different?
I have 2 point clouds (3D points). I can visually tell that the spread in one cloud is much larger than the other, and I have also plotted their error ellipsoids. Now, I'm looking for a statistical ...
0answers
9 views
### Are the sequential sum of squares appropriate when treatments must be applied in sequence?
I'm working with some modeled future stream flow data that was created in two steps. First, future precipitation predictions (at certain points in a watershed) were created by running historical ...
0answers
22 views
### What coefficient could I use to calculate the relative difficulty of a test in relation to others using only mean and population standard deviation?
I have a series of tests, all of them of different difficulty, and from each of them I get an average score and a population standard deviation; e.g: ...
0answers
36 views
### Interpretation of Variance and Covariance
I am totally new to statistics and I have to create a variance and covariance analysis. I am using SPSS for this. I have created the covariance table: Hopefully it is the right thing. The first 3 ...
0answers
49 views
### Mean and variance of call center data
I have a fairly involved homework question, I was wondering if I could get some help. There are two types of phone calls arriving at a switch, long-duration and short-duration. Each day the number ...
0answers
30 views
### Measuring relative variability for variables with different scales
I want to compare the relative variability of several sleep-related variables in the same group of subjects. For example, is there more variability in time spent in REM sleep compared to the number of ...
0answers
35 views
### Variance associated with factors in GLS (nlme)
First time posting here, so thank you ahead of time for your help. I'd like to estimate the variances associated with two factors in a relatively simple, but unbalanced GLS model, and I am unsure how ...
0answers
68 views
### Variance of powers of a random variable
Is it possible to derive a formula for variance of powers of a random variable in terms of expected value and variance of X? $$\operatorname{var}(X^n)= \,?$$ and $$E(X^n)=\,?$$
0answers
129 views
### Finding the UMVUE of the variance of a gaussian with mean zero
Given $Z_1, ..., Z_n, \sim\mathcal{N}(0, θ^2), θ>0$. Define $X_i=|Z_i|$ and consider estimation of $\theta$ and $θ^2$ on the basis of the random sample $X_1,...X_n$. Find the uniformly minimum ...
0answers
87 views
### error variance calculations for reliability analysis of a composite metric in HLM
I am trying to determine how to obtain within-group variance for a composite measure based on a set of (weighted) proportions. I have 50 groups being compared on 8 proportion measures, with 4 ...
0answers
40 views
### Why do different estimators for stock volatility exist? (Realized Variance, RAV, etc)
I am very confused about why different volatility estimators (RV, RAV, BPV, etc) exist. If the goal is to find the best estimator for stock volatility, and volatility is latent, how do I know which ...
0answers
69 views
### Compound poisson process: Average size of claim will exceed £110
"An insurance company receives claims at a rate of two per week, the size of a claim in pounds having mean 100 and standard deviation 50. Assuming the compound Poisson process as a model, and using ...
0answers
87 views
### Confidence Interval in Monte Carlo integration
I want to integrate $\int_{\mathbb{R}_+}\mathbb{1}_A(x) d\mathbb{P}(x)$, in other words I am interested in $\mathbb{P}(A)$. I did this numerically with two Monte Carlo steps. First, I drew, say a ...
0answers
43 views
### How to compute variance of a continuous time sequence?
I am observing two continuous time-series where at every instant in time I may observe a unary event. That is, for each sequence, say $S_1$, I have a data set comprised of $S_1 = (t_0, t_1, ..., t_m)$ ...
0answers
56 views
### Different Mean Square partitions in an unbalanced bifactorial ANOVA (with random factor) between R and Statistica
I am trying to extract variance components for selection and chance in a bifactorial design with Generation as a fixed factor and Replicate as a random term, for early fecundity. Since I am using ...
0answers
164 views
### When are the asymptotic variance of OLS and 2SLS equal?
Assume the model $y = X\beta + u$, where $W$ is an $n\times l$ so-called matrix of instruments. The following assumptions hold. There is a law of large numbers (LLN) for 1.,2.,3. and ...
0answers
149 views
### Determining Optimal Number of Cluster in Hierarchical Clustering in Consideration of Variance of Data
I'm applying a Hierarchical Agglomerative Clustering (HAC) for grouping my data and I need to determine the number of the cluster automatically. To determine the optimal number of cluster, I obtain ...
0answers
110 views
### Variance decomposition in linear regression model
Consider the linear model $y = \mathbf{X}\mathbf{\beta} + \epsilon$. The residual variance-covariance matrix is given by $\text{Var}(\epsilon)$. Greene's textbook* states that: Var(\epsilon) = ...
0answers
36 views
### Proportion of variance shared pairwise
I have several time series, structured into a matrix: dat = 1+(30-1).*rand(365,7); where each column refers to a different series of annual measurements. In my ... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 43, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9228152632713318, "perplexity_flag": "middle"} |
http://math.stackexchange.com/questions/121620/cone-created-from-sector-of-circle

# Cone created from Sector of Circle
Suppose I use a sector of a circle with radius 1 to create a cone (by joining the two radii of the sector).
How do I express the radius of the cone in terms of $\theta$? Is it
$$\sin{\frac{\theta}{2}} = r, \qquad \cos{\frac{\theta}{2}} = h$$?
-
## 1 Answer
Note that the slant height $s$ has value $1$.
$$\sin \frac{\theta}{2} =\frac{r}{s} \Rightarrow r = \sin \frac{\theta}{2}$$
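One caveat worth adding (my note, not part of the accepted answer): the relation above reads $\theta$ as the apex angle of the finished cone. If, as in the question, $\theta$ is the central angle of the flat sector, then rolling the sector up turns its arc length $\theta\cdot 1=\theta$ into the circumference of the base, so
$$2\pi r=\theta\quad\Longrightarrow\quad r=\frac{\theta}{2\pi},\qquad h=\sqrt{1-r^2}=\sqrt{1-\frac{\theta^2}{4\pi^2}}.$$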
-
http://physics.stackexchange.com/questions/23273/given-constant-t-why-does-p-affect-internal-energy

# Given constant T, why does P affect internal energy?
It has always bugged me that tables for water (and other) properties have the capability to look up internal energy as a function of both temperature and pressure. If we limit the discussion to liquid below the saturation temperature, then what is the qualitative argument to say that $u(T)$ is inaccurate and that the multivariate function $u(P,T)$ is needed?
From Wikipedia Internal Energy:
In thermodynamics, the internal energy is the total energy contained by a thermodynamic system. It is the energy needed to create the system, but excludes the energy to displace the system's surroundings, any energy associated with a move as a whole, or due to external force fields.
I understand that internal energy is not fully a proxy for temperature, so what thermodynamic property could we define (in $J/kg$) that would be a fully 1-to-1 relationship with temperature with no influence from pressure? If a liquid was fully incompressible would internal energy then not be a function of pressure?
If my physics understanding is correct, temperature has a definition that stems from the concept of thermal equilibrium. Quantitatively, I thought that temperature was proportional to the average kinetic energy of the molecules, but I doubt that as well (in fact, I think this is wrong). The zeroth law of thermodynamics is necessary for formally defining temperature but it, alone, is not sufficient to define temperature. My own definitions for temperature and internal energy do not have the rigor to stand up to scrutiny. What qualitative arguments can fix this?
Symbols
• $u$ - internal energy
• $P$ - pressure
• $T$ - temperature
-
I doubt you could define anything except $\int X dT$, where X is a specific heat sort of thing. But all the specific heats are pressure dependant/process dependant anyway for real systems :/ – Manishearth♦ Apr 5 '12 at 3:51
## 2 Answers
While there are many variables that characterize a thermodynamic system, such as volume $V$, pressure $P$, particle number $N$, chemical potential $\mu$, temperature $T$ and entropy $S$, these are not all independent of each others! In fact, any thermodynamic potential (such as internal energy, free energy, enthalpy) can be written as functions of either three of these variables.
Thus, in the most general case, you will get something like $$U(T,P,N)$$ where you specify temperature, pressure, and number of particles.
I think it's easier to understand if you realize that pressure and volume are intimately linked, and then think about the effect of interactions: These should get stronger if you reduce the volume of the system so particles are closer together and thus (typically) have a higher interaction energy.
In an ideal gas, there are no interactions, so volume doesn't really have an effect on the internal energy: $$U = \frac{3}{2} N k T$$
But if you have interactions, they will give you a contribution that depends on volume and, thus, on pressure.
EDIT: As an example, the van-der-Waals equation describes a gas of weakly interacting particles. There, Wikipedia gives the internal energy as $$U = \frac{3}{2} N k T - \frac{a' N^2}{V}$$ where $a'$ is a parameter describing the interaction.
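To spell out the pressure dependence (my own gloss on the formula above, not from the original answer): at fixed $T$ and $N$, raising the pressure shrinks $V$, and since $\left(\partial U/\partial V\right)_T=a'N^2/V^2>0$ for this model, compressing the gas lowers $U$. So the internal energy acquires its pressure dependence entirely through the interaction term.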
-
To start on a positive, your $U$ equation is a fantastic insight. Moving on, shouldn't $(T,P)$ imply $\rho$ which would then imply $N$? One factoid I constantly fall back on is the claim that a thermodynamic state is a function of 2 independent variables, not 3. Implicit in this argument is that you already know the composition, i.e. H2O. As far as I can tell, if you specify temperature and pressure, you would not then specify # of particles, so let's get that argument out of the way first. – AlanSE Apr 5 '12 at 5:22
I'm pretty sure you definitely need three independent variables. If you have an equation of $\rho$ in terms of $T$ and $P$, then to get $N$ you also have to know $V$, since $\rho = N/V$. In many situations, one variable is trivially fixed, so then of course there are only two variables left to specify. – Lagerbaer Apr 5 '12 at 5:27
Ok, this is reconciled by the fact that you include a macroscopic $V$, which is fine. My statements about 2 variables should be qualified as applying to a differential volume element. – AlanSE Apr 5 '12 at 5:33
The temperature is the average kinetic energy for classical nonrelativistic particle admixtures. The reason is that the temperature is what you multiply the energy by to get the probability of a given microstate. Kinetic energy is nonrelativistically always quadratic, and the probability distribution is Gaussian.
So the average KE of the atoms/molecules is your quantity in J/atom (or J/kg if you convert) and it is just ${3\over 2}kT$. To prove this, you just have to note that the expected value of $x^2$ for a gaussian of variance $\sigma$ is just $\sigma$, and the probability distribution for atoms having positions x and momentum p is
$$\rho(x,p) = e^{-\beta(\sum_i {p_i^2\over 2m} + V(x_1,x_2,...,x_N))}$$
which is Gaussian of the same width in p for all values of x, so the momentum variables are always distributed according to a Gaussian (Maxwell-Boltzmann) distribution.
If you imagine particles which have a potential energy function which is just proportional to the density (this is a grossly nonlocal potential energy) then if you increase the pressure at constant temperature, you will increase the density, and increase the internal energy.
So in classical statistical mechanics, assuming that the interaction is potential type, the pressure dependence tells you how the potential energy contributes.
In quantum statistical mechanics, at cold enough temperatures that the atomic motion energy levels are further apart than $kT$, these motions do not contribute to the specific heat, and the kinetic energies for these motions do not average to ${1\over 2} kT$ in each direction. But the classical description is accurate for those motions which are classical, and that's most of the gross motions at room temperature.
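As a numerical illustration of the claim that the average kinetic energy per particle is $\frac{3}{2}kT$ (my addition; the constants and sample size below are arbitrary illustrative choices, not from the answer), one can sample Maxwell–Boltzmann velocities directly:

```python
import numpy as np

k = 1.380649e-23   # Boltzmann constant in J/K
T = 300.0          # temperature in K
m = 6.6e-26        # particle mass in kg (roughly one argon atom)
N = 1_000_000      # number of sampled particles

rng = np.random.default_rng(0)
# Each velocity component is Gaussian with variance kT/m (Maxwell-Boltzmann).
v = rng.normal(0.0, np.sqrt(k * T / m), size=(N, 3))

mean_ke = 0.5 * m * (v ** 2).sum(axis=1).mean()
print(mean_ke / (1.5 * k * T))  # ratio comes out very close to 1
```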
-
The OP is about water, which has a qualitatively different set of features than the ideal gas. – Jerry Schirmer Apr 5 '12 at 12:45
You presented the definition that average KE = $3/2 kT = 1/N \sum{ m v_i^2 /2 } = 1/N \sum{ p_i^2 / 2 m}$. Does this satisfy the requirement that gases of two different molecular masses at the same $T$ can't transfer heat? Even in the presence of a potential function? @JerrySchirmer I think the presence of the term $V(x_1,..)$ breaks the assumption of an ideal gas. – AlanSE Apr 5 '12 at 14:59
@JerrySchirmer: Where do you see an ideal gas? The answer that the average kinetic energy of each center of mass motion is 1.5kT is correct for any classical system. – Ron Maimon Apr 5 '12 at 16:18
@AlanSE: Yes it does, in the thermodynamic limit (which is the only case in which it is valid that two systems at the same temeperature interacting don't transfer heat--- otherwise the interaction changes the ensemble), because the average kinetic energy is just the ordinary thermodynamic temperature. – Ron Maimon Apr 5 '12 at 16:21
A free classical system of non-interacting particles with no internal degrees of freedom, yes. That is hardly a description of anything but a gas. Even an ideal gas of molecules at a sufficiently high temperature to excite their rotational modes will have $U=nkT$ with $n>\frac{3}{2}$. – Jerry Schirmer Apr 5 '12 at 16:54
http://mathhelpforum.com/discrete-math/176778-emptyset-function.html

# Thread:
1. ## Is emptyset a function?
My discrete book is defining a function, f, as a special type of relationship in which if both $(a,b) \in f$ and $(a,c) \in f$, then $b=c$ (and a relation is defined as a set of ordered pairs).
So, is the empty set not a function because it doesn't have any ordered pairs, or is it a function because it does not violate the definition of a function?
For each of the following relations, please answer these questions:
(1) Is it a function? If not, explain why.
(2) If yes, what are its domain and range?
(3) Is the function one-to-one? If not, explain why.
(4) If yes, what is the inverse function?
a,b,c,d,e,... I already did
f. $f=\emptyset$
(1) Yes (trivially), because there are no ordered pairs in $f$, it does not violate the definition of function.
(2) dom $f$ = im $f$ = $\emptyset$
(3) Yes (trivially), since the definition of one-to-one is not violated
(4) $f^{-1}=\emptyset$
Above is how I wrote up my homework (but it's not due until Thu.), but I'm not confident it's the right answer.
2. Originally Posted by MSUMathStdnt
My discrete book is defining a function, f, as a special type of relationship in which if both $(a,b) \in f$ and $(a,c) \in f$, then $b=c$ (and a relation is defined as a set of ordered pairs).
So, is the empty set not a function because it doesn't have any ordered pairs, or is it a function because it does not violate the definition of a function?
I wouldn't put it as "not violating" but rather as "does fulfill."
When we say "S is a set of ordered pairs" we mean "For all x, if x in S, then x is an ordered pair." Well, the emptyset fulfills that requirement. For all x, if x is in the empty set then x is an ordered pair. So, in that very specific sense (which is the only sense that matters toward this question), yes, the empty set is a set of ordered pairs.
Originally Posted by MSUMathStdnt
Above is how I wrote up my homework (but it's not due until Thu.), but I'm not confident it's the right answer.
All correct, except I would modify (1) as I mentioned above.
3. Originally Posted by MoeBlee
I wouldn't put it as "not violating" but rather as "does fulfill."
When we say "S is a set of ordered pairs" we mean "For all x, if x in S, then x is an ordered pair." Well, the emptyset fulfills that requirement. For all x, if x is in the empty set then x is an ordered pair. So, in that very specific sense (which is the only sense that matters toward this question), yes, the empty set is a set of ordered pairs.
All correct, except I would modify (1) as I mentioned above.
I understand what you're saying. But I still don't see how to word it (although I'll probably get full credit as long as I've got the idea right). How does this sound:
(1) Yes (trivially). There are no ordered pairs in $f$; therefore, no two ordered pairs in $f$ have the same first value and a different second value.
4. This reminds me of a thread I saw recently.
http://mymathforum.com/viewtopic.php?f=22&t=18683
5. But you missed mentioning that every member of the empty set is an ordered pair.
I'll do it in English [where '0' stands for the empty set]:
For all x, if x is in 0, then x is an ordered pair. So 0 is a relation. And for all x, y, z, if <x y> and <x z> are in 0, then y=z. So 0 is a relation that is moreover a function.
In symbols:
Ax(x in 0 -> x is an ordered pair).
So 0 is a relation.
Axyz((<x y> in 0 & <x z> in 0) -> y=z).
So 0 is a function.
/
If you want to get more detailed, you can mention that
Ax(x in 0 -> x is an ordered pair)
Axyz((<x y> in 0 & <x z> in 0) -> y=z)
are true because the antecedent in each is false.
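A small computational gloss on the vacuous-truth point (my addition, a hypothetical snippet not from the thread): a universally quantified statement over the empty set comes out true, which is exactly why the empty relation qualifies as a function.

```python
from itertools import product

f = set()  # the empty relation: no ordered pairs at all

# "Every member of f is an ordered pair" -- vacuously true.
is_relation = all(isinstance(p, tuple) and len(p) == 2 for p in f)

# "Whenever (a, b) and (a, c) are both in f, b equals c" -- also vacuously true.
is_function = all(b == c for (a1, b), (a2, c) in product(f, f) if a1 == a2)

print(is_relation, is_function)  # True True
```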
http://math.stackexchange.com/questions/36292/why-does-the-formula-for-calculating-a-reflection-vector-work

Why does the formula for calculating a reflection vector work?
The formula for calculating a reflection vector is as follows: $$R = V - 2N(V\cdot N)$$ Where V is the incident vector and N is the normal vector on the plane in question.
Why does this formula work? I haven't seen any good explanations of it. I don't understand the significance of doubling the normal vector, nor the relevance of taking the dot product.
-
Remember that the dot product is related to the cosine of the angle between two vectors. – J. M. May 1 '11 at 22:48
1 Answer
$\langle V,N\rangle N$ is the orthogonal projection of $V$ onto (the line determined by) $N$. That is, if you want to write $V$ as a sum of two orthogonal vectors, $u$ and $n$, with $u$ in the plane and $n$ having the same direction as $N$ (that is, orthogonal to the plane), then $n = \langle V,N\rangle N$ and $u = V - \langle V,N\rangle N$.
Since $N$ is normal to the plane, the vector $u = V - \langle V,N\rangle N$ is in the plane. That is, $u$ is the orthogonal projection of $V$ onto the plane.
To take the reflection, you want to go to the point in the plane "directly below" $V$ (that is, to $u$), and then go in the opposite direction to where $V$ is. So what you are doing is simply reversing the normal component to get the reflection: instead of adding $\langle V,N\rangle N$, you subtract it because that reverses the direction. So instead of $$V = \underbrace{\Bigl( V - \langle V,N\rangle N\Bigr)}_{\text{in the plane}} + \underbrace{\langle V,N\rangle N}_{\text{orthogonal to the plane}}$$ you take $$\underbrace{\Bigl( V - \langle V,N\rangle N\Bigr)}_{\text{in the plane}} - \underbrace{\langle V,N\rangle N}_{\text{orthogonal to the plane}}.$$
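A short numerical sketch of the formula (my addition; the vectors below are just an illustration, and the helper only assumes a nonzero normal):

```python
import numpy as np

def reflect(v, n):
    """Reflect v across the plane through the origin with normal n."""
    n = n / np.linalg.norm(n)           # normalize so dot(v, n) * n is a true projection
    return v - 2 * np.dot(v, n) * n     # flip the normal component, keep the in-plane part

v = np.array([1.0, -1.0, 0.0])          # incident vector
n = np.array([0.0, 1.0, 0.0])           # normal of the xz-plane
print(reflect(v, n))                    # [1. 1. 0.]
```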
-
I fixed a couple of minor typos; hope you don't mind. – Rahul Narain May 2 '11 at 5:16
@Rahul: Certainly not! Thanks. – Arturo Magidin May 2 '11 at 14:27
http://mathoverflow.net/questions/73404/embeddings-of-sobolev-orlicz-spaces

Embeddings of Sobolev-Orlicz spaces
The Birnbaum--Orlicz spaces generalize the Lebesgue spaces (see http://en.wikipedia.org/wiki/Birnbaum-Orlicz_space for a precise definition). The space $L_\Phi(\Omega)$ is defined for convex functions $\Phi:(0,\infty)\rightarrow(0,\infty)$ with $\Phi(0)=0$ and $\Phi(\infty)=\infty$. The norm in $L_\Phi$ is denoted `$\|\cdot\|_\Phi$`. When $\Phi(t)=t^p$, then $L_\Phi=L^p$ and $\|\cdot\|_\Phi=\|\cdot\|_p$.
Define the Sobolev space `$W^{1,\Phi}_0(\Omega)$` to be the closure of ${\mathcal D}(\Omega)$ under the norm $$f\mapsto\|\nabla f\|_\Phi.$$ Let me recall some of the Sobolev embeddings, when $\Omega$ is bounded. If `$1<p<n$`, we have $\dot W^{1,p}(\Omega)\subset L^q(\Omega)$, with $$\frac1q+\frac1n=\frac1p.$$
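For instance (my own illustration of the exponent bookkeeping): with $n=3$ and $p=2$ this gives $\frac1q=\frac12-\frac13=\frac16$, i.e. the familiar embedding $W^{1,2}_0(\Omega)\subset L^6(\Omega)$.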
Actually, if $p=n$, `$W^{1,n}_0(\Omega)$` is included in $L_\Phi(\Omega)$ where $\Phi(t)=\exp(t^{n/(n-1)})-1$.
Question: is there a theory of embedding for spaces $W^{1,\Phi}(\Omega)$. I suspect that one can find an other convex function $\Psi$ such that $W^{1,\Phi}\subset L_\Psi$.
-
1 Answer
Yes, you are right. A good place to start are some surveys written by Andrea Cianchi. For example,
Cianchi, Andrea(I-FRNZ-AMA) On some aspects of the theory of Orlicz-Sobolev spaces. (English summary) Around the research of Vladimir Maz'ya. I, 81–104, Int. Math. Ser. (N. Y.), 11, Springer, New York, 2010.
or
Cianchi, Andrea(I-FRNZ-AMA) Optimal Orlicz-Sobolev embeddings. (English summary) Rev. Mat. Iberoamericana 20 (2004), no. 2, 427–474.
-
http://math.stackexchange.com/questions/239253/is-there-a-way-to-find-the-value-of-1n-2n-cdots-mn-modulo-x

# Is there a way to find the value of $1^n+ 2^n +\cdots + m^n$ modulo $x$?
I am writing a program in which I want to make changes to make it more efficient.
What the program does is it takes three inputs $m$, $n$ and $x$ and I have to find the value of the following equation: $$1^n+ 2^n+\cdots + m^n \mod{x}$$
Is there a better way than calculating the whole value and then solving for the answer? Because if $n$ and $m$ are large it takes a lot of computation time, which I am trying to avoid.
-
What about wikipedia:bernoulli-polynomials , or googling for "sums-of-like-powers" ? – Gottfried Helms Nov 17 '12 at 14:36
please note I made a mistake in my answer. – sperners lemma Nov 17 '12 at 18:20
## 1 Answer
If $x$ is small compared to $n$ and/or $m$ there are some good optimizations you can do:
• Edit: This is wrong, don't do this: Replace $n$ with its remainder on division by $\varphi(x)$. This only works for bases coprime to $x$, which is a good proportion of them, so it may still be worth doing for those; for the ones that aren't coprime...
• Use binary exponentiation.
• Split the sum into blocks $[1^n + 2^n + ... + x^n] + [(x+1)^n + (x+2)^n + \ldots] + \ldots$ which are all equal, so you only need to compute the sum of $x$ terms rather than $m$.
If $x$ is large compared to $n$ then (as already mentioned in comments) it will be more efficient to compute the sum using a closed form polynomial (which you may need to compute before use).
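A sketch of the block idea in code (my addition, assuming $x\ge 1$; the names are illustrative): since $(qx+r)^n\equiv r^n\pmod{x}$, the sum splits into $\lfloor m/x\rfloor$ identical blocks plus a partial block, and Python's three-argument `pow` supplies the modular binary exponentiation.

```python
def power_sum_mod(m, n, x):
    """Return (1**n + 2**n + ... + m**n) mod x without computing m huge powers."""
    full_blocks, remainder = divmod(m, x)
    block = sum(pow(r, n, x) for r in range(1, x + 1)) % x          # one full block of x terms
    tail = sum(pow(r, n, x) for r in range(1, remainder + 1)) % x   # the leftover partial block
    return ((full_blocks % x) * block + tail) % x

# quick sanity check against the naive sum
m, n, x = 1000, 13, 37
assert power_sum_mod(m, n, x) == sum(i ** n for i in range(1, m + 1)) % x
print(power_sum_mod(m, n, x))
```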
-
$\varphi(x)$ doesn't do you any good, because the numbers from $1$ to $x$ are not all relatively prime to $x$. In general, it is not true that if $a\equiv b\pmod {\varphi(x)}$ that $a^{\varphi(x)}\equiv b^{\varphi(x)}\pmod x$ – Thomas Andrews Nov 17 '12 at 18:10
If $x=8$ then $\varphi(x)=4$ and $2^5\equiv 0 \not\equiv 2^1\pmod 8$ even though $5\equiv 1\pmod 4$ – Thomas Andrews Nov 17 '12 at 18:13
Yeah, I got the formula wrong in my comment above, but my counter-example is for your second version. There, $a=2$, $x=8$, $n=1$ and $n'=5$. – Thomas Andrews Nov 17 '12 at 18:14
One quick reduction occurs when $n$ is odd. Then the expression $1^n+2^n+...+x^n \equiv 0\pmod x$. That's because $a^n + (x-a)^n \equiv 0\pmod x$. – Thomas Andrews Nov 17 '12 at 18:19
Sorry, that only works when both $x$ and $n$ are odd. If $x$ is even and $n$ odd, then $1^n+2^n+..+x^n\equiv (\frac{x}{2})^n \pmod x$. If $x$ is divisible by $4$ and $n>1$, then $(\frac x 2)^n \equiv 0\pmod x$. – Thomas Andrews Nov 17 '12 at 18:25