http://ams.org/cgi-bin/bookstore/booksearch?fn=100&pg1=CN&s1=Prasad_Dipendra&arg9=Dipendra_Prasad
Sur Les Conjectures de Gross et Prasad I
Wee Teck Gan, University of California at San Diego, CA, Benedict H. Gross, Harvard University, Cambridge, MA, Dipendra Prasad, Tata Institute of Fundamental Research, Mumbai, India, and Jean-Loup Waldspurger, Institut de Mathématiques de Jussieu-CNRS, Paris, France
A publication of the Société Mathématique de France.
Astérisque 2012; 318 pp; softcover; Number: 346; ISBN-13: 978-2-85629-348-5; List Price: US$105; Individual Members: US$94.50; Order Code: AST/346
See also: Sur Les Conjectures de Gross et Prasad II - Colette Moeglin and Jean-Loup Waldspurger
A note to readers: Half of this book is in English and half is in French.
About 20 years ago Gross and Prasad formulated a conjecture determining the restriction of an irreducible admissible representation of the group $$G = SO(n)$$ over a local field to a subgroup of the form $$G' = SO(n-1)$$. The conjecture stated that for a given pair of generic $$L$$-packets of $$G$$ and $$G'$$, there is a unique non-trivial pairing, up to scalars, between precisely one member of each packet, where $$G$$ and $$G'$$ are allowed to vary among inner forms; moreover, the relevant members of the $$L$$-packets are determined by an explicit formula involving local root numbers. For non-archimedean local fields this conjecture has now been proved by Waldspurger and Mœglin, using a variety of methods of local representation theory; the Plancherel formula plays an important role in the proof. There is also a global conjecture for automorphic representations, which involves the central critical value of $$L$$-functions.
This volume is the first of two volumes devoted to the conjecture and its proof for non-archimedean local fields. It contains two long articles by Gan, Gross, and Prasad, formulating extensions of the original Gross-Prasad conjecture to more general pairs of classical groups, including metaplectic groups, and providing examples for low-rank unitary groups and for representations with restricted ramification. It also includes two articles by Waldspurger: a short article deriving the local multiplicity one conjecture for special orthogonal groups from the results of Aizenbud-Gourevitch-Rallis-Schiffmann on orthogonal groups, and a long article (which appeared in Compositio Mathematica in 2010) completing the first part of the proof of the Gross-Prasad conjecture by extending an integral formula, relating multiplicities in the restriction problem to harmonic analysis, from supercuspidal representations to general tempered representations.
A publication of the Société Mathématique de France, Marseilles (SMF), distributed by the AMS in the U.S., Canada, and Mexico. Orders from other countries should be sent to the SMF. Members of the SMF receive a 30% discount from list.
Readership: Graduate students and research mathematicians interested in classical groups, metaplectic groups, branching laws, Gross-Prasad conjectures, local root numbers, and central critical $$L$$-values.
Table of Contents:
- W. T. Gan, B. H. Gross, and D. Prasad -- Symplectic local root numbers, central critical L-values, and restriction problems in the representation theory of classical groups
- W. T. Gan, B. H. Gross, and D. Prasad -- Restrictions of representations of classical groups: Examples
- J.-L. Waldspurger -- Une formule intégrale reliée à la conjecture locale de Gross-Prasad, $$2^e$$ partie : Extension aux représentations tempérées
- J.-L. Waldspurger -- Une variante d'un résultat de Aizenbud, Gourevitch, Rallis et Schiffmann
http://crypto.stackexchange.com/questions/780/is-the-last-step-of-an-iterated-cryptographic-hash-still-as-resistant-to-preimag | # Is the last step of an iterated cryptographic hash still as resistant to preimage attacks as the original hash?
Considering a cryptographic hash, such as MD5 or SHA2, denoted by the function $H(m)$ where $m$ is an arbitrary binary string, there is a lot of material available that deals with potential weakness to preimage attack. I am interested in resistance to preimage attack where the cryptographic hash is applied recursively a number of times.
A classical first-preimage attack asks to determine a message $m$ for a given hash $h$ such that $h=H(m)$.
A related question considers a known value of $H(H(m))$ and asks whether $H(m)$ can be determined from it more easily than $m$ can be determined from $H(m)$. The extension asks whether $H^{(n+1)}(m)$ is more vulnerable to a preimage attack that recovers $H^{(n)}(m)$ than $H(m)$ is to a first-preimage attack... and, if resistance deteriorates as $n$ increases, how many times $H(\cdot)$ can be applied while it is safe to remain confident that the last application of $H(\cdot)$ remains non-invertible - i.e. how many times is it safe to chain this non-invertible function without significantly weakening its resistance to inversion?
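To fix notation, here is a minimal sketch of what "applying the hash recursively" means; the choice of SHA-256, the byte-string convention, and the helper name `H_n` are illustrative assumptions on my part, not part of the question.

```python
import hashlib

def H(m: bytes) -> bytes:
    """One application of the hash (SHA-256 chosen only as an example)."""
    return hashlib.sha256(m).digest()

def H_n(m: bytes, n: int) -> bytes:
    """The n-fold iterate H^(n)(m): feed each digest back in as the next message."""
    for _ in range(n):
        m = H(m)
    return m

# Example: H^(3)("hello") as a hex string
print(H_n(b"hello", 3).hex())
```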
I'm also interested to establish if different hashes have different strengths relative to this scenario of repeated application.
## 1 Answer
If the hash function is any good, then it should behave as a "random function" (i.e. a function chosen randomly and uniformly among all possible functions). For a random function with output size $n$ bits, it is expected that nested application will follow a "rho" pattern: the sequence of successive values ultimately enters a cycle with an expected size of $2^{n/2}$; and the length of the "queue" (the values before entering the cycle) should also be close to $2^{n/2}$.
So, theoretically, recursive application weakens the function with regard to preimage resistance, but the weakening is bounded below by $2^{n/2}$, a boundary which requires an awfully large number of invocations to be reached.
Moreover, when we select a hash function, we want to have decent resistance to collisions, and collisions can be found with effort $2^{n/2}$. So we already choose hash functions such that $2^{n/2}$ is unreachable (e.g. we use SHA-256, because $2^{128}$ is far beyond what can be realistically achieved with Earth-bound technology; see this answer for a discussion on the subject). In short words, the weakening through recursive application has no practical consequence (while the time taken to compute all the hashes improves things greatly with regards to dictionary attacks).
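The "rho" shape is easy to observe experimentally on a deliberately truncated hash (a full 256-bit output would never cycle in practice). The sketch below is an illustration I am adding, not part of the answer: it truncates SHA-256 to 3 bytes so that the tail and cycle lengths, each expected to be on the order of $2^{n/2}$, become measurable in a fraction of a second.

```python
import hashlib

TRUNC = 3  # keep 3 bytes => n = 24 bits, so tail/cycle lengths ~ sqrt(2^24) ~ 4096

def h(x: bytes) -> bytes:
    """Truncated hash standing in for a 'random function' on n = 8*TRUNC bits."""
    return hashlib.sha256(x).digest()[:TRUNC]

def rho_lengths(seed: bytes):
    """Brent-style cycle search: return (tail_length, cycle_length) of the iterated map."""
    # First find the cycle length.
    power = cycle_len = 1
    tortoise, hare = seed, h(seed)
    while tortoise != hare:
        if power == cycle_len:      # start a new power-of-two window
            tortoise = hare
            power *= 2
            cycle_len = 0
        hare = h(hare)
        cycle_len += 1
    # Then find the tail length (distance to the first value on the cycle).
    tortoise = hare = seed
    for _ in range(cycle_len):
        hare = h(hare)
    tail_len = 0
    while tortoise != hare:
        tortoise, hare = h(tortoise), h(hare)
        tail_len += 1
    return tail_len, cycle_len

print(rho_lengths(b"seed"))  # both numbers should come out in the low thousands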
Thanks for the answer... though there are a couple of things I'd like clarified. Do you have a reference regarding the "rho pattern"? – aSteve Sep 25 '11 at 17:11
The other issue is whether or not the real hash function "is good"... If there is any narrowing of the state space, it strikes me that there must be a finite limit to the number of times the hash function can be recursively applied before there's insufficient entropy to assure security. It also strikes me as unlikely that all (any?) hash functions are entropy preserving when applied to values of the same length as the hash. In order to establish the limit, I'd need to estimate information loss on each application... but I don't know how to go about that. – aSteve Sep 25 '11 at 17:25
@aSteve: the point of the rho pattern is that the "narrowing" is limited to the size of the cycle -- you cannot narrow down any further because that's a cycle. And the size of the cycle is about $2^{n/2}$. – Thomas Pornin Sep 25 '11 at 23:17
@aSteve: it is proven that the probability of not having a very small cycle with a random function is vanishingly small. The risks involved in that probability are negligible with regards to other unlucky events such as, e.g., getting hit by an old satellite reentering the atmosphere. The "real" issue is rather on whether a given hash function behaves like a random function. – Thomas Pornin Sep 26 '11 at 14:02
http://mathhelpforum.com/calculus/86808-lagrange-multipliers.html | # Thread:
1. ## Lagrange Multipliers
I'm supposed to find the maximum value of f(x,y)=xy if (x,y) is a point on the ellipse 9x^2+16y^2=144.
I have that $y=18\lambda x$ and $x=32\lambda y$ but I'm not sure how to go about solving for x and y.
Thanks
2. Originally Posted by jelloish
I'm supposed to find the maximum value of f(x,y)=xy if (x,y) is a point on the ellipse 9x^2+16y^2=144.
I have that $y=18\lambda x$ and $x=32\lambda y$ but I'm not sure how to go about solving for x and y.
Thanks
$\nabla(xy) = \langle y,x\rangle$ and $\nabla(9x^2+16y^2) = \langle 18x,32y\rangle$
So $\langle y,x\rangle = \lambda\langle 18x,32y\rangle$
So you want to solve the system:
$y=18\lambda x$
$x=32\lambda y$
$9x^2+16y^2=144$
The best thing to do is to get rid of $\lambda$ first. So divide the first two equations to get: $\frac{x}{y} = \frac{16y}{9x} \implies 9x^2=16y^2$
Substituting into the third equation gives us $9x^2+9x^2=144 \implies x=2\sqrt{2} \implies y=\sqrt{\frac{9}{2}}=\frac{3\sqrt{2}}{2}$
So the maximum of $xy$ occurs when $x=2\sqrt{2}$ and $y=\frac{3\sqrt{2}}{2}$, so the maximum value is $2\sqrt{2}\cdot\frac{3\sqrt{2}}{2} = \boxed{6}$
3. Another way to solve it is to switch to polar coordinates: $3x=12\cos(t),\ 4y=12\sin(t)$
Then you just need to find the maximum of a function of one variable:
$f(x,y)=xy=f(4\cos(t),3\sin(t))=12\cos(t)\sin(t)$
4. Originally Posted by Spec
Another way to solve it is to switch to polar coordinates: $3x=12\cos(t),\ 4y=12\sin(t)$
Then you just need to find the maximum of a function of one variable:
$f(x,y)=xy=f(4\cos(t),3\sin(t))=12\cos(t)\sin(t)$
True. The derivative of $12\cos(t)\sin(t)$ is $12\cos(2t)$. Setting $12\cos(2t)=0$ gives you $t=\frac{\pi}{4}$.
$12\cos(\pi/4)\sin(\pi/4) = 12\cdot\frac{\sqrt{2}}{2}\cdot\frac{\sqrt{2}}{2} = \boxed{6}$
which is the same answer as above.
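As a quick numerical sanity check of both computations above (an aside, not part of the original thread), one can sample the parametrization $x=4\cos(t)$, $y=3\sin(t)$ of the ellipse and scan $xy$ over a fine grid:

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 100001)   # parameter along the ellipse
x, y = 4.0 * np.cos(t), 3.0 * np.sin(t)     # satisfies 9x^2 + 16y^2 = 144
f = x * y

i = np.argmax(f)
print(f[i], x[i], y[i])   # ~6.0 at x ~ 2*sqrt(2) ~ 2.828, y ~ 3/sqrt(2) ~ 2.121
```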
http://stats.stackexchange.com/questions/22258/what-is-the-use-of-the-line-produced-by-qqline-in-r | # What is the use of the line produced by qqline() in R?
The `qqnorm()` R function produces a normal QQ-plot and `qqline()` adds a line which passes through the first and third quartiles. What is the origin of this line? Is it helpful to check normality? This is not the classical line (the diagonal $y=x$ possibly after linear scaling).
Here is an example. First I compare the empirical distribution function with the theoretical distribution function of ${\cal N}(\hat\mu,\hat\sigma^2)$. Then I plot the qq-plot with the line $y=\hat\mu + \hat\sigma x$; this graph roughly corresponds to a (non-linear) scaling of the previous graph. But the qq-plot with the R qqline does not show the departure that is visible in the first graph.
## 1 Answer
As you can see on the picture,
obtained by
````> y <- rnorm(2000)*4-4
> qqnorm(y); qqline(y, col = 2,lwd=2,lty=2)
````
the diagonal would not make sense because the first axis is scaled in terms of the theoretical quantiles of a $\mathcal{N}(0,1)$ distribution. I think using the first and third quartiles to set the line gives a robust approach for estimating the parameters of the normal distribution, when compared with using the empirical mean and variance, say. Departures from the line (except in the tails) are indicative of a lack of normality.
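To make the quartile-based construction concrete: the line described in the question is forced through the two points whose coordinates are the theoretical and sample first and third quartiles. The sketch below reproduces that construction numerically; Python/NumPy/SciPy are used purely for illustration (the simulated sample mirrors the R example above), and the variable names are my own.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
y = rng.normal(size=2000) * 4 - 4          # same kind of sample as the R example

# Sample quartiles and the corresponding standard-normal quantiles
q1y, q3y = np.quantile(y, [0.25, 0.75])
q1x, q3x = norm.ppf([0.25, 0.75])

slope = (q3y - q1y) / (q3x - q1x)          # a robust scale estimate (close to sigma = 4)
intercept = q1y - slope * q1x              # a robust location estimate (close to mu = -4)
print(slope, intercept)
```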
The diagonal "after linear scaling" is here obtained by abline(mean(y), sd(y)). Here you simulate normal data hence these two lines are close. But sometimes the data is not close to a normal distribution but the qqplot is close to the qqline, but not to the diagonal "after scaling". – Stéphane Laurent Feb 4 '12 at 11:45
... I'm going to add an example to my question – Stéphane Laurent Feb 4 '12 at 11:53
I think this was my point in stating that using the quartiles is more robust than using empirical mean and variance. – Xi'an Feb 4 '12 at 14:56
Ok, thank you very much. Now this seems obvious. The qqline could be preferable because sometimes in practice the non-normality in the tails is acceptable. But there is no real need to plot the qqline: a visual check is sufficient - the only thing we need is to understand the QQ-plot :) – Stéphane Laurent Feb 4 '12 at 16:25
Ok - I tag, but the answer itself was not satisfactory: the answer together with our discussion is satisfactory; but this is my fault: my question was not clear before I added the example. By the way my question is somewhat related to the KS-test: what about the choice of the estimates $\hat\mu$ and $\hat\sigma$ when we type ks.test(x,"pnorm",mu.hat,sigma.hat) ? – Stéphane Laurent Feb 4 '12 at 19:04
http://www.reference.com/browse/Density-functional+theory
# Density functional theory
Density functional theory (DFT) is a quantum mechanical theory used in physics and chemistry to investigate the electronic structure (principally the ground state) of many-body systems, in particular atoms, molecules, and the condensed phases. DFT is among the most popular and versatile methods available in condensed-matter physics, computational physics, and computational chemistry.
## Hohenberg-Kohn theorems
Although density functional theory has its conceptual roots in the Thomas-Fermi model described below, DFT was put on a firm theoretical footing by the two Hohenberg-Kohn theorems (H-K). The first of these demonstrates the existence of a one-to-one mapping between the ground-state electron density and the ground-state wavefunction of a many-particle system. Further, the second H-K theorem proves that the ground-state density minimizes the total electronic energy of the system. The original H-K theorems held only for the ground state in the absence of a magnetic field, although they have since been generalized.
The theorems can be extended to the time-dependent domain to derive time-dependent density functional theory (TDDFT), which can also be used to describe excited states.
The first Hohenberg-Kohn theorem is an existence theorem, stating that the mapping exists. That is, the H-K theorems tell us that the electron density that minimizes the energy according to the true total energy functional describes all that can be known about the electronic structure. The H-K theorems do not tell us what the true total-energy functional is, only that it exists.
The most common implementation of density functional theory is through the Kohn-Sham method, which maps the properties of the system onto the properties of a system containing non-interacting electrons under a different potential. The kinetic-energy functional of such a system of non-interacting electrons is known exactly. The exchange-correlation part of the total-energy functional remains unknown and must be approximated. Another approach, less popular than Kohn-Sham DFT (KS-DFT) but arguably more closely related to the spirit of the original H-K theorems, is orbital-free density functional theory (OFDFT), in which approximate functionals are also used for the kinetic energy of the interacting system.
## Description of the theory
Traditional methods in electronic structure theory, in particular Hartree-Fock theory and its descendants, are based on the complicated many-electron wavefunction. The main objective of density functional theory is to replace the many-body electronic wavefunction with the electronic density as the basic quantity. Whereas the many-body wavefunction is dependent on $3N$ variables, three spatial variables for each of the $N$ electrons, the density is only a function of three variables and is a simpler quantity to deal with both conceptually and practically.
Within the framework of Kohn-Sham DFT, the intractable many-body problem of interacting electrons in a static external potential is reduced to a tractable problem of non-interacting electrons moving in an effective potential. The effective potential includes the external potential and the effects of the Coulomb interactions between the electrons, e.g., the exchange and correlation interactions. Modeling the latter two interactions becomes the difficulty within KS DFT. The simplest approximation is the local-density approximation (LDA), which is based upon exact exchange energy for a uniform electron gas, which can be obtained from the Thomas-Fermi model, and from fits to the correlation energy for a uniform electron gas.
DFT has been very popular for calculations in solid state physics since the 1970s. In many cases DFT with the local-density approximation gives quite satisfactory results, for solid-state calculations, in comparison to experimental data at relatively low computational costs when compared to other ways of solving the quantum mechanical many-body problem. However, it was not considered accurate enough for calculations in quantum chemistry until the 1990s, when the approximations used in the theory were greatly refined to better model the exchange and correlation interactions. DFT is now a leading method for electronic structure calculations in both fields. Despite the improvements in DFT, there are still difficulties in using density functional theory to properly describe intermolecular interactions, especially van der Waals forces (dispersion); charge transfer excitations; transition states, global potential energy surfaces and some other strongly correlated systems; and in calculations of the band gap in semiconductors. Its poor treatment of dispersion renders DFT unsuitable (at least when used alone) for the treatment of systems which are dominated by dispersion (e.g., interacting noble gas atoms) or where dispersion competes significantly with other effects (e.g. in biomolecules). The development of new DFT methods designed to overcome this problem, by alterations to the functional or by the inclusion of additive terms, is a current research topic.
## Derivation and formalism
As usual in many-body electronic structure calculations, the nuclei of the treated molecules or clusters are seen as fixed (the Born-Oppenheimer approximation), generating a static external potential $V$ in which the electrons are moving. A stationary electronic state is then described by a wavefunction $\Psi(\vec r_1,\dots,\vec r_N)$ satisfying the many-electron Schrödinger equation
$\hat H \Psi = \left[\hat T + \hat V + \hat U\right]\Psi = E\,\Psi$
where $\hat H$ is the electronic molecular Hamiltonian, $N$ is the number of electrons, $\hat T$ is the $N$-electron kinetic energy, $\hat V$ is the $N$-electron potential energy from the external field, and $\hat U$ is the electron-electron interaction energy for the $N$-electron system. The operators $\hat T$ and $\hat U$ are so-called universal operators, as they are the same for any system, while $\hat V$ is system dependent, or non-universal. The difference between having separable single-particle problems and the much more complicated many-particle problem arises from the interaction term $\hat U$.
There are many sophisticated methods for solving the many-body Schrödinger equation based on the expansion of the wavefunction in Slater determinants. While the simplest one is the Hartree-Fock method, more sophisticated approaches are usually categorized as post-Hartree-Fock methods. However, the problem with these methods is the huge computational effort, which makes it virtually impossible to apply them efficiently to larger, more complex systems.
Here DFT provides an appealing alternative, being much more versatile as it provides a way to systematically map the many-body problem, with $\hat U$, onto a single-body problem without $\hat U$. In DFT the key variable is the particle density $n(\vec r)$, which for a normalized $\Psi$ is given by
$n(\vec r) = N \int {\rm d}^3r_2 \int {\rm d}^3r_3 \cdots \int {\rm d}^3r_N \, \Psi^*(\vec r,\vec r_2,\dots,\vec r_N)\, \Psi(\vec r,\vec r_2,\dots,\vec r_N)$
This relation can be reversed; i.e., for a given ground-state density $n_0(\vec r)$ it is possible, in principle, to calculate the corresponding ground-state wavefunction $\Psi_0(\vec r_1,\dots,\vec r_N)$. In other words, $\Psi_0$ is a unique functional of $n_0$,
$\Psi_0 = \Psi[n_0]$
and consequently the ground-state expectation value of an observable $\hat O$ is also a functional of $n_0$:
$O[n_0] = \left\langle \Psi[n_0] \left| \hat O \right| \Psi[n_0] \right\rangle$
In particular, the ground-state energy is a functional of $n_0$:
$E_0 = E[n_0] = \left\langle \Psi[n_0] \left| \hat T + \hat V + \hat U \right| \Psi[n_0] \right\rangle$
where the contribution of the external potential $\left\langle \Psi[n_0] \left| \hat V \right| \Psi[n_0] \right\rangle$ can be written explicitly in terms of the ground-state density $n_0$:
$V[n_0] = \int V(\vec r)\, n_0(\vec r)\, {\rm d}^3r$
More generally, the contribution of the external potential $\left\langle \Psi \left| \hat V \right| \Psi \right\rangle$ can be written explicitly in terms of the density $n$:
$V[n] = \int V(\vec r)\, n(\vec r)\, {\rm d}^3r$
The functionals $T[n]$ and $U[n]$ are called universal functionals, while $V[n]$ is called a non-universal functional, as it depends on the system under study. Having specified a system, i.e., having specified $\hat V$, one then has to minimize the functional
$E[n] = T[n] + U[n] + \int V(\vec r)\, n(\vec r)\, {\rm d}^3r$
with respect to $n(\vec r)$, assuming one has got reliable expressions for $T[n]$ and $U[n]$. A successful minimization of the energy functional will yield the ground-state density $n_0$ and thus all other ground-state observables.
The variational problem of minimizing the energy functional $E[n]$ can be solved by applying the Lagrangian method of undetermined multipliers. Hereby, one uses the fact that the functional in the equation above can be written as a fictitious density functional of a non-interacting system
$E_s[n] = \left\langle \Psi_s[n] \left| \hat T_s + \hat V_s \right| \Psi_s[n] \right\rangle$
where $\hat T_s$ denotes the non-interacting kinetic energy and $\hat V_s$ is an external effective potential in which the particles are moving. Obviously, $n_s(\vec r) \stackrel{\mathrm{def}}{=} n(\vec r)$ if $\hat V_s$ is chosen to be
$\hat V_s = \hat V + \hat U + \left(\hat T - \hat T_s\right)$
Thus, one can solve the so-called Kohn-Sham equations of this auxiliary non-interacting system,
$\left[-\frac{\hbar^2}{2m}\nabla^2 + V_s(\vec r)\right] \phi_i(\vec r) = \epsilon_i \phi_i(\vec r)$
which yields the orbitals $\phi_i$ that reproduce the density $n(\vec r)$ of the original many-body system
$n(\vec r) \stackrel{\mathrm{def}}{=} n_s(\vec r) = \sum_i^N \left|\phi_i(\vec r)\right|^2$
The effective single-particle potential can be written in more detail as
$V_s(\vec r) = V(\vec r) + \int \frac{e^2 n_s(\vec r\,')}{|\vec r - \vec r\,'|}\, {\rm d}^3r' + V_{\rm XC}[n_s(\vec r)]$
where the second term denotes the so-called Hartree term describing the electron-electron Coulomb repulsion, while the last term $V_{\rm XC}$ is called the exchange-correlation potential. Here, $V_{\rm XC}$ includes all the many-particle interactions. Since the Hartree term and $V_{\rm XC}$ depend on $n(\vec r)$, which depends on the $\phi_i$, which in turn depend on $V_s$, the problem of solving the Kohn-Sham equation has to be done in a self-consistent (i.e., iterative) way. Usually one starts with an initial guess for $n(\vec r)$, then calculates the corresponding $V_s$ and solves the Kohn-Sham equations for the $\phi_i$. From these one calculates a new density and starts again. This procedure is then repeated until convergence is reached.
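The structure of that self-consistency loop can be made explicit in a short sketch. The following is a deliberately schematic one-dimensional toy (finite-difference kinetic energy, a made-up local mean-field term standing in for the Hartree and exchange-correlation potentials, arbitrary units); it is meant only to show the iterate-until-converged structure, not to be a production Kohn-Sham solver.

```python
import numpy as np

# 1-D grid and a fixed external potential (harmonic well, arbitrary units)
ngrid, L = 200, 10.0
xs = np.linspace(-L / 2, L / 2, ngrid)
dx = xs[1] - xs[0]
v_ext = 0.5 * xs**2
n_elec = 4                                   # number of (spinless) electrons

# Finite-difference kinetic-energy operator  -(1/2) d^2/dx^2
T = (np.diag(np.full(ngrid, 1.0)) - 0.5 * np.diag(np.ones(ngrid - 1), 1)
     - 0.5 * np.diag(np.ones(ngrid - 1), -1)) / dx**2

def effective_potential(n):
    """Toy stand-in for V + Hartree + V_XC: V_ext plus an assumed local term in n."""
    return v_ext + 1.0 * n

n = np.zeros(ngrid)                          # initial guess for the density
for it in range(100):
    H = T + np.diag(effective_potential(n))  # Kohn-Sham Hamiltonian for this density
    eps, phi = np.linalg.eigh(H)             # orbital energies and orbitals
    phi = phi / np.sqrt(dx)                  # normalize so that sum |phi|^2 dx = 1
    n_new = np.sum(np.abs(phi[:, :n_elec])**2, axis=1)   # density from occupied orbitals
    if np.max(np.abs(n_new - n)) < 1e-8:     # converged?
        break
    n = 0.5 * n + 0.5 * n_new                # simple density mixing for stability
print(it, eps[:n_elec])
```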
## Approximations (Exchange-correlation functionals)
The major problem with DFT is that the exact functionals for exchange and correlation are not known except for the free electron gas. However, approximations exist which permit the calculation of certain physical quantities quite accurately. In physics the most widely used approximation is the local-density approximation (LDA), where the functional depends only on the density at the coordinate where the functional is evaluated:
$E_{\rm XC}[n]=\int\epsilon_{\rm XC}(n)\, n(\vec{r})\, {\rm d}^3r$
The local spin-density approximation (LSDA) is a straightforward generalization of the LDA to include electron spin:
$E_{\rm XC}[n_\uparrow,n_\downarrow]=\int\epsilon_{\rm XC}(n_\uparrow,n_\downarrow)\, n(\vec{r})\, {\rm d}^3r$
Highly accurate formulae for the exchange-correlation energy density $\epsilon_{\rm XC}(n_\uparrow,n_\downarrow)$ have been constructed from quantum Monte Carlo simulations of a free electron model.
Generalized gradient approximations (GGA) are still local but also take into account the gradient of the density at the same coordinate:
$E_{\rm XC}[n_\uparrow,n_\downarrow]=\int\epsilon_{\rm XC}(n_\uparrow,n_\downarrow,\vec{\nabla}n_\uparrow,\vec{\nabla}n_\downarrow)\, n(\vec{r})\, {\rm d}^3r$
Using the latter (GGA) very good results for molecular geometries and ground-state energies have been achieved.
Potentially more accurate than the GGA functionals are the meta-GGA functionals. These functionals include a further term in the expansion, depending on the density, the gradient of the density, and the Laplacian (second derivative) of the density.
Difficulties in expressing the exchange part of the energy can be relieved by including a component of the exact exchange energy calculated from Hartree-Fock theory. Functionals of this type are known as hybrid functionals.
## Generalizations to include magnetic fields
The DFT formalism described above breaks down, to various degrees, in the presence of a vector potential, i.e. a magnetic field. In such a situation, the one-to-one mapping between the ground-state electron density and wavefunction is lost. Generalizations to include the effects of magnetic fields have led to two different theories: current density functional theory (CDFT) and magnetic field functional theory (BDFT). In both these theories, the functional used for the exchange and correlation must be generalized to include more than just the electron density. In current density functional theory, developed by Vignale and Rasolt, the functionals become dependent on both the electron density and the paramagnetic current density. In magnetic field density functional theory, developed by Salsbury, Grayce and Harris, the functionals depend on the electron density and the magnetic field, and the functional form can depend on the form of the magnetic field. In both of these theories it has been difficult to develop functionals beyond their equivalent to LDA, which are also readily implementable computationally.
## Applications
In practice, Kohn-Sham theory can be applied in several distinct ways depending on what is being investigated. In solid state calculations, the local density approximations are still commonly used along with plane wave basis sets, as an electron gas approach is more appropriate for electrons delocalised through an infinite solid. In molecular calculations, however, more sophisticated functionals are needed, and a huge variety of exchange-correlation functionals have been developed for chemical applications. Some of these are inconsistent with the uniform electron gas approximation; however, they must reduce to LDA in the electron gas limit. Among physicists, probably the most widely used functional is the revised Perdew-Burke-Ernzerhof exchange model (a direct generalized-gradient parametrization of the free electron gas with no free parameters); however, this is not sufficiently calorimetrically accurate for gas-phase molecular calculations. In the chemistry community, one popular functional is known as BLYP (from the name Becke for the exchange part and Lee, Yang and Parr for the correlation part). Even more widely used is B3LYP, which is a hybrid functional in which the exchange energy, in this case from Becke's exchange functional, is combined with the exact energy from Hartree-Fock theory. Along with the component exchange and correlation functionals, three parameters define the hybrid functional, specifying how much of the exact exchange is mixed in. The adjustable parameters in hybrid functionals are generally fitted to a 'training set' of molecules. Unfortunately, although the results obtained with these functionals are usually sufficiently accurate for most applications, there is no systematic way of improving them (in contrast to some of the traditional wavefunction-based methods like configuration interaction or coupled cluster theory). Hence in the current DFT approach it is not possible to estimate the error of the calculations without comparing them to other methods or experiments.
For molecular applications, in particular for hybrid functionals, Kohn-Sham DFT methods are usually implemented just like Hartree-Fock itself.
## Thomas-Fermi model
The predecessor to density functional theory was the Thomas-Fermi model, developed by Thomas and Fermi in 1927. They used a statistical model to approximate the distribution of electrons in an atom. The mathematical basis postulated that electrons are distributed uniformly in phase space, with two electrons in every $h^3$ of volume. For each element of coordinate-space volume ${\rm d}^3r$ we can fill out a sphere of momentum space up to the Fermi momentum $p_f$, whose volume is
$\frac{4}{3}\pi p_f^3(\vec{r})$
Equating the number of electrons in coordinate space to that in phase space gives:
$n(\vec{r})=\frac{8\pi}{3h^3}p_f^3(\vec{r})$
Solving for $p_f$ and substituting into the classical kinetic energy formula then leads directly to a kinetic energy represented as a functional of the electron density:
$T_{TF}[n]=C_F\int n^{5/3}(\vec{r})\, {\rm d}^3r$
where $C_F=\frac{3h^2}{10m}\left(\frac{3}{8\pi}\right)^{2/3}$
As such, they were able to calculate the energy of an atom using this kinetic energy functional combined with the classical expressions for the nuclear-electron and electron-electron interactions (which can both also be represented in terms of the electron density).
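As a small numerical aside (not from the original article), the kinetic energy functional above is easy to evaluate for a concrete density. The sketch below uses the hydrogen-atom ground-state density in Hartree atomic units (an assumed test case); the Thomas-Fermi value of roughly 0.29 hartree can be compared with the exact kinetic energy of 0.5 hartree, consistent with the accuracy limitations discussed below.

```python
import numpy as np

# T_TF[n] = C_F * integral of n^(5/3) d^3r, evaluated for the hydrogen ground-state
# density n(r) = exp(-2r)/pi (atomic units, hbar = m = e = 1, hence h = 2*pi).
h, m = 2.0 * np.pi, 1.0
C_F = (3.0 * h**2 / (10.0 * m)) * (3.0 / (8.0 * np.pi)) ** (2.0 / 3.0)

r = np.linspace(1e-6, 30.0, 200_000)          # radial grid
dr = r[1] - r[0]
n = np.exp(-2.0 * r) / np.pi                  # spherically symmetric density
integrand = 4.0 * np.pi * r**2 * n ** (5.0 / 3.0)
T_TF = C_F * np.sum(integrand) * dr           # simple Riemann sum for the radial integral

print(C_F)    # ~2.871
print(T_TF)   # ~0.289 hartree, versus the exact kinetic energy of 0.5 hartree
```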
Although this was an important first step, the Thomas-Fermi equation's accuracy is limited because the resulting kinetic energy functional is only approximate, and because the method does not attempt to represent the exchange energy of an atom as a consequence of the Pauli principle. An exchange energy functional was added by Dirac in 1928.
However, the Thomas-Fermi-Dirac theory remained rather inaccurate for most applications. The largest source of error was in the representation of the kinetic energy, followed by the errors in the exchange energy, and due to the complete neglect of electron correlation.
Teller (1962) showed that Thomas-Fermi theory cannot describe molecular bonding. This can be overcome by improving the kinetic energy functional.
The kinetic energy functional can be improved by adding the Weizsäcker (1935) correction:
$T_W[n]=\frac{1}{8}\frac{\hbar^2}{m}\int\frac{|\nabla n(\vec{r})|^2}{n(\vec{r})}\,{\rm d}^3r$
## Books on DFT
• R. Dreizler, E. Gross, Density Functional Theory (Plenum Press, New York, 1995).
• C. Fiolhais, F. Nogueira, M. Marques (eds.), A Primer in Density Functional Theory (Springer-Verlag, 2003).
• Kohanoff, J., Electronic Structure Calculations for Solids and Molecules: Theory and Computational Methods (Cambridge University Press, 2006).
• W. Koch, M. C. Holthausen, A Chemist's Guide to Density Functional Theory (Wiley-VCH, Weinheim, ed. 2, 2002).
• R. G. Parr, W. Yang, Density-Functional Theory of Atoms and Molecules (Oxford University Press, New York, 1989), ISBN 0-19-504279-4, ISBN 0-19-509276-7 (pbk.).
• N.H. March, Electron Density Theory of Atoms and Molecules (Academic Press, 1992), ISBN 0-12-470525-1.
## Key papers
• L.H. Thomas, The calculation of atomic fields, Proc. Camb. Phil. Soc. 23 (1927) 542-548
• P. Hohenberg and W. Kohn, Phys. Rev. 136 (1964) B864
• W. Kohn and L. J. Sham, Phys. Rev. 140 (1965) A1133
• A. D. Becke, J. Chem. Phys. 98 (1993) 5648
• C. Lee, W. Yang, and R. G. Parr, Phys. Rev. B 37 (1988) 785
• P. J. Stephens, F. J. Devlin, C. F. Chabalowski, and M. J. Frisch, J. Phys. Chem. 98 (1994) 11623
• K. Burke, J. Werschnik, and E. K. U. Gross, Time-dependent density functional theory: Past, present, and future. J. Chem. Phys. 123, 062206 (2005). OAI: arXiv.org:cond-mat/0410362
## External links
• Walter Kohn, Nobel Laureate Freeview video interview with Walter on his work developing density functional theory by the Vega Science Trust.
• Klaus Capelle, A bird's-eye view of density-functional theory
• Walter Kohn, Nobel Lecture
• Density functional theory on arxiv.org
• FreeScience Library -> Density Functional Theory
• The ABC of DFT
• Density Functional Theory -- an introduction
http://math.stackexchange.com/questions/55422/what-is-y-in-mathbb-q-mid-y-cosx-quad-x-in0-2-pi-cap-mathbb-q | # What is $\{y\in\mathbb Q\mid y=\cos(x),\quad x\in[0,2\pi]\cap\mathbb Q\}?$
Given a range of the rational numbers, $x$, between $0$ and $2\pi$, what is the set of rational numbers $y = \cos(x)$?
I was inspired by the stackoverflow question Can $\cos(a)$ ever equal $0$ in floating point? (The irrational number $\frac{\pi}{2}$ does not translate well into a computer representation.)
I looked for rational cosines, and came up with the likes of $$0, \frac{\pi}{3},\frac{\pi}{2}, \pi, \frac{3\pi}{2}$$ Following this rabbit hole, I wondered if there were any rational (Floating Point) numbers (besides $0$) that yielded rational cosines.
One respondent opened a different question, on english.stackexchange.com, What is the upper bound on “several”? which involves the size of the set in question.
The title doesn't seem to ask the same question as the body. – Qiaochu Yuan Aug 3 '11 at 20:31
All floating point numbers are rational, so the floating point cosine of a floating point number might answer your question, if that is what you are asking. But I think you are actually asking if there are rational non-zero exact solutions for $\cos(x) \in \mathbb{Q} \text{ where } x \in \mathbb{Q}$ and if so what they are. I don't see why it has to be restricted to $(0,2\pi]$ unless you are looking for solutions where $\cos(\pi x) \in \mathbb{Q} \text{ where } x \in \mathbb{Q}$. – Henry Aug 3 '11 at 20:49
@rajah9: I still don't think you are asking the question you meant to ask. Don't you want $y = \cos (\pi x)$? – Qiaochu Yuan Aug 3 '11 at 21:10
@Henry, I agree, it does not have to be restricted to $(0, 2\pi]$ but I was looking for solutions on the unit circle. Yes, I am looking for rational, non-zero exact solutions. – rajah9 Aug 4 '11 at 14:09
@Americo Tavares, thank you for editing the question. – rajah9 Aug 4 '11 at 14:09
## 3 Answers
The only cases where $x/\pi$ and $\cos(x)$ are both rational are the obvious ones, where $2\cos(x)$ is an integer. The slick way to show this uses the following facts:
1) when $r$ is rational, $e^{\pm i r \pi}$ are algebraic integers
2) the sum of algebraic integers is an algebraic integer
3) the only algebraic integers that are rational numbers are (ordinary) integers.
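Putting the three facts together (an amplification not in the original answer): suppose $r = x/\pi$ and $\cos(x)$ are both rational. Then
$$2\cos(x) = e^{ir\pi} + e^{-ir\pi}$$
is a sum of two algebraic integers, hence an algebraic integer; being also rational, it must be an ordinary integer. So $2\cos(x)\in\{0,\pm 1,\pm 2\}$, i.e. $\cos(x)\in\{0,\pm\tfrac{1}{2},\pm 1\}$, which is exactly the list of "obvious" cases (this statement is often cited as Niven's theorem).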
I think the question is asking for $x$, not $x/\pi$, to be rational. – Rahul Narain Aug 3 '11 at 20:50
It's not clear, since rajah9 mentioned $0, \pi/3, \pi/2, \pi,3 \pi/2$. If $x$ is algebraic and nonzero, $\cos(x)$ is transcendental by Lindemann's theorem. – Robert Israel Aug 3 '11 at 21:03
Thanks, the reference to Lindemann's theorem and your proof were what I was looking for. – rajah9 Aug 4 '11 at 14:03
Robert Israel has already answered your specific question. For some deeper issues related to your question, see the following web pages:
http://www.uni-math.gwdg.de/jahnel/Preprints/cos.pdf
http://www.mathpages.com/home/kmath460/kmath460.htm
Given the rationals are dense and $\cos$ is continuous, you can say that $\{y=\cos{x}\mid x\in \mathbb{Q} \cap [0,2\pi]\}$ is dense in $[-1,1]$ and countable.
http://math.stackexchange.com/questions/157516/cartan-theorem | # Cartan Theorem.
Cartan Theorem: Let $M$ be a compact Riemannian manifold, and let $\pi_1(M)$ denote the set of all free homotopy classes of loops in $M$. Then each non-trivial class contains a closed geodesic (i.e., a closed curve which is geodesic at each of its points).
My question: Why free classes? Why does the theorem not apply if we exchange free classes for classes with a fixed base point?
Well my immediate guess is that given a basepoint there won't necessarily be a closed geodesic passing through it. – Chris Gerig Jun 12 '12 at 21:04
Dear Eduardo Siva: I think that the compactness of $M$ is a necessary hypothesis in Cartan-Hadamard Theorem. – Giuseppe Jun 12 '12 at 21:08
Chris Gerig: it's a good point. Can you give me an example? – user27456 Jun 12 '12 at 21:18
@ZhenLin: I honestly do not understand the purpose of your comment. – user27456 Jun 13 '12 at 0:36
@EduardoSiva: I believe that Zhen Lin is pointing out that to produce bold text, you used LaTeX instead of using the proper Markdown formatting, i.e. `$\bf{text}$` instead of `**text**`. See this page for info on how to format text on the site using Markdown. – Zev Chonoles♦ Jun 13 '12 at 4:08
## 2 Answers
Consider the Klein bottle $K$ with flat metric. I'm thinking of $K$ as a square where the left and right sides are identified in the "right" way (like on the torus), while the top and bottom are identified in the "wrong" way (like on $\mathbb{R}P^2$). In this picture, geodesics are straight lines that wrap around depending on the identifications on the edges.
Take your basepoint $p$ to be the center of the square. Consider the geodesic $\gamma$ emanating from $p$ with slope 1. So, it starts in the middle of the square moving towards the top right corner. After it gets to the top right corner, due to the identifications we're making, it becomes a straight line emanating from the bottom right corner with slope -1 until it eventually hits $p$ again, i.e., it closes up. However, it is not a closed geodesic because it makes a corner at $p$.
Further, I claim no other geodesic emanating from $p$ is in the same homotopy class as $\gamma$. To see this, work in the universal cover, $\mathbb{R}^2$ (thought of as being tiled by squares with identification arrows as appropriate, with corners on integer lattice points). Geodesics are still straight lines, but now there is a unique straight line from $(\frac{1}{2},\frac{1}{2})$ to $(\frac{3}{2},\frac{3}{2})$, given by lifting $\gamma$.
This means that Cartan's theorem fails on based loops, at least in this particular case.
To find the closed geodesic in the same class of a given closed loop, you have to "tighten" the loop: you try to shrink it to make it as short as possible. Either it will shrink to a point (if its free homotopy class is trivial) or eventually it will "catch" around some hole in your manifold, in which case it will be a closed geodesic. (If it wasn't a geodesic, then locally at some point there would be a way to tighten it a bit more to make it shorter.) In this tightening process, you can't expect the loop to remain passing through any particular point.
Added: A typical closed manifold may not have all that many closed geodesics on it, e.g. only countably many. (By the Baire category theorem, most points of the manifold will then not lie on a closed geodesic.) This is the case with hyperbolic surfaces, for example, where one has uniqueness of the closed geodesic representative of a free homotopy class. (And I imagine that this is the typical behaviour.)
"In this tightening process, you can't expect the loop to remain passing through any particular point." ... the question is why... – Chris Gerig Jun 13 '12 at 2:39
@ChrisGerig: Dear Chris, E.g. because there may not be enough closed geodesics. (See my edit.) Regards, – Matt E Jun 13 '12 at 3:53
http://noncommutativeanalysis.wordpress.com/2012/12/09/the-remarkable-hilbert-space-h2-part-ii-multivariable-operator-theory-and-model-theory/ | # Noncommutative Analysis
### The remarkable Hilbert space H^2 (Part II – multivariable operator theory and model theory)
#### by Orr Shalit
This post is the second post in the series of posts on the d–shift space, a.k.a. the Drury–Arveson space, a.k.a. $H^2_d$ (see this previous post about the space $H^2$).
#### 1. Model theory
One of the ways in which one can understand general linear operators on a finite dimensional space is by the Jordan normal form of a matrix. Recall that every linear operator $T$ on a finite dimensional (complex) space can be decomposed as the direct sum $T = J_1 \oplus \ldots \oplus J_k$, where $J_i$ are Jordan blocks, that is $T$ is made up from simple, understandable building blocks. This is a ubiquitous strategy in mathematics: to decompose a general object into tractable, well–understood pieces (for example, every finitely generated abelian group is the direct sum of cyclic groups, etc., etc.).
When it comes to operators of general type on infinite dimensional spaces, no such decomposition is known to mankind (if it was, then mankind would probably have an answer to the invariant subspace problem). A completely different strategy that is used in the infinite dimensional setting is the following: instead of trying to decompose an operator into smaller and better understood pieces, what we do is exhibit the operator as a piece of a bigger and better understood operator. The various ways in which this strategy has been implemented go under the name model theory.
How can something complicated be a piece of something simple? How can this help us understand the complicated thing? A good example (from a different field) to have in mind which explains the philosophy behind this scheme and answers both of these questions, is Whitney’s theorem: every smooth manifold can be embedded in Euclidean space.
Here is one way in which this works. Let $S$ denote the operator of multiplication by the coordinate function on $H^2$:
$Sf (z) = z f(z) .$
$S$ is called the shift. Let $K$ be a fixed infinite dimensional and separable Hilbert space (say $\ell^2$). Consider the Hilbert space $H^2 \otimes K$. This space can be identified with the direct sum of $H^2$ with itself $\dim K$ times. Now consider $S \otimes I_K$ defined by $(S \otimes I_K) (f \otimes k) = (Sf) \otimes k$. This can be identified with the direct sum $S \oplus S \oplus \cdots$ of $S$ with itself $\dim K$ times. Then we have the following theorem.
Theorem 1: Let $T \in B(H)$ ($H$ separable) with $\|T\|<1$. Then $H$ can be identified with a subspace of $H^2 \otimes K$ which is invariant under $(S \otimes I_K)^*$ such that $T^* = (S \otimes I_K)^* \big|_H$.
(Of course, if one prefers, one may replace $T$ with $T^*$ and then one gets $T = (S \otimes I_K)^*\big|_H$). Stated in an almost equivalent way, the assertion is that
$T^n = P_H (S \otimes I_K)^n \big|_H$
for all $n=1,2,\ldots$. We say that the shift is a universal model for contractions.
Here is a consequence of the fact that shift is a universal model:
Theorem 2 (von Neumann’s inequality): Let $T \in B(H)$, $\|T\|<1$. Then for any polynomial
$\|p(T)\| \leq \sup_{|z|=1} |p(z)| .$
Proof: This follows at once, once we know that $\|p(S)\| = \sup_{|z|=1} |p(z)|$ for any polynomial $p$. But $\|p(S)\| = \sup_{|z|=1}|p(z)|$ is a consequence of $H^2_1$ being a subspace of $L^2(\mathbb{T})$.
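As a purely numerical illustration of von Neumann's inequality (an aside I am adding, not part of the original post), one can generate a random matrix scaled to have norm below 1 and compare $\|p(T)\|$ with the supremum of $|p|$ on the unit circle; the polynomial and dimensions below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

A = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
T = 0.9 * A / np.linalg.norm(A, 2)           # a contraction: ||T|| = 0.9 < 1

coeffs = [2.0, -1.0, 0.5, 3.0]               # p(z) = 2 - z + 0.5 z^2 + 3 z^3

def p_of_matrix(M):
    """Evaluate p(M) = sum_k coeffs[k] * M^k."""
    out = np.zeros_like(M)
    P = np.eye(M.shape[0], dtype=complex)
    for c in coeffs:
        out = out + c * P
        P = P @ M
    return out

z = np.exp(1j * np.linspace(0, 2 * np.pi, 4001))
sup_circle = np.max(np.abs(sum(c * z**k for k, c in enumerate(coeffs))))

print(np.linalg.norm(p_of_matrix(T), 2), "<=", sup_circle)   # the inequality should hold
```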
#### 2. The d-shift as a universal model for commuting row contractions
We now come to multivariable operator theory. Multivariable operator theory is concerned with the analysis of tuples of operators, rather than single operators. That is, one has a d–tuple $T = (T_1, \ldots, T_d)$ and one tries to understand their simultaneous action on the space and how they relate to each other.
To see why this is more complicated, consider a pair of operators $T = (T_1, T_2)$ on a finite dimensional space. For each separate operator we can find a basis with respect to which it is in Jordan form, and this gives a relatively simple description of what $T_i$ does to the space. In particular, given a polynomial $p$ it is not hard to compute $p(T_i)$ with respect to the Jordanizing basis.
However, in general (even if $T_1$ and $T_2$ commute) one cannot choose a Jordanizing basis that works for both operators at once. In particular, it is difficult to compute $p(T_1, T_2)$ for a polynomial $p$ in two variables.
There is a model theory for d–tuples of commuting operators (there is also a model theory for non-commuting tuples which we shall not discuss. See, for example, the work of Gelu Popescu). As above, we will need to impose some norm condition. For a d–tuple $T = (T_1, \ldots, T_d)$ let us denote by $\|T\|$ the norm of the operator $[T_1, \ldots, T_d] : H \oplus \cdots \oplus H \rightarrow H$ given by
$[T_1, \ldots, T_d] (h_1, \ldots, h_d) = \sum_{i=1}^d T_i h_i .$
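Concretely, for matrices this row norm is just the operator norm of the block row $[T_1\ \cdots\ T_d]$, which (as a hypothetical numerical aside, with made-up matrices) can be computed as follows.

```python
import numpy as np

rng = np.random.default_rng(2)
T1, T2 = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))

row_norm = np.linalg.norm(np.hstack([T1, T2]), 2)   # norm of [T1, T2] : H (+) H -> H
# Equivalently, row_norm**2 equals the norm of T1 @ T1.T + T2 @ T2.T (up to rounding).
print(row_norm)
```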
Let $K$ be as in Theorem 1. Let $S_i$ denote the operator of multiplication by the $i$th coordinate function in $H^2_d$:
$S_i f (z) = z_i f(z) .$
We denote $S = (S_1, \ldots, S_d)$ and refer to this tuple as the d–shift.
Theorem 3: Let $T = (T_1, \ldots, T_d)$ be a d–tuple of commuting operators on $H$ such that $\|T\|<1$. Then $H$ can be identified with a subspace of $H^2_d \otimes K$ such that
$T_i^* = (S_i \otimes I_K)^* \big|_H$.
Again, as a consequence, one has
$p(T) = P_H(p(S) \otimes I_K)\big|_H$
for every polynomial $p(z_1, \ldots, z_d)$.
Thus, we say that the d–shift is a universal model for commuting row contractions.
As a corollary we obtain the following generalization of von Neumann’s inequality.
Theorem 4 (Drury’s inequality): Let $T = (T_1, \ldots, T_d)$ be a commuting tuple of operators such that $\|T\| \leq 1$. Then for any polynomial $p(z_1, \ldots, z_d)$
$\|p(T)\| \leq \|p(S)\| .$
Note the difference from von Neumann’s inequality: we do not claim that $\|p(T)\| \leq \sup_{|z_1| = \ldots = |z_d| = 1}|p(z)|$, and indeed, this inequality fails for $T = S$. But the fact that we can identify a (simple) tuple of operators on which the maximum norm is obtained for any polynomial is quite remarkable.
#### 3. Some words of warning
Theorem 1 and Theorem 3 (the “dilation theorems”) are nontrivial and important theorems, but one should be warned that there is a limit to what they can tell us. This is because the invariant subspace lattice of $S \otimes I_K$ is very complicated. In fact, it is at least as complicated as the invariant subspace lattice of any operator!
Theorem 1 tells us that if we can completely understand the invariant subspace lattice of $S \otimes I_K$ then we can solve the invariant subspace problem; indeed, by Theorem 1 the invariant subspace problem is equivalent to the question whether or not $(S \otimes I_K)^*$ has a minimal infinite dimensional invariant subspace (or equivalently, whether $S \otimes I_K$ has a maximal infinite co–dimensional invariant subspace). No surprise, this problem turns out to be just as hard.
However, some invariant subspace theorems were obtained using model theory. For example, there are operators for which $S \otimes I_G$ is a model, where $G$ is a finite dimensional Hilbert space. This case is tractable, and one can show that the operator $S \otimes I_G$ has no maximal infinite co–dimensional invariant subspaces.
http://mathoverflow.net/questions/31109/is-there-an-explicit-construction-of-a-free-coalgebra | ## Is there an explicit construction of a free coalgebra?
I am interested in the differences between algebras and coalgebras. Naively, it does not seem as though there is much difference: after all, all you have done is to reverse the arrows in the definitions. There are some simple differences:
The dual of a coalgebra is naturally an algebra but the dual of an algebra need not be naturally a coalgebra.
There is the Artin-Wedderburn classification of semisimple algebras. I am not aware of a classification even of simple, semisimple coalgebras.
More surprising is: a finitely generated comodule is finite dimensional.
This question is about a more striking difference. The free algebra on a vector space $V$ is $T(V)$, the tensor algebra on $V$. I have been told that there is no explicit construction of the free coalgebra on a vector space. However, these discussions took place following the consumption of alcohol. What is known about free coalgebras?
## 4 Answers
There cannot be a "free coalgebra" functor, at least in what I think is the standard usage. Namely, suppose that "orange" is a type of algebraic object, for which there is a natural "forgetful" functor from "orange" objects to "blue" objects. Then the "free orange" functor from Blue to Orange is the left adjoint, if it exists, to the forgetful map from Orange to Blue.
Suppose that the forgetful map from coalgebras to vector spaces had a left adjoint; then it would itself be a right adjoint, and so would preserve products. Now the product in the category of coalgebras is something huge — think about the coproduct in the category of algebras, which is some sort of free product — and it's clear that the forgetful map does not preserve products.
On the other hand, the coproduct in the category of coalgebras is given by the direct sum of underlying vector spaces, and so the forgetful map does preserve coproducts. This suggests that it may itself be a left adjoint; i.e. it may have a right adjoint from vector spaces to coalgebras, which should be called the "cofree coalgebra" on a vector space.
Let me assume axiom of choice, so that I can present the construction in terms of a basis. Then I believe that the cofree coalgebra on the vector space with basis $L$ (for "letters") is the graded vector space whose basis consists of all words in $L$, with the comultiplication given by $\Delta(w) = \sum_{a,b| ab = w} a \otimes b$, where $a,b,w$ are words in $L$. I.e. the cofree coalgebra has the same underlying vector space as the free algebra, with the dual multiplication. It's clear that for finite-dimensional vector spaces, the cofree coalgebra on a vector space is (canonically isomorphic to) the graded dual of the free algebra on the dual vector space. Anyway, this is clearly a coalgebra, and the map to the vector space is zero on all words that are not singletons and identity on the singletons. I haven't checked the universal property, though.
Edit: The description above of the cofree coalgebra is incorrect. I learned the correct version from Alex Chirvasitu. The description is as follows. Let $V$ be a vector space, and write $\mathcal T(V)$ for the tensor algebra of $V$, i.e. for the free associative algebra generated by $V$. Then the cofree coassociative coalgebra cogenerated by $V$ is constructed as follows. First, construct $\mathcal T(V^\ast)$, and second construct its finite dual $\mathcal T(V^\ast)^\circ$, which is the direct limit of duals to finite-dimensional quotients of $\mathcal T(V^\ast)$. There is a natural inclusion $\mathcal T(V^\ast)^\circ \hookrightarrow \mathcal T(V^\ast)^\ast$, and a natural map $\mathcal T(V^\ast)^\ast \to V^{\ast\ast}$ dual to the inclusion $V^\ast \to \mathcal T(V^\ast)$. Finally, construct $\operatorname{Cofree}(V)$ as the union of all subcoalgebras of $\mathcal T(V^\ast)^\circ$ that map to $V \subseteq V^{\ast\ast}$ under the map $\mathcal T(V^\ast)^\circ \hookrightarrow \mathcal T(V^\ast)^\ast \to V^{\ast\ast}$. Details are in section 6.4 (and specifically 6.4.2) of the book Hopf Algebras by Moss E. Sweedler.
In any case, $\operatorname{Cofree}(V)$ is something like the coalgebra of "finitely supported distributions on $V$" (or, anyway, that's how to think of it in the cocommutative version). For example, when $V = \mathbb k$ is one-dimensional, and $\mathbb k = \bar{\mathbb k}$ is algebraically closed, then $\operatorname{Cofree}(V) = \bigoplus_{\kappa \in \mathbb k} \mathcal T(\mathbb k)$. I should emphasize that now when I write $\mathcal T(\mathbb k)$, in characteristic non-zero I do not mean to give it the Hopf algebra structure. Rather, $\mathcal T(\mathbb k)$ has a basis $\lbrace x^{(n)}\rbrace$, and the comultiplication is $x^{(n)} \mapsto \sum x^{(k)} \otimes x^{(n-k)}$. Identifying $x^{(n)} = x^n/n!$, this is the comultiplication on the "divided power" algebra. It's reasonable to think of the $\kappa$th summand as consisting of (divided power) polynomials times $\exp(\kappa x)$, but maybe better to think of it as the algebra of descendants of $\delta(x - \kappa)$ — but this is just some Fourier duality.
In the non-algebraically-closed case, there are also summands corresponding to other closed points in the affine line. end edit
I should mention that in my mind the largest difference between algebras and coalgebras (by which I mean, and I assume you mean also, "associative unital algebras in Vect" and "coassociative counital coalgebras in Vect", respectively) is one of finiteness. You hinted at the difference in your answer: if $A$ is a (coassociative counital) coalgebra (in Vect), then it is a colimit (sum) of its finite-dimensional subcoalgebras, and moreover if $X$ is any $A$-comodule, then $X$ is a colimit of its finite-dimensional sub-A-comodules. This is absolutely not true for algebras. It's just not the case that every algebra is a limit of its finite-dimensional quotient algebras. A good example is any field of infinite-dimension.
It follows from the finiteness of the corepresentation theory that a coalgebra can be reconstructed from its category of finite-dimensional corepresentations. Let $A$ be a coalgebra, `$\text{f.d.comod}_A$` its category of finite-dimensional right comodules, and `$F : \text{f.d.comod}_A \to \text{f.d.Vect}$` the obvious forgetful map. Then there is a coalgebra `$\operatorname{End}^\vee(F)$`, defined as some natural coequalizer in the same way that the algebra of natural transformations `$F\to F$` is defined as some equalizer, and there is a canonical coalgebra isomorphism `$A \cong \operatorname{End}^\vee(F)$`. (Proof: see André Joyal and Ross Street, An introduction to Tannaka duality and quantum groups, Category theory (Como, 1990), Lecture Notes in Math., vol. 1488, Springer, Berlin, 1991, pp. 413–492. MR1173027 (93f:18015).)
For an algebra, on the other hand, knowing its finite-dimensional representation theory is not nearly enough to determine the algebra. Again, the example is of an infinite-dimensional field (e.g. the field of rational functions). On the other hand, it is true that knowing the full representation theory of an algebra determines the algebra. Namely, if $A$ is an (associative, unital) algebra (in Vect), `$\text{mod}_A$` its category of all right modules, and `$F: \text{mod}_A \to \text{Vect}$` the forgetful map, then there is a canonical isomorphism `$A \cong \operatorname{End}(F)$`. (Proof: $F$ has a left adjoint, $V \mapsto V\otimes A$. But $V \mapsto V\otimes \operatorname{End}(F)$ is also left-adjoint to $F$. The algebra structure comes from the adjunction: the `$\text{mod}_A$` map $A\otimes A \to A$ corresponds to the vector space map $\operatorname{id}: A\to A$.) ((Note that you don't actually need the full representation theory, which probably doesn't exist foundationally, but you do need modules at least as large as $A$.))
All this means is that if you believe that almost everything is finite-dimensional, you should reject algebras as "wrong" and coalgebras as "right", whereas if you like infinite-dimensional objects, algebras are the way to go.
-
great answer!!! – Sean Tilson Jul 10 2010 at 4:22
I'm glad you fixed this, Theo. – Todd Trimble Nov 8 2011 at 9:40
Theo, I've been looking over your answer again, and I don't quite see how the comultiplication is supposed to work in your description of the cofree coalgebra (attributed to Alex Chirvasitu). I've opened a discussion on this at the nForum math.ntnu.no/~stacey/Mathforge/nForum/… and I would be most appreciative if you could join this discussion, if you have time. (A reference in the literature would be great if you don't feel like hammering through the details.) – Todd Trimble Jan 21 2012 at 17:43
Hi Todd, I will think about it, and also ask Alex. He told me the reference where he read the above construction, but I have forgotten it --- it shouldn't be too hard to track down. I'll also post something at nForum, once I figure out how to set up an account. – Theo Johnson-Freyd Jan 22 2012 at 4:52
One more comment: if anyone is interested, I have expanded on Theo's answer at the nLab: ncatlab.org/nlab/show/cofree+coalgebra. This includes detailed consideration of the structure of the cofree coalgebra on a 1-dim vector space. – Todd Trimble Jan 25 2012 at 11:48
This may be the third time here that I am linking to Michiel Hazewinkel's "Witt vectors, part 1". This time it's Section 12, mainly 12.11. The graded cofree coalgebra over a vector space is the tensor coalgebra (i. e. the tensor algebra, but you forget the tensor multiplication and instead take the deconcatenation coproduct). "Graded" means that all morphisms in the universal property are supposed to be graded. If you leave out the "graded", however, things get difficult. I have but briefly skimmed the contents of this paper, but it seems to contain a description.
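To make the deconcatenation coproduct concrete, here is a small Python sketch (the encoding of words as strings and the truncation to short words are my own illustrative choices, not anything taken from Hazewinkel's text):

```python
from itertools import product

# Deconcatenation coproduct on the tensor coalgebra over an alphabet:
# Delta(w) = sum over all splittings w = a.b of a (x) b, with the empty
# word allowed on either side; the counit picks out the empty word.
def deconcatenation(word):
    return [(word[:k], word[k:]) for k in range(len(word) + 1)]

print(deconcatenation("xyz"))
# [('', 'xyz'), ('x', 'yz'), ('xy', 'z'), ('xyz', '')]

# Coassociativity: (Delta (x) id) Delta = (id (x) Delta) Delta, checked here
# on all words of length <= 3 over the alphabet {x, y}.
def delta_then_left(word):
    return sorted((a1, a2, b) for a, b in deconcatenation(word)
                              for a1, a2 in deconcatenation(a))

def delta_then_right(word):
    return sorted((a, b1, b2) for a, b in deconcatenation(word)
                              for b1, b2 in deconcatenation(b))

words = ["".join(p) for n in range(4) for p in product("xy", repeat=n)]
print(all(delta_then_left(w) == delta_then_right(w) for w in words))  # True
```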
I do not know the answer to your question, but let me rephrase the warning that takes place right after Definition 4.17 (p. 21) of certain unrelated lecture notes:
Contrary to general belief, the coalgebra $T(V)$ with the projection $T(V) → V$ is not cofree in the category of coassociative coalgebras! Cofree coalgebras (in the sense of the obvious dual of the definition of free algebras) are surprisingly complicated objects [10, 43, 20]. The coalgebra $T(V)$ is, however, cofree in the subcategory of coaugmented nilpotent coalgebras [38, Section II.3.7].
[10] T.F. Fox. The construction of cofree coalgebras. J. Pure Appl. Algebra, 84(2):191–198, 1993.
[20] M. Hazewinkel. Cofree coalgebras and multivariable recursiveness. J. Pure Appl. Algebra, 183(1-3):61–103, 2003.
[43] J.R. Smith. Cofree coalgebras over operads. Topology Appl., 133(2):105–138, September 2003.
[38] M. Markl, S. Shnider, and J. D. Stasheff. Operads in Algebra, Topology and Physics, volume 96 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, Rhode Island, 2002.
One place to really see the difference is by considering Universal Coalgebra (also known as F-coalgebras), in contrast to Universal Algebra. The structures considered by Universal Coalgebra are typically infinite, whereas those considered by Universal Algebra are finite. This is taken from a computer science angle: Universal Algebra is about data types and Universal Coalgebra is about systems. Initial algebras correspond to least fixed points, whereas final coalgebras correspond to greatest fixed points.
A free coalgebra (in this setting) corresponds to the final coalgebra of some functor (although one would call it a cofree coalgebra). A general introduction to the kinds of coalgebras I'm talking about is Introduction to Coalgebra by Jiri Adamek. This paper provides a way of constructing the final coalgebras for a certain class of functors.
http://mathoverflow.net/questions/61895?sort=newest

## How to solve Ax=b incrementally ?
Hi, everyone.
What I am struggling with is the following problem. I have a linear matrix equation $Ax=b$, where $A$ is a known $n \times n$ large sparse real matrix, and $x$ and $b$ are known $n \times 1$ vectors. Now, one entry of $A$ has changed into $a$, and we denote this new matrix by $A'$. Since $b$ is unchanged, the updated matrix causes the solution $x$ of the original linear equation to change accordingly to $x'$ such that $A'x'=b$. My goal is to find this new $x'$. A naive way is to re-solve $A'x'=b$. But since $A'$ is only slightly different from $A$, is there any incremental way to solve $A'x'=b$ quickly by taking advantage of the original equation $Ax=b$? Thank you very much for any kind suggestion!
Yes, if you know what entry that will change beforehand, just solve symbolically, and substitute a resp. b into the symbolic solution. – Per Alexandersson Apr 16 2011 at 8:04
## 3 Answers
Another method to update the solution is using the Sherman-Morrison formula: http://en.wikipedia.org/wiki/Sherman%E2%80%93Morrison_formula in your case, $u$ and $v$ are canonical basis vectors.
So basically you have to solve two linear systems with $A$ and then you can update the solution for all possible values of $A$ with little work. Solve $2n$ linear systems, and you can update as many times as you want every entry of $A$ (only one at a time though).
Not sure that this is really your best option though --- all depends on how many "modified systems" you have to solve with the same starting matrix $A$. We need more information from you to decide this.
[By the way, as already pointed out, you'd better use a linear system solver which is suitable for sparse matrices: sparse LU or iterative methods.]
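For concreteness, here is a minimal dense-matrix sketch of the Sherman-Morrison update in NumPy (the matrix, the changed entry and the perturbation are made-up test data; for a genuinely large sparse $A$ one would factor $A$ once, e.g. with a sparse LU, and reuse that factorization for both solves):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)   # small, well-conditioned test matrix
b = rng.standard_normal(n)

x = np.linalg.solve(A, b)            # original solution of A x = b

i, j, delta = 2, 4, 0.7              # change: A'[i, j] = A[i, j] + delta
u = np.zeros(n); u[i] = delta        # so A' = A + u v^T with canonical-basis-like u, v
v = np.zeros(n); v[j] = 1.0

w = np.linalg.solve(A, u)            # the second solve, with the *same* matrix A
x_new = x - (v @ x) / (1.0 + v @ w) * w   # Sherman-Morrison update of the solution

A_new = A.copy(); A_new[i, j] += delta
print(np.allclose(x_new, np.linalg.solve(A_new, b)))   # True: matches a full re-solve
```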
As Fumiyo Eda already mentioned, you can use an iterative method such as GMRES to resolve the system after the change to $A$.
If you want to use direct LU factorization rather than an iterative method, look into "rank one update" techniques for the LU factorization.
An iterative scheme may do the trick. I would suggest looking into algorithms such as GMRES. Since you have a large, sparse matrix, there is a good chance you already have your matrix in a format that can be accepted by an iterative solver.
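A rough sketch of that idea with SciPy's sparse GMRES (the matrix and the perturbed entry below are invented purely for illustration); the point is simply to warm-start the second solve at the old solution:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = spla.gmres(A, b)        # original solution; info == 0 means it converged

A2 = A.tolil()                    # change one entry of A
A2[100, 101] += 0.3
A2 = A2.tocsr()

# Re-solve, starting the iteration from the previous solution x; for a small
# change in A this typically converges in far fewer iterations than x0 = 0.
x2, info = spla.gmres(A2, b, x0=x)
```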
http://math.stackexchange.com/questions/91036/if-p-ne-np-is-every-language-not-contained-in-np-np-hard?answertab=active

# If $P \ne NP$, is every language not contained in $NP$ $NP$-hard?
The other day, a student asked me whether, if $P \ne NP$, any language outside of $NP$ is known to be $NP$-hard. I wasn't sure if
• This is definitely known to be true,
• This is definitely known to be false, or
• This depends on another set of complexity assumptions that do not immediately follow from $P \ne NP$ (that is, even if we knew $P \ne NP$, this would still be an open question)
None of the texts on complexity I looked into seemed to answer this question (though it is quite possible that I simply missed it). Does anyone know which of the above three is true, or know a good reference where I could look up the answer?
(Note: This earlier question is related, but I'm considering solely questions outside of $NP$ (so the existing answer doesn't really help) and am not restricting this to just the decidable languages)
Thanks!
I think that it's the case that this is definitely known to be true, but I'm not certain enough to make this an answer. – Keith Irwin Dec 13 '11 at 6:24
The answer to this question should not depend on whether P=NP. – Raphael Dec 13 '11 at 6:58
Are you asking if every language outside NP is NP-hard? Or are you asking if there exists some language outside NP that is NP-hard? I cannot tell from the wording... – Srivatsan Dec 13 '11 at 7:26
What happens, say, if you take the language of all $x$ such that $|x| \in K$, for some undecidable $K$? – Yuval Filmus Dec 13 '11 at 7:58
@Srivatsan: The question is whether every language outside NP is NP-hard. – templatetypedef Dec 13 '11 at 8:18
## 4 Answers
It seems that the probabilistic method works.
Consider $\{0,1\} \times \{0,1\} \times \dots$ as a probability space, corresponding to throwing a coin countably many times (product measure).
Any element $a \in \{0,1\} \times \{0,1\} \times \dots$ encodes a subset $L \subseteq \{0,1\}^{\ast}$: the consecutive bits decide if $\epsilon \in L, 0 \in L, 1 \in L, 00 \in L, \dots$. We will now select a random $L$, equivalently a random $a$, and check its properties.
With probability 1, $L \notin \mathsf{NP}$ (since $\mathsf{NP}$ is countable), even more: with probability 1, $L$ is undecidable.
Now, suppose you have a reduction $f \colon A^{\ast} \to \{0,1\}^{\ast}$ that attempts to reduce SAT to $L$. Since $\mathsf{P} \neq \mathsf{NP}$ by assumption, the image of $f$ must be infinite; otherwise you could convert the reduction to a decision procedure for $SAT$. However, for any $x$, it must hold $x \in SAT \iff f(x)\in L$, and that happens with probability 1/2. Since there are infinitely many values for $f(x)$, the probability that $f$ is a valid reduction from SAT to $L$ is 0.
Since there are countably many reductions, and countable intersection of sets of measure 1 has measure 1, the overall probability of $L$ satisfying all conditions is 1.
So a "generic" random language is neither decidable nor $\mathsf{NP}$-hard, unless $\mathsf{P} = \mathsf{NP}$.
It is possible to convert this proof to diagonalization: on $2i$-th stage, you diagonalize against $i$-th $\mathsf{NP}$ problem; on $2i+1$-th stage, you diagonalize against $i$-th polynomial time reduction with infinite image. This gives a constructive example; with more careful bookkeeping you can get a decidable example.
So it looks like the answer to this question is as follows: If $P \ne NP$, then there exists a language not contained in NP that is not NP-hard. This follows from Mahaney's theorem, which says that $P = NP$ iff there is some sparse language L such that SAT is polynomial-time reducible to L. In particular, this says that if $P \ne NP$, then SAT is not polynomial-time reducible to any sparse language. So consider the unary halting language $UNARYHALT$ consisting of unary encodings of TM/string pairs where the given TM halts on the particular input. This language is sparse, since for any length there is at most one string in the language of that length. Moreover, this language is undecidable by a reduction from the halting problem, so it cannot be in NP. Therefore, if $P \ne NP$, by Mahaney's theorem this language is not NP-hard, because there is no polynomial-time reduction from SAT to it.
Take any undecidable language L and let L' be the language of pairs (n,x) with x in L and n=2^|x| in unary. Then L' is in P/poly but not in NP. If L' is NP-hard then NP$\subset$P/poly and the polynomial hierarchy collapses by the Karp-Lipton theorem.
This is not a complete answer to the question, however, because we may have NP $\subseteq$ P/poly but P$\neq$NP. – Colin McQuillan Dec 13 '11 at 9:27
The answer to this question depends on the complexity assumptions.
• If $\mathrm P = \mathrm {NP}$, then every nontrivial language $L$ is $\mathrm{NP}$-hard. [A language is said to be nontrivial if it contains at least one yes instance and at least one no instance.] To reduce a given $\mathrm {NP}$ problem $A$ to $L$, we simply decide $A$ using the polynomial-time algorithm that is guaranteed to exist under our assumption, and then map yes instances of $A$ to a fixed yes instance of $L$ and no instances to a fixed no instance.
• Assuming $\mathrm{NP} \neq \text{co-}\mathrm{NP}$, no problem in $\text{co-}\mathrm{NP}$ would be $\mathrm{NP}$-hard.
So even if P != NP, we cannot say for certain whether every language outside NP is NP-hard, since it depends on whether NP = co-NP? It seems like there might be some other known result that would show this result either way. – templatetypedef Dec 13 '11 at 8:19
@templatetypedef Yes, this does not answer your question completely because I need the stronger hypothesis that NP $\neq$ coNP (although most people believe this stronger assumption anyway). – Srivatsan Dec 13 '11 at 8:30
Any co-NP-complete problem, for example the set of Boolean formula tautologies, is NP-hard. The definition of NP-hardness allows arbitrary oracle ("Turing" or "Cook") reductions, not just many-one ("Karp") reductions. – Colin McQuillan Dec 13 '11 at 9:07
@Colin, You raise a nice point. Indeed under the Turing (Cook) reductions, coNP-complete problems are NP-complete as well. However, without explicit qualification, I always take NP-complete in the sense of Karp reductions (as my post shows :-)). However I will be happy to see answers from the other point of view. :) – Srivatsan Dec 13 '11 at 9:16
"NP-complete" is always Karp, "NP-hard" is always Turing. – Colin McQuillan Dec 13 '11 at 9:28
http://mathhelpforum.com/advanced-algebra/8279-vector-spaces.html

# Thread:
1. ## Vector Spaces
Let G be a vector space. Suppose that the span{x_1, ..., x_n} = G and suppose Y = {y_1, ..., y_m} is a set of lin. independent vectors in G. What are you able to say about the relationship between n and m? Why?
2. Originally Posted by Bloden
Let G be a vector space. Suppose that the span{x_1, ..., x_n} = G and suppose Y = {y_1, ..., y_m} is a set of lin. independent vectors in G. What are you able to say about the relationship between n and m? Why?
That,
$n\geq m$
Because any linearly independent set can be enlarged into a basis.
That makes perfect sense, although it seemed a little too simple for such a big homework problem. So, I asked my Prof. exactly what he was looking for, and he replied that he's looking for an explanation of why m > n, m < n, m = n and when any of them might happen, or when any of them might not happen. So I think he wants cases of why some of the others cannot happen. You have that n >= m; is that always the case, and why will m > n never work?
4. Originally Posted by Bloden
Let G be a vector space. Suppose that the span{x_1, ..., x_n} = G and suppose Y = {y_1, ..., y_m} is a set of lin. independent vectors in G. What are you able to say about the relationship between n and m? Why?
The vector space has a dimension d, which is the number of vectors in a basis B.
For such a basis B, the d vectors span G and are also linearly independent, by definition.
Since X = {x_1...n} spans G, either n = d (then X is linearly independent) or n > d (then X is linearly dependent).
Since Y = {y_1...m} is linearly independent, m is at most d. If m = d, then Y is also a basis. If m < d, then Y doesn't span G.
http://www.physicsforums.com/showthread.php?s=e679086d81eae026248f3598161a6abe&p=4252032

Physics Forums
## What Constitutes something being "coordinate free"
People say that exterior calculus, i.e. differentiating and integrating differential forms, can be done without a metric, i.e. without specifying a certain coordinate system. I don't really get what qualifies something to be 'coordinate free'. I mean, in the differential forms I work with, one still references components, i.e. x1, x2, etc., yet I never specified a metric, so is this classified as 'coordinate free'? Also, how does one do differential geometry without a coordinate system? In my mind, once you don't specify a coordinate system or a metric, things become vague and it sort of turns into differential topology. Is there a 'middle ground' I am missing? Keep in mind I have never taken a course in differential geometry. Also, in differential geometry, it has always been pertinent to give a specific parametrization in order to find tangent vectors, metrics, etc.
If you use $x_i$, that (implicitly) means that you have chosen a coordinate system, so it is not coordinate free.

To give a very simple example, consider a linear transformation (e.g. rotation) T on a vector x. You could write this in coordinate-free form as x' = T x. This does not depend on which basis you choose for the space that x lives in - it just means: apply this transformation. When you calculate the result on a vector, you usually pick a coordinate system by choosing a set of basis vectors (x, y, z-axis) and write the action of T as a matrix M. You then calculate $$\mathbf{x'}_i = \sum_{j = 1}^n M_{ij} x_j$$ This is not coordinate-free, because both the components of x and x' as well as the entries of M depend on the coordinate system.

The advantage of the coordinate-free form is that it looks the same in any choice of basis. If you and I both chose a different coordinate system and wrote down M, we would get two different bunches of numbers and it would not be immediately clear that we're talking about the same "physical" operation.
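A small NumPy illustration of this basis dependence (the rotation and the second basis are chosen arbitrarily): the same linear map gets two different matrices, while the underlying action on vectors agrees.

```python
import numpy as np

# The map T = rotation by 90 degrees, written in the standard basis e1, e2.
M_std = np.array([[0.0, -1.0],
                  [1.0,  0.0]])

# Another basis b1 = (1, 1), b2 = (1, -1), collected as the columns of B.
B = np.array([[1.0,  1.0],
              [1.0, -1.0]])

# Matrix of the *same* map T in the new basis: B^{-1} M B.
M_new = np.linalg.inv(B) @ M_std @ B
print(M_new)                              # different numbers, same operation

# Consistency check: act on one vector both ways.
v_std = np.array([2.0, 3.0])              # coordinates in the standard basis
v_new = np.linalg.inv(B) @ v_std          # coordinates of the same vector in B
print(M_std @ v_std)                      # T(v) in standard coordinates
print(B @ (M_new @ v_new))                # the same answer via the new basis
```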
Another example: Let X be the set of all quadratic polynomials from R to R and define T by T(f) = df/dx + f. That is a "coordinate free" definition. If I had chosen $\{1, x, x^2\}$ as basis (essentially choosing a "coordinate system" by choosing a basis), say that T(1) = 1, T(x) = x + 1, T(x^2) = x^2 + 2x, then say T is "extended by linearity", T(af + bg) = aT(f) + bT(g), that is not "coordinate free" because I have used a coordinate system (a basis) to define it. Of course, those are exactly the same definition.
It would be interesting to sort out the distinction between "having a coordinate system" and "having a metric". The two definitions are not identical, but what common situations allow us to proceed from having one to having the other?
Aren't they two completely different things? Sure, for a lot of "common" metric spaces the metric is defined in terms of coordinates. But for $R^n$, for example, that's mostly because people usually learn Pythagoras before inner products, so $$\sum_i (y_i - x_i)^2$$ is a little more intuitive than $$\langle \vec y - \vec x, \vec y - \vec x\rangle$$
Quote by CompuChip Aren't they two completely different things?
Of course they are, that's why sorting out their relation is complicated. The idea of a metric on a set of things is standardized, but I'm not sure whether there is a standard definition for a set of things to "have a coordinate system".
The pythagorean idea won't necessarily work for a coordinate system where the same thing can have two different coordinates (as is the case in the polar coordinate system).
Okay, so take your standard differential form written with coordinates, say for example a standard 1-form $\alpha = \sum^{n}_{i=1}f_{i}du_{i}$. You are still referencing its local coordinates, but you don't necessarily need to know what the metric is, i.e. Euclidean space vs. it being embedded in some other manifold, so the coordinates don't necessarily need any intrinsic value. Then if you know the $u_{i}$ coordinates in terms of Euclidean or other coordinates, you pull back/push forward the form. Does this count as 'coordinate free', since you don't specify a basis and the coordinates in the form aren't in terms of anything?
Quote by Stephen Tashi It would be interesting to sort out the distinction between "having a coordinate system" and "having a metric". The two definitions are not identical, but what common situations allow us to proceed from having one to having the other?
Once we have a coordinate system, so that every point, x, can be identified with $(x_1, x_2, ..., x_n)$, there is a "natural" metric, $d(x,y)= \sqrt{(x_1-y_1)^2+ (x_2- y_2)^2+ ...+ (x_n- y_n)^2}$.
Given a metric space, even finite dimensional $R^n$, we would need a choice of "origin", and n-1 "directions" for the coordinate axes (after choosing n-1 coordinate directions, the last is fixed) in order to have a coordinate system. So a "coordinate system" is much more restrictive, and stronger, than a "metric space".
Quote by HallsofIvy Once we have a coordinate system, so that every point, x, can be identified with $(x_1, x_2, ..., x_n)$, there is a "natural" metric, $d(x,y)= \sqrt{(x_1-y_1)^2+ (x_2- y_2)^2+ ...+ (x_n- y_n)^2}$.
But does the definition of a "coordinate system" ( if there is such a standard definition) include the idea that each element of "the space" can be identified with a unique finite vector of coordinates? And must the vector consist of a vector of real numbers?
okay, so now can someone give an example of using a differential form without a metric?
http://alanrendall.wordpress.com/2011/07/01/when-is-a-dynamical-system-of-mass-action-type/

# Hydrobates
A mathematician thinks aloud
## When is a dynamical system of mass action type?
At the moment I am at the European Conference on Mathematical and Theoretical Biology in Krakow. This is a joint conference with the Society for Mathematical Biology and it is very large, with more than 900 participants. This leads to a huge number of parallel sessions and the need to choose very carefully in order to get the most profit from the conference. On one day, for instance, there were two cases with two sessions on immunology occurring simultaneously.
On Tuesday I went to a session on biochemical reaction networks. This included a talk by Gheorghe Craciun with a large expository component which I found enlightening. He raised the question of when a system of ODE with polynomial coefficients can be interpreted as coming from a system of chemical reactions with mass action kinetics. He mentioned a theorem about this and after asking him for details I was able to find a corresponding paper by Hars and Toth. This is in the Colloquia Mathematica Societatis Janos Bolyai, which is a priori not easily accessible. The paper is, however, available as a PDF file on the web page of Janos Toth. A chemical reaction network gives rise to a system of equations of the form $\dot x_i=p_i-x_iq_i$ where the $p_i$ and $q_i$ are polynomials with positive coefficients. They represent the contributions from reactions where the species with concentration $x_i$ is on the right and left side respectively. The result of Hars and Toth is that any system of this algebraic form can be obtained from a reaction network. It was pointed out by Craciun in his talk that this means that arbitrarily complicated dynamics can be incorporated into systems coming from reaction networks. If we have a system of the form $\dot x_i=f_i-g_i$ we can replace it by $\dot x_i=(x_1\ldots x_n)(f_i-g_i)$. This changes the system but does not change the orbits of solutions. If, for instance, we start with the Lorenz system with unknowns $x$, $y$ and $z$ we can simply translate the coordinates so as to move the interesting dynamics into the region where all coordinates are positive and then multiply the result by $xyz$. This preserves the strange attractor structure. This result may be compared with the Perelson-Wallwork theorem discussed in a previous post.
The construction of the reaction network reproducing the given equations is not very complicated. The main problem is keeping track of the notation. Suppose we start with a system of $n$ equations in $k$ variables $x_i$ which is polynomial and satisfies the necessary condition already mentioned. The reaction network can be constructed in the following way. (It is not at all unique.) Introduce one species $X_i$ for each $x_i$. The right hand side of each equation is a sum of terms of the form $Ax_1^{m_1}\ldots x_k^{m_k}$ and one reaction is introduced for each of these terms. To explain what it is, suppose without loss of generality that it belongs to the first equation. If $A>0$ then the reaction transforms the complex $m_1X_1+m_2X_2+\ldots m_kX_k$ to the complex $(m_1+1)X_1+m_2X_2+\ldots m_kX_k$ with rate constant $A$. The only species where there is a net production is $X_1$ and so this reaction only contributes to the first equation. Moreover it does so with the desired term. On the other hand if $A<0$ the reaction transforms the complex $m_1X_1+m_2X_2+\ldots m_kX_k$ to the complex $(m_1-1)X_1+m_2X_2+\ldots m_kX_k$ with rate constant $-A$. The assumption on the system assures that $m_1-1\ge 0$.
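As a small illustration of this bookkeeping, here is a hypothetical Python sketch; the representation of each right-hand side as a list of (coefficient, exponent-tuple) terms is my own choice and not part of the papers cited above.

```python
# Build a mass-action network reproducing a polynomial ODE system.
# rhs_terms[i] lists the terms (A, (m_1, ..., m_k)) of the equation for x_{i+1},
# each term meaning A * x_1^{m_1} * ... * x_k^{m_k}.
def reactions_from_odes(rhs_terms):
    reactions = []   # entries: (reactant complex, product complex, rate constant)
    for i, terms in enumerate(rhs_terms):
        for A, m in terms:
            reactant = list(m)
            product = list(m)
            product[i] += 1 if A > 0 else -1   # net change of species X_{i+1} only
            if product[i] < 0:
                raise ValueError("negative term lacks the factor x_i required by p_i - x_i q_i")
            reactions.append((tuple(reactant), tuple(product), abs(A)))
    return reactions

# Example: x1' = 2 x2 - x1 x2,   x2' = x1 - 3 x1 x2
rhs = [
    [(2.0, (0, 1)), (-1.0, (1, 1))],
    [(1.0, (1, 0)), (-3.0, (1, 1))],
]
for reaction in reactions_from_odes(rhs):
    print(reaction)
```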
This entry was posted on July 1, 2011 at 6:10 am and is filed under dynamical systems, mathematical biology.
### 2 Responses to “When is a dynamical system of mass action type?”
1. János Tóth Says:
July 26, 2011 at 6:05 am
I liked Gheorghe's idea first, then tried to do some calculations on the Lorenz model, and I think with this premultiplication we often get an equation the solutions of which blow up. Could you find appropriate constants to produce chaotic behaviour in the first orthant? E.g.
NDSolve[{
x'[t] == x[t] y[t] z[t] (-3 (x[t] - y[t])),
y'[t] == x[t] y[t] z[t] (-x[t] z[t] + 26.5 x[t] - y[t]),
z'[t] == x[t] y[t] z[t] (x[t] y[t] - z[t]),
x[0] == 1, z[0] == 2, y[0] == 5}, {x, y, z}, {t, 0, 200},
MaxSteps -> \[Infinity]];
does not work.
Thanks for the post, János
• hydrobates Says:
July 26, 2011 at 11:25 am
Thanks for the comment. I see now that things are not as easy as I thought. It would be nice if this idea for treating the Lorenz system could be made watertight.
Alan
http://simple.wikipedia.org/wiki/Surface_integral

# Surface integral
In mathematics, a surface integral is a definite integral taken over a surface (which may be a curved set in space). Just as a line integral handles one dimension or one variable, a surface integral can be thought of as a double integral over two dimensions. Given a surface, one may integrate over its scalar fields (that is, functions which return numbers as values), and vector fields (that is, functions which return vectors as values).
Surface integrals have applications in physics, particularly with the classical theory of electromagnetism.
The definition of surface integral relies on splitting the surface into small surface elements.
An illustration of a single surface element. These elements are made infinitesimally small, by the limiting process, so as to approximate the surface.
## Surface integrals of scalar fields
Consider a surface S on which a scalar field f is defined. If one thinks of S as made of some material, and for each x in S the number f(x) is the density of material at x, then the surface integral of f over S is the mass per unit thickness of S. (This is only true if the surface is an infinitesimally thin shell.) One approach to calculating the surface integral is then to split the surface into many very small pieces, assume that on each piece the density is approximately constant, find the mass per unit thickness of each piece by multiplying the density of the piece by its area, and then sum up the resulting numbers to find the total mass per unit thickness of S.
To find an explicit formula for the surface integral, mathematicians parameterize S by considering on S a system of curvilinear coordinates, like the latitude and longitude on a sphere. Let such a parameterization be x(s, t), where (s, t) varies in some region T in the plane. Then, the surface integral is given by
$\int_{S} f \,dS = \iint_{T} f(\mathbf{x}(s, t)) \left|{\partial \mathbf{x} \over \partial s}\times {\partial \mathbf{x} \over \partial t}\right| ds\, dt$
where the expression between bars on the right-hand side is the magnitude of the cross product of the partial derivatives of x(s, t).
For example, to find the surface area of some general functional shape, say $z=f\,(x,y)$, we have
$A = \int_S \,dS = \iint_T \left\|{\partial \mathbf{r} \over \partial x}\times {\partial \mathbf{r} \over \partial y}\right\| dx\, dy$
where $\mathbf{r}=(x, y, z)=(x, y, f(x,y))$. So that ${\partial \mathbf{r} \over \partial x}=(1, 0, f_x(x,y))$, and ${\partial \mathbf{r} \over \partial y}=(0, 1, f_y(x,y))$. So,
$\begin{align} A &{} = \iint_T \left\|\left(1, 0, {\partial f \over \partial x}\right)\times \left(0, 1, {\partial f \over \partial y}\right)\right\| dx\, dy \\ &{} = \iint_T \left\|\left(-{\partial f \over \partial x}, -{\partial f \over \partial y}, 1\right)\right\| dx\, dy \\ &{} = \iint_T \sqrt{\left({\partial f \over \partial x}\right)^2+\left({\partial f \over \partial y}\right)^2+1}\, \, dx\, dy \end{align}$
which is the formula used for the surface area of a general functional shape. One can recognize the vector in the second line above as the normal vector to the surface.
Note that because of the presence of the cross product, the above formulas only work for surfaces embedded in three dimensional space.
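As a quick numerical check of this formula, the following SciPy sketch computes the area of the graph of $f(x,y)=x^2+y^2$ over the unit disk (a surface chosen only because the exact answer, $\frac{\pi}{6}(5\sqrt{5}-1)$, is known):

```python
import numpy as np
from scipy.integrate import dblquad

# Area of the graph z = f(x, y) over a region T:
#   A = integral over T of sqrt(f_x^2 + f_y^2 + 1) dx dy
f_x = lambda x, y: 2 * x
f_y = lambda x, y: 2 * y
integrand = lambda y, x: np.sqrt(f_x(x, y) ** 2 + f_y(x, y) ** 2 + 1)

# T = unit disk: -1 <= x <= 1 and -sqrt(1 - x^2) <= y <= sqrt(1 - x^2).
area, err = dblquad(integrand, -1, 1,
                    lambda x: -np.sqrt(1 - x ** 2),
                    lambda x: np.sqrt(1 - x ** 2))

exact = np.pi / 6 * (5 * np.sqrt(5) - 1)
print(area, exact)   # the two values agree to several decimal places
```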
## Surface integrals of vector fields
A vector field on a surface.
Consider a vector field v on S, that is, for each x in S, v(x) is a vector.
The surface integral can be defined component-wise according to the definition of the surface integral of a scalar field; the result is a vector. For example, this applies to the electric field at some fixed point due to an electrically charged surface, or the gravity at some fixed point due to a sheet of material. It can also be used to calculate the magnetic flux through a surface.
Alternatively, mathematicians can integrate the normal component of the vector field; the result is a scalar. An example is a fluid flowing through S, such that v(x) determines the velocity of the fluid at x. The flux is defined as the quantity of fluid flowing through S in a unit amount of time.
This illustration implies that if the vector field is tangent to S at each point, then the flux is zero, because the fluid just flows in parallel to S, and neither in nor out. This also implies that if v does not just flow along S, that is, if v has both a tangential and a normal component, then only the normal component contributes to the flux. Based on this reasoning, to find the flux, we need to take the dot product of v with the unit surface normal to S at each point, which will give us a scalar field, and integrate the obtained field as above. This gives the formula
$\int_S {\mathbf v}\cdot \,d{\mathbf {S}} = \int_S ({\mathbf v}\cdot {\mathbf n})\,dS=\iint_T {\mathbf v}(\mathbf{x}(s, t))\cdot \left({\partial \mathbf{x} \over \partial s}\times {\partial \mathbf{x} \over \partial t}\right) ds\, dt.$
The cross product on the right-hand side of this expression is a surface normal determined by the parametrization.
This formula defines the integral on the left (note the dot and the vector notation for the surface element).
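The same parametrized formula can be evaluated numerically. The sketch below (the field and surface are chosen arbitrarily) computes the flux of the constant field $\mathbf v=(0,0,1)$ through the upper hemisphere of radius $R$; the exact value is $\pi R^2$.

```python
import numpy as np
from scipy.integrate import dblquad

R = 2.0
v = np.array([0.0, 0.0, 1.0])      # the vector field, constant in this example

def integrand(phi, theta):
    # x(theta, phi) = R (sin t cos p, sin t sin p, cos t); its partial derivatives:
    x_theta = R * np.array([np.cos(theta) * np.cos(phi),
                            np.cos(theta) * np.sin(phi),
                            -np.sin(theta)])
    x_phi = R * np.array([-np.sin(theta) * np.sin(phi),
                          np.sin(theta) * np.cos(phi),
                          0.0])
    normal = np.cross(x_theta, x_phi)   # (dx/dtheta) x (dx/dphi), outward here
    return v @ normal                   # v . normal, the integrand in the flux formula

flux, err = dblquad(integrand, 0, np.pi / 2,          # theta from 0 to pi/2
                    lambda t: 0.0, lambda t: 2 * np.pi)  # phi from 0 to 2*pi
print(flux, np.pi * R ** 2)             # both approximately 12.566
```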
## Theorems involving surface integrals
Various useful results for surface integrals can be derived using differential geometry and vector calculus, such as the divergence theorem, and its generalization, Stokes' theorem.
## Advanced issues
### Changing parameterization
The discussion above defined the surface integral by using a parametrization of the surface S. A given surface might have several parametrizations. For example, when the locations of the North Pole and South Pole are moved on a sphere, the latitude and longitude change for all the points on the sphere. A natural question is then whether the definition of the surface integral depends on the chosen parametrization. For integrals of scalar fields, the answer to this question is simple, the value of the surface integral will be the same no matter what parametrization one uses.
Integrals of vector fields are more complicated, because the surface normal is involved. Mathematicians have proved that given two parametrizations of the same surface, whose surface normals point in the same direction, both parametrizations give the same value for the surface integral. If, however, the normals for these parametrizations point in opposite directions, the value of the surface integral obtained using one parametrization is the negative of the one obtained via the other parametrization. It follows that given a surface, we do not need to stick to any unique parametrization; but, when integrating vector fields, we do need to decide in advance which direction the normal will point to and then choose any parametrization consistent with that direction.
### Parameterizations work on parts of the surface
Another issue is that sometimes surfaces do not have parametrizations which cover the whole surface; this is true for example for the surface of a cylinder (of finite height). The obvious solution is then to split that surface in several pieces, calculate the surface integral on each piece, and then add them all up. This is indeed how things work, but when integrating vector fields one needs to again be careful how to choose the normal-pointing vector for each piece of the surface, so that when the pieces are put back together, the results are consistent. For the cylinder, this means that if we decide that for the side region the normal will point out of the body, then for the top and bottom circular parts the normal must point out of the body too.
### Inconsistent surface normals
Last, there are surfaces on which a surface normal cannot be chosen consistently at each point (for example, the Möbius strip). If such a surface is split into pieces, on each piece a parametrization and corresponding surface normal is chosen, and the pieces are put back together, the normal vectors coming from different pieces cannot be reconciled. This means that some junction between two pieces will have normal vectors pointing in opposite directions. Such a surface is called non-orientable. Vector fields cannot be integrated on non-orientable surfaces.
http://mathhelpforum.com/calculus/199408-triple-integral.html

# Thread:
1. ## Triple integral
What does the answer to a triple integral represent?
2. ## Re: Triple integral
If you want a really general answer, a triple integral represents the area beneath a function which represents the area beneath another function which represents the area beneath another function still.
Triple integrals aren't all that common in my experience but a more specific case would be something like a problem where an object's acceleration is not constant and is expressed in terms of time. The triple integral of the rate of change of acceleration would yield an expression for the object's displacement in this case.
$j = f(t)$
$s=\int \left [\int \left (\int f(t)dt \right ) dt \right ] dt$
Where j is the rate of change of acceleration (the jerk) and s is displacement. This is a very specific case though; as I said earlier, a triple integral's meaning depends on what the expression being integrated means.
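A quick symbolic version of this (a SymPy sketch with a made-up jerk function and all constants of integration taken to be zero):

```python
import sympy as sp

t = sp.symbols('t')
jerk = 6 * t                    # hypothetical rate of change of acceleration, j(t) = 6t

a = sp.integrate(jerk, t)       # acceleration:  3*t**2
v = sp.integrate(a, t)          # velocity:      t**3
s = sp.integrate(v, t)          # displacement:  t**4/4
print(a, v, s)
```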
http://math.stackexchange.com/questions/65943/which-side-should-the-hat-of-unit-vectors-point-to

# Which side should the “hat” of unit vectors point to?
I'm not sure whether to post this here or on TeX.SE. If it belongs there, please move it.
When working with unit vectors in physics class, we usually write them with an upside down hat, like this: $\check{\imath}\, \check{\jmath} \,\check{\rho}$. But when looking online, I usually found them written like this: $\hat{\imath}\, \hat{\jmath} \,\hat{\rho}$. Is there a correct way to do it, or is it just convention?
## 1 Answer
I have never seen the upside-down hat, but I don't really think it matters so long as you mention it explicitly the first time you use it and you remain consistent afterwards. I'm very used to the upward hat - in my linear algebra classes, that's how I have taught it. And when I dabble in physics, it plays nicely with their concepts of observables in quantum mechanics.
http://mathoverflow.net/questions/43481/the-conditions-in-the-definition-of-poisson-process-and-a-levy-process-generaliz

## The conditions in the definition of Poisson process (and a Lévy process generalization)
Last week, George Lowther provided a rather sophisticated counter-example of a continuous process `$\{W(t):t \geq 0\}$` with $W(0)=0$ and $W(t)-W(s) \sim {\rm N}(0,t-s)$ for all $0 \leq s < t$, yet not a Brownian motion; see the earlier question. Apparently, his approach relied heavily on special properties of BM.
Now, what about the analogue for a Poisson process: Can you find an example of a càdlàg (right-continuous with left limits) process `$\{X(t):t \geq 0\}$` with $X(0)=0$ and $X(t)-X(s) \sim {\rm Poi}(t-s)$ for all $0 \leq s < t$, yet not being a Poisson process?
Bonus question: If the last question turns out to be too easy to answer (etc.), then what about the general Lévy process case? That is, given a Lévy process $X$ (defined below) with law $\mu_t$ at time $t>0$, does there exist a càdlàg process $\tilde X$ with $\tilde X(0) = 0$ and $\tilde X(t)-\tilde X(s) \sim \mu_{t-s}$ for all $0 \leq s < t$, which is yet not identical in law to $X$ (hence not a Lévy process)? [Here, assume that $X$ is non-deterministic, equivalently, $\mu_t$ is not a $\delta$-distribution.]
Definition: A stochastic process `$X=\{X(t):t \geq 0\}$` is a Lévy process (say, real-valued) if the following conditions are satisfied: (1) $X(0)=0$ a.s.; (2) $X$ has independent increments; (3) $X$ has stationary increments; (4) $X$ is stochastically continuous; (5) Almost surely, the function $t \mapsto X(t)$ is right-continuous (for $t \geq 0$) and has left limits (for $t>0$). [In fact, condition (4) is implied by (1), (3), and (5).]
PS: you are still welcome to try and find a simpler counter-example for the Brownian motion case.
## 2 Answers
You cannot define a Lévy process by the individual distributions of its increments, except in the trivial case of a deterministic process $X_t - X_0 = bt$ with constant $b$. In fact, you can't identify it by the $n$-dimensional marginals for any $n$.
1) Let $X$ be a nondeterministic Lévy process with $X_0 = 0$ and $n$ be any positive integer. Then, there is a cadlag process $Y$ with a different distribution to $X$, but such that $(Y_{t_1},Y_{t_2},\dots,Y_{t_n})$ has the same distribution as $(X_{t_1},X_{t_2},\dots,X_{t_n})$ for all times $t_1,t_2,\dots,t_n$.
Taking $n = 2$ will give a process whose increments have the same distribution as for $X$.
The idea (as in my answer to this related question) is to reduce it to the finite-time case. So, fix a set of times $0 = t_0 < t_1 < t_2 < \dots < t_m$ for some $m > 1$. We can look at the distribution of $X$ conditioned on the $\mathbb{R}^m$-valued random variable $U \equiv (X_{t_1},X_{t_2},\dots,X_{t_m})$. By the Markov property, it will consist of a set of independent processes on the intervals $[t_{k-1},t_k]$ and $[t_m,\infty)$, where the distribution of $\{X_t\}_{t \in [t_{k-1},t_k]}$ only depends on $(X_{t_{k-1}},X_{t_k})$ and the distribution of $\{X_t\}_{t \in [t_m,\infty)}$ only depends on $X_{t_m}$. By the disintegration theorem, the process $X$ can be built by first constructing the random variable $U$, then constructing $X$ to have the correct probabilities conditional on $U$. Doing this, the distribution of $X$ at any one time only depends on the values of at most two elements of $U$ (corresponding to $X_{t_{k-1}},X_{t_k}$). The distribution of $X$ at any set of $n$ times depends on the values of at most $2n$ values of $U$.
Choosing $m > 2n$, the idea is to replace $U$ by a differently distributed $\mathbb{R}^m$-valued random variable for which any $2n$ elements still have the same distribution as for $U$. We can apply a small bump to the distribution of $U$ in such a way that the $m - 1$ dimensional marginals are unchanged. To do this, we can use the following.
2) Let $U$ be an $\mathbb{R}^m$-valued random variable with probability measure $\mu$. Suppose that there exist (non-trivial) measures $\mu_1,\mu_2,\dots,\mu_m$ on the reals such that $\mu_1(A_1)\mu_2(A_2)\cdots\mu_m(A_m) \leq \mu(A_1\times A_2\times\cdots\times A_m)$ for all Borel subsets $A_1,A_2,\dots,A_m \subseteq \mathbb{R}$. Then, there is an $\mathbb{R}^m$-valued random variable $V$ with a different distribution to $U$, but with the same $m - 1$ dimensional marginal distributions.
By 'non-trivial' I mean that $\mu_k$ is a non-zero measure and does not consist of a single atom.
By changing the distribution of $U$ in this way, we construct a new cadlag process with a different distribution to $X$, but with the same $n$ dimensional marginals.
Proving (2) is easy enough. As the $\mu_k$ are non-trivial, there will be measurable functions $f_k$ on the reals, uniformly bounded by 1 and such that $\mu_k(f_k) = 0$ and $\mu_k(|f_k|) > 0$. Replacing $\mu_k$ by the signed measure $f_k\cdot\mu_k$, we can assume that $\mu_k(\mathbb{R}) = 0$. Then $$\mu_V = \mu + \mu_1\times\mu_2\times\cdots\times\mu_m$$ is a probability measure different from $\mu$. Choosing $V$ with this distribution gives $${\mathbb E}[f(V)]=\mu_V(f)=\mu(f)={\mathbb E}[f(U)]$$ for any function $f\colon \mathbb{R}^m \to \mathbb{R}_+$ independent of one of the dimensions. So, $V$ has the same $m - 1$ dimensional marginals as $U$.
To apply (2) to $U = (X_{t_1},X_{t_2},\dots,X_{t_m})$, consider the following cases.
1. $X$ is continuous. In this case, $X$ is just a Brownian motion (up to multiplication by a constant and addition of a constant drift). So, $U$ is joint-normal with nondegenerate covariance matrix. Its probability density is continuous and strictly positive so, in (2), we can take $\mu_k$ to be a multiple of the uniform measure on $[0,1]$.
2. $X$ is a Poisson process. In this case, we can take $\mu_k$ to be a multiple of the (discrete) uniform distribution on $\{2k,2k+1\}$ and, as $X$ can take any increasing nonnegative integer-valued path on the times $t_k$, this satisfies the hypothesis of (2).
3. If $X$ is any non-continuous Lévy process, case 2 can be used to change the distribution of its jump times without affecting the $n$ dimensional marginals: Let $\nu$ be its jump measure, and $A$ be a Borel set such that $\nu(A)$ is finite and nonzero. Then, $X$ decomposes as the sum of its jumps in $A$ (which occur according to a Poisson process of rate $\nu(A)$) and an independent Lévy process. In this way, we can reduce to the case where $X$ is a Lévy process whose jumps occur at a finite rate, with arrival times given by a Poisson process. In that case, let $N_t$ be the Poisson process counting the number of jumps in intervals $[0,t]$. Also, let $Z_k$ be the $k$'th jump of $X$. Then, $N$ and the $Z_k$ are all independent and, $$X_t=\sum_{k=1}^{N_t}Z_k.$$ As above, the Poisson process $N$ can be replaced by a differently distributed cadlag process which has the same $n$ dimensional marginals. This will not affect the $n$ dimensional marginals of $X$ but, as its jump times no longer occur according to a Poisson process, $X$ will no longer be a Lévy process.
It will take me a while to go over such an answer. – Shai Covo Oct 26 2010 at 0:21
Very elegant solution! [Note the typo before "is a probability measure" (you may wish to add here "different from $\mu$"), where $n$ should be $m$.] – Shai Covo Oct 27 2010 at 14:23
@Shai: Thanks. I've fixed the typo. – George Lowther Oct 27 2010 at 14:37
Based on the comments to this answer, I no longer believe what I initially wrote (still appearing at the bottom of the answer). It seems to me a construction should be possible. It is at least possible in the case of a Binomial point process.
Let `$\{X_i\}_{i \in \mathbb{N},i\neq 4}$` be independent Bernoulli$(1/2)$ random variables. Let $X_4'$ be Bernoulli$(1/2)$ and independent of the $X_i$. Then define $X_4$ as follows:
• $X_4 = 1$ if $(X_1,X_2,X_3)$ is either $(0,0,1)$ or $(1,1,0)$,
• $X_4 = 0$ if $(X_1,X_2,X_3)$ is either $(0,1,0)$ or $(1,0,1)$,
• $X_4=X_4'$ otherwise.
Then for all $j \geq 0$, $n \geq 1$, $X_{j+1}+\ldots+X_{j+n}$ has Binomial$(n,1/2)$ distribution but the family $(X_n)_{n \in \mathbb{N}}$ are not iid.
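A quick exact check of this construction, done by brute force (a sketch; the choice to enumerate only the first five variables is mine, since every later $X_i$ is just a further independent fair bit):

```python
from itertools import product
from math import comb

def x4_rule(x1, x2, x3, x4p):
    # the rule above defining X_4 from (X_1, X_2, X_3) and the auxiliary X_4'
    if (x1, x2, x3) in {(0, 0, 1), (1, 1, 0)}:
        return 1
    if (x1, x2, x3) in {(0, 1, 0), (1, 0, 1)}:
        return 0
    return x4p

outcomes = list(product([0, 1], repeat=5))   # (X1, X2, X3, X4', X5), 32 equally likely values

# every consecutive window inside the first five indices has a Binomial(n, 1/2) sum;
# windows reaching past index 5 only add further independent fair bits, so they follow.
for j in range(5):
    for n in range(1, 5 - j + 1):
        counts = {}
        for x1, x2, x3, x4p, x5 in outcomes:
            xs = (x1, x2, x3, x4_rule(x1, x2, x3, x4p), x5)
            s = sum(xs[j:j + n])
            counts[s] = counts.get(s, 0) + 1
        for s, c in counts.items():
            assert c * 2 ** n == comb(n, s) * len(outcomes)

# yet the family is not independent: X_4 is forced whenever (X1, X2, X3) = (0, 0, 1)
assert all(x4_rule(0, 0, 1, x4p) == 1 for x4p in (0, 1))
print("all windowed sums are Binomial(n, 1/2), but X_4 is determined by (X_1, X_2, X_3) on part of the space")
```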
What appears below (where I suggested such a construction was impossible) is false.
For a standard Poisson process, this won't be possible. (See this question and its answer.)
Edit: Given the comments perhaps I should provide more detail.
With probability one, for every pair $0 < p < q$, $p,q$ rationals, $X(q)−X(p)$ is a non-negative integer. Since X is cadlag the same property must hold for every real pair $0 < s < t$, i.e. $X$ is increasing and integer-valued.
Let us also show that $X$ has no jumps of size more than one: with probability one, for all $x > 0$, $X(x^-) := \lim_{y \uparrow x} X(y) \geq X(x)-1$. If this failed to hold then there would be $\epsilon > 0$ and $t < \infty$ so that $$\mathbb{P}(\exists x \in [0,t), X(x)-X(x^-) \geq 2) > \epsilon/2.$$ But since $X$ is increasing, for any positive integer $n$ we can bound this probability from above by $$\sum_{1 \leq i < 2n} \mathbb{P}(X((i+1)t/2n)-X((i-1)t/2n) \geq 2)$$ the point being that these intervals are chosen to overlap so that a jump of size $\geq 2$ must fall in at least one of them. Each of the differences above is distributed as Poisson$(t/n)$, so the associated probability is $o(n^{-1})$ as $n \to \infty$ and thus the whole sum tends to zero as $n \to \infty$.
We then know that a process $X$ such as you describe must be increasing and integer valued, with all jumps of size $1$. In other words, $X$ is a point process on $[0,\infty)$. Now the answer from the other thread implies that $X$ must be a rate one Poisson process.
In our situation, we only know $P(X(A)=0)$ for sets $A$ of the form $A=(s,t]$, which cannot characterize the law of $X$. Hence, the question is still unanswered. – Shai Covo Oct 25 2010 at 15:48
My last comment is incorrect, but the situation is only more complicated, as the process $X$ is not even known to be a point process (for example, the function $t \mapsto X(t)$ might not be monotone). – Shai Covo Oct 25 2010 at 16:34
With probability one, for every pair $0 < p < q$, $p,q$ rationals, $X(q)-X(p)$ is a non-negative integer. Since $X$ is cadlag the same property must hold for every real pair $0 < s < t$, i.e. $X$ is increasing and integer-valued, so it is a point process. – Louigi Addario-Berry Oct 25 2010 at 17:42
Of course, a trivial mistake of mine. It would have been correct if $X$ was only assumed cadlag with $X(0)=0$ and $X(t) \sim Poi(t)$. However, the question remains unanswered. – Shai Covo Oct 25 2010 at 18:04
I've elaborated on my answer. If I'm misunderstanding something please let me know. – Louigi Addario-Berry Oct 25 2010 at 18:37
# Which t-test to use for a two-group pre- post-test design?
I have data on 26 participants (13 from computing and the remaining 13 from non-computing backgrounds) who have participated in my research. Each participant is treated with a lab module (Hands on Robotics Session). Each participant will then be evaluated using a rubric on a scale of 1 to 4. This experiment has both a pre-test and a post-test.
For my research I want to evaluate the following questions:
### Research question 1
• Null Hypothesis: students do not learn about computational thinking (programming basics and algorithmic thinking) with the help of robotics.
• Alternate Hypothesis: Students learn about computational thinking (programming basics and algorithmic thinking) with the help of robotics.
To evaluate the above question, the categories I will be considering are Plan, Implementation and Knowledge gained, on a scale of Excellent, Good, Fair and Poor.
I think I should use a DEPENDENT T-TEST FOR PAIRED SAMPLES. Am I correct?
### Research question 2:
• Null Hypothesis: Participants' background (computing vs. non-computing) has no effect on learning about algorithmic thinking with the help of robotics.
• Alternate Hypothesis: Participants' background (computing vs. non-computing) has an effect on learning about algorithmic thinking with the help of robotics.
I will also evaluate question 2 on a scale of Excellent, Good, Fair and Poor, but with respect to background.
### My Statistical Question
Which t-test should I use for each of my research questions?
## 3 Answers
Since you are interested in measuring an increase pre- and post-test, it seems to me that you should use a paired test.
An issue here is that your variables are likely non-Gaussian since they take values in only 4 categories. If your sample size is big (very roughly larger than 50), then it is no big deal. Otherwise, I would use the Wilcoxon signed-rank test, which is a non-parametric analog of the paired t-test.
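For what it is worth, all of the tests discussed here (and the two-independent-samples comparison Macro suggests in the comments for the second question) are one-liners in SciPy. A minimal sketch, with made-up rubric scores and a purely illustrative group split:

```python
import numpy as np
from scipy import stats

# made-up pre/post rubric scores (1-4 scale) for 13 participants -- placeholders only
pre  = np.array([2, 1, 3, 2, 2, 1, 4, 3, 2, 1, 3, 2, 2])
post = np.array([3, 2, 4, 2, 3, 2, 4, 4, 3, 2, 3, 3, 4])

# research question 1: paired comparison of pre vs post
res_t = stats.ttest_rel(post, pre)          # dependent (paired) t-test
res_w = stats.wilcoxon(post, pre)           # non-parametric analogue (signed-rank)
print("paired t-test:        t = %.3f, p = %.4f" % (res_t.statistic, res_t.pvalue))
print("Wilcoxon signed-rank: W = %.1f, p = %.4f" % (res_w.statistic, res_w.pvalue))

# research question 2: compare gain scores between two (hypothetical) background groups
gain = post - pre
gain_computing, gain_noncomputing = gain[:7], gain[7:]   # illustrative split only
res_ind = stats.ttest_ind(gain_computing, gain_noncomputing)
res_mwu = stats.mannwhitneyu(gain_computing, gain_noncomputing)
print("two-sample t on gains: p = %.4f" % res_ind.pvalue)
print("Mann-Whitney on gains: p = %.4f" % res_mwu.pvalue)
```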
very roughly is right. What if $P( {\rm category \ 1} ) = 1 - 10^{-200}?$ ;) – Macro Jun 18 '12 at 23:40
Aha (+1), then I am f... But how likely is that? Close to $10^{-200}$ I'd say :-) – gui11aume Jun 18 '12 at 23:45
It's also worth noting that this would also require an assumption that the categories comprise an interval scale. It also seems (unless I misread) that, in question 1, the role of the null and alternative are reversed from the way they normally are in a paired $t$-test; the null appears to be that the module does help. Finally, question 2 appears to require a two independent samples test, where the paired differences are the outcome variables. – Macro Jun 18 '12 at 23:51
I do not subscribe to using the Wilcoxon signed rank test alternative. If you do a simulation study, you can still show the power of the t-test is good --comparable to the signed rank test-- in moderately skewed distributions. If the researchers are interested in testing for a geometric difference in responses, then a log transformation would be justified. – AdamO Jun 18 '12 at 23:54
@Macro: Thanks for pointing out that my formulation of the null and alternative was wrong, and thanks for your suggestion on the second question. – Dumb_Shock Jun 19 '12 at 0:59
I think you are having problems with the nature and elaboration of your alternative hypothesis. Your investigation/research hypothesis (and all the theoretical model associated with it) is your alternative hypothesis. Since the variable is clearly ordinal, I'd prefer a Wilcoxon rank test. But anyway, I suggest working out your hypotheses a little bit more.
gui11aume beat me to it. He said exactly what I had in mind when I read the question. Since scaled measurements are not going to fit a normal error distribution very well, a nonparametric paired test is the best way to go in my view, and for that I would have recommended the Wilcoxon signed-rank test. On the other hand, if you were comparing scores by domains where many responses are summed, normality is not a bad assumption and the t-test is fairly robust anyway. So in that situation a paired t-test might be okay.
# Xi'an's Og
an attempt at bloggin, from scratch…
## Correlated Poissons
A graduate student came to see me the other day with a bivariate Poisson distribution and a question about using EM in this framework. The problem boils down to adding one correlation parameter and an extra term in the likelihood
$(1-\rho)^{n_1}(1+\lambda\rho)^{n_2}(1+\mu\rho)^{n_3}(1-\lambda\mu\rho)^{n_4}\quad 0\le\rho\le\min(1,\frac{1}{\lambda\mu})$
Both terms involving sums are easy to deal with, using latent variables as in mixture models. The subtractions are trickier, as the negative parts cannot appear in a conditional distribution. Even though the problem can be handled by a direct numerical maximisation or by an almost standard Metropolis-within-Gibbs sampler, my suggestion regarding EM per se was to proceed by conditional EM, one parameter at a time. For instance, when considering $\rho$ conditional on both Poisson parameters, depending on whether $\lambda\mu>1$ or not, one can consider either
$(1-\theta/\lambda\mu)^{n_1}(1+\theta/\mu)^{n_2}(1+\theta/\lambda)^{n_3}(1-\theta)^{n_4}\quad0<\theta<1$
and turn
$(1-\theta/\lambda\mu) \text{ into } (1-\theta+\theta\{1-\frac{1}{\lambda\mu}\})$
thus producing a Beta-like target function in $\theta$ after completion, or turn
$(1-\lambda\mu\rho) \text{ into } (1-\rho+\{1-\lambda\mu\}\rho)$
to produce a Beta-like target function in $\rho$ after completion. In the end, this is a rather pedestrian exercise, and I am still frustrated at missing the trick to handle the subtractions directly; however, it was nonetheless a nice question!
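To illustrate the "direct numerical maximisation" route mentioned above, here is a small sketch; the values of $\lambda$, $\mu$ and the counts $n_1,\dots,n_4$ are made up, and what is maximised is the extra likelihood term in $\rho$ over its admissible range:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# hypothetical values; in practice these come from the data / current EM iterate
lam, mu = 1.8, 0.7
n1, n2, n3, n4 = 12, 30, 25, 8

def neg_log_term(rho):
    # minus the log of (1-rho)^n1 (1+lam*rho)^n2 (1+mu*rho)^n3 (1-lam*mu*rho)^n4
    return -(n1 * np.log(1 - rho) + n2 * np.log(1 + lam * rho)
             + n3 * np.log(1 + mu * rho) + n4 * np.log(1 - lam * mu * rho))

upper = min(1.0, 1.0 / (lam * mu))
eps = 1e-9                      # stay strictly inside the support of the logs
res = minimize_scalar(neg_log_term, bounds=(0.0, upper - eps), method="bounded")
print(f"rho maximising the extra likelihood term: {res.x:.4f}")
```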
This entry was posted on March 2, 2011 at 12:11 am and is filed under Statistics with tags EM, latent variable, Poisson distribution.
### 2 Responses to “Correlated Poissons”
1. Actually, the bivariate Poisson regression has been programmed in the bivpois package, in R (as far as I remember, it was using that EM algorithm you mention). But for some reason, it cannot be installed anymore
http://cran.r-project.org/web/packages/bivpois/
“Package ‘bivpois’ was removed from the CRAN repository”
Does anyone know why? Is there any problem with that package?
• Corey Says:
March 6, 2011 at 10:28 pm
I’ve been told that packages can drop out of CRAN when a new version of R is released. The CRAN people run their suite of tests, and if a package fails to pass, it is moved from the repository to the archive. It is the responsibility of the package maintainer, if any, to get the package back into the repository.
Archived packages can still be installed with a little work.
# Occupied lattice sites, determining number of microstates and energy
A solid consisting of $N$ molecules on a lattice of $N$ sites is isolated from its environment, and has energy $E$. Each molecule is fixed in position and independent of all others. It can be in any one of four internal states. Two of these states have energy $0$, while the other two states have energy $b$.
Consider $P=E/b$, so that $P$ is the number of molecules with energy $b$.
(a) Calculate the number of microstates of the solid when it has energy $E$.
(b) The solid is now placed in good thermal contact with its environment at temperature $T$. What is the solid's mean energy?
(c) Suppose $b=0.1 \mathrm{eV}$. What is the energy per molecule at $25^{\circ}C$?
I believe the number of microstates of the solid when it has energy $E$ is ${N/2 \choose bP}$.
However, I am not sure. Can someone clarify/guide, and assist with the latter parts?
does anyone have any advice or help to offer? – Alex Trent Dec 19 '12 at 5:19
Your guess for the number of microstates can't be right just from units. $bP$ has units of energy, when you need arguments that are numbers. Units are always the first check to see if your formulas make sense. – Todd R Dec 19 '12 at 5:54
## 1 Answer
Part B)
Let's find the partition function.
$Z = \sum_i e^{-E_i/k_B T}$, summing over all microstates $i$.
We have 4 microstates of energy b,b,0 and 0. Therefore:
$Z = 2e^{-b/(k_B T)} + 2 = 2e^{-b\beta} + 2$, where $\beta = 1/(k_B T)$.
Now we need to find the energy. To do this we use the formula:
$\langle E \rangle = -\frac{d}{d\beta}\ln Z(\beta) = -\frac{d}{d\beta}\ln\left(2e^{-b\beta} + 2\right)$
Evaluating gives:
$\langle E \rangle = \frac{2be^{-b\beta}}{2e^{-b\beta}+2} = \frac{b}{e^{b/k_B T}+1}$
Part C)
Substitute the numbers given into the above expression for E. This expression is already "per particle"
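A quick numerical sketch, plugging the numbers from part (c) into the expression above (with $k_B \approx 8.617\times 10^{-5}\,\mathrm{eV/K}$ and $T \approx 298\,\mathrm{K}$ for $25^{\circ}C$):

```python
import math

b = 0.1                      # eV
kB = 8.617e-5                # eV / K
T = 298.15                   # 25 degrees Celsius in kelvin

E_mean = b / (math.exp(b / (kB * T)) + 1)     # mean energy per molecule
print(f"<E> per molecule = {E_mean:.4f} eV")  # roughly 0.002 eV
```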
Don't forget to use the correct SI units when substituting in the numbers. – Chris Dec 19 '12 at 6:03
Hi Chris, for part A, do you agree with what Todd said? – Alex Trent Dec 19 '12 at 6:13
Part A) N Choose P = N Choose (E/b) – Chris Dec 19 '12 at 6:17
you have N molecules, and you're choosing the ones that contribute to the total energy E, which is given by P. makes sense. thank you! – Alex Trent Dec 19 '12 at 6:17
Yes, but make sure you express P in terms of energy. That is, replace P with E/b. – Chris Dec 19 '12 at 6:18
1. ## Initial value problem
Help me with this one please.
Consider the initial value problem
$\frac{\partial^2 u}{\partial x_2^2}+a\frac{\partial^2 u}{\partial x_1^2}=0,\qquad u(x_1,0)=\varphi_0(x_1),\qquad \frac{\partial u}{\partial x_2}(x_1,0)=\varphi_1(x_1)$
where $a$ is a real number.
1) For which class of initial data can one solve the problem?
2) Is this problem well-posed? If not, what goes wrong?
3) Do the answers to 1) and 2) depend on $a$?
# NAG Library Function Document: nag_1d_cheb_interp (e01aec)
## 1 Purpose
nag_1d_cheb_interp (e01aec) constructs the Chebyshev series representation of a polynomial interpolant to a set of data which may contain derivative values.
## 2 Specification
#include <nag.h>
#include <nage01.h>
void nag_1d_cheb_interp (Integer m, double xmin, double xmax, const double x[], const double y[], const Integer p[], Integer itmin, Integer itmax, double a[], double perf[], Integer *num_iter, NagError *fail)
## 3 Description
Let $m$ distinct values ${x}_{\mathit{i}}$ of an independent variable $x$ be given, with ${x}_{\mathrm{min}}\le {x}_{\mathit{i}}\le {x}_{\mathrm{max}}$, for $\mathit{i}=1,2,\dots ,m$. For each value ${x}_{i}$, suppose that the value ${y}_{i}$ of the dependent variable $y$ together with the first ${p}_{i}$ derivatives of $y$ with respect to $x$ are given. Each ${p}_{i}$ must therefore be a non-negative integer, with the total number of interpolating conditions, $n$, equal to $m+\sum _{i=1}^{m}{p}_{i}$.
nag_1d_cheb_interp (e01aec) calculates the unique polynomial $q\left(x\right)$ of degree $n-1$ (or less) which is such that ${q}^{\left(\mathit{k}\right)}\left({x}_{\mathit{i}}\right)={y}_{\mathit{i}}^{\left(\mathit{k}\right)}$, for $\mathit{i}=1,2,\dots ,m$ and $\mathit{k}=0,1,\dots ,{p}_{\mathit{i}}$. Here ${q}^{\left(0\right)}\left({x}_{i}\right)$ means $q\left({x}_{i}\right)$. This polynomial is represented in Chebyshev series form in the normalized variable $\stackrel{-}{x}$, as follows:
$q(x)=\tfrac{1}{2}a_0T_0(\bar{x})+a_1T_1(\bar{x})+\cdots+a_{n-1}T_{n-1}(\bar{x}),$
where
$\bar{x}=\frac{2x-x_{\mathrm{min}}-x_{\mathrm{max}}}{x_{\mathrm{max}}-x_{\mathrm{min}}}$
so that $-1\le \stackrel{-}{x}\le 1$ for $x$ in the interval ${x}_{\mathrm{min}}$ to ${x}_{\mathrm{max}}$, and where ${T}_{i}\left(\stackrel{-}{x}\right)$ is the Chebyshev polynomial of the first kind of degree $i$ with argument $\stackrel{-}{x}$.
(The polynomial interpolant can subsequently be evaluated for any value of $x$ in the given range by using nag_1d_cheb_eval2 (e02akc). Chebyshev series representations of the derivative(s) and integral(s) of $q\left(x\right)$ may be obtained by (repeated) use of nag_1d_cheb_deriv (e02ahc) and nag_1d_cheb_intg (e02ajc).)
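As a library-independent illustration of the series convention used here, the coefficients returned in a would be evaluated at a point $x$ by forming the normalized variable $\bar{x}$ and then summing the Chebyshev series with the leading coefficient halved; the coefficient values below are made up purely for the sake of the sketch:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def eval_cheb_series(a, x, xmin, xmax):
    """Evaluate q(x) = (1/2) a0 T0(xbar) + a1 T1(xbar) + ... + a_{n-1} T_{n-1}(xbar)."""
    xbar = (2.0 * x - xmin - xmax) / (xmax - xmin)   # normalize to [-1, 1]
    coef = np.asarray(a, dtype=float).copy()
    coef[0] *= 0.5                                   # the series uses (1/2) a0
    return C.chebval(xbar, coef)

# hypothetical coefficients, standing in for the array a returned by the function
a = [2.0, 0.5, -0.25, 0.125]
print(eval_cheb_series(a, 3.0, xmin=2.0, xmax=6.0))
```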
The method used consists first of constructing a divided-difference table from the normalized $\stackrel{-}{x}$ values and the given values of $y$ and its derivatives with respect to $\stackrel{-}{x}$. The Newton form of $q\left(x\right)$ is then obtained from this table, as described in Huddleston (1974) and Krogh (1970), with the modification described in Section 8.2. The Newton form of the polynomial is then converted to Chebyshev series form as described in Section 8.3.
Since the errors incurred by these stages can be considerable, a form of iterative refinement is used to improve the solution. This refinement is particularly useful when derivatives of rather high order are given in the data. In reasonable examples, the refinement will usually terminate with a certain accuracy criterion satisfied by the polynomial (see Section 7). In more difficult examples, the criterion may not be satisfied and refinement will continue until the maximum number of iterations (as specified by the input argument itmax) is reached.
In extreme examples, the iterative process may diverge (even though the accuracy criterion is satisfied): if a certain divergence criterion is satisfied, the process terminates at once. In all cases the function returns the ‘best’ polynomial achieved before termination. For the definition of ‘best’ and details of iterative refinement and termination criteria, see Section 8.4.
## 4 References
Huddleston R E (1974) CDC 6600 routines for the interpolation of data and of data with derivatives SLL-74-0214 Sandia Laboratories (Reprint)
Krogh F T (1970) Efficient algorithms for polynomial interpolation and numerical differentiation Math. Comput. 24 185–190
## 5 Arguments
1: m – Integer (Input)
On entry: $m$, the number of given values of the independent variable $x$.
Constraint: ${\mathbf{m}}\ge 1$.
2: xmin – double (Input)
3: xmax – double (Input)
On entry: the lower and upper end points, respectively, of the interval $\left[{x}_{\mathrm{min}},{x}_{\mathrm{max}}\right]$. If they are not determined by your problem, it is recommended that they be set respectively to the smallest and largest values among the ${x}_{i}$.
Constraint: ${\mathbf{xmin}}<{\mathbf{xmax}}$.
4: x[m] – const double (Input)
On entry: ${\mathbf{x}}\left[\mathit{i}-1\right]$ must be set to the value of ${x}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,m$. The ${\mathbf{x}}\left[i-1\right]$ need not be ordered.
Constraint: ${\mathbf{xmin}}\le {\mathbf{x}}\left[i-1\right]\le {\mathbf{xmax}}$, and the ${\mathbf{x}}\left[i-1\right]$ must be distinct.
5: y[$\mathit{dim}$] – const double (Input)
Note: the dimension, dim, of the array y must be at least $\left({\mathbf{m}}+\sum _{\mathit{i}=0}^{{\mathbf{m}}-1}{\mathbf{p}}\left[\mathit{i}\right]\right)$.
On entry: the given values of the dependent variable, and derivatives, as follows:
The first ${p}_{1}+1$ elements contain ${y}_{1},{y}_{1}^{\left(1\right)},\dots ,{y}_{1}^{\left({p}_{1}\right)}$ in that order.
The next ${p}_{2}+1$ elements contain ${y}_{2},{y}_{2}^{\left(1\right)},\dots ,{y}_{2}^{\left({p}_{2}\right)}$ in that order.
$\text{ }⋮$
The last ${p}_{m}+1$ elements contain ${y}_{m},{y}_{m}^{\left(1\right)},\dots ,{y}_{m}^{\left({p}_{m}\right)}$ in that order.
6: p[m] – const Integer (Input)
On entry: ${\mathbf{p}}\left[\mathit{i}-1\right]$ must be set to ${p}_{\mathit{i}}$, the order of the highest-order derivative whose value is given at ${x}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,m$. If the value of $y$ only is given for some ${x}_{i}$ then the corresponding value of ${\mathbf{p}}\left[i-1\right]$ must be zero.
Constraint: ${\mathbf{p}}\left[\mathit{i}-1\right]\ge 0$, for $\mathit{i}=1,2,\dots ,{\mathbf{m}}$.
7: itmin – Integer (Input)
8: itmax – Integer (Input)
On entry: respectively the minimum and maximum number of iterations to be performed by the function (for full details see Section 8.4). Setting itmin and/or itmax negative or zero invokes default value(s) of $2$ and/or $10$, respectively.
The default values will be satisfactory for most problems, but occasionally significant improvement will result from using higher values.
Suggested value: ${\mathbf{itmin}}=0$ and ${\mathbf{itmax}}=0$.
9: a[$\mathit{dim}$] – double (Output)
Note: the dimension, dim, of the array a must be at least $\left({\mathbf{m}}+\sum _{\mathit{i}=0}^{{\mathbf{m}}-1}{\mathbf{p}}\left[\mathit{i}\right]\right)$.
On exit: ${\mathbf{a}}\left[\mathit{i}\right]$ contains the coefficient ${a}_{\mathit{i}}$ in the Chebyshev series representation of $q\left(x\right)$, for $\mathit{i}=0,1,\dots ,n-1$.
10: perf[$\mathit{dim}$] – double (Output)
Note: the dimension, dim, of the array perf must be at least $\mathit{ipmax}+\left({\mathbf{m}}+\sum _{\mathit{i}=0}^{{\mathbf{m}}-1}{\mathbf{p}}\left[\mathit{i}\right]\right)+1$.
On exit: ${\mathbf{perf}}\left[\mathit{k}-1\right]$, for $\mathit{k}=0,1,\dots ,\mathit{ipmax}$, contains the ratio of ${P}_{k}$, the performance index relating to the $k$th derivative of the $q\left(x\right)$ finally provided, to $8$ times the machine precision.
${\mathbf{perf}}\left[\mathit{ipmax}+\mathit{j}-1\right]$, for $\mathit{j}=1,2,\dots ,n$, contains the $j$th residual, i.e., the value of ${y}_{i}^{\left(k\right)}-{q}^{\left(k\right)}\left({x}_{i}\right)$, where $i$ and $k$ are the appropriate values corresponding to the $j$th element in the array y (see the description of y in Section 5).
This information is also output if NE_ITER_FAIL or NE_NOT_ACC.
11: num_iter – Integer * (Output)
On exit: num_iter contains the number of iterations actually performed in deriving $q\left(x\right)$.
This information is also output if NE_ITER_FAIL or NE_NOT_ACC.
12: fail – NagError * (Input/Output)
The NAG error argument (see Section 3.6 in the Essential Introduction).
## 6 Error Indicators and Warnings
NE_ALLOC_FAIL
Dynamic memory allocation failed.
NE_BAD_PARAM
On entry, argument $〈\mathit{\text{value}}〉$ had an illegal value.
NE_INT
On entry, ${\mathbf{m}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{m}}\ge 1$.
NE_INT_ARRAY
On entry, $i=〈\mathit{\text{value}}〉$ and ${\mathbf{p}}\left[i-1\right]=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{p}}\left[i-1\right]\ge 0$.
On entry, ${\mathbf{m}}=〈\mathit{\text{value}}〉$ and ${\mathbf{p}}\left[\mathit{i}-1\right]=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{p}}\left[\mathit{i}-1\right]\ge 0$, for $\mathit{i}=1,2,\dots ,{\mathbf{m}}$.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
NE_ITER_FAIL
Iteration is divergent. Problem is ill-conditioned.
NE_NOT_ACC
Not all performance indices are small enough. Try increasing itmax: ${\mathbf{itmax}}=〈\mathit{\text{value}}〉$.
NE_REAL_2
On entry, ${\mathbf{xmin}}=〈\mathit{\text{value}}〉$ and ${\mathbf{xmax}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{xmin}}<{\mathbf{xmax}}$.
NE_REAL_ARRAY
On entry, $i=〈\mathit{\text{value}}〉$, $j=〈\mathit{\text{value}}〉$ and ${\mathbf{x}}\left[i-1\right]=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{x}}\left[i-1\right]\ne {\mathbf{x}}\left[j-1\right]$.
On entry, $i=〈\mathit{\text{value}}〉$, ${\mathbf{x}}\left[i-1\right]=〈\mathit{\text{value}}〉$, ${\mathbf{xmin}}=〈\mathit{\text{value}}〉$ and ${\mathbf{xmax}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{xmin}}\le {\mathbf{x}}\left[i-1\right]\le {\mathbf{xmax}}$.
## 7 Accuracy
A complete error analysis is not currently available, but the method gives good results for reasonable problems.
It is important to realise that for some sets of data, the polynomial interpolation problem is ill-conditioned. That is, a small perturbation in the data may induce large changes in the polynomial, even in exact arithmetic. Though by no means the worst example, interpolation by a single polynomial to a large number of function values given at points equally spaced across the range is notoriously ill-conditioned and the polynomial interpolating such a dataset is prone to exhibit enormous oscillations between the data points, especially near the ends of the range. These will be reflected in the Chebyshev coefficients being large compared with the given function values. A more familiar example of ill-conditioning occurs in the solution of certain systems of linear algebraic equations, in which a small change in the elements of the matrix and/or in the components of the right-hand side vector induces a relatively large change in the solution vector. The best that can be achieved in these cases is to make the residual vector small in some sense. If this is possible, the computed solution is exact for a slightly perturbed set of data. Similar considerations apply to the interpolation problem.
The residuals ${y}_{i}^{\left(k\right)}-{q}^{\left(k\right)}\left({x}_{i}\right)$ are available for inspection in the array perf. To assess whether these are reasonable, however, it is necessary to relate them to the largest function and derivative values taken by $q\left(x\right)$ over the interval $\left[{x}_{\mathrm{min}},{x}_{\mathrm{max}}\right]$. The following performance indices aim to do this. Let the $k$th derivative of $q$ with respect to the normalized variable $\stackrel{-}{x}$ be given by the Chebyshev series
$\tfrac{1}{2}a_0^{(k)}T_0(\bar{x})+a_1^{(k)}T_1(\bar{x})+\cdots+a_{n-1-k}^{(k)}T_{n-1-k}(\bar{x}).$
Let ${A}_{k}$ denote the sum of the moduli of these coefficients (this is an upper bound on the $k$th derivative in the interval and is taken as a measure of the maximum size of this derivative), and define
$S_k=\max_{i\le k}A_i.$
Then if the root-mean-square value of the residuals of ${q}^{\left(k\right)}$, scaled so as to relate to the normalized variable $\stackrel{-}{x}$, is denoted by ${r}_{k}$, the performance indices are defined by
$P_k=r_k/S_k,\quad\text{for }k=0,1,\dots,\max_i p_i.$
It is expected that, in reasonable cases, they will all be less than (say) $8$ times the machine precision (this is the accuracy criterion mentioned in Section 3), and in many cases will be of the order of machine precision or less.
## 8 Further Comments
### 8.1 Timing
Computation time is approximately proportional to $\mathit{it}×{n}^{3}$, where $\mathit{it}$ is the number of iterations actually used.
### 8.2 Divided-difference Strategy
In constructing each new coefficient in the Newton form of the polynomial, a new ${x}_{i}$ must be brought into the computation. The ${x}_{i}$ chosen is that which yields the smallest new coefficient. This strategy increases the stability of the divided-difference technique, sometimes quite markedly, by reducing errors due to cancellation.
### 8.3 Conversion to Chebyshev Form
Conversion from the Newton form to Chebyshev series form is effected by evaluating the former at the $n$ values of $\stackrel{-}{x}$ at which ${T}_{n-1}\left(x\right)$ takes the value $±1$, and then interpolating these $n$ function values by a call of nag_1d_cheb_interp_fit (e02afc), which provides the Chebyshev series representation of the polynomial with very small additional relative error.
### 8.4 Iterative Refinement
The iterative refinement process is performed as follows.
Firstly, an initial approximation, ${q}_{1}\left(x\right)$ say, is found by the technique described in Section 3. The $r$th step of the refinement process then consists of evaluating the residuals of the $r$th approximation ${q}_{r}\left(x\right)$, and constructing an interpolant, $d{q}_{r}\left(x\right)$, to these residuals. The next approximation ${q}_{r+1}\left(x\right)$ to the interpolating polynomial is then obtained as
$q_{r+1}(x)=q_r(x)+dq_r(x).$
This completes the description of the $r$th step.
The iterative process is terminated according to the following criteria. When a polynomial is found whose performance indices (as defined in Section 7) are all less than $8$ times the machine precision, the process terminates after itmin further iterations (or after a total of itmax iterations if that occurs earlier). This will occur in most reasonable problems. The extra iterations are to allow for the possibility of further improvement. If no such polynomial is found, the process terminates after a total of itmax iterations. Both these criteria are over-ridden, however, in two special cases. Firstly, if for some value of $r$ the sum of the moduli of the Chebyshev coefficients of $d{q}_{r}\left(x\right)$ is greater than that of ${q}_{r}\left(x\right)$, it is concluded that the process is diverging and the process is terminated at once (${q}_{r+1}\left(x\right)$ is not computed).
Secondly, if at any stage, the performance indices are all computed as zero, again the process is terminated at once.
As the iterations proceed, a record is kept of the best polynomial. Subsequently, at the end of each iteration, the new polynomial replaces the current best polynomial if it satisfies two conditions (otherwise the best polynomial remains unchanged). The first condition is that at least one of its root-mean-square residual values, ${r}_{k}$ (see Section 7) is smaller than the corresponding value for the current best polynomial. The second condition takes two different forms according to whether or not the performance indices (see Section 7) of the current best polynomial are all less than $8$ times the machine precision. If they are, then the largest performance index of the new polynomial is required to be less than that of the current best polynomial. If they are not, the number of indices which are less than $8$ times the machine precision must not be smaller than for the current best polynomial. When the iterative process is terminated, it is the polynomial then recorded as best, which is returned to you as $q\left(x\right)$.
## 9 Example
This example constructs an interpolant $q\left(x\right)$ to the following data:
$m=4,\quad x_{\mathrm{min}}=2,\quad x_{\mathrm{max}}=6,$
$x_1=2,\quad p_1=0,\quad y_1=1,$
$x_2=4,\quad p_2=1,\quad y_2=2,\quad y_2^{(1)}=-1,$
$x_3=5,\quad p_3=0,\quad y_3=1,$
$x_4=6,\quad p_4=2,\quad y_4=2,\quad y_4^{(1)}=4,\quad y_4^{(2)}=-2.$
The coefficients in the Chebyshev series representation of $q\left(x\right)$ are printed, and also the residuals corresponding to each of the given function and derivative values.
This program is written in a generalized form which can read any number of data-sets.
### 9.1 Program Text
Program Text (e01aece.c)
### 9.2 Program Data
Program Data (e01aece.d)
### 9.3 Program Results
Program Results (e01aece.r) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 160, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8272677659988403, "perplexity_flag": "middle"} |
## On a polynomial related to the Legendre function of the second kind
The Legendre function of the second kind, $Q_n(z)$, along with the usual Legendre polynomial $P_n(z)$, are the two linearly independent solutions of the Legendre differential equation.
$Q_n(z)$ can be expressed in the following form:
$$Q_n(z)=P_n(z)\mathrm{artanh}\,z-W_{n-1}(z)$$
where $W_{n-1}(z)$ can be expressed either as
$$W_{n-1}(z)=\sum_{k=1}^n \frac{P_{k-1}(z) P_{n-k}(z)}{k}$$
or as
$$W_{n-1}(z)=\sum_{k=0}^{n-1} \frac{(H_n-H_k)(n+k)!}{2^k (n-k)! (k!)^2} (z-1)^k$$
where $H_k$ is the $k$-th harmonic number, $H_k=\sum\limits_{j=1}^k \frac1{j}$.
My questions:
1. Mathematica returns a rather complicated expression for $W_{n-1}(z)$ involving the unknown solution of a certain recurrence (i.e. `DifferenceRoot[]`). Is there possibly a simpler form for this polynomial?
2. Might there be a (hopefully simple) $n$-term recurrence that generates these polynomials?
Addendum:
After staring long and hard at Pietro's answer, I feel now that my second question was sorta kinda dumb; I already knew that both Legendre functions satisfied the same difference equation, so it stands to reason that a linear combination of them should also be a solution to that recurrence.
I now would like to expand my first question a bit: is it possible to express $W_n(z)$ as a single hypergeometric function (e.g. ${}_p F_q$ or some of the fancy multivariate ones), perhaps with one of the parameters being a negative integer? For instance, $P_n(z)$ is expressible as a Gaussian hypergeometric function ${}_2 F_1$ with one of the numerator parameters being a negative integer. Might there be something similar for the $W_n$?
I would also like to consider an additional question: are the $W_n$ orthogonal polynomials with respect to some weight function $\omega(x)$ and an associated support interval $(a,b)$? That is, if
$$\int_a^b\omega(t)W_j(t)W_k(t)\mathrm dt=0,\qquad j\neq k$$
for some $\omega(x)$ and some interval $(a,b)$, what is this weight function and its support interval?
Note that the $P_n(z)$ have generating function $f:=\frac{1}{\sqrt{1-2zt+t^2}}$. This leads to a GF for the $W_n$: by your second formula they have a generating function which is a product of $f$ and its antiderivative (wrto $t$). – Pietro Majer Sep 4 2011 at 8:06
## 1 Answer
The generating function of the sequence $W_n(z)$ (shifted) is $$w(t,z):=(1-2zt+t^2)^{-\frac{1}{2}}\log\bigg(\frac{-z+t+(1-2zt+t^2)^\frac{1}{2}}{1-z}\bigg) =\sum_{n=1}^\infty\ W_{n-1} (z)\ t^n\ .$$ It satisfies a simple linear first-order differential equation: $$(1-2zt+t^2)w_t + (t-z)w=1$$ that translates into a three-term linear recurrence for the $W_n$:
$$(n+1)W_n=(2n+1)zW_{n-1}-nW_{n-2}\qquad$$
with the initial conditions $W_0=1$ and $W_1:=\frac{3}{2}z\ .$
edit. Note that one may start the above recurrence with $W_{-1}:=0$ and $W_0:=1$. Also note that, up to a shift, that recurrence is the same as the Legendre polynomials. This means that the polynomials $R_n:=W_{n-1}$ are the other linear independent solution to the recurrence of the Legendre polynomials, $$(n+1)y_{n+1}=(2n+1)zy_{n}-ny_{n-1}\ ,$$ that corresponds to the initial conditions $R_0=0$, $R_1=1$ (while $P_0=1$ and $P_1=z$). According to the notations of the general theory of orthogonal polynomials, these $R_n$ should be named "Legendre polynomials of the second kind" (not to be confused with the Legendre functions of the second kind).
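A quick numerical sanity check of this recurrence against the sum formula $W_{n-1}(z)=\sum_{k=1}^n P_{k-1}(z)P_{n-k}(z)/k$ from the question; a sketch, with the evaluation point $z$ chosen arbitrarily:

```python
from scipy.special import eval_legendre

def W_by_recurrence(n_max, z):
    # W_{-1} = 0, W_0 = 1, then (n+1) W_n = (2n+1) z W_{n-1} - n W_{n-2}
    W = [0.0, 1.0]
    for n in range(1, n_max + 1):
        W.append(((2 * n + 1) * z * W[-1] - n * W[-2]) / (n + 1))
    return W[1:]                      # [W_0, W_1, ..., W_{n_max}]

def W_by_sum(m, z):
    # W_m(z) = sum_{k=1}^{m+1} P_{k-1}(z) P_{m+1-k}(z) / k
    n = m + 1
    return sum(eval_legendre(k - 1, z) * eval_legendre(n - k, z) / k
               for k in range(1, n + 1))

z = 0.37
rec = W_by_recurrence(6, z)
assert all(abs(rec[m] - W_by_sum(m, z)) < 1e-12 for m in range(7))
print("three-term recurrence matches the sum formula")
```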
Actually, since the $Q_n$ satisfy the same recurrence relation as the $P_n$ (mathworld.wolfram.com/…), it is not surprising that so does $W_{n-1}=P_n\,\mathrm{artanh}(z) -Q_n$. – Pietro Majer Sep 4 2011 at 22:31
Thanks a lot, Pietro! – J. M. Sep 5 2011 at 2:16
## Any compilation of all classical physics concepts?
I understand many classical physics concepts but I feel like my understanding of the concepts is all scattered. I can't seem to make links between concepts. For example: I understand momentum, forces and energy, but I have trouble making any links between the ideas (other than the link that work is just force times a distance). It doesn't help that the concepts are always taught separately and the problems usually only involve a maximum of 2 concepts at a time.
Is there any resource or video like the one below but for classical physics?
Maybe it has to do with my understanding of physics? My current understanding is that classical physics does not use very much calculus, but according to Wikipedia:
Physics makes particular use of calculus; all concepts in classical mechanics and electromagnetism are interrelated through calculus. The mass of an object of known density, the moment of inertia of objects, as well as the total energy of an object within a conservative field can be found by the use of calculus.
So maybe I can link concepts with calculus?
Do you know how these formulas were derived? Knowing just a formula vs. KNOWING a formula (how it was derived, its consequences, etc.) will lead to a much deeper knowledge. (You probably know this.) Example: What is momentum? It is the product of the mass of a body and its velocity. How does it relate to force? Well, let's say a force (F) pushes on a body for a time (t); the momentum of the body will be changed by a certain amount (due to how long it is pushed on). So we say: Δp = FΔt. Then we look at this from calculus and differentiate it: F = dp/dt. But then one says, ohh... the change in p depends on a certain time interval! So, let's factor in t1 and t2, and we can say: Δp = $\int_{t_1}^{t_2}$ F(t) dt. Then you can see from this that we get F = m(dv/dt), and since we know dv/dt = acceleration, you can say F = ma, and you have derived Newton's second law. MIT has OpenCourseWare on Physics I and II and they are a wealth of information. Walter Lewin derives many formulas with calculus and algebra. You can find the videos for free on YouTube.
## expansion
The Laurent expansion of $f$ around $\langle$about$\rangle$ zero is ......
expansion in powers of $x$
the base $p$ expansion $\langle$representation$\rangle$ of $x$
# Eigenvalues for Hessian evaluated at nondegenerate critical points
Let $f \colon M \to \mathbb{R}$ be a smooth function on a manifold $M$. If $f$ is Morse then all the critical points of $f$ are non-degenerate; that is, if $p$ is a critical point of $f$, then $\det \text{Hess}_f(p) \neq 0$.
If $f$ is Morse and $p$ is a critical point of $f$, are the eigenvalues of $\text{Hess}_f(p)$ simple?
If so, are there any references to this?
## 1 Answer
Here is a trivial counterexample. Let $M=\mathbb{R}^n$ and let $f(x_1,\dots,x_n)=\sum_{i=1}^nx_i^2$.
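To spell the counterexample out: the only critical point of $f$ is the origin, where $\mathrm{Hess}_f(0)=2I_n$, so $\det\mathrm{Hess}_f(0)=2^n\neq 0$ and $f$ is Morse; yet the single eigenvalue $2$ has multiplicity $n$, so for $n\geq 2$ the eigenvalues are not simple.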
# Does entropy decrease through measurement?
For an electron in its rest frame, we have an entropy $$S = \log 2,$$ which comes from the 2 possible spin directions along the z-axis.
If the measurement $S_z$ changes its state to $\left| + \right>$, the entropy goes to zero.
Does this violate the second law of thermodynamics?
The second law is a statement about closed systems. You have neglected to account for the entropy change in the measurement equipment. – genneth Oct 21 '12 at 0:15
Moreover, if you take the system as an open system, its entropy does not decrease. This is because the measurement device is not a part of the description and thus we can not use the knowledge of the measurement result in quantifying the uncertainty of the system's state. "Forgetting" the result, the odds of being a + or a - are 50%:50%, just like before the measurement. Given a particular measurement result, we know the electron's spin with certainty, but this is not entropy being zero, rather conditional entropy. – Vašek Potoček Oct 21 '12 at 9:39
One does not have to go to particles for examples of decreasing entropy. Please ponder on the fact that any live biological entity considered by itself, since it continually creates order, decreases entropy. The same for a crystal growing out of of the solution. The crystal by itself decreases entropy. It is the closed system that is important as the answers state. – anna v Oct 21 '12 at 12:02
4 Answers
The entropy of the measured system decreases, at the expense of the entropy of the detector, which goes up. The total entropy balance is positive, as irreversibly fixing a detection result usually costs more entropy than is gained from the reduction in the measured system.
Thus the second law is not violated.
Edit: About deriving such results: In a fully microscopic description the entropy remains constant, but nothing can be measured, as there is no microscopic concept of a permanent record of measurement results. One therefore needs appropriate coarse-graining assumptions, which take the place of Boltzmann's 19th century Stosszahlansatz for classical molecular systems.
Coarse graining means that the density matrix is restricted to take a form depending only on macroscopically measurable parameters, and deviations from this form (due to the exact dynamics) are swept under the coarse graining carpet. This leads to an approximate macroscopic dynamics. The resulting description is dissipative in the Markovian limit: The entropy $Tr(-\rho\log\rho)$ strictly increases with time unless the system is already in equilibrium.
A book covering this nicely and in full detail for a number of coarse graining recipes is Grabert's ''Projection operator techniques in nonequilibrium statistical mechanics''. For a readable summary of the basic technique, see, e.g., http://arxiv.org/pdf/cond-mat/9612129.
Is there a rigorous manner by which one can show this without assuming the second law of thermodynamics? – Prathyush Oct 21 '12 at 13:43
1
@Prathyush: In statistical mechanics, one has to make some assumptions, of course, to deduce anything. Starting from the standard assumptions (quantum versions of Boltzmann's Stosszahlansatz), one finds the second law, the above observation, and much else. This is enough for consistency. – Arnold Neumaier Oct 21 '12 at 13:48
I know nothing about the quantum version of Boltzmann's Stosszahlansatz hypothesis (I will look into it sometime in the future). But if we apply unitary evolution to any closed-system density matrix (here the state + measuring apparatus system) and use the definition of entropy as $-\mathrm{Tr}(\rho \log \rho)$, then the entropy cannot increase. About the next part I am not very sure, but I think the finite time of interactions between the measuring apparatus and the state must play some role in all of this. – Prathyush Oct 21 '12 at 14:22
@Prathyush: I added details in my answer. – Arnold Neumaier Oct 21 '12 at 16:40
Thank you, I will look into it, perhaps an understanding of what it means to record information is much needed. – Prathyush Oct 21 '12 at 17:49
tl;dr: You decrease the system's entropy but increase it in the measuring device.
Quoting parts of my own answer to Entropy-based refutation of Shalizi's Bayesian backward arrow of time paradox? at CrossValidated:
As it was pointed out in comments, what matters to thermodynamics, is the entropy of a closed system. That is, according to the second law of thermodynamics, entropy of a closed system cannot decrease. It says nothing about the entropy of a subsystem (or an open system); otherwise you couldn't use your fridge.
And once we measure sth (i.e. interact and gather information) it is not a closed system anymore. Either we cannot use the second law, or - we need to consider a closed system made of the measured system and the observer (i.e. ourselves).
In particular, when we measure the exact state of a particle (while before we knew its distribution), indeed we lower its entropy. However, to store the information we need to increase our entropy by at least the same amount (typically there is huge overhead).
[...]
To be consistent with classical mechanics (and quantum as well), you cannot make a function arbitrarily mapping anything to all zeros (with no side effects). You can make a function mapping your memory to all zero, but at the same time dumping the information somewhere, which effectively increases the entropy of the environment.
(The above originates from Hamiltonian dynamics - i.e. preservation of the phase space in the classical case, and unitarity of evolution in the quantum case.)
PS: A trick for today - "reducing entropy":
• Flip an unbiased coin, but don't look at the result (H=1 bit).
• Open your eyes. Now you know its state, so its entropy is H=0 bits.
-
The well-tested second law of thermodynamics is not violated, because the second law is a statement about the production of entropy $\Delta_i S \geq 0$. For an isolated system $\Delta S = \Delta_i S$ and entropy does not decrease $\Delta S \geq 0$, but for open or closed systems $\Delta S = (\Delta_i S + \Delta_e S) \geq \Delta_e S$ and entropy can increase decrease or remain constant in function of the flow term $\Delta_e S$. For instance for mature living systems entropy remains approx. constant because the flow term compensates the production of entropy $\Delta_e S = - \Delta_i S$.
Regarding quantum systems, generalized measurements can decrease the entropy of the system, but non-selective ideal measurements never decrease the entropy. Both cases are compatible with the second law of thermodynamics.
A more universal formula for entropy valid under a wide range of conditions is $$S = - k_B \sum_i p_i \ln p_i$$ In this case we have an electron that can be in one of two degenerate states if there is no electrical or magnetic field present. The two states are spin +1/2 and spin -1/2. A given electron will be in one of those states with equal probability and there are two states, so $p_1 = 1/2$ and $p_2 = 1/2$.
Plugging these into the formula above we get $S = k_B \ln 2$ where $k_B$ is Boltzmann's constant (left out of the equation in the question).
Since we don't know what state the electron is in, flipping it to the other state does nothing to the entropy. So we must separate the states by, say, the application of a magnetic field. One state will now have more energy and the other less. If we find the electron in the upper energy state, $p_1 = 0$ and $p_2 = 1$. Evaluating the formula is a bit tricky because we have a term $0\ln 0$ which can be shown to evaluate to zero. The term $1\ln 1$ also evaluates to zero so we now have $S = 0$ and the entropy has been reduced.
Does this violate the Second Law? The answer is no primarily because you can't apply thermodynamics to a single particle.
I apologize for the skeletal answer here, but the subject is fairly complex and one would need to read the appropriate chapter(s) in a statistical thermodynamics book for a full discussion.
But one can (and must) apply thermodynamics to the system consisting of the electron and the measurement device. – Arnold Neumaier Oct 22 '12 at 7:06
## Bounds on a partition theorem with ambivalent colors
I've been running into the following type of partition problem.
Given positive integers h, r, k, and a real number ε ∈ (0,1), find n such that if every (unordered) r-tuple from an n element set X is assigned a set of at least εk 'valid' colors out of a total of k possible colors, then you can find H ⊆ X of size h and a single color which is 'valid' for all r-tuples from H.
Lower bounds on the smallest such n can be obtained from lower bounds for Ramsey's Theorem. If k is sufficiently large, then partition the set of colors into [1/ε] pairwise disjoint sets of approximately equal size to emulate a proper [1/ε]-coloring of r-tuples. A simple pigeonhole argument shows that this is essentially sharp when r = 1 and k is large enough, i.e. one color must be 'valid' for at least nε points.
Is the Ramsey bound more or less sharp for r > 1 or are there better lower bounds? The interesting case is when k is large since the proposed Ramsey lower bound is (surprisingly?) independent of k.
## 2 Answers
I do not think that the lower bound could depend only on epsilon. Below is the sketch of my argument.
Fix h=3, r=2, eps=1/4; thus we color the edges of a graph, each with 25% of all the colors, and we are looking for a "monochromatic" triangle. Let us take k random bipartitions of the vertices and, for each, color the corresponding edges of the bipartite graph with one color. Using Hoeffding or some similar inequality we get that, for big enough k, with some positive probability every edge is colored at least k/4 times provided n is at most exp(ck), where c is some fixed constant. Therefore the bound must depend on k and not only on epsilon.
Very nice! After coffee, I got the explicit lower bound $n > \sqrt{2}\exp(3k/40)$ when $h=3$ and $\epsilon=1/4$, by using Bernstein's Inequality instead. – François G. Dorais♦ Mar 25 2010 at 14:15
After more coffee, I successfully generalized your trick to arbitrary h, r; I will post the details separately. Thank you very much for your answer! – François G. Dorais♦ Mar 25 2010 at 14:49
Thx, you're welcome! – domotorp Mar 25 2010 at 16:25
Here is a generalization of domotorp's answer to arbitrary h > r > 1.
Independently for each color $i \in \{1,2,\ldots,k\}$, pick a random $H_i$ from a family $\mathcal{H}$ of r-hypergraphs that don't contain any complete r-hypergraph of size h. Declare color $i$ to be 'valid' for the r-tuple $t = \{t_1,\ldots,t_r\}$ iff $t \in H_i$. Let $Y_t$ be the number of 'valid' colors for $t$. Note that $Y_t$ is binomial with parameters $(k, p)$ for some $0 < p \leq 1/2$ which is independent of $k$, and also independent of $t$ when $\mathcal{H}$ is closed under isomorphism. Hoeffding's Inequality then gives

$\mathrm{Prob}[Y_t \leq \varepsilon k] \leq \exp(-2k(p-\varepsilon)^2)$

for $0 < \varepsilon < p$. So the probability that $Y_t \geq \varepsilon k$ for all $t$ is positive whenever $n \leq \exp(2k(p-\varepsilon)^2/r)$ (not optimal).
This is not enough since p implicitly depends on n. However, for fixed h > r > 1, p can be bounded away from 0. This can be seen by using for H the family of r-partite hypergraphs as domotorp did, but different choices of H give better bounds.
This answer is community wiki because domotorp deserves all the credit. – François G. Dorais♦ Mar 25 2010 at 16:07
http://mathoverflow.net/questions/44358/compact-open-topology | compact-open topology
Is there a natural reason for defining the compact-open topology on the set of continuous functions between two locally compact spaces? For example, "to make ... functions continuous". Or, to ask this another way: is there an adjoint functor of the functor, say F, which assigns the topological space `$F(X,Y):=Hom_C(X,Y)$` (with the compact-open topology on it) to the pair X, Y?
One natural reason is that with that topology things that we want to converge converge, and things that we want to not converge do not converge. It is a very natural abstraction of convergence notions like "uniform convergence on compact sets" and others which show up all over the place in nature. One can probably wrap the topology in the elegant clothes of a functorial explanation (this is done in Munkres, iirc) but the real reason the topology is useful is in my first sentence. – Mariano Suárez-Alvarez Oct 31 2010 at 17:56
One: this makes composition a continuous map. Two (this is sort of the same as one): this functor is right adjoint to the product in the compactly-generated category, i.e. $\operatorname{Hom}(X\times Y, Z)\simeq \operatorname{Hom}(X, \operatorname{Hom}(Y, Z))$. – Daniel Litt Oct 31 2010 at 18:25
3 Answers
In regard to your question I recommend Topologies on spaces of continuous functions, Topology Proceedings, volume 26, number 2, pp. 545-564, 2001-2002 by Martin Escardo and Reinhold Heckmann.
I always liked the following reason:
Let's call a topology on a space "admissible" if the evaluation function $e: Hom(X,Y) \times X \rightarrow Y$ is continuous. Then the compact-open topology is coarser than any other admissible topology. In particular, in any case where the compact-open topology is admissible, it is the smallest possible topology that does this.
EDIT: See comments for some references. I don't claim any originality here :)
This reason which you like, including the word "admissible", appears in one of the first papers on the compact-open topology. See R. Arens, A Topology for Spaces of Transformations, Annals of Math. 47 (1946), 480--495. – KConrad Oct 31 2010 at 20:51
Yes- sorry I forgot to put the reference. I got it from the book "Algebraic topology from a homotopical viewpoint" – Dylan Wilson Nov 1 2010 at 1:56
Is there a connection to the notion of an admissible representations? – Marc Palm Nov 19 2010 at 17:50
I have absolutely no idea- but it seems to me like there isn't. – Dylan Wilson Nov 19 2010 at 20:33
Exactly as you say, adjoint functors are the answer! (Or at least, they're one possible answer.) In particular, for reasonable spaces $X,Y,Z$, there is a natural isomorphism
$\mathrm{Hom} (X \times Y, Z) \cong \mathrm{Hom} (X, [Y,Z])$
where $[Y,Z]$ denotes $\mathrm{Hom}(Y,Z)$ with the compact-open topology. This is exactly the categorical characterisation of an exponential object.
This certainly holds when $X,Y,Z$ are compactly-generated Hausdorff spaces, so the category of such spaces is Cartesian closed. It actually also holds under rather weaker conditions on $X,Y$ and $Z$ individually — I can't recall them off the top of my head though and am in a bit of a rush, so am wiki'ing this answer in hopes someone can fill them in. Now, this also holds if we take weaker conditions on $X$ and $Y$ (just Hausdorffness), and a slightly stronger condition on $Y$ (local compactness). $Z$ may be arbitrary.
Re Mariano's comment: yes, in some sense this is just fancy language for “things we want to converge, converge; things we don't, don't”. But I think this helps explain why we want the things we want to converge, to converge. ☺
OR: this explains why we turn to the compactly generated $T_2$ spaces: they are nice enough to behave when we use the topology we want to use :) – Mariano Suárez-Alvarez Oct 31 2010 at 18:38
Well, they're nice enough to behave when we use some topology — we can appreciate that even if we didn't decide on this one in advance! I don't know of any other topology on the function space which works well with such a wide class of spaces? But IANATopologist, so there may be other options I'm ignorant of… – Peter LeFanu Lumsdaine Oct 31 2010 at 18:51
You just need compactly generated, no separation axioms- but the proof of this is more difficult. – David Carchedi Oct 31 2010 at 19:12
Well, from what I've seen if you give $Map(X,Y)$ the limit topology of $Map(K,Y)$- each with its compact-open topology, ranging over all maps $K \to X$ with compact Hausdorff domain. Whether or not this agrees with the compact open on $Map(X,Y)$, I'd have to check. – David Carchedi Oct 31 2010 at 19:14
Also, I'm not sure, but I think you may have to take the k-ification of the function spaces (and of the product, certainly) if you want to stay in the category of compactly generated Hausdorff spaces. – Dylan Wilson Oct 31 2010 at 19:32
http://mathhelpforum.com/calculus/114756-finding-horizontal-oblique-asymptotes.html
1. ## Finding horizontal and oblique asymptotes
Can anyone show me how to find the horizontal and oblique asymptotes for
$f(x)=\frac{x^2-1}{x-1}$. Any help would be appreciated!
EDIT: My issue is that f(x) can be simplified to x+1, which should be the slant asymptote, right? But my book says that no asymptotes exist... This is why I'm having an issue.
2. Originally Posted by paupsers
Can anyone show me how to find the horizontal and oblique asymptotes for
$f(x)=\frac{x^2-1}{x-1}$. Any help would be appreciated!
EDIT: My issue is that f(x) can be simplified to x+1, which should be the slant asymptote, right? But my book says that no asymptotes exist... This is why I'm having an issue.
the graph of the rational function is the graph of y = x + 1 with a removable discontinuity (a "hole") at the point (1,2).
there is no slant asymptote.
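As a quick check, a short SymPy sketch (my own addition, assuming SymPy is available) confirming the simplification and the hole at $(1, 2)$:

```python
import sympy as sp

x = sp.symbols('x')
f = (x**2 - 1) / (x - 1)

print(sp.cancel(f))        # x + 1: the same function away from x = 1
print(sp.limit(f, x, 1))   # 2: the "hole" is at the point (1, 2)
print(f.subs(x, 1))        # nan: f itself is undefined at x = 1
```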
http://mathoverflow.net/questions/11537?sort=oldest | ## Calculating fourier transform at any frequency
I know that if we have some data representing some wave, for example image line values, we can use the Fourier transform to get the frequency function of that wave. But we have $N$ values at points $x = 0, \ldots, N-1$, and we get only $N$ frequencies at the output. So I want to analyze the wave everywhere in the range $[0, N-1]$, for example at the point $u = 1.5$. How can I do it?
## 4 Answers
You need to have some a priori knowledge about your wave between and beyond the sampling points to get a meaningful guess about the full Fourier transform. The $N$ values that you get doing the discrete Fourier transform are related to the continuous Fourier transform only for indices much less than $N$. Note that different assumptions will lead to different answers for large frequencies. If you need only relatively low frequencies, your signal is compactly supported in time, and your sampling points cover the entire support, you can safely interpolate the values of the discrete FT to guess the continuous one, but there is no way to get reliable high-precision values for the continuous Fourier transform at frequencies comparable to $N$. The $N$-point resolution on $[0,1]$ is just not high enough to catch those frequencies without an error.

Also note that you want the frequencies not "on the interval $[0,N-1]$" as you wrote, but rather on the interval $[-M,M]$ where $M\ll N$. For decent signals, the frequencies with indices close to $N$ in the discrete FT actually catch the negative low frequencies in the continuous FT, not the high ones.
Strictly speaking, you can't. That is, you have to interpolate to do it, and to do that in a meaningful way requires some knowledge of the problem domain. However, the easiest way to do so is to note that the given function is naturally written in terms of its discrete Fourier transform as a trigonometric polynomial, so you can just go ahead and evaluate that trigonometric polynomial at any intermediate point. The same thing goes with the function and its Fourier transform interchanged, of course. Though it does not really make good mathematical sense to do this interpolation in both domains simultaneously.
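A minimal Python sketch of this idea (my own illustration, not part of the original answer): evaluate the DFT-style sum at an arbitrary, possibly fractional, frequency index $u$; at integer $u$ it reproduces the ordinary DFT, and at $u = 1.5$ it gives the interpolated value the question asks about.

```python
import numpy as np

def dft_at(signal, u):
    """Evaluate X(u) = sum_x f[x] * exp(-2*pi*i*u*x/N) at any real frequency index u."""
    N = len(signal)
    x = np.arange(N)
    return np.sum(signal * np.exp(-2j * np.pi * u * x / N))

f = np.random.default_rng(0).standard_normal(16)
print(np.allclose(dft_at(f, 3), np.fft.fft(f)[3]))  # True: agrees with the DFT at integer u
print(dft_at(f, 1.5))                                # interpolated value "between" bins 1 and 2
```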
If what you need is a simple practical method to do the interpolation, then just multiply the "time domain" samples by the linear-phase signal $\{\exp\left(\alpha\cdot i 2\pi k /N\right)\}_{k=0}^{N-1}$, where $\alpha\in(0,1)$ is the sub-sample shift in the "frequency domain." (I'm not sure if an additional constant of absolute value 1 is required here.) As already noted in previous answers, the motivation for this is the assumption that your samples vector $\{x_k\}$ comes from sampling some continuous-time signal $\{X(t)\}_{t\in \mathbb{R}}$, as in $x_k=X(kT_s)$ for $k=0,\ldots, N-1$, where $T_s\in \mathbb{R}_{++}$ is the sampling period. If the continuous-time signal has a compactly supported Fourier transform, then by Shannon's sampling theorem $X$ is determined by the infinite sequence $\{X(kT_s)\}_{k\in \mathbb{Z}}$ for small enough $T_s$. Since we only have $N$ entries of the infinite sequence, we lose some information on $X$. But if $X$ decays rapidly (rather than having a compact support), then at least intuitively we don't lose much (I know that this is a dangerous and imprecise statement :) ).
For example, try the following Matlab lines:
ttt = -32:31;                            % sample positions
x_time = exp(-ttt.^2/10);                % a smooth, rapidly decaying "time domain" signal
figure; plot(x_time);
x_freq = fft(x_time);
figure; plot(fftshift(abs(x_freq)));     % magnitude of its DFT
figure
for alpha = -4:0.5:4                     % sub-sample shift, in fractions of a frequency bin
    % multiply by a linear-phase signal to shift the spectrum by alpha bins
    x_time_2 = x_time .* exp(2 * pi * i * (0:63)/64 * alpha);
    x_freq = fft(x_time_2);
    plot(fftshift(abs(x_freq)));
    axis([26, 37, 0, 10]);
    grid on;
    pause(1);
end
Here is one answer from the electrical engineering (or image processing) point of view. However, this answer is mostly non-mathematical and more for practical illustration. Since this is a non-mathematical answer, I have made it community wiki so that I don't get reputation.
In digital signal processing, you sample an analog signal into digital encoding, and let your microprocessor work with it. When you do sampling, you clearly lose some information and there is no way you can recover it. When you become too greedy and actually try to get more, the problem of "aliasing" will happen.
Here is the wikipedia link for aliasing.
For image processing, situation is even better for an illustration, since you can actually see wavy patterns. These are called Moiré patterns, and you must have seen it sometime in the real world. Here is a wikipedia link.
Pretty silly argument to make the answer community wiki! :D – Mariano Suárez-Alvarez Jan 12 2010 at 20:55
http://en.wikipedia.org/wiki/Triangular_distribution | Triangular distribution
Plots: probability density function; cumulative distribution function.

Parameters: $a:~a\in (-\infty,\infty)$, $\quad b:~a<b$, $\quad c:~a\le c\le b$

Support: $a \le x \le b$

Probability density function: $\begin{cases} 0 & \mathrm{for\ } x < a, \\ \frac{2(x-a)}{(b-a)(c-a)} & \mathrm{for\ } a \le x \leq c, \\[4pt] \frac{2(b-x)}{(b-a)(b-c)} & \mathrm{for\ } c < x \le b, \\[4pt] 0 & \mathrm{for\ } b < x. \end{cases}$

Cumulative distribution function: $\begin{cases} 0 & \mathrm{for\ } x < a, \\[2pt] \frac{(x-a)^2}{(b-a)(c-a)} & \mathrm{for\ } a \le x \leq c, \\[4pt] 1-\frac{(b-x)^2}{(b-a)(b-c)} & \mathrm{for\ } c < x \le b, \\[4pt] 1 & \mathrm{for\ } b < x. \end{cases}$

Mean: $\frac{a+b+c}{3}$

Median: $\begin{cases} a+\frac{\sqrt{(b-a)(c-a)}}{\sqrt{2}} & \mathrm{for\ } c \ge \frac{a+b}{2}, \\[6pt] b-\frac{\sqrt{(b-a)(b-c)}}{\sqrt{2}} & \mathrm{for\ } c \le \frac{a+b}{2}. \end{cases}$

Mode: $c$

Variance: $\frac{a^2+b^2+c^2-ab-ac-bc}{18}$

Skewness: $\frac{\sqrt 2 (a\!+\!b\!-\!2c)(2a\!-\!b\!-\!c)(a\!-\!2b\!+\!c)}{5(a^2\!+\!b^2\!+\!c^2\!-\!ab\!-\!ac\!-\!bc)^\frac{3}{2}}$

Excess kurtosis: $-\frac{3}{5}$

Entropy: $\frac{1}{2}+\ln\left(\frac{b-a}{2}\right)$

Moment-generating function: $2\frac{(b\!-\!c)e^{at}\!-\!(b\!-\!a)e^{ct}\!+\!(c\!-\!a)e^{bt}} {(b-a)(c-a)(b-c)t^2}$

Characteristic function: $-2\frac{(b\!-\!c)e^{iat}\!-\!(b\!-\!a)e^{ict}\!+\!(c\!-\!a)e^{ibt}} {(b-a)(c-a)(b-c)t^2}$
In probability theory and statistics, the triangular distribution is a continuous probability distribution with lower limit a, upper limit b and mode c, where a < b and a ≤ c ≤ b. The probability density function is given by

$f(x) = \begin{cases} 0 & \mathrm{for\ } x < a, \\ \frac{2(x-a)}{(b-a)(c-a)} & \mathrm{for\ } a \le x \leq c, \\[4pt] \frac{2(b-x)}{(b-a)(b-c)} & \mathrm{for\ } c < x \le b, \\[4pt] 0 & \mathrm{for\ } b < x, \end{cases}$

whose cases avoid division by zero if c = a or c = b.
Special cases
Two points known
The distribution simplifies when c = a or c = b. For example, if a = 0, b = 1 and c = 1, then the equations above become:
$\left.\begin{matrix}f(x) &=& 2x \\[8pt] F(x) &=& x^2 \end{matrix}\right\} \text{ for } 0 \le x \le 1$
$\begin{align} E(X) & = \frac{2}{3} \\[8pt] \mathrm{Var}(X) &= \frac{1}{18} \end{align}$
Distribution of mean of two standard uniform variables
This distribution for a = 0, b = 1 and c = 0.5 is the distribution of $X = (X_1 + X_2)/2$, where $X_1$, $X_2$ are two independent random variables with standard uniform distribution.
$f(x) = \begin{cases} 4x & \text{for }0 \le x < \frac{1}{2} \\ 4-4x & \text{for }\frac{1}{2} \le x \le 1 \end{cases}$
$F(x) = \begin{cases} 2x^2 & \text{for }0 \le x < \frac{1}{2} \\ 1-2(1-x)^2 & \text{for }\frac{1}{2} \le x \le 1 \end{cases}$
$\begin{align} E(X) & = \frac{1}{2} \\[6pt] \operatorname{Var}(X) & = \frac{1}{24} \end{align}$
Distribution of the absolute difference of two standard uniform variables
This distribution for a = 0, b = 1 and c = 0 is the distribution of $X = |X_1 - X_2|$, where $X_1$, $X_2$ are two independent random variables with standard uniform distribution.
$\begin{align} f(x) & = 2 - 2x \text{ for } 0 \le x < 1 \\[6pt] F(x) & = 2x - x^2 \text{ for } 0 \le x < 1 \\[6pt] E(X) & = \frac{1}{3} \\[6pt] \operatorname{Var}(X) & = \frac{1}{18} \end{align}$
Generating Triangular-distributed random variates
Given a random variate U drawn from the uniform distribution in the interval (0, 1), then the variate
$\begin{matrix} \begin{cases} X = a + \sqrt{U(b-a)(c-a)} & \text{ for } 0 < U < F(c) \\ & \\ X = b - \sqrt{(1-U)(b-a)(b-c)} & \text{ for } F(c) \le U < 1 \end{cases} \end{matrix}$[1]
where $F(c) = (c-a)/(b-a)$,

has a triangular distribution with parameters a, b and c. This can be obtained from the cumulative distribution function.
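A minimal Python sketch of this inverse-CDF sampler (illustrative only; Python's standard library also provides random.triangular, which implements the same idea):

```python
import random

def triangular_variate(a, b, c, u=None):
    """Inverse-CDF sampling following the piecewise formula above."""
    if u is None:
        u = random.random()
    Fc = (c - a) / (b - a)
    if u < Fc:
        return a + ((b - a) * (c - a) * u) ** 0.5
    return b - ((b - a) * (b - c) * (1 - u)) ** 0.5

samples = [triangular_variate(0.0, 1.0, 0.5) for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to the mean (a + b + c)/3 = 0.5
```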
Use of the distribution
See also: Three-point estimation
The triangular distribution is typically used as a subjective description of a population for which there is only limited sample data, and especially in cases where the relationship between variables is known but data is scarce (possibly because of the high cost of collection). It is based on a knowledge of the minimum and maximum and an "inspired guess" [2] as to the modal value. For these reasons, the triangle distribution has been called a "lack of knowledge" distribution.
Business simulations
The triangular distribution is therefore often used in business decision making, particularly in simulations. Generally, when not much is known about the distribution of an outcome (say, only its smallest and largest values), it is possible to use the uniform distribution. But if the most likely outcome is also known, then the outcome can be simulated by a triangular distribution. See for example under corporate finance.
Project management
The triangular distribution, along with the Beta distribution, is also widely used in project management (as an input into PERT and hence critical path method (CPM)) to model events which take place within an interval defined by a minimum and maximum value.
Audio dithering
The symmetric triangular distribution is commonly used in audio dithering, where it is called TPDF (Triangular Probability Density Function).
http://unapologetic.wordpress.com/2012/01/13/electromotive-force/?like=1&source=post_flair&_wpnonce=c1c9c39881 | # The Unapologetic Mathematician
## Electromotive Force
When we think of electricity, we don’t usually think of charged particles pushing and pulling on each other, mediated by vector fields. Usually we think of electrons flowing along wires. But what makes them flow?
The answer is summed up in something that is named — very confusingly — “electromotive force”. The word “force” is just a word here, so try to keep from thinking of it as a force. In fact, it’s more analogous to work, in the same way the electric field is analogous to force.
We calculate the work done by a force $F$ in moving a particle along a path $C$ is given by the line integral
$\displaystyle W=\int\limits_CF\cdot dr$
If $F$ is conservative, this amounts to the difference in “potential energy” between the start and end of the path. We often interpret a work integral as an energy difference even in a more general setting. Colloquially, we sometimes say that particles “want to move” from high-energy states to low-energy ones.
Similarly, we define the electromotive force along a path to be the line integral
$\displaystyle\mathcal{E}=-\int\limits_CE\cdot dr$
If the electric field $E$ is conservative — the gradient of some potential function $\phi$ — then the electromotive force over a path is the difference between the potential at the end and at the start of the path. But, we may ask, why the negative sign? Well, it's conventional, but I like to think of it in terms of electrons, which have negative charge; a given electric field “pushes” an electron in the opposite direction, thus we should take the negative to point the other way.
In any event, just like the electric field measures force per unit of charge, electromotive force measures work per unit of charge, and is measured in units of energy per unit of charge. In the SI system, there is a unit called a volt — with symbol $\mathrm{V}$ — which is given by
$\displaystyle\mathrm{V}=\frac{\mathrm{kg}\cdot\mathrm{m}^2}{\mathrm{C}\cdot\mathrm{s}^2}$
And often electromotive force is called “voltage”. For example, a battery — through chemical processes — induces a certain difference in the electric potential between its two terminals. In a nine-volt battery this difference is, predictably enough, $9\mathrm{V}$, and the same difference is induced along the path of a wire leading from one terminal, through some electric appliance, and back to the other terminal. Just like the potential energy difference “pushes” particles along from high-energy states to low-energy ones, so the voltage difference “pushes” charged particles along the wire.
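As an illustration (not part of the original post), here is a small Python sketch that evaluates $\mathcal{E}=-\int_C E\cdot dr$ numerically for a conservative field, using the usual sign convention $E=-\nabla\phi$ and a made-up potential $\phi(r)=1/|r|$; the result agrees with the potential difference between the endpoints:

```python
import numpy as np

def phi(pt):                 # assumed potential: phi = 1/|r| (point charge, constants absorbed)
    return 1.0 / np.linalg.norm(pt)

def E(pt):                   # E = -grad(phi) = r_vec / |r|^3
    r = np.linalg.norm(pt)
    return pt / r**3

start, end = np.array([1.0, 0.0, 0.0]), np.array([2.0, 2.0, 1.0])
ts = np.linspace(0.0, 1.0, 2001)
points = start + np.outer(ts, end - start)              # straight-line path C
integrand = np.array([E(p) @ (end - start) for p in points])
emf = -np.trapz(integrand, ts)                           # EMF = -∫_C E · dr

print(emf, phi(end) - phi(start))                        # both ≈ -2/3
```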
http://physics.stackexchange.com/questions/32614/what-is-electron-momentum-density-in-solids-and-molecules | # What is electron momentum density in solids and molecules?
Can someone kindly help me understand how to get the electron momentum density for a single orbital (e.g. the HOMO)? What is the theory of electron momentum density? How can I derive the electron momentum density from the real-space electron density? How can I use the Fourier transform?
Coming from a particle physicist's perspective I find the terminology of this post difficult (different sub-fields/different vocabulary), but even leaving that aside the question is unclear. It seems you have the spacial distribution and know that the momentum distribution is connected to it by the Fourier transform. So what, exactly, is your question? – dmckee♦ Jul 23 '12 at 2:38
The momentum density of the electrons is zero in molecules in free space, because of time reversal symmetry. This means that the orbitals can be thought of as occupied states superposed with occupied time-reversed states. You can linearly combine complex orbitals which have momentum into real states, since every orbital comes with a complex conjugate. This makes the question ill posed. You can ask about electron momentum in a magnetic field, or for a given form of orbital, but if you use real orbitals, it will be zero by time-reversal invariance. – Ron Maimon Jul 23 '12 at 7:13
## 1 Answer
Electron momentum density $\rho(p)$ is the Fourier transform of the electron radial density $\rho(r)$:
$$\rho(p)=\sqrt{\frac{2}{\pi}}\,(-i)^l \int j_l(pr)\, \rho(r)\, r^2\, \mathrm{d}r$$
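For a concrete (illustrative) case, here is a Python sketch of this transform for an assumed hydrogen-like $1s$ radial function $R(r)=2e^{-r}$ in atomic units, with $l=0$; the orbital, grid, and closed form used for comparison are my own choices, not part of the original answer:

```python
import numpy as np
from scipy.special import spherical_jn

l = 0
r = np.linspace(1e-6, 40.0, 4000)
R = 2.0 * np.exp(-r)                      # assumed radial function: hydrogen 1s, atomic units

def phi_momentum(p):
    """sqrt(2/pi) * (-i)^l * integral of j_l(p r) R(r) r^2 dr, by quadrature."""
    integrand = spherical_jn(l, p * r) * R * r**2
    return np.sqrt(2.0 / np.pi) * (-1j) ** l * np.trapz(integrand, r)

p_vals = np.linspace(0.01, 4.0, 50)
phi = np.array([phi_momentum(p) for p in p_vals])
momentum_density = np.abs(phi) ** 2       # radial electron momentum density

exact = 4.0 * np.sqrt(2.0 / np.pi) / (1.0 + p_vals**2) ** 2   # known 1s closed form
print(np.max(np.abs(np.abs(phi) - exact)))                    # small discretisation error
```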
This is assuming a spherically symmetric distribution of electrons. – Ron Maimon Aug 2 '12 at 2:13
http://mathoverflow.net/questions/81707?sort=oldest | ## Localizability of differential operators a la Grothendieck
Hello,
Maybe this question is trivial, so sorry
Let $A$ be a (comm. with 1) $k$-algebra, where $k$ is a ring (comm. with 1).
Then we can define the module of differential operators $D^{\leq n}(A)$, a submodule of $End_k(A,A)$ (endomorphisms of $A$ as a $k$-module). We set $D^{\leq -1} = 0$, and then inductively $D^{\leq n} = \{ d \mid [d,a]\in D^{\leq n-1} \text{ for all } a \in A\}$.
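As an aside (not part of the original question), the inductive definition can be tested concretely; the following SymPy sketch takes $A=\mathbb{Q}[x]$ and checks that $d/dx$ has order $\le 1$: its commutator with a multiplication operator is again a multiplication operator (order $0$), and the double commutator vanishes on some test polynomials:

```python
import sympy as sp

x = sp.symbols('x')

def mult(a):                      # multiplication-by-a operator on A = Q[x]
    return lambda p: sp.expand(a * p)

def D(p):                         # the derivation d/dx
    return sp.diff(p, x)

def commutator(P, Q):
    return lambda p: sp.expand(P(Q(p)) - Q(P(p)))

tests = [1, x, x**2, x**3 + 2*x]

c1 = commutator(D, mult(x**2))    # [d/dx, x^2] acts as multiplication by 2x (order 0)
print([c1(p) for p in tests])

c2 = commutator(c1, mult(x**3))   # so the double commutator vanishes: d/dx has order <= 1
print(all(sp.simplify(c2(p)) == 0 for p in tests))
```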
We have a lemma:
Lemma. Let $f \in A$. Then for every $d \in D^{\leq n}(A)$ we can find a unique $e \in D^{\leq n}(A_f)$ such that $l\circ d = e \circ l$, where $l: A \to A_f$ is the localization map.
I think that I know how to prove the lemma, by induction on the order of diff. op. (just need to see how to apply operators to fractions). It gives us a map $D^{\leq n}(A) \to D^{\leq n}(A_f)$.
Question 1. Under which assumptions on $A/k$ this map $D^{\leq n}(A) \to D^{\leq n}(A_f)$ is a localization map (i.e. becomes an isomorphism after tensoring (say on the left, it does not matter) with $A_f$)?
Question 2 (my real question). If $A/k$ is finitely generated, or finitely presented, is this a localization map?
Somehow, I am having trouble with the "surjectivness" part. Maybe there is some reference?
Thank you, Sasha
Take a look at EGA IV, section 16 and in particular 16.8.3. (Trivial aside $D^{\le n}(A)$ isn't a ring, but the union is.) – Donu Arapura Nov 23 2011 at 18:44
Thanks very much, I'll look there. – Sasha Nov 25 2011 at 14:56
## 1 Answer
Two observations (with $k$ a field of characteristic zero):
• If $A$ is a domain over a field $k$, then elements of $D(A)$ extend to elements of $D(A_S)$ for all multiplicatively closed sets $S$.
• Your questions become easier if you ask instead about the subalgebra $\Delta(A)$ of $D(A)$ generated by $A$ and derivations: then the answer is yes to your two questions. Now, if $A$ is finitely generated and regular, then $D(A)=\Delta(A)$, so in this case the answer is yes for $D(A)$ too.
You'll find this in the last chapter of McConnell and Robson's book on noetherian rings.
(Finally, $D^{\leq n}(A)$ is an $A$-bimodule which is not symmetric, so tensoring with $A_f$ on one side or the other is not the same thing)
http://math.stackexchange.com/questions/102493/infinite-number-of-composite-pairs-6n-1-6n-1?answertab=active | # Infinite number of composite pairs $(6n + 1), (6n -1)$
I came across a problem in Dudley's Elementary Number Theory. Part (a) is to find one $n$ such that both $6n+1$ and $6n-1$ are composite. So I stumbled across $n=50$, which gives $299=13 \cdot 23$, and $301=7 \cdot 43$
The motivation for this was knowing that $9|10^k -1$ and $7|10^{2k}+1$ and trying to find an even power of ten (obviously $300$ is not a power of ten).
Part (b) asks to prove that there are infinitely many $n$ such that both are composite.
How do we show this?
Setting $6n \pm 1 = 10^{2k} \pm 1$ didn't work.
I looked at $(6n+1)(6n-1) = 36n^2-1 = 6(6n^2) -1$, so I can generate composite numbers of the form $6m -1$, but what about the corresponding $6m+1$ ?
Should I be using my result from (a) to generate these pairs?
I looked at the solutions to this related question, but they seemed focused on proving that both are prime.
Other methods that I've considered are using modular arithmetic (though that hasn't been covered at this point in the book) and using the well-ordering principle in some way... (Hints are acceptable, as I want to be an autonomous mathematician, as much as possible)
Added
If we let $n = 6^{2k} = 36^k$, then $6n \pm 1 = 6^{2k+1} \pm 1$, which (for $k \geq 1$) factors into $(6 \pm 1)(6^{2k} \mp ... +1)$
Is this correct? Here I am using a factorization of sums/differences of odd powers, which "hasn't been covered" (which isn't itself a problem).
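A quick computational check of this construction (my own sketch, using SymPy's factorint; the range of $k$ is arbitrary):

```python
from sympy import factorint

# n = 36**k gives 6n - 1 = 6**(2k+1) - 1 (divisible by 5) and 6n + 1 = 6**(2k+1) + 1
# (divisible by 7, since the exponent is odd), so both are composite for k >= 1.
for k in range(1, 6):
    n = 36 ** k
    minus, plus = 6 * n - 1, 6 * n + 1
    assert minus % 5 == 0 and plus % 7 == 0
    print(k, n, factorint(minus), factorint(plus))
```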
Chinese Remainder Theorem, $6x\equiv 1 \pmod{47}$, $6x\equiv -1 \pmod {61}$. – André Nicolas Jan 26 '12 at 1:35
@André, I will try to digest this, thanks! – The Chaz 2.0 Jan 26 '12 at 1:47
@Andre: That's really nice. I like that a lot. – mixedmath♦ Jan 26 '12 at 1:56
You don't need to use $47$ and $61$, any pair of primes relatively prime to $6$ will do. @AndréNicolas. For example $6x\equiv 1\pmod 5$ and $6x\equiv -1 \pmod 7$ yields $x\equiv 1\pmod {35}$. – Thomas Andrews Jan 26 '12 at 2:10
@Thomas Andrews: I picked two numbers that were obviously not special, though unfortunately I picked two primes. Thought that perhaps the OP could chase this down to get a family of examples. – André Nicolas Jan 26 '12 at 2:45
## 3 Answers
Hint: Instead of even powers of $10$, try odd powers of $6$.
Ah yes! I will show my work, many thanks. – The Chaz 2.0 Jan 26 '12 at 1:49
@TheChaz: Your edit is correct :-) $x^{2n+1} + 1$ is divisible by $x+1$ and $x^{2n+1} - 1$ is divisible by $x-1$. – Aryabhata Jan 26 '12 at 2:06
Great. – The Chaz 2.0 Jan 26 '12 at 2:09
Here is a way of solving the problem "non-constructively"
You can find infinitely many disjoint sequences of $8$ consecutive composite numbers, and among any 8 consecutive numbers you can find a pair of the type $6n \pm 1$.
Once you figure this "non-constructive" solution, the standard examples lead to simple solutions. For example, it is easy to figure that $k!+5, k!+7$ work for all $k \geq 7$... This would lead to $n= \frac{k!}{6}+1$.
Try to make the expressions fit forms that you know factor. Sums and differences of cubes factor. So $n=6^2k^3$ will make the expressions take on those forms.
That's the final step that I was missing. Thanks – The Chaz 2.0 Jan 26 '12 at 2:03
http://mathoverflow.net/questions/65938/the-behavior-of-pure-sheaves-under-functor-hom-f/65943 | ## The behavior of pure sheaves under functor Hom( F, -)
We know that a submodule A of B is pure if and only if the functor $Hom(M, -)$ is exact on the sequence $0 \rightarrow A \rightarrow B \rightarrow C \rightarrow 0$ for every finitely presented module M. So, let X be a scheme and ${\cal G}$ be a subsheaf of ${\cal H}$. Is there any equivalent statement for pure sheaves? ($\cal G$ is pure in $\cal H$ if ${\cal G}(U)$ is pure in ${\cal H}(U)$ for every affine open subset U of X.)
Perhaps you should assume $X$ to be qs qc to get the affine criterion out of a more general one. – Martin Brandenburg May 25 2011 at 10:47
I don't know what you mean by a qs qc scheme. Could you explain in more detail, please? – Gholam May 28 2011 at 11:04
## 1 Answer
Let $\mathbf{A}$ be a cocomplete abelian category, such as any category of sheaves. An object $M$ is finitely presented if the representable functor $\operatorname{Hom}_{\mathbf{A}}(M,-)$ preserves filtered colimits. Now that you know what a finitely presentable object is, just mimic the definition you know of pure subobject. All this is standard.
http://www.haskell.org/haskellwiki/index.php?title=User:Michiexile/MATH198/Lecture_2&diff=30183&oldid=30065 | # User:Michiexile/MATH198/Lecture 2
## Revision as of 22:54, 22 September 2009
IMPORTANT NOTE: THESE NOTES ARE STILL UNDER DEVELOPMENT. PLEASE WAIT UNTIL AFTER THE LECTURE WITH HANDING ANYTHING IN, OR TREATING THE NOTES AS READY TO READ.
### 1 Morphisms and objects
Some morphisms and some objects are special enough to garner special names that we will use regularly.
In morphisms, the important properties are
• cancellability - the categorical notion corresponding to properties we use when solving, e.g., equations over $\mathbb N$:
$3x = 3y \Rightarrow x = y$
• existence of inverses - which is stronger than cancellability. If there are inverses around, this implies cancellability, by applying the inverse to remove the common factor. Cancellability, however, does not imply that inverses exist: we can cancel the 3 above, but this does not imply the existence of $1/3\in\mathbb N$.
Thus, we'll talk about isomorphisms - which have two-sided inverses, monomorphisms and epimorphisms - which have cancellability properties, and split morphisms - which are mono's and epi's with corresponding one-sided inverses. We'll talk about how these concepts - defined in terms of equation-solving with arrows - apply to more familiar situations. And we'll talk about how the semantics of some of the more well-known ideas in mathematics are captured by these notions.
For objects, the properties are interesting in what happens to homsets with the special object as source or target. An empty homset is pretty boring, and a large homset is pretty boring. The real power, we find, is when all homsets with the specific source or target are singleton sets. This allows us to formulate the idea of a 0 in categorical terms, as well as capturing the roles of the empty set and of elements of sets - all using only arrows.
### 2 Isomorphisms
An arrow $f:A\to B$ in a category C is an isomorphism if it has a twosided inverse g. In other words, we require the existence of a $g:B\to A$ such that $fg = 1_B$ and $gf = 1_A$.
#### 2.1 In concrete categories
In a category of sets with structure with morphisms given by functions that respect the set structure, isomorphism are bijections respecting the structure. In the category of sets, the isomorphisms are bijections.
#### 2.2 Representative subcategories
Very many mathematical properties and invariants are interesting because they hold for objects regardless of how, exactly, the object is built. As an example, most set theoretical properties are concerned with how large the set is, but not what the elements really are.
If all we care about are our objects up to isomorphisms, and how they relate to each other - we might as well restrict ourselves to one object for each isomorphism class of objects.
Doing this, we get a representative subcategory: a subcategory such that every object of the supercategory is isomorphic to some object in the subcategory.
#### 2.3 Groupoids
A groupoid is a category where all morphisms are isomorphisms. The name originates in that a groupoid with one object is a bona fide group; so that groupoids are the closest equivalent, in one sense, of groups as categories.
### 3 Monomorphisms
We say that an arrow f is left cancellable if for any arrows g1,g2 we can show $fg_1 = fg_2 \Rightarrow g_1=g_2$. In other words, it is left cancellable, if we can remove it from the far left of any equation involving arrows.
We call a left cancellable arrow in a category a monomorphism.
#### 3.1 In concrete categories
Left cancellability means that if, when we do first g1 and then f we get the same as when we do first g2 and then f, then we had equality already before we followed with f.
In other words, when we work with functions on sets, f doesn't introduce relations that weren't already there. Anything non-equal before we apply f remains non-equal in the image. This, translated to formulae gives us the well-known form for injectivity:
$x\neq y\Rightarrow f(x)\neq f(y)$ or moving out the negations,
$f(x)=f(y) \Rightarrow x=y$.
### 4 Epimorphisms
Right cancellability, by analogy, is the implication
$g_1f = g_2f \Rightarrow g_1 = g_2$
The name, here comes from that we can remove the right cancellable f from the right of any equation it is involved in.
A right cancellable arrow in a category is an epimorphism.
#### 4.1 In concrete categories
For epimorphisms the interpretation in set functions is that whatever f does, it doesn't hide any part of the things g1 and g2 do. So applying f first doesn't influence the total available scope $g_1$ and $g_2$ have.
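As a concrete (non-categorical) illustration, the following Python sketch — my own addition, not part of the lecture notes — brute-forces left and right cancellability for maps between small finite sets and compares them with injectivity and surjectivity:

```python
from itertools import product

W, X, Y, Z = [0, 1], [0, 1, 2], [0, 1, 2], [0, 1]

def all_maps(dom, cod):
    return [dict(zip(dom, vals)) for vals in product(cod, repeat=len(dom))]

def left_cancellable(f):          # f g1 = f g2 implies g1 = g2, for all g1, g2 : W -> X
    return not any(g1 != g2 and all(f[g1[w]] == f[g2[w]] for w in W)
                   for g1, g2 in product(all_maps(W, X), repeat=2))

def right_cancellable(f):         # g1 f = g2 f implies g1 = g2, for all g1, g2 : Y -> Z
    return not any(g1 != g2 and all(g1[f[x]] == g2[f[x]] for x in X)
                   for g1, g2 in product(all_maps(Y, Z), repeat=2))

bijection = {0: 0, 1: 2, 2: 1}    # injective and surjective as a map X -> Y
collapse = {0: 0, 1: 0, 2: 1}     # neither injective nor surjective (it misses 2)

print(left_cancellable(bijection), right_cancellable(bijection))   # True True
print(left_cancellable(collapse), right_cancellable(collapse))     # False False
```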
### 8 Internal and external hom
• Isomorphisms and existence of inverses.
• Epi- and mono-morphisms and cancellability.
• Examples in concrete categories.
• Monomorphisms and subobjects:
• Factoring through. Equivalence relation by mutual factoring.
• Subobjects as equivalence classes of monomorphisms.
• Splitting and the existence of inverses.
• Terminal and initial objects.
• Constants. Pointless sets.
## 9 Morphisms
The arrows of a category are called morphisms. This is derived from homomorphisms.
Some arrows have special properties that make them extra helpful; and we'll name them:
Endomorphism
A morphism with the same object as source and target.
Monomorphism
A morphism that is left-cancellable. Corresponds to injective functions. We say that f is a monomorphism if for any g1,g2, the equation fg1 = fg2 implies g1 = g2. In other words, with a concrete perspective, f doesn't introduce additional relations when applied.
Epimorphism
A morphism that is right-cancellable. Corresponds to surjective functions. We say that f is an epimorphism if for any g1,g2, the equation g1f = g2f implies g1 = g2.
Note, by the way, that cancellability does not imply the existence of an inverse. Epi's and mono's that have inverses realizing their cancellability are called split.
Isomorphism
A morphism is an isomorphism if it has an inverse. Split epi and split mono imply isomorphism. Specifically, $f:v\to w$ is an isomorphism if there is a $g:w\to v$ such that $fg = 1_w$ and $gf = 1_v$.
Automorphism
An automorphism is an endomorphism that is an isomorphism.
## 10 Objects
In a category, we use a different name for the vertices: objects. This comes from the roots in describing concrete categories - thus while objects may be actual mathematical objects, they may just as well be completely different.
Just as with the morphisms, there are objects special enough to be named. An object v is
Initial
if [v,w] has exactly one element for all objects w.
Terminal
if [w,v] has exactly one element for all objects w.
A Zero object
if it is both initial and terminal.
All initial objects are isomorphic. If i1,i2 are both initial, then there is exactly one map $i_1\to i_2$ and exactly one map $i_2\to i_1$. The two possible compositions are maps $i_1\to i_1$ and $i_2\to i_2$. However, the initiality condition applies in particular to the morphism sets $[i_1,i_1]$ and $[i_2,i_2]$, so in these, the only existing morphism is $1_{i_1}$ and $1_{i_2}$ respectively. Hence, the compositions have to be this morphism, which proves the statement.
## 11 Dual category
The same proof carries over, word by word, to the terminal case. This is an illustration of a very commonly occurring phenomenon - dualization.
http://en.wikipedia.org/wiki/Quantities_of_information | # Quantities of information
A simple information diagram illustrating the relationships among some of Shannon's basic quantities of information.
The mathematical theory of information is based on probability theory and statistics, and measures information with several quantities of information. The choice of logarithmic base in the following formulae determines the unit of information entropy that is used. The most common unit of information is the bit, based on the binary logarithm. Other units include the nat, based on the natural logarithm, and the hartley, based on the base 10 or common logarithm.
In what follows, an expression of the form $p \log p \,$ is considered by convention to be equal to zero whenever p is zero. This is justified because $\lim_{p \rightarrow 0+} p \log p = 0$ for any logarithmic base.
## Self-information
Shannon derived a measure of information content called the self-information or "surprisal" of a message m:
$I(m) = \log \left( \frac{1}{p(m)} \right) = - \log( p(m) ) \,$
where $p(m) = \mathrm{Pr}(M=m)$ is the probability that message m is chosen from all possible choices in the message space $M$. The base of the logarithm only affects a scaling factor and, consequently, the units in which the measured information content is expressed. If the logarithm is base 2, the measure of information is expressed in units of bits.
Information is transferred from a source to a recipient only if the recipient of the information did not already have the information to begin with. Messages that convey information that is certain to happen and already known by the recipient contain no real information. Infrequently occurring messages contain more information than more frequently occurring messages. This fact is reflected in the above equation - a certain message, i.e. of probability 1, has an information measure of zero. In addition, a compound message of two (or more) unrelated (or mutually independent) messages would have a quantity of information that is the sum of the measures of information of each message individually. That fact is also reflected in the above equation, supporting the validity of its derivation.
An example: The weather forecast broadcast is: "Tonight's forecast: Dark. Continued darkness until widely scattered light in the morning." This message contains almost no information. However, a forecast of a snowstorm would certainly contain information since such does not happen every evening. There would be an even greater amount of information in an accurate forecast of snow for a warm location, such as Miami. The amount of information in a forecast of snow for a location where it never snows (impossible event) is the highest (infinity).
## Entropy
The entropy of a discrete message space $M$ is a measure of the amount of uncertainty one has about which message will be chosen. It is defined as the average self-information of a message $m$ from that message space:
$H(M) = \mathbb{E} \{I(M)\} = \sum_{m \in M} p(m) I(m) = -\sum_{m \in M} p(m) \log p(m).$
where
$\mathbb{E} \{ \}$ denotes the expected value operation.
An important property of entropy is that it is maximized when all the messages in the message space are equiprobable (i.e. $p(m) = 1/|M|$). In this case $H(M) = \log |M|$.
Sometimes the function H is expressed in terms of the probabilities of the distribution:
$H(p_1, p_2, \ldots , p_k) = -\sum_{i=1}^k p_i \log p_i,$ where each $p_i \geq 0$ and $\sum_{i=1}^k p_i = 1.$
An important special case of this is the binary entropy function:
$H_\mbox{b}(p) = H(p, 1-p) = - p \log p - (1-p)\log (1-p).\,$
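These formulas are easy to evaluate directly; here is a small Python sketch (illustrative, not from the article) using the $p\log p = 0$ convention for $p = 0$:

```python
import math

def H(probs, base=2):
    """Shannon entropy; terms with p = 0 contribute 0 by the p*log(p) convention."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

print(H([0.25, 0.25, 0.25, 0.25]))  # 2 bits: maximal for 4 equiprobable messages
print(H([0.5, 0.5]))                # binary entropy H_b(1/2) = 1 bit
print(H([1.0, 0.0]))                # a certain message carries no information: 0
```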
## Joint entropy
The joint entropy of two discrete random variables $X$ and $Y$ is defined as the entropy of the joint distribution of $X$ and $Y$:
$H(X, Y) = \mathbb{E}_{X,Y} [-\log p(x,y)] = - \sum_{x, y} p(x, y) \log p(x, y) \,$
If $X$ and $Y$ are independent, then the joint entropy is simply the sum of their individual entropies.
(Note: The joint entropy should not be confused with the cross entropy, despite similar notations.)
## Conditional entropy (equivocation)
Given a particular value of a random variable $Y$, the conditional entropy of $X$ given $Y=y$ is defined as:
$H(X|y) = \mathbb{E}_{{X|Y}} [-\log p(x|y)] = -\sum_{x \in X} p(x|y) \log p(x|y)$
where $p(x|y) = \frac{p(x,y)}{p(y)}$ is the conditional probability of $x$ given $y$.
The conditional entropy of $X$ given $Y$, also called the equivocation of $X$ about $Y$ is then given by:
$H(X|Y) = \mathbb E_Y \{H(X|y)\} = -\sum_{y \in Y} p(y) \sum_{x \in X} p(x|y) \log p(x|y) = \sum_{x,y} p(x,y) \log \frac{p(y)}{p(x,y)}.$
A basic property of the conditional entropy is that:
$H(X|Y) = H(X,Y) - H(Y) .\,$
## Kullback–Leibler divergence (information gain)
The Kullback–Leibler divergence (or information divergence, information gain, or relative entropy) is a way of comparing two distributions, a "true" probability distribution p, and an arbitrary probability distribution q. If we compress data in a manner that assumes q is the distribution underlying some data, when, in reality, p is the correct distribution, Kullback–Leibler divergence is the number of average additional bits per datum necessary for compression, or, mathematically,
$D_{\mathrm{KL}}(p(X) \| q(X)) = \sum_{x \in X} p(x) \log \frac{p(x)}{q(x)}.$
It is in some sense the "distance" from q to p, although it is not a true metric due to its not being symmetric.
## Mutual information (transinformation)
It turns out that one of the most useful and important measures of information is the mutual information, or transinformation. This is a measure of how much information can be obtained about one random variable by observing another. The mutual information of $X$ relative to $Y$ (which represents conceptually the average amount of information about $X$ that can be gained by observing $Y$) is given by:
$I(X;Y) = \sum_{y\in Y} p(y)\sum_{x\in X} p(x|y) \log \frac{p(x|y)}{p(x)} = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\, p(y)}.$
A basic property of the mutual information is that:
$I(X;Y) = H(X) - H(X|Y).\,$
That is, knowing Y, we can save an average of $I(X; Y)$ bits in encoding X compared to not knowing Y. Mutual information is symmetric:
$I(X;Y) = I(Y;X) = H(X) + H(Y) - H(X,Y).\,$
Mutual information can be expressed as the average Kullback–Leibler divergence (information gain) of the posterior probability distribution of X given the value of Y to the prior distribution on X:
$I(X;Y) = \mathbb E_{p(y)} \{D_{\mathrm{KL}}( p(X|Y=y) \| p(X) )\}.$
In other words, this is a measure of how much, on the average, the probability distribution on X will change if we are given the value of Y. This is often recalculated as the divergence from the product of the marginal distributions to the actual joint distribution:
$I(X; Y) = D_{\mathrm{KL}}(p(X,Y) \| p(X)p(Y)).$
Mutual information is closely related to the log-likelihood ratio test in the context of contingency tables and the multinomial distribution and to Pearson's χ2 test: mutual information can be considered a statistic for assessing independence between a pair of variables, and has a well-specified asymptotic distribution.
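The identities above can be checked numerically; the following Python sketch (the joint distribution is an arbitrary made-up example) verifies $I(X;Y)=H(X)+H(Y)-H(X,Y)$ and the Kullback–Leibler form $D_{\mathrm{KL}}(p(X,Y)\|p(X)p(Y))$:

```python
import numpy as np

p_xy = np.array([[0.30, 0.10],
                 [0.05, 0.55]])          # made-up joint distribution p(x, y)

def H(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)
I_entropies = H(p_x) + H(p_y) - H(p_xy)
I_kl = np.sum(p_xy * np.log2(p_xy / np.outer(p_x, p_y)))

print(I_entropies, I_kl)                 # equal, about 0.36 bits for this example
```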
## Differential entropy
See main article: Differential entropy.
The basic measures of discrete entropy have been extended by analogy to continuous spaces by replacing sums with integrals and probability mass functions with probability density functions. Although in both cases mutual information expresses the number of bits of information common to the two sources in question, the analogy does not imply identical properties; for example, differential entropy may be negative.
The differential analogues of entropy, joint entropy, conditional entropy, and mutual information are defined as follows:
$h(X) = -\int_X f(x) \log f(x) \,dx$
$h(X,Y) = -\int_Y \int_X f(x,y) \log f(x,y) \,dx \,dy$
$h(X|y) = -\int_X f(x|y) \log f(x|y) \,dx$
$h(X|Y) = \int_Y \int_X f(x,y) \log \frac{f(y)}{f(x,y)} \,dx \,dy$
$I(X;Y) = \int_Y \int_X f(x,y) \log \frac{f(x,y)}{f(x)f(y)} \,dx \,dy$
where $f(x,y)$ is the joint density function, $f(x)$ and $f(y)$ are the marginal distributions, and $f(x|y)$ is the conditional distribution.
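As a quick illustration of the remark that differential entropy may be negative (an editorial example using the standard Gaussian density): for $X \sim \mathcal{N}(\mu, \sigma^2)$,
$$h(X) = \tfrac{1}{2}\log\left(2\pi e \sigma^2\right),$$
which is negative whenever $\sigma^2 < 1/(2\pi e)$, even though the entropy of a discrete random variable is always non-negative.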
http://nrich.maths.org/2864/note

### Round and Round a Circle
Can you explain what is happening and account for the values being displayed?
# Sine and Cosine for Connected Angles
##### Stage: 4
The relationship between trig values for angles that are in the ratio $1:2$ is important for further study in trigonometry.
The pegboard gives a simple context in which to discover and explore this relationship.
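As an editorial illustration of the kind of relationship meant here (assuming the standard double-angle identities), for any angle $\theta$ one has
$$\sin 2\theta = 2\sin\theta\cos\theta, \qquad \cos 2\theta = \cos^2\theta - \sin^2\theta = 1 - 2\sin^2\theta,$$
which is exactly the connection between the trig values of an angle and those of twice that angle.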
http://mathoverflow.net/revisions/93969/list

## Return to Question
2 deleted 22 characters in body
I'd like to have some simple examples of quasi-categories to understand better some concepts and one of the most basic (for me) should be the category of chain complexes.
Has anyone ever written down (more or less explicitly) what the simplicial set corresponding to the quasi-category associated with the category of (say unbounded) chain complexes on an abelian category looks like?
I am not looking for an enhancement of the derived category or anything like this, I'm thinking of the much simpler infinity category where higher morphisms correspond to homotopies between complexes (i.e. $Hom(A,B[-i])$). My understanding is that the derived category should then be constructed as a localization of this $\infty$-category.
I am guessing my problem lies with the coherent nerve for simplicial categories.
1
# how to make the category of chain complexes into an $\infty$-category
I'd like to have some simple examples of quasi-categories to understand better some concepts and one of the most basic (for me) should be the category of chain complexes.
Has anyone ever written down (more or less explicitly) what the simplicial set corresponding to the quasi-category associated with the category of (say unbounded) chain complexes on an abelian category looks like?
I am not looking for an enhancement of the derived category or anything like this, I'm thinking of the much simpler infinity category where higher morphisms correspond to homotopies between complexes (i.e. $Hom(A,B[-i])$). My understanding is that the derived category should then be constructed as a localization of this $\infty$-category.
I am guessing my problem lies with the coherent nerve for simplicial categories.
http://chemistry.stackexchange.com/questions/2554/why-does-gas-particle-velocity-affect-rate-of-effusion

# Why does gas particle velocity affect rate of effusion?
I understand why smaller particles have more velocity, but I don't understand what velocity has to do with rate of effusion:
My reasoning is thus:
1. Pressure is the number of impacts of particles in a given period of time.
2. If He and Ar are both in balloons at, say, 1.5 atm, both gases have the same average kinetic energy but He is moving faster (because it is lighter). Simple enough.
3. If both gases are subject to the same pressure, they have the same number of impacts over a given area over a given period in time.
4. This should mean that each gas has an equal number of particles approaching a hole over a given period of time.
5. If a hole is big enough for both particles to fit through, the rate of effusion should be the same, since the same number of particles are approaching the exit hole.
For an analogy to explain my thinking ... if you have two lanes of cars going through a checkpoint, one has 30 cars per minute at 50 miles per hour and one has 30 cars per minute at 30 miles per hour, the resulting number of cars through the checkpoint should still be 30 cars per minute in each lane, regardless of their velocity.
My book (and professor, and Wikipedia...) say that it is the higher velocity of He particles which causes the faster rate of effusion; however, from my reasoning, that doesn't make sense except with small enough holes where the size of the particles would have an effect, i.e. the holes are very small in size and He could fit through but Ar could not, or there are simply more holes that He could fit through.
But Graham's law is referring to pinholes, which should be large enough for either atom to fit through.
So why does velocity affect rate of effusion?
## 1 Answer
Your point 1 is mistaken (or incomplete): $$P=\frac{F}{A}=\frac1A\frac{\Delta p}{\Delta t}=\frac1A\times\frac{2nmv}{\Delta t}$$
This is assuming all particles hit perpendicularly (you can always modify this for all collisions by taking the average component of velocity in the perpendicular direction).
So, pressure is proportional to number of impacts, mass, and velocity. Pressure is not simply "number of impacts", rather it is a combination of number of impacts with other stuff. Since the energies are equal, we can say that $v\propto m^{-\frac12}$, and we get $P\propto n m^{\frac12}$. P is also the same, so $n\propto\frac1{\sqrt m}\propto v$.
Thus, when the average kinetic energy and pressure are the same, the rate of effusion is inversely proportional to the square root of the mass.
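As a worked illustration (an editorial addition, using molar masses of roughly 4.0 g/mol for He and 39.9 g/mol for Ar), this inverse-square-root dependence gives
$$\frac{\text{rate}_{\text{He}}}{\text{rate}_{\text{Ar}}} = \sqrt{\frac{M_{\text{Ar}}}{M_{\text{He}}}} = \sqrt{\frac{39.9}{4.0}} \approx 3.2,$$
i.e. helium effuses roughly three times faster than argon under the same conditions.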
I know this is pretty late ... but isn't it still the same number of impacts? i.e. a 5L balloon of helium has the same n as a 5L balloon of Xe. The difference is that the Xe is slower with more mass, but it should still be the same number of impacts, shouldn't it? – Daniel Ball Nov 27 '12 at 17:19
Nevermind, I see what I'm missing. Thanks for your help. – Daniel Ball Nov 27 '12 at 17:26
http://mathhelpforum.com/calculus/130038-cooling-problem.html

# Thread:
1. ## Cooling problem
A thermometer is taken from a room where the temperature is 22°C and placed outside, where the temperature is -10°C. One minute later, the thermometer reads 10°C. Use Newton's law of cooling to find the temperature T of the thermometer after t minutes.
2. Originally Posted by calculuskid1
A thermometer is taken from a room where the temperature is 22°C and placed outside, where the temperature is -10°C. One minute later, the thermometer reads 10°C. Use Newton's law of cooling to find the temperature T of the thermometer after t minutes.
Newton's law of cooling says that the rate of temperature change of an object is directly proportional to the difference between the object's temperature and the temperature of its environment.
$\frac{dT}{dt} = k[T - (-10)]$
separate variables, integrate, and get temperature, $T$, as a function of time, $t$.
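A sketch of how that computation finishes (an editorial addition following the hint above): separating variables in $\frac{dT}{T+10} = k\,dt$ and integrating gives $T(t) = -10 + Ce^{kt}$. The initial condition $T(0)=22$ gives $C=32$, and $T(1)=10$ gives $e^{k} = \frac{20}{32} = \frac{5}{8}$, so
$$T(t) = -10 + 32\left(\tfrac{5}{8}\right)^{t}.$$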
http://stats.stackexchange.com/questions/47707/ar1-and-law-of-iterated-expectations-no-serial-correlation

# AR1 and Law of Iterated Expectations: No serial correlation
In the AR(1) model $y_{t}=\beta_{0}+\beta_{1}y_{t-1}+u_{t}$, assuming $E(u_{t-1}|y_{t-1},y_{t-2}...)=0$, how does the law of iterated expectations ensure that the errors must be uncorrelated: $E(u_{t}u_{s}|x_{t},x_{s})=0$?
Is this a homework problem? In any event, the statement that errors are uncorrelated is an assumption of the AR(1) model, not a (genuine) derivable result. – Arthur Small Jan 14 at 20:36
No not homework - reading through Woolridge intro - page 385. It is stated that given AR(1) and the assumption that the error term has zero mean given all past values of y, the errors are uncorrelated. – B_Miner Jan 14 at 20:42
## 1 Answer
Consider $s \neq t$. The law of iterated expectations tells us that
$$\begin{equation*} \mathbb{E}[u_su_t \mid y_{s-1}, y_{t-1}] = \mathbb{E}_{u_s}\left[\mathbb{E}[u_su_t \mid u_s, y_{s-1}, y_{t-1}]\right] = \mathbb{E}_{u_s}\left[u_s\mathbb{E}[u_t \mid u_s, y_{s-1}, y_{t-1}]\right]. \end{equation*}$$
Suppose that $s < t$, with the opposite case following by symmetry.
The AR(1) process has the property that, for $y_t = \beta_0 + \beta_1 y_{t-1} + u_t$, $\mathbb{E}[u_t \mid y_{t-1}, y_{t-2}, \dots] = \mathbb{E}[u_t \mid y_{t-1}] = 0$. In words, if I know what happened last period, then nothing else in the past can help me guess what happens today.
Rewriting our AR(1) equation reveals that $u_s = y_s - \beta_0 - \beta_1 y_{s-1}$; there is a one-to-one transformation between $u_s$ and $y_s$. As a result, conditioning on $u_s$ is equivalent to conditioning on $y_s$.
Using this relationship and our fact about the AR(1) process, we find that $$\begin{equation*} \mathbb{E}[u_t \mid u_s, y_{s-1}, y_{t-1}] = \mathbb{E}[u_t \mid y_s, y_{s-1}, y_{t-1}] = \mathbb{E}[u_t \mid y_{t-1}] = 0. \end{equation*}$$
Plugging this in to the LIE result above gives that $$\begin{equation*} \mathbb{E}_{u_s}\left[u_s\mathbb{E}[u_t \mid u_s, y_{s-1}, y_{t-1}]\right] = \mathbb{E}_{u_s}\left[u_s \times 0\right] = 0. \end{equation*}$$
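A quick numerical illustration of this conclusion (an editorial addition, not part of the original answer): simulate an AR(1) process with i.i.d. innovations and check that consecutive innovations are essentially uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(0)
beta0, beta1, n = 0.5, 0.8, 100_000

u = rng.normal(size=n)          # innovations with E[u_t | past] = 0
y = np.empty(n)
y[0] = beta0 / (1 - beta1)      # start near the unconditional mean
for t in range(1, n):
    y[t] = beta0 + beta1 * y[t - 1] + u[t]

# Sample correlation between u_t and u_{t-1}: close to zero up to sampling error.
print(np.corrcoef(u[1:], u[:-1])[0, 1])
```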
In the first part, the conditioning is on $y_{s-1},y_{t-1}$, correct? And when you condition on $u_{s}$, that amounts to making $u_{s}$ a constant? – B_Miner Jan 25 at 0:47
Yes to both points. – Charlie Jan 25 at 15:44
http://mathhelpforum.com/calculus/207636-how-sequence-relates-derivative.html

# Thread:
1. ## how a sequence relates to a derivative?
i'm having some trouble with the ideas behind this question.
f is a continuous, differentiable function with f(0)=0
given $a_{n}=nf(\frac{1}{n})$ prove that $\lim_{n\to\infty}a_{n}=f^\prime(0)$
so i need a way to relate the value that some sequence converges to and the first derivative of a related function evaluated at 0.
i'm having a really hard time with this question. effectively i've got nothing on it and its been a good 5-7 hours so i don't think i can solve it with what i know already.
2. ## Re: how a sequence relates to a derivative?
$\lim_{n\to\infty}a_n=\lim_{n\to\infty}\frac{f\left (\frac{1}{n} \right)}{\frac{1}{n}}$
This is the indeterminate form 0/0, so apply L'Hôpital's rule to get the desired result.
3. ## Re: how a sequence relates to a derivative?
so the numerator would be $f^\prime(1/n)*(1/n)^\prime$ by the chain rule
and the denominator $f(1/n)^\prime$
so then i have $\lim_{x\to\infty} a_n = \lim_{x\to\infty} f^\prime(1/n)$
and since 1/n goes to 0 then it becomes
$\lim_{x\to\infty} a_n = \lim_{x\to\infty} f^\prime(0)$
and i'm done?
4. ## Re: how a sequence relates to a derivative?
Your numerator is correct, but the denominator would simply be (1/n)', which cancels with this same factor in the numerator that results from the chain rule.
After that, what you wrote is correct, except let n go to infinity rather than x.
5. ## Re: how a sequence relates to a derivative?
kk thanks.
6. ## Re: how a sequence relates to a derivative?
$f^\prime(1/n)*(1/n)^\prime$ by the chain rule
and the denominator $(1/n)^\prime$
so then i have $\lim_{n\to\infty} a_n = \lim_{n\to\infty} f^\prime(1/n)$
and since 1/n goes to 0 then it becomes
$\lim_{n\to\infty} a_n = f^\prime(0)$
and i'm done?
so then this? sorry im exhausted.
7. ## Re: how a sequence relates to a derivative?
Yes, that looks good!
http://mathhelpforum.com/advanced-algebra/179061-orthonormal-set.html

Thread:
1. Orthonormal Set
I have this problem that says show that $\left \{ \frac12, sin(x), cos(x) \right \}$ is an orthonormal set relative to the inner product defined by $\frac1\pi \int_{-\pi}^{\pi}f(x)g(x)dx$. When I take the norms of each item in the set I get 1/2, 1, 1 respectively. Could someone point out to what I am doing wrong?
Thanks
2. Are you sure it's supposed to be an orthonormal set and not just an orthogonal set? I get what you get for the norms, so the set is definitely not orthonormal w.r.t. that inner product. Maybe they meant to have the set
$\left\{\frac{\sqrt{2}}{2},\sin(x),\cos(x)\right\},$
which I believe is orthonormal w.r.t. that inner product.
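A quick symbolic check of this (an editorial illustration using SymPy; the helper function is mine):

```python
import sympy as sp

x = sp.symbols('x')
basis = [sp.sqrt(2)/2, sp.sin(x), sp.cos(x)]

def inner(f, g):
    # <f, g> = (1/pi) * integral_{-pi}^{pi} f(x) g(x) dx
    return sp.integrate(f*g, (x, -sp.pi, sp.pi)) / sp.pi

for i, f in enumerate(basis):
    for j, g in enumerate(basis):
        print(i, j, sp.simplify(inner(f, g)))  # 1 on the diagonal, 0 off it
```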
3. I went ahead and made it orthonormal and I got the same set that you derived. I am not sure if my teacher made a mistake or he wanted us to make them orthonormal. Thanks for your help!
4. You're welcome!
http://mathoverflow.net/revisions/36181/list

## Return to Answer
2 added 281 characters in body
There does not exist a reasonable necessary and sufficient condition for an infinite centerless group to be complete. More precisely, letting $V$ be the set-theoretic universe, there exists an infinite complete group $G \in V$ and a $c.c.c$ notion of forcing $\mathbb{P}$ such that $G$ has an outer automorphism in the generic extension $V^{\mathbb{P}}$. An example can be found in :
S. Thomas, The automorphism tower problem II, Israel J. Math. 103 (1998), 93-109.
In fact, it can be shown that the group $G$ in this paper satisfies the stronger property that $G \not \cong Aut(G)$ as abstract groups in $V^{\mathbb{P}}$. In other words, there does not even exist a ''non-canonical isomorphism''.
For more on the "nonabsoluteness" of the height of automorphism towers, see:
J. Hamkins and S. Thomas, Changing the heights of automorphism towers, Annals Pure Appl. Logic 102 (2000), 139-157.
1
There does not exist a reasonable necessary and sufficient condition for an infinite centerless group to be complete. More precisely, letting $V$ be the set-theoretic universe, there exists an infinite complete group $G \in V$ and a $c.c.c$ notion of forcing $\mathbb{P}$ such that $G$ has an outer automorphism in the generic extension $V^{\mathbb{P}}$. An example can be found in :
S. Thomas, The automorphism tower problem II, Israel J. Math. 103 (1998), 93-109.
For more along these lines, see:
J. Hamkins and S. Thomas, Changing the heights of automorphism towers, Annals Pure Appl. Logic 102 (2000), 139-157.
http://jellymatter.com/2011/09/14/carbon-rejuvination-not-capture/

# Jellymatter
The blog that is not afraid of equations… or bees
## Carbon rejuvenation not capture
by Joshaniel Cooper
Carbon capture has been widely purported to be an easy solution to one of the many global crises (the carbon dioxide one in this case). It essentially revolves around the principle that $CO_2$ in the atmosphere is bad, so we should take it out of the atmosphere and put it somewhere else (tankers, the sea, secret underground lairs etc.), and for the most part the concept is correct. It is also rather short-sighted, though, as any $CO_2$ caught will eventually escape and add to the issues of a future generation (not to mention all the hassle of catching it in the first place). Fortunately help is at hand!
I recently went to a conference on electrochemistry. The last talk of the conference was by a guy called Andrew Bocarsly, and it was by far the most interesting talk of the conference (and I enjoy electrochemistry, so the rest of them weren't exactly boring). His (as it turned out serendipitously discovered) solution to the issue of $CO_2$ storage was ingenious: $CO_2$ has carbon in it and fuel has carbon in it, so just convert the $CO_2$ into fuel again! The caveat is of course that this must be done whilst expending less energy (or $CO_2$) than getting rid of it would.
Now $CO_2$ conversion back to fuel is not a simple chemical process: carbon dioxide and water, $CO_2 + 2H_2O$, go to methanol and oxygen, $CH_3OH + \tfrac{3}{2}O_2$. This is a six-electron transfer process which involves at least three intermediate steps, thought to be $CO_2$ to formic acid to formaldehyde to methanol if you want to know. Finding suitable catalysts for this process would seem to be a nightmare (ask any chemist, they would agree I think), but here is the serendipitous part: it turns out that the catalyst for every step of this process is the same, and it turns out that it isn't even platinum! (Almost all catalysts in electrochemistry are platinum, it seems.) Instead it is an incredibly simple organic molecule called pyridine: a benzene ring with a nitrogen in it, and that's it. Though many other aromatic molecules work too.
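For reference (an editorial addition, quoting the textbook form of this reduction), the overall six-electron half-reaction is
$$CO_2 + 6H^+ + 6e^- \longrightarrow CH_3OH + H_2O.$$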
Using this catalyst and a semiconducting electrode it is possible to use photoelectrochemistry (light) to drive the reaction of $CO_2$ to methanol (and now butanol if you get the conditions right which is technically more relevant as it is already used as a fuel). Additionally, it is even possible using only low energy photons (red etc.) so it doesn’t even need to be in space to find UV, which is nice.
Of course the research is ongoing and things may get even better (or may turn out not to be scalable), but this may well be a glimpse of the future. For further details see this paper or look up Andrew Bocarsly (googling photoelectrochemistry carbon dioxide methanol seems to work as well).
http://physics.stackexchange.com/questions/tagged/complex-systems?sort=faq&pagesize=15

# Tagged Questions
The complex-systems tag has no wiki summary.
1 answer, 83 views
### How to find the value of the parameter $a$ in this transfer function?
I am given a transfer function of a second-order system as: $$G(s)=\frac{a}{s^{2}+4s+a}$$ and I need to find the value of the parameter $a$ that will make the damping coefficient $\zeta=.7$. I am not ...
2 answers, 257 views
### Hamiltonian or not?
Is there a way to know if a system described by a known equation of motion admits a Hamiltonian function? Take for example $$\dot \vartheta_i = \omega_i + J\sum_j \sin(\vartheta_j-\vartheta_i)$$ ...
3 answers, 447 views
### What are some of the best books on complex systems?
I'm rather interested in getting my feet wet at the interface of complex systems and emergence. Can anybody give me references to some good books on these topics? I'm looking for very introductory ...
1 answer, 127 views
### Scale invariance and self organized criticality
On Wikipedia I have found this statement: In physics, self-organized criticality (SOC) is a property of (classes of) dynamical systems which have a critical point as an attractor. Their ...
http://math.stackexchange.com/questions/158089/infinite-products-reference-needed/158099

# Infinite products - reference needed!
I am looking for a small treatment of basic theorems about infinite products ; surprisingly enough they are nowhere to be found after googling a little. The reason for this is that I am beginning to read Davenport's Multiplicative Number Theory, and the treatment of L-functions in there requires to understand convergence/absolute convergence of infinite products, which I know little about. Most importantly I'd like to know why
$$\prod (1+|a_n|) \to a < \infty \quad \Longrightarrow \quad \prod (1+ a_n) \to b \neq 0.$$
I believe I'll need more properties of products later on, so just a proof of this would be appreciated but I'd also need the reference.
Thanks in advance,
By the way, working over $\mathbb C$. – Patrick Da Silva Jun 14 '12 at 3:45
Is your question why does $\prod (1+a_n)$ converge? or why is the limit not $0$? or both? – user17762 Jun 14 '12 at 3:48
@Marvis : Both! This property is used when treating convergence of the product expression for $L(s,\chi)$. – Patrick Da Silva Jun 14 '12 at 3:52
There must also be an additional assumption that none of the terms $a_n$ are $-1$ for the product to be non-zero i.e. something like $\prod (1+a_n) = 0$ iff at-least one $a_n = -1$. – user17762 Jun 14 '12 at 3:55
I don't have any books with me, but the basics of infinite products are often covered in introductory complex analysis textbooks (because they are useful in any number of places in complex analysis, and arguably, rigorous development of the formal use of infinite products was one of the main historical motivations for complex analysis / analysis in general). Ahlfors's Complex analysis and Conway's Functions of one complex variable I both come to mind as possible sources for this. – leslie townes Jun 14 '12 at 4:02
## 2 Answers
"Most importantly I'd like to know why $$\prod (1+|a_n|) \to a < \infty \quad \Longrightarrow \quad \prod (1+ a_n) \to b \neq 0. "$$
We will first prove that if $\sum \lvert a_n \rvert < \infty$, then the product $\prod_{n=1}^{\infty} (1+a_n)$ converges. Note that the condition you have $\prod (1+|a_n|) \to a < \infty$ is equivalent to the condition that $\sum \lvert a_n \rvert < \infty$, which can be seen from the inequality below. $$\sum \lvert a_n \rvert \leq \prod (1+|a_n|) \leq \exp \left(\sum \lvert a_n \rvert \right)$$
Further, we will also show that the product converges to $0$ if and only if one of its factors is $0$.
If $\sum \lvert a_n \rvert$ converges, then there exists some $M \in \mathbb{N}$ such that for all $n > M$, we have that $\lvert a_n \rvert < \frac12$. Hence, we can write $$\prod (1+a_n) = \prod_{n \leq M} (1+a_n) \prod_{n > M} (1+a_n)$$ Throwing away the finitely many terms till $M$, we are interested in the infinite product $\prod_{n > M} (1+a_n)$. We can define $b_n = a_{n+M}$ and hence we are interested in the infinite product $\prod_{n=1}^{\infty} (1+b_n)$, where $\lvert b_n \rvert < \dfrac12$. The complex logarithm satisfies $1+z = \exp(\log(1+z))$ whenever $\lvert z \rvert < 1$ and hence $$\prod_{n=1}^{N} (1+b_n) = \prod_{n=1}^{N} e^{\log(1+b_n)} = \exp \left(\sum_{n=1}^N \log(1+b_n)\right)$$ Let $f(N) = \displaystyle \sum_{n=1}^N \log(1+b_n)$. By the Taylor series expansion, we can see that $$\lvert \log(1+z) \rvert \leq 2 \lvert z \rvert$$ whenever $\lvert z \rvert < \frac12$. Hence, $\lvert \log(1+b_n) \rvert \leq 2 \lvert b_n \rvert$. Now since $\sum \lvert a_n \rvert$ converges, so does $\sum \lvert b_n \rvert$ and hence so does $\sum \lvert \log(1+b_n) \rvert$. Hence, $\lim_{N \rightarrow \infty} f(N)$ exists. Call it $F$. Now since the exponential function is continuous, we have that $$\lim_{N \to \infty} \exp(f(N)) = \exp(F)$$ This also shows that why the limit of the infinite product $\prod_{n=1}^{\infty}(1+a_n)$ cannot be $0$, unless one of its factors is $0$. From the above, we see that $\prod_{n=1}^{\infty}(1+b_n)$ cannot be $0$, since $\lvert F \rvert < \infty$. Hence, if the infinite product $\prod_{n=1}^{\infty}(1+a_n)$ is zero, then we have that $\prod_{n=1}^{M}(1+a_n) = 0$. But this is a finite product and it can be $0$ if and only if one of the factors is zero.
Most often this is all that is needed when you are interested in the convergence of the product expressions for the $L$ functions.
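As a numerical illustration of the statements above (an editorial addition; the sequence $a_n = (-1)^n/(n+1)^2$ is chosen purely for the example), the partial products settle at a nonzero limit while respecting the inequality quoted at the start of the answer:

```python
import math

N = 100_000
prod_signed = 1.0   # partial product of (1 + a_n)
prod_abs = 1.0      # partial product of (1 + |a_n|)
sum_abs = 0.0       # partial sum of |a_n|

for n in range(1, N + 1):
    a = (-1) ** n / (n + 1) ** 2
    prod_signed *= 1 + a
    prod_abs *= 1 + abs(a)
    sum_abs += abs(a)

print(prod_signed)                                  # converges to a nonzero limit
print(sum_abs <= prod_abs <= math.exp(sum_abs))     # the sandwich inequality holds
```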
Hm. So absolutely convergent products defined in this sense (the $\prod (1 + |a_n|)$ way) makes more sense to me now. Thanks! This is precisely what I needed to get over the point I was stuck at. +1 & check! – Patrick Da Silva Jun 14 '12 at 4:46
You won't believe how valuable it is to me, Marvis. Thanks @Martin. Thanks Marvis. I want to read every word in it instantly. Many +1 :) – Babak S. Nov 24 '12 at 8:11
I am making this CW, so that other people can add further references.
Books
• Konrad Knopp: Infinite Sequences and Series; see p.92. I believe Knopp's books can be considered a classical references for this.
• Konrad Knopp: Theorie und Anwendung der unendlichen Reihen; or the English translation Theory and Application of Infinite Series. There is a whole chapter devoted to infinite products, see p.218. The book is freely available here. This is given as a reference in the Wikipedia article. (You can read this on the Talk page of that article: This article is probably very non-ideal, but when I needed this material a while ago it was hard to find, and so I figured wikipedia would be a good place to hold it. So it seems you are not the only one who had problems with finding references about infinite products.)
• Earl David Rainville: Infinite Series. This book was mentioned in connection with infinite products in this answer.
• Reinhold Remmert: Classical Topics in Complex Function Theory, Graduate Texts in Mathematics, Volume 172, translated from German.
Part A of this outstanding book is dedicated to infinite products. It has six chapters on 140+ pages and covers a lot of classical material, from very basic convergence theory to quite advanced material on functions in one complex variable. Highlights include the sine product, partition products, a detailed treatment of the $\Gamma$-function and the $\mathrm{B}$-function, the Weierstraß product theorem, Iss'sa's theorem, Mittag-Leffler's theorem and much more. In addition the book has a lot of historical references and remarks and recommendations for further reading.
• J. N. Sharma: Infinite Series and Products, see p.129. I did not know about this book, I found it using Google Books - see below.
Online
• S. E. Payne: Infinite Sums, Infinite Products, and $\zeta(2k)$
• W.W.L. Chen: Fundamentals of analysis Chapter 7
• Bibliography for Infinite Products (compiled by John H. Mathews)
Searches
The reason I've included this part is that I only knew about Knopp's book(s) offhand, but it seemed very probable that there are plenty of notes available online. This is how I found Payne's and Chen's notes; you can check the search results for yourself to see, whether you find some other interesting things.
Now I realized that I am not sure which of the above references deal with infinite product of real numbers only (obviously, you want to know about complex numbers). But I believe that some of the proofs (Cauchy criterion, absolute convergence) are similar for the real and complex case. – Martin Sleziak Jun 14 '12 at 6:19
http://math.stackexchange.com/questions/86250/finding-all-functions-f-satisfying-ft-ft-int-abftdt/86254

# Finding all functions $f$ satisfying $f'(t)=f(t)+\int_a^bf(t)dt$
I am trying to find all functions f satisfying $f'(t)=f(t)+\int_a^bf(t)dt$.
This is a problem from Spivak's Calculus and it is the chapter about Logarithms and Exponential functions. I gave up and read the solution (which I quickly regretted, but at the same time realized that I had not carefully read one very important theorem* in the text) to find that it begins with:
We know $f''(t)=f'(t)$.
How do we know this? Also, in general, how would you have approached this problem? Any solution with your thoughts written out would be very appreciated. I am not interested in the solution, but the thought process behind this.
*For the curious, the theorem was that:
If $f$ is differentiable and $f'(x)=f(x)$ for all $x$ then there is a number $c$ s.t. $f(x)=ce^x$ for all $x$.
Unfortunately, $t$ is used in two different roles here: an ordinary variable and a "dummy variable" in the integral. It would be better practice, and less confusing, to use a different letter for the dummy variable, e.g. $f'(t) = f(t) + \int_a^b f(s)\ ds$. That makes it more obvious that this term is constant. – Robert Israel Nov 28 '11 at 8:10
Sri, thanks for the edit. I could barely read the question before you intervened :) – The Chaz 2.0 Dec 7 '11 at 0:07
## 4 Answers
1. How do we know that $f''(x) = f'(x)$? Differentiate both sides of $$f'(x) = f(x) + \int_a^b f(t)\,dt.$$ Remember: $\int_a^bf(t)\,dt$ is a number (the net signed area between the graph of $y=f(x)$, the $x$-axis, and the lines $x=a$ and $x=b$). So what is its derivative?
Why would you do this? Because that integral is somewhat annoying: if you just had $f'(x) = f(x)$, then you would be able to solve the differential equation simply enough (e.g., with the theorem you have). But since all that is standing in our way is a constant that is adding, differentiating should spring to mind: that will get rid of the constant, and just "shift" the problem "one derivative down" (to a relation between $f''(x)$ and $f'(x)$).
2. Once you know that $f''(x) = f'(x)$, let $g(x) = f'(x)$. Then we have $g'(x)=g(x)$, so the theorem applies to $g(x)$ (exactly what we were hoping for). And you go from there.
Added. As both Didier Piau and Robert Israel point out, it's probably definitely bad practice to use the same letter as both an actual variable and the variable of integration (sometimes called the 'dummy variable'). Though I see from looking at my copy of Spivak that this originated in the text and not with you.
Sorry but why one would reproduce OP's (mal)practice of using the same unknown in and out of the integral baffles me. Before saying everything else, I would try to explicitly discourage it. – Did Nov 28 '11 at 8:26
@DidierPiau: A fair point; though Spivak does exactly that, so it's not the OP's practice. Problem 26 in Chapter 17 reads: "Find all functions $f(t)$ that satify $f'(t) = f(t) + \int_0^1 f(t)\,dt$." (At least in my Second Edition in Spanish) – Arturo Magidin Nov 28 '11 at 14:13
Yes. But if your edition is faithful to the original in English, you can check that Problem 26 is the one and only (unfortunate) occurrence. Everywhere else, Spivak is correct. Anyway, to me all this has little to do with what we write here and how we choose to write it (and I do not understand the probably in your Addendum). – Did Nov 28 '11 at 18:08
Since $\int_a^bf(t)dt$ is a constant, we denote it by $C$, so we get $f'(t)=f(t)+C$. That is, $\frac{df(t)}{dt}=f(t)+C$, i.e. $\frac{df(t)}{f(t)+C}=dt$. Integrating, we get $\ln(f(t)+C)=t+D$, where $D$ is a constant. Thus, $f(t)=e^{t+D}-C$. Letting $e^D=K$, we get $f(t)=K{e^{t}}-C$.
Now plug it into the original equation: we get $Ke^t=Ke^t-C+\int_a^b{(Ke^t-C)}dt$. After some manipulation, we have $K=\frac{b-a+1}{e^b-e^a}C$. Note that here we assume $b \neq a$.
If $b=a$, then $C$ is obviously $0$, so the equation simplifies to $f'(t)=f(t)$. In this case, $f(t)=ke^t$, where $k$ is an arbitrary constant.
@DidierPiau, of course, it's my mistake. – Emmad Kareem Nov 28 '11 at 9:14
+1, clear answer. – Emmad Kareem Nov 28 '11 at 13:03
You know that $f''(t)=f'(t)$ because you can differentiate the equation you are trying to solve.
Since the second term in the right hand side of the equation you are trying to solve is annoying, it is a good idea to try to get rid of it. Since it is constant, then that can be done by differentiating the equality with respect to $t$.
Yes it's a solved problem. Here's a synopsis that may be a little easier to follow. Start with the equation $$f^\prime(t) = f(t) + \int_a^bf(t)dt,$$ for $a<b$ and note that the integral on the RHS is $(b-a)$ times the average value of $f$ on $[a,b]$; call this term M, which is constant for a given $f$. Any solution of $f^\prime(t)=f(t)+M$ is differentiable, so the RHS is differentiable, implying that $f^{\prime\prime}=f^\prime$ (and, by recursion, that $f$ is actually infinitely differentiable). This has solution $f^\prime(t)=ce^t$ by a standard argument (the additive constant must be zero) so that $f(t)=ce^t+d$. But then $$M=\int_a^bf(t)dt=\left[ce^t+dt\right]_a^b=c(e^b-e^a)+d(b-a),$$ giving us a constraint on the constants $c$ and $d$: $$ce^t=f^\prime(t)=f(t)+M=ce^t+d+M \quad\implies\quad M=-d \quad\implies$$ $$0=M+d=c(e^b-e^a)+d(b-a+1)\quad\implies\quad d=-c\frac{e^b-e^a}{1+b-a}$$ so that the general solution is $$f(t)=c\left[e^t-\frac{e^b-e^a}{1+b-a}\right].$$ Finally, the "standard argument" referred to above is that if $u^\prime=u$ for some function $u(t)$, then $$\frac{du}{u}=dt \quad\implies\quad \ln|u|=t+c_1 \quad\implies\quad |u|=e^{t+c_1} \quad\implies\quad u=ce^t$$ where $c=\pm e^{c_1}$ can be any nonzero constant but in fact also zero (which couldn't be inferred from the derivation because we divided by $u$ along the way, so we recover the solution at the end by inspection).
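A quick symbolic sanity check of this general solution (an editorial illustration using SymPy):

```python
import sympy as sp

t, a, b, c = sp.symbols('t a b c')
f = c*(sp.exp(t) - (sp.exp(b) - sp.exp(a))/(1 + b - a))

# f'(t) - ( f(t) + integral_a^b f(t) dt ) should be identically zero.
residual = sp.diff(f, t) - (f + sp.integrate(f, (t, a, b)))
print(sp.simplify(residual))   # prints 0
```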
http://mathhelpforum.com/pre-calculus/56187-quadratic-question-print.html

Quadratic question!
• October 28th 2008, 07:30 AM
cougar
Quadratic question!
f(x) = x(squared) + kx + (k+3) where k is a constant. Given that the equation f(x) = 0 has qual roots..
A) Find the possible values of k
B) Solve f(x) = 0 for each possible value of k
Given instead that k = 8
C) Express f(x) in the form (x + a)squared + b, where a and b are constants.
D) Solve f(x) = 0 giving your answers in surd form
• October 28th 2008, 07:42 AM
Jhevon
Quote:
Originally Posted by cougar
f(x) = x(squared) + kx + (k+3) where k is a constant. Given that the equation f(x) = 0 has qual roots..
should that be "equal roots"?
if so, then
Quote:
A) Find the possible values of k
B) Solve f(x) = 0 for each possible value of k
you need the discriminant to be zero
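As an editorial illustration of that condition (not part of the original reply): the discriminant of $x^2 + kx + (k+3)$ is $k^2 - 4(k+3)$, and setting it to zero gives
$$k^2 - 4k - 12 = 0 \;\Longrightarrow\; (k-6)(k+2) = 0 \;\Longrightarrow\; k = 6 \ \text{or} \ k = -2.$$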
Quote:
Given instead that k = 8
C) Express f(x) in the form (x + a)squared + b, where a and b are constants.
this is asking you to complete the square, do you know how to do that? (you can google it :D)
Quote:
D) Solve f(x) = 0 giving your answers in surd form
use the quadratic formula. do you remember it?
• October 28th 2008, 07:52 AM
Rafael Almeida
Sorry to ask, but I couldn't find anywhere what a 'qual root' is.
About (C): if k=8, then $f(x) = x^2 + 8x + 11$
Now you must use the fact that $(a + b)^2 = a^2 + 2ab + b^2 \ (I)$, which is a particular case of Newton's Binomial. If you examine the expression for f(x), you can see that taking a = x and b = 4 and applying it to (I) seems promising. In fact:
$(x + 4)^2 = x^2 + 8x + 16$
Notice that it's not f(x) yet, but it's close. All you need to do now is subtract 5:
$(x + 4)^2 - 5 = x^2 + 8x + 16 - 5 = x^2 + 8x + 11 = f(x)$
And this answers your question: a = 4 and b = -5. The name of this procedure is Square Completion. Check the link for further examples.
For (D), I'll leave it to you with a tip: does what you just did in (C) help you to find the roots? Or tell you whether they exist in $\mathbb{R}$?
Regards,
• October 28th 2008, 08:13 AM
cougar
Yeh oops a typo, supposed to be equal. Thanks for the help YOU RULE :D
http://en.wikipedia.org/wiki/Compact_space

# Compact space
"Compactness" redirects here. For the concept in first-order logic, see compactness theorem.
In mathematics, specifically general topology and metric topology, a compact space is a mathematical space in which any infinite sequence of points sampled from the space must eventually get arbitrarily close to some point of the space. There are several different notions of compactness, noted below, that are equivalent in good cases. The version just described is known as sequential compactness. The Bolzano–Weierstrass theorem gives an equivalent condition for sequential compactness when considering subsets of Euclidean space: a set then is compact if and only if it is closed and bounded. Examples include a closed interval or a rectangle. Thus if one chooses an infinite number of points in the closed unit interval, some of those points must get arbitrarily close to some real number in that space. For instance, some of the numbers 1/2, 4/5, 1/3, 5/6, 1/4, 6/7, … get arbitrarily close to 0. (Also, some get arbitrarily close to 1.) Note that the same set of points would not have, as an accumulation point, any point of the open unit interval; hence that space cannot be compact. Euclidean space itself is not compact since it is not bounded. In particular, one could choose the sequence of points 0, 1, 2, 3, …, of which no sub-sequence ultimately gets arbitrarily close to any given real number.
Apart from closed and bounded subsets of Euclidean space, typical examples of compact spaces include spaces consisting not of geometrical points but of functions. The term compact was introduced into mathematics by Maurice Fréchet in 1906 as a distillation of this concept. Compactness in this more general situation plays an extremely important role in mathematical analysis, because many classical and important theorems of 19th century analysis, such as the extreme value theorem, are easily generalized to this situation. A typical application is furnished by the Arzelà–Ascoli theorem and in particular the Peano existence theorem, in which one is able to conclude the existence of a function with some required properties as a limiting case of some more elementary construction.
Various equivalent notions of compactness, including sequential compactness and limit point compactness, can be developed in general metric spaces. In general topological spaces, however, the different notions of compactness are not necessarily equivalent, and the most useful notion, introduced by Pavel Alexandrov and Pavel Urysohn in 1929, involves the existence of certain finite families of open sets that "cover" the space in the sense that each point of the space must lie in some set contained in the family. This more subtle definition exhibits compact spaces as generalizations of finite sets. In spaces that are compact in this latter sense, it is often possible to patch together information that holds locally—that is, in a neighborhood of each point—into corresponding statements that hold throughout the space, and many theorems are of this character.
## Introduction
An example of a compact space is the unit interval [0,1] of real numbers. If one chooses an infinite number of distinct points in the unit interval, then there must be some accumulation point in that interval. For instance, the odd-numbered terms of the sequence 1, 1/2, 1/3, 3/4, 1/5, 5/6, 1/7, 7/8, … get arbitrarily close to 0, while the even-numbered ones get arbitrarily close to 1. The given example sequence shows the importance of including the boundary points of the interval, since the limit points must be in the space itself: an open (or half-open) interval of the real numbers is not compact. It is also crucial that the interval be bounded, since in the interval [0,∞) one could choose the sequence of points 0, 1, 2, 3, …, of which no sub-sequence ultimately gets arbitrarily close to any given real number.
In two dimensions, closed disks are compact since for any infinite number of points sampled from a disk, some subset of those points must get arbitrarily close either to a point within the disc, or to a point on the boundary. However, an open disk is not compact, because a sequence of points can tend to the boundary without getting arbitrarily close to any point in the interior. Likewise, spheres are compact, but a sphere missing a point is not since a sequence of points can tend to the missing point without tending to any point within the space. Lines and planes are not compact, since one can take a set of equally spaced points in any given direction without approaching any point.
Compactness generalizes many important properties of closed and bounded intervals in the real line; that is, intervals of the form [a, b] for real numbers a and b. For instance, any continuous function defined on a compact space into an ordered set (with the order topology) such as the real line is bounded. Thus, what is known as the extreme value theorem in calculus generalizes to compact spaces. In this fashion, one can prove many important theorems in the class of compact spaces, that do not hold in the context of non-compact ones.
Various definitions of compactness may apply, depending on the level of generality. A subset of Euclidean space in particular is called compact if it is closed and bounded. This implies, by the Bolzano–Weierstrass theorem, that any infinite sequence from the set has a subsequence that converges to a point in the set. This puts a fine point on the idea of taking "steps" in a space. Various equivalent notions of compactness, such as sequential compactness and limit point compactness, can be developed in general metric spaces.
In general topological spaces, however, the different notions of compactness are not equivalent, and the most useful notion of compactness—originally called bicompactness—involves families of open sets that cover the space in the sense that each point of the space must lie in some set contained in the family. Specifically, a topological space is compact if, whenever a collection of open sets covers the space, some subcollection consisting only of finitely many open sets also covers the space. That this form of compactness holds for closed and bounded subsets of Euclidean space is known as the Heine–Borel theorem. Compactness, when defined in this manner, often allows one to take information that is known locally—in a neighbourhood of each point of the space—and to extend it to information that holds globally throughout the space. An example of this phenomenon is Dirichlet's theorem, to which it was originally applied by Heine, that a continuous function on a compact interval is uniformly continuous: here continuity is a local property of the function, and uniform continuity the corresponding global property.
## Definition
Formally, a topological space X is called compact if each of its open covers has a finite subcover. Otherwise it is called non-compact. Explicitly, this means that for every arbitrary collection
$\{U_\alpha\}_{\alpha\in A}$
of open subsets of X such that
$X=\bigcup_{\alpha\in A} U_\alpha,$
there is a finite subset J of A such that
$X=\bigcup_{i\in J} U_i.$
Some branches of mathematics such as algebraic geometry, typically influenced by the French school of Bourbaki, use the term quasi-compact for the general notion, and reserve the term compact for topological spaces that are both Hausdorff and quasi-compact. A compact set is sometimes referred to as a compactum, plural compacta.
### Compactness of subspaces
A subset K of a topological space X is called compact if it is compact in the induced topology. Explicitly, this means that for every arbitrary collection
$\{U_\alpha\}_{\alpha\in A}$
of open subsets of X such that
$K\subset\bigcup_{\alpha\in A} U_\alpha,$
there is a finite subset J of A such that
$K\subset\bigcup_{i\in J} U_i.$
## Historical development
In the 19th century, several disparate mathematical properties were understood that would later be seen as consequences of compactness. On the one hand, Bernard Bolzano (1817) had been aware that any bounded sequence of points (in the line or plane, for instance) has a subsequence that must eventually get arbitrarily close to some other point, called a limit point. Bolzano's proof relied on the method of bisection: the sequence was placed into an interval that was then divided into two equal parts, and a part containing infinitely many terms of the sequence was selected. The process could then be repeated by dividing the resulting smaller interval into smaller and smaller parts until it closes down on the desired limit point. The full significance of Bolzano's theorem, and its method of proof, would not emerge until almost 50 years later when it was rediscovered by Karl Weierstrass.[1]
In the 1880s, it became clear that results similar to the Bolzano–Weierstrass theorem could be formulated for spaces of functions rather than just numbers or geometrical points. The idea of regarding functions as themselves points of a generalized space dates back to the investigations of Giulio Ascoli and Cesare Arzelà.[2] The culmination of their investigations, the Arzelà–Ascoli theorem, was a generalization of the Bolzano–Weierstrass theorem to families of continuous functions, the precise conclusion of which was that it was possible to extract a uniformly convergent sequence of functions from a suitable family of functions. The uniform limit of this sequence then played precisely the same role as Bolzano's "limit point". Towards the beginning of the twentieth century, results similar to that of Arzelà and Ascoli began to accumulate in the area of integral equations, as investigated by David Hilbert and Erhard Schmidt. For a certain class of Green functions coming from solutions of integral equations, Schmidt had shown that a property analogous to the Arzelà–Ascoli theorem held in the sense of mean convergence—or convergence in what would later be dubbed a Hilbert space. This ultimately led to the notion of a compact operator as an offshoot of the general notion of a compact space. It was Maurice Fréchet who, in 1906, had distilled the essence of the Bolzano–Weierstrass property and coined the term compactness to refer to this general phenomenon.
However, a different notion of compactness altogether had also slowly emerged at the end of the 19th century from the study of the continuum, which was seen as fundamental for the rigorous formulation of analysis. In 1870, Eduard Heine showed that a continuous function defined on a closed and bounded interval was in fact uniformly continuous. In the course of the proof, he made use of a lemma that from any countable cover of the interval by smaller open intervals, it was possible to select a finite number of these that also covered it. The significance of this lemma was recognized by Émile Borel (1895), and it was generalized to arbitrary collections of intervals by Pierre Cousin (1895) and Henri Lebesgue (1904). The Heine–Borel theorem, as the result is now known, is another special property possessed by closed and bounded sets of real numbers.
This property was significant because it allowed for the passage from local information about a set (such as the continuity of a function) to global information about the set (such as the uniform continuity of a function). This sentiment was expressed by Lebesgue (1904), who also exploited it in the development of the integral now bearing his name. Ultimately the Russian school of point-set topology, under the direction of Pavel Alexandrov and Pavel Urysohn, formulated Heine–Borel compactness in a way that could be applied to the modern notion of a topological space. Alexandrov & Urysohn (1929) showed that the earlier version of compactness due to Fréchet, now called (relative) sequential compactness, under appropriate conditions followed from the version of compactness that was formulated in terms of the existence of finite subcovers. It was this notion of compactness that became the dominant one, because it was not only a stronger property, but it could be formulated in a more general setting with a minimum of additional technical machinery, as it relied only on the structure of the open sets in a space.
## Examples
### General topology
• Any finite topological space, including the empty set, is compact. Slightly more generally, any space with a finite topology (only finitely many open sets) is compact; this includes in particular the trivial topology.
• Any space carrying the cofinite topology is compact.
• Any locally compact Hausdorff space can be turned into a compact space by adding a single point to it, by means of Alexandroff one-point compactification. The one-point compactification of R is homeomorphic to the circle S1; the one-point compactification of R2 is homeomorphic to the sphere S2. Using the one-point compactification, one can also easily construct compact spaces which are not Hausdorff, by starting with a non-Hausdorff space.
• The right order topology or left order topology on any bounded totally ordered set is compact. In particular, Sierpinski space is compact.
• R carrying the lower limit topology satisfies the property that no uncountable set is compact.
• In the cocountable topology on R (or any uncountable set for that matter), no infinite set is compact.
• Neither of the spaces in the previous two examples is locally compact, but both are still Lindelöf.
### Analysis and algebra
• The closed unit interval [0,1] is compact. This follows from the Heine–Borel theorem. The open interval (0,1) is not compact: the open cover
$\left ( \frac{1}{n}, 1 - \frac{1}{n} \right )$
for n = 3, 4, … does not have a finite subcover. Similarly, the set of rational numbers in the closed interval [0,1] is not compact: the sets of rational numbers in the intervals
$\left[0,\frac{1}{\pi}-\frac{1}{n}\right] \ \text{and} \ \left[\frac{1}{\pi}+\frac{1}{n},1\right]$
cover all the rationals in [0, 1] for n = 4, 5, … but this cover does not have a finite subcover. (Note that the sets are open in the subspace topology even though they are not open as subsets of R.)
• The set R of all real numbers is not compact as there is a cover of open intervals that does not have a finite subcover. For example, intervals (n−1, n+1) , where n takes all integer values in Z, cover R but there is no finite subcover.
• Similarly, among the classical matrix groups, the orthogonal group O(n) is compact (it is a closed and bounded subset of the space of n × n real matrices), while the general linear group GL(n, R) is not.
• For every natural number n, the n-sphere is compact. Again from the Heine–Borel theorem, the closed unit ball of any finite-dimensional normed vector space is compact. This is not true for infinite dimensions; in fact, a normed vector space is finite-dimensional if and only if its closed unit ball is compact.
• On the other hand, the closed unit ball of the dual of a normed space is compact for the weak-* topology. (Alaoglu's theorem)
• The Cantor set is compact. In fact, every compact metric space is a continuous image of the Cantor set.
• Since the p-adic integers are homeomorphic to the Cantor set, they form a compact set.
• Consider the set K of all functions f : R → [0,1] from the real number line to the closed unit interval, and define a topology on K so that a sequence $\{f_n\}$ in K converges towards $f\in K$ if and only if $\{f_n(x)\}$ converges towards f(x) for all real numbers x. There is only one such topology; it is called the topology of pointwise convergence. Then K is a compact topological space; this follows from the Tychonoff theorem.
• Consider the set K of all functions f : [0,1] → [0,1] satisfying the Lipschitz condition |f(x) − f(y)| ≤ |x − y| for all x, y ∈ [0,1]. Consider on K the metric induced by the uniform distance
$d(f,g)=\sup_{x \in [0, 1]} |f(x)-g(x)|.$
Then by the Arzelà–Ascoli theorem, the space K is compact.
• The spectrum of any bounded linear operator on a Banach space is a nonempty compact subset of the complex numbers C. Conversely, any compact subset of C arises in this manner, as the spectrum of some bounded linear operator. For instance, a diagonal operator on the Hilbert space $\ell^2$ may have any compact nonempty subset of C as spectrum. (A sketch of this construction appears just after this list.)
• The spectrum of any commutative ring with the Zariski topology (that is, the set of all prime ideals) is compact, but never Hausdorff (except in trivial cases). In algebraic geometry, such topological spaces are examples of quasi-compact schemes, "quasi" referring to the non-Hausdorff nature of the topology.
• The spectrum of a Boolean algebra is compact, a fact which is part of the Stone representation theorem. Stone spaces, compact totally disconnected Hausdorff spaces, form the abstract framework in which these spectra are studied. Such spaces are also useful in the study of profinite groups.
• The structure space of a commutative unital Banach algebra is a compact Hausdorff space.
• The Hilbert cube is compact, again a consequence of Tychonoff's theorem.
• A profinite group (e.g., Galois group) is compact.
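To flesh out the diagonal-operator example mentioned above, here is a sketch of the standard construction. Given a nonempty compact set $K \subseteq \mathbf{C}$, choose a sequence $(\lambda_n)$ that is dense in $K$ and let $T$ act on $\ell^2$ by
$$T(x_1, x_2, x_3, \dots) = (\lambda_1 x_1, \lambda_2 x_2, \lambda_3 x_3, \dots).$$
Then $T$ is bounded with $\|T\| = \sup_n |\lambda_n| < \infty$, each $\lambda_n$ is an eigenvalue (witnessed by the corresponding standard basis vector), and since the spectrum is closed, $K = \overline{\{\lambda_n\}} \subseteq \sigma(T)$. Conversely, if $\lambda \notin K$, then $\delta = \operatorname{dist}(\lambda, K) > 0$ and the diagonal operator with entries $1/(\lambda_n - \lambda)$ is a bounded inverse of $T - \lambda$ (of norm at most $1/\delta$), so $\lambda \notin \sigma(T)$. Hence $\sigma(T) = K$.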
## Theorems
Some theorems related to compactness (see the glossary of topology for the definitions):
• A continuous image of a compact space is compact.[3]
• The pre-image of a compact space under a proper map is compact.
• The extreme value theorem: a continuous real-valued function on a nonempty compact space is bounded above and attains its supremum.[4] (Slightly more generally, this is true for an upper semicontinuous function.)
• A closed subset of a compact space is compact.[5]
• A finite union of compact sets is compact.
• A nonempty compact subset of the real numbers has a greatest element and a least element.
• The product of any collection of compact spaces is compact. (Tychonoff's theorem, which is equivalent to the axiom of choice)
• Every topological space X is an open dense subspace of a compact space having at most one point more than X, by the Alexandroff one-point compactification. By the same construction, every locally compact Hausdorff space X is an open dense subspace of a compact Hausdorff space having at most one point more than X.
• Let X be a simply ordered set endowed with the order topology. Then X is compact if and only if X is a complete lattice (i.e. all subsets have suprema and infima).[6]
### Characterizations of compactness
Assuming the axiom of choice, the following are equivalent.
1. A topological space X is compact.
2. Every open cover of X has a finite subcover.
3. X has a sub-base such that every cover of the space by members of the sub-base has a finite subcover (Alexander's sub-base theorem)
4. Any collection of closed subsets of X with the finite intersection property has nonempty intersection.
5. Every net on X has a convergent subnet (see the article on nets for a proof).
6. Every filter on X has a convergent refinement.
7. Every ultrafilter on X converges to at least one point.
8. Every infinite subset of X has a complete accumulation point.[7]
### Euclidean space
For any subset A of Euclidean space Rn, the following are equivalent:
1. A is compact.
2. Every sequence in A has a convergent subsequence, whose limit lies in A.
3. Every infinite subset of A has at least one limit point in A.
4. A is closed and bounded (Heine–Borel theorem).
5. A is complete and totally bounded.
In practice, condition (4) is the easiest to verify, for example for a closed interval or a closed n-ball. Note that, in any metric space, every compact subset is closed and bounded. However, the converse may fail when Rn carries a non-Euclidean metric: for example, the real line equipped with the discrete metric is closed and bounded but not compact, since the collection of all singleton points of the space is an open cover which admits no finite subcover.
### Metric spaces
• A metric space (or uniform space) is compact if and only if it is complete and totally bounded.[8]
• If the metric space X is compact and an open cover of X is given, then there exists a number δ > 0 such that every subset of X of diameter < δ is contained in some member of the cover. (Lebesgue's number lemma)
• Every compact metric space is separable.
• A metric space (or more generally any first-countable uniform space) is compact if and only if every sequence in the space has a convergent subsequence. (Sequentially compact)
### Hausdorff spaces
• A compact subset of a Hausdorff space is closed.[9] More generally, compact sets can be separated by open sets: if K1 and K2 are compact and disjoint, there exist disjoint open sets U1 and U2 such that $K_1 \subset U_1$ and $K_2 \subset U_2$. That is to say, a compact Hausdorff space is normal.
• Two compact Hausdorff spaces X1 and X2 are homeomorphic if and only if their rings of continuous real-valued functions C(X1) and C(X2) are isomorphic. (Gelfand–Naimark theorem) Properties of the Banach space of continuous functions on a compact Hausdorff space are central to abstract analysis.
• Every continuous map from a compact space to a Hausdorff space is closed and proper (i.e., the pre-image of a compact set is compact). In particular, every continuous bijective map from a compact space to a Hausdorff space is a homeomorphism.[10]
• A topological space can be embedded in a compact Hausdorff space if and only if it is a Tychonoff space.
### Characterization by continuous functions
Let X be a topological space and C(X) the ring of real continuous functions on X. For each p∈X, the evaluation map
$\operatorname{ev}_p: C(X)\to \mathbf{R}$
given by evp(f)=f(p) is a ring homomorphism. The kernel of evp is a maximal ideal, since the residue field C(X)/ker evp is the field of real numbers, by the first isomorphism theorem. A topological space X is pseudocompact if and only if every maximal ideal in C(X) has residue field the real numbers. For completely regular spaces, this is equivalent to every maximal ideal being the kernel of an evaluation homomorphism.[11] There are pseudocompact spaces that are not compact, though.
In general, for non-pseudocompact spaces there are always maximal ideals m in C(X) such that the residue field C(X)/m is a (non-archimedean) hyperreal field. The framework of non-standard analysis allows for the following alternative characterization of compactness:[12] a topological space X is compact if and only if every point x of the natural extension *X is infinitely close to a point x0 of X (more precisely, x is contained in the monad of x0).
## Other forms of compactness
There are a number of topological properties which are equivalent to compactness in metric spaces, but are inequivalent in general topological spaces. These include the following.
• Sequentially compact: Every sequence has a convergent subsequence.
• Countably compact: Every countable open cover has a finite subcover. (Or, equivalently, every infinite subset has an ω-accumulation point.)
• Pseudocompact: Every real-valued continuous function on the space is bounded.
• Limit point compact: Every infinite subset has an accumulation point.
While all these conditions are equivalent for metric spaces, in general we have the following implications:
• Compact spaces are countably compact.
• Sequentially compact spaces are countably compact.
• Countably compact spaces are pseudocompact and weakly countably compact.
Not every countably compact space is compact; an example is given by the first uncountable ordinal with the order topology. Not every compact space is sequentially compact; an example is given by 2[0,1], with the product topology (Scarborough & Stone 1966, Example 5.3).
A metric space is called pre-compact or totally bounded if any sequence has a Cauchy subsequence; this can be generalised to uniform spaces. For complete metric spaces this is equivalent to compactness. See relatively compact for the topological version.
Another related notion which (by most definitions) is strictly weaker than compactness is local compactness.
Generalizations of compactness include being H-closed and being an H-set in a parent space. A Hausdorff space is H-closed if every open cover has a finite subfamily whose union is dense. A subspace X of a space Z is an H-set of Z if every cover of X by open sets of Z has a finite subfamily the union of whose closures in Z contains X.
## Notes
1. Kline 1972, pp. 952–953; Boyer & Merzbach 1991, p. 561
2. Arkhangel'skii & Fedorchuk 1990, Theorem 5.2.2; See also Compactness is preserved under a continuous map, PlanetMath.org.
## References
• Alexandrov, Pavel; Urysohn, Pavel (1929), "Mémoire sur les espaces topologiques compacts", Koninklijke Nederlandse Akademie van Wetenschappen te Amsterdam, Proceedings of the section of mathematical sciences 14 .
• Arkhangel'skii, A.V.; Fedorchuk, V.V. (1990), "The basic concepts and constructions of general topology", in Arkhangel'skii, A.V.; Pontrjagin, L.S., General topology I, Encyclopedia of the Mathematical Sciences 17, Springer, ISBN 978-0-387-18178-3 .
• Arkhangel'skii, A.V. (2001), "Compact space", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4.
• Bolzano, Bernard (1817), Rein analytischer Beweis des Lehrsatzes, daß zwischen je zwey Werthen, die ein entgegengesetztes Resultat gewähren, wenigstens eine reelle Wurzel der Gleichung liege (Purely analytic proof of the theorem that between any two values which give results of opposite sign, there lies at least one real root of the equation), Prague.
• Borel, Émile (1895), "Sur quelques points de la théorie des fonctions", Annales scientifiques de l'École Normale Supérieure, (3) 12: 9–55, JFM 26.0429.03.
• Boyer, Carl B. (1959), The history of the calculus and its conceptual development, New York: Dover Publications, MR 0124178 .
• Arzelà, Cesare (1895), "Sulle funzioni di linee", Mem. Accad. Sci. Ist. Bologna Cl. Sci. Fis. Mat. 5 (5): 55–74 .
• Arzelà, Cesare (1882–1883), "Un'osservazione intorno alle serie di funzioni", Rend. Dell' Accad. R. Delle Sci. Dell'Istituto di Bologna: 142–159 .
• Ascoli, G. (1883–1884), "Le curve limiti di una varietà data di curve", Atti della R. Accad. Dei Lincei Memorie della Cl. Sci. Fis. Mat. Nat. 18 (3): 521–586 .
• Fréchet, Maurice (1906), "Sur quelques points du calcul fonctionnel", Rendiconti del Circolo Matematico di Palermo 22 (1): 1–72, doi:10.1007/BF03018603.
• Gillman, Leonard; Jerison, Meyer (1976), Rings of continuous functions, Springer-Verlag .
• Kelley, John (1955), General topology, Graduate Texts in Mathematics 27, Springer-Verlag .
• Kline, Morris (1972), Mathematical thought from ancient to modern times (3rd ed.), Oxford University Press (published 1990), ISBN 978-0-19-506136-9 .
• Robinson, Abraham (1996), Non-standard analysis, Princeton University Press, ISBN 978-0-691-04490-3, MR0205854 .
• Scarborough, C.T.; Stone, A.H. (1966), "Products of nearly compact spaces", Transactions of the American Mathematical Society (Transactions of the American Mathematical Society, Vol. 124, No. 1) 124 (1): 131–147, doi:10.2307/1994440, JSTOR 1994440 .
• Steen, Lynn Arthur; Seebach, J. Arthur Jr. (1995) [1978], Counterexamples in Topology (Dover reprint of the 1978 Springer-Verlag ed.), Mineola, NY: Dover Publications, ISBN 978-0-486-68735-3, MR 507446.
http://leifjohnson.net/post/the-haps-in-sf | # Leif Johnson » The haps in SF
## imaginary numbers
today in class we started going over imaginary numbers. i'm not sure i've ever thought much about this topic, just more or less accepted it and moved on. now, don't get me wrong ; i'm a fan of the patented alex rosefielde smile-and-nod methodology. when faced with potentially upsetting new situations, it lets one pass along and not get too tied up on things that are probably not that important in the long run. but now, looking back a little on imaginary numbers, i started thinking that i need to get to a deeper understanding : just accepting the definition isn't really helping explain the concept to other people.
in particular, prompted by persistent questions from a couple of students in class, i started thinking about what an imaginary number represents. perplexed students in class kept pressuring the teacher to explain “which number $i$ represents.” his best answer, which is more or less the one i would have given, is that $i$ is not a number that we would recognize. it's a different sort of number.
coming from a math background, surely this “which number” deal sounds at first like a nonsensical question : after all, $i$ squared is defined to be negative one, plain and simple. but what, really, does an imaginary number represent ? i wondered for a bit in class and abruptly came across a seemingly simple example : a parabola with no real roots, i.e. one that doesn't touch the $x$-axis. clearly such functions have imaginary roots, but can't those roots somehow be displayed on the $x$-$y$ coordinate plane ? perhaps something like the magnitude of a vector from the origin to the base of the parabola ...
consider $f(x) = x^2 + 1$. the roots of this equation are $i$ and $-i$. or a more contrived example : $g(x) = (x - 1)^2 + 1 = x^2 - 2x + 2$. this parabola's base lies at the point $(1, 1)$, and its roots are $1 + i$ and $1 - i$. how about the parabola whose base is at $(2, 5)$ : $h(x) = (x - 2)^2 + 5 = x^2 - 4x + 9$. the roots of $h(x)$ are $2 + \sqrt{5}i$ and $2 - \sqrt{5}i$.
in these simple cases, the roots, though complex, contain the real numbers that locate the base of the parabola. interesting. is this always true ? i fear a symbolic solution is needed. not going to happen here in front of my computer though, stupid html.
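for what it's worth, here's a sketch of what that symbolic check would look like, at least for a monic parabola. write the parabola with vertex at $(h, k)$, $k > 0$, as $p(x) = (x - h)^2 + k$. setting $p(x) = 0$ gives $(x - h)^2 = -k$, so
$$x = h \pm i\sqrt{k}.$$
so the real part of each root is the $x$-coordinate of the vertex, and the imaginary part is $\pm\sqrt{k}$, the square root of its height, which matches the three examples above. with a general leading coefficient $a > 0$, i.e. $p(x) = a(x - h)^2 + k$, the roots become $h \pm i\sqrt{k/a}$, so the "height" part gets rescaled by the steepness of the parabola.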
## aikido
last night i booked it over to the aikido dojo on my bike, just barely getting there in time to sneak on the mat (after class had started though). practice was excellent, though : lots of bokken work and concentration on moving from the center and such. it's really challenging to do this, and quite difficult to explain in words.
at the end of class we got out these dodgeballs and did a leg exercise against the wall : lean up against the wall, ball between back and wall, knees and feet about shoulder width apart, so your knees and toes point straight forward. lower your torso so your knees are at a 90 degree angle, then raise up 45 degrees, and lower back to 90 degrees. pretty tough on the thighs, but it's good to get some strength training back on these weak, weak upper legs of mine. the knees held up well, i really just need to get in better shape.
it's funny : in high school, being in shape was just one of those additional benefits of exercising, and the only real threat of injury was from exercising itself. now being in shape is becoming a necessity for avoiding injury during an otherwise exercise–less existence. hmm.
## chinatown
i wonder if i can get my monthly food budget to under \$100 a month. i just got back from this one store just off of pacific on the east side of stockton, where i bought a dozen eggs, 6 bananas, an onion, a tub of tofu (made in san francisco), a bag of snow peas, and a can of coconut milk : \$4.75 total.
© Copyright 2003 Leif Johnson. Licensed for reuse under CC-BY-SA. Originally published at 19:10 on 14 Oct 2003.
http://math.stackexchange.com/users/4726/tpofofn?tab=activity&sort=all&page=2 | # Tpofofn
reputation 1,527
badges 514
member for 2 years, 5 months
seen 21 hours ago
profile views 139
# 241 Actions
| Date | Action | Detail |
|-------|-----------|--------|
| Mar23 | comment | An eigenvector is a non-zero vector such that…@ScottH. I think that you are confusing the notions of a space and a basis. A basis is a non-zero vector which is linearly independent (i.e. cannot contain the zero vector) and spans the space. An eigenvector cannot be zero because it is a basis. The space spanned by the eigenvector must contain the zero vector. |
| Mar22 | comment | An eigenvector is a non-zero vector such that…An eigenspace is spanned by a non-zero eigenvector associated with a particular eigenvalue. The eigenspace must be at least one dimensional and therefore excludes using the zero vector as an eigenvector. |
| Mar6 | comment | New vector positionIs the axis aligned with one of your coordinate axes. If so, then you can omit that axis in your calculations. Don't include it in your distance calculations and don't include it in your scaling calculations. If it is not aligned with a coordinate axis, then you need something more sophisticated. |
| Mar6 | comment | New vector positionIt is essentially the same equation just written a different way. The steps you need to go through are as follows: (1) compute the average distance, (2) compute the relative vector to the center $v_i - c$, (3) normalize it to unit length by dividing it by its own length, (4) scaling it to the average length and then finally (5) adding the center back onto the result. The referenced post does essentially the same thing, except that it distributes the scaling to each component. |
| Mar5 | comment | New vector positionAlmost. You should use: npX = (vts[0] - cs[0]) / (distanceFromCenter[for this point]) * averageDistance + cs[0] |
| Mar5 | comment | New vector positionJust the magnitude of the vector. I believe that you compute it in your first loop and call it distanceFromCenter. |
| Mar5 | revised | How to get Euler angles with respect to initial Euler angleadded 435 characters in body |
| Mar5 | answered | New vector position |
| Mar5 | answered | How to get Euler angles with respect to initial Euler angle |
| Mar5 | comment | New vector positionSo you have 8 points (for example) and a center point. You want to update the 8 points based on their distance to the center point. It is not clear what you want the new points to be based on their distance to the center point. Please clarify |
| Mar5 | comment | New vector positionobserving your code, it appears that you are summing the coordinates in oldCoordArray. If you divide by the number of points, you should have the center of the points. |
| Mar5 | comment | New vector positionI guess I meant, compute the average coordinate in each axis separately (i.e. x, y, z). |
| Mar5 | comment | New vector positionIs it safe to assume that you want to compute the new center so that it has a minimum average distance to each point? If so, compute average distances in each of the coordinates separately. These three average coordinates will give you the new center point. |
| Mar5 | comment | Identifying a pattern in an arrayDo you know what type of process generated the array or is it completely unknown? |
| Mar5 | revised | identify unknown variables in a graph plotedited tags |
| Mar5 | comment | identify unknown variables in a graph plotPlease explain what you mean by "how many variables are responsible for the plot". If you have a plot of distance vs. time then you have two variables (1) time (the independent variable) and (2) distance (the dependent variable). |
| Mar3 | revised | I cannot solve this problem about surface area of a coneInserted LaTeX to make more readable. |
| Mar3 | suggested | suggested edit on I cannot solve this problem about surface area of a cone |
| Mar2 | answered | What is the intuitive meaning of the basis of a vector space and the span? |
| Feb27 | comment | Subspace of a Vector Space must be non-empty.To have a subspace the span must form a group under vector addition. A group must contain an identity element, the zero vector $\mathbf 0$. A subspace spanned by ${\mathbf 0}$ is zero-dimensional (i.e. consists of one point). This is the smallest subspace that one can have. |
http://mathoverflow.net/revisions/14075/list | ## Return to Answer
Revision 2: added 1585 characters in body
Addendum: Some references:
Kurihara, Akira On some examples of equations defining Shimura curves and the Mumford uniformization. J. Fac. Sci. Univ. Tokyo Sect. IA Math. 25 (1979), no. 3, 277--300.
Reichert, Markus A. Explicit determination of nontrivial torsion structures of elliptic curves over quadratic number fields. Math. Comp. 46 (1986), no. 174, 637--658.
http://www.math.uga.edu/~pete/Reichert86.pdf
Gonzàlez Rovira, Josep Equations of hyperelliptic modular curves. Ann. Inst. Fourier (Grenoble) 41 (1991), no. 4, 779--795.
http://www.math.uga.edu/~pete/Gonzalez.pdf
Noam Elkies, equations for some hyperelliptic modular curves, early 1990's. [So far as I know, these have never been made publicly available, but if you want to know an equation of a modular curve, try emailing Noam Elkies!]
Elkies, Noam D. Shimura curve computations. Algorithmic number theory (Portland, OR, 1998), 1--47, Lecture Notes in Comput. Sci., 1423, Springer, Berlin, 1998.
http://arxiv.org/abs/math/0005160
An algorithm which was used to find explicit defining equations for $X_1(N)$, $N$ prime, can be found in
Pete L. Clark, Patrick K. Corn and the UGA VIGRE Number Theory Group, Computation On Elliptic Curves With Complex Multiplication, preprint.
http://math.uga.edu/~pete/TorsCompv6.pdf
This is just a first pass. I probably have encountered something like 10 more papers on this subject, and I wasn't familiar with some of the papers that others have mentioned.
Revision 1
No, there does not exist a comprehensive list of equations: the known equations are spread out over several papers, and some people (e.g. Noam Elkies, John Voight; and even me) know equations which have not been published anywhere.
When I have more time, I will give bibliographic data for some of the papers which give lists of some of these equations. Some names of the relevant authors: Ogg, Elkies, Gonzalez, Reichert.
In my opinion, it would be a very worthy service to the number theory community to create an electronic source for information on modular curves (including Shimura curves) of low genus, including genus formulas, gonality, automorphism groups, explicit defining equations...In my absolutely expert opinion (that is, I make and use such computations in my own work, but am not an especially good computational number theorist: i.e., even I can do these calculations, so I know they're not so hard), this is a doable and even rather modest project compared to some related things that are already out there, e.g. William Stein's modular forms databases and John Voight's quaternion algebra packages.
It is possible that it is a little too easy for our own good, i.e., there is the sense that you should just do it yourself. But I think that by current standards of what should be communal mathematical knowledge, this is a big waste of a lot of people's time. E.g., by coincidence I just spoke to one of my students, J. Stankewicz, who has spent some time implementing software to enumerate all full Atkin-Lehner quotients of semistable Shimura curves (over Q) with bounded genus. I assigned him this little project on the grounds that it would be nice to have such information, and I think he's learned something from it, but the truth is that there are people who probably already have code to do exactly this and I sort of regret that he's spent so much time reinventing this particular wheel. (Yes, he reads MO, and yes, this is sort of an apology on my behalf.)
Maybe this is a good topic for the coming SAGE days at MSRI?
http://crypto.stackexchange.com/questions/621/does-ntru-decrypt-correctly-now/623 | # Does NTRU decrypt correctly now?
The NTRU public-key cryptosystem has a lot of interesting properties (being resistant to quantum computer attacks, being standardized by several important bodies), but it also has a pretty unique property:
The decryption algorithm does not always work. Sometimes it just gives wrong answers.
Has this been fixed? Is it really a cryptosystem if having the private key is insufficient to decrypt the encrypted messages?
For instance, from Howgrave-Graham et al. (2003) one reads,
“First, we notice that decryption failures cannot be ignored, as they happen much more frequently than one would have expected. If one strictly follows the recommendations of the EESS standard [3], decryption failures happen as often as every $2^{12}$ messages with N = 139, and every $2^{25}$ messages with N = 251. It turns out that the probability is somewhat lower (around $2^{-40}$) with NTRU products, as the key generation implemented in NTRU products surprisingly differs from the one recommended in [3]. In any case, decryption failures happen sufficiently often that one cannot dismiss them, even in NTRU products.”
• Nick Howgrave-Graham, Phong Q. Nguyen, David Pointcheval, John Proos, Joseph H. Silverman, Ari Singer, and William Whyte. "The Impact of Decryption Failures on the Security of NTRU Encryption", Advances in Cryptology - CRYPTO 2003, 23rd Annual International Cryptology Conference, Santa Barbara, California, USA, August 17-21, 2003, Proceedings, Lecture Notes in Computer Science, 2003, Volume 2729/2003, 226-246, DOI:10.1007/978-3-540-45146-4_14
-
I don't see those claims in the Wikipedia article you link to. Your quote does not appear anywhere in that Wikipedia article, or indeed anywhere in Wikipedia that I can find. Also, the article has not been modified since June 10, so it's not like this is something that has been changed since you posted your question. In short, I'm having trouble telling where you got this from. Can you give a citation for who is making such a claim? – D.W. Sep 5 '11 at 4:18
The ">" is used to emphasize the claim. It is not a direct quote. As near as I can tell the decryption failures are common (1 in 10000 for a standardized version of NTRU), are non-recoverable as in the message is simply gone, and are important in several attacks that recover the private key by seeing if a valid ciphertext runs into the decryption failure problem. – Jack Schmidt Sep 5 '11 at 16:34
@D.W.: I think if you read the section of the wikipedia article titled "History" you'll see the claims made quite clearly. – Jack Schmidt Sep 5 '11 at 20:05
## 3 Answers
The likelihood of a decryption failure can be made arbitrarily small. IEEE P1363.1 says in appendix A.4.10:
For ternary polynomials with $d$ $+1$s and the same number of $-1$s, the chance of a decryption failure is given by [B30]:
$$\operatorname{Prob}_{(q, d, N)}(\text{Decryption fails}) = P_{(d, N)} \left( \frac{q - 2}{6} \right)$$
where
$$P_{(d, N)}(c) = N \times \operatorname{erfc} \left( \frac{c}{\sigma\sqrt{2N}} \right)$$
and
$$\sigma(d, N) = \sqrt{\frac{8d}{3N}}$$
where $\operatorname{erfc}(x)$ is the complementary error function.
As a practical example, for the EES1087EP2 parameter set where $N=1087$, $q=2048$, and $d=120$, the failure probability is $5.62·10^{-78}$, which is a bit less than $2^{-256}$. Those parameters have been chosen for a $256$-bit security level in general, and the failure probability is also smaller than $2^{-256}$. So exploiting a decryption failure requires just as much work as breaking other parts of the system.
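For anyone who wants to reproduce that number, the formula above can be evaluated directly. The following sketch (plain Python, using the standard-library `math.erfc`) just plugs in the EES1087EP2 values quoted above; it is only the A.4.10 estimate, not a simulation of actual decryptions:

```python
from math import erfc, sqrt

def failure_probability(N, q, d):
    # P1363.1 A.4.10 estimate: Prob = N * erfc(c / (sigma * sqrt(2N)))
    # with c = (q - 2) / 6 and sigma = sqrt(8d / (3N))
    sigma = sqrt(8 * d / (3 * N))
    c = (q - 2) / 6
    return N * erfc(c / (sigma * sqrt(2 * N)))

# EES1087EP2: N = 1087, q = 2048, d = 120
print(failure_probability(1087, 2048, 120))  # ~5.6e-78
print(2.0 ** -256)                           # ~1.2e-77, so the failure rate is below 2^-256
```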
Ps. The [B30] reference is to the paper "Hybrid Lattice reduction and Meet in the Middle Resistant Parameter Selection for NTRUEncrypt" by P. Hirschhorn, J. Hoffstein, N. Howgrave-Graham, J. Pipher, J. H. Silverman and W. Whyte.
-
Can this be illustrated with practical values of q, d, N please? – fgrieu Sep 8 '11 at 5:47
For the EES1087EP2 parameter set where $N=1087$, $q=2048$, and $d=120$, the failure probability is $5.62·10^{-78}$. – Prashand Gupta Sep 8 '11 at 6:44
Just to make it even more obvious: those parameters have been chosen for a $2^{256}$ security level in general, and the failure probability is also smaller than $2^{-256}$. So exploiting a decryption failure requires just as much work as breaking other parts of the system. – Nakedible Sep 8 '11 at 7:44
I edited your answer to have nicer math and quote formatting; could you please check that I didn't break anything? (Ps. I looked at the P1363.1 draft and the Hirschhorn et al. paper, but I have to say I couldn't really figure out how they managed to get those specific formulas out of that paper. Maybe someone more familiar with it can clarify?) – Ilmari Karonen May 12 '12 at 1:21
@Ilmari Looks good, thanks. – Prashand Gupta May 31 '12 at 19:29
Decryption in NTRU is probabilistic; however, for correctly chosen parameters, the chance of a decryption failure is very small. It is not a worry in practice.
-
Does this mean we can simply repeat the decryption it there was a failure, and get other results? – Paŭlo Ebermann♦ Sep 5 '11 at 10:03
I really don't know the specifics, but if I remember right the problem was that a "random" parameter selected by the encrypter may make the decryption process fail - and it is impossible for the encrypter to verify if this is the case without the private key. Maybe! Please confirm my understanding. – Nakedible Sep 5 '11 at 10:25
Decryption isn't probabilistic: running the decryption algorithm multiple times always gives the same result. (Paulo Ebermann asks the right question here). However, it may be inconsistent with encryption, which is a different thing. – William Whyte May 11 '12 at 11:14
I'm Chief Scientist at Security Innovation, which owns NTRU, and have contributed to the design of NTRUEncrypt and NTRUSign.
The headline answer here is: NTRUEncrypt doesn't necessarily require decryption failures; it's a tradeoff you make, trading off key and ciphertext size against decryption failure probabilities. Parameter sets that don't give decryption failures are possible but ones that have a small but non-zero decryption failure probability are more efficient.
The most helpful way to understand this is to think about NTRUEncrypt as a lattice cryptosystem. Here, encryption is a matter of selecting a point (which is effectively a random vector mod q) and adding the message (which is a small vector) to it. Decryption is a matter of mapping the ciphertext point back to the lattice point and recovering the message as the difference between the two. Call this lattice point the "masking point" because it's used to mask the message.
Say we have a two-dimensional lattice, the private basis vectors are (5, 0) and (0, 5), and the message vector is defined as having coordinates with absolute value 1 or 0. So you have 9 possible messages that can be encrypted. In this case, each encrypted message is always closer to the masking point to any other point. (In the case where the masking point is (10, 15), the possible encrypted message values are (9, 14), (9, 15), (9, 16), ... , (11, 16)).
If we said the message vector could have coordinates with absolute value (0, 1, 2), we could encrypt 25 possible messages and the encrypted message would still be closer to the masking point than to any other point.
However, if we said the message vector could have coordinates with absolute value (0, 1, 2, 3), then although we could encrypt 49 messages, any message with a 3 as one of the coordinates would be closer to some other point than to the masking point (because 3 rounds in a different direction mod 5 than 2 does).
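A tiny numeric illustration of that last point, using the same toy parameters (basis vectors of length 5, one masking coordinate of 10); the variable names here are just illustrative, and the sketch only checks which message coordinates survive rounding to the nearest lattice point:

```python
q = 5            # length of the private basis vectors in the toy example
mask = 10        # one coordinate of the masking point

def recover(delta):
    # ciphertext coordinate = masking coordinate + message coordinate;
    # decryption rounds to the nearest multiple of q and subtracts it off
    c = mask + delta
    nearest = q * round(c / q)
    return c - nearest

for delta in range(-3, 4):
    status = "ok" if recover(delta) == delta else "decryption failure"
    print(delta, recover(delta), status)
```

Running it shows that message coordinates of absolute value up to 2 are recovered correctly, while ±3 round toward a different lattice point and come back wrong, exactly as described above.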
What happens in NTRUEncrypt is similar, modulo the differences that you get from moving to higher dimensions. We've defined constraints on the message to be encrypted that ensure that almost all the time, the message will round back to the masking point. We can estimate the probability that the rounding will happen incorrectly and set it to be less than the security level (as Prashand Gupta said). We could also eliminate decryption failures altogether by increasing q, which would corresponding to increasing the size of the private basis relative to the message; we don't see a need to do this, because the decryption failure probability is sufficiently low already and bringing it to 0 would increase q from 2048 to 4096 or 8192, adding N or 2N bits to the size of the ciphertext and key.
-
A failure rate of 2^-256 is small enough, since hardware failures are certainly more likely than that. But I imagine convincing people that it doesn't matter isn't easy. In my experience many programmers have an irrational fear of probabilistic algorithms. – CodesInChaos May 11 '12 at 12:37
http://math.stackexchange.com/questions/190436/confidence-interval-for-uniform | # Confidence interval for uniform
A random variable is uniformly distributed over $(0,\theta)$. The maximum of a random sample of size $n$, call it $y_n$, is sufficient for $\theta$, and it is also the maximum likelihood estimator. Show also that a $100\gamma\%$ confidence interval for $\theta$ is $(y_n,\ y_n/(1-\gamma)^{1/n})$.
Could anyone tell me how to deal with this problem? Do I have to use the central limit theorem?
-
The central limit theorem won't help because it's about the distribution of a mean or a sum, not of a maximum. – Michael Hardy Sep 3 '12 at 12:26
I'm getting $(y_n,y_n/(1-\gamma)^{1/n})$. I'm guessing what you wrote was intended to be that. – Michael Hardy Sep 3 '12 at 12:32
yes, that was a typo. I apologize for the mistake. – adamG Sep 3 '12 at 12:42
In general asymptotic theory doesn't help here because the question requires an exact result. But there is asymptotic theory for the maximum. It is called Gnedenko's theorem and can be applied to the uniform distribution. – Michael Chernick Sep 3 '12 at 16:03
## 1 Answer
You need to show that $$\Pr\left(y_n<\theta<\frac{y_n}{(1-\gamma)^{1/n}}\right) = \gamma.$$ Since $y_n$ is necessarily always less than $\theta$, this probability is the same as $$\Pr\left(\theta<\frac{y_n}{(1-\gamma)^{1/n}}\right)=\Pr\left( \theta(1-\gamma)^{1/n} < y_n\right) = 1-\Pr\left( y_n < \theta(1-\gamma)^{1/n} \right).$$
Notice that $$\Pr(y_n < c) = \Pr(\text{All $n$ observations}<c) = \Big( \Pr(\text{A single observation}<c) \Big)^n = \left( \frac c\theta \right)^n.$$ Apply this last sequence of equalities with $\theta(1-\gamma)^{1/n}$ in place of $c$.
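A quick Monte Carlo check makes the coverage concrete; the parameter values in this sketch are arbitrary illustrative choices, not part of the problem:

```python
import random

theta, n, gamma = 3.7, 8, 0.90   # arbitrary illustrative values
trials = 50000
covered = 0
for _ in range(trials):
    y_n = max(random.uniform(0, theta) for _ in range(n))
    upper = y_n / (1 - gamma) ** (1 / n)
    covered += (y_n < theta < upper)
print(covered / trials)          # should come out close to gamma = 0.90
```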
-
Thanks Michael, I didn't think about using the given interval and work the reverse way. How would you reason to choose this confidence interval if it was not specified in advance? Could you give an intuitive answer? – adamG Sep 3 '12 at 13:03
This confidence interval has $100\gamma\%$ within the interval, and $100(1-\gamma)\%$ to the right of the interval. One could also construct a confidence interval with $100(1-\gamma)/2\%$ in each tail. Or even $100(1-\gamma)\%$ to the left of the interval, with the right endpoint equal to $\infty$. The problem so far doesn't completely specify these things. But one could speak of a unique "confidence distribution", and find that. The second line of displayed $\TeX$ in my answer above is the first step in doing that. – Michael Hardy Sep 3 '12 at 13:18
http://unapologetic.wordpress.com/2011/11/29/homotopy/?like=1&source=post_flair&_wpnonce=a34a077b61 | # The Unapologetic Mathematician
## Homotopy
The common layman’s definition of topology generally involves rubber sheets or clay, with the idea that things are “the same” if they can be stretched, squeezed, or bent from one shape into the other. But the notions of topological equivalence we’ve been using up until now don’t really match up to this picture. Homeomorphism — or diffeomorphism, for differentiable manifolds — is about having continuous maps in either direction, but there’s nothing at all to correspond to the whole stretching and squeezing idea.
Instead, we have homotopy. But instead of saying that spaces are homotopic, we say that two maps $f_0,f_1:M\to N$ are homotopic if the one can be “stretched and squeezed” into the other. And since this stretching and squeezing is a process to take place over time, we will view it sort of like a movie.
We say that a continuous function $H:M\times[0,1]\to N$ is a continuous homotopy from $f_0$ to $f_1$ if $H(p,0)=f_0(p)$ and $H(p,1)=f_1(p)$ for all $p\in M$. For any time $t\in[0,1]$, the map $p\mapsto H(p,t)$ is a continuous map from $M$ to $N$, which is sort of like a “frame” in the movie that takes us from $f_0$ to $f_1$. As time passes over the interval, we highlight one frame at a time to watch the one function transform into the other.
To flip this around, imagine starting with a process of stretching and squeezing to turn one shape into another. In this case, when we say “shape” we really mean a subspace or submanifold of some outside space we occupy, like the three-dimensional space that contains our idiomatic doughnuts and coffee mugs. The maps in this case are the inclusions of the subspaces into the larger space.
Anyway, next imagine carrying out this process, but with a camera recording it at each step. Then cut out all the frames from the movie and stack them up. We see in each layer of this flipbook how the shape $M$ at that time is included into the larger space $N$. That is, we have a homotopy.
Now, for an example: we say that a space is "contractible" if its inclusion into itself is homotopic to a map of the whole space to a single point within the space. As a particular example, the unit ball $B^n\subseteq\mathbb{R}^n$ is contractible. Explicitly, we define a homotopy $H:B^n\times[0,1]\to B^n$ by $H(p,t)=(1-t)p$, which is certainly smooth; we can check that $H(p,0)=p$ and $H(p,1)=0$, so at one end we have the identity map of $B^n$ into itself, while at the other we have the constant map sending all of $B^n$ to the single point at the origin.
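The same straight-line formula shows a bit more, and it makes a useful sanity check on the definition: if $N\subseteq\mathbb{R}^n$ is any convex subset and $f_0,f_1:M\to N$ are continuous, then $H(p,t)=(1-t)f_0(p)+tf_1(p)$ lies in $N$ for every $t\in[0,1]$ precisely because $N$ is convex, and it is continuous in both variables, so any two continuous maps into a convex set are homotopic. Taking $f_0$ to be the identity map and $f_1$ a constant map recovers the statement that any convex subset of $\mathbb{R}^n$, not just the unit ball, is contractible.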
We should be careful to point out that homotopy only requires that the function $H$ be continuous, and not invertible in any sense. In particular, there’s no guarantee that the frame $p\mapsto H(p,t)$ for some fixed $t$ is a homeomorphism from $M$ onto its image. If it turns out that each frame is a homeomorphism of $M$ onto its image, then we say that $H$ is an “isotopy”.
Posted by John Armstrong | Differential Topology, Topology
## 5 Comments »
1. [...] can think of homotopies between maps as morphisms in a category that has the maps as objects. In terms of the movie [...] (Pingback, November 29, 2011)
2. [...] time, while talking about homotopies as morphisms I said that I didn't want to get too deeply into the reparameterization thing [...] (Pingback, November 30, 2011)
3. [...] we've seen that differentiable manifolds, smooth maps, and homotopies form a 2-category, but it's not the only 2-category around. The algebra of differential forms [...] (Pingback, December 2, 2011)
4. [...] a great example of this, let's say that is a contractible manifold. That is, the identity map and the constant map for some are homotopic. These two maps [...] (Pingback, December 6, 2011)
5. [...] say that a space is "simply-connected" if any closed curve with is homotopic to a constant curve that stays at the single point . Intuitively, this means that any loop in the [...] (Pingback, December 14, 2011)
## About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 27, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9364429712295532, "perplexity_flag": "head"} |
http://math.stackexchange.com/questions/229230/analytic-functions-defined-by-integrals | # Analytic functions defined by integrals
Suppose I define a function using an integral:
$$f(z)=\int_{\mathbb R} g(z,x)\ dx,$$
where $g$ is some function, $z$ is a complex variable, and $x$ is a real variable. Suppose the integral exists for $z\in U$, where $U$ is some open region. What are sufficient conditions on $g$ so that $f$ is analytic here, and why do they suffice?
I looked in Ahlfors but couldn't find anything relevant.
-
Ahlfors probably states and proves Morera's theorem. That can prove analyticity of the Gamma function and the Riemann zeta function. – Michael Hardy Nov 5 '12 at 0:44
Perhaps consult some previous questions and their answers: math.stackexchange.com/questions/177953 or math.stackexchange.com/questions/81949 or maybe several others – GEdgar Nov 5 '12 at 1:22
## 2 Answers
It suffices that $g$ is analytic in $z \in U$ for each $x\in {\mathbb R}$ and $\int_{\mathbb R} |g(z,x)|\ dx$ is locally uniformly bounded on compact subsets of $U$. For then if $\Gamma$ is any closed triangle in $U$, Fubini's theorem says $\oint_\Gamma f(z) \ dz = \int_{\mathbb R} \oint_\Gamma g(z,x)\ dz\; dx = 0$, and Morera's theorem says $f$ is analytic in $U$.
EDIT: I guess we'd better also assume that $g(z,x)$ is measurable.
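A standard instance showing how these hypotheses get checked in practice (only one example, of course) is the Gamma function: take $g(z,x)=x^{z-1}e^{-x}$ for $x>0$ and $g(z,x)=0$ for $x\le 0$, with $U=\{\operatorname{Re} z>0\}$. For each fixed $x$, the map $z\mapsto g(z,x)$ is entire, and on a compact set where $0<a\le\operatorname{Re} z\le b$ one has
$$\int_{\mathbb R}|g(z,x)|\,dx=\int_0^\infty x^{\operatorname{Re} z-1}e^{-x}\,dx\le\int_0^\infty \left(x^{a-1}+x^{b-1}\right)e^{-x}\,dx<\infty,$$
a bound independent of $z$. So the criterion above gives analyticity of $\Gamma(z)=\int_0^\infty x^{z-1}e^{-x}\,dx$ on $\operatorname{Re} z>0$, matching the remark about the Gamma function in the comments.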
-
In the long run, a person will better preserve their sanity if such questions are construed as asking when an integral of a function-valued function lies again in the same space as the integrands (and, naturally, with other reasonable compatibilities). Although I anticipate that the kind of answer I am about to give is not as "immediate" as probably desired, I do recommend it long-term: in almost all cases I know, an integral of (parametrized) vectors lies again in the same TVS when the integral can be written as a continuous, compactly-supported integral of vector-valued functions, and ... – paul garrett Nov 5 '12 at 0:58
... invoke existence of Gelfand-Pettis (a.k.a. "weak") integrals of such functions, with values in a quasi-complete (locally convex) TVS. – paul garrett Nov 5 '12 at 0:58
Hint: Morera's theorem should tell you something.
-
http://mathoverflow.net/questions/82533/a-question-related-to-hilberts-irreducibility-theorem/82535 | ## A question related to Hilbert’s Irreducibility Theorem
My question is whether for every extension of number fields $L\subset K$, and for every $f_0(x),...,f_n(x)$ in $K[x]$, there is some $\alpha\in L$ such that $$f_n(\alpha)T^n+...+f_1(\alpha)T+f_0(\alpha)$$ is irreducible as a polynomial in $K[T]$.
If $L=K$ this is known from Hilbert's Irreducibility Theorem. I find it hard to believe that there is a counter-example to this, but on the other hand I can't seem to conjure up a proof.
-
## 1 Answer
The answer is yes, assuming that the two-variable polynomial $f_n(x)T^n + \dots + f_1(x)T + f_0(x)$ is irreducible over $K$.
This follows from the version of Hilbert's irreducibility theorem for number fields proved as Theorem 46 of p.298 of Schinzel's book Polynomials with special regard to reducibility: the relevant passage can be viewed on Google Books
In fact, if I'm reading it correctly, it looks like one has irreducibility for all rational integers $\alpha$ belonging to an appropriate residue class.
http://math.stackexchange.com/questions/4768/every-permutation-is-either-even-or-odd-but-not-both | # every permutation is either even or odd,but not both
How can we show that every permutation is either even or odd, but not both? I can't arrive at a proof for this. Can anybody give me the proof?
Thanks in advance...
-
Unless I'm having a particularly daft day... I question the validity of the first two proofs on wikipedia: they don't seem to eliminate the possibility that every permutation is both even and odd. – Douglas S. Stones Sep 16 '10 at 10:56
The identity permuation is even. – alext87 Sep 16 '10 at 11:19
Depends on your definition of parity of the permutation. If you simply define it as the parity of number of inversion pairs, this is a no-brainer. – Aryabhata Sep 16 '10 at 18:26
## 4 Answers
There is a proof given here: An Historical Note on the Parity of Permutations, T. L. Bartlow, American Mathematical Monthly Vol. 79, No. 7 (Aug. - Sep., 1972), pp. 766-769.
Here's an outline of Bartlow's proof (it matches the proof given in Ted's answer).
• Divide $S_n$ (the symmetric group on $n$ letters) into two classes according to the parity of the number of cycles (fixed points counted as 1-cycles) in their unique decomposition into disjoint cycles. [E.g. in $S_5$ the permutation $(1,2,3)(4)(5)$ has 3 disjoint cycles.]
• These classes are thus well-defined and disjoint, and the identity permutation belongs to exactly one class (since it decomposes into $n$ disjoint cycles, and $n$ is either even or odd).
• He showed that multiplying a permutation in one class by a transposition results in a permutation in the other class.
• He concludes that the two classes are, in fact, the sets of even and odd permutations in $S_n$.
(NB. In earlier versions of this answer I incorrectly labelled this proof as faulty. In fact, this is an excellent proof, and doesn't rely on any auxiliary functions or unrelated concepts.)
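To see Bartlow's two classes concretely, here is a minimal sketch (my own illustration, not from the paper), representing a permutation of $\{0,\dots,n-1\}$ as a Python tuple $p$ with $p[i]$ the image of $i$; it checks, over all of $S_4$, that multiplying by any transposition changes the number of cycles by exactly one, so the parity-of-cycle-count class always flips.

```python
# Minimal sketch (assumption: permutations of {0,...,n-1} stored as tuples p with p[i] = image of i).
from itertools import permutations

def num_cycles(p):
    """Number of cycles in p, counting fixed points as 1-cycles."""
    seen, count = set(), 0
    for start in range(len(p)):
        if start not in seen:
            count += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = p[j]
    return count

def compose(p, q):
    """(p o q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(p)))

def transposition(n, a, b):
    t = list(range(n))
    t[a], t[b] = t[b], t[a]
    return tuple(t)

n = 4
for p in permutations(range(n)):
    for a in range(n):
        for b in range(a + 1, n):
            q = compose(p, transposition(n, a, b))
            # The cycle count changes by exactly 1 (one cycle splits or two cycles merge),
            # so p and q always lie in different parity-of-cycle-count classes.
            assert abs(num_cycles(p) - num_cycles(q)) == 1
print("every transposition flips the cycle-count parity class in S_4")
```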
I claim all three "proofs" of this result (currently) on Wikipedia are incomplete. Let's begin with the observation that, if the identity permutation is not an odd permutation, then the result follows fairly easily.
Theorem: If the identity permutation is not an odd permutation, then every permutation is either even or odd, but not both.
Proof: Let $\sigma$ be both an even and an odd permutation. Then there exist transpositions $t_i$ and $s_j$ such that $$\sigma=t_1 \circ t_2 \circ \cdots \circ t_k=s_1 \circ s_2 \circ \cdots \circ s_m$$ where $k$ is even and $m$ is odd. Note that $$\sigma^{-1}=s_m \circ s_{m-1} \circ \cdots \circ s_1.$$
Then the identity permutation $\sigma \circ \sigma^{-1}$ is the product $$t_1 \circ t_2 \circ \cdots \circ t_k \circ s_m \circ s_{m-1} \circ \cdots \circ s_1$$ of $k+m$ transpositions, an odd number, so the identity would be an odd permutation, giving a contradiction. This completes the proof of the theorem.
The problem is that we cannot just assume that the identity permutation is not an odd permutation (yes, it's an even permutation, but what's to stop it from being an odd permutation also?); this does not follow from the definition, and it is the base case of the "induction". To prove it, we need to show that the identity permutation cannot be decomposed into an odd number of transpositions (without using the theorem itself).
However, we can deduce from the above theorem that either (a) every permutation is both even and odd, or (b) every permutation is even or odd but not both.
On Wikipedia: Proof 1 states that the identity permutation is an even permutation (which it is), then assumes that the identity permutation is not also an odd permutation (this is analogous to assuming that a closed set is not an open set). Proof 2 essentially says that we can switch signs by multiplying by a transposition (which is fine, if you know a priori that there exists some permutation that is not even or not odd). Proof 3 neglects the possibility that an even-length word might be equal to an odd-length word.
Keith Conrad also gave a proof that the identity permutation is not an odd permutation here. It is almost a page long (and is the majority of the proof of the result in question).
I think that the Wikipedia article is technically correct, because it is explicit that the well-definedness of the transposition-based definition is assumed (with a reference). However, I cannot see the point of stating a proof of the equivalence of the inversion-based definition and the transposition-based definition in the article without stating a proof of the well-definedness of the latter. As is often the case with Wikipedia, I do not know whom the article is intended for. – Tsuyoshi Ito Sep 16 '10 at 12:32
@Douglas: The wiki page has this definition: "Parity of permutation is the parity of the number of inversion pairs". This trivially implies that a permutation cannot be both even and odd. – Aryabhata Sep 16 '10 at 18:26
A permutation can be written as the composition of transpositions in an infinite number of ways... how do you check all infinity of them? [besides, the alarm bells should be going off: you're claiming that the OP's theorem is trivial from the definition] – Douglas S. Stones Sep 17 '10 at 1:37
@Douglas: What I am claiming is that it depends on the definition. OP never provided one. The wiki page provided one in terms of inversion pairs (transpositions was proven equivalent) and so is probably correct (I didn't go through the whole thing) in assuming it is either even or odd, but not both. The question asked by OP is trivial under the definition in terms of inversion pairs. I do agree, if we go via transpositions, we have work to do. – Aryabhata Sep 17 '10 at 5:01
Hmm... I equated "inversion pairs" with "transpositions" in the earlier comment (which is incorrect). But there's still an infinite number of ways to write a permutation as the composition of inversion pairs. I might be missing something obvious here, but why is this now a no-brainer? – Douglas S. Stones Sep 17 '10 at 5:39
One way is to define the sign of a permutation $\sigma$ using the polynomial $\Delta = \prod_{1\le i < j \le n} (x_i-x_j)$.
It is easy to see that $\sigma(\Delta) = \prod_{1\le i < j \le n} (x_{\sigma(i)}-x_{\sigma(j)})$ satisfies $\sigma(\Delta)=\pm\Delta$. Now define the sign by $\operatorname{sign}(\sigma)=\frac{\Delta}{\sigma(\Delta)}$.
With a little more work you can show that this function is a homomorphism of groups, and that on transpositions it returns $-1$. Therefore, if $\sigma=\tau_1\cdots\tau_k$ is a way to write $\sigma$ as a product of transpositions, we have $\operatorname{sign}(\sigma)=(-1)^k$, and so for even permutations (permutations whose sign is $1$) $k$ must always be even, and for odd permutations (whose sign is $-1$) it must always be odd.
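A small numerical check may help connect this with the inversion-pair definition mentioned in the comments. The sketch below (my own illustration, assuming permutations of $\{0,\dots,n-1\}$ as tuples) computes $\operatorname{sign}(\sigma)=(-1)^{\#\text{inversions}}$ and compares it with $(-1)^{\#\text{swaps}}$ for a different decomposition into (not necessarily adjacent) transpositions obtained by cycle-chasing; the swap counts differ from the inversion counts in general, but the parities agree on all of $S_4$, as the homomorphism argument predicts.

```python
# Minimal sketch (assumption: sigma is a tuple on {0,...,n-1}); my own illustration.
from itertools import permutations

def sign_by_inversions(sigma):
    """(-1) raised to the number of inversion pairs (i < j with sigma[i] > sigma[j])."""
    n = len(sigma)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                     if sigma[i] > sigma[j])
    return (-1) ** inversions

def sign_by_sorting(sigma):
    """Undo sigma by transpositions (swap s[i] with s[s[i]] until position i holds i)
    and return (-1)^(number of swaps performed)."""
    s, swaps = list(sigma), 0
    for i in range(len(s)):
        while s[i] != i:
            j = s[i]
            s[i], s[j] = s[j], s[i]  # one transposition
            swaps += 1
    return (-1) ** swaps

assert all(sign_by_inversions(p) == sign_by_sorting(p) for p in permutations(range(4)))
print("inversion-count sign matches transposition-count sign on all of S_4")
```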
I've often wondered why people use that polynomial instead of simply the integer $\prod_{1\le i<j\le n} (i-j)$, which seems to me more elementary than using a polynomial in several variables. – lhf Sep 16 '10 at 13:56
I can't see a reason myself. It might be because algebraists are used to working with symmetric polynomials, and this is a special case. – Gadi A Sep 16 '10 at 14:08
@lhf: The symmetric group cannot act on that integer per se; it acts on the form in which you have expressed it! Thus you are computing with it as if it were a polynomial in which the indexes are names for the variables. – whuber Sep 16 '10 at 15:14
I think this proof is really interesting because it's not very difficult and provides an easy way to see that the definition with inversion pairs is equivalent to the one with transpositions. – Joel Cohen Oct 14 '11 at 12:03
@JoelCohen- good point! You have totally demystified the inversion-pairs definition for me. – Ben Blum-Smith Feb 19 '12 at 0:15
This is overkill, but it follows from general facts about Coxeter groups as outlined here. In particular, it follows from the fact that $S_n$ has presentation $s_i^2 = 1$, $(s_i s_{i+1})^3 = 1$, and $s_i s_j = s_j s_i$ for $|i - j| \ge 2$ (which follows from the faithfulness of the geometric representation) that the homomorphism $s_i \mapsto -1$ is well-defined.
It is enough to show that the product of an odd number of transpositions cannot be the identity.
Every permutation of a finite set $S$ is a unique product of disjoint cycles in which every element of $S$ occurs exactly once (where we include fixed points as 1-cycles). Let $p$ be any permutation of $S$, let $(ij)$ be a transposition ($i,j \in S$), and let $q=p \cdot (ij)$. It is easy to check that if $i$ and $j$ are in the same cycle in $p$, then that cycle splits into two in $q$; if $i$ and $j$ are in different cycles in $p$, then those cycles merge into one in $q$. Cycles of $p$ not containing either $i$ or $j$ remain the same in $q$. Therefore, $q$ has either one more or one less cycle than $p$ does.
Now let $t$ be any product of an odd number of transpositions. Then by the above, multiplying any permutation by $t$ changes the parity of the number of cycles in the permutation. Therefore $t$ cannot be the identity.
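Here is a minimal sketch illustrating the conclusion (my own, not part of the answer above), again with permutations of $\{0,\dots,n-1\}$ as tuples: random products of an odd number of transpositions in $S_5$ are never the identity.

```python
# Minimal sketch (assumption: my own illustration; permutations as 0-indexed tuples).
import random

def transposition(n, a, b):
    t = list(range(n))
    t[a], t[b] = t[b], t[a]
    return tuple(t)

def compose(p, q):
    """(p o q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(p)))

n, identity = 5, tuple(range(5))
random.seed(0)
for _ in range(10000):
    k = random.choice([1, 3, 5, 7])  # an odd number of transpositions
    word = identity
    for _ in range(k):
        a, b = random.sample(range(n), 2)
        word = compose(word, transposition(n, a, b))
    assert word != identity  # an odd product of transpositions is never the identity
print("no odd product of transpositions in S_5 equaled the identity")
```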
Excellent answer. Gallian uses this argument. – Stefan Smith Jan 9 at 14:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 50, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9360865950584412, "perplexity_flag": "head"} |