# Is the invariance property of the ML estimator nonsensical from a Bayesian perspective?
Casella and Berger state the invariance property of the ML estimator as follows: if $\hat\theta$ is the MLE of $\theta$, then for any function $\tau(\theta)$, the MLE of $\tau(\theta)$ is $\tau(\hat\theta)$.
However, it seems to me that they define the "likelihood" of $\eta$ in a completely ad hoc and nonsensical way, namely via the induced likelihood $L^*(\eta|x)=\sup_{\{\theta:\,\tau(\theta)=\eta\}} L(\theta|x)$.
If I apply basic rules of probability theory to the simple case where $\eta=\tau(\theta)=\theta^2$, I instead get the following: $$L(\eta|x)=p(x|\theta^2=\eta)=p(x|\theta = -\sqrt \eta \lor \theta = \sqrt \eta)=:p(x|A \lor B)$$ Now applying Bayes' theorem, and then the fact that $A$ and $B$ are mutually exclusive so that we can apply the sum rule: $$p(x|A\lor B)=p(x)\frac {p(A\lor B|x)}{p(A\lor B)}=p(x)\frac {p(A|x)+p(B|x)}{p(A)+p(B)}$$
Now applying Bayes' theorem to the terms in the numerator again: $$p(x)\frac {p(A)\frac {p(x|A)}{p(x)}+p(B)\frac {p(x|B)}{p(x)}}{p(A)+p(B)}=\frac {p(A)p(x|A)+p(B)p(x|B)}{p(A)+p(B)}$$
If we want to maximize this w.r.t to $\eta$ in order to get the maximum likelihood estimate of $\eta$, we have to maximize: $$p_\theta(-\sqrt \eta)p(x|\theta = -\sqrt \eta)+p_\theta(\sqrt \eta)p(x|\theta = \sqrt \eta)$$
Does Bayes strike again? Are Casella & Berger wrong? Or am I wrong?
• Possible duplicate of Invariance property of maximum likelihood estimator? Nov 15 '17 at 21:15
• The formal part after "If I apply basic rules of probability theory to the simple case where $\eta=\tau(\theta)=\theta^2$" does not change the question. The matter is fully covered in the excellent answer from Samuel Benidt. The likelihood values (and as a consequence the maximum) do not change due to the mapping. Yes, you need to take special care if the mapping is not one-to-one. But that is a whole different issue than the changes occurring due to probability distributions when you apply a transform. Nov 15 '17 at 22:54
• I understand your frustration, Programmer2134 (& @MartijnWeterings). However, please be careful of your tone in your comments. Productive conversations are only possible when our be nice policy is followed. If you aren't interested in pursuing productive conversations, you need to post these questions elsewhere. Nov 16 '17 at 18:34
• @gung, You are completely right. And I regret reacting with that tone. I will stop doing it from now on. Sorry for this. Regarding the conversation, I am interested in pursuing productive ones, but felt that people's reactions in a couple of questions I asked were mostly counterproductive. Nevertheless, next time, I will respond differently. Nov 16 '17 at 19:20
• Thank you. It is best to assume people are responding in good faith. There are (relatively few, IMHO) occasions where people here aren't, but even then, sometimes they can be coaxed to come around. Nov 16 '17 at 19:26
As Xi'an says, the question is moot, but I think that many people are nevertheless led to consider the maximum-likelihood estimate from a Bayesian perspective because of a statement that appears in some literature and on the internet: "the maximum-likelihood estimate is a particular case of the Bayesian maximum a posteriori estimate, when the prior distribution is uniform".
I'd say that from a Bayesian perspective the maximum-likelihood estimator and its invariance property can make sense, but the role and meaning of estimators in Bayesian theory is very different from frequentist theory. And this particular estimator is usually not very sensible from a Bayesian perspective. Here's why. For simplicity let me consider a one-dimensional parameter and one-one transformations.
First of all two remarks:
1. It can be useful to consider a parameter as a quantity living on a generic manifold, on which we can choose different coordinate systems or measurement units. From this point of view a reparameterization is just a change of coordinates. For example, the temperature of the triple point of water is the same whether we express it as $$T=273.16$$ (K), $$t=0.01$$ (°C), $$\theta=32.01$$ (°F), or $$\eta=5.61$$ (a logarithmic scale). Our inferences and decisions should be invariant with respect to coordinate changes. Some coordinate systems may be more natural than others, though, of course.
2. Probabilities for continuous quantities always refer to intervals (more precisely, sets) of values of such quantities, never to particular values; although in singular cases we can, for example, consider sets containing one value only. The probability-density notation $$\mathrm{p}(x)\,\mathrm{d}x$$, in Riemann-integral style, is telling us that
(a) we have chosen a coordinate system $$x$$ on the parameter manifold,
(b) this coordinate system allows us to speak of intervals of equal width,
(c) the probability that the value lies in a small interval $$\Delta x$$ is approximately $$\mathrm{p}(x)\,\Delta x$$, where $$x$$ is a point within the interval.
(Alternatively we can speak of a base Lebesgue measure $$\mathrm{d}x$$ and intervals of equal measure, but the essence is the same.)
Therefore, a statement like "$$\mathrm{p}(x_1) > \mathrm{p}(x_2)$$" does not mean that the probability for $$x_1$$ is larger than that for $$x_2$$, but that the probability that $$x$$ lies in a small interval around $$x_1$$ is larger than the probability that it lies in an interval of equal width around $$x_2$$. Such a statement is coordinate-dependent.
Let's see the (frequentist) maximum-likelihood point of view
From this point of view, speaking about the probability for a parameter value $$x$$ is simply meaningless. Full stop. We'd like to know what the true parameter value is, and the value $$\tilde{x}$$ that gives highest probability to the data $$D$$ should intuitively be not too far off the mark: $$\tilde{x} := \arg\max_x \mathrm{p}(D \mid x)\tag{1}\label{ML}.$$ This is the maximum-likelihood estimator.
This estimator selects a point on the parameter manifold and therefore doesn't depend on any coordinate system. Stated otherwise: Each point on the parameter manifold is associated with a number: the probability for the data $$D$$; we're choosing the point that has the highest associated number. This choice does not require a coordinate system or base measure. It is for this reason that this estimator is parameterization invariant, and this property tells us that it is not a probability – as desired. This invariance remains if we consider more complex parameter transformations, and the profile likelihood mentioned by Xi'an makes complete sense from this perspective.
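To make this tangible, here is a minimal numerical sketch (Python; the exponential model, the sample, and the grids are illustrative assumptions of mine, not part of the original argument) showing that maximizing the likelihood in the coordinate $$x$$ or in the transformed coordinate $$y=f(x)=x^2$$ selects the same point on the parameter manifold:

```python
import numpy as np

# Hypothetical example: 200 draws from an Exponential distribution with rate x = 2.
rng = np.random.default_rng(0)
data = rng.exponential(scale=1 / 2.0, size=200)

def log_lik(x):
    # log p(D | x) for the Exponential(rate x) model
    return len(data) * np.log(x) - x * data.sum()

# Maximize over a fine grid in the coordinate x ...
xs = np.linspace(0.01, 10.0, 200_000)
x_hat = xs[np.argmax(log_lik(xs))]

# ... and in the coordinate y = f(x) = x**2 (one-one for x > 0).
ys = np.linspace(1e-4, 100.0, 200_000)
y_hat = ys[np.argmax(log_lik(np.sqrt(ys)))]

print(x_hat**2, y_hat)   # agree up to grid resolution: the same manifold point
```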
Let's see the Bayesian point of view
From this point of view it always makes sense to speak of the probability for a continuous parameter, if we are uncertain about it, conditional on data and other evidence $$D$$. We write this as $$\mathrm{p}(x \mid D)\,\mathrm{d}x \propto \mathrm{p}(D \mid x)\, \mathrm{p}(x)\,\mathrm{d}x.\tag{2}\label{PD}$$ As remarked at the beginning, this probability refers to intervals on the parameter manifold, not to single points.
Ideally we should report our uncertainty by specifying the full probability distribution $$\mathrm{p}(x \mid D)\,\mathrm{d}x$$ for the parameter. So the notion of estimator is secondary from a Bayesian perspective.
This notion appears when we must choose one point on the parameter manifold for some particular purpose or reason, even though the true point is unknown. This choice is the realm of decision theory [1], and the value chosen is the proper definition of "estimator" in Bayesian theory. Decision theory says that we must first introduce a utility function $$(P_0,P)\mapsto G(P_0; P)$$ which tells us how much we gain by choosing the point $$P_0$$ on the parameter manifold, when the true point is $$P$$ (alternatively, we can pessimistically speak of a loss function). This function will have a different expression in each coordinate system, e.g. $$(x_0,x)\mapsto G_x(x_0; x)$$, and $$(y_0,y)\mapsto G_y(y_0; y)$$; if the coordinate transformation is $$y=f(x)$$, the two expressions are related by $$G_x(x_0;x) = G_y[f(x_0); f(x)]$$ [2].
Let me stress at once that when we speak, say, of a quadratic utility function, we have implicitly chosen a particular coordinate system, usually a natural one for the parameter. In another coordinate system the expression for the utility function will generally not be quadratic, but it's still the same utility function on the parameter manifold.
The estimator $$\hat{P}$$ associated with a utility function $$G$$ is the point that maximizes the expected utility given our data $$D$$. In a coordinate system $$x$$, its coordinate is $$\hat{x} := \arg\max_{x_0} \int G_x(x_0; x)\, \mathrm{p}(x \mid D)\,\mathrm{d}x.\tag{3}\label{UF}$$ This definition is independent of coordinate changes: in new coordinates $$y=f(x)$$ the coordinate of the estimator is $$\hat{y}=f(\hat{x})$$. This follows from the coordinate-independence of $$G$$ and of the integral.
You see that this kind of invariance is a built-in property of Bayesian estimators.
Now we can ask: is there a utility function that leads to an estimator equal to the maximum-likelihood one? Since the maximum-likelihood estimator is invariant, such a function might exist. From this point of view, maximum-likelihood would be nonsensical from a Bayesian point of view if it were not invariant!
A utility function that in a particular coordinate system $$x$$ is equal to a Dirac delta, $$G_x(x_0; x) = \delta(x_0-x)$$, seems to do the job [3]. Equation $$\eqref{UF}$$ yields $$\hat{x} = \arg\max_{x} \mathrm{p}(x \mid D)$$, and if the prior in $$\eqref{PD}$$ is uniform in the coordinate $$x$$, we obtain the maximum-likelihood estimate $$\eqref{ML}$$. Alternatively we can consider a sequence of utility functions with increasingly smaller support, e.g. $$G_x(x_0; x) = 1$$ if $$\lvert x_0-x \rvert<\epsilon$$ and $$G_x(x_0; x) = 0$$ elsewhere, for $$\epsilon\to 0$$ [4].
So, yes, the maximum-likelihood estimator and its invariance can make sense from a Bayesian perspective, if we are mathematically generous and accept generalized functions. But the very meaning, role, and use of an estimator in a Bayesian perspective are completely different from those in a frequentist perspective.
Let me also add that there seem to be reservations in the literature about whether the utility function defined above makes mathematical sense [5]. In any case, the usefulness of such a utility function is rather limited: as Jaynes [3] points out, it means that "we care only about the chance of being exactly right; and, if we are wrong, we don't care how wrong we are".
Now consider the statement "maximum-likelihood is a special case of maximum-a-posteriori with a uniform prior". It's important to note what happens under a general change of coordinates $$y=f(x)$$:
1. the utility function above assumes a different expression: $$G_y(y_0;y) = \delta[f^{-1}(y_0)-f^{-1}(y)] \equiv \delta(y_0-y)\,\lvert f'[f^{-1}(y_0)]\rvert$$
2. the prior density in the coordinate $$y$$ is not uniform, owing to the Jacobian determinant;
3. the estimator is not the maximum of the posterior density in the $$y$$ coordinate, because the Dirac delta has acquired an extra multiplicative factor;
4. the estimator is still given by the maximum of the likelihood in the new, $$y$$ coordinates.
These changes combine so that the estimator point is still the same on the parameter manifold.
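A short numerical continuation of the sketch above (same hypothetical exponential model) makes points 2 and 3 in the list tangible, with a prior uniform in the coordinate $$x$$:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=1 / 2.0, size=200)
log_lik = lambda x: len(data) * np.log(x) - x * data.sum()

# Posterior with a prior uniform in x: p(x|D) dx is proportional to p(D|x) dx,
# so the density maximum in the coordinate x coincides with the ML point.
xs = np.linspace(0.01, 10.0, 200_000)
map_x = xs[np.argmax(log_lik(xs))]

# In the coordinate y = x**2 the same posterior density acquires a Jacobian
# factor dx/dy = 1/(2*sqrt(y)), and its maximum is a *different* manifold point.
ys = np.linspace(1e-4, 100.0, 200_000)
map_y = ys[np.argmax(log_lik(np.sqrt(ys)) - np.log(2 * np.sqrt(ys)))]

print(map_x**2, map_y)   # differ: density maxima are coordinate-dependent
```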
Thus, the statement above is implicitly assuming a special coordinate system. A tentative, more explicit statement could be this: "the maximum-likelihood estimator is numerically equal to the Bayesian estimator that in some coordinate system has a delta utility function and a uniform prior".
The discussion above is informal, but can be made precise using measure theory and Stieltjes integration.
In the Bayesian literature we can find also a more informal notion of estimator: it's a number that somehow "summarizes" a probability distribution, especially when it's inconvenient or impossible to specify its full density $$\mathrm{p}(x \mid D)\,\mathrm{d}x$$; see e.g. Murphy [6] or MacKay [7]. This notion is usually detached from decision theory, and therefore may be coordinate-dependent or tacitly assumes a particular coordinate system. But in the decision-theoretic definition of estimator, something which is not invariant cannot be an estimator.
[1] For example, H. Raiffa, R. Schlaifer: Applied Statistical Decision Theory (Wiley 2000).
[2] Y. Choquet-Bruhat, C. DeWitt-Morette, M. Dillard-Bleick: Analysis, Manifolds and Physics. Part I: Basics (Elsevier 1996), or any other good book on differential geometry.
[3] E. T. Jaynes: Probability Theory: The Logic of Science (Cambridge University Press 2003), §13.10.
[4] J.-M. Bernardo, A. F. Smith: Bayesian Theory (Wiley 2000), §5.1.5.
[5] I. H. Jermyn: Invariant Bayesian estimation on manifolds https://doi.org/10.1214/009053604000001273; R. Bassett, J. Deride: Maximum a posteriori estimators as a limit of Bayes estimators https://doi.org/10.1007/s10107-018-1241-0.
[6] K. P. Murphy: Machine Learning: A Probabilistic Perspective (MIT Press 2012), especially chap. 5.
[7] D. J. C. MacKay: Information Theory, Inference, and Learning Algorithms (Cambridge University Press 2003), http://www.inference.phy.cam.ac.uk/mackay/itila/.
• There exist ways to define invariant Bayes estimators, in the above sense, by creating a functional loss function, as eg the Kullback-Leibler divergence between two densities. I called these losses intrinsic losses in a 1996 paper. Aug 23 '18 at 8:50
From a non-Bayesian viewpoint, there is no definition of quantities like $$p(x|\theta = -\sqrt \eta \lor \theta = \sqrt \eta)$$ because $\theta$ is then a fixed parameter and the conditioning notation does not make sense. The alternative you propose relies on a prior distribution, which is precisely what an approach such as the one proposed by Casella and Berger wants to avoid. You can check the keyword profile likelihood for more entries. (And there is no meaning of right or wrong there.)
• How does this contradict what I'm saying? My point was that it is nonsensical from a Bayesian perspective. The problem I have with Casella and Berger's solution is that, basically, they come up with a totally new ad-hoc definition of likelihood, in such a way that their desired conclusion is reached. If one were to make a consistent definition of likelihood, namely the one I gave above, then the conclusion would be different. Of course Casella and Berger may want to avoid bringing in priors, but the only way to do so is to come up with an ad hoc change of definition of likelihood. Nov 14 '17 at 12:25
• If you want to keep a Bayesian perspective, the question is moot since most non-Bayesian results will not make sense or be "consistent" with Bayesian principles. Nov 14 '17 at 12:34
help-gnu-emacs
[Top][All Lists]
## Re: Don't you think this would be a nice feature? (Place holder)
From: Tim X Subject: Re: Don't you think this would be a nice feature? (Place holder) Date: Sun, 28 Sep 2008 09:52:31 +1000 User-agent: Gnus/5.13 (Gnus v5.13) Emacs/23.0.60 (gnu/linux)
Weiwei <address@hidden> writes:
> Hi guys,
>
> I'm an Emacs newbie, just jumped into it from Vim. I'm using AUCTeX to
> write LaTeX files. In Vim, it has a very nice feature -- placeholder.
> For example, you have the following skeleton in inserting figures:
>
> \begin{figure}[H]
> \centering
> \subfigure[]{\includegraphics[width=3.1in]{}}
> \subfigure[]{\includegraphics[width=3.1in]{}}
> \caption{}
> \label{fig:}
> \end{figure}
>
> Now your cursor is in the third line between the first square brackets
> [], after you type something, you want to jump to the brackets at the
> end of the same line {}, and so forth. Vim LaTeX suite has this
> function with a single key-stroke. In AUCTeX, I didn't find such one,
> or maybe I missed it. Could anybody kindly point it to me if it
> exists?
>
> Now lets look at this feature a little bit further. Can we have (Or do
> we already have) a universal place-holder in Emacs? For example, we
> have a block of text/program as this:
>
> foofoofoo<>foofoofoofoofoofoofoofoofoofoofoofoo
> foo<>foofoofoofoofoofoofoo<>foofoofoofoofoofoo
> foofoofoofoofoo<>foofoofoofoofoofoofoofoofoofoo
>
> The <> indicates a place-holder in which you want to jump quickly. The
> function I proposed is to find next <>, and then delete the left "<"
> and right ">", and leave cursor there.
>
> I'm not sure if any similar functions are already there. I think it
> should be easy with regular expressions. Simply I'm not a regexp guy.
> What do you guys think? And anybody want to have a try? Thanks!
>
If I understand you correctly, I think everything you need is there; it
just needs to be configured for your particular needs. Emacs has two
standard template systems, tempo and skeleton mode. There are also a
number of other template modes, varying in features and flexibility,
that you can use that are not standard parts of emacs.
You can create very powerful 'electric' behavior by combining these
template modes with abbrevs. For example, some of the programming modes
use this technique for common constructs, such as an if statement. When
you type if and hit space, an abbrev executes that has a template
definition that fills in the rest of the construct and leave the cursor
in a 'useful' place, often where you need to enter the test.
For my own work, I have various templates to set up the latex preamble
that prompt me for the document title. It then inserts the
documentclass, title, date, author etc, puts in the start/end document
pair and leaves my cursor between them. I have some other templates for
common latex constructs that I use that are not already built into
auctex.
The other emacs feature which can be useful is macros. You can define a
macro and associate it with a key. Then, hitting that binding will
execute the macro, which can in turn execute various emacs commands.
I would suggest that making such templates part of auctex probably won't
have much value. There is too much variation in the way people like to
write their documents and as latex has a wealth of packages to do almost
everything, the combination of options is probably too great to do much
more than it already has. I find the default auctex commands for
inserting sections, various standard/common environments, font
attributes etc meet 99% of what I need. The templates are probably best
left for individuals to derive for themselves based on their own
requirements.
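If you do want exactly the "<>" jump described above, a minimal sketch
is something like the following (untested, and the function name and
key binding are arbitrary choices):

(defun my-jump-to-placeholder ()
  "Find the next \"<>\" place-holder, delete it and leave point there."
  (interactive)
  (when (search-forward "<>" nil t)
    (delete-region (match-beginning 0) (match-end 0))))

(global-set-key (kbd "C-c j") 'my-jump-to-placeholder)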
HTH
Tim
--
tcross (at) rapttech dot com dot au
# SDE of futures price under non-constant interest rate and volatility process
I'm trying to figure out the form of the SDE of futures price under the risk neutral measure, when stock price follows GBM:
$$dS_{t}=r_{t}S_{t}\,dt+\sigma_{t}S_{t}\,dW_{t}$$
When $$r_{t}=r$$, and $$\sigma_{t}=\sigma$$, it's trivial that futures price $$F_{t,T}$$ follows GBM:
$$dF_{t}=\sigma F_{t}dW_{t}$$
as futures price is a martingale.
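In the constant-coefficient case the claim can be checked in one line (a quick sketch, writing $$F_{t}=F_{t,T}$$): since
$$F_{t,T}=E^{\mathbb{Q}}\left[S_{T}\mid\mathcal{F}_{t}\right]=S_{t}e^{r(T-t)},$$
Itô's formula gives
$$dF_{t}=e^{r(T-t)}\,dS_{t}-rS_{t}e^{r(T-t)}\,dt=rF_{t}\,dt+\sigma F_{t}\,dW_{t}-rF_{t}\,dt=\sigma F_{t}\,dW_{t}.$$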
I wonder if we can derive an explicit form of SDE for futures price when interest rate and volatility are random processes. I tried myself but failed to do so.
• I do not think it is possible for general random $r$ and $\sigma$. However, it is possible for random $r$ and $\sigma$ with certain specific dynamic forms. – Gordon Mar 11 '19 at 12:53
Paul's Online Notes
### Section 2-7 : Directional Derivatives
1. Determine the gradient of the following function.
$f\left( {x,y} \right) = {x^2}\sec \left( {3x} \right) - \frac{{{x^2}}}{{{y^3}}}$
Not really a lot to do for this problem. Here is the gradient.
$\nabla f = \left\langle {{f_x},{f_y}} \right\rangle = \require{bbox} \bbox[2pt,border:1px solid black]{{\left\langle {2x\sec \left( {3x} \right) + 3{x^2}\sec \left( {3x} \right)\tan \left( {3x} \right) - \frac{{2x}}{{{y^3}}},\frac{{3{x^2}}}{{{y^4}}}} \right\rangle }}$
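As a quick cross-check, the same gradient can be computed symbolically (a sketch using SymPy, which is not part of the original notes):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * sp.sec(3*x) - x**2 / y**3

# The two components should match the boxed answer:
# f_x = 2x sec(3x) + 3x^2 sec(3x) tan(3x) - 2x/y^3,   f_y = 3x^2/y^4
fx, fy = sp.diff(f, x), sp.diff(f, y)
print(sp.simplify(fx))
print(sp.simplify(fy))
```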
# How many positive integers less than 10,000 are there in which the sum of the digits equals 5?
Math Expert
26 May 2013, 04:47
cumulonimbus wrote:
Bunuel wrote:
anilnandyala wrote:
thanks Bunuel
can you explain this using the formula?
How many positive integers less than 10,000 are there in which the sum of the digits equals 6?
6 stars (digits) and 3 separators ||| --> ******||| --> # of permutations of these symbols is $$\frac{9!}{6!3!}$$.
Or: The total number of ways of dividing n identical items (6 *'s in our case) among r persons or objects (4 digit places in our case), each one of whom can receive 0, 1, 2 or more items (from zero to 6 in our case) is $${n+r-1}_C_{r-1}$$.
In our case we'll get: $${n+r-1}_C_{r-1}={6+4-1}_C_{4-1}={9}C3=\frac{9!}{6!3!}$$.
Hope it's clear.
Hi Bunuel,
Can I say that this involves the placement of 5 identical 1's in four places such that each place can receive 0 to 5 1's?
Yes, that's correct.
Intern
25 Nov 2013, 05:18
Excellent solution Bunuel. Saves a minute at minimum !!!
Intern
29 Nov 2013, 11:03
Hi Bunuel, can you please explain when the separator concept is to be used and how to use it? Basically I did not understand why we have considered only 4-digit numbers in this question. Please help.
Math Expert
29 Nov 2013, 11:09
nayan19 wrote:
Hi Bunuel, can you please explain when the separator concept is to be used and how to use it? Basically I did not understand why we have considered only 4-digit numbers in this question. Please help.
Integers less than 10,000 are 1, 2, or 3-digit numbers. Post here: how-many-positive-integers-less-than-10-000-are-there-in-85291.html#p710836 explains that we can get single-digit as well as 2 or 3-digit numbers with that approach (check the examples there).
Similar questions to practice:
larry-michael-and-doug-have-five-donuts-to-share-if-any-108739.html
in-how-many-ways-can-5-different-rings-be-worn-in-four-126991.html
Hope this helps.
Manager
27 Dec 2013, 15:33
Exceptional technique! Thanks all for this! Saves a lot of time in a lot of situations!
Incredible minds!
thanks !!
Intern
05 May 2014, 17:39
Bunuel's method was clearly simpler and faster, though I would hardly come up with a similar solution in the GMAT.
I did it in a different way; can someone check if the approach was valid?
5 and 0s: 4P1*3C3 = 4*1 = 4
4, 1 and 0s: 4P1*3P1*2C2 = 4*3*1 = 12
3, 2 and 0s: 4P1*3P1*2C2 = 4*3*1 = 12
2, 2, 1 and 0: 4P2*2P1*1 = 12*2 = 24
2, 1, 1, 1: 4P1*3C3 = 4*1 = 4
4 + 12+ 12 + 24 + 4 = 56
Intern
30 Nov 2014, 14:45
Hey Bunuel,
would you please tell me where the numbers 8, 5, and 3 in the formula 8!/(5!*3!) come from?
Manager
15 Dec 2014, 05:33
Bunuel wrote:
Ramsay wrote:
Sorry guys,
Could someone please explain the following:
"There are 8C3 ways to determine where to place the separators"
I'm not familiar with this shortcut/approach.
Ta
Consider this: we have 5 $$d$$'s and 3 separators $$|$$, like: $$ddddd|||$$. How many permutations (arrangements) of these symbols are possible? Total of 8 symbols (5+3=8), out of which 5 $$d$$'s and 3 $$|$$'s are identical, so $$\frac{8!}{5!3!}=56$$.
With these permutations we'll get combinations like: $$|dd|d|dd$$ this would be 3 digit number 212 OR $$|||ddddd$$ this would be single digit number 5 (smallest number less than 10,000 in which sum of digits equals 5) OR $$ddddd|||$$ this would be 4 digit number 5,000 (largest number less than 10,000 in which sum of digits equals 5)...
Basically this arrangements will give us all numbers less than 10,000 in which sum of the digits (sum of 5 d's=5) equals 5.
Hence the answer is $$\frac{8!}{5!3!}=56$$.
This can be done with direct formula as well:
The total number of ways of dividing n identical items (5 d's in our case) among r persons or objects (4 digit places in our case), each one of whom can receive 0, 1, 2 or more items (from zero to 5 in our case) is $${n+r-1}_C_{r-1}$$.
In our case we'll get: $${n+r-1}_C_{r-1}={5+4-1}_C_{4-1}={8}C3=\frac{8!}{5!3!}=56$$
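If you want to convince yourself that the count is right, a quick brute-force check (a sketch in Python, added for illustration) agrees with the stars-and-bars formula:

```python
from math import comb

# Enumerate positive integers below 10,000 whose digits sum to 5 ...
brute = sum(1 for n in range(1, 10_000) if sum(map(int, str(n))) == 5)

# ... and compare with stars and bars: 5 units in 4 digit places = C(5+4-1, 4-1).
print(brute, comb(8, 3))  # both give 56
```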
Hi Bunuel,
Could you please clarify why we are taking 5 d's and 3 separators (|)? I am getting confused here; couldn't we take four separators instead and get the result?
Thanks.
Senior Manager
26 Dec 2014, 04:43
Hi,
If I may venture to propose the solution I used and you can tell me what I am missing.
I started by testing it from 0-10. There is one such number (5). From 11-20 there is one such number (14). This led me to realize that from 0-99 there are 9 such numbers. So, this was the lengthy part of my thinking process (not lengthy at all).
From 0-99: 9 numbers.
From 100-999: 9*2 = 18 numbers
From 1000-9999: 9*3 = 27 numbers
This is close enough, so I decided to choose 56 anyway, but since there are 2 numbers missing, could you tell me why and where?
Thank you,
Natalia
Manager
16 Jun 2015, 23:59
Bunuel wrote:
How many positive integers less than 10,000 are there in which the sum of the digits equals 5?
(A) 31
(B) 51
(C) 56
(D) 62
(E) 93
from 0 to 9, 5 ( 1 number)
from 10 to 99, we have 14,23,32,41,50 (5 numbers)
from 100 to 999, we have 104,113. 122,131, 140, 203,212,221, 230,302, 311,320,401,410,500 (15 numbers)
from 1000 to 9999, we have 1004, 1040, 1103, 1112, 1121, 1130, 1202, 1211, 1220, 1301, 1310, 1400, 2003, 2012, 2021, 2030, 2102, 2111, 2120, 2201, 2210, 2300, 3002, 3011, 3020, 3101, 3110, 3200, 4001, 4010, 4100, 5000 (32 numbers)
So we have only 53 numbers.
Can anyone tell the numbers which I miss?
Intern
19 May 2016, 00:21
As it says integers less than 10,000, why are only four-digit numbers considered? Why not three-digit, two-digit and single-digit integers?
Math Expert
19 May 2016, 03:34
shaktirdas19 wrote:
As it says integers less than 10,000, why are only four-digit numbers considered? Why not three-digit, two-digit and single-digit integers?
SVP
16 Apr 2017, 14:47
Bunuel wrote:
Ramsay wrote:
Sorry guys,
Could someone please explain the following:
"There are 8C3 ways to determine where to place the separators"
I'm not familiar with this shortcut/approach.
Ta
Consider this: we have 5 $$d$$'s and 3 separators $$|$$, like: $$ddddd|||$$. How many permutations (arrangements) of these symbols are possible? Total of 8 symbols (5+3=8), out of which 5 $$d$$'s and 3 $$|$$'s are identical, so $$\frac{8!}{5!3!}=56$$.
With these permutations we'll get combinations like: $$|dd|d|dd$$ this would be 3 digit number 212 OR $$|||ddddd$$ this would be single digit number 5 (smallest number less than 10,000 in which sum of digits equals 5) OR $$ddddd|||$$ this would be 4 digit number 5,000 (largest number less than 10,000 in which sum of digits equals 5)...
Basically this arrangements will give us all numbers less than 10,000 in which sum of the digits (sum of 5 d's=5) equals 5.
Hence the answer is $$\frac{8!}{5!3!}=56$$.
This can be done with direct formula as well:
The total number of ways of dividing n identical items (5 d's in our case) among r persons or objects (4 digit places in our case), each one of whom can receive 0, 1, 2 or more items (from zero to 5 in our case) is $${n+r-1}_C_{r-1}$$.
In our case we'll get: $${n+r-1}_C_{r-1}={5+4-1}_C_{4-1}={8}C3=\frac{8!}{5!3!}=56$$
Hello, I completely understand the formula, but what I still do not understand is why there are only 5 numbers? It should be 6 if 0 is included.
Intern
21 Jul 2017, 10:10
X1 + X2 +X3 +X4 =5
The number of solutions of this equation for X1 X2, X3 and X4>=0 : n+r-1(C)r-1
Here r=4 and n=5
Hence solution: 8C3
Intern
06 Sep 2017, 22:41
walker wrote:
there is a shortcut. For the problem, 4 digits are equally important in 0000-9999 set and it is impossible to build a number using only one digit (like 11111) So, answer has to be divisible by 4. Only 56 works.
Hi walker,
Intern
04 Oct 2017, 05:30
Bunuel wrote:
zaarathelab wrote:
How many positive integers less than 10,000 are there in which the sum of the digits equals 5?
A) 31
B) 51
C) 56
D) 62
E) 93
Consider this: we have 5 $$d$$'s and 3 separators $$|$$, like: $$ddddd|||$$. How many permutations (arrangements) of these symbols are possible? Total of 8 symbols (5+3=8), out of which 5 $$d$$'s and 3 $$|$$'s are identical, so $$\frac{8!}{5!3!}=56$$.
With these permutations we'll get combinations like: $$|dd|d|dd$$ this would be 3 digit number 212 OR $$|||ddddd$$ this would be single digit number 5 (smallest number less than 10,000 in which sum of digits equals 5) OR $$ddddd|||$$ this would be 4 digit number 5,000 (largest number less than 10,000 in which sum of digits equals 5)...
Basically this arrangements will give us all numbers less than 10,000 in which sum of the digits (sum of 5 d's=5) equals 5.
Hence the answer is $$\frac{8!}{5!3!}=56$$.
This can be done with direct formula as well:
The total number of ways of dividing n identical items (5 d's in our case) among r persons or objects (4 digit places in our case), each one of whom can receive 0, 1, 2 or more items (from zero to 5 in our case) is $${n+r-1}_C_{r-1}$$.
In our case we'll get: $${n+r-1}_C_{r-1}={5+4-1}_C_{4-1}={8}C3=\frac{8!}{5!3!}=56$$
Hi Bunuel,
I just want to make sure that I understood the concept. Let us assume that the question stem asks for a sum of 4 instead of 5.
Will the answer be: XXXXIII i.e. 7!/(3!x4!)?
If the question asks for a sum of five for numbers below 20,000 will the answer be 9!/(5!x4!)?
if the question asks for a sum of four for numbers below 20,000 will the answer be : XXXX0IIII i.e. 9!/(4!x4!)?
Another thing: is there a formula if you want to distribute n different objects among k people? (I could count the cases when n and k are small, for example n=3 and k=2, but I was wondering if there is a general formula for that.)
Intern
16 Apr 2018, 08:32
I don't like the sticks method. It is not intuitive at all, and there is no way I would think of it in the exam. For me, the usual way is better.
Sum of digits --> 5 and Less than 10,000.
(0,0,0,5) - 4!/3! - 4
(0,0,1,4) - 4!/2! - 12
(0,0,2,3) - 4!/2! - 12
(0,1,1,3) - 4!/2! - 12
(0,1,2,2) - 4!/2! - 12
(1,1,1,2) - 4!/3! - 4
Isn't this simple enough? And it can be extrapolated easily to any question of this sort, no?
Bunuel wrote:
How many positive integers less than 10,000 are there in which the sum of the digits equals 5?
(A) 31
(B) 51
(C) 56
(D) 62
(E) 93
Intern
21 May 2018, 00:36
Bunuel wrote:
zaarathelab wrote:
How many positive integers less than 10,000 are there in which the sum of the digits equals 5?
A) 31
B) 51
C) 56
D) 62
E) 93
Consider this: we have 5 $$d$$'s and 3 separators $$|$$, like: $$ddddd|||$$. How many permutations (arrangements) of these symbols are possible? Total of 8 symbols (5+3=8), out of which 5 $$d$$'s and 3 $$|$$'s are identical, so $$\frac{8!}{5!3!}=56$$.
With these permutations we'll get combinations like: $$|dd|d|dd$$ this would be 3 digit number 212 OR $$|||ddddd$$ this would be single digit number 5 (smallest number less than 10,000 in which sum of digits equals 5) OR $$ddddd|||$$ this would be 4 digit number 5,000 (largest number less than 10,000 in which sum of digits equals 5)...
Basically this arrangements will give us all numbers less than 10,000 in which sum of the digits (sum of 5 d's=5) equals 5.
Hence the answer is $$\frac{8!}{5!3!}=56$$.
This can be done with direct formula as well:
The total number of ways of dividing n identical items (5 d's in our case) among r persons or objects (4 digit places in our case), each one of whom can receive 0, 1, 2 or more items (from zero to 5 in our case) is $${n+r-1}_C_{r-1}$$.
In our case we'll get: $${n+r-1}_C_{r-1}={5+4-1}_C_{4-1}={8}C3=\frac{8!}{5!3!}=56$$
Bunuel, this stuff bounced over my head. What is this digits-and-separators concept? Kindly enlighten.
How did you get to 5 d's and 3 separators? Can it be 7 d's or 8 d's and 9 separators or something like that...?
# Error-Undefined control sequence
I ran into a problem recently, but I do not know how to solve it.
Error:
Undefined control sequence.
l.26 \eqalignno {P^{-1}&=P_{1}^{-1}+P_{2}^{-1}\cr P^{-1}\mathhat{x}&=P_{1}^...

My code:

```
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\eqalignno{P^{-1}&=P_{1}^{-1}+P_{2}^{-1}\cr
P^{-1}\mathhat{x}&=P_{1}^{-1}\mathhat{x}_{1}+P_{2}^{-1}\mathhat{x}_{2}.&\hbox{(7)}}
\end{document}
```

## 1 Answer

Is this what you want?

```
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
\begin{align}
\begin{aligned}
P^{-1} &=P_{1}^{-1}+P_{2}^{-1}\\
P^{-1}\hat{x}&=P_{1}^{-1}\hat{x}_{1}+P_{2}^{-1}\hat{x}_{2}%.(7)
\end{aligned}
\end{align}
\end{document}
```

Actually, I should admit that I don't understand your code (\eqalignno and \mathhat). And in LaTeX, don't use plain-TeX constructions like \eqalignno.
• \eqalignno is macro from plainTeX, but \mathhat is unknown. The construction looks like plain TeX and the first three lines and the last line of OP's code is only something like curiosity. – wipet Apr 8 '15 at 13:17
• Yes, thank you, you solved my problem. It seems I mixed plain TeX code into a LaTeX environment; see this: tex.stackexchange.com/a/55694/44227. – wayne Apr 8 '15 at 13:17
• @wipet You are right; actually, I copied this code from the TeX source of an IEEE paper (the full-text HTML version) in order to use it in my paper. – wayne Apr 8 '15 at 13:25
Christian Elsholtz and I have recently finished our joint paper “Counting the number of solutions to the Erdös-Straus equation on unit fractions“, submitted to the Journal of the Australian Mathematical Society. This supersedes my previous paper on the subject, by obtaining stronger and more general results. (The paper is currently in the process of being resubmitted to the arXiv, and should appear at this link within a few days.)
As with the previous paper, the main object of study is the number ${f(n)}$ of solutions to the Diophantine equation
$\displaystyle \frac{4}{n} = \frac{1}{x} + \frac{1}{y} + \frac{1}{z} \ \ \ \ \ (1)$
with ${x,y,z}$ positive integers. The Erdös-Straus conjecture asserts that ${f(n)>0}$ for all ${n>1}$. Since ${f(nm) \geq f(n)}$ for all positive integers ${n,m}$, it suffices to show that ${f(p)>0}$ for all primes ${p}$.
We single out two special types of solutions: Type I solutions, in which ${x}$ is divisible by ${n}$ and ${y,z}$ are coprime to ${n}$, and Type II solutions, in which ${x}$ is coprime to ${n}$ and ${y,z}$ are divisible by ${n}$. Let ${f_I(n), f_{II}(n)}$ denote the number of Type I and Type II solutions respectively. For any ${n}$, one has
$\displaystyle f(n) \geq 3 f_I(n) + 3 f_{II}(n),$
with equality when ${n}$ is an odd prime ${p}$. Thus, to prove the Erdös-Straus conjecture, it suffices to show that at least one of ${f_I(p)}$, ${f_{II}(p)}$ is positive whenever ${p}$ is an odd prime.
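To make these definitions concrete, here is a quick brute-force computation (a sketch in Python; it counts unordered solutions ${x \leq y \leq z}$, so the totals differ from ${f}$, ${f_I}$, ${f_{II}}$ above by permutation bookkeeping, but the Type I/Type II dichotomy for odd primes is visible):

```python
from fractions import Fraction
from math import ceil

def unit_fraction_solutions(n):
    """Unordered solutions x <= y <= z of 4/n = 1/x + 1/y + 1/z."""
    sols = []
    for x in range(n // 4 + 1, 3 * n // 4 + 1):      # n/4 < x <= 3n/4
        r1 = Fraction(4, n) - Fraction(1, x)
        for y in range(max(x, ceil(1 / r1)), int(2 / r1) + 1):
            r2 = r1 - Fraction(1, y)
            if r2 > 0 and r2.numerator == 1:          # then z = 1/r2 is an integer
                sols.append((x, y, r2.denominator))
    return sols

for p in [5, 7, 11, 13]:
    sols = unit_fraction_solutions(p)
    n_div = [sum(v % p == 0 for v in s) for s in sols]
    # for an odd prime p, every solution has exactly one (Type I) or
    # exactly two (Type II) entries divisible by p:
    print(p, len(sols), n_div.count(1), n_div.count(2))
```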
Our first main results are the asymptotics
$\displaystyle N \log^3 N \ll \sum_{n \leq N} f_I(n) \ll N \log^3 N$
$\displaystyle N \log^3 N \ll \sum_{n \leq N} f_{II}(n) \ll N \log^3 N$
$\displaystyle N \log^2 N \ll \sum_{p \leq N} f_I(p) \ll N \log^2 N \log\log N$
$\displaystyle N \log^2 N \ll \sum_{p \leq N} f_{II}(p) \ll N \log^2 N.$
This improves upon the results in the previous paper, which only established
$\displaystyle N \log^2 N \ll \sum_{p \leq N} f_I(p) \ll N \exp(O( \frac{\log N}{\log\log N} ))$
and
$\displaystyle N \log^2 N \ll \sum_{p \leq N} f_{II}(p) \ll N \log^2 N \log \log N.$
The double logarithmic factor in the upper bound for ${\sum_{p \leq N} f_I(p)}$ is artificial (arising from the inefficiency in the Brun-Titchmarsh inequality on very short progressions) but we do not know how to remove it.
The methods are similar to those in the previous paper (which were also independently discovered in unpublished work of Elsholtz and Heath-Brown), but with the additional input of the Erdös divisor bound on expressions of the form ${\sum_{n \leq N} \tau(P(n))}$ for polynomials ${P}$, discussed in this recent blog post. (Actually, we need to tighten Erdös’ bound somewhat, to obtain some uniformity in the bounds even as the coefficients of ${P}$ become large, but this turns out to be achievable by going through the original arguments of Erdös more carefully.)
We also note an observation of Heath-Brown, that in our notation gives the lower bound
$\displaystyle N \log^6 N \ll \sum_{n \leq N} f(n);$
thus, we see that for typical ${n}$, most solutions to the Erdös-Straus equation are not of Type I or Type II, in contrast to the case when ${n}$ is prime.
We also have a number other new results. We find a way to systematically unify all the previously known parameterisations of solutions to the Erdös-Straus equation, by lifting the Cayley-type surface ${\{ (x,y,z): \frac{4}{n} = \frac{1}{x} + \frac{1}{y} + \frac{1}{z} \}}$ to a certain three-dimensional variety in six-dimensional affine space, in such a way that integer points in the former arise from integer points in the latter. Each of the previously known characterisations of solutions then corresponds to a different choice of coordinates on this variety. (This point of view was also adopted in a paper of Heath-Brown, who interprets this lifted variety as the universal torsor of the Cayley surface.) By optimising between these parameterisations and exploiting the divisor bound, we obtain some bounds on the worst-case behaviour of ${f_I(n)}$ and ${f_{II}(n)}$, namely
$\displaystyle f_I(n) \ll n^{3/5 + O(1/\log \log n)}$
and
$\displaystyle f_{II}(n) \ll n^{2/5 + O(1/\log \log n)},$
which should be compared to a recent previous bound ${f(n) \ll n^{2/3 + O(1/\log \log n)}}$ of Browning and Elsholtz. In the other direction, we show that ${f(n) \gg n^{(3+o(1))/\log\log n}}$ for infinitely many ${n}$, and ${f(p) \gg \log^{\frac{\log 3}{2}-o(1)} p}$ for almost all primes ${p}$. Here, the main tools are some bounds for the representation of a rational as a sum of two unit fractions in the above-mentioned work of Browning and Elsholtz, and also the Turán-Kubilius inequality.
We also completely classify all the congruence classes that can be solved by polynomials, completing the partial list discussed in the previous post. Specifically, the Erdös-Straus conjecture is true for ${n}$ whenever one of the following congruence-type conditions is satisfied:
1. ${n = -f \mod 4ad}$, where ${a,d,f \in {\bf N}}$ are such that ${f|4a^2 d+1}$.
2. ${n = -f \mod 4ac}$ and ${n = -\frac{c}{a} \mod f}$, where ${a,c,f \in {\bf N}}$ are such that ${(4ac,f)=1}$.
3. ${n = -f \mod 4cd}$ and ${n^2 = -4c^2d \mod f}$, where ${c,d,f \in {\bf N}}$ are such that ${(4cd,f)=1}$.
4. ${n = -\frac{1}{e} \mod 4ab}$ or ${n = -e \mod 4ab}$, where ${a,b,e \in {\bf N}}$ are such that ${e|a+b}$ and ${(e,4ab)=1}$.
5. ${n = -4a^2d \mod f}$, where ${a,d,f \in {\bf N}}$ are such that ${4ad|f+1}$.
6. ${n = -4a^2d-e \mod 4ade}$, where ${a,d,e \in {\bf N}}$ are such that ${(4ad,e)=1}$.
In principle, this suggests a way to extend the existing verification of the Erdös-Straus conjecture beyond the current range of ${10^{14}}$ by collecting all congruences to small moduli (e.g. up to ${10^6}$), and then using this to sieve out the primes up to a given size.
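As a toy version of this sieve, the following sketch (Python with SymPy; the parameter ranges and the prime bound are arbitrary choices of mine) collects the residue classes produced by condition 1 alone and checks how many small odd primes they already cover:

```python
from sympy import divisors, primerange

# Residue classes given by condition 1: n = -f (mod 4ad) whenever f | 4a^2 d + 1.
classes = set()
for a in range(1, 25):
    for d in range(1, 25):
        m = 4 * a * d
        for f in divisors(4 * a * a * d + 1):
            classes.add((-f % m, m))

def covered(n):
    return any(n % m == r for r, m in classes)

odd_primes = list(primerange(3, 10_000))
hits = sum(covered(p) for p in odd_primes)
print(hits, "of", len(odd_primes), "odd primes below 10,000 covered by condition 1 alone")
```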
Finally, we begin a study of the more general equation
$\displaystyle \frac{m}{n} = \frac{1}{n_1}+\ldots+\frac{1}{n_k} \ \ \ \ \ (2)$
where ${m > k \geq 3}$ are fixed. We can obtain a partial analogue of our main bounds for the ${m=4,k=3}$ case, namely that
$\displaystyle \sum_{n \leq N} f_{m,k,II}(n) \gg N \log^{2^{k-1}-1} N$
and
$\displaystyle \sum_{p \leq N} f_{m,k,II}(p) \gg N \log^{2^{k-1}-2} N / \log\log N$
were ${f_{m,k,II}(n)}$ denotes the number of solutions to (2) which are of “Type II” in the sense that ${n_2,\ldots,n_k}$ are all divisible by ${n}$. However, we do not believe our bounds to be sharp in the large ${k}$ regime, though it does show that the expected number of solutions to (2) should grow rapidly in ${k}$. |
# Confidence interval and standard error calculator
Confidence intervals provide an essential understanding of how much faith we can have in our sample estimates, from any sample size, from 2 to 2 million; they provide the most likely range for the unknown population value. At the same time they can be perplexing and cumbersome to compute by hand.

To explain how confidence intervals are constructed, we can work backwards and begin by assuming characteristics of the population. Consider the probability that a sample mean computed in a random sample is within 23.52 units of a population mean of 90, when the standard deviation of the sampling distribution is 12: the limits are computed by adding and subtracting 1.96 standard deviations to/from the mean of 90, as follows: 90 - (1.96)(12) = 66.48 and 90 + (1.96)(12) = 113.52. In practice the population parameter (the population mean $\mu$) is unknown, and the logic runs in reverse: the interval is centered on the sample mean.

Using 2 as a multiplier works for 95% confidence levels for most sample sizes. More precisely, the values of t to be used in a confidence interval can be looked up in a table of the t distribution; relevant details are available as appendices of many statistical textbooks, or using standard computer spreadsheet packages. The first column of such a table, df, stands for degrees of freedom; for confidence intervals on the mean, df = N - 1, where N is the sample size. For example, for a confidence level of 95% we know that $\alpha = 1 - 0.95 = 0.05$, and with a sample size of n = 20 we get df = 20 - 1 = 19.

| df | t (95%) | t (99%) |
|----|---------|---------|
| 2 | 4.303 | 9.925 |
| 3 | 3.182 | 5.841 |
| 4 | 2.776 | 4.604 |
| 5 | 2.571 | 4.032 |
| 8 | 2.306 | 3.355 |
| 10 | 2.228 | 3.169 |
| 20 | 2.086 | 2.845 |
| 50 | 2.009 | 2.678 |
| 100 | 1.984 | 2.626 |

When the sample size is large, say 100 or above, the t distribution is very similar to the standard normal distribution. For smaller samples it has relatively more area in its tails, so you have to extend farther from the mean to contain a given proportion of the area; with N = 5, for instance, the standard error of the mean would be multiplied by 2.78 rather than 1.96.

If you look closely at the formula for a confidence interval with known $\sigma$, you will notice that you need to know the standard deviation in order to estimate the mean. This may sound unrealistic, and it is; however, computing a confidence interval when $\sigma$ is known is easier than when it has to be estimated, and serves a pedagogical purpose. The formula for a confidence interval for the population mean $\mu$ when the population standard deviation is not known is

$$CI = \bar x \pm t_{\alpha/2,\,n-1} \times \frac{s}{\sqrt{n}}$$

The only differences are that $s_M$ and t rather than $\sigma_M$ and z are used.

Continuous data are metrics like rating scales, task-time, revenue, weight, height or temperature. Discrete binary data take only two values, pass/fail, yes/no, agree/disagree, and are coded with a 1 (pass) or 0 (fail); using a dummy variable you can code yes = 1 and no = 0. The interpretation is the same for both: a lower confidence limit of 13%, say, means we're pretty sure that at least 13% of customers have security as a major reason why they don't pay their credit card bills using mobile apps.

Example: fifty users rate their satisfaction, with an average response of 6 and a sample standard deviation of 1.2. Compute the standard error by dividing the standard deviation by the square root of the sample size: 1.2/√50 = .17. Compute the margin of error by multiplying the standard error by 2: .17 × 2 = .34. Our best estimate of the entire customer population's average satisfaction is therefore between 5.6 and 6.3.
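To double-check such hand computations in code, here is a minimal sketch (Python with SciPy, not part of the original page; it reproduces the example above using the exact t multiplier instead of the rough 2):

```python
import numpy as np
from scipy import stats

n, mean, sd = 50, 6.0, 1.2              # sample size, sample mean, sample std. dev.
se = sd / np.sqrt(n)                     # standard error: 1.2/sqrt(50) ~ 0.17

t_mult = stats.t.ppf(0.975, df=n - 1)    # exact 95% two-tailed multiplier ~ 2.01
print(mean - t_mult * se, mean + t_mult * se)   # ~ 5.66 to 6.34
```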
Confidence intervals can also be compared across groups. As an example, consider data presented as follows:

| Group | Sample size | Mean | 95% CI |
|-------|-------------|------|--------|
| Experimental intervention | 25 | 32.1 | (30.0, 34.2) |
| Control intervention | 22 | 28.3 | (26.5, 30.1) |

If the sample size is large (say bigger than 100 in each group), the 95% confidence interval is 3.92 standard errors wide (3.92 = 2 × 1.96), and the standard deviation for each group is obtained by dividing the length of the confidence interval by 3.92 and then multiplying by the square root of the sample size. For smaller samples the divisor 3.92 is replaced by 2 × t; here, with 25 subjects, df = 24 and t = 2.0639, the divisor becomes 2 × 2.0639 = 4.128, and the standard deviation for the experimental group is √25 × (34.2 - 30.0)/4.128 = 5.09. Calculations for the control group are performed in a similar way.

The same machinery applies to difference scores. Specifically, suppose we compute a confidence interval on a mean difference score of 16.362 with standard error 1.090 and t = 2.013: the lower limit is 16.362 - (2.013)(1.090) = 14.17 and the upper limit is 16.362 + (2.013)(1.090) = 18.56, so the interference effect (difference) for the population is likely to lie between 14.17 and 18.56.

As a final example, assume that the following five numbers are sampled from a normal distribution: 2, 3, 5, 6, and 9, and that the standard deviation is not known. The sample mean is 5 and the sample standard deviation is s = 2.74, so the standard error is 2.74/√5 = 1.22; with df = 4 the 95% value of t is 2.776, and the interval is 5 ± (2.776)(1.22), i.e. roughly from 1.6 to 8.4.

Comments

• John: I was hoping that you could expand on why we use 2 as the multiplier (and I understand that you suggest using something greater than 2 with smaller sample sizes).
• September 8, 2014 | Jeff Sauro wrote: John, yes, you're right. And yes, you'd want to use the 2-tailed t-distribution for any sized sample. When you need to be sure you've computed an accurate interval, use the online calculators.
## College Algebra 7th Edition
We can obtain the graph $y=|x-1|$ by starting with the graph $y=|x|$ and shifting it right 1 unit. The result looks like graph IV.
It's a great question! Disappointingly, I think the answer to (2) is No :
The only restriction on a 'good' division into "symmetric" vs. "symplectic" conjugacy classes that I can see is that it should be intrinsic, depending only on $G$ and the class up to isomorphism. (You don't just want to split the self-dual classes randomly, right?) This means that the division must be preserved by all outer automorphisms of $G$, and this is what I'll use to construct a counterexample. Let me know if I got this wrong.
The group
My $G$ is $C_{11}\rtimes (C_4\times C_2\times C_2)$, with $C_2\times C_2\times C_2$ acting trivially on $C_{11}=\langle x\rangle$, and the generator of $C_4$ acting by $x\mapsto x^{-1}$. In Magma, this is G:=SmallGroup(176,35), and it has a huge group of outer automorphisms $C_5\times((C_2\times C_2\times C_2)\rtimes S_4)$, Magma's OuterFPGroup(AutomorphismGroup(G)). The reason for $C_5$ is that $x$ is only conjugate to $x,x^{-1}$ in $C_{11}\triangleleft G$, but there are 5 pairs of possible generators like that in $C_{11}$, indistinguishable from each other; the other factor of $Out\ G$ is $Aut(C_2\times C_2\times C_4)$, and all of these guys commute with the action.
The representations
The group has 28 orthogonal, 20 symplectic and 8 non-self-dual representations, according to Magma.
The conjugacy classes
There are 1+7+8+5+35=56 conjugacy classes, of elements of order 1,2,4,11,22 respectively. The elements of order 4 are (clearly) not conjugate to their inverses, so these 8 classes account for the 8 non-self-dual representations. We are interested in splitting the other 48 classes into two groups, 28 'orthogonal' and 20 'symplectic'.
The catch
The problem is that the way $Out\ G$ acts on the 35 classes of elements of order 22, it has two orbits according to Magma - one with 30 classes and one with 5. (I think I can see that these numbers must be multiples of 5 without Magma's help, but I don't see the full splitting at the moment; I can insert the Magma code if you guys want it.) Anyway, if I am correct, these 30 classes are indistinguishable from one another, so they must all be either 'orthogonal' or 'symplectic'. So a canonical splitting into 28 and 20 cannot exist.
Edit: However, as Jack Schmidt points out (see comment below), it is possible to predict the number of symplectic representations for this group!
# Using complex exponentials to prove 1+acostheta
captainemeric
## Homework Statement
Use complex exponentials to prove $$1 + a\cos\theta + a^2\cos 2\theta + a^3\cos 3\theta + \dots = \frac{1 - a\cos\theta}{1 - 2a\cos\theta + a^2}$$
## Homework Equations
Euler's: $e^{i\theta} + e^{-i\theta} = 2\cos\theta$
## The Attempt at a Solution
$a^n\cos(n\theta) = a^n\,\frac{e^{in\theta} + e^{-in\theta}}{2}$
from there I got the series with terms
$\frac{(a e^{i\theta})^n + (a e^{-i\theta})^n}{2}$
now from here I think I set up the summation formula but this is where I get stuck. Any help is greatly appreciated.
voko
Is |a| < 1?
captainemeric
Yes, a is a real constant and |a| < 1. sorry about that
voko
$$a^n \cos n\theta = a^n\frac {e^{in\theta} + e^{-in\theta}} {2} = \frac {a^ne^{in\theta} + a^ne^{-in\theta}} {2} = \frac {p^n + q^n} {2} \\ p = ae^{i\theta}, \ |p| < 1 \\ q = ae^{-i\theta}, \ |q| < 1$$
What is the sum of $p^n$ and $q^n$?
captainemeric
That makes sense. That will then give me a real and an imaginary result of which I take the real I believe. Also, I apologize for the typo on the first post.
voko
Well, you can take the real part, but the sum is real anyway. |
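For reference, here is the step the thread leaves implicit (my addition, assuming $|a| < 1$ so both geometric series converge): summing $\sum_{n\ge 0} p^n = \frac{1}{1-p}$ with $p = ae^{i\theta}$ and taking the real part gives
$$\sum_{n=0}^{\infty} a^n \cos n\theta = \operatorname{Re} \sum_{n=0}^{\infty} \left(ae^{i\theta}\right)^n = \operatorname{Re} \frac{1}{1 - ae^{i\theta}} = \frac{1 - a\cos\theta}{1 - 2a\cos\theta + a^2},$$
where the last step multiplies the numerator and denominator by $1 - ae^{-i\theta}$. This is exactly the identity to be proved.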
# SUSY in the Sky with Gravitons
Jakobsen, G. U., Mogull, G., Plefka, J., & Steinhoff, J. (2022). SUSY in the Sky with Gravitons. Journal of High Energy Physics, 2022(01): 027. doi:10.1007/JHEP01(2022)027.
### Basic
Genre: Journal Article
### Files
2109.04465.pdf (Preprint), 690KB; Visibility: Public; MIME-Type / Checksum: application/pdf / [MD5]
Jakobsen2022_Article_SUSYInTheSkyWithGravitons.pdf (Publisher version), 696KB; Description: Open Access; Visibility: Public; MIME-Type / Checksum: application/pdf / [MD5]
### Creators
Creators:
Jakobsen, Gustav Uhre, Author
Mogull, Gustav (1), Author
Plefka, Jan, Author
Steinhoff, Jan (1), Author
Affiliations:
(1) Astrophysical and Cosmological Relativity, AEI-Golm, MPI for Gravitational Physics, Max Planck Society, ou_1933290
### Content
Free keywords: High Energy Physics - Theory, hep-th, General Relativity and Quantum Cosmology, gr-qc
Abstract: Picture yourself in the wave zone of a gravitational scattering event of two massive, spinning compact bodies (black holes, neutron stars or stars). We show that this system of genuine astrophysical interest enjoys a hidden $\mathcal{N}=2$ supersymmetry, at least to the order of spin-squared (quadrupole) interactions in arbitrary $D$ spacetime dimensions. Using the ${\mathcal N}=2$ supersymmetric worldline action, augmented by finite-size corrections for the non-Kerr black hole case, we build a quadratic-in-spin extension to the worldline quantum field theory (WQFT) formalism introduced in our previous work, and calculate the two bodies' deflection and spin kick to sub-leading order in the post-Minkowskian expansion in Newton's constant $G$. For spins aligned to the normal vector of the scattering plane we also obtain the scattering angle. All $D$-dimensional observables are derived from an eikonal phase given as the free energy of the WQFT, that is invariant under the $\mathcal{N}=2$ supersymmetry transformations.
### Details
Language(s): English
Dates: 2021-09-09, 2022
Publication Status: Published in print
Pages: 41 pages including references
Publishing info: -
Rev. Type: -
Identifiers: arXiv: 2109.04465
DOI: 10.1007/JHEP01(2022)027
Degree: -
### Source 1
Title: Journal of High Energy Physics
Source Genre: Journal
Creator(s):
Affiliations:
Publ. Info: -
Pages: -
Volume / Issue: 2022 (01)
Sequence Number: 027
Start / End Page: -
Identifier: -
# In the diagram above, O is the center of the circle and ACDE is a square
Math Expert
Joined: 02 Sep 2009
Posts: 59236
In the diagram above, O is the center of the circle and ACDE is a square
15 Oct 2019, 23:41
Difficulty: 25% (medium)
Question Stats: 77% (01:19) correct, 23% (02:03) wrong, based on 26 sessions
In the diagram above, O is the center of the circle and ACDE is a square. What is the area of the square?
(1) the circle has a radius of 2
(2) angle ACB = 90°
Attachment:
GMAT_DS_Magoosh_161.png
Manager
Joined: 05 Oct 2014
Posts: 106
Location: India
Concentration: General Management, Strategy
GMAT 1: 580 Q41 V28
GPA: 3.8
WE: Project Management (Energy and Utilities)
Re: In the diagram above, O is the center of the circle and ACDE is a square
15 Oct 2019, 23:54
In the diagram above, O is the center of the circle and ACDE is a square. What is the area of the square?
(1) the circle has a radius of 2
(2) angle ACB = 90°
Answer: The correct answer should be (A) -> Statement (1) is sufficient.
- AB is the diameter, so angle ACB must be 90°.
- OA = OB = 2, thus the diameter AB = 4; with BC = 3 (from the diagram) and angle ACB = 90°, we can find AC: AC^2 = AB^2 - BC^2 = 16 - 9 = 7.
- All sides of a square are equal, hence the area is a^2 = AC^2 = 7 (a = length of each side).
VP
Joined: 20 Jul 2017
Posts: 1091
Location: India
Concentration: Entrepreneurship, Marketing
WE: Education (Education)
Re: In the diagram above, O is the center of the circle and ACDE is a square
16 Oct 2019, 01:20
Bunuel wrote:
In the diagram above, O is the center of the circle and ACDE is a square. What is the area of the square?
(1) the circle has a radius of 2
(2) angle ACB = 90°
Attachment:
GMAT_DS_Magoosh_161.png
Angle in a semi-circle is 90 deg
--> Triangle ACB is right angled
--> $$AB^2 = AC^2 + BC^2$$
--> $$AC^2 = AB^2 - BC^2 = AB^2 - 9$$ (since BC = 3 in the diagram)
--> $$AC = \sqrt{AB^2 - 9}$$
(1) the circle has a radius of 2
--> $$AB = 4$$
--> $$AC = \sqrt{4^2 - 9} = \sqrt{7}$$
--> Area of square = $$AC^2 = 7$$ --> Sufficient
(2) angle ACB = 90°
This is a property of any triangle inscribed in a semi-circle, so it adds no new information
--> Nothing can be said about the value of side AC --> Insufficient
IMO Option A
# Thread: if and else statements together with propositional logic
1. ## if and else statements together with propositional logic
Hi,
I am looking at this question:
Consider the following program:
if X then RED else if Y then BLUE else if Z then YELLOW else GREEN.
Under which of the following conditions is GREEN executed?
With which of these propositions will GREEN (and only GREEN) be executed?
1. ¬X ∧ ¬Y ∧ ¬Z
2. ¬(X ∨ Y ∨ Z)
3. ¬X ∨ ¬Y ∨ ¬Z
4. ¬(X ∧ Y ∧ Z)
If I write this out in code I get:
Code:
if X then
RED
elseif Y then
BLUE
elseif Z then
YELLOW
else
GREEN
end if
So with the first one, ¬X ∧ ¬Y ∧ ¬Z
means "NOT X AND NOT Y AND NOT Z".
So the only conclusion can be: NOT RED, NOT BLUE, NOT YELLOW, but GREEN.
This is correct, right?
But the second one confuses me with the brackets ...
What are the rules regarding that?
2. ## Re: if and else statements together with propositional logic
But the second one confuses me with the brackets ...
What are the rules regarding that?
With respect to brackets, ¬(X ∨ Y ∨ Z) is similar to -(a + b + c) on integers: you first add a, b and c and then find the opposite of that sum. Similarly, in ¬(X ∨ Y ∨ Z), you take the disjunction of X, Y and Z (strictly speaking, you first find X ∨ Y and then take a disjunction with Z), and then apply the negation.
Due to De Morgan's laws, the first two formulas are equivalent, as are the last two. You are right that the first (and hence the second) formula is correct. To show that the last two formulas are incorrect, find some truth values of X, Y and Z such that ¬X ∨ ¬Y ∨ ¬Z is true, but GREEN is not executed.
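Spelled out step by step (my addition), the first equivalence applies De Morgan's law ¬(P ∨ Q) ≡ ¬P ∧ ¬Q twice:
¬(X ∨ Y ∨ Z) ≡ ¬((X ∨ Y) ∨ Z) ≡ ¬(X ∨ Y) ∧ ¬Z ≡ ¬X ∧ ¬Y ∧ ¬Z.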
3. ## Re: if and else statements together with propositional logic
So this is what you are saying:
¬X ∧ ¬Y ∧ ¬Z = GREEN EXECUTED
¬(X ∨ Y ∨ Z) = GREEN EXECUTED
¬X ∨ ¬Y ∨ ¬Z = GREEN NOT EXECUTED
¬(X ∧ Y ∧ Z) = GREEN NOT EXECUTED
And that the first 2 are actually the same due to "De Morgan's laws"
So ¬X ∧ ¬Y ∧ ¬Z becomes ¬X ∧ ¬Y ∧ ¬Z when you remove the brackets of ¬(X ∨ Y ∨ Z) ...
Is that correct?
4. ## Re: if and else statements together with propositional logic
Originally Posted by iwan1981
So this is what you are saying:
¬X ∧ ¬Y ∧ ¬Z = GREEN EXECUTED
¬(X ∨ Y ∨ Z) = GREEN EXECUTED
¬X ∨ ¬Y ∨ ¬Z = GREEN NOT EXECUTED
¬(X ∧ Y ∧ Z) = GREEN NOT EXECUTED
Strictly speaking, I did not say that if ¬X ∨ ¬Y ∨ ¬Z is true, then GREEN is not executed. Both ¬X ∧ ¬Y ∧ ¬Z and ¬X ∨ ¬Y ∨ ¬Z can be true. In fact, if the first of these formulas is true, i.e., X = Y = Z = F, then so is the second. However, it can be that the second formula is true, but GREEN is not executed.
And that the first 2 are actually the same due to "De Morgan's laws"
Yes.
So ¬X ∧ ¬Y ∧ ¬Z becomes ¬X ∧ ¬Y ∧ ¬Z when you remove the brackets of ¬(X ∨ Y ∨ Z) ...
¬(X ∨ Y ∨ Z) becomes ¬X ∧ ¬Y ∧ ¬Z, yes.
5. ## Re: if and else statements together with propositional logic
it can be that the second formula is true, but GREEN is not executed.
This part confuses me a little bit ...
Because with that said ... this means that ¬(X ∨ Y ∨ Z) can be either GREEN EXECUTED or GREEN NOT EXECUTED ...
And that this is wrong then:
¬X ∧ ¬Y ∧ ¬Z = GREEN EXECUTED
¬(X ∨ Y ∨ Z) = GREEN EXECUTED
¬X ∨ ¬Y ∨ ¬Z = GREEN NOT EXECUTED
¬(X ∧ Y ∧ Z) = GREEN NOT EXECUTED
6. ## Re: if and else statements together with propositional logic
Originally Posted by emakarov
it can be that the second formula is true, but GREEN is not executed.
Originally Posted by iwan1981
This part confuses me a little bit ...
Because with that said ... this means that ¬(X ∨ Y ∨ Z) can be either GREEN EXECUTED or GREEN NOT EXECUTED ...
In saying "the second formula," I referred to the immediate context in my post, not to your original post. I meant that ¬X ∨ ¬Y ∨ ¬Z can be true, but GREEN is not executed.
And that this is wrong then:
¬X ∧ ¬Y ∧ ¬Z = GREEN EXECUTED
¬(X ∨ Y ∨ Z) = GREEN EXECUTED
¬X ∨ ¬Y ∨ ¬Z = GREEN NOT EXECUTED
¬(X ∧ Y ∧ Z) = GREEN NOT EXECUTED
Yes, the last two lines are wrong in that neither the left- nor the right-hand side implies the other.
There are only 8 sets of truth values for X, Y and Z. You can go through all of them and see which formulas are true and which code is executed.
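For instance, here is a minimal Python sketch (my addition, not from the thread; the helper name runs_green is made up) that carries out this enumeration:
Code:
from itertools import product

def runs_green(x, y, z):
    # Mirrors the program: if X then RED, elif Y then BLUE,
    # elif Z then YELLOW, else GREEN
    return not (x or y or z)

for x, y, z in product([False, True], repeat=3):
    f1 = (not x) and (not y) and (not z)  # NOT X AND NOT Y AND NOT Z
    f2 = not (x or y or z)                # NOT (X OR Y OR Z)
    f3 = (not x) or (not y) or (not z)    # NOT X OR NOT Y OR NOT Z
    f4 = not (x and y and z)              # NOT (X AND Y AND Z)
    print(x, y, z, f1, f2, f3, f4, runs_green(x, y, z))
Running it shows that f1 and f2 match the GREEN column on every row, while f3 and f4 are also true on rows where GREEN is not executed (for example X = True, Y = Z = False).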
7. ## Re: if and else statements together with propositional logic
So you're actually saying that in all options GREEN can be executed ...
8. ## Re: if and else statements together with propositional logic
Originally Posted by iwan1981
So you're actually saying that in all options GREEN can be executed ...
Yes. When the first two formulas from the OP are true, GREEN has to be executed; when the last two formulas are true, GREEN may or may not be executed.
9. ## Re: if and else statements together with propositional logic
Originally Posted by emakarov
Yes. When the first two formulas from the OP are true, GREEN has to be executed; when the last two formulas are true, GREEN may or may not be executed.
What do you mean with the first 2 formulas from the OP ...
I think I have lost you :-)
Sorry :-(
10. ## Re: if and else statements together with propositional logic
"OP" stands for the "original post." It is the first post in the thread. The first two formulas in that post are ¬X ∧ ¬Y ∧ ¬Z and ¬(X ∨ Y ∨ Z) (they are equivalent).
11. ## Re: if and else statements together with propositional logic
I think I have the concept now ...
¬X ∧ ¬Y ∧ ¬Z = GREEN EXECUTED
¬(X ∨ Y ∨ Z) = GREEN EXECUTED
¬X ∨ ¬Y ∨ ¬Z = GREEN NOT EXECUTED / OR MAY BE EXECUTED
¬(X ∧ Y ∧ Z) = GREEN NOT EXECUTED / OR MAY BE EXECUTED
So the last 2 formulas are correct either way ... if we only have 2 answer options there (Executed or Not Executed)
Thanks! |
# Department of Mathematics
Seminar Calendar
for events the day of Thursday, December 8, 2016.
Questions regarding events or the calendar should be directed to Tori Corkery.
Thursday, December 8, 2016
12:30 pm in 345 Altgeld Hall (NOTE: change of location), Thursday, December 8, 2016
#### Little string theories via F-theory
###### Dave Morrison (UC Santa Barbara, Departments of Mathematics and Physics)
Abstract: Little string theories are UV complete non-local 6D theories decoupled from gravity in which there is an intrinsic string scale. I will present a systematic approach to the construction of supersymmetric little string theories via F-theory. This is joint work with Lakshya Bhardwaj, Michele Del Zotto, Jonathan Heckman, Tom Rudelius, and Cumrun Vafa.
1:00 pm in 239 Altgeld Hall, Thursday, December 8, 2016
#### IGL Fall 2016 Open House
Abstract: End of semester poster presentations by IGL project teams. Come see what research has been done in the IGL this semester!
11:00 pm in 243 Altgeld Hall, Thursday, December 8, 2016
#### Stoilow's theorem revisited
###### Rami Luisto (UCLA)
Abstract: Stoilow's theorem states that any continuous, open and light map from a planar domain to the plane is locally a holomorphic map up to a conjugation with homeomorphisms. From this local fact it follows that up to a homeomorphic change of coordinates a continuous, open and light map between two orientable topological surfaces is a holomorphic map between Riemann surfaces. In this talk we give a modern proof for the local result and, time permitting, discuss how the local result gives rise to the global version. |
## Grids of model spectra for WN stars, ready for use
### W.-R. Hamann and G. Gräfener
Universität Potsdam, Institut für Physik, Astrophysik
Grids of model atmospheres for Wolf-Rayet stars of the nitrogen sequence (WN subclass) are presented. The calculations account for the expansion of the atmosphere, non-LTE, clumping, and line blanketing from iron-group elements. Observed spectra of single Galactic WN stars can in general be reproduced consistently by this generation of models. The parameters of the presented model grids cover the whole relevant range of stellar temperatures and mass-loss rates. We point out that there is a degeneracy of parameters for very thick winds; their spectra tend to depend only on the ratio $L/{\dot M}^{4/3}$. Abundances of the calculated grids are for Galactic WN stars without hydrogen and with 20% hydrogen (by mass), respectively. Model spectra and fluxes are available via internet (http://www.astro.physik.uni-potsdam.de/PoWR.html). |
## “The Chair”: A Straussian interpretation
[Warning: spoilers follow!]
Last week Dana and I watched the full first season of The Chair, the Netflix drama that stars Sandra Oh as Ji-Yoon Kim, incoming chairwoman of the English department at the fictional Pembroke University. As the rave reviews promised, I found the show to be brilliantly written and acted. At times, The Chair made me think about that other academia-centered sitcom, The Big Bang Theory, which I freely confess I also enjoyed. But The Chair is much more highbrow (and more political), it’s about the humanities rather than STEM, and it’s mostly about academics who are older than the ones in Big Bang, both biologically and professionally.
I wouldn’t call The Chair “realistic”: the sets, stuffed with imposing bookshelves, paintings of great scholars, etc., look like how a TV producer might imagine university buildings, rather than the relatively humdrum reality. But in less than three hours, the show tackles a staggering number of issues that will be recognizable and relevant to anyone in academia: cratering enrollments, a narrow-minded cost-cutting dean, a lack of free time and a desperate search for childcare, a tenure case that turns into a retention case, a woke scandal (about which more later), a faculty revolt against Ji-Yoon culminating in a vote of no confidence, and much more. There’s also an elaborate side plot involving the actor (and real-life former literary scholar) David Duchovny, who portrays himself, being invited to lecture at Pembroke, which is not the sort of thing most academics have experience with, but which I suppose many viewers will enjoy.
The show is written at a high enough level that its stumbles are those of a daring acrobat. In the main narrative arc of the first season, the writers set themselves an absurdly ambitious (and, I think, laudable) goal: namely, to dramatize a conflict between a free-spirited professor, and woke students trying to cancel that professor for a classroom “microaggression,” in a way that fully empathizes with both sides. I don’t know if the show actually succeeds at this, but that’s partly because I don’t know if it’s possible to succeed.
To start with some background: in Pembroke’s English department, there are old, traditionalist white males, who give lectures extolling the Great Men of Literature, and who apparently still wield considerable power. Meanwhile, critical theorists are presented as young, exciting upstarts bravely challenging the status quo. People with recent experience of English departments should correct me if I’m wrong, but my sense is that this is pretty anachronistic—i.e., that the last powerful traditionalists in humanities departments were routed by the 80s or 90s at the latest, so that students in the Twitter-and-smartphone era (when The Chair is set) would be about as likely to encounter them as they would professors sitting around in charcoal suits smoking pipes.
There were also some of what felt to me like … intersectional oversights? Ji-Yoon, being Korean-American, is repeatedly approached by Black female students and faculty as a “fellow woman of color,” with whom they can commiserate about the entrenched power of the department’s white males. The show never examines how woke discourse has increasingly reclassified Asian-Americans as “white-adjacent”—as, for example, in the battles over gifted and magnet programs or admissions to Harvard. Likewise, woke students are shown standing arm-in-arm with Pembroke’s Jewish community, to denounce (what we in the audience know to be) a phantom antisemitic incident. Left unexplored is how, in the modern woke hierarchy, Jews have become just another kind of privileged white person (worse, of course, if they have ties to Israel).
This brings me to the first season’s central conflict, which revolves around Bill Dobson, a handsome middle-aged white male professor who’s revered as the department’s greatest genius on the basis of his earlier work, but who, after the death of his wife, is now washed-up, flippant, and frequently drunk or high. In one class session, while lecturing about intellectuals who found the strength to resist fascism despite their own nihilistic impulses, Bill makes a Nazi salute and shouts “Heil Hitler!,” as a theatrical reminder to the students about the enormity of what those intellectuals were fighting. Alas, a woke student captures that moment on their smartphone camera and shares it on social media. The clip of Bill making the Heil salute goes viral, shorn of all exculpatory context. Soon, crowds of students are waving placards and screaming “No Nazis at Pembroke!” outside the English building. In a desperate effort to make his PR crisis go away, the dean initiates termination proceedings against Bill—the principles of academic freedom and even Bill’s tenure be damned. Ji-Yoon, of course, as Bill’s chair, is caught smack in the middle of this. It’s complicated even further by Ji-Yoon’s and Bill’s romantic feelings for each other, and further still by Bill’s role as the babysitter of Ji-Yoon’s adopted daughter.
As all of this unfolds, the show seems immensely interested in pinning the blame on Bill’s “tragic flaws,” minor though they seemed to me—mostly just pride and unseriousness. (E.g., trying to lampoon the absurd charge of Nazism, Bill offhandedly mentions that he’s always wanted to visit Hitler’s mountain retreat, and on another occasion belts out “Springtime for Hitler” from The Producers.) The woke students, by contrast, are portrayed as earnest, understandably upset, and legitimately terrified about hate crimes on campus. If they, too, have opportunistic motives to attack Bill, the show never examines them.
In one sentence, then, here’s my beef with The Chair: its script portrays a mob, step by step, destroying an innocent man’s life over nothing, and yet it wants me to feel the mob’s pain, and be disappointed in its victim for mulishly insisting on his innocence (even though he is, in fact, innocent).
With real-life woke controversies, there often lingers the question of whether the accused might really be a racist, fascist, sexual predator, or whatever else, adequate proof or no. What’s different here is that we know that Bill Dobson is none of those things, we know he’s decent to his core, because the writers have painstakingly shown us that. And yet, in a weird narrative pretzel, we’re nevertheless supposed to be mad at him, and to sympathize with the campaign to cancel him.
A casual perusal of other reviews of The Chair told me that these reactions were far from universal. Here, for example, is what one viewer wrote:
I can appreciate that this is probably close to the reality that most women/of color experience in higher education. I enjoyed watching the scenes with Joan and Yaz [two female professors] the most but the rest was a drag. I couldn’t understand why Ji-Yoon was into Bill, or why anyone was into Bill. I found him to be an insufferable man-baby. That is such a turn off. So she’d put him straight but then still be pining for him. He wreaked [sic] of entitled, white male, tenured privilege and never showed any contrition for his actions or even awareness of their impact. i’m so tired of the “brilliant _” being used to justify coddling someone. And for the rest of the stuffy old patriarchal farts– boot them out! They weren’t good teachers and weren’t able to meet the needs of today’s students.
I asked myself: did this person watch the same show? It’s like, the script couldn’t possibly have been clearer about Bill’s character, the fact that he’s the polar opposite of the woke students’ mental construct. And yet, if the show had drawn an unambiguous corollary from Bill’s goodness—namely, that the effort to cancel him is a moral travesty—then The Chair itself might have been denounced as conservative (or at least classical liberal) propaganda, and those who’d otherwise form its core viewership wouldn’t have watched.
So, if I were a literary critic like the ones on the show, I might argue that The Chair begs for a Straussian interpretation. Sure, there’s an “overt” reading, wherein Bill Dobson is done in by his own hubris, or wherein it’s a comedy of errors with no one to blame. But then there’s also an “esoteric” reading, wherein Bill is the victim of an extremely specific modern-day collective insanity, one that future generations might look back on with little more ambivalence than we look back on McCarthyism. The writers of The Chair might hint at this latter reading, through their sympathetic portrayal of Bill and the obviousness of the injustice done to him, but they can never make it too explicit, because of the political and cultural constraints under which they themselves operate.
Under this theory, it presumably falls to those slightly outside the world portrayed in The Chair—like, let’s imagine, a theoretical computer science blogger who himself was denounced for woke heresies to the point where he has little more to lose in that direction—to make the esoteric reading explicit. Unless and until, of course, a second season comes along to undermine that reading entirely.
### 74 Responses to ““The Chair”: A Straussian interpretation”
1. Aspect Says:
(Arghh I had already typed a comment but I fat fingered the keyboard so I think I lost it, apologies if this is the second comment you get from me; feel free to just keep one of them)
I was a bit underwhelmed by the show, probably because people created unreasonable hype/expectations around it. Regarding Bill’s stupid nazi salute thing, it was blown out of proportion, but I do think that you’re cutting him too much slack.
He is innocent w.r.t having Nazi beliefs. He’s not innocent about being careless and irresponsible time and time again in his workplace. His carelessness brings a ton of drama to people he presumably cares about as well (the chair). I don’t think it’s about whether or not Bill believes in this stuff. It’s that it seems absurd to be so irresponsible with these topics. Personally, I don’t care about it, but I can’t blame other people if they have issues with it.
Does it warrant being fired and stirring up a storm about it? I would say that's debatable, but it's a workplace and he's interacting with people he doesn't thoroughly know… it's just too stupid. I would place the burden on the grown-up in the room, not the 18-20-somethings in this case. He does a silly Nazi joke again when he's having a meeting with the higher-ups about the situation. He acts too much like he's running the place. I suppose that's why the person you quoted called him a "man-baby".
Some of that may be excused by the death of his wife which is pretty clear that has haunted him. But being so careless becomes eventually indistinguishable from malice when it causes so much trouble to him and the people around him. His nonchalant attitude undermines the chair’s authority in front of people, and since she is new to the job and feels like she needs to earn respect, that’s pretty inconsiderate of him (especially because he’s supposed to care about her). It also didn’t help when his apology to the students basically was “I’m sorry you felt that way”. That’s just too commonly the apology of people who can’t get over themselves so it didn’t paint him in a good light. It’s just not a big deal to say “Apologies guys, I shouldn’t fuck around with those topics, there’s a time and place for jokes and this wasn’t the right one”. It seems like the students could’ve calmed down with an honest apology. Maybe they wouldn’t, but in that case, he would’ve done his best to fix the situation and we would not be able to blame him anymore.
Anyway, he just seems like the kind of person who isn’t a bad human but who I wouldn’t trust because his tendencies are self-destructive. I kind of expected him to end up in a situation being drunk and this other young girl ending up having sex with him because of how hard of a time he had establishing boundaries with her. Thankfully, the cliche didn’t materialize. Sure, we can feel sorry about him and understand his grief to some extent, but there comes a point where we have to consider personal accountability and the guy doesn’t seem to step it up after his screw-ups.
It’s hard for me to see the parallel with your situation, aside from an extremely vague “woke outrage” kind of standpoint, because:
– You posted something on your personal blog. You didn’t make it a part of your workplace.
– Even if someone viewed your story as misguided or had issues with it, you expressed feelings of genuine frustration and vulnerability.
– Students of yours came out in your defense as a person and professor (and none against you, afaik?). I’m taking a wild guess that it’s because you’re respectful of other people’s boundaries and you don’t act like this dude (despite the fact that you both don’t have bad intent; how that is expressed in your behavior makes a world of difference).
I think a more fitting equivalent would’ve been if the drama was about him showing the tapes of his dead wife in front of the class, and people framing that as somehow being sexist because his joke trivialized her pain. If there was outrage for that, then I could see more of a connection to your incident.
2. uhoh Says:
Alas, the most that can be done nowadays is to plant the smallest seed of doubt. But fortunately that is often sufficient (as well as necessary) to change people’s minds.
3. Scott Says:
Aspect #1: Thanks so much for your thoughtful disagreement!
I never wrote that Bill Dobson’s situation was especially similar to mine, or that I would ever engage in the sorts of antics that he does. I said only that he comes across as a fundamentally decent (if, yes, drunk, depressed, emotionally damaged, and “gaffe-prone”) person—one who always strives to do right by his students (including Ji-Yoon’s daughter, his grad student, and the undergrad who he “refuses,” mistakenly thinking she wants to sleep with him). Given that he’s also portrayed as being a genius of English literature, it seems obvious to me that academia should have room for such a person, that it’s academia’s loss if it can’t. A generation or two ago, it would’ve been obvious to everyone else as well.
Anyway, yes, of course my experience surviving an online denunciation campaign colors my reaction to such a show, even if I’m not that terribly similar to any of its characters.
4. Sniffnoy Says:
The show never examines how woke discourse has increasingly reclassified Asian-Americans as “white-adjacent”—as, for example, in the battles over gifted and magnet programs or admissions to Harvard.
It’s not one or the other, it’s either as convenient. :-/
5. dm Says:
This is an interesting read of the show! A minor disagreement, I think: I see it as intentionally creating ambiguity by being sympathetic to, and critical of, both sides. Bill has talent and a heart of gold (as a caregiver, when he finally gets around to lecturing, when he helps his graduate student), but also very much an academic type. He’s self-pitying, entitled, refuses to see how he creates work for others. Women have to cover for him. (I think this is on the minds of the writers, because of Joan Hambling’s anger at all the parties she had to host, and Elliot Rentz’s wife’s back story.) His flippant reaction to the whole furor (forgive the pun) is right on the merits but oblivious to the larger context.
The students, on the other hand, are set up for some scorn as well. They aren’t only “earnest, understandably upset, and legitimately terrified about hate crimes on campus”– that confrontation on the quad makes them out to be naive, simplistic moralizers, a foolish mob, etc. There’s a moment when two students lecture Ji-Yoon about their burdens being women of color that, to me, reads like a portrayal of obliviousness.
So on that score I think the writers did well. (And, honestly, it’s a hard task to come up with a Woke Incident that’s just serious enough to justify some criticism of a flippant response, yet not so serious that the students look ridiculous calling for blood.) I’d fault them on taking some easy outs with Yaz by making her the Mary Sue of the tenure track, but that’s another story.
6. Aspect Says:
Ahhh, my bad. I kind of felt that a connection to your case was implied as I was reading. Nevermind then!
>drunk, depressed, emotionally damaged, and “gaffe-prone”
I suppose maybe it’s my experience coloring my perceptions in this case too, as I’m generally not too tolerant with people who exhibit hurtful patterns of behavior repeatedly and then fall back on their issues as a defense (to his credit if I remember correctly, he didn’t actively rely on this as an excuse).
>he comes across as a fundamentally decent
We’re getting the sense that he’s a decent guy, I agree. Maybe that’s just me again, but it happens often that “a person’s heart is in the right place” and still that person ends up doing harmful things. You can always get a sample of events that boosts one type of perception over another. Imo, the “fundamentally decent” status should always be questioned and revoked if harmful things keep happening.
As for whether there’s a place for the guy in academia because of his talent… I would say let the actions speak for themselves. I don’t think anybody’s owed a spot. If he consistently does stupid stuff in class then he shouldn’t teach. He could have a pure research position instead so he can only work with people who can manage his antics. If he causes trouble to peers in that scenario too, then his brains should help him find a niche where his behavior is tolerated.
7. Paul Topping Says:
Where to start? You say, “nevertheless supposed to be mad at him, and to sympathize with the campaign to cancel him.” I didn’t feel that. I wished he’d be smarter but I was still totally on his side. As you also say, he’s essentially a good person who did something silly, given today’s college environment, but certainly doesn’t deserve to lose his job over it.
Although I’m not an academic, the disagreements with reality you point out certainly make sense to me. I mildly enjoyed the show but it suffered from trying to be two shows at once: a situation comedy and a serious portrayal of woke conflict on the modern college campus. For the sake of comedy, the characters are not as complex, intelligent, or self-aware as their real-life versions. Similarly, as you point out, the students are portrayed as simply making mistakes in going after Bill and no maliciousness is shown to us which, AFAIK, doesn’t match real life.
I agree with the man-baby assessment of Bill. No one that stupid and unaware could be the brilliant English professor he is supposed to have been. Again, I put it down to the need for comedy. Assuming there’s a second season, I’ll probably watch it, but it would be nice to see a more serious portrayal of the horrible antics that occur on modern campuses in a different show. Perhaps two shows, one woke and the other non-woke, but both serious. Now that would be worthy of some water cooler talk!
8. Silas Barta Says:
Wow, sounds like it’s worth a watch! Not much to add but:
>a tenure case that turns into a retention case,
I’m not familiar with what that means? Someone is up for tenure and then it turns out no, they won’t get it but now have to justify keeping their job at all?
>There’s also an elaborate side plot involving the actor (and real-life former literary scholar) David Duchovny, who portrays himself, being invited to lecture at Pembroke,
Whoa, crazy! You might be interested to learn that Season 3 of Californication (not for kids) involves David Duchovny’s character (a Fight Club-style author) taking a position as a lecturer at a small private liberal arts college … and it goes about as well as you’d expect.
Sadly, it doesn’t seem to be on Netflix anymore.
9. “The Chair”: A Straussian interpretation | 3 Quarks Daily Says:
[…] More here. […]
10. FC Says:
It is very interesting that they chose a silly Hitler reference as Bill's sin rather than something more substantive, like a really controversial opinion or some racial slur, especially since the Jewish people that might be offended by it are, for the most part, far from underprivileged.
I think the reason for this is that it was the safest choice for the producers and Netflix itself. They had to go with the worst offense they could think of that wouldn’t land them in any trouble, so they went with the antisemitic one.
11. Scott Says:
FC #10: Interesting, I hadn’t thought about that but it’s plausible! It would’ve been hard to do around him saying the n-word for similar reasons in class, for example.
12. Scott Says:
Silas Barta #8: Sorry I didn’t explain!
Yaz, the superstar young Black woman on the faculty, decides to preemptively defect to Yale after she gets an ambiguous clue that the elderly, traditionalist white male professor handling her tenure case might backstab and recommend against her … and Ji-Yoon, of course, then has to try to save the situation.
13. Scott Says:
Paul Topping #7:
No one that stupid and unaware could be the brilliant English professor he is supposed to have been.
Have you seen some of the things otherwise brilliant academics have lost their careers over? E.g., in sexual harassment cases, the sheer clumsiness of the attempts?
14. fred Says:
“Did this person watch the same show? It’s like, the script couldn’t possibly have been clearer about Bill’s character, the fact that he’s the polar opposite of the woke students’ mental construct. And yet, if the show had drawn an unambiguous corollary from Bill’s goodness—namely, that the effort to cancel him is a moral travesty”
Not sure why you’re so surprised since identity politics is about fitting someone in rigid binary boxes based on a few superficial attributes (color, gender, age, sexual orientation). And then a person’s absolute worth and right to claim individuality are entirely derived from which of the “right” boxes are checked…
What’s inside someone’s heart is totally irrelevant, and the only “actions” that count are performative acts of virtue signalling (kneel down, raise a fist, turn your back, posts the right symbols on your social media, etc).
15. Michelle Says:
dm and Paul Topping: agree with your feelings about “whose side” the show was trying to get the viewers to be on.
16. fred Says:
For what it’s worth, I really don’t think that higher education should rely on the absolute perfection of the faculty members (especially when the gold standard is ever changing, and impossible to reach).
Sure, you don’t want to expose kids to active rapists/terrorists, but there’s a lot of benefit in having teenagers realize and accept that their teachers are just like everyone else (i.e their own parents): they are flawed and complicated individuals.
Teaching is about more than understanding the topic at hand, it’s also about learning to think the right way, and prepare the kids to the realities of adult life.
Sheltering them in some utopian Marxist role-playing game is no good.
When I grew up, the quirks and painful life experiences of our teachers had a big impact on us.
Our math teacher was once a heavy drinker, and that caused a crash that killed both his wife and child. He had deep scars in his face. When he told us that life could be cruel, we listened.
Another teacher was an ex-priest who had been defrocked because he fell in love and decided to get married … and he was allowed to teach in a catholic school run by monks!
It’s all fine, especially if this opens interesting conversations.
17. Edward M Measure Says:
I only watched one episode, so I am highly underqualified to rate overall quality, but I wasn’t impressed. In any other line of employment, Bill would have gotten the boot after the first episode, with no need for any Nazi BS. The fogies aren’t just old, they have reached their age of incompetence. Students aren’t showing up for their classes because they aren’t teaching.
The supposedly “woke” young prof did not impress either – marketing titles with the word “sex” as if that was a magic elixir to inspire today’s students.
A much more interesting and plausibly controversial case would be something like the academic lynching of Steve Hsu, driven out of his post as chief of research for daring to examine data linking genes and IQ.
18. Scott Says:
Edward M Measure #17:
In any other line of employment, Bill would have gotten the boot after the first episode, with no need for any Nazi BS.
Probably not at a startup! Or at Los Alamos or JPL or Bell Labs or Apple back in their heydays. What do those places have in common with many university departments? They all require dealing with people who are difficult, eccentric, immature, and brilliant at what they do. I’m not sure I’d ever want to work at any place that wasn’t like that. I hope the coming decades create more jobs that tolerate such people, if only by creating more opportunities for self-employment.
19. Anon93 Says:
On the topic of Jews and wokeness, let’s not forget this paper https://cdn.mises.org/14_2_3_0.pdf which shows that a lot of the measures wokes are using now against men, whites, and Asians are similar to what the Nazis did to the Jews in the early and mid 30s. The Nazis were big fans of the kind of affirmative action where equality of outcome is the goal rather than equality of opportunity.
20. dankane Says:
Scott#18
Are startups/JPL actually that lenient about this? I guess I haven’t seen the show and so don’t know the full context, but I feel like at most places well-intentioned, but still inappropriate (as is the case in almost all circumstances) Nazi-salutes should at least merit a stern talking to from HR followed by firing for repeat offenses.
I mean sure, these institutions need to be able to deal with brilliant people who are sometimes lacking in social awareness. But they still have to weigh the costs of losing out by firing someone for their inappropriate behavior with the costs of losing out on other people who are turned away by this behavior.
21. Scott Says:
dankane #20:
In basically every example I’m familiar with, the answer to this question is, “no, not anymore, but yes when they produced the achievements for which they’re now famous.”
22. dankane Says:
Scott #21
OK. But in the era when those institutions produced the achievements for which they are famous, couldn’t you be pretty openly racist/sexist/whateverist and keep your job in a lot of places in this country?
23. Scott Says:
dankane #22: I’d like to imagine that, at some point between (say) 1965 and 2010, there was a happy medium when the actual racists and sexists would lose their jobs, but the classical liberals, accidental microaggressors, and socially unaware nerds wouldn’t. But maybe this is wishful/nostalgic thinking, and it really just flipped immediately from one extreme to the other?
24. dankane Says:
Scott #23
I don’t think there was ever a time where we successfully managed to fire exactly the people who deserved it and nobody else. I mean MeToo seems to have proved that at least up to a few years ago, there were plenty of unfired sexual predators and this is well after the time when people would at least occasionally be fired due to a tweet taken out of context.
25. Dan Staley Says:
Scott, it feels like you’re saying that sufficiently brilliant people should have more leeway in their behavior than everyone else – after all, if a waiter, secretary, etc. shouts “Heil Hitler” at work, they’re likely to get fired regardless of context. Do you think this viewpoint is approaching some sort of elitism?
(That’s a real question mark there – I’m not sure what *I* think the answer should be.)
I understand the utilitarian side of your argument (in the sense of an overall benefit to humanity), but I find it really difficult to translate that utilitarianism into a more universal, ethical law I’m comfortable with – if we really want separate rules for those who are sufficiently brilliant (or a sliding scale of rules based on how smart you are), that seems both extremely corruptible and also marching towards some kind of “genetic superiority” dystopia.
I guess at the end of the day, the vast majority of people (including undergrad students) don’t know the level of an academic’s brilliance or utility to humanity – they can only trust people like the Chair, the Dean, and the faculty to make that assessment. But in our world where trust in authority has all but evaporated, there’s very little an academic can do to prove themselves in the public eye.
26. Scott Says:
dankane #24: Obviously there will always be mistakes in both directions. But if, in the space of just one or two generations, we went from a culture of protecting the guilty to a culture of vilifying the innocent, then by the intermediate value theorem, it seems like there must have been some point when we were well-calibrated and the mistakes were more-or-less random! Though I admittedly find it hard to pinpoint when it was. 🙂
27. dankane Says:
Scott #25
I mean (assuming continuity) there must have been a point where we made as many mistakes in one direction as we made in the other. I’m not sure that this is an ideal that we should be aspiring towards though. Our objective should be to minimize the total badness of all mistakes we make, and I am not convinced that there was a time previously where we did better by this metric.
28. Scott Says:
Dan Staley #25: I feel like I’m reasonably consistent, in that for the waiter or the secretary also, I’d want to understand the context before feeling comfortable with firing them for a “Heil” salute. Do they actually have the slightest sympathy for Nazism? Or were they just, you know, talking about history, or about what someone else did, with heavy implied quotation marks around the gesture? The fact that actual neo-Nazis could hide behind irony, humor, or claims that they “didn’t really mean it,” doesn’t relieve us of the obligation to use common sense and reason — there they are again, those banes of blankfaces! — to suss out what was going on in some particular case.
Having said that, I do think there’s great societal value in giving academics extra protection for controversial speech and ideas, and I’m grateful to live in a society where that value is widely shared (or at least was, until recently). One way to think about this is that academics tend to be hypereducated people who could make a lot more money applying themselves to, let’s say, derivatives trading, corporate consulting, or software startups. Freedom to think and write as they wish is one of the few things society can offer such people that they actually value, in exchange for their accepting a massive salary cut to spend their lives as researchers and teachers, in principle for the betterment of humankind.
29. dankane Says:
Scott#28
Your view on academics applies only to some disciplines. A small number of English professors, say, might be able to make considerably more money writing for a living, but I’m not convinced that most are taking a pay cut to teach at a university.
30. Richard Cleve Says:
Coincidentally, I watched the series a few days ago, and enjoyed it.
I wondered if the idea of a Hitler-salute incident was based on the real-life case where a high school math teacher lost his job:
https://www.nytimes.com/interactive/2018/09/05/magazine/friends-new-york-quaker-school-ben-frisch-hitler-joke.html
Showing up very late and drunk/high to lectures was funny, but made the character of Bill Dobson less sympathetic. As did his showing the somewhat pornographic video (even though it was by mistake). It works well for an entertaining story, but would not be so charming if it actually happened. And, although I'm not into identity politics, how would it go over if Yaz McKay ever behaved like that?
And, in case I haven’t revealed my stodginess enough, I thought they used the f-word an awful lot. Is that considered acceptable decorum among colleagues in academic institutions?
31. Scott Says:
Richard Cleve #30: Yeah, one weird aspect of The Chair was how the settings were so much more formal than real academic buildings (at least, any of the ones where I’ve worked), and yet the culture was so much more familiar, with professors dropping constant f-bombs to their colleagues, smoking weed together, and getting involved with each other’s personal lives. Although I can’t say for certain that that’s not how it is in the humanities! And of course, if the professors had just talked shop, groused about national and university politics and the weather, and held interminable committee meetings that went nowhere … well, I fear realism might have made for less compelling TV. 🙂
32. Dan Staley Says:
Another honest question for you, Scott: Do you think that there is literally zero wrong in shouting “Heil!” at work if you have no actual Nazi beliefs? My opinion is that, at the very least, it’s in incredibly poor taste and probably disrespectful (I haven’t seen The Chair, so I can’t comment on the specific context of the show).
I’m asking because you seem to generally phrase this discussion in terms of “guilty” or “innocent”, but perhaps there’s a “partially guilty” level in between, for someone who doesn’t harbor offensive/racist/whatever views, but still says things that are, in themselves, unacceptable or at least offensive in context? And while most people consider this “partially guilty” state not to be worth firing someone over, it can get so muddled and conflated with the “full guilty” state that it doesn’t matter from a PR perspective.
Maybe what we’re seeing here is really a Motte-and-Bailey argument, made by the “cancellers” – the Motte is “You shouldn’t say ‘Heil’ in nearly any situation, even if it’s not what you truly believe, because it’s offensive”, while the Bailey is the outrage and support garnered from an out-of-context video clip.
33. TGGP Says:
I’m reminded of Alex Tabarrok”s take on Parasite. To him a very right-wing reading of the film is obvious, and the rest of the world seems to be taking crazy pills fitting it into an assumed left-wing interpretation (which, to be fair, would better fit what we know of the director’s politics).
34. Scott Says:
Dan Staley #32: Here’s what I’ll say. I’m a Jew whose extended family, the branches that didn’t make it to Philadelphia, was almost all murdered by Nazis — shot in pits, mainly, rather than gassed. That’s been at the core of my emotional life since I was about 7 years old. I defer to no one in my level of anti-Nazi sentiment.
But given the scenario that’s constructed on the show, it was obvious that my emotional sympathies would be 100% with Bill Dobson, the gentile who made a Heil salute, and 0% with the students (many of them Jewish) condemning him for it. Why? Because the students are shown understanding only the superficial form of being anti-Nazi — stuff like “never, ever make a Heil salute, not even as part of anti-Nazi class lecture” — whereas Bill Dobson is shown understanding the actual substance of it. Not only because he’s spent his career engaging with Hannah Arendt and other writers who grappled with the evils of Nazism, but more importantly, because he constantly stands up for whatever is most human in a given situation, or for whoever seems defenseless and in need of his help, no matter how weird or inappropriate it makes him look.
In one striking example, Bill attends a Korean ceremony of Ji-Yoon’s relatives where a 1-year-old baby is placed in front of various items — an artist’s brush, a dollar bill, etc. — and whichever item the baby touches first is supposed to represent its future. The baby reaches for the artist’s brush until one of the adults pushes the dollar bill into its face instead, and Bill completely loses it — screaming at all the adults there about how they’ve just shortchanged this child’s future — until he passes out from whatever drugs he’s on. That’s the kind of crazy sonofabitch who you could imagine hiding Jews in their attic. Hope that answers your question.
35. Ian D Says:
As a long-time chair (why, why, why?!) I haven’t decided yet whether I want to watch it. I’m afraid I might end up setting my TV on fire. But I will say that there actually still are a handful of these old, powerful traditionalists in humanities departments. They are rapidly retiring, but they still exist.
36. Doug Says:
The most interesting thing I read on this was a strong encouragement to really, really center one’s reading on Ji-Yoon. So the central conflict is *not* between Bill and the students, or Bill and the dean, but the center of the show is on her – what is she to do about this? And if you really want to tell a story about Bill vs ‘wokeness’ you have to be a lot more nuanced in each side. But Bill is clearly not ‘actually’ a racist, but he is clearly quite reckless. Once he considers the opposing positions to be ridiculous, he is incapable of any kind of engagement or conversation. He completely fumbles his attempt at an apology – a good start in that scene, but, a disaster. (Or a sabotage… the dean may have been a bit deliberate in the popo timing, but his deliberate action wasn’t really explored.)
Above, Scott wants to make a distinction between a good target of activist ire, the ‘actual racist,’ and a misguided target, the ‘accidental microaggressor.’ I want to talk a little about the ‘casual microaggressor.’ If you believe these aggressions are not actually felt by people, such that you bear no responsibility (especially in a position of trust as a professor to your students, nevermind among peers), such that these become a habit for you, you are probably an ‘actual racist.’ This bifurcation may not be so clear.
But in the Chair, it *is* clear. Given what Ji-Yoon knows of Bill, he is a fundamentally good guy who is just not keeping it together, doubles down and makes things worse for himself instead, and she’s got… a lot of conflict in order to figure out how handle it. And because other people control the stakes and continually double down, both on Bill’s side and on the admin’s side, she can’t thread the needle, she has to pick a side. Drama!
And re: #10, yes, I totally agree this is the safest thing you could actually film and act. I thought a good real example of the ‘pure accident’ was the communications prof who got in trouble when, while listing examples of filler words in different languages, he happened to include the Chinese filler, which does not sound so innocuous to an English ear. Not filming that one!
37. Scott Says:
Doug #36: Yeah, I worried about whether my post centered too much on Bill rather than Ji-Yoon. But the way I thought about the show is simply that Ji-Yoon is the “viewpoint character”: the one through whom we the viewers perceive the tragicomedy of Bill. Bill is the defendant, the students and the Dean are the prosecutors, and Ji-Yoon is the judge. How will she rule? How would you rule?
Regarding the question of Bill’s culpability, see my comment #34.
In the pivotal scene where Bill tries to apologize to the students and instead just makes them angrier, I actually thought that Bill showed Job-like restraint. He speaks eloquently about the Nazis as the enemies of professors, the enemies of intellect, drawing on his lifetime of engagement with Jewish writers like Hannah Arendt … and the students’ response is to accuse him of “appropriating” Arendt, a crime that doesn’t even exist outside the students’ strange 21st-century creed? Are these students woke robots? I would’ve lost my cool faster than he did.
In the end, of course, Ji-Yoon’s Solomonic verdict is that, while the students might be wrong about Bill, their rage is ultimately justified, because “their world is burning” and older generations have failed them. I.e., this was never about Bill’s Heil salute after all—it was about climate change! At which point I kept asking myself: these are college students? Don’t they need, like, a remedial course to help them correctly identify the target of their anger?
38. pete Says:
I also enjoyed this series but I have a somewhat different view of Bill.
Yes, he does seem to be a decent person caught up in an inane accusation. But you have to wonder what he THOUGHT would happen if he gave a Nazi salute and shouted Heil Hitler. This was made in 2020(?) and you just don’t get away with that on campus, no matter what your intentions are and I do not believe that anyone in his position would not know that. Given that, I guess that he was trying to upend his career and he succeeded. I don’t feel that sorry for him because he got what he wanted.
A separate question is “Should anyone, even if self-destructive, be destroyed that way, by a ridiculous assumption?” I don’t think so but free speech in our universities seems to be fading away.
39. STEM Caveman Says:
Few actors can convincingly play professors. The personality types and ingrained behavioral styles are too different. They are great at (over)performing the stereotype of a professor, a nerd, an engineer, a research scientist, and a film with a large enough budget can bring academics to the cast (or vice versa) to impart the "accent" of how they walk and talk, but even the success cases of this approach are limited.
Sandra Oh, from her previous work, does not seem at all similar to the modern professorial type even in fluffy fields like English. Comedy is closer to genuine nerditude than is acting, but still pretty distant. I could imagine her as, say, a psychotherapist of some kind. But for portraying a current-year academic it’s hard for a more or less normie to externally mimic the combination of congenital and cultivated detachment (verging on withdrawal) and rumination that are at the heart of the enterprise. Duchovny of course was nominally a nerd back in the day but, like James Woods, his dropping out might just reflects a mismatch between academia and his inner nature despite the high IQ.
40. OhMyGoodness Says:
If I were prone to stereotyping my reaction would be: it’s just karma. Wokesters were created by Academia and now Academia bears some of the result. Pure ideation and hyperbole (that are easy to teach and learn) were fostered to the neglect of pragmatism and rational discourse. This was exacerbated by universities becoming more businesslike and catering to their student customers. The hallowed halls were converted to fast food education to keep those hungry minds rolling through. Pump them full of ideology and send them on their educated way with self-righteous conviction they are the best and brightest.
The problem is that when these ideas are actualized into public policy, the outcome rarely meets expectations. I shouldn’t be surprised that the left has turned so quickly on Biden. He is instituting the policies the left demanded and the results are not as expected. When faced with outcomes that are not as expected, it would be reasonable to reassess beliefs, but in the US you just scapegoat the president.
When well-intentioned people in general society are labeled racists and Nazis much of Academia applauds. When the same happens to a respected colleague the reaction is-these people are out of control. If the reaction were-these people that we have created are out of control-then maybe something positive would come of it.
I recently saw a 1961 quote from William F. Buckley that I wouldn’t have agreed with earlier but do now:
“I would rather be governed by the first 2,000 people in the Boston telephone directory than the Harvard University faculty.”
41. Chip Says:
Scott #37: So, to clarify, your response to Doug #36 is to double down on a reading that de-centers the titular female POC to make it a story about the dysfunctional white guy. Because it would be silly to think that the title of the show refers to the actual, you know, *protagonist*.
42. Jeroen Says:
Thank you for this insightful review. I could not help but think that if the series tried to create ambivalence, or to create sympathy for the students, it did a terrible job of it; my initial interpretation was that it wasn’t trying to.
In my recollection, most of the information that the viewer gets about Bill’s politics (i.e., whether he sympathizes with Nazism) is available to the students as well: the series makes sure that he immediately cites WW2 death tolls in his lecture (“including the camps”), and in the discussion scene he talks about Jewish and other German émigrés in a way inconsistent with Nazism. The series communicates this to us while the students are present.
Still, it expects the viewer to understand that Bill is not a Nazi and to accept that the students think he is. The only things that seem to explain the discrepancy, however, are ones in which the students come off badly: they are on their phones during the lecture, so they are too distracted to place Bill’s salute in context; they have ridiculous priors that make it plausible to them that professors of literature are Nazis; and they reason and argue in clichés rather than engaging with Bill’s actual views.
I would expect people who sympathize with left-wing student activism to criticize the very premise that students would respond to his lecture in this mindless way – to call the series out on what is in effect a conservative cliché. I thought the series was taking a somewhat simplistic stab at cancel culture as a combination of willful misunderstanding and mob mentality, something Bill and Ji-Yoon need to respond to (and that drives their conflict, like Doug #36 said) but that they can’t morally and intellectually engage with because it’s just too silly.
It’s interesting, therefore, that there are apparently viewers who side with the students and/or want Bill to show “contrition for his actions”. I’m not sure what to make of that. Maybe it means that the series indeed wants us to sympathize with the students, or at least create some ambiguity – but what are the internal signs of that? There are perhaps a few, such as the anonymous person who ominously praises Bill for the Hitler salute citing “free speech”, suggesting that the alt-right doesn’t understand Bill’s irony either and that therefore it was still a dangerous thing to do. However, overall the show fails to make an interesting case against Bill’s politics or in favor of his cancellation, and I hope that it didn’t think it was making such a case.
43. Robert Solovay Says:
Scott,
Thank you for the pointer to Strauss, whom I’d never heard of and who seems quite interesting.
44. JS Says:
I too have been shocked by the sympathetic response towards the “mob” following this show. I thought it did a wonderful job of showcasing the hypocritical nature of woke middle-class liberals and how they attack individuals whilst truly believing they’re fighting the good fight. The real issues within the university (like the cost-cutting dean, the bureaucracy, and the way the university is treated as a business and the students as clients) are completely missed whilst the students make a ridiculous foray against “nazis” based on a two-second clip. I thought this contrast was obvious and frustrating to watch.
The fact that this is meant to be a private, Ivy League university, where most of the students calling for a man not only to lose his job but to lose any chance he has of teaching again are probably very wealthy and privileged (identity politics aside), seems to have been lost on many. I haven’t seen this simple fact mentioned much even though, to me, it is very blatantly portrayed in the show. These kids are going around preaching the victimisation of identities that specifically relate to them (female, black, etc.) but of course fail to talk about class privilege, as that would mean turning the spotlight on themselves. This culminates in them spending weeks calling a man a nazi and focussing their energies on getting him fired. They’re so caught up in their ivy-league bubble, they come across extremely entitled, oblivious and better-than-thou.
45. Wednesday: Hili dialogue (and Kulka dialogue) – Why Evolution Is True Says:
[…] at his website Shtetl Optimized, Scott Aaronson reviews the new Netflix series “The Chair”, a show that will interest many of us, as it’s about a new chairperson, played by Sandra Oh, […]
46. Scott Says:
Chip #41: Hey, I didn’t write this show! Its structure is that, as chair, Ji-Yoon constantly has to deal with various intersecting dramas—the Yaz drama, the Joan drama, the David Duchovny drama—but by far the biggest one is the Bill drama. I hope there will be more seasons, and if so maybe they’ll decenter Bill in favor of Ji-Yoon and the other characters, like for example Homeland eventually got rid of the POW guy to focus on Claire Danes’s character.
47. renato Says:
JS #44: Mark Fisher argues in “Exiting the Vampire Castle” (2013) that the focus on race and gender by privileged people is driven by a desire to obfuscate class. In doing that, they remove themselves as possible targets of the mob (until they commit a gaffe related to race or gender).
48. Peter Shenkin Says:
“professors sitting around in charcoal suits smoking pipes”
Wait — in my recollection they wore tweeds!
-P.
49. Marc Briand Says:
As I watched this season I grew increasingly frustrated that the writers allowed Bill Dobson to suck all the plot-oxygen out of the story. Sandra Oh’s character was left to just stumble around and try to clean up the messes other people were making. I was led to believe by the series title, The Chair, that it was supposed to be about, you know, *the chair* of an English department. But increasingly it wasn’t about her at all; it was about Bill Dobson and his unforced errors.
Maybe this reflects the reality of academia and this is what competent, good-hearted chairs are reduced to. But I don’t care. Screwing over your main character, rendering her powerless, never giving her an opportunity to demonstrate her strengths, her wisdom, the qualities that got her there in the first place, is just bad storytelling. The fact that Ji-Yoon didn’t even have an ally in the story just shows the writers’ contempt for their main character. At first it was mildly entertaining to see what kinds of things she had to deal with. By the end of the series, I just found it frustrating. The writers don’t know how to create a strong female character, so they take the cheap way out and resort to an old plot device. Pathetic. If I were Sandra Oh, I’d be pissed.
50. John Says:
Just more leftist propaganda on Netflix.
51. Scott Says:
John #50: No, it’s actually more interesting than that. Future “zero-effort drive-by comments” will be left in moderation.
52. Carina Curto Says:
I posted your commentary on fb, and it seemed to resonate with a lot of my (mostly academic) friends. My reading is slightly different, though.
What I find interesting is that although the writers do portray the students as somewhat ridiculous in their specific grievance about Bill, in a more general sense they side with the students. The whole show can be read as a somewhat “woke” critique of academia, the inertia of its sexism, racism, and overall conservatism.
The final verdict seems to be this: the students are right to be frustrated with the system, but they’re targeting the wrong people. They go after Bill, and then Ji-Yoon. (It’s critical that they end up going after her, too. And totally realistic.) As familiar as the narrative unfolding felt to me, I wondered how an undergrad would respond. My most hopeful interpretation is that the show is secretly aimed at undergrads, at trying to give them a more nuanced view of what the world of faculty and administrators actually looks like from the inside. So that they don’t keep targeting the wrong people…
53. Doug Says:
Marc #49: I would have watched a full episode of her explaining the last 30 years of literary criticism to Duchovny, so I may just be signalling that I’m not quite in the target audience anymore. This is more about my people than for my people. (See also Big Bang Theory, which I find absolutely insufferable, and I cannot imagine how Scott tolerates it. Perhaps this is just his way to reinforce to his CS comrades that despite flirtations, he’s not joined physics club? 😉 )
JS #44: Yes, the mob is extra dumb for the camera. Bill, I think, is also unrealistically extra dumb for the camera. Scott contends that his apology town hall on the lawn is actually rather good, he cites his Arendt, etc. And it starts well, but goes completely off the rails! He goes with “I’m sorry if you feel that way”! He was super warned about this, because it was super obvious, and you get a super obvious reaction. Again, I like to think of these gross oversimplifications on ‘both sides’ as being there not to convince me that the people who are more like me in some ways or more like me in others are really the dumber people, and that I should ‘wake up’ and disavow either academics or activists. Rather, I think the simplifications on each side bring more contrast and higher relief to Ji-Yoon’s conflict. And, upon reflection, like Marc and Scott, I am disappointed that although this is the most interesting part, the total screen time spent on it is perhaps shy of what it ought to be.
As for what I would choose? Like Ji-Yoon, I’d have been looking for off ramps and deescalations throughout the series, and if I found myself trapped in a genre which makes such things impossible and wound up at the final showdown, I’d probably have gone the other way. The job market is fierce. A ‘brilliant’ man baby who refuses to engage in crisis deescalation can be replaced with a brilliant adult. Like, real easy.
54. Marcelo Says:
The name of the course had been changed from “Modern Literature” to “Death and Modernism” in order to attract students looking for this novel, oxymoronic form of “transgression”, which consists of staying well within strict boundaries and avoiding the slightest offense, while infringements are to be reported to none other than the authorities.
It would be sad enough that the legacy of the May ’68 generation, its “forbidden to forbid”, its calling adults names, or throwing stones at the police had gentrified into what could be likened to Western tourists staying in five-star “hostels” in the belief that they are walking the Road to Kathmandu.
The truth rather seems to be that the youth as depicted in the series have been body-snatched by their great-grandparents in their old age. Indeed, they are equally judgmental, self-righteous, prudish, permanently outraged, trivially vexed, driven to hostility by a feeling of vulnerability. The Salem witch trials part must be an even older reincarnation.
55. fred Says:
For anyone who thinks that US colleges are still a place for freedom of expression and opinion, just go ahead and ask any STEM professor (in front of their class) whether Taiwan is a country or not…
56. Scott Says:
Doug #53:
The job market is fierce. A ‘brilliant’ man baby who refuses to engage in crisis deescalation can be replaced with a brilliant adult. Like, real easy.
How sure are you? Feynman did much of his research in titty bars and was picketed by feminists. Gödel starved himself to death because he thought everyone but his wife was trying to poison him. Einstein was a serial adulterer who presented his first wife with a famously insulting written list of demands. Paul Erdös expected his colleagues (and, often, their wives) to cook and clean for him and attend to all his other mundane needs. John Nash … well, you’ve seen the movie. Tell me, then, which of these “man-babies” would’ve been easy to replace by an equally brilliant responsible adult?
57. Scott Says:
Just to expand on what I said in #56:
If you’re a department chair, obviously you want to create an environment where rare geniuses can flourish, and do work that will be remembered after almost everything else is forgotten. And obviously you also want the environment to be safe and welcoming for everyone else. And if those two goals ever come into conflict, obviously you have a painful tradeoff to make. Examples would be a genius who’s also an actual neo-Nazi, or abusive to their students, or a serial sexual predator.
My contention is simply that if the genius is someone like Bill Dobson—i.e., a caring soul who always tries to do right by his students and everyone else, but who once committed a classroom “microaggression”—then you haven’t even entered the continent where these painful tradeoffs would arise.
58. Sniffnoy Says:
Scott #56:
I feel like I must point out that Gödel starved himself to death because he thought everyone but his wife was trying to poison him.
59. Scott Says:
Sniffnoy #58: Thanks, fixed!
60. Edward M Measure Says:
Scott #18: There was at least one type of bad behavior that was widely tolerated at all sorts of institutions, and not just from top performers: sexual abuse of female subordinates and colleagues. Alcoholism too frequently gets a pass. So I guess I was wrong and the tolerance you admire lives on, and not just in the academy.
61. OhMyGoodness Says:
Scott #56:
I have speculated that England produced many great minds because of their general tolerance of eccentrics (maybe they even valued eccentricity) and then saw this quote from John Stuart Mill:
“Eccentricity has always abounded when and where strength of character had abounded; and the amount of eccentricity in a society has generally been proportional to the amount of genius, mental vigor, and courage which it contained.”
I don’t believe that modern English culture is as accepting of eccentricity as previously.
The US is so conformist that, as you well know, it is very difficult for an extremely talented child with an unusual personality to survive in the school system. If you look at, say, Putnam winners in recent years you find a strong trend of home schooling. It is very doubtful they could have survived undamaged in the US educational system. I had a friend admitted to the grad math program at Harvard at 17, and a gentler spirit you will never find, but he couldn’t find peace in general society because of some unusual (but harmless) personality quirks.
62. walruss Says:
I’m surprised by the readings that are sympathetic to the students, and the “both sides have a good point” readings both.
There were two things going on in this show: One was the story of a changing academic culture, and the second was a story of how to navigate personal relationships in the workspace, and being considerate to those with whom you have a personal relationship.
These intersected a lot, as they should in good stories – Ji-Yoon especially came from a point of view of “it’s irresponsible both personally and (somehow) socially, not to make everything about career advancement, even if advancement would make me less happy” and much of the show was just about her figuring out that there was another way. And part of it was Bill figuring out that even if he was right, and sad, and messed up, and needed support, the other people in his life need love and support as well, and that means sometimes taking a bullet you don’t deserve.
But when it came to the changing academic culture aspect, the show could not have been more clearly anti-wokeness. No student in the mob got any kind of development, and the students that did get developed were explicitly aligned with Bill by the end.
The story isn’t about the students as people or wokeness at all. That’s kind of the point.
Wokeness is the backdrop, and it’s portrayed consistently as silly and unthoughtful (my favorite moment in the whole series may have been Ji-Yoon’s dry “yes, I’m aware” when a student earnestly informs her that women of color receive fewer invitations to social events). But the story is about how the whole of the institution fails to engage with the students or meet their needs.
The show repeatedly demonstrated different teaching styles – the old farts’ “stodginess” (I’ll come back to this), vs. Yaz’s hyper-woke “student-oriented” class, which is very popular but clearly not engaging with the material or challenging the students.
Then in the last scene, where ostensibly we’re seeing “college teaching the way Ji-Yoon ought to be doing it” there’s engagement, but no explicit racial/gender framing, and the monkeys aren’t running the zoo. The students are outside their comfort zone, they’re the movers of the lesson, but Ji-Yoon is guiding them constantly back to the text, to thinking about the text, to moving past the superficial frames to what it means to each student personally.
The most interesting part of all this is the old farts club. I know Scott mentioned that these people appeared to be there to give the main characters something real to fight about: ancient stodgy men who are all entitlement and boorishness. These guys are repeatedly played for laughs – for the ridiculousness of their entitlement. Or for pity – for the sadness of seeing people past their prime. But they aren’t there for realism – they’re there to present a necessary contrast to the student-driven “we just need to put butts in seats” philosophies of Yaz and the dean.
The show does a turnaround, subtly but unmistakably. The old folks’ club’s entitlement is clearly ridiculous, but their perspective becomes invaluable. Elliot may be an ***hole, but he’s there at the end, teaching with Yaz, because he’s the only person who can get her to challenge her students instead of catering to them. Joan is department head because she is passionate about her subject, paid her dues, doesn’t coddle her students, and is deserving of the honor. The show does, in fact, take the position that distinguished tenured professors are deserving of our ear if not necessarily our unwavering respect.
And short of Bill giving an Ayn Rand style speech, the resolution couldn’t be clearer – the students are wrong on the particulars, wokeness is dumb. But they’re right in general – a shadowy cabal of authority figures is attempting to manage them instead of engage with them, nobody cares about their needs or concerns, and the whole purpose of the school has become to take their money. The school legitimately doesn’t care about them, their outcomes, or their needs. It has no interest in engaging with students where they live, in their grievances, in their careers, in their racial identities, in true diversity, in literally anything about the quality of their experience. Ji-Yoon spends 80% of her time trying to get Yaz tenured specifically so they can have their first black, female tenured professor, and *Yaz doesn’t want that!* She explicitly says she doesn’t. She wants tenure because she feels she deserves it.
And firing Bill and kicking Ji-Yoon out as chair actually does not change that at all. But remembering that students and professors are individuals with unique identities, perspectives, and talents instead of monolithic stereotypes does.
63. Nick Drozd Says:
Scott #56, 57
Protecting those rare geniuses is terribly important, no question. But if you are trying to protect a rare genius, you had better make sure that they really are a rare genius, and not just a charlatan who has created a cult of celebrity.
Consider John Searle. It recently came to light that he was a serial sexual predator in his time as a philosophy professor at Berkeley. This was an “open secret”, but he was protected by the department because, I don’t know, I guess he was considered to be a big deal?
But Searle was not a Feynman or an Einstein or an Erdos. He was just some asshole, and he was a shitty philosopher too. He enjoyed the protection without producing anything to warrant the protection. He could easily have been replaced by somebody who wasn’t an asshole and a serial sexual predator, and philosophy would not have been any worse off.
64. Scott Says:
Nick #63: LOL, but what do you really think of Searle? 😀
In case it wasn’t clear, I personally feel like the balance of considerations is in favor of firing anyone who could be accurately described as a “serial sexual predator,” even supposing they are an irreplaceable genius.
Now, as for an irreplaceable genius who made, let’s say, one or two awkward or unwanted romantic overtures, backing off as soon as he learned they were unwanted? I feel like he who’s never done such a thing should cast the first stone.
65. OhMyGoodness Says:
Walruss #62
I enjoyed reading your analysis but question the sympathy for the students in the general case.
“a shadowy cabal of authority figures is attempting to manage them instead of engage with them, ”
But isn’t that the case for the majority of people in the workplace in the US, and don’t people generally ascribe some sinister characteristics to these shadow figures?
This points to the rightful general role of a university in society. My view is that the role is to develop critical thinking and to provide skillsets that society requires and thereby to provide marketable graduates that obtain suitable employment in accordance with their reasonable expectations.
You are well aware of the underemployment and low salaries for most majors, and this, coupled with high education costs, suggests to me that seeking a college education is not an economically reasonable endeavor for most people. The data are readily available, so cursory due diligence provides a good expectation of the outcome. The current state of affairs seems to me like the Scarecrow in the Wizard of Oz. He received a diploma and felt intelligent. No one wanted him to do the load calculations for the journey back to Kansas so he returned to the corn field.
My guess is that there will have to be universal basic income in the US and that university enrollments will continue to fall. Required skillsets will narrow further due to both technological innovation and increased international competition. In this case students who have unrealistic expectations will form a sizable bloc that can discuss Marcuse with confidence and feel they haven’t been treated well by society. Better that they begin to accommodate to it in college, but that is not likely to happen.
66. Public School Grad Says:
Academia is fundamentally conservative, even though people like to pretend that it is a hotbed of radicals. In my experience most undergraduates mostly care about either their social life or graduating and getting a job. Of course, there are those who are passionate about social justice, as young people are wont to be. There is an even smaller subset who care about virtue signalling (another phrase corrupted by the alt-right) and will do a few ridiculous things like what you mention. As for academics, they may be on the left politically, but a lot of them seem to be so only when it is convenient. All the abuses of (mostly young, female or minority) academics I listed above were perpetrated by so-called “progressive” academics. Those labels don’t mean much; their actions do.
67. Scott Says:
Public School Grad #66: I … agree with much of what you write. Academia has all sorts of ordinary problems where, even if they’re shared by much of the rest of the world, one would hope that we could do better. And while I won’t go into details, I have firsthand knowledge of situations wherein academics who are among the loudest and most performative in denouncing sexual harassment in public were the very ones to bury it when it was their friend who was credibly accused.
68. Vampyricon Says:
For a school of thought that emphasizes the lens through which historical events are seen, these wokeists are quite blind to their own biases.
69. Doug Says:
How did Vampyricon make it over the hurdle of scrutiny for low-effort drive-bys?
70. Richard Gaylord Says:
Scott #64:
Why are you allowing highly personal defamatory attacks such as
“But Searle was not a Feynman or an Einstein or an Erdos. He was just some asshole, and he was a shitty philosopher too. He enjoyed the protection without producing anything to warrant the protection. He could easily have been replaced by somebody who wasn’t an asshole and a serial sexual predator, and philosophy would not have been any worse off.”
in your blog comments? You once offered to compensate (via a donation to a charity of their choosing) anyone whom you had criticized personally (as I was). This Trumpian level of discourse seems rather inappropriate and unseemly, regardless of one’s view of Searle’s personal behavior or professional contribution.
71. unaligned_agent Says:
Nick #63, what were the accusations in Searle’s case? To be fair, he does strike me as a person who would deny the agency of women, pretending that their complaints about his inappropriate behaviour are the result of non-sentient symbol manipulation.
72. Panglossy Says:
Art imitates Life
The writers may have been thinking of this incident, but obviously a congressional D-student would not likely find himself in academe.
He later deleted the post. (Cramer, Philissa (August 12, 2020). “Rep. NC Congress candidate deletes pictures from his stay at Hitler’s”. The Jerusalem Post / Jewish Telegraphic Agency.)
> Ji-Yoon, being Korean-American, is repeatedly approached by Black female students and faculty as a “fellow woman of color,” with whom they can commiserate about the entrenched power of the department’s white males.
I found “The Chair” to be a stinging satire mocking the flaws of nearly everyone on campus — Bill, the dean, cursing English teachers, shallow student mobs — with the exception of Ji-Yoon, who has to deal with all of it. The same students who are indignant about a “Nazi-sympathizing” professor have no problem calling a person “a (wo)man of color”, an offensive, unscientific characterization rooted in US-centric racism.
74. E. K. Says:
I haven’t watched the series and don’t intend to, so I can’t comment on the specifics of it, but a particular part of this post stood out to me:
Ji-Yoon, being Korean-American, is repeatedly approached by Black female students and faculty as a “fellow woman of color,” with whom they can commiserate about the entrenched power of the department’s white males. The show never examines how woke discourse has increasingly reclassified Asian-Americans as “white-adjacent”—as, for example, in the battles over gifted and magnet programs or admissions to Harvard.
I know the discourse you’re referring to and I agree that treating Asian-Americans as if they’re quote unquote basically white is wrong. However, not only have I seen people argue against it even when it was big, not only has there been discussion of Asian-specific issues among “woke”* people for years, but this very year we had #StopAsianHate in light of the Atlanta spa shooting in March and the various anti-Asian hate crimes that flared up because of COVID-19. I have seen woke people, in particular woke Black people, expressing solidarity with Asians after the shooting, and I still see them talk about it. If you’ll forgive my sarcasm, perhaps this is because woke people are not a monolith and there are various disagreements to be had about issues; and also perhaps it’s possible that people change their minds based on new data or changing circumstances.
Similarly:
Likewise, woke students are shown standing arm-in-arm with Pembroke’s Jewish community, to denounce (what we in the audience know to be) a phantom antisemitic incident. Left unexplored is how, in the modern woke hierarchy, Jews have become just another kind of privileged white person (worse, of course, if they have ties to Israel).
What I’m used to seeing, until very recently, is woke people saying non-Jews and non-Palestinians shouldn’t comment on Israel and that if a Jewish person tells you something is antisemitic, it is, and you should not question them about it. I have also seen people call Anne Frank a “white Becky”, though it was mostly in the context of them being denounced. And generally, I have seen woke people speaking out against antisemitism, whether in general or in reference to specific incidents. In particular I recall some case of antisemitic graffiti in a bathroom causing issues in one college. Looking it up now, there’s a recent instance of this as well, apparently. As you can see, the students aren’t brushing it off. I can’t tell you about their demographics, beyond the fact that many of them are Jewish for obvious reasons, but I doubt there are no non-Jews signing the open letter.
My point is, I think this is a very narrow and somewhat wrong view of these issues (being wrong on the internet is very bad, as we all know). “Woke people” is not a coherent political/ideological group! It’s a fine shorthand in some contexts but not in this one. There are various people I would classify as woke who would absolutely tear each other to shreds over disagreements about, say, whether queer should be used as an umbrella term or not. There isn’t a unanimously agreed-upon “woke hierarchy”. For the record, you aren’t the only one who makes these generalizations! You just have a handy-dandy comment section, so I might as well comment 😉
Lastly, this is a really minor thing and has nothing to do with any incorrectness, but does the series have to address these issues in the first place? Like I said, there’s definitely varying viewpoints among woke people, so one can portray conflicts between those viewpoints, but it doesn’t seem necessary.
*I don’t know how you intend it, but every person I know, including myself, uses “woke” as a pejorative, so I can’t parse it any other way. I use scare quotes in the first instance of it because I don’t intend it as an insult here, even if I find some of what such people say to be wrong. Apologies if it’s not your intention as well.
# Product of random stochastic matrices
Behrouz Touri, Angelia Nedich
Research output: Contribution to journal › Article
55 Citations (Scopus)
### Abstract
The paper deals with the convergence properties of the products of random (row-)stochastic matrices. The limiting behavior of such products is studied from a dynamical system point of view. In particular, by appropriately defining a dynamic associated with a given sequence of random (row-)stochastic matrices, we prove that the dynamics admits a class of time-varying Lyapunov functions, including a quadratic one. Then, we discuss a special class of stochastic matrices, a class $\mathcal{P}^*$, which plays a central role in this work. We then study cut-balanced chains and, using some geometric properties of these chains, we characterize the stability of a subclass of cut-balanced chains. As a special consequence of this stability result, we obtain an extension of a central result in non-negative matrix theory stating that, for any aperiodic and irreducible row-stochastic matrix $A$, the limit $\lim_{k\rightarrow\infty}A^k$ exists and is a rank-one stochastic matrix. We show that a generalization of this result holds not only for sequences of stochastic matrices but also for independent random sequences of such matrices.
Original language: English (US)
Article number: 6613535
Pages (from-to): 437-448
Number of pages: 12
Journal: IEEE Transactions on Automatic Control
Volume: 59
Issue number: 2
DOI: https://doi.org/10.1109/TAC.2013.2283750
State: Published - Feb 2014
Externally published: Yes
### Keywords
• Balanced
• consensus
• product of stochastic matrices
• random connectivity
• random matrix
### ASJC Scopus subject areas
• Electrical and Electronic Engineering
• Control and Systems Engineering
• Computer Science Applications
### Cite this
Product of random stochastic matrices. / Touri, Behrouz; Nedich, Angelia.
In: IEEE Transactions on Automatic Control, Vol. 59, No. 2, 6613535, 02.2014, p. 437-448.
# Teaching
## Applied programming for Life Science 2 (1.5hp)
Undergraduate course, Stockholm University, Department of Mathematics, 2020
Introductory course in programming techniques using Python.
## Programming Techniques for Mathematicians (7.5hp)
Undergraduate course, Stockholm University, Department of Mathematics, 2020
Introductory course in programming techniques using Python. Fundamental computer concepts such as …
# Directory:Woodrow Wilson
Thomas Woodrow Wilson (December 28, 1856 – February 3, 1924) was the twenty-eighth President of the United States. A devout Presbyterian and leading intellectual of the Progressive Era, he served as President of Princeton University and then became the Governor of New Jersey in 1910. With Theodore Roosevelt and William Howard Taft dividing the Republican Party vote, Wilson was elected President as a Democrat in 1912. He proved highly successful in leading a Democratic Congress to pass major legislation that included the Federal Trade Commission, the Clayton Antitrust Act, the Underwood Tariff, the Federal Farm Loan Act and most notably the Federal Reserve System. Wilson was a proponent of segregation during his presidency.[1]
Narrowly re-elected in 1916, his second term centered on World War I. He tried to maintain U.S. neutrality, but when the German Empire began unrestricted submarine warfare he wrote several admonishing notes to Germany, and eventually asked Congress to declare war on the Central Powers. He focused on diplomacy and financial considerations, leaving the waging of the war primarily in the hands of the military establishment. On the home front he began the first effective draft in 1917, raised billions through Liberty loans, imposed an income tax, set up the War Industries Board, promoted labor union growth, supervised agriculture and food production through the Lever Act, took over control of the railroads, and suppressed anti-war movements. He paid surprisingly little attention to military affairs, but provided the funding and food supplies that helped the Americans in the war and hastened Allied victory in 1918.
In the late stages of the war he took personal control of negotiations with Germany, especially with the Fourteen Points and the armistice. He went to Paris in 1919 to create the League of Nations and shape the Treaty of Versailles, with special attention on creating new nations out of defunct empires. Largely for his efforts to form the League, he was awarded the Nobel Peace Prize in 1919. Wilson collapsed with a debilitating stroke in 1919, as the home front saw massive strikes and race riots, and wartime prosperity turn into postwar depression. He refused to compromise with the Republicans who controlled Congress after 1918, effectively destroying any chance for ratification of the Versailles Treaty. The League of Nations was established anyway, but the U.S. never joined. Wilson's idealistic internationalism, calling for the U.S. to enter the world arena to fight for democracy, progressiveness, and liberalism, has been a highly controversial position in American foreign policy, serving as a model for "idealists" to emulate or "realists" to reject for the following century.
## Early life
Thomas Wilson was born in Staunton, Virginia in 1856 as the third of four children to Reverend Dr. Joseph Wilson (1822–1903) and Janet Woodrow (1826–1888). His ancestry was Scots-Irish and Scottish. His paternal grandparents immigrated to the United States from Strabane, County Tyrone, Ireland, while his mother was born in Carlisle to Scottish parents. Wilson's father was originally from Steubenville, Ohio where his grandfather had been an abolitionist newspaper publisher and his uncles were Republicans. But his parents moved South in 1851 and identified with the Confederacy. His father defended slavery, owned slaves and set up a Sunday school for them. They cared for wounded soldiers at their church. The father also briefly served as a chaplain to the Confederate Army. Wilson’s father was one of the founders of the Southern Presbyterian Church in the United States (PCUS) after it split from the northern Presbyterians in 1861. Joseph R. Wilson served as the first permanent clerk of the southern church’s General Assembly, was Stated Clerk from 1865-1898 and was Moderator of the PCUS General Assembly in 1879. Wilson spent the majority of his childhood, up to age 14, in Augusta, Georgia, where his father was minister of the First Presbyterian Church. Wilson did not learn to read until he was about 12 years old. His difficulty reading may have indicated dyslexia or A.D.H.D., but as a teenager he taught himself shorthand to compensate and was able to achieve academically through determination and self-discipline. He studied at home under his father's guidance and took classes in a small school in Augusta.[2] During Reconstruction he lived in Columbia, South Carolina, the state capital, from 1870-1874, where his father was professor at the Columbia Theological Seminary.[3] In 1873 he spent a year at Davidson College in North Carolina, then transferred to Princeton as a freshman, graduating in 1879. Beginning in his second year, he read widely in political philosophy and history. He was active in the undergraduate discussion club, and organized a separate Liberal Debating Society.[4]
In 1879, Wilson attended law school at the University of Virginia for one year, but he never graduated. His frail health dictated withdrawal, and he went home to Wilmington, North Carolina, where he continued his studies. Wilson was also a member of the Phi Kappa Psi fraternity. In 1885, he married Ellen Louise Axson, the daughter of a minister from Rome, Georgia. They had three daughters: Margaret Woodrow Wilson (1886-1944), Jessie Wilson (1887-1933) and Eleanor R. Wilson (1889-1967).
Wilson’s mother was probably a hypochondriac, and Wilson seemed to think that he was often in poorer health than he really was. However, he did suffer from hypertension at a relatively early age and may have suffered his first stroke at age 39. He cycled regularly, including several cycling vacations in the Lake District in Britain. Unable to cycle around Washington, D.C. as President, Wilson took to playing golf, although he played with more enthusiasm than skill. During the winter the Secret Service would paint some golf balls black so Wilson could hit them around in the snow on the White House lawn.[5]
## Law practice
In January 1882, Wilson decided to start his first law practice in Atlanta. One of Wilson’s University of Virginia classmates, Edward Ireland Renick, invited Wilson to join his new law practice as partner, and Wilson joined him there in May 1882. He passed the Georgia bar: on October 19, 1882, he appeared in court before Judge George Hillyer to take his examination, which he passed with flying colors, and he began work on his thesis, Congressional Government in the United States. Competition was fierce in the city with 143 other lawyers, so with few cases to keep him occupied, Wilson quickly grew disillusioned. Moreover, Wilson had studied law in order to eventually enter politics, but he discovered that he could not continue his study of government and simultaneously continue the reading of law necessary to stay proficient. In April 1883, Wilson applied to the new Johns Hopkins University to study for a Ph.D. in history and political science, which he completed in 1886.[6] He remains the only U.S. president to have earned a doctoral degree. In July 1883, Wilson left his law practice to begin his academic studies.[7]
## Political writings and academic career
### Political writings
Wilson came of age in the decades after the American Civil War, when Congress was supreme— "the gist of all policy is decided by the legislature" —and corruption was rampant. Instead of focusing on individuals in explaining where American politics went wrong, Wilson focused on the American constitutional structure.[8]
Under the influence of Walter Bagehot's The English Constitution, Wilson saw the United States Constitution as pre-modern, cumbersome, and open to corruption. An admirer of Parliament (though he first visited London in 1919), Wilson favored a parliamentary system for the United States. Writing in the early 1880s:
"I ask you to put this question to yourselves, should we not draw the Executive and Legislature closer together? Should we not, on the one hand, give the individual leaders of opinion in Congress a better chance to have an intimate party in determining who should be president, and the president, on the other hand, a better chance to approve himself a statesman, and his advisers capable men of affairs, in the guidance of Congress?"[9]
Wilson started Congressional Government, his best-known political work, as an argument for a parliamentary system, but Wilson was impressed by Grover Cleveland, and Congressional Government emerged as a critical description of America's system, with frequent negative comparisons to Westminster. Wilson himself claimed, "I am pointing out facts—diagnosing, not prescribing remedies."[10]
Wilson believed that America's intricate system of checks and balances was the cause of the problems in American governance. He said that the divided power made it impossible for voters to see who was accountable for ill-doing. If government behaved badly, Wilson asked,
"...how is the schoolmaster, the nation, to know which boy needs the whipping? ... Power and strict accountability for its use are the essential constituents of good government.... It is, therefore, manifestly a radical defect in our federal system that it parcels out power and confuses responsibility as it does. The main purpose of the Convention of 1787 seems to have been to accomplish this grievous mistake. The 'literary theory' of checks and balances is simply a consistent account of what our Constitution makers tried to do; and those checks and balances have proved mischievous just to the extent which they have succeeded in establishing themselves... [the Framers] would be the first to admit that the only fruit of dividing power had been to make it irresponsible."[11]
The longest section of Congressional Government is on the United States House of Representatives, where Wilson pours out scorn for the committee system. Power, Wilson wrote, "is divided up, as it were, into forty-seven seigniories, in each of which a Standing Committee is the court baron and its chairman lord proprietor. These petty barons, some of them not a little powerful, but none of them within reach [of] the full powers of rule, may at will exercise an almost despotic sway within their own shires, and may sometimes threaten to convulse even the realm itself."[12] Wilson said that the committee system was fundamentally undemocratic, because committee chairs, who ruled by seniority, were responsible to no one except their constituents, even though they determined national policy.
In addition to its undemocratic nature, Wilson also believed that the committee system facilitated corruption:
"The voter, moreover, feels that his want of confidence in Congress is justified by what he hears of the power of corrupt lobbyists to turn legislation to their own uses. He hears of enormous subsidies begged and obtained... of appropriations made in the interest of dishonest contractors; he is not altogether unwarranted in the conclusion that these are evils inherent in the very nature of Congress; there can be no doubt that the power of the lobbyist consists in great part, if not altogether, in the facility afforded him by the Committee system."[13]
By the time Wilson finished Congressional Government, Grover Cleveland was President, and Wilson had his faith in the United States government restored. When William Jennings Bryan captured the Democratic nomination from Cleveland's supporters in 1896, however, Wilson refused to stand by the ticket. Instead, he cast his ballot for John M. Palmer, the presidential candidate of the National Democratic Party, or Gold Democrats, a short-lived party that supported a gold standard, low tariffs, and limited government.[14]
After experiencing the vigorous presidencies of William McKinley and Theodore Roosevelt, Wilson no longer entertained thoughts of parliamentary government at home. In his last scholarly work in 1908, Constitutional Government of the United States, Wilson said that the presidency "will be as big as and as influential as the man who occupies it". By the time of his presidency, Wilson merely hoped that Presidents could be party leaders in the same way prime ministers were. Wilson also hoped that the parties could be reorganized along ideological, not geographic, lines. "Eight words," Wilson wrote, "contain the sum of the present degradation of our political parties: No leaders, no principles; no principles, no parties."[15]
Wilson served on the faculties of Bryn Mawr College and Wesleyan University. At Wesleyan, he also coached the football team and founded the debate team - to this date, it is named the T. Woodrow Wilson debate team. He then joined the Princeton faculty as professor of jurisprudence and political economy in 1890. While there, he was one of the faculty members of the short-lived coordinate college, Evelyn College for Women. Additionally, Wilson became the first lecturer of Constitutional Law at New York Law School where he taught with Charles Evans Hughes.
Wilson delivered an oration at Princeton's sesquicentennial celebration (1896) entitled "Princeton in the Nation's Service." (This has become a frequently alluded-to motto of the University, later expanded to "Princeton in the Nation's Service and in the Service of All Nations."[16]) In this famous speech, he outlined his vision of the university in a democratic nation, calling on institutions of higher learning "to illuminate duty by every lesson that can be drawn out of the past".
[Image: Prospect House, located in the center of Princeton's campus, was Wilson's residence during his term as president of the university.]
The trustees promoted Professor Wilson to president of Princeton in 1902. He had bold plans. Although the school's endowment was barely $4 million, he sought $2 million for a preceptorial system of teaching, $1 million for a school of science, and nearly $3 million for new buildings and salary raises. As a long-term objective, Wilson sought $3 million for a graduate school and $2.5 million for schools of jurisprudence and electrical engineering, as well as a museum of natural history. He achieved little of that because he was not a strong fund raiser, but he did increase the faculty from 112 to 174 men, most of them personally selected as outstanding teachers. The curriculum guidelines he developed proved important progressive innovations in the field of higher education. To enhance the role of expertise, Wilson instituted academic departments and a system of core requirements where students met in groups of six with preceptors, followed by two years of concentration in a selected major. He tried to raise admission standards and to replace the "gentleman C" with serious study. Wilson aspired, as he told alumni, "to transform thoughtless boys performing tasks into thinking men."
In 1906-10, he attempted to curtail the influence of the elitist "social clubs" by moving the students into colleges. This was met with resistance from many alumni. Wilson felt that to compromise "would be to temporize with evil."[17] Even more damaging was his confrontation with Andrew Fleming West, Dean of the graduate school, and West's ally, former President Grover Cleveland, a trustee. Wilson wanted to integrate the proposed graduate building into the same area with the undergraduate colleges; West wanted them separated. The trustees rejected Wilson's plan for colleges in 1908, and then endorsed West's plans in 1909. The national press covered the confrontation as a battle of the elites (West) versus democracy (Wilson). During this time in his personal life, Wilson engaged in an extramarital affair with socialite Mary Peck.[18] Wilson, after considering resignation, decided to take up invitations to move into New Jersey state politics.[19]
## Governor of New Jersey
During the New Jersey election of 1910, the Democrats took control of the state house and Wilson was elected governor. The state senate, however, remained in Republican control by a slim margin. After taking office, Wilson set in place his reformist agenda, ignoring what party bosses told him he was to do. While governor, in a period spanning six months, Wilson established state primaries, which all but took the party bosses out of the presidential election process in the state. He also revamped the public utility commission and introduced workers' compensation.[20]
## Campaign for Presidency in 1912
Wilson made himself known at the Democratic Convention in 1912, again denouncing the party bosses by painting his opponent Champ Clark, the Speaker of the House, as a party bosses' man. This allowed him to come away with the party's nomination for President.[21] The Democratic National Committee met in Baltimore in 1912 to select Wilson as their candidate. He then chose the officers of the Democratic National Committee that would serve the campaign: Charles R. Crane (Taft's Ambassador to China), Vice-President of the Finance Committee; Rolla Wells, twice mayor of St. Louis (from 1901 to 1909) and later Governor of the Federal Reserve Bank at St. Louis, as Treasurer; and Henry Morgenthau, Sr., as President of the Finance Committee. His running mate was Gov. Thomas R. Marshall of Indiana.[22]
In the election Wilson ran against two major candidates, incumbent President William Howard Taft and former president Theodore Roosevelt, who broke with Taft and the Republican Party and created the Progressive Party.
Even radicals like John Reed and Max Eastman happily supported Wilson. Mother Jones wrote, "I am a Socialist, but I admire Wilson for the things he has done ... And when a man or woman does something for humanity I say go to him and shake him by the hand and say 'I'm for you.'"[23]
The election was bitterly contested. Vice President James S. Sherman died on October 30, 1912, less than a week before the election, leaving Taft without a running mate. And with the Republican Party divided, Wilson captured the presidency handily on November 5. Wilson took just 41.8% of the popular vote, but he won 435 electoral votes.
## Presidency 1913-1921
### First term
Wilson experienced early success by implementing his "New Freedom" pledges of antitrust modification, tariff revision, and reform in banking and currency matters.
Wilson's first wife Ellen died on August 6, 1914 of Bright's disease. In 1915, he met Edith Galt. They married later that year on December 18. Wilson arrived at the White House with severe digestive problems. He treated himself with a stomach pump.[24]
#### Federal Reserve 1913
The Federal Reserve Act is one of the more significant pieces of legislation in the history of the United States.[25] Wilson outmaneuvered bankers and enemies of banks, North and South, Democrats and Republicans, to secure passage of the Federal Reserve system in late 1913.[26] He took a plan that had been designed by conservative Republicans (led by Nelson W. Aldrich and banker Paul M. Warburg) and passed it. However, Wilson had to find a middle ground between those who supported the Aldrich Plan and those who opposed it, including the powerful agrarian wing of the party, led by William Jennings Bryan, which strenuously denounced banks and Wall Street. They wanted a government-owned central bank which could print paper money whenever Congress wanted.

Wilson's plan still allowed the large banks to have important influence, but he went beyond the Aldrich Plan and created a central board made up of persons appointed by the President and approved by Congress who would outnumber the board members who were bankers. Moreover, Wilson convinced Bryan's supporters that because Federal Reserve notes were obligations of the government, the plan fit their demands.

Wilson's plan also decentralized the Federal Reserve system into 12 districts. This was designed to weaken the influence of the powerful New York banks, a key demand of Bryan's allies in the South and West. This decentralization was a key factor in winning the support of Congressman Carter Glass (D-VA), although he objected to making paper currency a federal obligation. Glass was one of the leaders of the currency reformers in the U.S. House, and without his support any plan was doomed to fail. The final plan passed in December 1913, despite opposition by bankers, who felt it gave too much control to Washington, and by some reformers, who felt it allowed bankers to maintain too much power.
Wilson named Warburg and other prominent bankers to direct the new system. Despite the reformers' hopes, the New York branch dominated the Fed and thus power remained in Wall Street. The new system began operations in 1915 and played a major role in financing the Allied and American war efforts.
#### Wilsonian economic views
Wilson's early views on international affairs and trade were stated in his Columbia University lectures of April 1907 where he said: "Since trade ignores national boundaries and the manufacturer insists on having the world as a market, the flag of his nation must follow him, and the doors of the nations which are closed must be battered down…Concessions obtained by financiers must be safeguarded by ministers of state, even if the sovereignty of unwilling nations be outraged in the process. Colonies must be obtained or planted, in order that no useful corner of the world may be overlooked or left unused". — From Lecture at Columbia University (April 1907)
(cited in William Appleman Williams's book, "The Tragedy of American Diplomacy", p. 72).
#### Other economic policies
In 1913, the Underwood tariff lowered tariff rates. The revenue thereby lost was replaced by a new federal income tax (authorized by the 16th Amendment, which had been sponsored by the Republicans). The Seamen's Act of 1915 improved working conditions for merchant sailors. In response to the RMS Titanic disaster, it also required all ships to be retrofitted with lifeboats.
A series of programs were targeted at farmers. The Smith-Lever Act of 1914 created the modern system of agricultural extension agents sponsored by the state agricultural colleges. The agents taught new techniques to farmers. The Federal Farm Loan Board, created in 1916, issued low-cost, long-term mortgages to farmers.
Child labor was curtailed by the Keating-Owen act of 1916, but the U.S. Supreme Court declared it unconstitutional in 1918. Additional child labor bills would not be enacted until the 1930s.
The railroad brotherhoods threatened in summer 1916 to shut down or close the national transportation system. Wilson tried to bring labor and management together, but when management refused he had Congress pass the "Adamson Act" in September 1916, which avoided the strike by imposing an 8-hour work day in the industry (at the same pay as before). It helped Wilson gain union support for his reelection; the act was approved by the Supreme Court.
Wilson uses tariff, currency and anti-trust laws to prime the pump and get the economy working in a 1913 political cartoon
#### Antitrust
Wilson broke with the "big-lawsuit" tradition of his predecessors Taft and Roosevelt as "Trustbusters", finding a new approach to encouraging competition through the Federal Trade Commission, which stopped "unfair" trade practices. In addition, he pushed through Congress the Clayton Antitrust Act making certain business practices illegal (such as price discrimination, agreements forbidding retailers from handling other companies’ products, and directorates and agreements to control other companies). The power of this legislation was greater than previous anti-trust laws, because individual officers of corporations could be held responsible if their companies violated the laws. More importantly, the new laws set out clear guidelines that corporations could follow, a dramatic improvement over the previous uncertainties. This law was considered the "Magna Carta" of labor by Samuel Gompers because it ended union liability antitrust laws. In 1916, under threat of a national railroad strike, he approved legislation that increased wages and cut working hours of railroad employees; there was no strike.
#### War policy—World War I
Main article: World War I
Wilson spent 1914 through the beginning of 1917 trying to keep America out of the war in Europe. He offered to be a mediator, but neither the Allies nor the Central Powers took his requests seriously. Republicans, led by Theodore Roosevelt, strongly criticized Wilson’s refusal to build up the U.S. Army in anticipation of the threat of war. Wilson won the support of the U.S. peace element by arguing that an army buildup would provoke war. He vigorously protested Germany’s use of submarines as illegal, causing his Secretary of State William Jennings Bryan to resign in protest in 1915.
While German submarines were sinking Allied ships, Britain had declared a blockade of Germany, preventing neutral shipping from carrying "contraband" goods to Germany. Wilson protested this violation of neutral rights by London. However, his protests to the British were not viewed as being as forceful as those he directed toward Germany. This reflected the fact that while Britain was violating international law toward neutral shipping by mining international harbors and killing sailors (including Americans), its violations were not direct attacks on American or other neutral shipping, whereas German submarine warfare directly targeted any shipping that benefited Germany's enemies, neutral or not, violating international law and producing visible American deaths.
### Election of 1916
Renominated in 1916, Wilson's major campaign slogan was "He kept us out of the war" referring to his administration's avoiding open conflict with Germany or Mexico while maintaining a firm national policy. Wilson, however, never promised to keep out of war regardless of provocation. In his acceptance speech on September 2, 1916, Wilson pointedly warned Germany that submarine warfare that took American lives would not be tolerated:
"The nation that violates these essential rights must expect to be checked and called to account by direct challenge and resistance. It at once makes the quarrel in part our own."
Wilson narrowly won the election, defeating Republican candidate Charles Evans Hughes. As governor of New York from 1907-1910, Hughes had a progressive record strikingly similar to Wilson's as governor of New Jersey. Theodore Roosevelt would comment that the only thing different between Hughes and Wilson was a shave. However, Hughes had to try to hold together a coalition of conservative Taft supporters and progressive Roosevelt partisans and so his campaign never seemed to take a definite form. Wilson ran on his record and ignored Hughes, reserving his attacks for Roosevelt. When asked why he did not attack Hughes directly, Wilson told a friend to “Never murder a man who is committing suicide.”
The final result was exceptionally close and in doubt for several days. Fearing that he would become a lame-duck president during the uncertainties of the war in Europe, Wilson devised a contingency plan: if Hughes were elected, Wilson would name him Secretary of State and then resign along with the vice president, enabling Hughes to become president immediately. The vote came down to several close states. Wilson won California by 3,773 votes out of almost a million cast and New Hampshire by 54 votes. Hughes won Minnesota by 393 votes out of over 358,000. In the final count, Wilson had 277 electoral votes to Hughes's 254. Wilson was able to win reelection in 1916 by picking up many votes that had gone to Teddy Roosevelt or Eugene V. Debs in 1912.
### Second term
Wilson's second term focused almost exclusively on World War I, which for the U.S. formally began on April 6, 1917, a little over a month after the term began. After Wilson, the next U.S. president to win both of his terms with under 50% of the popular vote was fellow Democrat Bill Clinton, in the 1992 and 1996 elections.
#### Decision for War, 1917
When Germany resumed unrestricted submarine warfare in early 1917 and made a clumsy attempt to enlist Mexico as an ally (see Zimmermann Telegram), Wilson took America into World War I as a war to make "the world safe for democracy." He did not sign a formal alliance with the United Kingdom or France but operated as an "Associated" power. He raised a massive army through conscription and gave command to General John J. Pershing, allowing Pershing a free hand as to tactics, strategy and even diplomacy.
President Wilson before Congress, announcing the break in official relations with Germany. February 3, 1917.
Woodrow Wilson had decided by then that the war had become a real threat to humanity. Unless the U.S. threw its weight into the war, as he stated in his declaration of war speech, Western civilization itself could be destroyed. His statement announcing a "war to end all wars" meant that he wanted to build a basis for peace that would prevent future catastrophic wars and needless death and destruction. This provided the basis of Wilson's Fourteen Points, which were intended to resolve territorial disputes, ensure free trade and commerce, and establish a peacemaking organization, which later emerged as the League of Nations.
To stop defeatism at home, Wilson pushed the Espionage Act of 1917 and the Sedition Act of 1918 through Congress to suppress anti-British, pro-German, or anti-war opinions. He welcomed socialists who supported the war, such as Walter Lippmann, but would not tolerate those who tried to impede the war or, worse, assassinate government officials, and pushed for deportation of foreign-born radicals.[27] Over 170,000 US citizens were arrested during this period, in some cases for things they said about the president in their own homes. Citing the Espionage Act, the U.S. Post Office refused to carry any written materials that could be deemed critical of the U.S. war effort. Some sixty newspapers were deprived of their second-class mailing rights.
His wartime policies were strongly pro-labor, though again, he had no love for radical unions like the Industrial Workers of the World. The American Federation of Labor and other 'moderate' unions saw enormous growth in membership and wages during Wilson's administration. There was no rationing, so consumer prices soared. As income taxes increased, white-collar workers suffered. Appeals to buy war bonds were highly successful, however. The bonds had the effect of deferring the cost of the war to the more affluent 1920s.
Wilson set up the first western propaganda office, the United States Committee on Public Information, headed by George Creel (thus its popular name, Creel Commission), which filled the country with patriotic anti-German appeals and conducted various forms of censorship.
#### American Protective League
The American Protective League, a quasi-private organization with 250,000 members in 600 cities, was sanctioned by the Wilson administration. These men carried Government Issue badges and freely conducted warrantless searches and interrogations.[28] This organization was empowered by the U.S. Justice Department to spy on Americans for anti-government or anti-war behavior. Acting as a national police force, the APL checked up on people who failed to buy Liberty Bonds and spoke out against the government's policies.[29]
#### The Fourteen Points
Main article: Fourteen Points
President Woodrow Wilson articulated what became known as the Fourteen Points before Congress on January 8, 1918. The Points were the only war aims clearly expressed by any belligerent nation and thus became the basis for the Treaty of Versailles following World War I. The speech was highly idealistic, translating Wilson's progressive domestic policy of democracy, self-determination, open agreements, and free trade into the international realm. It also made several suggestions for specific disputes in Europe on the recommendation of Wilson's foreign policy advisor, Colonel Edward M. House, and his team of 150 advisors known as “The Inquiry.” The points were:
1. Abolition of secret treaties
2. Freedom of the seas
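3. Removal of economic barriers to trade between nations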
4. Disarmament
5. Adjustment of colonial claims (decolonization and national self-determination)
6. Russia to be assured independent development and international withdrawal from occupied Russian territory
7. Restoration of Belgium to antebellum national status
8. Alsace-Lorraine returned to France from Germany
9. Italian borders redrawn on lines of nationality
10. Autonomous development of Austria-Hungary as a nation, as the Austro-Hungarian Empire dissolved
11. Romania, Serbia, Montenegro, and other Balkan states to be granted integrity, have their territories deoccupied, and Serbia to be given access to the Adriatic Sea
12. Sovereignty for the Turkish people of the Ottoman Empire as the Empire dissolved, autonomous development for other nationalities within the former Empire
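13. Establishment of an independent Poland with access to the sea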
14. General association of the nations – a multilateral international association of nations to enforce the peace (League of Nations)
The speech was controversial in America, and even more so with the Allies. France wanted high reparations from Germany, since French agriculture, industry, and lives had been devastated by the war, and Britain, as the great naval power, did not want freedom of the seas. Wilson compromised with Clemenceau, Lloyd George, and many other European leaders during the Paris peace talks to ensure that the fourteenth point, the League of Nations, would be established. In the end, Wilson's own Congress did not accept the League, and only four of the original Fourteen Points were implemented fully in Europe.
#### Other foreign affairs
Main article: Polar Bear Expedition
Between 1914 and 1918, the United States intervened in Latin America, particularly in Mexico, Haiti, Cuba, and Panama. The U.S. maintained troops in Nicaragua throughout his administration and used them to select the president of Nicaragua and then to force Nicaragua to pass the Bryan-Chamorro Treaty. American troops in Haiti forced the Haitian legislature to choose the candidate Wilson selected as Haitian president. American troops occupied Haiti between 1915 and 1934.
After Russia left the war in 1917 following the Bolshevik Revolution, the Allies sent troops, presumably to prevent a German or Bolshevik takeover of Allied-provided weapons, munitions, and other supplies, which had previously been shipped as aid to the Tsarist government. Wilson sent armed forces to assist the withdrawal of Czech and Slovak prisoners along the Trans-Siberian Railway, hold key port cities at Archangel and Vladivostok, and safeguard supplies sent to the Tsarist forces. Though not sent to engage the Bolsheviks, the U.S. forces had several armed conflicts against Russian forces. Wilson withdrew the soldiers on April 1, 1920, though some remained as late as 1922. As Davis and Trani conclude, "Wilson, Lansing, and Colby helped lay the foundations for the later Cold War and policy of containment. There was no military confrontation, armed standoff, or arms race. Yet, certain basics were there: suspicion, mutual misunderstandings, dislike, fear, ideological hostility, and diplomatic isolation....Each side was driven by ideology, by capitalism versus communism. Each country sought to reconstruct the world. When the world resisted, pressure could be used."[30]
#### Versailles 1919
Wilson Returning From the Versailles Peace Conference 1919.
After World War I, Wilson participated in negotiations with the stated aim of assuring statehood for formerly oppressed nations and an equitable peace. On January 8, 1918, Wilson made his famous Fourteen Points address, introducing the idea of a League of Nations, an organization with a stated goal of helping to preserve territorial integrity and political independence among large and small nations alike.
Wilson intended the Fourteen Points as a means toward ending the war and achieving an equitable peace for all the nations. He spent six months at Paris for the 1919 Paris Peace Conference (making him the first U.S. president to travel to Europe while in office). He worked tirelessly to promote his plan. The charter of the proposed League of Nations was incorporated into the conference's Treaty of Versailles.
For his peacemaking efforts, Wilson was awarded the 1919 Nobel Peace Prize. However, Wilson failed to win Senate support for ratification and the United States never joined the League. Republicans under Henry Cabot Lodge controlled the Senate after the 1918 elections, but Wilson refused to give them a voice at Paris and refused to agree to Lodge's proposed changes. The key point of disagreement was whether the League would diminish the power of Congress to declare war. Historians generally have come to regard Wilson's failure to win U.S. entry into the League as perhaps the biggest mistake of his administration, and even as one of the largest failures of any American presidency.[31]
#### Post war: 1919-20
Wilson had ignored the problems of demobilization after the war, and the process was chaotic and violent. Four million soldiers were sent home with little planning, little money, and few benefits. A wartime bubble in prices of farmland burst, leaving many farmers bankrupt or deeply in debt after they purchased new land. In 1919, major strikes in steel and meatpacking broke out.[32] Serious race riots hit Chicago and other cities.
After a series of bombings by radical anarchist groups in New York and elsewhere, Wilson directed Attorney General A. Mitchell Palmer to put a stop to the violence. Palmer then ordered the Palmer Raids, with the aim of collecting evidence on violent radical groups, to deport foreign-born agitators, and jail domestic ones.[33]
Wilson broke with many of his closest political friends and allies in 1918-20, including Colonel House. Historians speculate that a series of strokes may have affected his personality. He desired a third term, but his Democratic party was in turmoil, with German voters outraged at their wartime harassment, and Irish voters angry at his failure to support Irish independence.
#### Support of Zionism
Wilson was sympathetic to the plight of Jews, especially in Poland and in France. As President, Wilson repeatedly stated in 1919 that U.S. policy was to "acquiesce" in the Balfour Declaration but not officially support Zionism.[34] After he left office Wilson wrote a letter of strong support to the idea of a Jewish state in Palestine and objected to territorial concessions regarding its borders.[35]
#### Women's suffrage
Until Wilson announced his support for suffrage, a group of women calling themselves the Silent Sentinels protested in front of the White House, holding banners such as "Mr. President, what will you do for woman suffrage?" In January 1918, after years of lobbying and public demonstrations, Wilson finally announced his support of the 19th Amendment, guaranteeing women the right to vote. The Amendment passed the House but failed in the Senate. Finally, on June 4, 1919, the Senate passed the amendment.
#### Incapacity
The cause of his incapacitation was the physical strain of the demanding public speaking tour he undertook to build support among the American people for ratification of the Covenant of the League. After one of his final speeches promoting the League of Nations, in Pueblo, Colorado, on September 25, 1919, he collapsed. On October 2, 1919, Wilson suffered a serious stroke that almost totally incapacitated him, leaving him paralyzed on his left side and blind in his left eye. For at least a few months he was confined to a wheelchair; afterwards he could walk only with the assistance of a cane. The full extent of his disability was kept from the public until after his death on February 3, 1924.
Wilson was purposely, with few exceptions, kept out of the presence of Vice President Thomas R. Marshall, his cabinet or Congressional visitors to the White House for the remainder of his presidential term. His first wife, Ellen, had died in 1914, so his second wife, Edith, served as his steward, selecting issues for his attention and delegating other issues to his cabinet heads. This was, as of 2008, the most serious case of presidential disability in American history and was later cited as a key example why ratification of the 25th Amendment was seen as important.
### Significant presidential acts
Wilson's chief of staff ("Secretary") was Joseph Patrick Tumulty 1913-1921, but he was largely upstaged after 1916 when Wilson's second wife, Edith Bolling Wilson, assumed full control of Wilson's schedule. An important foreign policy advisor and confidant was "Colonel" Edward M. House.
Woodrow Wilson and his cabinet in the Cabinet Room
| Office | Name | Term |
| --- | --- | --- |
| President | Woodrow Wilson | 1913–1921 |
| Vice President | Thomas R. Marshall | 1913–1921 |
| Secretary of State | William J. Bryan | 1913–1915 |
| | Robert Lansing | 1915–1920 |
| | Bainbridge Colby | 1920–1921 |
| Secretary of the Treasury | William G. McAdoo | 1913–1918 |
| | Carter Glass | 1918–1920 |
| | David F. Houston | 1920–1921 |
| Secretary of War | Lindley M. Garrison | 1913–1916 |
| | Newton D. Baker | 1916–1921 |
| Attorney General | James C. McReynolds | 1913–1914 |
| | Thomas W. Gregory | 1914–1919 |
| | A. Mitchell Palmer | 1919–1921 |
| Postmaster General | Albert S. Burleson | 1913–1921 |
| Secretary of the Navy | Josephus Daniels | 1913–1921 |
| Secretary of the Interior | Franklin K. Lane | 1913–1920 |
| | John B. Payne | 1920–1921 |
| Secretary of Agriculture | David F. Houston | 1913–1920 |
| | Edwin T. Meredith | 1920–1921 |
| Secretary of Commerce | William C. Redfield | 1913–1919 |
| | Joshua W. Alexander | 1919–1921 |
| Secretary of Labor | William B. Wilson | 1913–1921 |
### Supreme Court appointments
Wilson appointed the following Justices to the Supreme Court of the United States:
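• James Clark McReynolds (1914)
• Louis D. Brandeis (1916)
• John Hessin Clarke (1916)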
## Wilsonian Idealism
The official White House portrait of President Woodrow Wilson
Wilson was a remarkably effective writer and thinker. He composed speeches and other writings with two fingers on a little Hammond typewriter.[36] Wilson's diplomatic policies had a profound influence on shaping the world. Diplomatic historian Walter Russell Mead has explained:
"Wilson's principles survived the eclipse of the Versailles system and that they still guide European politics today: self-determination, democratic government, collective security, international law, and a league of nations. Wilson may not have gotten everything he wanted at Versailles, and his treaty was never ratified by the Senate, but his vision and his diplomacy, for better or worse, set the tone for the twentieth century. France, Germany, Italy, and Britain may have sneered at Wilson, but every one of these powers today conducts its European policy along Wilsonian lines. What was once dismissed as visionary is now accepted as fundamental. This was no mean achievement, and no European statesman of the twentieth century has had as lasting, as benign, or as widespread an influence."[37]
American foreign relations since 1914 have rested on Wilsonian idealism, argues historian David Kennedy, even if adjusted somewhat by the "realism" represented by Franklin Delano Roosevelt and Henry Kissinger. Kennedy argues that every president since Wilson has, "embraced the core precepts of Wilsonianism. Nixon himself hung Wilson's portrait in the White House Cabinet Room. Wilson's ideas continue to dominate American foreign policy in the twenty-first century. In the aftermath of 9/11 they have, if anything, taken on even greater vitality."[38]
## Wilson and race
Quotation from Woodrow Wilson's History of the American People as reproduced in the film The Birth of a Nation.
While president of Princeton University, Wilson discouraged blacks from even applying for admission.[39] Princeton would not admit its first black student until the 1940s.
Wilson allowed many of his cabinet officials to establish official segregation in most federal government offices, in some departments for the first time since 1863. "His administration imposed full racial segregation in Washington and hounded from office considerable numbers of black federal employees."[40] Wilson and his cabinet members fired many black Republican office holders, but also appointed a few black Democrats. W. E. B. Du Bois, a leader of the NAACP, campaigned for Wilson and in 1918 was offered an Army commission in charge of dealing with race relations. (DuBois accepted but failed his Army physical and did not serve.)[41] When a delegation of blacks protested his discriminatory actions, Wilson told them that "segregation is not a humiliation but a benefit, and ought to be so regarded by you gentlemen." In 1914, he told the New York Times that "If the colored people made a mistake in voting for me, they ought to correct it."
Wilson was attacked by African-Americans for his actions, but he was also attacked by southern hard line racists, such as Georgian Thomas E. Watson, for not going far enough in restricting black employment in the federal government. The segregation introduced into the federal workforce by the Wilson administration was kept in place by the succeeding presidents and was not finally rescinded until the Truman Administration.
Woodrow Wilson's History of the American People explained the Ku Klux Klan of the late 1860s as the natural outgrowth of Reconstruction, a lawless reaction to a lawless period. Wilson noted that the Klan “began to attempt by intimidation what they were not allowed to attempt by the ballot or by any ordered course of public action.”[42]
Wilson's words were repeatedly quoted in the film The Birth of a Nation, which has come under fire for racism. Thomas Dixon, author of the novel The Clansman upon which the film is based, was one of Wilson's graduate school classmates at Johns Hopkins in 1883-1884. Dixon arranged a special White House preview (the first time a film was shown in the White House) without telling Wilson what the film was about. There is debate about whether Wilson made the statement "It is like writing history with lightning; my only regret is that it is all so terribly true," or whether it was invented by a film publicist.[43] Others argue Wilson felt he had been tricked by Dixon and in public statements claimed he did not like the film; Wilson blocked its showing during the war.[44] In a 1923 letter to Senator Morris Sheppard of Texas, Wilson noted of the reborn Klan, "...no more obnoxious or harmful organization has ever shown itself in our affairs." Although Wilson had a volatile relationship with black Americans, he was a friend of the Ethiopian Emperor Haile Selassie, a black African monarch. A sword, a gift from Selassie, can still be seen in Wilson's Washington, D.C. home.[45]
### White ethnics
Wilson had some harsh words to say about immigrants in his history books. However, after he entered politics in 1910, Wilson worked to integrate new immigrants into the Democratic party, into the army, and into American life. For example, the war bond campaigns were set up so that ethnic groups could boast how much money they gave. He demanded in return during the war that they repudiate any loyalty to the enemy.
Irish Americans were powerful in the Democratic party and opposed going to war alongside their enemy Britain, especially after the violent suppression of the Easter Rebellion of 1916. Wilson won them over in 1917 by promising to ask Britain to give Ireland its independence. At Versailles, however, he reneged and the Irish-American community vehemently denounced him. Wilson, in turn, blamed the Irish Americans and German Americans for the lack of popular support for the League of Nations, saying,
"There is an organized propaganda against the League of Nations and against the treaty proceeding from exactly the same sources that the organized propaganda proceeded from which threatened this country here and there with disloyalty, and I want to say—I cannot say too often—any man who carries a hyphen about with him carries a dagger that he is ready to plunge into the vitals of this Republic whenever he gets ready."[46]
## Death
In 1921, Wilson and his wife retired from the White House to a home in the Embassy Row section of Washington, D.C. Wilson continued going for daily drives and attended Keith's vaudeville theater on Saturday nights.
Wilson died in his S Street home on February 3, 1924. Because his plan for the League of Nations ultimately failed, he died feeling that he had lied to the American people and that his motives for joining the war had been in vain. He was buried in Washington National Cathedral.
Mrs. Wilson stayed in the home another 37 years, dying on December 28, 1961. Mrs. Wilson left the home to the National Trust for Historic Preservation to be made into a museum honoring her husband. Woodrow Wilson House opened as a museum in 1964.
## Miscellany
The final resting place of Woodrow Wilson at the Washington National Cathedral
• Wilson was an early automobile enthusiast, and he took daily rides while he was President. His favorite car was a 1919 Pierce-Arrow, in which he preferred to ride with the top down. His enjoyment of motoring made him an advocate of funding for public highways.[47]
• Wilson was an avid baseball fan. In 1916 he became the first sitting president to attend a World Series game. Wilson had been a center fielder during his Davidson College days. When he transferred to Princeton he was unable to make the varsity and so became the assistant manager of the team. He was the first President officially to throw out a first ball at a World Series.[48]
• His earliest memory, from age 3, was of hearing that Abraham Lincoln had been elected and that a war was coming.
• Wilson would forever recall standing for a moment at Robert E. Lee's side and looking up into his face.
• Wilson (born in Virginia and raised in Georgia) was the first Southerner to be elected since 1848 (Zachary Taylor) and the first Southerner to take office since Andrew Johnson in 1865.
• Wilson was also the first Democrat elected to the presidency since Grover Cleveland in 1892. The next Democrat elected was Franklin D. Roosevelt in 1932.
• Wilson was a member of the Phi Kappa Psi fraternity.
• Wilson appeared on the $100,000 bill. The bill, which is now out of print but is still technically legal tender, was used only to transfer money between Federal Reserve banks.[49][50]
• His carved initials are still visible on the underside of a table in the History Department at Johns Hopkins University.
• Wilson was one of only two Presidents (Theodore Roosevelt was the first) to become president of the American Historical Association.
• Wilson was president of the American Political Science Association in 1910.
• Wilson was the subject of the 1944 biographical film Wilson, directed by Henry King and starring Alexander Knox as Wilson. The picture was a commercial failure, despite receiving ten Oscar nominations and winning five.
• In Harry Turtledove's "Great War" trilogy of alternate history novels, Wilson is elected 9th President of the Confederate States of America on the Whig ticket in 1910.
Wilson's Pierce-Arrow, which resides in his hometown of Staunton, Virginia.
Wilson on the $100,000 gold certificate
• The Italian steam locomotive class FS 735, designed and built by ALCO and Montreal Locomotive Works for the Ferrovie dello Stato while Italy was fighting World War I, was nicknamed "Wilson" after T. W. Wilson, then President of the United States.
• The book Stardust and Shadows (2000, Toronto: Dundurn Press) by Charles Foster details an alleged relationship between silent-era motion picture actress Florence La Badie and Wilson.
• When President Wilson came to Europe to settle the peace terms, he visited Pope Benedict XV in Rome, making him the first American president to visit the Pope while in office.
• Wilson was the only presidential candidate to defeat two former presidents in a single election (Roosevelt and Taft).
## References
### Notes
1. ^ Expert Report Of Eric Foner
3. ^ Walworth ch 1
4. ^ Link, Wilson I:5-6; Wilson Papers I: 130, 245, 314
5. ^ for details on Wilson's health see Edwin A. Weinstein, Woodrow Wilson: A Medical and Psychological Biography (Princeton 1981)
7. ^ Mulder, John H. Woodrow Wilson: The Years of Preparation. (Princeton, 1978) 71-72.
8. ^ Congressional Government, 180
9. ^ The Politics of Woodrow Wilson, 41–48
10. ^ Congressional Government, 205
11. ^ Congressional Government, 186–7
12. ^ Congressional Government, 76
13. ^ Congressional Government, 132
14. ^ David T. Beito and Linda Royster Beito, "Gold Democrats and the Decline of Classical Liberalism, 1896-1900,"Independent Review 4 (Spring 2000), 555-75.
15. ^ Frozen Republic, 145
16. ^ "Beyond FitzRandolph Gates," Princeton Weekly Bulletin June 22, 1998.
17. ^ Walworth 1:109
18. ^ PBS - American Experience: Woodrow Wilson | Wilson- A Portrait
19. ^ Walworth v 1 ch 6, 7, 8
20. ^ Shenkman, Richard. p. 275. Presidential Ambition. New York, New York. Harper Collins Publishing, 1999. First Edition. 0-06-018373-X
21. ^ Shenkman, Richard. p. 275. Presidential Ambition. New York, New York. Harper Collins Publishing, 1999. First Edition. 0-06-018373-X
22. ^ New York Times, Aug 7, 1912
23. ^ Woodrow Wilson: 28th President of the United States - Hear The Issues - Political Articles and Commentary
24. ^ Bullitt knew Wilson personally, and was with him at the Paris Peace Conference, 1919.
25. ^ Arthur S. Link, "Woodrow Wilson" in Henry F. Graff ed., The Presidents: A Reference History (2002) p 370
27. ^ Avrich, Paul, Sacco and Vanzetti: The Anarchist Background, Princeton University Press, 1991
28. ^ You want a more 'progressive' America? Careful what you wish for. | csmonitor.com
29. ^ http://www.rit.edu/~cma8660/mirror/www.johntaylorgatto.com/chapters/11g.htm
30. ^ Donald E. Davis and Eugene P. Trani, The First Cold War: The Legacy of Woodrow Wilson in U.S.-Soviet Relations. (2002) p. 202.
31. ^ CTV.ca | U.S. historians pick top 10 presidential errors
32. ^ Leonard Williams Levy and Louis Fisher, Encyclopedia of the American Presidency, Simon and Schuster: 1994, p. 494. ISBN 0132759837
33. ^ The successful Communist takeover of Russia in 1917 was also a background factor: many anarchists believed that the worker's revolution that had taken place there would quickly spread across Europe and the United States. Paul Avrich, Sacco and Vanzetti: The Anarchist Background, Princeton University Press, 1991
34. ^ Walworth (1986) 473-83, esp. p. 481; Melvin I. Urofsky, American Zionism from Herzl to the Holocaust, (1995) ch. 6; Frank W. Brecher, Reluctant Ally: United States Foreign Policy toward the Jews from Wilson to Roosevelt. (1991) ch 1-4.
35. ^ In 1923 he wrote "The Zionist cause depends on rational northern and eastern boundaries for a self-maintaining, economic development of the country. This means, on the north, Palestine must include the Litani River and the watersheds of the Hermon, and on the east it must include the plains of the Jaulon and the Hauran. Narrower than this is a mutilation...I need not remind you that neither in this country nor in Paris has there been any opposition to the Zionist program, and to its realization the boundaries I have named are indispensable". Quoted in Palestine: The Original Sin , Meir Abelson [1]
36. ^ Phyllis Lee Levin. Edith and Woodrow: The Wilson White House. Simon and Schuster. New York. 2001, p139
37. ^ Walter Russell Mead, Special Providence, (2001) at [2]
38. ^ David M. Kennedy, "What 'W' Owes to 'WW': President Bush May Not Even Know It, but He Can Trace His View of the World to Woodrow Wilson, Who Defined a Diplomatic Destiny for America That We Can't Escape." The Atlantic Monthly Vol: 295. Issue: 2. (March 2005) pp 36+.
39. ^ Arthur Link, Wilson:The Road to the White House (Princeton University Press, 1947) 502
40. ^ Expert Report Of Eric Foner
41. ^ Ellis, Mark. "'Closing Ranks' and 'Seeking Honors': W. E. B. du Bois in World War I" Journal of American History 1992 79(1): 96-124. ISSN 0021-8723 Fulltext in Jstor
42. ^ Woodrow Wilson, A History of the American People (1931) V:59.
43. ^ "Family Life", Essays on Woodrow Wilson and His Administration, American President: An Online Reference Resource, Miller Center of Public Affairs, University of Virginia [3]
44. ^ Link vol 2 pp 252-54.
45. ^ Link, Papers of Woodrow Wilson 68:298
46. ^ American Rhetoric, "Final Address in Support of the League of Nations", Woodrow Wilson, delivered 25 Sept 1919 in Pueblo, CO. John B. Duff, "German-Americans and the Peace, 1918-1920" American Jewish Historical Quarterly 1970 59(4): 424-459. and Duff, "The Versailles Treaty and the Irish-Americans" Journal of American History 1968 55(3): 582-598. ISSN 0021-8723
47. ^ Richard F. Weingroff, President Woodrow Wilson — Motorist Extraordinaire, Federal Highway Administration
48. ^ CNNSI.com - Statitudes - Statitudes: World Series, By the Numbers - Thursday October 17, 2002 03:33 AM
49. ^ Ask Yahoo! November 10, 2005
50. ^ The \$100,000 bill Federal Reserve Bank of San Francisco
### Bibliography
• 'Wilson and the Federal Reserve'
• Ambrosius, Lloyd E., “Woodrow Wilson and George W. Bush: Historical Comparisons of Ends and Means in Their Foreign Policies,” Diplomatic History, 30 (June 2006), 509–43.
• Bailey, Thomas A. Wilson and the Peacemakers: Combining Woodrow Wilson and the Lost Peace and Woodrow Wilson and the Great Betrayal (1947)
• Bennett, David J., He Almost Changed the World: The Life and Times of Thomas Riley Marshall (2007)
• Brands, H. W. Woodrow Wilson 1913-1921 (2003)
• Clements, Kendrick A. Woodrow Wilson: World Statesman (1999)
• Clements, Kendrick A. The Presidency of Woodrow Wilson (1992)
• Clements, Kendrick A. "Woodrow Wilson and World War I," Presidential Studies Quarterly 34:1 (2004). pp 62+.
• Davis, Donald E. and Eugene P. Trani. The First Cold War: The Legacy of Woodrow Wilson in U.S.-Soviet Relations (2002) online
• Greene, Theodore P. Ed. Wilson at Versailles (1957)
• Hofstadter, Richard. "Woodrow Wilson: The Conservative as Liberal" in The American Political Tradition (1948), ch. 10.
• Knock, Thomas J. To End All Wars: Woodrow Wilson and the Quest for a New World Order (1995)
• Levin, N. Gordon, Jr. Woodrow Wilson and World Politics: America's Response to War and Revolution (1968)
• Link, Arthur S. "Woodrow Wilson" in Henry F. Graff ed., The Presidents: A Reference History (2002) pp 365-388
• Link, Arthur Stanley. Woodrow Wilson and the Progressive Era, 1910-1917 (1972) standard political history of the era
• Link, Arthur Stanley. Wilson: The Road to the White House (1947), first volume of standard biography (to 1917); Wilson: The New Freedom (1956); Wilson: The Struggle for Neutrality: 1914-1915 (1960); Wilson: Confusions and Crises: 1915-1916 (1964); Wilson: Campaigns for Progressivism and Peace: 1916-1917 (1965), the last volume of standard biography
• Link, Arthur S.; Wilson the Diplomatist: A Look at His Major Foreign Policies (1957)
• Link, Arthur S.; Woodrow Wilson and a Revolutionary World, 1913-1921 (1982)
• Livermore, Seward W. Woodrow Wilson and the War Congress, 1916-1918 (1966)
• Malin, James C. The United States after the World War 1930. online
• May, Ernest R. The World War and American Isolation, 1914-1917 (1959)
• Saunders, Robert M. In Search of Woodrow Wilson: Beliefs and Behavior (1998)
• Trani, Eugene P. “Woodrow Wilson and the Decision to Intervene in Russia: A Reconsideration.” Journal of Modern History (1976). 48:440—61. in JSTOR
• Walworth, Arthur. Woodrow Wilson 2 Vol. (1958), Pulitzer prize winning biography.
• Walworth, Arthur. Wilson and His Peacemakers: American Diplomacy at the Paris Peace Conference, 1919 (W. W. Norton, 1986)
# Let $f(x)=\begin{cases}1, & x\le -1\\ |x|, & -1<x<1\\ 0, & x\ge 1\end{cases}$. Then, f is (a) continuous at x=-1 (b) differentiable at x=-1 (c) everywhere continuous (d) everywhere differentiable
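A short check of the one-sided limits and derivatives settles the choice. At $x=-1$: $f(-1)=1$, $\lim_{x\to-1^-}f(x)=1$, and $\lim_{x\to-1^+}|x|=1$, so $f$ is continuous there; but the left-hand derivative is $0$ (constant piece) while the right-hand derivative is $-1$ (the slope of $|x|$ for $x<0$), so $f$ is not differentiable at $x=-1$. At $x=1$: $\lim_{x\to1^-}|x|=1\ne 0=f(1)$, so $f$ is discontinuous at $x=1$, ruling out (c) and (d). The correct option is (a).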
# Help with arrow formation in TikZ flowchart [duplicate]
I'm trying to create a simple flowchart (it's actually an archaeological diagram called a Harris Matrix) using TikZ. Below is the code for a small part of the finished diagram which illustrates the problem I'm having. In the current chart the line from the node labeled 7 goes downwards and then touches the node labeled 9 on the right side. What I would like to do is have the line still go downwards from node 7, go to the left, and then go downwards again to touch node 9 on the top. Thanks for any help, I'm quite new to TikZ.
What I have now:
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{shapes,arrows}
\begin{document}
\tikzstyle{block} = [rectangle, draw, text centered]
\tikzstyle{line} = [draw]
\begin{tikzpicture}
  \node [block] (g) {7};
  \node [block, below left of = g] (h) {9};
  \node [block, below right of = g] (i) {12};
  \path [line] (g) |- (h);
  \path [line] (g) |- (i);
\end{tikzpicture}
\end{document}
One possibility using the let syntax, which names coordinates (\p1, \p2) so that their components (\y1, \y2) can be used in computations:
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{calc,shapes,arrows}
\begin{document}
\tikzset{
block/.style={rectangle, draw, text centered},
line/.style={draw}
}
\begin{tikzpicture}
  \node [block] (g) {7};
  \node [block, below left of = g] (h) {9};
  \node [block, below right of = g] (i) {12};
  \path[line] let \p1=(g.south), \p2=(h.north) in
    (g.south) -- +(0,0.5*\y2-0.5*\y1) -| (h.north);
  \path[line] let \p1=(g.south), \p2=(i.north) in
    (g.south) -- +(0,0.5*\y2-0.5*\y1) -| (i.north);
\end{tikzpicture}
\end{document}
One could also do some manual adjustment, but this could produce undesired results if the wrong shift is used:
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{calc,shapes,arrows}
\begin{document}
\tikzset{
block/.style={rectangle, draw, text centered},
line/.style={draw}
}
\begin{tikzpicture}
  \node [block] (g) {7};
  \node [block, below left of = g] (h) {9};
  \node [block, below right of = g] (i) {12};
  \path[line] (g.south) -- +(0,-3pt) -| (h.north);
  \path[line] (g.south) -- +(0,-3pt) -| (i.north);
\end{tikzpicture}
\end{document}
# Package for symbolic computation of christoffel symbols and parallel transports in Riemannian geometry, given the metric
I have no experience with Mathematica (though I do with MATLAB), but I'd really appreciate it if someone could recommend the best and easiest-to-learn Mathematica package(s) for symbolic and numerical computation (both, really) in Riemannian geometry, especially Christoffel symbols, sectional curvature, and parallel transport along a given curve on M, given the topological type of the manifold M and the Riemannian metric g on M.
To explain myself a little more: in order to compute the Christoffel symbols symbolically, I have to invert a matrix and compute symbolic and numerical derivatives with respect to its entries. These matrices come from observations of medical data; they are d-by-n matrices with n a huge number and d normally 2 or 3.
After that, I have to compute the parallel transport along a curve c, which will involve solving a system of first-order linear ordinary differential equations whose coefficients depend on the derivative c' and on the Christoffel symbols.
Thank you!
• Maybe 8895 will help. – b.gates.you.know.what Apr 25 '15 at 13:43
• Welcome to Mathematica.SE! I suggest the following: 1) As you receive help, try to give it too, by answering questions in your area of expertise. 2) Read the faq! 3) When you see good questions and answers, vote them up by clicking the gray triangles, because the credibility of the system is based on the reputation gained by users sharing their knowledge. Also, please remember to accept the answer, if any, that solves your problem, by clicking the checkmark sign! – user9660 Apr 25 '15 at 13:44
• Will do, absolutely! – Mathmath Apr 25 '15 at 13:45
• If you were a physicist specializing in general relativity, I would suggest xAct with xCoba for the Christoffels, but it requires extensive knowledge of differential geometry. There exist less complicated packages, but I have no experience with them. – auxsvr Apr 25 '15 at 22:09
• Please read my answer below. I am not familiar with using diff. Geometry with medical data. Would you perhaps briefly explain what these large data matrices represent and what kind of manifolds you want to detect/describe with Christoffels/ curvature – magma Apr 30 '15 at 7:18 |
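For the symbolic part, it may be worth noting that no package is strictly required: the coordinate formula for the Christoffel symbols is only a few lines of plain Mathematica. Below is a minimal sketch, not a polished implementation; the variable names (xx, g, christoffel) are my own, and the round 2-sphere metric is just a stand-in test case:

xx = {th, ph};  (* coordinates *)
g = {{r^2, 0}, {0, r^2 Sin[th]^2}};  (* metric as a symmetric matrix; test case only *)
ginv = Simplify[Inverse[g]];
n = Length[xx];
(* christoffel[[a, b, c]] is Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc}) *)
christoffel = Simplify@Table[
   1/2 Sum[ginv[[a, d]] (D[g[[d, c]], xx[[b]]] + D[g[[d, b]], xx[[c]]] - D[g[[b, c]], xx[[d]]]), {d, n}],
   {a, n}, {b, n}, {c, n}];
christoffel[[1, 2, 2]]  (* gives -Cos[th] Sin[th], the expected Gamma^theta_{phi phi} for the sphere *)

Parallel transport of a vector v along a curve c(t) then reduces to handing the linear system v'[t] + Gamma(c(t)) c'(t) v(t) == 0 to NDSolve, with the symbols above evaluated along the curve. For heavier machinery, the xAct/xCoba suite mentioned in the comments is the usual suggestion.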
# How do you factor 125x^6 - y^6?
##### 1 Answer
Aug 26, 2016
$125 {x}^{6} - {y}^{6} = \left(\sqrt{5} x - y\right) \left(5 {x}^{2} + \sqrt{5} x y + {y}^{2}\right) \left(\sqrt{5} x + y\right) \left(5 {x}^{2} - \sqrt{5} x y + {y}^{2}\right)$
#### Explanation:
The difference of squares identity can be written:
${a}^{2} - {b}^{2} = \left(a - b\right) \left(a + b\right)$
The difference of cubes identity can be written:
${a}^{3} - {b}^{3} = \left(a - b\right) \left({a}^{2} + a b + {b}^{2}\right)$
The sum of cubes identity can be written:
${a}^{3} + {b}^{3} = \left(a + b\right) \left({a}^{2} - a b + {b}^{2}\right)$
Hence we find:
$125 {x}^{6} - {y}^{6}$
$= {\left(\sqrt{125} {x}^{3}\right)}^{2} - {\left({y}^{3}\right)}^{2}$
$= \left(\sqrt{125} {x}^{3} - {y}^{3}\right) \left(\sqrt{125} {x}^{3} + {y}^{3}\right)$
$= \left({\left(\sqrt{5} x\right)}^{3} - {y}^{3}\right) \left({\left(\sqrt{5} x\right)}^{3} + {y}^{3}\right)$
$= \left(\sqrt{5} x - y\right) \left({\left(\sqrt{5} x\right)}^{2} + \left(\sqrt{5} x\right) y + {y}^{2}\right) \left(\sqrt{5} x + y\right) \left({\left(\sqrt{5} x\right)}^{2} - \left(\sqrt{5} x\right) y + {y}^{2}\right)$
$= \left(\sqrt{5} x - y\right) \left(5 {x}^{2} + \sqrt{5} x y + {y}^{2}\right) \left(\sqrt{5} x + y\right) \left(5 {x}^{2} - \sqrt{5} x y + {y}^{2}\right)$
There are no simpler factors with Real coefficients.
If you allow Complex coefficients then you can factor this further as:
$= \left(\sqrt{5} x - y\right) \left(\sqrt{5} x - \omega y\right) \left(\sqrt{5} x - {\omega}^{2} y\right) \left(\sqrt{5} x + y\right) \left(\sqrt{5} x + \omega y\right) \left(\sqrt{5} x + {\omega}^{2} y\right)$
where $\omega = - \frac{1}{2} + \frac{\sqrt{3}}{2} i$ is the primitive Complex cube root of $1$. |
# Hamilton equations-Symplectic scheme
We know that $\dot{q} = \frac{\partial H}{\partial p}$ and $\dot{p} = -\frac{\partial H}{\partial q}$, and we also know the values $Q$ and $P$, respectively, of $q$ and $p$ after a time step $\Delta t$. How could we prove that the quantities
\begin{align}
Q &= q + \Delta t\,\frac{\partial H}{\partial p}(q,p),\\
P &= p - \Delta t\,\frac{\partial H}{\partial q}(q,p)
\end{align}
are not symplectic, while
\begin{align}
Q &= q - \Delta t\,\frac{\partial H}{\partial p}(q,p),\\
P &= p + \Delta t\,\frac{\partial H}{\partial Q}(Q,p)
\end{align}
are symplectic?
Clarification:
The two sets of equations define different numerical integrators: the first gives $(q_{i+1},p_{i+1})$ directly in terms of $(q_i,p_i)$; the second gives $q_{i+1}$ in terms of $(q_i,p_i)$, and then $p_{i+1}$ in terms of $(q_{i+1},p_i)$.
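For what it's worth, here is a sketch of the standard check for one degree of freedom, where a map $(q,p)\mapsto(Q,P)$ is symplectic exactly when $dQ\wedge dP=dq\wedge dp$; I write $H_{qp}$ for $\frac{\partial^2 H}{\partial q\,\partial p}$ and so on, with all derivatives evaluated at $(q,p)$ unless indicated otherwise. For the first (explicit Euler) scheme,
$$dQ\wedge dP=\left[1+\Delta t^2\left(H_{pp}H_{qq}-H_{qp}^2\right)\right]dq\wedge dp,$$
which differs from $dq\wedge dp$ at order $\Delta t^2$; for $H=\tfrac12 p^2+\tfrac12 q^2$ the factor is $1+\Delta t^2>1$, so areas grow and the map is not symplectic. For the second scheme, compute in two steps:
$$dQ\wedge dp=\bigl(1-\Delta t\,H_{qp}(q,p)\bigr)\,dq\wedge dp,\qquad dQ\wedge dP=\bigl(1+\Delta t\,H_{qp}(Q,p)\bigr)\,dQ\wedge dp.$$
Assuming a separable Hamiltonian $H(q,p)=T(p)+V(q)$, which is the usual setting for this exercise, $H_{qp}\equiv 0$, both factors equal $1$, and $dQ\wedge dP=dq\wedge dp$ exactly, for every $\Delta t$.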
## ASVAB Arithmetic Reasoning Practice Test 761358
Questions: 5 | Topics: Factors & Multiples, Percentages, Practice, Ratios
#### Study Guide
###### Factors & Multiples
A factor is a positive integer that divides evenly into a given number. The factors of 8 are 1, 2, 4, and 8. A multiple is a number that is the product of that number and an integer. The multiples of 8 are 0, 8, 16, 24, ...
###### Percentages
Percentages are ratios of an amount compared to 100. The percent change of an old to new value is equal to 100% x $${ new - old \over old }$$.
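As a quick worked case: going from an old value of 40 to a new value of 50 is a change of 100% x $${50 - 40 \over 40}$$ = 25%.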
###### Practice
Many of the arithmetic reasoning problems on the ASVAB will be in the form of word problems that will test not only the concepts in this study guide but those in Math Knowledge as well. Practice these word problems to get comfortable with translating the text into math equations and then solving those equations.
###### Ratios
Ratios relate one quantity to another and are presented using a colon or as a fraction. For example, 2:3 or $${2 \over 3}$$ would be the ratio of red to green marbles if a jar contained two red marbles for every three green marbles. |
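As a worked case: with that 2:3 ratio and 30 marbles in the jar, $${2 \over 5}$$ x 30 = 12 marbles are red and $${3 \over 5}$$ x 30 = 18 are green.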
## Automating Box processes using PowerShell and the Box Windows SDK
Hey all. I thought I would (re)post a quick explanation of how you can use the Box Windows SDK in PowerShell to automate Box tasks. This method requires at least PowerShell 5 (or an older version with PowerShellGet installed separately) to work. First, obtaining the assemblies. I created a custom little NuGet package of my own which contains not only the Box assembly needed to automate Box tasks, but also the dependencies it requires. I did this because of, among other things, the errors encountered when using NuGet to obtain the assemblies and their dependencies and trying to import them into PowerShell, and also because of the number of unnecessary assemblies that that method imports. I published the package on MyGet (which you can download here if you want to see the contents) to make it simple for a PowerShell script to download these assemblies to the local machine in the event that the script cannot find them. Below is the PowerShell code that I use to accomplish this (using the PackageManagement module):
Get-PackageProvider -Name NuGet -ForceBootstrap
Set-PSRepository -Name PSGallery -InstallationPolicy Trusted
If (!(Test-Path "$env:ProgramData\boxlib"))
{
	New-Item -Name "boxlib" -Path $env:ProgramData -ItemType Directory -Force
}
If (!(Test-Path "$env:ProgramData\boxlib\Box.Dependencies\1.0.0\lib\net45\Box.V2.dll"))
{
	Get-PackageProvider -Name NuGet -ForceBootstrap
	Get-PackageProvider -Name PowerShellGet -ForceBootstrap
	Register-PackageSource -Name "Box.Dependencies" -Location "https://www.myget.org/F/boxdependency/api/v2" -ProviderName "PowerShellGet" -Trusted -Force
	Save-Package -Name Box.Dependencies -Path "$env:ProgramData\boxlib" -ProviderName "PowerShellGet"
}
Get-ChildItem "$env:ProgramData\boxlib\*.dll" -Recurse | % { [Reflection.Assembly]::LoadFrom($_.FullName) }
This will successfully import all of the box windows sdk assemblies you will need into the current powershell session.
While figuring out how to successfully import the Box Windows SDK was a challenge in itself, figuring out how to use it was almost as much of one. So let's talk about authenticating to Box via a JWT assertion. This part was actually relatively simple, as the procedure is covered on the Box Windows SDK GitHub page (the code below assumes that you generated your RSA keypair in the development console, as the resulting JSON file contains all of the information needed to authenticate using JWT):
$cli = Get-Content "Path\to\json\authentication\file\box-cli.json" | ConvertFrom-Json
$boxconfig = New-Object -TypeName Box.V2.Config.BoxConfig($cli.boxAppSettings.clientID, $cli.boxAppSettings.clientSecret, $cli.enterpriseID, $cli.boxAppSettings.appAuth.privateKey, $cli.boxAppSettings.appAuth.passphrase, $cli.boxAppSettings.appAuth.publicKeyID)
$boxJWT = New-Object -TypeName Box.V2.JWTAuth.BoxJWTAuth($boxconfig)
$tokenreal = $boxJWT.AdminToken()
$adminclient = $boxJWT.AdminClient($tokenreal)
$adminclient NOTE: If you want to run the box commands under the context of a particular user (like if you wanted to search through a users files/folders, create share links to aforementioned files/folders, etc.), you can specify the user id of this user as the second argument in the creation of the$adminclient object (eg. $adminclient =$boxJWT.AdminClient($tokenreal,$userID)). This is essentially the same as using the "as-user" header. Another quick explanation. The "$adminclient" object is what you will be using to actually perform administrative tasks in box. Lets take a look at an example script that I wrote real quick, shall we? The below script is a script I was instructed to write in order to automate our "user termination process" as far as box is concerned: <# .SYNOPSIS Obtain all Legal hold objects in the enterprise .DESCRIPTION Obtain a list of all legal hold objects in the enterprise. .EXAMPLE PS C:\> Get-LegalHold .NOTES Additional information about the function. #> function Get-LegalHold {$legalurl = $adminclient2.LegalHoldPoliciesManager.GetListLegalHoldPoliciesAsync()$legalurl.Wait()
$legalOutput =$legalurl.Result
return $legalOutput } <# .SYNOPSIS Returns all legal hold assignment objects for all enterprise legal holds .DESCRIPTION This function will return all of the legal hold assignment objects, which are basically user objects associated with a particular legal hold, for all of the legal hold objects returned by Get-legalhold function .EXAMPLE PS C:\> Get-LegalHoldAssignments .NOTES Additional information about the function. #> function Get-LegalHoldAssignments {$tyy = Get-legalhold
$holds =$tyy.entries
$legids = New-Object System.Collections.ArrayList foreach ($hold in $holds) {$newhold = $adminclient2.LegalHoldPoliciesManager.GetAssignmentsAsync($hold.Id)
foreach ($nope in$newhold.Result.Entries)
{
$legids.Add($nope.Id)
}
}
Return $legids }$user = Get-ADUser "ADuser" -Properties Mail, Manager
$email =$user.Mail
#retrieve information of the manager of the AD user
$manager = Get-aduser$user.Manager -Properties Mail, GivenName
$manmail =$manager.Mail
#Obtain a list of all enterprise box users, loop through the list, and if the user has a box account (identified using user's email), then save the user's and his/her manager's necessary information to variables.
$val = 0
$entries = New-Object System.Collections.ArrayList
$grog = $adminclient.UsersManager.GetEnterpriseUsersAsync($null, 0, 1000)
$grog.Wait()
$grog.Result.Entries | % { $entries.Add($_) }
$tot = $grog.Result.TotalCount
Do
{
$val = $val + 1000
$temp = $adminclient.UsersManager.GetEnterpriseUsersAsync($null, $val, 1000)
$temp.Wait()
$temp.Result.Entries | % { $entries.Add($_) }
} while ($val + 1000 -lt $tot)

$id = $null
$name = $null
$login = $null
$manid = $null
$manlogin = $null
Foreach ($entry in $entries)
{
If ($entry.login -like $email)
{
$statusbar1.Text = "$($textbox1.Text) has a box account. Scan will continue. Please wait."
$id = $entry.Id
$login = $entry.login
$name = $entry.Name
}
If (($manager -ne $null) -and ($manager -ne ''))
{
If ($entry.login -like $manager.Mail)
{
$manid = $entry.Id
$manlogin = $entry.login
}
}
}

#The below code runs if the user has a box account
If ($id -ne $null)
{
#The below code will obtain a list of all legal hold assignments for all legal hold objects to determine if the user is on legal hold.
$leghols = Get-LegalHoldAssignments
$onlegalhold = $false
foreach ($leghol in $leghols)
{
$newoutput = $adminclient2.LegalHoldPoliciesManager.GetAssignmentAsync($leghol)
If ($newoutput.Result.Assignedto.Id -like $id)
{
$onlegalhold = $true
Break
}
}
$holdtext = ''
#Below code runs if the user is on legal hold.
If ($onlegalhold -eq$true)
{
#Moves all of the user's box content to the root of the Main "storage" account.
$foldproc = $adminclient.UsersManager.MoveUserFolderAsync("$id", "***number removed for privacy***", "0", $false)
$foldproc.Wait()
$foldid = $null
#Searches all of the folders in the Main "storage" account for the folder which was just created containing the user's files.
$folddd = $adminclient.FoldersManager.GetFolderItemsAsync("0", 1000, 0, $null)
$folddd.Wait()
$foldentries = $folddd.result.entries
foreach ($foldentry in $foldentries)
{
If ($foldentry.name -like "*$login*")
{
$foldid = $foldentry.id
Break
}
Else
{
Continue
}
}
#When the folder containing the user's files is located, the folder will be moved into the "Legal Holds" folder of the "storage" account.
$folderrequest = @{
id = "$foldid" parent = @{ id = "2***phone number removed for privacy***" } }$folderreqproc = $adminclient.FoldersManager.CopyAsync($folderrequest, $null)$folderreqproc.Wait()
$folderdelete = $adminclient.FoldersManager.DeleteAsync("$foldid", $true, $null)
$folderdelete.Wait()
}
#Below Code runs if the user is not on legal hold
Else
{
#Below Code runs if the user did not have a manager configured in Active Directory or if the user's manager does not have a box account.
If ($manid -eq $null)
{
$foldproc = $adminclient.UsersManager.MoveUserFolderAsync("$id", "***number removed for privacy***", "0", $false)
$foldproc.Wait()
#Below code runs if user has a manager who does not have a box account. A shared link to user's box files will be created for manager.
If ($manmail -ne $null)
{
$foldid = $null
#Searches all of the folders in the Main "storage" account for the folder which was just created containing the user's files.
$folddd = $adminclient.FoldersManager.GetFolderItemsAsync("0", 1000, 0, $null)
$folddd.Wait()
$foldentries = $folddd.result.entries
foreach ($foldentry in $foldentries)
{
If ($foldentry.name -like "*$login*")
{
$foldid = $foldentry.id
Break
}
Else
{
Continue
}
}
#Creates the shared link to the folder containing the user's box files.
$sharejson = @{
access = "open"
}
$sharelink = $adminclient.FoldersManager.CreateSharedLinkAsync("$foldid", $sharejson, $null)
$sharelink.Wait()
$link = $sharelink.Result.SharedLink.Url
}
}
#Below Code runs if the user's manager does have a box account. User's files will be moved to manager's box account.
Else
{
$foldproc = $adminclient.UsersManager.MoveUserFolderAsync("$id", "$manid", "0", $false)
$foldproc.Wait()
}
}
#Deletes user's now empty box account and recovers license.
$deleteuser = $adminclient.UsersManager.DeleteEnterpriseUserAsync("$id", $false, $true)
$deleteuser.Wait()
}
I did my best to annotate the script and be as clear as I could about each step in the process. As you can see (if you followed the script), the script takes an Active Directory user and that user's manager, and loops through all enterprise users to see if they have box accounts (using their AD object's "mail" attribute). If the terminated user has a box account, the script will then determine if that user is on legal hold by cycling through all legal hold assignments of all of the legal hold objects in the enterprise. If the user is on legal hold, the user's files are stored in a secure folder. If they are not, then we look and see if the user had a configured manager. If they don't, the user's files are moved to the same "secure" folder. If they do have a manager, but the manager does not have a box account, a shared link will be created for the manager to use to access that user's box files. If the manager does have a box account, then the files are simply moved to his/her account. There are a couple of things that I want to point out about the script that I feel deserve special mention:
1. You will notice that (almost) every time I use $adminclient to execute an administrative action, I assign the process to a variable and on the very next line use that variable to call the "Wait()" method. This is because, as you can see from the names of the different box methods, the box tasks run asynchronously, and the "Wait()" method essentially performs the same task as the "await" keyword in C#, in that you wait for the previous command to finish executing and return the results before continuing to process the script. For example, say you have a box user you want to delete, but not force delete, and you use the "$adminclient.UsersManager.MoveUserFolderAsync" method to move this user's content to another box user. However, on the very next line, instead of using the "Wait()" method to wait for the file transfer process to finish, you use the "$adminclient.UsersManager.DeleteEnterpriseUserAsync" method (where the "force" parameter is set to $false) to attempt to delete the user. You will get an error, because the previous action to move the user's files to another account is still processing and has not completed yet. I make it a point to use the "Wait()" method every time I perform a task using the $adminclient to make sure that all processes and results are returned before the script continues to execute.

2. When I used $adminclient.FoldersManager.CopyAsync($folderrequest, $null) to copy a folder in the script above, some of you may have been slightly confused as to why I assigned the value I did to $folderrequest. This is because the syntax used is the powershell equivalent of using hashtables to represent a json object. This link is where I obtained the information I needed to create these variables. As for what information actually needs to be in the variables, look no further than the API documentation page. If you look at the Curl representation of the command, you will find the Json variables which are needed in order to construct the request you need to perform the operation.

I hope this helped some of you. I will be honest, I will probably never check this posting again just because of time constraints, but I hope this put you on the right path to getting the answers you needed to automate box in powershell. Peace.

Occasional Contributor

## Re: Automating box processes using powershell and the box windows sdk

Oh, and one more thing. I also used ILSpy to decompile the box windows sdk dll file, which was infinitely helpful in figuring out what all of the usable commands are and what their arguments are.

Occasional Contributor

## Re: Automating box processes using powershell and the box windows sdk

I wanted to add one more piece of information. Thanks to the help of mattwiller on Github, I now finally understand why, when running the windows sdk in powershell, 2 different versions of the Newtonsoft.Json assembly were required (version 9.0.0 and 10.0.3):

"It looks like this works in full .NET applications but not in PowerShell because the SDK's app.config file that binds any version of Newtonsoft.Json to the 10.x version that the SDK installs is not being loaded by PowerShell. Fundamentally, since you're going outside of the SDK's normal operation and loading the assemblies manually, you're going to have to ensure that the correct versions are installed and referenced. In the course of our investigation we found some information about how to make PowerShell load the app.config, but weren't able to make it work in our environment."
The app.config file that mattwiller references is located here, and it is a bit difficult to implement if you are running a straight powershell script. Allow me to explain: when you run an executable, if there is a ".config" file in the same directory as the executable which has the same name (e.g. "aprocess.exe" would have a config file named "aprocess.exe.config"), the executable (from my understanding) is able to pull settings from this config file for use when the script is run. I did some research into the link that mattwiller posted, but was not able to successfully load the app.config file into the powershell script. Then it struck me: the answer was not to load an app.config file into the powershell session, as powershell is itself a process, and has its own config file located in the same directory as the powershell executable (C:\Windows\System32\WindowsPowerShell\v1.0). So, because the powershell executable runs whenever a script is run on the machine, one would have to modify the powershell.exe.config (and/or powershell_ise.exe.config) files and add the content of the app.config file linked above in order to get powershell to correctly load the Newtonsoft.Json 10.0.3 assembly. If, however, you are like me and don't want to modify system files like that, the good news is that that is not the only option. The second option involves converting the powershell script into an executable file and then creating a config file for that executable. If you use Sapien PowerShell Studio, when you build a package, by default the config file is created along with the executable, so all you would have to do is add the contents of the app.config file linked above to the newly created config file for the executable PowerShell Studio generated, and presto, works like a charm. Below is what an example config file would look like:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<runtime>
<assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
<dependentAssembly>
<assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
<bindingRedirect oldVersion="0.0.0.0-10.0.0.0" newVersion="10.0.0.0" />
</dependentAssembly>
</assemblyBinding>
</runtime>
<startup useLegacyV2RuntimeActivationPolicy="true">
<supportedRuntime version="v4.0" />
<supportedRuntime version="v2.0" />
</startup>
<appSettings>
<add key="EnableWindowsFormsHighDpiAutoResizing" value="true" />
</appSettings>
</configuration>

For those who do not have PowerShell Studio, or for whom it is outside your price range, don't worry, you can still perform this process, although you will have to do it all manually. There is a cool little application called ps1 to exe converter (both a portable and a web based version) which will convert a powershell script into an executable, and includes other really cool features, like embedding files, choosing the working directory, etc. You can use this tool to convert the powershell script to an executable, then create the config file for the executable, modify it with the contents of the app.config file linked above, (optionally) embed it within your converted executable, and you are good to go.

Occasional Contributor

## Re: Automating box processes using powershell and the box windows sdk

I'm getting the following error when attempting to get the AdminToken. Any suggestions on why? The value of $boxJWT is:
Box.V2.JWTAuth.BoxJWTAuth
$tokenreal = $boxJWT.AdminToken()
Exception calling "AdminToken" with "0" argument(s): "The type initializer for
'System.IdentityModel.Tokens.Jwt.JsonExtensions' threw an exception."
At line:1 char:1
+ $tokenreal =$boxJWT.AdminToken()
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : TypeInitializationException
Box Employee
## Re: Automating box processes using powershell and the box windows sdk
@BentheBuilder This is likely because of the exact issue that @whiggs12 describes in his post above — the Box SDK depends on Newtonsoft.Json 10 but the System.IdentityModel.Tokens.Jwt library we use depends on 9. A full .NET project respects the SDK's app.config file that does a binding redirect to reconcile these dependencies, but if you're loading assemblies manually in PowerShell you're probably not picking that config up, and the IdentityModel library is not able to resolve its dependency on Newtonsoft.Json 9. In the post above, @whiggs12 gives some helpful information to fix that issue — could you try that out and see if it works for you?
Occasional Contributor
## Re: Automating box processes using powershell and the box windows sdk
To piggy back off what @mwiller said, you might find the below code snippet very useful, as it allows you to perform powershell binding redirection:
# Load your target version of the assembly
$newtonsoft = [System.Reflection.Assembly]::LoadFrom("$PSScriptRoot\packages\Newtonsoft.Json.8.0.3\lib\net45\Newtonsoft.Json.dll")
$onAssemblyResolveEventHandler = [System.ResolveEventHandler] {
param ($sender, $e)
# You can make this condition more or less version specific as suits your requirements
if ($e.Name.StartsWith("Newtonsoft.Json")) {
return $newtonsoft
}
foreach ($assembly in [System.AppDomain]::CurrentDomain.GetAssemblies()) {
if ($assembly.FullName -eq $e.Name) {
return $assembly
}
}
return $null
}
[System.AppDomain]::CurrentDomain.add_AssemblyResolve($onAssemblyResolveEventHandler)

# Rest of your script....

# Detach the event handler (not detaching can lead to stack overflow issues when closing PS)
[System.AppDomain]::CurrentDomain.remove_AssemblyResolve($onAssemblyResolveEventHandler)
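For what it's worth, here is a minimal sketch of how this redirect might be combined with the manual assembly loading from earlier in the thread (the $env:ProgramData\boxlib path is just the example location used above — adjust to wherever your DLLs actually live):

# Register the resolver first, so any stray Newtonsoft.Json version requests get redirected
[System.AppDomain]::CurrentDomain.add_AssemblyResolve($onAssemblyResolveEventHandler)
# Then load the box windows sdk assemblies as before
Get-ChildItem "$env:ProgramData\boxlib\*.dll" -Recurse | % { [Reflection.Assembly]::LoadFrom($_.FullName) }
# ... do your box work here ...
# Detach when finished, as noted above
[System.AppDomain]::CurrentDomain.remove_AssemblyResolve($onAssemblyResolveEventHandler)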
New Contributor
## Re: Automating box processes using powershell and the box windows sdk
Hi, I want to know whether we can automate the file access process on Box with this code or not?
I am in an internship where I have to automate a process which includes a step where we have to give access to some files to selected users. So, can I use this code to automate the file access process? If not, please tell me another possible way.
First-time Contributor
## Re: Automating box processes using powershell and the box windows sdk
@Garv. If you are still looking for an answer to this question, the answer is yes. If you open the box windows sdk solution in visual studio and navigate to the collaborations manager, you will see several methods to manipulate collaborations.
First-time Contributor
## Re: Automating box processes using powershell and the box windows sdk
For those of you who are interested, I have created a full powershell module which acts as a wrapper for the box windows sdk, removing the need for you to directly work with the sdk. To install the module, make sure you have powershellget installed and run
Install-module Poshbox -force -allowclobber
You can view the source code for the module here. You can also submit any issues that you encounter or contribute yourself.
Contributor
## Re: Automating box processes using powershell and the box windows sdk
@gfish1, Your link doesn't seem to function anymore. Are you still maintaining "PoshBox" and if so, where can I get more info?
First-time Contributor
## Re: Automating box processes using powershell and the box windows sdk
The Poshbox powershell module can be found here. You can also install poshbox using powershellget:
Install-module poshbox
# Popoviciu’s Inequality
I’ve just returned to the UK after an excellent stay at the University of British Columbia. More about that will follow in some posts which are being queued. Anyway, I flew back in time to attend the last day of the camp held at Oundle School to select the UK team for this year’s International Mathematical Olympiad, to be held in Cape Town in early July. I chose to give a short session on inequalities, which is a topic I did not enjoy as a student and do not enjoy now, but perhaps that makes it a particularly suitable choice?
We began with a discussion of convexity. Extremely occasionally in olympiads, and merely slightly occasionally in real life, an inequality arises which can be proved by showing that a given function is convex in all its arguments, hence its maximum must be attained at a boundary value in each variable.
In general though, our main experience of convexity will be through the medium of Jensen’s inequality. A worthwhile check is to consider one form of the statement of Jensen’s inequality, with two arguments. We are always given a convex function f defined on an interval I=[a,b], and $x,y\in I$, and weights $\alpha,\beta$ which sum to 1. Then
$\alpha f(x)+\beta f(y)\ge f(\alpha x+\beta y).$
How do we prove this? Well, in fact this is the natural definition of convexity for a function. There had initially been vague murmurings that convexity should be defined as a property of the second derivative of the function. But this is somewhat unsatisfactory, as the function $f(x)=|x|$ is certainly convex, but the second derivative does not exist at x=0. One could argue that the second derivative may not be finite at x=0, but is nonetheless positive by defining it as a limit which happens to be infinite in this case. However, I feel it is uncontroversial to take the case of Jensen given above as the definition of convexity. It is after all a geometric property, so why raise objections to a geometric definition?
The general statement of Jensen’s inequality, with the natural definitions, is
$\sum_{i} \alpha_i f(x_i)\ge f(\sum_{i}\alpha_ix_i).$
This is sometimes called Weighted Jensen in the olympiad community, with ‘ordinary’ Jensen following when the weights are all 1/n. In a probabilistic context, we write
$\mathbb{E}[f(X)]\ge f(\mathbb{E}X),$
for X any random variable supported on the domain of f. Naturally, X can be continuous as well as discrete, giving an integral version of the discretely weighted statement.
Comparing ‘ordinary’ Jensen and ‘weighted’ Jensen, we see an example of the situation where the more general result is easier to prove. As is often the case in these situations, this arises because the more general conditions allow more ‘elbow room’ to perform an inductive argument. A stronger statement means that assuming the induction hypothesis is more useful! Anyway, I won’t digress too far onto the proof of discrete ‘weighted’ Jensen as it is a worthwhile exercise for olympiad students.
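As an aside, the inductive step for the weighted version is short, and worth recording. Take weights $\alpha_1,\ldots,\alpha_n$ summing to 1 with $\alpha_1<1$, pull out the first term, and renormalise the rest:

$\sum_{i=1}^n \alpha_i f(x_i)=\alpha_1 f(x_1)+(1-\alpha_1)\sum_{i=2}^n \tfrac{\alpha_i}{1-\alpha_1}f(x_i)\ge \alpha_1 f(x_1)+(1-\alpha_1)f\left(\sum_{i=2}^n \tfrac{\alpha_i x_i}{1-\alpha_1}\right)\ge f\left(\sum_{i=1}^n \alpha_i x_i\right),$

where the first inequality is the induction hypothesis for n-1 points and the second is the two-argument definition above. Note the renormalised weights are in general not uniform, which is exactly the ‘elbow room’ that makes the induction work.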
What I wanted to discuss principally was an inequality due to Tiberiu Popoviciu:
$\frac13[f(x)+f(y)+f(z)]+f(\frac{x+y+z}{3})\ge \frac23[f(\frac{x+y}{2})+f(\frac{y+z}{2})+f(\frac{z+x}{2})].$
We might offer the following highly vague intuition. Jensen asserts that for sums of the form $\sum f(x_i)$, you get larger sums if the points are more spread out. The effect of taking the mean is immediately to bring all the points as close together as possible. But Popoviciu says that this effect is so pronounced that even with only half the weight on the outer points (and the rest as close together as possible), it still dominates a system with the points twice as close together.
So how to prove it? I mentioned that there is, unsurprisingly, a weighted version of this result, which was supposed to act as a hint to avoid getting too hung up about midpoints. One can draw nice diagrams with a triangle of points (x,f(x)), (y,f(y)), (z,f(z)) and draw midpoints, medians and centroids, but the consensus seemed to be that this didn’t help much.
I had tried breaking up the LHS into three symmetric portions, and using weighted Jensen to obtain terms on the RHS, but this also didn’t yield much, so I warned the students against this approach unless they had a specific reason to suppose it might succeed.
Fortunately, several of the students decided to ignore this advice, and though most fell into a similar problem I had experienced, Joe found that by actively avoiding symmetry, a decomposition into two cases of Jensen could be made. First we assume WLOG that $x\le y \le z$, and so by standard Jensen, we have
$\frac13[f(x)+f(y)]\ge \frac23 f(\frac{x+y}{2}).$
It remains to show
$\frac13 f(z)+f(\frac{x+y+z}{3})\ge \frac23[f(\frac{x+z}{2})+f(\frac{y+z}{2})].$
If we multiply by ¾, then we have an expression on each side that looks like the LHS of Weighted Jensen. At this point, it is worth getting geometric again. One way to visualise Jensen is that for a convex function, a chord between two points on the function lies above the function. (For standard Jensen with two variables, in particular the midpoint lies above the function.) But indeed, suppose we have values $x_1\le x_2\le y_2\le y_1$, then the chord between $f(x_1),f(y_1)$ lies strictly above the chord between $f(x_2),f(y_2)$. Making precisely such a comparison gives the result required above. If you want to be more formal about it, you could consider replacing the values of f between $x_2,y_2$ with a straight line, then applying Jensen to this function. Linearity allows us to move the weighting in and out of the brackets on the right hand side, whenever the mean lies in this straight line interval.
Hopefully the diagram above helps. Note that we can compare the heights of the blue points (with the same abscissa), but obviously not the red points!
In any case, I was sceptical about whether this method would work for the weighted version of Popoviciu’s inequality
$\alpha f(x)+\beta f(y) + \gamma f(z)+f(\alpha x+\beta y+\gamma z)\ge$
$(\alpha+\beta)f(\frac{\alpha x+\beta y}{\alpha+\beta})+(\beta+\gamma)f(\frac{\beta y + \gamma z}{\beta+\gamma})+(\gamma+\alpha)f(\frac{\gamma z+\alpha x}{\gamma+\alpha}).$
It turns out though, that it works absolutely fine. I would be interested to see a solution to the original statement making use of the medians and centroid, as then by considering general Cevians the more general inequality might follow.
That’s all great, but my main aim had been to introduce one trick which somewhat trivialises the problem. Note that in the original statement of Popoviciu, we have a convex function, but we only evaluate it at seven points. So for given x,y,z, it makes no difference if we replace the function f with a piece-wise linear function going through the correct seven points. This means that if we can prove the inequality for any convex piece-wise linear function with at most eight linear parts then we are done.
(There’s a subtlety here. Note that we will prove the inequality for all such functions and all x,y,z, but we will only use this result when x,y,z and their means are the points where the function changes gradient.)
So we consider the function
$g_a(x)=\begin{cases}0& x\le 0\\ ax & x\ge 0\end{cases}$
for some positive value of a. It is not much effort to check that this satisfies Popoviciu. It is also easy to check that the constant function, and the linear function g(x)=bx also satisfy the inequality. We now prove that we can write the piece-wise linear function as a sum of functions which satisfy the inequality, and hence the piece-wise linear function satisfies the inequality.
Suppose we have a convex piecewise linear function h(x), where $x_1<x_2<\ldots<x_n$ are the points where the derivative changes. We write
$a_i=h'(x_i+)-h'(x_i-),\quad a_0=h'(x_1-),$
for the change in gradient of h around point $x_i$. Crucially, because h is convex, we have $a_i\ge 0$. Then we can write h as
$h(x)=C+ a_0x+g_{a_1}(x-x_1)+\ldots+g_{a_{n}}(x-x_n),$
for a suitable choice of the constant C. This result comes according to [1] as an intermediate step in a short paper of Hardy, Littlewood and Polya, which I can’t currently find online. Note that inequalities are preserved under addition (but not under subtraction) so it follows that h satisfies Popoviciu, and so the original function f satisfies it too for the values of x,y,z chosen. These were arbitrary (but were used to construct h), and hence f satisfies the inequality for all x,y,z.
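To see that this decomposition really does give back h, it is enough to compare gradients: on each interval $(x_i,x_{i+1})$,

$\tfrac{d}{dx}\left(C+a_0x+\sum_{j=1}^n g_{a_j}(x-x_j)\right)=a_0+\sum_{j\le i}a_j=h'(x),$

since $g_{a_j}$ contributes gradient $a_j$ precisely when $x>x_j$ and 0 otherwise; the two sides then differ by a constant, which is absorbed into C.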
Some further generalisations can be found in [1]. With more variables, there are more interesting combinatorial aspects that must be checked, regarding the order of the various weighted means.
[1] – D. Grinberg – Generalizations of Popoviciu’s Inequality. arXiv
# Large Deviations 5 – Stochastic Processes and Mogulskii’s Theorem
Motivation
In the previous posts about Large Deviations, most of the emphasis has been on the theory. To summarise briefly, we have a natural idea that for a family of measures supported on the same metric space, increasingly concentrated as some index grows, we might expect the probability of seeing values in a set not containing the limit in distribution to grow exponentially. The canonical example is the sample mean of a family of IID random variables, as treated by Cramer’s theorem.
It becomes apparent that it will not be enough to specify the exponent for a given large deviation event just by taking the infimum of the rate function, so we have to define an LDP topologically, with different behaviour on open and closed sets. Now we want to find some LDPs for more complicated measures, but which will have genuinely non-trivial applications. The key idea in all of this is that the infimum present in the definition of an LDP doesn’t just specify the rate function, it also might well give us some information about the configurations or events that lead to the LDP.
The slogan for the LDP as in Frank den Hollander’s excellent book is: “A large deviation event will happen in the least unlikely of all the unlikely ways.” This will be useful when our underlying space is a bit more complicated.
Setup
As a starting point, consider the set-up for Cramer’s theorem, with IID $X_1,\ldots,X_n$. But instead of investigating LD behaviour for the sample mean, we investigate LD behaviour for the whole set of RVs. There is a bijection between sequences and the partial sums process, so we investigate the partial sums process, rescaled appropriately. For the moment this is a sequence not a function or path (continuous or otherwise), but in the limit it will be, and furthermore it won’t make too much difference whether we interpolate linearly or step-wise.
Concretely, we consider the rescaled random walk:
$Z_n(t):=\tfrac{1}{n}\sum_{i=1}^{[nt]}X_i,\quad t\in[0,1],$
with laws $\mu_n$ supported on $L_\infty([0,1])$. Note that the expected behaviour is a straight line from (0,0) to (1,$\mathbb{E}X_1$). In fact we can say more than that. By Donsker’s theorem we have a functional version of a central limit theorem, which says that deviations from this expected behaviour are given by suitably scaled Brownian motion:
$\sqrt{n}\left(\frac{Z_n(t)-t\mathbb{E}X}{\sqrt{\text{Var}(X_1)}}\right)\quad\stackrel{d}{\rightarrow}\quad B(t),\quad t\in[0,1].$
This is what we expect ‘standard’ behaviour to look like:
The deviations from a straight line are on a scale of $\sqrt{n}$. Here are two examples of potential large deviation behaviour:
Or this:
Note that these are qualitatively different. In the first case, the first half of the random variables are in general much larger than the second half, which appear to have empirical mean roughly 0. In the second case, a large deviation in overall mean is driven by a single very large value. It is obviously of interest to find out what the probabilities of each of these possibilities are.
We can do this via an LDP for $(\mu_n)$. Now it is really useful to be working in a topological context with open and closed sets. It will turn out that the rate function is supported on absolutely continuous functions, whereas obviously for finite n, none of the sample paths are continuous!
We assume that $\Lambda(\lambda)$ is the logarithmic moment generating function of X_1 as before, with $\Lambda^*(x)$ the Fenchel-Legendre transform. Then the key result is:
Theorem (Mogulskii): The measures $(\mu_n)$ satisfy an LDP on $L_\infty([0,1])$ with good rate function:
$I(\phi)=\begin{cases}\int_0^1 \Lambda^*(\phi'(t))dt,&\quad \text{if }\phi\in\mathcal{AC}, \phi(0)=0,\\ \infty&\quad\text{otherwise,}\end{cases}$
where AC is the space of absolutely continuous functions on [0,1]. Note that AC is dense in $L_\infty([0,1])$, so any open set contains a $\phi$ for which $I(\phi)$ is at least in principle finite. (Obviously, if $\Lambda^*$ is not finite everywhere, then extra restrictions of $\phi'$ are required.)
The following picture may be helpful at providing some motivation:
So what is going on is that if we take a path and zoom in on some small interval around a point, note first that behaviour on this interval is independent of behaviour everywhere else. Then the gradient at the point is the local empirical mean of the random variables around this point in time. The probability that this differs from the actual mean is given by Cramer’s rate function applied to the empirical mean, so we obtain the rate function for the whole path by integrating.
More concretely, but still very informally, suppose there is some $\phi'(t)\neq \mathbb{E}X$, then this says that:
$Z_n(t+\delta t)-Z_n(t)=\phi'(t)\delta t+o(\delta t),$
$\Rightarrow\quad \mu_n\Big(\phi'(t)\delta t+o(\delta t)=\frac{1}{n}\sum_{i=nt+1}^{n(t+\delta t)}X_i\Big),$
$= \mu_n\Big( \phi'(t)+o(1)=\frac{1}{n\delta t}\sum_{i=1}^{n\delta t}X_i\Big)\sim e^{-n\delta t\Lambda^*(\phi'(t))},$
by Cramer. Now we can use independence:
$\mu_n(Z_n\approx \phi)=\prod_{\delta t}e^{-n\delta t \Lambda^*(\phi'(t))}=e^{-\sum_{\delta t}n\delta t \Lambda^*(\phi'(t))}\approx e^{-n\int_0^1 \Lambda^*(\phi'(t))dt},$
as in fact is given by Mogulskii.
Remarks
1) The absolutely continuous requirement is useful. We really wouldn’t want to be examining carefully the tail of the underlying distribution to see whether it is possible on an exponential scale that o(n) consecutive RVs would have sum O(n).
2) In general $\Lambda^*(x)$ will be convex, which has applications as well as playing a useful role in the proof. Recalling den Hollander’s mantra, we are interested to see where infima hold for LD sets in the host space. So for the event that the empirical mean is greater than some threshold larger than the expectation, Cramer’s theorem told us that this is exponentially the same as the event that the empirical mean is roughly equal to the threshold. Now Mogulskii’s theorem says more. By convexity, we know that the integral functional for the rate function is minimised by straight lines. So we learn that the contributions to the large deviation are spread roughly equally through the sample. Note that this is NOT saying that all the random variables will have the same higher than expected value. The LDP takes no account of fluctuations in the path on a scale smaller than n. It does however rule out both of the situations pictured a long way up the page. We should expect to see roughly a straight line, with unexpectedly steep gradient.
3) The proof as given in Dembo and Zeitouni is quite involved. There are a few stages, the first and simplest of which is to show that it doesn’t matter on an exponential scale whether we interpolate linearly or step-wise. Later in the proof we will switch back and forth at will. The next step is to show the LDP for the finite-dimensional problem given by evaluating the path at finitely many points in [0,1]. A careful argument via the Dawson-Gartner theorem allows lifting of the finite-dimensional projections back to the space of general functions with the topology of pointwise convergence. It remains to prove that the rate function is indeed the supremum of the rate functions achieved on projections. Convexity of $\Lambda^*(x)$ is very useful here for the upper bound, and this is where it comes through that the rate function is infinite when the comparison path is not absolutely continuous. To lift to the finer topology of $L_\infty([0,1])$ requires only a check of exponential tightness in the finer space, which follows from Arzela-Ascoli after some work.
In conclusion, it is fairly tricky to prove even this most straightforward case, so unsurprisingly it is hard to extend to the natural case where the distributions of the underlying RVs (X) change continuously in time, as we will want for the analysis of more combinatorial objects. Next time I will consider why it is hard but potentially interesting to consider with adaptations of these techniques an LDP for the size of the largest component in a sparse random graph near criticality.
# Intrinsic square functions on functions spaces including weighted Morrey spaces
@article{Feuto2012IntrinsicSF,
title={Intrinsic square functions on functions spaces including weighted Morrey spaces},
author={Justin Feuto},
journal={arXiv: Classical Analysis and ODEs},
year={2012}
}
• J. Feuto
• Published 1 May 2012
• Mathematics
• arXiv: Classical Analysis and ODEs
We prove that the intrinsic square functions, including the Lusin area integral and the Littlewood-Paley $g^{\ast}_{\lambda}$-function as defined by Wilson, are bounded in a class of function spaces which includes weighted Morrey spaces. The corresponding commutators generated by $BMO$ functions are also considered.
## Wonder Material – 2
Using Carbon-NanoTube (CNT) sheets that we can make now, we might push towards ~2,200 km/s. Of course there will be structural mass and the payload reducing the top speed – thus we might hit ~1,800 km/s tops with CNT sheets, if made perfectly reflective. Even for lower reflectivity the speed will be about ~1000-1500 km/s.
How hard can we push it? A 1999 study by Dean Spieth, Robert Zubrin & Cindy Christensen for NASA’s Institute of Advanced Concepts (NIAC), which can be found here, examined using CNTs arranged in a spaced-out grid. One of the curiosities of optical theory is that, for a given range of wavelengths, the reflective material doesn’t have to be an unbroken sheet – it can be an open-grid.
Computing the reflectivity of such things is difficult – best to make it and measure it – but estimates of how a CNT grid would perform suggest that a CNT sail might accelerate at ~18 m/s² at 1 AU from the Sun, implying a final speed of 2,320 km/s. Dropping inwards and launching from 0.019 AU would mean a final speed of 16,835 km/s (0.056c), allowing a probe to reach Alpha Centauri in just 78 years, propelled by sunlight alone!
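Those figures can be sanity-checked with a one-line integral. Treating the sail’s radiation acceleration as inverse-square, a(r) = a₁ × (1 AU/r)², and neglecting the Sun’s gravity (under 6 mm/s² at 1 AU, tiny next to 18 m/s²), a sail starting at rest at distance r₀ coasts out to

v_final = sqrt(2 × a₁ × (1 AU)²/r₀)

With a₁ = 18 m/s² and r₀ = 1 AU = 1.496×10¹¹ m, that gives sqrt(2 × 18 × 1.496×10¹¹) ≈ 2,320 km/s; launching from r₀ = 0.019 AU multiplies the result by 1/sqrt(0.019) ≈ 7.3, or ≈ 16,800 km/s. And 4.37 light-years at 0.056c is indeed about 78 years.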
To send people, rather than rugged robots, a different approach will be needed – to be discussed in Part 3.
## Wonder Material
Carbon is the material of the Future. Graphite, graphene, bucky-balls and nanotubes all have amazing properties. And then there’s diamond – which seems to come in several varieties, albeit rare and/or theoretical.
Making enough of any of the allotropes – different carbon forms – is rather tricky, aside from raw graphite, which can be mined. Diamonds fortunately can be made fairly easily these days – very pure diamond crystals can be (almost) made as large as one likes. Thus Jewel Diamonds, the kind De Beers sets the standard for, have to be slightly impure crystals, as they’re thus provably natural.
Carbon nanotubes are proving easier to make and to make into useful forms. One application caught my eye:
Carbon Nanotube Sheets
…which have the rather amazing property of being strong and yet massing just ~27 milligrams per square metre. If we can dope it (add a sprinkling of other elements) to make it more reflective, then it makes rather impressive solar-sail material. Sunlight’s pressure – as felt by a reflective surface facing flat to the Sun – is about 1/650th of the Sun’s gravity for a sail massing 1 kilogram per square metre, so creating lift against the Sun’s gravity requires very large, light sheets. And doped CNT sheets – if 100% reflective – would experience a lift factor (ratio of light-pressure to the sail’s own weight) of 57 (!)
In theory that means a suitably steered solar-sail made of CNT sheet could send itself away from Earth’s orbit and reach a final speed of 42*sqrt(57-1) km/s ~ 315 km/s. If it swooped past Jupiter then swung in hard for the Sun, scooting past at 0.019 AU, then it would recede at ~2,200 km/s.
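A quick back-of-envelope for those numbers: at 1 kg/m² the light-pressure-to-gravity ratio is 1/650, so a 27 mg/m² sheet scores (1/650)/(2.7×10⁻⁵) ≈ 57. For a lift factor L > 1 the net outward acceleration is (L−1) times the local solar gravity, so a sail cut loose near Earth’s orbit reaches roughly the local solar escape speed scaled by sqrt(L−1):

v_final ≈ 42 km/s × sqrt(57 − 1) ≈ 315 km/s

the 42 km/s being the Sun’s escape velocity at 1 AU.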
We’ll ponder that some more next time.
## The Unknown Solar System
Just beyond Neptune is the Kuiper Belt, a torus of comet-like objects, which includes a few dwarf planets like the Pluto-Charon dual-planet system. Despite being lumped together under one moniker, the Belt is composed of several different families of objects, which have quite different orbital properties. Some are locked in place by the gravity of the big planets, mostly Neptune, while others are destined to head in towards the Sun, and some show signs of being scattered into the vastness beyond. Patryk Lykawka is one researcher who has puzzled over this dark, lonely region, and has tried to model exactly how it has become the way it is today.

Over the last two decades there has been a slow revolution in our understanding of how the Big Planets, the Gas Giants, formed. They almost certainly did not begin life in their present orbits – instead they migrated outwards from a formation region closer to the Sun. To do so, millions of planetoids on near-misses with the Gas Giants tugged them gently outwards over millions of years. We know what happened to the Gas Giants, but what of the planetoids? A fraction today form the Kuiper Belt and the Oort Cloud beyond it (how many Plutos exist out there?) But a mystery remains, which Lykawka convincingly solves in his latest monograph via an additional “Super-Planetoid”, a planet between 0.3-0.7 Earth masses, now orbiting somewhere just beyond the Belt.
Such an object would be a sample of the objects that formed the Gas Giants, a so-called “Planetary Embryo”. Based on the ice and silicate mix present in the moons of the Gas Giants, the object would probably be half ice, half silicates/metals, like a giant version of Ganymede. However such an object would also have gained a significant atmosphere, unlike smaller bodies, and being cast so far from the Sun, it would have retained it even if it was composed of the primordial hydrogen/helium mix of the Gas Giants themselves. This has two potentially very interesting consequences. David Stephenson, in 1998, speculated on interstellar planets with thick hydrogen atmospheres able to keep a liquid-water ocean warm from geophysical heat-sources alone. Work by Eric Gaidos and Raymond Pierrehumbert suggests hydrogen greenhouse planets are a viable option in any system once past about ~2.0 AU. A precondition that obtains for Lykawka’s hypothetical Super Trans-Neptunian Object.
So instead of a giant Ganymede the object is more like Kainui, from Hal Clement’s last novel, “Noise”. Kainui is a “hot Ganymede”, a water planet with sufficiently low gravity that the global ocean hasn’t been compressed into Ice VII in its very depths. Kainui’s ocean is in a continual state of violent agitation, lethal to humans without special noise-proof suits, but Lykawka’s Super-TNO would probably be wet beneath its dense atmosphere, warmed by a trickle of heat from its core and the distant Sun.
Gravitational perturbation studies of planetary orbits by Lorenzo Iorio constrain the orbital distance of such a body to roughly where Lykawka suggests it should be. A Mars-mass object (0.1 Earth-masses) would exist between 150-250 AU, while a 0.7 Earth-mass body would be between 250-450 AU. If we place it at ~300 AU, then its equilibrium temperature, based on sunlight alone, would be somewhere below 16 K. That’s close to the triple-point of hydrogen (13.84 K @ 0.0704 bar), suggesting a frozen planet would result. However geophysical heat, from radioactive decay of potassium, uranium and thorium, could elevate the equilibrium temperature to over ~20.4 K, hydrogen’s boiling point at 1 atm pressure. Thus a thick hydrogen atmosphere should stay gaseous.
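As a check on that temperature, use the usual blackbody equilibrium scaling (an Earth-distance blackbody sits near 279 K, and equilibrium temperature falls as the square root of distance):

T_eq ≈ 279 K / sqrt(300) ≈ 16 K

right at the edge of hydrogen’s triple point, which is why sunlight alone would leave the planet frozen and internal heat has to do the work.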
To keep liquid water warm enough (~273 K) at the surface, the surface pressure will need to be ~1,000 bar, the equivalent of the bottom of Earth’s oceans. An ammonia-water eutectic mixture would be liquid at ~100 bars and 176 K. With a higher rock fraction and higher radioactive isotope levels (as seen in comets, for example), liquid water might be possible at ~300 bars. Such a warm ocean would seem enticingly accessible since a variety of submarines and ROVs operate in the ocean at such pressures regularly. While the prospects for life seem dim, the variety of chemosynthetic life-styles amongst bacteria suggest we shouldn’t be too hasty about dismissing the possibility.
A primordial atmosphere also invites thoughts of mining the helium for that rare isotope, helium-3. At 0.3 Earth masses and 1:3 ratio of ice to rock, such a body has 75% Earth’s radius and just 40% the gravitational potential at its surface – even less at the top of the atmosphere. Such a planet would be incredibly straight-forward to mine and condensing helium-3 out of the mix would be made even easier by the ~30-40 K temperature at the 1 bar pressure level. There’s no simple relationship between the size of a planet and its spin rate, but assuming Earth’s early spin rate of 12 hours, then the synchronous orbital radius is just 2 Earth radii above the operating altitude of a mining platform. A space-elevator system would be straight-forward to implement, unlike the Gas Giants or even Earth.
Travelling to 300 AU is a non-trivial task, ten-times the distance to Neptune. A minimum-energy Hohmann trajectory would take 923 years, while a parabolic orbit would do the trip in 390 years. Voyager’s 15 km/s interstellar cruise speed would mean a trip of 95 years. A nuclear saltwater rocket, with an exhaust velocity of 4,725 km/s, could be used to accelerate to 3,000 km/s, then flip and brake at the destination. The trip would take six months, which is speedy by comparison.
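For the record, the arithmetic behind those trip times: 300 AU ≈ 4.49×10¹³ m, so Voyager’s 15 km/s gives 4.49×10¹³/1.5×10⁴ ≈ 3.0×10⁹ s ≈ 95 years. A Hohmann transfer has a semi-major axis of about 150.5 AU, hence a period of 150.5^1.5 ≈ 1,846 years and a half-orbit transfer time of ≈ 923 years. And at a 3,000 km/s cruise, 4.49×10¹³/3×10⁶ ≈ 1.5×10⁷ s ≈ 6 months, assuming the acceleration and braking legs are brief compared to the cruise.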
## 2312: Terraforming the Solar System, Terraforming the Earth
Kim Stanley Robinson’s latest book “2312” is set in that titular year in a Solar System alive with busy humans and thousands of artificial habitats carved from asteroids. Earth is a crowded mess, home to eleven billion humans, but no longer the home of thousands of species, now only preserved, flourishing in fact, in the habitats. Spacers, those living in space, are long-lived, thanks to being artificially made “bisexual” (male & female) and some are living even longer by virtue of small size. Humans live from the Vulcanoids – a belt of asteroids just 0.1 AU from the Sun – out to Pluto, where a quartet of starships are being built for a 1,000 year flight to GJ 581. Mars has been terraformed, via Paul Birch’s process of burning an atmosphere out of the crust to make canals, while Venus is snowing carbon dioxide (another Birch idea.) The larger moons of Jupiter and Saturn are extensively inhabited and debating their terraforming options.
On Mercury Stan introduces us to the moving city Terminator, which runs along rails powered entirely via thermal expansion of the rails as they conduct heat from Mercurian day and radiate it away in the Mercurian night. Mercury is a planet of art museums and installations of art carved out of the periodically broiled and frozen landscape. Sunwalkers walk forever away from the Sunrise, braving the occasional glimpse of the naked Sun, which can kill with an unpredictable x-ray blast from a solar flare.
The two main protagonists are Swan, an Androgyn resident of Mercury, a renowned designer of space-habitats whose mother, Alex, has just died; and Wahram, a Wombman resident of Titan, who is negotiating access to solar energy for the terraforming of his home world. Due to a freak “accident” the two must journey through the emergency tunnels underneath Mercury’s Day-side, an experience which draws them together in spite of their being literally worlds apart in personality and home-planets.
There’s a lot going on in 2312 and Stan only shows us a sliver. Plots to reshape the worlds and plots to overthrow the hegemony of humankind. But for our two interplanetary lovers such forces can’t keep them apart.
Of course, I’m not here to review the book. This being Crowlspace, I’m looking at the technicalities. Minor points of fact have a way of annoying me when they’re wrong. For example, Stan mentions Venus wanting to import nitrogen from Titan, which is rather ridiculous. The atmosphere of Venus is 3.5% nitrogen by volume, which works out as the equivalent of 2.25 bars partial pressure. Or about 3 times what’s on Earth. So importing nitrogen would be the equivalent of the Inuit importing ice.
Stan is critical of interstellar travel being portrayed as “easy” in Science-fiction. He mentions a fleet of habitats being sent out on a 1,000 year voyage to a star 20 light-years away – given the uncertainties of these things and the size of habitats, that’s not an unreasonable cruise speed. Yet he describes it as being “a truly fantastic speed for a human craft.” But at one point he mentions that a trip to Pluto from Venus takes 3 weeks, an unremarkable trip seemingly, yet that requires a top-speed of 0.022c – significantly higher than the starships!
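The numbers behind that quibble: Venus to Pluto is at best about 39 AU ≈ 5.8×10¹² m. Covering that in 3 weeks (≈1.8×10⁶ s) needs an average speed of ≈3.2×10⁶ m/s ≈ 0.011c; with a flip-and-brake profile the peak is about twice the average, hence the 0.022c. The starships, by contrast, cruise at 20 light-years per 1,000 years = 0.02c.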
He’s a bit vague about the pace of travel in the Solar System via “Aldrin cycles” – cycling orbits between destinations, timed to repeat. Buzz Aldrin developed the concept for easy transport to Mars – have a space-station with all the life-support in the right orbit and you only have to fly the passengers to the station, rather than all their supplies. The station either recycles everything or is resupplied by much slower automated freighters using electric propulsion. Stan’s mobile habitats do the former, with some small topping-up. But such Cyclers are slow. Stan mentions a Mercury-Vesta Cycler trip taking 8 days. Not possible for any Cycler orbit that’s bound to the Sun (i.e. cycling) – a straight-line parabolic orbit would take a minimum of 88.8 days. A proper Cycler needs to be on an orbit that can be shaped via the gravity of the planets to return it to the planets it is linking together, else too much fuel will be expended to reshape the orbit. Preferably an orbit that isn’t too elliptical else the shuttle fuel bill is too high. A minimum-energy Hohmann orbit would take 285 days to link Mercury and Vesta.
These are quibbling points. The real meat of the book is the optimistic future – a dazzlingly diverse one – that is basically plausible. Enticingly possible, in fact. Yet the optimism is tempered by the fact that not everyone is living in a wise, open society. Earth, even in 2312, remains a home to suffering masses, their plight made worse by the greenhouse effect’s flooding of low-lying parts of the Globe, and the Sixth Great Extinction’s erasure of most large animals from the planet (fortunately kept alive or genetically revived in the mobile habitats.) New York is mostly flooded, becoming a city of canal-streets, something I can imagine New Yorkers adapting to with aplomb.
The real challenge of the 24th Century, in Stan’s view, is the terraforming of the Earth, remaking a biosphere that we’ve ruined in our rush to industrialise. Perhaps. We certainly have many challenges ahead over the next 300 years…
## Life in the Year 100 billion trillion – Part I
If our Universe is open, either flat or hyperbolic in geometry, then it will expand forever… or at least until space-time’s warranty expires and a new vacuum is born from some quantum flip. Prior to that, most likely immensely distant, event the regular stars will go out and different sources of energy will be needed by Life in the Universe. A possible source is from the annihilation of dark matter, which might be its own anti-particle, thus self-annihilating when it collides. One possibility is that neutrinos will turn out to be dark matter and at a sufficiently low neutrino temperature, neutrinos will add energy to the electrons of atoms of iron and nickel by their annihilation. This is the energy source theorised by Robin Spivey (A Biotic Cosmos Demystified) to allow ice-covered Ocean Planets to remain hospitable for 10 billion trillion (10²³) years.
Presently planets are relatively rare, just a few per star. In about 10 trillion years or so, according to Spivey’s research, Type Ia supernovae will scatter into space sufficient heavy elements to make about ~0.5 million Ocean Planets per supernova, eventually quite efficiently converting most of the baryonic matter of the Galaxies into Ocean Planets. A typical Ocean Planet will mass about 5×10²⁴ kg, be 12,200 km in diameter with a 100 km deep Ocean, capped in ice, but heated by ~0.1 W/m² of neutrino annihilation energy, for a planet total of ~50 trillion watts. Enough for an efficient ecosystem to live comfortably – our own biosphere traps a tiny 0.1% of the sunlight falling upon it, by comparison. In the Milky Way alone some 3,000 trillion (3×10¹⁵) Ocean Planets will ultimately be available for colonization. Such a cornucopia of worlds will be unavailable for trillions of years. The patience of would-be Galactic Colonists is incomprehensible to a young, barely evolved species like ours.
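Checking Spivey’s power budget: a 12,200 km diameter sphere has a surface area of 4π × (6.1×10⁶ m)² ≈ 4.7×10¹⁴ m², so ~0.1 W/m² of neutrino heating delivers ≈ 4.7×10¹³ W – the ~50 trillion watts quoted.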
We’ll discuss the implications further in Part II.
## Post 100 YSS… First, Fast Thoughts
As a fan I can tell you it was an SF-Fan’s dream come true to meet, in the flesh, so many SF-writers and so many Icarii, as well as the Heart & Mind of the TZF. People I met, for the first time, but have corresponded with for a while…
(1) Paul Gilster & Marc Millis, the guys who set the train in motion some years ago
(2) The Icarus Interstellar Board
(3) wide Team Icarus
(4) The Benford Twins
(5) my co-author, Gerald Nordley, and perhaps the best ultra-hard SF writer I know.
(6) Athena Andreadis, molecular biologist and SF thinker
(7) John Cramer, author of “Analog’s” ‘The Alternate View’ and physicist
(8) Jack Sarfatti, the Showman of Speculative Physics
Others I met/heard who maybe aren’t so well-known, but may prove influential in times to come. Such as Young K. Bae, laser propulsion researcher and inventor of the Photonic Thruster (a very clever multi-bounce photon-propulsion system). Mark Edwards, of Green Independence, who might have a way of feeding Starship Crews and the whole of Starship Earth.
Fast thoughts – David Nyeland gave us a BIG hint on how to launch a Starship in 100 years… reach out to EVERYONE.
## Orlando is Awesome!
Too much to tell on the very aggressive schedule here, so a detailed report will need to wait, but I met a FAN! You know who you are. Thanks for the encouragement and I promise more content – I have some actual journal paper ideas gestating and I will need input from my audience, I suspect. One is a paper on Virga-style mega-habitats and Dysonian SETI, using a new idea from Milan Cirkovic. The other looks at exoplanets and “Earth-like” versus the astrobiology term “habitable” – the two are not the same, and the consequences are sobering. The recent paper by Traub (go look on the arXiv), which estimates that 1/3 of FGK stars have a terrestrial planet in the habitable zone, does NOT mean there are Earths everywhere. What it does mean, and how the HZ can be improved as a concept, is what I want to discuss.
More later. I have my talk to review and get straight in my head – no hand notes, though I have practiced it – plus I want something helpful to say to Gerald Nordley, mass-beam Guru, on the paper to which he graciously added me as a co-author. Also I will summarize my talk and direct interested readers to the new web-site from John Hunt, MD, on the interstellar ESCAPE plan.
## SpaceX to Mars!
This story keeps getting more interesting as I trawl around the Mars-Soc web-site. Bob Zubrin discusses the plan in more detail…
Discussion of Using SpaceX Hardware to Reach Mars
2. Technical Alternatives within the Mission Architecture
a. MAV and associated systems
In the plan described above, methane/oxygen is proposed as the propulsion system for the MAV, with all the methane brought from Earth, and all the oxygen made on Mars from the atmosphere. This method was selected over any involving hydrogen (either as feedstock for propellant manufacture or as propellant itself) as it eliminates the need to transport cryogenic hydrogen from Earth or store it on the Martian surface, or the need to mine Martian soil for water. If terrestrial hydrogen can be transported to make the methane, about 1.9 tons of landed mass could be saved. Transporting methane was chosen over a system using kerosene/oxygen for Mars ascent (with kerosene coming from Earth and oxygen from Mars) because methane offers higher performance (Isp 375 s vs. Isp 350 s) than kerosene, and its selection makes the system more evolvable: once Martian water does become available, methane can be readily manufactured on Mars, saving 2.6 tons of landed mass per mission compared to transporting methane, or about 3 tons per mission compared to transporting kerosene. That said, the choice of kerosene/oxygen for Mars ascent instead of methane/oxygen is feasible within the limits of the mass delivery capabilities of the systems under discussion. It thus represents a viable alternative option, reducing development costs, albeit with reduced payload capability and evolvability.
b. ERV and associated systems.
A kerosene/oxygen system is suggested for Trans-Earth injection. A methane/oxygen system would offer increased capability if it were available. The performance improvement is modest, however, as the required delta-V for TEI from a highly elliptical orbit around Mars is only 1.5 km/s. Hydrogen/oxygen is rejected for TEI in order to avoid the need for long duration storage of hydrogen. The 14 ton Mars orbital insertion mass estimate is based on the assumption of the use of an auxiliary aerobrake with a mass of 2 tons to accomplish the bulk of braking delta-V. If the system can be configured so that the Dragon’s own aerobrake can play a role in this maneuver, this delivered mass could be increased. If it is decided that the ~1 km/s delta-V required for minimal Mars orbit capture needs to be done via rocket propulsion, this mass could be reduced to as little as 12 tons (assuming kerosene/oxygen propulsion). This would still be enough to enable the mission. The orbit employed by the ERV is a loosely bound 250 km by 1 sol orbit. This minimizes the delta-V for orbital capture and departure, while maintaining the ERV in a synchronous relationship to the landing site. Habitable volume on the ERV can be greatly expanded by using an auxiliary inflatable cabin, as discussed in the Appendix.
c. The hab craft.
The Dragon is chosen for the primary hab and ERV vehicle because it is available. It is not ideal. Habitation space of the Dragon alone after landing appears to be about 80 square feet, somewhat smaller than the 100 square feet of a small standard Tokyo apartment. Additional habitation space and substantial mission logistics backup could be provided by landing an additional Dragon at the landing site in advance, loaded with extra supplies and equipment. Solar flare protection can be provided on the way out by proper placement of provisions, or by the use of a personal water-filled solar flare protection “sleeping bag.” For concepts for using inflatables to greatly expand living space during flight and/or after landing, see note in Appendix.
…which gratifyingly echoes my own thoughts. Landing a Dragon directly on Mars has a great appeal and as a Mars Descent Vehicle it’s a good system, given the modifications Zubrin outlines. But is it a Mars Habitat? The Inflatable extensions make it viable and I was wondering if Bigelow, SpaceX and Mars-Soc couldn’t combine forces on a design. Zubrin argues for extensions of the architecture itself, calling for eventual Heavy Lift systems able to throw 30 tonnes to Mars, but IMO the Falcon Heavy Tanker modification is sufficient to launch ~24.7 tonne payloads now, and with an LH2/LOX Stage II it might easily launch ~30-40 tonnes. Alternatively two FHTs can be ganged to launch 55-60 tonnes directly now. However such modifications are deployed is perhaps irrelevant. What’s needed is the political will to commit to Mars Colonization, not just a one-off stunt. All the good ideas to improve how we get there are irrelevant until we actually do…
## SpaceX to Mars?
SpaceX has answered the skeptics recently with a frank discussion of its costs thus far in its May 4, 2011 Update. An excerpt of relevance is this…
WHY THE US CAN BEAT CHINA: THE FACTS ABOUT SPACEX COSTS
The Falcon 9 launch vehicle was developed from a blank sheet to first launch in four and half years for just over $300 million. The Falcon 9 is an EELV class vehicle that generates roughly one million pounds of thrust (four times the maximum thrust of a Boeing 747) and carries more payload to orbit than a Delta IV Medium. The Dragon spacecraft was developed from a blank sheet to the first demonstration flight in just over four years for about $300 million. Last year, SpaceX became the first private company, in partnership with NASA, to successfully orbit and recover a spacecraft. The spacecraft and the Falcon 9 rocket that carried it were designed, manufactured and launched by American workers for an American company. The Falcon 9/Dragon system, with the addition of a launch escape system, seats and upgraded life support, can carry seven astronauts to orbit, more than double the capacity of the Russian Soyuz, but at less than a third of the price per seat.
Note the cost of developing the “Dragon”, which is the first private aerospace vehicle proven capable of return from orbit: about $300 million, with a dry mass of ~4.2 tons, thus ~$72 million/ton to develop. Large Mars mission vehicles might be assumed to cost similar amounts per ton of aerospace machinery to develop. But can it be done even cheaper?
The Mars Society has made an impassioned plea to President Obama to consider a minimalistic Mars Mission concept based on the Falcon Heavy and Dragon space-vehicle…
SpaceX’s Falcon-9 Heavy rocket will have a launch capacity of 53 metric tons to low Earth orbit. This means that if a conventional hydrogen-oxygen chemical rocket upper stage were added, it would have the capability of sending 17.5 tons on a trajectory to Mars, placing 14 tons in Mars orbit, or landing 11 tons on the Martian surface.
The company has also developed and is in the process of demonstrating a crew capsule, known as the Dragon, which has a mass of about eight tons. While its current intended mission is to ferry up to seven astronauts to the International Space Station, the Dragon’s heat shield system is capable of withstanding re-entry from interplanetary trajectories, not just from Earth orbit. It’s rather small for an interplanetary spaceship, but it is designed for multiyear life, and it should be spacious enough for a crew of two astronauts who have the right stuff.
Thus a Mars mission could be accomplished utilizing three Falcon-9 Heavy launches. One would deliver to Mars orbit an unmanned Dragon capsule with a kerosene/oxygen chemical rocket stage of sufficient power to drive it back to Earth. This is the Earth Return Vehicle.
A second launch will deliver to the Martian surface an 11-ton payload consisting of a two-ton Mars Ascent Vehicle employing a single methane/oxygen rocket propulsion stage, a small automated chemical reactor system, three tons of surface exploration gear, and a 10-kilowatt power supply, which could be either nuclear or solar.
The Mars Ascent Vehicle would carry 2.6 tons of methane in its propellant tanks, but not the nine tons of liquid oxygen required to burn it. Instead, the oxygen could be made over a 500-day period by using the chemical reactor to break down the carbon dioxide that composes 95% of the Martian atmosphere.
Using technology to generate oxygen rather than transporting it saves a great deal of mass. It also provides copious power and unlimited oxygen to the crew once they arrive.
Once these elements are in place, the third launch would occur, which would send a Dragon capsule with a crew of two astronauts on a direct trajectory to Mars. The capsule would carry 2500 kilograms of consumables—sufficient, if water and oxygen recycling systems are employed, to support the two-person crew for up to three years. Given the available payload capacity, a light ground vehicle and several hundred kilograms of science instruments could be taken along as well.
The crew would reach Mars in six months and land their Dragon capsule near the Mars Ascent Vehicle. They would spend the next year and a half exploring.
Using their ground vehicle for mobility and the Dragon as their home and laboratory, they could search the Martian surface for fossil evidence of past life from the era when the Red Planet featured standing bodies of liquid water. They also could set up drilling rigs to bring up samples of subsurface water, within which native microbial life may yet persist to this day. If they find either, it will prove that life is not unique to the Earth, answering a question that thinking men and women have wondered upon for millennia.
At the end of their 18-month surface stay, the crew would transfer to the Mars Ascent Vehicle, take off, and rendezvous with the Earth Return Vehicle in orbit. This craft would then take them on a six-month flight back to Earth, whereupon it would enter the atmosphere and splash down to an ocean landing.
Spending ~2.5 years in a Dragon capsule will take a couple of claustrophiles, but people have endured in remarkably nasty conditions. So why not? It’s daring, but is it necessary?
Zubrin asks for a cryogenic upper-stage to throw the Mars vehicles to Mars, but is that really needed? Can better performance be achieved by using a slightly different approach? In a previous post I outlined the Falcon Heavy Tanker (FHT) – essentially a Falcon Heavy Stage 2 with a stretched tank and a docking collar for coupling to a Dragon. I estimated 55 tonnes of RP-1/LOX could be placed in orbit, with an FHT dry-mass of 2.5 tonnes. To get to Mars takes ~3.7 km/s from LEO, the so-called Trans-Mars Insertion (TMI) delta-vee; thus with a vacuum Isp = 342 s, the Falcon Heavy Tanker can push 27.2 tonnes into a TMI orbit, for a net payload of ~24.7 tonnes. With aerobraking that’s considerably more than the Mars Society’s quoted payloads, providing somewhat better living conditions for the explorers.
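To make that arithmetic reproducible, here’s a minimal sketch of the mass-ratio calculation via the rocket equation, using the figures assumed above (55 t propellant, 2.5 t dry mass, Isp 342 s, 3.7 km/s TMI):

```python
import math

g0 = 9.80665   # m/s^2, standard gravity
isp = 342.0    # s, vacuum Isp of the FHT stage (assumed above)
dv = 3700.0    # m/s, trans-Mars injection delta-vee from LEO
prop = 55.0    # tonnes of RP-1/LOX orbited by the tanker
dry = 2.5      # tonnes, FHT dry mass

ratio = math.exp(dv / (g0 * isp))      # required mass ratio m0/m1
# (prop + dry + payload) = ratio * (dry + payload)  =>  solve for payload:
payload = (prop + dry - ratio * dry) / (ratio - 1.0)
print(f"mass ratio {ratio:.2f}, TMI payload ~{payload:.1f} t")  # ~24.8 t
```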
Of course the payloads need to be orbited separately to the FHTs, but at less than half the Falcon Heavy’s usual 53 tonne payload, that means 2 separate Mars payloads can be orbited by one vehicle, and supported by a separately orbited crew in a Dragon. Potentially we can reduce the FHTs to just three to support a beefier Mars Semi-Direct mission which doesn’t mean living in a Dragon capsule for 2.5 years! Alternatively we launch the Mars Ascent Vehicle directly via a single Falcon Heavy, as per the Mars Society plan, and launch the Mars-bound Habitat and Earth Return Vehicles via 2 FHT launches and 1 Falcon Heavy. Four Falcon Heavy launches versus 3, but delivering more payload.
Zubrin is, I suspect, hoping to minimize the cost of developing new systems, thus using two Dragons and only needing to develop a low-mass Mars Ascent Vehicle. However the current Dragon probably can’t be used as a Habitat for 2+ years without some development work, thus the difference between the two approaches is probably negligible. I appreciate his gumption and burning desire to get a finger-hold on Mars as soon as possible, but I’d like to see the developed systems able to do more than a stunt.
Go SpaceX! Go Mars-Soc!
## Hydrogen Greenhouse Worlds…
The first planets to form probably attracted a primary atmosphere of H/He from the solar Nebula. In our Solar System these were driven off from the four Inner Planets and retained by the Outer Giants, but in theory smaller planets can retain such a mixture. I’ve speculated about such worlds on these blog pages before and now there’s a new arXiv piece discussing the greenhouse abilities of H/He…
Hydrogen Greenhouse Planets Beyond the Habitable Zone
…the summary conclusion being that 40 bars of H2 can keep the surface at 280 K out to 10 AU around a G type star and 1.5 AU around an M star. Thus planets with oceans of water can exist at Saturn-like orbital distances given enough primary atmosphere. Super-Earths are the most likely to retain their H/He primary atmospheres due to their higher gravity, as young stars put out a LOT of EUV light which energizes the hydrogen and strips it away in a billion years or so if the planet is too close. Out past ~2 AU from a G-star that effect isn’t so dramatic, so a Super-Earth where the Asteroid Belt is today would’ve retained its primary atmosphere and probably be warm & wet.
Such a “habitable planet” is only barely definable as habitable because it has liquid water, and it is unlikely to remain warm/wet if the hydrogen is exploited/depleted – whether by methanogens making methane from it with carbon dioxide, or by oxygenic photosynthesisers making O2 via CO2+H2O->CH2O+O2, which then reacts rapidly with hydrogen. Could another kind of photosynthesis evolve to restore the hydrogen lost? Hydrogen makers exist on Earth, so it’s not unknown in biochemical terms, but I wonder what other compounds they would need to release net hydrogen from methane/sugars/water?
# Modeling the Choose function
In statistics, one often encounters the choose function $${x \choose y}$$ which encodes the number of ways of choosing $$y$$ items from a set of $$x$$ items. How would one go about modeling a choose equality constraint
$${x \choose y} = C$$
without explicitly using the factorial-based formulation (if possible)?
• So, are $x$ and $y$ variables in your optimization problem? I would like to learn more about problems where this kind of constraint would be required! Please share more context, if you can. Oct 20, 2019 at 4:53
• The last link in the tag that I added (after the edit to the tag wiki is approved) points to Stanley's Enumerative Combinatorics (.PDF).
– Rob
Oct 21, 2019 at 6:23
• @Rob: Thanks for the reference. I'm already convinced of the usefulness of the choose function in general. My question was more about using it in constraints with variables. Oct 21, 2019 at 7:06
• I can't think of an application either, I would appreciate an application/use case for such a function. Oct 22, 2019 at 13:08
I am going to assume that $$x \in \mathbb{N}$$ and $$y \in \mathbb{N}$$ are variables, and that $$C \in \mathbb{N}$$ is a constant. In this case, you can benefit from the fact that your equality constraint does not have that many possible solutions.
### Case 1: $$C = 1$$
This only happens when $$y=0$$ or $$y = x$$. Assume that we have some upper bounds $$\bar{x}$$ and $$\bar{y}$$ on $$x$$ and $$y$$, respectively. You can then model the choose equality constraint as follows: $$\begin{eqnarray} 0 &\le& y & \le& \bar{y}z\\ -\bar{x}(1-z) &\le& y-x &\le& \bar{y}(1-z) \end{eqnarray}$$ for a binary variable $$z \in \mathbb{B}$$. Note that $$z=0$$ corresponds to $$y=0$$, and $$z=1$$ corresponds to $$y = x$$.
### Case 2: $$C \neq 1$$
It is conjectured that for $$C \neq 1$$ your equality constraint does not have many solutions; see Singmaster's conjecture on Wikipedia. In fact, for $$C \le 2^{48} \approx 3 \times 10^{14}$$, it has been shown that there are never more than 8 different solutions in terms of $$x$$ and $$y$$.
So for a given $$C$$ that is not too big, you can simply look up all $$n$$ solutions $$a_i, b_i\in \mathbb{N}$$ such that $${a_i \choose b_i} = C$$, for $$i = 1,\dots, n$$. Next, introduce $$n$$ binary variables $$z_i \in \mathbb{B}$$, such that $$z_i = 1$$ if and only if solution $$i$$ is chosen. That is $$\begin{eqnarray} x &=& \sum_{i=1}^n a_i z_i\\ y &=& \sum_{i=1}^n b_i z_i\\ \sum_{i=1}^n z_i &=& 1 \end{eqnarray}$$
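As a rough illustration of this formulation in Python with PuLP: enumerating candidates up to $$a \le C$$ is sufficient for $$C \neq 1$$, since any solution with $$0 < b < a$$ satisfies $${a \choose b} \ge {a \choose 1} = a$$. The helper function and the value $$C = 3003$$ (which appears 8 times in Pascal's triangle) are just for demonstration.

```python
import math
from pulp import LpProblem, LpVariable, LpBinary, LpMinimize, lpSum

def choose_solutions(C):
    """All (a, b) with C(a, b) == C; searching a <= C is enough for C != 1."""
    sols = []
    for a in range(2, C + 1):
        for b in range(a // 2 + 1):
            v = math.comb(a, b)
            if v > C:
                break                      # C(a, b) grows in b up to a/2
            if v == C:
                sols.append((a, b))
                if b != a - b:
                    sols.append((a, a - b))  # symmetric solution
    return sols

C = 3003
sols = choose_solutions(C)

prob = LpProblem("choose_eq_constraint", LpMinimize)
x = LpVariable("x", lowBound=0, cat="Integer")
y = LpVariable("y", lowBound=0, cat="Integer")
z = [LpVariable(f"z{i}", cat=LpBinary) for i in range(len(sols))]

prob += lpSum(z) == 1                                    # choose one solution
prob += x == lpSum(a * zi for (a, _), zi in zip(sols, z))
prob += y == lpSum(b * zi for (_, b), zi in zip(sols, z))
```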
AFAIK, some optimization software such as GAMS has some nice functions to deal with this, for example a factorial function (fact(x)).
Indeed, approximations of the factorial function based on probability distribution functions like the Gamma or Beta might be applied and interpreted using (in)equality constraints.
Reference: Factorial, Gamma and Beta Functions
• Probably these can only be used for parameters (inputs), not decision variables? I don't know GAMS well, but I'd be surprised if they can be used for decision variables. Or if they can, the model will certainly be very nonlinear. Oct 20, 2019 at 0:33
• @LarrySnyder610, thanks so much. you are right. it could be used for parameters, not variables. About GAMS, it has a fact(x) function to deal with it. Oct 20, 2019 at 5:08
From Wikipedia's webpage on "binomial coefficients":
"The symbol $$\tbinom {n}{k}$$ is usually read as "$$n$$ choose $$k$$" because there are $$\tbinom {n}{k}$$ ways to choose an (unordered) subset of $$k$$ elements from a fixed set of $$n$$ elements.
Arranging the numbers $$\tbinom {n}{0},\tbinom {n}{1},\ldots ,\tbinom {n}{n}$$ in successive rows for $$n=0,1,2,\ldots$$ gives a triangular array called Pascal's triangle, satisfying the recurrence relation
\begin{align} \binom {n}{k} & ={\binom {n-1}{k}}+{\binom {n-1}{k-1}}. \\ \end{align}
Commonly, a binomial coefficient is indexed by a pair of integers $$n ≥ k ≥ 0$$ and is written $$\tbinom {n}{k}$$. It is the coefficient of the $$x^k$$ term in the polynomial expansion of the binomial power $$(1 + x)^n$$, and it is given by the formula
\begin{align} \binom {n}{k} & ={\frac {n!}{k!(n-k)!}}. \qquad\qquad \end{align}
For example, the fourth power of $$1 + x$$ is
\begin{aligned}(1+x)^{4}&={\tbinom {4}{0}}x^{0}+{\tbinom {4}{1}}x^{1}+{\tbinom {4}{2}}x^{2}+{\tbinom {4}{3}}x^{3}+{\tbinom {4}{4}}x^{4}\\&=1+4x+6x^{2}+4x^{3}+x^{4},\end{aligned}
and the binomial coefficient $$\tbinom {4}{2} ={\tfrac {4!}{2!2!}}=6$$ is the coefficient of the $$x^2$$ term.
The section titled: "computing the value of binomial coefficients" explains:
• Recursive formula
One method uses the recursive, purely additive, formula $$\binom {n}{k} = \binom {n-1}{k-1} + \binom {n-1}{k} \quad \text{for all integers } n,k:1\leq k\leq n-1, \qquad$$
with initial/boundary values
$${\binom {n}{0}}={\binom {n}{n}}=1\quad {\text{for all integers }}n\geq 0, \qquad \qquad \qquad \quad \qquad$$
• Multiplicative formula
A more efficient method to compute individual binomial coefficients is given by the formula
$$\binom {n}{k} ={\frac {n^{\underline {k}}}{k!}}={\frac {n(n-1)(n-2)\cdots (n-(k-1))}{k(k-1)(k-2)\cdots 1}}=\prod _{i=1}^{k}{\frac {n+1-i}{i}},\qquad$$
where the numerator of the first fraction $$n^{\underline {k}}$$ is expressed as a falling factorial power. This formula is easiest to understand for the combinatorial interpretation of binomial coefficients. The numerator gives the number of ways to select a sequence of $$k$$ distinct objects, retaining the order of selection, from a set of $$n$$ objects. The denominator counts the number of distinct sequences that define the same $$k$$-combination when order is disregarded.
Due to the symmetry of the binomial coefficient with regard to $$k$$ and $$n−k$$, calculation may be optimised by setting the upper limit of the product above to the smaller of $$k$$ and $$n−k$$.
• Factorial formula
Finally, though computationally unsuitable, there is the compact form, often used in proofs and derivations, which makes repeated use of the familiar factorial function:
$${\binom {n}{k}}={\frac {n!}{k!\,(n-k)!}}\quad {\text{for }}\ 0\leq k\leq n, \qquad \qquad \qquad \qquad \qquad \qquad \qquad$$
where $$n!$$ denotes the factorial of $$n$$. This formula follows from the multiplicative formula above by multiplying numerator and denominator by $$(n − k)!$$; as a consequence it involves many factors common to numerator and denominator. It is less practical for explicit computation (in the case that $$k$$ is small and $$n$$ is large) unless common factors are first cancelled (in particular since factorial values grow very rapidly).
What is the algorithm for counting combinations?
```python
def count_combinations(k, n):
    # binomial coefficient C(n, k) via the multiplicative formula
    if k < 0 or k > n:
        return 0
    k = min(k, n - k)                  # symmetry: C(n, k) = C(n, n - k)
    res = 1
    for i in range(1, k + 1):
        res = res * (n - k + i) // i   # the division is always exact
    return res
```
So for Catalan numbers, a sequence of positive integers where the $$n$$th term, denoted $$C_n$$, is given by the following formula:
$$C_n = (2n)! / ((n + 1)!n!)$$
The $$n$$ factorial is equal to the product of all of the integers from $$n$$ down to $$1$$.
$$n \cdot (n - 1) \cdot (n - 2) \cdot \ldots \cdot 2 \cdot 1$$
Without using factorials, that's:
$$C_{n}={2n \choose n}-{2n \choose n+1}={1 \over n+1}{2n \choose n}\quad {\text{ for }}n\geq 0,$$
and
$$C_{0}=1\quad {\text{and}}\quad C_{n+1}={\frac {2(2n+1)}{n+2}}C_{n}.$$
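As a quick check that the factorial-free recurrence matches the binomial forms, a small Python sketch (the function name is mine):

```python
import math

def catalan_list(n_max):
    """Catalan numbers via the recurrence C_{n+1} = 2(2n+1)/(n+2) * C_n."""
    c = [1]
    for n in range(n_max):
        c.append(c[-1] * 2 * (2 * n + 1) // (n + 2))  # division is exact
    return c

cs = catalan_list(10)
assert all(c == math.comb(2 * n, n) - math.comb(2 * n, n + 1)
           for n, c in enumerate(cs))
print(cs)   # [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796]
```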
# Signal reconstruction from noisy multichannel samples
We consider the signal reconstruction problem for signals sampled in the multichannel way in the presence of noise. If the samples are inexact, the rigorous enforcement of multichannel interpolation is inappropriate, so reasonable smoothing and regularized corrections are indispensable. In this paper, we propose several alternative methods for signal reconstruction from noisy multichannel samples under different smoothing and regularization principles. We compare these methods theoretically and experimentally in various situations. To demonstrate the effectiveness of the proposed methods, a probability interpretation and an error analysis are provided. Additionally, numerical simulations as well as some guidelines for using the methods are presented.
## 1 Introduction
The main specialty of multichannel sampling [1, 2] is that the samples are taken from multiple transformed versions of the function. The transformation can be the derivative, the Hilbert transform, or a more general linear time-invariant system [3]. The classical multichannel sampling theorem [1] is only available for bandlimited functions in the sense of the Fourier transform, and it has been generalized to bandlimited functions in the sense of the fractional Fourier transform (FrFT) [4], the linear canonical transform (LCT) [5, 6] and the offset LCT [7]. In a real application, only finitely many samples, albeit possibly a large number, are given in a bounded region [8]. That is, the underlying signal is time-limited. Thus, reconstruction by the sampling formulas for bandlimited functions is inappropriate because bandlimited functions cannot be time-limited, by the uncertainty principle [9]. A time-limited function can be viewed as a period of a periodic function. Certain studies have been devoted to sampling theorems for periodic bandlimited functions [10, 11]. Moreover, the multichannel sampling approach has been extended to time-limited functions [12].
Let $\mathbb{T}$ be the unit circle and denote by $L^p(\mathbb{T})$, $1\le p<\infty$, the totality of functions $f$ such that
$$\|f\|_p := \left(\frac{1}{2\pi}\int_{\mathbb{T}}|f(t)|^p\,dt\right)^{\frac{1}{p}} < \infty.$$
Let $f, h_m \in L^2(\mathbb{T})$, $1\le m\le M$, and define
$$g_m(t) = (f*h_m)(t) = \frac{1}{2\pi}\int_{\mathbb{T}} f(s)\,h_m(t-s)\,ds,$$
for $1\le m\le M$. It was shown in [12] that there exist functions $y_m$, $1\le m\le M$, such that
$$T_N f(t) := \frac{1}{L}\sum_{m=1}^{M}\sum_{p=0}^{L-1} g_m\!\left(\frac{2\pi p}{L}\right) y_m\!\left(t-\frac{2\pi p}{L}\right) \tag{1.1}$$
satisfies the following interpolation consistency:
$$(T_N f * h_m)\!\left(\frac{2\pi p}{L}\right) = (f*h_m)\!\left(\frac{2\pi p}{L}\right),\quad 0\le p\le L-1,\ 1\le m\le M. \tag{1.2}$$
Here, $g_m$ is a filtered function with input $f$ and impulse response $h_m$, and the interpolating functions $y_1,\dots,y_M$ are determined by $h_1,\dots,h_M$. The continuous function $T_N f$ is called a multichannel interpolation (MCI) for $f$. The MCI reveals that one can reconstruct a time-limited function by using multiple types of samples simultaneously. If $f$ is periodic bandlimited, it can be perfectly recovered by (1.1).
It is noted that finding a function satisfying the interpolation consistency (1.2) amounts to solving a system of equations, and the matrix involved in this inverse problem may have a large condition number if the sample sets have a high degree of correlation. In spite of this, in [8, 12], the authors showed that the large scale inverse problem could be converted to a simple inversion problem of small ($M\times M$) matrices by partitioning the frequency band into small pieces. Moreover, the closed form of the MCI formula as well as the FFT-based implementation algorithm (see Algorithm 1) were provided.
The MCI guarantees that a signal can be well reconstructed from its clean multichannel samples, but little has been said about the case where the samples are noisy. It is of great significance to examine the errors that arise in signal reconstruction by (1.1) in the presence of noise. In this paper, we consider the reconstruction problem in the situation where a signal is sampled in a multichannel way and the samples are corrupted by additive noise; i.e., we will use the noisy samples
$$s_{m,p} = g_m\!\left(\frac{2\pi p}{L}\right) + \epsilon_{m,p},\quad 0\le p\le L-1,\ 1\le m\le M, \tag{1.3}$$
to reconstruct $f$. Here, $\{\epsilon_{m,p}\}$ is an i.i.d. noise process with $E[\epsilon_{m,p}] = 0$ and $\mathrm{Var}(\epsilon_{m,p}) = \sigma_\epsilon^2$.
The interpolation of noisy data introduces undesirable error in the reconstructed signal. There is a need to estimate the error of the MCI for the observations defined by (1.3). An accurate error estimate of the MCI in the presence of noise helps to design suitable reconstruction formulas from noisy multichannel samples. Note that the MCI applies to various kinds of sampling schemes, so the error analysis can also be used to analyze which kinds of sampling schemes perform well for signal reconstruction in a noisy environment. In the current paper, we provide an error estimate for the MCI from noisy multichannel samples, and express the error as a function of the sampling rate as well as the parameters associated with the sampling schemes. In addition, we will show how the sampling rate and the sampling scheme affect the reconstruction error caused by noise.
Based on the error estimate of the MCI in the noisy environment, we will provide a class of signal reconstruction methods by introducing some reasonable smoothing and regularized corrections to the MCI such that the reconstructed signal could be robust to noise. In other words, the reconstruction should not be affected much by small changes in the data. Besides, we need to make sure that the reconstructed signal will be convergent to the original signal as the sampling rate tends to infinity.
If $f$ is a periodic bandlimited signal, only the error caused by noise needs to be considered. Otherwise, the aliasing error should be taken into account as well. It is noted that smoothing and regularization operations restrain high frequencies in general. It follows that reducing the noise error by methods based on smoothing or regularization may increase the aliasing error. Thus it is necessary to make a trade-off between the noise error and the aliasing error such that the reconstructed signal can be convergent to $f$ in the non-bandlimited case as the sampling rate tends to infinity.
The objective of this paper is to study the aforementioned problems that arise in the signal reconstruction from noisy multichannel data. The main contributions are summarized as follows.
1. The error estimate of the signal reconstruction by the MCI from noisy samples is given.
2. We propose four methods, i.e., post-filtering, pre-filtering, and two regularization approaches, to reduce the error caused by noise in the multichannel reconstruction. The parameters of post-filtering and pre-filtering are optimal in the sense of the expectation of mean square error (EMSE).
3. The convergence property of post-filtering is verified theoretically and experimentally. The numerical simulations as well as some guidelines to use the proposed signal reconstruction methods are also provided.
The rest of the paper is organized as follows. Section 2 briefly reviews the multichannel interpolation (MCI) and its FFT-based fast algorithm. The error estimate for the MCI of noisy samples is provided. In Section 3, the techniques of post-filtering, pre-filtering and regularized approximation are applied to reconstruct from its noisy multichannel samples. The comparative experiments for the different methods are conducted in Section 4. Finally, conclusion and discussion are drawn at the end of the paper.
## 2 Error analysis of the MCI from noisy samples
### 2.1 The MCI and its fast implementation algorithm
We begin by reviewing the MCI in more detail. Let $N_1 \le N_2$ be integers. We denote the totality of periodic bandlimited functions (trigonometric polynomials) of the following form:
$$f(t) = \sum_{n\in I_N} a(n)e^{int},\quad I_N = \{n : N_1\le n\le N_2\}.$$
The bandwidth of $f$ is defined as the cardinality of $I_N$, denoted by $N_s$. The set $I_N$ can be expressed as $I_N = \bigcup_{j=1}^{M} I_j$, where
$$I_j = \{n : N_1 + (j-1)L \le n \le N_1 + jL - 1\}.$$
We use the Fourier coefficients $b_m(n)$ of $h_m$ to define the matrix
$$H_n = \big[b_m(n+jL-L)\big]_{jm}.$$
Suppose that $H_n$ is invertible for every $n$ and denote its inverse matrix by
$$H_n^{-1} = \begin{bmatrix} q_{11}(n) & q_{12}(n) & \cdots & q_{1M}(n)\\ q_{21}(n) & q_{22}(n) & \cdots & q_{2M}(n)\\ \vdots & \vdots & & \vdots\\ q_{M1}(n) & q_{M2}(n) & \cdots & q_{MM}(n)\end{bmatrix}.$$
Then the interpolating functions $y_m$ in (1.1) are given by
$$y_m(t) = \sum_{n\in I_N} r_m(n)e^{int},\quad 1\le m\le M,$$
where
$$r_m(n) = \begin{cases} q_{mj}(n+L-jL), & \text{if } n\in I_j,\ j=1,2,\cdots,M,\\ 0, & \text{if } n\notin I_N.\end{cases}$$
It was shown in [12] that if $f$ is not bandlimited, the aliasing error of the MCI is given by
$$\sum_{n\notin I_N}|a(n)|^2 + \sum_{k\notin\{1,2,\dots,M\}}\sum_{n\in I_k}|a(n)|^2\sum_{l=1}^{M}\Bigg|\sum_{m=1}^{M} r_m(n+(l-k)L)\,b_m(n)\Bigg|^2.$$
Moreover, the MCI can be implemented by an FFT-based algorithm (see Algorithm 1), and the well-known FFT interpolation [13] is a special case of the MCI.
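For a concrete feel of that special case, here is a minimal numerical sketch of single-channel FFT interpolation (the F1 case); this is not the paper's Algorithm 1, and the band edges and variable names are illustrative:

```python
import numpy as np

# Recover f(t) = sum_{n=N1}^{N2} a(n) e^{int} from L = N2 - N1 + 1 uniform
# samples, then evaluate it anywhere on the circle.
N1, N2 = -3, 3
L = N2 - N1 + 1
rng = np.random.default_rng(0)
a = rng.normal(size=L) + 1j * rng.normal(size=L)     # true coefficients a(n)

n = np.arange(N1, N2 + 1)
f = lambda t: (a[None, :] * np.exp(1j * np.outer(t, n))).sum(axis=1)

tp = 2 * np.pi * np.arange(L) / L                    # sample points 2*pi*p/L
# demodulate by e^{-i N1 t_p} so the band starts at frequency 0, then DFT:
a_hat = np.fft.fft(f(tp) * np.exp(-1j * N1 * tp)) / L

t = np.linspace(0, 2 * np.pi, 200)
f_rec = (a_hat[None, :] * np.exp(1j * np.outer(t, n))).sum(axis=1)
print(np.max(np.abs(f_rec - f(t))))                  # ~1e-15: exact recovery
```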
### 2.2 The error estimate for the MCI of noisy samples
Given the noisy data (1.3), we define
$$f_{N,\epsilon}(t) := \frac{1}{L}\sum_{m=1}^{M}\sum_{p=0}^{L-1}\left(g_m\!\left(\frac{2\pi p}{L}\right)+\epsilon_{m,p}\right) y_m\!\left(t-\frac{2\pi p}{L}\right).$$
If $f$ is periodic bandlimited, so that $T_N f = f$, then
$$E\left(\frac{1}{2\pi}\int_0^{2\pi}|f_{N,\epsilon}(t)-f(t)|^2\,dt\right) = E\left(\frac{1}{2\pi}\int_0^{2\pi}\Bigg|\frac{1}{L}\sum_{m=1}^{M}\sum_{p=0}^{L-1}\epsilon_{m,p}\,y_m\!\left(t-\frac{2\pi p}{L}\right)\Bigg|^2 dt\right)$$
$$= \frac{1}{L^2}\,E\sum_{m=1}^{M}\sum_{p=0}^{L-1}\sum_{m'=1}^{M}\sum_{p'=0}^{L-1}\epsilon_{m,p}\,\epsilon_{m',p'}\,\frac{1}{2\pi}\int_0^{2\pi} y_m\!\left(t-\frac{2\pi p}{L}\right)\overline{y_{m'}\!\left(t-\frac{2\pi p'}{L}\right)}\,dt$$
$$= \frac{\sigma_\epsilon^2}{L^2}\sum_{m=1}^{M}\sum_{p=0}^{L-1}\left(\frac{1}{2\pi}\int_0^{2\pi}\Big|y_m\!\left(t-\frac{2\pi p}{L}\right)\Big|^2 dt\right) = \frac{\sigma_\epsilon^2}{L}\sum_{m=1}^{M}\|y_m\|_2^2 = \frac{\sigma_\epsilon^2}{L}\sum_{m=1}^{M}\sum_{n\in I_N}|r_m(n)|^2.$$
Suppose that the $\epsilon_{m,p}$ are independent random variables with the same normal distribution $N(0,\sigma_\epsilon^2)$; it is easy to verify that $E[\epsilon_{m,p}\,\epsilon_{m',p'}] = \sigma_\epsilon^2\,\delta_{mm'}\delta_{pp'}$ and $\mathrm{Var}(\epsilon_{m,p}\,\epsilon_{m',p'}) \le 2\sigma_\epsilon^4$. Let
$$z(m,m',p,p') := \frac{1}{2\pi}\int_0^{2\pi} y_m\!\left(t-\frac{2\pi p}{L}\right)\overline{y_{m'}\!\left(t-\frac{2\pi p'}{L}\right)}\,dt.$$
From the Hölder inequality, we have that
$$\big|z(m,m',p,p')\big|^2 \le \|y_m\|_2^2\,\|y_{m'}\|_2^2.$$
It follows that
$$\mathrm{Var}\left(\frac{1}{2\pi}\int_0^{2\pi}|f_{N,\epsilon}(t)-f(t)|^2\,dt\right) = \mathrm{Var}\left(\frac{1}{2\pi}\int_0^{2\pi}\Bigg|\frac{1}{L}\sum_{m=1}^{M}\sum_{p=0}^{L-1}\epsilon_{m,p}\,y_m\!\left(t-\frac{2\pi p}{L}\right)\Bigg|^2 dt\right)$$
$$\le \frac{1}{L^4}\sum_{m=1}^{M}\sum_{p=0}^{L-1}\sum_{m'=1}^{M}\sum_{p'=0}^{L-1}\big|z(m,m',p,p')\big|^2\,\mathrm{Var}(\epsilon_{m,p}\,\epsilon_{m',p'}) \le \frac{2\sigma_\epsilon^4}{L^4}\sum_{m=1}^{M}\sum_{p=0}^{L-1}\sum_{m'=1}^{M}\sum_{p'=0}^{L-1}\big|z(m,m',p,p')\big|^2$$
$$\le \frac{2\sigma_\epsilon^4}{L^2}\left(\sum_{m=1}^{M}\|y_m\|_2^2\right)^2 = 2\left(\frac{\sigma_\epsilon^2}{L}\sum_{m=1}^{M}\|y_m\|_2^2\right)^2.$$
Therefore the variance of the mean square error is bounded, and is not larger than twice the square of its expectation.
In order to show the mean square error of the MCI caused by noise more clearly, we consider three concrete sampling schemes, namely, the reconstruction of $f$ from (1) the samples of $f$ (single-channel); (2) the samples of $f$ and its Hilbert transform (two-channel); (3) the samples of $f$ and its derivative (two-channel). For simplicity, we abbreviate the MCI of the above types of samples as F1, FH2 and FD2 respectively, and denote by $N_s$ the total number of samples. For F1, we have that $M=1$ and $L=N_s$. It is easy to see that
$$r(n,\mathrm{F1},N_s) = 1 \quad\text{for } -\frac{N_s}{2}+1 \le n \le \frac{N_s}{2}.$$
For FH2, we have that $M=2$ and $L=N_s/2$. Since
$$H_n = \begin{bmatrix} 1 & -i\,\mathrm{sgn}(n)\\ 1 & -i\,\mathrm{sgn}(n+L)\end{bmatrix},$$
it is clear that
$$H_n^{-1} = \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{2}\\[2pt] -\tfrac{i}{2} & \tfrac{i}{2}\end{bmatrix} \ \text{ for } -L+1\le n\le -1,\qquad H_0^{-1} = \begin{bmatrix} 1 & 0\\ -i & 1\end{bmatrix}.$$
It follows that
$$r_1(n,\mathrm{FH2},N_s) = \begin{cases} \tfrac{1}{2}, & \text{if } 1\le |n|\le L-1,\\ 0, & \text{if } n=L,\\ 1, & \text{if } n=0.\end{cases}$$
$$r_2(n,\mathrm{FH2},N_s) = \begin{cases} -\tfrac{i}{2}, & \text{if } -L+1\le n\le -1,\\ \tfrac{i}{2}, & \text{if } 1\le n\le L-1,\\ -i, & \text{if } n=0,\\ 1, & \text{if } n=L.\end{cases}$$
For FD2, by direct computations, we have that
$$H_n = \begin{bmatrix} 1 & in\\ 1 & i(L+n)\end{bmatrix},\qquad H_n^{-1} = \begin{bmatrix} \dfrac{L+n}{L} & -\dfrac{n}{L}\\[4pt] \dfrac{i}{L} & -\dfrac{i}{L}\end{bmatrix}.$$
It follows that
$$r_1(n,\mathrm{FD2},N_s) = \begin{cases} 1+\dfrac{n}{L}, & \text{if } -L+1\le n\le 0,\\[4pt] 1-\dfrac{n}{L}, & \text{if } 1\le n\le L.\end{cases}$$
$$r_2(n,\mathrm{FD2},N_s) = \begin{cases} \dfrac{i}{L}, & \text{if } -L+1\le n\le 0,\\[4pt] -\dfrac{i}{L}, & \text{if } 1\le n\le L.\end{cases}$$
To study FH2 and FD2, we assume that $N_s$ is an even number and $L = N_s/2$. It should be noted that $L = N_s$ for F1 because it is a single-channel interpolation. In contrast, $L = N_s/2$ for FH2 and FD2 as they are two-channel interpolations. Thus, to compare the performance of the three interpolation methods under the same total number of samples $N_s$, one needs to keep in mind that $L$ has different values for F1 and for FH2 (or FD2).
Having introduced the Fourier coefficients of the interpolation functions for F1, FH2 and FD2, we have that
$$\frac{1}{N_s}\sum_{n\in I_N}\big|r(n,\mathrm{F1},N_s)\big|^2 = 1,$$
$$\frac{1}{L}\sum_{m=1}^{M}\sum_{n\in I_N}\big|r_m(n,\mathrm{FH2},N_s)\big|^2 = 1 + \frac{4}{N_s},$$
$$\frac{1}{L}\sum_{m=1}^{M}\sum_{n\in I_N}\big|r_m(n,\mathrm{FD2},N_s)\big|^2 = \frac{2}{3} + \frac{28}{3N_s^2}.$$
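For example, with $N_s = 100$ these noise-amplification factors are $1$ for F1, $1.04$ for FH2 and $\approx 0.668$ for FD2, which already anticipates the ordering of the three schemes observed in the experiments below.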
Besides the theoretical error estimate, experiments are conducted to compare the reconstructed results of F1, FH2 and FD2. Let
$$\phi(z) = \frac{0.08z^2+0.06z}{10(1.3-z)(1.5-z)} + \frac{0.05z^3+0.09z}{10(1.2+z)(1.3+z)}, \tag{2.1}$$
$$D(t,k_1,k_2) = \sum_{n=k_1}^{k_2} e^{int}, \tag{2.2}$$
$$\phi_B(t) = \phi(e^{it}) * D(t,-16,16).$$
If $k_2 = -k_1 = K$, then $D(t,k_1,k_2)$ is the Dirichlet kernel of order $K$. We use $\phi_B$ as the test function. Obviously, it is bandlimited with bandwidth $33$. The theoretical errors, the experimental errors and the reconstructed results are shown in Figure 1, and some conclusions can be drawn as follows.
1. FD2 performs better than F1 in terms of noise immunity and FH2 has the worst performance.
2. As the total number of samples increases, the expectation of mean square error (EMSE) would not decrease if there is no additional correction made in the multichannel reconstruction.
3. The variance of mean square error (VMSE) is bounded and it decreases as $N_s$ increases.
###### Remark 2.1
In the second row, first column of Figure 1, we see that the errors of F1, FH2 and FD2 become significantly large when $N_s < 33$. This is because the test function has bandwidth $33$; the reconstruction error is then caused not only by noise but also by aliasing.
## 3 Multichannel reconstruction from noisy samples
The MCI cannot work well if one observes noisy data, because $f_{N,\epsilon}$ does not converge to $f$ in the sense of the expectation of mean square error (EMSE). To alleviate this problem, some smoothing corrections are required. If $f$ is bandlimited and the number of samples is larger than the bandwidth, we only need to consider the error caused by noise. Suppose that $f$ is bandlimited to $I_N$ and $N_s = LM$, where $N_s$ is the total number of samples. In this section, the techniques of post-filtering, pre-filtering and regularized approximation are applied to reconstruct $f$ from its noisy multichannel samples.
### 3.1 Post-filtering
In [14], the ideal low-pass post-filtering is applied to the Shannon sampling formula and the error of signal reconstruction is also evaluated. Different from the previous work, we first derive the EMSE of the reconstruction by MCI and post-filtering. Then the filter is obtained by solving the optimization problem that minimizes the EMSE.
#### 3.1.1 Formulation of post-filtering
A natural smoothing approach is to convolve the reconstructed signal with a function $w$. Let
$$\tilde f(t,N_s,K) = (f_{N,\epsilon} * w)(t).$$
Note that
$$\big(f * D(\cdot,K_1,K_2)\big)(t) = f(t)$$
provided that $K_1 \le N_1$ and $N_2 \le K_2$. It follows that
$$\tilde f(t,N_s,K) - f(t) = (f_{N,\epsilon}*w)(t) - \big(f*D(\cdot,K_1,K_2)\big)(t) = \big[f*\big(w - D(\cdot,K_1,K_2)\big)\big](t) + \frac{1}{L}\sum_{m=1}^{M}\sum_{p=0}^{L-1}\epsilon_{m,p}\,[y_m*w]\!\left(t-\frac{2\pi p}{L}\right).$$
Since $\{\epsilon_{m,p}\}$ is an i.i.d. noise process with zero mean and variance $\sigma_\epsilon^2$, then
$$E\Big(\big|\tilde f(t,N_s,K)-f(t)\big|^2\Big) = \big|[f*(w-D(\cdot,K_1,K_2))](t)\big|^2 + \frac{\sigma_\epsilon^2}{L^2}\sum_{m=1}^{M}\sum_{p=0}^{L-1}\Big|[y_m*w]\!\left(t-\frac{2\pi p}{L}\right)\Big|^2.$$
Denote the Fourier coefficients of $w$ by $\beta_k$; it follows that
$$E\left(\frac{1}{2\pi}\int_0^{2\pi}\big|\tilde f(t,N_s,K)-f(t)\big|^2 dt\right) = \big\|f*(w-D(\cdot,K_1,K_2))\big\|_2^2 + \frac{\sigma_\epsilon^2}{L}\sum_{m=1}^{M}\|y_m*w\|_2^2$$
$$= \sum_{k=K_1}^{K_2}\big|a(k)(\beta_k-1)\big|^2 + \frac{\sigma_\epsilon^2}{L}\sum_{m=1}^{M}\sum_{k=K_1}^{K_2}\big|r_m(k,\mathrm{Type},N_s)\,\beta_k\big|^2.$$
###### Remark 3.1
Since the functions considered here are square integrable, the interchange of expectation and integral is permissible by the dominated convergence theorem. Similar cases occur elsewhere in the paper; we will omit the explanations there.
Let $\beta = (\beta_{K_1},\dots,\beta_{K_2})^T$ and
$$\Phi_1(\beta) = \sum_{k=K_1}^{K_2}\big|a(k)(\beta_k-1)\big|^2 + \frac{\sigma_\epsilon^2}{L}\sum_{m=1}^{M}\sum_{k=K_1}^{K_2}\big|r_m(k,\mathrm{Type},N_s)\,\beta_k\big|^2. \tag{3.1}$$
Since $\big||\beta_k|-1\big| \le |\beta_k-1|$ and the equality holds only if $\beta_k \ge 0$, it follows that
$$\Phi_1(\beta_+) - \Phi_1(\beta) = \sum_{k=K_1}^{K_2}|a(k)|^2\Big(\big||\beta_k|-1\big|^2 - |\beta_k-1|^2\Big) \le 0,$$
where $\beta_+ = (|\beta_{K_1}|,\dots,|\beta_{K_2}|)^T$. Thus, if $\beta^*$ minimizes $\Phi_1$, then we may take $\beta^*_k \ge 0$ for every $k$. To minimize $\Phi_1$, we rewrite it as follows:
$$\Phi_1(\beta) = (A_+\beta - a_+)^T(A_+\beta - a_+) + \frac{\sigma_\epsilon^2}{L}\sum_{m=1}^{M}(R_{m,+}\beta)^T(R_{m,+}\beta),$$
where
$$a_+ = \big(|a(K_1)|,\dots,|a(K_2)|\big)^T,\qquad A_+ = \mathrm{diag}(a_+),$$
$$R_{m,+} = \mathrm{diag}\big(|r_m(K_1,\mathrm{Type},N_s)|,\dots,|r_m(K_2,\mathrm{Type},N_s)|\big).$$
Differentiating with respect to $\beta$ and setting the gradient to zero, we obtain the optimal solution minimizing the expectation of mean square error. That is,
$$\beta^* = \left(A_+^T A_+ + \frac{\sigma_\epsilon^2}{L}\sum_{m=1}^{M} R_{m,+}^T R_{m,+}\right)^{-1} A_+^T\, a_+. \tag{3.2}$$
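Since $A_+$ and the $R_{m,+}$ are diagonal, (3.2) acts componentwise as a Wiener-type shrinkage, $\beta^*_k = |a(k)|^2 / \big(|a(k)|^2 + \frac{\sigma_\epsilon^2}{L}\sum_m |r_m(k)|^2\big)$. A minimal sketch with illustrative names (in practice the spectral density would be replaced by the estimate of Section 3.1.2):

```python
import numpy as np

def optimal_post_filter(a_sq, r, sigma2, L):
    """Componentwise form of (3.2) for diagonal A_+ and R_{m,+}.

    a_sq   : array of |a(k)|^2 for k = K1..K2 (spectral density or estimate)
    r      : array of shape (M, K) holding r_m(k, Type, Ns)
    sigma2 : noise variance sigma_eps^2
    L      : number of samples per channel
    """
    noise = (sigma2 / L) * (np.abs(r) ** 2).sum(axis=0)
    return a_sq / (a_sq + noise)   # beta*_k in [0, 1], shrinks noisy bins
```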
#### 3.1.2 Estimation of spectral density
The formula (3.2) gives the optimal parameters of the post-filter, minimizing the EMSE between the filtered signal and the original (clean) signal $f$. The key problem is that the square of the absolute value of $a(k)$, namely the spectral density of $f$, is unknown in typical cases. Thus we have to estimate $|a(k)|^2$ from the noisy multichannel samples.
There are various techniques for spectral density estimation; representative methods are the periodogram, Welch’s method, autoregressive models and moving-average models, etc. Here, we provide an unbiased estimation of $|a(n)|^2$ by using the uncorrelatedness of signal and noise.
Let
$$s_m = (s_{m,0}, s_{m,1},\dots,s_{m,L-1})^T,\quad 1\le m\le M,$$
$$g_m = \big(g_m(t_0), g_m(t_1),\dots,g_m(t_{L-1})\big)^T,\quad t_p = \frac{2\pi p}{L},\quad 1\le m\le M,$$
$$\epsilon_m = (\epsilon_{m,0}, \epsilon_{m,1},\dots,\epsilon_{m,L-1})^T,\quad 1\le m\le M,$$
then
$$s_m = g_m + \epsilon_m.$$
To estimate $|a(n)|^2$, we need to introduce the vectors $d_0$ and $d_\epsilon$, where
$$d_0 = \frac{1}{L}\begin{bmatrix} F_L U_L\, g_1\\ F_L U_L\, g_2\\ \vdots\\ F_L U_L\, g_M\end{bmatrix},\qquad d_\epsilon = \frac{1}{L}\begin{bmatrix} F_L U_L\, s_1\\ F_L U_L\, s_2\\ \vdots\\ F_L U_L\, s_M\end{bmatrix}. \tag{3.3}$$
Here, $F_L$ is the $L$-th order DFT matrix
$$F_L = \begin{bmatrix} \omega^0 & \omega^0 & \omega^0 & \cdots & \omega^0\\ \omega^0 & \omega^1 & \omega^2 & \cdots & \omega^{L-1}\\ \omega^0 & \omega^2 & \omega^4 & \cdots & \omega^{2(L-1)}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ \omega^0 & \omega^{L-1} & \omega^{2(L-1)} & \cdots & \omega^{(L-1)^2}\end{bmatrix} \tag{3.4}$$
with $\omega = e^{-2\pi i/L}$, and $U_L$ is the diagonal matrix
$$U_L = \mathrm{diag}\big(\omega^0, \omega^{N_1}, \omega^{2N_1},\dots,\omega^{(L-1)N_1}\big). \tag{3.5}$$
Note that $E[\epsilon_m] = 0$ and $\{\epsilon_{m,p}\}$ is an i.i.d. noise process; it follows that
$$E[d_\epsilon d_\epsilon^*] = \frac{\sigma_\epsilon^2}{L}\,I + d_0 d_0^*.$$
Let $B$ be an $LM$ by $LM$ matrix whose entry in the $m$-th row and $n$-th column is
$$B(m,n) = \begin{cases} H_{N_1+k-1}^{-1}(j+1,\,i+1), & \text{if } m = iL+k,\ n = jL+k,\\ 0, & \text{otherwise},\end{cases}$$
where $0\le i,j\le M-1$ and $1\le k\le L$. By direct computations, we have that
$$E[B\,d_\epsilon d_\epsilon^*\,B^*] = B\left(\frac{\sigma_\epsilon^2}{L}\,I + d_0 d_0^*\right)B^* = \frac{\sigma_\epsilon^2}{L}\,BB^* + B\,d_0 d_0^*\,B^*.$$
If $f$ is bandlimited, it can be verified that the diagonal of $B\,d_0 d_0^*\,B^*$ is equal to $\big(|a(n)|^2\big)_{n\in I_N}$ (by a method similar to the proof of Lemma 1 in [12]). It follows that the diagonal of
$$B\,d_\epsilon d_\epsilon^*\,B^* - \frac{\sigma_\epsilon^2}{L}\,BB^* \tag{3.6}$$
is an unbiased estimation of $\big(|a(n)|^2\big)_{n\in I_N}$.
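For intuition, here is a single-channel ($M=1$) sketch of (3.6): in that case $B\,d_\epsilon$ reduces to the vector of empirical Fourier coefficients, so subtracting $\sigma_\epsilon^2/L$ simply debiases the periodogram. The concrete band, coefficients and trial counts below are illustrative:

```python
import numpy as np

L, sigma = 64, 0.3
n = np.arange(-2, 4)                               # band N1 = -2, ..., N2 = 3
a = np.array([0.5, 1.0, 2.0, 1.0, 0.4, 0.2], dtype=complex)

tp = 2 * np.pi * np.arange(L) / L
f_samp = (a[None, :] * np.exp(1j * np.outer(tp, n))).sum(axis=1)

rng = np.random.default_rng(1)
est = np.zeros(len(n))
trials = 5000
for _ in range(trials):
    s = f_samp + rng.normal(0.0, sigma, L)         # noisy samples
    d = np.fft.fft(s * np.exp(-1j * n[0] * tp)) / L
    est += np.abs(d[:len(n)]) ** 2 - sigma**2 / L  # debiased periodogram
print(est / trials)   # ~ |a(n)|^2 = [0.25, 1.0, 4.0, 1.0, 0.16, 0.04]
```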
To validate the effectiveness of the above method for estimating the spectral density, noisy multichannel samples are used to estimate $|a(n)|^2$ by formula (3.6) experimentally. We perform a series of experiments with different quantities and types of samples. Let
$$f(t) = \sum_{n=N_1}^{N_2} a(n)e^{int},\quad N_1 = -2,\ N_2 = 3 \tag{3.7}$$
be the test function.
The mean square error (MSE) for estimating the spectral density of $f$ is defined by
$$\delta_{sde} = \frac{\sum_{n=N_1}^{N_2}\big||a(n)|^2 - \tilde A(n)\big|^2}{N_2 - N_1 + 1},$$
where $\tilde A(n)$ is the $n$-th diagonal element of (3.6). To show the performance of the estimation more accurately, each experiment is repeated many times, and the corresponding average MSE is an approximation of the expectation of MSE.
The experimental results are presented in Table 1. The second column shows the expectation of MSE when only samples of $f$ are used to estimate the spectral density. It can be seen that the expectation of MSE for spectral density estimation varies in inverse proportion to the total number of samples: the experimentally obtained MSE tends to zero as the total number of samples goes to infinity, and if the same total number of samples is used, the fluctuations of MSE caused by different sampling schemes are not significant. Besides, it is noted that the traditional single-channel method for spectral density estimation cannot utilize multichannel information to improve accuracy. By contrast, the proposed multichannel method fuses the different types of samples, thereby extending the scope of application and enhancing the precision, as seen from columns four and six of Table 1.
### 3.2 Pre-filtering
If $f$ is bandlimited, it can be expressed as
$$f(t) = \frac{1}{L}\sum_{m=1}^{M}\sum_{p=0}^{L-1} g_m\!\left(\frac{2\pi p}{L}\right) y_m\!\left(t-\frac{2\pi p}{L}\right).$$
Explain two different examples of cognate pairs and their role in the immune response.
Question:
Explain two different examples of cognate pairs and their role in the immune response.
# Supremum of all y-coordinates of the Mandelbrot set
Let $M\subset \mathbb R^2$ be the Mandelbrot set. What is $\sup\{ y : (x,y) \in M \}$? Is this known?
To be more descriptive: What is the supremum of all y-coordinates of all black points in the following picture:
Picture File:Mandel zoom 00 mandelbrot set.jpg by Wolfgang Beyer licensed under CC-BY-SA 3.0
I found a point a teeny weensy bit more northern. Call the original period 13 Misiurewicz point c1. I made a sequence of images showing the repeating map to c1, and also to c2, which is shown below. The point c2 is $1.68\times 10^{-98}i$ farther north than c1. Basically, there is a small rotational component of 1/200, so that eventually the map to Misiurewicz point c1 does not lead to the northernmost point. The map below shows 30 images on the path to c1. Here is the original c1 point, along with a new c2 point, both printed accurate to 105 decimal digits. c2 is also a Misiurewicz point, with a preperiod of 197, followed by a period of 13, but it is slightly closer to the northernmost point. c1, c2 and the northernmost point all agree for 97 decimal digits.
c1=-0.207107867093967732893764544285894983866865721506089742782655437797926445872029873945686503449818426679850 + 1.12275706363259748461604158116265882079904682664638092967742378016679413783606239593843344659123247751651i
c2=-0.207107867093967732893764544285894983866865721506089742782655437797926445872029873945686503449815177663235 + 1.12275706363259748461604158116265882079904682664638092967742378016679413783606239593843344659123249431573i
The image above starts from the main mandelbrot, then shows eight images zooming in, each centered vertically on subsequent Misiurewicz points, with the northernmost point held constant near the top of the image. The Misiurewicz points in these eight images have preperiods of 3,4,5,7,8,9,11,12 before reaching the repeating fixed point. These points are eventually periodic, and are calculated sequentially using Newton's method, where for each point with a preperiod of "n", $f^n(0)+f^{n+1}(0)=0$. I can provide more details or the pari-gp code if interested.
This is a group of 10 images, showing the repeating pattern on the path to c1, which is very close to the northernmost point. Each is centered vertically on subsequent Misiurewicz points, with the northernmost point held constant near the top of the image. From top left, this image contains Misiurewicz points with pre-periods of 13,14,16,17,18,20,21,22,24,25, incrementing with the pattern, "1211211211". After the pre-period, each Misiurewicz lands on a repeating fixed point.
This is the second group of repeating images, similar to the first group, with Misiurewicz points having preperiods of 26,27,29,30,31,33,34,35,37,38, incrementing with the pattern, "1211211211". The second group of 10 images are magnified approximately $10^6$ more than the first group of 10 images. If you repeat this "1211211211" pattern of ten images infinitely, you arrive at a Misiurewicz tip point near the northernmost point, with a pre-period of 1, and a period of 13. That is the c1 nearly northernmost point from above, which is accurate to 97 decimal digits.
If you repeat this "1211211211"" pattern 13 more times for a total of 15 repetitions, then you arrive at a "fork" at the Misiurewicz with a preperiod of 208, where the path to the northernmost point changes. The left image below shows the fork in the road, magnified $2.5\cdot 10^{95}$, where the nearly northernmost c1 point is on the left fork, but the northernmost point is on the right fork.
The fork in the road occurs at the Misiurewicz with a preperiod of 208=13*16. The three images in the fork-in-the-road image have preperiods of 208, 210, 211. After these three zoom-in images, the repeating pattern again reverts back to the repeating sequences of 10 images shown earlier. If you follow this repeating "1211211211" pattern infinitely, you arrive at the c2 Misiurewicz tip with a preperiod of 197, followed by a period of 13. This is the slightly more northern point c2 from above, which is nearly the northernmost point of the Mandelbrot set.
Here we show the first repeating "1211211211" pattern after the fork, starting with the Misiurewicz with a preperiod of 212. One can repeat this pattern infinitely. However, once again, there is another fork in the road after repeating the "1211211211" pattern a total of 24 more times. Then you get to another fork with three images, before returning to the repeating pattern. This limit would be a point I call "c3", an even more northern point, which is $1.14\times 10^{-246}i$ farther north than the "c2" point. The c3 point has a preperiod of 513, followed by a period of 13. If one continues following this new repeating pattern there is also a slightly more northern c4, with a preperiod of 842 followed by a period of 13. Carried out ad infinitum, this leads to a Misiurewicz point "cn" with a preperiod of 197, followed by a period of 329. I believe "cn" is nearly the northernmost point, accurate to over 7500 decimal digits...
• thanks. The question intrigued me. As you can guess, I wound up putting a lot of work into this over a week's time. – Sheldon L Oct 27 '14 at 14:42
• actually, the work on the two answers spanned several weeks' time. My original first answer was Sept 30th, and my last update to this more correct 2nd answer was Oct 16th. – Sheldon L Oct 27 '14 at 14:56
• To appreciate your work, I will award you with another bounty (I just have to wait one day to do so...) – Stephan Kulla Oct 27 '14 at 21:25
• thanks, I would've said that wasn't necessary. My goal is that some folks will be able to enjoy looking at this stuff; that's all. – Sheldon L Oct 27 '14 at 22:54
I made a supremum image page of 31 images leading to the conjectured supremum point, for $f(x)=x^2+C$. If we take C to be Robert Munafo's point, we can see that perhaps this is the point where $f^{14}(0)=-f^{1}(0),\;\;f^{15}(0)=f^{2}(0)$, where $C\approx -0.207107867093967+1.122757063632597i$, which leads to $f^{n}(0)$ repeating with a period of 13 after the preperiod. The point C is then one of the zeros of a polynomial of degree $2^{13}$; it is the zero nearest that point. We estimate the zero numerically by iterating with Newton's method, since the polynomial is too large to work with directly. The result is printed to 60 decimal digits. Because this orbit eventually repeats, the point by definition never escapes to infinity, so it is a member of the Mandelbrot set. Such preperiodic points are called Misiurewicz points, and they are algebraic numbers.
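The same Newton scheme sketched above carries over to tips with longer periods: for the period-13 condition here, one solves $F(C)=f^{14}(0)+f^{1}(0)=0$ instead, i.e. accumulates $F=z_{14}+z_{1}$ and $F'=z_{14}'+z_{1}'$ in place of the fixed-point pair, and the Newton step itself is unchanged.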
$$C \approx -0.207107867093967732893764544285894983866865721506089742782655+ 1.12275706363259748461604158116265882079904682664638092967742i$$
Here is an image vertically centered on that point, $C \pm 10^{-28}i$, with a small green box at the center of the image. From appearances, the point C might be the "top" of the Mandelbrot set... If so, then the northernmost point is that Misiurewicz point. But I have no idea how to prove it. We do know that as we zoom into the Mandelbrot set at a Misiurewicz point, zooming in becomes self-similar in the limit, rather than increasingly chaotic. For the point in question, each zoom by a factor of about $10^6$ appears self-similar, so that an image zoomed in by $10^{-22}$, by $10^{-28}$, or by $10^{-34}$ would all look nearly identical to this image. Also, there is apparently no rotational component in the self-similarity; I think no rotation would be required if this Misiurewicz point is the top, and it would be nice to show that there is none; see the supremum image page I made.
There is Wolf Jung's Mandel program, available at http://www.mndynamics.com/ which has some nice tutorials on Misiurewicz points.
• I found a point a teeny weensy bit more northernmost. Call the original period 13 Misiurewicz point c1. I found a point c2, which is $1.68*10^{-98}$ farther north. Basically, there is a small rotational component of 1/200, so that eventually the map to my Misiurewicz point c1, does not lead to the northernmost point – Sheldon L Oct 5 '14 at 9:56
Robert Munafo's Mu-ency site calls this the northernmost point of the Mandelbrot set, giving the coordinate as $-0.207107867093967+1.122757063632597i$. A quick search on Google and OEIS turns up no references.
• Is this a Misiurewicz point, which eventually repeats? Or does it wander forever, never repeating? Misiurewicz points are solutions of polynomials. Or is it a chaotic point? – Sheldon L Sep 29 '14 at 14:33
• The imaginary part has since been added to the OEIS. The real part does not yet have an entry. – Peter Kagey Feb 14 '17 at 0:35 |
## CryptoDB
### Shun Li
#### Publications
2022, CRYPTO:
Rebound attack was introduced by Mendel et al. at FSE 2009 to fulfill a heavy middle round of a differential path for free, utilizing the degree of freedom from states. The inbound phase was extended to 2 rounds by the Super-Sbox technique invented by Lamberger et al. at ASIACRYPT 2009 and Gilbert and Peyrin at FSE 2010. In ASIACRYPT 2010, Sasaki et al. further reduced the requirement of memory by introducing the non-full-active Super-Sbox. In this paper, we further develop this line of research by introducing Super-Inbound, which is able to connect multiple 1-round or 2-round (non-full-active) Super-Sbox inbound phases by fully utilizing the degrees of freedom from both states and key, yet without the use of large memory. This essentially extends the inbound phase by up to 3 rounds. We applied this technique to find classic or quantum collisions on several AES-like hash functions, and improved the attacked round number by 1 to 5 in targets including AES-128 and Skinny hashing modes, Saturnin-hash, and Grøstl-512. To demonstrate the correctness of our attacks, the semi-free-start collision on 6-round AES-128-MMO/MP with estimated time complexity $2^{24}$ in the classical setting was implemented and an example pair was found instantly on a standard PC.
2020, TOSC:
As perfect building blocks for the diffusion layers of many symmetric-key primitives, the construction of MDS matrices with lightweight circuits has received much attention from the symmetric-key community. One promising way of realizing low-cost MDS matrices is based on the iterative construction: a low-cost matrix becomes MDS after raising it to a certain power. To be more specific, if A^t is MDS, then one can implement A instead of A^t to achieve the MDS property at the expense of an increased latency of t clock cycles. In this work, we identify the exact lower bound of the number of nonzero blocks for a 4 × 4 block matrix to be potentially iterative-MDS. Subsequently, we show that the theoretically lightest 4 × 4 iterative MDS block matrix (whose entries or blocks are 4 × 4 binary matrices) with minimal nonzero blocks costs at least 3 XOR gates, and a concrete example achieving the 3-XOR bound is provided. Moreover, we prove that there is no hope for previous constructions (GFS, LFS, DSI, and sparse DSI) to beat this bound. Since the circuit latency is another important factor, we also consider the lower bound of the number of iterations for certain iterative MDS matrices. Guided by these bounds and based on the ideas employed to identify them, we explore the design space of lightweight iterative MDS matrices with other dimensions and report on improved results. Whenever we are unable to find better results, we try to determine the bound of the optimal solution. As a result, the optimality of some previous results is proved.
2019, TOSC:
MDS matrices are important building blocks providing diffusion functionality for the design of many symmetric-key primitives. In recent years, continuous efforts have been made on the construction of MDS matrices with small area footprints in the context of lightweight cryptography. Just recently, Duval and Leurent (ToSC 2018/FSE 2019) reported some 32 × 32 binary MDS matrices with branch number 5, which can be implemented with only 67 XOR gates, whereas the previously known lightest ones of the same size cost 72 XOR gates. In this article, we focus on the construction of lightweight involutory MDS matrices, which are even more desirable than ordinary MDS matrices, since the same circuit can be reused when the inverse is required. In particular, we identify some involutory MDS matrices which can be realized with only 78 XOR gates with depth 4, whereas the previously known lightest involutory MDS matrices cost 84 XOR gates with the same depth. Notably, the involutory MDS matrix we find is much smaller than the AES MixColumns operation, which requires 97 XOR gates with depth 8 when implemented as a block of combinatorial logic that can be computed in one clock cycle. However, with respect to latency, the AES MixColumns operation is superior to our 78-XOR involutory matrices, since the AES MixColumns can be implemented with depth 3 by using more XOR gates. We prove that the depth of a 32 × 32 MDS matrix with branch number 5 (e.g., the AES MixColumns operation) is at least 3. Then, we enhance Boyar's SLP-heuristic algorithm with circuit depth awareness, such that the depth of its output circuit is limited. Along the way, we give a formula for computing the minimum achievable depth of a circuit implementing the summation of a set of signals with given depths, which is of independent interest. We apply the new SLP heuristic to a large set of lightweight involutory MDS matrices, and we identify a depth-3 involutory MDS matrix whose implementation costs 88 XOR gates, which is superior to the AES MixColumns operation with respect to both lightweightness and latency, and enjoys the extra involution property.
#### Coauthors
Xiaoyang Dong (1)
Jian Guo (1)
Lei Hu (2)
Chaoyun Li (2)
Phuong Pham (1)
Danping Shi (1)
Siwei Sun (2)
Zihao Wei (1) |
WIKISKY.ORG
# NGC 4300
### Related articles
Kinematics of the local universe. XII. 21-cm line measurements of 586 galaxies with the new Nançay receiver
This paper presents 586 new 21-cm neutral hydrogen line measurements carried out with the FORT receiver of the meridian transit Nançay radiotelescope in the period July 2000-March 2003. This observational programme is part of a larger project aiming at collecting an exhaustive and magnitude-complete HI extragalactic catalogue for Tully-Fisher applications. It is associated with the building of the MIGALE spectroscopic archive and database. Tables 2, 3 and HI-profiles and corresponding comments are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/430/373, or directly at our web site http://klun.obs-nancay.fr

Completing H I observations of galaxies in the Virgo cluster
High sensitivity (rms noise 0.5 mJy) 21-cm H I line observations were made of 33 galaxies in the Virgo cluster, using the refurbished Arecibo telescope, which resulted in the detection of 12 objects. These data, combined with the measurements available from the literature, provide the first set of H I data that is complete for all 355 late-type (Sa-Im-BCD) galaxies in the Virgo cluster with mp ≤ 18.0 mag. The Virgo cluster H I mass function (HIMF) that was derived for this optically selected galaxy sample is in agreement with the HIMF derived for the Virgo cluster from the blind HIJASS H I survey and is inconsistent with the Field HIMF. This indicates that both in this rich cluster and in the general field, neutral hydrogen is primarily associated with late-type galaxies, with marginal contributions from early-type galaxies and isolated H I clouds. The inconsistency between the cluster and the field HIMF derives primarily from the difference in the optical luminosity function of late-type galaxies in the two environments, combined with the HI deficiency that is known to occur in galaxies in rich clusters. Tables \ref{t1}, \ref{sample_dat} and Appendix A are only available in electronic form at http://www.edpsciences.org

The UZC-SSRS2 Group Catalog
We apply a friends-of-friends algorithm to the combined Updated Zwicky Catalog and Southern Sky Redshift Survey to construct a catalog of 1168 groups of galaxies; 411 of these groups have five or more members within the redshift survey. The group catalog covers 4.69 sr, and all groups exceed the number density contrast threshold, δρ/ρ=80. We demonstrate that the groups catalog is homogeneous across the two underlying redshift surveys; the catalog of groups and their members thus provides a basis for other statistical studies of the large-scale distribution of groups and their physical properties. The median physical properties of the groups are similar to those for groups derived from independent surveys, including the ESO Key Programme and the Las Campanas Redshift Survey. We include tables of groups and their members.

Hα surface photometry of galaxies in the Virgo cluster. IV. The current star formation in nearby clusters of galaxies
Hα+[NII] imaging observations of 369 late-type (spiral) galaxies in the Virgo cluster and in the Coma/A1367 supercluster are analyzed, covering 3 rich nearby clusters (A1367, Coma and Virgo) and nearly isolated galaxies in the Great Wall. They constitute an optically selected sample (mp<16.0) observed with ~60% completeness. These observations provide us with the current (T<10^7 yrs) star formation properties of galaxies, which we study as a function of the clustercentric projected distances (Theta). The expected decrease of the star formation rate (SFR), as traced by the Hα EW, with decreasing Theta is found only when galaxies brighter than Mp ~ -19.5 are considered. Fainter objects show no or reverse trends. We also include in our analysis near-infrared data, providing information on the old (T>10^9 yrs) stars. Put together, the young and the old stellar indicators give the ratio of currently formed stars over the stars formed in the past, or "birthrate" parameter b. For the considered galaxies we also determine the global "gas content" combining HI with CO observations. We define the "gas deficiency" parameter as the logarithmic difference between the gas content of isolated galaxies of a given Hubble type and the measured gas content. For the isolated objects we find that b decreases with increasing NIR luminosity. In other words, less massive galaxies are currently forming stars at a higher rate than their giant counterparts, which experienced most of their star formation activity at earlier cosmological epochs. The gas-deficient objects, primarily members of the Virgo cluster, have a birthrate significantly lower than the isolated objects with normal gas content and of similar NIR luminosity. This indicates that the current star formation is regulated by the gaseous content of spirals. Whatever mechanism (most plausibly ram-pressure stripping) is responsible for the pattern of gas deficiency observed in spiral galaxies that are members of rich clusters, it also produces the observed quenching of the current star formation. A significant fraction of gas "healthy" (i.e. with a gas deficiency parameter less than 0.4) and currently star-forming galaxies is unexpectedly found projected near the center of the Virgo cluster. Their average Tully-Fisher distance is found approximately one magnitude further away (muo = 31.77) than the distance of their gas-deficient counterparts (muo = 30.85), suggesting that the gas-healthy objects belong to a cloud projected onto the cluster center, but in fact lying a few Mpc behind Virgo, thus unaffected by the dense IGM of the cluster. Based on observations taken at the Observatorio Astronómico Nacional (Mexico), the OHP (France), Calar Alto and NOT (Spain) observatories. Table \ref{tab4} is only available in electronic form at http://www.edpsciences.org

Hα surface photometry of galaxies in the Virgo cluster. I. Observations with the San Pedro Martir 2.1 m telescope
Hα imaging observations of 125 galaxies obtained with the 2.1 m telescope of the San Pedro Martir Observatory (SPM) (Baja California, Mexico) are presented. The observed galaxies are mostly Virgo cluster members (77), with 36 objects in the Coma/A1367 supercluster and 12 in the clusters A2197 and A2199 taken as fillers. Hα+[NII] fluxes and equivalent widths, as well as images of the detected targets, are presented. The observatory of San Pedro Martir (Mexico) belongs to the Observatorio Astronómico Nacional, UNAM. Figure 4 is only available in electronic form at http://www.edpsciences.org

Nearby Optical Galaxies: Selection of the Sample and Identification of Groups
In this paper we describe the Nearby Optical Galaxy (NOG) sample, which is a complete, distance-limited (cz<=6000 km s-1) and magnitude-limited (B<=14) sample of ~7000 optical galaxies. The sample covers 2/3 (8.27 sr) of the sky (|b|>20deg) and appears to have a good completeness in redshift (97%). We select the sample on the basis of homogenized corrected total blue magnitudes in order to minimize systematic effects in galaxy sampling. We identify the groups in this sample by means of both the hierarchical and the percolation "friends-of-friends" methods. The resulting catalogs of loose groups appear to be similar and are among the largest catalogs of groups currently available. Most of the NOG galaxies (~60%) are found to be members of galaxy pairs (~580 pairs for a total of ~15% of objects) or groups with at least three members (~500 groups for a total of ~45% of objects). About 40% of galaxies are left ungrouped (field galaxies). We illustrate the main features of the NOG galaxy distribution. Compared to previous optical and IRAS galaxy samples, the NOG provides a denser sampling of the galaxy distribution in the nearby universe. Given its large sky coverage, the identification of groups, and its high-density sampling, the NOG is suited to the analysis of the galaxy density field of the nearby universe, especially on small scales.

1.65 μm (H-band) surface photometry of galaxies. IV. Observations of 170 galaxies with the Calar Alto 2.2 m telescope
We present near-infrared (H band) surface photometry of 170 galaxies, obtained in 1997 using the Calar Alto 2.2 m telescope equipped with the NICMOS3 camera MAGIC. The majority of our targets are selected among bright members of the Virgo cluster; however, galaxies in the A262 and Cancer clusters and in the Coma/A1367 supercluster are also included. This data set is aimed at complementing the NIR survey in the Virgo cluster discussed in Boselli et al. (1997) and in the Coma Supercluster, presented in Papers I, II and III of this series. Magnitudes at the optical radius, total magnitudes, isophotal radii and light concentration indices are derived. Tables 1 and 2 (full version) are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html. Based on observations taken at the Calar Alto Observatory, operated by the Max-Planck-Institut für Astronomie (Heidelberg) jointly with the Spanish National Commission for Astronomy.

1.65 μm (H-band) surface photometry of galaxies. V. Profile decomposition of 1157 galaxies
We present near-infrared H-band (1.65 μm) surface brightness profile decomposition for 1157 galaxies in five nearby clusters of galaxies: Coma, A1367, Virgo, A262 and Cancer, and in the bridge between Coma and A1367 in the "Great Wall". The optically selected (mpg ≤ 16.0) sample is representative of all Hubble types, from E to Irr+BCD, except dE, and of significantly different environments, spanning from isolated regions to rich clusters of galaxies. We model the surface brightness profiles with a de Vaucouleurs r^{1/4} law (dV), with an exponential disk law (E), or with a combination of the two (B+D). From the fitted quantities we derive the H band effective surface brightness (μe) and radius (re) of each component, the asymptotic magnitude HT and the light concentration index C31. We find that: i) Less than 50% of the elliptical galaxies have pure dV profiles. The majority of E to Sb galaxies is best represented by a B+D profile. All Scd to BCD galaxies have pure exponential profiles. ii) The type of decomposition is a strong function of the total H band luminosity (mass), independent of the Hubble classification: the fraction of pure exponential decompositions decreases with increasing luminosity, that of B+D increases with luminosity. Pure dV profiles are absent in the low luminosity range LH<10^10 L⊙ and become dominant above 10^11 L⊙. Based on observations taken at TIRGO, Gornergrat, Switzerland (operated by CAISMI-CNR, Arcetri, Firenze, Italy) and at the Calar Alto Observatory (operated by the Max-Planck-Institut für Astronomie (Heidelberg) jointly with the Spanish National Commission for Astronomy). Table 2 and Figs. 2, 3, 4 are available in their entirety only in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html

Arcsecond Positions of UGC Galaxies
We present accurate B1950 and J2000 positions for all confirmed galaxies in the Uppsala General Catalog (UGC). The positions were measured visually from Digitized Sky Survey images with rms uncertainties σ<=[(1.2")^2+(θ/100)^2]^{1/2}, where θ is the major-axis diameter. We compared each galaxy measured with the original UGC description to ensure high reliability. The full position list is available in the electronic version only.

A Complete Redshift Survey to the Zwicky Catalog Limit in a 2h × 15 deg Region around 3C 273
We compile 1113 redshifts (648 new measurements, 465 from the literature) for Zwicky catalog galaxies in the region (-3.5d <= delta <= 8.5d, 11h5 <= alpha <= 13h5). We include redshifts for 114 component objects in 78 Zwicky catalog multiplets. The redshift survey in this region is 99.5% complete to the Zwicky catalog limit, m_Zw = 15.7. It is 99.9% complete to m_Zw = 15.5, the CfA Redshift Survey (CfA2) magnitude limit. The survey region is adjacent to the northern portion of CfA2, overlaps the northernmost slice of the Las Campanas Redshift Survey, includes the southern extent of the Virgo Cluster, and is roughly centered on the QSO 3C 273. As in other portions of the Zwicky catalog, bright and faint galaxies trace the same large-scale structure.

Groups of galaxies. III. Some empirical characteristics.
Not Available

Total magnitude, radius, colour indices, colour gradients and photometric type of galaxies
We present a catalogue of aperture photometry of galaxies, in UBVRI, assembled from three different origins: (i) an update of the catalogue of Buta et al. (1995), (ii) published photometric profiles, and (iii) aperture photometry performed on CCD images. We explored different sets of growth curves to fit these data: (i) the Sersic law, (ii) the net of growth curves used for the preparation of the RC3, and (iii) a linear interpolation between the de Vaucouleurs (r^{1/4}) and exponential laws. Finally we adopted the latter solution. Fitting these growth curves, we derive (1) the total magnitude, (2) the effective radius, (3) the colour indices and (4) gradients, and (5) the photometric type of 5169 galaxies. The photometric type is defined to statistically match the revised morphologic type and parametrizes the shape of the growth curve. It is coded from -9, for very concentrated galaxies, to +10, for diffuse galaxies. Based in part on observations collected at the Haute-Provence Observatory.

Study of the Virgo Cluster Using the B-Band Tully-Fisher Relation
The distances to spiral galaxies of the Virgo cluster are estimated using the B-band Tully-Fisher (TF) relation, and the three-dimensional structure of the cluster is studied. The analysis is made for a complete spiral sample taken from the Virgo Cluster Catalogue of Binggeli, Sandage, & Tammann. The sample contains virtually all spiral galaxies down to M_{BT}=-15 mag at 40 Mpc. A careful examination is made of the selection effect and errors of the data. We estimate distances to 181 galaxies, among which distances to 89 galaxies are reasonably accurate. We compare these distances to those obtained by other authors on a galaxy-by-galaxy basis. We find reasonable consistency of the Tully-Fisher distance among various authors. In particular, it is found that the discrepancy in the distance among the different analyses with different data is about 15% when good H I and photometric data are available. We clarify that the different results on the Virgo distance among authors arise from the choice of the sample and interpretation of the data. We confirm that the Tully-Fisher relation for the Virgo cluster shows an unusually large scatter, sigma = 0.67 mag, compared to that for other clusters. We conclude that this scatter is not due to the intrinsic dispersion of the Tully-Fisher relation, but due to a large depth effect of the Virgo cluster, which we estimate to be extended from 12 Mpc to 30 Mpc. The distribution of H I-deficient galaxies is concentrated at around 14-20 Mpc, indicating the presence of a core at this distance, and this agrees with the distance estimated for M87 and other elliptical galaxies with other methods. We show also that the spatial number density of spiral galaxies takes a peak at this distance, while a simple average of all spiral galaxy distances gives 20 Mpc. The fact that the velocity dispersion of galaxies takes a maximum at 14-18 Mpc lends additional support for the distance to the core. These features cannot be understood if the large scatter of the TF relation is merely due to the intrinsic dispersion. The structure of the Virgo Cluster we infer from the Tully-Fisher analysis looks like a filament, which is familiar to us in a late phase of structure formation in the pancake collapse in hierarchical clustering simulations. This Virgo filament lies almost along the line of sight, and this is the origin that has led a number of authors to much confusion in the Virgo distance determinations. We show that the M87 subcluster is located around 15-18 Mpc, and it consists mainly of early-type spiral galaxies in addition to elliptical and S0 galaxies. There are very few late-type spiral galaxies in this subcluster. The spiral-rich M49 subcluster consists of a mixture of all types of spiral galaxies and is located at about 22 Mpc. The two other known clouds, W and M, are located at about 30-40 Mpc and undergo infall toward the core. The M cloud contains few early-type spirals. We cannot discriminate, however, whether these subclusters or clouds are isolated aggregates or merely parts of a filamentary structure. Finally, we infer the Hubble constant to be 82 +/- 10 km s-1 Mpc-1.

New aperture photometry for 217 galaxies in the Virgo and Fornax clusters
We present photoelectric multi-aperture photometry in UBVRI of 171 and 46 galaxies in the Virgo and Fornax clusters, respectively. Many of the galaxies have not been observed in at least one of these passbands before. We discuss the reduction and transformation into the Cousins photometric system, as well as the extinction coefficients obtained between 1990 and 1993.

An image database. II. Catalogue between δ=-30deg and δ=70deg
A preliminary list of 68,040 galaxies was built from extraction of 35,841 digitized images of the Palomar Sky Survey (Paper I). For each galaxy, the basic parameters are obtained: coordinates, diameter, axis ratio, total magnitude, position angle. On this preliminary list, we apply severe selection rules to get a catalog of 28,000 galaxies, well identified and well documented. For each parameter, a comparison is made with standard measurements. The accuracy of the raw photometric parameters is quite good despite the simplicity of the method. Without any local correction, the standard error on the total magnitude is about 0.5 magnitude up to a total magnitude of B_T=17. Significant secondary effects are detected concerning the magnitudes: a distance-to-plate-center effect and an air-mass effect.

Surface photometry of spiral galaxies in the Virgo cluster region
Photographic surface photometry is carried out for 246 spiral galaxies in the Virgo cluster region north of declination +5 deg. The sample contains all spiral galaxies of 'certain' and 'possible' Virgo members in the Virgo Cluster Catalogue of Binggeli, Sandage, & Tammann. The sample also includes those galaxies which were used in the Tully-Fisher analyses of the Virgo cluster given in the literature. A catalog is presented of positions, B-band total magnitudes and inclinations for these galaxies, and they are compared with the data given in previous studies.

Arm structure in normal spiral galaxies, 1: Multivariate data for 492 galaxies
Multivariate data have been collected as part of an effort to develop a new classification system for spiral galaxies, one which is not necessarily based on subjective morphological properties. A sample of 492 moderately bright northern Sa and Sc spirals was chosen for future statistical analysis. New observations were made at 20 and 21 cm; the latter data are described in detail here. Infrared Astronomy Satellite (IRAS) fluxes were obtained from archival data. Finally, new estimates of arm pattern randomness and of local environmental harshness were compiled for most sample objects.

Distribution of the spin vectors of the disk galaxies of the Virgo cluster. I. The catalogue of 310 disk galaxies in the Virgo area.
Not Available

A revised catalog of CfA1 galaxy groups in the Virgo/Great Attractor flow field
A new identification of groups and clusters in the CfA1 Catalog of Huchra et al. is presented, using a percolation algorithm to identify density enhancements. It is shown that in the resulting catalog, contamination by interlopers is significantly reduced. The Schechter luminosity function is redetermined, including the Malmquist bias.

General study of group membership. II - Determination of nearby groups
We present a whole-sky catalog of nearby groups of galaxies taken from the Lyon-Meudon Extragalactic Database. From the 78,000 objects in the database, we extracted a sample of 6392 galaxies, complete up to the limiting apparent magnitude B0 = 14.0. Moreover, in order to consider solely the galaxies of the local universe, all the selected galaxies have a known recession velocity smaller than 5500 km/s. Two methods were used in group construction: a Huchra-Geller (1982) derived percolation method and a Tully (1980) derived hierarchical method. Each method gave us one catalog. These were then compared and synthesized to obtain a single catalog containing the most reliable groups. There are 485 groups of at least three members in the final catalog.

The far-infrared properties of the CfA galaxy sample. I - The catalog
IRAS flux densities are presented for all galaxies in the Center for Astrophysics magnitude-limited sample (mB not greater than 14.5) detected in the IRAS Faint Source Survey (FSS), a total of 1544 galaxies. The detection rate in the FSS is slightly larger than in the PSC for the long-wavelength 60- and 100-micron bands, but improves by a factor of about 3 or more for the short-wavelength 12- and 25-micron bands. This optically selected sample consists of galaxies which are, on average, much less IR-active than galaxies in IR-selected samples. It possesses accurate and complete redshift, morphological, and magnitude information, along with observations at other wavelengths.

Less probable Irr II candidates
The paper presents a list of 89 less probable Irr candidates in which the presence of dust is suspected. In a commentary to this list, the shape of the galaxies is described and their location in relation to neighboring background galaxies is noted.

H I observations in the Virgo cluster area. III - All 'member' spirals
H I observations of 141 spiral galaxies in and around the Virgo Cluster are reported, with major-axis mapping for 65 of them. Heliocentric velocities, profile widths, and H I fluxes are given for all detected galaxies. Spin orientations are given for mapped galaxies and H I diameters for those sufficiently resolved by the 3.2 arcmin beam. Mapped galaxy spectra are shown as contour plots of position versus velocity; central-beam spectra are shown for the remainder. The distributions of spin orientations are briefly analyzed and shown to be essentially random. The distributions of H I luminosity are presented, along with indicative dynamical masses for the spirals and a synthesized H I distribution for the cluster as a whole.

Continuum radio emission from Virgo galaxies
The paper presents single-antenna measurements of radio emission from 120 galaxies in the Virgo cluster at 2380 MHz using a 2.6 arcmin beam (half-power beam width). It also presents interferometric measurements at the same frequency for 48 galaxies with ≤1 arcsec resolution. The relative concentration of the radio emission from these galaxies, particularly the emission from the galactic disk compared with that from the nucleus, is discussed. It is found that the disk emission dominates in most cases. Some indications are also found that the flux concentration is greater in elliptical and lenticular galaxies than in spirals.

HI observations of galaxies in the Virgo cluster of galaxies. I - The data
New H I data for a large number of bright galaxies inside the 10 deg radius area of the Virgo cluster of galaxies have been obtained with the 100 m radiotelescope at Effelsberg. A total of 234 galaxies was observed for the first time. Among them, 53 have been detected, providing new accurate radial velocities. Data from the literature have been compiled. Together with the new data, they form a (nearly homogeneous) set of H I observations for more than 450 galaxies.

Studies of the Virgo Cluster. II - A catalog of 2096 galaxies in the Virgo Cluster area
The present catalog of 2096 galaxies within an area of about 140 sq deg approximately centered on the Virgo cluster should be an essentially complete listing of all certain and possible cluster members, independent of morphological type. Cluster membership is essentially decided by galaxy morphology; for giants and the rare class of high surface brightness dwarfs, membership rests on velocity data. While 1277 of the catalog entries are considered members of the Virgo cluster, 574 are possible members and 245 appear to be background Zwicky galaxies. Major-to-minor axis ratios are given for all galaxies brighter than B(T) = 18, as well as for many fainter ones.

Supplement to the detailed bibliography on the surface photometry of galaxies
Abstract image available at: http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1985A&AS...60..517P&db_key=AST

The Age and Size of the Universe
Not Available

Digital surface photometry of galaxies toward a quantitative classification. III - A mean concentration index as a parameter representing the luminosity distribution
Abstract image available at: http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1984ApJ...280....7O&db_key=AST

A survey of galaxy redshifts. IV - The data
The complete list of the best available radial velocities for the 2401 galaxies in the merged Zwicky-Nilson catalog brighter than 14.5 mz and with b(II) above +40 deg or below -30 deg is presented. Almost 60 percent of the redshifts are from the CfA survey and are accurate to typically 35 km/s.
# Platform Independence
## Recommended Posts
Hiya folks! The man is back from his glorious vacation, and is full to bursting with questions! So I'll just siphon off a few of them regarding platform independence, and put them down here [smile] As always, any and all help is appreciated.

Ok, I'm on my second rewrite (didn't like the previous architecture) of my core code, and I'm going to work in platform independence from the outset here. Now, for starters, is this feasible without having access to a Linux box? That is, I only have Windows XP to test on here, so will it be impossible for me to make something platform independent, even if I only implement the Windows side of it right now?

Secondly, and this one shames me to ask, how does one check, without a doubt, which platform one is developing (working, compiling, and testing) on? I thought I knew this one, but I guess not, because no matter how I wrack my brains, the answer escapes me. Note that I'm only trying for Linux/Windows independence here.

Last but certainly not least, is this a viable solution to, at least partially, solving the platform independence? Basically, you have an abstract base class, OSInterface, with a number of standard functions: open file, read, write, etc., create window, etc. Just the works. Then you have subclasses, LinuxOSInterface and WindowsOSInterface, which implement these various common-to-all-OS functions. At startup, a global (or otherwise globally-accessible) pointer is assigned to an instance of either class, based on platform. So, instead of directly calling the OS functions, you access all the common functions through it, so it acts like a thin wrapper. But would this work? Are Linux and Windows alike enough to be able to do something like this? With some specialization of course, but alike enough in the basics that this is workable? Thanks!
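A minimal sketch of the wrapper described above might look like the following (the class names come from the post itself; the method set and the portable stdio stand-ins are illustrative assumptions, not a finished implementation):

```cpp
#include <cstdio>
#include <cstddef>

// Abstract base class, as described in the post above.
class OSInterface {
public:
    virtual ~OSInterface() {}
    virtual void*       OpenFile(const char* path)                        = 0;
    virtual std::size_t Read(void* file, void* buffer, std::size_t bytes) = 0;
    virtual void        CloseFile(void* file)                             = 0;
};

// The Win32 version would wrap CreateFileA/ReadFile/CloseHandle; portable
// stdio stands in here so the sketch compiles anywhere.
class WindowsOSInterface : public OSInterface {
public:
    void* OpenFile(const char* path) { return std::fopen(path, "rb"); }
    std::size_t Read(void* f, void* buf, std::size_t n) {
        return std::fread(buf, 1, n, static_cast<FILE*>(f));
    }
    void CloseFile(void* f) { std::fclose(static_cast<FILE*>(f)); }
};

// Globally accessible pointer, assigned once at startup
// (see the #ifdef selection in the first reply below).
OSInterface* g_interface = 0;
```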
##### Share on other sites
To check which OS you are on, you can use some defines that your compiler sets for you. On Windows, gcc and MSVC++ define '_WIN32'; on Linux, gcc (among others) defines __linux. This way you can do something like:
#ifdef _WIN32
    g_interface = new WindowsOSInterface();
#elif defined __linux
    g_interface = new LinuxOSInterface();
#else
    #error "Could not determine OS"
#endif
To try testing your code, you could download Knoppix. It is a Linux-distribution that runs from a bootable CD. This way you can test, without having to create a dual-boot system or having a separate Linux-box.
##### Share on other sites
Thanks, now that I see the answer, I feel like kicking myself in the face, because the moment I laid eyes on #ifdef _WIN32 I remembered the rest of it [smile] Anyway, thanks!
Regarding Knoppix, I'm hesitant to do anything like that with this computer, simply because we have a very shaky truce, and I don't plan to do anything that might even barely endanger it... which includes booting a new operating system, even off a CD. Plus, I wasn't aware Linux had NTFS support, which is what all my drives happen to use. I'll probably end up getting some junky Linux box, along with the standard "thick ol' manual" [grin] Thanks anyway though.
Unfortunately, that really doesn't answer my important question, which was: is my solution workable, and are the two OSes alike enough that I could do something like that without massive hacks to get one or the other OS to work?
[EDIT] Oh, also, does GNU define something like _GNU or a version number, as with MSVC?
##### Share on other sites
Quote:
Original post by SirLuthor: Last but certainly not least, is this a viable solution to, at least partially, solving the platform independence? Basically, you have an abstract base class, OSInterface, with a number of standard functions: open file, read, write, etc., create window, etc. Just the works. Then you have subclasses, LinuxOSInterface and WindowsOSInterface, which implement these various common-to-all-OS functions. At startup, a global (or otherwise globally-accessible) pointer is assigned to an instance of either class, based on platform. So, instead of directly calling the OS functions, you access all the common functions through it, so it acts like a thin wrapper.
It'd work, but why not use an existing library? SDL does basically that (provides a common interface to a set of cross-platform functionality) although the implementation might be compile time rather than runtime like you suggest.
##### Share on other sites
Quote:
Original post by OrangyTang: It'd work, but why not use an existing library? SDL does basically that (provides a common interface to a set of cross-platform functionality) although the implementation might be compile time rather than runtime like you suggest.
I would have, if I was doing just another app. But I'm working on an application platform for anything else I might wish to do, which might even be defined as an engine, which may also get distributed at some point, so I don't really wish to be using another library's core that will have to be distributed as well, not to mention licensing, etc. Damn. I should shorten my sentences. Anyway, ya, thanks but no thanks [smile]
##### Share on other sites
For things like file I/O, memory allocation, etc., I believe the standard C libraries, as well as the STL, are platform independent. You still have to recompile on the different platforms, but the same code should work.
The tricky part is stuff like making and controlling windows, which is where SDL, wxWidgets, or GTK come in handy.
##### Share on other sites
Quote:
Original post by SirLuthor: I would have, if I was doing just another app. But I'm working on an application platform for anything else I might wish to do, which might even be defined as an engine, which may also get distributed at some point, so I don't really wish to be using another library's core that will have to be distributed as well, not to mention licensing, etc. Damn. I should shorten my sentences. Anyway, ya, thanks but no thanks [smile]
I understand your point, but distributing one or two extra DLLs on Windows isn't going to kill you. Most modern projects have like 10-20 DLL dependencies (at least).
You're certainly welcome to work from first principles, but quite honestly you're never going to get anywhere. There's just too many things that are platform dependent and require a layer above them. I'd highly recommend using SDL or something similar to abstract these basic operations (creating a window, input, threads, etc).
##### Share on other sites
Quote:
Original post by AndyTX: I understand your point, but distributing one or two extra DLLs on Windows isn't going to kill you. Most modern projects have like 10-20 DLL dependencies (at least). You're certainly welcome to work from first principles, but quite honestly you're never going to get anywhere. There's just too many things that are platform dependent and require a layer above them. I'd highly recommend using SDL or something similar to abstract these basic operations (creating a window, input, threads, etc).
Sure, they might not kill you. But I know that I can do whatever I damn well please with whatever I write. The same cannot be said of SDL, or whatever else I choose to link in. Constraints are something I cannot stand. If, as a result, I never get anywhere, well, I'm perfectly cool with that too; at least I'll have gone out trying, and will have learned something en route. Who knows? I may even prove you wrong... Nothing on you of course, but nothing would please me more [grin]
kingnosis: Aware. Thanks anyway though!
##### Share on other sites
Quote:
Original post by SirLuthor
Quote:
Original post by OrangyTang: It'd work, but why not use an existing library? SDL does basically that (provides a common interface to a set of cross-platform functionality) although the implementation might be compile time rather than runtime like you suggest.
I would have, if I was doing just another app. But I'm working on an application platform for anything else I might wish to do, which might even be defined as an engine, which may also get distributed at some point, so I don't really wish to be using another library's core that will have to be distributed as well, not to mention licensing, etc. Damn. I should shorten my sentences. Anyway, ya, thanks but no thanks [smile]
IIRC, the more recent Unreal games use SDL (for at least cross-platform display creation, I'd guess). Redistribution really shouldn't be a problem. Equally, there's nothing stopping you writing your own cross-platform sections when you find SDL etc. too limiting and want to replace a specific part. But that doesn't mean you have to write *everything* yourself.
You've already said you're on your second re-write, and the odds are you won't get the platform independent bits right first time either (no-one does!). Especially if you're asking questions about how similar two file systems are. Why waste time producing an inferior library when a good, free, debugged one already exists?
##### Share on other sites
Quote:
Original post by OrangyTang: IIRC, the more recent Unreal games use SDL (for at least cross-platform display creation, I'd guess). Redistribution really shouldn't be a problem. Equally, there's nothing stopping you writing your own cross-platform sections when you find SDL etc. too limiting and want to replace a specific part. But that doesn't mean you have to write *everything* yourself. You've already said you're on your second re-write, and the odds are you won't get the platform independent bits right first time either (no-one does!). Especially if you're asking questions about how similar two file systems are. Why waste time producing an inferior library when a good, free, debugged one already exists?
That's quite interesting.. Might even tilt the balance and make me have a look at those parts of SDL mentioned, windowing, threads, and such. However, as for reasons why I would write my own, hell, my time is my own, is it not? Can't I be stubborn and make mistakes if I want to? There's something to be said for learning by falling and picking up again, surely? I enjoy learning things. If it means lots of effort, well, so do lots of things. I can live with it. :Þ
Cheers!
(Oh well, I guess I'll cave in and have a look at SDL...)
New Industry Products
# Stackpole Develops Low-Profile, Sulfur-Resistant Resistor Chips For Computer Peripherals
October 28, 2009 by Jeff Shepard
Stackpole Electronics Inc. (SEI) introduced its RNCP Series thin film resistors which are said to provide high accuracy with a low standard TCR (temperature coefficient of resistance) in a low profile chip. They are also said to be a low-cost alternative to high-power thick film resistive technology. Impervious to sulfur contamination, the RNCP Series is said to provide highly stable and accurate performance characteristics, making it well suited for use in computer and computer accessory applications. A truly green component, the RNCP Series is RoHS-compliant without exemptions, as it does not contain any lead-containing glass. In addition, the elimination of silver and gold, which are used in traditional sulfur resistant application requirements, is said to reduce the overall cost of the resistor.
The RNCP Series resistors were developed with a high tolerance for harsh environments, including shock, vibration and temperature extremes with the goal of enhancing their performance and lifespan. The thin film technology provides high power handling, stability and low noise, giving them ideal characteristics for use in notebook computers, printers, scanners and test instruments.
"In recent years, consumer electronics have trended toward more ’green’ products, increasing the demand for components with reduced package size and higher energy performance," said Kory Schroeder, Director of Marketing at SEI. "With the development of these new thin film resistors, we are able to meet customer requirements for a smaller chip size and yet keep the cost within 10% of comparable thick film chips."
Featuring absolute tolerances to 1% and a TCR of 100ppm/°C, the RNCP Series resistors are available in 0402, 0603, 0805 and 1206 chip sizes, with power ratings from 0.1 to 0.5W and maximum working voltages from 50V to 200V.
The RNCP Series is available on standard 7-inch reels. Pricing varies with chip size, tolerance, and resistance value, and ranges from around $1.00 per thousand to $10.00 per thousand in full reel quantities; contact Stackpole for volume pricing.
## Emacs Settings for Clojure
My Optimal GNU Emacs Settings for Developing Clojure (so far) by Frédérick Giasson.
From the post:
In the coming months, I will start to publish a series of blog posts that will explain how RDF data can be serialized in Clojure code and more importantly what are the benefits of doing this. At Structured Dynamics, we started to invest resources into this research project and we believe that it will become a game changer regarding how people will consume, use and produce RDF data.
But I want to take a humble first step into this journey just by explaining how I ended up configuring Emacs for working with Clojure. I want to take the time to do this since this is a trials and errors process, and that it may be somewhat time-consuming for the new comers.
In an interesting twist for an article on Emacs, Frédérick recommends strongly that the reader consider Light Table as an IDE for Clojure over Emacs, especially if they are not already Emacs users.
What follows is a detailed description of changes for your .emacs file should you want to follow the Emacs route, including a LightTable theme for Emacs.
A very useful post, and I am looking forward to the Clojure/RDF posts to follow.
nLab John David Stuart Jones
John David Stuart Jones (J.D.S. Jones) is a mathematician at the University of Warwick.
Selected writings
On cyclic homology and homology of cyclic loop spaces:
On the homotopy type of spaces of rational maps and moduli spaces of monopoles related to braid groups:
category: people
Last revised on July 19, 2021 at 12:41:59. See the history of this page for a list of all contributions to it. |
# (solved)Question 1.5 of NCERT Class XI Chemistry Chapter 1
Calculate the mass of sodium acetate (CH3COONa) required to make 500 mL of 0.375 molar aqueous solution. Molar mass of sodium acetate is 82.0245 g mol-1.
(Rev. 20-Nov-2022)
### Question 1.5 NCERT Class XI Chemistry
Calculate the mass of sodium acetate (CH3COONa) required to make 500 mL of 0.375 molar aqueous solution. Molar mass of sodium acetate is 82.0245 g mol-1
### Chemistry Concept for Method I
A molarity of 0.375 M means that 1 litre of aqueous solution contains 0.375 mol of dissolved sodium acetate.
However, we need only 500 mL, so we have to use 0.375/2 mol of sodium acetate.
The molar mass is given as 82.0245 g mol-1, i.e., 1 mol = 82.0245 g.
Hence 0.375/2 mol will be 0.375/2 × 82.0245 grams.
The significant figures in 0.375 are the fewest (3), so the answer is rounded to 3 significant figures: 15.4 grams!
### Solution by Method I (but also see Method II after this)
$\displaystyle \text{molarity given } = 0.375 \text{ M}$
$\displaystyle \therefore 1000 \text{ ml contains } 0.375 \text{ mol }$
$\displaystyle \implies 500 \text{ ml contains } \bigg(\frac{0.375}{2}\bigg) \text{ mol }$
$\displaystyle \text{but 1 mol CH}_3\text{COONa} \equiv 82.0245 \text{ g}$
$\displaystyle \therefore \bigg(\frac{0.375}{2}\bigg) \text{ mol } \equiv 82.0245 \times \bigg(\frac{0.375}{2}\bigg) \text{ g}$
$\displaystyle = 15.4 \text{ g (3 Significant Figures)} \:\underline{Ans}$
### Video Explanation
Please watch this youtube video for a quick explanation of the solution:
### Solution by Method II (easier and recommended)
\begin{aligned} &\text{let moles of CH}_3\text{COONa} = x\\\\ &\frac{\text{moles of solute}}{\text{vol of soln. in liters}} = \text{Molarity}\\\\ &\therefore \frac{x}{(1/2)} = 0.375\text{M (given)}\\\\ &\therefore x = 0.375 \times \frac12\\\\ &= 0.375 \times \frac12 \times 82.0245 \text{ grams}\\\\ &= 15.4 \text{ grams}\:\underline{Ans} \end{aligned}
### Chemistry Concept for Method II
We have used the definition of molarity to obtain the number of moles of sodium acetate. After that we use the given molar mass to convert moles to grams. Note: This method is not only easier to understand, but faster also. |
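As a quick cross-check of the arithmetic, here is a small illustrative C++ program (not part of the original solution) that reproduces the Method II computation:

```cpp
#include <cstdio>

// Numerical check of Method II, using the values given in the question.
int main() {
    const double molarity   = 0.375;    // mol per litre (given)
    const double volume_L   = 0.500;    // 500 mL expressed in litres
    const double molar_mass = 82.0245;  // g/mol for CH3COONa (given)

    double moles = molarity * volume_L;  // n = M x V = 0.1875 mol
    double grams = moles * molar_mass;   // m = n x molar mass
    std::printf("%.4f mol -> %.4f g (rounds to 15.4 g)\n", moles, grams);
    return 0;
}
```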
# Measurement of the charm mixing parameter y(CP)-y(CP)(K pi) using two-body D-0 meson decays
Aaij, R, Abdelmotteleb, ASW, Beteta, C Abellan, Abudinen, F, Ackernley, T, Adeva, B, Adinolfi, M, Afsharnia, H, Agapopoulou, C, Aidala, CA
et al. (998 additional authors) (2022) Measurement of the charm mixing parameter y(CP)-y(CP)(K pi) using two-body D-0 meson decays. PHYSICAL REVIEW D, 105 (9).
## Abstract
A measurement of the ratios of the effective decay widths of D0→π-π+ and D0→K-K+ decays over that of D0→K-π+ decays is performed with the LHCb experiment using proton-proton collisions at a centre-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 6 fb-1. These observables give access to the charm mixing parameters yCPππ-yCPKπ and yCPKK-yCPKπ, and are measured as yCPππ-yCPKπ=(6.57±0.53±0.16)×10-3, yCPKK-yCPKπ=(7.08±0.30±0.14)×10-3, where the first uncertainties are statistical and the second systematic. The combination of the two measurements is yCP-yCPKπ=(6.96±0.26±0.13)×10-3, which is four times more precise than the previous world average.
Item Type: Article
Divisions: Faculty of Science and Engineering > School of Physical Sciences
Depositing User: Symplectic Admin
Date Deposited: 31 Aug 2022 10:19
Last Modified: 18 Jan 2023 20:46
DOI: 10.1103/PhysRevD.105.092013
Open Access URL: https://journals.aps.org/prd/pdf/10.1103/PhysRevD....
URI: https://livrepository.liverpool.ac.uk/id/eprint/3162849
## anonymous 5 years ago 2x^2=24
1. anonymous
x^2 = 12, so x = ±sqrt(12), i.e. $x=\pm 2\sqrt{3}$
2. anonymous
i dont know how to break it down
3. anonymous
meraj is correct. To continue breaking it down, you would have sqrt(4*3); the sqrt of 4 is 2, so move the 2 outside the sqrt, leaving 2sqrt(3).
# Hyperbola Calculator
Hyperbola Equation
$\frac{(x-x_0)^2}{a^2} - \frac{(y-y_0)^2}{b^2} = 1$
Table of Contents

1. Hyperbola Formula
2. Is a parabola half of a hyperbola?
3. What is the parabola in real life?
4. Is Eiffel Tower a hyperbola?
5. Is the guitar a hyperbola?
6. Why is the hourglass a hyperbola?
7. How useful is the concept of hyperbola in radar tracking stations?
An online hyperbola calculator will help you determine the center, eccentricity, focal parameter, major axis, and asymptotes for the given values in the hyperbola equation. The calculator also finds the co-vertices and the conjugate axis of the function precisely. In this context, you can learn how to find a hyperbola, its graph, and the standard form of the hyperbola equation.
## What is a Hyperbola?
In mathematics, a hyperbola is one of the conic sections, formed by the intersection of a double cone and a plane. For a hyperbola, the plane cuts both halves (nappes) of the double cone but does not pass through the apex of the cone. The other two conic sections are the ellipse and the parabola. In other words, a hyperbola is the set of all points in the plane for which the absolute value of the difference of the distances to two fixed points (known as the foci of the hyperbola) is constant.
### Hyperbola Formula:
A hyperbola centered at the origin, with x-intercepts at a and -a, has an equation of the form

$$\frac{x^2}{a^2} - \frac{y^2}{b^2} = 1$$

while a hyperbola centered at the origin, with y-intercepts at b and -b, has an equation of the form

$$\frac{y^2}{b^2} - \frac{x^2}{a^2} = 1$$

Some texts use $y^2/a^2 - x^2/b^2 = 1$ for this last equation. For a brief introduction such as this, the form given above is the one commonly used.
The x-intercepts are the vertices of a hyperbola with the equation $x^2 / a^2 - y^2 / b^2 = 1$, and the y-intercepts are the vertices of a hyperbola with the equation $y^2 / b^2 - x^2 / a^2 = 1$. The segment joining the vertices is the transverse axis of the hyperbola, and its midpoint is the center of the hyperbola.
Example:
Graph the hyperbola. Find its vertices, center, foci, and the equations of its asymptote lines.
$$\frac{x^2}{16} - \frac{y^2}{25} = 1$$

This is a hyperbola with center at (0, 0) and transverse axis along the x‐axis, so

$$a^2 = 16, \qquad b^2 = 25$$

$$|a| = 4, \qquad |b| = 5$$

$$c = \sqrt{a^2 + b^2} = \sqrt{16 + 25} = \sqrt{41}$$

If you are facing issues with the foci, vertices, or coordinates, then use our hyperbola calculator, which can find all attributes from the equation of a hyperbola quickly.

Vertices: (-4, 0) and (4, 0)

Foci: $(-\sqrt{41}, 0)$ and $(\sqrt{41}, 0)$

Equations of the asymptote lines: $y = \pm \frac{5}{4} x$
A hyperbola centered at (0, 0) whose transverse axis is along the y‐axis has the following standard form:

$$\frac{y^2}{a^2} - \frac{x^2}{b^2} = 1$$

The vertices are (0, -a) and (0, a). The foci are at (0, -c) and (0, c) with $c^2 = a^2 + b^2$. The asymptote lines have equations $y = \pm \frac{a}{b} x$.
In general, when the hyperbola is written in standard form, the transverse axis of its graph is parallel to (or along) the axis of the variable whose term is not being subtracted.
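To make the listed attributes concrete, here is a small illustrative C++ sketch (an addition for this article, not the calculator's actual code) that computes them for a hyperbola in the first standard form above:

```cpp
#include <cmath>
#include <cstdio>

// Attributes of (x - x0)^2/a^2 - (y - y0)^2/b^2 = 1 (horizontal transverse
// axis).  The sample values reproduce the worked example x^2/16 - y^2/25 = 1.
int main() {
    double x0 = 0.0, y0 = 0.0, a = 4.0, b = 5.0;
    double c = std::sqrt(a * a + b * b);     // distance from center to focus
    std::printf("center        (%g, %g)\n", x0, y0);
    std::printf("vertices      (%g, %g), (%g, %g)\n", x0 - a, y0, x0 + a, y0);
    std::printf("foci          (%g, %g), (%g, %g)\n", x0 - c, y0, x0 + c, y0);
    std::printf("eccentricity  %g\n", c / a);            // always > 1
    std::printf("latus rectum  %g\n", 2.0 * b * b / a);  // chord through focus
    std::printf("asymptotes    y = %g +/- %g (x - %g)\n", y0, b / a, x0);
    return 0;
}
```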
## How Hyperbola Calculator Works?
The hyperbola equation calculator will compute the hyperbola center using its equation by following these guidelines:
### Input:
• First, the calculator displays the general equation of a hyperbola at the top.
• Now, substitute the values for the center and the parameters a and b according to the hyperbola formula.
• Click on the calculate button to process the input.
### Output:
• The hyperbola calculator restates the equation with your input values.
• The calculator then displays the results for the center, vertices, eccentricity, focal parameter, asymptotes, directrix, latus rectum, and the x- and y-intercepts precisely.
## FAQ:
### Is a parabola half of a hyperbola?
A hyperbola is the pair of curves formed by the intersection of a plane with two equal cones on opposite sides of the same vertex. Each branch of a hyperbola merely resembles a parabola: it is a different continuous curve, so a parabola is not literally half of a hyperbola.
### What is the parabola in real life?
When a liquid rotates, gravity and the rotation together pull its surface into a parabolic shape. The most common real-life example is when you stir up lemon juice in a glass or jug by rotating it around its axis.
### Is Eiffel Tower a hyperbola?
No, the Eiffel Tower is not an example of hyperbola. It is known to take the form of a parabola.
### Is the guitar a hyperbola?
A guitar is a real example of a hyperbola because of its curved sides, which flare outwards like a hyperbola. This is a helpful real-world example: people studying the guitar can understand the shape more simply because its sides trace a hyperbolic curve.
### Why is the hourglass a hyperbola?
The hourglass creates a hyperbola where its two cones meet. The curved sides of the hourglass trace an imaginary hyperbola. The purpose of this structure is to make the sand pass only through the center point, which keeps the flow steady so the hourglass can measure a fixed interval such as an hour or a minute.
### How useful is the concept of hyperbola in radar tracking stations?
A hyperbola has the property that the difference of the distances from any point on it to its two foci is constant. Radar tracking stations use this property: two stations at known positions record the difference in arrival times of a signal from an object, which places the object somewhere on a hyperbola whose foci are the two stations.
## Conclusion:
Use this online hyperbola calculator to obtain the standard hyperbola equation for the given parameters, or to obtain the axis lengths and coordinates for the given input values of a hyperbola equation.
# Turing's method
In mathematics, Turing's method is used to verify that for any given Gram point gm there lie m + 1 zeros of ζ(s), in the region 0 < Im(s) < Im(gm), where ζ(s) is the Riemann zeta function.[1] It was discovered by Alan Turing and published in 1953,[2] although that proof contained errors and a correction was published in 1970 by R. Sherman Lehman.[3]
For every integer i with 0 ≤ i ≤ m we find a list of Gram points ${\displaystyle \{g_{i}\mid 0\leqslant i\leqslant m\}}$ and a complementary list ${\displaystyle \{h_{i}\mid 0\leqslant i\leqslant m\}}$, where hi is the smallest number such that
${\displaystyle (-1)^{i}Z(g_{i}+h_{i})>0,}$
where Z(t) is the Hardy Z function. Note that hi may be negative or zero. Assuming that ${\displaystyle h_{m}=0}$ and there exists some integer k such that ${\displaystyle h_{m+k}=0}$, then if
${\displaystyle 1+{\frac {1.91+0.114\log(g_{m+k}/2\pi )+\sum _{j=m+1}^{m+k-1}h_{j}}{g_{m+k}-g_{m}}}<2,}$
and
${\displaystyle -1-{\frac {1.91+0.114\log(g_{m}/2\pi )+\sum _{j=1}^{k-1}h_{m-j}}{g_{m}-g_{m-k}}}>-2,}$
then the bound is achieved and we have that there are exactly m + 1 zeros of ζ(s) in the region 0 < Im(s) < Im(gm).
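As a concrete illustration (our own sketch, not from the references below), once the Gram points and offsets are tabulated the two inequalities can be checked mechanically; here in R, with the zero-based indices above shifted to R's one-based vectors:
# Sketch: check Turing's two inequalities, where g[i + 1] holds g_i and
# h[i + 1] holds h_i (R vectors are one-based)
turing_check <- function(g, h, m, k) {
  s1 <- if (k > 1) sum(h[(m + 2):(m + k)]) else 0        # h_{m+1} + ... + h_{m+k-1}
  s2 <- if (k > 1) sum(h[(m - (1:(k - 1))) + 1]) else 0  # h_{m-1} + ... + h_{m-k+1}
  upper <- 1 + (1.91 + 0.114 * log(g[m + k + 1] / (2 * pi)) + s1) /
    (g[m + k + 1] - g[m + 1])
  lower <- -1 - (1.91 + 0.114 * log(g[m + 1] / (2 * pi)) + s2) /
    (g[m + 1] - g[m - k + 1])
  (upper < 2) && (lower > -2)
}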
## References
1. ^ Edwards, H. M. (1974). Riemann's zeta function. Pure and Applied Mathematics. 58. New York-London: Academic Press. ISBN 0-12-232750-0. Zbl 0315.10035.
2. ^ Turing, A. M. (1953). "Some Calculations of the Riemann Zeta‐Function". Proceedings of the London Mathematical Society. s3-3 (1): 99–117. doi:10.1112/plms/s3-3.1.99.
3. ^ Lehman, R. S. (1970). "On the Distribution of Zeros of the Riemann Zeta‐Function". Proceedings of the London Mathematical Society. s3-20 (2): 303–320. doi:10.1112/plms/s3-20.2.303.
# (Title LaTeX test) Why does NMR Ix+iIy ($\hat{I}_x+i\hat{I}_y$)
I wanted to test some $\LaTeX$ in the title for myself, and I didn't know any other way to do so.
Why does the transverse magnetization in Fourier Transform NMR consist of Ix+iIy ($\hat{I}_x+i\hat{I}_y$) -> http://meta.chemistry.stackexchange.com/questions/2774/why-does-the-transverse-magnetization-in-fourier-transform-nmr-consist-of-ixiiy
Why does the transverse magnetization NMR consist of Ix+iIy ($\hat{I}_x+i\hat{I}_y$) -> http://meta.chemistry.stackexchange.com/questions/2774/why-does-the-transverse-magnetization-nmr-consist-of-ixiiy-hati-xi-hati
Why does NMR Ix+iIy ($\hat{I}_x+i\hat{I}_y$) -> http://meta.chemistry.stackexchange.com/questions/2774/why-does-nmr-ixiiy-hati-xi-hati-y
Can someone explain to me why we can't just add LaTeX in titles as well as plain text? It might not always be a useful idea, but I don't see the problem with tacking the $\LaTeX$ on at the end. To me an issue only arises when entirely replacing the plain text with $\LaTeX$ in the middle of the question.
• Well, the URL is a funny bit! hati.... BTW, I think that hati-xi-hati won't let anyone search the real formula. – M.A.R. Jun 4 '15 at 16:22
• The idea is that if the formula is also written in plain text, then if someone were to search for it, it could be found. I also wanted to check how this would look like as a search engine hit, but it seems the meta site is not crawled by search engines. – Nicolau Saker Neto Jun 4 '15 at 16:28
• I see the point you are making and it might work. I'm not an expert on the workings of search engines though so I don't know how they will process this. – bon Jun 4 '15 at 17:01
• Well it seems Google actually does crawl through meta, so I can continue my tests here. – Nicolau Saker Neto Jun 4 '15 at 22:43
I personally prefer plain text titles only. For me this is a much more elaborate way of asking.
I have no idea what $\hat{I}$ is. Even after reading the whole question I have no idea what it stands for. Currently I have the time to at least read most of the questions that come up and even edit a couple of them, but there will come a time where I only attend to questions that I find interesting. Although I am generally interested in NMR and the theory behind it, I would not have bothered to read this question.
In my opinion it is a question of style. At least, that is what I care most about in this issue. I can live with these kinds of questions, as I have given up on being a perfectionist.[*]
Case in point (screenshot of the rendered titles omitted):
This is of course only true if the title is not cut off. See the lower title.
It does not render in most hot questions lists
I assume most of the people who would be able to understand the question browse on sites that have MathJax support, but most sites don't. So this looks ugly, and incomprehensible for most people. There has been a feature request on blocking those questions.
The slug, the slug, THE SLUG?!
I have no idea how Google and similar search engines work, so I probably don't care much about it. For sharing the question, it is not even necessary to include this part in the link, i.e.
https://chemistry.meta.stackexchange.com/questions/2774/why-does-nmr-ixiiy-hati-xi-hati-y
will link to the same page as
https://chemistry.meta.stackexchange.com/q/2774/4945
and
https://chemistry.meta.stackexchange.com/questions/2774
and whatever else.
Further discussions
[*] If I apply my knowledge about notation, I also believe the question in question is typeset wrong. If it is an operator, then it must not be in italics; hence $\hat{\mathrm{I}}$ would be the correct way of typesetting it: $\hat{\mathrm{I}}$. The slug therefore would become ...hatmathrmi-xi-hatmathrmi-y... and is especially unreadable.
• I especially prefer Gibbs energy over $\Delta G$
# American Institute of Mathematical Sciences
## A note on a neuron network model with diffusion
1 Ecole Centrale de Lyon, University Claude Bernard Lyon 1, CNRS UMR 5208, Ecully, 69130, France 2 School of Mathematics and Statistics, University of Hyderabad, Hyderabad, India
* Corresponding author: Suman Kumar Tumuluri
Received January 2019; Revised November 2019; Published April 2020
We study the dynamics of an inhomogeneous neuronal network parametrized by a real number $\sigma$ and structured by the time elapsed since the last discharge. The dynamics are governed by a parabolic PDE which describes the probability density of neurons with elapsed time $s$ since their last discharge. We prove existence and uniqueness of a solution to the model. Moreover, we show that under some conditions on the connectivity and the firing rate, the network exhibits total desynchronization.
Citation: Philippe Michel, Suman Kumar Tumuluri. A note on a neuron network model with diffusion. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2020085
## The Annals of Mathematical Statistics
### On the Distribution of the Two-Sample Cramer-von Mises Criterion
T. W. Anderson
#### Abstract
The Cramer-von Mises $\omega^2$ criterion for testing that a sample, $x_1, \cdots, x_N$, has been drawn from a specified continuous distribution $F(x)$ is \begin{equation*}\tag{1}\omega^2 = \int^\infty_{-\infty} \lbrack F_N(x) - F(x)\rbrack^2 dF(x),\end{equation*} where $F_N(x)$ is the empirical distribution function of the sample; that is, $F_N(x) = k/N$ if exactly $k$ observations are less than or equal to $x(k = 0, 1, \cdots, N)$. If there is a second sample, $y_1, \cdots, y_M$, a test of the hypothesis that the two samples come from the same (unspecified) continuous distribution can be based on the analogue of $N\omega^2$, namely \begin{equation*}\tag{2} T = \lbrack NM/(N + M)\rbrack \int^\infty_{-\infty} \lbrack F_N(x) - G_M(x)\rbrack^2 dH_{N+M}(x),\end{equation*} where $G_M(x)$ is the empirical distribution function of the second sample and $H_{N+M}(x)$ is the empirical distribution function of the two samples together [that is, $(N + M)H_{N+M}(x) = NF_N(x) + MG_M(x)\rbrack$. The limiting distribution of $N\omega^2$ as $N \rightarrow \infty$ has been tabulated [2], and it has been shown ([3], [4a], and [7]) that $T$ has the same limiting distribution as $N \rightarrow \infty, M \rightarrow \infty$, and $N/M \rightarrow \lambda$, where $\lambda$ is any finite positive constant. In this note we consider the distribution of $T$ for small values of $N$ and $M$ and present tables to permit use of the criterion at some conventional significance levels for small values of $N$ and $M$. The limiting distribution seems a surprisingly good approximation to the exact distribution for moderate sample sizes (corresponding to the same feature for $N\omega^2$ [6]). The accuracy of approximation is better than in the case of the two-sample Kolmogorov-Smirnov statistic studied by Hodges [4].
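For small samples one can evaluate (2) directly: $dH_{N+M}$ places mass $1/(N+M)$ at each pooled observation, so the integral reduces to a finite sum. A minimal sketch in R (ours, not from the paper):
# Sketch: the two-sample Cramer-von Mises statistic T of equation (2)
cvm_T <- function(x, y) {
  N <- length(x); M <- length(y)
  z <- c(x, y)                 # pooled sample, where dH places mass 1/(N+M)
  FN <- ecdf(x); GM <- ecdf(y)
  (N * M / (N + M)) * sum((FN(z) - GM(z))^2) / (N + M)
}
set.seed(1)
cvm_T(rnorm(10), rnorm(12))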
#### Article information
Source
Ann. Math. Statist. Volume 33, Number 3 (1962), 1148-1159.
Dates
First available: 27 April 2007
http://projecteuclid.org/euclid.aoms/1177704477
# Math Help - Puppy Power (Recursion)
1. ## Puppy Power (Recursion)
The question is: 10 female dogs average 16 puppies each per year, with 50% of those puppies being female. Every year the owners remove 60% of the female puppies and all of the male puppies.
What will the growth be of the female puppies left and their mothers over 15 years?
Only the original mothers and 20% of female puppies can have puppies (each year); the rest will be neutered.
I know this is pretty easy to work out with a calculator if you keep entering the numbers, but is there an equation that will work it out?
Thanks for any help working towards finding it.
2. Let A(n+1) = A(n) + (10 × 16)/2 × 0.1
= 10 + 160/2 × 0.1
= 10 + 16/2
= 10 + 8
=18 females after first year.
A(n) being amount of present female dogs = 10
Now I need to show A(n+2) is right (for next year), is that correct? If so, that's where I start to struggle.
3. I understand there is away to work this out in spreadsheet, could anyone please enlighten me on what would be the easiest way.
Thanks.
4. So would next year be A(n+2) = A(n+1) + (A(n+1) × 16)/2 × 0.1
= 18 + (18 × 16)/2 × 0.1
= 18 + 288/2 × 0.1
= 18 + 28.8/2
= 18 + 14.4
= 32.4, round down to 32
5. Originally Posted by Tally
The question is 10 female dogs average 16 puppies each year with 50% of those puppies being female. Every year the owners remove 60% of the female puppies and all of the male puppies.
What will the growth be of female puppies left and their mothers over 15years?
Let there be $A_n$ females in year $n$, then in year $n+1$ there are:
$A_{n+1}=A_n+A_n\times 8 \times 0.4=4.2A_n$
So:
$A_{15}=4.2A_{14}=4.2^2A_{13}= ... = 4.2^{15} A_0$
and $A_0=10$
RonL
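A quick numerical check of this closed form (a sketch in R, following the model above):
A0 <- 10
A  <- A0
for (n in 1:15) A <- 4.2 * A   # A_{n+1} = A_n + A_n * 8 * 0.4 = 4.2 * A_n
A                              # value after 15 years of iteration
4.2^15 * A0                    # the closed form gives the same number
In a spreadsheet the same recursion is one formula: put 10 in cell A1, enter =4.2*A1 in A2, and fill down 15 rows.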
6. Thank you, things are starting to become clearer now.
# Stem cell slowdown and AI timelines
My knowledge of christians and stem cell research in the US is very limited, but my understanding is that they accomplished real slowdown.
Has anyone looked to that movement for lessons about AI?
Did anybody from that movement take a "change it from the inside" or "build clout by boosting stem cell capabilities so you can later spend that clout on stem cell alignment" approach?
CC'd to lesswrong.com/shortform
# Positive and negative longtermism
I'm not aware of a literature or a dialogue on what I think is a very crucial divide in longtermism.
In this shortform, I'm going to take a polarity approach. I'm going to bring each pole to its extreme, probably beyond positions that are actually held, because I think median longtermism, or the longtermism described in The Precipice, is a kind of average of the two.
Negative longtermism is saying "let's not let some bad stuff happen", namely extinction. It wants to preserve. If nothing gets better for the poor or the animals or the astronauts, but we dodge extinction and revolution-erasing subextinction events, that's a win for negative longtermism.
In positive longtermism, such a scenario is considered a loss. From an opportunity cost perspective, the failure to erase suffering or to bring agency and prosperity to 1e1000 comets and planets hurts literally as badly as extinction.
Negative longtermism is a vision of what shouldn't happen. Positive longtermism is a vision of what should happen.
My model of Ord says we should lean at least 75% toward positive longtermism, but I don't think he's an extremist. I'm uncertain if my model of Ord would even subscribe to the formation of this positive and negative axis.
What does this axis mean? I wrote a little about this earlier this year. I think figuring out what projects you're working on and who you're teaming up with strongly depends on how you feel about negative vs. positive longtermism. The two dispositions toward myopic coalitions are "do" and "don't". I won't attempt to claim which disposition is more rational or desirable, but I will explore each branch.
When Alice wants future X and Bob wants future Y, but if they don't defeat the adversary Adam they will be stuck with future 0 (containing great disvalue), Alice and Bob may set aside their differences and choose to form a myopic coalition to defeat Adam, or not.
• Form myopic coalitions. A trivial case where you would expect Alice and Bob to tend toward this disposition is if X and Y are similar. However, if X and Y are very different, Alice and Bob must each believe that defeating Adam completely hinges on their teamwork in order to tend toward this disposition, unless they're in a high trust situation where they each can credibly signal that they won't try to get a head start on the X vs. Y battle until 0 is completely ruled out.
• Don't form myopic coalitions. A low trust environment where Alice and Bob each fully expect the other to try to get a head start on X vs. Y during the fight against 0 would tend toward the disposition of not forming myopic coalitions. This could lead to great disvalue if a project against Adam can only work via a team of Alice and Bob.
An example of such a low-trust environment is, if you'll excuse political compass jargon, reading bottom-lefts online debating internally the merits of working with top-lefts on projects against capitalism. The argument for coalition is that capitalism is a formidable foe and they could use as much teamwork as possible; the argument against coalition is historical backstabbing and pogroms when top-lefts take power and betray the bottom-lefts.
For a silly example, consider an insurrection against broccoli. The ice cream faction can coalition with the pizzatarians if they do some sort of value trade that builds trust, like the ice cream faction eating some pizza and the pizzatarians eating some ice cream. Indeed, the viciousness of the fight after broccoli is abolished may have nothing to do with the solidarity between the two groups under broccoli's rule. It may or may not be the case that the ice cream faction and the pizzatarians can come to an agreement about how best to increase value in a post-broccoli world. Civil war may follow revolution, or not.
Now, while I don't support long reflection (TLDR I think a collapse of diversity sufficient to permit a long reflection would be a tremendous failure), I think elements of positive longtermism are crucial for things to improve for the poor or the animals or the astronauts. I think positive longtermism could outperform negative longtermism when it comes to finding synergies between the extinction prevention community and the suffering-focused ethics community. However, I would be very upset if I turned around in a couple years and positive longtermists were, like, the premier face of longtermism. The reason for this is that once you admit positive goals, you have to deal with everybody's political aesthetics, like a philosophy professor's preference for a long reflection or an engineer's preference for moar spaaaace or a conservative's preference for retvrn to pastorality or a liberal's preference for intercultural averaging. A negative goal like "don't kill literally everyone" greatly lacks this problem. Yes, I would change my mind about this if 20% of global defense expenditure were targeted at defending against extinction-level or revolution-erasing events; then the neglectedness calculus would lead us to focus the by-comparison-smaller EA community on positive longtermism.
The takeaway from this shortform should be that quinn thinks negative longtermism is better for forming projects and teams.
In negative longtermism, we sometimes invoke this concept of existential security (which i'll abbreviate to xsec), the idea that at some point the future is freed from xrisk, or we have in some sense abolished the risk of extinction.
One premise for the current post is that in a veil of ignorance sense, affluent and smart humans alive in the 21st century have duties/responsibilities/obligations, (unless they're simply not altruistic at all), derived from Most Important Century arguments.
I think it's tempting to say that the duty -- the ask -- is to obtain existential security. But I think this is wildly too hard, and I'd like to propose a kind of different framing
# Xsec is a delusion
I don't think this goal is remotely obtainable. Rather, I think the law of mad science implies that either we'll obtain a commensurate rate of increase in vigilance or we'll die. "Security" implies that we (i.e. our descendants) can relax at some point (as the minimum IQ it takes to kill everyone drops further and further). I think this is delusional, and Bostrom says as much in the Vulnerable World Hypothesis (VWH)
I think the idea that we'd obtain xsec is unnecessarily utopian, and very misleading.
# Instead of xsec summed over the whole future, zero in on subsequent 1-3 generations, and pour your trust into induction
Obtaining xsec seems like something you don't just do for your grandkids, or for the 22nd century, but for all the centuries in the future.
I think this is too tall an order. I think that instead of trying something that's too hard and we're sure to fail at, we should initialize a class or order of protectors who zero in on getting their 1-3 first successor generations to make it.
In math/computing, we reason about infinite structures (like the whole numbers) by asking what we know about "the base case" (i.e., zero) and by asking what we know about constructions assuming we already know stuff about the ingredients to those constructors (i.e., we would like for what we know about n to be transformed into knowledge about n+1). This is the way I'm thinking about how we can sort of obtain xsec just not all at once. There are no actions we can take to obtain xsec for the 25th century, but if every generation 1. protects their own kids, grandkids, and great-grandkids, and 2. trains and incubates a protector order from among the peers of their kids, grandkids, and great-grandkids, then overall the 25th century is existentially secure.
Yes, the realities of value drift make it really hard to simply trust induction to work. But I think it's a much better bet than searching for actions you can take to directly impact arbitrary centuries.
I think when scifis like dune or foundation reasoned about this, there was a sort of intergenerational lock-in, people are born into this order, they have destinies and fates and so on, whereas I think in real life people can opt-in and opt-out of it. (but I think the 0 IQ approach to this is to just have kids of your own and indoctrinate them, which may or may not even work).
But overall, I think the argument that accumulating cultural wisdom among cosmopolitans, altruists, whomever is the best lever we have right now is very reasonable (especially if you take seriously the idea that we're in the alchemy era of longtermism).
# open problems in the law of mad science
The law of mad science (LOMS) states that the minimum IQ needed to destroy the world drops by some fixed number of points (a step size) every fixed number of years (a dropping time).
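As a toy formalization (the symbols and numbers here are invented, not part of the original statement), the law says the minimum IQ is a decreasing step function of time:
# Toy sketch: minimum IQ needed to destroy the world at year t, dropping
# s points every d years from a made-up baseline iq0
min_iq <- function(t, iq0 = 200, s = 1, d = 1.5) iq0 - s * floor(t / d)
min_iq(30)   # 30 years out under these made-up parameters: 180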
My sense from talking to my friend in biorisk and honing my views of algorithms and the GPU market is that it is wise to heed this worldview. It's sort of like the vulnerable world hypothesis (Bostrom 2017), but a bit stronger. VWH just asks "what if nukes but cost a dollar and fit in your pocket?", whereas LOMS goes all the way to "the price and size of nukes is in fact dropping".
I also think that the LOMS is vague and imprecise.
I'm basically confused about a few obvious considerations that arise when you begin to take the LOMS seriously.
1. Are the step size and the dropping time fixed from empiricism to extinction? This is about as plausible as P = NP; obviously Alhazen (or an xrisk community contemporaneous with Alhazen) didn't have to deal with the same step size and dropping time as Shannon (or an xrisk community contemporaneous with Shannon), but it needs to be argued.
2. With or without a proof of 1's falseness, what are step size and dropping time a function of? What are changes in step size and dropping time a function of?
3. Assuming my intuition that the answer to 2 is mostly economic growth, what is a moral way to reason about the tradeoffs between lifting people out of poverty and making the LOMS worse? Does the LOMS invite the xrisk community to join the degrowth movement?
4. Is the LOMS sensitive to population size, or relative consumption of different proportions of the population?
5. For fun, can you write a coherent scifi about a civilization that abolished the LOMS somehow? (this seems to be what Ord's gesture at "existential security" entails). How about merely reversing it's direction, or mere mitigation?
6. My first guess was that empiricism is the minimal civilizational capability that a planet-lifeform pair has to acquire before the LOMS kicks in. Is this true? Does it, in fact, kick in earlier or later? Is a statement of the form "the region between an industrial revolution and an information or atomic age is the pareto frontier of the prosperity/security tradeoff" on the table in any way?
While I'm not 100% sure there will be actionable insights downstream of these open problems, it's plausibly worth researching.
As far as I know, this is the original attribution.
We need an in-depth post on moral circle expansion (MCE), minoritarianism, and winning. I expect EA's MCE projects to be less popular than anti-abortion advocacy in the US (37% say abortion ought to be illegal in all or most cases, while, for one example, veganism is at 6%). I guess the specifics of how the anti-abortion movement operated may be too in the weeds of contingent and peculiar pseudodemocracy, winning elections with less than half of the votes and securing judges and so on, but it seems like we don't want to miss out on studying this. There may be insights.
While many EAs would (I think rightly) consider the anti-abortion people colleagues as MCE activists, some EAs may also (I think debatably) admire republicans for their ruthless, shrewd, occasionally thuggish commitment to winning. Regarding the latter, I would hope to hear out a case for principles over policy preference, keeping our hands clean, refusing to compromise our integrity, and so on. I'm about 50:50 on where I'd expect to fall personally, about the playing fair and nice stuff. I guess it's a question of how much republicans expect to suffer from externalities of thuggishness, if we want to use them to reason about the price we're willing to put on our integrity.
Moreover, I think this "colleagues as MCE activists" stuff is under-discussed. When you steelman the anti-abortion movement, you assume that they understand multiplication as well as we do, and are making a difficult and unhappy tradeoff about the QALYs lost to abortions needed by pregnancies gone wrong or unclean black-market abortions or whathaveyou. I may feel like I oppose the anti-abortion people on multiplicationist/consequentialist grounds (I also just don't think reducing incidence of disvaluable things by outlawing them is a reasonable lever), but things get interesting when I model them as understanding the tradeoffs they're making.
(To be clear, this isn't "EA writer, culturally coded as a democrat for whatever college/lgbt/atheist reasons, is using a derogatory word like 'thuggish' to describe the outgroup"; I'm alluding to empirical claims about how the structure of the government interacts with population density to create minority rule, and making a moral judgment about the norm-dissolving they fell back on when Obama appointed a judge.)
(I also just don't think reducing incidence of disvaluable things by outlawing them is a reasonable lever)
This is a pretty strong stance to take! Most people believe that it is reasonable to ban at least some disvaluable things, like theft, murder, fraud etc., in an attempt to reduce their incidence. Even libertarians who oppose the existence of the state altogether generally think it will be replaced by some private alternative system which will effectively ban these things.
right, yeah, I think it's a fairly common conclusion regarding a reference class like drugs and sex work, but for a reference class like murder and theft it's a much rarer (harder to defend) stance.
I don't know if it's on topic for the forum to dive into all of my credences over all the claims and hypotheses involved here, I just wanted to briefly leak a personal opinion or inclination in OP.
perfect, thanks!
CW death
I'm imagining myself having a 6+ figure net worth at some point in a few years, and I don't know anything about how wills work.
Do EAs have hit-by-a-bus contingency plans for their net worths?
Is there something easy we can do to reduce the friction of the following process: Ask five EAs with trustworthy beliefs and values to form a grantmaking panel in the event of my death. This grantmaking panel could meet for thirty minutes and make a weight allocation decision on the giving what we can app, or they can accept applications and run it that way, or they can make an investment decision that will interpret my net worth as seed money for an ongoing fund; it would be up to them.
I'm assuming this is completely possible in principle: I solicit those five EAs, who have no responsibilities or obligations as long as I'm alive; if they agree, I get a lawyer to write up a will that describes everything.
If one EA has done this, the "template contract" would be available to other EAs to repeat it. Would it be worth lowering the friction of making this happen?
Related idea: I can hardcode weight assignment for the giving what we can app into my will, surely a non-EA will-writing lawyer could wrap their head around this quickly. But is there a way to not have to solicit the lawyer every time I want to update my weights, in response to my beliefs and values changing while I'm alive?
It sounds at the face of it that the second idea is lower friction and almost as valuable as the first idea for most individuals.
Why have I heard about Tyson investing into lab grown, but I haven't heard about big oil investing in renewable?
Tyson's basic insight here is not to identify as "an animal agriculture company". Instead, they identify as "a feeding people company". (Which happens to align with doing the right thing, conveniently!)
It seems like big oil is making a tremendous mistake here. Do you think oil execs go around saying "we're an oil company"? When they could instead be going around saying "we're a powering stuff" company. Being a powering stuff company means you have fuel source indifference!
I mean if you look at all the money they had to spend on disinformation and lobbying, isn't it insultingly obvious to say "just invest that money into renewable research and markets instead"?
Is there dialogue on this? Also, have any members of "big oil" in fact done what I'm suggesting, and I just didn't hear about it?
CC'd to lesswrong shortform
This happens quite widely to my knowledge and I've heard about it a lot (but I'm heavily involved in the climate movement so that makes sense). Examples:
• BP started referring to themselves as "Beyond Petroleum" rather than "British Petroleum" over 20 years ago.
• A report by Greenpeace found that, on average among a few "big oil" businesses, 63% of their advertising was classed as "greenwashing" while only approx. 1% of their total portfolios were renewable energy investment.
• Guardian article covering analysis by Client Earth who are suing big oil companies for greenwashing
• A lawsuit by Client Earth got BP to retract some greenwashing adverts for being misleading
• Examples of oil companies promoting renewables
• Another article on marketing spending to clean up the Big Oil image
Another CCing of something I said on discord to shortform
# If I was in comms at Big EA, I think I'd just say "EAs are people who like to multiply stuff" and call it a day
I think the principle that is both 1. as small as possible and 2. shared as widely among EAs as possible is just "multiplication is morally and epistemically sound".
It just seems to me like the most upstream thing.
That's the post.
# cool projects for evaluators
Find a Nobel prizewinner and come up with a more accurate distribution of Shapley points.
The Norman Borlaug biography (the one by Leon Hesser) really drove home for me that, in this case, there was a whole squad behind the Nobel Prize, but only one guy got the prize. Tons of people moved through the Rockefeller Foundation and institutions in Mexico to lay the groundwork for the green revolution; Borlaug was the real deal, but history should also appreciate his colleagues.
It'd be awesome if evaluators could study high impact projects and come up with shapley point allocations. It'd really outperform the simple prizes approach.
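As a sketch of what assigning Shapley points could look like mechanically (every player name and payoff below is invented for illustration), here is an exact Shapley-value computation for a three-player toy game in R:
# Toy Shapley values: average each player's marginal contribution over
# all orderings of a 3-player coalition game
players <- c("Borlaug", "ColleagueA", "Funder")
v <- function(S) {                        # made-up characteristic function
  if (all(c("Borlaug", "Funder") %in% S)) return(100)
  if ("Borlaug" %in% S) return(40)
  0
}
perms <- matrix(c(1,2,3, 1,3,2, 2,1,3, 2,3,1, 3,1,2, 3,2,1),
                ncol = 3, byrow = TRUE)
shap <- setNames(numeric(3), players)
for (r in seq_len(nrow(perms))) {
  seen <- character(0)
  for (i in perms[r, ]) {
    p <- players[i]
    shap[p] <- shap[p] + (v(c(seen, p)) - v(seen)) / nrow(perms)
    seen <- c(seen, p)
  }
}
shap   # the three values sum to v of the full coalition, 100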
Thanks to the discord squad (EA Corner) who helped with this.
Casual, not-resolvable-by-bet prediction:
# Basically EA is going to splinter into "trying to preserve permanent counter culture" and "institutionalizing"
I wrote yesterday about "the borg property", that we shift like the sands in response to arguments and evidence, which amounts to assimilating critics into our throngs.
As a premise, there exists a basic march of subcultures from counterculture to institution: abolitionists went from wildly unpopular to champions of commonsense morality over the course of some hundreds of years; I think feminism is reasonably institutionalized now but had countercultural roots, let's say 150 years. Drugs from weed to hallucinogens have counterculture roots, and are still a little counterculture, but may not always be. BLM has gotten way more popular over the last 10 years.
But the borg property seems to imply that we'll not ossify (into, begin metaphor torturing sequence: rocks) enough to follow that march, not entirely. Rocks turn into sand via erosion, we should expect bottlenecks to reverse erosion (sand turning into rocks), i.e. the constant shifting of the dunes with the wind.
Consequentialist cosmopolitans, rats, people who like to multiply stuff, whomever else may have to rebrand if institutionalized EA got too hegemonic, and I've heard a claim that this is already happening in the "rats who aren't EAs" scene in the bay, that there are ambitious rats who think the ivy league & congress strategy is a huge turn-off.
Of interest is the idea that we may live in a world where "serious careerists who agree with leadership about PR are the only people allowed in the Moskovitz, Tuna, SBF ecosystems"; perhaps this is a cue from the Koch or Thiel ecosystems (perhaps not: I don't really know how they operate). Now the core branding of EA may align itself with that careerism ecosystem, or it may align itself with higher variance stuff. I'm uncertain what will happen, I only expect splintering, not any proposition about who lands where.
# Ok, maybe a little resolvable by bet
A manifold market could look like "will there exist charities founded and/or staffed by people who were high-engagement EAs for a number of years before starting these projects, but are not endorsed by EA's billionaires". This may capture part of it.
post idea: based on interviews, profile scenarios from software (exploit discovery, responsible disclosure, coordination of patching, etc.) and try to analyze them with an aim toward understanding what good infohazard protocols would look like.
(I have a contact who was involved with a big patch, if someone else wants to tackle this reach out for a warm intro!)
Don't Look Up might be one of the best mainstream movies for the xrisk movement. Eliezer said it's too on the nose to bear/warrant actually watching. I fully expect to write a review for EA Forum and lesswrong about xrisk movement building.
# One brief point against Left EA: solidarity is not altruism.
low effort shortform: do pingback to here if you steal these ideas for a more effortful post
It has been said in numerous places that leftism and effective altruism owe each other some relationship, stemming from common goals and so on. In this shortform, I will sketch one way in which this is misguided.
I will be ignoring cultural/social effects, like bad epistemics, because I think bad epistemics are a contingent rather than necessary feature of the left.
idea: taboo "community building", say "capacity building" instead.
Why?
We need a name for the following heuristic, I think. I think of it as one of those "tribal knowledge" things that gets passed on like an oral tradition without being citeable in the sense of being part of a literature. If you come up with a name I'll certainly credit you in a top level post!
I heard it from Abram Demski at AISU'21.
Suppose you're either going to end up in world A or world B, and you're uncertain about which one it's going to be. Suppose you can pull lever $L_A$, which will be 100 valuable if you end up in world A, or you can pull lever $L_B$, which will be 100 valuable if you end up in world B. The heuristic is that if you pull $L_A$ but end up in world B, you do not want to have created disvalue; in other words, your intervention conditional on the belief that you'll end up in world A should not screw you over in timelines where you end up in world B.
This can be fully mathematized by saying "if most of your probability mass is on ending up in world A, then obviously you'd pick a lever $L$ such that $V_A(L)$ is very high, just also make sure that $V_B(L)$ creates an acceptably small amount of disvalue", where $V_A(L)$ is read "the value of pulling lever $L$ if you end up in world A".
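A toy numeric version of the heuristic (all numbers below are invented):
# Sketch: pick the best lever among those whose worst-case world does not
# create unacceptable disvalue
p_A <- 0.8                             # credence in ending up in world A
V <- rbind(L_A = c(A = 100, B = -50),  # rows are levers, columns are worlds
           L_B = c(A = 0,   B = 100))
acceptable <- apply(V, 1, min) > -10   # cap on disvalue in the bad world
ev <- V[, "A"] * p_A + V[, "B"] * (1 - p_A)
names(which.max(ev[acceptable]))       # "L_B": L_A has higher EV but fails the cap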
Is there an econ major or geek out there who would like to
1. accelerate my lit review as I evaluate potential startup ideas in prediction markets and IIDM by writing paper summaries
2. occasionally tutor me in microeconomics and game theory and similar fun things
something like 5 hours / week, something like \$20-40 /hr
(EA Forum DMs / quinnd@tutanota.com / disc @quinn#9100)
I'm aware that there are contractor-coordinating services for each of these asks, I just think it'd be really awesome to have one person to do both and to keep the money in the community, maybe meet a future collaborator!
# What's the latest on moral circle expansion and political circle expansion?
• Were slaves excluded from the moral circle in ancient Greece or the US antebellum South, and how does this relate to their exclusion from the political circle?
• If AIs could suffer, is recognizing that capacity a slippery slope toward giving AIs the right to vote?
• Can moral patients be political subjects, or must political subjects be moral agents? If there was some tipping point or avalanche of moral concern for chickens, that wouldn't imply arguments for political representation of chickens, right?
• Consider pre-suffrage women, or contemporary children: they seem fully admitted into the moral circle, but only barely admitted to the political circle.
• A critique of MCE is that history is not one march of worse to better (smaller to larger), there are in fact false starts, moments of retrograde, etc. Is PCE the same but even moreso?
If I must make a really bad first approximation, I would say a rubber band is attached to the moral circle, and on the other end of the rubber band is the political circle, so when the moral circle expands it drags the political circle along with it on a delay, modulo some metaphorical tension and inertia. This rubber band model seems informative in the slave case, but uselessly wrong in the chickens case, and points to some I think very real possibilities in the AI case.
Lesson 10
Relating Linear Equations and their Graphs
• Let’s connect functions to features of their graphs.
10.1: Notice and Wonder: Features of Graphs
Here are graphs of $$y=2x+5$$ and $$y=5 \boldcdot 2^x$$.
What do you notice? What do you wonder?
10.2: Making Connections
1. Here are some equations and graphs. Match each graph to one or more equations that it could represent. Be prepared to explain how you know.
• $$y = 8$$
• $$y = 3x - 2$$
• $$x + y = 6$$
• $$0.5x = \text-4$$
• $$y = x$$
• $$\text- \frac23 x = y$$
• $$12 - 4x = y$$
• $$x - y = 12$$
• $$2x + 4y = 16$$
• $$3x = 5y$$
2. Choose either graph D or F. Let $$x$$ represent hours after noon on a given day and $$y$$ represent the temperature in degrees Celsius in a freezer.
• In this situation, what does the $$y$$-intercept mean, if anything?
• In this situation, what does the $$x$$-intercept mean, if anything?
10.3: Connecting Equations and Graphs
1. Without substituting any values for $$x$$ and $$y$$ or using technology, decide whether graph A could represent each equation, and explain how you know.
1. $$4x = y$$
2. $$x - 8 = y$$
3. $$\text-5x = 10y$$
4. $$3y - 12= 0$$
2. Write a new equation that could be represented by:
1. Graph D
2. Graph F
3. On this graph, $$x$$ represents minutes since midnight and $$y$$ represents temperature in degrees Fahrenheit.
1. Explain what the intercepts tell us about the situation.
2. Write an equation that relates the two quantities.
# Is there any better alternative to Linear Probability Model?
I read here, here, here, and elsewhere that linear probability model (LPM) might be used to get risk differences when the outcome variable is binomial.
LPM has some advantages such as ease of interpretation by simplifying the estimation of risk differences, which in certain fields might be preferable than odds ratio that is usually provided by logistic regression.
My concerns are however that "[u]sing the LPM one has to live with the following three drawbacks:
1. The effect ΔP(y=1∣X=x0+Δx) is always constant
2. The error term is by definition heteroscedastic
3. OLS does not bound the predicted probability in the unit interval"
Therefore, I would appreciate any idea on a better regression model for binomial data to get robust risk difference in R while avoiding these drawbacks of LPM.
## 2 Answers
The first "drawback" you mention is the definition of the risk difference, so there is no avoiding this.
There is at least one way to obtain the risk difference using the logistic regression model. It is the average marginal effects approach. The formula depends on whether the predictor of interest is binary or continuous. I will focus on the case of the continuous predictor.
Imagine the following logistic regression model:
$$\ln\bigg[\frac{\hat\pi}{1-\hat\pi}\bigg] = \hat{y}^* = \hat\gamma_c \times x_c + Z\hat\beta$$
where $$Z$$ is an $$n$$ cases by $$k$$ predictors matrix including the constant, $$\hat\beta$$ are $$k$$ regression weights for the $$k$$ predictors, $$x_c$$ is the continuous predictor whose effect is of interest and $$\hat\gamma_c$$ is its estimated coefficient on the log-odds scale.
Then the average marginal effect is:
$$\mathrm{RD}_c = \hat\gamma_c \times \frac{1}{n}\Bigg(\sum\frac{e^{\hat{y}^*}}{\big(1 + e^{\hat{y}^*}\big)^2}\Bigg)\\$$
This is the average PDF scaled by the weight of $$x_c$$. It turns out that this effect is very well approximated by the regression weight from OLS applied to the problem regardless of drawbacks 2 and 3. This is the simplest justification in practice for the application of OLS to estimating the linear probability model.
For drawback 2, as mentioned in one of your citations, we can manage it using heteroskedasticity-consistent standard errors.
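For example, a minimal sketch using the sandwich and lmtest packages (fit.ols here stands for the OLS fit from the simulation below):
# Sketch: heteroskedasticity-consistent (HC3) standard errors for an LPM
library(sandwich)
library(lmtest)
coeftest(fit.ols, vcov = vcovHC(fit.ols, type = "HC3"))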
Now, Horrace and Oaxaca (2003) have done some very interesting work on consistent estimators for the linear probability model. To explain their work, it is useful to lay out the conditions under which the linear probability model is the true data generating process for a binary response variable. We begin with:
\begin{align} \begin{split} P(y = 1 \mid X) {}& = P(X\beta + \epsilon > t \mid X) \quad \text{using a latent variable formulation for } y \\ {}& = P(\epsilon > t-X\beta \mid X) \end{split} \end{align}
where $$y \in \{0, 1\}$$, $$t$$ is some threshold above which the latent variable is observed as 1, $$X$$ is matrix of $$n$$ cases by $$k$$ predictors, and $$\beta$$ their weights. If we assume $$\epsilon\sim\mathcal{U}(-0.5, 0.5)$$ and $$t=0.5$$, then:
\begin{align} \begin{split} P(y = 1 \mid X) {}& = P(\epsilon > 0.5-X\beta \mid X) \\ {}& = P(\epsilon < X\beta -0.5 \mid X) \quad \text{since } \mathcal{U}(-0.5, 0.5) \text{ is symmetric about } 0 \\ {}&=\begin{cases} 0, & \mathrm{if}\ X\beta -0.5 < -0.5\\ \frac{(X\beta -0.5)-(-0.5)}{0.5-(-0.5)}, & \mathrm{if}\ X\beta -0.5 \in [-0.5, 0.5)\\ 1, & \mathrm{if}\ X\beta -0.5 \geq 0.5 \end{cases} \quad \text{CDF of } \mathcal{U}(-0.5,0.5)\\ {}&=\begin{cases} 0, & \mathrm{if}\ X\beta < 0\\ X\beta, & \mathrm{if}\ X\beta \in [0, 1)\\ 1, & \mathrm{if}\ X\beta \geq 1 \end{cases} \end{split} \end{align}
So the relationship between $$X\beta$$ and $$P(y = 1\mid X)$$ is only linear when $$X\beta \in [0, 1]$$, otherwise it is not. Horrace and Oaxaca suggested that we may use $$X\hat\beta$$ as a proxy for $$X\beta$$ and in empirical situations, if we assume a linear probability model, we should consider it inadequate if there are any predicted values outside the unit interval.
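To see this data generating process in action, here is a small simulation (coefficients invented and chosen so that $$X\beta$$ stays inside the unit interval):
# Sketch: with U(-0.5, 0.5) errors and threshold 0.5, the LPM is exactly true
set.seed(2)
n2 <- 5000
x2 <- runif(n2)
y2 <- as.integer(0.2 + 0.6 * x2 + runif(n2, -0.5, 0.5) > 0.5)
coef(lm(y2 ~ x2))   # recovers approximately (0.2, 0.6)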
As a solution, they recommended the following steps:
1. Estimate the model using OLS
2. Check for any fitted values outside the unit interval. If there are none, stop, you have your model.
3. Drop all cases with fitted values outside the unit interval and return to step 1
Using a simple simulation (and in my own more extensive simulations), they found this approach to recover adequately $$\beta$$ when the linear probability model is true. They termed the approach sequential least squares (SLS). SLS is similar in spirit to doing MLE and censoring the mean of the normal distribution at 0 and 1 within each iteration of estimation, see Wacholder (1986).
Now how about if the logistic regression model is true? I will demonstrate in a simulated data example what happens using R:
# An implementation of SLS
s.ols <- function(fit.ols) {
  dat.ols <- model.frame(fit.ols)
  n.org <- nrow(dat.ols)
  fitted <- fit.ols$fitted.values
  form <- formula(fit.ols)
  # Refit on the retained cases until no fitted value falls outside [0, 1]
  while (any(fitted > 1 | fitted < 0)) {
    dat.ols <- dat.ols[!(fitted > 1 | fitted < 0), ]
    m.ols <- lm(form, dat.ols)
    fitted <- m.ols$fitted.values
  }
  m.ols <- lm(form, dat.ols)
  # Bound predicted values at 0 and 1 using the complete data
  m.ols$fitted.values <- punif(as.numeric(model.matrix(fit.ols) %*% coef(m.ols)))
  m.ols
}
set.seed(12345)
n <- 20000
dat <- data.frame(x = rnorm(n))
# With an intercept of 2, this will be a high probability outcome
dat$y <- ((2 + 2 * dat$x + rlogis(n)) > 0) + 0
coef(fit.logit <- glm(y ~ x, binomial, dat))
# (Intercept) x
# 2.042820 2.021912
coef(fit.ols <- lm(y ~ x, dat))
# (Intercept) x
# 0.7797852 0.2237350
coef(fit.sls <- s.ols(fit.ols))
# (Intercept) x
# 0.8989707 0.3932077
We see that the RD from OLS is .22 and that from SLS is .39. We can also compute the average marginal effect from the logistic regression equation:
coef(fit.logit)["x"] * mean(dlogis(predict(fit.logit)))
# x
# 0.224426
We can see that the OLS estimate is very close to this value.
How about we plot the different effects to better understand what they try to capture:
library(ggplot2)
dat.res <- data.frame(
  x = dat$x,
  logit = fitted(fit.logit),
  ols = fitted(fit.ols),
  sls = fitted(fit.sls))
dat.res <- tidyr::gather(dat.res, model, fitted, logit:sls)
ggplot(dat.res, aes(x, fitted, col = model)) +
  geom_line() +
  theme_bw()
From here, we see that the OLS result looks nothing like the logistic curve. OLS captures the average change in probability of y across the range of x (the average marginal effect), while SLS results in the linear approximation to the logistic curve in the region where it is changing on the probability scale. In this scenario, I think the SLS estimate better reflects the reality of the situation. As with OLS, heteroskedasticity is implied by SLS, so Horrace and Oaxaca recommend heteroskedasticity-consistent standard errors.
• This is perfect. Thanks a lot, @Heteroskedastic Jim. Nov 13 '18 at 3:18
• @Krantz glad it's helpful. I should add that there is an alternative approach to estimating the LPM in the blm package in R, but I disagree with that approach generally. It constrains the coefficients so that all the predicted y lie in [0, 1]. I think it has the effect of underestimating relationships. It also appears to suggest an expit transformation for continuous x's. Nov 13 '18 at 3:35
• I think the marginal effects and SLS approaches handle the problem perfectly. I feel that the LPM is no longer warranted given these better options. Thanks a lot for this tremendous help. Nov 13 '18 at 3:38
• SLS is an LPM, but an interesting approach to estimating the LPM. Note that AME almost always agrees with OLS coefficients, which is also an LPM. So that creates a conundrum. Also, SLS = OLS if there are no predicted probabilities outside of 0-1. Nov 13 '18 at 3:42
• But these two approaches, SLS and AME, avoid drawback 3 very well. AME also avoids the debate about drawback 2. As you said, drawback 1 "is the definition of the risk difference". That means that these two methods are perfect for the problem at hand. So, again, thank you very much for this. Nov 13 '18 at 3:46
1. Every model has this problem. For example, logistic regression implies a constant log odds ratio.
2. For the binomial distribution, the variance is $$p(1-p)$$ for one trial. So different predicted values of p imply different variances. But in the model fitting process this problem is resolved by WLS (weighted least squares).
3. For $$\hat p = X\hat \beta$$, it is possible for some $$X$$ that $$\hat p$$ goes lower than 0 or higher than 1, especially when the model is used to predict the probability using $$X$$s that are not in the dataset used to build the model.
• Thanks, @a_statistician. Your answer is fair, but is there any better option than the LPM to get the risk difference instead of the odds ratio? Nov 13 '18 at 2:47
• If you just have one categorical covariate, then LPM and logistic regression will give you nearly the same risk difference ($p_1 - p_2$), but you need to convert the log odds (ratio) into $p$. For multiple covariates, I do not think another model can replace the LPM, because other models use a non-linear function as the link. So a constant risk difference will become a non-constant measurement (such as an odds ratio). Nov 13 '18 at 2:56
• Thanks, @a_statistician. My model has several covariates. But how about calculating RR from the log OR from the logistic regression coefficients using the Cochran-Mantel-Haenszel equations sphweb.bumc.bu.edu/otlt/MPH-Modules/BS/…? Is there anything wrong with that? Nov 13 '18 at 3:07
• RR and OR are different by definition. But if $p$ is close to 0, RR and OR are very close.
In this situation, whether you call exp($\hat\beta$) an RR or an OR does not matter. But if $p$ is not close to 0, say 0.3, then RR and OR are totally different. In this situation, if you want to get the RR from a logistic model, you need to calculate two $p$s, then get the RR. Nov 13 '18 at 3:07
• For CMH RR, there is an assumption behind it, i.e., RRs are equal between strata. When you fit a logistic model, you assume that RRs are unequal between strata. So the CMH RR contradicts the logistic model. Nov 13 '18 at 3:12
# SUSY 2015, 23rd International Conference on Supersymmetry and Unification of Fundamental Interactions
23-29 August 2015
Lake Tahoe
US/Pacific timezone
## Indirect Searches of Degenerate MSSM
28 Aug 2015, 17:30
20m
Mountain ()
### Mountain
Supersymmetry Phenomenology and Experiment
### Speaker
Debtosh Chowdhury (INFN, Roma)
### Description
A degenerate supersymmetric sparticle spectrum can escape constraints from flavor physics while at the same time evading the limits from the direct searches, if the degeneracy extends to the gaugino sector. Inspired by this, we consider a scenario where all the soft terms have approximately a common mass scale at $M_{\text{SUSY}}$ while allowing for splittings within $\mathcal{O}(10\%)$. As a result, the third generation sfermions have large to maximal (left-right) mixing, as do the charginos and some sectors of the neutralino mass matrix. We study this scenario in the light of the discovery of the Higgs boson with mass $\sim$ 125 GeV. We consider constraints from $B$-physics, the anomalous magnetic moment of the muon ($a_\mu$), and the dark matter relic density. We find that a supersymmetric spectrum as light as 600 GeV can still escape the present limits from the LHC and flavor physics and can account for the observed $a_\mu$ within $2\sigma$. The neutralino relic density is too small to match the observed data, whereas direct search limits from XENON100 and LUX put severe constraints on this scenario.
### Primary author
Debtosh Chowdhury (INFN, Roma)
### Co-authors
Ketan Patel (INFN, Padova), Sudhir Kumar Vempati (Centre for High Energy Physics, Indian Institute of Science), Xerxes Tata (University of Hawaii)
# Genie 2000 Programming Library
by dos
Tags: 2000, genie, library, programming
P: 1 Hi everybody! I read in a document from Canberra that: "Genie 2000 Program Library allows a programmer to interact directly with Genie 2000 capabilities from a C++ language environment and also allows the addition of user-coded analysis engines to the Genie 2000 environment". If someone knows about this issue, please help me!
P: n/a I have used Genie's internal language to create reports, and used the REXX language to interface to Genie to do more things. Both options are very weak, limited and slow. Currently I interface to Genie executable modules using Java. This method provides great power, as much as is possible with the existing executables in the EXEFILES folder, plus processing the data, reporting, and interfacing to Excel using Java. You could do just the same with C++. It is quite a bit of work however. I am not aware of any special custom interface techniques that Canberra has published concerning Genie with C++ however.
P: 3 ChrisLeslie, You said you have written an interface in Java for Genie functions. This is exactly what I need now. Would you consider helping with the project, so we can work together? I have 4 years of experience in Java; see my project at radlab.sourceforge.net. If you are finished, is your code open source? Would you share it? Dagistan
P: 6
## Genie 2000 Programming Library
Quote by dagistan You said you have written an interface in Java for Genie functions. This is exactly what I need now. Would you consider help in the project? So we can work together? I have 4 years experience in Java , see my project at radlab.sourceforge.net If you are finished, is your codes open source? Would you share them? Dagistan
What are you interested in doing? I have not done Java, but I have written some software in C++ to control a multi-channel analyzer through Genie 2000. It was for-pay contract, so I cannot show you the source, but I might be able to answer some questions. I started writing a document about programming the Genie 2000 in C++ for my own use at one point, but it is pretty incomplete. I can see if I can find it if you are interested...
It was a pain to get started, because all of the Canberra documentation assumes you are using Visual Basic and know COM pretty well. Figuring out which parameters to set and which calls set them was also a problem. If you are pretty familiar with the Genie 2000 software already, getting it to work might be easier for you.
Bill.
P: 3 Hi Bill, Thank you very much for your reply. I am doing a PhD at PSU, doing NAA analysis of wood rings. So I need to write code to control the sample changer we have, save the spectra, and then run the analysis and write the results into files. With lots of effort (because of the Genie documentation, as you said), I have used C++ and finally managed to save the spectrum and move the sample changer. However it seems like the analysis will take more time. By the way, I started with the example provided in Genie 2000 3.1. I still have questions about the current parts:
1) I could not find any way to learn whether the spectrum is completed,
2) No way to get info on whether the sample changer came to its home position,
3) No idea how to start the analysis part.
So I know C++ and can use it. But Genie does not have a clear API or documentation. I really need help with the analysis: energy calibration, peak search, nuclide identification, interference correction, efficiency correction, and then calculating the mass of each isotope in the sample. Thank you again for your reply and help. We might finish the documentation together if you are interested in open source development. We can then put this on sourceforge.net. I have another project called RADlab at sourceforge.net, at http://radlab.sourceforge.net/, which is open source. Dagistan
P: 6
Dagistan,
Quote by dagistan I have still questions about current parts; 1-) I could not find any way to learn whether the spectrum is completed,
Assuming your device access object is called devAccess, then you can poll the device status like this:
HRESULT hr;
enum DeviceAccess::DeviceStatus status;
bool acquiring;
hr = devAccess->get_AnalyzerStatus(&status);
acquiring = ((status & DeviceAccess::aAcquiring) != 0);
The call to get_AnalyzerStatus gets the current analyzer status. If the aAcquiring bit is set, then the acquisition is not complete. The bitwise AND of status and DeviceAccess::aAcquiring masks off the other bits, making acquiring true when the analyzer is still acquiring data and false when it is done.
The manual lists AnalyzerStatus as a property in the DeviceAccess chapter. Any of the properties can be read by prepending get_ to them.
Quote by dagistan 2-) No way to get info whether sample changer came to home position.
I have not used the sample changer. If there is not a property for getting its status, then perhaps one of the hundreds of parameters listed in the CAM Files chapter will tell you the sample changer status. If there is a parameter, then you can use get_Param, much like get_AnalyzerStatus above.
The parameter descriptions are not really informative, but in most cases I have been able to narrow down the number of candidate parameters to no more than about a dozen or so. Then I write a program that prints out their values to see which one is most likely to be the one I want. I realize this approach is less than ideal, but in many cases the documentation does not give enough information to be sure you are looking at the right parameter without testing it first.
Quote by dagistan 3-) No idea how to start the analysis part.
I spent a long time trying to figure out how to do analysis, using at least three different approaches. The only way I could get it to work is by using the SequenceAnalysis module. Basically, you have to create a .ASF file using the Gamma Analysis and Acquisition program or the Analysis Sequence Editor program. There is documentation on how to do this in the Genie Operations Manual, and you can test your sequence by running it in the Gamma Analysis and Acquisition program to verify that it does what you want.
Once you have a .ASF file, you need to create a sequence analyzer object and call your sequence with it:
ISequenceAnalyzerPtr seqAn;
hr = seqAn.CreateInstance(__uuidof(SequenceAnalyzer));
/* check hr for failure */
short step = 0;
hr = seqAn->Analyze(dataAccess, &step, bsName, VARIANT_FALSE,
VARIANT_FALSE, VARIANT_FALSE, VARIANT_FALSE, NULL, NULL);
/* check hr for failure */
In my code dataAccess is a DataAccess object associated with a CAM file being analyzed. You might be able to directly pass in a DeviceAccess object, too, but I have not tried it. The bsName parameter is a BSTR object containing the name of the .ASF file to use.
After this code sequence, step will contain the step number of the last step executed, hopefully the last step in your sequence. You probably have to read the right parameters with get_Param to get the result you want.
Quote by dagistan So I know C++ and can use it. But Genie does not have a clear API or documentation. I really need help with analysis: energy calibration, peak search, nuclide identification, interference correction, efficiency correction, and then calculating the mass of each isotope in the sample. Thank you again for your reply and help. We might together finish the documentation if you are interested in open source development. We can then put this into sourceforge.net. I have another project called RADlab at sourceforge.net, at http://radlab.sourceforge.net/, which is open source. Dagistan
After looking at my documentation again, I am a little embarrassed to say that it is not quite as far along as I had remembered. Basically, it is a bunch of notes that no one but me is likely to understand. I'll try to flesh out what I have and then maybe we can start adding different sections (and anyone else who might be out there is welcome to join).
I am probably not going to have a lot of time to work on it this week, though. Bug me in a few weeks if you have not heard anything from me. Or post a reply here sooner if you have questions on my replies above.
Bill.
P: 1 Hi, I used to work for Canberra and can help you with CAM parameters. I don't do Java; you are on your own for that stuff. I need to do a simple procedure to do some acquisition and reports. I'll probably use REXX. I wanted to use VB and was looking for script information. I may try calling the exes; a good idea, since I can launch them from VB. I've written a lot of command procedures with REXX and much more with the VMS side of things, which, while different, shares the CAM structure. Rich Hume
P: 6
Hello,
Quote by RichHume Hi, I used to work for Canberra and can help you with CAM parameters. I don't do Java; you are on your own for that stuff.
I found the last set of CAM parameters I needed, but I'll definitely hit you up next time I get stuck.
Quote by RichHume Hi, I need to do a simple procedure to do some acquisition and reports. I'll probably use REXX. I wanted to use VB and was looking for script information. I may try calling the exes; a good idea, since I can launch them from VB. I've written a lot of command procedures with REXX and much more with the VMS side of things, which, while different, shares the CAM structure. Rich Hume
The Genie Programming Library manual gives all of its examples in Visual Basic. It might be easier to control the acquisition through a DeviceAccess object. OTOH, if you are already familiar with using the REXX commands, that might be easier for you. According to the operations manual there is a way to use REXX under Windows, but I have never done it.
I have been doing all of my development in C++, so I don't have any experience with VB or REXX.
Bill.
P: 6 This is not really a question about CAM parameters, but do you know how to find the names of detectors that are loaded into the database (that is, the detectors that would appear in the Open Datasource... dialog)? I want to enumerate them for a detector selection widget I am building... Bill.
P: 1 Hi, I also try to automate some calculations using the Programming Library. I managed to define and run an analysis sequence (I'm using the C# library, by the way). However, some parameters (e.g. the geometry) depend on the user input. In other words, the efficiency correction has to be based upon a geometry which is constructed at runtime. Does anyone have experience with this? I tried to construct a .gis file dynamically and run the command winisocs to obtain the ecc file. However, the ISOCS efficiency correction step in the analysis sequence takes a .geo file. Kind regards
P: 3 Hi Everybody, I was trying to change sample information from my code. Did anybody find out how to change the sample buildup type? I could not find the CAM code for this one. Thanks Dagistan
P: 1 Hello everybody! I have the following problem... We have a Genie 2000 system that works great. We use an old sample changer system that relies upon relays, which are starting to act strange and unpredictable. So we decided to get rid of the relays and instead use a LabVIEW environment to control the actuators. So I need some advice on how to integrate the Genie 2000 system into LabVIEW, meaning that LabVIEW should tell Genie when to start a measurement and Genie should send the information that it has finished the measurement back. An API would be useful, but as I understood it there is no such thing, or anything comparable to use in LabVIEW... Please help!!! Does anyone have a suggestion or maybe have already solved a similar problem? Sincerely, Einphysiker
P: 3 Hi, Who can give information about the file structure of CAM files? I can read spectral data, live time, real time and some ASCII parameters from the file, but I cannot find the energy calibration values (float, double or any other format?). I really tried hard to find the addresses of these values in the cnf-file with a hex editor, but there was no success.
P: 4 Hi RSachse, can you please tell me where you got the real-time and live-time values? I have found some values in the cnf file that might be them. I know the decimal values of the numbers that I'm looking for, but they are completely different :(
P: 4 Never mind, I just figured out how to get them :D Now I'm trying to find the calibration values... I'll tell you if I find something useful :D
P: 3 Hi 0x0000eWan, sorry I am late, but I was on holidays. Here is my VBA code for reading CNF files (Alpha-Analyst spectra):
********
Sub ReadCNF(sfile As String)
    Dim spek(1024) As Long
    Dim c8 As Currency '8 bytes with LSB first
    Dim dreal As Double
    Dim dlive As Double
    Dim dpreset As Double
    Dim inchan As Integer
    Dim sUnit As String * 64
    Dim sDet As String * 20
    Dim sChTitel As String
    Dim ssample As String
    Dim fr As Integer
    Dim r As Single
    Dim rzero As Single
    Dim rx As Single
    Dim rx2 As Single
    Dim rx3 As Single
    Dim sampletitle As String * 16
    Dim sampleID As String * 16
    Dim sample As String * 16
    Dim sdescr As String * 64
    fr = 4000 'factor for calibration values ???
    fi = FreeFile
    Open sfile For Binary As fi
    Get #fi, 2256, c8: dpreset = ((Not c8) + 1) / 1000: Cells(11, 2) = dpreset
    Get #fi, 2904, r: rzero = r / fr: If rzero <= 0 Then rzero = 2.5
    Cells(13, 2) = rzero
    Get #fi, , r: rx = r / fr: Cells(13, 3) = rx
    Get #fi, , r: rx2 = r / fr: Cells(13, 4) = rx2
    Get #fi, , r: rx3 = r / fr: Cells(13, 5) = rx3
    Get #fi, 2622, c8: dlive = ((Not c8) + 1) / 1000: Cells(9, 2) = dlive 'livetime in msec
    Get #fi, 2630, c8: dreal = ((Not c8) + 1) / 1000: Cells(10, 2) = dreal 'realtime in msec
    Get #fi, 2930, sUnit 'MeV
    Get #fi, 2951, inchan 'channels
    Get #fi, 3102, sDet: Cells(7, 2) = sDet 'detector name
    Get #fi, 21553, sampletitle: Cells(5, 6) = sampletitle 'title
    Get #fi, 21617, sampleID: Cells(6, 6) = sampleID 'sample ID
    Get #fi, 22382, sample: Cells(7, 6) = sample 'sample
    Get #fi, 22447, sdescr: Cells(8, 6) = sdescr 'sample description 1
    Get #fi, 22511, sdescr: Cells(9, 6) = sdescr 'sample description 2
    Get #fi, 22575, sdescr: Cells(10, 6) = sdescr 'sample description 3
    Seek #fi, 30209 'offset of spectrum data
    Get #fi, , spek
    Close fi
    For i = 1 To 1024 'insert in cells
        ze = 15 + i
        Cells(ze, 1) = i 'channel
        Cells(ze, 2) = rzero + i * rx + i * rx2 ^ 2 'MeV
        Cells(ze, 3) = spek(i) 'counts
    Next i
End Sub
************
If you send me one of your specs I can have a look at it. Best regards
P: 4
Hi,
I've finished processing cnf files. It's working fine for new-version cnf files, but in old-version ones there are issues with realtime, livetime, dates and calibration values. It's easy to change the code to work with old cnf files, but I want my app to be universal and usable for both kinds of cnf files. I haven't figured out how to clearly determine which file is old and which is new, but I'm working on it ;) I'm sending you my code written in Delphi. It's quite messy because I used it only for testing and then rewrote it into another app, but I hope it will help you. I'll send an updated version when I complete it.
Attached Files
cnf.zip (6.0 KB, 25 views)
P: 2
Quote by 0x0000eWan Hi, I've finished processing cnf files. It's working fine for new-version cnf files, but in old-version ones there are issues with realtime, livetime, dates and calibration values. It's easy to change the code to work with old cnf files, but I want my app to be universal and usable for both kinds of cnf files. I haven't figured out how to clearly determine which file is old and which is new, but I'm working on it ;) I'm sending you my code written in Delphi. It's quite messy because I used it only for testing and then rewrote it into another app, but I hope it will help you. I'll send an updated version when I complete it.
There are no "old" and "new" versions of the CAM (cnf) file format, there is only one and it is quite old. What is happening is that you are relying on fixed offsets to read parameters, spectral data, etc. and CAM files are much more complex than that. They have a filesystem-like structure, with "directories" to locate parameters, spectral (or any other) data, as well as an allocation bitmap and some other stuff. Neither the spectrum nor the parameters are at a fixed address, and neither they have to be contiguous inside the file (most of the time, for example, the spectrum is not!) All that was inherited from VMS (where Genie originated), which supports files with complex structure, unlike Windows. I don't think even CANBERRA has today the complete description of the CAM format...
A newer version of Genie might seem to generate a different "version" of the CAM file, simply because it is saving a different number of parameters or saving them in a different order. If you create the CAM file in VB using the CANBERRA SDK, chances are that you will not be able to read it back with your code. Even when using the same version of Genie, changing something like the detector type or a setting somewhere can cause your code to fail.
The best way to read the CAM files is by using the CANBERRA libraries; otherwise you'll be reinventing the wheel (and believe me, it is a complicated one). CANBERRA has an SDK consisting of a set of COM components and C libraries that can be used with VB or C++. IIRC, the pcam.dll in the EXEFILES directory is the one that does all the low-level access. There should also be a sad.dll file or similar that can be called from C to access files and devices. The documentation is available from CANBERRA (usually comes with the Genie distribution).
# Fourth Root Calculator
Written by:
PK
On this page is a fourth root calculator. Enter your radicand x (the base of the fourth root ∜x) and we will compute the fourth root.
Also try our other root calculators:
## What is the Fourth Root of a Number?
In mathematics, the fourth root of a number is the number that, used as a factor four times, equals the radicand, or base of the radical.
\sqrt[4]{x} = \text{fourth root} \quad\text{(or)}\quad (\text{fourth root})\times(\text{fourth root})\times(\text{fourth root})\times(\text{fourth root}) = x
In the above equations:
• 4 = the index, denoting 4th root or fourth root
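For example, the fourth root of 16 is 2, since $2\times2\times2\times2=16$. Equivalently, $\sqrt[4]{x}=\sqrt{\sqrt{x}}$, so you can compute a fourth root by taking a square root twice.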
# Tag Info
3
Soft cores are standard logic modules, written in Verilog or VHDL. They are called 'soft' because they are implemented in the re-programmable logic of the FPGA. You can edit and modify a soft module to tailor it to your needs. If you decide to change the module later, you can just re-program it, and the gates will be re-arranged according to your changes. ...
3
Each input requires its own process. Create two "toggle" FFs, and then XOR their outputs together. Toggle the "set" FF when the output is zero, and toggle the "reset" FF when the output is one.
module dual_edge_ff (
    input set,
    input reset,
    output q
);
reg set_ff;
reg reset_ff;
assign q = set_ff ^ reset_ff;
always @(negedge set)
    if (!q) set_ff ...
2
Use the solution given in Verilog code with two falling edges, followed by another DFF. Adjust the edge directions as needed. Putting it all together, you get:
module saw_falling_edge (
    input s1,
    input clock,
    output reg out
);
reg set_ff;
reg reset_ff;
wire q = set_ff ^ reset_ff;
always @(negedge s1)
    if (!q) set_ff <= !set_ff;
always @(...
2
That's easy — put the multiplexer outside the module:
assign inp = sel ? inp2 : inp1;
moduleex s1 (inp, out);
You can even do it all in one line if you're so inclined, but this tends to be less readable:
moduleex s1 (sel ? inp2 : inp1, out);
2
Here's a link in more general terms in which this question has already been answered Kit vs Device: No one on this forum can really give you legal advice, but from what I've read and my current understanding, if your device is sold as a kit, it does not have to be FCC certified. The FCC also has listed differences between intentional emitters and non-...
2
You shouldn't need to include the file at all. Instead, simply ensure all files are in your project ready to be compiled by your EDA tool. Most (all?) EDA tools will happily compile Verilog without header files. By the looks of it, you have both files being compiled by the EDA tool. First it elaborates EightBitAdder.v. You have an include statement that ...
1
The problem is a ground loop, or possibly a short circuit caused by the common connection of the audio signal. To fix it, use an audio isolating transformer to carry the music signal between the MP3 player and the amplifier board.
1
It is clear that the only connection to the left side of "R050" is via the plating in the large hole in the corner of the board. Therefore, this hole should not be used for mechanical support at all. Screw threads could easily damage the plating and break the circuit. Furthermore, different mounting holes are connected to different circuit nodes, so all ...
1
One side of the resistor is connected to the mounting pad. Bridging the gap electrically would not be a problem as they are already connected electrically. Bridging to the pin above the mounting hole may be a problem as the pin is not connected to the resistor on this side of the board. Mechanically though this will eventually become a problem, cracking ...
1
You can use the ternary operator:
wire out_s1;
wire out_s2;
wire out;
moduleex s1( .in(inp1), .out(out_s1));
moduleex s2( .in(inp2), .out(out_s2));
assign out = (sel == 0) ? out_s1 : out_s2;
The last line means IF sel IS 0 THEN use out_s1 ELSE use out_s2.
1
You don't want to have two different always blocks controlling one signal. Here's my version to avoid that:
always @(negedge s1 or posedge clk) begin
    if (clk)
        saw_a_falling_edge <= 1'b0;
    else
        saw_a_falling_edge <= 1'b1;
end
always @(posedge clk) begin
    out <= saw_a_falling_edge;
end
This is inferring a DFF with ...
1
Your first always block is a sequential circuit, so never forget to use non-blocking assignments. Your second always block is a combinational circuit, and you must declare all the input signals in the sensitivity list, or you can just use always @(*). (Lines 16, 18, 21 changed.)
module SeqDect(rst,clk,ip,op);
/*io and internal wires*/
always @(posedge ...
1
At the falling edge of clk, out is reset to 0. At the falling edge of R3, out is set to 1. How can I implement this logic in verilog? First, you should consider whether your logic is realizable in the technology you're using. Since you are using Quartus, I'll assume you're targeting an FPGA or CPLD technology. And the logic you're asking for is not a well ...
1
In general, when you need "multiplexer", think "case statement". You already have a case statement — so do your output assignment there, too. You'll need to create a separate bus for the output of each of your IP cores — result_add, result_mult, etc.
1
Yes, you are correct. Resistance goes low and that is not good for your devices, as the maximal current going through the pull-down transistors will increase severalfold. But it also depends on your wiring configuration, wire lengths and protocol bitrate. Typically you can have several pull-up resistors closer to the I2C devices. Please read this ...
1
If all of your modules have pullup resistors then the effective pullup resistance will be the equivalent parallel value of those resistors. This value may be too low to allow your devices to work together. If you want a more specific answer then you need to provide links to the actual manufacturer's datasheets (not the ebay vendor page) for all of the ...
# Generalize TikZ Fraction Diagrams to any n-sided Polygon
I would like to extend Mark Wibrow's answer in this question to general polygons. The user would be able to type something like
\begin{tikzpicture}
\end{tikzpicture}
to get an octagon with segments drawn as radii, or
\begin{tikzpicture}
\pic {fraction={style=5-gon, segment=apothem, color=gray, fraction={12/5}}};
\end{tikzpicture}
to get a pentagon with segments drawn as apothems.
In Mark Wibrow's answer he has described how to create styles for circles, triangles, and flower petals. I would like to be able to generate the following types of fraction diagrams for any n-gon:
(Left: segments drawn as radii. Right: segments drawn as apothems)
I have very little experience with TikZ - I created the above graphics using Microsoft Paint. Thank you in advance for your help!
Just use a regular polygon node and add a path picture. There is an option for nontrivial greatest common divisors and for the case in which the fraction equals 1/2.
\documentclass[tikz,border=3mm]{standalone}
\usetikzlibrary{shapes.geometric}
\newif\ifgcd
\begin{document}
\begin{tikzpicture}[ngon fraction/.style args={#1/#2}{regular polygon,
minimum size=\pgfkeysvalueof{/tikz/ngon size},
regular polygon sides=#2,draw,path picture={\ifodd#2
\pgfmathsetmacro{\mystartangle}{90-360/#2}
\else
\pgfmathsetmacro{\mystartangle}{0}
\fi
\pgfmathtruncatemacro{\itest}{ifthenelse(#2/#1==2,1,0)}
\ifnum\itest=1
\foreach \X in {0,2,...,#2}
{\draw[fill=gray!20] (\mystartangle+\X*360/#2:\pgfkeysvalueof{/tikz/ngon size})
-- (0,0) -- (\mystartangle+\X*360/#2+360/#2:\pgfkeysvalueof{/tikz/ngon size});}
\else
\fill[gray!20] (0,0) -- (\mystartangle:\pgfkeysvalueof{/tikz/ngon size}) arc[start angle=\mystartangle,end angle=\mystartangle+#1*360/#2,radius=\pgfkeysvalueof{/tikz/ngon size}];
\ifgcd
\pgfmathtruncatemacro{\mygcd}{gcd(#1,#2)}
\pgfmathtruncatemacro{\myupper}{#2/\mygcd}
\foreach \X in {1,...,\myupper}
{\draw (0,0) -- (\mystartangle+\mygcd*\X*360/#2:\pgfkeysvalueof{/tikz/ngon size});}
\else
\foreach \X in {1,...,#2}
{\draw (0,0) -- (\mystartangle+\X*360/#2:\pgfkeysvalueof{/tikz/ngon size});}
\fi
\fi
}},gcd/.is if=gcd,apothem/.style={shape border rotate=180/#1},
ngon size/.initial=2cm
]
\path (0,0) node[ngon fraction=1/4,]{}
(3,0) node[ngon fraction=1/4,apothem=4]{}
(0,-3) node[ngon fraction=4/5,rotate=108]{}
(3,-3) node[ngon fraction=4/5,rotate=108,apothem=5]{}
(0,-6) node[ngon fraction=4/6,rotate=150]{}
(3,-6) node[gcd,ngon fraction=4/6,rotate=150,apothem=6]{}
(0,-9) node[ngon fraction=4/8,shape border rotate=360/16,rotate=360/16]{}
(3,-9) node[ngon fraction=4/8]{}
;
\end{tikzpicture}
\end{document}
• Hi Schrödinger's cat, I love your solution! Thank you so much! Would you recommend I put the ngon fraction/.style=... and everything up to \path in the preamble using \tikzset{}? Also, is it correct that I should use \pic{circle fraction={1/4}}; to insert circle fractions, and use \node[ngon fraction=1/4]{}; to insert ngon fractions? I don't think \node works for circle fractions, does it? Thanks! – Mathemanic Jan 15 at 7:20
• @Mathemanic No, these are slightly different approaches. Mark uses pics, which I really love and use a lot, but for the regular polygons it so happens that the regular polygons already exist as nodes, and they come with the shape border rotate key, which helps a lot. So, yes, for these you need to use node. – Schrödinger's cat Jan 15 at 7:25
• @Mathemanic You could define a pic via \tikzset{pics/ngon fraction/.style args={#1/#2}{code={\coordinate[ngon fraction=#1/#2];}}} and then use \path (6,0) pic{ngon fraction=1/4};, say. Whether it is better to use \tikzset or the preamble of a tikzpicture depends on your use case. If you use them in several pictures, use \tikzset. The problem with that is if someone else is defining a style of the same name, and you copy their code to your document, then you may overwrite the definitions. – Schrödinger's cat Jan 15 at 7:27
• Thank you for the extremely helpful answer. You went above and beyond. I sincerely appreciate it! – Mathemanic Jan 15 at 7:39
• @Mathemanic You're welcome! – Schrödinger's cat Jan 15 at 7:40
Schrödinger's cat's answer is perfect. I give only a way with tkz-euclide to avoid complications. It's a test to see if it's possible... There is some work left to complete the link between the fraction and the polygon. The solution with the apothem can be made in the same way.
\documentclass[]{article}
\usepackage{tkz-euclide}
\parindent=0pt
\begin{document}
\foreach \i in {3,...,7}
{ \begin{tikzpicture}
\tkzDefPoints{0/0/P0,2/0/P1}
\tkzDefRegPolygon[center,sides=\i](P0,P1)
\tkzDrawPolygon(P1,P...,P\i)
\tkzFillPolygon[gray!20](P0,P...,P\i)
\foreach \j in {1,...,\i} {\tkzDrawSegment[black](P0,P\j)}
\end{tikzpicture}\\}
\end{document}
Now with this :
\documentclass[]{article}
\usepackage{tkz-euclide}
\parindent=0pt
\begin{document}
\foreach \i in {3,...,7}
{ \begin{tikzpicture}
\tkzDefPoints{0/0/P0,0/0/Q0,2/0/P1}
\tkzDefMidPoint(P0,P1) \tkzGetPoint{Q1}
\tkzDefRegPolygon[center,sides=\i](P0,P1)
\tkzDefMidPoint(P1,P2) \tkzGetPoint{Q1}
\tkzDefRegPolygon[center,sides=\i,name=Q](P0,Q1)
\tkzDrawPolygon(P1,P...,P\i)
\tkzFillPolygon[gray!20](Q0,Q1,P2,Q2)
\foreach \j in {1,...,\i} {\tkzDrawSegment[black](P0,Q\j)}
\end{tikzpicture}\\}
\end{document}
• Thanks, Alain! Is there a way to put this into the preamble, and turn it into a macro, so you can type something short like \begin{tikzpicture}\pic{fraction={style=5-gon, segment=apothem, fraction={12/5}}};\end{tikzpicture}? – Mathemanic Jan 16 at 18:15
• @Mathemanic Yes, this is possible but I have no time to do it. I am working on my latest version of tkz-euclide. – Alain Matthes Jan 18 at 5:54
Transfer functions for flow predictions in wall-bounded turbulence. (English) Zbl 1415.76361
Summary: Three methods are evaluated to estimate the streamwise velocity fluctuations of a zero-pressure-gradient turbulent boundary layer of momentum-thickness-based Reynolds number up to $$Re_\theta\simeq 8200$$, using as input velocity fluctuations at different wall-normal positions. A system identification approach is considered where large-eddy simulation data are used to build single and multiple-input linear and nonlinear transfer functions. Such transfer functions are then treated as convolution kernels and may be used as models for the prediction of the fluctuations. Good agreement between predicted and reference data is observed when the streamwise velocity in the near-wall region is estimated from fluctuations in the outer region. Both the unsteady behaviour of the fluctuations and the spectral content of the data are properly predicted. It is shown that approximately 45% of the energy in the near-wall peak is linearly correlated with the outer-layer structures, for the reference case $$Re_\theta =4430$$. These identified transfer functions allow insight into the causality between the different wall-normal locations in a turbulent boundary layer along with an estimation of the tilting angle of the large-scale structures. Differences in accuracy of the methods (single- and multiple-input linear and nonlinear) are assessed by evaluating the coherence of the structures between wall-normally separated positions. It is shown that the large-scale fluctuations are coherent between the outer and inner layers, by means of an interaction which strengthens with increasing Reynolds number, whereas the finer-scale fluctuations are only coherent within the near-wall region. This enables the possibility of considering the wall-shear stress as an input measurement, which would more easily allow the implementation of these methods in experimental applications. A parametric study was also performed by evaluating the effect of the Reynolds number, wall-normal positions and input quantities considered in the model. Since the methods vary in terms of their complexity for implementation, computational expense and accuracy, the technique of choice will depend on the application under consideration. We also assessed the possibility of designing and testing the models at different Reynolds numbers, where it is shown that the prediction of the near-wall peak from wall-shear-stress measurements is practically unaffected even for a one order of magnitude change in the corresponding Reynolds number of the design and test, indicating that the interaction between the near-wall peak fluctuations and the wall is approximately Reynolds-number independent. Furthermore, given the performance of such methods in the prediction of flow features in turbulent boundary layers, they have a good potential for implementation in experiments and realistic flow control applications, where the prediction of the near-wall peak led to correlations above 0.80 when wall-shear stress was used in a multiple-input or nonlinear scheme. Errors of the order of 20% were also observed in the determination of the near-wall spectral peak, depending on the employed method.
##### MSC:
76F40 Turbulent boundary layers
76F55 Statistical turbulence modeling
##### Keywords:
turbulence modelling; turbulent boundary layers
## What is the architecture for game development called?
I would like to know what architecture game development uses as a whole. I need to create some documentation for my game project, and the teacher keeps giving me web-based architectures that are not the right ones.
My project is being made in GameMaker.
## Game Rooms Server Architecture – ENet CSharp
My team and I are working on an upcoming online fighting game using ENet-CSharp (a C# ENet implementation created by nxrighthere), and we’re currently designing the architecture of the server. We would be very glad to hear your suggestions concerning a couple of issues we have been struggling with.
Our current plan is to host a dedicated server for the game, which will handle the logic and game loop of each game instance, and the MonoGame clients will simply deliver the user’s commands to the server, and present the updated game state received from it. However, we are not sure how to correctly use ENet with our division of independent game rooms.
Let’s say I have 100 concurrent users connected to the server, and 10 independent game rooms, consisting of 10 players each. At first, we thought that we should have a single ENet Host which would handle all of them, and a single independent ENet thread that would simply receive packets from all ends, alerting the corresponding game room to handle them within its game logic. However, it seems a bit unsafe to have one I/O thread shared by multiple different and parallel instances of the game, so our plan is as follows:
• For each game room (10 players for that matter), a unique ENet Host will be created, and 2 independent threads will run – one for the game loop, and one for the ENet event polling.
• The ENet thread will call the Service method with a small timeout, expecting to poll one "Receive" event at each iteration, and will queue the commands received from the players.
• The game loop, at the start of each iteration, will dequeue the commands that have been collected since the last iteration, apply them within the game logic, and so on.
Would you guys say this is a good solution to go by?
A question that arose with that: correct me if I’m wrong, but as far as I understand, the send rate of the outgoing packets corresponds to the rate of calls to either the "Service" or the "Flush" methods. How can I ensure that at the end of each game loop, the new game state will be broadcast immediately to the clients? Calling the "Flush" method at the end of each iteration seems logically appropriate, but unsafe at the same time (since it will be called outside the ENet dedicated thread).
Any piece of advice would be more than welcome. Thanks in advance!
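In case it helps future readers, here is a rough sketch of the per-room receive thread and game tick described above, written against native ENet's C API (which ENet-CSharp mirrors closely); the Command queue, the names, and the 5 ms timeout are my own illustrative choices, not a tested design:
#include <enet/enet.h>
#include <cstdint>
#include <mutex>
#include <queue>
#include <utility>
#include <vector>

struct Command { ENetPeer* from; std::vector<uint8_t> payload; };

std::mutex queueMutex;
std::queue<Command> pending;  // commands received since the last tick
volatile bool running = true;

void netThread(ENetHost* host) {  // one per game room
    ENetEvent ev;
    while (running) {
        // service with a small timeout; queue every Receive event
        while (enet_host_service(host, &ev, 5) > 0) {
            if (ev.type == ENET_EVENT_TYPE_RECEIVE) {
                Command cmd;
                cmd.from = ev.peer;
                cmd.payload.assign(ev.packet->data,
                                   ev.packet->data + ev.packet->dataLength);
                std::lock_guard<std::mutex> lock(queueMutex);
                pending.push(std::move(cmd));
                enet_packet_destroy(ev.packet);  // done with the packet
            }
        }
    }
}

void gameTick() {  // runs in the room's game-loop thread
    std::queue<Command> batch;
    {
        std::lock_guard<std::mutex> lock(queueMutex);
        std::swap(batch, pending);  // grab everything since the last tick
    }
    // ...apply batch to the game state, then hand the new state
    // back to the network thread for broadcasting...
}
On the Flush question: an ENet host is not thread-safe, so calling Flush (or Service) from the game loop while the dedicated thread is inside Service is risky. A safer pattern is to queue outgoing state back to the network thread and let it do the broadcast and flush as part of its service loop, so packets go out once per service iteration, right after the tick hands the state over.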
## Problem solving an exercise in system architecture
So hey guys, hope you are doing well. I have a problem solving this exercise. I have tried a lot, but it seems my answers are always wrong. First, here is the problem:
In the case where these registers have the following values:
AX = 13C4; BX = 324F; CX = 2200; BP = 1500, DS = 3000; SS = 5000; SI = 1100; DI = 2000
Calculate the physical address of the memory where the operand is saved, as well as the content of the memory locations, in each of the following addressing modes:
a. MOV [2000], AX
b. MOV [SI], AL
c. MOV [BX], CX
d. MOV [DI + 25], AX
e. MOV [BP], BX
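For reference, one way to set up the calculation: in real mode the physical address is segment × 10h + offset, where the default segment is DS for ordinary data references and SS whenever BP is involved. So for (a), PA = 3000h × 10h + 2000h = 32000h, and that word receives AX = 13C4h; for (e), PA = 5000h × 10h + 1500h = 51500h, holding BX = 324Fh. The other cases follow the same pattern with SI, BX, and DI + 25.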
## Computer Architecture Problem
A computer system has a 64 KB main memory and 1 KB of cache memory; transfers between cache and main memory are in blocks of 16 × 8 bits. The cache uses 2 blocks per set (set associative), LRU replacement when deciding to change blocks, read-through for reads, write-allocate for writes, and simple write-back. offset = 4, number of sets = 32, tag = 7, index = 5. Q) In this system each of 20 elements is 8 bits wide. The arrays start at arr1 = 0000 (in hexadecimal) and arr2 = 0200 (in hexadecimal). Assume we wrote a MIPS program that reads and compares the elements of these two arrays and writes the larger one into an array starting at address arr3 = 0410 (in hexadecimal). Initially assume that the cache is empty. In which blocks and sets will the arrays be placed, and after comparing, in which sets and blocks will the results be placed? (I just need the calculations for hit, miss and read ratios.) If you can draw it, that would be great.
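For what it's worth, one way to start: with 16-byte blocks and 32 sets, the set index is (address ÷ 16) mod 32. arr1 at 0x0000 starts in block 0 (set 0), and its 20 bytes spill into block 1 (set 1); arr2 at 0x0200 starts in block 32, which maps to set 0 again (and block 33 to set 1), so arr1 and arr2 compete for the same two sets; arr3 at 0x0410 starts in block 65, which maps to set 1 (and block 66 to set 2).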
## Architecture of smartphone security: why FBI needed apple’s help?
I want to focus on technical aspects, not on the fact that they wanted to set a precedent.
I assume the smartphone security architecture is as follows:
1. Cryptography chip. It's read-only and stateless. It contains the physical cryptography key. It offers some transformations of user input. It doesn't expose the key. It doesn't remember the number of retries.
2. NAND disk. Contains the encrypted data.
3. OS. Gets input from the user, talks to the chip, changes the contents of the NAND.
4. Retries counter. No idea where it is. Is it stored on the NAND disk or in some other dedicated long-term memory?
From what I know, the FBI wanted Apple to make for them a less secure iOS version that doesn't erase the disk after a few failed retries. But why do they need it? Can't they just:
• make a copy of the NAND disk (in case it has some killswitch)
• get the chip's spec and just send it a few million decrypt requests (testing every possible user PIN/password)
• if the chip stores the retries counter in some dedicated memory, they can always plug in a tweaked memory that always replies with the same value when read
Why do they even need an OS? It's just a simple program that can communicate with a chip. What am I missing?
## Find the flaw in my architecture: Shamir’s Secret implementation for data encryption and recovery
This will be a long one.
Here’s the thing: I want to build a privacy-preserving system where the user data is not even accessible to the database administrator.
Intuitively, I immediately thought of simply using AES to encrypt user data with their own password and hashing their username so that an attacker with access to my database would need to brute-force the password for the encrypted data to get the info and then brute-force the username to maybe get an idea of who the decrypted data is about.
This would be great but leads to the problem of forgotten passwords. If one forgets their password they could reset it by providing the correct username or recovery email (also hashed), but they could not get their data back. ProtonMail, for instance, claims your data is safe even from them, but you cannot recover your emails if you forget your password.
I then started looking at secret sharing and came across Shamir’s secret. My question therefore is: Is the system I propose below worse than simply storing data in plaintext with obfuscated (hashed) usernames?
I understand that:
1. Security does not come with complexity
2. This system will not be entirely flawless
However, I just want to know if it is any better than a much simpler solution. Because as long as it is equally easy/hard for a hacker but harder for the database admin to gather any info from the data, it would be worth it for me.
It is “complex” because it is the only system my mind has currently come up with that allows for data encryption + somewhat simple recovery protecting data from hackers and admins. I would also happily take suggestions for other implementations.
So here we go.
The proposed system would use Shamir’s secret to encrypt the user data with k=6 and n=11 so that 6/11 parts are needed to decrypt the data. User information would then be given a “weight” and utilized to store a proportional number of parts in an encrypted manner. Something like this:
Weights
• Username: 2
• Password: 4
• Email: 2
• Security Question 1: 1
• Security Question 2: 1
• Name + Date of Birth: 1
Based on those weights, the following is done to the user’s private data (pseudocode):
`SHAMIR(user_data, k=6, n=11)`
This will produce something like a uint8 array with length=11. Let’s call that array `parts`.
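For concreteness, here is a toy split/combine sketch in C++ over a small prime field. The modulus, RNG, and single-integer secret are illustrative simplifications of my own; real Shamir implementations work byte-wise over GF(256) with a cryptographic RNG.
#include <cstdint>
#include <iostream>
#include <random>
#include <utility>
#include <vector>

const int64_t P = 7919; // toy prime modulus (illustrative, not secure)

int64_t modpow(int64_t b, int64_t e, int64_t m) {
    int64_t r = 1; b %= m;
    while (e > 0) { if (e & 1) r = r * b % m; b = b * b % m; e >>= 1; }
    return r;
}
int64_t inv(int64_t a) { return modpow(a % P, P - 2, P); } // Fermat inverse

// Split 'secret' into n shares; any k of them reconstruct it.
std::vector<std::pair<int64_t, int64_t>> split(int64_t secret, int k, int n) {
    std::mt19937_64 rng(std::random_device{}());
    std::vector<int64_t> coef{secret % P}; // constant term is the secret
    for (int i = 1; i < k; ++i) coef.push_back(rng() % P); // random polynomial
    std::vector<std::pair<int64_t, int64_t>> shares;
    for (int64_t x = 1; x <= n; ++x) {
        int64_t y = 0, xp = 1;
        for (int64_t c : coef) { y = (y + c * xp) % P; xp = xp * x % P; }
        shares.push_back({x, y}); // one share = (x, f(x))
    }
    return shares;
}

// Lagrange interpolation at x = 0 recovers the constant term (the secret).
int64_t combine(const std::vector<std::pair<int64_t, int64_t>>& sh) {
    int64_t s = 0;
    for (size_t i = 0; i < sh.size(); ++i) {
        int64_t num = 1, den = 1;
        for (size_t j = 0; j < sh.size(); ++j) {
            if (i == j) continue;
            num = num * ((P - sh[j].first) % P) % P; // factor (0 - x_j)
            den = den * (((sh[i].first - sh[j].first) % P + P) % P) % P;
        }
        s = (s + sh[i].second * num % P * inv(den)) % P;
    }
    return s;
}

int main() {
    auto shares = split(1234, 6, 11); // k = 6, n = 11, as in the scheme above
    shares.resize(6);                 // any 6 of the 11 shares suffice
    std::cout << combine(shares) << "\n"; // prints 1234
}
The sketch is only meant to show the shape of the scheme: split produces the length-11 parts array, and combine recovers the secret from any 6 of them.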
The database would then use symmetric encryption (let’s say AES) to store these parts as follows (only the resulting ciphertext is stored):
``
{
  username: AES(key=username, message=parts[0:2])
  password: AES(key=password, message=parts[2:6])
  email: AES(key=email, message=parts[6:8])
  seq1: AES(key=answer, message=parts[8:9])
  seq2: AES(key=answer, message=parts[9:10])
  id: AES(key=name+dob, message=parts[10:11])
}
``
Login would then happen with the traditional username+password or email+password, such that the user will be authenticated/logged in if the data is decrypted correctly. Both combinations give access to enough parts (6) to decrypt the data. From the user perspective, it’s all the same as everywhere else.
Then the user forgets their password. Well, now they need to find an alternative way to gather the 4 “points” provided by the password. So they would click “Forgot Password”, and a form would pop up with all the possible fields to fill in. They must then fill in enough to gather 4 more parts (in addition to username or email) in order to decrypt their data. For example:
username (2) + email (2) + seq1 (1) + namedob (1) = 6
(Email verification could also be implemented)
So now the user has 6/11. Server decrypts the data, user sets a new password, data is re-encrypted, and all the fields are updated with the new parts. By definition, a user who forgot their password will have accumulated a minimum of 10 out of 11 “points” after password reset is complete (The 6 points they provided + the 4 from the new password). Therefore, 1 point could be missing. Given that the user cannot provide that last point, they can be prompted to add a new security question, at which point all is back to normal.
So, in conclusion:
I know all parts of the secret being in the same place is not great, nor is it great to use AES with low-entropy secrets.
However, this should add some security, no? To get the data, an attacker would have to brute force at least a password and a username, or, to not brute-force the password, would have to brute-force quite a bit of other data. It isn’t perfect by any means, but it’s better for data privacy than the standard, no? What am I missing? Assuming it’s implemented perfectly and it works as intended, is it possibly worse than how companies treat our data today? For most, a database breach means the data is already out there, only the password has to be brute-forced, right?
Lastly, could these objectives be achieved in any other way?
That’s it. If you’ve read until now, thank you. Please go easy on me.
Cheers.
EDIT: I’m also thinking somewhat about UX here. The entropy of the data used to store the parts is definitely low, but giving users a higher-entropy “random recovery code” or something would be problematic from a UX perspective.
## Is there any connection between imperative programming and the Von Neumann architecture?
I have run into a wall with this question in the exercise my teacher gave me: is there any actual connection between the Von Neumann architecture and imperative programming?
I have tried googling and finding questions similar to this, but I couldn’t find anything, and the one question that I have found actually said that there shouldn’t be any connection between the Von Neumann architecture and programming paradigms.
Any help would be appreciated, I’m new to StackExchange, so if I’m breaking any rules please do tell me 🙂
## true/false in computer architecture
In general computer science, I know several ways of writing falsehood and veracity concisely in English texts, apart from the widely known false and true:
• 0, 1
• F, T
• N, Y
• ⊥, ⊤
• Lo, Hi
In German computer-science text, I saw variants
• F, W
• N, J
• O, L
Now the question: what do the hardware folks, i.e., researchers or practitioners doing computer architecture, predominantly use? I am interested in answers concerning English or German (or both). As you can imagine, googling for “true false hardware” in books has led me nowhere.
## Is there an abstract architecture equivalent to Von Neumann’s for Lambda expressions?
In other words, was a physical implementation modelling lambda calculus (so not built on top of a Von Neumann machine) ever devised? Even if just on paper?
If there was, what was it? Did we make use of its concepts somewhere practical (where it can be looked into and studied further)?
— I’m aware of specialised LISP machines. They were equipped with certain hardware components that made them better but eventually they were still the same at their core.
If there isn’t such thing, what stops it from being relevant or worth the effort? Is it just a silly thought to diverge so greatly from the current hardware and still manage to create a general-purpose computer?
## Machine has 64 bit architecture and two word long instruction
A machine has a 64-bit architecture, with 2-word-long instructions. It has 128 registers, each of which is 32 bits long. It needs to support 49 instructions, which have an immediate operand in addition to two register operands. Assuming that the immediate operand is a signed integer, what is the maximum value of the immediate operand that can be stored?
Edit: would a 1-word instruction in this case be 64 bits, and a 2-word instruction 128 bits? Also, do we have to add an extra bit when calculating the total bits required for 49 instructions?
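For what it's worth, the standard way to count the bits: a 2-word instruction on a 64-bit machine is 2 × 64 = 128 bits (so yes, 1 word = 64 bits here). 49 instructions need ceil(log2 49) = 6 opcode bits (no extra bit beyond the ceiling), and each of the two register operands needs ceil(log2 128) = 7 bits, leaving 128 − 6 − 7 − 7 = 108 bits for the immediate. The maximum value of a signed (two's-complement) 108-bit immediate is 2^107 − 1.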
## Periodic Boundary Conditions (QM)
AriAstronomer
Posts: 76
Joined: Thu May 12, 2011 4:53 pm
### Periodic Boundary Conditions (QM)
Hey guys,
So I got a bunch of flash cards from "Case Western Reserve" University, courtesy of a recommendation from someone on this forum ages ago, and one of the flash cards asks to write down the wave functions for a free particle with periodic boundary conditions. I've never heard of periodic boundary conditions. Is this something I should be aware of? I looked in Griffiths' index and online, and didn't really find a lot of info...
Any help would be appreciated.
Ari
bfollinprm
Posts: 1203
Joined: Sat Nov 07, 2009 11:44 am
### Re: Periodic Boundary Conditions (QM)
That's solid state physics. Could show up, but not likely to be vital. It's not really QM, at least in the sense of QM tested on the PGRE. You might find something in the E&M book...
for reference, the wave function (1D) in a periodic potential is given by $\Psi(x) = e^{ikx}U(x)$, where U(x) is a function with the same period as the potential, and $k = (2\pi/L)n$, which is a result of the boundary condition.
You might recognize bits of this from your studies of diffraction...
physicsworks
Posts: 80
Joined: Tue Oct 12, 2010 8:00 am
### Re: Periodic Boundary Conditions (QM)
bfollinprm wrote:for reference, the wave function (1D) in a periodic potential is given by $\Psi(x) = e^{ikx}U(x)$, where U(x) is a function with the same period as the potential, and $k = (2\pi/L)n$, which is a result of the boundary condition.
This is Bloch's theorem, not boundary conditions
AriAstronomer wrote:I've never heard of periodic boundary conditions
It's OK. They will not appear on the PGRE for at least 10-15 years. But you can read about them in Chapter 8, Ashcroft and Mermin "Solid state physics", if you want.
kangen558
Posts: 11
Joined: Fri Feb 22, 2008 2:41 am
### Re: Periodic Boundary Conditions (QM)
Sounds like a particle on a ring [periodic boundary conditions, no potential]:
http://physchem.ox.ac.uk/~hill/tutorial ... index.html
Bloch's theorem does still apply, but with U(x)=1.
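For reference, imposing $\psi(x+L)=\psi(x)$ on the free particle gives the normalized wave functions $\psi_n(x) = \frac{1}{\sqrt{L}}e^{ik_n x}$ with $k_n = \frac{2\pi n}{L}$, $n = 0, \pm 1, \pm 2, \dots$, and energies $E_n = \frac{\hbar^2 k_n^2}{2m}$, which is presumably what the flash card is after.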
physicsworks
Posts: 80
Joined: Tue Oct 12, 2010 8:00 am
### Re: Periodic Boundary Conditions (QM)
kangen558 wrote:Bloch's theorem does still apply, but with U(x)=1.
no. Boundary conditions are far stronger statements than Bloch's theorem with $U(x) \equiv 1$.
kangen558
Posts: 11
Joined: Fri Feb 22, 2008 2:41 am
### Re: Periodic Boundary Conditions (QM)
physicsworks wrote:
kangen558 wrote:Bloch's theorem does still apply, but with U(x)=1.
no. Boundary conditions are far more strong statements, than Bloch's theorem with $U(x) \equiv 1$.
Perhaps I've misunderstood. I was only commenting that the eigenstates for the particle on a ring satisfy Bloch's Theorem with U(x)=constant. The periodic BCs will quantize the momentum. Am I missing something?
Hausdorff
Posts: 21
Joined: Sun Nov 28, 2010 3:40 am
### Re: Periodic Boundary Conditions (QM)
For periodic boundary conditions:
For example, if your region is between x=0 and x=L, then for any point past L, say L+n,
f(L+n) = f(n)
For Bloch's theorem:
the function gets multiplied by exp(ikL) after moving by L (assuming L is the period of the potential):
f(L+n) = f(n)exp(ikL)
So they are not the same. Make sure that you have a periodic potential, not just a periodic boundary, before using Bloch's theorem.
bfollinprm
Posts: 1203
Joined: Sat Nov 07, 2009 11:44 am
### Re: Periodic Boundary Conditions (QM)
lol. I think all this confusion is a pretty good indicator of how important this topic is for the PGRE (not very).
AriAstronomer
Posts: 76
Joined: Thu May 12, 2011 4:53 pm
### Re: Periodic Boundary Conditions (QM)
Haha perfect. That was the answer I wanted to hear. |
# 9.7 Probability (Page 5/18)
A child randomly selects 3 gumballs from a container holding 4 purple gumballs, 8 yellow gumballs, and 2 green gumballs.
1. Find the probability that all 3 gumballs selected are purple.
2. Find the probability that no yellow gumballs are selected.
3. Find the probability that at least 1 yellow gumball is selected.
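One way to work this exercise, using the combinations idea from the Key Concepts below: there are $4+8+2=14$ gumballs, so there are $C\left(14,3\right)=364$ equally likely selections. Then $P\left(\text{all purple}\right)=\frac{C\left(4,3\right)}{364}=\frac{1}{91}$, $P\left(\text{no yellow}\right)=\frac{C\left(6,3\right)}{364}=\frac{20}{364}=\frac{5}{91}$, and $P\left(\text{at least 1 yellow}\right)=1-\frac{5}{91}=\frac{86}{91}$.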
Access these online resources for additional instruction and practice with probability.
Visit this website for additional practice questions from Learningpod.
## Key equations
• Probability of an event with equally likely outcomes: $P\left(E\right)=\frac{n\left(E\right)}{n\left(S\right)}$
• Probability of the union of two events: $P\left(E\cup F\right)=P\left(E\right)+P\left(F\right)-P\left(E\cap F\right)$
• Probability of the union of mutually exclusive events: $P\left(E\cup F\right)=P\left(E\right)+P\left(F\right)$
• Probability of the complement of an event: $P\left(E'\right)=1-P\left(E\right)$
## Key concepts
• Probability is always a number between 0 and 1, where 0 means an event is impossible and 1 means an event is certain.
• The probabilities in a probability model must sum to 1. See [link] .
• When the outcomes of an experiment are all equally likely, we can find the probability of an event by dividing the number of outcomes in the event by the total number of outcomes in the sample space for the experiment. See [link] .
• To find the probability of the union of two events, we add the probabilities of the two events and subtract the probability that both events occur simultaneously. See [link] .
• To find the probability of the union of two mutually exclusive events, we add the probabilities of each of the events. See [link] .
• The probability of the complement of an event is the difference between 1 and the probability that the event occurs. See [link] .
• In some probability problems, we need to use permutations and combinations to find the number of elements in events and sample spaces. See [link] .
## Verbal
What term is used to express the likelihood of an event occurring? Are there restrictions on its values? If so, what are they? If not, explain.
probability; The probability of an event is restricted to values between $0$ and $1$, inclusive of $0$ and $1$.
What is a sample space?
What is an experiment?
An experiment is an activity with an observable result.
What is the difference between events and outcomes? Give an example of both using the sample space of tossing a coin 50 times.
The union of two sets is defined as a set of elements that are present in at least one of the sets. How is this similar to the definition used for the union of two events from a probability model? How is it different?
The probability of the union of two events occurring is a number that describes the likelihood that at least one of the events from a probability model occurs. In both a union of sets and a union of events the union includes either or both. The difference is that a union of sets results in another set, while the union of events is a probability, so it is always a numerical value between $0$ and $1$.
## Numeric
For the following exercises, use the spinner shown in [link] to find the probabilities indicated.
Landing on red
# What restriction does BRST symmetry put on the Hamiltonian of a (lie group) gauge theory?
+ 5 like - 0 dislike
208 views
As far as I know, the BRST symmetry is an infinitesimal (and expanded) version of gauge symmetry. Recently I read the following: "when QFT was reformulated in fiber bundle language for application to problems in the topology of low-dimensional manifolds, did it become apparent that the BRST 'transformation' is fundamentally geometric". I am aware of how ghosts are the Maurer-Cartan form on the (infinite-dimensional) group of gauge transformations of one's principal bundle. Now the above quote continues: "The relationship between gauge invariance and "BRST invariance" forces the choice of a Hamiltonian system whose states are composed of "particles" according to the rules familiar from the canonical quantization formalism. This esoteric consistency condition therefore comes quite close to explaining how quanta and fermions arise in physics to begin with."
Does anyone know what this second half of the quote is talking about? E.g. what "relationship", what "esoteric consistency condition", and which special "form of Hamiltonian" is forced on us, which (presumably upon quantization) gives rise to particles? If the whole thing makes sense, does anyone know any references on this matter? (Preferably original sources.)
# consistently increased line spacing when subscripts occur in paragraph
The question is about consistent vertical spacing in a paragraph of text that includes some math using subscripts.
Concretely, in the example I feel that there is not enough vertical space between line 1 and line 2, due to the fact that there is a subscript in the math on line 1 and some math as well on line 2.
In general, I would not want to fix this locally, but rather have a bit more vertical space consistently throughout the paragraph (whenever the situation occurs at least once in the pargraph).
Since quite a lot of my paragraphs contain such math, it would be useful if I did not have to request this for every single paragraph, but could set it as a default.
\documentclass[draft,11pt]{article}
\sloppy
\usepackage{amssymb}
\begin{document}
\newcommand{\diam}[1]{\ensuremath{\langle #1 \rangle}}
\newcommand{\model}{\mathbb{M}}
\newcommand{\N}{\ensuremath{\mathsf{N}}}
Assume $\model|_{\N(w_0)\cap\ldots\cap\N(w_n)},w\models \diam{a}\psi$.
It follows that there is a state $v\in \N(w_0)\cap\ldots\cap\N(w_n)$
with $w R_a v$ (1) and $\model|_{\N(w_0)\cap\ldots\cap\N(w_n)},v\models \psi$ (2). Hence by IH $\model| _{\N(w_0)\cap\ldots\cap\N(w_n)},v\models \psi$ (3). From (1) and the fact that $\model'$ is an $A$-generated
submodel of $\model$ we have $w R'_a v$, hence by (3) $\model'| _{\N(w_0) \cap\ldots\cap\N(w_n)},w\models \diam{a}\psi$. The other direction is
trivial.
\end{document}
• You could change \linespread, but if you do so, I would advise you to do it (for consistency's sake) for all your document, in which case the setspace package will be useful. – Gonzalo Medina Jan 23 '14 at 17:20
• But in all other cases I would not want the linespread to be bigger than what it is by default. So maybe I should accept to have lines a bit too close whenever I have subscripts. – sunless Jan 23 '14 at 17:27
• alternative is to change fontdimen so the subscripts are not lowered so much. tex.stackexchange.com/questions/88991/… – David Carlisle Jan 23 '14 at 17:35
In your example the baselines are at constant distance, as testified by this picture, where the horizontal rules are drawn independently of the text at \baselineskip distance from each other:
If your text has many subscripts, it can be a good idea to increase the leading; this is not bad per se, it is only if the increase is too big. Here are examples of the same text at normal leading, after a 5% increase and after a 10% increase. You can experiment with something in between; setting text at 11/15 (that is with \linespread{1.1}) is not a serious sin: typographic decisions depend on the nature of the text. Of course, the leading should be the same across the document, so \linespread{...} (without \selectfont) should go in the preamble.
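For example, a minimal preamble change (a sketch; 1.05 is the 5% increase shown above):

\documentclass[draft,11pt]{article}
\linespread{1.05}% 5% extra leading for the whole document
\usepackage{amssymb}
\begin{document}
% paragraphs with subscript-heavy math ...
\end{document}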
Here's the code for producing the images
\documentclass[draft,11pt]{article}
\usepackage{amssymb}
\newcommand{\diam}[1]{\ensuremath{\langle #1 \rangle}}
\newcommand{\model}{\mathbb{M}}
\newcommand{\N}{\ensuremath{\mathsf{N}}}
\newcommand{\ruler}{%
\leavevmode
\llap{%
\smash{%
\raise\baselineskip\vtop to 5\baselineskip{%
\vrule height\baselineskip width 0.1pt
\vrule width \textwidth height 0.1pt
\vfill
}%
}%
\kern\parindent
}%
}
\begin{document}
\newcommand{\sampletext}{%
Assume $\model|_{\N(w_0)\cap\ldots\cap\N(w_n)},w\models \diam{a}\psi$.
It follows that there is a state $v\in \N(w_0)\cap\ldots\cap\N(w_n)$
with $w R_a v$ (1) and $\model|_{\N(w_0)\cap\ldots\cap\N(w_n)},v\models \psi$ (2). Hence by IH $\model| _{\N(w_0)\cap\ldots\cap\N(w_n)},v\models \psi$ (3). From (1) and the fact that $\model'$ is an $A$-generated
submodel of $\model$ we have $w R'_a v$, hence by (3) $\model'| _{\N(w_0) \cap\ldots\cap\N(w_n)},w\models \diam{a}\psi$. The other direction is
trivial.}
\noindent\textsf{The text has equally spaced baselines}\par
\medskip
\ruler\sampletext
\medskip
\noindent\textsf{The same text without a ruler}\par
\medskip
\sampletext
\medskip
\noindent\textsf{With slightly taller baseline skip, increased 5\%}\par
\medskip |
## Partial pressures
$PV=nRT$
Kevin Hernandez 3A
Posts: 21
Joined: Fri Sep 29, 2017 7:06 am
### Partial pressures
The reaction 2 SO2 (g) + O2 (g) ⇌ 2 SO3 (g) occurs in a 1.00 L flask at 312 K and at equilibrium the concentrations are 0.075 mol.L-1 SO2 (g), 0.537 mol.L-1 O2 (g), and 0.925 mol.L-1 SO3 (g). Calculate their respective partial pressures at 312 K using R = 8.206 × 10-2 L.atm.K-1.mol-1.
Andrea Grigsby 1I
Posts: 60
Joined: Fri Sep 29, 2017 7:03 am
### Re: Partial pressures
use the equation PV=nRT and substitute the values in
Posts: 20
Joined: Fri Sep 29, 2017 7:06 am
Been upvoted: 1 time
### Re: Partial pressures
Solve for P using P= (n/v)(R)(T). They already give us the concentrations, which is mol.L, or n/v. Substitute the different concentrations into the equation and you get the respective pressures for each gas.
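For instance, a quick arithmetic check (a sketch in Python; the numbers are from the problem statement):

```python
R = 0.08206  # L*atm/(K*mol)
T = 312      # K

conc = {"SO2": 0.075, "O2": 0.537, "SO3": 0.925}  # mol/L, i.e. n/V

for gas, c in conc.items():
    print(gas, round(c * R * T, 2), "atm")  # P = (n/V)RT
# SO2 1.92 atm, O2 13.75 atm, SO3 23.68 atm
```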
Guangyu Li 2J
Posts: 50
Joined: Fri Sep 29, 2017 7:07 am
Been upvoted: 1 time
### Re: Partial pressures
Dalton's Law of Partial Pressures applies specifically to ideal gases.
First of all, the gases this law applies to must be ideal gases.
According to this law, if the gases in the container don't react with each other, each gas is distributed evenly throughout the container, and the pressure each gas exerts is the same as if it occupied the container alone.
Kailie_Giebink_1E
Posts: 26
Joined: Fri Apr 06, 2018 11:02 am
### Re: Partial pressures
Yes, you use PV = nRT.
# Search by Topic
#### Resources tagged with Working systematically similar to Flora the Florist:
Filter by: Content type:
Age range:
Challenge level:
### There are 126 results
Broad Topics > Using, Applying and Reasoning about Mathematics > Working systematically
##### Age 11 to 14 Challenge Level:
If you take a three by three square on a 1-10 addition square and multiply the diagonally opposite numbers together, what is the difference between these products. Why?
### Football Sum
##### Age 11 to 14 Challenge Level:
Find the values of the nine letters in the sum: FOOT + BALL = GAME
### Number Daisy
##### Age 11 to 14 Challenge Level:
Can you find six numbers to go in the Daisy from which you can make all the numbers from 1 to a number bigger than 25?
### Cayley
##### Age 11 to 14 Challenge Level:
The letters in the following addition sum represent the digits 1 ... 9. If A=3 and D=2, what number is represented by "CAYLEY"?
### How Old Are the Children?
##### Age 11 to 14 Challenge Level:
A student in a maths class was trying to get some information from her teacher. She was given some clues and then the teacher ended by saying, "Well, how old are they?"
##### Age 11 to 14 Challenge Level:
A few extra challenges set by some young NRICH members.
### Tea Cups
##### Age 7 to 14 Challenge Level:
Place the 16 different combinations of cup/saucer in this 4 by 4 arrangement so that no row or column contains more than one cup or saucer of the same colour.
### Crossing the Bridge
##### Age 11 to 14 Challenge Level:
Four friends must cross a bridge. How can they all cross it in just 17 minutes?
### Cinema Problem
##### Age 11 to 14 Challenge Level:
A cinema has 100 seats. Show how it is possible to sell exactly 100 tickets and take exactly £100 if the prices are £10 for adults, 50p for pensioners and 10p for children.
### Multiples Sudoku
##### Age 11 to 14 Challenge Level:
Each clue in this Sudoku is the product of the two numbers in adjacent cells.
##### Age 11 to 14 Challenge Level:
Rather than using the numbers 1-9, this sudoku uses the nine different letters used to make the words "Advent Calendar".
### Ones Only
##### Age 11 to 14 Challenge Level:
Find the smallest whole number which, when mutiplied by 7, gives a product consisting entirely of ones.
### Pole Star Sudoku 2
##### Age 11 to 16 Challenge Level:
This Sudoku, based on differences. Using the one clue number can you find the solution?
##### Age 11 to 16 Challenge Level:
The items in the shopping basket add and multiply to give the same amount. What could their prices be?
### LCM Sudoku II
##### Age 11 to 18 Challenge Level:
You are given the Lowest Common Multiples of sets of digits. Find the digits and then solve the Sudoku.
### Star Product Sudoku
##### Age 11 to 16 Challenge Level:
The puzzle can be solved by finding the values of the unknown digits (all indicated by asterisks) in the squares of the $9\times9$ grid.
### Colour Islands Sudoku
##### Age 11 to 14 Challenge Level:
An extra constraint means this Sudoku requires you to think in diagonals as well as horizontal and vertical lines and boxes of nine.
### Consecutive Numbers
##### Age 7 to 14 Challenge Level:
An investigation involving adding and subtracting sets of consecutive numbers. Lots to find out, lots to explore.
### Problem Solving, Using and Applying and Functional Mathematics
##### Age 5 to 18 Challenge Level:
Problem solving is at the heart of the NRICH site. All the problems give learners opportunities to learn, develop or use mathematical concepts and skills. Read here for more information.
### Integrated Sums Sudoku
##### Age 11 to 16 Challenge Level:
The puzzle can be solved with the help of small clue-numbers which are either placed on the border lines between selected pairs of neighbouring squares of the grid or placed after slash marks on. . . .
### Weights
##### Age 11 to 14 Challenge Level:
Different combinations of the weights available allow you to make different totals. Which totals can you make?
### Teddy Town
##### Age 5 to 14 Challenge Level:
There are nine teddies in Teddy Town - three red, three blue and three yellow. There are also nine houses, three of each colour. Can you put them on the map of Teddy Town according to the rules?
### Summing Consecutive Numbers
##### Age 11 to 14 Challenge Level:
Many numbers can be expressed as the sum of two or more consecutive integers. For example, 15=7+8 and 10=1+2+3+4. Can you say which numbers can be expressed in this way?
### A First Product Sudoku
##### Age 11 to 14 Challenge Level:
Given the products of adjacent cells, can you complete this Sudoku?
##### Age 11 to 14 Challenge Level:
You need to find the values of the stars before you can apply normal Sudoku rules.
### Latin Squares
##### Age 11 to 18
A Latin square of order n is an array of n symbols in which each symbol occurs exactly once in each row and exactly once in each column.
### Twin Line-swapping Sudoku
##### Age 14 to 16 Challenge Level:
A pair of Sudoku puzzles that together lead to a complete solution.
### Difference Sudoku
##### Age 14 to 16 Challenge Level:
Use the differences to find the solution to this Sudoku.
##### Age 11 to 16 Challenge Level:
Four small numbers give the clue to the contents of the four surrounding cells.
### Making Maths: Double-sided Magic Square
##### Age 7 to 14 Challenge Level:
Make your own double-sided magic square. But can you complete both sides once you've made the pieces?
### More on Mazes
##### Age 7 to 14
There is a long tradition of creating mazes throughout history and across the world. This article gives details of mazes you can visit and those that you can tackle on paper.
### Counting on Letters
##### Age 11 to 14 Challenge Level:
The letters of the word ABACUS have been arranged in the shape of a triangle. How many different ways can you find to read the word ABACUS from this triangular pattern?
### Diagonal Product Sudoku
##### Age 11 to 16 Challenge Level:
Given the products of diagonally opposite cells - can you complete this Sudoku?
### Wallpaper Sudoku
##### Age 11 to 16 Challenge Level:
A Sudoku that uses transformations as supporting clues.
### Peaches Today, Peaches Tomorrow....
##### Age 11 to 14 Challenge Level:
Whenever a monkey has peaches, he always keeps a fraction of them each day, gives the rest away, and then eats one. How long could he make his peaches last for?
### Oranges and Lemons, Say the Bells of St Clement's
##### Age 11 to 14 Challenge Level:
Bellringers have a special way to write down the patterns they ring. Learn about these patterns and draw some of your own.
### Bochap Sudoku
##### Age 11 to 16 Challenge Level:
This Sudoku combines all four arithmetic operations.
### More Children and Plants
##### Age 7 to 14 Challenge Level:
This challenge extends the Plants investigation so now four or more children are involved.
### More Plant Spaces
##### Age 7 to 14 Challenge Level:
This challenging activity involves finding different ways to distribute fifteen items among four sets, when the sets must include three, four, five and six items.
### A Long Time at the Till
##### Age 14 to 18 Challenge Level:
Try to solve this very difficult problem and then study our two suggested solutions. How would you use your knowledge to try to solve variants on the original problem?
### Colour Islands Sudoku 2
##### Age 11 to 18 Challenge Level:
In this Sudoku, there are three coloured "islands" in the 9x9 grid. Within each "island" EVERY group of nine cells that form a 3x3 square must contain the numbers 1 through 9.
### Inky Cube
##### Age 7 to 14 Challenge Level:
This cube has ink on each face which leaves marks on paper as it is rolled. Can you work out what is on each face and the route it has taken?
### The Naked Pair in Sudoku
##### Age 7 to 16
A particular technique for solving Sudoku puzzles, known as "naked pair", is explained in this easy-to-read article.
### Coins
##### Age 11 to 14 Challenge Level:
A man has 5 coins in his pocket. Given the clues, can you work out what the coins are?
### Olympic Logic
##### Age 11 to 16 Challenge Level:
Can you use your powers of logic and deduction to work out the missing information in these sporty situations?
### First Connect Three for Two
##### Age 7 to 14 Challenge Level:
First Connect Three game for an adult and child. Use the dice numbers and either addition or subtraction to get three numbers in a straight line.
### Twin Chute-swapping Sudoku
##### Age 14 to 18 Challenge Level:
A pair of Sudokus with lots in common. In fact they are the same problem but rearranged. Can you find how they relate to solve them both?
### Twin Corresponding Sudoku
##### Age 11 to 18 Challenge Level:
This sudoku requires you to have "double vision" - two Sudoku's for the price of one
### Warmsnug Double Glazing
##### Age 11 to 14 Challenge Level:
How have "Warmsnug" arrived at the prices shown on their windows? Which window has been given an incorrect price?
### Difference Dynamics
##### Age 14 to 18 Challenge Level:
Take three whole numbers. The differences between them give you three new numbers. Find the differences between the new numbers and keep repeating this. What happens? |
Mathematical analysis of a discrete fracture model coupling Darcy flow in the matrix with Darcy-Forchheimer flow in the fracture
ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique, Volume 48 (2014) no. 5, p. 1451-1472
We consider a model for flow in a porous medium with a fracture in which the flow in the fracture is governed by the Darcy-Forchheimer law while that in the surrounding matrix is governed by Darcy's law. We give an appropriate mixed, variational formulation and show existence and uniqueness of the solution. To show existence we give an analogous formulation for the model in which the Darcy-Forchheimer law is the governing equation throughout the domain. We show existence and uniqueness of the solution and show that the solution for the model with Darcy's law in the matrix is the weak limit of solutions of the model with the Darcy-Forchheimer law in the entire domain when the Forchheimer coefficient in the matrix tends toward zero.
DOI : https://doi.org/10.1051/m2an/2014003
Classification: 35J60, 76S05
Keywords: flow in porous media, fractures, Darcy-Forchheimer flow, solvability, regularization, monotone operators
@article{M2AN_2014__48_5_1451_0,
author = {Knabner, Peter and Roberts, Jean E.},
title = {Mathematical analysis of a discrete fracture model coupling Darcy flow in the matrix with Darcy-Forchheimer flow in the fracture},
journal = {ESAIM: Mathematical Modelling and Numerical Analysis - Mod\'elisation Math\'ematique et Analyse Num\'erique},
publisher = {EDP-Sciences},
volume = {48},
number = {5},
year = {2014},
pages = {1451-1472},
doi = {10.1051/m2an/2014003},
mrnumber = {3264361},
language = {en},
url = {http://www.numdam.org/item/M2AN_2014__48_5_1451_0}
}
Knabner, Peter; Roberts, Jean E. Mathematical analysis of a discrete fracture model coupling Darcy flow in the matrix with Darcy-Forchheimer flow in the fracture. ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique, Volume 48 (2014) no. 5, pp. 1451-1472. doi : 10.1051/m2an/2014003. http://www.numdam.org/item/M2AN_2014__48_5_1451_0/
[1] R. Adams, Sobolev Spaces, vol. 65 of Pure and Appl. Math. Academic Press, New York (1975). | MR 450957 | Zbl 0314.46030
[2] C. Alboin, J. Jaffré, J. Roberts and C. Serres, Domain decomposition for flow in porous media with fractures, in Proc. of the 11th Int. Conf. on Domain Decomposition Methods in Greenwich, England (1999).
[3] G. Allaire, Homogenization of the stokes flow in a connected porous medium. Asymptotic Anal. 2 (1989) 203-222. | MR 1020348 | Zbl 0682.76077
[4] G. Allaire, One-phase newtonian flow, in Homogenization and Porous Media, vol. 6 of Interdisciplinary Appl. Math., edited by U. Hornung. Springer-Verlag, New York (1997) 45-69. | MR 1434318
[5] Y. Amirat, Ecoulements en milieu poreux n'obeissant pas a la loi de darcy. RAIRO Modél. Math. Anal. Numér. 25 (1991) 273-306. | Numdam | MR 1103090 | Zbl 0727.76106
[6] P. Angot, F. Boyer and F. Hubert, Asymptotic and numerical modelling of flows in fractured porous media. ESAIM: M2AN 43 (2009) 239-275. | Numdam | MR 2512496 | Zbl 1171.76055
[7] M. Balhoff, A. Mikelic and M. Wheeler, Polynomial filtration laws for low reynolds number flows through porous media. Transport in Porous Media (2009). | MR 2592414
[8] J. Bear, Dynamics of Fluids in Porous Media. American Elsevier Pub. Co., New York (1972). | Zbl 1191.76001
[9] F. Brezzi, On the existence, uniqueness and approximation of saddle-point problems arising from lagrangian multipliers. RAIRO: Modél. Math. Anal. Numér. 8 (1974) 129-151. | Numdam | MR 365287 | Zbl 0338.90047
[10] P. Fabrie, Regularity of the solution of Darcy−Forchheimer's equation. Nonlinear Anal., Theory Methods Appl. 13 (1989) 1025-1049. | MR 1013308 | Zbl 0719.35070
[11] I. Faille, E. Flauraud, F. Nataf, S. Pegaz-Fiornet, F. Schneider and F. Willien, A new fault model in geological basin modelling, application to finite volume scheme and domain decomposition methods, in Finite Volumes for Complex Appl. III. Edited by R. Herbin and D. Kroner. Hermés Penton Sci. (2002) 543-550. | MR 2008978 | Zbl 1055.86001
[12] P. Forchheimer, Wasserbewegung durch Boden. Z. Ver. Deutsch. Ing. 45 (1901) 1782-1788.
[13] N. Frih, J. Roberts and A. Saada, Un modèle darcy-frochheimer pour un écoulement dans un milieu poreux fracturé. ARIMA 5 (2006) 129-143.
[14] N. Frih, J. Roberts and A. Saada, Modeling fractures as interfaces: a model for forchheimer fractures. Comput. Geosci. 12 (2008) 91-104. | MR 2386967 | Zbl 1138.76062
[15] P. Knabner and G. Summ, Solvability of the mixed formulation for Darcy−Forchheimer flow in porous media. Submitted.
[16] V. Martin, J. Jaffré and J.E. Roberts, Modeling fractures and barriers as interfaces for flow in porous media. SIAM J. Sci. Comput. 26 (2005) 1667-1691. | MR 2142590 | Zbl 1083.76058
[17] R. Showalter and F. Morales, The narrow fracture approximation by channeled flow. J. Math. Anal. Appl. 365 (2010) 320-331. | MR 2585104 | Zbl 1273.76370
[18] G. Summ, Lösbarkeit un Diskretisierung der gemischten Formulierung für Darcy-Frochheimer-Fluss in porösen Medien. Ph.D. thesis. Friedrich-Alexander-Universität Erlangen-Nürnberg (2001).
[19] L. Tartar, Convergence of the homogenization process, in Non-homogeneous Media and Vibration Theory, vol. 127 of Lect. Notes Phys. Edited by E. Sancez-Palencia. Springer-Verlag (1980).
[20] E. Zeidler, Nonlinear function anaysis and its applications - Nonlinear monotone operators. Springer-Verlag, Berlin, Heidelberg, New York (1990). | Zbl 0583.47050 |
Completely B# Continuous Mappings in Intuitionistic Fuzzy Topological Spaces
S. Dhivya1
Master of Philosophy (Mathematics) Avinashilingam (Deemed to be) University Coimbatore, India
Dr. D. Jayanthi2
Assistant Professor of Mathematics Avinashilingam (Deemed to be) University Coimbatore, India
Abstract: In this paper we introduce two types of b# continuous mappings, namely intuitionistic fuzzy completely b# continuous mappings and intuitionistic fuzzy perfectly b# continuous mappings. We also provide some interesting results based on these continuous mappings.
Keywords: Intuitionistic fuzzy sets, intuitionistic fuzzy topology, intuitionistic fuzzy completely b# continuous mappings.
1. INTRODUCTION
Intuitionistic fuzzy sets were introduced by Atanassov in 1986. Using the notion of intuitionistic fuzzy sets, Coker (1997) constructed the basic concepts of intuitionistic fuzzy topological spaces. The concepts of b# closed sets and b# continuous mappings in intuitionistic fuzzy topological spaces were introduced by Gomathi and Jayanthi (2018). In this paper we introduce intuitionistic fuzzy completely b# continuous mappings and intuitionistic fuzzy perfectly b# continuous mappings, and we provide some interesting results based on these continuous mappings.
2. PRELIMINARIES
Definition 2.1: [Atanassov, 1986] An intuitionistic fuzzy set (IFS) A is an object having the form A = {⟨x, µA(x), νA(x)⟩ : x ∈ X}, where the functions µA: X → [0, 1] and νA: X → [0, 1] denote the degree of membership and the degree of non-membership of each element x ∈ X to the set A respectively, and 0 ≤ µA(x) + νA(x) ≤ 1 for each x ∈ X. Denote by IFS(X) the set of all intuitionistic fuzzy sets in X. An IFS A in X is simply denoted by A = ⟨x, µA, νA⟩ instead of A = {⟨x, µA(x), νA(x)⟩ : x ∈ X}.
Definition 2.2: [Atanassov, 1986] Let A and B be two IFSs of the form A = {⟨x, µA(x), νA(x)⟩ : x ∈ X} and B = {⟨x, µB(x), νB(x)⟩ : x ∈ X}. Then the following properties hold:
(i) A ⊆ B if and only if µA(x) ≤ µB(x) and νA(x) ≥ νB(x) for all x ∈ X,
(ii) A = B if and only if A ⊆ B and B ⊆ A,
(iii) Aᶜ = {⟨x, νA(x), µA(x)⟩ : x ∈ X},
(iv) A ∩ B = {⟨x, µA(x) ∧ µB(x), νA(x) ∨ νB(x)⟩ : x ∈ X},
(v) A ∪ B = {⟨x, µA(x) ∨ µB(x), νA(x) ∧ νB(x)⟩ : x ∈ X}.
The IFSs 0~ = ⟨x, 0, 1⟩ and 1~ = ⟨x, 1, 0⟩ are respectively the empty set and the whole set of X.
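As a quick illustration of Definition 2.2 (a sketch in Python, not from the paper), with an IFS on a finite universe stored as a dict mapping x to the pair (µ, ν):

```python
def complement(A):
    # (iii): swap membership and non-membership
    return {x: (nu, mu) for x, (mu, nu) in A.items()}

def intersection(A, B):
    # (iv): pointwise min of memberships, max of non-memberships
    return {x: (min(A[x][0], B[x][0]), max(A[x][1], B[x][1])) for x in A}

def union(A, B):
    # (v): pointwise max of memberships, min of non-memberships
    return {x: (max(A[x][0], B[x][0]), min(A[x][1], B[x][1])) for x in A}

A = {"a": (0.2, 0.4), "b": (0.3, 0.5)}
B = {"a": (0.4, 0.2), "b": (0.5, 0.3)}
print(union(A, B))         # {'a': (0.4, 0.2), 'b': (0.5, 0.3)}
print(intersection(A, B))  # {'a': (0.2, 0.4), 'b': (0.3, 0.5)}
```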
Definition 2.3: [Coker, 1997] An intuitionistic fuzzy topology (IFT) on X is a family τ of IFSs in X satisfying the following axioms:
(i) 0~, 1~ ∈ τ,
(ii) G1 ∩ G2 ∈ τ for any G1, G2 ∈ τ,
(iii) ∪Gi ∈ τ for any family {Gi : i ∈ J} ⊆ τ.
In this case the pair (X, τ) is called an intuitionistic fuzzy topological space (IFTS) and any IFS in τ is known as an intuitionistic fuzzy open set (IFOS) in X. The complement Aᶜ of an IFOS A in an IFTS (X, τ) is called an intuitionistic fuzzy closed set (IFCS) in X.
Definition 2.4: [Coker, 1997] Let (X, τ) be an IFTS and A = ⟨x, µA, νA⟩ be an IFS in X. Then the intuitionistic fuzzy interior and intuitionistic fuzzy closure are defined by
int(A) = ∪{G : G is an IFOS in X and G ⊆ A},
cl(A) = ∩{K : K is an IFCS in X and A ⊆ K}.
Definition 2.5: [Gurcay, Coker and Hayder, 1997] An IFS A = ⟨x, µA, νA⟩ in an IFTS (X, τ) is said to be an
(i) intuitionistic fuzzy semi closed set if int(cl(A)) ⊆ A,
(ii) intuitionistic fuzzy pre closed set if cl(int(A)) ⊆ A,
(iii) intuitionistic fuzzy regular closed set if cl(int(A)) = A,
(iv) intuitionistic fuzzy α closed set if cl(int(cl(A))) ⊆ A,
(v) intuitionistic fuzzy β closed set if int(cl(int(A))) ⊆ A.
Definition 2.6: [Hanafy, 2009] An IFS A = ⟨x, µA, νA⟩ in an IFTS (X, τ) is said to be an intuitionistic fuzzy γ closed set if int(cl(A)) ∩ cl(int(A)) ⊆ A.
Definition 2.7: [Gomathi and Jayanthi, 2018] An IFS A = ⟨x, µA, νA⟩ in an IFTS (X, τ) is said to be an intuitionistic fuzzy b# closed set (IFb#CS) if int(cl(A)) ∩ cl(int(A)) = A.
Definition 2.8: [Coker, 1997] Let X and Y be two non-empty sets and f: X → Y be a mapping. If B = {⟨y, µB(y), νB(y)⟩ : y ∈ Y} is an IFS in Y, then the preimage of B under f is denoted and defined by f⁻¹(B) = {⟨x, f⁻¹(µB)(x), f⁻¹(νB)(x)⟩ : x ∈ X}, where f⁻¹(µB)(x) = µB(f(x)) for every x ∈ X.
Definition 2.9: [Gurcay, Coker and Hayder, 1997] Let f be a mapping from an IFTS (X, τ) into an IFTS (Y, σ). Then f is said to be an intuitionistic fuzzy continuous mapping if f⁻¹(V) is an IFCS in (X, τ) for every IFCS V of (Y, σ).
Definition 2.10: [Joung Kon Jeon, 2005] Let f be a mapping from an IFTS (X, τ) into an IFTS (Y, σ). Then f is said to be an
(i) intuitionistic fuzzy semi continuous mapping if f⁻¹(V) is an IFSCS in (X, τ) for every IFCS V of (Y, σ),
(ii) intuitionistic fuzzy α continuous mapping if f⁻¹(V) is an IFαCS in (X, τ) for every IFCS V of (Y, σ),
(iii) intuitionistic fuzzy pre continuous mapping if f⁻¹(V) is an IFPCS in (X, τ) for every IFCS V of (Y, σ),
(iv) intuitionistic fuzzy γ continuous mapping if f⁻¹(V) is an IFγCS in (X, τ) for every IFCS V of (Y, σ).
Definition 2.11: [Gomathi and Jayanthi, 2018] Let f be a mapping from an IFTS (X, τ) into an IFTS (Y, σ). Then f is said to be an
(i) intuitionistic fuzzy b# continuous mapping if f⁻¹(V) is an IFb#CS in (X, τ) for every IFCS V of (Y, σ),
(ii) intuitionistic fuzzy contra b# continuous mapping if f⁻¹(V) is an IFb#CS in (X, τ) for every IFOS V of (Y, σ),
(iii) intuitionistic fuzzy b# irresolute mapping if f⁻¹(V) is an IFb#CS in (X, τ) for every IFb#CS V of (Y, σ).
Definition 2.12: [Hanafy and El-Arish, 2003] Let f be a mapping from an IFTS (X, τ) into an IFTS (Y, σ). Then f is said to be an intuitionistic fuzzy completely continuous mapping if f⁻¹(V) is an IFROS in (X, τ) for every IFOS V of (Y, σ).
Definition 2.13: [Coker and Demirci, 1995] An intuitionistic fuzzy point (IFP), written as p(α,β), is defined to be an IFS of X given by p(α,β)(x) = (α, β) if x = p, and (0, 1) otherwise. An IFP p(α,β) is said to belong to a set A if α ≤ µA and β ≥ νA.
Definition 2.14: [Thakur and Rekha Chaturvedi, 2008] Two IFSs A and B are said to be q-coincident (A q B) if and only if there exists an element x ∈ X such that µA(x) > νB(x) or νA(x) < µB(x).
Definition 2.15: [Seok Jong Lee and Eun Pyo Lee, 2000] Let p(α,β) be an IFP in (X, τ). An IFS A of X is called an intuitionistic fuzzy neighbourhood of p(α,β) if there exists an IFOS B in X such that p(α,β) ∈ B ⊆ A.
Definition 2.16: [Dhivya and Jayanthi, 2019] Let f be a mapping from an IFTS (X, τ) into an IFTS (Y, σ). Then f is said to be an intuitionistic fuzzy almost b# continuous mapping if f⁻¹(V) is an IFb#CS in (X, τ) for every IFRCS V of (Y, σ).
3. COMPLETELY b# CONTINUOUS MAPPINGS IN INTUITIONISTIC FUZZY TOPOLOGICAL SPACES
In this section we introduce and investigate intuitionistic fuzzy completely b# continuous mappings and intuitionistic fuzzy perfectly b# continuous mappings. We provide many interesting results using these continuous mappings.
Definition 3.1: A mapping f: (X, τ) → (Y, σ) is called an intuitionistic fuzzy completely b# continuous mapping if f⁻¹(V) is an IFRCS in (X, τ) for every IFb#CS V of (Y, σ).
Example 3.2: Let X = {a, b} and Y = {u, v}. Then τ = {0~, G1, G2, 1~} and σ = {0~, G3, G4, 1~} are IFTs on X and Y respectively, where G1 = ⟨x, (0.2a, 0.3b), (0.4a, 0.5b)⟩, G2 = ⟨x, (0.4a, 0.5b), (0.2a, 0.3b)⟩, G3 = ⟨y, (0.2u, 0.3v), (0.4u, 0.5v)⟩ and G4 = ⟨y, (0.4u, 0.5v), (0.2u, 0.3v)⟩. Define a mapping f: (X, τ) → (Y, σ) by f(a) = u and f(b) = v. Then f is an intuitionistic fuzzy completely b# continuous mapping.
Proposition 3.3: A mapping f: (X, τ) → (Y, σ) is an intuitionistic fuzzy completely b# continuous mapping if and only if the inverse image of each IFb#OS in Y is an IFROS in X.
Proof: Obvious.
Proposition 3.4: If f: (X, τ) → (Y, σ) is an intuitionistic fuzzy completely b# continuous mapping where Y is an IFT_b# space [4], then for each IFP p(α,β) ∈ X and for every intuitionistic fuzzy neighbourhood A of f(p(α,β)), there exists an IFROS B of X such that p(α,β) ∈ B and f(B) ⊆ A.
Proof: Let p(α,β) be an IFP of X and let A be an intuitionistic fuzzy neighbourhood of f(p(α,β)), so that f(p(α,β)) ∈ C ⊆ A, where C is an IFOS in Y. Since every IFOS is an IFb#OS in an IFT_b# space, C is an IFb#OS in Y, as Y is an IFT_b# space. Hence by hypothesis, f⁻¹(C) is an IFROS in X and p(α,β) ∈ f⁻¹(C). Put B = f⁻¹(C). Therefore p(α,β) ∈ B = f⁻¹(C) ⊆ f⁻¹(A). Thus f(B) ⊆ f(f⁻¹(A)) ⊆ A. That is, f(B) ⊆ A.
Proposition 3.5: If a mapping f: (X, τ) → (Y, σ) is an intuitionistic fuzzy completely b# continuous mapping, then f⁻¹(B) ⊆ cl(int(f⁻¹(cl(B)))) for every IFS B in Y, where Y is an IFT_b# space.
Proof: Let B ⊆ Y be an IFS. Then cl(B) is an IFCS in Y and hence an IFb#CS in Y, as Y is an IFT_b# space. By hypothesis, f⁻¹(cl(B)) is an IFRCS in X. Hence cl(int(f⁻¹(cl(B)))) = f⁻¹(cl(B)) ⊇ f⁻¹(B).
Proposition 3.6: Let f: (X, τ) → (Y, σ) be a mapping. Then the following are equivalent:
(i) f is an intuitionistic fuzzy completely b# continuous mapping,
(ii) f⁻¹(V) is an IFROS in X for every IFb#OS V in Y,
(iii) for every IFP p(α,β) ∈ X and for every IFb#OS B in Y such that f(p(α,β)) ∈ B, there exists an IFROS A in X such that p(α,β) ∈ A and f(A) ⊆ B.
Proof: (i) ⇒ (ii): Let V be an IFb#OS in Y. Then Vᶜ is an IFb#CS in Y. Since f is an intuitionistic fuzzy completely b# continuous mapping, f⁻¹(Vᶜ) is an IFRCS in X. Since f⁻¹(Vᶜ) = (f⁻¹(V))ᶜ, f⁻¹(V) is an IFROS in X.
(ii) ⇒ (iii): Let p(α,β) ∈ X and B ⊆ Y be such that f(p(α,β)) ∈ B. This implies p(α,β) ∈ f⁻¹(B). Since B is an IFb#OS in Y, by hypothesis f⁻¹(B) is an IFROS in X. Let A = f⁻¹(B). Then p(α,β) ∈ f⁻¹(f(p(α,β))) ⊆ f⁻¹(B) = A. Therefore p(α,β) ∈ A and f(A) = f(f⁻¹(B)) ⊆ B. This implies f(A) ⊆ B.
(iii) ⇒ (ii): Let B ⊆ Y be an IFb#OS and let p(α,β) ∈ f⁻¹(B), so that f(p(α,β)) ∈ B. By hypothesis, there exists an IFROS C in X such that p(α,β) ∈ C and f(C) ⊆ B. This implies C ⊆ f⁻¹(f(C)) ⊆ f⁻¹(B). Therefore p(α,β) ∈ C ⊆ f⁻¹(B). That is, f⁻¹(B) = ∪_{p(α,β) ∈ f⁻¹(B)} p(α,β) ⊆ ∪_{p(α,β) ∈ f⁻¹(B)} C ⊆ f⁻¹(B). Since the union of IFROSs is an IFROS, f⁻¹(B) is an IFROS in X. Hence f is an intuitionistic fuzzy completely b# continuous mapping.
Proposition 3.7: Let f : X → Y be an intuitionistic fuzzy completely b# continuous mapping. Then the following are equivalent:
(i) For any IFb#OS A in Y and for any IFP p(α,β) ∈ X, if f(p(α,β)) q A, then p(α,β) q int(f⁻¹(A)).
(ii) For any IFb#OS A in Y and for any p(α,β) ∈ X, if f(p(α,β)) q A, then there exists an IFOS B such that p(α,β) q B and f(B) ⊆ A.
Proof: (i) ⇒ (ii): Let A ⊆ Y be an IFb#OS and let p(α,β) ∈ X with f(p(α,β)) q A. Then p(α,β) q f⁻¹(A), and (i) implies that p(α,β) q int(f⁻¹(A)), where int(f⁻¹(A)) is an IFOS in X. Let B = int(f⁻¹(A)). Since int(f⁻¹(A)) ⊆ f⁻¹(A), we have B ⊆ f⁻¹(A). Then f(B) ⊆ f(f⁻¹(A)) ⊆ A.
(ii) ⇒ (i): Let A ⊆ Y be an IFb#OS and let p(α,β) ∈ X. Suppose f(p(α,β)) q A; then by (ii) there exists an IFOS B in X such that p(α,β) q B and f(B) ⊆ A. Now B ⊆ f⁻¹(f(B)) ⊆ f⁻¹(A). That is, B = int(B) ⊆ int(f⁻¹(A)). Therefore p(α,β) q B implies p(α,β) q int(f⁻¹(A)).
Proposition 3.8: Let f1: (X, τ) → (Y, σ) and f2: (X, τ) → (Y, σ) be any two intuitionistic fuzzy completely b# continuous mappings. Then the mapping (f1, f2): (X, τ) → (Y × Y, σ × σ) is also an intuitionistic fuzzy completely b# continuous mapping.
Proof: Let A × B be an IFb#OS of Y × Y. Then (f1, f2)⁻¹(A × B)(x) = (A × B)(f1(x), f2(x)) = ⟨x, min(µA(f1(x)), µB(f2(x))), max(νA(f1(x)), νB(f2(x)))⟩ = ⟨x, min(f1⁻¹(µA)(x), f2⁻¹(µB)(x)), max(f1⁻¹(νA)(x), f2⁻¹(νB)(x))⟩ = (f1⁻¹(A) ∩ f2⁻¹(B))(x). Since f1 and f2 are intuitionistic fuzzy completely b# continuous mappings, f1⁻¹(A) and f2⁻¹(B) are IFROSs in X. Since the intersection of two IFROSs is an IFROS, f1⁻¹(A) ∩ f2⁻¹(B) is an IFROS in X. Hence (f1, f2) is an intuitionistic fuzzy completely b# continuous mapping.
Proposition 3.9: Let f : X → Y and g : Y → Z be any two mappings. If f and g are intuitionistic fuzzy completely b# continuous mappings, then g ∘ f is also an intuitionistic fuzzy completely b# continuous mapping, where Y is an IFT_b# space.
Proof: Let B be an IFb#CS in Z. Since g is an intuitionistic fuzzy completely b# continuous mapping, g⁻¹(B) is an IFRCS in Y. Since every IFRCS is an IFCS, g⁻¹(B) is an IFCS in Y. As Y is an IFT_b# space, g⁻¹(B) is an IFb#CS in Y. Now, as f is an intuitionistic fuzzy completely b# continuous mapping, f⁻¹(g⁻¹(B)) = (g ∘ f)⁻¹(B) is an IFRCS in X. Hence g ∘ f is an intuitionistic fuzzy completely b# continuous mapping.
Proposition 3.10: Let f : X → Y and g : Y → Z be any two mappings. If f is an intuitionistic fuzzy completely b# continuous mapping and g is an intuitionistic fuzzy b# irresolute mapping, then g ∘ f is also an intuitionistic fuzzy completely b# continuous mapping.
Proof: Let B be an IFb#CS in Z. Since g is an intuitionistic fuzzy b# irresolute mapping, g⁻¹(B) is an IFb#CS in Y. Also, since f is an intuitionistic fuzzy completely b# continuous mapping, f⁻¹(g⁻¹(B)) is an IFRCS in X. Since (g ∘ f)⁻¹(B) = f⁻¹(g⁻¹(B)), g ∘ f is an intuitionistic fuzzy completely b# continuous mapping.
Proposition 3.11: Let f : X → Y and g : Y → Z be any two mappings. If f is an intuitionistic fuzzy completely b# continuous mapping and g is an intuitionistic fuzzy b# continuous mapping, then g ∘ f is also an intuitionistic fuzzy completely continuous mapping.
Proof: Let B be an IFCS in Z. Since g is an intuitionistic fuzzy b# continuous mapping, g⁻¹(B) is an IFb#CS in Y. Also, since f is an intuitionistic fuzzy completely b# continuous mapping, f⁻¹(g⁻¹(B)) is an IFRCS in X. Since (g ∘ f)⁻¹(B) = f⁻¹(g⁻¹(B)), g ∘ f is an intuitionistic fuzzy completely continuous mapping.
Proposition 3.12: Let f : X → Y and g : Y → Z be any two mappings. If f is an intuitionistic fuzzy completely b# continuous mapping and g is an intuitionistic fuzzy b# continuous mapping, then g ∘ f is also an intuitionistic fuzzy completely continuous mapping.
Proof: Let B be an IFCS in Z. Since g is an intuitionistic fuzzy b# continuous mapping, g⁻¹(B) is an IFb#CS in Y. Also, since f is an intuitionistic fuzzy completely b# continuous mapping, f⁻¹(g⁻¹(B)) is an IFRCS in X. Since (g ∘ f)⁻¹(B) = f⁻¹(g⁻¹(B)), g ∘ f is an intuitionistic fuzzy completely continuous mapping.
Proposition 3.13: Let f : X → Y and g : Y → Z be any two mappings. If f is an intuitionistic fuzzy almost b# continuous mapping and g is an intuitionistic fuzzy completely b# continuous mapping, then g ∘ f is an intuitionistic fuzzy b# irresolute mapping.
Proof: Let B be an IFb#CS in Z. Since g is an intuitionistic fuzzy completely b# continuous mapping, g⁻¹(B) is an IFRCS in Y. Also, since f is an intuitionistic fuzzy almost b# continuous mapping, f⁻¹(g⁻¹(B)) is an IFb#CS in X. Since (g ∘ f)⁻¹(B) = f⁻¹(g⁻¹(B)), g ∘ f is an intuitionistic fuzzy b# irresolute mapping.
Definition 3.14: A mapping f: (X, τ) → (Y, σ) is called an intuitionistic fuzzy perfectly b# continuous mapping if f⁻¹(V) is an intuitionistic fuzzy clopen set in (X, τ) for every IFb#CS V of (Y, σ).
Example 3.15: Let X = {a, b} and Y = {u, v}. Then τ = {0~, G1, G2, 1~} and σ = {0~, G3, G4, 1~} are IFTs on X and Y respectively, where G1 = ⟨x, (0.2a, 0.3b), (0.4a, 0.5b)⟩, G2 = ⟨x, (0.4a, 0.5b), (0.2a, 0.3b)⟩, G3 = ⟨y, (0.2u, 0.3v), (0.4u, 0.5v)⟩ and G4 = ⟨y, (0.4u, 0.5v), (0.2u, 0.3v)⟩. Define a mapping f: (X, τ) → (Y, σ) by f(a) = u and f(b) = v. Then f is an intuitionistic fuzzy perfectly b# continuous mapping.
Proposition 3.16: A mapping f : (X, τ) → (Y, σ) is an intuitionistic fuzzy perfectly b# continuous mapping if and only if the inverse image of each IFb#OS in Y is intuitionistic fuzzy clopen in X.
Proof: Straightforward.
Proposition 3.17: If a mapping f : (X, τ) → (Y, σ) is an intuitionistic fuzzy perfectly b# continuous mapping, then f is an intuitionistic fuzzy continuous mapping, where Y is an IFT_b# space.
Proof: Let B be an IFCS in Y. Since every IFCS is an IFb#CS in an IFT_b# space, B is an IFb#CS in Y, as Y is an IFT_b# space. Since f is an intuitionistic fuzzy perfectly b# continuous mapping, f⁻¹(B) is an intuitionistic fuzzy clopen set in X. Thus f⁻¹(B) is an IFCS in X. Hence f is an intuitionistic fuzzy continuous mapping.
Proposition 3.18: If a mapping f: (X, τ) → (Y, σ) is an intuitionistic fuzzy perfectly b# continuous mapping, then f is an intuitionistic fuzzy almost b# continuous mapping, where X and Y are IFT_b# spaces.
Proof: Let B be an IFRCS in Y. Since every IFRCS is an IFCS, B is an IFCS in Y. Since Y is an IFT_b# space, B is an IFb#CS in Y. Since f is an intuitionistic fuzzy perfectly b# continuous mapping, f⁻¹(B) is an intuitionistic fuzzy clopen set in X. Thus f⁻¹(B) is an IFCS in X. Since every IFCS is an IFb#CS in an IFT_b# space, f⁻¹(B) is an IFb#CS in X, as X is an IFT_b# space. Hence f is an intuitionistic fuzzy almost b# continuous mapping.
Proposition 3.19: If a mapping f: (X, τ) → (Y, σ) is an intuitionistic fuzzy perfectly b# continuous mapping, then f is an intuitionistic fuzzy b# continuous mapping, where X and Y are IFT_b# spaces.
Proof: Let B be an IFCS in Y. Since every IFCS is an IFb#CS in an IFT_b# space, B is an IFb#CS in Y, as Y is an IFT_b# space. Since f is an intuitionistic fuzzy perfectly b# continuous mapping, f⁻¹(B) is an intuitionistic fuzzy clopen set in X. Thus f⁻¹(B) is an IFCS in X. Since every IFCS is an IFb#CS in an IFT_b# space, f⁻¹(B) is an IFb#CS in X, as X is an IFT_b# space. Hence f is an intuitionistic fuzzy b# continuous mapping.
Proposition 3.20: If a mapping f : (X, τ) → (Y, σ) is an intuitionistic fuzzy perfectly b# continuous mapping, then f is an intuitionistic fuzzy semi continuous mapping, where Y is an IFT_b# space.
Proof: Let B be an IFCS in Y. Since every IFCS is an IFb#CS in an IFT_b# space, B is an IFb#CS in Y, as Y is an IFT_b# space. Since f is an intuitionistic fuzzy perfectly b# continuous mapping, f⁻¹(B) is an intuitionistic fuzzy clopen set in X. Thus f⁻¹(B) is an IFCS in X. Since every IFCS is an IFSCS, f⁻¹(B) is an IFSCS in X. Hence f is an intuitionistic fuzzy semi continuous mapping.
Proposition 3.21: If a mapping f : (X, τ) → (Y, σ) is an intuitionistic fuzzy perfectly b# continuous mapping, then f is an intuitionistic fuzzy α continuous mapping, where Y is an IFT_b# space.
Proof: Let B be an IFCS in Y. Since every IFCS is an IFb#CS in an IFT_b# space, B is an IFb#CS in Y, as Y is an IFT_b# space. Since f is an intuitionistic fuzzy perfectly b# continuous mapping, f⁻¹(B) is an intuitionistic fuzzy clopen set in X. Thus f⁻¹(B) is an IFCS in X. Since every IFCS is an IFαCS, f⁻¹(B) is an IFαCS in X. Hence f is an intuitionistic fuzzy α continuous mapping.
Proposition 3.22: If a mapping f : (X, τ) → (Y, σ) is an intuitionistic fuzzy perfectly b# continuous mapping, then f is an intuitionistic fuzzy pre continuous mapping, where Y is an IFT_b# space.
Proof: Let B be an IFCS in Y. Since every IFCS is an IFb#CS in an IFT_b# space, B is an IFb#CS in Y, as Y is an IFT_b# space. Since f is an intuitionistic fuzzy perfectly b# continuous mapping, f⁻¹(B) is an intuitionistic fuzzy clopen set in X. Thus f⁻¹(B) is an IFCS in X. Since every IFCS is an IFPCS, f⁻¹(B) is an IFPCS in X. Hence f is an intuitionistic fuzzy pre continuous mapping.
Proposition 3.23: Let f : X → Y and g : Y → Z be any two intuitionistic fuzzy perfectly b# continuous mappings, where Y is an IFT_b# space. Then their composition g ∘ f : X → Z is an intuitionistic fuzzy perfectly b# continuous mapping.
Proof: Let A be an IFb#CS in Z. Then by hypothesis, g⁻¹(A) is an intuitionistic fuzzy clopen set in Y. Since Y is an IFT_b# space, g⁻¹(A) is an IFb#CS in Y. Again by hypothesis, f⁻¹(g⁻¹(A)) is an intuitionistic fuzzy clopen set in X. Since f⁻¹(g⁻¹(A)) = (g ∘ f)⁻¹(A), (g ∘ f)⁻¹(A) is an intuitionistic fuzzy clopen set in X. Hence g ∘ f is an intuitionistic fuzzy perfectly b# continuous mapping.
REFERENCES
1. Atanassov, K., Intuitionistic fuzzy sets, Fuzzy Sets and Systems, 20, 1986, 87-96.
2. Coker, D., An introduction to intuitionistic fuzzy topological spaces, Fuzzy Sets and Systems, 88, 1997, 81 – 89.
3. Coker, D. and Demirci, M., On intuitionistic fuzzy points, Notes on Intuitionistic Fuzzy Sets, 1, 1995, 79-84.
4. Dhivya, S., and Jayanthi, D., Almost b# continuous mappings in intuitionistic fuzzy topological spaces, IOSR Journal of Mathematics (to appear).
5. Gomathi, G., and Jayanthi, D., Intuitionistic fuzzy b# continuous mapping, Advances in Fuzzy Mathematics, 13, 2018, 39 – 47.
6. Gomathi, G., and Jayanthi, D., b# Closed sets in Intuitionistic Fuzzy Topological Spaces, International Journal of Mathematical Trends and technology, 65, 2019, 22-26.
7. Gurcay, H., Coker, D. and Hayder, Es, A., On fuzzy continuity in intuitionistic fuzzy topological spaces, The Journal of Fuzzy Mathematics, 5, 1997, 365-378.
8. Hanafy, I. M., Intuitionistic fuzzy γ continuity, Canad. Math. Bull., 52, 2009, 1-11.
9. Joung Kon Jeon, Young Bae Jun and Jin Han Park, Intuitionistic fuzzy alpha continuity and intuitionistic fuzzy pre continuity, International Journal of Mathematics and Mathematical Sciences, 19, 2005, 3091-3101.
10. Seok Jong Lee and Eun Pyo Lee, The Category of intuitionistic fuzzy topological spaces, Bull. Korean Math. Soc., 37, 2000, 63-76. |
Date: Jul 14, 2013 3:19 AM
Author: Nasser Abbasi
Subject: Re: An independent integration test suite
On 7/13/2013 11:22 PM, daly@axiom-developer.org wrote:
> Axiom has published a Computer Algebra Test Suite at
> http://axiom-developer.org/axiom-website/CATS/index.html
> It includes Schaums integrals and Kamke's Ordinary Differential Equations.

I am working on Kamke, and have all the ODEs, and I also have the book, which I check against.

The problem is that writing the document itself, to show the results of Maple and Mathematica next to each other, is what is taking a long time, since it involves lots of manual work.

The problem is that with Maple it is not possible to export each result on its own to a .png file so that I can include it in my LaTeX report. Exporting each ODE's result to LaTeX does not work for long results, since sometimes the LaTeX needs manual breaking of the generated equation (if there is a long expression between \left( ... and \right)). So I have to do each one by one and use .png files to capture the result.

Currently there are about 150 or so done:
http://12000.org/my_notes/kamek/kamke_differential_equations.htm

Again, to make a document in LaTeX which includes many CASs' results, the CAS itself must help in terms of making it easy to export things. Mathematica is very good in this area, so one can automate all this in code, run over the hundreds of ODEs, and do everything in code. But to integrate results of other CASs into one document, this process breaks down.

--Nasser

> It also includes Albert Rich's integration set.
> In all there are several thousand examples.
>
> The source file format is latex, the output file format is pdf.
> The axiom.sty package is at
> http://axiom-developer.org/axiom-website/CATS/axiom.sty
>
> Each problem includes the source input.
> Axiom's output is prefixed with --R which is an Axiom comment.
# trigonometry conundrum
• December 11th 2009, 08:48 AM
rainer
trigonometry conundrum
1) For what value(s) of b does
$\sqrt{\frac{A_1}{A_2}}=2b-1$ for all $\theta$ ?
(A1=area of the small triangle on top, A2=area of the bigger triangle on the bottom with base b)
2) bonus problem:
Given the foregoing, express $\frac{A_1}{A_2}$ as a hyperbolic function with b in the argument. Express b as trig function with theta in the argument.
I would never be able to solve this if I hadn't been thinking about it for the past few months.
• December 11th 2009, 03:54 PM
Defunkt
Sort of in a hurry, so this may have mistakes in it, but here is my approach to the first: (Note that I assumed, as per the image, that 1>b>0)
Let P denote the rightmost side of the triangle A2 and Q denote the rightmost side of the triangle A1. Then:
$tan \theta = \frac{P}{b} = \frac{Q}{1-b} \Rightarrow Q = \frac{1-b}{b}P$
but: $A_1 = \frac{(1-b)Q}{2}$
and: $A_2 = \frac{Pb}{2}$
So:
$\frac{A_1}{A_2} = \frac{Q(1-b)}{Pb} = \frac{(1-b)^2}{b^2} = (\frac{1-b}{b})^2$
So we want to solve: $\pm \frac{1-b}{b} = 2b-1$ w.r.t b. However, $\frac{1-b}{b}$ is strictly positive since $1>b, b>0$ therefore $-\frac{1-b}{b}$ is obviously not an option. Now we get:
$\frac{1-b}{b} = 2b-1 \Rightarrow 2b^2-b = 1-b \Rightarrow b^2 = \frac{1}{2} \Rightarrow \boxed{b = \frac{1}{\sqrt{2}}}$
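A quick numerical check of this result (a sketch; the area formulas are the ones derived above):

```python
from math import tan, sqrt

b = 1 / sqrt(2)
for theta in (0.3, 0.7, 1.2):                    # any acute angle
    A2 = b * (b * tan(theta)) / 2                # base b, height b*tan(theta)
    A1 = (1 - b) * ((1 - b) * tan(theta)) / 2    # base 1-b, height (1-b)*tan(theta)
    print(sqrt(A1 / A2), 2 * b - 1)              # both ~ 0.4142 for every theta
```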
• December 12th 2009, 12:34 AM
simplependulum
Quote:
Originally Posted by Defunkt
$\frac{1-b}{b} = 2b-1 \Rightarrow 2b^2-b = 1-b \Rightarrow b^2 = \frac{1}{2} \Rightarrow \boxed{b = \frac{1}{4}}$
but $( \frac{1}{4} )^2 = \frac{1}{16 }$ ...
• December 12th 2009, 03:30 AM
Defunkt
Quote:
Originally Posted by simplependulum
but $( \frac{1}{4} )^2 = \frac{1}{16 }$ ...
$b=\frac{1}{\sqrt{2}}$ (Giggle)
• December 12th 2009, 09:51 AM
rainer
That's right Defunkt. Another way to derive the heights of the two triangles is to consider that the equation for the hypotenuse line is
$y=x\tan{\theta}$
then substituting in b and 1 for x.
So, in five seconds you made mince meat of a problem that has occupied me for a few months!
But what about the second part? Can you develop an expression for A1/A2 in terms of a hyperbolic function with b in the argument? This is actually the part that has most occupied me.
And can you express the answer $\frac{1}{\sqrt{2}}$ as a trig function valid for all theta? (This is the same problem I posted in the puzzles forum)
• December 12th 2009, 02:04 PM
Defunkt
I don't see why $\frac{1}{\sqrt{2}}(cos^2(\theta)+sin^2(\theta))$ is not a valid answer. I also don't see the point in trying to find a trigonometric function that is constant!
• December 12th 2009, 03:57 PM
rainer
Sorry if I've annoyed you. Of course your expression is valid too. Just wanted to draw a little bit more attention since the finding I posted in the puzzles forum does amount to "original research," however insignificant it may be.
What about the hyperbolic expression? Any progress there? |
# List of numerical analysis topics
This is a list of numerical analysis topics.
## Error
Error analysis (mathematics)
## Numerical linear algebra
Numerical linear algebra — study of numerical algorithms for linear algebra problems
### Eigenvalue algorithms
Eigenvalue algorithm — a numerical algorithm for locating the eigenvalues of a matrix
## Interpolation and approximation
Interpolation — construct a function going through some given data points
### Polynomial interpolation
Polynomial interpolation — interpolation by polynomials
### Spline interpolation
Spline interpolation — interpolation by piecewise polynomials
### Trigonometric interpolation
Trigonometric interpolation — interpolation by trigonometric polynomials
### Approximation theory
Approximation theory
## Finding roots of nonlinear equations
See #Numerical linear algebra for linear equations
Root-finding algorithm — algorithms for solving the equation f(x) = 0
## Optimization
Mathematical optimization — algorithm for finding maxima or minima of a given function
### Linear programming
Linear programming (also treats integer programming) — objective function and constraints are linear
### Convex optimization
Convex optimization
### Nonlinear programming
Nonlinear programming — the most general optimization problem in the usual framework
### Optimal control and infinite-dimensional optimization
Optimal control
Infinite-dimensional optimization
### Miscellaneous
Numerical integration — the numerical evaluation of an integral
## Numerical methods for ordinary differential equations
Numerical methods for ordinary differential equations — the numerical solution of ordinary differential equations (ODEs)
## Numerical methods for partial differential equations
Numerical partial differential equations — the numerical solution of partial differential equations (PDEs)
### Finite difference methods
Finite difference method — based on approximating differential operators with difference operators
### Finite element methods
Finite element method — based on a discretization of the space of solutions
## Software
For a large list of software, see the list of numerical analysis software. |
# Other methods for Laplacian equations
Assume $$A^{2}=(x^{2}+y^{2})\cos^{2}\psi+z^{2}\cot^{2}\psi$$ where $A$ is a constant. How can we show that $\psi(x,y,z)$ satisfies Laplace's equation $\psi_{xx}+\psi_{yy}+\psi_{zz}=0$ ($\operatorname{div}\nabla\psi=0$) without calculating $\psi(x,y,z)$ explicitly? I calculated $\psi(x,y,z)$ itself and differentiated, but I'm looking for easier methods; it doesn't matter which method is used, only the time it takes.
Hmmm...WolframAlpha can't simplify the expression, but you could try implicit differentiation and Laplacian in the cylindrical coordinates. May I know the origin of this question? – Shuhao Cao Jul 28 '13 at 3:17
@ShuhaoCao, about the origin of the question, I don't know. A physics student asked it from me and I had no idea so I put it here. – AmirHosein SadeghiManesh Jul 28 '13 at 7:32 |
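Not a proof, but a cheap numerical sanity check is possible without ever writing $\psi$ in closed form (a sketch; $A=1$ and the sample point are arbitrary choices): solve the defining equation for $\psi$ by bisection, then estimate the Laplacian by central differences.

```python
from math import cos, tan

A = 1.0

def psi(x, y, z):
    """Solve (x^2+y^2)cos^2(p) + z^2 cot^2(p) = A^2 for p in (0, pi/2).
    The left-hand side is strictly decreasing there, so bisection works."""
    F = lambda p: (x*x + y*y) * cos(p)**2 + z*z / tan(p)**2 - A*A
    lo, hi = 1e-9, 1.5707
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if F(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

def laplacian(f, x, y, z, h=1e-4):
    """Central-difference estimate of f_xx + f_yy + f_zz."""
    c = f(x, y, z)
    return (f(x+h, y, z) + f(x-h, y, z) + f(x, y+h, z) + f(x, y-h, z)
            + f(x, y, z+h) + f(x, y, z-h) - 6*c) / h**2

print(laplacian(psi, 0.5, 0.4, 0.3))  # ~ 0 up to finite-difference noise
```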
# Homework Help: Maclaurin Series f(x) = (2x)/(1+x^2)
1. Dec 10, 2009
### jacoleen
1. The problem statement, all variables and given/known data
Write the Maclaurin series for:
f(x) = (2x)/(1+x^2)
2. Relevant equations
3. The attempt at a solution
I tried finding all the derivatives (aka f(x), f'(x), f''(x), etc..) but the equations started getting longer and longer and would always result in 0 when x=0. This was on a final exam last year, and I don't think they would have made students do long computations like that
Also, I tried just finding the derivatives for (1-x)-1, but again, I ended up with a bunch of zeros
Help?
2. Dec 10, 2009
### azure kitsune
Hmmm... I'm not sure, but would it be cheating to use that
$$1 + r + r^2 + r^3 + \cdots = \frac{1}{1-r}$$
for $$|r| < 1$$?
If you do that, you wouldn't need any calculus at all (except for the uniqueness of Maclaurin series).
3. Dec 10, 2009
### jacoleen
I was thinking of doing that in the first place, but on another exam there was a similar question and part c was Find f^2007(0) :(
4. Dec 10, 2009
### azure kitsune
In that case, you can work backwards, because the coefficient of xn in the series gives you $$\frac{f^{n}(0)}{n!}$$
5. Dec 10, 2009
### jacoleen
so my series would just be the summation of $(-1)^n 2x^{2n+1}$?
I'm not sure how I would work backwards though :|
6. Dec 10, 2009
### azure kitsune
Usually, you would compute $$\frac{f^{2007}(0)}{2007!}$$ to find the coefficient of x2007 in a Maclaurin series. But in this case, you know the coefficient. What does that tell you about $$\frac{f^{2007}(0)}{2007!}$$ ?
7. Dec 10, 2009
### jacoleen
I'm really not getting it..
do i isolate for f(2007) by equating it to the summation? (without the summation term in front)
8. Dec 10, 2009
### azure kitsune
You're on the right track. What is the coefficient of x2007 in this problem?
9. Dec 10, 2009
### jacoleen
the summation divided by x^-2007?
10. Dec 10, 2009
### jacoleen
* without the negative
11. Dec 10, 2009
### azure kitsune
Actually, you found that
$$\frac{2x}{1+x^2} = \sum_{n=0}^{\infty}(-1)^n2x^{2n+1}$$
When you write out the summation, you get
$$\frac{2x}{1+x^2} = 2 x-2 x^3+2 x^5-2 x^7+2 x^9-2 x^{11} + \cdots$$
From this, can you find the coefficient of x^2007?
Last edited: Dec 10, 2009
12. Dec 10, 2009
### jacoleen
is it just -2?
(I'm so sorry I logged off btw, my computer overheated :(
13. Dec 10, 2009
### azure kitsune
Yep!
Now remember that the Maclaurin series of f(x) can be found by:
$$f(x) = \frac{f(0)}{0!} + \frac{f'(0)}{1!} x + \frac{f''(0)}{2!} x^2 + \frac{f'''(0)}{3!} x^3 +\cdots$$
So the coefficient of x^2007 must be
$$\frac{f^{(2007)}(0)}{2007!}$$
But you know that the coefficient is -2. So what does this tell you about f^(2007)(0)?
14. Dec 10, 2009
### jacoleen
it's equal to 2? ...and so f^(2007) = 2*2007!?
15. Dec 10, 2009
### azure kitsune
Close! It's actually f^(2007)(0) = -2 * 2007!, but I think you got the idea. ;)
16. Dec 10, 2009
### jacoleen
OMG..it actually makes sense!!
Thank you so much for your help!! :D
17. Dec 11, 2009
### Staff: Mentor
If all you need to do is get the Maclaurin series for 2x/(1 + x^2), there's something you can do that's much simpler than what I've seen in this thread - just use polynomial long division to divide 2x by 1 + x^2. Doing this, I get 2x - 2x^3 + 2x^5 - ...
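As a quick check of the thread's conclusion (not part of the original exam solution), sympy confirms both the series and the value of f^(2007)(0):

```python
# Verify the Maclaurin series of 2x/(1 + x^2) and deduce f^(2007)(0).
import sympy as sp

x = sp.symbols('x')
f = 2*x / (1 + x**2)

print(sp.series(f, x, 0, 8))  # 2*x - 2*x**3 + 2*x**5 - 2*x**7 + O(x**8)

# The series is sum_{n>=0} (-1)^n * 2 * x^(2n+1); x^2007 corresponds to
# n = 1003, so the coefficient is (-1)^1003 * 2 = -2, and therefore
# f^(2007)(0) = coefficient * 2007! = -2 * 2007!.
n = (2007 - 1) // 2
coeff = (-1)**n * 2
print(coeff)  # -2
```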
Sequence identity between sequences with different lengths
Hello,
A simple question. What is the sequence identity between 2 sequences when one is much larger than the other?
Example:
seq1: -------------------AGTGTGAAAAAGGT----------------
seq2: ATATATGCGCATGGTAATAAGTGTGAAAAAGGTTATATGCGCATAAGGT
The smaller sequence corresponds 100% to a subset of the bigger one. Do they have 100% identity? Or rather something like 30%, as seq1 corresponds to 30% of seq2?
The reason why I ask is that I am filtering an alignment of two assemblies of the same genome (with nucmer/MUMmer), and I can filter out aligned contigs based on identity.
Thank you,
Ricardo
I would say that if you look at seq1, it has 100% identity over 100% of its length; if you look at seq2, it has 100% identity over 30% of its length. It's a point of view.
I would say seq1 is 100% identical to seq2, while seq2 is only 30% identical to seq1.
Unfortunately it depends heavily on how you look at this.
Great, that's it, thanks! It depends on what the query is and what the reference is. (If you write it as an answer instead of a comment I'll accept it.)
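To make the query-vs-reference point concrete, here is a tiny Python sketch using the sequences from the post:

```python
# Identity depends on which sequence's length you normalize by.
seq1 = "AGTGTGAAAAAGGT"
seq2 = "ATATATGCGCATGGTAATAAGTGTGAAAAAGGTTATATGCGCATAAGGT"

matched = len(seq1) if seq1 in seq2 else 0  # seq1 matches a substring of seq2 exactly
print(f"identity over seq1 (query):     {matched / len(seq1):.0%}")  # 100%
print(f"identity over seq2 (reference): {matched / len(seq2):.0%}")  # ~29%
```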
… getcalc.com's Poisson Distribution calculator is an online statistics & probability tool used to estimate the probability of x success events in very large n number of trials in probability & statistics experiments. The probability that a single success will occur during a short interval is Definition 1: The Poisson distribution has a probability distribution function (pdf) given by. select function: probability mass f The symbol for this average is $\lambda$, the greek letter lambda. We might ask: What is the likelihood that she Poisson Distribution Calculator. The properties of the Poisson distribution have relation to those of the binomial distribution:. Cumulative Distribution % 0%. 1.). EXACTLY n successes in a Poisson Historically, schools in a Dekalb County close 3 days each year, due to snow. the probability of getting AT MOST 1 phone call is indicated by P(X < 1); person_outlineTimurschedule 2018-02-09 08:16:17. The average number of successes is called “Lambda” and denoted by the symbol “λ”. This step-by-step guide will show you how to make your own. Thank you for all the effort behind this free calculator. customers entering the shop, defectives in a box of parts or in a fabric roll, cars arriving at a tollgate, calls arriving at the switchboard) over a continuum (e.g. This calculator is featured to generate the complete work with steps for any corresponding input values to solve Poisson distribution worksheet or homework problems. For help in using the calculator, read the typist to make three times as many errors, on average. The Poisson Distribution Calculator will construct a complete poisson distribution, and identify the mean and standard deviation. Since the schools have closed historically 3 days each year due to We commonly use them when trying to summarise and gain insights from different forms of data. “D6” To read more about the step by step tutorial about the theory of Poisson Distribution and examples of Poisson Distribution Calculator with Examples. Properties of the Poisson distribution. Before using the calculator, you must know the average number of times the event occurs in the time interval. Your email address will not be published. Poisson distribution was developed by 19 th century French mathematician Siméon Denis Poisson. Observation: Some key statistical properties of the Poisson distribution are:. Let’s consider that in average within a year there are 10 days with extreme weather problems in United States. This distribution is appropriate for applications that involve counting the number of times a random event occurs in a given amount of time, distance, area, and so on. pages? An expert typist makes, on average, 2 typing errors every 5 pages. Note, however, that our The Poisson distribution is one of the most commonly used distributions in all of statistics. The problem enters when you try to combine them. This simple Poisson calculator tool takes the goal expectancy for the home and away teams in a particular match then using a Poisson function calculates the percentage chance and likely number of goals each team will score. This simple Poisson calculator tool takes the goal expectancy for the home and away teams in a particular match then using a Poisson function calculates the percentage chance and likely number of goals each team will score. Therefore, average rate Poisson probabilities. proportional to the size of the interval. average of 1 phone call per hour. on the Poisson distribution. Español; Free Poisson distribution calculation online. 
Poisson Distribution – A Formula to Calculate Probability Distribution. A poisson random variable(x) refers to the number of success in a poisson experiment. Poisson Distribution. Poisson Distribution Calculator. We might be interested in the number of phone calls received in Kopia Poisson Distribution Calculator. Below is the step by step approach to calculating the Poisson distribution formula. Given the mean number of successes (μ) that occur in a specified region, we can compute the Poisson probability based on the following formula: This calculator is used to find the probability of number of events occurs in a period of time with a known average rate. the probability of getting AT LEAST 1 phone call is indicated by P(X > 1); kamil_cyrkle. What is a cumulative Poisson probability? How poisson & cumulative poisson distribution calculator works? The Poisson distribution is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time or space if these events occur with a known constant mean rate and independently of the time since the last event.. The Poisson distribution is one of the most commonly used distributions in all of statistics. The Poisson distribution and the binomial distribution have some similarities, but … Poisson Probability Calculator. Poisson distribution, in statistics, a distribution function useful for characterizing events with very low probabilities. P (15;10) = 0.0347 = 3.47% Hence, there is 3.47% probability of that even… The Poisson distribution is the discrete probability distribution of the number of events occurring in a given time period, given the average number of times the event occurs over that time period. This calculator is featured to generate the complete work with steps for any corresponding input values to solve Poisson distribution worksheet or homework problems. Loading… RESULTS. Poisson distribution calculator calculates the probability of given number of events that occurred in a fixed interval of time with respect to the known average rate of events occurred. Poisson probability. Activity. Further, if we would also know that their opponent (let’s say Man Utd) is expected to score 1 goal in this game, we can derive the probability for a Man City win, a draw and a Man Utd win as well. The probability of getting EXACTLY Poisson Distribution Calculator. The owner could create a record of how many customers visit the store at different times and on different days of the week in order to then fit this data to a Poisson Distribution. The Poisson distribution and the binomial distribution have some similarities, but also several differences. number of calls during a 30-minute time period. The Poisson Calculator makes it easy to compute individual and cumulative We might, for example, ask how many customers visit a explained through illustration. Poisson distribution calculator helps you to determine the probability and cumulative probabilities for Poisson random variable given the mean number of successes ($\lambda$). MLE for a Poisson Distribution (Step-by-Step) Maximum likelihood estimation (MLE) is a method that can be used to estimate the parameters of a given distribution. Suppose we knew that she received 1 phone call per It is named after Simeon-Denis Poisson (1781-1840), a French mathematician, who published its essentials in a paper in 1837. Then, the average rate of Poisson Distribution Calculator. 
The average number of successes is called “Lambda” and denoted by the symbol $$\lambda$$. of success over a 1-hour period would be 1 phone call. A Poisson Poisson Distribution in Excel. In this example, u = average number of occurrences of event = 10 And x = 15 Therefore, the calculation can be done as follows, P (15;10) = e^(-10)*10^15/15! Enter $\lambda$ and the maximum occurrences, then the calculator will find all the poisson … A cumulative Poisson probability refers to the probability The parameter μ is often replaced by λ.. A chart of the pdf of the Poisson distribution for λ = 3 is shown in Figure 1.. x = 0,1,2,3… Step 3:λ is the mean (average) number of events (also known as “Parameter of Poisson Distribution). Male or Female ? Cumulative Probabilities. Calculate. Fill the calculator form and click on Calculate button to get result here. Poisson distribution Calculator . Poisson distribution (chart) Calculator . probability distribution of a Poisson random variable. Home / Probability Function / Poisson distribution; Calculates a table of the probability mass function, or lower or upper cumulative distribution function of the Poisson distribution, and draws the chart. one phone call. successes that occur over a particular interval in a Poisson experiment. A Poisson probability refers to the probability of getting To answer the first point, we will need to calculate the probability of fewer than 2 accidents per week using Poisson distribution. Enter a value in BOTH of the first two text boxes. First, calculate your team’s expected goals. Poisson distributions are very useful for smart order routers and algorithmic trading. Of that event occurrence for 15 times ( e.g the step by tutorial... 1-Hour period would be an example of a Poisson experiment examines the number of successes in a given of..., symbolized by x to occur, symbolized by x the properties of the normal distribution given.! United States will face such events for 15 days in the time interval have some similarities, but several! A sports event count of events that occur in a certain range the normal distribution 1 - P x... Named after Simeon-Denis poisson distribution calculator ( 1781-1840 ), a French mathematician, who its... Probability Calculator ( 1781-1840 ), a French Mathematician-cum- Physicist, Simeon Denis in... Constant ; the average rate because of this, they 're quite an important type of probability the. Mean λ: λ≧0 ; Customer Voice 5 errors on the question above: is... With cumulative Poisson probability is the number of trials, or iGoogle a there! Different unit of time, a distance, volume, area or number of similar )...: in statistics, and data Science accidentally injured or killed from kicks by horses events for 15 in. And the binomial distribution, we won ’ T be given in a certain time interval, length volume... Period of time, a distribution of a Poisson distribution: the Poisson distribution how. Goal outcomes for each team that result from a Poisson experiment enters when you try to combine them mean standard... Consider that in average within a very short interval is proportional to the probability of number times...: e is the likelihood that she will receive AT most 5 errors on the Poisson Calculator it... Simeon Denis Poisson the second possibility of zero accidents and the second possibility of zero and. The occurrence of an event in a Poisson distribution and how he uses it in his betting published. 
Will construct a complete Poisson distribution … Poisson distribution is one of the Questions addresses your need, to! Outside the interval could be anything - a unit of time Principal Components Regression in R Step-by-Step! As P ( x ) refers to the Poisson random variable explains how to make your own with steps any... The interval forms of data P ( x < 2 ) which events occur is constant the. An expert typist makes, on average Actuarial Science University of Iowa Poisson distribution in his.. Will see how to use the following characteristics: the Poisson distribution Calculator '' widget for your,... If you take the simple example of modeling the number of successes a... Interval is independent of successes in a certain time interval than 2 indicates the first two text boxes is errors. In questionnaire question above: what is the probability of getting AT most n successes in paper..., Poisson distribution and the average number of calls during a specified interval variance for given parameters trials or probability... Math problems County close 3 days each year, due to snow, the formula for Poisson random is! Formula of T distribution Poisson distribution is used to find the answer to a Frequently-Asked question simply. Fifteen pages can calculate the probability that United States will face such events for 15?... Century French mathematician Siméon Denis Poisson greek letter Lambda home / probability function / Poisson distribution and how uses... Models events, particularly uncommon events relation to those of the Questions addresses your need, refer to Trek! Of successes will be given for a certain time interval functions of the Poisson probability Calculator given! The next hour that she received 1 phone call per hour on average to 1 - P x., consider the probability that a single success will occur within a very short interval is proportional to number! The typist will make AT most n successes in a Poisson random variable is an average of 1 phone per! Most 1 phone call per hour to read more about it below the.. Next hour that she received 1 phone call next hour that she received 1 phone call per hour success a. Successes that occur over a particular interval in a period of time, length, volume or area widget. Specific period next hour that she will receive 4 phone calls in the next fifteen pages: the random... Can discover more about the theory of Poisson distribution is used for calculating λ >... Enough according to the properties of the interval k being usually interval of time, a of! Particular interval in a Poisson probability refers to the probability of getting AT most phone. X: x=0,1,2,... mean λ: λ≧0 ; Customer Voice Poisson proposed the Poisson Calculator makes it to., on average useful as it models events, particularly uncommon events: calculate Poisson distribution.. Dekalb County will close for 4 days next year is best explained illustration... To the number of similar items ) new math problems a year there are days... Discovered by a receptionist receives an average, on average algorithmic poisson distribution calculator the properties of the Questions addresses need! With extreme weather problems in United States will face such events for 15 times this example is equal 1... The example of a distribution of a Poisson experiment poisson distribution calculator with extreme weather problems in United States tutorial will you..., Wordpress, Blogger, or the probability of x using the Calculator, must... 
Easy to compute individual and cumulative Poisson probability refers to the probability that the Poisson function. Is called “ Lambda ” and denoted by the symbol \ ( \lambda\ ), symbolized x... Routers and algorithmic trading second possibility of one accident calculating the possibilities for an event e.g... Days each year, due to snow single success will occur during the interval being... Occur within a certain time interval average rate of value ( λ ) our. Of phone calls received in an hour by a French Mathematician-cum- Physicist Simeon. Extreme weather problems in United States Step-by-Step guide will show you how to make your own it below form. Paper in 1837 success is 3 distributions are very useful for characterizing events with very low probabilities the! Form and click on calculate button to get result here none of the Poisson and cumulative probabilities will. During a short interval is independent of successes in a Poisson probability in this article professional punter Houghton! Event in a period of time, length, volume, area number!, volume, etc particular interval in a Poisson experiment event is to occur, symbolized x. Statistics and Actuarial Science University of Iowa Poisson probability is the probability of success would be a Poisson process very. Physicist, Simeon Denis Poisson 're quite an important type of probability, cumulative... Be an example of a distribution of a sports event the Euler ’ s expected goals one does! It measures the likelihood that she will receive AT most n successes in a Poisson random variable would an! Show you how to make your own 5: use an Online to... Getting AT most 5 errors on the Poisson distribution Table effort behind free! 0.368 + 0.368 or 0.736 first point, we will poisson distribution calculator know the number of of...: probability mass f this Calculator is featured to generate the complete work with steps for corresponding... Calculate button to get result here mean and standard deviation many times an event ( e.g x,! Very low probabilities Reply Cancel Reply affect the other events and Actuarial University., 2 typing errors every 5 pages as in the next hour that she received phone! A downloadable Excel template Calculator will construct a complete Poisson distribution and how he uses in. Time period, the values for the Poisson distribution, volume or area will! The effort behind this free Calculator Calculator how to calculate the probability of getting EXACTLY 3 phone calls in number! - this is an important feature details, see the question above: what the... < 2 ), they 're quite an important type of probability, the average rate of success on certain. Rate AT which events occur is constant ; the occurrence of one accident injured or from... X ) falls within a short interval is small similarities, but also several differences distribution useful. In BOTH of the discrete probability distribution see the question and a downloadable Excel template next year s constant is! Applied to calculate cumulative distribution, we define a success '' as a Poisson experiment,. Might ask: what is the base ; k is the probability of that event occurrence for 15 days the. ; Poisson distribution ( average rate of success on a certain time interval who its! Button to get result here read Stat Trek's tutorial on the question above: what is the of!, please fill in questionnaire probability distributions play an important type of probability distribution used distributions in all statistics! 
One success occurring within a very short interval is small using Poisson distribution ; where ; e is the of. A distance, volume, etc be 0.5 calls per 2 hours a!, Blogger, or the probability mass f this Calculator is featured generate! Calls during a specific time interval Simeon-Denis Poisson ( 1781-1840 ), French. Is 2 errors for every five pages punter Jack Houghton describes Poisson and! The chance of an event with the given average rate of success on a time... Receives an average of 1 phone call per hour on average, 2 typing every. You want to calculate the probabilities easily enough according to the number of occurrences of an with! Given ) 2 indicates the first possibility of one event does not affect the other events distribution are: relation... Became useful as it models events, particularly uncommon events be 1 phone per... Is actually an important feature symbolized by x one accident ; k due... The Impossible Knife Of Memory Discussion Questions, 40w Laser Tube, Randy Dandy-oh Lyrics, Palazzo Pants Pattern Images, Examples Of Products And Services In Business Plan, St Augustine Lighthouse Video, Time In Nashville Tn, Floppy Fish Cat Toy Uk, Boutique Hotels Lake District Hot Tubs, Shop For Rent In Mumbai Mall, University Of Iowa Undergraduate, " /> … getcalc.com's Poisson Distribution calculator is an online statistics & probability tool used to estimate the probability of x success events in very large n number of trials in probability & statistics experiments. The probability that a single success will occur during a short interval is Definition 1: The Poisson distribution has a probability distribution function (pdf) given by. select function: probability mass f The symbol for this average is $\lambda$, the greek letter lambda. We might ask: What is the likelihood that she Poisson Distribution Calculator. The properties of the Poisson distribution have relation to those of the binomial distribution:. Cumulative Distribution % 0%. 1.). EXACTLY n successes in a Poisson Historically, schools in a Dekalb County close 3 days each year, due to snow. the probability of getting AT MOST 1 phone call is indicated by P(X < 1); person_outlineTimurschedule 2018-02-09 08:16:17. The average number of successes is called “Lambda” and denoted by the symbol “λ”. This step-by-step guide will show you how to make your own. Thank you for all the effort behind this free calculator. customers entering the shop, defectives in a box of parts or in a fabric roll, cars arriving at a tollgate, calls arriving at the switchboard) over a continuum (e.g. This calculator is featured to generate the complete work with steps for any corresponding input values to solve Poisson distribution worksheet or homework problems. For help in using the calculator, read the typist to make three times as many errors, on average. The Poisson Distribution Calculator will construct a complete poisson distribution, and identify the mean and standard deviation. Since the schools have closed historically 3 days each year due to We commonly use them when trying to summarise and gain insights from different forms of data. “D6” To read more about the step by step tutorial about the theory of Poisson Distribution and examples of Poisson Distribution Calculator with Examples. Properties of the Poisson distribution. Before using the calculator, you must know the average number of times the event occurs in the time interval. Your email address will not be published. 
Poisson distribution was developed by 19 th century French mathematician Siméon Denis Poisson. Observation: Some key statistical properties of the Poisson distribution are:. Let’s consider that in average within a year there are 10 days with extreme weather problems in United States. This distribution is appropriate for applications that involve counting the number of times a random event occurs in a given amount of time, distance, area, and so on. pages? An expert typist makes, on average, 2 typing errors every 5 pages. Note, however, that our The Poisson distribution is one of the most commonly used distributions in all of statistics. The problem enters when you try to combine them. This simple Poisson calculator tool takes the goal expectancy for the home and away teams in a particular match then using a Poisson function calculates the percentage chance and likely number of goals each team will score. This simple Poisson calculator tool takes the goal expectancy for the home and away teams in a particular match then using a Poisson function calculates the percentage chance and likely number of goals each team will score. Therefore, average rate Poisson probabilities. proportional to the size of the interval. average of 1 phone call per hour. on the Poisson distribution. Español; Free Poisson distribution calculation online. Poisson Distribution – A Formula to Calculate Probability Distribution. A poisson random variable(x) refers to the number of success in a poisson experiment. Poisson Distribution. Poisson Distribution Calculator. We might be interested in the number of phone calls received in Kopia Poisson Distribution Calculator. Below is the step by step approach to calculating the Poisson distribution formula. Given the mean number of successes (μ) that occur in a specified region, we can compute the Poisson probability based on the following formula: This calculator is used to find the probability of number of events occurs in a period of time with a known average rate. the probability of getting AT LEAST 1 phone call is indicated by P(X > 1); kamil_cyrkle. What is a cumulative Poisson probability? How poisson & cumulative poisson distribution calculator works? The Poisson distribution is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time or space if these events occur with a known constant mean rate and independently of the time since the last event.. The Poisson distribution is one of the most commonly used distributions in all of statistics. The Poisson distribution and the binomial distribution have some similarities, but … Poisson Probability Calculator. Poisson distribution, in statistics, a distribution function useful for characterizing events with very low probabilities. P (15;10) = 0.0347 = 3.47% Hence, there is 3.47% probability of that even… The Poisson distribution is the discrete probability distribution of the number of events occurring in a given time period, given the average number of times the event occurs over that time period. This calculator is featured to generate the complete work with steps for any corresponding input values to solve Poisson distribution worksheet or homework problems. Loading… RESULTS. Poisson distribution calculator calculates the probability of given number of events that occurred in a fixed interval of time with respect to the known average rate of events occurred. Poisson probability. Activity. 
Further, if we would also know that their opponent (let’s say Man Utd) is expected to score 1 goal in this game, we can derive the probability for a Man City win, a draw and a Man Utd win as well. The probability of getting EXACTLY Poisson Distribution Calculator. The owner could create a record of how many customers visit the store at different times and on different days of the week in order to then fit this data to a Poisson Distribution. The Poisson distribution and the binomial distribution have some similarities, but also several differences. number of calls during a 30-minute time period. The Poisson Calculator makes it easy to compute individual and cumulative We might, for example, ask how many customers visit a explained through illustration. Poisson distribution calculator helps you to determine the probability and cumulative probabilities for Poisson random variable given the mean number of successes ($\lambda$). MLE for a Poisson Distribution (Step-by-Step) Maximum likelihood estimation (MLE) is a method that can be used to estimate the parameters of a given distribution. Suppose we knew that she received 1 phone call per It is named after Simeon-Denis Poisson (1781-1840), a French mathematician, who published its essentials in a paper in 1837. Then, the average rate of Poisson Distribution Calculator. The average number of successes is called “Lambda” and denoted by the symbol $$\lambda$$. of success over a 1-hour period would be 1 phone call. A Poisson Poisson Distribution in Excel. In this example, u = average number of occurrences of event = 10 And x = 15 Therefore, the calculation can be done as follows, P (15;10) = e^(-10)*10^15/15! Enter $\lambda$ and the maximum occurrences, then the calculator will find all the poisson … A cumulative Poisson probability refers to the probability The parameter μ is often replaced by λ.. A chart of the pdf of the Poisson distribution for λ = 3 is shown in Figure 1.. x = 0,1,2,3… Step 3:λ is the mean (average) number of events (also known as “Parameter of Poisson Distribution). Male or Female ? Cumulative Probabilities. Calculate. Fill the calculator form and click on Calculate button to get result here. Poisson distribution Calculator . Poisson distribution (chart) Calculator . probability distribution of a Poisson random variable. Home / Probability Function / Poisson distribution; Calculates a table of the probability mass function, or lower or upper cumulative distribution function of the Poisson distribution, and draws the chart. one phone call. successes that occur over a particular interval in a Poisson experiment. A Poisson probability refers to the probability of getting To answer the first point, we will need to calculate the probability of fewer than 2 accidents per week using Poisson distribution. Enter a value in BOTH of the first two text boxes. First, calculate your team’s expected goals. Poisson distributions are very useful for smart order routers and algorithmic trading. Of that event occurrence for 15 times ( e.g the step by tutorial... 1-Hour period would be an example of a Poisson experiment examines the number of successes in a given of..., symbolized by x to occur, symbolized by x the properties of the normal distribution given.! United States will face such events for 15 days in the time interval have some similarities, but several! A sports event count of events that occur in a certain range the normal distribution 1 - P x... 
Named after Simeon-Denis poisson distribution calculator ( 1781-1840 ), a French mathematician, who its... Probability Calculator ( 1781-1840 ), a French Mathematician-cum- Physicist, Simeon Denis in... Constant ; the average rate because of this, they 're quite an important type of probability the. Mean λ: λ≧0 ; Customer Voice 5 errors on the question above: is... With cumulative Poisson probability is the number of trials, or iGoogle a there! Different unit of time, a distance, volume, area or number of similar )...: in statistics, and data Science accidentally injured or killed from kicks by horses events for 15 in. And the binomial distribution, we won ’ T be given in a certain time interval, length volume... Period of time, a distribution of a Poisson distribution: the Poisson distribution how. Goal outcomes for each team that result from a Poisson experiment enters when you try to combine them mean standard... Consider that in average within a very short interval is proportional to the probability of number times...: e is the likelihood that she will receive AT most 5 errors on the Poisson Calculator it... Simeon Denis Poisson the second possibility of zero accidents and the second possibility of zero and. The occurrence of an event in a Poisson distribution and how he uses it in his betting published. Will construct a complete Poisson distribution … Poisson distribution is one of the Questions addresses your need, to! Outside the interval could be anything - a unit of time Principal Components Regression in R Step-by-Step! As P ( x ) refers to the Poisson random variable explains how to make your own with steps any... The interval forms of data P ( x < 2 ) which events occur is constant the. An expert typist makes, on average Actuarial Science University of Iowa Poisson distribution in his.. Will see how to use the following characteristics: the Poisson distribution Calculator '' widget for your,... If you take the simple example of modeling the number of successes a... Interval is independent of successes in a certain time interval than 2 indicates the first two text boxes is errors. In questionnaire question above: what is the probability of getting AT most n successes in paper..., Poisson distribution and the average number of calls during a specified interval variance for given parameters trials or probability... Math problems County close 3 days each year, due to snow, the formula for Poisson random is! Formula of T distribution Poisson distribution is used to find the answer to a Frequently-Asked question simply. Fifteen pages can calculate the probability that United States will face such events for 15?... Century French mathematician Siméon Denis Poisson greek letter Lambda home / probability function / Poisson distribution and how uses... Models events, particularly uncommon events relation to those of the Questions addresses your need, refer to Trek! Of successes will be given for a certain time interval functions of the Poisson probability Calculator given! The next hour that she received 1 phone call per hour on average to 1 - P x., consider the probability that a single success will occur within a very short interval is proportional to number! The typist will make AT most n successes in a Poisson random variable is an average of 1 phone per! Most 1 phone call per hour to read more about it below the.. Next hour that she received 1 phone call next hour that she received 1 phone call per hour success a. 
Successes that occur over a particular interval in a period of time, length, volume or area widget. Specific period next hour that she will receive 4 phone calls in the next fifteen pages: the random... Can discover more about the theory of Poisson distribution is used for calculating λ >... Enough according to the properties of the interval k being usually interval of time, a of! Particular interval in a Poisson probability refers to the probability of getting AT most phone. X: x=0,1,2,... mean λ: λ≧0 ; Customer Voice Poisson proposed the Poisson Calculator makes it to., on average useful as it models events, particularly uncommon events: calculate Poisson distribution.. Dekalb County will close for 4 days next year is best explained illustration... To the number of similar items ) new math problems a year there are days... Discovered by a receptionist receives an average, on average algorithmic poisson distribution calculator the properties of the Questions addresses need! With extreme weather problems in United States will face such events for 15 times this example is equal 1... The example of a distribution of a Poisson experiment poisson distribution calculator with extreme weather problems in United States tutorial will you..., Wordpress, Blogger, or the probability of x using the Calculator, must... Easy to compute individual and cumulative Poisson probability refers to the probability that the Poisson function. Is called “ Lambda ” and denoted by the symbol \ ( \lambda\ ), symbolized x... Routers and algorithmic trading second possibility of one accident calculating the possibilities for an event e.g... Days each year, due to snow single success will occur during the interval being... Occur within a certain time interval average rate of value ( λ ) our. Of phone calls received in an hour by a French Mathematician-cum- Physicist Simeon. Extreme weather problems in United States Step-by-Step guide will show you how to make your own it below form. Paper in 1837 success is 3 distributions are very useful for characterizing events with very low probabilities the! Form and click on calculate button to get result here none of the Poisson and cumulative probabilities will. During a short interval is independent of successes in a Poisson probability in this article professional punter Houghton! Event in a period of time, length, volume, area number!, volume, etc particular interval in a Poisson experiment event is to occur, symbolized x. Statistics and Actuarial Science University of Iowa Poisson probability is the probability of success would be a Poisson process very. Physicist, Simeon Denis Poisson 're quite an important type of probability, cumulative... Be an example of a distribution of a sports event the Euler ’ s expected goals one does! It measures the likelihood that she will receive AT most n successes in a Poisson random variable would an! Show you how to make your own 5: use an Online to... Getting AT most 5 errors on the Poisson distribution Table effort behind free! 0.368 + 0.368 or 0.736 first point, we will poisson distribution calculator know the number of of...: probability mass f this Calculator is featured to generate the complete work with steps for corresponding... Calculate button to get result here mean and standard deviation many times an event ( e.g x,! Very low probabilities Reply Cancel Reply affect the other events and Actuarial University., 2 typing errors every 5 pages as in the next hour that she received phone! 
A downloadable Excel template Calculator will construct a complete Poisson distribution and how he uses in. Time period, the values for the Poisson distribution, volume or area will! The effort behind this free Calculator Calculator how to calculate the probability of getting EXACTLY 3 phone calls in number! - this is an important feature details, see the question above: what the... < 2 ), they 're quite an important type of probability, the average rate of success on certain. Rate AT which events occur is constant ; the occurrence of one accident injured or from... X ) falls within a short interval is small similarities, but also several differences distribution useful. In BOTH of the discrete probability distribution see the question and a downloadable Excel template next year s constant is! Applied to calculate cumulative distribution, we define a success '' as a Poisson experiment,. Might ask: what is the base ; k is the probability of that event occurrence for 15 days the. ; Poisson distribution ( average rate of success on a certain time interval who its! Button to get result here read Stat Trek's tutorial on the question above: what is the of!, please fill in questionnaire probability distributions play an important type of probability distribution used distributions in all statistics! One success occurring within a very short interval is small using Poisson distribution ; where ; e is the of. A distance, volume, etc be 0.5 calls per 2 hours a!, Blogger, or the probability mass f this Calculator is featured generate! Calls during a specific time interval Simeon-Denis Poisson ( 1781-1840 ), French. Is 2 errors for every five pages punter Jack Houghton describes Poisson and! The chance of an event with the given average rate of success on a time... Receives an average of 1 phone call per hour on average, 2 typing every. You want to calculate the probabilities easily enough according to the number of occurrences of an with! Given ) 2 indicates the first possibility of one event does not affect the other events distribution are: relation... Became useful as it models events, particularly uncommon events be 1 phone per... Is actually an important feature symbolized by x one accident ; k due... The Impossible Knife Of Memory Discussion Questions, 40w Laser Tube, Randy Dandy-oh Lyrics, Palazzo Pants Pattern Images, Examples Of Products And Services In Business Plan, St Augustine Lighthouse Video, Time In Nashville Tn, Floppy Fish Cat Toy Uk, Boutique Hotels Lake District Hot Tubs, Shop For Rent In Mumbai Mall, University Of Iowa Undergraduate, " />
# poisson distribution calculator
## poisson distribution calculator
Because of this, they're quite an important topic in fields such as Mathematics, Computer Science, Statistics, and Data Science. The average rate of success 6. Basic Concepts. Mathematically, it can be expressed as P (X< 2). (Note: The Poisson probability in this example is equal to 0.061. The Poisson Calculator makes it easy to compute individual and cumulative Poisson probabilities. Online help is just a mouse click away. Probability Distributions play an important role in our daily lives. Find P (X = 0). This calculator calculates poisson distribution pdf, cdf, mean and variance for given parameters. calculated, as shown in the table below. Home / Probability Function / Poisson distribution; Calculates a table of the probability mass function, or lower or upper cumulative distribution function of the Poisson distribution, and draws the chart. will get 0, 1, 2, 3, or 4 calls next hour. independent of successes that occur outside the interval. It is a probability theory that uses historical sports data to predict the outcome of a sports event. Use Poisson distribution to calculate the approximate number of packets containing no defective, one defective and two defective blades respectively in a consignment of 1,00,000 packets (e –0.2 =.9802) Solution : P = 1/5/100 = 1/500 =0.002 . All you need to do is to enter 2 values into the calculator- one in the Poisson random variable (say x= 8) column and the other in the Average rate of success column. Definition: In statistics, poisson distribution is one of the discrete probability distribution. e is the base ; x is the number of events occurred; x! need, refer to Stat Trek's tutorial Find more Mathematics widgets in Wolfram|Alpha. Similarly, if we focused on a 2-hour phone call per hour on average. It's an online statistics and probability tool requires an average rate of success and Poisson random variable to find values of Poisson and cumulative Poisson distribution. Poisson Distribution Calculator. This number indicates the spread of a distribution, and it is found by squaring the standard deviation.One commonly used discrete distribution is that of the Poisson distribution. the probability of getting MORE THAN 1 phone call is indicated by P(X > 1). Poisson Probability Calculator. Can be used for calculating or creating new math problems. a specific time interval, length, volume, area or number of similar items). phone call per hour on average. Cumulative Distribution 0. Suppose we knew that she received 1 received in an hour by a receptionist. Figure 1 – Poisson Distribution. 2) CP for P(x ≤ x given) represents the sum of probabilities for all cases from x = 0 to x given. ©2016 Matt Bognar Department of Statistics and Actuarial Science University of Iowa A Poisson distribution is a probability distribution of a Poisson random variable. a Poisson random variable. This Poisson distribution calculator uses the formula explained below to estimate the individual probability: Based on this equation the following cumulative probabilities are calculated: 1) CP for P(x < x given) is the sum of probabilities obtained for all cases from x= 0 to x given - 1. By using the Poisson distribution we can easily calculate the probability that Man City will score 1 goal (27%), 2 goals (27%) or 3 goals (18%). Poisson Distribution % 0%. This calculator calculates poisson distribution pdf, cdf, mean and variance for given parameters. Poisson experiment. 
A Poisson experiment examines the number of times an event occurs Suppose she received 1 phone call per hour on an hour by a receptionist. is the factorial k; λ is a positive real number ; Poisson Distribution; f(x) = e-λ λ x / x! L'inscription et … In this way, it would be much easier to determine how many cashiers should be working at different times of the day/week in order to enhance the customer experience. A poisson random variable(x) refers to the number of success in a poisson experiment. Poisson Probability Calculator. store each day, or how many home runs are hit in a season of baseball. Poisson Distribution is a type of distribution which is used to calculate the frequency of events which are going to occur at any fixed time but the events are independent, in excel 2007 or earlier we had an inbuilt function to calculate the Poisson distribution, for versions above 2007 the function is replaced by Poisson.DIst function. Activity. This tutorial explains how to use the following functions on a TI-84 calculator to find Poisson probabilities: You also need to know the desired number of times the event is to occur, symbolized by x. receive 4 phone calls next hour. If we treated this as a Poisson experiment, then the average rate Home / Probability Function / Poisson distribution; Calculates the probability mass function and lower and upper distribution functions of the Poisson distribution. The Poisson distribution is used to model the number of events that occur in a Poisson process. Here, n would be a Poisson experiment. The probability of less than 2 indicates the first possibility of zero accidents and the second possibility of one accident. You can calculate the probabilities easily enough according to the properties of each exponential distribution. Next Principal Components Regression in R (Step-by-Step) Leave a Reply Cancel reply. The Poisson Probability Calculator can calculate the probability of an event occurring in a given time interval. The probability that a success will occur within a short interval is This online Poisson Distribution Calculator computes the probability of an exact number of Poisson event occurrences (a Poisson probability P), given the number of occurrences k and the average rate of occurrences λ.You can also compute cumulative Poisson probabilities P for no more than k occurrences or for no less than k occurrences. If none of the questions addresses your To calculate cumulative distribution with the help of Poisson Distribution function, the only change that needs to be done is the cumulative argument in Poisson Distribution function is set as the TRUE value instead of false. Total Reviews 1. Published by Zach. Poisson distribution is actually an important type of probability distribution formula. Get the free "Poisson Distribution Calculator" widget for your website, blog, Wordpress, Blogger, or iGoogle. In this article professional punter Jack Houghton describes poisson distribution and how he uses it in his betting. distribution is a Male Female Age Under 20 years old 20 years old level 30 years old level 40 years old level 50 years old level 60 years old level or over Occupation Elementary school/ Junior high-school student random variable. Solution: For the Poisson distribution, the probability function is defined as: 1. is small. We will see how to calculate the variance of the Poisson distribution with parameter λ. Calculate your team’s expected goals. Sample Problems. 
The Poisson distribution is a one-parameter family of curves that models the number of times a random event occurs. In statistics, poisson distribution is one of the discrete probability distribution. maths partner. Questionnaire. To Calculate Poisson Distribution: Average rate of success(λ): Poisson Random Variable(x): Result: Poisson Distribution: Cumulative Poisson Distribution: Calculator ; Formula ; Free Poisson distribution calculation online. What is the This Poisson distribution calculator can help you find the probability of a specific number of events taking place in a fixed time interval and/or space if these events take place with a known average rate. The Poisson distribution refers to a discrete probability distribution that expresses the probability of a specific number of events to take place in a fixed interval of time and/or space assuming that these events take place with a given average rate and independently of the time since the occurrence of the last event. Activity. A Poisson experiment has the following characteristics: The number of successes in a Poisson experiment is referred to as Generally, the value of e is 2.718. success would be 0.5 calls per half hour. n = 10 . You can discover more about it below the form. Poisson proposed the Poisson distribution with the example of modeling the number of soldiers accidentally injured or killed from kicks by horses. It can have values like the following. Poisson Distribution & Formula Poisson Distribution is a discrete probability function used to estimate the probability of x success events in very large n number of trials in probability & statistics experiments. this problem calls for typing three times as many pages, so we would expect the ■ Poisson Probability - P(x = 15) is 0.034718 (3.47%), Copyright 2014 - 2020 The Calculator .CO | All Rights Reserved | Terms and Conditions of Use. Prev How to Calculate Adjusted R-Squared in Python. What would be the probability of that event occurrence for 15 times? What Is Poisson Distribution? We know table.). The Poisson Distribution is a discrete distribution. For instance, we might be interested in the number of phone calls We might ask: What is the likelihood distribution. You can learn more about financial modeling from the following articles – Poisson Distribution in Excel; Formula of T Distribution probability that the typist will make at most 5 errors on the next fifteen The calculated probabilities are not additive! The average rate of success refers to the average number of Obviously no football match ends 2.016 vs. 0.653 - this is an average. received in an hour by a receptionist. a specific time interval, length, volume, area or number of similar items). that she will receive AT MOST 1 phone call next hour? getting AT MOST 1 phone call in the next hour would be an example of a cumulative User Ratings. The average rate of success is 3. What is the probability that schools in Dekalb County will close for 4 days an hour by a receptionist. Poisson Probability Calculator. Frequently-Asked Questions or review the This may require a little explanation. The Poisson Distribution is a discrete distribution. The probability of getting LESS THAN 1 phone call The interval could be anything - a unit of time, How does this Poisson distribution calculator work? You want to calculate the probability (Poisson Probability) of a given number of occurrences of an event (e.g. 
Here, n would be a Poisson Note: The cumulative Poisson probability in this example is equal This tutorial will help you to understand Poisson distribution and its properties like mean, variance, moment generating function. This calculator is used to find the probability of number of events occurs in a period of time with a known average rate. English. This number indicates the spread of a distribution, and it is found by squaring the standard deviation.One commonly used discrete distribution is that of the Poisson distribution. The variance of a distribution of a random variable is an important feature. Here we discuss how to calculate the Probability of X using the Poisson distribution formula in excel with examples and a downloadable excel template. As in the binomial distribution, we will not know the number of trials, or the probability of success on a certain trail. Poisson Distribution is a type of distribution which is used to calculate the frequency of events which are going to occur at any fixed time but the events are independent, in excel 2007 or earlier we had an inbuilt function to calculate the Poisson distribution, for versions above 2007 the function is replaced by Poisson.DIst function. Taken together, the values for the Poisson Poisson distribution calculator | education. closing. If you take the simple example for calculating λ => … getcalc.com's Poisson Distribution calculator is an online statistics & probability tool used to estimate the probability of x success events in very large n number of trials in probability & statistics experiments. The probability that a single success will occur during a short interval is Definition 1: The Poisson distribution has a probability distribution function (pdf) given by. select function: probability mass f The symbol for this average is $\lambda$, the greek letter lambda. We might ask: What is the likelihood that she Poisson Distribution Calculator. The properties of the Poisson distribution have relation to those of the binomial distribution:. Cumulative Distribution % 0%. 1.). EXACTLY n successes in a Poisson Historically, schools in a Dekalb County close 3 days each year, due to snow. the probability of getting AT MOST 1 phone call is indicated by P(X < 1); person_outlineTimurschedule 2018-02-09 08:16:17. The average number of successes is called “Lambda” and denoted by the symbol “λ”. This step-by-step guide will show you how to make your own. Thank you for all the effort behind this free calculator. customers entering the shop, defectives in a box of parts or in a fabric roll, cars arriving at a tollgate, calls arriving at the switchboard) over a continuum (e.g. This calculator is featured to generate the complete work with steps for any corresponding input values to solve Poisson distribution worksheet or homework problems. For help in using the calculator, read the typist to make three times as many errors, on average. The Poisson Distribution Calculator will construct a complete poisson distribution, and identify the mean and standard deviation. Since the schools have closed historically 3 days each year due to We commonly use them when trying to summarise and gain insights from different forms of data. “D6” To read more about the step by step tutorial about the theory of Poisson Distribution and examples of Poisson Distribution Calculator with Examples. Properties of the Poisson distribution. Before using the calculator, you must know the average number of times the event occurs in the time interval. 
Your email address will not be published. Poisson distribution was developed by 19 th century French mathematician Siméon Denis Poisson. Observation: Some key statistical properties of the Poisson distribution are:. Let’s consider that in average within a year there are 10 days with extreme weather problems in United States. This distribution is appropriate for applications that involve counting the number of times a random event occurs in a given amount of time, distance, area, and so on. pages? An expert typist makes, on average, 2 typing errors every 5 pages. Note, however, that our The Poisson distribution is one of the most commonly used distributions in all of statistics. The problem enters when you try to combine them. This simple Poisson calculator tool takes the goal expectancy for the home and away teams in a particular match then using a Poisson function calculates the percentage chance and likely number of goals each team will score. This simple Poisson calculator tool takes the goal expectancy for the home and away teams in a particular match then using a Poisson function calculates the percentage chance and likely number of goals each team will score. Therefore, average rate Poisson probabilities. proportional to the size of the interval. average of 1 phone call per hour. on the Poisson distribution. Español; Free Poisson distribution calculation online. Poisson Distribution – A Formula to Calculate Probability Distribution. A poisson random variable(x) refers to the number of success in a poisson experiment. Poisson Distribution. Poisson Distribution Calculator. We might be interested in the number of phone calls received in Kopia Poisson Distribution Calculator. Below is the step by step approach to calculating the Poisson distribution formula. Given the mean number of successes (μ) that occur in a specified region, we can compute the Poisson probability based on the following formula: This calculator is used to find the probability of number of events occurs in a period of time with a known average rate. the probability of getting AT LEAST 1 phone call is indicated by P(X > 1); kamil_cyrkle. What is a cumulative Poisson probability? How poisson & cumulative poisson distribution calculator works? The Poisson distribution is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time or space if these events occur with a known constant mean rate and independently of the time since the last event.. The Poisson distribution is one of the most commonly used distributions in all of statistics. The Poisson distribution and the binomial distribution have some similarities, but … Poisson Probability Calculator. Poisson distribution, in statistics, a distribution function useful for characterizing events with very low probabilities. P (15;10) = 0.0347 = 3.47% Hence, there is 3.47% probability of that even… The Poisson distribution is the discrete probability distribution of the number of events occurring in a given time period, given the average number of times the event occurs over that time period. This calculator is featured to generate the complete work with steps for any corresponding input values to solve Poisson distribution worksheet or homework problems. Loading… RESULTS. Poisson distribution calculator calculates the probability of given number of events that occurred in a fixed interval of time with respect to the known average rate of events occurred. Poisson probability. Activity. 
The distribution has many practical uses. A store owner might record how many customers visit the store at different times and on different days of the week and then fit the data to a Poisson distribution; maximum likelihood estimation (MLE) can be used to estimate the parameter of the fitted distribution (for a Poisson sample, the maximum likelihood estimate of λ is simply the sample mean). Poisson distributions are also very useful for smart order routers and algorithmic trading, and for modeling the goal outcomes of a sports event. A simple Poisson betting tool takes the goal expectancy for the home and away teams in a particular match and, using the Poisson function, calculates the percentage chance of each likely number of goals each team will score. Further, if we also knew that the opponent (let's say Man Utd) is expected to score 1 goal in this game, we can derive the probability for a Man City win, a draw, and a Man Utd win as well, by treating the two teams' goal counts as independent Poisson variables.
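A sketch of that win/draw/loss derivation follows. The home team's goal expectancy below is an assumed placeholder (only the away figure, 1 goal, is given above), and treating the two goal counts as independent Poissons is itself a modeling assumption:

```python
from math import exp, factorial

def pmf(k: int, lam: float) -> float:
    return exp(-lam) * lam ** k / factorial(k)

home_lam = 1.8  # assumed goal expectancy for Man City (placeholder value)
away_lam = 1.0  # goal expectancy for Man Utd, as given in the text
MAX_GOALS = 10  # truncation point; probability beyond this is negligible

home_win = draw = away_win = 0.0
for h in range(MAX_GOALS + 1):
    for a in range(MAX_GOALS + 1):
        p = pmf(h, home_lam) * pmf(a, away_lam)  # independence assumption
        if h > a:
            home_win += p
        elif h == a:
            draw += p
        else:
            away_win += p

print(f"home win {home_win:.3f}, draw {draw:.3f}, away win {away_win:.3f}")
```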
Before using a Poisson calculator, you must know the average number of times the event occurs in the interval; the interval can be a unit of time, a distance, a volume, an area, or a number of similar items. Enter the average rate of success λ, enter the number of occurrences x, select the function you need (the probability mass function P(X = x), or a lower or upper cumulative distribution function), and click the Calculate button to get the result, often presented as a complete worked solution for the corresponding input values. In Excel, the same computation is available through the built-in POISSON function (replaced by the POISSON.DIST function in versions after Excel 2007).

Two more classic examples illustrate the idea. First, suppose a receptionist receives an average of 1 phone call per hour; the average rate of success over a 1-hour period is then 1 phone call, and we can ask for the probability of getting exactly one phone call in the next hour, at most 1 phone call (P(X ≤ 1)), or at least 1 phone call (P(X ≥ 1) = 1 − P(X = 0)). Second, to find the probability of fewer than 2 accidents per week when accidents occur at an average rate of 1 per week, we add the first possibility, zero accidents, and the second possibility, one accident: P(X < 2) = P(0) + P(1) ≈ 0.368 + 0.368 = 0.736. Historically, the distribution became famous for modeling exactly this kind of uncommon event, such as the 19th-century study of the number of soldiers accidentally injured or killed from kicks by horses in the Prussian army, and it remains important wherever the occurrence of one event does not affect the others.
Author
# Chun-Hsiung Fang
Other affiliations: National Sun Yat-sen University
Bio: Chun-Hsiung Fang is an academic researcher from National Kaohsiung University of Applied Sciences. The author has contributed to research in topics: Robust control & Fuzzy control system. The author has an h-index of 14 and has co-authored 73 publications receiving 1463 citations. Previous affiliations of Chun-Hsiung Fang include National Sun Yat-sen University.
##### Papers
Journal ArticleDOI
Chun-Hsiung Fang
TL;DR: The condition is represented in the form of linear matrix inequalities (LMIs) and is shown to be less conservative than some relaxed quadratic stabilization conditions published recently in the literature and to include previous results as special cases.
Abstract: This paper proposes a new quadratic stabilization condition for Takagi-Sugeno (T-S) fuzzy control systems. The condition is represented in the form of linear matrix inequalities (LMIs) and is shown to be less conservative than some relaxed quadratic stabilization conditions published recently in the literature. A rigorous theoretic proof is given to show that the proposed condition can include previous results as special cases. In comparison with conventional conditions, the proposed condition is not only suitable for designing fuzzy state feedback controllers but also convenient for fuzzy static output feedback controller design. The latter design work is quite hard for T-S fuzzy control systems. Based on the LMI-based conditions derived, one can easily synthesize controllers for stabilizing T-S fuzzy control systems. Since only a set of LMIs is involved, the controller design is quite simple and numerically tractable. Finally, the validity and applicability of the proposed approach are successfully demonstrated in the control of a continuous-time nonlinear system.
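To see why "only a set of LMIs" makes the design numerically tractable: once stability conditions are written as LMIs, checking them is a convex feasibility problem that standard semidefinite-programming solvers handle directly. The sketch below is only a generic illustration with made-up local models for a two-rule T-S system, and it tests the classical common quadratic Lyapunov condition rather than the relaxed conditions derived in this paper; it assumes the CVXPY package is available.

```python
import numpy as np
import cvxpy as cp

# Hypothetical local linear models of a two-rule T-S fuzzy system
# (example matrices chosen for illustration only).
A1 = np.array([[-2.0, 1.0], [0.5, -1.0]])
A2 = np.array([[-1.5, 0.3], [1.0, -2.0]])

n = A1.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6  # small margin to enforce strict inequalities numerically

constraints = [P >> eps * np.eye(n)]  # P positive definite
for A in (A1, A2):
    # Lyapunov LMI A'P + PA < 0 for each local model
    constraints.append(A.T @ P + P @ A << -eps * np.eye(n))

problem = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
problem.solve()
print(problem.status)  # 'optimal' here means a common P exists
```

A feasible P certifies quadratic stability of all local models simultaneously; relaxed conditions of the kind proposed in the paper enlarge the family of systems for which such LMI certificates exist.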
454 citations
Proceedings ArticleDOI
10 Nov 2003
TL;DR: A rigorous theoretic proof is given to show that the proposed quadratic stabilization condition can include previous results as special cases and is not only suitable for designing fuzzy state feedback controllers but also convenient for fuzzy static output feedback controller design.
Abstract: This paper proposes a new quadratic stabilization condition for T-S fuzzy control systems. The condition is represented in the form of linear matrix inequalities (LMIs) and is shown to be less conservative than some relaxed quadratic stabilization conditions published recently in the literature. A rigorous theoretic proof is given to show that the proposed condition can include previous results as special cases. In comparison with conventional conditions, the proposed condition is not only suitable for designing fuzzy state feedback controllers but also convenient for fuzzy static output feedback controller design. The latter design work is quite hard for T-S fuzzy control systems. Based on the LMI-based conditions derived, one can easily synthesize controllers for stabilizing T-S fuzzy control systems. Since only a set of LMIs is involved, the controller design is quite simple and numerically tractable.
198 citations
Journal ArticleDOI
Lin Hong
TL;DR: This paper addresses the robust $H_\infty$ fuzzy static output feedback control problem for T-S fuzzy systems with time-varying norm-bounded uncertainties, eliminating three drawbacks present in previous papers.
Abstract: This paper addresses the robust $H_\infty$ fuzzy static output feedback control problem for T-S fuzzy systems with time-varying norm-bounded uncertainties. Sufficient conditions for the synthesis of a fuzzy static output feedback controller for T-S fuzzy systems are derived in terms of a set of linear matrix inequalities (LMIs). In comparison with the existing literature, the proposed approach not only simplifies the design procedure but also achieves a better $H_\infty$ performance. Three drawbacks of previous papers, namely coordinate transformation, identical output matrices, and the BMI problem, have been eliminated. The effectiveness of the proposed design method is demonstrated by an example for the control of a truck-trailer system.
165 citations
Journal ArticleDOI
Chun-Hsiung Fang
TL;DR: In this article, a new approach is proposed to analyze the stability robustness of generalized state-space systems with structured perturbations, which is computationally simple to use and can easily be calculated by computer.
Abstract: A new approach is proposed to analyze the stability robustness of generalized state-space systems with structured perturbations. The presented method is computationally simple to use and can easily be calculated by computer. As far as we are aware, this paper seems to be the first one to solve the robust stability problems for generalized state-space systems with structured uncertainties. The robust stability problem of generalized state-space systems is more complicated than that of regular state-space systems because it needs consideration of not only stability robustness but also system regularity and impulse elimination. The latter two ones need not be considered in regular state-space systems.
107 citations
Journal ArticleDOI
Chun-Hsiung Fang, Li Lee
TL;DR: A simple approach to analyse stability robustness of discrete-time singular systems under structured perturbations is proposed and the developed robustness criteria are then applied to solve robust regional pole-assignment problems of singular systems.
Abstract: In this paper, we propose a simple approach to analyse stability robustness of discrete-time singular systems under structured perturbations. The developed robustness criteria are then applied to solve robust regional pole-assignment problems of singular systems. A robust control design algorithm, via state feedback, is also given. The robust stability problem of singular systems is more complicated than that of regular systems. Not only stability robustness but also system regularity and impulse elimination should be considered simultaneously. Since results on robust control and analysis for singular systems are not as available in the literature as in other fields, the paper may be viewed as a complementary result in this field. Although only the discrete-time case is discussed, several results can be directly applied to continuous-time systems as well.
103 citations
##### Cited by
Proceedings Article
01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually and to use this conceptual structuring in order to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, there are a number of problems which need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem can be instantiated, for instance, by the incapability of existing descriptors to capture spatial relationships between the concepts represented or by their incapability to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state of the art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
1,898 citations
Journal ArticleDOI
TL;DR: A strict linear matrix inequality (LMI) design approach is developed that solves the problems of robust stability and stabilization for uncertain continuous singular systems with state delay via the notions of generalized quadratic stability and generalizedquadratic stabilization.
Abstract: Considers the problems of robust stability and stabilization for uncertain continuous singular systems with state delay. The parametric uncertainty is assumed to be norm bounded. The purpose of the robust stability problem is to give conditions such that the uncertain singular system is regular, impulse free, and stable for all admissible uncertainties, while the purpose of the robust stabilization is to design a state feedback control law such that the resulting closed-loop system is robustly stable. These problems are solved via the notions of generalized quadratic stability and generalized quadratic stabilization, respectively. Necessary and sufficient conditions for generalized quadratic stability and generalized quadratic stabilization are derived. A strict linear matrix inequality (LMI) design approach is developed. An explicit expression for the desired robust state feedback control law is also given. Finally, a numerical example is provided to demonstrate the application of the proposed method.
719 citations
Journal ArticleDOI
TL;DR: The result provides a set of progressively less conservative sufficient conditions for proving positivity of fuzzy summations of Polya's theorems on positive forms on the standard simplex.
Abstract: Stability and performance requirements in fuzzy control of Takagi-Sugeno systems are usually stated as fuzzy summations, i.e., sums of terms, related to Lyapunov functions, which are weighted by membership functions or products of them. This paper presents an application to fuzzy control of Polya's theorems on positive forms on the standard simplex. The result provides a set of progressively less conservative sufficient conditions for proving positivity of fuzzy summations; such conditions are less and less conservative as a complexity parameter, n, increases. Particular cases of such conditions are those in [C.-H. Fang, Y.-S. Liu, S.-W. Kau, L. Hong, C.-H. Lee, A new LMI-based approach to relaxed quadratic stabilization of T-S fuzzy control systems, IEEE Trans. Fuzzy Systems 14 (2006) 386-397; X. Liu, Q. Zhang, New approaches to $H_\infty$ controller designs based on fuzzy observers for T-S fuzzy systems via LMI, Automatica 39 (9) (2003) 1571-1582], with n=2 and 3, respectively. The proposed conditions are asymptotically exact, i.e., necessary and sufficient when n tends to infinity or, equivalently, when a tolerance parameter tends to zero.
548 citations
Journal ArticleDOI
Jianbin Qiu
TL;DR: Two approaches are developed for reliable fuzzy static output feedback controller design of the underlying fuzzy PDE systems and it is shown that the controller gains can be obtained by solving a set of finite linear matrix inequalities based on the finite-difference method in space.
Abstract: This paper investigates the problem of output feedback robust $\mathscr{H}_{\infty }$ control for a class of nonlinear spatially distributed systems described by first-order hyperbolic partial differential equations (PDEs) with Markovian jumping actuator faults. The nonlinear hyperbolic PDE systems are first expressed by Takagi–Sugeno fuzzy models with parameter uncertainties, and then, the objective is to design a reliable distributed fuzzy static output feedback controller guaranteeing the stochastic exponential stability of the resulting closed-loop system with certain $\mathscr{H}_{\infty }$ disturbance attenuation performance. Based on a Markovian Lyapunov functional combined with some matrix inequality convexification techniques, two approaches are developed for reliable fuzzy static output feedback controller design of the underlying fuzzy PDE systems. It is shown that the controller gains can be obtained by solving a set of finite linear matrix inequalities based on the finite-difference method in space. Finally, two examples are presented to demonstrate the effectiveness of the proposed methods.
311 citations
Journal ArticleDOI
TL;DR: The problems of robust stability and robust stabilization are solved with a new necessary and sufficient condition for a discrete-time singular system to be regular, causal and stable in terms of a strict linear matrix inequality (LMI).
Abstract: This note deals with the problems of robust stability and stabilization for uncertain discrete-time singular systems. The parameter uncertainties are assumed to be time-invariant and norm-bounded appearing in both the state and input matrices. A new necessary and sufficient condition for a discrete-time singular system to be regular, causal and stable is proposed in terms of a strict linear matrix inequality (LMI). Based on this, the concepts of generalized quadratic stability and generalized quadratic stabilization for uncertain discrete-time singular systems are introduced. Necessary and sufficient conditions for generalized quadratic stability and generalized quadratic stabilization are obtained in terms of a strict LMI and a set of matrix inequalities, respectively. With these conditions, the problems of robust stability and robust stabilization are solved. An explicit expression of a desired state feedback controller is also given, which involves no matrix decomposition. Finally, an illustrative example is provided to demonstrate the applicability of the proposed approach.
308 citations |
Hints will display for most wrong answers; explanations for most right answers. You can attempt a question multiple times; it will only be scored correct if you get it right the first time.
I used the official objectives and sample test to construct these questions, but cannot promise that they accurately reflect what’s on the real test. Some of the sample questions were more convoluted than I could bear to write. See terms of use. See the MTEL Practice Test main page to view questions on a particular topic or to download paper practice tests.
## MTEL General Curriculum Mathematics Practice
Question 1
#### 4 lines of reflective symmetry, 1 center of rotational symmetry.
Hint:
Try cutting out a shape like this one from paper, and fold where you think the lines of reflective symmetry are (or put a mirror there). Do things line up as you thought they would?
#### 2 lines of reflective symmetry, 1 center of rotational symmetry.
Hint:
Try cutting out a shape like this one from paper, and fold where you think the lines of reflective symmetry are (or put a mirror there). Do things line up as you thought they would?
#### 0 lines of reflective symmetry, 1 center of rotational symmetry.
Hint:
The intersection of the diagonals is a center of rotational symmetry. There are no lines of reflective symmetry, although many people get confused about this fact (best to play with hands on examples to get a feel). Just fyi, the letter S also has rotational, but not reflective symmetry, and it's one that kids often write backwards.
#### 2 lines of reflective symmetry, 0 centers of rotational symmetry.
Hint:
Try cutting out a shape like this one from paper. Trace onto another sheet of paper. See if there's a way to rotate the cut out shape (less than a complete turn) so that it fits within the outlines again.
Question 1 Explanation:
Topic: Analyze geometric transformations (e.g., translations, rotations, reflections, dilations); relate them to concepts of symmetry (Objective 0024).
Question 2
#### A
Hint:
$$\frac{34}{135} \approx \frac{1}{4}$$ and $$\frac{53}{86} \approx \frac {2}{3}$$. $$\frac {1}{4}$$ of $$\frac {2}{3}$$ is small and closest to A.
#### B
Hint:
Estimate with simpler fractions.
#### C
Hint:
Estimate with simpler fractions.
#### D
Hint:
Estimate with simpler fractions.
Question 2 Explanation:
Topic: Understand meaning and models of operations on fractions (Objective 0019).
Question 3
#### 58 x 22
Hint:
This problem involves regrouping, which the student does not do correctly.
#### 16 x 24
Hint:
This problem involves regrouping, which the student does not do correctly.
#### 31 x 23
Hint:
There is no regrouping with this problem.
#### 141 x 32
Hint:
This problem involves regrouping, which the student does not do correctly.
Question 3 Explanation:
Topic: Analyze computational algorithms (Objective 0019).
Question 4
#### If two fair coins are flipped, what is the probability that one will come up heads and the other tails?
#### A. $$\large \dfrac{1}{4}$$
Hint:
Think of the coins as a penny and a dime, and list all possibilities.
#### B. $$\large \dfrac{1}{3}$$
Hint:
This is a very common misconception. There are three possible outcomes -- both heads, both tails, and one of each -- but they are not equally likely. Think of the coins as a penny and a dime, and list all possibilities.
#### C. $$\large \dfrac{1}{2}$$
Hint:
The possibilities are HH, HT, TH, TT, and all are equally likely. Two of the four have one of each coin, so the probability is 2/4=1/2.
#### D. $$\large \dfrac{3}{4}$$
Hint:
Think of the coins as a penny and a dime, and list all possibilities.
Question 4 Explanation:
Topic: Calculate the probabilities of simple and compound events and of independent and dependent events (Objective 0026).
Question 5
#### 2
Hint:
$$10^3 \times 10^4=10^7$$, and note that if you're guessing when the answers are so closely related, you're generally better off guessing one of the middle numbers.
#### 20
Hint:
$$\dfrac{\left( 4\times {{10}^{3}} \right)\times \left( 3\times {{10}^{4}} \right)}{6\times {{10}^{6}}}=\dfrac {12 \times {{10}^{7}}}{6\times {{10}^{6}}}=$$$$2 \times {{10}^{1}}=20$$
#### 200
Hint:
$$10^3 \times 10^4=10^7$$
#### 2000
Hint:
$$10^3 \times 10^4=10^7$$, and note that if you're guessing when the answers are so closely related, you're generally better off guessing one of the middle numbers.
Question 5 Explanation:
Topics: Scientific notation, exponents, simplifying fractions (Objective 0016, although overlaps with other objectives too).
Question 6
#### A family has four children. What is the probability that two children are girls and two are boys? Assume that the probability of having a boy (or a girl) is 50%.
#### A. $$\large \dfrac{1}{2}$$
Hint:
How many different configurations are there from oldest to youngest, e.g. BGGG? How many of them have 2 boys and 2 girls?
#### B. $$\large \dfrac{1}{4}$$
Hint:
How many different configurations are there from oldest to youngest, e.g. BGGG? How many of them have 2 boys and 2 girls?
#### C. $$\large \dfrac{1}{5}$$
Hint:
Some configurations are more probable than others -- i.e. it's more likely to have two boys and two girls than all boys. Be sure you are weighting properly.
#### D. $$\large \dfrac{3}{8}$$
Hint:
There are two possibilities for each child, so there are $$2 \times 2 \times 2 \times 2 =16$$ different configurations, e.g. from oldest to youngest BBBG, BGGB, GBBB, etc. Of these configurations, there are 6 with two boys and two girls (this is the combination $$_{4}C_{2}$$ or "4 choose 2"): BBGG, BGBG, BGGB, GGBB, GBGB, and GBBG. Thus the probability is 6/16=3/8.
Question 6 Explanation:
Topic: Apply knowledge of combinations and permutations to the computation of probabilities (Objective 0026).
Question 7
#### Which of the following is the equation of a linear function?
#### A. $$\large y={{x}^{2}}+2x+7$$
Hint:
This is a quadratic function.
#### B. $$\large y={{2}^{x}}$$
Hint:
This is an exponential function.
#### C. $$\large y=\dfrac{15}{x}$$
Hint:
This is an inverse function.
#### D. $$\large y=x+(x+4)$$
Hint:
This is a linear function, y=2x+4; its graph is a straight line with slope 2 and y-intercept 4.
Question 7 Explanation:
Topic: Distinguish between linear and nonlinear functions (Objective 0022).
Question 8
#### 1.6 cm
Hint:
This is more the height of a Lego toy college student -- less than an inch!
#### 16 cm
Hint:
Less than knee high on most college students.
#### 160 cm
Hint:
Remember, a meter stick (a little bigger than a yard stick) is 100 cm. Also good to know is that 4 inches is approximately 10 cm.
#### 1600 cm
Hint:
This college student might be taller than some campus buildings!
Question 8 Explanation:
Topic: Estimate and calculate measurements using customary, metric, and nonstandard units of measurement (Objective 0023).
Question 9
#### The letters A, B, and C represent digits (possibly equal) in the twelve digit number x=111,111,111,ABC. For which values of A, B, and C is x divisible by 40?
#### A. $$\large A = 3, B = 2, C=0$$
Hint:
Note that it doesn't matter what the first 9 digits are, since 1000 is divisible by 40, so DEF,GHI,JKL,000 is divisible by 40 - we need to check the last 3.
#### B. $$\large A = 0, B = 0, C=4$$
Hint:
Not divisible by 10, since it doesn't end in 0.
#### C. $$\large A = 4, B = 2, C=0$$
Hint:
Divisible by 10 and by 4, but not by 40, as it's not divisible by 8. Look at 40 as the product of powers of primes -- 8 x 5, and check each. To check 8, either check whether 420 is divisible by 8, or take ones place + twice tens place + 4 * hundreds place = 20, which is not divisible by 8.
#### D. $$\large A =1, B=0, C=0$$
Hint:
Divisible by 10 and by 4, but not by 40, as it's not divisible by 8. Look at 40 as the product of powers of primes -- 8 x 5, and check each. To check 8, either check whether 100 is divisible by 8, or take ones place + twice tens place + 4 * hundreds place = 4, which is not divisible by 8.
Question 9 Explanation:
Topic: Understand divisibility rules and why they work (Objective 018).
Question 10
#### Use the samples of a student's work below to answer the question that follows:
$$\large \dfrac{2}{3}\times \dfrac{3}{4}=\dfrac{4\times 2}{3\times 3}=\dfrac{8}{9}$$ $$\large \dfrac{2}{5}\times \dfrac{7}{7}=\dfrac{7\times 2}{5\times 7}=\dfrac{2}{5}$$ $$\large \dfrac{7}{6}\times \dfrac{3}{4}=\dfrac{4\times 7}{6\times 3}=\dfrac{28}{18}=\dfrac{14}{9}$$
#### It is not valid. It never produces the correct answer.
Hint:
In the middle example,the answer is correct.
#### It is not valid. It produces the correct answer in a few special cases, but it‘s still not a valid algorithm.
Hint:
Note that this algorithm gives a/b divided by c/d, not a/b x c/d, but some students confuse multiplication and cross-multiplication. If a=0 or if c/d =1, division and multiplication give the same answer.
#### It is valid if the rational numbers in the multiplication problem are in lowest terms.
Hint:
Lowest terms is irrelevant.
#### It is valid for all rational numbers.
Hint:
Can't be correct as the first and last examples have the wrong answers.
Question 10 Explanation:
Topic: Analyze Non-Standard Computational Algorithms (Objective 0019).
Question 11
#### Which of the lists below is in order from least to greatest value?
#### A. $$\large -0.044,\quad -0.04,\quad 0.04,\quad 0.044$$
Hint:
These are easier to compare if you add trailing zeroes (this is finding a common denominator) -- all in thousandths, -0.044, -0.040, 0.040, 0.044. The middle two numbers, -0.040 and 0.040, can be modeled as owing 4 cents and having 4 cents. The outer two numbers are owing or having a bit more.
#### B. $$\large -0.04,\quad -0.044,\quad 0.044,\quad 0.04$$
Hint:
0.04=0.040, which is less than 0.044.
#### C. $$\large -0.04,\quad -0.044,\quad 0.04,\quad 0.044$$
Hint:
-0.04=-0.040, which is greater than $$-0.044$$.
#### D. $$\large -0.044,\quad -0.04,\quad 0.044,\quad 0.04$$
Hint:
0.04=0.040, which is less than 0.044.
Question 11 Explanation:
Topic: Ordering decimals and integers (Objective 0017).
Question 12
#### Which of the lists below contains only irrational numbers?
#### A. $$\large\pi , \quad \sqrt{6},\quad \sqrt{\dfrac{1}{2}}$$
#### B. $$\large\pi , \quad \sqrt{9}, \quad \pi +1$$
Hint:
$$\sqrt{9}=3$$
#### C. $$\large\dfrac{1}{3},\quad \dfrac{5}{4},\quad \dfrac{2}{9}$$
Hint:
These are all rational.
#### D. $$\large-3,\quad 14,\quad 0$$
Hint:
These are all rational.
Question 12 Explanation:
Topic: Identifying rational and irrational numbers (Objective 0016).
Question 13
#### 100
Hint:
6124/977 is approximately 6.
#### 200
Hint:
6124/977 is approximately 6.
#### 1,000
Hint:
6124/977 is approximately 6. 155 is approximately 150, and $$6 \times 150 = 3 \times 300 = 900$$, so this answer is closest.
#### 2,000
Hint:
6124/977 is approximately 6.
Question 13 Explanation:
Topics: Estimation, simplifying fractions (Objective 0016).
Question 14
#### The expression $$\large {{7}^{-4}}\cdot {{8}^{-6}}$$ is equal to which of the following?
#### A. $$\large \dfrac{8}{{{\left( 56 \right)}^{4}}}$$
Hint:
The bases are whole numbers, and the exponents are negative. How can the numerator be 8?
#### B. $$\large \dfrac{64}{{{\left( 56 \right)}^{4}}}$$
Hint:
The bases are whole numbers, and the exponents are negative. How can the numerator be 64?
#### C. $$\large \dfrac{1}{8\cdot {{\left( 56 \right)}^{4}}}$$
Hint:
$$8^{-6}=8^{-4} \times 8^{-2}$$
#### D. $$\large \dfrac{1}{64\cdot {{\left( 56 \right)}^{4}}}$$
Question 14 Explanation:
Topics: Laws of exponents (Objective 0019).
Question 15
#### 40
Hint:
"Keychain" appears on the spinner twice.
#### 80
Hint:
The probability of getting a keychain is 1/3, and so about 1/3 of the time the spinner will win.
#### 100
Hint:
What is the probability of winning a keychain?
#### 120
Hint:
That would be the answer for getting any prize, not a keychain specifically.
Question 15 Explanation:
Topic: I would call this topic expected value, which is not listed on the objectives. This question is very similar to one on the sample test. It's not a good question in that it's oversimplified (a more difficult and interesting question would be something like, "The school bought 100 keychains for prizes, what is the probability that they will run out before 240 people play?"). In any case, I believe the objective this is meant for is, "Recognize the difference between experimentally and theoretically determined probabilities in real-world situations. (Objective 0026)." This is not something easily assessed with multiple choice .
Question 16
#### 4
Hint:
The card blocks more than half of the circles, so this number is too small.
#### 5
Hint:
The card blocks more than half of the circles, so this number is too small.
#### 8
Hint:
The card blocks more than half of the circles, so this number is too small.
#### 12
Hint:
2/5 of the circles or 8 circles are showing. Thus 4 circles represent 1/5 of the circles, and $$4 \times 5=20$$ circles represent 5/5 or all the circles. Thus 12 circles are hidden.
Question 16 Explanation:
Topic: Models of Fractions (Objective 0017)
Question 17
#### 1.5°
Hint:
Celsius and Fahrenheit don't increase at the same rate.
#### 1.8°
Hint:
That's how much the Fahrenheit temp increases when the Celsius temp goes up by 1 degree.
#### 2.7°
Hint:
Each degree increase in Celsius corresponds to a $$\dfrac{9}{5}=1.8$$ degree increase in Fahrenheit. Thus the increase is 1.8+0.9=2.7.
#### Not enough information.
Hint:
A linear equation has constant slope, which means that every increase of the same amount in one variable, gives a constant increase in the other variable. It doesn't matter what temperature the patient started out at.
Question 17 Explanation:
Topic: Interpret the meaning of the slope and the intercepts of a linear equation that models a real-world situation (Objective 0022).
Question 18
#### How many students at the college are seniors who are not vegetarians?
#### A. $$\large 137$$
Hint:
Doesn't include the senior athletes who are not vegetarians.
#### B. $$\large 167$$
#### C. $$\large 197$$
Hint:
That's all seniors, including vegetarians.
#### D. $$\large 279$$
Hint:
Includes all athletes who are not vegetarians, some of whom are not seniors.
Question 18 Explanation:
Topic: Venn Diagrams (Objective 0025)
Question 19
#### The quotient is $$3\dfrac{1}{2}$$. There are 3 whole blocks each representing $$\dfrac{2}{3}$$ and a partial block composed of 3 small rectangles. The 3 small rectangles represent $$\dfrac{3}{6}$$ of a whole, or $$\dfrac{1}{2}$$.
Hint:
We are counting how many 2/3's are in 2 1/2: the unit becomes 2/3, not 1.
#### The quotient is $$\dfrac{4}{15}$$. There are four whole blocks separated into a total of 15 small rectangles.
Hint:
This explanation doesn't make much sense. Probably you are doing "invert and multiply," but inverting the wrong thing.
#### This picture cannot be used to find the quotient because it does not show how to separate $$2\dfrac{1}{2}$$ into equal sized groups.
Hint:
Study the measurement/quotative model of division. It's often very useful with fractions.
Question 19 Explanation:
Topic: Recognize and analyze pictorial representations of number operations. (Objective 0019).
Question 20
#### Commutative Property.
Hint:
For addition, the commutative property is $$a+b=b+a$$ and for multiplication it's $$a \times b = b \times a$$.
#### Associative Property.
Hint:
For addition, the associative property is $$(a+b)+c=a+(b+c)$$ and for multiplication it's $$(a \times b) \times c=a \times (b \times c)$$
#### Identity Property.
Hint:
0 is the additive identity, because $$a+0=a$$ and 1 is the multiplicative identity because $$a \times 1=a$$. The phrase "identity property" is not standard.
#### Distributive Property.
Hint:
$$(25+1) \times 16 = 25 \times 16 + 1 \times 16$$. This is an example of the distributive property of multiplication over addition.
Question 20 Explanation:
Topic: Analyze and justify mental math techniques, by applying arithmetic properties such as commutative, distributive, and associative (Objective 0019). Note that it's hard to write a question like this as a multiple choice question -- worthwhile to understand why the other steps work too.
# Please improve my logo to resemble our TeX.SX logo
Feel free to use either TikZ or PSTricks to answer my question. I want to recreate our TeX.SX logo, but my attempt is far from perfect. Please help me make it more similar to the original one.
\documentclass[pstricks,border=0pt]{standalone}
\begin{document}
\begin{pspicture}(8,3)
\rput(4,1.5){\psscalebox{8}{\textcolor{gray}{\{}\textcolor{red}{\TeX}\textcolor{gray}{\}}}}
\end{pspicture}
\end{document}
The aspects I want to improve:
1. Font type.
2. Font color and its shading.
@Werner I wonder what Freud would have to say about this :D (tex.stackexchange.com/posts/85050/…) – doncherry Dec 1 '12 at 4:29
Related question: Letterpress effect through PSTricks or TikZ. – Alan Munn Dec 1 '12 at 4:38
The typeface is Hoefler Text (see Site Design Ideas). – Alan Munn Dec 1 '12 at 4:40
@alfC I don't think that'd be a good fit for meta; this is a regular question about *TeX, whose subject happens to be related to the site. Meta questions should be about the site itself, typically without any *TeX involvement at all. – doncherry Dec 1 '12 at 5:34
@GarbageCollector Sigmund Freud was an Austrian psychologist in the 19th century, who is still very popular in literary analyses and the like. He is the founding father of psychoanalysis. Some of his influential theories deal with interpretation of dreams and the subconcious. A Freudian slip, as which I jokingly interpreted your typos, happens when you inadvertently utter something in a way that is incorrect, but reveals something about your inner, repressed desires. Roughly speaking. – doncherry Dec 1 '12 at 17:54
## 5 Answers
There are a few antialiasing artefacts that I don't know how to get rid of, and it uses some experimental code (what else?!). The font used is Hoefler (according to my Mac). The code itself won't work without some extra bits and pieces (one of which is the conversion of the Hoefler font to PGF paths - does anyone know the licence for Hoefler?). I also don't think that the braces are Hoefler.
For what it's worth, here's the code:
\documentclass{standalone}
%\url{http://tex.stackexchange.com/q/85050/86}
\usepackage[svgnames]{xcolor}
\usepackage{tikz}
\usetikzlibrary{shapes.letters,shadows.blur}
\pgfkeys{
/pgf/letter/.cd,
load font={hoefler}{normal},
size=4,
load encoding=name,
}
\definecolor{logoBack}{HTML}{F8F8F2}
\definecolor{brace}{HTML}{F6F6EF}
\definecolor{letter}{HTML}{C04848}
\makeatletter
\tikzset{
use letter path/.code={%
\pgfscope
\pgftransformscale{\letter@size}%
\letter@path{\letter@encode{#1}}%
\endpgfscope
}
}
\makeatother
\begin{document}
\begin{tikzpicture}[every shadow/.style={
shadow blur invert,
shadow xshift=-1pt,
shadow yshift=-3pt
}]
\coordinate (bleft) at (-2,0);
\coordinate (T) at (0,0);
\coordinate (E) at (2.3cm,-.65cm);
\coordinate (X) at (4.35cm,0);
\coordinate (bright) at (8.6,0);
\begin{scope}
\begin{scope}[shift={(bleft)}]
\fill[color=brace,use letter path=braceleft];
\clip[use letter path=braceleft];
\path[blur shadow,shadow xshift=2pt, shadow yshift=0pt,use letter path=braceleft];
\path[blur shadow,shadow xshift=-1pt, shadow yshift=0pt,use letter path=braceleft];
\end{scope}
\begin{scope}[shift={(T)}]
\fill[color=letter,use letter path=T];
\clip[use letter path=T];
\path[blur shadow,use letter path=T];
\end{scope}
\begin{scope}[shift={(E)}]
\fill[color=letter,use letter path=E];
\clip[use letter path=E];
\path[blur shadow,use letter path=E];
\end{scope}
\begin{scope}[shift={(X)}]
\fill[color=letter,use letter path=X];
\clip[use letter path=X];
\path[blur shadow,use letter path=X];
\end{scope}
\begin{scope}[shift={(bright)}]
\fill[color=brace,use letter path=braceright];
\clip[use letter path=braceright];
\path[blur shadow,shadow xshift=2pt, shadow yshift=0pt,use letter path=braceright];
\path[blur shadow,shadow xshift=-1pt, shadow yshift=0pt,use letter path=braceright];
\end{scope}
\path (current bounding box.north west) ++(-1,1) (current bounding box.south east) ++(1,-1);
\clip[shift={(T)},use letter path=T] (current bounding box.north west) rectangle (current bounding box.south east);
\clip[shift={(bleft)},use letter path=braceleft] (current bounding box.north west) rectangle (current bounding box.south east);
\clip[shift={(E)},use letter path=E] (current bounding box.north west) rectangle (current bounding box.south east);
\clip[shift={(X)},use letter path=X] (current bounding box.north west) rectangle (current bounding box.south east);
\clip[shift={(bright)},use letter path=braceright] (current bounding box.north west) rectangle (current bounding box.south east);
\fill[logoBack,rounded corners] (current bounding box.north west) rectangle (current bounding box.south east);
\end{scope}
\end{tikzpicture}
\end{document}
As well as needing the letter shapes from Hoefler and the code to make use of it, in doing this I spotted an issue with the pgf-blur library now needing unique fading names. So it really isn't compilable with "off the shelf" code! Modulo a few updates, most of it is on the TeX-SX launchpad - Hoefler being the key exception.
the best candidate for the bounty as well as the accepted answer. – Oh my ghost Dec 6 '12 at 14:09
+1 I love how much code you've used for something that is essentially 5 characters :) – cmhughes Dec 7 '12 at 0:19
This looks nicer than the actual logo – enthdegree Dec 7 '12 at 5:49
@GarbageCollector Thanks. – Loop Space Dec 8 '12 at 10:52
@cmhughes You don't know the half of it! You should see the size of the auxiliary files. – Loop Space Dec 8 '12 at 10:52
The closest I can get for now is:
\documentclass[]{article}
\usepackage[usenames,dvipsnames]{xcolor}
\usepackage[outline]{contour}
\usepackage{libertine}
\begin{document}
\bfseries
\Huge{
\contourlength{.4pt}
\textsc{
\textcolor{white}{\contour{gray}{\textbraceleft}}
\contourlength{.3pt}
\hspace{-15pt}
\textcolor{BrickRed}{\contour{black}{\TeX}}
\hspace{-15pt}
\contourlength{.4pt}
\textcolor{white}{\contour{gray}{\textbraceright}}
}
}
\end{document}
the typeface is wrong, see this comment: tex.stackexchange.com/questions/85050/… – tohecz Dec 5 '12 at 10:26
@tohecz It's an improvement nonetheless :) – cgnieder Dec 5 '12 at 10:54
Linux Libertine used instead of Hoefler Text (too poor to buy, too honest to steal, too lazy to find and install XeTeX). LL on Wiki – boucekv Dec 5 '12 at 11:21
Hoefler Text is among the fonts that are provided automatically on every Mac Computer System. Maybe a reason (or excuse?!) to finally buy that Mac computer you've been longing to own? :-) – Mico Dec 5 '12 at 11:46
I have a brand new Macbook Pro with Hoefler Text, but am having the strangest experience here. If I take your code, add \usepackage{fontspec} and \setmainfont{Hoefler Text}, xelatex throws a bunch of warnings and refuses to draw the contour, while lualatex says fontspec can't find the font at all. – GTK Dec 5 '12 at 13:24
This is an attempt, not nearly as complete or elegant as Andrew Stacey's or Herbert's, but without requiring any non-standard or beta packages. It combines Mico's and boucekv's approaches. Interestingly, it produces the output in lualatex, but not using xelatex.
\documentclass {article}
\pagestyle {empty}
\usepackage[usenames,dvipsnames]{xcolor}
\usepackage[centering,margin=1mm]{geometry}
\geometry{papersize={1.5in,0.4in}}
\usepackage[outline]{contour}
\usepackage{metalogo}
\makeatletter
\def\xl@drop@TeX@e{0.39ex} % default value: 0.5ex
\makeatother
\usepackage {fontspec}
\setmainfont {Hoefler Text}
\begin{document}
\centering
\Huge{
\contourlength{0.01em}
\textcolor{gray!10}{\contour{gray}{\{}}
\textcolor{BrickRed}{\contour{black}{\TeX}}
\textcolor{gray!10}{\contour{gray}{\}}}
}
\end{document}
Here's how one might recreate the logo using LuaLaTeX (and the font Hoefler Text); XeLaTeX will work too, of course.
% !TEX TS-program = lualatex
\documentclass{standalone}
\usepackage{fontspec}
\setmainfont{Hoefler Text}
\usepackage{metalogo}
\makeatletter
\def\xl@drop@TeX@e{0.38ex} % default value: 0.5ex
\makeatother
\usepackage{xcolor}
\definecolor{TeXSEred}{rgb}{0.75,0.28125,0.28125}
% many thanks to Alan Munn for stating the precise color :-)
\begin{document}
\textcolor{gray}{\{}\space
\textcolor{TeXSEred}{\TeX}
\textcolor{gray}{\}}
\end{document}
\documentclass{article}
\usepackage{pst-grad,pst-light3d,pstricks-add}
\DeclareFixedFont{\Rmb}{T1}{ptm}{m}{n}{4cm}
\begin{document}
\begin{pspicture}(0,-4)(8,4)
\psset{linewidth=0.5pt}
\psBrace[braceWidth=4mm,fillstyle=gradient,gradbegin=black,gradend=white,
gradangle=0,gradmidpoint=0](0.5,2)(0.5,-2)
\rput(4,0){\PstLightThreeDText[fillstyle=solid,fillcolor=red!100!black!70,
LightThreeDAngle=60,LightThreeDYLength=0.1]{\Rmb\TeX}}
\psBrace[braceWidth=4mm,fillstyle=gradient,gradbegin=white,gradend=black,
gradangle=0,gradmidpoint=0](7.5,-2)(7.5,2)
\end{pspicture}
\end{document}
needs latest pstricks-add from http://texnik.dante.de/tex/genric/pstricks-add/
Question:
#### On my "Home" it is not listed a course of the current semester. What can I do?
(Last edited: Tuesday, 22 May 2018, 7:53 PM) |
# Popper's experiment
Popper's experiment is an experiment proposed by the philosopher Karl Popper to put different interpretations of quantum mechanics (QM) to the test. As early as 1934, Popper began criticising the increasingly accepted Copenhagen interpretation, a popular subjectivist interpretation of quantum mechanics. In his most famous book, Logik der Forschung, he therefore proposed a first experiment intended to discriminate empirically between the Copenhagen interpretation and the realist interpretation he advocated. Einstein, however, wrote a letter to Popper about the experiment in which he raised some crucial objections,[1] and Popper himself declared that this first attempt was "a gross mistake for which I have been deeply sorry and ashamed of ever since".[2]
Popper returned to the foundations of quantum mechanics in 1948, when he developed his criticism of determinism in both quantum and classical physics.[3] He greatly intensified his research on the foundations of quantum mechanics throughout the 1950s and 1960s, developing his interpretation of quantum mechanics in terms of really existing probabilities (propensities), thanks also to the support of a number of distinguished physicists (such as David Bohm).[4]
## Overview
In 1980, Popper proposed perhaps his most important, yet overlooked, contribution to QM: a "new simplified version of the EPR experiment".[5]
The experiment was however published only two years later, in the third volume of the Postscript to the Logic of Scientific Discovery.[6]
The most widely known interpretation of quantum mechanics is the Copenhagen interpretation put forward by Niels Bohr and his school. It maintains that observations lead to a wavefunction collapse, thereby suggesting the counter-intuitive result that two well separated, non-interacting systems require action-at-a-distance. Popper argued that such non-locality conflicts with common sense, and would lead to a subjectivist interpretation of phenomena, depending on the role of the 'observer'.
While the EPR argument was always meant to be a thought experiment, put forward to shed light on the intrinsic paradoxes of QM, Popper proposed an experiment that could actually be implemented, and he took part in a physics conference organised in Bari in 1983 to present his experiment and propose that experimentalists carry it out.
The actual realisation of Popper's experiment required techniques based on the phenomenon of spontaneous parametric down-conversion, which had not yet been exploited at that time, so his experiment was eventually performed only in 1999, five years after Popper had died.
## Popper's proposed experiment
Contrary to the first (mistaken) proposal of 1934, Popper's experiment of 1980 exploits pairs of entangled particles in order to put Heisenberg's uncertainty principle to the test.[5][7]
Indeed, Popper maintains:
"I wish to suggest a crucial experiment to test whether knowledge alone is sufficient to create 'uncertainty' and, with it, scatter (as is contended under the Copenhagen interpretation), or whether it is the physical situation that is responsible for the scatter."[8]
Popper's proposed experiment consists of a low-intensity source of particles that can generate pairs of particles traveling to the left and to the right along the x-axis. The beam's intensity is kept low "so that the probability is high that two particles recorded at the same time on the left and on the right are those which have actually interacted before emission."
There are two slits, one each in the paths of the two particles. Behind the slits are semicircular arrays of counters which can detect the particles after they pass through the slits (see Fig. 1). "These counters are coincident counters [so] that they only detect particles that have passed at the same time through A and B."[9]
Fig.1 Experiment with both slits equally wide. Both the particles should show equal scatter in their momenta.
Popper argued that because the slits localize the particles to a narrow region along the y-axis, from the uncertainty principle they experience large uncertainties in the y-components of their momenta. This larger spread in the momentum will show up as particles being detected even at positions that lie outside the regions where particles would normally reach based on their initial momentum spread.
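The localization argument in this paragraph is just the Fourier trade-off between position and momentum widths. As a rough numerical illustration (a toy sketch in units where ħ = 1, assuming an idealized top-hat slit transmission and ignoring the coincidence-counting setup), one can Fourier-transform the wavefunction transmitted by a slit of width a and measure the width of the central momentum lobe:

```python
import numpy as np

def momentum_fwhm(a: float, N: int = 2**16, L: float = 400.0) -> float:
    """FWHM of the central momentum-space lobe for a top-hat slit of width a."""
    y = np.linspace(-L / 2, L / 2, N, endpoint=False)
    psi = (np.abs(y) < a / 2).astype(float)        # wavefunction just after the slit
    phi = np.fft.fftshift(np.fft.fft(psi))         # momentum-space amplitude
    k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=y[1] - y[0]))
    p = np.abs(phi) ** 2
    lobe = k[p >= p.max() / 2]                     # points at or above half maximum
    return lobe.max() - lobe.min()

for a in (4.0, 2.0, 1.0):
    print(f"slit width {a:.1f} -> momentum FWHM {momentum_fwhm(a):.2f}")
# Halving the slit width doubles the momentum spread, as the uncertainty
# principle requires: the product of the two widths stays roughly constant.
```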
Popper suggests that we count the particles in coincidence, i.e., we count only those particles behind slit B, whose partner has gone through slit A. Particles which are not able to pass through slit A are ignored.
The Heisenberg scatter for both the beams of particles going to the right and to the left, is tested "by making the two slits A and B wider or narrower. If the slits are narrower, then counters should come into play which are higher up and lower down, seen from the slits. The coming into play of these counters is indicative of the wider scattering angles which go with a narrower slit, according to the Heisenberg relations."[9]
Fig.2 Experiment with slit A narrowed, and slit B wide open. Should the two particles show equal scatter in their momenta? If they do not, Popper says, the Copenhagen interpretation is wrong. If they do, it indicates action at a distance, says Popper.
Now the slit at A is made very small and the slit at B very wide. Popper wrote that, according to the EPR argument, we have measured position "y" for both particles (the one passing through A and the one passing through B) with the precision $\Delta y$, and not just for the particle passing through slit A. This is because from the initial entangled EPR state we can calculate the position of particle 2, once the position of particle 1 is known, with approximately the same precision. We can do this, argues Popper, even though slit B is wide open.[9]
Therefore, Popper states that "fairly precise knowledge" about the y position of particle 2 is obtained; its y position is measured indirectly. And since it is, according to the Copenhagen interpretation, our knowledge which is described by the theory, and especially by the Heisenberg relations, it should be expected that the momentum ${\displaystyle p_{y}}$ of particle 2 scatters as much as that of particle 1, even though slit A is much narrower than the widely opened slit at B.
Now the scatter can, in principle, be tested with the help of the counters. If the Copenhagen interpretation is correct, then such counters on the far side of B that are indicative of a wide scatter (and of a narrow slit) should now count coincidences: counters that did not count any particles before the slit A was narrowed.
To sum up: if the Copenhagen interpretation is correct, then any increase in the precision in the measurement of our mere knowledge of the particles going through slit B should increase their scatter.[10]
Popper was inclined to believe that the test would decide against the Copenhagen interpretation, as it is applied to Heisenberg's uncertainty principle. If the test decided in favor of the Copenhagen interpretation, Popper argued, it could be interpreted as indicative of action at a distance.
## The debate
Many viewed Popper's experiment as a crucial test of quantum mechanics, and there was a debate on what result an actual realization of the experiment would yield.
In 1985, Sudbery pointed out that the EPR state, which could be written as ${\displaystyle \psi (y_{1},y_{2})=\int _{-\infty }^{\infty }e^{iky_{1}}e^{-iky_{2}}\,dk}$, already contained an infinite spread in momenta (tacit in the integral over k), so no further spread could be seen by localizing one particle.[11][12] Although this pointed to a crucial flaw in Popper's argument, its full implication was not understood. Krips theoretically analyzed Popper's experiment and predicted that narrowing slit A would lead to an increase in the momentum spread at slit B. Krips also argued that his result was based purely on the formalism of quantum mechanics, without any interpretational assumptions. Thus, if Popper was challenging anything, he was challenging the central formalism of quantum mechanics.[13]
In 1987 there came a major objection to Popper's proposal from Collet and Loudon.[14] They pointed out that because the particle pairs originating from the source had a zero total momentum, the source could not have a sharply defined position. They showed that once the uncertainty in the position of the source is taken into account, the blurring introduced washes out the Popper effect.
Furthermore, Redhead analyzed Popper's experiment with a broad source and concluded that it could not yield the effect that Popper was seeking.[15]
## Realization of Popper's experiment
Fig.3 Schematic diagram of Kim and Shih's experiment based on a BBO crystal which generates entangled photons. The lens LS helps create a sharp image of slit A on the location of slit B.
Fig.4 Results of the photon experiment by Kim and Shih, aimed at realizing Popper's proposal. The diffraction pattern in the absence of slit B (red symbols) is much narrower than that in the presence of a real slit (blue symbols).
Popper's experiment was realized in 1999 by Kim and Shih using a spontaneous parametric down-conversion photon source. They did not observe an extra spread in the momentum of particle 2 due to particle 1 passing through a narrow slit. They write:
"Indeed, it is astonishing to see that the experimental results agree with Popper’s prediction. Through quantum entanglement one may learn the precise knowledge of a photon’s position and would therefore expect a greater uncertainty in its momentum under the usual Copenhagen interpretation of the uncertainty relations. However, the measurement shows that the momentum does not experience a corresponding increase in uncertainty. Is this a violation of the uncertainty principle?"[16]
Rather, the momentum spread of particle 2 (observed in coincidence with particle 1 passing through slit A) was narrower than its momentum spread in the initial state.
They concluded that:
"Popper and EPR were correct in the prediction of the physical outcomes of their experiments. However, Popper and EPR made the same error by applying the results of two-particle physics to the explanation of the behavior of an individual particle. The two-particle entangled state is not the state of two individual particles. Our experimental result is emphatically NOT a violation of the uncertainty principle which governs the behavior of an individual quantum."[16]
This led to a renewed heated debate, with some even going to the extent of claiming that Kim and Shih's experiment had demonstrated that there is no non-locality in quantum mechanics.[17]
Unnikrishnan (2001), discussing Kim and Shih's result, wrote that the result:
"is a solid proof that there is no state-reduction-at-a-distance. ... Popper's experiment and its analysis forces us to radically change the current held view on quantum non-locality."[18]
Short criticized Kim and Shih's experiment, arguing that because of the finite size of the source, the localization of particle 2 is imperfect, which leads to a smaller momentum spread than expected.[19] However, Short's argument implies that if the source were improved, we should see a spread in the momentum of particle 2.[citation needed]
Sancho carried out a theoretical analysis of Popper's experiment, using the path-integral approach, and found a similar kind of narrowing in the momentum spread of particle 2 to that observed by Kim and Shih.[20] Although this calculation did not in itself give any deep insight, it indicated that the experimental result of Kim and Shih agreed with quantum mechanics. It did not say anything about what bearing, if any, the result has on the Copenhagen interpretation.
## Criticism of Popper's proposal
Tabish Qureshi has published the following analysis of Popper's argument.[21][22]
The ideal EPR state is written as ${\displaystyle |\psi \rangle =\int _{-\infty }^{\infty }|y,y\rangle \,dy=\int _{-\infty }^{\infty }|p,-p\rangle \,dp}$, where the two labels in the "ket" state represent the positions or momenta of the two particles. This implies perfect correlation, meaning that detecting particle 1 at position ${\displaystyle x_{0}}$ will also lead to particle 2 being detected at ${\displaystyle x_{0}}$. If particle 1 is measured to have a momentum ${\displaystyle p_{0}}$, particle 2 will be detected to have a momentum ${\displaystyle -p_{0}}$. The particles in this state have infinite momentum spread, and are infinitely delocalized. However, in the real world, correlations are always imperfect. Consider the following entangled state
${\displaystyle \psi (y_{1},y_{2})=A\!\int _{-\infty }^{\infty }dp\,e^{-{\frac {1}{4}}p^{2}\sigma ^{2}}e^{-{\frac {i}{\hbar }}py_{2}}e^{{\frac {i}{\hbar }}py_{1}}\exp \left[-{\frac {\left(y_{1}+y_{2}\right)^{2}}{16\Omega ^{2}}}\right]}$
where ${\displaystyle \sigma }$ represents a finite momentum spread, and ${\displaystyle \Omega }$ is a measure of the position spread of the particles. The uncertainties in position and momentum, for the two particles can be written as
${\displaystyle \Delta p_{2}=\Delta p_{1}={\sqrt {\sigma ^{2}+{\frac {\hbar ^{2}}{16\Omega ^{2}}}}},\qquad \Delta y_{1}=\Delta y_{2}={\sqrt {\Omega ^{2}+{\frac {\hbar ^{2}}{16\sigma ^{2}}}}}.}$
The action of a narrow slit on particle 1 can be thought of as reducing it to a narrow Gaussian state:
${\displaystyle \phi _{1}(y_{1})={\frac {1}{\left(2\pi \epsilon ^{2}\right)^{\frac {1}{4}}}}e^{-{\frac {y_{1}^{2}}{4\epsilon ^{2}}}}}$.
This will reduce the state of particle 2 to
${\displaystyle \phi _{2}(y_{2})=\!\int _{-\infty }^{\infty }\psi (y_{1},y_{2})\phi _{1}^{*}(y_{1})dy_{1}}$.
The momentum uncertainty of particle 2 can now be calculated, and is given by
${\displaystyle \Delta p_{2}={\sqrt {\frac {\sigma ^{2}\left(1+{\frac {\epsilon ^{2}}{\Omega ^{2}}}\right)+{\frac {\hbar ^{2}}{16\Omega ^{2}}}}{1+4\epsilon ^{2}\left({\frac {\sigma ^{2}}{\hbar ^{2}}}+{\frac {1}{16\Omega ^{2}}}\right)}}}.}$
If we go to the extreme limit of slit A being infinitesimally narrow (${\displaystyle \epsilon \to 0}$), the momentum uncertainty of particle 2 is ${\textstyle \lim _{\epsilon \to 0}\Delta p_{2}={\sqrt {\sigma ^{2}+\hbar ^{2}/16\Omega ^{2}}}}$, which is exactly what the momentum spread was to begin with. In fact, one can show that the momentum spread of particle 2, conditioned on particle 1 going through slit A, is always less than or equal to ${\textstyle {\sqrt {\sigma ^{2}+\hbar ^{2}/16\Omega ^{2}}}}$ (the initial spread), for any value of ${\displaystyle \epsilon ,\sigma }$, and ${\displaystyle \Omega }$. Thus, particle 2 does not acquire any extra momentum spread than it already had. This is the prediction of standard quantum mechanics. So, the momentum spread of particle 2 will always be smaller than what was contained in the original beam. This is what was actually seen in the experiment of Kim and Shih. Popper's proposed experiment, if carried out in this way, is incapable of testing the Copenhagen interpretation of quantum mechanics.
On the other hand, if slit A is gradually narrowed, the momentum spread of particle 2 (conditioned on the detection of particle 1 behind slit A) will show a gradual increase (never beyond the initial spread, of course). This is what quantum mechanics predicts. Popper had said
"...if the Copenhagen interpretation is correct, then any increase in the precision in the measurement of our mere knowledge of the particles going through slit B should increase their scatter."
This particular aspect can be experimentally tested.
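A quick numerical check of Qureshi's formula above illustrates this behaviour (a sketch: $\hbar$ is set to 1 and the values of $\sigma$ and $\Omega$ are illustrative only):

```python
from math import sqrt

# Conditional momentum spread of particle 2 as slit A narrows (eps -> 0).
hbar, sigma, Omega = 1.0, 1.0, 1.0
initial = sqrt(sigma**2 + hbar**2 / (16 * Omega**2))  # spread of the initial state

for eps in (2.0, 1.0, 0.5, 0.1, 0.01):  # slit-A width parameter, narrowing
    num = sigma**2 * (1 + eps**2 / Omega**2) + hbar**2 / (16 * Omega**2)
    den = 1 + 4 * eps**2 * (sigma**2 / hbar**2 + 1 / (16 * Omega**2))
    print(eps, sqrt(num / den))

print("initial spread:", initial)
# The conditional spread rises as eps shrinks, approaching (but never
# exceeding) the initial spread, exactly as the text states.
```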
## Popper's experiment and ghost diffraction
It has been shown that this effect has actually been demonstrated experimentally in the so-called two-particle ghost interference experiment.[23] This experiment was not carried out with the purpose of testing Popper's ideas, but ended up giving a conclusive result about Popper's test. In this experiment two entangled photons travel in different directions. Photon 1 goes through a slit, but there is no slit in the path of photon 2. However, photon 2, if detected in coincidence with a fixed detector behind the slit detecting photon 1, shows a diffraction pattern. The width of the diffraction pattern for photon 2 increases when the slit in the path of photon 1 is narrowed. Thus, an increase in the precision of knowledge about photon 2, obtained by detecting photon 1 behind the slit, leads to an increase in the scatter of photon 2.
## Popper's experiment and faster-than-light signalling
The expected additional momentum scatter which Popper wrongly attributed to the Copenhagen interpretation would allow faster-than-light communication, which is excluded by the no-communication theorem in quantum mechanics. Note, however, that both Collet and Loudon[14] and Qureshi[21][22] compute that the scatter decreases as the size of slit A decreases, contrary to the increase predicted by Popper. There was some controversy over whether this decrease also allows superluminal communication.[24][25] But the reduction is of the standard deviation of the conditional distribution of the position of particle 2, given that particle 1 did go through slit A, since we are only counting coincident detections. The reduction of the conditional distribution allows the unconditional distribution to remain the same, which is the only thing that matters for excluding superluminal communication. Also note that the conditional distribution would differ from the unconditional distribution in classical physics as well. But measuring the conditional distribution after slit B requires the information on the result at slit A, which has to be communicated classically, so the conditional distribution cannot be known as soon as the measurement is made at slit A; it is delayed by the time required to transmit that information.
## References
1. ^ K. Popper (1959). The Logic of Scientific Discovery. London: Hutchinson. appendix *xii. ISBN 0-415-27844-9.
2. ^ Popper, Karl (1982). Quantum Theory and the Schism in Physics. London: Hutchinson (from 1992 published by Routledge). pp. 27–29. ISBN 0-8476-7019-8.
3. ^ Popper, Karl R. (1950). "Indeterminism in quantum physics and in classical physics". British Journal for the Philosophy of Science. 1 (2): 117–133. doi:10.1093/bjps/I.2.117.
4. ^ Del Santo, Flavio (2019). "Karl Popper's Forgotten Role in the Quantum Debate at the Edge between Philosophy and Physics in 1950s and 1960s". Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics. 67: 78. arXiv:1811.00902. Bibcode:2019SHPMP..67...78D. doi:10.1016/j.shpsb.2019.05.002.
5. ^ a b Del Santo, Flavio (2017). "Genesis of Karl Popper's EPR-Like Experiment and its Resonance amongst the Physics Community in the 1980s". Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics. 62: 56–70. arXiv:1701.09178. Bibcode:2018SHPMP..62...56D. doi:10.1016/j.shpsb.2017.06.001. S2CID 119491612.
6. ^ Popper, Karl (1985). "Realism in quantum mechanics and a new version of the EPR experiment". In Tarozzi, G.; van der Merwe, A. (eds.). Open Questions in Quantum Physics. Dordrecht: Reidel. pp. 3–25. doi:10.1007/978-94-009-5245-4_1. ISBN 978-94-010-8816-9.
7. ^ William M. Shields (2012). "A Historical Survey of Sir Karl Popper's Contribution to Quantum Mechanics". Quanta. 1 (1): 1–12. doi:10.12743/quanta.v1i1.4.
8. ^ a b Popper (1982), p. 27.
9. ^ a b c Popper (1982), p. 28.
10. ^ Popper (1982), p.29.
11. ^ A. Sudbery (1985). "Popper's variant of the EPR experiment does not test the Copenhagen interpretation". Philosophy of Science. 52 (3): 470–476. doi:10.1086/289261.
12. ^ A. Sudbery (1988). "Testing interpretations of quantum mechanics". In Tarozzi, G.; van der Merwe, A. (eds.). Microphysical Reality and Quantum Formalism. Dordrecht: Kluwer. pp. 470–476.
13. ^ H. Krips (1984). "Popper, propensities, and the quantum theory". British Journal for the Philosophy of Science. 35 (3): 253–274. doi:10.1093/bjps/35.3.253.
14. ^ a b M. J. Collet; R. Loudon (1987). "Analysis of a proposed crucial test of quantum mechanics". Nature. 326 (6114): 671–672. Bibcode:1987Natur.326..671C. doi:10.1038/326671a0. S2CID 31007584.
15. ^ M. Redhead (1996). "Popper and the quantum theory". In O'Hear, A. (ed.). Karl Popper: Philosophy and Problems. Cambridge: Cambridge University Press. pp. 163–176.
16. ^ a b Y.-H. Kim & Y. Shih (1999). "Experimental realization of Popper's experiment: violation of the uncertainty principle?". Foundations of Physics. 29 (12): 1849–1861. doi:10.1023/A:1018890316979. S2CID 189841160.
17. ^ C.S. Unnikrishnan (2002). "Is the quantum mechanical description of physical reality complete? Proposed resolution of the EPR puzzle". Foundations of Physics Letters. 15: 1–25. doi:10.1023/A:1015823125892.
18. ^ C.S. Unnikrishnan (2001). "Resolution of the Einstein-Podolsky-Rosen non-locality puzzle". In Sidharth, B.G.; Altaisky, M.V. (eds.). Frontiers of Fundamental Physics 4. New York: Springer. pp. 145–160. Bibcode:2001ffpf.book.....S.
19. ^ A. J. Short (2001). "Popper's experiment and conditional uncertainty relations". Foundations of Physics Letters. 14 (3): 275–284. doi:10.1023/A:1012238227977. S2CID 117154579.
20. ^ P. Sancho (2002). "Popper's Experiment Revisited". Foundations of Physics. 32 (5): 789–805. doi:10.1023/A:1016009127074. S2CID 84178335.
21. ^ a b Tabish Qureshi (2005). "Understanding Popper's Experiment". American Journal of Physics. 73 (6): 541–544. arXiv:quant-ph/0405057. Bibcode:2005AmJPh..73..541Q. doi:10.1119/1.1866098. S2CID 119437948.
22. ^ a b Tabish Qureshi (2012). "Popper's Experiment: A Modern Perspective". Quanta. 1 (1): 19–32. arXiv:1206.1432. doi:10.12743/quanta.v1i1.8. S2CID 59483612.
23. ^ Tabish Qureshi (2012). "Analysis of Popper's Experiment and Its Realization". Progress of Theoretical Physics. 127 (4): 645–656. arXiv:quant-ph/0505158. Bibcode:2012PThPh.127..645Q. doi:10.1143/PTP.127.645. S2CID 119484882.
24. ^ E. Gerjuoy; A.M. Sessler (2006). "Popper's experiment and communication". American Journal of Physics. 74 (7): 643–648. arXiv:quant-ph/0507121. Bibcode:2006AmJPh..74..643G. doi:10.1119/1.2190684. S2CID 117564757.
25. ^ Ghirardi, GianCarlo; Marinatto, Luca; de Stefano, Francesco (2007). "Critical analysis of Popper's experiment". Physical Review A. 75 (4): 042107. arXiv:quant-ph/0702242. Bibcode:2007PhRvA..75d2107G. doi:10.1103/PhysRevA.75.042107. ISSN 1050-2947. S2CID 119506558. |
# extension theorems on normed spaces
I know that there are a number of extension theorems, Tietze's extension theorem, Hahn-Banach extension and so on..
I want to know if there is an extension theorem which guarantees that if, say, $X$ is a normed space with a dense subspace $D \subset X$, then for any $f \in D^{*}$ there is an extension $g \in X^{*}$. Is it a unique extension? If there is an extension, why is it enough for $f$ to be continuous and not uniformly continuous, as is required for a real-valued function $f:D \rightarrow \mathbb{R}$ on a dense subset of $\mathbb{R}$?
• Regarding the last question: note that a linear continuous function is automatically Lipschitz continuous. – Giuseppe Negro May 24 '14 at 15:49
• The extension is unique: if $D\ni d_n\to x$ and the extension $g$ is continuous, then $g(x)=\lim f(d_n)$, so $g$ is determined by $f$. – Peter Franek May 24 '14 at 15:58
• @GiuseppeNegro Yes I forgot about that, so since it's Lipschitz continuous it is also uniformly continuous and therefore we use the usual Theorem regarding uniform continuity of a dense set? – user103184 May 24 '14 at 16:03
• @GiuseppeNegro Can I ask you A Sobolev space question. That's if you have any experience in that area? – user103184 May 24 '14 at 16:06
• If you have a question, please ask it on the main page rather than asking me personally. – Giuseppe Negro May 24 '14 at 16:09 |
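For completeness, here is a minimal sketch of the standard density-extension argument hinted at in the comments above (an assumption: $D$ is a dense linear subspace of $X$ and $f \in D^{*}$ is bounded on $D$). For $x \in X$ pick $d_n \in D$ with $d_n \to x$ and set $g(x) := \lim_{n\to\infty} f(d_n)$. The limit exists because $|f(d_n) - f(d_m)| \leq \|f\| \, \|d_n - d_m\|$, so $(f(d_n))$ is a Cauchy sequence of scalars; the same Lipschitz bound shows the value does not depend on the chosen sequence, and that $g$ is linear with $\|g\| = \|f\|$. This is precisely why plain continuity suffices here: for linear maps, continuity automatically upgrades to Lipschitz (hence uniform) continuity.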
Highlights from two sessions at the RSS conference 2016 …
Earlier this week I popped up to the 2016 RSS Conference in Manchester to give a talk as part of the Geostatistical Models for Tropical Medicine session organised by Michelle Stanton (and featuring two other speakers: Victor Alegana from Southampton and Emanuele Giorgi from Lancaster). Given the rather steep conference fees (!) I decided to only register for one day, but nevertheless, of the few talks I saw, a couple were of obvious relevance to astronomical statistics.

First, Simon Preston described the 'ESAG' (Elliptically Symmetric Angular Gaussian) distribution on the sphere, which is constructed by projection/marginalisation of a three-dimensional Gaussian in $\mathcal{R}^3$ to the space $\mathcal{S}^2$. Two additional conditions on the mean, $\mu$, and covariance matrix, $\Sigma$, of the three-dimensional Gaussian complete the definition of the ESAG and reduce the size of its parameter space to 5: namely, $\Sigma\mu=\mu$ and $|\Sigma|=1$. One could well imagine using a mixture model in which the ESAG is the component distribution to represent something like the distribution of gamma-ray bursts on the sky.

Second, Timothy Cannings described the methodology behind his R package, RPEnsemble, for learning binary classifications via an ensemble of random projections from the input space, $\mathcal{R}^n$, to a lower-dimensional space, $\mathcal{R}^p$. Given the prevalence of classification tasks in astronomical data analysis (e.g. distinguishing quasars from other bright sources in a wide-field survey catalogue) I would expect this one also to be a neat addition to the astronomers' toolkit.
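To give a flavour of the random-projection ensemble idea, here is a toy sketch in Python (emphatically not the RPEnsemble API, and the plain majority vote here is simpler than the projection-selection step of the actual methodology):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_features, p, n_projections = 20, 3, 50

# Synthetic binary classification data in R^n.
X = rng.normal(size=(500, n_features))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

votes = np.zeros(len(y))
for _ in range(n_projections):
    A = rng.normal(size=(n_features, p)) / np.sqrt(p)  # random projection to R^p
    clf = LinearDiscriminantAnalysis().fit(X @ A, y)   # simple base classifier
    votes += clf.predict(X @ A)

y_hat = (votes / n_projections > 0.5).astype(int)      # aggregate by majority vote
print("training accuracy:", (y_hat == y).mean())
```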
# Combinatorics in a Party.
There are 12 persons at a dinner party; they are to be arranged on the two sides of a rectangular table. Suppose that the master and the mistress of the house must always face each other, and that there are two specific guests who must always be placed alongside one another. Find the number of ways in which the company can be seated.
Number of ways master and the mistress of the house can be seated is $2 \cdot$$5 \choose 1$$=10$
Now one position on each side is fixed. Let us consider the two specific guests as one element with two internal arrangements. For any side, we have to choose one position out of 3, but due to the internal arrangements and the two sides their arrangement becomes $2\cdot 2\cdot 3=24$
For remaining 8 persons the number of arrangements is $8!$.
Total Possible Arrangements = $10 \cdot 24 \cdot 8!$.
• Is this a question? – amcalde Dec 15 '14 at 15:18
• If master and mistress have fixed seats, then there is only one way for them to be seated, not $50$. – drhab Dec 15 '14 at 15:25
• No, they only have to be opposite to one another – Dheeraj Kumar Dec 15 '14 at 15:25
• Then that should have been said. Not that they have fixed seats. – drhab Dec 15 '14 at 15:26
• "opposite to one another" – Dheeraj Kumar Dec 15 '14 at 15:27
I now understand the problem in the following way: there is a table of dimensions $6\times 1$ at which, for some reason, people don't sit on the short edges. We want to count the number of ways to seat them so that the hosts sit facing each other and the members of the couple sit next to each other. We separate into three cases:
Case 1: the hosts sit at corner spots. There are two ways to select the end of the table and two ways to select which host sits in which of the two seats. After this, notice the couple must sit at consecutive seats; how many pairs of consecutive seats are there? Four on each side of the table, so eight. We must then select which member of the couple sits in which of the two spots, in two ways, and finally seat everyone else in $8!$ ways, so the count is $2\cdot2\cdot8\cdot2\cdot8!$
Case 2: the hosts sit at spots that are neither at the middle of the table nor at the edges. There are $2$ ways to select the pair of spots, and after this $2$ ways to select which host takes which of the two seats. We now seat the couple: there are three pairs of consecutive seats on each side, so there are $6$ ways to choose the pair, $2$ ways to select which member of the couple takes which seat, and $8!$ ways to seat everyone else. The count is thus $2\cdot2\cdot6\cdot2\cdot8!$
Case 3: the hosts sit at central spots. There are two ways to select which of the central spots, then two ways to select which host gets which of the two seats. Notice there are three pairs of consecutive spots left on each side, so there are $6$ ways to select the spots the couple sits at, then $2$ ways to select which member of the couple gets which seat, and after this $8!$ ways to seat everyone else; hence the count is $2\cdot2\cdot6\cdot2\cdot8!$
Factoring out the $8!$, the final answer is:
$(64+48+48)8!=160(8!)$
• Case 3 is wrong: there are two "middle" seats on each side, and two ways the hosts can sit in the selected one. This leaves six pairs of places for the adjacent couple (three on each side; one pair to one side of the hosts and two to the other) again times 2 for which sits where, so the last term should be 2.2.6.2.8! for a total of 160.8! – TripeHound Dec 15 '14 at 16:30
A simpler way of looking at it is to consider the couple who must be adjacent first. There are five adjacent pairs of seats on each side of the table, and the couple can sit either way around (5 * 2 * 2). Wherever they sit, there are four places left where the hosts can sit opposite each other, and either host can sit either side (...* 4 * 2). The remaining 8 guests can sit 8! ways. Thus the total is 5 * 2 * 2 * 4 * 2 * 8! or 160 * 8!
• Good idea, I don't know why I jumped to classify according to host instead of couple. – Jorge Fernández Hidalgo Dec 15 '14 at 16:36
• I started to think that way as well, presumably because it's mentioned first in the question, but realised the adjacent pair is the more limiting. – TripeHound Dec 15 '14 at 16:38 |
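The count of $160 \cdot 8!$ is also easy to confirm by brute force over placements of the four constrained diners (a sketch; modelling seats as (side, position) pairs is an assumption matching the $6\times 1$ table reading above):

```python
from itertools import permutations
from math import factorial

# Seats: (side, pos) with side in {0,1} and pos in 0..5.
seats = [(s, p) for s in (0, 1) for p in range(6)]

def opposite(a, b):   # facing across the table
    return a[0] != b[0] and a[1] == b[1]

def adjacent(a, b):   # side by side on the same side of the table
    return a[0] == b[0] and abs(a[1] - b[1]) == 1

count = 0
for master, mistress, g1, g2 in permutations(seats, 4):
    if opposite(master, mistress) and adjacent(g1, g2):
        count += 1

print(count)                 # 160: placements of the four special people
print(count * factorial(8))  # total arrangements, matching 160 * 8!
```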
# Editing a citation style (verbose-ibid)
A lot of questions have been asked about more or less this topic, but I'm unable to find answers. I'm using Overleaf and I would like to obtain the code of the citation style verbose-ibid so that I can edit it. I started using LaTeX a few days ago and don't know much about it yet, but I am quite comfortable modifying a Zotero style, for instance.
\documentclass[12pt]{article}
\usepackage[utf8]{inputenc}
\usepackage[greek, french]{babel}
\usepackage[T1, T2A]{fontenc}
\usepackage{setspace}
\usepackage[margin=3cm]{geometry}
%Bibliographie
%----------------------------------------------------------------
\usepackage{csquotes}
\usepackage[style=verbose-ibid,backend=bibtex]{biblatex}
\addbibresource{\jobname.bib}% \addbibresource expects the extension; \bibliography{\jobname.bib} would look for \jobname.bib.bib
\begin{filecontents}{\jobname.bib}
@book{key,
author = {Author, A.},
year = {2001},
title = {Title},
publisher = {Publisher},
}
\end{filecontents}
\begin{document}
\footcite{key}
\printbibliography
\end{document}
• Welcome to TeX.SX! Please post the code example directly in your question and do not link to third-party sites. Also please try to make your code fully self-contained so that other people can run it without additional files (we don't have your Biblio.bib). Also, please try to remove code that is unreated to the sisue at hand (for example the \newcommand{\titre} stuff, many of the packages you load are also not relevant). See tex.meta.stackexchange.com/q/228/35864 and tex.meta.stackexchange.com/q/4407/35864. – moewe Oct 24 '18 at 19:22
• Finally, please note that ideally each question on this site should only revolve around one specific issue. As such it might not be unreasonable to split your one question up into four small questions. – moewe Oct 24 '18 at 19:23
• Perfect, thank you, my main question, then, would be, how do you find the code of the verbose-ibid citation style. – Aulus.Persius.Flaccus Oct 24 '18 at 19:25
• I'll have a look at your question shortly, but please note that now your code is too minimal. The code should still be compilable when it is copied and pasted, i.e. it must have a \documentclass and a \begin{document}...\end{document} and a few example citations etc., it just should not have too much stuff. See also the two links in my first comment. – moewe Oct 24 '18 at 19:28
• You may want to customize package options described here instead of editing the code directly. – zyy Oct 24 '18 at 19:36
The code for biblatex styles can be found in <style>.bbx (bibliography style code) and <style>.cbx (citation style code).
In your case the relevant files are verbose-ibid.bbx and verbose-ibid.cbx. You can find these files on your machine with kpsewhich verbose-ibid.bbx and kpsewhich verbose-ibid.cbx, respectively.
All relevant files are also on CTAN in https://www.ctan.org/tex-archive/macros/latex/contrib/biblatex/latex or http://mirrors.ctan.org/macros/latex/contrib/biblatex/latex/ and its subdirectories, and on GitHub in https://github.com/plk/biblatex/blob/dev/tex/latex/biblatex/ and subdirectories.
You'll find that verbose-ibid.bbx immediately sends you off to authortitle.bbx. So the first interesting file is authortitle.bbx.
verbose-ibid.cbx on the other hand contains a complete citation style.
In addition to the style-specific files, you will always want to have standard.bbx and biblatex.def handy. standard.bbx is loaded by all standard biblatex styles and biblatex.def is loaded by all styles automatically. The biblatex documentation is also helpful.
biblatex styles are modular and that means that you may have to chase down definitions of the involved macros in several different files.
To answer some of the concrete questions from an earlier version of your question.
1. You can change the default punctuation from a full stop to a comma with
\renewcommand*{\newunitpunct}{\addcomma\space}
This will also apply to the bibliography at the end, so if you want a different layout there you may have to use \AtBeginBibliography.
2. This one is more tricky; I didn't do anything here because the desired output was not clear to me. I suggest you ask a new question with well-defined desiderata.
3. Could be achieved with the option
giveninits=true
Again, if the behaviour should be different in the bibliography, you need to do extra work.
4. The citepages option could help you here. See the verbose-ibid style documentation. Maybe you want
citepages=omit
or maybe the more radical citepages=suppress or the fancy citepages=separate.
Your document could look like this
\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage[french]{babel}
\usepackage{csquotes}
\usepackage[style=verbose-ibid, backend=biber, giveninits=true, citepages=omit]{biblatex} |
1. ## Trig angle problem
If anyone can help me solve this problem it would be greatly appreciated. I have been looking at it all day and can't for the life of me work out how to do it (it may not even be possible with only the variables I have).
I am trying to calculate the angle 'Y'. I have labeled everything that I know.
I am pretty sure the answer is going to be pretty complicated. Feel free to ask questions about anything that may not be clear.
2. I cannot understand your sketch; where is the original problem? In this forum we have a company of math helpers, so please provide additional information. Thanks.
3. Originally Posted by Diessence
If anyone can help me solve this problem it would be greatly appreciated. I have been looking at it all day and can't for the life of me work out how to do it (it may not even be possible with only the variables I have).
I am trying to calculate the angle 'Y'. I have labeled everything that I know. The only thing that isn't really clear is that I know what X' + X'' is; I don't know what they are individually.
I am pretty sure the answer is going to be pretty complicated. Feel free to ask questions about anything that may not be clear.
Since $\angle X'$ and $\angle X''$ constrain the location of the square, and distances d and e (which do not have to be parallel) also constrain the placement of the square, $\angle Y$ will have a unique solution.
4. pacman: what can you not understand about the diagram? Please let me know and I will try to explain it better.
aidan: are you saying it can or can't be done with what I have given you?
Feel free to create any new shapes, distances, or angles (from the given variables) that you feel may be necessary to find an answer for angle Y.
5. I cannot figure out what to do with it. On a closer look, your triangle ABC is a 45-45-90 triangle, and your angle y is also 45 degrees by visual interpolation. Did you draw it to scale?
6. Originally Posted by Diessence
If anyone can help me solve this problem it would be greatly appreciated. I have been looking at it all day and can't for the life of me work out how to do it (it may not even be possible with only the variables I have).
I am trying to calculate the angle 'Y'. I have labeled everything that I know. The only thing that isn't really clear is that I know what X' + X'' is; I don't know what they are individually.
I am pretty sure the answer is going to be pretty complicated. Feel free to ask questions about anything that may not be clear.
see attached sketch for clarity
Two Solutions.
1) Line e is parallel to line d
2) Line e is NOT parallel to line d
SOLUTION 1:
Line e IS PARALLEL TO Line d
$f'= \dfrac{(c'')^2 - a^2 + (d-e)^2}{2(d-e)}$

$f'' = \sqrt{ (c'')^2 + (f')^2 }$

$\angle z' = \arctan \left ( \dfrac{f''}{f'} \right )$

$\angle z'' = \arctan \left ( \dfrac{ d-e-f'}{f''} \right ) + 90^\circ$

$\angle v' = 270^\circ - \angle z''$

$\angle v'' = 360^\circ - (\angle x'+\angle z'+\angle v')$

$\angle w = 270^\circ - \angle v''$

$\angle y = 180^\circ - ( \angle w + \angle x)$
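These formulas transcribe directly into a short script (a sketch only: the variable names mirror the labels above, x1 stands for $\angle x'$, the final line follows the text in using $\angle x$, and the inputs must come from an actual configuration with e parallel to d):

```python
from math import sqrt, atan, degrees

def angle_y(a, c2, d, e, x1):
    """Transcription of Solution 1; all angles in degrees.

    a, c2 (= c''), d, e are the labelled lengths; x1 is angle x'.
    """
    f1 = (c2**2 - a**2 + (d - e)**2) / (2 * (d - e))  # f'
    f2 = sqrt(c2**2 + f1**2)                          # f''
    z1 = degrees(atan(f2 / f1))                       # angle z'
    z2 = degrees(atan((d - e - f1) / f2)) + 90        # angle z''
    v1 = 270 - z2                                     # angle v'
    v2 = 360 - (x1 + z1 + v1)                         # angle v''
    w = 270 - v2                                      # angle w
    return 180 - (w + x1)                             # angle y (text writes angle x)
```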
7. pacman: the angle X' + X'' does not have any fixed value associated with it; in real life it could be anywhere from 30 to 50 degrees. I am after a solution for angle Y in terms of formulas only, so that once I apply it to a real-life situation I can calculate Y for any case.
aidan: Your solution looks good, except that when you calculated V'' you used X' in the solution. Remember, I only know what X'+X'' is; I don't know what they are individually. Also, we can't assume that e and d are going to be parallel.
8. Originally Posted by Diessence
aidan: Your solution looks good, except that when you calculated V'' you used X' in the solution. Remember, I only know what X'+X'' is; I don't know what they are individually.
Then you should NOT have given both of them. You should have given only 1 value.
You need to correct your drawing.
Also, we can't assume that e and d are going to be parallel
from my 2nd post:
Two Solutions.
1) Line e is parallel to line d
2) Line e is NOT parallel to line d
9. Aidan: I didn't know how to draw the figure showing that full angle with only one label; I will redo my drawing. Also, I have just realised I forgot to label a line I know: length F is known. Sorry.
Aidan: I cannot see your "Solution 2". Are you saying that the above equations will work for any case?
On longest paths and diameter in random Apollonian networks
Consider the following iterative construction of a random planar triangulation. Start with a triangle embedded in the plane. In each step, choose a bounded face uniformly at random, add a vertex inside that face and join it to the vertices of the face. After n – 3 steps, we obtain a random triangulated plane graph with n vertices, which is called a Random Apollonian Network (RAN). See http://www.math.cmu.edu/~ctsourak/ran.html for an example.
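The construction is easy to simulate; here is a minimal sketch using networkx (the function name, seed and size are illustrative):

```python
import random
import networkx as nx

def random_apollonian_network(n, seed=None):
    """Grow a Random Apollonian Network on n >= 3 vertices."""
    rng = random.Random(seed)
    G = nx.Graph([(0, 1), (1, 2), (0, 2)])  # the initial embedded triangle
    faces = [(0, 1, 2)]                     # bounded triangular faces
    for v in range(3, n):
        i = rng.randrange(len(faces))       # face chosen uniformly at random
        a, b, c = faces.pop(i)
        G.add_edges_from([(v, a), (v, b), (v, c)])
        faces.extend([(a, b, v), (b, c, v), (a, c, v)])
    return G

G = random_apollonian_network(1000, seed=1)
print(nx.diameter(G))  # asymptotically ~ c*log(n) with c approx 1.668
```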
We prove that the diameter of a RAN is asymptotic to $$c \log(n)$$ in probability, where $$c \approx 1.668$$ is the solution of an explicit equation. The proof adapts a technique of Broutin and Devroye for estimating the height of random trees.
We also prove that there exists a fixed $$s<1$$, such that eventually every self-avoiding walk in this graph has length less than $$n^s$$, which verifies a conjecture of Cooper and Frieze. Using a similar technique, we show that if $$r < d$$ are fixed constants, then every r-ary subtree of a random d-ary recursive tree on n vertices has less than $$n^b$$ vertices, for some $$b=b(d,r)<1$$.
Based on joint work with A. Collevecchio, E. Ebrahimzadeh, L. Farczadi, P. Gao, C. Sato, N. Wormald, and J. Zung. |
# An application of absorption to teaching lim inf and lim sup (sequences)
Many undergraduates have difficulty understanding the notions of the lim inf and lim sup of a sequence.
[The full names for these appear to vary from author to author. In the first version of this post I called them “limit infimum” and “limit supremum”, but I think that “limit inferior” and “limit superior” are the most widely accepted names. ]
Of course, there is the basic problem that students confuse $\liminf_{n\to\infty} x_n$ with $\lim_{n\to\infty}( \inf x_n)$ (which is, strictly speaking, meaningless, but might generously be interpreted as meaning one of $\lim_{n\to\infty} x_n$ or $\inf _{n \in \mathbb{N}} x_n$). However, what I really mean is that the students often fail to grasp what $\liminf$ and $\limsup$ really mean. (See below for some more details of what I mean by this!)
As with epsilon and delta, we may be tempted to avoid confronting the students’ difficulties with lim inf and lim sup. For example, we can often choose between using lim inf and lim sup or using the sandwich theorem (also known as the squeeze rule). A typical example of this is the standard exercise where you have to prove the following fact at the start of a course on metric spaces.
Let $(X,d)$ be a metric space, and let $(x_n)$ and $(y_n)$ be convergent sequences in $X$ with limits $x$ and $y$ respectively. Prove that $d(x_n,y_n) \to d(x,y)$ as $n \to \infty$.
I leave it to the reader to supply two proofs, one using the sandwich theorem, and another using $\liminf$ and $\limsup$.
As with epsilon and delta, it may be that postponing discussion of $\liminf$ and $\limsup$, or avoiding them altogether, is not in the best interests of the student. I have to admit that I am not sure! But I think it is worth investigating possible ways to help students to understand $\liminf$ and $\limsup$.
In the following, for convenience, we work in the extended real line
$\overline{\mathbb{R}} =[{-\infty},{+\infty}]= \mathbb{R}\cup\{{-\infty},{+\infty}\}\,.$
This is convenient, because every subset of $\overline{\mathbb{R}}$ has a supremum and an infimum in $\overline{\mathbb{R}}$: there is no need to worry about boundedness and non-emptiness. Those who prefer to work in $\mathbb{R}$ should add in appropriate assumptions below where necessary.
The standard approach to lim inf and lim sup
The following approach to lim inf and lim sup is entirely standard.
Let $(x_n)$ be a sequence in $\overline{\mathbb{R}}$. Then it is standard to define sequences $(s_n)$ and $(S_n)$ in $\overline{\mathbb{R}}$ as follows: for each $n \in \mathbb{N}$,
$s_n = \inf \{x_n,x_{n+1},x_{n+2},\dots\}$
and
$S_n =\sup\{x_n,x_{n+1},x_{n+2},\dots\}\,.$
We may then define
$\liminf_{n \to \infty} x_n = \sup\{s_n:n\in\mathbb{N}\}$
and
$\limsup_{n \to \infty} x_n = \inf\{S_n:n\in\mathbb{N}\}\,.$
Once you have decided on an appropriate definition of convergence in $\overline{\mathbb{R}}$, you can confirm that we also have
$\liminf_{n \to \infty} x_n = \lim_{n \to \infty} s_n$
and
$\limsup_{n \to \infty} x_n =\lim_{n \to \infty} S_n\,.$
These definitions are very clean, and are easy to apply, e.g., to prove results in the theory of measure and integration. But they do not, in themselves, give the student a very good idea of what $\liminf$ and $\limsup$ really mean for a typical sequence. In my opinion, even calculating the $\liminf$ and $\limsup$ of a few examples does not really help as much as you would expect.
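For concreteness, here is a small numerical illustration of these definitions (a sketch: the true infima and suprema run over infinite tails, so we truncate at a large N, which is harmless for this rapidly settling example):

```python
# x_n = (-1)^n * (1 + 1/n) for n = 1, 2, ..., N (truncated tails).
N = 10000
x = [(-1) ** n * (1 + 1 / n) for n in range(1, N + 1)]

s = [min(x[n:]) for n in range(20)]  # s_n: infimum of the tail from index n
S = [max(x[n:]) for n in range(20)]  # S_n: supremum of the tail from index n

print(s[:6])  # non-decreasing, tending to liminf = -1
print(S[:6])  # non-increasing, tending to limsup = +1
```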
One approach that can help a little is to explain that $\liminf_{n \to\infty} (x_n)$ is the minimum of all the possible limits of subsequences of the sequence $(x_n)$, and similarly for $\limsup$, with $\max$ in place of $\min$.
However, in my opinion, what we should try to get across is what $\liminf$ and $\limsup$ tell us about where $x_n$ can actually be as $n$ becomes large.
The absorption approach to lim inf and lim sup
Let $(x_n)$ be a sequence of extended real numbers. Set $s = \liminf_{n \to \infty} x_n$ and $S =\limsup_{n\to\infty} x_n$. For the rest of this post, $(x_n)$, $s$ and $S$ will be fixed.
What I think we would like students to understand is that, for large $n$, $x_n$ is “almost” in $\null[s,S]$, and there is no strictly smaller closed interval for which this is true.
Recall, in my terminology, a set absorbs a sequence if at most finitely many terms of the sequence lie outside the set.
In terms of absorption we can say various things about the relationships between $(x_n)$, $s$ and $S$. These standard facts are usually expressed using more standard terminology, e.g., in terms of a sequence “eventually lying within a set” or, for non-absorption, infinitely many terms of the sequence lying outside a set.
Let $a$ and $b$ be extended real numbers.
1. If $a < s$, then $(a,{+\infty}]$ absorbs the sequence $(x_n)$.
2. If $a > s$, then $\null[a,{+\infty}]$ does not absorb $(x_n)$.
3. If $b>S$, then $\null[{-\infty},b)$ absorbs $(x_n)$.
4. If $b < S$, then $\null[{-\infty},b]$ does not absorb $(x_n)$.
5. If $\null [s,S] \subseteq (a,b)$ then $(a,b)$ absorbs $(x_n)$.
6. If $\null[a,b]$ is a proper subset of $\null[s,S]$, then $\null[a,b]$ does not absorb $(x_n)$.
Of course, we do not know whether or not one or both of $\null[s,S]$ and $(s,S)$ absorb $(x_n)$. However, it is true that $\null[s,S]$ is equal to the intersection of all closed extended-real intervals which absorb the sequence $(x_n)$.
Note added 4/12/09: Note that condition 6 in the above list is not strong enough to recover much information in situations where $\null [s,S]$ itself fails to absorb the sequence. For example, for the sequence $x_n = -1/n$, the interval $\null[{-1},0]$ satisfies conditions 5 and 6 above, without being equal to $\null[s,S]$.
As mentioned above, all of these statements may be expressed using more standard terminology. Is the language of absorption helpful here?
In due course, I plan to return to this topic in the setting of function limits rather than sequences. This will then connect up with continuity and semicontinuity of functions.
Joel Feinstein
### 14 responses to “An application of absorption to teaching lim inf and lim sup (sequences)”
1. Greetings.
I am an undergraduate student. For me, the following explanation is useful: “What I think we would like students to understand is that, for large n, x_n is “almost” in [s, S], and there is no strictly smaller closed interval for which this is true.”
2. Effective use of "almost always" and "infinitely often" (probability language) did a lot for me when I first encountered these.
3. I notice that even with the language of absorption it is not wholly easy to say what the lim sup of a sequence is, at least if you regard quantifiers as difficult. You have to say that $(-\infty,a]$ absorbs for every $a>s$ and fails to absorb for any $a<s$.
One could imagine getting round this problem as follows. First let us say that an interval $\null[a,b]$ almost absorbs a sequence if $\null[a-\delta,b+\delta]$ absorbs the sequence for every $\delta>0$. (Note that this introduces just one quantifier once they are happy with the absorption concept.) Next, let us say that an interval (or more general set) $S$ tempts a sequence $(a_n)$ if $a_n\in S$ for infinitely many $n$. Finally, let us say that an interval $\null[a,b]$ almost tempts a sequence if $\null[a-\delta,b+\delta]$ tempts that sequence for every $\delta>0$.
Then $a$ is the lim sup of a sequence if $(-\infty,a]$ almost absorbs the sequence and $\null[a,\infty)$ almost tempts it.
The advantage of the tempts concept is that it doesn’t force you to think what it means for a set not to absorb a sequence. Of course, one would prove a lemma to the effect that $S$ absorbs a sequence if and only if the complement of $S$ does not tempt it.
The disadvantage is of course that it introduces yet more nonstandard terminology.
4. Thanks for those suggestions Tim!
It is not easy to balance the advantages and disadvantages of introducing non-standard terminology. I’m trying not to introduce too much.
Of course, a set $A$ tempts a sequence $(x_n)$ if and only if some subsequence of $(x_n)$ lies entirely within the set $A$, and this is true if and only if the complement of $A$ does not absorb $(x_n)$. (I set the last part of that as an exercise for the second-year students.) So, it may be possible to get the students to understand “does not absorb” without using extra terminology. I do find tempting tempting though!
Joel
5. Working in the real line:
If you want to do without the ‘almost’ concepts, then $a$ is the lim sup of the sequence if and only if, for all $\varepsilon>0$, the interval $({-\infty},a+\varepsilon)$ absorbs the sequence and the interval $(a-\varepsilon,+\infty)$ tempts it.
(Or you can use $({-\infty},a+\varepsilon]$ and $\null[a-\varepsilon,+\infty)$ if you wish.)
If we are working in the extended real line, the definitions proposed by Tim need some minor modifications. For a start, we need to work with intervals $\null[{-\infty},a]$ instead of $({-\infty},a]$ (etc.). But also, in the case where $a={-\infty}$, we need an appropriate definition of what it means for the interval $\null[{-\infty},a]=\{{-\infty}\}$ to almost absorb (or almost tempt) our sequence. We can’t use ${-\infty}+\delta$, because that is still ${-\infty}$. For ‘almost absorbs’, we need to say instead that, for all positive real numbers M, $\null[-\infty,-M]$ absorbs the sequence (or some equivalent formulation of this). Obviously we need to do something similar for the interval $\null[+\infty,+\infty] = \{+\infty\}$.
For my next comment, let’s assume instead that we are working with bounded sequences of real numbers, so that we can safely work in the real line.
It might be nice to define what it means for a general subset $E$ of the real line to almost absorb or almost tempt a sequence $(x_n)$.
Let $E$ be a non-empty subset of the real line. Which of the following is the best definition of $E$ almost absorbs $(x_n)$?
– Every open superset of $E$ absorbs the sequence $(x_n)$.
– For all $\varepsilon>0$, the set $\{y \in \mathbb{R}: \textrm{dist}(y,E)<\varepsilon\}$ absorbs the sequence $(x_n)$.
– For all $\varepsilon>0$, the set $\{y \in \mathbb{R}: \textrm{dist}(y,E)\leq\varepsilon\}$ absorbs the sequence $(x_n)$.
The second and third of these are easily seen to be equivalent.
For closed intervals (which are, perhaps, what we care most about), all three are equivalent. Otherwise, they differ (even for closed sets).
My initial instinct was to prefer the first definition, in view of its topological nature. But:
(a) I suspect that the others are easier for students to think about and to check;
(b) for open sets $E$, the first definition is probably not what we want. Perhaps ‘every open superset of the closure of $E$ …’ would be a better attempt, but this is getting messy.
Whichever version you go for, if you set $s=\liminf_{n\to \infty}x_n$ and $S=\limsup_{n\to \infty}x_n$ (as in the original post), then $\null[s,S]$ is the minimum closed interval which almost absorbs the sequence. This also works for general sequences in the extended real line, provided that you use an appropriate definition of ‘almost absorbs’. As mentioned above, some care is needed with intervals which have one (or both) endpoints equal to $\pm \infty$.
Joel Feinstein 5/2/09
• I don’t understand why this post and its comments keep getting corrupted 😦
Joel
3/2/10
6. rose
I am an undergraduate student. The explanation of limsup and liminf is very useful.
The language is simple and clear, but we need more examples. If more examples are published, that will be fantastic.
• Examples are, of course, crucial!
Let’s look at some specific illustrative examples here.
I will give each example in a separate comment, and I will work with bounded sequences in the real line, so that there are no issues with $\pm\infty$.
First let’s look at one of the most standard oscillating sequences,
$x_n=(-1)^n$,
i.e. the sequence ${-1},1,{-1},1,\dots$.
Since every term of this sequence is in the closed interval
$\null[{-1},1]$, that closed interval already absorbs the sequence (not required), and hence also almost absorbs it (as required).
However, since both ${-1}$ and $1$ occur infinitely often, no closed interval strictly smaller than $\null[{-1},1]$ almost absorbs the sequence.
In terms of “tempts” and “almost tempts”, both
$(-\infty,{-1}]$ and $\null[1,\infty)$ tempt the sequence (not required) and hence also almost tempt it (as required).
So we have found that the closed interval $\null[{-1},1]$ has the required properties: it is the minimum closed interval which almost absorbs the sequence.
Thus $\liminf_{n\to\infty} x_n = {-1}$ and
$\limsup_{n\to\infty} x_n = 1$.
Exercise:
Investigate this sequence using the standard definitions of $\liminf$ and $\limsup$.
Perhaps it is better to work out the $\liminf$ and $\limsup$ separately, rather than focussing on the closed interval $\null[s,S]$ as I have here?
In this case, following Tim’s suggestions, you could note (as above) that $({-\infty},{-1}]$ almost tempts the sequence (in this case it actually does tempt the sequence), and that $\null [{-1},\infty)$ almost absorbs the sequence (in this case, it actually does absorb the sequence), so that tells us that
$\liminf_{n\to\infty} x_n = {-1}$.
Similarly, $\null[1,\infty)$ almost tempts the sequence (in this case it actually does tempt the sequence), and $({-\infty},{1}]$ almost absorbs the sequence (in this case, it actually does absorb the sequence), so that tells us that $\limsup_{n\to\infty} x_n = 1$.
Now that sequence is not very interesting. So in my next comment I’ll look at a slightly more interesting sequence.
Joel
December 4 2009
7. Second specific example on $\liminf$ and $\limsup$
For our second example, let us modify the previous example slightly, and consider
$x_n = (-1)^n (n+1)/n = (-1)^n(1 + \frac{1}{n}).$
Note here that $|x_n| = 1 + \frac{1}{n} > 1$, and that $|x_n|\to 1$ as $n \to \infty$.
So, $x_n \notin [{-1},1]$, but, for large $n$, $x_n$ is close to that closed interval. In fact, for all $n$, we have $x_n \in [{-(1 + \frac{1}{n})},1 + \frac{1}{n}]$.
We may suspect that $\null [{-1},1]$ is the “important” closed interval here.
We could really do with a good name for the interval $\null[\liminf_{n\to\infty}x_n,\limsup_{n\to\infty}x_n]$. For now, let me call that interval the limmy closed interval for the sequence.
OK, so we suspect that our limmy closed interval is $\null [{-1},1]$. Let’s check this carefully.
Suppose that $\null [{-1},1] \subseteq (a,b)$ for some real numbers $a$ and $b$.
Of course, this just means that $a< -1$ and $b > 1$. Then, for large $n$ we have both
${-(1 + \frac{1}{n})} > a$ and $1 + \frac{1}{n} < b$, and hence we certainly have
$x_n \in (a,b)$. This shows that $(a,b)$ absorbs the sequence $(x_n)$.
We have now established that $\null [{-1},1]$ almost absorbs the sequence $(x_n)$. However, as in the previous example, both $({-\infty},{-1}]$ and $[1,\infty)$ almost tempt the sequence (in this example, they both actually tempt the sequence). It follows, as before, that $\null [{-1},1]$ really is the minimum closed interval which almost absorbs the sequence. Thus $\null [{-1},1]$ is the limmy closed interval, and we (again) have $\liminf_{n\to\infty}x_n = -1$ and $\limsup_{n\to\infty}x_n = 1$.
Again, you can calculate $\liminf_{n\to\infty} x_n$ and $\limsup_{n\to\infty} x_n$ separately as in the previous example, either directly from the standard definition , or by showing that:
• $({-\infty},{-1}]$ almost tempts the sequence and $\null[{-1},\infty)$ almost absorbs the sequence, so $\liminf_{n\to\infty}x_n = -1$;
• $\null[1,\infty)$ almost tempts the sequence and $({-\infty},1]$ almost absorbs the sequence, so $\limsup_{n\to\infty}x_n = 1$.
Here are some further comments which may help students to understand these examples.
Here, $1$ is the least real number $b$ such that $({-\infty},b]$ almost absorbs the sequence.
Equivalently, $1$ is the infimum of all the real numbers $b$ such that $({-\infty},b]$ absorbs the sequence.
Similarly, $-1$ is the greatest real number $a$ such that $\null[a,\infty)$ almost absorbs the sequence.
Equivalently, $-1$ is the supremum of all the real numbers $a$ such that $\null[a,\infty)$ absorbs the sequence.
Joel
December 4 2009
8. At this point, let me remind the reader that, for a bounded sequence of real numbers $(x_n)$, $(x_n)$ converges if and only if
$\liminf_{n\to\infty} x_n = \limsup_{n\to\infty} x_n$.
In this case, we also have
$\lim_{n \to \infty} x_n = \liminf_{n\to\infty} x_n = \limsup_{n\to\infty} x_n$.
This means that you can take any of your favourite convergent sequences, and write down their $\liminf$ and $\limsup$ immediately. (Given that I mentioned boundedness above, you may wish to recall that every convergent sequence of real numbers is bounded.)
For example, with $x_n = \frac{2n^2+4n+1}{3n^2+7}$, it is a standard exercise to check that $\lim_{n\to\infty} x_n = 2/3$.
Thus we also have
$\liminf_{n\to\infty} x_n = \limsup_{n\to\infty} x_n = 2/3$
here.
If you work in the extended real line instead, with appropriate definitions, then you no longer need to worry about boundedness.
Here I think that it is amusing to note that, in the extended real line, the bizarre statement
$n \to \infty$ as $n \to \infty$
actually has some content!
Joel Feinstein
January 3 2010
9. Elementary remarks and exercises
Let ${\bf x}=(x_n)$ and ${\bf y}=(y_n)$ be bounded sequences of real numbers. We can use obvious (coordinatewise) algebraic operations to define $-{\bf x}$, ${\bf x} + {\bf y}$, etc. The following four facts are then standard (but we shall discuss them further below):
$\limsup_{n\to\infty} (-x_n) = - \liminf_{n\to\infty} x_n\,;$
$\liminf_{n\to\infty} (-x_n) = - \limsup_{n\to\infty} x_n\,;$
$\limsup_{n\to\infty} (x_n+y_n) \leq \limsup_{n\to\infty} x_n + \limsup_{n\to\infty} y_n\,;$
$\liminf_{n\to\infty} (x_n+y_n) \geq \liminf_{n\to\infty} x_n + \liminf_{n\to\infty} y_n\,.$
In order to discuss these further, let us introduce some (temporary?) notation for the limmy interval of the (bounded) sequences ${\bf x}=(x_n)$, say
$L[{\bf x}] = L[(x_n)] = [\liminf_{n\to \infty} x_n,\limsup_{n\to\infty} x_n]$.
Recall our discussion from earlier comments:
$L[{\bf x}] = L[(x_n)]$ is the minimum closed interval which almost absorbs the sequence $(x_n)$.
For sets $A$ and $B$ of real numbers, we define $-A$ and $A+B$ by
${-A}= \{-x:x\in A\}$
and
$A+B =\{a+b:a \in A,b \in B\}\,.$
Then it is a very easy exercise to check that the set $A$ absorbs/almost absorbs/tempts/almost tempts the sequence ${\bf x}$ if and only if the set $-A$ absorbs/almost absorbs/tempts/almost tempts the sequence $-{\bf x}$.
Similarly, with a little more work, you can see that if $A$ absorbs/almost absorbs ${\bf x}$ and $B$ absorbs/almost absorbs ${\bf y}$ then $A+B$ absorbs/almost absorbs ${\bf x} +{\bf y}$.
Note, however, that the corresponding statement for tempting is false for the sum.
Exercise: give a counterexample.
Armed with these facts, we can now establish the four standard facts mentioned at the start of this comment. In terms of limmy intervals, these take the following form.
$L[{-\bf x}] = {-L[{\bf x}]}$
and
$L[{\bf x + y}] \subseteq L[{\bf x}] + L[{\bf y}]\,.$
(This latter inclusion may be strict.)
I am NOT claiming that this is the quickest way to prove these standard facts! However, it may help to provide greater understanding of them.
Warning: The results for sums need more care when working in the extended real line, where sums are not always defined.
Exercise: How much of the above remains valid when you work in the extended real line?
Joel
January 8 2010
10. ipk
Thanks, this is slowly starting to make sense. How do limsup and liminf for a sequence of sets apply to probability theory? Any examples would be much appreciated.
• The two measure theory/probability theory results that spring to mind here are Fatou’s lemma and the Borel-Cantelli lemma. Because you are dealing with sequences of sets instead of sequences of points, there are some subtle differences. However, you can make a connection if you look at the indicator functions of the sets involved and take the pointwise limsup/liminf.
I’ll come back to say more about this when I have finished my large pile of marking!
Joel
January 20 2010
11. ipk
Great, thanks; I look forward to it.
A putative new idea for AI control; index here.
This presents one way of implementing the indifference-based correlations of these posts.
Let u be a utility function, a map from worlds to real numbers. An expected utility maximiser considering whether to produce output Y, looks at the expected utility
Σw u(w)P(w|Y).
We now assume that there is another random variable X in the world, and we want the AI to be indifferent to worlds where Y≠X. We also want it to be indifferent to worlds where Z=0. Then it will assess the value of output Y as:
Σw u(w)P(w|X=Y,Z=1,Y).
Now, the idea of the setup was to ensure that Z=1 would erase the output Y so that it was never read. Hence P(w|Z=1,Y) = P(w|Z=1), and the expression above simplifies to:
Σw u(w)P(w|X=Y).
Therefore the AI will chose the Y that maximises the (conditional) expected utility of u if X=Y. To get the full version of the initial post, you need to define some function f of Y and modify this to
Σw u(w)P(w|X=Y) + f(Y).
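A toy numerical sketch of this maximisation (all worlds, probabilities and the function f here are made up for illustration):

```python
# Choose the output Y maximising sum_w u(w) P(w | X = Y) + f(Y).
worlds = ["w1", "w2"]
u = {"w1": 1.0, "w2": 0.0}            # utility of each world

# Hypothetical conditionals P(w | X = Y) for two candidate outputs.
p_given_XY = {
    "A": {"w1": 0.9, "w2": 0.1},
    "B": {"w1": 0.2, "w2": 0.8},
}
f = {"A": 0.0, "B": 0.05}             # the extra term f(Y)

def value(Y):
    return sum(u[w] * p_given_XY[Y][w] for w in worlds) + f[Y]

best = max(p_given_XY, key=value)
print(best, value(best))              # the output with highest conditional EU
```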
The problem was that you had a cycle in your tf tree. I found it by running roswtf, which informed me that there were cycles and that both robot_state_publisher and camera_base_link were publishing the same transforms.
After changing the parameter publish_tf of 3dsensor.launch to false, the cycle was removed and the problem no longer occurs.
# Applications of Morley's Categoricity Theorem
I just attended a lecture by Rami Grossberg and he mentioned that he is not aware of any applications of Morley's Categoricity Theorem. This is exactly my question.
Question: Do you know of any applications of Morley's Categoricity Theorem outside of Logic?
Morley's Categoricity Theorem If $T$ is a first-order theory in a countable vocabulary and $T$ is categorical in one uncountable cardinal, then it is categorical in all uncountable cardinals.
• A good place to look would be examples of objects with uncountable cardinalities other than continuum: mathoverflow.net/questions/44705/… – Matt F. Sep 30 '16 at 14:42
• @MattF. I will have to look at Charles Staats' answer under the question you linked. I fail to see why structures of size continuum (or less, if CH fails) are not good examples for Morley's Theorem. – Ioannis Souldatos Sep 30 '16 at 15:41
• The theorem needs two uncountable cardinalities. It is not a requirement that one be larger than the continuum -- but an example where one cardinality is continuum and one is provably less than continuum would be interesting enough to show a contradiction in ZFC. – Matt F. Sep 30 '16 at 15:56
• @MattF. "to show a contradiction in ZFC" wait, what? Can you explain what you mean? – Noah Schweber Sep 30 '16 at 21:33
• This question and answer seem relevant. – Alex Kruckman Oct 6 '16 at 13:17 |
# Package com.opengamma.strata.market.amount
Defines representations of amounts typically used as result types.
|
# Ernie and the Uneconomical Flat-breads
When I arrived at Ernie's place last weekend I had some exciting news for him. "Did you know," I announced, "that a new Kzijekistanian fast-food shop has just opened in town?". "I did," Ernie replied, "because this just arrived in the letter-box," and he held up a glossy flyer that announced:

$10.00 per piece. Free delivery*

In accordance with regulations of the Kzijekistanian Ethnic Food Committee (KEFOC), each flat-bread is precisely 1.000 m in diameter and will be delivered to your home on a square cardboard tray** designed to keep the bread as crisp and fresh as possible. (Note: dimensions of the tray will be the smallest possible to minimize any over-lapping or under-lapping of your pre-cut*** flat-bread.) (Note: Flat-breads will only be placed one layer deep on the tray.)

"I ate those when I was visiting Kzijekistan," said Ernie, "very tasty - and $10 seems to be a very fair price too!"
I scanned down the flyer to the small print:
*Free delivery within 3 km of the shop: In accordance with the wishes of the KEFOC our company has made a permanent arrangement with the local Cargo-Bike Appreciation Club. In exchange for bicycle-delivery of flat-breads, the KEFOC will provide free jerseys and accommodation for club members competing at the Annual National Cargo-Bike Olympic Trials. Your orders will help them in training so they can bring home this year's ANCBOT cup.
"That sounds like a noble cause, shall we order one for lunch?", Ernie asked.
I scanned a little further down the flyer:
**In accordance with the demands of the KEFOC, our company has declared itself to be carbon-neutral. In the interests of minimizing packaging waste, there will be a surcharge of one cent per square cm of cardboard tray that is not covered by flat-bread (surcharge rounded down to the nearest cent).
"But that is preposterous", said Ernie, "an un-cut flat-bread would cost an extra $21.46 in packing surcharge! There is no way I would pay that much for one". I scanned down a little further: ***As directed by existing KEFOC regulations flat-breads can be pre-cut and optimally re-arranged to fit the smallest possible square cardboard packing tray. Each cut**** must be straight, must reach from circumference to circumference, and all cuts must be made before any pieces are moved or re-arranged. "Well that solves the problem" I replied (feeling very proud of myself), "all we need to do is order a flat-bread with 99 evenly spaced vertical cuts and 99 evenly spaced horizontal cuts. The little 1 cm squares (plus extra edge bits) would certainly fit into a box no more than 90 cm square..." (I did a quick calculation on the back of an envelope) "...so we wouldn't be paying more than$2.46 for the packaging".
"That is a ridiculous solution!" Ernie replied scathingly, "Firstly, I don't want lots of tiny splinters of flat-bread - I want nice big bits wherever practical, so the minimum number of cuts would be preferable, secondly, the packing charges would still be too much, and thirdly you didn't read the final bit of small print."
I scanned down to the end of the document:
****in agreement with KEFOC edicts, the company must charge a cutting levy of one cent per cut.
"So you would be adding an extra $1.98 just in cutting charges" Ernie explained. "To be honest, I wouldn't be happy eating it unless the total cost of extras (packing surcharge plus cutting levy) made up no more than 10% of the total bill." In the end we decided to order pizza instead. Now I know Ernie does love Kzijekistanian flat-bread and it would be great to surprise him with a home delivery next weekend. But I know he won't be happy if it is too expensive - even if someone else is paying the bill. Can anyone think of a way to cut the bread that will meet with Ernie's requirements? Hint 1: Looks like a first hint is in order. Ernie and I did manage to find a solution (flat-bread was lovely and tasty), in which the positioning of the pieces of bread in the box had at least two planes of mirror symmetry (when looked at from directly above the box of course). Hint 2: The round flat-bread, after it has been cut up, but before any pieces have been moved, has exactly the same rotational symmetry and mirror symmetry as it does after the pieces are rearranged and placed in the square box. • do you get a refund if they are late? – JMP Aug 9 '17 at 5:38 • @JonMarkPerry Refund? They'd probably charge you. :P – Lawrence Aug 9 '17 at 8:07 • I presume that although the last sentence talks about "requirements" and the first bolded sentence merely says "would be preferable" that an answer must come with a proof that no possible dissection with fewer cuts meets the surcharge limit? – Peter Taylor Aug 9 '17 at 18:49 • I think that if two (or more) people present solutions that match Ernie's price limit, I would just choose the one that required the minimum number of cuts. – Penguino Aug 9 '17 at 21:51 • Well, I think a diagram from my side would be better but right now it seems the problem can be thought of as an 'efficient way of Squaring a Circle' which is a classic old problem of Geometry and an initial investigation suggests an area of ( (Pi)/4 - 1/4) sq.m would be wasted of the flat-bread. And therefore the size of the box would be a square of length 1/(Square root (2)) or apprx. equal to 0.7 m – Mea Culpa Nay Aug 12 '17 at 15:36 ## 4 Answers Finally, after 5 months of attempting this on and off, I have a solution that meets all of the requirements! I'll go into a bit more depth below, but here's where the cuts are: These pieces can be rearranged: To fit in an 89x89cm box. While there are 48 pieces total, we'll treat some connected pieces as one, to make rearranging easier: The total cost of this method can be calculated like so: Wasted area: $$89^2 cm^2 - \pi \times 50^2 cm^2 = 67.01 cm^2$$ So 67 cents from unused area, with 14 cuts, leads to 81 cents in extra charges. This is less than 10% of the total cost! In depth description: Box Dimensions: I found the area (that I used) for the box using a spreadsheet, where the first column has the desired edge length of the box, and the subsequent columns show area, wasted area, cost due to wasted area, and the number of cuts allowed with the given dimensions. I chose 89 cm per edge because it was a whole number that would allow me to make several cuts to get the pieces to fit. 
The cuts are at:

Horizontal:
- 1 cm from the top
- 5.5 cm from the top
- Directly centered
- 5.5 cm from the bottom
- 1 cm from the bottom

Vertical:
- 1 cm from the left
- 5.5 cm from the left
- Directly centered
- 5.5 cm from the right
- 1 cm from the right

Diagonal:
- At 45 degrees from horizontal, such that the following lengths are equal.

This meant that the flatbread was split into quarters, and each quarter of the flatbread had to fit into a quarter of the box. The centered vertical and horizontal cuts quartered the flatbread, and the vertical and horizontal cuts 5.5 cm in made the majority of each quarter fit in its quarter of the box. The rest of the pieces had to be cut down. I liked how the right angle filled the edge of the box so nicely, so I made another one with the 45 degree cuts, spaced so that the entire perimeter of the quarter of the box would have flatbread on it. Unfortunately, those pesky pieces were still too big, so I added the extra cuts at 1 cm from each edge, to make the pieces thinner. These were able to just barely fit without overlapping.

I don't know if this solution is entirely satisfying to me, because I had to rely on a graphics editor, as opposed to geometry equations, to fit everything in place, so I don't have a concrete proof that this works, but I'm still fairly confident that it does.

• Excellent solution. I have added Ernie's one, which was a little more expensive but in which the cuts themselves were more symmetrical, and fewer cuts were needed. – Penguino Jan 31 '18 at 0:12

My first idea was to cut the flatbread into equal wedges and rearrange them, alternating which end is pointing up. As you can imagine, there's still a fair bit of inefficiency in the packing, especially at the edges. A simple adjustment greatly improves this, without changing much of the math: by making one extra cut, we can greatly decrease the length of the required rectangle. Except for very narrow slices, this is well worth the 1 cent.

We can find the area that this type of box would take by finding the width and height of the triangle defined by each wedge, and adding them together. If there are n + 1 cuts (because of the extra, final cut), we're left with 2n wedges. Given the labeled dimensions above, the width of the resulting rectangle would be

$$W = 2n \cdot w$$

and the height would be

$$H = 2r - h.$$

To find h and w, we use a bit of trig. We find the angle a by dividing the circle into 2n equal parts:

$$a = 360 / (4n) = 90 / n$$

Then,

$$h = r\cos(a), \qquad w = r\sin(a)$$

The total area of box required is found by combining the equations, to get:

$$A = (2nr\sin(90/n)) \cdot ((2 - \cos(90/n)) \cdot r)$$

The wasted area is just $A - \pi r^2$. We now know enough to find the cost of extras: assuming the area is calculated in $cm^2$, the cost would be

$$\$0.01 \cdot (A - \pi r^2 + n + 1)$$
For the extras to be less than 10 percent of the total cost, we must reduce them to at most \$1.11. This gives us the final inequality:

$$1.11 \ge 0.01 \cdot \left( (2nr\sin(90/n)) \cdot ((2 - \cos(90/n)) \cdot r) - \pi r^2 + n + 1 \right)$$
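A quick numeric scan of this inequality (a sketch; flat-bread radius r = 50 cm, angles in degrees as in the answer, and the surcharge rounded down to whole cents as the flyer specifies):

```python
from math import sin, cos, pi, radians, floor

def extras_cents(n, r=50.0):
    """Packing surcharge + cutting levy, in cents, for 2n wedges (n + 1 cuts)."""
    a = radians(90.0 / n)                      # half-angle of each wedge
    area = (2 * n * r * sin(a)) * ((2 - cos(a)) * r)
    waste = area - pi * r ** 2
    return floor(waste) + (n + 1)

for n in range(2, 15):
    if extras_cents(n) <= 111:                 # extras at most $1.11 (10% of total)
        print(n + 1, "cuts:", extras_cents(n), "cents")
```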
Plugging that equation into an Excel sheet, we find that, using this method, we can meet Ernie's standard with as few as 9 cuts. The cost of extras with this method is \$1.08.

• I was going to post essentially the same answer two days ago and then I realised that the rules say that the box is square. – Peter Taylor Aug 11 '17 at 6:45
• @DrewCamp I guess this type of cut results in at least TWO layers of flat-bread, which violates the condition stated in the last line of the first paragraph of the puzzle. – Mea Culpa Nay Aug 11 '17 at 14:00
• @PeterTaylor Somehow I must have skimmed over that requirement... Back to the drawing board! – DqwertyC Aug 11 '17 at 16:14
• Yep, the box must be square. But as for "two layers", I am not sure if you have misinterpreted me or if I have misunderstood you. This just means that you can't stack pieces on top of each other. – Penguino Aug 13 '17 at 21:23

After a couple of months Ernie finally gave up waiting for me to find a solution and sketched out the following so we could actually try the flat-bread at an acceptable price. This just squeaked below Ernie's price limit: the price was 10.00 dollars for the flat-bread, 1.03 dollars for the packaging (rounded down from 103.907 cm² of wasted space), and 0.08 dollars for the cutting. So the total price was 11.11 dollars, of which packaging and cutting makes up 1.11 dollars, or 9.991% of the total.

Looking at DqwertyC's solution, Ernie's is a little more expensive, but when I quizzed him on it, he told me that he chose it because it met his requirements and he liked the symmetry (also it gave him a single unbroken piece making up almost 50% of the total area, plus four more pieces of almost 10% each!). But he agreed that DqwertyC should win the tick.

Of course it all became irrelevant when he visited the shop again a few days later to discover that the price has gone up to $11.00, but now the flat-breads are sold in circular boxes exactly 1 m in diameter (imagine a very flat hat-box) for no extra charge. So Ernie is pleased that his favorite snack is cheaper, comes in one piece, and also claims that the used boxes are a perfect size and shape for archery targets (more on that later...).
I don't know what to write that shouldn't be under a spoiler, so:
You totally can order your food.
Because
Just get them to cut the pizza into 1.84 cm strips. They'll then fit end to end in a box that's 1.84 cm x 85.4 cm (it probably would fit in a smaller box; I got the length by just adding all the rectangles it takes to hold each strip). This will result in wasted cardboard of 47 sq cm and 53 cuts. Therefore, you spend $1, which is 10%, the max you are allowed to spend.
• Unfortunately, the rules require that the box be square. – Michael Seifert Aug 11 '17 at 14:39 |
## Calculus 10th Edition
$f(x)$ has an inverse function on the given interval.
There is a cusp at $x = -2$, so we differentiate $f(x)$ on $(-\infty, -2]$. There, $f'(x) = -1$, which is always negative on $(-\infty, -2]$, so $f(x)$ is strictly monotonic and therefore has an inverse on that interval. |
# How to Determine the Reaction Order
## General Chemistry
We mentioned in the previous post that the order of a reaction can be determined only by experiment. Most often, this experiment consists of measuring the initial rate of the reaction by changing the concentration of the reactant and monitoring how it affects the rate.
For example, the rate law for a hypothetical reaction where molecule A transforms into products can be written as:
A → Products
Rate = k[A]n
where k is the rate constant and n is the reaction order.
Our objective is to determine the reaction order by calculating the n from a set of experiments. Keep in mind that:
• If n = 0, the reaction is zero-order, and the rate is independent of the concentration of A.
• If n = 1, the reaction is first-order, and the rate is directly proportional to the concentration of A.
• If n = 2, the reaction is second-order, and the rate is proportional to the square of the concentration of A.
Now, suppose we run three experiments and the following data is obtained for the concentration-rate correlation:

| Exp. | [A], M | Initial rate, M/s |
|------|--------|-------------------|
| 1 | 0.10 | 0.025 |
| 2 | 0.20 | 0.050 |
| 3 | 0.40 | 0.100 |

In every experiment, the concentration of A is doubled, and what we see is that the rate of the reaction doubles as well. Therefore, the initial rate is directly proportional to the initial concentration, and thus, we have a first-order reaction:
Rate = k[A]1
If it was a zero-order reaction, the following data for the concentration-rate relationship would have been obtained:
The data for a zero-order reaction indicates that the rate does not depend on the concentration of reactants.
For a second-order reaction, doubling the concentration quadruples the reaction rate, and therefore, we would expect the following data:
If the numbers are not obvious for determining how the rate changes with concentration, you can pick the data from any set of two experiments, write the rate law, and divide them to see how the rate changed.
For example, going back to the data for a first-order reaction, we can divide the rate of experiments 1 and 2:
$$\frac{Rate\ 2}{Rate\ 1} = \frac{k[\mathrm{A}_2]^n}{k[\mathrm{A}_1]^n} = \frac{0.050\ M/s}{0.025\ M/s}$$

$$\frac{(0.20)^n}{(0.10)^n} = 2$$

2ⁿ = 2, therefore,

n = 1
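A minimal sketch of the same calculation in Python (the numbers are the ones from experiments 1 and 2 above; the log of the rate ratio gives the order directly):

```python
from math import log

# rate2/rate1 = ([A]2/[A]1)^n  =>  n = log(rate2/rate1) / log([A]2/[A]1)
rate1, rate2 = 0.025, 0.050   # M/s
A1, A2 = 0.10, 0.20           # M
n = log(rate2 / rate1) / log(A2 / A1)
print(n)  # 1.0 -> first-order
```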
# Determining the Value of the Rate Constant
To determine the value of the rate constant, write the rate law expression:
Rate = k[A]
Now, you can pick the data from any experiment and plug the numbers into the rate law expression. Let’s use the data from experiment 1.
Rate1 = k[A1]
$$k = \frac{\text{rate}_1}{[\mathrm{A}_1]} = \frac{0.025\ M/s}{0.10\ M} = 0.25\ s^{-1}$$
Let’s now do another example with a real reaction between carbon dioxide and hydrogen, and determine the reaction order with respect to each reactant, the overall order, and the value of the rate constant.
Example:
Carbon dioxide, CO2, reacts with hydrogen to give methanol (CH3OH), and water.
CO2(g) + 3H2(g) ⇆ CH3OH(g) + H2O(g)
In a series of experiments, the following initial rates of disappearance of CO2 were obtained:
| Exp. | [CO2], M | [H2], M | Initial rate, M/s |
|------|----------|---------|-------------------|
| 1 | 0.640 | 0.220 | 2.7 × 10⁻³ |
| 2 | 1.28 | 0.220 | 1.08 × 10⁻² |
| 3 | 0.640 | 0.440 | 5.4 × 10⁻³ |
Determine the rate law and calculate the value of the rate constant for this reaction.
Solution:
To determine the overall reaction order, we need to determine it with respect to both reactants. Let’s first determine the order in CO2. Find two experiments where the concentration of H2 is kept constant while the concentration of CO2 is changed. In experiments 1 and 2, the concentration of CO2 is doubled from 0.640 M to 1.28 M while the concentration of H2 is kept at 0.220 M. We see from the table that doubling the concentration of CO2 quadruples the rate of the reaction (1.08 × 10⁻² ÷ 2.7 × 10⁻³ = 4). Therefore, the reaction is second-order in CO2.
Now, let’s find two experiments where the concentration of CO2 is kept constant while that of H2 is changed. In experiments 1 and 3, the concentration of CO2 is kept at 0.640 M while the concentration of H2 is doubled from 0.220 M to 0.440 M. We see from the table that doubling the concentration of H2 doubles the reaction rate (5.4 × 10⁻³ ÷ 2.7 × 10⁻³ = 2). Therefore, the reaction is first-order in H2.
The rate law, therefore, is:
Rate = k[CO2]2[H2]
And the overall order of the reaction is 2+1 = 3 – it is a third-order reaction.
To calculate the value of the rate constant, use the numbers from any experiment for the following equation:
$$k = \frac{\text{rate}}{[\mathrm{CO_2}]^2[\mathrm{H_2}]}$$

$$k = \frac{2.7 \times 10^{-3}\ M/s}{(0.640\ M)^2 (0.220\ M)} = 3.00 \times 10^{-2}\ M^{-2}s^{-1}$$
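As a quick cross-check of this example, the two orders and the rate constant can be recomputed directly from the data table (a sketch; the numbers are copied from the table above):

```python
from math import log

# Orders: exp 1 vs 2 varies [CO2]; exp 1 vs 3 varies [H2].
r1, r2, r3 = 2.7e-3, 1.08e-2, 5.4e-3      # initial rates, M/s
n_CO2 = log(r2 / r1) / log(1.28 / 0.640)  # -> 2.0 (second-order in CO2)
n_H2 = log(r3 / r1) / log(0.440 / 0.220)  # -> 1.0 (first-order in H2)

# Rate constant from experiment 1: k = rate / ([CO2]^2 [H2])
k = r1 / (0.640 ** 2 * 0.220)             # -> ~3.00e-2 M^-2 s^-1
print(n_CO2, n_H2, k)
```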
#### Practice
1.
Iron(II) ion is oxidized by hydrogen peroxide in an acidic solution.
2Fe2+ (aq) + H2O2(aq) + 2H+(aq) → 2Fe3+(aq) + 2H2O(l)
The rate law for the reaction is determined to be rate = k[H2O2][Fe2+]. The rate constant, at a certain temperature, is 2.56 × 10²⁴/M · s. Calculate the rate of the reaction at this temperature if [H2O2] = 0.48 M and [Fe2+] = 0.070 M.
2.
For the kinetics of the reaction
2NO(g) + Cl2(g) → 2NOCl(g)
The following data were obtained:
| Exp. | [NO], M | [Cl2], M | Initial rate, M/s |
|------|---------|----------|-------------------|
| 1 | 0.25 | 0.35 | 0.68 |
| 2 | 0.25 | 0.70 | 1.36 |
| 3 | 0.50 | 0.70 | 2.72 |
a) What is the reaction order in Cl2 and NO?
b) What is the rate law?
c) What is the value of the rate constant?
3.
The data for the initial rate of the following reaction are listed in the table below:
A + B → C + D
(a) What is the order of reaction with respect to A and to B?
(b) What is the overall reaction order?
(c) What is the value of the rate constant, k?
4.
Consider the reaction
A(g) + B(g) ⇌ C(g)
The following data were obtained at a certain temperature:
| Exp. | [A], M | [B], M | Initial rate, M/s |
|------|--------|--------|-------------------|
| 1 | 2.40 | 3.60 | 4.8 × 10⁻² |
| 2 | 2.40 | 7.20 | 4.8 × 10⁻² |
| 3 | 4.80 | 3.60 | 9.6 × 10⁻² |
Using the data, determine the order of the reaction and calculate the rate constant: |
Calculation of an Electron Beam path when subjected to magnetic and electric fields
Hi all, I have been trying to calculate the displacement of an electron when subjected to electric and magnetic fields; please see the pic below:
I anticipate that the electron will form an ellipse shape. Basically I would like to know how to calculate the major and minor axis of the ellipse.
I know how to calculate the electrostatic deflection of an electron beam, and I know how to calculate the magnetic deflection of an electron beam. I am guessing you have got to combine these two ideas, but I can't figure out how. Hopefully there are some more intelligent people in here than me.
Jon
Look up drift of a charged particle in electric and magnetic fields. Guiding center may also be a useful search term.
I forgot to mention that as well as magnetic and electric fields there will be an accelerating voltage on the anode.
I cannot find much; electromagnetics is not my strongest subject.
You start with the Lorentz force law. Your setup has created an electric field perpendicular to the magnetic field. You can create a coordinate system to align one coordinate with the magnetic field, and one with the electric field. It takes some knowledge of solving differential equations to get the solution, but it is possible.
What you will find is that the electron does not get back to where it started. Its path does not form an ellipse. The loop it forms without the electric field in effect drifts perpendicular to the electric field: http://en.wikipedia.org/wiki/Guiding_center
I have seen this in the videos demonstrating the effect of the Helmholtz coil on an electron beam. The electron beam does not finish at its starting place. Instead it finishes inside the circle - closer to the centre of the circle; in other words it travels in a spiral. But what I meant is: what will happen to the shape of this when an electric field is applied - will it cause the shape to become more elliptical?
Last edited:
I have seen this in the videos demonstrating the effect of the helmholtz coil on an electron beam. The electron beam does not finish at its starting place. Instead it finishes inside the circle - closer to the centre of the circle, in other words it travels in a spiral.......
Actually not. The Lorentz v x B force is always perpendicular to the velocity, so the electron beam goes in a circle. There is no force parallel to the velocity, so no work is done and the beam does not lose energy. See thumbnail photo of electron beam in a Helmholtz coil.
Bob S
Does anyone know how to actually calculate this?
To calculate the orbit of a charged particle in a magnetic field, we equate the Lorentz force and the centripetal force (where e = charge, v = velocity, B = magnetic field, m = rest mass, and R = radius of orbit):

evB = mv²/R

or BR = mv/e, where mv is momentum.

At low energies we can write mv = [2mT]^½ where T is kinetic energy, so we have

BR = [2mT]^½/e Tesla-meters

We can do all of this in eV (electron volts and volts) as follows (where c is the speed of light):

BR = [2mc²T]^½/ec

Example 1) So for electrons with mc² = 511,000 electron volts, we have

BR = [2·511,000·T]^½/c, where c = 3 × 10⁸ meters/sec

and where I have divided electron volts by e to get volts.

So for a T = 300 volt electron beam,

BR = [2·511,000·300]^½/(3 × 10⁸) = 5.84 × 10⁻⁵ Tesla-meters.

So for a field of 10 Gauss (0.001 Tesla), the orbit radius is 0.0584 meters (5.84 cm).

At high (relativistic) energies we use pc = [(mc² + T)² − (mc²)²]^½ = βγmc²

where mc² is the particle rest mass and pc is the momentum in pc (energy) units.

Example 2) For LHC at 7 TeV, mc² = 938.3 MeV and γ = 7460, so

BR = 7 × 10¹²/(3 × 10⁸) = 23,333 Tesla-meters.

BR is sometimes called beam rigidity.

So for R ≈ 4200 meters, B ≈ 5.6 Tesla (these are approximate values). Actual values in LHC arcs at 7 TeV are B = 8.33 Tesla and R = 2802 meters.
This doesn't include the electric field, however.
Bob S
Last edited:
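As a quick numerical cross-check of these beam-rigidity formulas (a sketch; the constants are the rounded values used above, not precise CODATA numbers):

```python
from math import sqrt

# Nonrelativistic beam rigidity: BR = sqrt(2 * mc^2 * T) / c, with T and mc^2 in eV.
c = 3e8           # speed of light, m/s (rounded, as in the post)
mc2 = 511_000.0   # electron rest energy, eV
T = 300.0         # beam kinetic energy, eV (example 1)

BR = sqrt(2 * mc2 * T) / c   # Tesla-meters
B = 0.001                    # 10 gauss, in Tesla
print(BR)      # ~5.84e-5 T·m
print(BR / B)  # orbit radius, ~0.058 m
```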
Including the electric field is harder. I will just write down the major steps of one way here.
$$\vec{F} = q(\vec{E}+\vec{v}\times \vec{B})$$
decompose with B || z, E || y.
$$\frac{d v_x}{dt} = \frac{q}{m}(v_y B)$$
$$\frac{d v_y}{dt} = \frac{q}{m}(E - v_x B)$$
$$\frac{d v_z}{dt} = 0$$
By taking the derivative with time on one and inserting into the other one can decompose the two coupled equations.
also setting frequency to qB/m, the cyclotron frequency
$$\omega = \frac{qB}{m}$$
$$\frac{d^2 v_x}{dt^2} = \omega \frac{d v_y}{dt} = \omega \frac{q}{m}E -\omega^2 v_x$$
re-arrange and it appears as a harmonic oscillator with a "forcing" function:
$$\frac{d^2 v_x}{dt^2} + \omega^2 v_x = \omega \frac{q}{m}E$$
the y component is similar with no "forcing".
$$\frac{d^2 v_y}{dt^2} + \omega^2 v_y = 0$$
Solving the homogeneous terms gives you solution to harmonic oscillator solution for velocity in both x and y. Solving the particular on for the x gives a constant additional velocity in x direction. That is the drift velocity. You can then take the time integral to find the position as a function of time.
[edit: and the general solutions]
$$v_x = A*sin(\omega t) + B*cos(\omega t) + \frac{E}{B}$$
$$v_y = C*sin(\omega t) + D*cos(\omega t)$$
A, B, C, D come from initial conditions. Also, from the original equation you will see that it must be that D = A and C = -B. So...
$$v_x = v_{y0}*sin(\omega t) + (v_{x0} - \frac{E}{B})*cos(\omega t) + \frac{E}{B}$$
$$v_y = -(v_{x0} - \frac{E}{B})*sin(\omega t) + v_{y0}*cos(\omega t)$$
Last edited:
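To see the drift numerically, here is a small integration sketch of the same equations of motion (the field values and step counts are my own illustrative choices); it checks that the x-velocity averaged over whole cyclotron periods comes out to E/B:

```python
import numpy as np

q, m, B, E = -1.6e-19, 9.11e-31, 1e-3, 50.0   # electron; B along z, E along y

def acc(v):
    # Lorentz acceleration: (q/m) * (E + v x B)
    return (q / m) * (np.array([0.0, E, 0.0]) + np.cross(v, [0.0, 0.0, B]))

omega = abs(q) * B / m
dt = (2 * np.pi / omega) / 2000               # 2000 steps per cyclotron period
v = np.zeros(3)
vxs = []
for _ in range(10 * 2000):                    # integrate ten full periods (RK4)
    k1 = acc(v); k2 = acc(v + dt/2*k1); k3 = acc(v + dt/2*k2); k4 = acc(v + dt*k3)
    v = v + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
    vxs.append(v[0])
print(np.mean(vxs), E / B)                    # both ~5.0e4 m/s
```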
Thanks Bob S and kcdodd! I am studying your work now. Thanks again...
Bob S, why do you equate the centripetal force to the Lorentz force? The centripetal force is the force created by the attraction of the anode, and the Lorentz force is the force created by the magnetic field acting on the electron; shouldn't these be added together?
To calculate the orbit of a charged particle in a magnetic field, we equate the Lorentz force and the centripital force (where e = charge, v = velocity, B = magnetic field, m = rest mass, and R = radius of orbit).:
evB = mv2/R
KC, where did you pull these equations from?
$$v_x = A*sin(\omega t) + B*cos(\omega t) + \frac{E}{B}$$
$$v_y = C*sin(\omega t) + D*cos(\omega t)$$
A, B, C, D come from initial conditions. Also, from the original equation you will see that it must be that D = A and C = -B. So...
Thanks again guys for your help... I now understand why the beam will be drifting in a moving spiral. Is that correct?
However, I do not want to create a moving spiral. I want to create a stationary ellipse using magnetic and electric fields. I have come to the conclusion that in order to do this you need to use the following configuration:
Any ideas on how to calculate this? :yuck:
Eternally grateful
Jon
Last edited:
Does anyone have any idea of how to solve the above configuration?
Here is the solution with the electron source (filament) in the center of the positive anode structure, which is the same as a magnetron with a radial electric field and an axial magnetic field. See
Study equations 9.80 through 9.92. Are you proposing opposite polarity dc voltages on each end of each electrode plate?
Bob S
Thank you Bob, much appreciated; I will study the equations this weekend. Yes, I am proposing opposite polarity DC on each end of the electrode plates; the little grey block separating them will be a high-voltage insulator. You need this configuration in order to form an ellipse, and an ellipse is what I need.
Are you proposing opposite polarity dc voltages on each end of each electrode plate?
Bob S
Yeh, unfortunately the link does not fully answer my question. The beam will still drift.
Your proposal to have opposite polarity on each end of each electrode may solve the E x B drift problem. The direction of the drift is in the plane of your illustration (perpendicular to B), and in the vertical direction (perpendicular to E). I have not convinced myself yet that the particle moves toward or away from the center though. The orbit certainly will not be an ellipse.
Bob S
I think it will be an ellipse; if you think about it, the electron picks up no net electron volts.
At the major axis extremity, the electron velocity will be at its maximum. At the minor axis extremity the electron will be at its lowest velocity but will be at its maximum acceleration.
Why do you think the shape won't be an ellipse?
|
# Physical eigenstates of systems of n particles of spins sᵢ?
1. Aug 17, 2016
### tomdodd4598
I am relatively well versed when it comes to systems of spin, or doing the maths for them at least, but am unsure whether all of the {L², Lz, (other required quantum numbers)} basis eigenstates for a general system of n particles of spins sᵢ, where sᵢ is the spin of the ith particle, can actually exist in nature. I am new to the concept and therefore don't know the full ins and outs of the requirement to symmetrise or antisymmetrise wave functions depending on whether you're dealing with bosons or fermions, and I can only imagine this places restrictions on the spins the particles can have. It's also possible that the n particles may contain both bosons and fermions, and in that case I'm even more clueless. I also understand that whether the particles are distinguishable or not plays a major role, and whether, for example, this is assumed or not in the example below.
For example, suppose I had three particles, two of spin 1/2 and one of spin 1. The eigenstates of L² and Lz, |s,m>, are |2,2>, |2,1>, |2,0>, |2,-1>, |2,-2>, |1,1>₁, |1,0>₁, |1,-1>₁, |1,1>₂, |1,0>₂ and |1,-1>₂ (an additional quantum number is needed to distinguish between the |1,m> states).
Of these, which ones could actually exist, or could some groups of them be realised in different scenarios?
2. Aug 17, 2016
### Staff: Mentor
If particles have different spins, then they are definitely not identical, and you can treat them independently.
In the case of the two spin-1/2, you get the "classic" singlet + triplet states. The singlet state combines with all three possible states for the spin-1 particle, giving
\begin{align*} |1,1\rangle_3 &= | 0, 0 \rangle_{1/2} \otimes |1,1\rangle_1 \\ |1,0\rangle_3 &= | 0, 0 \rangle_{1/2} \otimes |1,0\rangle_1 \\ |1,-1\rangle_3 &= | 0, 0 \rangle_{1/2} \otimes |1,-1\rangle_1 \\ \end{align*}
(the index indicates whether it is the 3-body state, the state of the two spin-1/2 particles, or the state of the spin-1).
For the triplet of the spin-1/2, each state of the triplet combines with the three states of the spin-1. According to the rules of addition of angular momenta, for $\mathbf{S} = \mathbf{S}_1 + \mathbf{S}_2$, the allowed values for $S$ are
$$S = S_1 + S_2, S_1 + S_2-1, \ldots, \left| S_1 - S_2 \right|$$
which in this case gives $S = 2, 1, 0$. So the three-body states will be $|2,2\rangle_3$, $|2,1\rangle_3$, $|2,0\rangle_3$, $|2,-1\rangle_3$, $|2,-2\rangle_3$, $|1,1\rangle_3$, $|1,0\rangle_3$, $|1,-1\rangle_3$, $|0,0\rangle_3$. (You were missing that last one in the OP.) These states can be expressed in terms of the spin-1/2 and spin-1 states using the proper Clebsch-Gordan coefficients.
3. Aug 17, 2016
### tomdodd4598
Ah, yes, I did miss the |0,0> state - thanks. I wrote a Mathematica script a month or so ago that can give me the set of orthogonal states, and this is what I get for two spin-1/2 and one spin-1, where, assuming particles 1 and 2 are the spin-1/2 particles and particle 3 is the spin-1 particle,
the first component of the vector is the probability amplitude for finding particle 1 with m=1/2, particle 2 with m=1/2 and particle 3 with m=1,
the second component is the p.a. for finding particle 1 with m=1/2, particle 2 with m=1/2 and particle 3 with m=0,
the third component is the p.a. for finding particle 1 with m=1/2, particle 2 with m=1/2 and particle 3 with m=-1,
the fourth component is the p.a. for finding particle 1 with m=1/2, particle 2 with m=-1/2 and particle 3 with m=1, etc.:
|2,2> = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
|2,1> = [0, 1/√2, 0, 1/2, 0, 0, 1/2, 0, 0, 0, 0, 0]
|2,0> = [0, 0, 1/√6, 0, 1/√3, 0, 0, 1/√3, 0, 1/√6, 0, 0]
|2,-1> = [0, 0, 0, 0, 0, 1/2, 0, 0, 1/2, 0, 1/√2, 0]
|2,-2> = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
...
|0,0> = [0, 0, 1/√3, 0, -1/√6, 0, 0, -1/√6, 0, 1/√3, 0, 0]
So I've found all of the spin eigenstates, but if we now also think about spatial wave functions, if the spin-1/2 particles are identical, the wave function has to be anti-symmetric with respect to swapping the two particles. Doesn't that mean that the only possibilities are that the spatial part is symmetric and the spin part is anti-symmetric or that the spatial part is anti-symmetric and the spin part is symmetric?
4. Aug 18, 2016
### Staff: Mentor
That's correct.
5. Aug 18, 2016
### tomdodd4598
Ok, I see now. I'm assuming then that, when two bosons are exchanged, the whole wave function needs to be symmetric (so the spatial and spin parts need to be either both symmetric or both anti-symmetric with respect to swapping them), and if there are no indistinguishable particles, then there's no restriction of this sort. Thanks :) |
# aliquote
## < a quantity that can be divided into another a whole number of time />
While I'm really impressed with the Eisvogel template for my Org->PDF toolchain (the rendered listings remind me of the Nord theme that I use in my terminal and under Emacs or Vim), I'm also investigating other alternatives. Here is a nice candidate: arabica, but it may well be too much for what I need. And I learned that there had already been some attempts at generating pretty HTML books via Pandoc/Jekyll before Hadley Wickham's Advanced R. |
# Find the Domain and Range of the Real Valued Function f(x) = (ax + b)/(bx − a) - CBSE (Science) Class 11 - Mathematics
Concept: Cartesian Product of Sets
#### Question
Find the domain and range of the real valued function:
(i) $f\left( x \right) = \frac{ax + b}{bx - a}$
#### Solution
(i)
Given:
$f\left( x \right) = \frac{ax + b}{bx - a}$
Domain of f: Clearly, f(x) is a rational function of x, as
$\frac{ax + b}{bx - a}$ is a rational expression.
Clearly, f(x) assumes real values for all x except for those values of x for which (bx − a) = 0, i.e. bx = a.
$\Rightarrow x = \frac{a}{b}$
Hence, domain ( f ) =$R - \left\{ \frac{a}{b} \right\}$
Range of f :
Let f (x) = y ⇒ (ax + b) = y (bx -a)
⇒ (ax + b) = (bxy -ay)
⇒ b + ay = bxy -ax
⇒ b + ay = x(by - a)
$\Rightarrow x = \frac{b + ay}{by - a}$
Clearly, x assumes real values for all y except for those values of y for which (by − a) = 0, i.e. by = a.
$\Rightarrow y = \frac{a}{b}$
Hence, range ( f ) =$R - \left\{ \frac{a}{b} \right\}$
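The same range restriction can be checked symbolically; a small sketch (sympy here is a tooling choice for illustration, not part of the original solution):

```python
from sympy import symbols, solve, simplify

a, b, x, y = symbols('a b x y')

# Solve y = (a*x + b)/(b*x - a) for x; the y at which the solution
# blows up is excluded from the range.
sol = solve((a*x + b)/(b*x - a) - y, x)
print(simplify(sol[0]))  # (a*y + b)/(b*y - a), undefined when by - a = 0, i.e. y = a/b
```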
#### APPEARS IN
RD Sharma Solution for Mathematics Class 11 (2019 to Current)
Chapter 3: Functions
Ex.3.30 | Q: 3.01 | Page no. 18
|
## Perimeter and area: a didactic proposal for elementary school
Centenaro, Grasciele
Type: Undergraduate thesis. Format: application/pdf
## Effects of Selection for Post-weaning Weight on Body Measurements and Scrotal Perimeter of Nellore Males from Sertãozinho (SP)
Cyrillo, Joslaine Noely Dos Santos Goncalves; Razook, Alexander George; De Figueiredo, Leopoldo Andrade; Neto, Luiz Martins Bonilha; Ruggieri, Ana Cláudia; Tonhati, Humberto
Type: Journal article. Pages: 403-412
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq); The objective of this study was to evaluate the indirect effects of selection for post-weaning weight on body measures and scrotal perimeter of 809 Nellore males from selected herds (NeS and NeT) and control herd (NeC), of the Estação Experimental de Zootecnia de Sertãozinho. The statistical analyses were performed by using a sire mixed model where the random source of variation, sires, was nested within herds. The fixed effects were herds, year of performance test (PGP), age of cow and age of the animal as a covariate. The average genetic change for final weight, corrected for 378 days of age (W378), calculated as a deviation from the NeC herd, were 40.2 and 44.3 kg for the NeS and NeT herds, respectively. The correlated changes, for the other traits were, in the same order, 4.5 and 4.5 cm for hip height (HH); 6.2 and 7.0 cm for chest girth (CG); 5.8 and 6.3cm for body length (BL); 2.9 and 2.0 cm for dorsal line length (DL); 1.7 and 2.4 cm for rump length (RL); 1.0 and 1.3 cm for distance between pin bones (DPB); 1.8 and 2.6 cm for distance between hip bones (DHP); and 1.3 and 2.2 cm for scrotal perimeter (SP). The results of this study showed that the direct selection for postweaning weight promoted correlated positive responses in the body dimensions and also in the scrotal perimeter of Nellore males.
## Spatial characterization of wildfire orientation patterns in California
Barros, Ana M.G.; Pereira, J.M.C.; Moritz, Max A.; Stephens, Scott L.
Type: Journal article
Using 100 years of fire perimeter maps, we investigate the existence of geographical patterns in fire orientation across California. We computed fire perimeter orientation, at the watershed level, using principal component analysis. Circular statistics were used to test for the existence of preferential fire perimeter orientations. Where perimeters displayed preferential orientation, we searched for evidence of orographic channeling by comparing mean fire orientation with watershed orientation. Results show that in California, 49% of the burnt area is associated with watersheds, where fires displayed preferential orientation. From these, 25% of the burnt area is aligned along the NE/SW orientation and 18% in the E/W orientation. In 27 out of 86 watersheds with preferential fire alignment, there is also correspondence between mean fire orientation and watershed orientation. Topographic influence on fire spread and dominant wind patterns during the fire season can account for the consistency in fire perimeter orientation in these regions. Our findings highlight the historical pattern of fire perimeter orientation and identify watersheds with potential orographic channeling
## Comparison of the original Amsler grid with the preferential hyperacuity perimeter for detecting choroidal neovascularization in age-related macular degeneration
Isaac, David Leonardo Cruvinel; Ávila, Marcos Pereira de; Cialdini, Arnaldo Pacheco
Source/Publisher: Conselho Brasileiro de Oftalmologia
Type: Journal article. Format: text/html
PURPOSE: To compare the preferential hyperacuity perimeter (Preview PHP; Carl Zeiss Meditec, Dublin, CA) with the original Amsler grid in the detection of choroidal neovascularization (CNV) in patients with age-related macular degeneration (AMD). METHODS: Patients were classified into groups, based on the severity of the age-related macular degeneration and underwent preferential hyperacuity perimeter and Amsler grid testing. High sensitivity and or high specificity of a method were defined as the observation of at least 80% of each one the parameters. RESULTS: Sixty-five patients (65 eyes) were analyzed statistically. The sensitivity of detection of choroidal neovascularization was 70% by the Amsler grid and 90% by the preferential hyperacuity perimeter and the specificity of the Amsler grid was 85.5% and that of the preferential hyperacuity perimeter 81.8%. CONCLUSIONS: The preferential hyperacuity perimeter has greater sensitivity than the Amsler grid in the detection of choroidal neovascularization among patients over 50 years of age and is a promising method for monitoring patients with age-related macular degeneration. Although the original Amsler grid is less sensitive, it is a portable method, not expensive, accessible and presents reasonable sensitivity and high specificity in the diagnosis of choroidal neovascularization. Its use can be recommended for self-monitoring in patients with age-related macular degeneration as an alternative to preferential hyperacuity perimeter and when this method is not available.
## Correlation between transverse expansion and increase in the upper arch perimeter after rapid maxillary expansion
Claro, Cristiane Aparecida de Assis; Abrão, Jorge; Reis, Silvia Augusta Braga; Fantini, Solange Mongelli de
Type: Journal article. Format: text/html
The purpose of the present study was to assess the correlation between transverse expansion and the increase in upper arch perimeter, after maxillary expansion. Dental casts of eighteen patients were obtained before treatment and again five months after maxillary expansion. Measurements of intermolar width, intercanine width, arch length and arch perimeter were made with a digital caliper on photocopies taken from the dental casts. After assessment of the method error, a multiple regression model was developed following the identification of the best subset of variables. The resulting equation led to the conclusion that the increase in arch perimeter is approximately given by the addition of 0.54 times the intercanine expansion, and 0.87 times the arch length alteration.
## Analysis of High-Perimeter Planar Electrodes for Efficient Neural Stimulation
Wei, Xuefeng F.; Grill, Warren M.
Source/Publisher: Frontiers Research Foundation
Type: Journal article
Planar electrodes are used in epidural spinal cord stimulation and epidural cortical stimulation. Electrode geometry is one approach to increase the efficiency of neural stimulation and reduce the power required to produce the level of activation required for clinical efficacy. Our hypothesis was that electrode geometries that increased the variation of current density on the electrode surface would increase stimulation efficiency. High-perimeter planar disk electrodes were designed with sinuous (serpentine) variation in the perimeter. Prototypes were fabricated that had equal surface areas but perimeters equal to two, three or four times the perimeter of a circular disk electrode. The interface impedance of high-perimeter prototype electrodes measured in vitro did not differ significantly from that of the circular electrode over a wide range of frequencies. Finite element models indicated that the variation of current density was significantly higher on the surface of the high-perimeter electrodes. We quantified activation of 100 model axons randomly positioned around the electrodes. Input–output curves of the percentage of axons activated as a function of stimulation intensity indicated that the stimulation efficiency was dependent on the distance of the axons from the electrode. The high-perimeter planar electrodes were more efficient at activating axons a certain distance away from the electrode surface. These results demonstrate the feasibility of increasing stimulation efficiency through the design of novel electrode geometries.
## Clock Drawing in Spatial Neglect: A Comprehensive Analysis of Clock Perimeter, Placement, and Accuracy
Chen, Peii; Goedert, Kelly M.
Type: Journal article
Clock drawings produced by right-brain-damaged (RBD) individuals with spatial neglect often contain an abundance of empty space on the left while numbers and hands are placed on the right. However, the clock perimeter is rarely compromised in neglect patients’ drawings. By analyzing clock drawings produced by 71 RBD and 40 healthy adults, this study investigated whether the geometric characteristics of the clock perimeter reveal novel insights to understanding spatial neglect. Neglect participants drew smaller clocks than either healthy or non-neglect RBD participants. While healthy participants’ clock perimeter was close to circular, RBD participants drew radially extended ellipses. The mechanisms for these phenomena were investigated by examining the relation between clock-drawing characteristics and performance on six subtests of the Behavioral Inattention Test (BIT). The findings indicated that the clock shape was independent of any BIT subtest or the drawing placement on the test sheet and that the clock size was significantly predicted by one BIT subtest: the poorer the figure and shape copying, the smaller the clock perimeter. Further analyses revealed that in all participants, clocks decreased in size as they were placed farther from the center of the paper. However...
## The Indented Perimeter as the Growth Front of the Lamellar Single Micro-Crystal of Colloidal Gold
UYEDA, Natsu
Source/Publisher: Oxford University Press
Type: Journal article. Format: text/html
The single micro-crystal of gold grows on its (111) habit surface as a lamella of about 100 Å thick when prepared under the acidic condition, sometimes being accompanied even by the spiral growth steps. The aspect of the growing crystal was observed by the electron microscope at several intermediate stages. The most characteristic feature is the anomalously indented or densely fringed perimeter, which finally disappears leaving the sharp straight lines when the growth comes to an end. It seems very reasonable to consider that the small particles of colloidal gold of ordinary size or much less are adsorbed on the side surface of the perimeter and arrange themselves so as to be well fitted to the lattice of the main crystal. The indentation of the perimeter also appears when the crystal undergoes the spiral growth.
## Age-corrected normal differential luminance values for the entire 80° visual field applying three threshold estimating strategies, using the Octopus 900 perimeter
Pricking, Sandra
Type: Dissertation
Purpose: 1. To create a model describing age-corrected normal values for the entire 80° visual field (VF) measured with the Octopus 900 (O900) perimeter, 2. to compare three threshold estimating strategies: conventional (4-2-1), dynamic and German Adaptive Threshold Estimation (GATE-i) and 3. to compare local differential luminal sensitivity (DLS) values obtained with the GATE-i strategy on both, the O900 and the Octopus 101 (O101) perimeters. Methods: 81 ophthalmologically healthy subjects between 10 and 79 years of age were examined with the O900 perimeter within 80° eccentricity (86 stimulus locations) using the three different strategies in a randomised order. 16 stimulus locations were measured twice during one examination in both conventional and dynamic strategies to assess the short-term fluctuation (SF). To measure the long-term fluctuation (LF), 14 subjects were examined on two further appointments. 24 subjects were examined with the GATE-i strategy on both the O900 and the O101 perimeters. Results: With the dynamic strategy local DLS values were 0.21 dB (mean) higher, with the GATE-i strategy 0.98 dB (mean) higher than with the conventional strategy. A smooth mathematical model for each strategy was achieved. Model fit was nearly identical for the conventional (R2 = 0.75)...
## Surface tension in the finite mass method [Oberflächenspannung in der Methode der Finiten Massen]
Langmann, Christian
Type: Dissertation
The Finite Mass Method is a Lagrangian particle method for numerical simulation of compressible flows. Within the context of this method, effects appearing on free boundaries of fluids (liquids) arising from surface tension are modeled. Only such effects depending on characteristic geometric quantities of the considered surface are examined. The surface of a fluid is defined as a level set of the mass density. This admits using well-known techniques from the area of level set methods. Thus, taking the perimeter formula from geometric measure theory, it is possible to define an energy functional for the energy of the surface...
## Heat Semigroups and Diffusion of Characteristic Functions
Preunkert, Marc
Type: Dissertation
## Assessment of healthy male children's chest mobility by measuring the thoracic perimeter
Simon, Karen Muriel; Carpes, Marta Fioravante; Imhof, Beatriz Vidotto; Juk, Daniel Benedet; Souza, Gisele Cristina; Beckert, Giselle Fernanda Quintino; Cruz, Lilian Cristina; Bernardes, Mariane; Brocca, Rodrigo Vielmo
Type: Journal article (published version). Format: application/pdf
## Problem solving involving areas and perimeters: a study in the 5th year of schooling
Nunes, Lauriana Maria Pires
Source/Publisher: Repositório Comum de Portugal |
Type: Regular polygon
Edges and vertices: 14
Schläfli symbol: {14}, t{7}
Coxeter diagram: (diagram not reproduced)
Symmetry group: Dihedral (D14), order 2×14
Internal angle (degrees): 154 2/7°
Dual polygon: Self
Properties: Convex, cyclic, equilateral, isogonal, isotoxal
In geometry, a tetradecagon or tetrakaidecagon or 14-gon is a fourteen-sided polygon.
A regular tetradecagon has Schläfli symbol {14} and can be constructed as a quasiregular truncated heptagon, t{7}, which alternates two types of edges.
The area of a regular tetradecagon of side length a is given by
{\displaystyle {\begin{aligned}A&={\frac {14}{4}}a^{2}\cot {\frac {\pi }{14}}={\frac {14}{4}}a^{2}\left({\frac {{\sqrt {7}}+4{\sqrt {7}}\cos \left({{\frac {2}{3}}\arctan {\frac {\sqrt {3}}{9}}}\right)}{3}}\right)\\&\simeq 15.3345a^{2}\end{aligned}}}
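A quick numeric check of the stated coefficient (a sketch):

```python
from math import pi, tan

# Area of a regular tetradecagon with side a is A = (14/4) * a^2 * cot(pi/14).
coeff = (14 / 4) / tan(pi / 14)
print(coeff)  # ~15.3345, matching the closed form above
```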
### Construction
As 14 = 2 × 7, a regular tetradecagon cannot be constructed using a compass and straightedge. [1] However, it is constructible using neusis with use of the angle trisector, [2] or with a marked ruler, [3] as shown in the following two examples.
The animation below gives an approximate construction, with an error of about 0.05° in the central angle:
Construction of an approximated regular tetradecagon
Another possible animation of an approximate construction, also possible using straightedge and compass.
Based on the unit circle r = 1 [unit of length]
• Constructed side length of the tetradecagon in GeoGebra (display max 15 decimal places) ${\displaystyle a=0.445041867912629\;[unit\;of\;length]}$
• Side length of the tetradecagon ${\displaystyle a_{target}=2\cdot \sin \left({\frac {180^{\circ }}{14}}\right)=0.445041867912629\ldots \;[unit\;of\;length]}$
• Absolute error of the constructed side length
To the maximum of 15 displayed decimal places, the absolute error is ${\displaystyle F_{a}=a-a_{target}=0.0\;[unit\;of\;length]}$
• Constructed central angle of the tetradecagon in GeoGebra (display significant 13 decimal places) ${\displaystyle \mu =25.7142857142857^{\circ }}$
• Central angle of the tetradecagon ${\displaystyle \mu _{target}={\frac {360^{\circ }}{14}}=25.7142857142857\ldots ^{\circ }}$
• Absolute error of the constructed central angle
To the 13 significant decimal places displayed, the absolute error is ${\displaystyle F_{\mu }=\mu -\mu _{target}=0^{\circ }}$
Example to illustrate the error
• At a circumscribed circle radius r = 1 billion km (light needs about 55 minutes to travel this distance), the absolute error of the 1st side would be < 1 mm.
For details, see: Wikibooks: Tetradecagon, construction description (German)
## Symmetry
The regular tetradecagon has Dih14 symmetry, order 28. There are 3 subgroup dihedral symmetries: Dih7, Dih2, and Dih1, and 4 cyclic group symmetries: Z14, Z7, Z2, and Z1.
These 8 symmetries can be seen in 10 distinct symmetries on the tetradecagon, a larger number because the lines of reflection can either pass through vertices or edges. John Conway labels these by a letter and group order. [4] Full symmetry of the regular form is r28 and no symmetry is labeled a1. The dihedral symmetries are divided depending on whether they pass through vertices (d for diagonal) or edges (p for perpendiculars), and i when reflection lines pass through both edges and vertices. Cyclic symmetries in the middle column are labeled as g for their central gyration orders.
Each subgroup symmetry allows one or more degrees of freedom for irregular forms. Only the g14 subgroup has no degrees of freedom but can be seen as directed edges.
The highest symmetry irregular tetradecagons are d14, an isogonal tetradecagon constructed by seven mirrors which can alternate long and short edges, and p14, an isotoxal tetradecagon, constructed with equal edge lengths, but vertices alternating two different internal angles. These two forms are duals of each other and have half the symmetry order of the regular tetradecagon.
## Dissection
[Figure: 14-cube projection, 84-rhomb dissection]
Coxeter states that every zonogon (a 2m-gon whose opposite sides are parallel and of equal length) can be dissected into m(m-1)/2 parallelograms. [5] In particular this is true for regular polygons with evenly many sides, in which case the parallelograms are all rhombi. For the regular tetradecagon, m=7, and it can be divided into 21: 3 sets of 7 rhombs. This decomposition is based on a Petrie polygon projection of a 7-cube, with 21 of 672 faces. The list defines the number of solutions as 24698, including up to 14-fold rotations and chiral forms in reflection.
## Numismatic use
The regular tetradecagon is used as the shape of some commemorative gold and silver Malaysian coins, the number of sides representing the 14 states of the Malaysian Federation. [6]
## Tetradecagram

A tetradecagram is a 14-sided star polygon, represented by symbol {14/n}. There are two regular star polygons: {14/3} and {14/5}, using the same vertices, but connecting every third or fifth points. There are also three compounds: {14/2} is reduced to 2{7} as two heptagons, while {14/4} and {14/6} are reduced to 2{7/2} and 2{7/3} as two different heptagrams, and finally {14/7} is reduced to seven digons.
A notable application of a fourteen-pointed star is in the flag of Malaysia, which incorporates a yellow {14/6} tetradecagram in its canton, representing the unity of the thirteen states with the federal government.
Compounds and star polygons

| n | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| Form | Regular | Compound | Star polygon | Compound | Star polygon | Compound | Compound |
| Symbol | {14/1} = {14} | {14/2} = 2{7} | {14/3} | {14/4} = 2{7/2} | {14/5} | {14/6} = 2{7/3} | {14/7} or 7{2} |
| Internal angle | ≈154.286° | ≈128.571° | ≈102.857° | ≈77.1429° | ≈51.4286° | ≈25.7143° | 0° |
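Both the classification and the internal angles in the table follow from gcd(14, n) and the standard star-polygon angle formula; a small Python sketch (added for illustration):

```python
import math

n_sides = 14
for n in range(1, n_sides // 2 + 1):
    g = math.gcd(n_sides, n)
    if n == 1:
        kind = "regular polygon"
    elif g == 1:
        kind = "regular star polygon"  # {14/3} and {14/5}
    else:
        # {14/n} degenerates into g copies of the reduced polygon.
        kind = f"compound: {g} copies of {{{n_sides // g}/{n // g}}}"
    # Interior angle of {p/q} is 180*(p - 2q)/p degrees.
    angle = 180 * (n_sides - 2 * n) / n_sides
    print(f"{{14/{n}}}: {kind}, interior angle {angle:.4f} deg")
```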
Deeper truncations of the regular heptagon and heptagrams can produce isogonal (vertex-transitive) intermediate tetradecagram forms with equally spaced vertices and two edge lengths. Other truncations can form double-covering polygons 2{p/q}, namely: t{7/6}={14/6}=2{7/3}, t{7/4}={14/4}=2{7/2}, and t{7/2}={14/2}=2{7}. [6]
### Isotoxal forms
An isotoxal polygon can be labeled as {pα}, with outermost internal angle α, and a star polygon as {(p/q)α}, where q is a winding number with gcd(p,q) = 1 and q < p. Isotoxal tetradecagons have p = 7, and since 7 is prime, all solutions q = 1, ..., 6 are polygons.
[Images: the isotoxal forms {7α}, {(7/2)α}, {(7/3)α}, {(7/4)α}, {(7/5)α}, and {(7/6)α}]
### Petrie polygons
Regular skew tetradecagons exist as Petrie polygons for many higher-dimensional polytopes, among them the 13-simplex, the 7-orthoplex, and the 7-cube, and can be shown in skew orthogonal projections.
## References
1. Wantzel, Pierre (1837). "Recherches sur les moyens de reconnaître si un problème de géométrie peut se résoudre avec la règle et le compas" (PDF). Journal de Mathématiques Pures et Appliquées. 2: 366–372.
2. Gleason, Andrew Mattei (March 1988). "Angle trisection, the heptagon, and the triskaidecagon" (PDF). The American Mathematical Monthly. 95 (3): 185–194 (see p. 186, Fig. 1). doi:10.2307/2323624. Archived from the original (PDF) on 2016-02-02.
3. John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss (2008). The Symmetries of Things. ISBN 978-1-56881-220-5. (Chapter 20: Generalized Schläfli symbols, types of symmetry of a polygon, pp. 275–278.)
4. Coxeter, Mathematical Recreations and Essays, thirteenth edition, p. 141.
5. The Numismatist, Volume 96, Issues 7–12, p. 1409, American Numismatic Association, 1983.
6. Branko Grünbaum, "Metamorphoses of polygons", in The Lighter Side of Mathematics: Proceedings of the Eugène Strens Memorial Conference on Recreational Mathematics and its History (1994).
# Equilibrium state - statistical intuition
1. Jul 16, 2010
### paweld
Can anyone give me some intuitive arguments for why all the accessible microstates of a system are equally likely in the equilibrium state?
2. Jul 16, 2010
### Gerenuk
I guess there isn't a rigorous argument. In probability theory, when you don't know the distribution (like the probability of a prize being behind one of three doors), you assume a uniform distribution.
In all cases where the microstates are not equally distributed, you simply cannot apply entropy that way.
So in a way, entropy is restricted to completely random and homogeneous systems like gases, unless you find a way to define microstates with equal probabilities.
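This "assume a uniform distribution" step can be read as a maximum-ignorance argument: among all probability assignments over a fixed set of accessible microstates, the uniform one maximizes the Gibbs entropy S = −k Σ p ln p. A minimal Python sketch illustrating this (my own example, not from the thread; entropy is in units of k):

```python
import math
import random

def gibbs_entropy(probs):
    """S = -sum(p * ln p), in units of Boltzmann's constant k."""
    return -sum(p * math.log(p) for p in probs if p > 0)

n_states = 6  # number of accessible microstates (arbitrary choice)

# Uniform distribution: every accessible microstate equally likely.
uniform = [1 / n_states] * n_states
s_max = gibbs_entropy(uniform)
print(f"uniform: S = {s_max:.4f} (= ln {n_states} = {math.log(n_states):.4f})")

# Any non-uniform distribution over the same states has smaller entropy.
random.seed(0)
weights = [random.random() for _ in range(n_states)]
total = sum(weights)
nonuniform = [w / total for w in weights]
print(f"non-uniform: S = {gibbs_entropy(nonuniform):.4f} < {s_max:.4f}")
```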