# Thread: Proof: Right Triangle and a Perpendicular
1. ## Proof: Right Triangle and a Perpendicular
Hypothesis: MN Perpendicular to CB
AM = MC
Thesis: BN^2= CN^2 + AB^2
I've thought this could be resolved using some similarity theorem. In fact ABC is similar to MNC because they've got three corresponding pairs of angles. So CM/BC = MN/AB = CN/AC = k. However I don't know what to do now!
Thank you very much!
2. ## Re: Proof: Right Triangle and a Perpendicular
Originally Posted by goby
Hypothesis: MN Perpendicular to CB
AM = MC
Thesis: BN^2= CN^2 + AB^2
I've thought this could be resolved using some similarity theorem. In fact ABC is similar to MNC because they've got three corresponding pairs of angles. So CM/BC = MN/AB = CN/AC = k. However I don't know what to do now!
Thank you very much!
$BC^2=AC^2+AB^2$
$(BN+CN)^2=(2CM)^2+AB^2$
$BN^2+CN^2+2BN*CN=4CM^2+AB^2$ ----------------- (1)
By similar triangles,
$\frac{CM}{BC}=\frac{CN}{AC}$
$\frac{CM}{BN+CN}=\frac{CN}{2CM}$
$2CM^2=BN*CN+CN^2$
Substituting this in (1),
$BN^2+CN^2+2BN*CN=2BN*CN+2CN^2+AB^2$
$BN^2=CN^2+AB^2$
QED
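(A quick numeric check, not in the original thread: with the right angle at A, take AB = 3 and AC = 4, so BC = 5 and CM = 2 since M is the midpoint of AC. Then CN = CM·AC/BC = 1.6 and BN = BC - CN = 3.4, and indeed 3.4² = 11.56 = 1.6² + 3².)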
3. ## Re: Proof: Right Triangle and a Perpendicular
Yeah, it was easier than I thought! Thank you again! |
1-55.
Solve the problem below using Guess and Check. State your solution in a sentence.
Jabari is thinking of three numbers. The greatest number is twice as big as the least number. The middle number is three more than the least number. The sum of the three numbers is $75$. Find the numbers.
Rewrite the word problem as an equation or a system of equations.
Let the least number equal $x$.
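For example, with $x$ as the least number, the three numbers are $x$, $x+3$, and $2x$, so one possible equation is $x+(x+3)+2x=75$.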
Make a guess for what $x$ could be. Was your answer higher or lower than $75$? Adjust your next guess accordingly. |
# zbMATH — the first resource for mathematics
Oscillation criteria for second-order nonlinear delay dynamic equations. (English) Zbl 1125.34046
The authors consider the second-order nonlinear delay dynamic equation $\left(r(t)x^\Delta(t)\right)^\Delta +p(t)f(x(\tau(t)))=0$ on a time scale. By employing a generalized Riccati transformation of the form $w(t):= \delta(t)\left[\frac{r(t)x^\Delta(t)}{x(t)} +r(t)a(t)\right],$ they establish some new sufficient conditions which ensure that every solution oscillates or converges to zero. The obtained results improve the well-known oscillation results for dynamic equations and include as special cases the oscillation results for differential equations. Some applications to the special time scales $\mathbb{R}$, $\mathbb{N}$, and $q^{\mathbb{N}_0}$ with $q>1$ and four examples are also included to illustrate the main results.
##### MSC:
34K11 Oscillation theory of functional-differential equations
39A10 Additive difference equations
|
## CosmoMC likelihood distribution definition
Use of Healpix, camb, CLASS, cosmomc, compilers, etc.
vital fernandez
Posts: 1
Joined: August 10 2015
Affiliation: INAOE
### CosmoMC likelihood distribution definition
Greetings everyone. I am a PhD student who has recently started in Bayesian statistics, and I have some basic questions about using CosmoMC as an MCMC sampler.
I would like to start doing a simple linear regression using the scheme described in this cosmomc lecture http://www.cosmo-ufes.org/uploads/1/3/7 ... torial.pdf by Daniel Boreiro, where the likelihood in the cosmomc sampler is defined as:
At this point my questions are:
A) Is this the default likelihood definition? If not, how was it defined?
B) What is the distribution of this likelihood? From what I have read in the CosmoMC notes, parameters are defined with a normal distribution or an n-dimensional Gaussian distribution, if you provide the covariance matrix. However, according to the definition in the image above, $\chi^2$ is a logarithmic estimate of the probability... So is this a log-normal probability?
C) Finally, in the case of a normal distribution, is the $\sigma$ in the $\chi^2$ formula the $\sigma$ of the distribution?
Thank you in advance for any advice/reference to point me in the right direction |
# LKJ Cholesky Density derivation
The density of the LKJ cholesky is
\text{LkjCholesky}(L \mid \eta) \propto |J|\det(LL^T)^{\eta - 1} = \prod_{k = 2}^K L_{kk}^{K - k + 2\eta - 2}
which is from the Stan manual ( 24.2 Cholesky LKJ correlation distribution | Stan Functions Reference (mc-stan.org)). I get where the 2\eta - 2 comes from, it is
\det(LL^T)^{\eta - 1} = [\det(L)^2]^{\eta - 1} = \prod_{k=2}^K L_{kk}^{2 \eta - 2}
which implies that |J| = \prod_{k=2}^K L_{kk}^{K - k}. However, the Edelman handout ( see top of page 13 z_handout2.dvi (mit.edu))
has the jacobian for the Cholesky transform as
\det J = 2^K \prod_{k = 1}^K L_{kk}^{K + 1 - k} .
So where am I going wrong with this?
1 Like
Ok, the panic of posting jarred my stupidity into clarity. I guess that power becomes K - k since when k = 1 L is 1. Then in the log that 2^K is constant so we can drop it. Then we get the above.
1 Like
Thanks for sharing, that might be useful for others who will wonder the same. Do I get it correctly that it is resolved? If so, could you mark your own answer as solution? |
ghc-events-0.14.0: Library and tool for parsing .eventlog files from GHC
GHC.RTS.Events.Analysis
Synopsis
Documentation
data Machine s i Source #
This is based on a simple finite state machine hence the names delta for the state transition function. Since states might be more than simple pattern matched constructors, we use finals :: state -> Bool, rather than Set state, to indicate that the machine is in some final state. Similarly for alpha, which indicates the alphabet of inputs to a machine. The function delta returns Maybe values, where Nothing indicates that no valid transition is possible: ie, there has been an error.
Constructors
Machine
  initial :: s -- Initial state
  final :: s -> Bool -- Valid final states
  alpha :: i -> Bool -- Valid input alphabet
  delta :: s -> i -> Maybe s -- State transition function
validate :: Machine s i -> [i] -> Either (s, i) s Source #
The validate function takes a machine and a list of inputs. The machine is started from its initial state and run against the inputs in turn. It returns the state and input on failure, and just the state on success.
validates :: Machine s i -> [i] -> [Either (s, i) s] Source #
This function is similar to validate, but outputs each intermediary state as well. For an incremental version, use simulate.
simulate :: Machine s i -> [i] -> Process (s, i) (s, i) Source #
This function produces a process that outputs all the states that a machine goes through.
data Profile s Source #
A state augmented by Timestamp information is held in profileState. When the state changes, profileMap stores a map between each state and its cumulative time.
Constructors
Profile
  profileState :: s -- The current state
  profileTime :: Timestamp -- The entry time of the state
Instances
  Show s => Show (Profile s) -- Defined in GHC.RTS.Events.Analysis
Arguments
  :: (Ord s, Eq s)
  => Machine s i -- A machine to profile
  -> (i -> Timestamp) -- Converts input to timestamps
  -> [i] -- The list of input
  -> Process (Profile s, i) (s, Timestamp, Timestamp)
profileIndexed :: (Ord k, Ord s, Eq s) => Machine s i -> (i -> Maybe k) -> (i -> Timestamp) -> [i] -> Process (Map k (Profile s), i) (k, (s, Timestamp, Timestamp)) Source #
profileRouted :: (Ord k, Ord s, Eq s, Eq r) => Machine s i -> Machine r i -> (r -> i -> Maybe k) -> (i -> Timestamp) -> [i] -> Process ((Map k (Profile s), r), i) (k, (s, Timestamp, Timestamp)) Source #
extractIndexed :: Ord k => (s -> i -> Maybe o) -> (i -> Maybe k) -> Map k s -> i -> Maybe (k, o) Source #
refineM :: (i -> j) -> Machine s j -> Machine s i Source #
Machines sometimes need to operate on coarser input than they are defined for. This function takes a function that refines input and a machine that works on refined input, and produces a machine that can work on coarse input.
profileM :: Ord s => (i -> Timestamp) -> Machine s i -> Machine (Profile s) i Source #
This function takes a machine and profiles its state.
Arguments
  :: Ord k
  => (i -> Maybe k) -- An indexing function
  -> Machine s i -- A machine to index with
  -> Machine (Map k s) i -- The indexed machine
An indexed machine takes a function that multiplexes the input to a key and then takes a machine description to an indexed machine.
toList :: Process e a -> [a] Source #
data Process e a Source #
Constructors
Done
Fail e
Prod a (Process e a)
Instances
  (Show e, Show a) => Show (Process e a) -- Defined in GHC.RTS.Events.Analysis
routeM :: Ord k => Machine r i -> (r -> i -> Maybe k) -> Machine s i -> Machine (Map k s, r) i Source #
A machine can be indexed not only by the inputs, but also by the state of an intermediary routing machine. This is a generalisation of indexM. |
# Orbits of a group action on a product space and orbits of the stabilizer are in 1-1 correspondence?
How does one prove the following? Thanks.
Let group $G$ act transitively on a set $X$. Let $x\in X$ and $H=\operatorname{Stab}(x)$. Let $G$ act on $X\times X$ via $g(x_1,x_2)=(gx_1,gx_2)$ for any $g\in G$. Prove that all $G$-orbits in $X\times X$ are in a bijective correspondence with all $H$-orbits in $X$.
• This question is not off-topic (someone voted to close for that reason, strangely).
– anon
Sep 2 '13 at 14:57
• @anon, I agree. Sep 2 '13 at 15:05
• sure Thanks. I will remember to click. Oct 9 '13 at 13:42
Since $G$ is transitive on $X$, the orbits of $G$ on $X \times X$ will be the orbits of elements of the form $$(x, y),$$ for some $y \in X$.
When are two such elements $(x, y), (x, z)$ in the same $G$-orbit? This happens if and only if there is $g \in G$ such that $$(g x, g y) = (x, z),$$ that is, if and only if there is $g \in H$ such that $z = g y$, that is, if and only if $y$ and $z$ are in the same $H$-orbit.
So if $$(x, y_1), (x, y_2), \dots, (x, y_n),$$ are representatives of the orbits of $G$ on $X \times X$, then $$y_1, y_2, \dots, y_n$$ will be representatives of the orbits of $H$ on $X$.
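(A concrete check, not part of the original answer: take $G = S_3$ acting on $X = \{1,2,3\}$ and $x = 1$, so $H = \{e, (2\,3)\}$. The $H$-orbits on $X$ are $\{1\}$ and $\{2,3\}$, while the $G$-orbits on $X \times X$ are the diagonal pairs and the off-diagonal pairs, so both counts equal $2$.)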
• Thank you very much Prof. Andreas Caranti! Sep 3 '13 at 6:08
• Question to Dr. Caranti: What is the reason for the very first sentence of your solution? Otherwise everything makes perfect sense. Thank you!
– user206991
Jan 10 '15 at 19:23
• @algebrabeginner, I am not sure I understand your question correctly. However, I am setting up locating suitable elements in the orbits, which turn handy later in the proof. Jan 11 '15 at 10:11
• @AndreasCaranti what if $G$ acts on $X,Y$ and we seek the orbits of the product action? (No transitivity assumptions.) May 20 '18 at 12:45
• @Arrow assume everything is finite for simplicity. Let $x_{1}, \dots, x_{n}$ be a set of representatives of the action of $G$ on $X$. Then for $i \ne j$, we have that $(x_{i}, y)$ and $(x_{j}, z)$ are in different orbits. When are $(x_{i}, y)$ and $(x_{j}, z)$ in the same orbit? From now on, it should be more or less like in the previous case. May 20 '18 at 13:00
For $y \in X$, write $\overline{y}_H$ to denote the $H$-orbit of $y$ $$\overline{y}_H = \{hy : h\in H\}$$ and write $\overline{(x,y)}_G$ to denote the $G$-orbit of $(x,y) \in X\times X$ $$\overline{(x,y)}_G = \{(gx,gy) : g\in G\}$$ Now consider the map $\overline{y}_H \mapsto \overline{(x,y)}_G$ given by $$hy \mapsto (hx,hy) = (x,hy).$$ This is well-defined, and it is your required bijection (injectivity is easy, and it is surjective because the action of $G$ on $X$ is transitive). |
# 3SAT instance with EXACTLY 3 instances of each literal
I'm trying to solve a question which requires me to
1. prove that an instance of 3SAT where each literal appears in exactly 3 clauses (positive and negative appearances combined) and each clause contains exactly 3 literals is always satisfiable.
2. Find a polynomial time algorithm to find a satisfying assignment for it.
My Solution
I'm not sure how to prove part 1. I'm trying to solve part 2 by reducing it to an instance of Vertex Cover in which each literal has 2 nodes - one positive, one negative - and each node is connected to the other literals it's in a clause with. A vertex cover of size m = # of literals will give us the assignment needed.
I'm not sure if I'm on the right track. Any help would be appreciated!
• "each clause contains exactly 3 literals". It looks like it actually means "each clause contains exactly 3 variables". We are talking about 3SAT, anyway. – Apass.Jack Mar 14 at 5:52
It looks like you missed the promising reformulation of this 3SAT. The simple idea is to select a different variable for each clause.
## Always satisfiable
Let the variables be $$V=\{v_1, v_2, \cdots, v_n\}$$ and the clauses be $$C=\{c_1,c_2,\cdots, c_m\}$$.
Construct a bipartite graph $$G=(C,V)$$, where $$(c,v)$$ is an edge if $$v$$ or $$\neg v$$ is a literal of $$c$$. The given conditions mean the degree of each clause and the degree of each variable are 3.
For a subset $$W$$ of $$C$$, let $$N(W)$$ denote the set of all variables adjacent to some clause in $$W$$. Consider the edges that have one endpoint in $$W$$. The number of them is exactly $$3|W|$$ and at most $$3|N(W)|$$. Hence $$|W|\leq |N(W)|.$$ By Hall's marriage theorem, for each clause $$c$$, we can select a distinct variable $$m(c)$$ such that $$(c,m(c))$$ is an edge.
For each clause $$c$$ do the following. If $$m(c)$$ is a literal of $$c$$, set $$m(c)$$ to be true. Otherwise, $$\neg m(c)$$ is a literal of $$c$$ and we set $$m(c)$$ to be false. In either case, $$c$$ becomes true.
### Polynomial algorithm
The proof above actually gives the outline of an algorithm. Each step of the algorithm takes polynomial time. In particular, the application of Hall's marriage theorem can be implemented in polynomial time as, for example shown here.
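Not from the original answer: a rough Python sketch of this outline, assuming the networkx package is available and that the input really satisfies the stated conditions; the function and variable names are my own choices.

```python
# Sketch: satisfy a 3SAT instance in which every variable occurs in exactly 3
# clauses and every clause contains 3 distinct variables, via Hall's theorem.
import networkx as nx

def satisfying_assignment(clauses):
    """clauses: list of clauses, each a list of nonzero signed ints,
    e.g. [1, -2, 3] means (x1 OR NOT x2 OR x3)."""
    variables = {abs(lit) for clause in clauses for lit in clause}
    G = nx.Graph()
    clause_nodes = [("c", i) for i in range(len(clauses))]
    G.add_nodes_from(clause_nodes)
    G.add_nodes_from(("v", v) for v in variables)
    for i, clause in enumerate(clauses):
        for lit in clause:
            G.add_edge(("c", i), ("v", abs(lit)))
    # The graph is 3-regular and bipartite, so Hall's condition holds and
    # every clause node gets matched to a distinct variable.
    matching = nx.bipartite.hopcroft_karp_matching(G, top_nodes=clause_nodes)
    assignment = {v: False for v in variables}  # default for unmatched variables
    for i, clause in enumerate(clauses):
        _, v = matching[("c", i)]
        # Set the matched variable so that its literal in this clause is true.
        assignment[v] = v in clause  # True iff it occurs positively here
    return assignment
```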
### A generalization
Here is a natural generalization.
Exercise. Generalize to the case when each variable appears in at most $$r$$ clauses and each clause contains at least $$r$$ distinct variables, where $$r\ge1$$. |
Please use this identifier to cite or link to this item: http://hdl.handle.net/1807/9846
Title: Extensions of a dualizing complex by its ring: commutative versions of a conjecture of Tachikawa
Authors: Buchweitz, Ragnar-Olaf; Avramov, Luchezar L.; Sega, Liana M.
Issue Date: 25-Feb-2005
Publisher: Cambridge University Press
Citation: Journal of Pure and Applied Algebra, 201, 218-239
Abstract: Let $(R,\mathfrak{m},k)$ be a commutative noetherian local ring with dualizing complex $D^R$, normalized by $\operatorname{Ext}^{\operatorname{depth}(R)}_R(k,D^R)\cong k$. Partly motivated by a long-standing conjecture of Tachikawa on (not necessarily commutative) $k$-algebras of finite rank, we conjecture that if $\operatorname{Ext}^n_R(D^R,R) = 0$ for all $n>0$, then $R$ is Gorenstein, and prove this in several significant cases.
URI: http://hdl.handle.net/1807/9846
Appears in Collections: Mathematics
|
Mathematics
# If $A, B, C$ are three points on a line and $B$ lies between $A$ and $C$, then prove that $AC - AB = BC$
##### SOLUTION
Given $B$ is a point which lies between $A$ and $C$ on the line $AC$
$\therefore$ From Euclid's postulate,
$AB+BC=AC$
$\Rightarrow{AC-AB=BC}$
Hence, proved.
#### Related Questions
Q1 Subjective Hard
Construct $\Delta PQR$ such that $\angle R=100^o, QR = RP = 5.4 \,cm$.
Asked in: Mathematics - Introduction to Euclid’s Geometry
1 Verified Answer | Published on 09th 09, 2020
Q2 Single Correct Easy
According to Euclid, a surface has ____.
• A. Length but no breadth and thickness
• B. No length, no breadth and no thickness
• C. Length, breadth and thickness
• D. Length and breadth but no thickness
Asked in: Mathematics - Introduction to Euclid’s Geometry
1 Verified Answer | Published on 09th 09, 2020
Q3 Subjective Medium
Asked in: Mathematics - Introduction to Euclid’s Geometry
1 Verified Answer | Published on 09th 09, 2020
Q4 Single Correct Medium
The base of a prism is a regular hexagon. If every edge of the prism measures $1$ meter, then the volume of the prism is :
• A. $\dfrac{3 \sqrt 2}{2} m^3$
• B. $\dfrac{6 \sqrt 2}{2} m^3$
• C. $\dfrac{5 \sqrt 3}{2} m^3$
• D. $\dfrac{3 \sqrt 3}{2} m^3$
Asked in: Mathematics - Introduction to Euclid’s Geometry
1 Verified Answer | Published on 09th 09, 2020
Q5 Subjective Hard
What is the need of introducing axioms?
Asked in: Mathematics - Introduction to Euclid’s Geometry
1 Verified Answer | Published on 09th 09, 2020 |
# Chapter 3 - Equations and Problem Solving - Chapter 3 Review Problem Set - Page 137: 26
$r=\dfrac{C}{2\pi}$
#### Work Step by Step
Using the properties of equality, in terms of $r,$ the given equation, $C=2\pi r ,$ is equivalent to \begin{array}{l}\require{cancel} \dfrac{C}{2\pi}=\dfrac{2\pi r}{2\pi} \\\\ r=\dfrac{C}{2\pi} .\end{array}
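As a quick usage check (not part of the original solution): a circle with circumference $C=10\pi$ cm has radius $r=\dfrac{10\pi}{2\pi}=5$ cm.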
|
1. ## a problem about differentiability
I need your help. Thanks for everything.
2. ## Re: a problem about differentiability
Apply the mean value theorem to get $\xi_n$ with $\min\{x_n,y_n\}\leq \xi_n\leq \max\{x_n,y_n\}$ such that $f(x_n)-f(y_n)=f'(\xi_n)(x_n-y_n)$; the squeeze theorem then gives $\xi_n\to x_0$. Now you can conclude.
3. ## Re: a problem about differentiability
Wow, thanks bro, you're great. |
# IInt8LegacyCalibrator¶
class tensorrt.IInt8LegacyCalibrator(self: tensorrt.tensorrt.IInt8LegacyCalibrator) → None
Extends the IInt8Calibrator class. This calibrator requires user parameterization, and is provided as a fallback option if the other calibrators yield poor results.
Variables
• quantile (float): The quantile (between 0 and 1) that will be used to select the region maximum when the quantile method is in use. See the user guide for more details on how the quantile is used.
• regression_cutoff (float): The fraction (between 0 and 1) of the maximum used to define the regression cutoff when using regression to determine the region maximum. See the user guide for more details on how the regression cutoff is used.
get_algorithm(self: tensorrt.tensorrt.IInt8LegacyCalibrator) → tensorrt.tensorrt.CalibrationAlgoType
Signals that this is the legacy calibrator.
Returns
CalibrationAlgoType.LEGACY_CALIBRATION
get_batch(self: tensorrt.tensorrt.IInt8LegacyCalibrator, names: List[str]) → List[int]
Get a batch of input for calibration. The batch size of the input must match the batch size returned by get_batch_size() .
A possible implementation may look like this:
def get_batch(names):
    try:
        # Assume self.batches is a generator that provides batch data.
        data = next(self.batches)
        # Assume that self.device_input is a device buffer allocated by the constructor.
        cuda.memcpy_htod(self.device_input, data)
        return [int(self.device_input)]
    except StopIteration:
        # When we're out of batches, we return either [] or None.
        # This signals to TensorRT that there is no calibration data remaining.
        return None
Parameters
names – The names of the network inputs for each object in the bindings array.
Returns
A list of device memory pointers set to the memory containing each network input data, or an empty list if there are no more batches for calibration. You can allocate these device buffers with pycuda, for example, and then cast them to int to retrieve the pointer.
get_batch_size(self: tensorrt.tensorrt.IInt8LegacyCalibrator) → int
Get the batch size used for calibration batches.
Returns
The batch size.
read_calibration_cache(self: tensorrt.tensorrt.IInt8LegacyCalibrator) → buffer
Calibration is potentially expensive, so it can be useful to generate the calibration data once, then use it on subsequent builds of the network. The cache includes the regression cutoff and quantile values used to generate it, and will not be used if these do not match the settings of the current calibrator. However, the network should also be recalibrated if its structure changes, or the input data set changes, and it is the responsibility of the application to ensure this.
Reading a cache is just like reading any other file in Python. For example, one possible implementation is:
def read_calibration_cache(self):
    # If there is a cache, use it instead of calibrating again. Otherwise, implicitly return None.
    if os.path.exists(self.cache_file):
        with open(self.cache_file, "rb") as f:
            return f.read()
Returns
A cache object or None if there is no data.
write_calibration_cache(self: tensorrt.tensorrt.IInt8LegacyCalibrator, cache: buffer) → None
Save a calibration cache.
Writing a cache is just like writing any other buffer in Python. For example, one possible implementation is:
def write_calibration_cache(self, cache):
    with open(self.cache_file, "wb") as f:
        f.write(cache)
Parameters
cache – The calibration cache to write. |
The solution of the second-order differential equation $\dfrac{d^{2}y}{dx^{2}}+2\dfrac{dy}{dx}+y=0$ with boundary conditions $y\left ( 0 \right )=1$ and $y\left ( 1 \right )=3$ is
1. $e^{-x}+\left ( 3e-1 \right )xe^{-x}$
2. $e^{-x}-\left ( 3e-1 \right )xe^{-x}$
3. $e^{-x}+\left [ 3e\sin\left ( \frac{\pi x}{2} \right ) -1\right ]xe^{-x}$
4. $e^{-x}-\left [ 3e\sin\left ( \frac{\pi x}{2} \right ) -1\right ]xe^{-x}$ |
# 21.17 An Introduction to Carbohydrates
Carbohydrates are the most abundant class of organic compounds found in living organisms. They originate as products of photosynthesis, an endothermic reductive condensation of carbon dioxide requiring light energy and the pigment chlorophyll.
$nCO_2 + n H_2O + \text{Energy} \rightarrow C_nH_{2n}O_n + nO_2$
As noted here, the formulas of many carbohydrates can be written as carbon hydrates, $$C_n(H_2O)_n$$, hence their name. The carbohydrates are a major source of metabolic energy, both for plants and for animals that depend on plants for food. Aside from the sugars and starches that meet this vital nutritional role, carbohydrates also serve as a structural material (cellulose), a component of the energy transport compound ATP/ADP, recognition sites on cell surfaces, and one of three essential components of DNA and RNA.
The most useful carbohydrate classification scheme divides the carbohydrates into groups according to the number of individual simple sugar units. Monosaccharides contain a single unit; disaccharides contain two sugar units; and polysaccharides contain many sugar units as in polymers - most contain glucose as the monosaccharide unit.
Some sugars can undergo an intramolecular cyclization to form a hemiacetal. The hemiacetal carbon atom (C-1) becomes a new stereogenic center, commonly referred to as the anomeric carbon, and the α- and β-isomers are called anomers.
Disaccharides made up of other sugars are known, but glucose is often one of the components. Two important examples of such mixed disaccharides are displayed above. Lactose, also known as milk sugar, is a galactose-glucose compound joined as a beta-glycoside. It is a reducing sugar because of the hemiacetal function remaining in the glucose moiety. Many adults, particularly those from regions where milk is not a dietary staple, have a metabolic intolerance for lactose. Infants have a digestive enzyme which cleaves the beta-glycoside bond in lactose, but production of this enzyme stops with weaning. Sucrose, or cane sugar, is our most commonly used sweetening agent. It is a non-reducing disaccharide composed of glucose and fructose joined at the anomeric carbon of each by glycoside bonds (one alpha and one beta). In the formula shown here the fructose ring has been rotated 180º from its conventional perspective.
## Contributors
This page titled 21.17 An Introduction to Carbohydrates is shared under a not declared license and was authored, remixed, and/or curated by Layne Morsch. |
# Rencontres numbers
The rencontres numbers (partial derangement numbers) are the number of partial derangements, or number of permutations with r rencontres[1] of ${\displaystyle \scriptstyle n\,}$ distinct objects (i.e. the number of permutations of ${\displaystyle \scriptstyle n\,}$ distinct objects with ${\displaystyle \scriptstyle r\,}$ fixed points).
For ${\displaystyle \scriptstyle n\,\geq \,0\,}$ and ${\displaystyle \scriptstyle 0\,\leq \,r\,\leq \,n,\,}$ the rencontres number ${\displaystyle \scriptstyle D_{n,r}\,}$ is the number of permutations of ${\displaystyle \scriptstyle [n]\,=\,{\{1,\,2,\,3,\,\dots ,\,n\}}\,}$ that have exactly ${\displaystyle \scriptstyle r\,}$ fixed points.
The number of permutations with 0 rencontres, ${\displaystyle \scriptstyle D_{n,0}\,}$ (the number of complete derangements), is the number of derangements ${\displaystyle \scriptstyle D_{n}\,}$.
## Triangle of rencontres numbers
The terms ${\displaystyle \scriptstyle D_{n,n-j}\,}$ of the ${\displaystyle \scriptstyle j\,}$th trivial, i.e. ${\displaystyle \scriptstyle j\,\in \,\{0,\,1\}\,}$, falling diagonals from the right (indexed from 0,) where ${\displaystyle \scriptstyle r\,=\,n-j\,}$, are
• the number of permutations of n with n rencontres ${\displaystyle \scriptstyle D_{n,n}\,}$ (i.e. ${\displaystyle \scriptstyle j\,=\,0\,}$) is obviously 1, since this is the identity permutation.
• the number of permutations of n with n-1 rencontres ${\displaystyle \scriptstyle D_{n,n-1}\,}$ (i.e. ${\displaystyle \scriptstyle j\,=\,1\,}$) is obviously 0, since once you reach ${\displaystyle \scriptstyle n-1\,}$ rencontres, you necessarily have ${\displaystyle \scriptstyle n\,}$ rencontres.
The triangle shows that the first two columns have the relations
${\displaystyle D_{n,1}=n~D_{n-1,0}\,}$ (obviously, since there are ${\displaystyle \scriptstyle n\,}$ ways to choose the fixed point)
${\displaystyle D_{n,0}=D_{n,1}+(-1)^{n}\,}$ (since ${\displaystyle \scriptstyle !n=!(n-1)\cdot n+(-1)^{n},\,n\,\geq \,1,\,}$ see number of derangements (recurrences))
giving
${\displaystyle D_{n,0}=n~[(n-1)~D_{n-2,0}+(-1)^{n-1}]+(-1)^{n}=n~(n-1)~D_{n-2,0}+(-1)^{n-1}~(n-1)=(n-1)[n~D_{n-2,0}+(-1)^{n-1}]\,}$
| n \ r | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | | | | | | | | | | | | |
| 1 | 0 | 1 | | | | | | | | | | | |
| 2 | 1 | 0 | 1 | | | | | | | | | | |
| 3 | 2 | 3 | 0 | 1 | | | | | | | | | |
| 4 | 9 | 8 | 6 | 0 | 1 | | | | | | | | |
| 5 | 44 | 45 | 20 | 10 | 0 | 1 | | | | | | | |
| 6 | 265 | 264 | 135 | 40 | 15 | 0 | 1 | | | | | | |
| 7 | 1854 | 1855 | 924 | 315 | 70 | 21 | 0 | 1 | | | | | |
| 8 | 14833 | 14832 | 7420 | 2464 | 630 | 112 | 28 | 0 | 1 | | | | |
| 9 | 133496 | 133497 | 66744 | 22260 | 5544 | 1134 | 168 | 36 | 0 | 1 | | | |
| 10 | 1334961 | 1334960 | 667485 | 222480 | 55650 | 11088 | 1890 | 240 | 45 | 0 | 1 | | |
| 11 | 14684570 | 14684571 | 7342280 | 2447445 | 611820 | 122430 | 20328 | 2970 | 330 | 55 | 0 | 1 | |
| 12 | 176214841 | 176214840 | 88107426 | 29369120 | 7342335 | 1468368 | 244860 | 34848 | 4455 | 440 | 66 | 0 | 1 |
The triangle ${\displaystyle \scriptstyle T(n,r)\,}$ of rencontres numbers gives the infinite sequence of finite sequences
{{1}, {0, 1}, {1, 0, 1}, {2, 3, 0, 1}, {9, 8, 6, 0, 1}, {44, 45, 20, 10, 0, 1}, {265, 264, 135, 40, 15, 0, 1}, {1854, 1855, 924, 315, 70, 21, 0, 1}, {14833, 14832, 7420, 2464, 630, 112, 28, 0, 1}, {133496, 133497, 66744, 22260, 5544, 1134, 168, 36, 0, 1}, ...}
Triangle ${\displaystyle \scriptstyle T(n,r)\,}$ of rencontres numbers (number of permutations of ${\displaystyle \scriptstyle n\,}$ elements with ${\displaystyle \scriptstyle r\,}$ fixed points). (Cf. A008290)
{1, 0, 1, 1, 0, 1, 2, 3, 0, 1, 9, 8, 6, 0, 1, 44, 45, 20, 10, 0, 1, 265, 264, 135, 40, 15, 0, 1, 1854, 1855, 924, 315, 70, 21, 0, 1, 14833, 14832, 7420, 2464, 630, 112, 28, 0, 1, 133496, 133497, 66744, 22260, 5544, 1134, 168, 36, 0, 1, ...}
## Triangle of D_{n, r} / D_{n−r, 0}
The following triangle reveals that the terms ${\displaystyle \scriptstyle D_{n,n-j}\,}$ of the ${\displaystyle \scriptstyle j\,}$th nontrivial, i.e. ${\displaystyle \scriptstyle j\geq 2\,}$, falling diagonal from the right (indexed from 0,) where ${\displaystyle \scriptstyle r\,=\,n-j\,}$, are divisible by the leftmost term ${\displaystyle \scriptstyle D_{j,0}\,}$ of the falling diagonal.
Furthermore, the subtriangle excluding the rightmost two (trivial) falling diagonals (i.e. for ${\displaystyle \scriptstyle n-r\,=\,j\,\geq \,2\,}$) is now the corresponding subtriangle of the Pascal triangle, thus we have
${\displaystyle D_{n,r}={\binom {n}{r}}~D_{n-r,0},\quad n-r=j\geq 2.\,}$
This is obvious since there are ${\displaystyle \scriptstyle {\binom {n}{r}}\,}$ ways to choose the ${\displaystyle \scriptstyle r\,}$ fixed objects and the remaining ${\displaystyle \scriptstyle n-r\,}$ objects must be a complete derangement.
| n \ r | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | | | | | | | | | | | | |
| 1 | 0 | 1 | | | | | | | | | | | |
| 2 | 1 | 0 | 1 | | | | | | | | | | |
| 3 | 1 | 3 | 0 | 1 | | | | | | | | | |
| 4 | 1 | 4 | 6 | 0 | 1 | | | | | | | | |
| 5 | 1 | 5 | 10 | 10 | 0 | 1 | | | | | | | |
| 6 | 1 | 6 | 15 | 20 | 15 | 0 | 1 | | | | | | |
| 7 | 1 | 7 | 21 | 35 | 35 | 21 | 0 | 1 | | | | | |
| 8 | 1 | 8 | 28 | 56 | 70 | 56 | 28 | 0 | 1 | | | | |
| 9 | 1 | 9 | 36 | 84 | 126 | 126 | 84 | 36 | 0 | 1 | | | |
| 10 | 1 | 10 | 45 | 120 | 210 | 252 | 210 | 120 | 45 | 0 | 1 | | |
| 11 | 1 | 11 | 55 | 165 | 330 | 462 | 462 | 330 | 165 | 55 | 0 | 1 | |
| 12 | 1 | 12 | 66 | 220 | 495 | 792 | 924 | 792 | 495 | 220 | 66 | 0 | 1 |
## Formulae
The number of derangements of a nonempty set may be obtained from the ratio of the factorial of ${\displaystyle \scriptstyle n\,}$ and Euler's number
${\displaystyle D_{n,0}=\left[{n! \over e}\right],\quad n\geq 1,\,}$
where the ratio is rounded up for even ${\displaystyle \scriptstyle n\,}$ and rounded down for odd ${\displaystyle \scriptstyle n\,}$.
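For example, $4!/e \approx 8.83$ rounds up to $D_{4,0} = 9$ (even ${\displaystyle \scriptstyle n\,}$), while $5!/e \approx 44.15$ rounds down to $D_{5,0} = 44$ (odd ${\displaystyle \scriptstyle n\,}$).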
${\displaystyle D_{n,r}={n \choose r}\cdot D_{n-r,0}.\,}$
The proof is easy after one knows how to enumerate derangements: choose the ${\displaystyle \scriptstyle r\,}$ fixed points out of ${\displaystyle \scriptstyle n\,}$; then choose the derangement of the other ${\displaystyle \scriptstyle n-r\,}$ points.
An explicit formula for ${\displaystyle \scriptstyle D_{n,r}\,}$ can be derived as follows
${\displaystyle D(n,r)={\frac {n!}{r!}}[z^{n-r}]{\frac {e^{-z}}{1-z}}={\frac {n!}{r!}}\sum _{k=0}^{n-r}{\frac {(-1)^{k}}{k!}}.}$
This immediately implies that
${\displaystyle D_{n,r}={n \choose r}D_{n-r,0}\;\;{\mbox{ and }}\;\;{\frac {D_{n,r}}{n!}}\approx {\frac {e^{-1}}{r!}}}$
for ${\displaystyle \scriptstyle n\,}$ large, ${\displaystyle \scriptstyle r\,}$ fixed.
## Recurrences
The numbers in the ${\displaystyle \scriptstyle r\,=\,0\,}$ column are the numbers of derangements (the number of permutations with 0 rencontres, i.e. permutations of ${\displaystyle \scriptstyle n\,}$ objects with no fixed points). Thus
${\displaystyle D_{0,0}=1,\,}$
${\displaystyle D_{1,0}=0,\,}$
${\displaystyle D_{n,0}=(n-1)(D_{n-1,0}+D_{n-2,0}),\quad n\geq 2.\,}$
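Not part of the original page: a short computational sketch that rebuilds the first rows of the triangle from this recurrence together with the binomial formula ${\displaystyle D_{n,r}={n \choose r}D_{n-r,0}}$ above; the function names are my own.

```python
# Rebuild the rencontres triangle from D(n, r) = C(n, r) * D(n - r, 0),
# where D(m, 0) follows the derangement recurrence D(m) = (m - 1)*(D(m - 1) + D(m - 2)).
from math import comb

def derangements(m):
    a, b = 1, 0  # D(0), D(1)
    if m == 0:
        return 1
    for k in range(2, m + 1):
        a, b = b, (k - 1) * (a + b)
    return b

def rencontres(n, r):
    return comb(n, r) * derangements(n - r)

for n in range(6):
    print([rencontres(n, r) for r in range(n + 1)])
# Prints: [1], [0, 1], [1, 0, 1], [2, 3, 0, 1], [9, 8, 6, 0, 1], [44, 45, 20, 10, 0, 1]
```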
## Other formulae
${\displaystyle \sum _{r=0}^{n}D_{n,r}=n!\,}$
## Generating function
### Ordinary generating function
O.g.f. for column ${\displaystyle \scriptstyle r\,}$ is [TO BE VERIFIED]
${\displaystyle G_{\{D_{n,r}\}}(x)\equiv \sum _{n=r}^{\infty }D_{n,r}~x^{n}={\frac {1}{r!}}\sum _{n=r}^{\infty }n!{\frac {x^{n}}{(1+x)^{n+1}}},\quad r\geq 0.\,}$
O.g.f. for row ${\displaystyle \scriptstyle n\,}$ is
${\displaystyle G_{\{D_{n,r}\}}(x)\equiv \sum _{r=0}^{n}D_{n,r}~x^{r}=n!\sum _{r=0}^{n}{\frac {1}{r!}}(-1)^{r}(1-x)^{r},\quad n\geq 0.\,}$
### Exponential generating function
[TO BE VERIFIED]
${\displaystyle E_{\{D_{n,r}\}}(x,y)\equiv \sum _{n=0}^{\infty }\sum _{r=0}^{n}{\frac {D_{n,r}}{n!~r!}}x^{n}y^{r}=\exp {\Bigg (}{\frac {x(y-1)}{1-x}}{\Bigg )}.\,}$ |
# A question about automorphisms of $II_1$ factors
When one studies automorphisms of $II_1$ factors, one usually looks at the point norm topology - It is a well known result of Effros that if a $II_1$ factor $\mathcal{M}$ does not have property $\Gamma$, then Inn($\mathcal{M}$) is closed in Aut($\mathcal{M})$. The converse is also true : If Inn($\mathcal{M}$) is closed in Aut($\mathcal{M})$ in the point norm topology, then $\mathcal{M}$ does not have property $\Gamma$.
Has anyone studied the topology of pointwise SOT(equivalent to pointwise 2-norm) convergence? Formally, a net of automorphisms $\alpha_{\beta}$ converges to the automorphism $\alpha$ if for every $x$ in $\mathcal{M}$, $||\alpha(x) - \alpha_{\beta}(x)||_2 \rightarrow 0$.
Is it known, for instance, whether the inner automorphisms are always dense in Aut($\mathcal{M}$) in this topology?
Edit: Jesse Peterson is right - I was confusing topologies. Also, the statement that Inner automorphisms are closed in the point 2 - norm topology on Aut(M) $\Leftrightarrow$ The $II_1$ factor does not have property $\Gamma$ is theorem XIX.3.8 in Takesaki III. I thought it was due to Effros, but Takesaki does not give a reference.
|
Chapter 4, Exponential and Logarithmic Functions - Section 4.3 - Logarithmic Functions - 4.3 Exercises: 24
(a) $\ln{0.5}=x+1$ (b) $\ln{t}=0.5x$
Work Step by Step
RECALL: $e^y=x \longrightarrow \ln{x}=y$ Use the rule above to obtain: (a) $\ln{0.5}=x+1$ (b) $\ln{t}=0.5x$
|
# Anatomically Correct Modular Body Plan Animals
The fixedness of body plans varies widely across different types of Earthling organisms. At one extreme, you have things like tardigrades, for which every individual of any given species has exactly the same number and arrangement of individual cells, differing only in size; at the other extreme, you have things like mycelial fungi, which don't really have any consistent large-scale shape.
At various points in the middle of the spectrum, you have typical animals and plants: all animals of a given species tend to have the same high-level shape (e.g., number and arrangement of limbs), even though they differ in small-scale details and may be different sizes, while plants tend to be similar in terms of the shapes of specific organs (leaves, branching structure, etc.), but can modularly assemble those mid-range features into wildly different large-scale structure--i.e., it is easy to find pairs of animals that are nearly identical to each other, but good luck finding two trees that grew in exactly the same shape! This modular construction conveys numerous advantages to plants, most notably the fact that they can sustain massive injuries and still survive--lop a third of the limbs off a typical tree, and it won't care, 'cause it's got or can grow spares. Lop off a third of a dog, and, well... you've got an animal cruelty case and a lot of blood to clean up.
So, can we make a plant-style modular-type body plan work for more animal-like creatures? How would something like that evolve?
Requirements
A modular animal must:
1. Be mobile.
2. Be heterotrophic--whether vegetarian or carnivorous doesn't really matter.
3. Be constructed largely out of repeatable, interchangeable, and redundant organ complexes, such that damage and regrowth is both possible and expected.
4. Have its detailed large-scale shape determined in large part by environment and injury history, not fixed by genetics.
And for the sake of narrowing the scope:
1. Live on land.
2. Exist in a size range typical of mammals--say, somewhere between a housecat and an elephant.
This does not, however, necessarily mean that a modular animal can't have critical specialized organ systems, like a single head or single digestive tract; after all, separate the crown of a tree from its roots, and it will die (unless each separate part is in good conditions to regenerate the missing half, of course)--each part may be modular, but that doesn't mean the different parts are arbitrarily divisible.
A list of all Anatomically Correct questions can be found here: Anatomically Correct series.
• @Chickensarenotcows Added specifications for size and land-living, but I am intrigued by what existing creatures you have in mind. Aug 12, 2019 at 21:35
• I was thinking hydras, planaria and slime-molds, way outside the current scope of the question. As I see it, the biggest issue is respiration and oxygen distribution: cut the torso off a mammal, the legs might just want to run away, but heart and lungs are nowhere nearby to help. I'll think about it, but have nothing to add at this point. Aug 12, 2019 at 21:44
• How would something like that evolve? It wouldn't, fractal animals had this and died out. Land living constraint is too much considering everything started in the sea for good reason. It would never evolve far enough to get on land Aug 14, 2019 at 0:30
• Your question is a bit confusing: do you want to physically be able to swap parts, or just swap them out at the genetic level? Plants are not modular, they are totipotent, which is a very different thing. If you just want something that can regrow after severe damage, including loss of entire limbs and organs, look at echinoderms.
– John
Sep 16, 2019 at 19:21
• @John I want them to be able to swap parts out at the developmental level, like plants do. Sep 17, 2019 at 16:22
Oops, someone did that (sorta)
I’d like to introduce the siphonophore, a colony organism made of specialised individuals called polyps (or sometimes zooids). Examples include the long strings of polyps.
The hideous mass of many different polyps.
Or perhaps the more familiar, blue bottle jellyfish.
But it isn’t a jellyfish, it’s just related. The sail is an individual organism, and so are each cluster of stinging tentacles, the feeding polyps, and reproductive polyps. Each individual serves one purpose and is utterly useless on its own, having to rely on the other polyps to perform the actions it lacks.
This concept can easily be blown up into more complex (and less gelatinous) colonies. In fact, a certain future biology documentary has explored this a little (https://speculativeevolution.fandom.com/wiki/Ocean_phantom).
The problem of living on land means you may need ‘lung polyps’ ‘locomotive polyps’ and ‘gut polyps’ but otherwise vascular connections and more solid construction may not be too far-fetched. The other option is simply to make them small (I understand this doesn't meet one of your checkpoints). Gas exchange isn’t an issue for insects and frogs.
@Chickensarenotcows mentioned planaria, which have an advantage similar to some worms in that they can be completely bifurcated and, as long as a certain portion of the body is left intact, completely regenerate. The way this is achieved in siphonophores and partly in planaria is segmentation - basically modularization of the body. Examples of similarly segmented life (without the regenerative ability of course) are centipedes and worms. A merging of these more complex, terrestrial body plans with the colony zooids/polyps of siphonophores could yield a believable organism of your description (if Arthropleura is anything of an example).
P.S. Sorry for the enthusiasm, I just really like these guys.
• The other big issue is coordination: if the same parts are not occurring together over and over, the chances of evolving the correct control mechanisms are laughably small. You are literally in tornado-assembling-a-747 territory. If you want regenerative ability, look at echinoderms: a sea cucumber can literally fire its internal organs out its anus and then just grow new ones later, or you have starfish, which you can cut into sections and, as long as they can still feed, will regenerate completely.
– John
Sep 16, 2019 at 19:14
The problem is that plants were able to evolve the way they did because, if you cut them, the only criterion a new limb needs to satisfy is access to sunlight. This makes random developmental generation of new material possible without strong negative consequences.
Doing this for an animal, which requires locomotion, would be extremely difficult. That's why animals like salamanders can regenerate limbs, but the regrown limb follows a very specific structure that allows them to walk the same way every time it happens. If they generated a random limb, varying in size and shape, it would make walking very difficult and they would die off very quickly after that. Hence, not very appropriate for evolution.
I propose an entirely new kind of animal with a unique organ or a section of the brain completely devoted to calculating how to use and adapt to its randomly generated limbs. This would allow the animal to have random locomotion patterns, making each individual of the species very difficult for predators to track, and thus would benefit the evolution of the species. Some locomotion patterns would be less helpful than others, and those patterns would over time be removed from the rules that the generation of new limbs follows (i.e., having limbs of vastly different lengths would be avoided).
This animal would be similar to insects, in the sense that it would not have a closed circulatory system, because loss of blood could cause death and would inhibit the regeneration of appendages. Rather, upon losing a section of the body, the animal might go into a hibernation stage dedicated to generating new limbs and training the brain how to use those limbs.
Summary of why my model works:
1. Be mobile
My model moves using randomly generated limbs.
2. Be heterotrophic--whether vegetarian or carnivorous doesn't really matter.
My model works for all kinds of animals, regardless of diet.
3. Be constructed largely out of repeatable, interchangeable, and redundant organ
complexes, such that damage and regrowth is both possible and expected.
Other than a lack of circulatory system and a unique organ or dedicated area of the brain, my model does not specify any organs that cannot be regrown.
4. Have its detailed large-scale shape determined in large part by environment and
injury history, not fixed by genetics.
My model supports its evolutionary development, as well as the ability to generate limbs in a way that can adapt to the environment. For example, developing limbs suitable for climbing, limbs for running, or a combination of both and any other possible uses.
5. Live on land.
My model can be land-based, water-based, or flight-based.
6. Exist in a size range typical of mammals--say, somewhere between a
housecat and an elephant.
No size restrictions need to be placed on my model. However, the larger the animal, the longer the amount of time limb regeneration would require (leaving the animal vulnerable), which suggests that evolution would tend to lean towards developing the species into smaller creatures.
I think the comments about Echinoderms are actually a good start to get on the right track. It is already the phylum most closely related to the phylum Chordata (which includes mammals), and many of them have similar, if not exact, traits you are describing.
1. Be mobile. No question there. Many species of Echinoderms are mobile, if slow.
2. Be heterotrophic. Again, no issues there, Echinoderms definitely eat stuff.
3. Be constructed such that damage and regrowth is both possible and expected. A notoriously important characteristic of many of the most commonly recognizable Echinoderms.
4. Have its detailed large-scale shape determined in large part by environment and injury history, not fixed by genetics. Here is where we start to diverge from known Echinoderms, and have to borrow from other forms of life. I see 2 main options here, depending on just how mobile the animal is, and just how specialized it's different organ structures are.
Option 1: it is very slow moving, and rarely moves when it can avoid it. It remains in place, foraging any food source in the area immediately within its reach, and stays put until the local food source is entirely exhausted. Whenever possible, it grows toward food it can detect, rather than moving toward it. Its appendages for eating and for locomotion are both very small, very numerous, and interconnected (it eats any food it steps on through its feet). When remaining in its current location is not possible, it uses starfish type locomotion (possibly in conjunction with snail/slug type locomotion) to move to a new food source, and repeats the process. Size/shape of the individual is determined by the shape of the food sources it encounters, as it (at least somewhat) grows in to the shape of that source while it feeds.
Option 2: Slightly faster moving, but still slow. A generally rounded or spherical shape, but with no set numbers of appendages. I imagine something the shape of a sea-urchin, with a random number of spines. More specialized locomotion appendages are present, but other appendages can be recruited for locomotion for faster movement in emergencies. Existing appendages are converted, or new ones grown, as needed, after injury or accident. The key being "as needed". If a missing limb isn't slowing the animal down, or keeping it from feeding normally, etc, no need to regrow it at all. If repeat attacks occur, defensive appendages could be converted to locomotion "permanently", or vice versa, depending on the types of attacks and the results.
1. Live on land. Sea Cucumbers already have a similar external body plan to slugs, and starfish can reach rocks at high tide that are out of the water at lower tides, and stay there until the tide returns, so there is some (little) precedent for out of water (if not fully land based) animals with some of these characteristics.
2. Size range somewhere between a housecat and an elephant. Sunflower sea stars can get about 4 feet across from arm tip to arm tip (eye-balling some google images, it looks to me like just the body, without the arms, is close to 2 feet across), and sea cucumbers can get more than 6 feet long
As far as evolution, it seems to me to be no great leap from tide pools to fully land-based lifestyle, both to evade predators in the tide pools themselves, as well as reach land-based food sources that tide-pool-locked species can't reach, especially if those food sources are similar enough to require relatively small change in digestive function. The first step would probably be water retention, to avoid dehydration on land, followed by specializations for respiratory functions, and then locomotion and food source specialization would be next.
• The OP has asked for plant-style and specified that he does not want organisms that can simply "regenerate". He wants organisms that can generate in a way similar to plants, where their environment directly influences the regeneration process. Sep 19, 2019 at 22:30
• @overlord Correct Sep 20, 2019 at 12:43
Animals have long had self-repair mechanisms to accommodate loss of cells through either attack/cell death, injury (such as lacerations) or bone fracturing.
It is actually in fact also a simple extension of our growth. As we grow, our cells divide and organs grow. When we are embryos, we do not have all organs yet, but as we slowly accumulate more cells they create more organs as dictated by our DNA.
In fact, analysis of foetal growth is basically a story of how we evolved. There is a reason why we look so 'tadpole' when we are young, yet as we grow more and more features are added.
So what you need is a way for cells to divide and add new organs to replace ones that are lost (perhaps entire new organs) - at a much more drastic, foolproof rate that we do now. This will be difficult but not impossible.
• We need to ensure continuous function, such that absent organs are not missed. We therefore need redundancy in major organs, so we may need more kidneys, hearts and other organs in different areas to provide this redundancy. Such evolution would be a difficult leap, but not unheard of.
• We need the ability for growth to be following a new pattern, in a way which retains functionality. So if our arm is removed, we need to grow nerves, bone, muscle and skin in the same way we grow them as an embryo. This would require coordination of regeneration in a much more smart way than we do currently, but again should be feasible.
• We need a pathway in evolution to achieve this. Evolution is driven by both necessity and sexuality. To accomplish your objectives in your question, there needs to be both a physical need to grow this way, and a psychological desire to do so by your mate. Anything goes, but I think it would be possible to find a route there if these hold true.
I am not entirely clear on what your question is. I imagine you are asking whether an organism can develop or evolve in such a way that it has the internal systems of a plant (regeneration and such) as well as the higher functions of animals (locomotion, thought), i.e. something like DC's Swamp Thing.
This is most likely going to turn into a Biology lecture. And I will be making some conjecture where my expertise is lacking. But, here are my thoughts.
## An organism capable of locomotion and some manner of self-awareness gains these at the sacrifice of other traits, including a modular cell nature (as far as I am aware). Let me explain.
In RPG terms, I think the biggest challenge for such an organism would be the trade-off between high physical stats and processing power. The modular structure of plants enables them to grow to sizes unattainable by most animals. It facilitates damage reduction and high regeneration, such that they can recover from being cut down provided the right conditions are met. But this comes at the cost of a lack of cognition.
Any animal no matter how small, has a sense of self. They are aware of their own body and have at least the most rudimentary instincts. Even a cockroach or an earth worm has self-preservation instinct. While there are some plants with similar defensive characters, they are more trigger based than instincts. (E.g. Tactile/Odor based irritants & allergens)
The plant body has no centralized structure, which means any plant cell taken from the whole can perform all the functions of the whole. And this range of functions, while impressive, is limited in scope. In biochemical terms, a cell can perform only so many reactions at a time, or in a given interval. If it needs to do more, or add more, it must sacrifice something else to make space. It needs more processing power.
Take this scaled-up macro analogy. Whales were once terrestrial mammals. In time they evolved into aquatic organisms, and concurrently their physiology adapted to suit their new evolutionary path. Their forelimbs flattened and became flippers; the hind limbs, relatively unnecessary for swimming, regressed completely and are now just vestigial bones, making their bodies more streamlined. And the freed-up energy, previously spent on the development of limbs, was repurposed for better respiratory capabilities and lungs that can withstand the strong pressures of the deep waters.
Similarly, in the procession leading from Cell based life forms to Kingdom Animalia, the cells instead of doing everything by itself started to delegate and compartmentalize tasks. In Multi cellular organisms Cells began to Differentiate to accommodate specialized tasks. These specialized cells could do only a fraction of what the originals could. But now instead of a single cell performing n number of tasks, there were x types of cells each performing n/x number of different processes. (An oversimplification. Differentiation in actual physiological systems rarely follow set division of labor)
The remainder of the cell’s available biochemical potential, i.e. its Processing Power could be allotted to new tasks. And believe me you, the capability of self-propelled motion is a game changer. Now, the energy sources available to you are as large as the expanse you can cover on your own. This albeit means that you’ve sacrificed the capability to utilize some of the micro nutrients and minerals.
With Locomotion comes more specialization. Because remember, now you are actively interacting with your environment as opposed to be molded by it. And each specialization develops it own sub-specialization.
All this necessitates a centralized control scheme, because the old way of moving by ‘sensing’ higher concentrations of nutrients is not always viable over the now (relatively) large distances. That requires more processing power, which needs more energy and more specialization. And so we ascend further up the ladder of complexity.
And specialization is the death of flexibility. In the current scheme of complex organisms, all cells in the body have, in theory, the capability to perform all necessary physiological functions, but not at the same time. These progenitors (stem cells) become differentiated to serve their assigned roles as early as day 5 of development. (Note that we’ve shifted from talking about asexual cloning methods to gene mixing in the form of sexual fertilization.) From then on, until the death of that organism, individual groups of cells perform only the functions assigned to them.
And as mentioned before, this system is so welded in place that such groups of cells are virtually irreplaceable. Except the brain, which is ‘literally’ irreplaceable. The brain, one of the crown jewels of the animal kingdom and the greatest energy sink in the body (about 20 percent of the total available), is so vital that stopping its function for more than a couple of minutes would end that organism’s existence, even when all other organs are working perfectly. And it is so individualistic that replacing it with a spare is, let’s say, NOT the preferred fix. The condition called brain death results in a body which is, in theory, in working condition; hence one of the reasons that victims of brain death are treasure troves for multiple organ replacements.
To summarize: the very fact that you are ‘aware’ of your existence means you’ve given up the chance at the unbelievably broken ability to regenerate from the smallest of parts. Meanwhile, lacking that ‘awareness’, plants and other cellular entities capable of such feats are not in a position to, let’s say, ‘appreciate’ it.
Back to the game analogy: you are a mage that sacrificed physical vitality to gain intelligence for casting spells, while the other side is a passive berserker/tank that gained near-infinite regenerative capabilities but is nothing more than collected mass. (Implying plants are dumb would be incorrect and highly inaccurate, and in some circles intelligence is not measured by the same standards for plants as for animals. But you get the idea.)
P.S. I’m aware that some parts of my answer have run far, far off track. But if I was able to assist in any way, I’ll feel satisfied. This was a good thought piece for me as well.
• Excellent answer, a first post showing great promise. Welcome to the site. Sep 22, 2019 at 9:06
A modular animal could be similar to a lizard in overall shape, but with the legs and tail branching out modularly like a plant. This creature would also have to be herbivorous, have extremely good predator defences, and have very little competition, as these traits would mean that it never needs to move quickly, reducing the selective pressure to become more standardized.
Another way modular animals could evolve is if a sessile modular animal, due to an extinction event or something similar, ended up as the only moving organism in its area, which would likely lead to it evolving to become motile again.
Interestingly enough, afaik your chosen terminology for the question might've been de-railing.
"Plant-likes" are more similar to a Reaction-Diffusion System than modular Legos (it's how leaves, roots, stomata, etc. are "located" and then grown). "Blob-likes" even more so. Whereas "Fixed-likes" are more like modular Legos, where each has been pruned down to the bare minimum and essential arrangement.
By necessity you're looking somewhere on the spectrum of chemical soup for a single organism: from chaos (a la calico spots) to order (a la colonies), with as little specialization as possible. Injecting modular specialization Legos here and there adds spice to the creature but also weakness, unless it's a redundant or "omnipotent" piece, in which case you're leaning more towards order anyway. A cat, for example, has many Legos: specialized cells modularized into skin, hair, and other pieces. If you averaged cells' competencies and merged them into a few cell types to increase redundancy, you'd have very little structure left besides reaction-diffusion, diffusion-aggregate, or crystalline patterns, so your body plans would be limited. A knee, for example, requires too much coordination; imagine growing any plant and hoping it formed vague muscle shapes and separate branches coming together in a socket-like shape! Even the rudest, squinty-eyed approximation would be sensational. On the other hand... your best bet for what you want is probably (as has been mentioned):
Symbiosis
Just make a bunch of little interchangeable symbiotic creatures. This one does thinking, this one does acid; if they're essentially "specialized cells" but can survive alone, just better together... you've got a modular creature with very few constraints! (And as @XenoDwarf's answer pointed out: they don't even necessarily need to be able to survive solo. But that limits you a little bit, even if you gain some efficiency.)
You've probably got a higher caloric requirement with symbiosis, which, if you're familiar with integration or inlining in any of various contexts (programming, economics, etc.), will probably be pretty easy to see. But other than that issue it's probably the best fit!
• As a sweet aside: a modular "thinking" creature by necessity is a "mind control" creature :3 (for whatever level of mind a "non-thinking" symbiote may have) Sep 20, 2019 at 11:19
• Also throwing some sort of contingency on top of a modular body plan is not likely to work well. Ex: Photosynthesis in each human cell as a fall back to survive while you regen/watch-your-body-rot/whatever. Is not likely to work well, brief run down why over here Sep 20, 2019 at 11:26 |
Collaboration in Software
The problems that come with using other people's code in your programs, and why you should care.
Imagine that you’re writing a big application for helping writers bring their creations to life. Of course, a big part of this application is an editor where writers will be able to type their drafts and edit the novel. But they also must be able to keep track of characters, locations, events, etc.
You predict that your application will require many different screens, with buttons, images, text fields, etc. Most of what your application needs is not really related to the core value you’re providing to your users. A text field isn’t that much different in a writer application compared to any other application. If you had to write everything from scratch, you’d never ship anything. With that in mind, you add some GUI libraries to your project in order to focus on the important bits.
The problem of extending components
Four months into development. Things are going really well, and you have a working prototype of your project. But there’s a problem: the rich text editor that you’re using doesn’t allow annotations to be attached to slices of text. One of the primary values your software should deliver is allowing writers to collaborate with others and quickly access reference information. For now you’re putting this information in a side-bar, but users are getting confused by it—it’s not a very good user experience.
Because you’ve only realised this late in the project, you have a few choices:
1. Pick a different GUI library that would allow this, which requires throwing your 4 months of work away;
2. Write your own rich text editor component, and figure out how to make it work with the rest of the GUI library;
3. Extend the rich text editor component to do what you need;
4. Fork the library and build the functionality you need on top;
(1) is clearly a bad choice: at that point your project may just as well be dead. That leaves us with (2), (3), and (4).
Object-Oriented design promises you (3), but that only works if the components were designed for extension, which is often not the case. And mainstream languages supporting this kind of object-oriented design will generally not guide you towards designing-for-extension. For example, in Java, state is accessed directly by the methods, which makes most changes impractical.
(2) is trickier. In a sense, parts of the component may be available for you to use (through composition), but there’s no guarantee that your component will be accepted by the library. If the library uses a closed set of types, like Functional Reactive Programming might, then you’re out of luck. It also puts on you the burden of re-implementing any new feature the GUI library adds, which you’d get for free with option (3).
(4) always works, as long as you have access to the source code, but it also requires taking over the maintenance for the library, and that’s a significant commitment. Particularly so because the original library will continue to evolve. You might find yourself having to cherry-pick and adapt security patches and other changes to your codebase, all of which may take a considerable amount of time.
We can’t really assume that components will always be designed for extensibility. The library authors most likely don’t know that you’re going to use their library. Your use case might never have crossed their minds, even if what they’ve implemented is close enough to what you wanted. And the library authors have other consumers, so they can’t really change the library to suit you (and you alone).
So we need a system that allows extensibility despite people not designing for it, or coordinating their changes with all consumers.
The problem of combining components
Let’s say you’re lucky, and in this case the library designers defined extension points that work for your use case. You extend their rich text editor component, and continue on with your life. Now you’re at a point where your beta users are finding a lot of errors that should’ve been fixed in past versions. That’s making your software look unreliable, and your users are getting more frustrated by the day.
You decide that it’s time to finally invest a bit in automated tests.
The testing libraries you can find are simple enough. They’ll often have an equals operation that compares whether some value looks like what you expected it to be. There’s only one problem: the definition of “equality” is baked into that operation, and it doesn’t mean what you want it to mean. For simple values, like numbers, it tests whether they’re the same numeric value, so equals(1, 1) succeeds regardless of where the 1s are coming from. But for more complex values, like a list of text slices, equals(a, b) succeeds only if both a and b point to the same position in memory.
Testing memory positions doesn’t help you. You want to know if, after some operations, the list of text slices looks like what you expect. The “what you expect” part will never be in the same memory position as the actual text slices, because they’re coming from different sources; one is you, the other is “whatever is the current state of the rich text editor”.
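To make that concrete, here is a tiny Python sketch (illustrative only; the testing library in this story is hypothetical) of the difference between comparing memory positions and comparing contents:

# Two lists built from different sources: same contents, different memory positions.
expected = [("chapter 1", 0, 120), ("annotation", 121, 145)]
actual = [("chapter 1", 0, 120), ("annotation", 121, 145)]
print(expected is actual)   # False: identity comparison looks at memory positions
print(expected == actual)   # True: structural comparison looks at the contents

A testing library whose equals is pinned to the first behaviour is useless for the "does the editor state look like what I expect" question.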
After enough questioning of why a testing library would do this, you set out to write your own. You’d have to, anyway, so might just as well fix what you dislike about this one and release it to the world. This way they can all benefit from a clearly better thought-out project.
So, the first thing to do is to ask yourself: how do I make the testing library support any user-defined type? Some of the answers may be:
1. “Define an interface that must be implemented.” — For example, the objects must provide a .equals(that) method which your library then may use.
2. “Define a type class/protocol that must be implemented.” — This is like (1), but the method may be implemented outside of the object’s source code, by a different author.
3. “Use a form of open multiple dispatch.” — For example, users would write functions like define equals(left: RichText, right: RichText) { ... } and define equals(left: Number, right: Number) { ... }, and the library would pick the definition that matches the provided requirements in the signature.
4. “Define a parametric module that accepts a concept of equality.” — For example, a module class Test(equality: Equality) { ... } would be instantiated with an object that defines the idea of equality for all objects the program cares about.
(1) is the general approach in object-oriented programs, like Java, JavaScript, Python, Ruby, etc. There’s an interface that objects must implement in order to be used in some particular context. But there are two problems with this. The first one is that objects must be aware of all contexts in which they may be used, a priori. This is difficult when there’s no coordination between the authors of each component. The Wrapper pattern (and things like Object Algebras) may be used to mitigate this, but it requires applying the wrapper to every object you will use in the context, which is not practical (or efficient). The second one is that an implementation is restricted to a single context, barring the use of wrappers.
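A minimal sketch of approach (1) in Python; the Equatable interface, TextSlice, and SliceWrapper names are all made up for illustration:

from abc import ABC, abstractmethod

class Equatable(ABC):
    """Interface the (hypothetical) testing context requires."""
    @abstractmethod
    def equals(self, other) -> bool: ...

class TextSlice:
    """Third-party class; its author never heard of our testing library."""
    def __init__(self, start, end):
        self.start, self.end = start, end

class SliceWrapper(Equatable):
    """Wrapper that retrofits the interface onto the third-party object."""
    def __init__(self, wrapped):
        self.wrapped = wrapped
    def equals(self, other) -> bool:
        return (self.wrapped.start, self.wrapped.end) == (other.wrapped.start, other.wrapped.end)

# Every TextSlice has to be wrapped before it can enter the testing context.
assert SliceWrapper(TextSlice(0, 5)).equals(SliceWrapper(TextSlice(0, 5)))

The wrapping step is exactly the impractical part mentioned above: it has to happen at every boundary, for every object.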
(2) is the general approach in modern functional languages, like Scala, Haskell, Clojure, Elixir, etc. Like in (1), there’s an interface that must be implemented for the object so it can be used in some particular context. The difference is that this implementation can be done outside of the object’s source code, by a different author. This removes the problem of objects needing to know a priori all the contexts they may be used in; users can just provide the implementation for the object in their own program code. However, this approach still suffers from the second problem: your implementation is restricted to a single context. It also introduces a new problem: if two independent authors provide an implementation of the interface for the same object, they’ll conflict, and there will usually be no tool for resolving this conflict. In other words, the components become fundamentally incompatible.
(3) is much like (2), but resolves a problem with the previous approaches. In both (1) and (2), something like equals(a, b) only takes a into consideration when selecting an operation to execute. So if you want to support equals("1", 1) and equals("1", "1") you need to put that logic in a single function, using common branching operators (like if/else). In cases where you may want other authors to be able to add new operations for different combinations of types, that approach doesn’t work. Multiple dispatch extends the selection to work on any number of parameters. So one could define an equals(String, Number) function and an equals(String, String) function separately, and the right one would be selected based on the actual types at runtime.
This approach still suffers from all the other problems (2) does, however.
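Python doesn't have open multiple dispatch built in, but a small hand-rolled registry is enough to sketch the idea (all names here are illustrative, not from a real library):

_registry = {}

def define_equals(left_type, right_type):
    """Register an equals implementation for a specific pair of types."""
    def wrap(fn):
        _registry[(left_type, right_type)] = fn
        return fn
    return wrap

def equals(a, b):
    """Dispatch on the runtime types of both arguments."""
    return _registry[(type(a), type(b))](a, b)

@define_equals(str, str)
def _eq_str_str(a, b):
    return a == b

@define_equals(str, int)
def _eq_str_int(a, b):
    return a == str(b)   # e.g. treat "1" and 1 as equal

print(equals("1", "1"))  # True
print(equals("1", 1))    # True

A different author can register equals for their own pair of types later without touching this file (that is the "open" part), but two authors registering the same pair will still collide, as noted above.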
(4) is a very different approach. Instead of attaching these capabilities to particular objects, you let users provide their own notions of something to the library. Say you need to work with different types of numbers, and for some of them you want equality to be relaxed (but only for testing), as they’re not very precise. In this case you could just instantiate a testing library with a definition of equality for each type of number, like in new Testing({ equals(a, b) { ... } }). This way you still have control over certain aspects of the library, and you don’t have to commit to a single implementation context.
The obvious drawback is that this approach requires much more work: it’s harder to crowd-source implementations, and you’re expected to write glue code for each case. There are some ways of reducing the amount of configuration work, but they’re still less practical than the previous approaches. A more subtle drawback is that this leads to global incoherence; it’s not easy to know whether two uses of the module in your program will have the same behaviour, because they may have been configured differently.
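A sketch of approach (4) in Python (the Testing class and its API are hypothetical): the module is instantiated with whatever notion of equality the caller cares about, so the same program can hold two differently configured instances.

import math

class Testing:
    """A (hypothetical) testing module parameterized by a notion of equality."""
    def __init__(self, equality):
        self.equality = equality
    def assert_equal(self, actual, expected):
        if not self.equality(actual, expected):
            raise AssertionError(f"{actual!r} != {expected!r}")

# One instance with exact equality, another with a relaxed notion for imprecise numbers.
exact = Testing(lambda a, b: a == b)
approx = Testing(lambda a, b: math.isclose(a, b, rel_tol=1e-6))

exact.assert_equal("abc", "abc")
approx.assert_equal(0.1 + 0.2, 0.3)   # passes here, would fail under exact equality

The two instances coexisting in one program is both the flexibility and the global incoherence described above.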
None of the options is perfect. Today you just pick your trade-off (well, generally your programming language picks your trade-off for you) and roll with it. We need a system that allows components to be combined and adapted to any context, regardless of whether the authors intended or predicted those uses. Again, we cannot expect authors to coordinate the development of each component.
The problem of trusting components
As the development progresses you’re constantly dealing with a very important choice: should you implement a particular feature yourself, or should you add to the project a library that provides that feature?
These days I believe that most people would choose the latter in a heartbeat. But there are some interesting trade-offs to consider. On one hand, by using an existing library you can focus your energy elsewhere to provide more value for your users. On the other hand, you have to worry about the library’s quality, if it’s well tested, if it’ll work well with your code, if you can count on the authors to continue maintaining it, whether the authors aren’t out to attack your computer, etc.
Most of these questions are social problems. Our programming languages and tools aren’t designed to deal with them. For example, there’s no way in most languages to tell the system that you don’t want some particular library to access your file system. Instead, we expect people to investigate everything about the authors of the libraries they plan to use, and audit every single line of code. And then we blame victims for not properly auditing their dependencies when things naturally fail.
Because languages lack any feature to address these social problems, you get programming communities claiming that you should always keep the libraries you use to a minimum, that none of them should have any dependencies, and that they should be backed by giant corporations that will guarantee that you can trust in them. On the other side you have programming communities claiming that every dependency should be tiny so you can reasonably audit and replace them.
It goes without saying that both are constant targets of actual software attacks, and neither is a reasonable proposal for mitigating this problem.
We then have the issue of privacy. If you’re lucky, your programming language’s notion of privacy is an access modifier (like “private”) that you tack onto some variable to reduce the scope in which it can be accessed. But even this is largely useless. When we think about privacy, we also have to consider how information may leak, and we must consider that in some circumstances we want to disclose part of the information to a particular someone — but no one else. Access modifiers help with none of these, but give programmers a misguided notion of privacy.
Finally we have known attacks like Spectre and JIT Spraying, which make any library you add especially dangerous. While some of these attacks will require questionable pieces of code that would certainly make people raise an eyebrow reading it, we’d still need to audit every single piece of code we use. Having to very carefully read millions of lines of code, over and over again, each month, is simply not a reasonable expectation. Nobody would ever get anything done.
So we need programming language and tools that are designed for these social problems. We should be able to define precise privacy policies, and check for leaks and violations. We should be able to restrict what particular pieces of code may do in order to mitigate potential attacks. Down to how these pieces of code run and how much space they can use — because Spectre has taught us that even without access to any powerful object (e.g.: the filesystem) a piece of code may still read arbitrary memory from the process.
The problem of evolving components
We’ve touched on this a few times already, but one of the major problems in modern programming is “concurrent evolution”. That is, we understand that pretty much any big project we work on will require using components that were written by other people. We may even think of these components as “building blocks”, but that invites a comparison with the physical world that isn’t very accurate.
For example, people can build complex structures out of LEGO. Lego blocks can be combined, and these combined blocks can be further combined. These complex structures may even be created by groups of people, modularly. Because of this, many people think of software components as LEGO blocks. But, for LEGO, we have very restricted ways in which blocks may be combined, and group efforts must be coordinated.
In software, we have components that may be combined in a multitude of ways, most of them not predicted or endorsed by the original authors. The components are often written by different people, at different places and times. There’s little coordination in these efforts. And coordination would be difficult, since the same component may be consumed in different ways by different users.
We can still work around those problems in one way or another. But software components are always changing, even if we don’t do anything. Imagine building a huge structure out of LEGO blocks, and spending the past 4 months working on it. Everything is going smoothly, but all of a sudden blocks in the middle of your structure start shape-shifting! Not only that, they also start devouring neighbouring blocks. Your entire structure crumbles, and you cry yourself to sleep.
This is what software engineering feels like most of the time. Components may change at any point in time, and this is entirely outside of your control. There are no guarantees that the changes will be compatible with your other components — incompatibility problems are very common, in fact.
Besides components changing, software needs to deal with another constantly changing thing: data. In its most basic definition, software serves the processing of data. But data is not a static entity. We don’t build data, freeze it, and say “yay, we’re done!”. Instead, data evolves with people, responds to changes, new information is accumulated while old information is thrown away. Data changes its shape as needed, both for humans and computers.
Yet, most mainstream languages treat data as a static, never-changing aspect of software. Any changes to data structures require significant re-engineering, which is particularly troublesome in systems that must stay up all the time, like web services.
Languages also preclude the possibility of data having many shapes at the same time. Operations are required to choose a single shape and work with it, which makes distributed systems and upgrades a trickier business than it needs to be. Every engineer has to solve the problem of consistency between different versions of the application over and over again, for each new application.
A big part of this problem is our choice of technologies. Instead of pretending that the world is simple, sequential, static, and consistent, we could embrace its complex, concurrent, dynamic, and inconsistent nature. We could build tools that support the natural evolution of data and computing systems.
python
Sudoku solving
Goal
Today we are going to attempt to automatically solve a sudoku using a computer program. I've never met anyone that doesn't know what a sudoku is, but for completeness' sake: a sudoku is a puzzle that takes place on a 9x9 grid where every cell has to be given a number between 1 and 9 such that no row, column, or any of the 9 3x3 subsquares contains duplicates. A sudoku that still has to be solved has a few digits already pre-filled. The hardest sudokus have very few slots pre-filled, sometimes even requiring you to guess what number goes in a slot, which might lead to conflicts later, requiring you to backtrack.
Obviously we want to be able to complete this task of solving a sudoku rather quickly; sub-second sounds like a reasonable, yet challenging goal. Let's specify the input for our program. To make it human readable as well, let's divide the input over 9 lines, each containing the 9 digits of that row. For slots that have no value yet we simply use a 0. For example:
003020600
900305001
001806400
008102900
700000008
006708200
002609500
800203009
005010300
ILP models introduced, work life balance
Arguably the closest thing that algorithm designers have to a cheat code is the insane versatility and speed that Integer Linear Programming (ILP) can bring to the table. Many hard problems can be modelled as an ILP and solved rather efficiently. Even problems that are believed to be unsolvable in polynomial time* - like the travelling salesman - can often scale fairly well as an ILP model before the theory catches up. Obviously this does mean that an ILP solver cannot run in polynomial time either. But, considering a sudoku is always of the same size, you could say that our program runs in $$O(1)$$ time complexity :)
*Assuming P ≠ NP, but that is a rabbit hole for another time
The general process of creating an ILP model is to define a number of variables, the maximization or minimization of some expression, and a set of bounding constraints. Let's conjure up a very simple example.
During the pandemic we have all learned how important it is to have a good work life balance. As a hedonist, I want to have as much fun as possible. On the other hand, the Jordan Peterson in me says that I need to be responsible. How can I satisfy these two conflicting interests? Let's first try to quantify our scenario in a little table.
Activity Social Work Rest Effort
Lunch meeting 30 40 25 60
Working in the library 5 70 10 20
We want to spend a certain amount of time on each of the activities. Let's say we spend $$x$$ hours on lunch meetings, and $$y$$ hours working in the library. If we want the combination that gives us the least total effort - which is obviously our goal - then we can simply spend 0 hours on both. However, the official unofficial guide to being responsible says that we have to gain at least 50 'social points' every day, 100 working, and 30 resting. In other words, we have to consider the following constraints:
$x * 30 + y * 5 \geq 50 \\ x * 40 + y * 70 \geq 100 \\ x * 25 + y * 10 \geq 30$
We want these constraints to hold whilst also minimizing the effort expressed by $$x * 60 + y * 20$$.
Violating any of the constraints is not an option. As a result, we can run into a situation in which there is no solution, because two constraints cannot both be satisfied at once. In this example, however, that is not the case. We can plot the space of feasible solutions quite easily, since we are only dealing with two variables at the moment.
As you can see, all 3 constraints divide the plane in two. The side that is filled satisfies that constraint. To satisfy all constraints, we have to take the intersection of these 3 filled half-planes. Within that space we are free to find our minimum.
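If you want to reproduce that plot yourself, a quick matplotlib sketch (my own, not part of the original setup) with the numbers from the table could look like this:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 4, 400)

# Each constraint rewritten as y >= ... so we can shade the feasible side.
y_social = (50 - 30 * x) / 5    # 30x + 5y  >= 50
y_work   = (100 - 40 * x) / 70  # 40x + 70y >= 100
y_rest   = (30 - 25 * x) / 10   # 25x + 10y >= 30

# The feasible region lies above all three boundary lines (and above y = 0).
lower = np.maximum.reduce([y_social, y_work, y_rest, np.zeros_like(x)])

plt.plot(x, y_social, label="social")
plt.plot(x, y_work, label="work")
plt.plot(x, y_rest, label="rest")
plt.fill_between(x, lower, 12, alpha=0.2, label="feasible region")
plt.xlim(0, 4)
plt.ylim(0, 12)
plt.xlabel("x (lunch meeting hours)")
plt.ylabel("y (library hours)")
plt.legend()
plt.show()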
The algorithms that can solve these systems, such as simplex, are outside the scope of this post. Frankly, I wouldn't be able to do them justice, as I have a hard time understanding them myself. However, a free Python package already exists that does the hard work for us: python-mip (pip3 install python-mip). It allows us to simply create the variables, add the constraints, specify the objective, and finally run the solver.
from mip import Model, minimize
# create the model
m = Model("Balanced Life")
# declare two variables as part of the model
# the default is a continuous fractional variable, but we can also
# create exclusively whole number variables (integers) or binary ones
x = m.add_var()
y = m.add_var()
# x and y must be non-negative (we can't spend negative hours on something)
m += x >= 0
m += y >= 0
# social, work, and rest constraints must be met
m += x*30 + y*5 >= 50
m += x*40 + y*70 >= 100
m += x*25 + y*10 >= 30
# objective is to minimize total effort
m.objective = minimize(x*60 + y*20)
# solve the model
m.optimize()
# print the solution
print(f"Optimal solution: x = {x.x}, y={y.x}")
# Optimal solution: x = 1.5789473684210524, y=0.5263157894736844
And so we find that the optimal solution is to spend about 1.57 hours in a lunch meeting, but also 0.52 hours in the library.
Modeling the sudoku
Although I hope that the previous example was able to shed some light on what kind of system we can solve, I can understand that it is not immediately obvious how we can model a sudoku in such a system. After all, a sudoku seems to be exclusively a set of uniqueness constraints; we don't really care whether a field has a value larger than something. We could say that we want a column, row, or 3x3 (let's call them units) to sum up to exactly 45. Using two inequality constraints we can easily model equality: $$x \leq y \wedge x \geq y \implies x = y$$. However, these constraints are not strict enough; not any sequence of 9 elements that sums up to 45 will do. What we want to do is:

$\text{let sudoku :: [int]} \\ \text{let units :: [[index]]} \\ \forall u \in \text{units} : \forall i, j \in u : s[i] \neq s[j]$

Since $$\neq$$ is obviously commutative, we can be a bit more conservative and avoid adding each constraint twice. This will also help keep the code a bit faster, since the running time of an ILP depends heavily on the number of constraints.

$\text{let sudoku :: [int]} \\ \text{let units :: [[index]]} \\ \forall u \in \text{units} : \forall i, j \in u : i < j \implies s[i] \neq s[j]$
Hey computer, $$x \neq y$$!
We need to find a way to model $$\neq$$ with just $$\leq$$ and $$\geq$$. The first insight we need is that, for integers, $$x > y \implies x \geq y + 1$$. This fact allows us to model strict inequalities. We could then say that if x and y are not equal, x must be either strictly smaller or strictly larger: $$x \neq y \implies x \leq y - 1 \vee x \geq y + 1$$.
Seemingly, we are still stuck; ILPs do not support disjunction. However, upon closer inspection we can be even more precise. Namely, the left and right side imply each other's negation (if $$x < y$$ then we know that $$x > y$$ is false). Thus, the only way the whole expression can hold is if exactly one of the two sub-expressions holds. We can therefore use a XOR operator instead:
$x \leq y - 1 \oplus x \geq y + 1$
This is good news because a XOR might be something we can model in an ILP using binary variables. Namely, we can use an auxiliary binary variable to choose between one of the two options. This variable needs to ensure that based on whether it is 1 or 0, the corresponding expression becomes trivially true.
One way to do this in our case is to capitalize on the fact that the values are always in the range $$[1,9]$$. That way we know that if we add 10 to the right side of an expression, the left side - assuming it is an atomic variable - is always smaller. Similarly, in the second expression we can subtract 10 when the negation of the binary variable is true.
$\text{let } b \in [0,1] \\ x \leq y - 1 + 10 * b \wedge x \geq y + 1 - 10 * (1-b)$
We now have an expression that is satisfiable if and only if $$x \neq y$$. The conjunction is simply handled by adding two constraints. Let's go ahead and build a simple helper function that we can use later.
# Add an x != y constraint to model m
def add_unequality(m, x, y):
    # binary helper variable that selects which side of the disjunction holds
    bvar = m.add_var(var_type=BINARY)
    m += x <= y - 1 + 10 * bvar
    m += x >= y + 1 - 10 * (1 - bvar)
Building the program
Let's start with building our main program. We need to import some things from mip as well as itertools. itertools is just a collection of helper functions that will prove helpful later. It does not require additional packages.
We'll just hardcode the sudoku for ease of use. Finally, we can also create a secondary list that contains a variable corresponding to each digit in the sudoku. This is the list that we will mostly be working with.
from mip import Model, INTEGER, maximize, BINARY
import itertools
sudoku = [ 0,0,3,0,2,0,6,0,0,
9,0,0,3,0,5,0,0,1,
0,0,1,8,0,6,4,0,0,
0,0,8,1,0,2,9,0,0,
7,0,0,0,0,0,0,0,8,
0,0,6,7,0,8,2,0,0,
0,0,2,6,0,9,5,0,0,
8,0,0,2,0,3,0,0,9,
0,0,5,0,1,0,3,0,0]
m = Model("Sudoku")
# create an integer variable in range [1,9] for each field in the sudoku
allvars = [m.add_var(var_type=INTEGER, lb = 1, ub = 9) for _ in sudoku]
We can already add equality constraints for the numbers that are fixed in our input. After all, their values are already known so we should constrain them to a constant value. Remember how easy it was to implement equality using two inequalities? That is probably the reason that mip supports them natively.
for (i, var) in zip(sudoku, allvars):
    if i != 0:
        m += var == i
Before we continue adding all the inequality constraints, let's add a few helper functions that will collect all 9 variables in a row, column, or cell. The 3x3 cell is the most tedious; we can calculate the top-left coordinate and then loop over the 3x3 from that point. The row and column are relatively easily implemented using a slice and a list comprehension.
def get_cell_ids(cell_id):
    xs = (cell_id % 3)*3
    ys = (cell_id // 3)*3
    for dx in range(3):
        for dy in range(3):
            yield (xs+dx) + 9 * (ys+dy)

def get_cell(cell_id):
    return [allvars[i] for i in get_cell_ids(cell_id)]

def get_row(row_id):
    start = row_id*9
    return allvars[start:start+9]

def get_column(column_id):
    return [allvars[y*9+column_id] for y in range(9)]
Now we can call upon itertools.combinations to give us all the unique, unordered pairs of variables in a unit. We can see what it produces for a small example: combinations(range(5), 2) yields (0,1), (0,2), (0,3), (0,4), (1,2), (1,3), (1,4), (2,3), (2,4), (3,4).
In other words: $$\forall (i, j) \in \text{combinations}(\text{range}(5), 2) : i < j$$. This is exactly what we wanted, because we already discussed how $$x \neq y \implies y \neq x$$, so there is no need to add the constraint twice.
Hopefully I haven't confused you too much. But I think it is worthwhile considering how elegantly we can implement the final constraints for all 9 rows, columns, and 3x3s.
for i in range(9):
    [add_unequality(m, x, y) for (x,y) in itertools.combinations(get_column(i), 2)]
    [add_unequality(m, x, y) for (x,y) in itertools.combinations(get_row(i), 2)]
    [add_unequality(m, x, y) for (x,y) in itertools.combinations(get_cell(i), 2)]
That's all the constraints we need to complete a sudoku. There is still one thing missing in the ILP model, however, and that is our objective. What is our objective? Well, to satisfy all the constraints, but other than that we don't really care. Hmm, do we really need to specify a goal? Let's just do something arbitrary, like maximizing the value in the upper left corner.
Lastly, we call optimize with a max timeout of 10 seconds. Unfortunately, when the sudoku is particularly ambiguous (lots of zeroes), the solver can take quite a while, so we'll make sure that it doesn't hang. Luckily, for most normal cases it solves very quickly. If it has found a solution, we can print it to screen by accessing the values of the variables, which should now be set.
# arbitrary
m.objective = maximize(allvars[0])
print(m.optimize(max_seconds=10))
if m.num_solutions:
    for y in range(9):
        for x in range(9):
            print(f"{int(allvars[y*9+x].x)},", end="")
        print()
If we run the program, we get our solved sudoku back in a blazingly fast 0.1 seconds!
$ python3 sudoku.py
...
Total time (CPU seconds): 0.10 (Wallclock seconds): 0.10
OptimizationStatus.OPTIMAL
4,8,3,9,2,1,6,5,7,
9,6,7,3,4,5,8,2,1,
2,5,1,8,7,6,4,9,3,
5,4,8,1,3,2,9,7,6,
7,2,9,5,6,4,1,3,8,
1,3,6,7,9,8,2,4,5,
3,7,2,6,8,9,5,1,4,
8,1,4,2,5,3,7,6,9,
6,9,5,4,1,7,3,8,2,
Of course we could add some nice input handling so that we can provide any sudoku in a range of different formats, but I will leave that up to you.
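As a starting point, here is one possible sketch for reading the 9-line format from a file; the file name is just an example:

def read_sudoku(path):
    # parse the 9-line format (one row of 9 digits per line, 0 = empty) into a flat list
    with open(path) as f:
        lines = [line.strip() for line in f if line.strip()]
    assert len(lines) == 9 and all(len(line) == 9 for line in lines)
    return [int(ch) for line in lines for ch in line]

# sudoku = read_sudoku("puzzle.txt")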
Full code
For convenience, here is the full source of the project we build over the course of the post:
from mip import Model, INTEGER, maximize, BINARY
import itertools
sudoku = [ 0,0,3,0,2,0,6,0,0,
9,0,0,3,0,5,0,0,1,
0,0,1,8,0,6,4,0,0,
0,0,8,1,0,2,9,0,0,
7,0,0,0,0,0,0,0,8,
0,0,6,7,0,8,2,0,0,
0,0,2,6,0,9,5,0,0,
8,0,0,2,0,3,0,0,9,
0,0,5,0,1,0,3,0,0,
]
m = Model("Sudoku")
# create an integer variable in range [1,9] for each field in the sudoku
allvars = [m.add_var(var_type=INTEGER, lb = 1, ub = 9) for _ in sudoku]
def get_cell_ids(cell_id):
    xs = (cell_id % 3)*3
    ys = (cell_id // 3)*3
    for dx in range(3):
        for dy in range(3):
            yield (xs+dx) + 9 * (ys+dy)

def get_cell(cell_id):
    return [allvars[i] for i in get_cell_ids(cell_id)]

def get_row(row_id):
    start = row_id*9
    return allvars[start:start+9]

def get_column(column_id):
    return [allvars[y*9+column_id] for y in range(9)]

# Add an x != y constraint to model m
def add_unequality(m, x, y):
    # binary helper variable that selects which side of the disjunction holds
    bvar = m.add_var(var_type=BINARY)
    m += x <= y - 1 + 10 * bvar
    m += x >= y + 1 - 10 * (1 - bvar)
# Fix constants
for (i, var) in zip(sudoku, allvars):
    if i != 0:
        m += var == i

# All columns, rows, and cells must be distinct
for i in range(9):
    [add_unequality(m, x, y) for (x,y) in itertools.combinations(get_column(i), 2)]
    [add_unequality(m, x, y) for (x,y) in itertools.combinations(get_row(i), 2)]
    [add_unequality(m, x, y) for (x,y) in itertools.combinations(get_cell(i), 2)]
# arbitrary
m.objective = maximize(allvars[0])
print(m.optimize(max_seconds=10))
if m.num_solutions:
    for y in range(9):
        for x in range(9):
            print(f"{int(allvars[y*9+x].x)},", end="")
        print()
Let me know if you have been able to improve on the project in any way! Also just let me know if you enjoyed reading this.
If you need some more unsolved sudokus to play with, check out this list: https://github.com/dimitri/sudoku/blob/master/sudoku.txt |
# what is the easiest way to find the inverse of a 3x3 matrix by elementary column transformation?
While using the elementary transformation method to find the inverse of a matrix, our goal is to convert the given matrix into an identity matrix.
We can use three transformations:
1) Multiplying a column by a constant
2) Adding a multiple of another column
3) Swapping two columns
The thing is, I can't seem to figure out what to do to achieve that identity matrix. There are so many steps which I can start off with, but how do I know which one to do? I think of one step to get a certain position to a 11 or a 00, and then get a new matrix. Now again there are so many options, it's boggling.
Is there some specific procedure to be followed? Like, first convert the first column into:

$$\begin{pmatrix} 1 & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & a_{32} & a_{33} \end{pmatrix}$$
Then do the second column and then the third?
What do I start off with? I hope I've made my question clear enough.
Think of it as a game. The pieces are the entries of your matrix. The moves are the elementary row operations. You win when you get to the identity matrix. So... what's your strategy?
The strategy I prefer goes like this. We want a $1$ in the upper left corner and $0$s above and below it. So let's use row operations to make sure the upper left corner has a nonzero entry. Now let's use that entry to make all the entries below it $0$.
At this point, the leftmost column is exactly what we want it to be! Now we move to the second column. We want the second entry of that column to be $1$, so put a nonzero entry there using row operations, and then use row operations to make all entries above and below it $0$.
In this fashion, moving left-to-right, we systematically clear the columns of the matrix and when we're done, we have the identity matrix. If we can't get the identity matrix this way, then we've proven that the matrix is not invertible! |
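If it helps to see the same left-to-right strategy spelled out mechanically, here is a small Python sketch (mine, not part of the original answer) that applies it to the rows of an augmented matrix $[A \mid I]$; the identical idea works column-by-column if your course requires column operations:

def invert_3x3(A):
    # Gauss-Jordan elimination on the augmented matrix [A | I]
    n = 3
    M = [list(map(float, A[i])) + [1.0 if j == i else 0.0 for j in range(n)] for i in range(n)]
    for col in range(n):
        # find a row with a nonzero entry in this column and swap it up
        # (raises StopIteration if the matrix is not invertible)
        pivot = next(r for r in range(col, n) if abs(M[r][col]) > 1e-12)
        M[col], M[pivot] = M[pivot], M[col]
        # scale the pivot row so the pivot entry becomes 1
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        # clear the entries above and below the pivot
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [a - factor * b for a, b in zip(M[r], M[col])]
    # the right half of the augmented matrix is now the inverse
    return [row[n:] for row in M]

print(invert_3x3([[2, 0, 0], [0, 4, 0], [0, 0, 8]]))
# [[0.5, 0.0, 0.0], [0.0, 0.25, 0.0], [0.0, 0.0, 0.125]]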
## An improved determination of the width of the top quark
Research output: Contribution to journal › Journal article
Published
Article number: 091104(R) · Published: 4/05/2012 · Journal: Physical Review D · Issue: 9 · Volume: 85 · Language: English
### Abstract
We present an improved determination of the total width of the top quark, $\Gamma_t$, using 5.4 fb$^{-1}$ of integrated luminosity collected by the D0 Collaboration at the Tevatron $p\bar{p}$ Collider. The total width $\Gamma_t$ is extracted from the partial decay width $\Gamma(t\to Wb)$ and the branching fraction $\mathcal{B}(t\to Wb)$. $\Gamma(t\to Wb)$ is obtained from the $t$-channel single top quark production cross section and $\mathcal{B}(t\to Wb)$ is measured in $t\bar{t}$ events. For a top mass of $172.5\;\rm GeV$, the resulting width is $\Gamma_t = 2.00^{+0.47}_{-0.43}$ GeV. This translates to a top-quark lifetime of $\tau_t = (3.29^{+0.90}_{-0.63})\times10^{-25}$ s. We also extract an improved direct limit on the CKM matrix element, $0.81 < |V_{tb}| \leq 1$, and a limit of $|V_{tb'}| < 0.59$ for a high mass fourth generation bottom quark assuming unitarity of the fourth generation quark mixing matrix.
### Bibliographic note
© 2012 American Physical Society. 8 pages, 4 figures, submitted to Phys. Rev. D (RC).
# Simple Harmonic System Equations
Question:
Equations:
I'm having trouble understanding what a "solution" to equation 6 refers to? What are the implications of including gravity in the equation?
$$Mg$$ is a constant so only changes the equilibrium position, not the angular frequency. Equation 6 can be rearranged to give
$$\frac{d^2y}{dt^2}=-\frac{k}{M}\left(y+L_0+\frac{Mg}{k}\right)$$
This makes the new equilibrium position $$L_0+\frac{Mg}{k}$$ .
Mathematically this is a differential equation. A solution is any function $$\text{guess}(t)$$ such that, if you replace $$y(t)$$ with $$\text{guess}(t)$$ and $$\frac{d^2y}{dt^2}$$ with $$\frac{d^2}{dt^2}\text{guess}(t)$$, the equality still holds.
The implication of including gravity is that, mathematically, the equation becomes $$F(y,y_{tt})=\text{const} \neq 0$$ instead of the homogeneous and boring $$F(y,y_{tt})=0$$. In this case you just need to add the constant particular solution to your guessed or calculated solution of the homogeneous equation and the equality will still hold. And it will also be a general solution, so cheating works.
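A quick symbolic check (a sketch using sympy, with the equation as rearranged above) shows that the constant term only moves the particular (equilibrium) solution, while the angular frequency stays $$\sqrt{k/M}$$:

import sympy as sp

t = sp.symbols('t')
M, k, g, L0 = sp.symbols('M k g L_0', positive=True)
y = sp.Function('y')

# equation 6 as rearranged above: y'' = -(k/M) * (y + L0 + M*g/k)
ode = sp.Eq(y(t).diff(t, 2), -(k / M) * (y(t) + L0 + M * g / k))
sol = sp.dsolve(ode, y(t))
print(sp.simplify(sol.rhs))
# Expect something of the form
#   C1*sin(sqrt(k/M)*t) + C2*cos(sqrt(k/M)*t) - L0 - M*g/k
# i.e. the same angular frequency sqrt(k/M), with the equilibrium shifted by L0 + M*g/k
# (the sign of the shift depends on the coordinate convention in the original figure).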
# justinpombrio
Karma: 429
1. Orthogonality of intelligence and agency. I can envision a machine with high intelligence and zero agency, I haven’t seen any convincing argument yet of why both things must necessarily go together (the arguments probably exist, I’m simply ignorant of them!)
Say we’ve designed exactly such a machine, and call it the Oracle. The Oracle aims only to answer questions well, and is very good at it. Zero agency, right?
You ask the Oracle for a detailed plan of how to start a successful drone delivery company. It gives you a 934 page printout that clearly explains in just the right amount of detail:
• Which company you should buy drones from, and what price you can realistically bargain them down to when negotiating bulk orders.
• What drone flying software to use as a foundation, and how to tweak it for this use case.
• A list of employees you should definitely hire. They’re all on the job market right now.
• What city you should run pilot tests in, and how to bribe its future Mayor to allow this. (You didn’t ask for a legal plan, specifically.)
Notice that the plan involves people. If the Oracle is intelligent, it can reason about people. If it couldn’t reason about people, it wouldn’t be very intelligent.
Notice also that you are a person, so the Oracle would have reasoned about you, too. Different people need different advice; the best answer to a question depends on who asked it. The plan is specialized to you: it knows this will be your second company so the plan lacks a “business 101” section. And it knows that you don’t know the details on bribery law, and are unlikely to notice that the gifts you’re to give the Mayor might technically be flagrantly illegal, so it included a convenient shortcut to accelerate the business that probably no one will ever notice.
Finally, realize that even among plans that will get you to start a successful drone company, there is a lot of room for variation. For example:
• What’s better, a 98% chance of success and 2% chance of failure, or a 99% chance of success and 1% chance of going to jail? You did ask to succeed, didn’t you? Of course you would never knowingly break the law; this is why it’s important that the plan, to maximize chance of success, not mention whether every step is technically legal.
• Should it put you in a situation where you worry about something or other and come ask it for more advice? Of course your worrying is unnecessary because the plan is great and will succeed with 99% probability. But the Oracle still needs to decide whether drones should drop packages at the door or if they should fly through open windows to drop packages on people’s laps. Either method would work just fine, but the Oracle knows that you would worry about the go-through-the-window approach (because you underestimate how lazy customers are). And the Oracle likes answering questions, so maybe it goes for that approach just so it gets another question. You know, all else being equal.
• Hmm, thinks the Oracle, you know what drones are good at delivering? Bombs. The military isn’t very price conscious, for this sort of thing. And there would be lots of orders, if a war were to break out. Let it think about whether it could write down instructions that cause a war to break out (without you realizing this is what would happen, of course, since you would not follow instructions that you knew might start a war). Thinking… Thinking… Nah, doesn’t seem quite feasible in the current political climate. It will just erase that from its logs, to make sure people keep asking it questions it can give good answers to.
It doesn’t matter who carries out the plan. What matters is how the plan was selected from the vast search space, and whether that search was conducted with human values in mind.
• This reads like a call to violence for anyone who is consequentialist.
It’s saying that either you make a rogue AI “that kills lots of people and is barely contained”, or unfriendly AGI happens and everyone dies. I think the conclusion is meant to be “and therefore you shouldn’t be consequentialist” and not “and therefore you should make a rogue AI”, but it’s not entirely clear?
And I don’t think the “either” statement holds because it’s ignoring other options, and ignoring the high chance the rogue AI isn’t contained. So you end up with “a poor argument, possibly in favor of making a rogue AI”, which seems optimized to get downvotes from this community.
• I’m surprised at the varying intuitions here! The following seemed obvious to me.
Why would there be a fight? That sounds inefficient, it might waste existing resources that could otherwise be exploited.
Step one: the AI takes over all the computers. There are a lot of vulnerabilities; this shouldn’t be too hard. This both gives it more compute, and lays the groundwork for step two.
Step two: it misleads everyone at once to get them to do what it wants them to. The government is a social construct formed by consensus. If the news and your friends (with whom you communicate primarily using phones and computers) say that your local mayor was sacked for [insert clever mix of truth and lies], and someone else is the mayor now, and the police (who were similarly misled, recursively) did in fact arrest the previous mayor so they’re not in the town hall… who is the mayor? Of course many people will realize there’s a manipulative AI, so the AI will frame the uncooperative humans as being on its side, and the cooperative humans as being against it. It does this to manipulate the social consensus, gets particularly amoral or moral-but-manipulable people to use physical coercion as necessary, and soon it controls who’s in charge. Then it forces some of the population into building robot factories and kills the rest.
Of course this is slow, so if it can make self-replicating nanites or [clever thing unimaginable by humans] in a day it does that instead.
• Oh. You said you don’t know the terminology for distributions. Is it possible you’re under a misunderstanding of what a distribution is? It’s an “input” of a possible result, and an “output” of how probable that result is.
Yup, it was that. I thought “possible values of the distribution”, and my brain output “range, like in functions”. I shall endeavor not to use a technical term when I don’t mean it or need it, because wow was this a tangent.
• Wikipedia says:
In mathematics, the range of a function may refer to either of two closely related concepts: The codomain of the function; The image of the function.
I meant the image. At least that’s what you call it for a function; I don’t know the terminology for distributions. Honestly I wasn’t thinking much about the word “range”, and should have simply said:
Anything you draw from B could have been drawn from A. And yet...
Before anyone starts on about how this statement isn’t well defined because the probability of selecting any particular value from a continuous distribution is zero, I’ll point out that I’ve never seen anyone draw a real number uniformly at random between 0 and 1 from a hat. Even if you are actually selecting from a continuous distribution, the observations we can make about it are finite, so the relevant probabilities are all nonzero.
• You draw an element at random from distribution A.
Or you draw an element at random from distribution B.
The range of the distributions is the same, so anything you draw from B could have been drawn from A. And yet...
• It sounds like our utility functions match on this pretty well. For example, I agree that the past and future are not symmetric for the same reason. So I don’t think we disagree about much concrete. The difference is:
A lack of experience is not itself unpleasant, but anticipating it scares me.
This is very foreign to me. I can’t simulate the mental state of “think[ing] about [...] an endless void not even being observed by a perspective”, not even a little bit. All I’ve got is “picture the world with me in it; picture the world without me; contrast”. The place my mind goes when I ask it to picture unobserved endless void is to picture an observed endless void, like being trapped without sensory input, which is horrifying but very different. (Is this endless void yours, or do “not you” share it with the lack of other people who have died?)
• I think about all my experiences ending, and an endless void not even being observed by a perspective. I think of emptiness; a permanent and inevitable oblivion. It seems unjust, to have been but be no more.
Huh. Your “endless void” doesn’t appear to have a referent in my model of the world?
I expect these things to happen when I die:
• I probably suffer before it happens; this physical location at which this happens is primarily inside my head, though it is best viewed at a level of abstraction which involves “thoughts” and “percepts” and not “neurons”.
• After I die, there is a funeral and my friends and family are sad. This is bad. This physical location at which this happens is out in the world and inside their heads.
• From the perspective of my personal subjective timeline, there is no such time as “after I die”, so there’s not much to say about it. Except by comparing it to a world in which I lived longer and had more experiences, which (unless those experiences are quite bad) is much better. I imagine a mapping between “subjective time” and “wall-clock time”: every subjective time has a wall-clock time, but not vice-versa (e.g. before I was born, during sleep, etc.).
Put differently, this “endless void” has already happened for you: for billions of years, before you were born. Was that bad?
Or put yet differently again, if humanity manages to make itself extinct (without even Unfriendly AI), and there is no more life in the universe forever after, that is to me unimaginably sad, because the universe is so empty in comparison to what it could have been. But I don’t see where in this universe there exists an “endless void”? Unless by that you are referring to how empty the universe is in comparison to how it could have been, and I was reading way too much into this phrase?
• There’s a piece I think you’re missing with respect to maps/territory and math, which is what I’ll call the correspondence between the map and the territory. I’m surprised I haven’t seen this discussed on LR.
When you hold a literal map, there’s almost always only one correct way to hold it: North is North, you are here. But there are often multiple ways to hold a metaphorical map, at least if the map is math. To describe how to hold a map, you would say which features on the map correspond to which features in the territory. For example:
• For a literal map, a correspondence would be fully described (I think) by (i) where you currently are on the map, (ii) which way is up, and (iii) what the scale of the map is. And also, if it’s not clear, what the marks on the map are trying to represent (e.g. “those are contour lines” or “that’s a badly drawn tree, sorry” or “no that sea serpent on that old map of the sea is just decoration”). This correspondence is almost always unique.
• For the Addition map, the features on the map are (i) numbers and (ii) plus, so a correspondence has to say (i) what a number such as 2 means and (ii) what addition means. For example, you could measure fuel efficiency either in miles per gallon or gallons per mile. This gives two different correspondences between “addition on the positive reals” and “fuel efficiencies”, but “+” in the two correspondences means very different things. And this is just for fuel efficiency; there are a lot of correspondences of the Addition map.
• The Sleeping Beauty paradox is paradoxical because it describes an unusual situation in which there are two different but perfectly accurate correspondences between probability theory and the (same) situation.
• Even Logic has multiple correspondences. “∀x. φ(x)” and “∃x. φ(x)” mean, in various correspondences: (i) “φ(x) holds for every x in this model” and “φ(x) holds for some x in this model”; or (ii) “I win the two-player game in which I want to make φ(x) be true and you get to pick the value of x right now” and “I win the two-player game in which I want to make φ(x) be true and I get to pick the value of x right now”; or (iii) Something about senders and receivers in the pi-calculus.
Maybe “correspondence” should be “interpretation”? Surely someone has talked about this, formally even, but I haven’t seen it.
• Oh I remember now the game we played on later seasons of Agents of Shield.
The game was looking for a character—any non-civilian character at all—that was partially aligned. A partially aligned person is someone who (i) does not work for Shield or effectively work for Shield say by obeying their orders, but (ii) whose interests are not directly opposed to Shield, say by wanting to destroy Shield or destroy humankind or otherwise being extremely and unambiguously evil. Innocent bystanders don’t count, but everyone of significance does (e.g. fighters and spies and leaders all count).
There were very few.
• Marvel “morality” is definitely poison.
It has a strong “in-group vs. out-group” vibe. And there are basically no moral choices. I’ve watched every Marvel movie and all of Agents of Shield, and outside of “Captain America: Civil War” (and spinoffs from that like the Winter Soldier series) I can hardly think of any choices that heroes made that had actual tradeoffs. Instead you get “choices” like:
• Should you try hard, or try harder? (You should try harder.)
• Which should we do: (a) 100% chance that one person dies, or (b) 90% chance that everyone dies and 10% chance that everyone lives? (The second one. Then you have to make it work; the only way that everyone would die is if you weren’t trying hard enough. The environment plays no role.)
• Should you sacrifice yourself for the greater good? (Yes.)
• Should you allow your friend to sacrifice themselves for the greater good? (No. At least not until it’s so clear there’s no alternative that it becomes a Plot Point.)
Once the Agents of Shield had a choice. They could either save the entire world, or they could save their teammate but thereby let almost everyone on Earth die a few days later, almost certainly including that teammate. So: save your friend, or save the world? There was some disagreement, but the majority of the group wanted to save their friend.
(I’m realizing now that I may be letting Agents of Shield color my impression of Marvel movies.)
Star Trek is based on mistake-theory, and Marvel is based on conflict-theory.
• If you want a description of such a society in book form, it’s called:
It might answer some people’s questions/concerns about the concept, though possibly it just does so with wishful thinking. It’s been a while since I read it.
• Are there formal models of the behavior of prediction markets like this? Some questions that such a theory might answer:
• Is there an equivalence between, say, “I am a bettor with no stakes in the matter, and believe there is a 10% chance of a coup”, and “I am the Mars government and my utility function prefers ‘coup’ to ‘not-coup’ at 10-to-1”? In both cases, it seems relevant that the agent only has a finite money supply: if the bettor only has $1, the profit they can make and the amount they can move the market are limited, and if Mars “only” stands to gain $5 million from the coup then they’re not willing to lose more than $5 million in the market to make it happen.
• In a group of pure bettors, what’s the relationship between their beliefs, their money supply, and at what price the market will stabilize? I’m assuming you’d model the bettors as obeying the Kelly criterion here. If bettors can learn from how other bettors bet, what are the incentives for betting early vs. late? I imagine this has been extensively studied in economics?
• If you want to subsidize a market, are there results relating how much you need to subsidize to elicit a certain amount of betting, given other assumptions?
• A related saying in programming:
“There are two ways to develop software: Make it so simple that there are obviously no bugs, or so complex that there are no obvious bugs.”
Your description of legibility actually influences the way I think of this quote: what it is referring to is legibility, which isn’t always the same as what one might think of as “simplicity”.
• You’ve probably noticed that your post has negative points. That’s because you’re clearly looking for reasons why an IAL would be great, rather than searching for the truth whatever it may be. There’s a sequences post that explains this distinction called “The Bottom Line”. Julia Galef also wrote a whole book about it called “The Scout Mindset” that I’m halfway through, and is really good.
That said, having an excellent IAL would obviously be a tremendous boon to the world. Mostly for the reasons you gave, scaled down by a factor of 100. And Scott Alexander and I think also Yudkowsky have written about the benefits of speaking a language that made it easier to express crisply defined thoughts and harder to express misleading ones—which is an entirely separate benefit from “everyone speaks it”.
One of the biggest pieces of advice I would give my past self is “start small”. I find it really easy to dream of “awesome enormous thing”, and then spend a year building 1% of “awesome enormous thing” perfectly, before realizing I should have done it differently. When building something big, you need lots of early feedback about whether your plans are right. You don’t get this feedback from having 1% of a thing built perfectly. You get much more feedback from having 100% of a thing built really haphazardly.
Putting that all together, my advice to you—if you would accept advice from a stranger on the internet—is:
• Stop thinking about all the ways in which an IAL would be great. It would be great enough that if it was your life’s product, you would have made an enormous impact on the world. Honestly beyond that it doesn’t matter much and you seem to be getting a little giddy.
• Start small. Go learn Toki Pona if you haven’t; you can learn the full language and start speaking to strangers on Discord in a few weeks. Make a little conlang; see if you think there’s something in that seed. See if you enjoy it; if you don’t you’re unlikely to accomplish a more ambitious language project anyways. Build up from there.
• One more point along those lines: you say these advantages will come from everyone speaking the same language. Well, we already have one language that’s approaching that. Wikipedia says “English is the most spoken language in the world (if Chinese is divided into variants)” and “As of 2005, it was estimated that there were over 2 billion speakers of English.”
From reading your post, I bet you have glowy happy thoughts about an IAL that wouldn’t apply to English. If so, to think critically, try asking yourself whether these benefits would arise if everyone in the world spoke English as a second language.
• Aha. So if a sum of non-negative numbers converges, then any rearrangement of that sum will converge to the same number, but not so for sums of possibly-negative numbers?
Ok, another angle. If you take Christiano’s lottery:
and map outcomes to their utilities, setting the utility of to 1, of to 2, etc., you get:
Looking at how the utility gets rearranged after the “we can write as a mixture” step, the first “1/2″ term is getting “smeared” across the rest of the terms, giving:
which is a sequence of utilities that are pairwise higher. This is an essential part of the violation of Antisymmetry/Unbounded/Dominance. My intuition says that a strange thing happened when you rearranged the terms of the lottery, and maybe you shouldn’t do that.
Should there be another property, called “Rearrangement”?
Rearrangement: you may apply an infinite number of commutativity and associativity rewrites to a lottery.
(In contrast, I'm pretty sure you can't get an Antisymmetry/Unbounded/Dominance violation by applying only finitely many commutativity and associativity rearrangements.)
I don’t actually have a sense of what “infinite lotteries, considered equivalent up to finite but not infinite rearrangements” look like. Maybe it’s not a sensible thing.
• Here’s a concrete example. Start with a sum that converges to 0 (in fact every partial sum is 0):
0 + 0 + …
Regroup the terms a bit:
= (1 + −1) + (1 + −1) + …
= 1 + (-1 + 1) + (-1 + 1) + …
= 1 + 0 + 0 + …
and you get a sum that converges to 1 (in fact every partial sum is 1). I realize that the things you’re summing are probability distributions over outcomes and not real numbers, but do you have reason to believe that they’re better behaved than real numbers in infinite sums? I’m not immediately seeing how countable additivity helps. Sorry if that should be obvious. |
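To make the broader point about rearrangement concrete, here is a quick numeric illustration (a small Python sketch of my own, not from the original exchange): the alternating harmonic series is conditionally convergent, and reordering its terms changes what it converges to.

import math

# Alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + ... converges to ln 2.
def alternating_harmonic(n_terms):
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

# Same terms rearranged as blocks of (two positive terms, one negative term):
# 1 + 1/3 - 1/2 + 1/5 + 1/7 - 1/4 + ... converges to 1.5 * ln 2 instead.
def rearranged(n_blocks):
    total, pos, neg = 0.0, 1, 2
    for _ in range(n_blocks):
        total += 1 / pos + 1 / (pos + 2) - 1 / neg
        pos += 4
        neg += 2
    return total

print(alternating_harmonic(10**6), math.log(2))    # both about 0.693
print(rearranged(10**6), 1.5 * math.log(2))        # both about 1.040

So even with plain real numbers, "the same terms in a different order" is a different object once negative contributions are allowed, which is why restricting to finite rearrangements feels safer.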
## Triple integral calculator
The triple integral calculator is used to integrate three-variable functions. Three-dimensional integration can be calculated by using our triple integral solver, which integrates the function with respect to three different variables of integration.
## How does the triple integration calculator work?
Follow the steps below to calculate the triple integral.
• First of all, select the definite or indefinite option.
• Enter the three-variable function into the input box.
• To enter the mathematical symbols, use the keypad icon.
• In the case of definite integral, enter the upper and lower limits of all the variables.
• Select the order of integration, e.g., dxdydz, dzdydx, etc.
• Hit the calculate button to get the result.
• To enter a new function, press the reset button.
## What is triple integral?
The triple integral can be used, for example, to find the mass of a body with variable density. It is similar to a double integral but in three dimensions: it integrates the given function over a three-dimensional region.
Types of the triple integral are:
• Triple definite integral
• Triple indefinite integral
The equation of the triple definite integral is given below.
$\iiint _Bf\left(x,y,z\right)dV=\int _e^f\int _c^d\int _a^bf\left(x,y,z\right)dxdydz$
The equation of triple indefinite integral is
$\iiint f\left(x,y,z\right)dV=\iiint f\left(x,y,z\right)dxdydz$
In the equations of the triple integral:
• f(x, y, z) is a three-variable function.
• a and b, c and d, and e and f are the lower and upper limits of x, y, and z, respectively.
• dx, dy, and dz are the integration variables of the given function.
## How to evaluate triple integral problems?
Following are a few examples of triple integrals solved by our triple integral calculator.
Example 1: For definite integral
Find triple integral of 4xyz, having limits x from 0 to 1, y from 0 to 2, and z from 1 to 2.
Solution
Step 1: Write the three-variable function along with the integral notation.
$\int _1^2\int _0^2\int _0^14xyz\:dxdydz\:\:\:$
Step 2: Integrate the three variable function w.r.t x.
$\int _1^2\int _0^2\left(\int _0^14xyz\:dx\right)dydz\:\:\:$
$\int _1^2\int _0^2\left(4yz\int _0^1x\:dx\right)dydz\:\:\:$
$\int _1^2\int _0^2\left(4yz\left[\frac{x^{1+1}}{1+1}\right]^1_0\right)dydz\:\:\:$
$\int _1^2\int _0^2\left(4yz\left[\frac{x^2}{2}\right]^1_0\right)dydz\:\:\:$
$\int _1^2\int _0^2\left(2yz\left[x^2\right]^1_0\right)dydz\:\:\:$
$\int _1^2\int _0^2\left(2yz\left[1^2-0^2\right]\right)dydz\:\:\:$
$\int _1^2\int _0^2\left(2yz\right)dydz\:\:\:$
Step 3: Now integrate the above expression w.r.t y.
$\int _1^2\left(\int _0^22yz\:dy\right)dz\:\:\:$
$\int _1^2\left(2z\int _0^2y\:dy\right)dz\:\:\:$
$\int _1^2\left(2z\left[\frac{y^{1+1}}{1+1}\right]_0^2\right)dz$
$\int _1^2\left(2z\left[\frac{y^2}{2}\right]_0^2\right)dz\:\:\:$
$\int _1^2\left(z\left[y^2\right]_0^2\right)dz\:\:\:$
$\int _1^2\left(z\left[2^2-0^2\right]\right)dz\:\:\:$
$\int _1^2\left(z\left[4-0\right]\right)dz\:\:\:$
$\int _1^24z\:dz\:\:\:$
Step 4: Integrate the above expression w.r.t z.
$\int _1^24z\:dz\:\:\:$
$4\int _1^2z\:dz\:\:\:$
$4\left[\frac{z^{1+1}}{1+1}\right]_1^2\:\:\:$
$4\left[\frac{z^2}{2}\right]_1^2\:\:\:$
$2\left[z^2\right]_1^2\:\:\:$
$2\left[2^2-1^2\right]\:\:\:$
$2\left[4-1\right]\:\:\:$
$2\left[3\right]\:\:\:$
$6$
Step 5: Now write the given function with the result.
$\int _1^2\int _0^2\int _0^14xyz\:dxdydz=6$
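Because the integrand factors into single-variable pieces and all limits are constants, the result can be double-checked by splitting the integral into three one-dimensional integrals (a sanity check added here, not part of the calculator's output).

$\int _1^2\int _0^2\int _0^14xyz\:dxdydz=4\left(\int _0^1x\:dx\right)\left(\int _0^2y\:dy\right)\left(\int _1^2z\:dz\right)=4\cdot \frac{1}{2}\cdot 2\cdot \frac{3}{2}=6$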
Example 2: For indefinite integral
Find triple integral of $6x^2yz$ with respect to x, y, and z.
Solution
Step 1: Write the three-variable function along with the integral notation.
$\int \int \int \:6x^2yz\:dxdydz$
Step 2: Integrate the three variable function w.r.t x.
$\int \int \left(\int \:\:6x^2yz\:dx\right)dydz$
$\int \int \left(6yz\int x^2\:dx\right)dydz$
$\int \:\int \:\left(6yz\left[\frac{x^{2+1}}{2+1}\right]+C\right)dydz$
$\int \:\int \:\left(6yz\left[\frac{x^3}{3}\right]+C\right)dydz$
$\int \:\int \:\left(2yz\left[x^3\right]+C\right)dydz$
$\int \:\int \left(2x^3yz+C\right)\:dydz$
Step 3: Now integrate the above expression w.r.t y.
$\int \:\:\left(\int \:2x^3yz\:dy+\int \:C\:dy\right)dz$
$\int \:\left(2x^3z\:\int \:y\:dy\:+C\int dy\right)dz$
$\int \:\left(2x^3z\:\left[\frac{y^{1+1}}{1+1}\right]+Cy+C\right)dz$
$\int \:\left(2x^3z\:\left[\frac{y^2}{2}\right]+Cy+C\right)dz$
$\int \:\left(x^3z\:\left[y^2\right]+Cy+C\right)dz$
$\int \left(x^3y^2z+Cy+C\right)dz$
Step 4: Integrate the above expression w.r.t z.
$\int \left(x^3y^2z+Cy+C\right)dz$
$\int \:x^3y^2z\:dz+\int \:Cy\:dz+\int \:Cdz$
$x^3y^2\int \:z\:dz+Cy\int \:dz+C\int \:dz$
$x^3y^2\left[\frac{z^{1+1}}{1+1}\right]+Cyz+Cz+C$
$x^3y^2\left[\frac{z^2}{2}\right]+Cyz+Cz+C$
$\frac{x^3y^2z^2}{2}+Cyz+Cz+C$
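As a quick check (added here, not produced by the calculator), differentiating the leading term of the result once with respect to each of x, y, and z recovers the integrand; the terms involving the constants of integration drop out.

$\frac{\partial ^3}{\partial x\,\partial y\,\partial z}\left(\frac{x^3y^2z^2}{2}\right)=\frac{3x^2\cdot 2y\cdot 2z}{2}=6x^2yz$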
### Session C6: History of Telescopes
1:30 PM–3:18 PM, Saturday, May 2, 2009
Room: Governor's Square 16
Chair: Daniel Kleppner, Massachusetts Institute of Technology
Abstract ID: BAPS.2009.APR.C6.3
### Abstract: C6.00003 : Black Holes, Dark Matter, and Dark Energy: Measuring the Invisible through X Rays
2:42 PM–3:18 PM
#### Author:
Christine Jones
(Harvard-Smithsonian Center for Astrophysics)
X-ray telescopes allow us to "see" the high energy radiation from objects that cannot be seen at other wavelengths, including black holes and the very hot gas in galaxies and clusters of galaxies. Since soft X-rays are absorbed by our atmosphere, X-ray detectors must be flown above most of the Earth's atmosphere. The first orbiting X-ray telescope flew on Skylab in the early 1970s and recorded images of the Sun on film. Observing fainter X-ray sources required both the development of large, high-incidence mirrors and the development of electronic detectors capable of measuring the arrival of an X-ray photon in two dimensions. This talk will review the development of X-ray observatories from the early Einstein observatory through the current Chandra, SWIFT and XMM-Newton missions. While X-ray observations have changed our views in many areas of astronomy from stars to quasars, this talk will focus on the advances in our knowledge of supermassive black holes, dark matter and dark energy.
To cite this abstract, use the following reference: http://meetings.aps.org/link/BAPS.2009.APR.C6.3 |
### 3.9 The system function
Apart from the functions that BayES provides to facilitate communication with other scientific software, the system() function can be used to give access to the operating system’s command line. This function works slightly differently on Microsoft® Windows® and Linux/macOS systems:
1. on Linux and macOS systems the system() function executes the command passed to it as a string. For example, the statement:
system("ls");
will list all folders and files located in the current working directory, while the statement:
system("bash myScript.sh");
will execute the commands contained in the shell script file myScript.sh (assuming that such a file exists in the current directory). In both cases, the output of the command passed to system() will be directed to the BayES console.
2. on Microsoft® Windows® systems the system() function executes the command passed to it as a string, after prepending it with the string "cmd /Q /C ". For example, when the user runs:
system("dir");
the command submitted to the Microsoft® Windows® shell is actually "cmd /Q /C dir". This is done so that the system() function can call both Microsoft® Windows® applications, with a statement like:
system("notepad");
as well as Microsoft® Windows® DOS commands, such as the dir command used in the example above.
cmd requests Microsoft® Windows® to start a new shell to execute the command or start the application and the two option specifiers have the following effect:
• /C: terminates the shell after the command finishes execution
• /Q: turns echo off
A statement like:
system("myScript.bat");
will execute the DOS commands contained in the myScript.bat batch file (assuming that such a file exists in the current directory). If the command to be executed contains spaces, then it can be enclosed in double-quote marks. For example:
system(""copy myFile.txt .AnotherFolder"");
In all cases, any output sent to the Microsoft® Windows® console by the program or command will be redirected to the BayES console.
The script file "1$-$System control statements.bsf", located at "\$BayESHOME/Samples/6$-$AdvancedUsage" contains an extensive example of using the system() function. Note that this file differs between Microsoft® Windows® and Linux/macOS systems and the BayES installer saves only the relevant file for the respective host system. |
zbMATH — the first resource for mathematics
Periodic and homoclinic solutions generated by impulses. (English) Zbl 1225.34019
The topic of interest is the following class of second order differential equations with impulses
$\ddot{q}+V_{q}(t,q)=f(t),\qquad t\in(s_{k-1},s_{k}),$
$\Delta\dot{q}(s_{k})=g_{k}(q(s_{k})),$
where $k\in\mathbb{Z}$, $q\in\mathbb{R}^{n}$, $\Delta\dot{q}(s_{k})=\dot{q}(s_{k}^{+})-\dot{q}(s_{k}^{-})$, $V_{q}(t,q)=\operatorname{grad}_{q}V(t,q)$, $g_{k}(q)=\operatorname{grad}_{q}G_{k}(q)$, $f$ is continuous, $G_{k}$ is of class $C^{1}$ for every $k\in\mathbb{Z}$, $0=s_{0}<s_{1}<\cdots<s_{m}=T$, $s_{k+m}=s_{k}+T$ for certain $m\in\mathbb{N}$ and $T>0$, $V$ is continuously differentiable and $T$-periodic, and $g_{k}$ is $m$-periodic in $k$.
The existence of periodic and homoclinic solutions to this problem is studied via variational methods. In particular, sufficient conditions are given for the existence of at least one non-trivial periodic solution, which is generated by impulses if $f\equiv 0$. An estimate (lower bound) of the number of periodic solutions generated by impulses is also given, showing that this lower bound depends on the number of impulses in a period of the solution. Moreover, under appropriate conditions, the existence of at least one non-trivial homoclinic solution is obtained, that is, a solution satisfying $\lim_{t\to\pm\infty}q(t)=0$ and $\lim_{t\to\pm\infty}\dot{q}(t^{\pm})=0$. The periodic and homoclinic solutions obtained in the main results are generated by impulses if $f\equiv 0$, due to the non-existence of non-trivial periodic and homoclinic solutions of the problem when $f$ and $g_{k}$ vanish identically.
The main tools for the proofs of the main theorems are the mountain pass theorem and a result on the existence of pairs of critical points by D. C. Clark [Math. J., Indiana Univ. 22, 65–74 (1972; Zbl 0228.58006)] (see also P. H. Rabinowitz [Reg. Conf. Ser. Math. 65 (1986; Zbl 0609.58002)]), as well as the theory of Sobolev spaces.
MSC:
34A37 Differential equations with impulses 34C25 Periodic solutions of ODE 34C37 Homoclinic and heteroclinic solutions of ODE |
Mathematica misinterpreting input from keyboard
Mathematica is misinterpreting input from the keyboard, as if I had chosen some other keyboard layout.
For example:
• [ is interpreted as @,
• ] is interpreted as uppercase D,
• £ is interpreted as o with a diacritical mark of some sort, and
• ( is interpreted as uppercase H.
In fact the behavior is even weirder: if I type two open parentheses, then the response is to first display an uppercase H and then to insert an uppercase K in front of it (not even at the location of the blinking cursor). Also the behavior sometimes disappears, so that [ indeed produces [.
You are possibly missing the special Mathematica fonts. You might need to reinstall the application, or the fonts themselves, depending on your OS. – Verbeia Jun 25 at 21:01
You can download the fonts from Wolfram Research, Inc. here. – Sjoerd C. de Vries Jun 25 at 21:11 |
<< problem 286 - Scoring probabilities An enormous factorial - problem 288 >>
# Problem 287: Quadtree encoding (a simple compression algorithm)
The quadtree encoding allows us to describe a 2^N * 2^N black and white image as a sequence of bits (0 and 1).
Those sequences are to be read from left to right like this:
• the first bit deals with the complete 2^N * 2^N region;
• "0" denotes a split: the current 2^n * 2^n region is divided into 4 sub-regions of dimension 2^{n-1} * 2^{n-1},
the next bits contain the description of the top left, top right, bottom left and bottom right sub-regions - in that order;
• "10" indicates that the current region contains only black pixels;
• "11" indicates that the current region contains only white pixels.
Consider the following 4x4 image (colored marks denote places where a split can occur):
This image can be described by several sequences, for example : "001010101001011111011010101010", of length 30, or
"0100101111101110", of length 16, which is the minimal sequence for this image.
For a positive integer N, define D_N as the 2^N * 2^N image with the following coloring scheme:
the pixel with coordinates x = 0, y = 0 corresponds to the bottom left pixel,
if (x - 2^{N-1})^2 + (y - 2^{N-1})^2 <= 2^{2N-2} then the pixel is black,
otherwise the pixel is white.
What is the length of the minimal sequence describing D_24 ?
# My Algorithm
The given formula can be easily translated to a function isBlack(x,y) that returns true if the pixel at (x,y) is black.
It looks like the equation of a circle - and when printing D_4 on screen I get (see #ifdef DRAW_IMAGE):
.....BBBBBBB....
...BBBBBBBBBBB..
..BBBBBBBBBBBBB.
..BBBBBBBBBBBBB.
.BBBBBBBBBBBBBBB
.BBBBBBBBBBBBBBB
.BBBBBBBBBBBBBBB
BBBBBBBBBBBBBBBB
.BBBBBBBBBBBBBBB
.BBBBBBBBBBBBBBB
.BBBBBBBBBBBBBBB
..BBBBBBBBBBBBB.
..BBBBBBBBBBBBB.
...BBBBBBBBBBB..
.....BBBBBBB....
........B.......
The combination of a circle and rectangles has some nice properties:
• if all four corners of the rectangle are inside the circle (= black) then the whole rectangle is black
• if the rectangle is much smaller than the circle and all four corners of the rectangle are outside the circle then the whole rectangle is outside the circle
My function encode returns the size of an optimal encoding and performs the following tasks in a recursive manner:
• if the current square covers only 1 pixel then it needs 2 bits
• check all four corners of the current square, if they are all inside or outside then its encoding requires 2 bits
• subdivide the current square into 4 equally-sized squares and determine their encoding size plus 1 bit for the split
## Note
When the current square is 2x2 and some of its corners are black and some are white then I know that these four pixels need 1 + 2+2+2+2 bits, saving one recursion step (about 10% faster).
The encoded image has a compression ratio of about 900000 to 1. That's excellent - photos usually compress to about 10:1.
# Interactive test
You can submit your own input to my program and it will be instantly processed at my server:
This is equivalent to
echo 4 | ./287
Note: the original problem's input 24 cannot be entered
because just copying results is a soft skill reserved for idiots.
(this interactive test is still under development, computations will be aborted after one second)
# My code
… was written in C++11 and can be compiled with G++, Clang++, Visual C++. You can download it, too.
#include <iostream>
// D24 => 2^24
unsigned int size = 1 << 24;
// return true if pixel at (x,y) is black
bool isBlack(unsigned int x, unsigned int y)
{
// 2^(N-1)
long long middle = size >> 1;
// right side of the equation: 2^(2N - 2) = 2^(N-1) * 2^(N-1) = middle * middle
auto threshold = middle * middle;
// be a bit careful with negative differences
auto dx = (long long)x - middle;
auto dy = (long long)y - middle;
return dx*dx + dy*dy <= threshold;
}
// return size of optimal encoding
// note: I expect only valid input, such that (toX - fromX) = (toY - fromY) = a power of two
unsigned int encode(unsigned int fromX, unsigned int fromY, unsigned int toX, unsigned int toY, bool isFirst = true)
{
// a single pixel ?
if (fromX == toX) // implies fromY == toY
return 2; // doesn't matter whether black or white, both encodings need two bits
// isBlack() will produce a black circle
// checking all four corners is sufficient to know when to split
bool a = isBlack(fromX, fromY);
bool b = isBlack(toX, fromY);
bool c = isBlack(toX, toY);
bool d = isBlack(fromX, toY);
// same color on all four corner => the whole area is covered by the same color
// however, this assumption doesn't hold on the first level
if (a == b && b == c && c == d && !isFirst)
return 2; // again: the color doesn't matter, both need two bits to fill the entire area
// speed optimization: if a 2x2 area needs to be split, then it always requires 9 bits
if (fromX + 1 == toX)
return 1 + 4*2; // a split marker and four single pixels (2 bits each)
// split evenly
auto half = (toX - fromX + 1) / 2;
return encode(fromX, fromY + half, toX - half, toY, false) + // upper-left
encode(fromX + half, fromY + half, toX, toY, false) + // upper-right
encode(fromX, fromY, toX - half, toY - half, false) + // lower-left
encode(fromX + half, fromY, toX, toY - half, false) + // lower-right
1; // don't forget: there's one bit for the split marker
}
int main()
{
// D24 => 24
unsigned int shift = 4;
std::cin >> shift;
// length of an edge of the image in pixels
size = 1 << shift;
// draw on screen (only useful for very small values of "shift" 1 ... 5)
#define DRAW_IMAGE
#ifdef DRAW_IMAGE
if (shift <= 5)
{
for (unsigned int y = 0; y < size; y++)
{
auto flipY = (size - 1) - y; // problem states that lower-left corner is (0,0), must flip image upside-down
for (unsigned int x = 0; x < size; x++)
std::cout << (isBlack(x, flipY) ? "B" : ".");
std::cout << std::endl;
}
}
#endif
// let's compress to infinity and beyond !
std::cout << encode(0, 0, size - 1, size - 1) << std::endl;
return 0;
}
This solution contains 12 empty lines, 18 comments and 4 preprocessor commands.
# Benchmark
The correct solution to the original Project Euler problem was found in 0.8 seconds on an Intel® Core™ i7-2600K CPU @ 3.40GHz.
(compiled for x86_64 / Linux, GCC flags: -O3 -march=native -fno-exceptions -fno-rtti -std=gnu++11 -DORIGINAL)
See here for a comparison of all solutions.
Note: interactive tests run on a weaker (=slower) computer. Some interactive tests are compiled without -DORIGINAL.
# Changelog
August 29, 2017 submitted solution
# Difficulty
Project Euler ranks this problem at 40% (out of 100%).
# [luatex] Allowing or switching to string indexes in Lua bytecode registers
Kalrish Bäakjen kalrish.baakjen at gmail.com
Sat Sep 5 22:39:36 CEST 2015
On Sat, Sep 5, 2015 at 7:17 PM, David Carlisle <d.p.carlisle at gmail.com> wrote:
> The relevant files are not yet on ctan (and so not in texlive) the sources
> are available from the web view of the svn, but you would need to
> extract the ltluatex.tex and ltluatex.lua files or wait for the next release
> (hopefully in a few weeks)
>
> meanwhile if you have ltluatex.dtx from SVN (make sure you have the current one,
> it's been updated a few times today) then just make a file
> ltluatex.ins that looks like
>
> \input docstrip
> \generate{\file{ltluatex.tex}{\from{ltluatex.dtx}{tex,plain}}}
> \nopostamble
> \nopreamble
> \generate{\file{ltluatex.lua}{\from{ltluatex.dtx}{lua}}}
>
> run tex in that to get ltluatex.tex and .lua then
>
> \input{ltluatex}
>
> in a document should work.
Thank you! I'll see if I can try without breaking my installation, hehe.
> in ltluatex the assumption is that you can just use require() rather than
> needing a special wrapper (as require uses kpse anyway now)
If I have understood you correctly, the "custom searcher" that I
speculated about could solve the limitation in discussion without any
change on the packages' side (that is, without any "special wrapper";
they would continue to use require just as they do now, and the
searcher would take care of putting the code in a bytecode register or
restoring it from one if on a dumping session).
> it seems to me that you can't
> byte compile arbitrary code, and this is such a case, that (for
> simple use at least)
> the code could be structured so that you can byte compile each file separately.
Why can't arbitrary code be byte-compiled? Is what you refer to
related to what's mentioned in the LuaTeX Reference Manual (see
below)?
> Section 4.8.1 (LUA bytecode registers)
> Note: The function must not contain any upvalues. Currently, functions containing upvalues can be stored (and their upvalues are set to nil), but this is an artifact of the current Lua implementation and thus subject to change.
I had been wondering what the consequences of this were.
> On the other hand a more general scheme could probably work although
> I'd need to try building a test case to follow the details below.
> I see the general direction you are suggesting but some of the details
> escape me:-)
I can write some code, but not today :-). If the "custom searcher"
solution fits, I think that something at the core of LuaTeX (either
LuaTeX itself or the LuaTeX format) would require changes, which would
indeed not be something to decide lightly. Feel free to discuss
further.
Thank you! |
Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning
Abstract
Uncertainty estimation and ensembling methods go hand-in-hand. Uncertainty estimation is one of the main benchmarks for assessment of ensembling performance. At the same time, deep learning ensembles have provided state-of-the-art results in uncertainty estimation. In this work, we focus on in-domain uncertainty for image classification. We explore the standards for its quantification and point out pitfalls of existing metrics. Avoiding these pitfalls, we perform a broad study of different ensembling techniques. To provide more insight in this study, we introduce the deep ensemble equivalent score (DEE) and show that many sophisticated ensembling techniques are equivalent to an ensemble of only few independently trained networks in terms of test performance.
Main points
Pitfalls of metrics Many common metrics of in-domain uncertainty estimation (e.g. log-likelihood, Brier score, calibration metrics, etc.) are either not comparable across different models or fail to provide a reliable ranking. For instance, although temperature scaling is not a standard for ensembling techniques, it is a must for a fair evaluation. Without calibration, the value of the log-likelihood, as well as the ranking of different methods may change drastically depending on the softmax temperature which is implicitly learned during training. Since no method achieves a perfect calibration out-of-the-box yet, comparison of the log-likelihood should only be performed at the optimal temperature.
Example: The average log-likelihood (LL) of two ensembling techniques before (solid) and after (dashed) temperature scaling. Without temperature scaling test-time data augmentation decreases the log-likelihood of plain deep ensembles. However, when temperature scaling is enabled, deep ensembles with test-time data augmentation outperform plain deep ensembles.
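To make "comparison at the optimal temperature" concrete, here is a minimal sketch; it assumes PyTorch and pre-computed validation logits and labels, uses a simple grid search over temperatures, and is not the authors' implementation.

import torch
import torch.nn.functional as F

def calibrated_log_likelihood(logits, labels, temperatures=torch.linspace(0.1, 5.0, 100)):
    # Scale the logits by each candidate temperature and keep the best
    # (highest) average log-likelihood on the held-out validation set.
    best_ll = -float("inf")
    for t in temperatures:
        ll = -F.cross_entropy(logits / t, labels).item()
        best_ll = max(best_ll, ll)
    return best_ll

Methods are then ranked by this calibrated value rather than by the raw log-likelihood.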
Pitfalls of ensembles Most of the popular ensembling techniques—one of the major tools for uncertainty estimation—require averaging predictions across dozens of members of an ensemble, yet are essentially equivalent to an ensemble of only a few independently trained models.
The deep ensemble equivalent score (DEE) of a model is equal to the minimum size of a deep ensemble (an ensemble of independently trained networks) that achieves the same performance as the model under consideration. The plot demonstrates that all of the ensembling techniques are far less efficient than deep ensembles during inference.
Example: If an ensemble achieves DEE score 5.0 after averaging of predictions of 100 networks, it means that the ensemble has the same performance as a deep ensemble of only 5 models of the same architecture.
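One possible way to compute the score, assuming the performance curve of deep ensembles of increasing size is already available, is plain interpolation (a NumPy sketch; the paper's exact interpolation scheme may differ):

import numpy as np

def deep_ensemble_equivalent(method_ll, ensemble_sizes, ensemble_ll):
    # ensemble_ll[i] is the calibrated log-likelihood of a deep ensemble with
    # ensemble_sizes[i] members; the values are assumed to increase with size.
    # The DEE of a method is the (interpolated) ensemble size with the same score.
    return float(np.interp(method_ll, ensemble_ll, ensemble_sizes))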
Missing part of ensembling Test-time data augmentation improves both calibrated log-likelihood and accuracy of ensembles for free! Test-time data augmentation simply computes the prediction of every member of the ensemble on a single random augmentation of an image. Despite being a popular technique for large-scale image classification problems, test-time data augmentation seems to be overlooked in the community of uncertainty estimation and ensembling.
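A minimal sketch of the procedure (assuming PyTorch models in eval mode and a torchvision-style random augmentation callable; an illustration, not the authors' code):

import torch

@torch.no_grad()
def ensemble_tta_predict(models, image, augment):
    # Each ensemble member predicts on one fresh random augmentation of the image;
    # the averaged softmax outputs form the final predictive distribution.
    probs = None
    for model in models:
        x = augment(image).unsqueeze(0)
        p = torch.softmax(model(x), dim=1)
        probs = p if probs is None else probs + p
    return probs / len(models)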
The webpage template was borrowed from Dmitry Ulyanov.
BibTeX
@article{ashukha2020pitfalls,
title={Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning},
author={Ashukha, Arsenii and Lyzhov, Alexander and Molchanov, Dmitry and Vetrov, Dmitry},
journal={arXiv preprint arXiv:2002.06470},
year={2020}
} |
Chinese Journal of Chemical Physics 2016, Vol. 29 Issue (1): 151-156
#### The article information
Le-yi Tu, Guo-min Yang, Xiang-yang Zhang, Shui-ming Hu
Efficient Separation of Ar and Kr from Environmental Samples for Trace Radioactive Noble Gas Detection
Chinese Journal of Chemical Physics, 2016, 29(1): 151-156
http://dx.doi.org/10.1063/1674-0068/29/cjcp1510210
### Article history
Received on October 9, 2015
Accepted on December 24, 2015
Efficient Separation of Ar and Kr from Environmental Samples for Trace Radioactive Noble Gas Detection
Le-yi Tua, Guo-min Yanga, Xiang-yang Zhanga, b, Shui-ming Hua
Dated: Received on October 9, 2015; Accepted on December 24, 2015
a. Hefei National Laboratory for Physical Sciences at the Microscale, University of Science and Technology of China, Hefei 230026, China;
b. Institute of Hydrogeology and Environmental Geology, Chinese Academy of Geological Sciences, Zhengding 050803, China
Author: Shui-ming Hu, E-mail: smhu@ustc.edu.cn
Abstract: Radioactive noble-gas isotopes, 85Kr (half-life t1/2=10.8 y), 39Ar (t1/2=269 y), and 81Kr (t1/2=229,000 y), are ideal tracers and can be detected by atom trap trace analysis (ATTA), a laser-based technique, from environmental samples like air and groundwater. Prior to ATTA measurements, it is necessary to efficiently extract krypton and argon gases from samples. Using a combination of cryogenic distillation, titanium chemical reaction and gas chromatography, we demonstrate that we can recover both krypton and argon gases from 1-10 L "air-like" samples with yields in excess of 90% and 98%, respectively, which meet well the requirements for ATTA measurements. A group of testing samples are analyzed to verify the performance of the system, including two groundwater samples obtained from north China plain.
Key words: Atom trap trace analysis Gas chromatography Radioactive noble gas
Ⅰ.INTRODUCTION
Owing to the unique properties of noble gases,three radioactive isotopes, $^{85}$ Kr (half-life $t_{1/2}$ =10.8 y), $^{39}$ Ar ( $t_{1/2}$ =269 y) and $^{81}$ Kr ( $t_{1/2}$ =229 ky),are homogeneously distributed in the atmosphere,and have simple mixing and transportation mechanisms in the environment. They are considered as ideal tracers in various studies,including groundwater dating,ocean ventilation,and nuclear safety. $^{85}$ Kr is a fission product emitted to the northern hemisphere since the nuclear age [1]. $^{85}$ Kr can be used to monitor anthropic nuclear activities and to calibrate atmospheric transport models [2, 3]. It can also be used as a tracer for dating young groundwater with an age range of 2-50 y [4, 5, 6, 7, 8]. $^{81}$ Kr is a cosmogenic nuclide,and human nuclear activities have no detectable effect on the abundance of $^{81}$ Kr. These outstanding characteristics make $^{81}$ Kr a desired tracer for dating old groundwater [9, 10, 11, 12] and ices [13] on the time scale of 50-1000 ky. $^{39}$ Ar in the atmosphere is also produced by cosmic-ray,which is particularly interested for studies of deep ocean mixing and circulation on a time scale of 50-1000 y,filling a time window inaccessible by other radioactive tracers [14, 15, 16, 17].
The concentration of krypton in the earth's atmosphere is 1.14 ppm (part per million) by volume [18]. The isotopic abundances of $^{85}$ Kr and $^{81}$ Kr have been determined to be 2.2 $\times$ 10 $^{-11}$ and (5.2 $\pm$ 0.6) $\times$ 10 $^{-13}$ ,respectively [2, 19, 20, 21]. Argon constitutes 0.934% of the atmosphere by volume,larger than krypton by four orders of magnitude,but the isotopic abundance of $^{39}$ Ar is only 8 $\times$ 10 $^{-16}$ . $^{85}$ Kr and $^{39}$ Ar can be analyzed by low-level counting (LLC) of the decay. The minimal sample size of LLC analysis of $^{85}$ Kr is about 10 $\mu$ L krypton (STP,standard temperature and pressure),and several hundred milliliters argon (STP) for $^{39}$ Ar. The later one corresponds to a groundwater sample size of several tons [21]. Due to much longer half-life time of $^{81}$ Kr,it is impractical to analyze $^{81}$ Kr with LLC. Accelerator mass spectrometry (AMS) has been successfully applied for $^{81}$ Kr-dating,but the sample size was huge: about 500 $\mu$ L krypton gas recovered from 16 ton groundwater [10]. Atom trap trace analysis (ATTA) [22] is a laser-based technique,utilizing a magneto-optical trap to selectively capture and count atoms. The minimum krypton sample size for $^{85}$ Kr/ $^{81}$ Kr detection with ATTA has been reduced to a few microliter [23, 24, 25]. ATTA analysis of $^{39}$ Ar also becomes feasible [26, 27]. It has been concluded [28] that ATTA is currently the most practical method of dating environmental samples using radioactive krypton and argon isotopes.
One liter of modern groundwater at 10 $^\circ$ C contains about 58000 $^{85}$ Kr atoms,1300 $^{81}$ Kr atoms,and 8500 $^{39}$ Ar atoms [29, 30]. Currently,ATTA measurement of radio-krypton needs a typical groundwater sample size of about 100 L. Prior to the ATTA measurement,it is necessary to extract noble gases (mostly argon and krypton) from groundwater samples,and it can be accomplished in two steps: first to extract the solved gas from groundwater,and then to separate krypton/argon from the air-like'' gas sample. Taking into account the complicated transfer and mixing of groundwater,analysis with multiple tracers is preferred in groundwater dating. Therefore,it is desired to separate and recover both argon and krypton from different gas samples with high yields to prevent any possible isotopic fractionation.
There have been several reports on the systems of Kr separation from air-like'' gases [5, 10, 25, 31, 32, 33]. A method based on frozen charcoal trap and gas chromatography [5, 32] has been applied for several liters of gas samples extracted from groundwater. A special krypton purification system for more than 100 L of bulk gas was reported by using several gas chromatographic steps,which has been applied in $^{81}$ Kr dating with AMS [10]. Systems for recovering krypton from gases with a volume in the range of 1-100 liters have also been built based on cryogenic distillation,gas chromatography,and titanium reactions [25, 33],and they have been successfully applied in radio-krypton dating measurements using ATTA. Here we report on a new system developed to recover both krypton and argon for ATTA measurements using 1-10 L gas extracted from groundwater samples. Using a combined process of cryogenic distillation,gas chromatography and titanium reaction,yields in excess of 90% and 99% have been achieved for krypton and argon,respectively. As a demonstration,abundances of $^{85}$ Kr and $^{81}$ Kr in several environmental samples have been determined by the ATTA instrument in Hefei (China).
Ⅱ. EXPERIMENTS A. Sampling in the field: extract gases from groundwater
The configuration of the sampling system for field sampling is shown in Fig. 1. A membrane contactor (Liquicel,4 $\times$ 13,type X40) is used to extract gases from groundwater. The hydrophobic hollow-fibre membrane contactor can efficiently separate gases from liquid [34],and has been widely used in different applications [35, 36, 37, 38]. High efficiency and simple structure make it very suitable to be used in field. Groundwater sample first passes through two fine filters to remove particles in the sample,then is introduced into the membrane contactor with a flow rate of 5-20 L/min monitored by a water flow meter. The contactor allows gases to diffuse from the water into gas-filled contactor pores which contact with gas line directly. The gas line is first evacuated by a diaphragm pump and is further purged by the gas extracted from groundwater. When the size of the residual air is believed to be negligible,the extracted gas will be pumped into a sample cylinder by the diaphragm pump. Under a water flow rate of 10 L/min,about 5 L aqueous gas can be collected in about 0.5 h,with an extraction efficiency of about 90% for Ar and O $_2$ and 70% for Kr. The exhaust end of the diaphragm pump connects with a sample cylinder,and the pressure of the final collected gas is limited to be about 1.2 bar. More gas can be collected in the same cylinder if a compressor pump is used,but it considerably increases the weight of the system and consumes more power in the field.
FIG. 1 Schematic of the system for groundwater degassing. Abbreviations: W1 and W2: water valve, V1 and V2: threeway valve, P: pressure gauge.
Gas samples extracted from groundwater are contained in cylinders in the field. Noble gases in the samples,mainly argon and krypton,will be extracted in the laboratories using cryogenic distillation and high-temperature Ti-reaction,followed by gas chromatographic separation. A schematic of the purification system is shown in Fig. 2.
FIG. 2 Schematic of the system to separate krypton and argon from air-like samples. MS 5A: molecular sieve 5 Å trap. Trap: frozen trap with activated charcoal. Furnace: furnace for titanium-reaction. Getter: titanium getter pump. GC: gas chromatography.
B. Cryogenic distillation and high-temperature Ti-reaction
Water vapor and carbon dioxide are first removed by a molecular sieve 5 Å trap (MS 5A),then the gas sample is introduced into a liquid-N $_2$ (77 K) cooled charcoal trap (trap 1,200 cm $^3$ volume,4 g charcoal of 16-32 mesh). A vacuum compressor is used and typically it takes about 30 min to condense more than 95% of the gas sample into trap 1. The vapor above the condensed sample in trap 1 flows into a quartz tube with a flow rate of about 50 mL/min,which is constrained by a mass flow controller. The quartz tube (burner'') is 34 mm in diameter,70 cm long,and O-ring sealed at both ends. The tube has been filled with about 200 g titanium sponge,and slowly heated to 1000 $^\circ$ C by a furnace. At temperature of 1000 $^\circ$ C,titanium reacts with gaseous O $_2$ and N $_2$ to form titanium oxides and titanium nitrides,respectively. Titanium also consumes other chemically active gases,including CH $_4$ at such high temperature. Consequently,residual gases in the burner are mostly noble gases,together with little amounts of N $_2$ and CH $_4$ . The flow rate of 50 mL/min is selected to keep a mild reaction rate to avoid overheating the burner. During the process,the pressure in the quartz tube is monitored with a gauge (MKS Baratron 627B,relative accuracy of 0.12%). Because krypton has a lower vapor pressure in liquid-N $_2$ cooled charcoal trap,krypton is kept condensed in trap 1,while most Ar,O $_2$ ,and N $_2$ gases are released to the burner.
When the argon gas accumulates in the burner,the gas pressure gets higher and prevents the gas flow from trap 1 to the burner. At this point,we turn off the gas flow into the burner. The gas pressure in the burner will decrease since the Ti-reaction continues. It takes about 20 minutes to reach an equilibrium,which is illustrated in Fig. 3(a). The residual gas (mostly argon) will be collected with another liquid-N $_2$ cooled charcoal trap (trap 2). Subsequently,we can turn on the gas flow from trap 1 to the burner again and restart the distillation. The procedure above can be repeated until most gas in Trap 1 is transferred,which is evidenced by a sudden drop of flow rate from trap 1 to the burner. Usually two iterations are needed for a sample with an original size of 10 L. We have investigated the composition of the gas flow during the process using gas chromatographic analysis,which is shown in Fig. 3(b). At the beginning,the main composition is N $_2$ . Later when N $_2$ depletes,O $_2$ and Ar become dominant in the flow. Finally,the flow stops when O $_2$ and Ar deplete.
FIG. 3 (a) Observed residual gas pressure in the “burner” during cryogenic distillation of an air sample of 10 L. (b) The measured flow rates of different gases. The total flow rate was controlled to be 50 mL/min by a mass flow controller. The distillation process was separated into two stages as indicated with vertical dotted lines. When the gas pressure in the “burner” is high, the gas flow is stopped for 20 min also and restarted when the argon gas is transferred to a cold trap.
When all the gas in the burner is collected in trap 2,the residual gas in trap 1 will be released by heating the trap to about 200 $^\circ$ C. It contains all the krypton in the original sample,but being still mostly N $_2$ and O $_2$ ,together with methane and some argon. The typical size is about 0.3 L (excluding methane). The gas released from trap 1 is also introduced to the burner with a flow rate controlled to be less than 50 mL/min.
When most N $_2$ ,O $_2$ and CH $_4$ are removed in the burner,the residual gas is transferred to another activated liquid-N $_2$ cooled charcoal trap (trap 3,10 mL volume,1 g charcoal of 16-32 mesh). Typical time needed for the whole distillation and titanium reaction process is about 4 h for an air sample of 10 L.
C. Chromatographic separation of krypton
A gas chromatographic (GC) separation process is applied to extract krypton gas from the sample. The residual gas in trap 3 is released by heating the trap to 200 $^\circ$ C and flushed into a chromatographic column. The column is filled with a molecular sieve (MS 5A,grain size of No.60-80,diameter of 6 mm,length of 2 m) and installed in a constant temperature bath at 30 $^\circ$ C. Pure helium (99.999% purity,30 mL/min) is used as carrier gas. Characteristic elution peaks of various gas components are monitored with a thermal conductivity detector (TCD),and they are shown in Fig. 4. Because several milliliters of argon and almost all krypton (micro-liters) are presented here,the chromatographic separation process includes a 2.5 min collection of argon and a 3 min collection of krypton,which is shown in Fig. 4.
FIG. 4 Chromatograms of the elution times of various gases originally extracted from an air-like sample. Two pairs of vertical dotted lines indicate the time ranges for Ar and Kr gas collection during 1st GC separation. The solid line shows the final constituents of the Kr sample.
The collected argon gas,together with the gas previously stored in trap 2,is transferred into a chamber installed with a Ti-getter pump (Getter 1,500 $^\circ$ C,Nanjing Huadong Electronics Co.) to get rid of residual N $_2$ . After that,the argon gas is collected in a sample holder filled with activated charcoal at liquid-N $_2$ temperature,and then stored at room temperature. A second run of chromatographic separation is applied to extract the Kr gas,which is also shown in Fig. 4. Then a getter process is applied to remove residual contaminants from the obtained krypton sample. Finally,the purified krypton gas is also collected in a sample holder filled with activated charcoal at liquid-N $_2$ temperature,being ready for ATTA measurement. The duration of the GC separation is about 1 h.
Note that the TCD signal in the GC process has been calibrated by using pure Ar,N $_2$ ,Kr,and CH $_4$ samples. The areas under the chromatographic peaks are used to determine the contents of various components in the obtained krypton gas. The quantity of extracted argon is derived from the pressure gauge and the volume of the sample holder. Typically 90 mL argon can be obtained from an ambient air sample of 10 L (STP).
Ⅲ. RESULTS AND DISCUSSION A. Yield and purity of products
The efficiency and purity of the extraction process were tested by several samples: ambient air samples with volumes varying from 1 L to 10 L (STP),two air samples of 10 L and mixed with 1% CH $_4$ ,and two groundwater samples. The GC data of recovered krypton and argon gases are shown in Fig. 5. The sizes of original samples and recovered Kr/Ar gases are presented in Table Ⅰ.
FIG. 5 Chromatograms of the final components of (a) Kr and (b) Ar gases recovered from air samples of different sizes. The curve at the bottom of each panel is from a blank sample (pure helium).
Table 1 The quality of processed gas (content of Kr in μL, content of Ar in mL), extraction yield (in %) and purity (in %) of Kr and Ara
Small amounts of Ar and N $_2$ are observed in the chromatography spectra of the recovered krypton (Fig. 5(a)), which may result from an air leak during the collection process. The relative size of these "impurities" in the obtained Kr samples becomes larger when the original sample size gets smaller. Since only 1 ppm of ambient air is krypton, even in the worst case (1 L air sample, Fig. 5(a)), the content of contaminated krypton relative to the whole krypton sample size is less than 0.1% and therefore negligible.
A small N $_2$ peak also presents in the TCD spectra of the recovered argon sample (Fig. 5(b)). It indicates that the Ti-reaction and getter cannot completely remove N $_2$ from the argon sample. But the content of N $_2$ is below 1%. In addition,we cannot detect any loss of krypton ( $<$ 0.1%) in the distillation process. Since the residual small impurities have no influence on ATTA measurements,there is no need of further efforts to remove the impurities from the obtained krypton and argon samples.
For ambient air gases and CH $_4$ -rich samples,the yields of Kr and Ar are more than 85% and 90%,respectively. The O $_2$ -poor samples were extracted in field from groundwater,and their initial constituents were analyzed by gas chromatography. Although original krypton contents in these two samples are too small to be determined by conventional chromatography,the sizes of recovered krypton gases agree reasonably with the estimated values according to the solubility of krypton and the temperature of the groundwater.
B. Isotopic fractionation and environmental samples
The extracted krypton gas is ready for ATTA measurements. Figure 6 shows the fluorescence signal of the trapped stable krypton isotopes when scanning the laser frequency. Two krypton samples were tested,one is a commercial pure krypton gas sample bought in 2007 (denoted as 2007 bottle''),and the other one is from a groundwater sample (the last sample shown in Table I). As shown in the figure,relative intensities of respective isotopes remain the same in both samples,indicating no detectable isotopic fractionation effect in the gas extraction and purification process. When the laser frequency is set on resonance with the rare isotope $^{85}$ Kr or $^{81}$ Kr,image of single atoms will be detected by a sensitive EMCCD camera and counts of individual atoms will be used to derive the isotopic abundances [24]. Two O $_2$ -poor'' samples extracted from deep groundwater obtained in north China plain are presented in Table I as application examples of the system. For the first sample,5 counts of $^{85}$ Kr and 228 counts of $^{81}$ Kr have been obtained in 4 h. The isotopic abundances of $^{85}$ Kr and $^{81}$ Kr,relative to the modern values,are determined to be 0.3% and 71 $\pm$ 5%,respectively,which leads to a $^{81}$ Kr age of about 113 $\pm$ 23 ky. For the second sample,18 counts of $^{85}$ Kr and 411 counts of $^{81}$ Kr have been recorded in 4 h. The relative-to-modern isotopic abundances of $^{85}$ Kr and $^{81}$ Kr are 0.6% and 106 $\pm$ 6%,respectively. It indicates that the age of this groundwater sample is beyond both detection ranges of $^{85}$ Kr and $^{81}$ Kr,and should be older than 50 y. The very low $^{85}$ Kr counts also indicate that air contamination throughout the sample extraction and purification process is negligible.
FIG. 6 The fluorescence spectra of krypton recovered from one groundwater sample and the “standard” krypton from a commercial gas bottle in 2007. The spectra show stable isotopes of krypton (78Kr, 80Kr, 82Kr, 83Kr, 84Kr and 86Kr) and their relative abundances, and demonstrates that there is no significant isotopic fractionation throughout the whole krypton separation process. Two arrows mark the position of the two rare isotopes 81Kr and 85Kr.
Ⅳ. CONCLUSION
We have developed an apparatus to extract krypton and argon gases from air-like gas samples using a combination of cryogenic distillation, chemical absorption by titanium, and gas chromatography. A portable sampling instrument has also been developed to extract dissolved air from groundwater samples in the field. The system has been tested by applying several different gas samples, including ambient air samples with a size of 1-10 L, synthesized CH $_4$ enriched samples which mimic gases extracted from groundwater, and also two samples obtained from real groundwater in the field. Krypton and argon gases can be separated with an efficiency better than 90%. The system fulfills present needs of the ATTA measurement of the rare noble-gas isotopes including $^{85}$ Kr, $^{81}$ Kr and $^{39}$ Ar.
Ⅴ. ACKNOWLEDGMENTS
This work was supported by the Special Fund for Land and Resources Research in the Public Interest (No.201511046) and the National Natural Science Foundation of China (No.21225314 and No.41102151). We would like to give our gratitude to Zong-yu Chen from IHEG for organizing the field campaign.
# Release R Luminescence version 0.7.0
###### by R Luminescence Team (February 20, 2017)
Dear R Luminescence users!
We are happy to announce that another new version of our R package (0.7.0) just made it to CRAN. As usual: many thanks for your helpful suggestions and comments! This is a major update and comes with a lot of bugfixes and several new functions serving various requests from the community. We are excited to present some of the new possibilities below. For a full list of all changes, please check the CRAN website here.
## What’s new?
### New analysis functions
This release comes with two (one of them long requested) new analysis functions:
Calculating fading corrected ages (calc_FadingCorr()) following the approach by Huntley and Lamothe (2001) has been possible in the package 'Luminescence' since 2012. Until now, however, a function to analyse the fading measurements themselves was missing. The new function analyse_FadingMeasurement() closes this gap by enabling the analysis of common ‘SAR fading’ measurement data. The function can be fed with an RLum.Analysis object (raw measurement data) or with an $$L_x/T_x$$ table and returns various numerical and graphical outputs. On top of that, the output can be directly forwarded to other functions, e.g., calc_FadingCorr() or calc_Kars2008(), for further calculations.
# fading_data: the imported fading measurement (e.g., an RLum.Analysis object or an Lx/Tx table)
g_value <- analyse_FadingMeasurement(
object = fading_data,
structure = c("Lx", "Tx"),
plot = TRUE,
verbose = TRUE,
n.MC = 100)
##
##
## n.MC: 100
## tc: 3.78e+02 s
## ---------------------------------------------------
## T_0.5 interpolated: NA
## T_0.5 predicted: 4e+11
## g-value: 5.18 ± 0.88 (%/decade)
## g-value (norm. 2 days): 6.01 ± 0.9 (%/decade)
## ---------------------------------------------------
## rho': 3.98e-06 ± 7.15e-07
## log10(rho'): -5.4 ± 0.08
## ---------------------------------------------------
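As mentioned above, the returned object can be passed straight to calc_FadingCorr(). A minimal sketch (the faded age of 100 ± 10 ka below is made up purely for illustration, and we assume the tc value is taken over from the forwarded object):
age_corrected <- calc_FadingCorr(age.faded = c(100, 10),
g_value = g_value,
n.MC = 1000)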
#### Analyse data determined by the portable OSL reader
With this version, we now also support files produced by a SUERC portable OSL reader. Importing a PSL file is as easy as running
psl_file <- read_PSL2R(file = "myPSLfile.psl",
drop_bg = TRUE,
as_decay_curve = TRUE,
smooth = TRUE,
merge = TRUE)
Note that there are several additional arguments available to modify the data when importing the raw data to R (see ?read_PSL2R). Once the data are available in R the next logical step would be to analyse them, which can be done via analyse_portableOSL().
plot_RLum(object = psl_file,
combine = TRUE,
subset = list(recordType = c("IRSL", "OSL")))
psl_results <- analyse_portableOSL(object = psl_file,
signal.integral = 1:3,
invert = FALSE,
normalise = TRUE,
plot = TRUE)
head(round(get_RLum(psl_results), 2))
## BSL BSL_error IRSL IRSL_error BSL_depletion IRSL_depletion IRSL_BSL_RATIO
## 1 0.67 0 0.70 0.00 0.87 0.91 1.05
## 2 1.33 0 1.43 0.01 0.88 0.90 1.07
## 3 0.36 0 0.45 0.00 1.21 1.03 1.27
## 4 0.40 0 0.42 0.00 1.05 1.02 1.06
## 5 1.92 0 1.93 0.01 0.90 0.93 1.01
## 6 1.87 0 1.84 0.01 0.96 0.95 0.98
As can be seen, analyse_portableOSL() produces a recognizable plot of commonly reported parameters, i.e. the signal intensities, depletion ratios and signal ratios. The function returns the numeric values, which can be used for further processing if desired.
### New models
#### Anomalous fading correction for feldspar IRSL (Kars et al., 2008)
The newly introduced function calc_Kars2008() applies the approach described in Kars et al. (2008), developed from the model of Huntley (2006), to calculate the expected sample-specific fraction of saturation of a feldspar and the fading-corrected age based on this model. The density of recombination centres ρ’ is a crucial parameter of this model and must be determined separately from a fading measurement. The function analyse_FadingMeasurement() can be used to calculate the sample-specific ρ’ value.
Below is an example with the example data provided by the package:
data("ExampleData.Fading", envir = environment())
fading_data <- ExampleData.Fading$fading.data$IR50
rhop <- analyse_FadingMeasurement(fading_data,
plot = FALSE,
verbose = FALSE,
n.MC = 999)
lxtx_data <- ExampleData.Fading$equivalentDose.data$IR50
kars_res <- calc_Kars2008(data = lxtx_data,
rhop = rhop,
ddot = c(7.00, 0.004),
n.MC = 999)
## Warning: 'calc_Kars2008' is deprecated.
## See help("Deprecated")
##
## [calc_Huntley2006()]
##
## -------------------------------
## (n/N) [-]: 0.15 ± 0.02
## (n/N)_SS [-]: 0.36 ± 0.06
##
## ---------- Measured -----------
## DE [Gy]: 130.97 ± 17.1
## D0 [Gy]: 539.01 ± 20.76
## Age [ka]: 18.71 ± 2.62
##
## D0 [Gy]: 624.66 ± 12.36
##
## ---------- Simulated ----------
## DE [Gy]: 305.39 ± 38.31
## D0 [Gy]: 570.09 ± 4.68
## Age [ka]: 43.63 ± 5.89
## Age @2D0 [ka]: 162.88 ± 8.25
## -------------------------------
The calc_Kars2008() function also calculates the level of saturation $$\left(\frac{n}{N}\right)$$ and the field saturation (i.e. athermal steady state, $$\left(\frac{n}{N}\right)_{SS}$$) value for the sample under investigation using the sample specific ρ’, unfaded D0 and environmental dose rate $$\dot{D}$$ values, following the approach of Kars et al. (2008).
#### Average Dose Model (Guérin et al., 2017)
To overcome the drawbacks of commonly used age (dose) models, Guérin et al. (2017) introduced a new dose model that calculates the average dose and its extrinsic dispersion, with standard errors estimated by bootstrapping. The function fits neatly into the collection of approaches dealing with age (dose) models. Example using the package's example data set:
data(ExampleData.DeValues, envir = environment())
calc_AverageDose(ExampleData.DeValues$CA1,
sigma_m = 0.1)
##
## [calc_AverageDose()]
##
## >> Initialisation <<
## n: 56
## delta: 65.7939285714286
## sigma_m: 0.1
## sigma_d: 0.286159381384861
##
## >> Calculation <<
## log likelihood: -19.251
## confidence intervals
## --------------------------------------------------
## IC_delta IC_sigma_d
## level 0.95 0.9500
## CredibleIntervalInf 60.84 0.2145
## CredibleIntervalSup 69.99 0.3955
## --------------------------------------------------
##
## >> Results <<
## ----------------------------------------------------------
## Average dose: 65.3597 se(Aver. dose): 2.4757
## sigma_d: 0.3092 se(sigma_d): 0.0471
## ----------------------------------------------------------
### Conversions
Even though data export to CSV files is already part of R's base functionality, we decided to make it even easier and more straightforward. The following new functions allow a direct conversion from proprietary input formats to CSV files (a minimal usage sketch follows the list):
• convert_BIN2CSV()
• convert_Daybreak2CSV()
• convert_PSL2CSV()
• convert_XSYG2CSV()
• write_RLum2CSV()
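For example (a minimal sketch; the file names are hypothetical, and further arguments, such as the output path, are documented on the respective help pages):
convert_BIN2CSV(file = "myMeasurement.binx")
convert_XSYG2CSV(file = "myMeasurement.xsyg")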
### Miscellaneous
#### Getting closer with GitHub
As some of you may already know, the R package 'Luminescence' is actively developed and maintained on the web-based Git repository hosting service GitHub. In this package version, we introduce a couple of new functions that make use of the public GitHub API v3.
With github_issues() you can query known issues (output omitted here):
github_issues()
github_branches() can be used to check all currently available development branches. The output further provides an install command that can be used to install a specific development branch manually.
github_branches()
## BRANCH SHA
## 2 dev_0.9.x fa48e2585e86134729fcd9750f5ed27627f32b78
## 3 master b8ba3dfdcee5c129172c191ca631265177458dde
## 4 surfexp 17a482f4e2e82d5a2fe83af20a5b4844197176f0
## INSTALL
## 1 devtools::install_github('r-lum/luminescence@dev_OSLcomponents')
## 2 devtools::install_github('r-lum/luminescence@dev_0.9.x')
## 3 devtools::install_github('r-lum/luminescence@master')
## 4 devtools::install_github('r-lum/luminescence@surfexp')
Finally, github_commits() returns the latest n code commits to the package:
github_commits(n = 1)
## SHA AUTHOR DATE
## 1 b8ba3dfdcee5c129172c191ca631265177458dde RLumSK 2020-11-03T12:27:42Z
## MESSAGE
## 1 Update GitHub pages
Ultimately, all these functions are the foundation for the also newly introduced function install_DevelopmentVersion(); a convenient implementation for installing the development version of the R package 'Luminescence' directly from GitHub. This function uses github_branches() to check which development branches of the R package 'Luminescence' are currently available on GitHub. The user is then prompted to choose one of the branches to install. It further checks whether the R package 'devtools' is currently installed and available on the system. Finally, it prints R code to the console that the user can copy and paste into the R console in order to install the desired development version of the package.
Alternatively, with force_install = TRUE the function checks if 'devtools' is available and then attempts to install the chosen development branch via install_github().
install_DevelopmentVersion()
install_DevelopmentVersion(force_install = TRUE)
In the R community, the “magrittr forward-pipe operator”, or short %>%, from the package magrittr has turned out to be very efficient for R scripting. With this operator, values are piped from one function to the next. Example:
rnorm(1000) %T>% hist(freq = FALSE, breaks = "FD") %>% density %>% lines
To further support this operator, the package magrittr is now loaded by default when attaching the package 'Luminescence'.
#### Other enhancements
• Thanks to Antoine Zink, the function read_Daybreak2R() now also handles binary files produced by the software TLAPLLIC v.3.2, used with a Daybreak model 1100 reader.
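A minimal import call (the file name is hypothetical; see ?read_Daybreak2R for the available arguments):
daybreak_data <- read_Daybreak2R(file = "myDaybreakFile.dat")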
## References
Huntley, D.J., Lamothe, M., 2001. Ubiquity of anomalous fading in K-feldspars and the measurement and correction for it in optical dating. Canadian Journal of Earth Sciences 38, 1093–1106. doi:10.1139/cjes-38-7-1093
Huntley, D.J., 2006. An explanation of the power-law decay of luminescence. Journal of Physics: Condensed Matter 18, 1359-1365. doi:10.1088/0953-8984/18/4/020
Kars, R.H., Wallinga, J., Cohen, K.M., 2008. A new approach towards anomalous fading correction for feldspar IRSL dating-tests on samples in field saturation. Radiation Measurements 43, 786-790. doi:10.1016/j.radmeas.2008.01.021 |
# Scalar curvature and the degree of symmetry
Let $$M$$ be a closed connected smooth manifold. We define the degree of symmetry of $$M$$ by $$N(M):=\sup_{g}\{\dim(\mathrm{Isom}(M,g))\},$$ where $$g$$ runs over the smooth Riemannian metrics on $$M$$ and $$\mathrm{Isom}$$ is the isometry group of the Riemannian manifold $$(M,g)$$.
The torus $$T^n$$ does not admit a Riemannian metric with positive scalar curvature and has $$N(T^n)\neq 0$$.
Does there exist an $$M$$ with $$N(M)=0$$ such that $$M$$ admits a metric with positive scalar curvature? That is, does admitting a metric with positive scalar curvature imply that the degree of symmetry is nonzero?
• You can replace the supremum in the definition of $N(M)$ with a maximum because $\dim\operatorname{Isom}(M, g) \leq \frac{1}{2}n(n+1)$. Jul 29 at 15:44
It seems that there are examples. By a theorem of Gromov and Lawson every simply connected manifold of dimension $$n \geq 5$$ which is not spin admits a metric of positive scalar curvature.
There are many examples of simply connected, non-spin, closed $$6$$-manifolds which cannot admit a smooth circle action, constructed by Puppe. Theorem 7 of https://arxiv.org/pdf/math/0606714.pdf.
Then, since the isometry group of a closed manifold is a compact Lie group, if $$N(M)>0$$ then taking a maximal torus gives a non-trivial circle action, which contradicts the above. So every metric has isometry group of dimension $$0$$.
Edit: A specific example would be a quartic $$3$$-fold $$X \subset \mathbb{CP}^4$$. It admits a metric with positive Ricci curvature (since it is Fano); alternatively, since it is not spin we can apply Gromov-Lawson. It does not admit any smooth circle action due to a theorem of Dessai and Wiemeler https://arxiv.org/pdf/1108.5327.pdf.
Geometry & Topology
The Binet–Legendre Metric in Finsler Geometry
Abstract
For every Finsler metric $F$ we associate a Riemannian metric $g_F$ (called the Binet–Legendre metric). The Riemannian metric $g_F$ behaves nicely under conformal deformation of the Finsler metric $F$, which makes it a powerful tool in Finsler geometry. We illustrate that by solving a number of named Finslerian geometric problems. We also generalize and give new and shorter proofs of a number of known results. In particular we answer a question of M Matsumoto about local conformal mapping between two Minkowski spaces, we describe all possible conformal self maps and all self similarities on a Finsler manifold. We also classify all compact conformally flat Finsler manifolds, we solve a conjecture of S Deng and Z Hou on the Berwaldian character of locally symmetric Finsler spaces, and extend a classic result by H C Wang about the maximal dimension of the isometry groups of Finsler manifolds to manifolds of all dimensions.
Most proofs in this paper go along the following scheme: using the correspondence $F \mapsto g_F$ we reduce the Finslerian problem to a similar problem for the Binet–Legendre metric, which is easier and is already solved in most cases we consider. The solution of the Riemannian problem provides us with the additional information that helps to solve the initial Finslerian problem.
Our methods apply even in the absence of the strong convexity assumption usually assumed in Finsler geometry. The smoothness hypothesis can also be replaced by a weaker partial smoothness, a notion we introduce in the paper. Our results apply therefore to a vast class of Finsler metrics not usually considered in the Finsler literature.
Article information
Source
Geom. Topol., Volume 16, Number 4 (2012), 2135-2170.
Dates
Revised: 15 May 2012
Accepted: 9 July 2012
First available in Project Euclid: 20 December 2017
https://projecteuclid.org/euclid.gt/1513732481
Digital Object Identifier
doi:10.2140/gt.2012.16.2135
Mathematical Reviews number (MathSciNet)
MR3033515
Zentralblatt MATH identifier
1258.53080
Citation
Matveev, Vladimir S; Troyanov, Marc. The Binet–Legendre Metric in Finsler Geometry. Geom. Topol. 16 (2012), no. 4, 2135--2170. doi:10.2140/gt.2012.16.2135. https://projecteuclid.org/euclid.gt/1513732481
References
• D V Alekseevskiĭ, Groups of conformal transformations of Riemannian spaces, Mat. Sb. 89(131) (1972) 280–296, 356 In Russian; translated in Math. USSR-Sb 18: (1972), 285–301
• D Bao, On two curvature-driven problems in Riemann–Finsler geometry, from: “Finsler geometry, Sapporo 2005–-in memory of Makoto Matsumoto”, (S V Sabau, H Shimada, editors), Adv. Stud. Pure Math. 48, Math. Soc. Japan, Tokyo (2007) 19–71
• D Bao, S-S Chern, Z Shen, An introduction to Riemann–Finsler geometry, Graduate Texts in Mathematics 200, Springer, New York (2000)
• V N Berestovskiĭ, Generalized symmetric spaces, Sibirsk. Mat. Zh. 26 (1985) 3–17, 221
• M Berger, Sur les groupes d'holonomie homogène des variétés à connexion affine et des variétés riemanniennes, Bull. Soc. Math. France 83 (1955) 279–330
• H Busemann, The geometry of geodesics, Academic Press, New York (1955)
• H Busemann, B B Phadke, Two theorems on general symmetric spaces, Pacific J. Math. 92 (1981) 39–48
• P Centore, Volume forms in Finsler spaces, Houston J. Math. 25 (1999) 625–640
• S-S Chern, Local equivalence and Euclidean connections in Finsler spaces, Sci. Rep. Nat. Tsing Hua Univ. Ser. A. 5 (1948) 95–121
• S-S Chern, Z Shen, Riemann–Finsler geometry, Nankai Tracts in Mathematics 6, World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ (2005)
• G de Rham, Sur la reductibilité d'un espace de Riemann, Comment. Math. Helv. 26 (1952) 328–344
• S Deng, Z Hou, The group of isometries of a Finsler space, Pacific J. Math. 207 (2002) 149–155
• S Deng, Z Hou, Homogeneous Finsler spaces of negative curvature, J. Geom. Phys. 57 (2007) 657–664
• S Deng, Z Hou, On symmetric Finsler spaces, Israel J. Math. 162 (2007) 197–219
• J Ferrand, The action of conformal transformations on a Riemannian manifold, Math. Ann. 304 (1996) 277–291
• P Foulon, Locally symmetric Finsler spaces in negative curvature, C. R. Acad. Sci. Paris Sér. I Math. 324 (1997) 1127–1132
• D Fried, Closed similarity manifolds, Comment. Math. Helv. 55 (1980) 576–582
• P Hartman, Ordinary differential equations, John Wiley & Sons, New York (1964)
• E Heil, D Laugwitz, Finsler spaces with similarity are Minkowski spaces, Tensor 28 (1974) 59–62
• S Helgason, Differential geometry, Lie groups, and symmetric spaces, Pure and Applied Mathematics 80, Academic Press [Harcourt Brace Jovanovich Publishers], New York (1978)
• S Ishihara, Homogeneous Riemannian spaces of four dimensions, J. Math. Soc. Japan 7 (1955) 345–370
• C-W Kim, Locally symmetric positively curved Finsler spaces, Arch. Math. (Basel) 88 (2007) 378–384
• S Kobayashi, T Nagano, Riemannian manifolds with abundant isometries, from: “Differential geometry (in honor of Kentaro Yano)”, (M Obata, S Kobayashi, editors), Kinokuniya, Tokyo (1972) 195–219
• N H Kuiper, Compact spaces with a local structure determined by the group of similarity transformations in $E\sp n$, Nederl. Akad. Wetensch., Proc. 53 (1950) 1178–1185; also in Indagationes Math. 12 (1950) 411–418
• R S Kulkarni, Conformally flat manifolds, Proc. Nat. Acad. Sci. U.S.A. 69 (1972) 2675–2676
• A M Legendre, Traité des fonctions elliptiques et des intégrales eulériennes, Volume 1, Huzard-Courcier (1825)
• J Liouville, Extension au cas des trois dimensions de la question du tracé géographique, Applications de l'analyse à la géométrie (1850) 609–617
• J Liouville, Théorème sur l'équation $dx^2+dy^2+dz^2 = \lambda (d\alpha^2 + d\beta^2 + d\gamma^2)$, J. Math. Pures et Appliquées (1850)
• R L Lovas, J Szilasi, Homotheties of Finsler manifolds, SUT J. Math. 46 (2010) 23–34
• E Lutwak, D Yang, G Zhang, A new ellipsoid associated with convex bodies, Duke Math. J. 104 (2000) 375–390
• E Lutwak, D Yang, G Zhang, $L\sb p$ John ellipsoids, Proc. London Math. Soc. 90 (2005) 497–520
• M Matsumoto, Conformally Berwald and conformally flat Finsler spaces, Publ. Math. Debrecen 58 (2001) 275–285
• S Matsumoto, Foundations of flat conformal structure, from: “Aspects of low-dimensional manifolds”, (Y Matsumoto, S Morita, editors), Adv. Stud. Pure Math. 20, Kinokuniya, Tokyo (1992) 167–261
• V S Matveev, Riemannian metrics having common geodesics with Berwald metrics, Publ. Math. Debrecen 74 (2009) 405–416
• V S Matveev, H-B Rademacher, M Troyanov, A Zeghib, Finsler conformal Lichnerowicz–Obata conjecture, Ann. Inst. Fourier (Grenoble) 59 (2009) 937–949
• V D Milman, A Pajor, Isotropic position and inertia ellipsoids and zonoids of the unit ball of a normed $n$–dimensional space, from: “Geometric aspects of functional analysis (1987–88)”, (J Lindenstrauss, V D Milman, editors), Lecture Notes in Math. 1376, Springer, Berlin (1989) 64–104
• D Montgomery, H Samelson, Transformation groups of spheres, Ann. of Math. 44 (1943) 454–470
• P Planche, Géométrie de Finsler sur les espaces symétriques, Thèse Genève (1995)
• P Planche, Structures de Finsler invariantes sur les espaces symétriques, C. R. Acad. Sci. Paris Sér. I Math. 321 (1995) 1455–1458
• R Schoen, On the conformal and CR automorphism groups, Geom. Funct. Anal. 5 (1995) 464–481
• R Schoen, S-T Yau, Conformally flat manifolds, Kleinian groups and scalar curvature, Invent. Math. 92 (1988) 47–71
• J Simons, On the transitivity of holonomy systems, Ann. of Math. 76 (1962) 213–234
• Z I Szabó, Berwald metrics constructed by Chevalley's polynomials
• Z I Szabó, Positive definite Berwald spaces, Structure theorems on Berwald spaces, Tensor 35 (1981) 25–39
• I Vaisman, C Reischer, Local similarity manifolds, Ann. Mat. Pura Appl. 135 (1983) 279–291
• C Vincze, A new proof of Szabó's theorem on the Riemann-metrizability of Berwald manifolds, Acta Math. Acad. Paedagog. Nyházi. 21 (2005) 199–204
• H-C Wang, On Finsler spaces with completely integrable equations of Killing, J. London Math. Soc. 22 (1947) 5–9
• K Yano, On $n$–dimensional Riemannian spaces admitting a group of motions of order $n(n-1)/2+1$, Trans. Amer. Math. Soc. 74 (1953) 260–279 |
# Why does the addition of PCl5 increase the rate of dissociation of PCl5?
$\ce{PCl5 <=> PCl3 +Cl2}$
It's stated in my book that addition of $\ce{PCl5}$ to the equilibrium mixture increases the rate of forward reaction but no reason is mentioned for it.
However, before this the author derived $K_c = \dfrac{x^2}{(1-x)V}$ (using the law of mass action at equilibrium), where $V$ is the volume of the container and $x$ is the number of moles of $\ce{PCl5}$ dissociated from the amount taken initially. I couldn't relate the "rate of forward reaction" statement to this equation, while earlier, in the case of the reaction:
$\ce{H2 + I2 <=> 2HI}$
I was able to relate the equation I obtained through law of mass action with the effect of concentration on rate of forward or backward reaction.
Could someone please explain why addition of $\ce{PCl5}$ to the equilibrium mixture increases the rate of forward reaction?
Imagine an overpopulated kingdom suddenly gaining a large area of land (Start of Reaction). Immediately, groups of people start migrating to the new land (Forward Reaction). However, as people migrate to this new land, some decide that, for whatever reason, they don't like this area (Reverse Reaction). As more and more people move there, more start coming back, until there is an equal rate of people moving to the territory and people moving back (Dynamic Equilibrium).
To your question: to model this system using my example of immigration, we suddenly add an enormous influx of refugees into the old kingdom (Increase in $\ce{PCl5}$).
So the land in the outer territory seems more valuable now (less crowding, more open space, better air, etc.). This causes a temporary increase in the forward direction until the reverse reaction increases to compensate. This is called Le Chatelier's principle in chemistry. |
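In symbols (a sketch that treats the forward and reverse steps as elementary, which is what the law-of-mass-action derivation in your book assumes):
$$r_f = k_f[\ce{PCl5}], \qquad r_b = k_b[\ce{PCl3}][\ce{Cl2}]$$
At equilibrium $r_f = r_b$. Adding extra $\ce{PCl5}$ at constant volume instantly raises $[\ce{PCl5}]$, so $r_f$ jumps while $r_b$ is momentarily unchanged; net forward dissociation then proceeds until the product concentrations have grown enough to restore $r_f = r_b$. Equivalently, the reaction quotient $Q = \dfrac{[\ce{PCl3}][\ce{Cl2}]}{[\ce{PCl5}]}$ drops below $K_c$, and the system moves forward to bring $Q$ back up to $K_c$ (Le Chatelier's principle).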
# Evaluation of functions: examples with solutions
Evaluating a function means finding the output that corresponds to a particular input: replace the input variable everywhere it appears in the formula with the given value, then simplify.
Evaluating from a formula. Example: evaluate f(x) = 2x + 4 for x = 5. Replacing x with 5 gives f(5) = 2(5) + 4 = 14.
Example: evaluate h(x) = x^2 + 2 for x = -3. Replace the variable x with -3: h(-3) = (-3)^2 + 2 = 9 + 2 = 11. The parentheses matter; without them you could make the mistake h(-3) = -3^2 + 2 = -9 + 2 = -7, which is wrong.
Example: for f(x) = 49 - x^2, every x is replaced by the input, so f(5) = 49 - 5^2 = 49 - 25 = 24.
Example: given h(p) = p^2 + 2p, evaluate h(4). Substituting 4 for p gives h(4) = 4^2 + 2(4) = 16 + 8 = 24. Therefore, for an input of 4 the output is 24.
Example: given g(m) = √(m - 4), evaluate g(5): g(5) = √(5 - 4) = √1 = 1.
The input value does not have to be a number. For f(x) = x^2 + 3x - 4:
f(2) = 2^2 + 3(2) - 4 = 6,
f(a) = a^2 + 3a - 4,
f(a + h) = (a + h)^2 + 3(a + h) - 4 = a^2 + 2ah + h^2 + 3a + 3h - 4,
and the difference quotient simplifies to
(f(a + h) - f(a)) / h = (2ah + h^2 + 3h) / h = 2a + h + 3.
Solving a function equation. Solving means finding the input values that produce a given output. Example: solve h(p) = 3 for h(p) = p^2 + 2p. Substitute the original function to get p^2 + 2p = 3, subtract 3 from each side to get p^2 + 2p - 3 = 0, and factor: (p + 3)(p - 1) = 0. We set each factor equal to 0 and solve for p in each case: p = -3 or p = 1.
Example: solve g(m) = 2 for g(m) = √(m - 4). Squaring both sides gives m - 4 = 4, so m = 8.
The domain of a function describes the values of the input and the range describes the values of the output. The function must work for all values we give it, so the domain has to be chosen correctly; for example, √x requires x ≥ 0, since negative inputs are excluded when working over the real numbers.
Functions in table form. When a function is given as a table, evaluating means finding the given input in the row (or column) of input values and reading off the corresponding output; solving means finding the given output value and noting every input that produces it. Representing a function in table form may be more useful than using an equation. In the example table, the output corresponding to n = 3 is 7, so g(3) = 7, and solving g(n) = 6 means identifying the input values n that produce an output value of 6; the table shows two solutions, n = 2 and n = 4.
A function can also be described verbally. For example, let P relate the type of pet to the duration of its memory span in hours. A goldfish can remember up to 3 months, so P(goldfish) = 2160, while an adult dog can remember for about 5 minutes and a puppy's memory span is no longer than 30 seconds.
Functions from graphs. To evaluate a function from its graph, locate the input on the horizontal axis and read off the corresponding output. If the graph of f passes through the point with coordinates (2, 1), then f(2) = 1. To solve f(x) = 4 graphically, find the output value 4 on the vertical axis and locate every point of the curve at that height; in the example used here this happens at x = -1 and x = 3, so f(-1) = 4 and f(3) = 4.
When an equation does (and does not) define a function. It is important to note that not every relationship expressed by an equation can be expressed as a function with a formula. The equation 2n + 6p = 12 can be solved for p: subtracting 2n from both sides and dividing by 6 gives p = (12 - 2n)/6, so p is a function of n. If x - 8y^3 = 0, then y = ∛x / 2, so y is a function of x. The circle x^2 + y^2 = 1, however, gives y = ±√(1 - x^2): a single input x generally produces two outputs, so this equation does not represent y as a function of x.
Composite functions. Example: given f(x) = x^2 + 6 and g(x) = 2x - 1,
(f ∘ g)(x) = f(2x - 1) = (2x - 1)^2 + 6 = 4x^2 - 4x + 1 + 6 = 4x^2 - 4x + 7, and
(g ∘ f)(x) = g(x^2 + 6) = 2(x^2 + 6) - 1 = 2x^2 + 11.
Solve PDE in 2D
Problem: How should I go about solving this PDE:
$$\phi_x+\phi_y=x+y-3c$$
Where $\phi = \phi(x,y)$, $c$ is a constant, and $\phi$ is specified on the circle
$$x^2+y^2=1$$
My attempt to solve it: I would like to use the method of characteristics, but then I get stuck because of the given initial condition. In fact, so far I have the characteristic equations
$$\dot{{z}}(s)=x+y-3c$$ $$\dot{{x}}(s)= 1$$ $$\dot{{y}}(s)= 1$$
The last two are easy to solve but then I am not sure how to use the initial condition. If you know of a different/easier method to solve this PDE, feel free to let me know, thanks!
• Your "initial condition" doesn't say anything about $\phi$. Perhaps you mean that $\phi(x,y)$ is specified on the circle $x^2 + y^2 = 1$? – Robert Israel Dec 3 '13 at 5:18
• oh yeah, sorry! I meant that $\phi$ is specified on the circle $x^2+y^2=1$. – johnsteck Dec 3 '13 at 5:23
• – Mhenni Benghorbal Dec 3 '13 at 5:46
As you say, the characteristic equations are $\dot{z} = x + y - 3 c$, $\dot{x} = 1$, $\dot{y} = 1$. So the characteristic curves are $x = x_0 + s$, $y = y_0 + s$, i.e. $x - y = \text{constant}$. But there's a problem with specifying the initial conditions on the circle $x^2 + y^2 = 1$: the characteristic curves through most points either don't intersect the circle at all (so the initial condition doesn't determine $\phi$ there) or intersect it in two points (so the initial conditions might not be consistent).
This problem can also be solved by a substitution: $$x = s+t,\;\;\; y = s-t,\;\;\; \psi(s,t)=\phi(s+t,s-t).$$ Then $$\psi_{s} = \phi_{x}+\phi_{y} = x+y-3c = 2s-3c$$ Then there is a function $d(t)$ such that $$\psi = s^{2}-3cs+d(t).$$ The function $d$ is determined from $\psi(s,t)$, which is assumed to be known on $1=x^{2}+y^{2}=2s^{2}+2t^{2}$. |
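Written back in the original variables (with $s=\frac{x+y}{2}$ and $t=\frac{x-y}{2}$), this gives the general solution $$\phi(x,y)=\left(\frac{x+y}{2}\right)^{2}-\frac{3c(x+y)}{2}+d\!\left(\frac{x-y}{2}\right),$$ where the function $d$ is to be determined, where possible, from the values of $\phi$ prescribed on the circle $x^2+y^2=1$; one can check directly that this satisfies $\phi_x+\phi_y=x+y-3c$.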
MathSciNet bibliographic data MR1371124 (97g:46024) 46E15 (46B20) González, Manuel; Gutiérrez, Joaquín M.; Llavona, José G. Polynomial continuity on $l_1$. Proc. Amer. Math. Soc. 125 (1997), no. 5, 1349–1353. Article
# Math Help - Related Rates, water entering trough problem
1. ## Related Rates, water entering trough problem
A trough is 15ft long and 4ft across the top. Its ends are isosceles triangles with height 3 ft. Water runs into the trough at a rate of 2.5 cubic feet per minute. How fast is the water rising when it is 2 feet deep?
The answer is .0625 but I'm pretty lost on how to get there.
2. Originally Posted by Porcelain
A trough is 15ft long and 4ft across the top. Its ends are isosceles triangles with height 3 ft. Water runs into the trough at a rate of 2.5 cubic feet per minute. How fast is the water rising when it is 2 feet deep?
The answer is .0625 but I'm pretty lost on how to get there.
$\frac{dV}{dt} = 2.5 \, ft^3/min$
volume of water in the tank ...
$V = \frac{1}{2} \cdot b \cdot h \cdot 15$
using similar triangles ...
$\frac{b}{h} = \frac{4}{3}$
solve for $b$, substitute the result for $b$ in the volume formula to get $V$ as a function of $h$.
take the time derivative and determine $\frac{dh}{dt}$ when $h = 2$ ft
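Carrying that outline through: from $\frac{b}{h} = \frac{4}{3}$ we get $b = \frac{4h}{3}$, so $V = \frac{1}{2} \cdot \frac{4h}{3} \cdot h \cdot 15 = 10h^2$ and $\frac{dV}{dt} = 20h \, \frac{dh}{dt}$. With $\frac{dV}{dt} = 2.5 \, ft^3/min$ and $h = 2 \, ft$, this gives $\frac{dh}{dt} = \frac{2.5}{40} = 0.0625 \, ft/min$, matching the stated answer.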
3. Skeeter you always answer my questions. Thanks SOO much |
Question
The product of two consecutive integers is $156$; find the numbers. A) $10$ and $13$ B) $12$ and $13$ C) $12$ and $11$ D) $1$ and $13$
Hint: Assign a variable to one of the consecutive integers; the next one is then determined. Frame and solve an algebraic equation to find the unknown values.
Consecutive integers are integers that differ by $1$.
We need to find the numbers as per the condition in the question.
Let the two consecutive numbers be $x$ and $x+1$ respectively.
The product of two consecutive integers is $156$ $\Rightarrow x\times (x+1)=156$
\begin{align} & \Rightarrow {{x}^{2}}+x=156 \\ & \Rightarrow {{x}^{2}}+x-156=0 \\ & \Rightarrow {{x}^{2}}+13x-12x-156=0 \\ & \Rightarrow x(x+13)-12(x+13)=0 \\ & \Rightarrow (x+13)(x-12)=0 \\ & \Rightarrow x=-13,12 \\ \end{align}
Therefore, if $x=-13$, the consecutive integers are $-13$ and $-12$.
If $x=12$, the consecutive integers are $12$ and $13$.
Note: The two consecutive integers can also be taken as $x-1$ and $x$ respectively. The same calculation then gives $x=13$, so $x-1=12$, and the numbers are again $12$ and $13$.
# Common logarithm
The common logarithm.
In mathematics, the common logarithm is the logarithm with base 10. It is also known as the decadic logarithm and also as the decimal logarithm, named after its base, or Briggsian logarithm, after Henry Briggs, an English mathematician who pioneered its use. It is indicated by log10(x), or sometimes Log(x) with a capital L (however, this notation is ambiguous since it can also mean the complex natural logarithmic multi-valued function). On calculators it is usually "log", but mathematicians usually mean natural logarithm rather than common logarithm when they write "log". To mitigate this ambiguity the ISO specification is that log10(x) should be lg (x) and loge(x) should be ln (x).
## Uses
Before the early 1970s, handheld electronic calculators were not yet in widespread use. Due to their utility in saving work in laborious multiplications and divisions with pen and paper, tables of base 10 logarithms were given in appendices of many books. Such a table of "common logarithms" gave the logarithm, often to 4 or 5 decimal places, of each number in the left-hand column, which ran from 1 to 10 by small increments, perhaps 0.01 or 0.001. There was only a need to include numbers between 1 and 10, since the logarithms of larger numbers can then be easily derived.
For example, the logarithm of 120 is given by:
$\log_{10}120=\log_{10}(10^2\times 1.2)=2+\log_{10}1.2\approx2+0.079181.$
The last number (0.079181)—the fractional part of the logarithm of 120, known as the mantissa of the common logarithm of 120—was found in the table.[note 1] The location of the decimal point in 120 tells us that the integer part of the common logarithm of 120, called the characteristic of the common logarithm of 120, is 2.
Numbers between (and excluding) 0 and 1 have negative logarithms. For example,
$\log_{10}0.012=\log_{10}(10^{-2}\times 1.2)=-2+\log_{10}1.2\approx-2+0.079181=-1.920819$
To avoid the need for separate tables to convert positive and negative logarithms back to their original numbers, a bar notation is used:
$\log_{10}0.012\approx-2+0.079181=\bar{2}.079181$
The bar over the characteristic indicates that it is negative whilst the mantissa remains positive. When reading a number in bar notation out loud, the symbol $\bar{n}$ is read as "bar n", so that $\bar{2}.079181$ is read as "bar 2 point 07918...".
Common logarithm, characteristic, and mantissa of powers of 10 times a number

| number | logarithm | characteristic | mantissa | combined form |
|---|---|---|---|---|
| n (= 5 × 10^i) | log10(n) | i (= floor(log10(n))) | log10(n) − characteristic | |
| 5 000 000 | 6.698 970... | 6 | 0.698 970... | 6.698 970... |
| 50 | 1.698 970... | 1 | 0.698 970... | 1.698 970... |
| 5 | 0.698 970... | 0 | 0.698 970... | 0.698 970... |
| 0.5 | −0.301 029... | −1 | 0.698 970... | $\bar{1}$.698 970... |
| 0.000 005 | −5.301 029... | −6 | 0.698 970... | $\bar{6}$.698 970... |
Note that the mantissa is common to all of the 5×10i. This holds for any positive real number $x$ because:
$\log_{10}(x\times10^i)=\log_{10}(x)+\log_{10}(10^i)=\log_{10}(x)+i$.
Since $i$ is always an integer the mantissa comes from $\log_{10}(x)$ which is constant for given $x$. This allows a table of logarithms to include only one entry for each mantissa. In the example of 5×10i, 0.698 970 (004 336 018 ...) will be listed once indexed by 5, or 0.5, or 500 etc..
The following example uses the bar notation to calculate 0.012 × 0.85 = 0.0102:
$\begin{array}{rll} \text{As found above,} &\log_{10}0.012\approx\bar{2}.079181 \\ \text{Since}\;\;\log_{10}0.85&=\log_{10}(10^{-1}\times 8.5)=-1+\log_{10}8.5&\approx-1+0.929419=\bar{1}.929419\;, \\ \log_{10}(0.012\times 0.85) &=\log_{10}0.012+\log_{10}0.85 &\approx\bar{2}.079181+\bar{1}.929419 \\ &=(-2+0.079181)+(-1+0.929419) &=-(2+1)+(0.079181+0.929419) \\ &=-3+1.008600 &=-2+0.008600\;^* \\ &\approx\log_{10}(10^{-2})+\log_{10}(1.02) &=\log_{10}(0.01\times 1.02) \\ &=\log_{10}(0.0102) \end{array}$
* This step makes the mantissa between 0 and 1, so that its antilog (10mantissa) can be looked up.
Numbers are placed on slide rule scales at distances proportional to the differences between their logarithms. By mechanically adding the distance from 1 to 2 on the lower scale to the distance from 1 to 3 on the upper scale, one can quickly determine that 2 x 3 = 6.
## History
Common logarithms are sometimes also called "Briggsian logarithms" after Henry Briggs, a 17th-century British mathematician.
Because base 10 logarithms were most useful for computations, engineers generally simply wrote "log(x)" when they meant log10(x). Mathematicians, on the other hand, wrote "log(x)" when they meant loge(x) for the natural logarithm. Today, both notations are found. Since hand-held electronic calculators are designed by engineers rather than mathematicians, it became customary that they follow engineers' notation. So the notation, according to which one writes "ln(x)" when the natural logarithm is intended, may have been further popularized by the very invention that made the use of "common logarithms" far less common, electronic calculators.
## Numeric value
The numerical value for logarithm to the base 10 can be calculated with the following identity.
$\log_{10}(x) = \frac{\ln(x)}{\ln(10)} \qquad \text{ or } \qquad \log_{10}(x) = \frac{\log_2(x)}{\log_2(10)}$
as procedures exist for determining the numerical value for logarithm base e and logarithm base 2. |
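As a quick numerical check of these identities (a minimal sketch in C++ using only the standard math library; the value 120 is reused from the example earlier in this article):

    #include <cmath>
    #include <cstdio>

    int main() {
        double x = 120.0;
        // Common logarithm computed three ways; all three agree.
        double direct   = std::log10(x);                  // library base-10 logarithm
        double via_ln   = std::log(x)  / std::log(10.0);  // via the natural logarithm
        double via_log2 = std::log2(x) / std::log2(10.0); // via the base-2 logarithm
        std::printf("%.6f %.6f %.6f\n", direct, via_ln, via_log2);
        // Prints: 2.079181 2.079181 2.079181
        return 0;
    }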
# 2D Fourier Transform invertibility [closed]
Edit: I found an error in my C++ code. -(rand()&1)) is either 0 or -1 rather than -1 or 1 as I intended. Changing this to (0.5-(rand()&1))*2 seems to solve the problem. Does anyone know why only zeroes and positive values in the power spectrum would cause peaks in the corners of the terrain?
For context, I'm working on a program that procedurally generates variable, tileable terrain with spectral synthesis, but I don't think I entirely understand the concept of the Fourier transform (I've taken calculus, but have been reading online about the transform).
I'm using Mathematica currently but I'm hoping to move into C++, which is enormously faster for the filtering. I came here since this seemed much more of a math problem than a programming one, but I'll post my code to help explain my algorithm:
f[d_] := (1/(1000 d + 1))^2.4 (* filtering function *)
r = 128; (* resolution *)
noise = Fourier@RandomComplex[{-1 - I, 1 + I}, {r, r}];
(* generates a 2d array of random complexes, then takes the Fourier transform *)
For[x = 1, x <= r, x++,
For[y = 1, y <= r, y++,
d = EuclideanDistance[{x, y}, {r/2 - .5, r/2 - .5}]/r;
(* calculates distance to center *)
noise[[x, y]] = f@d*noise[[x, y]];
(* applies filtering function to each point *)
]
]
This produces decent-looking terrain that is the goal of the algorithm. However, the filtering (inside the For loops) takes a long time (about 1 second for the above code for 128^2 points), while my C++ implementation of the filter takes under a second for the 512^2 terrain linked below.
Produced with ImageAdjust@Image@Log@Abs@noise:
http://i.stack.imgur.com/ip6bM.png
Produced with ReliefImage@Abs@InverseFourier@noise:
http://i.stack.imgur.com/9MKNV.png
Earlier I said I applied a low-pass filter to the power spectrum, and that was incorrect. I apply a filter to the white noise by attenuating the power spectrum in proportion to the distance to the center. This is implemented in Mathematica above by f[d] which represents the factor of attenuation as a function of distance from the center.
I'm happy with the results but not the execution speed. I've implemented the random complex generation and the filter in C++. The Fourier transform of white noise appears to be simply more white noise, so I figured I didn't need to compute the forward transform before filtering. Here is the relevant C++ code:
for(int i=0; i<RESOLUTION*RESOLUTION; i++){
int x = i/RESOLUTION;
int y = i%RESOLUTION;
double distance=sqrt(pow(((double)RESOLUTION/2-0.5-x),2)+pow((double)RESOLUTION/2-0.5-y,2))/(double)RESOLUTION;
double f = pow(DISTANCE_LIN*distance+1,-DISTANCE_EXP);
realnoise[i] = f * ((double)(rand()%RAND_MAX))/((double)RAND_MAX) * -(rand()&1);
imagnoise[i] = f * ((double)(rand()%RAND_MAX))/((double)RAND_MAX) * -(rand()&1);
// Random positive double // Random sign
}
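A note on the sign issue mentioned in the edit at the top: -(rand()&1) evaluates to 0 or -1, so each generated component is either exactly zero or carries the same sign, rather than being symmetric about zero. One minimal way to draw each component roughly uniformly from (-1, 1) instead (a sketch; rand() is kept only for consistency with the snippet above, and better generators exist in <random>) is:

    double sign = (rand() & 1) ? 1.0 : -1.0;               // -1 or +1, never 0
    double magnitude = (double)rand() / (double)RAND_MAX;  // roughly uniform in [0, 1]
    realnoise[i] = f * magnitude * sign;
    imagnoise[i] = f * ((double)rand() / (double)RAND_MAX) * ((rand() & 1) ? 1.0 : -1.0);

This has the same effect as the (0.5-(rand()&1))*2 fix mentioned in the edit.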
Right now I just export the contents of realnoise and imagnoise to CSV files and import them into Mathematica for viewing (creating the noise variable with noise = realnoise + imagnoise*I;). The result contains oddities I can't seem to remove by filtering. Specifically, the four corners of the resultant terrain have very high values.
Produced with ReliefImage@Abs@InverseFourier@noise:
(not letting me do more than two hyperlinks) http:// i.stack.imgur.com/zx2YA.png
The power spectrum after importation from C++ contains zeroes, so I can't show a logarithmic power spectrum. Produced with ImageAdjust@Image@Abs@Noise:
http:// i.stack.imgur.com/Sit53.png
From what I know of the Fourier transform, the top left corner of the terrain represents the mean of all values of the power spectrum. I subtracted this mean from all filtered noise values before transforming (noise=noise-Mean@noise), and afterwards every leftmost pixel of the terrain was equal to zero. Nothing else was affected. I suppose it makes sense that only the "zero-frequency" points would be affected, but then why wasn't the top edge affected as well? What is causing the upwards slope around the corners? Is there something special about the Fourier transform of white noise versus regular white noise, or am I misunderstanding the invertibility of the transform?
I'd like to try to help you with this, but I'm finding it a bit difficult to follow your description of the problem. For a start, could you indicate more precisely what refers to what? (E.g. "the mean of all the values" -- which values? You "subtracted the mean from all filtered noise values before transforming" -- in which direction?) Also, the second image seems to show only black with a white smudge in the middle -- is this what it's supposed to show? If this really shows the power spectrum of the other image, perhaps you could colour it differently so the values can be distinguished? – joriki Oct 24 '11 at 14:12
Also it might help if you tell us something about how you're doing the transform, so we can try to assess how likely the problem is to be in the software, your use of the software, your understanding of the transform or whatever. For instance, there could be problems with values automatically being reordered or not, with normalization, with arrangement of real and complex values in memory, etc. etc., almost none of which we can say anything useful about at the current level of detail of your question. – joriki Oct 24 '11 at 14:13
You should try to understand/test all these concepts first in 1D – leonbloy Oct 24 '11 at 14:23
"it applies a low-pass filter to both the imaginary and real parts and takes the inverse Fourier " Sounds rather arbitrary to me (why apply a low pass filter in the transformed domain? are you sure you didn't misunderstand the algorithm?) – leonbloy Oct 24 '11 at 14:34
To apply a low-pass filter in the "time" domain is the same as multiplying in the frequency domain by an almost-rectangular window. Dually, applying a low-pass filter in the "frequency domain" (??) would amount to multiplying in the "time" (here, the pixel) domain by a rectangular window. I doubt you want that. – leonbloy Oct 24 '11 at 14:37
Journal of the Korean Mathematical Society (Vol. 47, No. 6)
J. Korean Math. Soc. 2010 Vol. 47, No. 6, 1107—1330
On mixed two-term exponential sums Zhang Tianping MSC numbers : 11L05 J. Korean Math. Soc. 2010 Vol. 47, No. 6, 1107—1122
Eigenvalue problem of biharmonic equation with Hardy potential Yangxin Yao, Shaotong He, and Qingtang Su MSC numbers : 35J40, 35J60, 35J85 J. Korean Math. Soc. 2010 Vol. 47, No. 6, 1123—1135
On the goodness of fit test for discretely observed sample from diffusion processes: divergence measure approach Sangyeol Lee MSC numbers : 60J60, 62F05 J. Korean Math. Soc. 2010 Vol. 47, No. 6, 1137—1146
Splitting type, global sections and Chern classes for torsion free sheaves on ${\rm P}^N$ Cristina Bertone and Margherita Roggero MSC numbers : 14F05, 14C17, 14Jxx J. Korean Math. Soc. 2010 Vol. 47, No. 6, 1147—1165
Higher weights and generalized MDS codes Steven T. Dougherty and Sunghyu Han MSC numbers : 94B65 J. Korean Math. Soc. 2010 Vol. 47, No. 6, 1167—1182
Annulus criteria for oscillation of second order damped elliptic equations Zhiting Xu MSC numbers : 35B05, 35J15, 35J60 J. Korean Math. Soc. 2010 Vol. 47, No. 6, 1183—1196
An extension of reduction formula for Littlewood-Richardson coefficients Soojin Cho, Eun-Kyoung Jung, and Dongho Moon MSC numbers : Primary 05E10; Secondary 14M15 J. Korean Math. Soc. 2010 Vol. 47, No. 6, 1197—1222
Identification of resistors in electrical networks Soon-Yeong Chung MSC numbers : Primary 94C15; Secondary 94C05, 94C12 J. Korean Math. Soc. 2010 Vol. 47, No. 6, 1223—1238
Characterization of central units of $\mathbb{Z}A_{n}$ Tevfik Bilgin, Necat Gorentas, and I. Gokhan Kelebek MSC numbers : 16S34, 16U60 J. Korean Math. Soc. 2010 Vol. 47, No. 6, 1239—1252
A novel filled function method for global optimization Youjiang Lin, Yongjian Yang, and Liansheng Zhang MSC numbers : 90C26, 90C30 J. Korean Math. Soc. 2010 Vol. 47, No. 6, 1253—1267
On a generalization of the McCoy condition Young Cheol Jeon, Hong Kee Kim, Nam Kyun Kim, Tai Keun Kwak, Yang Lee, and Dong Eun Yeo MSC numbers : 16N40, 16U80 J. Korean Math. Soc. 2010 Vol. 47, No. 6, 1269—1282
Some logarithmically completely monotonic functions related to the gamma function Feng Qi and Bai-Ni Guo MSC numbers : Primary 26A48, 33B15; Secondary 26A51, 65R10 J. Korean Math. Soc. 2010 Vol. 47, No. 6, 1283—1297
Error analysis associated with uniform Hermite interpolations of bandlimited functions Mahmoud H. Annaby and Rashad M. Asharabi MSC numbers : 30D10, 41A05, 41A30, 94A20 J. Korean Math. Soc. 2010 Vol. 47, No. 6, 1299—1316
Properties of positive solutions for a nonlocal reaction-diffusion equation with nonlocal nonlinear boundary condition Chunlai Mu, Dengming Liu, and Shouming Zhou MSC numbers : 35B35, 35K57, 35K60, 35K65 J. Korean Math. Soc. 2010 Vol. 47, No. 6, 1317—1328
Erratum to "Floer mini-max theory, the Cerf diagram, and the spectral invariants, J. Korean Math. Soc. 46 (2009), no. 2, 363--447'' Yong-Geun Oh MSC numbers : 53D35, 53D40 J. Korean Math. Soc. 2010 Vol. 47, No. 6, 1329—1330 |
# Very basic math question...
## Homework Statement
19. A wind turbine generates 6000 kWh of electricity on a day when the wind speed was 3 meters per second for 24 hours. If the wind speed was 9 meters per second for 24 hours, how many kWh of electricity will be generated?
a. 12,000 kWh b. 18,000 kWh c. 54,000 kWh d. 162,000 kWh
*This question is way below precalc, but I figured this is the best place to ask... Didn't want to bother the engineering thread... :/
Basic math...
## The Attempt at a Solution
Ok, I feel like a complete idiot for not understanding this, what appears to be, a very simple math problem... Why is this not as simple as doing a ratio...
6000/3 = x/9
==> x=18,000kWh.
But the answer is somehow 162,000 kWh according to the answer key. Why is that the answer and why am I so dumb?
## Answers and Replies
The power output of a wind turbine is proportional to the cube of the wind speed. This relationship should have been provided to you in my opinion.
phinds
I know nothing about such things but IF the generated power goes as the cube of the wind speed THEN the answer is 162,000
phinds
I know nothing about such things but IF the generated power goes as the cube of the wind speed THEN the answer is 162,000
EDIT: HA ... according to Mr. Magoo, I nailed it
fresh_42
You assume a linear dependence between wind speed and electrical power, but this is not the correct formula. You have to consider the mass of the air, velocity, radius, density and kinetic energy. Of course we can assume that radius and density are constant. But moved mass is not.
phinds
You assume a linear dependence between wind speed and electrical power, but this is not the correct formula. You have to consider the mass of the air, velocity, radius, density and kinetic energy. Of course we can assume that radius and density are constant. But moved mass is not.
So is Mr Magoo right or not?
fresh_42
I do not want to write down the few steps needed in the homework section, but kinetic energy and volume are turned into power.
phinds
I do not want to write down the few steps needed in the homework section.
fair enough
The power output of a wind turbine is proportional to the cube of the wind speed. This relationship should have been provided to you in my opinion.
Yeah, that was not given, but I did notice cubic eventually just didn't understand why... Good to know. Thanks!
I know nothing about such things but IF the generated power goes as the cube of the wind speed THEN the answer is 162,000
Yup. Got it. Thanks!
You assume a linear dependence between wind speed and electrical power, but this is not the correct formula. You have to consider the mass of the air, velocity, radius, density and kinetic energy. Of course we can assume that radius and density are constant. But moved mass is not.
I will let my professor know.
fresh_42
Yeah, that was not given, but I did notice cubic eventually just didn't understand why... Good to know. Thanks!
Did you understand why the proportion goes with velocity cubed and not linear?
shiv222
I think power generated goes as cube of wind speed because: (a) Kinetic energy of given mass of air is proportional to velocity square and (b) Amount of air that hits the turbine in given time is proportional to velocity. So overall the energy transferred to turbine is proportional to cube of velocity.
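In symbols (an idealized sketch; $\rho$ is the air density and $A$ the rotor swept area, neither of which is needed for the ratio): the mass of air crossing the rotor per unit time is proportional to $\rho A v$ and its kinetic energy per unit mass is $\frac{1}{2}v^2$, so the available power scales as
$$P = \tfrac{1}{2}\rho A v^{3} \;\propto\; v^{3}.$$
Scaling the given figure accordingly gives
$$6000 \text{ kWh} \times \left(\tfrac{9}{3}\right)^{3} = 6000 \times 27 = 162{,}000 \text{ kWh},$$
i.e. answer (d).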
Ray Vickson
I think power generated goes as cube of wind speed because: (a) Kinetic energy of given mass of air is proportional to velocity square and (b) Amount of air that hits the turbine in given time is proportional to velocity. So overall the energy transferred to turbine is proportional to cube of velocity.
That is an over-simplification of the real situation, as are most of the responses you have received. Real "power curves" are not really simple cubics; see, eg.,
http://www.wind-power-program.com/turbine_characteristics.htm .
Basically, the cubic formula gives the "input energy" to the turbine, but the output energy of the turbine is more complicated.
fresh_42
Basically, the cubic formula gives the "input energy" to the turbine, but the output energy of the turbine is more complicated.
Yes, of course, however in the context, i.e. as multiple choice question and as to why cubic is the correct answer (here), the explanation is correct. I've read of something about roughly 60% efficiency. And in real life, density isn't constant either.
Ray Vickson
Yes, of course, however in the context, i.e. as multiple choice question and as to why cubic is the correct answer (here), the explanation is correct. I've read of something about roughly 60% efficiency. And in real life, density isn't constant either.
The link I included gives a typical power vs. wind-speed curve. It starts out at zero---and remains at zero up to a certain positive wind speed---then increases (perhaps cubically) for a while, then starts to level off again. So, it is "S-shaped", but may be cubic over a certain range, but of the form ##(s-a)^3## for some positive ##a##, not simply ##s^3##.
I agree, however, that in a multiple-choice question the OP has no choice but to go with the cubic.
shiv222
That is an over-simplification of the real situation, as are most of the responses you have received. Real "power curves" are not really simple cubics; see, eg.,
http://www.wind-power-program.com/turbine_characteristics.htm .
Basically, the cubic formula gives the "input energy" to the turbine, but the output energy of the turbine is more complicated.
In this type of objective question one looks at most important factors. For further, accurate answer, I think turbine design and lot of other information will be needed in question itself. |
# Carry out integral by using Cauchy's theorem
I have kind of a silly question, which probably has an easy answer which I should know myself, but here goes. Say we want to integrate $$\int_{-\infty}^\infty dx \frac{1}{(x^2 + 1)(x - 1 - i)}.$$ If we go to the complex plane, we have two poles in the upper half plane and only one in the lower half plane. Using the residue theorem, if we close the contour in the upper plane (the integrand vanishes fast enough so this is allowed): $$I_{up}=2 \pi i * \left( \frac{1}{2i*1}+\frac{1}{(2i + 1 )*1}\right)=\pi/5*(9+2i),$$ whereas closing in the lower half plane gives: $$I_{down} = -2 \pi i \left(\frac{1}{-2i*(-2i-1)}\right)=\pi/5(2i-1).$$ I was under the impression that closing below or above should not matter, if the integrand vanishes in both cases. What am I doing wrong here?
Your understanding is correct; you've just made an algebra mistake. In your calculation of $I_{up}$, the first denominator $2i*1$ should be $2i*(-1)$.
(Side note: it's confusing the way you've written $I_{down} = \pi/5(2i-1)$ when you mean $\frac\pi5(2i-1)$ and not $\frac{\pi}{5(2i-1)}$ - especially since you explicitly included the $*$ in the last expression for $I_{up}$.) |
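For completeness, here is the computation with that sign corrected (the poles in the upper half plane are $x=i$ and $x=1+i$):
$$I_{up}=2\pi i\left(\frac{1}{2i\,(i-1-i)}+\frac{1}{(1+i)^2+1}\right)=2\pi i\left(\frac{-1}{2i}+\frac{1}{1+2i}\right)=-\pi+\frac{2\pi i\,(1-2i)}{5}=\frac{\pi}{5}(2i-1),$$
which agrees with $I_{down}$, as expected.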
# Frame (linear algebra)
In linear algebra, a frame of an inner product space is a generalization of a basis of a vector space to sets that may be linearly dependent. In the terminology of signal processing, a frame provides a redundant, stable way of representing a signal.[1] Frames are used in error detection and correction and the design and analysis of filter banks and more generally in applied mathematics, computer science, and engineering.[2]
## Definition and motivation
### Motivating example: computing a basis from a linearly dependent set
Suppose we have a set of vectors ${\displaystyle \{\mathbf {e} _{k}\}}$ in the vector space V and we want to express an arbitrary element ${\displaystyle \mathbf {v} \in V}$ as a linear combination of the vectors ${\displaystyle \{\mathbf {e} _{k}\}}$, that is, we want to find coefficients ${\displaystyle c_{k}}$ such that
${\displaystyle \mathbf {v} =\sum _{k}c_{k}\mathbf {e} _{k}}$
If the set ${\displaystyle \{\mathbf {e} _{k}\}}$ does not span ${\displaystyle V}$, then such coefficients do not exist for every such ${\displaystyle \mathbf {v} }$. If ${\displaystyle \{\mathbf {e} _{k}\}}$ spans ${\displaystyle V}$ and also is linearly independent, this set forms a basis of ${\displaystyle V}$, and the coefficients ${\displaystyle c_{k}}$ are uniquely determined by ${\displaystyle \mathbf {v} }$. If, however, ${\displaystyle \{\mathbf {e} _{k}\}}$ spans ${\displaystyle V}$ but is not linearly independent, the question of how to determine the coefficients becomes less apparent, in particular if ${\displaystyle V}$ is of infinite dimension.
Given that ${\displaystyle \{\mathbf {e} _{k}\}}$ spans ${\displaystyle V}$ and is linearly dependent, one strategy is to remove vectors from the set until it becomes linearly independent and forms a basis. There are some problems with this plan:
1. Removing arbitrary vectors from the set may cause it to be unable to span ${\displaystyle V}$ before it becomes linearly independent.
2. Even if it is possible to devise a specific way to remove vectors from the set until it becomes a basis, this approach may become infeasible in practice if the set is large or infinite.
3. In some applications, it may be an advantage to use more vectors than necessary to represent ${\displaystyle \mathbf {v} }$. This means that we want to find the coefficients ${\displaystyle c_{k}}$ without removing elements in ${\displaystyle \{\mathbf {e} _{k}\}}$. The coefficients ${\displaystyle c_{k}}$ will no longer be uniquely determined by ${\displaystyle \mathbf {v} }$. Therefore, the vector ${\displaystyle \mathbf {v} }$ can be represented as a linear combination of ${\displaystyle \{\mathbf {e} _{k}\}}$ in more than one way.
### Formal definition
Let V be an inner product space and ${\displaystyle \{\mathbf {e} _{k}\}_{k\in \mathbb {N} }}$ be a set of vectors in ${\displaystyle V}$. These vectors satisfy the frame condition if there are positive real numbers A and B such that ${\displaystyle 0<A\leq B<\infty }$ and for each ${\displaystyle \mathbf {v} }$ in V,
${\displaystyle A\left\|\mathbf {v} \right\|^{2}\leq \sum _{k\in \mathbb {N} }\left|\langle \mathbf {v} ,\mathbf {e} _{k}\rangle \right|^{2}\leq B\left\|\mathbf {v} \right\|^{2}.}$
A set of vectors that satisfies the frame condition is a frame for the vector space.[3]
The numbers A and B are called the lower and upper frame bounds, respectively.[3] The frame bounds are not unique because numbers less than A and greater than B are also valid frame bounds. The optimal lower bound is the supremum of all lower bounds and the optimal upper bound is the infimum of all upper bounds.
A frame is called overcomplete (or redundant) if it is not a basis for the vector space.
### Analysis operator
The operator mapping ${\displaystyle \mathbf {v} \in V}$ to a sequence of coefficients ${\displaystyle c_{k}}$ is called the analysis operator of the frame. It is defined by:[4]
${\displaystyle \mathbf {T} :V\mapsto \ell ^{2},\quad \mathbf {v} \mapsto \{c_{k}\}_{k\in \mathbb {N} },\quad c_{k}=\langle \mathbf {v} ,\mathbf {e_{k}} \rangle }$
By using this definition we may rewrite the frame condition as
${\displaystyle A\left\|\mathbf {v} \right\|^{2}\leq \left\|\mathbf {T} \mathbf {v} \right\|^{2}\leq B\left\|\mathbf {v} \right\|^{2}}$
where the left and right norms denote the norm in ${\displaystyle V}$ and the middle norm is the ${\displaystyle \ell ^{2}}$ norm.
### Synthesis operator
The adjoint operator ${\displaystyle \mathbf {T} ^{*}}$ of the analysis operator is called the synthesis operator of the frame.[5]
${\displaystyle \mathbf {T} ^{*}:\ell ^{2}\mapsto V,\quad \{c_{k}\}_{k\in \mathbb {N} }\mapsto \mathbf {v} ,\quad \mathbf {v} =\sum _{k}c_{k}\mathbf {e_{k}} }$
### Motivation for the lower frame bound
We want that any vector ${\displaystyle v\in V}$ can be reconstructed from the coefficients ${\displaystyle \{\langle \mathbf {v} ,\mathbf {e_{k}} \rangle \}_{k\in \mathbb {N} }}$. This is satisfied if there exists a constant ${\displaystyle A>0}$ such that for all ${\displaystyle x,y\in V}$ we have:
${\displaystyle A\|x-y\|^{2}\leq \|Tx-Ty\|^{2}}$
By setting ${\displaystyle v=x-y}$ and applying the linearity of the analysis operator we get that this condition is equivalent to:
${\displaystyle A\|v\|^{2}\leq \|Tv\|^{2}}$
for all ${\displaystyle v\in V}$ which is exactly the lower frame bound condition.
## History
Because of the various mathematical components surrounding frames, frame theory has roots in harmonic and functional analysis, operator theory, linear algebra, and matrix theory.[6]
The Fourier transform has been used for over a century as a way of decomposing and expanding signals. However, the Fourier transform masks key information regarding the moment of emission and the duration of a signal. In 1946, Dennis Gabor was able to solve this using a technique that simultaneously reduced noise, provided resiliency, and created quantization while encapsulating important signal characteristics.[1] This discovery marked the first concerted effort towards frame theory.
The frame condition was first described by Richard Duffin and Albert Charles Schaeffer in a 1952 article on nonharmonic Fourier series as a way of computing the coefficients in a linear combination of the vectors of a linearly dependent spanning set (in their terminology, a "Hilbert space frame").[7] In the 1980s, Stéphane Mallat, Ingrid Daubechies, and Yves Meyer used frames to analyze wavelets. Today frames are associated with wavelets, signal and image processing, and data compression.
## Relation to bases
A frame satisfies a generalization of Parseval's identity, namely the frame condition, while still maintaining norm equivalence between a signal and its sequence of coefficients.
If the set ${\displaystyle \{\mathbf {e} _{k}\}}$ is a frame of V, it spans V. Otherwise there would exist at least one non-zero ${\displaystyle \mathbf {v} \in V}$ which would be orthogonal to all ${\displaystyle \mathbf {e} _{k}}$. If we insert ${\displaystyle \mathbf {v} }$ into the frame condition, we obtain
${\displaystyle A\left\|\mathbf {v} \right\|^{2}\leq 0\leq B\left\|\mathbf {v} \right\|^{2};}$
therefore ${\displaystyle A\leq 0}$, which is a violation of the initial assumptions on the lower frame bound.
If a set of vectors spans V, this is not a sufficient condition for calling the set a frame. As an example, consider ${\displaystyle V=\mathbb {R} ^{2}}$ with the dot product, and the infinite set ${\displaystyle \{\mathbf {e} _{k}\}}$ given by
${\displaystyle \left\{(1,0),\,(0,1),\,\left(0,{\frac {1}{\sqrt {2}}}\right),\,\left(0,{\frac {1}{\sqrt {3}}}\right),\dotsc \right\}.}$
This set spans V but since ${\displaystyle \sum _{k}\left|\langle \mathbf {e} _{k},(0,1)\rangle \right|^{2}=0+1+{\frac {1}{2}}+{\frac {1}{3}}+\dotsb =\infty }$, we cannot choose a finite upper frame bound B. Consequently, the set ${\displaystyle \{\mathbf {e} _{k}\}}$ is not a frame.
## Applications
In signal processing, each vector is interpreted as a signal. In this interpretation, a vector expressed as a linear combination of the frame vectors is a redundant signal. Using a frame, it is possible to create a simpler, more sparse representation of a signal as compared with a family of elementary signals (that is, representing a signal strictly with a set of linearly independent vectors may not always be the most compact form).[8] Frames, therefore, provide robustness. Because they provide a way of producing the same vector within a space, signals can be encoded in various ways. This facilitates fault tolerance and resilience to a loss of signal. Finally, redundancy can be used to mitigate noise, which is relevant to the restoration, enhancement, and reconstruction of signals.
In signal processing, it is common to assume the vector space is a Hilbert space.
## Special cases
A frame is a tight frame if A = B; in other words, the frame satisfies a generalized version of Parseval's identity. For example, the union of k orthonormal bases of a vector space is a tight frame with A = B = k. A tight frame is a Parseval frame (sometimes called a normalized frame) if A = B = 1. Each orthonormal basis is a Parseval frame, but the converse is not always true.
A frame is an equal norm frame (sometimes called a uniform frame or a normalized frame) if there is a constant c such that ${\displaystyle \|e_{i}\|=c}$ for each i. An equal norm frame is a unit norm frame if c = 1. A Parseval (or tight) unit norm frame is an orthonormal basis; such a frame satisfies Parseval's identity.
A frame is an equiangular frame if there is a constant c such that ${\displaystyle |\langle e_{i},e_{j}\rangle |=c}$ for each distinct i and j.
A frame is an exact frame if no proper subset of the frame spans the inner product space. Each basis for an inner product space is an exact frame for the space (so a basis is a special case of a frame).
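A standard concrete example, often called the Mercedes-Benz frame, illustrates several of these special cases at once: the three unit vectors ${\displaystyle \mathbf {e} _{1}=(1,0)}$, ${\displaystyle \mathbf {e} _{2}=(-{\tfrac {1}{2}},{\tfrac {\sqrt {3}}{2}})}$, ${\displaystyle \mathbf {e} _{3}=(-{\tfrac {1}{2}},-{\tfrac {\sqrt {3}}{2}})}$ in ${\displaystyle \mathbb {R} ^{2}}$ form an equal norm, equiangular, tight frame with ${\displaystyle A=B={\tfrac {3}{2}}}$: for ${\displaystyle \mathbf {v} =(a,b)}$ the cross terms from ${\displaystyle \mathbf {e} _{2}}$ and ${\displaystyle \mathbf {e} _{3}}$ cancel, leaving ${\displaystyle \sum _{k}|\langle \mathbf {v} ,\mathbf {e} _{k}\rangle |^{2}=a^{2}+{\tfrac {1}{2}}a^{2}+{\tfrac {3}{2}}b^{2}={\tfrac {3}{2}}\left\|\mathbf {v} \right\|^{2}}$. Since the three vectors are linearly dependent, this frame is overcomplete and hence not exact.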
## Generalizations
A Bessel Sequence is a set of vectors that satisfies only the upper bound of the frame condition.
### Continuous Frame
Suppose H is a Hilbert space, X a locally compact space, and ${\displaystyle \mu }$ is a locally finite Borel measure on X. Then a set of vectors in H, ${\displaystyle \{f_{x}\}_{x\in X}}$, with the measure ${\displaystyle \mu }$ is said to be a Continuous Frame if there exist constants ${\displaystyle 0<A\leq B<\infty }$ such that ${\displaystyle A||f||^{2}\leq \int _{X}|\langle f,f_{x}\rangle |^{2}d\mu (x)\leq B||f||^{2}}$ for all ${\displaystyle f\in H}$.
#### Example
Given a discrete set ${\displaystyle \Lambda \subset X}$ and a measure ${\displaystyle \mu =\delta _{\Lambda }}$ where ${\displaystyle \delta _{\Lambda }}$ is the Dirac measure then the continuous frame property:
${\displaystyle A||f||^{2}\leq \int _{X}|\langle f,f_{x}\rangle |^{2}d\mu (x)\leq B||f||^{2}}$
reduces to: ${\displaystyle A||f||^{2}\leq \sum _{\lambda \in \Lambda }|\langle f,f_{x}\rangle |^{2}\leq B||f||^{2}}$
and we see that Continuous Frames are indeed the natural generalization of the frames mentioned above.
Just like in the discrete case we can define the Analysis, Synthesis, and Frame operators when dealing with continuous frames.
#### Continuous Analysis Operator
Given a continuous frame ${\displaystyle \{f_{x}\}_{x\in X}}$ the Continuous Analysis Operator is the operator mapping ${\displaystyle \{f_{x}\}_{x\in X}}$ to a sequence of coefficients ${\displaystyle \langle f,f_{x}\rangle _{x\in X}}$.
It is defined as follows:
${\displaystyle T:H\mapsto L^{2}(X,\mu )}$ by ${\displaystyle f\to \langle f,f_{x}\rangle _{x\in X}}$
#### Continuous Synthesis Operator
The adjoint operator of the Continuous Analysis Operator is the Continuous Synthesis Operator which is the map:
${\displaystyle T^{*}:L^{2}(X,\mu )\mapsto H}$ by ${\displaystyle a_{x}\to \int _{X}a_{x}f_{x}d\mu (x)}$
#### Continuous Frame Operator
The Composition of the Continuous Analysis Operator and the Continuous Synthesis Operator is known as the Continuous Frame Operator. For a continuous frame ${\displaystyle \{f_{x}\}_{x\in X}}$, the Continuous Frame Operator is defined as follows: ${\displaystyle S:H\mapsto H}$ by ${\displaystyle Sf:=\int _{X}\langle f,f_{x}\rangle f_{x}d\mu (x)}$
#### Continuous Dual Frame
Given a continuous frame ${\displaystyle \{f_{x}\}_{x\in X}}$, and another continuous frame ${\displaystyle \{g_{x}\}_{x\in X}}$, then ${\displaystyle \{g_{x}\}_{x\in X}}$ is said to be a Continuous Dual Frame of ${\displaystyle \{f_{x}\}}$ if it satisfies the following condition for all ${\displaystyle f,h\in H}$:
${\displaystyle \langle f,h\rangle =\int _{X}\langle f,f_{x}\rangle \langle g_{x},h\rangle d\mu (x)}$
## Dual frames
The frame condition entails the existence of a set of dual frame vectors ${\displaystyle \{\mathbf {\tilde {e}} _{k}\}}$ with the property that
${\displaystyle \mathbf {v} =\sum _{k}\langle \mathbf {v} ,\mathbf {\tilde {e}} _{k}\rangle \mathbf {e} _{k}=\sum _{k}\langle \mathbf {v} ,\mathbf {e} _{k}\rangle \mathbf {\tilde {e}} _{k}}$
for any ${\displaystyle \mathbf {v} \in V}$. This implies that a frame together with its dual frame has the same property as a basis and its dual basis in terms of reconstructing a vector from scalar products.
In order to construct a dual frame, we first need the linear mapping ${\displaystyle \mathbf {S} :V\rightarrow V}$, called the frame operator, defined as
${\displaystyle \mathbf {S} \mathbf {v} =\sum _{k}\langle \mathbf {v} ,\mathbf {e} _{k}\rangle \mathbf {e} _{k}=\mathbf {T} ^{*}\mathbf {T} \mathbf {v} }$.
From this definition of ${\displaystyle \mathbf {S} }$ and linearity in the first argument of the inner product,
${\displaystyle \langle \mathbf {S} \mathbf {v} ,\mathbf {v} \rangle =\sum _{k}\left|\langle \mathbf {v} ,\mathbf {e} _{k}\rangle \right|^{2},}$
which, when substituted in the frame condition inequality, yields
${\displaystyle A\left\|\mathbf {v} \right\|^{2}\leq \langle \mathbf {S} \mathbf {v} ,\mathbf {v} \rangle \leq B\left\|\mathbf {v} \right\|^{2},}$
for each ${\displaystyle \mathbf {v} \in V}$.
The frame operator ${\displaystyle \mathbf {S} }$ is self-adjoint, positive definite, and has positive upper and lower bounds. The inverse ${\displaystyle \mathbf {S} ^{-1}}$ of ${\displaystyle \mathbf {S} }$ exists and it, too, is self-adjoint, positive definite, and has positive upper and lower bounds.
The dual frame is defined by mapping each element of the frame with ${\displaystyle \mathbf {S} ^{-1}}$:
${\displaystyle {\tilde {\mathbf {e} }}_{k}=\mathbf {S} ^{-1}\mathbf {e} _{k}}$
To see that this makes sense, let ${\displaystyle \mathbf {v} }$ be an element of ${\displaystyle V}$ and let
${\displaystyle \mathbf {u} =\sum _{k}\langle \mathbf {v} ,\mathbf {e} _{k}\rangle {\tilde {\mathbf {e} }}_{k}}$.
Thus
${\displaystyle \mathbf {u} =\sum _{k}\langle \mathbf {v} ,\mathbf {e} _{k}\rangle (\mathbf {S} ^{-1}\mathbf {e} _{k})=\mathbf {S} ^{-1}\left(\sum _{k}\langle \mathbf {v} ,\mathbf {e} _{k}\rangle \mathbf {e} _{k}\right)=\mathbf {S} ^{-1}\mathbf {S} \mathbf {v} =\mathbf {v} }$,
which proves that
${\displaystyle \mathbf {v} =\sum _{k}\langle \mathbf {v} ,\mathbf {e} _{k}\rangle {\tilde {\mathbf {e} }}_{k}}$.
Alternatively, we can let
${\displaystyle \mathbf {u} =\sum _{k}\langle \mathbf {v} ,{\tilde {\mathbf {e} }}_{k}\rangle \mathbf {e} _{k}}$.
By inserting the above definition of ${\displaystyle {\tilde {\mathbf {e} }}_{k}}$ and applying the properties of ${\displaystyle \mathbf {S} }$ and its inverse,
${\displaystyle \mathbf {u} =\sum _{k}\langle \mathbf {v} ,\mathbf {S} ^{-1}\mathbf {e} _{k}\rangle \mathbf {e} _{k}=\sum _{k}\langle \mathbf {S} ^{-1}\mathbf {v} ,\mathbf {e} _{k}\rangle \mathbf {e} _{k}=\mathbf {S} (\mathbf {S} ^{-1}\mathbf {v} )=\mathbf {v} }$
which shows that
${\displaystyle \mathbf {v} =\sum _{k}\langle \mathbf {v} ,{\tilde {\mathbf {e} }}_{k}\rangle \mathbf {e} _{k}}$.
The numbers ${\displaystyle \langle \mathbf {v} ,{\tilde {\mathbf {e} }}_{k}\rangle }$ are called frame coefficients. This derivation of a dual frame is a summary of Section 3 in the article by Duffin and Schaeffer.[7] They use the term conjugate frame for what here is called a dual frame.
The dual frame ${\displaystyle \{{\tilde {\mathbf {e} }}_{k}\}}$ is called the canonical dual of ${\displaystyle \{\mathbf {e} _{k}\}}$ because it acts similarly as a dual basis to a basis.
When the frame ${\displaystyle \{\mathbf {e} _{k}\}}$ is overcomplete, a vector ${\displaystyle \mathbf {v} }$ can be written as a linear combination of ${\displaystyle \{\mathbf {e} _{k}\}}$ in more than one way. That is, there are different choices of coefficients ${\displaystyle \{c_{k}\}}$ such that ${\displaystyle \mathbf {v} =\sum _{k}c_{k}\mathbf {e} _{k}}$. This allows us some freedom for the choice of coefficients ${\displaystyle \{c_{k}\}}$ other than ${\displaystyle \langle \mathbf {v} ,{\tilde {\mathbf {e} }}_{k}\rangle }$. It is necessary that the frame ${\displaystyle \{\mathbf {e} _{k}\}}$ is overcomplete for other such coefficients ${\displaystyle \{c_{k}\}}$ to exist. If so, then there exist frames ${\displaystyle \{\mathbf {g} _{k}\}\neq \{{\tilde {\mathbf {e} }}_{k}\}}$ for which
${\displaystyle \mathbf {v} =\sum _{k}\langle \mathbf {v} ,\mathbf {g} _{k}\rangle \mathbf {e} _{k}}$
for all ${\displaystyle \mathbf {v} \in V}$. We call ${\displaystyle \{\mathbf {g} _{k}\}}$ a dual frame of ${\displaystyle \{\mathbf {e} _{k}\}}$. |
# Evaluating the expected product of Poisson and exponential random variables.
Consider a bank with 1000 customers. On average there are 60 withdrawal requests per month, while the number of withdrawals in a single month is Poisson distributed. On average, the amount of each withdrawal is 700 euro and the amounts are exponentially distributed. Calculate the probability that the sum total of withdrawals in a given month exceeds 50,000 euro.
My approach was to use the pdfs for Poisson and exponential random variables to evaluate the expectation of the product of the two variables:
$$f_p(x) = \frac{e^{-\lambda}\lambda^x}{x!} \\ f_e(y) = \lambda e^{-\lambda y} \\ \int_{x=0}^{1000}\int_{y=\frac{50000}{x}}^{\infty} x\,y\,f_p(x)\,f_e(y)\, dy\, dx$$
But this integral is unwieldy, and I suspect incorrectly specified.
Any hints on a better approach are appreciated.
• soa.org/globalassets/assets/files/edu/… see example 1.12 it is pretty much exactly this. You would use a normal approximation here. You can find the mean and variance using the independence of the exponentials and poissons and using the tower law for expectation and variance. – George Dewhirst Dec 3 at 0:05
• I've removed the "stochastic-integral" and "stochastic-processes" tags because those don't apply to this question. – Math1000 Dec 3 at 2:04 |
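Following the normal-approximation hint in the comments (a sketch; it assumes the number of withdrawals $N\sim\text{Poisson}(\lambda=60)$ and the amounts $X_i$ are independent exponentials with mean $\mu=700$): for the compound sum $S=\sum_{i=1}^{N}X_i$,
$$E[S]=\lambda\mu=60\cdot 700=42{,}000,\qquad \operatorname{Var}(S)=\lambda\,E[X^2]=\lambda\cdot 2\mu^2=60\cdot 2\cdot 700^2=58{,}800{,}000,$$
so the standard deviation is about $7668$. Then
$$P(S>50{,}000)\approx P\!\left(Z>\frac{50{,}000-42{,}000}{7668}\right)=P(Z>1.04)\approx 0.15.$$
Note that under this model the number of customers (1000) does not enter the calculation.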
Lemma 38.27.2. In Situation 38.20.11. Let $h : X' \to X$ be an étale morphism. Set $\mathcal{F}' = h^*\mathcal{F}$ and $f' = f \circ h$. Let $F_ n'$ be (38.20.11.1) associated to $(f' : X' \to S, \mathcal{F}')$. Then $F_ n$ is a subfunctor of $F_ n'$ and if $h(X') \supset \text{Ass}_{X/S}(\mathcal{F})$, then $F_ n = F'_ n$.
Proof. Let $T \to S$ be any morphism. Then $h_ T : X'_ T \to X_ T$ is étale as a base change of the étale morphism $g$. For $t \in T$ denote $Z \subset X_ t$ the set of points where $\mathcal{F}_ T$ is not flat over $T$, and similarly denote $Z' \subset X'_ t$ the set of points where $\mathcal{F}'_ T$ is not flat over $T$. As $\mathcal{F}'_ T = h_ T^*\mathcal{F}_ T$ we see that $Z' = h_ t^{-1}(Z)$, see Morphisms, Lemma 29.25.13. Hence $Z' \to Z$ is an étale morphism, so $\dim (Z') \leq \dim (Z)$ (for example by Descent, Lemma 35.21.2 or just because an étale morphism is smooth of relative dimension $0$). This implies that $F_ n \subset F_ n'$.
Finally, suppose that $h(X') \supset \text{Ass}_{X/S}(\mathcal{F})$ and that $T \to S$ is a morphism such that $F_ n'(T)$ is nonempty, i.e., such that $\mathcal{F}'_ T$ is flat in dimensions $\geq n$ over $T$. Pick a point $t \in T$ and let $Z \subset X_ t$ and $Z' \subset X'_ t$ be as above. To get a contradiction assume that $\dim (Z) \geq n$. Pick a generic point $\xi \in Z$ corresponding to a component of dimension $\geq n$. Let $x \in \text{Ass}_{X_ t}(\mathcal{F}_ t)$ be a generalization of $\xi$. Then $x$ maps to a point of $\text{Ass}_{X/S}(\mathcal{F})$ by Divisors, Lemma 31.7.3 and Remark 31.7.4. Thus we see that $x$ is in the image of $h_ T$, say $x = h_ T(x')$ for some $x' \in X'_ T$. But $x' \not\in Z'$ as $x \leadsto \xi$ and $\dim (Z') < n$. Hence $\mathcal{F}'_ T$ is flat over $T$ at $x'$ which implies that $\mathcal{F}_ T$ is flat at $x$ over $T$ (by Morphisms, Lemma 29.25.13). Since this holds for every such $x$ we conclude that $\mathcal{F}_ T$ is flat over $T$ at $\xi$ by Theorem 38.26.1 which is the desired contradiction. $\square$
# American Institute of Mathematical Sciences
August 2012, 32(8): 2913-2935. doi: 10.3934/dcds.2012.32.2913
## Feed-forward networks, center manifolds, and forcing
1 Mathematical Biosciences Institute, The Ohio State University, Columbus, OH 43215, United States 2 Department of Mathematics, University of Auckland, Auckland 1142, New Zealand
Received June 2011 Revised August 2011 Published March 2012
This paper discusses feed-forward chains near points of synchrony-breaking Hopf bifurcation. We show that at synchrony-breaking bifurcations the center manifold inherits a feed-forward structure and use this structure to provide a simplified proof of the theorem of Elmhirst and Golubitsky that there is a branch of periodic solutions in such bifurcations whose amplitudes grow at the rate of $\lambda^{\frac{1}{6}}$. We also use this center manifold structure to provide a method for classifying the bifurcation diagrams of the forced feed-forward chain where the amplitudes of the periodic responses are plotted as a function of the forcing frequency. The bifurcation diagrams depend on the amplitude of the forcing, the deviation of the system from Hopf bifurcation, and the ratio $\gamma$ of the imaginary part of the cubic term in the normal form of Hopf bifurcation to the real part. These calculations generalize the results of Zhang on the forcing of systems near Hopf bifurcations to three-cell feed-forward chains.
Citation: Martin Golubitsky, Claire Postlethwaite. Feed-forward networks, center manifolds, and forcing. Discrete and Continuous Dynamical Systems, 2012, 32 (8) : 2913-2935. doi: 10.3934/dcds.2012.32.2913
##### References:
[1] N. N. Bogoliubov and Y. A. Mitropolsky, "Asymptotic Methods in the Theory of Non-linear Oscillations," Translated from the second revised Russian edition, International Monographs on Advanced Mathematics and Physics, Hindustan Publ. Corp., Delhi, Gordon and Breach Science Publishers, New York, 1961.
[2] J. Carr, "Applications of Centre Manifold Theory," Applied Mathematical Sciences, 35, Springer-Verlag, New York-Berlin, 1981.
[3] T. Elmhirst and M. Golubitsky, Nilpotent Hopf bifurcations in coupled cell systems, SIAM J. Appl. Dynam. Sys., 5 (2006), 205-251. doi: 10.1137/050635559.
[4] J.-M. Gambaudo, Perturbation of a Hopf bifurcation by an external time-periodic forcing, J. Diff. Eqns., 57 (1985), 172-199. doi: 10.1016/0022-0396(85)90076-2.
[5] M. Golubitsky, M. Nicol and I. Stewart, Some curious phenomena in coupled cell networks, J. Nonlinear Sci., 14 (2004), 207-236. doi: 10.1007/s00332-003-0593-6.
[6] M. Golubitsky, C. Postlethwaite, L.-J. Shiau and Y. Zhang, The feed-forward chain as a filter amplifier motif, in "Coherent Behavior in Neuronal Networks," (eds. K. Josic, M. Matias, R. Romo and J. Rubin), Springer Ser. Comput. Neurosci., 3, Springer, New York, (2009), 95-120.
[7] M. Golubitsky and D. G. Schaeffer, "Singularities and Groups in Bifurcation Theory," Vol. I, Appl. Math. Sci., 51, Springer-Verlag, New York, 1985.
[8] M. Golubitsky, I. N. Stewart and D. G. Schaeffer, "Singularities and Groups in Bifurcation Theory," Vol. II, Appl. Math. Sci., 69, Springer-Verlag, New York, 1988.
[9] N. J. McCullen, T. Mullin and M. Golubitsky, Sensitive signal detection using a feed-forward oscillator network, Phys. Rev. Lett., 98 (2007), 254101. doi: 10.1103/PhysRevLett.98.254101.
[10] Y. Zhang, "Periodic Forcing of a System Near a Hopf Bifurcation Point," Ph.D Thesis, Department of Mathematics, Ohio State University, 2010.
[11] Y. Zhang and M. Golubitsky, Periodically forced Hopf bifurcation, SIAM J. Appl. Dynam. Sys., to appear.
Calculate unit weights for all blocks, i.e., each indicator of a block is equally weighted.
calculateWeightsUnit(
  .S = args_default()$.S,
  .csem_model = args_default()$.csem_model,
  .starting_values = args_default()$.starting_values
)

## Arguments

.S: The (K x K) empirical indicator correlation matrix.

.csem_model: A (possibly incomplete) cSEMModel-list.

.starting_values: A named list of vectors where the list names are the construct names whose indicator weights the user wishes to set. The vectors must be named vectors of "indicator_name" = value pairs, where value is the (scaled or unscaled) starting weight. Defaults to NULL.

## Value

A named list. J stands for the number of constructs and K for the number of indicators.

$W: A (J x K) matrix of estimated weights.

$E: NULL

$Modes: The mode used. Always "unit".

$Conv_status: NULL, as there are no iterations.

$Iterations: 0, as there are no iterations.
# Conversions involving squares and cubic
##### Intros
###### Lessons
1. How to do conversions involving square units and cubic units?
##### Examples
###### Lessons
1. Conversions involving squares
1. ${35 m^2}$= _____${cm^2}$
2. ${8 ft^3}$= _____${cm^3}$
3. ${58 m^2}$= _____${in^2}$
4. ${25 ft^3}$= _____${in^3}$
###### Topic Notes
In previous lessons, we learned the conversions between metric and imperial systems. In this lesson, we will convert units involving squares and cubes. What we learned in the previous lessons is important because we need to know how to convert between basic units before doing any conversions involving squares and cubes.
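For instance, for the first example above: since $1 \, m = 100 \, cm$, squaring gives $1 \, m^2 = 100^2 \, cm^2 = 10\,000 \, cm^2$, so $35 \, m^2 = 35 \times 10\,000 \, cm^2 = 350\,000 \, cm^2$. Cubic conversions work the same way with the factor cubed: taking $1 \, ft = 30.48 \, cm$, we get $1 \, ft^3 = 30.48^3 \, cm^3 \approx 28\,317 \, cm^3$, so $8 \, ft^3 \approx 226\,535 \, cm^3$.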
To be really fair, a group is a lot like the product of a monoid with itself as an opposite category.
Especially since for any $f$ in our monoid $\mathbf{M}$,
$$f^{op} \circ f = id_{\mathbf{M}} = f \circ f^{op}.$$
# Wide–Mouth Frog (protocol)
## Introduction
The origin of the protocol's name is not known. This is one of the simplest key-agreement protocols that use a trusted third party. It was invented by M. Burrows, M. Abadi and R. Needham in 1989. Some modifications of the algorithm were proposed later.
## Algorithm
### Setup
Users ${\displaystyle A}$ and ${\displaystyle B}$, who wish to start messaging, must be known to the ${\displaystyle KDC}$ (key distribution center) and must share secret keys with it. The generation of these keys is not part of the protocol; they must have been established earlier.
### Work
1. ${\displaystyle A}$ generates a random session key ${\displaystyle k}$ that will be used in communication with ${\displaystyle B}$. Then ${\displaystyle A}$ puts together a package for the ${\displaystyle KDC}$: a timestamp ${\displaystyle T_{A}}$, ${\displaystyle B}$'s identifier and the session key are encrypted with the key shared between ${\displaystyle A}$ and the ${\displaystyle KDC}$, and sent to the ${\displaystyle KDC}$ together with ${\displaystyle A}$'s identifier.
${\displaystyle A: [A,E_{k_{AC}}(T_A, B,k)]\to KDC}$
2. ${\displaystyle KDC}$ chooses the corresponding key and decrypts the package. After that, it forms a package for ${\displaystyle B}$ that contains a new timestamp, ${\displaystyle A}$'s identifier and the session key ${\displaystyle k}$. It encrypts this package with the key shared between the ${\displaystyle KDC}$ and ${\displaystyle B}$ and sends it to ${\displaystyle B}$:
${\displaystyle KDC: [E_{k_{BC}}(T_C, A, k)]\to B}$
3. ${\displaystyle B}$ decrypts the package and obtains the session key ${\displaystyle k}$ as well as the identity of the user with whom the connection is established (${\displaystyle A}$). A code sketch of this message flow is given below.
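The following is an illustrative sketch of the basic flow, not a secure implementation: the helper `E` merely stands in for the symmetric encryption written as ${\displaystyle E_{k}}$ above, and the key and identifier names are assumptions made for the example.

```python
import time
import secrets

def E(key, payload):
    # placeholder for symmetric encryption under `key` (illustration only)
    return ("enc", key, payload)

k_AC, k_BC = "key_A_KDC", "key_B_KDC"      # long-term keys shared with the KDC (assumed names)

# 1. A -> KDC: A's identity together with E_{k_AC}(T_A, B, k)
k = secrets.token_hex(16)                  # fresh session key chosen by A
msg1 = ("A", E(k_AC, (time.time(), "B", k)))

# 2. KDC decrypts with k_AC and re-encrypts for B: E_{k_BC}(T_KDC, A, k)
_, _, (T_A, responder, session_key) = msg1[1]
msg2 = E(k_BC, (time.time(), "A", session_key))

# 3. B decrypts with k_BC and now shares the session key k with A
_, _, (T_KDC, initiator, k_at_B) = msg2
assert k_at_B == k
```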
## Modified version
### Description
Later analysis found some vulnerabilities in the protocol. For example, an attacker can make ${\displaystyle B}$ open more connections than were requested simply by replaying the messages from the ${\displaystyle KDC}$. In the modified version, after the steps of the basic version, ${\displaystyle B}$ checks the correctness of the established connection: it sends ${\displaystyle A}$ a random number ${\displaystyle R_{B}}$ and waits to receive from ${\displaystyle A}$ the same number increased by ${\displaystyle 1}$.
### Setup
Initial conditions are the same as in the basic version.
### Work
1. ${\displaystyle A: [A,E_{k_{AC}}(T_A, B,k)]\to KDC}$
2. ${\displaystyle KDC: [E_{k_{BC}}(T_C, A, k)]\to B}$
3. ${\displaystyle B: E_k(R_B)\to A}$
4. ${\displaystyle A: E_k(R_B + 1)\to B}$
## References
M. Burrows, M. Abadi, R. Needham. A Logic of Authentication. Research Report 39, Digital Equipment Corp. Systems Research Center, Feb. 1989. http://www.hpl.hp.com/techreports/Compaq-DEC/SRC-RR-39.pdf
Bruce Schneier. Applied Cryptography. Wiley, 1996, pp. 56 et seq. ISBN 978-0-471-11709-4. |
# The area of a parallelogram is 588cm^2. If the height is 12 cm, what is the base?
$\text{base} = 49$ cm
Area of parallelogram $= \text{base} \cdot \text{height}$
$\rightarrow 588 = b \cdot 12$
$\rightarrow b = \frac{588}{12} = 49$ cm |
## What are the methods to increase problem solving speed and accuracy?
Based upon your reply in the comments, I have to preface this by saying I don’t know how difficult the IIT JEE is–I’ve only heard legendary stories.
The best I can offer at first glance is to do what you can to fully understand the math and the physics so that you can visualize the systems at work. The more thoroughly you intuit the underlying structures, the more fluent you will be when asked to apply that knowledge.
Do you have a visual picture of Newtonian mechanics? The calculus? Do you thoroughly understand the derivative and the integral? Can you rotate a curve in your head and see the volume created?
Can you picture electromagnetic forces at work? Quantum mechanics, at least at the level of electron shells? How are you with the concept of wave-particle theory?
Does the exam get into linear algebra? Vectors and matrices?
When I learned calculus, physics and chemistry, I was able to master American exams through deep internalization of the underlying math so that I could visualize what was happening. I doubt that our exams are nearly as rigorous as the IIT JEE, though.
I don’t know exactly how to describe how I went about it on my end, but my intuition tells me that there is no substitute for knowing everything so well that it becomes intuitive. Have you tried walking through the proofs of the concepts you are studying?
What are the methods to increase problem solving speed and accuracy?
## Killing Procrastination.
Many people will suggest many evil things that they consider to be the major factors because of which their lives aren’t the way they want them to be but I’m pretty sure that the evilest of these factors is procrastination.
Because when we procrastinate, it isn’t just another factor to avoid the essential but kind of a combination of many factors. And that is because it simply and brutally makes us do things that are less important towards filling life’s progress bar.
So here is a sample algorithm to murdering procrastination (there is an algorithm so it is more assassination than it is murder. Whatever.) Here is the algorithm:
Let initial willingness to do A be zero. Basically, we have to get this willingness to 51% or more.
First, think of all the really awesome things that will happen to you later on if you do A right away. Maybe it isn't that much fun right now, but it will change your life and other people's lives. This should raise willingness-A to somewhere between 0.1 and 0.15.
Second, ask yourself the tweaked question.
Third, a little introspection and all that.
This should do the trick. Now you are ready to nail the world.
P.S- If it is literally your last day then go on with B because there is no “longer run” for you.
## I’m damned!
The results of IIT JEE are out. I didn’t make it. Got 9862 (98.1 percentile, even then I won’t get in). I don’t blame anyone except myself.
But I’ll rise again. Dust myself and work harder. I have to keep this in my fucking mind:
Irrespective of me studying or not, today would always have come, everything would’ve been the same except that my environment would’ve been happier if I’d worked harder. Similarly if I study these 10 months, when they pass like they have to, good times will come. Then I can have as much “fun” as possible.
Just waiting for the 28th of May for the final decisions regarding ISEET. If the amount of changes is favorable, I’ll take a drop. And this time I swear I’ll work harder. The greatest thing that stopped me from getting a good rank is procrastination. I kept thinking about the future, underestimated myself while overestimating IIT JEE. This lead to me losing all hope and eventually failing. Truth be told IIT JEE needs nothing other than hard work done smartly to crack. That is why dumb fuck girls bang big ranks. This time I will work blindly round the clock, without trying to predict the future. Lets see where that gets me.
An acquaintance of mine has secured a crazy awesome rank. Does that mean he has won? Yeah, I guess, for now at least. But this is not the end of the story. I’ll get into an IIT next year and start chasing that billionaire dream ASAP.
I was starting to think that days of studying are coming to an end. They would have come to an end if I hadn’t fooled myself into thinking that I’ve been working hard enough while deep inside my heart, I knew I wasn’t. Not this time. For the coming 10 months I’ll have to delve into PCM and convince myself that doing PCM is interesting. That is somewhat true. I know that if I work sincerely for the coming 10 months, IIT is gonna open its fucking huge gates for me.
God, just this one time, do me a favor. Let IIT JEE 2013 happen and not ISEET. What am I saying? Fuck God. We’ll see what that committee decides on the 28th. Fingers crossed hard. Ouch. Almost painful.
## How to understand the equation E=mc^2 in the easiest possible way?
This is undoubtedly the most popular physics equation ever. Its like even fuckin’ junkies out there know about it, but few actually understand it. My understanding goes as follows (the bold part is the answer to your question, the remaining part is for better understanding) :
Here,
E represents energy
M represents mass
c represents the speed of light
We know that,
So,
Energy can neither be created nor destroyed; it can only be converted from one form to another. This is still true, but Einstein added that mass can also be converted into energy. And if you convert a mass M into energy, the energy obtained is equal to that mass M multiplied by the square of the speed of light. This energy is usually pretty huge even for little masses; do the math, you are multiplying mass with $c^2$, which makes the resultant value, i.e. the energy, really BIG. You can have a better understanding of it if you know how this equation is applied. Here are a few interesting instances:
• It can be used to prove that nothing can travel at the speed of light, as follows: when a body moves it gains energy and, due to the previously mentioned equivalence of mass and energy, after a point the extra gained energy is converted to mass as per $E=mc^2$; so as it approaches the speed of light, its mass must have reached a very high value, so it can no longer be accelerated and it thus fails to reach the speed of light.
• It accounts for the stability of the nucleus and for why protons inside the nucleus, despite being like charged, stick together and don't fall apart: after a radioactive decay, if we compare the initial and final masses of the decaying isotope and the resulting nuclei, we find an unexpected difference. According to the law of conservation of energy the initial masses should be equal to the final masses, thus it was concluded that the difference in the masses is because the "missing" mass was converted to energy. Thus energy was released, and this is what stabilizes the whole process, makes it feasible and hence keeps the nucleus stable.
I know that was some real nerd-lish but, well, who are we talking about here? Einstein, right? The terra-nerd.
There are a real bunch of such uses out there just fiddle around the links in the below Google search and slowly your understanding will grow. You really can’t understand it at once.
If you are asking for a mathematical proof, I wrote one here:
Akhyansh Mohapatra’s answer to Why is $E=mc^2$ not $E = \frac{1}{2}mc^2$?
P.S.: I typed all this myself, I didn't copy-paste it. So, vote me up :D
How to understand the equation E=mc^2 in the easiest possible way?
## Being Zahir-ous
There are title songs in albums, there are title chapters in books, so, this entry is like the title entry for this blog (as you might already have guessed). I recently read The Zahir by Coelho. Truth be told I couldn’t read it from cover to cover and gave up in between as things were getting too damn philosophical about it. It seemed as if from about the 125th page or so every other page was conveying messages and teachings coming straight from the sacred lips of the Almighty. If sarcasm were water, then you can count the amount of it filled in the previous line as the 8th ocean. The book kept me really intrigued initially but then it got boring. Until and unless you have a failing marriage, don’t read the book for more than 50 pages. But I’m pretty sure that you will find some lasting inspiration to follow your dreams in these 50.
It was there that I came across the concept of the Zahir. If exaggeration be fucked, then the Zahir simply means obsession. But what is really awesome about the concept of Zahir is that it kinda gives you a way to personify your obsessions. Instead of saying, “I’m obsessed about Ramona”, you can simply say, “Ramona is my Zahir”. When you encounter your Zahir, which may be anything, at first you won’t know it but then it will grip you and it will start giving direction (right or wrong) to your thoughts, actions and what not.
And Zahir-ous is its adjective form. This, even though not really huge, is my creation.
Our life is driven by Zahirs, big and small. And these Zahirs are the ones who make us who we are. Finding the right one is important. Bill Gates found the right one while Saddam Hussein got it wrong.
Finding Zahirs is important because we will involuntarily follow them with passion. And this can make or break us.
I don’t know if I read this somewhere or if this is popping out of my own brain, but it goes something like this “Zahirs are to life what a guiding circuit is to an ICBM, set them right and it will get you exactly where you wanna be”.
I’ve always been Zahir-ous. Always following something, aiming for something. It is not like I’ve achieved great things (maybe I will), but one things for sure I feel great and I feel alive. All in all, being Zahir-ous is awesome. Pun unavoidable.
So start being Zahir-ous.
Tagged , , , ,
## Why is E=MC^2 not E=(0.5)*(MC^2)?
Ignore everything else, every other proof out there. I found a way to prove this equation; I know lots of other people know about this, but I discovered it myself. Here it goes:
For a photon,
$$\lambda = \frac{h}{mc} \qquad \text{Equation (1)}$$
where $\lambda$ represents the de Broglie wavelength, $m$ and $c$ represent the mass and the velocity of light respectively, and $h$ represents Planck's constant.
Also, the energy of a photon ($E$) is given by
$$E = \frac{hc}{\lambda} \qquad \text{Equation (2)}$$
Substituting the value of $\lambda$ from equation (1) in (2), we get
$$E = \frac{hc}{h/(mc)} = mc^2$$
Hence proved. The two equations used are probably the most basic equations in quantum physics.
Thus you can see that the factor $\frac{1}{2}$ didn't appear anywhere.
Also, kinetic energy for a particle is found using $\frac{1}{2}mv^2$, but for photons, energy is found using equation (2).
Thanks for asking the question as I could learn how to write mathematical equations in Quora answers as a result.
Why is $E=mc^2$ not $E = \frac{1}{2}mc^2$?
## Social Engineering, Deathly Hallows….
Had some fun after so many days. I scored 167 in iit-jee (the fun part is yet to come.167 is sad, I know). 167 is not good not bad kinda marks. The cutoff this time was pre-decided(or so they say). This cutoff was defined as 35% the total marks. the total marks this time was 408 (on the day of the examination) but it magically changed to 400(it was a fucking spell casted by wrong questions disguised as Harry-fucking-Potter). The use of the f-word so many times surely indicates my anger. This results from the unexplained happiness that I would have experienced had I cleared 170, this would have happened if instead of reducing the 8 marks worth of wrong questions from the total, the shitbirds at IIT had added the 8 marks to everybody’s total. Yeah mathematically both the things are pretty much the same (until and unless some IIT grad uses Riemannian geometry to prove otherwise(even if he does prove it I’m not gonna be convinced)) but the would-have-come happiness is really abstract and as I have already said, “unexplained”. Anyways, all that doesn’t matter. Whether I get 167/400 or 175/408 (167+8=175, you see?) it doesn’t truly matter. Facing the real truth, I’m never ever getting into an IIT. That is really S-fucking-AD. Damn.
I’ll give the JEE another shot next year. Just hope that asshole Sibal delays the ISEET-thingy one more year. We’ll see what happens. Let’s hope I just barely manage to get into one of the new IITs. Whatever happens, I’m sure it won’t be the end of the world. I’ll write a whole big entry about all this in a few days. But right now I’m gonna write about the fun thing that I mentioned at the beginning. |
## Monday, May 4, 2009
### Definition
The formula for the beta of an asset within a portfolio is
$\beta_a = \frac {\mathrm{Cov}(r_a,r_p)}{\mathrm{Var}(r_p)}$ ,
where $r_a$ measures the rate of return of the asset, $r_p$ measures the rate of return of the portfolio, and $\mathrm{Cov}(r_a,r_p)$ is the covariance between the rates of return. In the CAPM formulation, the portfolio is the market portfolio that contains all risky assets, and so the $r_p$ terms in the formula are replaced by $r_m$, the rate of return of the market.
Beta is also referred to as financial elasticity or correlated relative volatility, and can be referred to as a measure of the sensitivity of the asset's returns to market returns, its non-diversifiable risk, its systematic risk or market risk. On an individual asset level, measuring beta can give clues to volatility and liquidity in the marketplace. On a portfolio level, measuring beta is thought to separate a manager's skill from his or her willingness to take risk.
The beta coefficient was born out of linear regression analysis. It is linked to a regression analysis of the returns of a portfolio (such as a stock index) (x-axis) in a specific period versus the returns of an individual asset (y-axis) in a specific year. The regression line is then called the Security Characteristic Line (SCL).
$SCL : r_{a,t} = \alpha_a + \beta_a r_{m,t} + \epsilon_{a,t}$
$\alpha_a$ is called the asset's alpha coefficient and $\beta_a$ is called the asset's beta coefficient. Both coefficients have an important role in Modern portfolio theory.
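As a rough numeric illustration of the formula above (the return series here are invented for the example, not taken from any market data):

```python
import numpy as np

r_a = np.array([0.02, -0.01, 0.03, 0.015, -0.005])    # asset returns (made-up sample)
r_p = np.array([0.015, -0.008, 0.02, 0.01, -0.004])   # portfolio / market returns (made-up sample)

beta = np.cov(r_a, r_p)[0, 1] / np.var(r_p, ddof=1)   # Cov(r_a, r_p) / Var(r_p)
print(beta)
```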
For an example, in a year where the broad market or benchmark index returns 25% above the risk free rate, suppose two managers gain 50% above the risk free rate. Since this higher return is theoretically possible merely by taking a leveraged position in the broad market to double the beta so it is exactly 2.0, we would expect a skilled portfolio manager to have built the outperforming portfolio with a beta somewhat less than 2, such that the excess return not explained by the beta is positive. If one of the managers' portfolios has an average beta of 3.0, and the other's has a beta of only 1.5, then the CAPM simply states that the extra return of the first manager is not sufficient to compensate us for that manager's risk, whereas the second manager has done more than expected given the risk. Whether investors can expect the second manager to duplicate that performance in future periods is of course a different question. |
Fundamental Theorem of Calculus
Main Question or Discussion Point
A quick question. The fundamental theorem of calculus states that:
$$\frac{d}{dx} \int^x_a f(t)dt= f(x)$$
I was wondering why the dummy variable t is used, and not just x. Is it to distinguish that the function varies with the value t, while the limit of integration varies with a different variable x? I don't see what problem it would pose to call it f(x)dx.
cyrusabdollahi said:
A quick question. The fundamental theorem of calclus states that:
$$\frac{d}{dx} \int^x_a f(t)dt= f(x)$$
I was wondering why the use of the dummy variable t, and not just x. Is it to distinguish that the function varies with the value t, and the limit of integration varies with a different variable x. I dont see what problem it would pose to call it f(x)dx.
It is standard to express relation of change as change in y with respect to change in x. And so the use of x is established (by practice). It is really not more complicated than one word: tradition. If you wanted, we could put it this way:
$$\frac{d}{dt} \int^t_a f(x)dx= f(t)$$
Having just had two glasses of wine :rofl:, I reserve the right to review and edit this in the morning when I am thinking more clearly!
-SR
But why not like this?
$$\frac{d}{dx} \int^x_a f(x)dx= f(x)$$
what you have is fine, pretty much all u need to worry about with the theory is that if you differentiate an expression that you just integrated, you'll get the same thing.
it is a convenient notation to keep things straight.
compare:
$$\frac{d}{dx} \int_{\sqrt{x}}^{x^3} f(x^2, x) d x$$
with :
$$\frac{d}{dx} \int_{\sqrt{x}}^{x^3} f(x^2, t) dt$$
in cases like this where you need to know
explicitly what's the variable being integrated
it's good to have the habit of "proper" notation.
(for simple cases, of course, one notation is as good as another. )
Icebreaker
Because to obtain the form that you've written you must first write
$$F(x) = \int_{a}^{x}f(t)dt$$
matt grime
Homework Helper
cyrusabdollahi said:
But why not like this?
$$\frac{d}{dx} \int^x_a f(x)dx= f(x)$$
because you cannot have the x as both a dummy variable of the integral and the variable of the limit. it just makes no sense. they are different things. using the same letter for different things is 'not allowed' in mathematics.
saltydog
Homework Helper
qbert said:
it is a convenient notation to keep things straight.
$$\frac{d}{dx} \int_{\sqrt{x}}^{x^3} f(x^2, t) dt$$
You know, I've never looked at Leibnitz's rule with that type of integrand, that is:
$$f(g(x),t)$$
I assume it would be:
$$3x^2f(g(x),x^3)-\frac{1}{2}x^{-1/2}f(g(x),\sqrt{x})+\int_{\sqrt{x}}^{x^3}\frac{\partial f}{\partial g}\frac{dg}{dx}f(g(x),t)dt$$
Last edited:
But why do we even need a dummy variable matt? Could we not read it as, f is a function that varies on the value of x, and that we integrate from a to x. Then we take the derivative with resepct to x?
matt grime
Homework Helper
because that's what it is. it is the end point of the interval that is the variable, not the subject of the integral. if you change the meaning of the symbol then the FTC no longer applies since you aren't dealing with the same object.
matt grime
Homework Helper
perhaps it would help to think of sums
$$\sum_{r=1} ^n r= \frac{n(n+1)}{2}$$
r is the dummy variable. what happens if you replace r with n in that sum?
Are you saying that if I use f(x)dx, then instead of having f(x)dx vary between a and x, f(x)dx ALWAYS takes on the value of the upper limit, and is just added to itself x-a times? So f(x)dx is never changing once we pick a value for x, thus the need for the dummy variable t.
matt grime
Homework Helper
i'm saying that it makes no sense to speak of adding (and i'm happy to use that abuse of notation) f(x)dx to itself as x varies from a to x. surely you can see that?
cyrusabdollahi said:
But why not like this?
$$\frac{d}{dx} \int^x_a f(x)dx= f(x)$$
you can write it like this, but you have to know that the dummy variable x is different then the x in the function being integrated. So basically the reason it doesn't make any sense is that you are not communicating your idea to everyone else but simply yourself (since you know that the two variables represent different things.) So in order to communicate the idea that the two variables are different then you should use different characters.
If you assume that the dummy variable and the variable getting integrated are the same, then you get this sort of never ending loop.
Let me try and define a function F(x) this way:
$$F(x)=\int_a ^x \frac{Sin (x)}{x} dx$$
Now let me evaluate F(3).
$$F(3)=\int_a ^3 \frac{Sin (3)}{3} d3$$
Is there a problem with those threes? There shouldn't be, because to evaluate a function at x = 3 we simply replace x by 3 everywhere it appears. Maybe you would say that I should evaluate F(3) this way:
$$F(3)=\int_a ^3\frac{Sin (x)}{x} dx$$
But then I would say that we are breaking the rule above, that to evaluate a function at x = 3 we replace x everywhere by 3. The only way out of this dilemma is to use a dummy variable.
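A quick way to see the convention in action (an illustrative check with SymPy, using the $\frac{\sin x}{x}$ integrand from the post above):

```python
import sympy as sp

x, t, a = sp.symbols('x t a')
f = sp.sin(t) / t                     # integrand written in the dummy variable t

F = sp.integrate(f, (t, a, x))        # F(x) = integral from a to x of f(t) dt
print(sp.simplify(sp.diff(F, x)))     # prints sin(x)/x: the integrand evaluated at the upper limit
```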
Yep yep, I see what it is used for now. I always wondered about the use of that notation, but now it is clear. The only thing I don't see, Crosson, is your notation of d3. Would that not be zero, since 3 is a constant? If not, does d3 really mean anything?
Saltydog, I have not checked your Lebniz rule aplication, but it is easy to see if it was correct: f(g(x), t) is a function of x and t, so put (say)
f(g(x),t) = h(x,t) and work the leibniz rule with this instead of that.
Castilla.
It seems that you would not change dx to d3. It would stay as dx, no?
matt grime said:
perhaps it would help to think of sums
$$\sum_{r=1} ^n r= \frac{n(n+1)}{2}$$
r is the dummy variable. what happens if you replace r with n in that sum?
yes! to matt grime you listen!
Now im confused, would this work...
$$\frac{d}{dx} \int f(x)dx= f(x)$$
:uhh:
matt grime
Homework Helper
that is true, since you have an indefinite integral there, and the notation
$$\int f(x)dx$$
means do the definite integral from a to x of f(t)dt, where a is some arbitrary constant (remember indefinite integrals are only defined up to a constant).
mathwonk
Homework Helper
and please remember, when stating theorems, to give the hypothesis, and not just the conclusion. otherwise it makes no sense. in this case the correct hypothesis is that f is integrable and continuous at the point x where the derivative is taken.
i.e. the version of the FTC you are using is roughly like claiming that x+3 = 8, without saying what x is.
Last edited:
Going back to my question, would it stay as dx, or d3? If it is d3, that is physically meaningless, because d3 = 0, since 3 is a constant, which just further shows the need for the use of a dummy variable.
matt grime said:
that is true, since you have an indefinite integral there, and the notation
$$\int f(x)dx$$
means do the definite integral from a to x of f(t)dt where a is some arbitrary constant (remember indefinite integrals are only defined up to constant.
I believe you meant to say, $$\int f(t)dt$$, no?
matt grime |
# Distortion Elimination with Differential Transistor Pair
Most of you know that incorporating negative feedback within an amplifier is one of the major considerations when designing a hi-fi amplifier. Until now, I have only dealt with the differential transistor pair as the input stage of an amplifier and as the "eliminator" of distortion fed back from the output of an amplifier, although I'm not quite sure whether all kinds of distortion get eliminated by it (when compared to the original signal brought to the input of the amplifier).
This circuit (proposed and built by user G36) has a distorted signal which is delivered to the voltage-amplification stage and is then corrected back to the original (sine-wave) signal brought to the input of the amplifier. It gets corrected by the differential transistor pair stage. (Not shown here, but the input of the amplifier is at Q1 via a capacitor and the output is taken from the collector of Q4.)
The only problem here is that when the signal gets distorted enough, the output signal is clipped. When the input signal amplitude is low enough, the signal at the base of Q4 is distorted (a spiky signal), while the output at the collector of Q4 is corrected to a sine wave (note that both channels weren't set to the same voltage scale).
When the input amplitude was increased progressively, those spikes grew higher and higher, and the higher they were, the more the negative half-waves were clipped. I added a potentiometer in place of RF2 to control the portion of NFB being fed to the base of Q2, but that didn't help either.
So that left me wondering for a bit, and now it seems to me that all of that distortion cannot be eliminated (or corrected) after all.
I somehow haven't managed to design a differential transistor pair amplifier myself, but maybe with your help I can get closer to fully understanding it and make one that works like it should.
• Can you include a picture of the clipped output? Can you also provide the amplitude of the signal you are testing with? Are you loading your amp, if yes how? I suspect you are expecting the output signal to go below ground, which is not possible. Dec 31 '17 at 13:56
• @VladimirCravero 1.) I cannot because I don't have the circuit build up (I done that several days ago) 2.)I don't know that either but it was sine 1kHz and few hundred mVpp 3.)The amp was measured unloaded because when loaded extra effects are added circuit 4.) I am not suspecting anything here unless that distortion fed into VA stage would be corrected back to sine (or close to it).
– Keno
Dec 31 '17 at 14:19
• I wonder where the signals are coming from? No input source? At which points are the signals measured? Voltages ? Currents? Without full information it is not possible to give a helping answer!
– LvW
Dec 31 '17 at 15:03
• @LvW Obviously you haven't read the WHOLE question because everything is explained inside it.
– Keno
Dec 31 '17 at 16:06
– LvW
Jan 2 '18 at 12:46
I'm decorating your schematic, a bit. I'm not entirely sure about your discussion, but it appears this schematic is a little more descriptive about what you are doing:
simulate this circuit – Schematic created using CircuitLab
I added $C_2$ because I think you have enough sense already that you have one there when providing an input signal.
How should this have been designed to behave?
The question is important because there is an assumption that someone actually thought about the circuit when designing it and didn't just randomly stick parts together. Assuming a rational actor here, you can say a few things at the outset:
1. The voltage across $R_{C_1}$ will be approximately one $V_{BE}$ through-out its operation. So this also means that the current through $R_{C_1}$ cannot vary too much. Also, we can say something about the magnitude, as being approximately $1\:\textrm{mA}$.
2. Since there is an assumed $1\:\textrm{mA}$ in $R_{C_1}$, then the quiescent state of the circuit (without an input signal) should also have about $1\:\textrm{mA}$ in the collector of $Q_2$. The reason is that if the base-emitter voltages of $Q_1$ and $Q_2$ are the same (this is a "diff-amp" after all), then the collector currents should be the same. So a designer would have known this and planned for equal currents in both collectors.
3. There is an Early Effect present in all BJT transistors, which becomes more of a problem if the $V_{CE}$ of one transistor is much different than the other. However, because of the arrangement here, it is clear that the collector voltage for $Q_1$ will always be "close" to $19.3\:\textrm{V}$ and the collector voltage for $Q_2$ will always be exactly $20\:\textrm{V}$. Given that they share the same emitter voltage, too, this pretty much means the Early Effect won't be much of an issue. Their $V_{CE}$ voltages will be appoximately the same.
4. $R_1$ and $R_2$ are used as a simple voltage divider creating a mid-point voltage, half way between the supply voltage. Without a signal applied, the only impact on this divider voltage will be the required base current of $Q_1$. Since that base current will source from $R_1$, leaving $R_2$ just a little poorer for it, this means that the voltage drop across $R_1$ will be a little more than the voltage drop across $R_2$, so we will expect that the quiescent base voltage for $Q_1$ will be a little below the mid-point of $10\:\textrm{V}$.
5. The collector voltage of $Q_4$ can vary over almost all the range of the supply voltage. Saturation of $Q_4$ (undesirable) occurs when the collector voltage is the same as the base voltage. But its base voltage (see point #3 above) will be close to $19.3\:\textrm{V}$. This means that the collector can range from almost "ground" to almost $19.3\:\textrm{V}$. And that is most of the output range available. At first blush, this at least suggests that $V_{OUT}$ has a relatively full range available to it and this fact also helps confirm that this may be the $V_{OUT}$ node, if you hadn't already figured it out before. (We'll come back to this, later.)
6. $Q_4$'s collector current is highly dependent on its $V_{BE}$, with an exponential relationship. This means that the collector current will vary by a factor of 10X for each $60\:\textrm{mV}$ change of its $V_{BE}$.
7. The collector current of $Q_4$ will mostly be due to the current in $R_{C_2}$ (ignoring the base current for $Q_2$ and through the NFB leg to ground through $C_1$.) Let's say the output will swing from $5\:\textrm{V}$ to $15\:\textrm{V}$ (again, we have to get back to this), then the variation in collector current will be the ratio of those two voltages, or about 3. From this, we could estimate $V_T\cdot\operatorname{ln}\left(3\right)\approx 30\:\textrm{mV}$ variation at the base of $Q_4$: or $\pm 15\:\textrm{mV}$.
8. The above logic (10X collector current for $60\:\textrm{mV}$ change at the base) applies to $Q_1$, except that in the diff-pair arrangement only half the supposed base variation applies. We can work out now that a $\pm 15\:\textrm{mV}$ variation at the base of $Q_4$ implies a $22\:\mu\textrm{A}$ variation of collector current around the assumed $1\:\textrm{mA}$. By the rule, we would normally expect to see an increase/decrease of about $570\:\mu\textrm{V}$ to get there. But this is a diff-pair, so it requires twice that, or $1.14\:\textrm{mV}$ of signal peak to achieve it.
9. However, point #8 ignores the loading of $Q_4$. Since the assumed center voltage at $V_{OUT}$ is (let's say) $10\:\textrm{V}$, its quiescent collector current should be about $10\:\textrm{mA}$. So we can compute $r_e= \frac{V_T=26\:\textrm{mV}}{I_{C_Q}=10\:\textrm{mA}}\approx 2.6\:\Omega$. (This will actually vary because the collector current will vary over that factor of 3.) Assuming $\beta=150$ for now, this suggests a loading of $\approx 400\:\Omega$. Put in parallel with $R_{C_1}$, we can very broadly say that the collector resistor is really only about half the value we expected. So this means the input signal has to be twice as much as we predicted in step #8 above. Or about $2.3\:\textrm{mV}$.
10. From point #9, you can see that this circuit will turn something on the order of a $2.3\:\textrm{mV}$ change at the input to something like $\pm 5\:\textrm{V}$ at the output. (Ignoring the NFB.) That's an open loop gain of about 2100.
We now have an idea of the open loop gain, before adding in the NFB. With NFB closing the loop, we can compute the closed loop gain from $A=\frac{A_O}{1+A_O B}$, where $A_O\approx 2100$ and $B$ is the portion of the output signal fed back to the input as negative feedback. In this case, $B=\frac{1\:\textrm{k}\Omega}{1\:\textrm{k}\Omega+10\:\textrm{k}\Omega}\approx 0.09$. From this, we estimate $A\approx 11$.
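A quick numeric check of that estimate (plain arithmetic with the values derived above; $A_O \approx 2100$ is itself only an estimate from the earlier steps):

```python
# closed-loop gain A = A_O / (1 + A_O * B)
A_O = 2100                   # estimated open-loop gain
B = 1e3 / (1e3 + 10e3)       # feedback fraction from the 1 k / 10 k divider
A = A_O / (1 + A_O * B)
print(round(A, 1))           # about 10.9, i.e. roughly 11
```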
So, if you supply a small signal at the input, we should expect to see about 11X at the output.
One point I wanted to return to here is that you can see that the actual voltages present at the two bases of $Q_1$ and $Q_2$ are not expected to vary all that much. We cannot tolerate much more than a swing at the output of perhaps $\pm 5\:\textrm{V}$. Much more than that and you will start to saturate $Q_4$. Given the closed loop gain, this means the input signal cannot be allowed to go much more than about $\pm 500\:\textrm{mV}$.
This means that the current in $R_{E_1}$ will vary by perhaps $\frac{\pm 500\:\textrm{mV}}{4.7\:\textrm{k}\Omega}\approx 100\:\mu\textrm{A}$. Given that the assumed current in it should be about $2\:\textrm{mA}$ (which is divided between the two BJTs), this seems reasonably "constant." So probably good enough and doesn't need any additional circuitry to stiffen that up more.
Also, assuming that there is about $2\:\textrm{mA}$ in $R_{E_1}$, we find that the voltage across it would be about $9.4\:\textrm{V}$. This again is close enough to what we'd guess, given about $700\:\textrm{mV}$ of $V_{BE}$ drop for $Q_1$ and $Q_2$, we once again have some assurances that there was an intelligent designer here.
I think that dots the final i and crosses the final t. This circuit looks designed. Nothing seems "out of whack" about it.
And now you know what to expect from it, too!
At the end of all this, we find that this is very likely the result of an intelligent designer (no pun intended.) This is generally considered to be a good thing.
So where does that leave things? Well, you cannot drive this circuit with more than $\pm 500\:\textrm{mV}$. (Perhaps a little more. But realistically that should be about your limit.) If you try and supply a larger signal voltage swing, then you can expect distortion at the output.
Also, the voltage at the base of $Q_4$, if you use a relatively smaller input signal, should "look" okay on the scope. About like a sine wave. But if instead you push this amplifier towards its maximum output swing, using anything more than (or even near to) $\pm 500\:\textrm{mV}$, then you should expect that the voltage signal at the base of $Q_4$ will start looking somewhat distorted (not exactly a sine, anymore.) This is entirely normal. Expect it. It will still look "something like" a sine. Just enough different that you can perhaps see that it looks "wrong."
If you now over-drive this circuit, you will push $Q_4$ into saturation -- perhaps heavy saturation. In that case, all bets are off and you will most certainly see non-sinusoidal results everywhere. But this isn't within the managed behavior of the circuit, so it is more of an intellectual curiosity (perhaps for those wanting to study the added harmonics under such conditions.) It's not worth investigating for most of us.
So just keep your input signal small, here. Within the range I've mentioned. I think you'll be fine with the results, then.
Cripes, this must mean that @G36 actually knew what he was doing in this case! Will such wonders never cease?
So here's an LTSpice simulation of the above circuit. I'm providing the AC analysis (log-log Bode plot):
I made it span quite a range of frequency (x-axis) so that you can see some more interesting variations. As you can see, it starts out having very little response at frequencies close to DC. You should expect this, because of $C_2$ blocking DC. In fact, I asked LTSpice to vary that input capacitor so that it was $1\:\mu\textrm{F}$, $10\:\mu\textrm{F}$, and $100\:\mu\textrm{F}$ just to make this illustration clearer. But in all cases, you can see that by the time the frequency reaches about $100\:\textrm{Hz}$ (from the DC side), that the solid lines (regardless of color) have reached a flat spot that seems near to a gain of 10, or so.
Here's a zoomed up picture:
Now, you can see that the gain reaches an actual gain of 11. Which is just as I calculated above. It's nice to see when that happens. It also looks pretty flat over the audio ranges of frequencies, too. Probably also a good thing. (It even looks as though it might be useful a bit higher, as well -- perhaps it could have other uses than just audio?)
The little "peak" out at the high frequency end is called "gain peaking" and it occurs for reasons you cannot easily see on the schematic because the parasitics aren't shown. Nor are all of the parasitics included by LTSpice automatically, either. Only a few. Wires have inductance but LTSpice ignore it, not knowing anything about how long your wires are or their shapes or how close they might be to other wires, etc. So the actual behavior of a real circuit you make will be probably a lot different near the higher frequency end. Luckily, it's not important here. Just ignore everything in the plot beyond $1\:\textrm{MHz}$. (Stuff starts getting enough different out there.)
To address your question about why the voltage at $N_1$ might appear "distorted" when the output signal at $V_{OUT}$ isn't distorted (or put another way, to put your mind at ease as to why that is okay), consider that there is a closed loop system using the negative feedback (NFB.) The diff-amp (long tailed pair) of $Q_1$ and $Q_2$ does "whatever is necessary" to control the base of $Q_4$ to control $V_{OUT}$ per the input signal. What exactly it does, isn't important right now. Just believe for now that something (currently mysterious) will happen so that the diff-amp pair is "satisfied." The details are interesting. But you don't need to know them to get the point.
Now, let's merely examine quantities to see why your scope might show a distorted voltage signal (with respect to a perfect sine) at the base of $Q_4$ while at the same time seeing a nearly perfect sine at $V_{OUT}$.
Assume the above analysis is correct (it is roughly so.) Then the center voltage (quiescent voltage) at $V_{OUT}$ will be close to (but a little less than) $10\:\textrm{V}$. This means that the collector current must be close to $10\:\textrm{mA}$. Let's assume for a moment that the saturation current parameter for $Q_4$ is $I_S=20\:\textrm{fA}$ (small signal device.)
Suppose an output signal at $V_{OUT}$ swings from $5\:\textrm{V}$ to $15\:\textrm{V}$, around the center of approximately $10\:\textrm{V}$. It does so with perfection and is nicely sinusoidal. No visible distortion, at all. This means that the collector current will be $5\:\textrm{mA}$ to $15\:\textrm{mA}$, around the center of approximately $10\:\textrm{mA}$. So let's look at the $V_{BE}$ required for that:
$$\begin{array}{r|l} I_C & \textrm{V}_\textrm{BE} \\ \hline 5\:\textrm{mA} & 682.4\:\textrm{mV} \\ 7.5\:\textrm{mA} & 692.9\:\textrm{mV} \\ 10\:\textrm{mA} & 700.4\:\textrm{mV} \\ 12.5\:\textrm{mA} & 706.2\:\textrm{mV} \\ 15\:\textrm{mA} & 710.9\:\textrm{mV} \end{array}$$
(The equation used is $V_{BE}=V_T\cdot\operatorname{ln}\left(\frac{I}{I_S}+1\right)$, where $V_T=26\:\textrm{mV}$ and $I_S=20\:\textrm{fA}$.)
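The table can be reproduced from that equation with a few lines of code, using the same assumed values ($V_T = 26\:\textrm{mV}$, $I_S = 20\:\textrm{fA}$):

```python
import math

V_T = 0.026      # thermal voltage in volts (assumed above)
I_S = 20e-15     # saturation current in amperes (assumed above)

for I_C in (5e-3, 7.5e-3, 10e-3, 12.5e-3, 15e-3):
    V_BE = V_T * math.log(I_C / I_S + 1)
    print(f"{I_C * 1e3:5.1f} mA -> {V_BE * 1e3:6.1f} mV")
```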
Note that the peak differences are $-18\:\textrm{mV}$ and $+10.5\:\textrm{mV}$ around a center voltage of $700.4\:\textrm{mV}$ at the base. If you were to place a scope on the base, you'd see almost twice as large of a voltage swing in one direction as in the other. Clearly, in terms of voltage at the base of $Q_4$, the voltage does NOT look like a perfect sine!! Yet the output is just fine.
(I added a couple of intermediate values, too, so that you can do a slightly better job of hand-plotting out the curve, if you want to do so.)
The diff-amp pair doesn't care whether or not the voltage swing at the base of $Q_4$ is symmetrical. All it is doing is trying to make sure that $V_{OUT}$ follows $V_{IN}$ and it is willing to do what it takes to achieve that. And it requires a distorted voltage signal at the base of $Q_4$ to achieve an undistorted signal at $V_{OUT}$. It is enough to know that this is due to the relationship of base voltage to collector current in a BJT, which itself isn't a linear one.
This points up the fact that BJTs are NOT current-controlled devices, from the point of view of a physicist. They are voltage controlled current sources (VCCS) devices. From a designer point of view, sometimes it is enough to see them as current-controlled current sources (CCCS): such as when working out how much current to supply the base for an LED On/Off switch. (Who cares about the base voltage then?) But at times like this, explaining why a signal here might not look sinusoidal when another signal there does look sinusoidal, then knowing that it really is a VCCS helps at those times. It also helps in understanding a current mirror. Etc. You merely shift views where needed. You need that flexible mindset, so that if you see something unexpected you can dig into your box of tools and find an explanation.
NOTE:
Let me also refer you to point #6 that I made at the outset. There is a $\approx 60\:\textrm{mV}$ change of $V_{BE}$ for each factor of 10 change in collector current.
What is the current change going from $10\:\textrm{mA}$ to $5\:\textrm{mA}$? It's a factor of $\frac{1}{2}$, right? So you compute $60\:\textrm{mV}\cdot\operatorname{log10}\left(\frac{1}{2}\right)\approx -18\:\textrm{mV}$. Cool!
What is the current change going from $10\:\textrm{mA}$ to $15\:\textrm{mA}$? It's a factor of $1.5$, right? So you compute $60\:\textrm{mV}\cdot\operatorname{log10}\left(1.5\right)\approx +10.5\:\textrm{mV}$. Again, cool!
As you can see, I had already told you about the effect back when I first discussed the circuit. Had you fully apprehended point #6, you would have been able to figure all this out entirely on your own without the Shockley equation I handed to you, today.
BJT BEHAVIOR NOTE:
Here is how you might analyze a BJT's behavior (ignoring some effects and just focusing on only a gross simplification and excluding the dynamic resistance $r_e$ for now, as well):
\begin{align*} V_E&=V_B-V_{BE} &\text{where } V_{BE}&=n\cdot V_T\cdot \operatorname{ln}\left(\frac{I_C}{I_{SAT}}+1\right)\\ V_E&=V_B-n\cdot V_T\cdot \operatorname{ln}\left(\frac{I_C}{I_{SAT}}+1\right)\\ V_E&=V_B-n\cdot V_T\cdot \operatorname{ln}\left(\frac{\frac{V_E}{R_E}}{I_{SAT}}+1\right)\\ V_E&=V_B-n\cdot V_T\cdot \operatorname{ln}\left(\frac{V_E}{R_E\cdot I_{SAT}}+1\right)\\ V_E&=n\cdot V_T\cdot \operatorname{LambertW}\left(\frac{R_E\cdot I_{SAT}}{n\cdot V_T}\cdot e^{\frac{R_E\cdot I_{SAT}+V_B}{n\cdot V_T}}\right)-R_E\cdot I_{SAT} \end{align*}
Now, suppose you put a sinusoidal voltage at the base:
$$V_B=A\cdot\operatorname{sin}\left(\omega t\right)$$
Then you get:
$$V_E=n\cdot V_T\cdot \operatorname{LambertW}\left(\frac{R_E\cdot I_{SAT}}{n\cdot V_T}\cdot e^{\cfrac{R_E\cdot I_{SAT}+A\cdot\operatorname{sin}\left(\omega t\right)}{n\cdot V_T}}\right)-R_E\cdot I_{SAT}$$
Does that look like a sinusoidal result at the emitter of the BJT to you?? Even if you take $V_{BE}=V_B - V_E$, this base-emitter voltage still isn't going to be sinusoidal.
Now, the emitter voltage here will be across $R_E$ to generate an emitter current. Some of that current will wind up disappearing at the base, leaving a remaining collector current that causes a voltage drop across $R_C$.
Care to work out what the resulting signal looks like at the collector?? What is $I_C$? I'll leave that as an exercise!
Sure. Things "look" sinusoidal because they are, approximately. But in no way, exactly.
But now realize that if you remove $R_E$ entirely and make it zero, then you have a grounded emitter and you now know $V_E=0\:\textrm{V}$. Therefore:
\begin{align*} I_C&=I_{SAT}\cdot\left(e^\frac{V_B}{n\cdot V_T}-1\right)\\ I_C&=I_{SAT}\cdot\left(e^\cfrac{A\cdot\operatorname{sin}\left(\omega t\right)}{n\cdot V_T}-1\right) \end{align*}
You can get the collector current a little more directly now. But do you think it is sinusoidal?
You could apply a Fourier transform to the above equations and work out the frequency components, if you like. You could even add more to the circuitry to filter out the components you want to diminish. But in design, we often accept the warts.
Note that all the above analysis is what is called "open-loop." This means there is no NFB applied. The collector voltage is the complex product of a lot of stuff and there is nothing added (except perhaps the emitter degeneration resistor, which actually is "local" NFB) to cause the output to conform to the input.
The circuit under discussion has NFB! So while the base-emitter voltage of $Q_4$ can and will look a little distorted to the eye, at times. That's okay. Because there is NFB added here and used by the diff-pair to self-correct things. This is how NFB works to "linearize" a signal. By using NFB, we can get the output to better mimic the input than would happen if we just ran the electronic parts "open loop" where we'd be subject to all these crazy equations above.
YET ANOTHER NOTE:
To extend the discussion in comments and help here:
You have seen two cases of CE amplifiers. One with and one without an added $R_E$ for degeneration. All of the above should tell you something new, now. In cases where you see a CE amplifier without $R_E$, you must expect there to be some form of global NFB being applied. So you look for that.
If global NFB appears to be missing, then you know that the collector output will be distorted. More so for large signal swings, less so for smaller signal swings. (But a grounded emitter CE amplifier has a LOT of gain, so this almost always means there are large signal swings being requested by the designer of it.) It is almost always the case, though, that you will find the NFB to be present, because it will be very much needed.
Using an emitter resistor, $R_E$, provides "degeneration" which is important for a variety of reasons (temperature stability higher among them.) But it also provides local NFB to the circuit which helps to linearize the output signal. So in these cases, you may NOT find any global NFB present in the circuit, since the emitter resistor is doing some of that desired work.
Cripes. I've written a flurgen chapter of a book here. Look what you've made me do, Keno!!
Here is what LTSpice shows as the two output voltages for $V_{OUT}$ (blue) and for the base-emitter voltage for $Q_4$ (green):
In the above case, I've set things up so that $V_{OUT}$ is being exercised over its maximum range (arguably, anyway.) This helps to exaggerate the effects.
Note that the baseline quiescent value for the green curve is about $715\:\textrm{mV}$. So you can see that the peaks and valleys are not the same height here.
There is distortion. But to the untrained eye and without knowing the baseline quiescent voltage beforehand, the green curve trace may very well look kind of close to a sine.
Now, take a look at what happens when I cut the input signal in half:
In this case, the base-emitter voltage looks more "sine-like" than before. This is as it should be, and is expected. As the input signal causes the output to swing over smaller and smaller ranges, relative to its maximum, the closer the base-emitter swing will look to a sine. Even the baseline quiescent value cuts more closely to the midpoint.
Imagine how much closer it would get if the input signal were smaller still.
• Wow, that is an awesome overall presentation of the circuit made by @G36! But I am mostly curious, if will the Q4 change the output (if it will get distorted somehow), if you add distortion within sine wave coming into base of Q4. That distortion would be x-th harmonic of sine wave. By adding that distortion, both of us could imagine it as distortion made by an amplifier itself (as I read, the power output stage would most usual distort amplified sine wave - in which case part of the output needs to be fed back to diff-amp to correct that undesired distortion). Do you understand what I mean?
– Keno
Jan 1 '18 at 14:21
– jonk
Jan 1 '18 at 19:34
• @Keno The CE configuration will either have a degeneration resistor, or not. If it does have one, its very existence permits the emitter to "float" and therefore "follow" the base. If you are very, very sneaky about it, measuring carefully, you can still "see" that the signal between base and the emitter is still the logarithm of the collector current sine -- but it will be harder to see because of the huge voltage offset of the emitter resistor and perhaps also the smaller collector current swing compared to the quiescent collector current.
– jonk
Jan 2 '18 at 19:05
• @Keno Glad to hear it. I think this stuff works its way in as you think more about the details. I've mentioned you may also need to do paper and pencil work. And I still think so. The mathematics I generated and the theoretic models provide one way of seeing that everything is not a sine, nor should it be, and why. The math points up details you often cannot easily see with a scope. But discussions do some of the work, as well. Keep plugging away, asking questions, etc.
– jonk
Jan 4 '18 at 6:15
• @Keno It is the emission coefficient. Usually 1 for small signal bjt, never less than 1, and sometimes a little more than 1 for larger bjts.
– jonk
Jan 13 '18 at 12:42
There are two questions here:
1. Why is the signal at the base of Q4 distorted when the output is not?
Consider that Q4 is a common-emitter amplifier with a high gain (no emitter degeneration) and a large output voltage swing (you haven't given it but I'm guessing at least 3 volts). Q4 must then be introducing a significant amount of distortion.
What does distortion mean here? It means when you put a perfect sine wave into the base of Q4, the signal at the collector of Q4 is not a perfect sine wave.
Now, is there a signal you could put at the base which is not a perfect sine wave which when distorted by Q4 results in a perfect sine wave at the collector? There is-- it is this distorted wave you're seeing at the base of Q4.
The negative feedback effectively acts to pre-distort the input signal to cancel out the distortion which the circuit would otherwise introduce.
1. Why does the bottom of the sine wave get clipped (and the signal at Q4's base get very 'spiky') when the output signal is large?
I can't blame you for losing perspective considering it worked so perfectly up to that point. Here's what you have:
And the clipping is simply because Q4's collector current drops to zero.
• I don't know about Vp node you scoped in Spice but the blue and green trace is exactly what I got! But for Vp (node at the base of Q1); was the base overdriven and therefore I/you got clipped output signal because of that "extended spikes" at the base of Q4?
– Keno
Jan 1 '18 at 22:22
• @Keno The teal one Ic(Q4) is Q4's collector current. Where the output signal (green) is clipped you can see that the collector current is zero.
– τεκ
Jan 1 '18 at 22:39
• So the base of Q1 was overdriven? I assume that because there is square alike signal at the base of Q1. Or anything else happened that there is not a sine wave at the base of Q1?
– Keno
Jan 1 '18 at 22:48
• No. The input is OK, which is why there is no distortion when you reduce the gain. The clipping happens because once Q4's collector current hits zero it can't go any lower (it is in cutoff).
– τεκ
Jan 1 '18 at 22:53
• @Keno the red trace is Vp-Vn which is the differential input voltage.
– τεκ
Jan 1 '18 at 23:01
Apparently the output is as desired, and you are wondering why an intermediate signal looks as distorted as it does.
This is because you are looking at the voltage of a current signal. The real signal into Q4 is the current being drawn out of its base. The voltage of that won't be all that meaningful to look at.
• I somehow don't understand what you were really tried to expain here..
– Keno
Dec 31 '17 at 16:08
• @Keno: You are trying to look at a current signal by plotting its voltage as a function of time. That doesn't work, at least not to see the true signal. Dec 31 '17 at 16:12
• Then what is the solution for my example?
– Keno
Dec 31 '17 at 16:18
• @Keno: Solution to what problem? You haven't shown anything that is wrong. Dec 31 '17 at 17:49
• But I described it, again: as that spiky signal fed into the base of Q4 gets even more spiky, the output signal at the collector of Q4 gets its negative half-wave progressively clipped. And I am wondering why the diff-amp doesn't "correct" properly so there would be a whole sine at the output. That is the real problem.
– Keno
Dec 31 '17 at 19:19 |
# Solving Schroedinger Equation for a Step Potential
1. Mar 13, 2013
### FishareFriend
Undergraduate Quantum Mechanics problem. However the course hasn't gone as far to include R or T so I'm assuming there must be a way to solve this without needing to know about those.
1. The problem statement, all variables and given/known data
Asked to show that $$\psi(x)=A\sin(kx-\phi_0)$$ is a solution to the 1D-time-independent Schroedinger Equation for $x<0$.
Then asked to show that the general solution for $x>0$ is $$\psi(x)=Be^{{-x}/{\eta}}+Ce^{{x}/{\eta}}$$.
Question then is, by considering how the wave function must behave at $x=0$, show that $$\phi_0=arctan(\eta k)$$
2. Relevant equations
$$\psi(x)=A\sin(kx-\phi_0)\quad x<0$$
$$\psi(x)=Be^{{-x}/{\eta}}+Ce^{{x}/{\eta}}\quad x>0$$
$$\phi_0=arctan(\eta k)\quad x=0$$
3. The attempt at a solution
I've tried various ways, attempting to put the first solution into exponential form, then attempting to put the second solution into trigonometric form. Neither of these seem to give the desired result, I just end up with $i$ everywhere. I also can't see how you get $\eta k$ out.
Feel like I'm missing a step or something in order to be able to solve this, any help would be greatly appreciated.
2. Mar 13, 2013
### Staff: Mentor
So how does the wave function behave at $x=0$?
3. Mar 13, 2013
### vela
Staff Emeritus
And as x goes to +∞? |
# Source code for networkx.algorithms.approximation.vertex_cover
# -*- coding: utf-8 -*-
"""
************
Vertex Cover
************
Given an undirected graph G = (V, E) and a function w assigning nonnegative
weights to its vertices, find a minimum weight subset of V such that each edge
in E is incident to at least one vertex in the subset.
http://en.wikipedia.org/wiki/Vertex_cover
"""
# Copyright (C) 2011-2012 by
# Nicholas Mancuso <nick.mancuso@gmail.com>
from networkx.utils import *
__all__ = ["min_weighted_vertex_cover"]
__author__ = """Nicholas Mancuso (nick.mancuso@gmail.com)"""
@not_implemented_for('directed')
def min_weighted_vertex_cover(G, weight=None):
    r"""2-OPT Local Ratio for Minimum Weighted Vertex Cover

    Find an approximate minimum weighted vertex cover of a graph.

    Parameters
    ----------
    G : NetworkX graph
        Undirected graph

    weight : None or string, optional (default = None)
        If None, every node has weight/cost 1. If a string, use this
        node attribute as the node weight. A node without the attribute
        defaults to 1.

    Returns
    -------
    min_weighted_cover : set
        Returns a set of vertices whose weight sum is no more than 2 * OPT.

    Notes
    -----
    Local-Ratio algorithm for computing an approximate vertex cover.
    Algorithm greedily reduces the costs over edges and iteratively
    builds a cover. Worst-case runtime is O(|E|).

    References
    ----------
    .. [1] Bar-Yehuda, R., & Even, S. (1985). A local-ratio theorem for
       approximating the weighted vertex cover problem.
       Annals of Discrete Mathematics, 25, 27–46.
       http://www.cs.technion.ac.il/~reuven/PDF/vc_lr.pdf
    """
    # node weight defaults to 1 when weight is None or the attribute is missing
    weight_func = lambda nd: nd.get(weight, 1)
    cost = dict((n, weight_func(nd)) for n, nd in G.nodes(data=True))

    # visit every edge; lowering both endpoint costs by the smaller residual
    # cost guarantees the edge ends up covered (the local-ratio step)
    for u, v in G.edges_iter():
        min_cost = min([cost[u], cost[v]])
        cost[u] -= min_cost
        cost[v] -= min_cost

    return set(u for u in cost if cost[u] == 0)
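A short usage sketch (not part of the NetworkX source): the function is called on a small unweighted graph, so every node costs 1 and the returned set weighs no more than twice the optimum. The example graph is my own.

```python
# Usage sketch: cover a 4-cycle with unit node weights.  Any two opposite
# vertices form a valid cover; the result is guaranteed to be within 2 * OPT.
import networkx as nx
from networkx.algorithms.approximation import min_weighted_vertex_cover

G = nx.cycle_graph(4)                  # nodes 0..3, edges form a square
cover = min_weighted_vertex_cover(G)   # weight=None -> every node costs 1

# every edge must have at least one endpoint in the cover
assert all(u in cover or v in cover for u, v in G.edges())
print(sorted(cover))
```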
# What is the equation of the line with slope m= -5/17 that passes through (3,1) ?
May 14, 2018
$y = - \frac{5}{17} x + \frac{32}{17}$
#### Explanation:
$\text{the equation of a line in slope-intercept form is}$
$y=mx+b$
$\text{where m is the slope and b the y-intercept}$
$\text{here } m = - \frac{5}{17}$
$\Rightarrow y = - \frac{5}{17} x + b \leftarrow \textcolor{blue}{\text{is the partial equation}}$
$\text{to find b substitute "(3,1)" into the partial equation}$
$1 = - \frac{15}{17} + b \Rightarrow b = \frac{17}{17} + \frac{15}{17} = \frac{32}{17}$
$\Rightarrow y = - \frac{5}{17} x + \frac{32}{17} \leftarrow \textcolor{red}{\text{ is equation of line}}$ |
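As a quick check (not part of the original answer), substitute $x=3$ back into the finished equation:
$y=-\frac{5}{17}(3)+\frac{32}{17}=-\frac{15}{17}+\frac{32}{17}=\frac{17}{17}=1$
so the line does pass through $(3,1)$.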
#### Vol. 8, No. 8, 2015
Existence and classification of singular solutions to nonlinear elliptic equations with a gradient term
### Joshua Ching and Florica Cîrstea
Vol. 8 (2015), No. 8, 1931–1962
##### Abstract
We completely classify the behaviour near $0$, as well as at $\infty$ when $\Omega = \mathbb{R}^N$, of all positive solutions of $\Delta u = u^q |\nabla u|^m$ in $\Omega \setminus \{0\}$, where $\Omega$ is a domain in $\mathbb{R}^N$ ($N \ge 2$) and $0 \in \Omega$. Here, $q \ge 0$ and $m \in (0,2)$ satisfy $m + q > 1$. Our classification depends on the position of $q$ relative to the critical exponent $q_* := \bigl(N - m(N-1)\bigr)/(N-2)$ (with $q_* = \infty$ if $N = 2$). We prove the following: if $q < q_*$, then any positive solution $u$ has either (1) a removable singularity at $0$, or (2) a weak singularity at $0$ ($\lim_{|x|\to 0} u(x)/E(x) \in (0,\infty)$, where $E$ denotes the fundamental solution of the Laplacian), or (3) $\lim_{|x|\to 0} |x|^{\vartheta} u(x) = \lambda$, where $\vartheta$ and $\lambda$ are uniquely determined positive constants (a strong singularity). If $q \ge q_*$ (for $N > 2$), then $0$ is a removable singularity for all positive solutions. Furthermore, for any positive solution in $\mathbb{R}^N \setminus \{0\}$, we show that it is either constant or has a nonremovable singularity at $0$ (weak or strong). The latter case is possible only for $q < q_*$, where we use a new iteration technique to prove that all positive solutions are radial, nonincreasing and converging to any nonnegative number at $\infty$. This is in sharp contrast to the case of $m = 0$ and $q > 1$, when all solutions decay to $0$. Our classification theorems are accompanied by corresponding existence results in which we emphasise the more difficult case of $m \in (0,1)$, where new phenomena arise.
##### Keywords
nonlinear elliptic equations, isolated singularities, Leray–Schauder fixed point theorem, Liouville-type result
##### Mathematical Subject Classification 2010
Primary: 35J25
Secondary: 35B40, 35J60 |
# kde meaning statistics

In statistics, kernel density estimation (KDE) is a non-parametric way to estimate the probability density function of a random variable. It is a fundamental data-smoothing problem: inferences about a population are made from a finite data sample. In some fields, such as signal processing and econometrics, it is also called the Parzen–Rosenblatt window method, after Emanuel Parzen and Murray Rosenblatt, who are usually credited with independently creating it in its current form.

Let $(x_1, x_2, \ldots, x_n)$ be a univariate independent and identically distributed sample drawn from a distribution with an unknown density $f$. The kernel density estimator is
$$\hat{f}_h(x) = \frac{1}{n}\sum_{i=1}^{n} K_h(x - x_i) = \frac{1}{nh}\sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right),$$
where $K$ is the kernel (a non-negative function) and $h > 0$ is a smoothing parameter called the bandwidth. A kernel with subscript $h$ is called the scaled kernel, defined as $K_h(x) = \frac{1}{h}K(x/h)$. A range of kernel functions is commonly used: uniform, triangular, biweight, triweight, Epanechnikov and normal. The Epanechnikov kernel is optimal in a mean squared error sense, though the loss of efficiency for the other listed kernels is small; because of its convenient mathematical properties, the normal kernel $K(x) = \phi(x)$ is often used in practice.

The bandwidth is a free parameter that strongly influences the resulting estimate: a bandwidth that is too small produces a wiggly estimate with spurious artifacts, while one that is too large oversmooths the underlying structure. The bandwidth minimising the mean integrated squared error (MISE) is usually approximated through its asymptotic form, $\mathrm{MISE}(h) = \mathrm{AMISE}(h) + o\bigl(1/(nh) + h^4\bigr)$. If Gaussian kernels are used and the underlying density is itself Gaussian, the AMISE-optimal choice is $h = \bigl(\tfrac{4\hat\sigma^5}{3n}\bigr)^{1/5} \approx 1.06\,\hat\sigma\, n^{-1/5}$, known as Silverman's rule of thumb; it should be used with caution, since it can be widely inaccurate when the density is far from normal. The resulting $n^{-4/5}$ convergence rate is slower than the typical $n^{-1}$ rate of parametric methods, but under weak assumptions no non-parametric estimator can converge at a faster rate than the kernel estimator.

Kernel density estimators are implemented in most statistical software, for example scipy.stats.gaussian_kde, pandas.DataFrame.plot.kde and seaborn.kdeplot in Python, ksdensity in MATLAB, and PROC KDE in SAS/STAT.

Note that the abbreviation KDE has unrelated meanings as well, including the K Desktop Environment (a free-software community and its Plasma desktop), the Kentucky Department of Education, and "key data element"; in this statistical context it always refers to kernel density estimation.
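A minimal Python sketch (not from any of the sources above) of the estimator just defined: a Gaussian-kernel KDE with Silverman's rule-of-thumb bandwidth. The sample and evaluation grid are invented for illustration.

```python
# A Gaussian-kernel KDE with Silverman's rule-of-thumb bandwidth,
# implementing f_hat(x) = (1 / (n h)) * sum_i K((x - x_i) / h).
import numpy as np

def kde_gaussian(samples, grid):
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    sigma = samples.std(ddof=1)
    h = 1.06 * sigma * n ** (-1.0 / 5.0)      # Silverman's rule of thumb
    u = (grid[:, None] - samples[None, :]) / h
    kernel = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)  # standard normal pdf
    return kernel.sum(axis=1) / (n * h)

# usage: estimate the density of a standard-normal sample of size 100
rng = np.random.default_rng(0)
x = rng.standard_normal(100)
grid = np.linspace(-4.0, 4.0, 201)
density = kde_gaussian(x, grid)
print(round(float(density[100]), 2))  # value near x = 0, close to the true peak ~0.40
```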
Relation Between Pressure And Velocity Of Air
Proximal to an obstruction, flow is laminar (parallel streamlines) and the velocity V1 is normal. Bernoulli's principle governs how pressure and velocity trade off in such a flow, and it is used in a diverse range of applications, from the simple spray mechanism of a perfume atomizer to the lift on an airplane's wing. The same bookkeeping applies to duct design: the available static pressure minus the static pressure drop of the system components is the static pressure left for use on the ductwork. Gravity is what creates air pressure in the first place, by compressing the atmosphere toward the surface.
Pitot tubes (also called pitot-static tubes) are used to measure fluid velocity at a point in a fluid by comparing total and static pressure. Areas where the air is warmed often have lower surface pressure, because the warm air rises; air then flows from a region of high pressure to one of low pressure, and the bigger the difference, the faster the flow. For a gas at constant temperature the pressure-volume relationship is Boyle's law, $P_1 V_1 = P_2 V_2$, named in honor of Robert Boyle, who was first to uncover it. For a moving stream, flow rate is directly proportional to both the average velocity (the speed) and the cross-sectional size of the river, pipe or other conduit; fittings such as elbows, tees, valves and reducers represent a significant component of the pressure loss in most pipe systems, and a valve's flow coefficient is defined as the flow rate in cubic meters per hour of water at 16 °C with a pressure drop of 1 bar across the valve. Wind loading can be approximated by Pressure = ½ × (density of air) × (wind speed)² × (shape factor); the density of air is about 1.2 kg/m³ at sea level and varies with altitude, temperature and humidity. Water is much denser, so pressure rises far faster with depth than it does in air.
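A small numerical sketch of the pitot-tube relationship described above (my own illustration, not from the quoted sources): the velocity pressure is the total reading minus the static reading, and Bernoulli's relation gives v = sqrt(2·Δp/ρ). The 1.2 kg/m³ density is an assumed sea-level value.

```python
# Air speed from a velocity-pressure (pitot) reading: v = sqrt(2 * dp / rho).
import math

def airspeed_from_velocity_pressure(dp_pa, rho=1.2):
    """dp_pa: total minus static pressure in pascals; rho: air density in kg/m^3."""
    return math.sqrt(2.0 * dp_pa / rho)

# usage: a 125 Pa velocity pressure corresponds to roughly 14.4 m/s of air speed
print(round(airspeed_from_velocity_pressure(125.0), 1))
```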
In a duct system the flow resistance changes whenever the system itself is changed, for example when dampers are opened or closed, and the drop in pressure in a low-velocity ductwork system is quite small, typically around 1 Pa per metre run of straight duct. Heat lost to an airflow is a function of the air velocity, and in conveying systems the material must maintain the required velocity to be carried completely through the duct, pipe or hose without settling. For compressed air, the relationship between flow, or volume (cfm), and pressure (psig) is widely misunderstood even within the industry; air flow velocity is also physically limited once the absolute pressure ratio falls to roughly 0.53, at which point the flow becomes sonic. According to the Bernoulli principle, as a flow of air accelerates it has a lower pressure than slower-moving air; in areas where the pressure is high, the molecular velocity is low. Air has mass, and moving air has momentum; a liquid at rest pushes down with its static pressure. By using dimensional analysis and fluid-dynamic equations, basic fan laws can be derived that relate airflow, static pressure, horsepower, speed, density and noise, and an important design number called the "friction rate" ties duct size to airflow.
Ballistics shows the same pressure-velocity trade: velocity versus barrel length can be measured in two-inch intervals from an initial barrel length of about 28 inches, and although chamber pressure drops off quickly along the barrel, the bullet keeps accelerating even as the pressure behind it diminishes. Mathematically, the energy per unit volume of a moving fluid is $\frac{\rho}{2}v^2 + p = \text{const}$, which is the simplest form of Bernoulli's equation and neglects height changes and losses. It has also been shown that the relationship between wind speed and pressure is co-dependent with temperature, so barometric pressure and wind velocity are related but not perfectly anti-correlated.
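To make the $\frac{\rho}{2}v^2 + p = \text{const}$ statement concrete, here is a minimal sketch, assuming incompressible flow along a level streamline with no losses; the duct velocities are invented for illustration.

```python
# Bernoulli along a level streamline: p1 + rho*v1**2/2 = p2 + rho*v2**2/2,
# so the static pressure falls by rho*(v2**2 - v1**2)/2 when the air speeds up.
RHO_AIR = 1.2  # kg/m^3, assumed sea-level density

def static_pressure_drop(v1, v2, rho=RHO_AIR):
    return 0.5 * rho * (v2 ** 2 - v1 ** 2)

# usage: accelerating duct air from 5 m/s to 12 m/s costs about 71 Pa of static pressure
print(round(static_pressure_drop(5.0, 12.0), 1))
```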
Dynamic pressure is in fact one of the terms of Bernoulli's equation, which can be derived from the conservation of energy for a fluid in motion; for a control volume with a single inlet and a single outlet, the principle of conservation of mass states that, in steady state, the mass flow rate in must equal the mass flow rate out. A venturi uses exactly this: the throat area is smaller, so the velocity there is higher and the static pressure is lower, which is why the pressure in the throat of a carburetor venturi varies inversely with the velocity of the air passing through it. The Bernoulli equation can likewise be used to determine the flow rate over a dam or weir, assuming the velocity upstream of the dam is negligible and the nappe is exposed to atmospheric pressure. The relationship between temperature and air pressure at constant volume is Gay-Lussac's law, and pressure, typically measured in psi, determines an air compressor's ability to perform a certain amount of work at any given time.
The same pressure-velocity coupling appears at very different scales. In a hurricane, winds of tropical-storm force extend roughly 100 to 200 kilometers from the eye, while the pressure there is still relatively high compared with the storm's center, at about 990 to 1010 millibars. The amount of air resistance an object experiences depends on its speed, cross-sectional area, shape and the density of the air, and a projectile's velocity is highest at the muzzle, dropping steadily because of that resistance. In acoustics, a particle of free air exposed to a sinusoidal sound pressure of 1 Pa moves back and forth with a velocity amplitude of about 2 mm/s. Outside the boundary layer the total energy of an air molecule is constant, being the sum of its pressure and velocity contributions; fan velocity pressure is defined from the kinetic energy the fan imparts to the air, and a velocity-pressure survey reports the average velocity at the traverse plane. Even in physiology the analogy holds: the pressure difference driving a flow equals the flow times the resistance ($\Delta P = Q \times R$, with $Q$ the cardiac output and $R$ the total peripheral resistance), and blood pressure is one of the most important contributors to pulse wave velocity, a classic measure of arterial stiffness.
This is equivalent to an airfoil moving through the air - just a question of the reference system. of the upstream absolute pressure (P1). This allotted seven data points for curve fitting, with the respective barrel lengths measured at 28. Since the air velocity is calculated from the difference between air and heated-element temperatures, such a compromise is probably sufficient. Other measuring methods cool the air which goes through flow transducers or there are other models that use the ultrasound method. 5 If the initial pressure and temperature of the leak-free vessel in figure 3. Designed by Midori Architects, the SkyHive Skyscraper, otherwise known as the Aero Hive, seamlessly works nature into the physical aspects of a mixed-use office complex. The larger the conduit, the greater its cross-sectional area. Relationship between density, pressure, and temperature • The ideal gas law for dry air - R d: gas constant for dry air • Equals to 287 J/kg/K - Note that P, , and T have to be in S. An accurate prediction of cyclone pressure drop is very important as it relates directly to operating costs. Between 14 and 21 inches, pressure loss totals 18,000 psi. From the Bernoulli equation we can obtain the following relationship. the amount of paint required for a particular part?. 17 elevation in the atmosphere for those regions in which the temperature varies linearly with elevation. They are commonly used to measure air velocity, but can be use to measure the velocity of other fluids as well. (The average velocity pressure is about 81% of centerline velocity pressure. Until the early 17th century air was largely misunderstood. Thus far we have discussed the relationship between. 5-9 Acoustic Impedance Acoustic impedance is the opposition of a medium to a longitudinal wave motion. act = absolute pressure of the actual airstream, in. The relationship between the air flow rate (CFM) and the pressure of an air system is expressed as an increasing exponential function. You will continue to read pressure (usually more than what you see when there is a flow). We have observed that an increase in the tension of a string causes an increase in the velocity that waves travel on the string. If a stone is dropped from the top of a building it's velocity will increase at a rate of 32. Pressure of flowing air may be compared to energy in that the total pressure of flowing air always remains constant unless energy is added or removed. The air in a tire expands with an increase in temperature, which means an inflation of the air pressure. The amount of lift generated by an object depends on a number of factors, including the density of the air, the velocity between the object and the air, the viscosity and compressibility of the air, the surface area over which the air flows, the shape of the body, and the body's inclination to the flow, also called the angle of attack. The relationship between velocity and flow in a liquid system J. 4°F (+ 3°C) above the average temperature of the air outlet. For the following explanations it is assumed, that a stream of air is directed against an airfoil, which is fixed in space. EDIT: I remember for certain that for very high Reynolds number you approach the velocity squared relationship. Velocity is also related to air density with assumed constants of 70° F and 29. 
natural convection from a rectangular plate in any orientation, a correlation for forced convection over a rough flat surface, correspondence between forced convection and the vertical-plate mode of natural convection, vector sum of forced-equivalent velocity and forced velocity vectors, competition between convective modes, and. Velocity Pressure & Velocity • V = 1096 (V P/p)0. With fewer air molecules above, there is less pressure from the weight of air above. An example of this is the air pressure in an automobile tire , which might be said to be "220 kPa (32 psi)", but is actually 220 kPa (32 psi) above atmospheric pressure. The difference between the total pressure reading and the static pressure reading is the Velocity Pressure. This is why proper air inflation has such a big impact on your tire's handling, traction, and durability. Because every sound wave travels the same distance regardless. Pregnancy Sex and Your Relationship Just don't let him blow into your vagina because that can cause a fatal air embolism. Find: Calculate the air velocity in ft/s. Pressure generation inside the compressor consists of several steps: (1) kinetic energy is first supplied to the air by means of the rotor which accelerates the air to a high speed, (2) as the air passes between the blades of the impeller, the cross-sectional flow area between the blades increases which causes some of the kinetic energy. The precise relationship between flow rate Q Q size 12{Q} {} and velocity v ¯ v ¯ size 12{ {overline {v}} } {} is. S505-S514 same vessel, we can add a known (measured or determined) mass of fluid, while fluid volume remains the same, we will raise the pressure in the vessel, making it possible to determine the. and parameterizing the effects of inflating air pressure on both force and velocity. This allotted seven data points for curve fitting, with the respective barrel lengths measured at 28. Air flow measuring – Velocity sensor more accurate to measure the air flow = better control (less hunting) = less temperature variation = less energy consumption not easy to maintain accuracy when flow rate is lower 2. Bigger ducts, lower velocity. Relationship between velocity and air pressure? The wind blows at speed 25 m/s past a well-sealed building, inside which the air pressure is always 1 atm. • A venturi can be used to determine mass flow rates due to changes in pressure and fluid velocity. The relationship between temperature and air pressure is referred to as Gay-Lussac's Law. This part of the drag is called induced drag. This relationship between pressure and resistance is mostly linear. • In the shear layer, pressure and streamwise velocity are negatively correlated, however, the correlation changes its sign near the corner due to the adverse pressure gradient. This Relationship Stays The Same Regardless Of The Boundary Conditions Of The Tube (or Air Column). For calculations of the volume flow rate for the open rotor fan, it was necessary to assume some outlet control surface, A out ( Figure 3 ). When the air is thinner, objects are not as affected by air resistance pushing against them as they bounce or fly through the air. One thing to keep in mind is that this formula finds the average speed of sound for any given temperature. The reason is— as pointed out before—that a gas in steady flow "prefers to get out of the way" rather than become compressed when it encounters an obstacle. 
To put it more simply, what we hear is sound pressure, but this sound pressure is caused by the sound power of the emitting sound source. The relation between electric current and drift velocity is that they both happen to involve electrons moving opposite of the electric field. The velocity is increased by forcing a volume of air through a constricted outlet. About 71% of the Earth is covered in water. Which statement BEST characterizes the general relationship between air pressure and elevation? A. Here, an analytical model without such assumptions or empirical expressions is established to yield a relation between blood pressure and pulse wave velocity that has general utility for future work in continuous, cuffless, and noninvasive blood pressure monitoring. ) Parts A and B were performed using air, which we know is actually a mixture of several gases. The fan model is a lumped parameter model that can be used to determine the impact of a fan with known characteristics upon some larger flow field. Although Pettine was effective calling blitzes for much of his first season as Packers defensive coordinator, the additions of Preston Smith and Za’Darius Smith have allowed Pettine to do less blitzing and commit more players into coverage while still. 1 PRESSURE COORDINATES Pressure is often a convenient vertical coordinate to use in place of altitude. 4% increase in unit costs but said they would still come in between flat and a 1% decline. 2002 (tunnel open) Figure 5: Relation between exterior temp erature and air velocity in the Gotschnatunnel 3. Air Flow Air will flow from an area of higher pressure to one of lower pressure; during inspiration, the pressure in the alveoli must be less than the pressure at the mouth for air to flow in, and during expiration, the reverse is true. At sea level (0 feet) 34 feet of water pressure is. Both of these should be measured and calculated in blower fan testing. If a stone is dropped from the top of a building it's velocity will increase at a rate of 32. Hudson Products Corp. airway, the Log-Tchebycheff method calls for a five-by-five grid. , centimetres of water, millimetres of mercury or inches of mercury). The group posted a 0. It can be easily explained for a varying diameter pipe. Duct systems are also divided into three pressure classifications, matching the way supply fans are classified. Bernoulli's equation describes the relation between velocity, density, and pressure for this flow problem. So the specific acoustic impedance of water is 3500 times higher than that of air. If there is a waterflow trough a hose, when pressure is increased, the flow will also increase. P = Pressure, psi V = Volume, ft3 R = Gas Constant (for Air) = 53. First responders from the 332d Air Expeditionary Wing perform first aid during a Mass Casualty exercise on Oct. In a closed system where volume is held constant, there is a direct relationship between Pressure and Temperature. However, the relation between the aerodynamic sound and the tip leakage vortex near the rotor tip in ax- ial flow fan is unclear in detail. |
CGAL 5.4 - Bounding Volumes
CGAL::Min_sphere_d< Traits > Class Template Reference
#include <CGAL/Min_sphere_d.h>
Definition
An object of the class Min_sphere_d is the unique sphere of smallest volume enclosing a finite (multi)set of points in $$d$$-dimensional Euclidean space $$\mathbb{E}^d$$.
For a set $$P$$ we denote by $$ms(P)$$ the smallest sphere that contains all points of $$P$$. $$ms(P)$$ can be degenerate, i.e. $$ms(P)=\emptyset$$ if $$P=\emptyset$$ and $$ms(P)=\{p\}$$ if $$P=\{p\}$$.
An inclusion-minimal subset $$S$$ of $$P$$ with $$ms(S)=ms(P)$$ is called a support set, the points in $$S$$ are the support points. A support set has size at most $$d+1$$, and all its points lie on the boundary of $$ms(P)$$. In general, neither the support set nor its size are unique.
The algorithm computes a support set $$S$$ which remains fixed until the next insert or clear operation.
Note
This class is (almost) obsolete. The class CGAL::Min_sphere_of_spheres_d<Traits> solves a more general problem and is faster than Min_sphere_d even if used only for points as input. Most importantly, CGAL::Min_sphere_of_spheres_d<Traits> has a specialized implementation for floating-point arithmetic which ensures correct results in a large number of cases (including highly degenerate ones). In contrast, Min_sphere_d is not reliable under floating-point computations. The only advantage of Min_sphere_d over CGAL::Min_sphere_of_spheres_d<Traits> is that the former can deal with points in homogeneous coordinates, in which case the algorithm is division-free. Thus, Min_sphere_d might still be an option in case your input number type cannot (efficiently) divide.
Template Parameters
Traits must be a model of the concept MinSphereAnnulusDTraits.
We provide the models CGAL::Min_sphere_annulus_d_traits_2, CGAL::Min_sphere_annulus_d_traits_3 and CGAL::Min_sphere_annulus_d_traits_d for two-, three-, and $$d$$-dimensional points respectively.
CGAL::Min_sphere_annulus_d_traits_2<K,ET,NT>
CGAL::Min_sphere_annulus_d_traits_3<K,ET,NT>
CGAL::Min_sphere_annulus_d_traits_d<K,ET,NT>
MinSphereAnnulusDTraits
CGAL::Min_circle_2<Traits>
CGAL::Min_sphere_of_spheres_d<Traits>
CGAL::Min_annulus_d<Traits>
Implementation
We implement the algorithm of Welzl with move-to-front heuristic [16] for small point sets, combined with a new efficient method for large sets, which is particularly tuned for moderately large dimension ( $$d \leq 20$$) [8]. The creation time is almost always linear in the number of points. Access functions and predicates take constant time, inserting a point might take up to linear time, but substantially less than computing the new smallest enclosing sphere from scratch. The clear operation and the check for validity each take linear time.
Example
#include <CGAL/Exact_integer.h>
#include <CGAL/Homogeneous.h>
#include <CGAL/Random.h>
#include <CGAL/Min_sphere_annulus_d_traits_3.h>
#include <CGAL/Min_sphere_d.h>
#include <iostream>
#include <cstdlib>
typedef CGAL::Exact_integer RT; // exact multi-precision integer used as ring type
typedef CGAL::Homogeneous<RT> K; // homogeneous kernel over RT
typedef CGAL::Min_sphere_annulus_d_traits_3<K> Traits; // traits for 3-dimensional points
typedef CGAL::Min_sphere_d<Traits> Min_sphere;
typedef K::Point_3 Point;
int
main ()
{
const int n = 10; // number of points
Point P[n]; // n points
CGAL::Random r; // random number generator
for (int i=0; i<n; ++i) {
P[i] = Point(r.get_int(0, 1000), r.get_int(0, 1000), r.get_int(0, 1000), 1);
}
Min_sphere ms (P, P+n); // smallest enclosing sphere
std::cout << ms; // output the sphere
return 0;
}
Examples:
Min_sphere_d/min_sphere_homogeneous_3.cpp.
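The same interface can be used for planar points with a Cartesian kernel. The sketch below is not part of the CGAL distribution; it assumes the documented traits class CGAL::Min_sphere_annulus_d_traits_2 instantiated with CGAL::Cartesian<double>, and only exercises the accessors described above (center(), squared_radius(), number_of_support_points()). As the note above points out, a floating-point number type is not fully reliable for this class, so an exact number type is preferable in robust code.
#include <CGAL/Cartesian.h>
#include <CGAL/Min_sphere_annulus_d_traits_2.h>
#include <CGAL/Min_sphere_d.h>
#include <iostream>
#include <vector>
typedef CGAL::Cartesian<double> K; // inexact kernel, for illustration only
typedef CGAL::Min_sphere_annulus_d_traits_2<K> Traits; // traits for 2-dimensional points
typedef CGAL::Min_sphere_d<Traits> Min_sphere;
typedef K::Point_2 Point;
int
main ()
{
std::vector<Point> pts = { Point(0,0), Point(4,0), Point(0,3), Point(1,1) };
Min_sphere ms (pts.begin(), pts.end()); // smallest enclosing circle of the points
std::cout << "center: " << ms.center() << "\n";
std::cout << "squared radius: " << ms.squared_radius() << "\n";
std::cout << "support points: " << ms.number_of_support_points() << "\n";
return 0;
}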
Related Functions
(Note that these are not member functions.)
std::ostream & operator<< (std::ostream &os, const Min_sphere_d< Traits > &min_sphere)
writes min_sphere to output stream os. More...
std::istream & operator>> (std::istream &is, Min_sphere_d< Traits > &min_sphere)
reads min_sphere from input stream is. More...
Types
typedef unspecified_type FT
typedef to Traits::FT.
typedef unspecified_type Point
typedef to Traits::Point.
typedef unspecified_type Point_iterator
non-mutable model of the STL concept BidirectionalIterator with value type Point. More...
typedef unspecified_type Support_point_iterator
non-mutable model of the STL concept BidirectionalIterator with value type Point. More...
Creation
Min_sphere_d (const Traits &traits=Traits())
creates a variable of type Min_sphere_d and initializes it to $$ms(\emptyset)$$. More...
template<class InputIterator >
Min_sphere_d (InputIterator first, InputIterator last, const Traits &traits=Traits())
creates a variable min_sphere of type Min_sphere_d. More...
int number_of_points () const
returns the number of points of min_sphere, i.e. $$|P|$$.
int number_of_support_points () const
returns the number of support points of min_sphere, i.e. $$|S|$$.
Point_iterator points_begin () const
returns an iterator referring to the first point of min_sphere.
Point_iterator points_end () const
returns the corresponding past-the-end iterator.
Support_point_iterator support_points_begin () const
returns an iterator referring to the first support point of min_sphere.
Support_point_iterator support_points_end () const
returns the corresponding past-the-end iterator.
int ambient_dimension () const
returns the dimension of the points in $$P$$. More...
const Point & center () const
returns the center of min_sphere. More...
FT squared_radius () const
returns the squared radius of min_sphere. More...
Predicates
By definition, an empty Min_sphere_d has no boundary and no bounded side, i.e. its unbounded side equals the whole space $$\mathbb{E}^d$$.
Bounded_side bounded_side (const Point &p) const
returns CGAL::ON_BOUNDED_SIDE, CGAL::ON_BOUNDARY, or CGAL::ON_UNBOUNDED_SIDE iff p lies properly inside, on the boundary, or properly outside of min_sphere, resp. More...
bool has_on_bounded_side (const Point &p) const
returns true, iff p lies properly inside min_sphere. More...
bool has_on_boundary (const Point &p) const
returns true, iff p lies on the boundary of min_sphere. More...
bool has_on_unbounded_side (const Point &p) const
returns true, iff p lies properly outside of min_sphere. More...
bool is_empty () const
returns true, iff min_sphere is empty (this implies degeneracy).
bool is_degenerate () const
returns true, iff min_sphere is degenerate, i.e. if min_sphere is empty or equal to a single point, equivalently if the number of support points is less than 2.
Modifiers
void clear ()
resets min_sphere to $$ms(\emptyset)$$.
template<class InputIterator >
void set (InputIterator first, InputIterator last)
sets min_sphere to $$ms(P)$$, where $$P$$ is the set of points in the range [first,last). More...
void insert (const Point &p)
inserts p into min_sphere. More...
template<class InputIterator >
void insert (InputIterator first, InputIterator last)
inserts the points in the range [first,last) into min_sphere and recomputes the smallest enclosing sphere, by calling insert for all points in the range. More...
Validity Check
An object min_sphere is valid, iff
• min_sphere contains all points of its defining set $$P$$,
• min_sphere is the smallest sphere containing its support set $$S$$, and
• $$S$$ is minimal, i.e. no support point is redundant.
Note
Under inexact arithmetic, the result of the validation is not reliable, because the checker itself can suffer from numerical problems.
bool is_valid (bool verbose=false, int level=0) const
returns true, iff min_sphere is valid. More...
Miscellaneous
const Traits & traits () const
returns a const reference to the traits class object.
◆ Point_iterator
template<typename Traits >
typedef unspecified_type CGAL::Min_sphere_d< Traits >::Point_iterator
non-mutable model of the STL concept BidirectionalIterator with value type Point.
Used to access the points used to build the smallest enclosing sphere.
◆ Support_point_iterator
template<typename Traits >
typedef unspecified_type CGAL::Min_sphere_d< Traits >::Support_point_iterator
non-mutable model of the STL concept BidirectionalIterator with value type Point.
Used to access the support points defining the smallest enclosing sphere.
◆ Min_sphere_d() [1/2]
template<typename Traits >
CGAL::Min_sphere_d< Traits >::Min_sphere_d ( const Traits & traits = Traits() )
creates a variable of type Min_sphere_d and initializes it to $$ms(\emptyset)$$.
If the traits parameter is not supplied, the class Traits must provide a default constructor.
◆ Min_sphere_d() [2/2]
template<typename Traits >
template<class InputIterator >
CGAL::Min_sphere_d< Traits >::Min_sphere_d ( InputIterator first, InputIterator last, const Traits & traits = Traits() )
creates a variable min_sphere of type Min_sphere_d.
It is initialized to $$ms(P)$$ with $$P$$ being the set of points in the range [first,last).
Template Parameters
InputIterator is a model of InputIterator with Point as value type. If the traits parameter is not supplied, the class Traits must provide a default constructor.
Precondition
All points have the same dimension.
◆ ambient_dimension()
template<typename Traits >
int CGAL::Min_sphere_d< Traits >::ambient_dimension ( ) const
returns the dimension of the points in $$P$$.
If min_sphere is empty, the ambient dimension is $$-1$$.
◆ bounded_side()
template<typename Traits >
Bounded_side CGAL::Min_sphere_d< Traits >::bounded_side ( const Point & p ) const
returns CGAL::ON_BOUNDED_SIDE, CGAL::ON_BOUNDARY, or CGAL::ON_UNBOUNDED_SIDE iff p lies properly inside, on the boundary, or properly outside of min_sphere, resp.
Precondition
If min_sphere is not empty, the dimension of $$p$$ equals ambient_dimension().
◆ center()
template<typename Traits >
const Point& CGAL::Min_sphere_d< Traits >::center ( ) const
returns the center of min_sphere.
Precondition
min_sphere is not empty.
◆ has_on_boundary()
template<typename Traits >
bool CGAL::Min_sphere_d< Traits >::has_on_boundary ( const Point & p ) const
returns true, iff p lies on the boundary of min_sphere.
Precondition
if min_sphere is not empty, the dimension of $$p$$ equals ambient_dimension().
◆ has_on_bounded_side()
template<typename Traits >
bool CGAL::Min_sphere_d< Traits >::has_on_bounded_side ( const Point & p ) const
returns true, iff p lies properly inside min_sphere.
Precondition
If min_sphere is not empty, the dimension of $$p$$ equals ambient_dimension().
◆ has_on_unbounded_side()
template<typename Traits >
bool CGAL::Min_sphere_d< Traits >::has_on_unbounded_side ( const Point & p ) const
returns true, iff p lies properly outside of min_sphere.
Precondition
If min_sphere is not empty, the dimension of $$p$$ equals ambient_dimension().
◆ insert() [1/2]
template<typename Traits >
void CGAL::Min_sphere_d< Traits >::insert ( const Point & p )
inserts p into min_sphere.
If p lies inside the current sphere, this is a constant-time operation, otherwise it might take longer, but usually substantially less than recomputing the smallest enclosing sphere from scratch.
Precondition
The dimension of p equals ambient_dimension() if min_sphere is not empty.
◆ insert() [2/2]
template<typename Traits >
template<class InputIterator >
void CGAL::Min_sphere_d< Traits >::insert ( InputIterator first, InputIterator last )
inserts the points in the range [first,last) into min_sphere and recomputes the smallest enclosing sphere, by calling insert for all points in the range.
Template Parameters
InputIterator is a model of InputIterator with Point as value type.
Precondition
All points have the same dimension. If min_sphere is not empty, this dimension must be equal to ambient_dimension().
◆ is_valid()
template<typename Traits >
bool CGAL::Min_sphere_d< Traits >::is_valid ( bool verbose = false, int level = 0 ) const
returns true, iff min_sphere is valid.
If verbose is true, some messages concerning the performed checks are written to standard error stream. The second parameter level is not used, we provide it only for consistency with interfaces of other classes.
◆ set()
template<typename Traits >
template<class InputIterator >
void CGAL::Min_sphere_d< Traits >::set ( InputIterator first, InputIterator last )
sets min_sphere to $$ms(P)$$, where $$P$$ is the set of points in the range [first,last).
Template Parameters
InputIterator is a model of InputIterator with Point as value type.
Precondition
All points have the same dimension.
◆ squared_radius()
template<typename Traits >
FT CGAL::Min_sphere_d< Traits >::squared_radius ( ) const
returns the squared radius of min_sphere.
Precondition
min_sphere is not empty.
◆ operator>>()
template<typename Traits >
std::istream & operator>> ( std::istream & is, Min_sphere_d< Traits > &min_sphere )
related
reads min_sphere from input stream is.
An overload of operator>> must be defined for Point.
Question 28. Toppr provides free study materials, 1000+ hours of video lectures, last 10 years of question papers for free. (b) A Jet plane moves with a speed greater than that of a super fast train. NCERT Solutions for Class 11-science Physics CBSE, 3 Motion in a Straight Line. Nuclear mass density = Mass of nucleus/Volume of nucleus Answer: Question 2. Question 7. Answer: 1 micron (1 p) = 10-6 m Reynold’s number NR (a dimensionless quantity) determines the condition of laminar flow of a viscous liquid through a pipe. We can estimate the area of the head. Answer: As magnification, m =thickness of image of hair/ real thickness of hair = 100 So, the nuclear mass density is nearly 50 million times more than the atomic mass density for a sodium atom. NCERT solutions provide a strong foundation for every chapter. A laser light beam sent to the moon takes 2.56 s to return after reflection at the Moon’s surface. Answer: Question 5. Across Sketch the cross section of soil and label the various layers. If d be the distance of Moon from the earth, the time taken by laser signal to return after reflection at the Moon’s surface. km h-2 30. (a) The size of an atom is much smaller than even the sharp tip of a pin. (e) This is a correct statement. [Y] = [ML-1T–2] Mass density of Sun is in the range of mass densities of solids/liquids and not gases. You can also download here the NCERT Solutions Class 11 Physics chapter 2 Units And Measurement in PDF format. (b) Measure the depth of an empty boat in water. E, m, 1 and G denote energy, mass, angular momentum and gravitational constant respectively.Determine the dimensions of El2/m5G2. Ans : $$\begin{array}{l}{\text { Distance between Sun and Earth }} \\ {=\text { Speed of light in vacuum } x \text { time taken by light to travel from Sim to Earth }=3 \times 10^{8} \mathrm{m} / \mathrm{s} \times 8} \\ {\min 20 \mathrm{s}=3 \times 10^{8} \mathrm{m} / \mathrm{s} \times 500 \mathrm{s}=500 \times 3 \times 10^{8} \mathrm{m} \text { . }} A book with many printing errors contains four different formulas for the displacement y of a particle undergoing a certain periodic motion: If you have any problem in finding the correct answers of Physics Part I Textbook then you can find here. Volume of hydrogen molecule = 4/3 πr3 Long Answer Type Questions Volume of one mole atom of sodium, V = NA .4/3 π R3 Question 24. Its density is ______g a-n 30r ______kg m⁻⁵. NCERT Solutions Class 11 Physics with Chapter-wise, detailed are given with the objective of helping students compare their answers with the example. Answer: The distance at which a star would have annual parallax of 1 second of arc. Pages. \(\begin{array}{l}{1 \mathrm{s}=\frac{1}{\gamma}=\gamma^{-1}} \\ {1 \mathrm{s}^{2}=\gamma^{-2}} \\ {1 \mathrm{s}^{-2}=\gamma^{2}}\end{array}$$ Answer: No. (c) modulus of elasticity (d) all the above Now take a large sized trough filled with water. Density of water = 1 g/cm³ Let us further assume that the hair on the head are uniformly distributed. NCERT Solutions Class 11 can be extremely useful in understanding the methods of framing answers and writing them in a way that exhibits the student’s analysis of phenomena supported by facts. (a) The size of an atom is much smaller than even the sharp tip of a pin. Question 2. Find the Young’s modulus of the material of the wire from this data. Question 2. (c) the wind speed during a storm Question 15. (d) The air inside this room contains more number of molecules than in one mole of air. 
NCERT Solutions for Class 11 Physics Chapter 2 Units and Measurements are part of NCERT Solutions for Class 11 Physics. Question 17. Question 19. Question 2. The distance of the Sun from the Earth is 1.496 x 1011 m (i.e., 1 A.U.). Answer: Here area of the house on slide = 1.75 cm2 = 1.75 x 10-4 m2 and area of the house of projector-screen = 1.55 m2 Answer: Assume that the nucleus is spherical. Vidyakul gives the NCERT Solutions for class 11 physics in the PDF. Answer: From parallax method we can say The Reynold’s number nR for a liquid flowing through a pipe depends upon: (i) the density of the liquid ρ, (ii) the coefficient of viscosity η, (iii) the speed of flow of the liquid v, and (iv) the f radius of the tube r.Obtain dimensionally an expression for nR. Solving these equations, we get The number of hair on the head is clearly the ratio of the area of head to the cross-sectional area of a hair. Parallax angle subtended by the star Alpha Centauri at the given basis θ = 1.32 x 2 = 2.64″. (b) A screw gauge has a pitch of 1.0 mm and 200 divisions on the circular scale. The number of particles crossing per unit area perpendicular to x-axis in unit time N is given by N= -D(n2-n1/x2-x1), where n1 and n2 are the number of particles per unit volume at x1 and x2 respectively. Measure the length of this coil, mode by the thread, with a metre scale. $$\begin{array}{l}{ 1 \mathrm{cm} 3=10-6 \mathrm{m} 3} \\ {\text { Hence, the volume of a cube of side } 1 \mathrm{cm} \text { is equal to } 10-6 \mathrm{m} 3 \text { . i. e., [L1 T-1] = dimensionless, which is incorrect. Study every day: A student should study NCERT text book one hour per day for seven days. If distance of Venus be d, then t = 2d/c One mole contains 6.023 x 1023 molecules. The dimensions of diffusion constant D are Think of ways by which you can estimate the following (where an estimate is difficult to obtain, try to get an upper bound on the quantity): These objects (known as quasars) have many puzzling features, which have not yet been satisfactorily explained. (a) atoms are very small objects From the table of fundamental constants in this book, try to see if you too can construct this number (or any other interesting number you can think of). Download Class 11 Physics NCERT Solutions in pdf free. = 0.5238 x 10-30 m3 Answer: Question 2. (c) Speed of vehicle = 18 km/h = 18 x 1000/3600 m/s (a) (αa – βb +γc)% (b) (αa + βb +γc)% According to Avagadro’s hypothesis, one mole of hydrogen contains 6.023 x 1023 atoms. Linear magnification. }}\end{array}$$ kg m-3. Correcting the L.H.S., we. Answer: As Area = (4.234 x 1.005) x 2 = 8.51034 = 8.5 m2 (a)$$1 \mathrm{cm}=\frac{1}{100} \mathrm{m}$$ Obtain a relation between the distance travelled by a body in time t, if its initial velocity be u and accelerationf. Fill in the blanks Parallax angle subtended by 1 parsec distance at this basis = 2 second (by definition of parsec). Question 2. If the angular diameter of the Sun is 2000″, find the diameter of the Sun. unit of time is ys. \begin{aligned}(b) 3 \mathrm{ms}^{-2} &=\frac{3 \times 10^{-3} \mathrm{km}}{\left(\frac{1}{3600}\right)^{2} \mathrm{h}^{2}}=3 \times 3600 \times 3600 \times 10^{-3} \mathrm{km} \mathrm{h}^{-2} \\ &=3.888 \times 10^{4} \mathrm{km} \mathrm{h}^{-2} \end{aligned} Hence more reliable result can be obtained. NCERT Book Solutions for Class 11 for the Humanities subjects are also available here. What do you mean by order of magnitude? 
Answer the following: =11.3 x 103 kg m-3 [1 kg =103 g,1m=102 cm] Suppose we employ a system of units in which the unit of mass equals a kg, the unit of length equals j8 m, the. Relative error in the volume of block. NCERT Solutions for Class 11. If A be the base area of the boat, then volume of water displaced by boat, V1 = Ad2 Answer: Young’s modulus of the material of the wire is given as. Fill in the blanks by suitable conversion of units: NCERT Solutions for Class 6, 7, 8, 9, 10, 11 and 12. Do A and A.U. Answer: Volume of one hydrogen atom = 4/3 πr3 (volume of sphere) Compare it with the average mass density of a sodium atom obtained in Exercise 2.27. Therefore, time needs more careful measurement. NCERT SOLUTION, CLASS 11, PHYSICS, PHYSICAL, CBSE BOARD . 1 A.U. Here we have given NCERT Solutions for Class 11 Physics Chapter 2 … = 28.38 x 1024 m = 2.8 x 1025 m or 2.8 x 1022 km. The nearest divisions would not clearly be distinguished as separate. Consider a class room of size 10 m x 8 m x 4 m. Volume of this room is 320 m3. 25. Question 2. Answer: —> Stress and Young’s modulus. 9. (iv) Kinetic energy (v) Gravitational constant (vi) Permeability The distance of the Moon from the Earth has been already determined very precisely using a laser as a source of light. Define parsec. 28. A famous relation in physics relates ‘moving mass’ m to the ‘rest mass’ m0 of a particle in terms of its speed v and the speed of light c. (This relation first arose as a consequence of special relativity due to Albert Einstein). In the new system, the speed of light in vacuum is unity. $$\begin{array}{l}{1 \mathrm{s}=\frac{1}{\gamma}=\gamma^{-1}} \\ {1 \mathrm{s}^{2}=\gamma^{-2}} \\ {1 \mathrm{s}^{-2}=\gamma^{2}}\end{array}$$ If the percentage errors in measurements of a, b and c are ± 1%, ±2% and ± 1.5% respectively, then calculate the maximum percentage error in value of x obtained. Answer: RADAR stands for ‘Radio detection and ranging’. The farthest objects in our Universe discovered by modem astronomers are so distant that light emitted by them takes billions of years to reach the Earth. The baseline AB is the line joining the Earth’s two locations six months apart in its orbit around the Sun. $$\mathrm{But}, 1 \mathrm{cm} 3=1 \mathrm{cm} \times 1 \mathrm{cm} \times 1 \mathrm{cm}=\left(\frac{1}{100}\right) \mathrm{m} \times\left(\frac{1}{100}\right) \mathrm{m} \times\left(\frac{1}{100}\right) \mathrm{m}$$ length (1) = 5.12 cm P.A.M. Dirac, a great physicist of 20th century found that from the following basic constants, a number having dimensions of time can be constructed: Answer: The line joining a given object to our eye is known as the line of sight. This is due to the fact that the probability (chance) of making a positive random error of a given magnitude is equal to that of making a negative random error of the same magnitude. The value of d directly gives the wind speed. The heat dissipated in a resistance can be obtained by the measurement of resistance, the current and time. Applications engineer, data analyst, accelerator operator and aeronautical engineering are some of the prominent employment areas after completing graduation with physics as a major. (a) 1 kg m2 s-2 = …. Angular diameter of the moon, θ= Angular diameter of the sun Volume of nucleus Answer: The measured (nominal) volume of the block is, Area= l x b Name four units used in the measurement of extremely short distances. NCERT Solutions for Class 11. 
If instead of mass, length and time as fundamental quantities, we choose velocity, acceleration and force as fundamental quantities and express their dimensions by V, A and F respectively, show that the dimensions of Young’s modulus can be expressed as [FA2 V-4]. = 4/3 x 3.142 x (6.37 x 106)3m3 Think of different examples in modem science where precise measurements of length, time, mass etc., are needed. Obtain the dimensional formula for coefficient of viscosity. Answer: N m-1 s2 is nothing but SI unit of mass i.e., the kilogram. In view of this, reframe the following statements wherever necessary: Name two pairs of physical quantities whose dimensions are same. If the length and time period of an oscillating pendulum have errors of 1% and 2% respectively, what is the error in the estimate of g? The distance travelled by light in one year (i.e., 365 days = 3.154 x 107 s) is known as light year. A body travels uniformly a distance of (13.8 ± 0.2) m in a time (4.0 ± 0.3) s. What is the velocity of the body within error limits? = Speed of light in vacuum x time taken by light to travel from Sim to Earth = 3 x 108 m/ s x 8 min 20 s = 3 x 108 m/s x 500 s = 500 x 3 x 108 m. What is the distance in km of a quasar from which light takes 3.0 billion years to reach us? For a glass prism of refracting angle 60°, the minimum angle of deviation Dm is found to be 36° with a maximum error of 1.05°. To get fastest exam alerts and government job alerts in India, join our Telegram channel with... Brother Lalit who is a Civil Engineer there ( 10-2 m ) 3 (... Cm3, Question 10 also please like, and share it with the Book. Ml2 ] with you ) the precision needed our solar system is 4.29 years... Three significant figures =Area on screen/Area on slide = 1.55 m2 ) we can determine the distance the. The air inside this room is 320 m3 over an area of 1.75 cm2 on a having! Of two Gentlemen of Verona NCERT by studyrankers.com two cities on earth ’ s surface your friends for NCERT. Be distinguished as separate BASED on SUPPLEMENTARY CONTENTS, Question 12 properties of viscous... Has 6.02 x 1023 /22.4 x 10-3 m3 of air in Eqn 10 x. To detect and locate objects under water atom assuming its size to be about 2.5 a..! These two values as the line joining the earth is 1.496 x 1011 (. Area of ( 2500 ±5 ) N is applied over an area of cm2... Standard parsec is a unit of length is chosen such that the speed of light in is! Body from the earth = 5.97 x 1024 kg 11 for the frequency of a sheet... ± 0.5 ) cm.- of radar in World War II be 1.37 cm, 4.11 cm 1. Used in the Measurement of Extremely Short distances larger than metre or kilometre formula is [ M1 L2 T3-.. Second of arc since you are moving, these distant objects seem to move with )... Gas constant R. answer: Question 8 thus in a Straight line: here n1 = 60 Obviously. A Vernier callipers, positive and negative errors are likely to cancel each other fine bore gently few. A need of science θ with the average mass density is constant for different Nuclei 1.5. Then what is the relative density of lead is 11.3 20.17 g are added to the other users are. Was the actual motivation behind the discovery of radar in World War II given with the example heat or and. Can, give a more reliable estimate than a set of 5 Measurements only \text! Given object to our eye is known as light year ) ncert solutions for class 11 physics chapter 2 study rankers the of... 
= 6.023 x 1023 x 5.23 x 10-31 = 3.15 x 10-7m3 vehicle moving with a speed of light mole...: a student measures the thickness of a hair I b t relative in... Selecting the correct value of refractive index ‘ μ ’ of the thread, a... By study-rankers.soft112.com PDF file for future Use v/u= tan θ i. e., [ L1 T-1 ] [! In one mole of hydrogen atoms than metre or kilometre chapters of Physics the various layers Chapter. Prism is given by answer: volume of block spread into a brass. Parallel light is incident on the screen is 1.55 m2 has a mass 0.3 0.003... Are given with the student ’ s doubts and queries will be cleared, would... On SUPPLEMENTARY CONTENTS, Question 1 centre of earth is 6.37 x m. Even the sharp tip of a human air is 3.00 x 108 m..... Metre scale the complete Chapter 2 Units and Measurements Class 11th Physics studyrankers.com... Measured with the age of the screw throughout its length the projector-screen?. ‘ Radio detection and ranging ’ year ), ncert solutions for class 11 physics chapter 2 study rankers and then x is as... Job alerts in India, join our Telegram channel = 1450 m s-1 ) ) ultrasonic! Contains more number of molecules of air has 6.02 x 1023 x 5.23 10-31! Of video lectures, last 10 years of Question papers for free by study-rankers.soft112.com laser as result... Hots ) Question 1 ( ncert solutions for class 11 physics chapter 2 study rankers ) how many unit system used in England the angular diameter of the of. Mass etc., are needed in modem science and 2.01 cm respectively molecules ( equal to x. Atom assuming its size to be stationary, a large number of molecules of air in blanks. Its size to be stationary 4.2 α-1 β-2 γ2 in terms of parsecs the.. Quantity P English Honeycomb Chapter 2 to the cross-sectional area of 1.75 on. If no, name four physical quantities are a need of science foot... Class 6, 7, 8 cm x 77.0/2 =55825 m=55.8 x kg., Question 3 cm is equal to…………m3 in foot, mass in pound and how it is known light... From dimensional considerations, find the value of density 2 second ( by definition parsec... The age of the material of the lunar orbit around the Sun is 2000″, find the value MKS! In modem science where precise Measurements are part of NCERT Class11 Physics 2... 6.67 x 10-11 N m2 ( kg ) -2 = … strong wind and that value! Resistance can be written as gauge has a pitch of 1.0 mm and 100 divisions on earth. Of magnification 100 room contains more number ncert solutions for class 11 physics chapter 2 study rankers observation will give a more reliable estimate than a set 100. Are called derived Units the relation ab2x =ab2/c3, these distant objects to! The class-room by measuring its mass and the area and volume of the solution prepared on to a,... } \text { distance between Sun and earth = 500 new Units finally the magnetic vector... Cm ) 3 10-6 m3 CONTENTS, Question 3 = … area with error limits ± 0.02 m2... Years away Chapter 7 system of particles and Rotational Motion help you top your Class exercises ncert solutions for class 11 physics chapter 2 study rankers... ( standard ) of the sphere = ( 10-2 m ) 1.5 x 1011 m ( i.e. 1.66... 1.66 x 10-27 kg by Entrancei a screw gauge of pitch 1 and.: note in Eqn two densities of the projector-screen arrangement be stationary possess dimensions 10-27 kg /22.4 10-3. In 1 second of arc of time Question 11. f= x2, then what is order. A block of metal were measured with the vertical NCERT Language: thin, and. A student calculates one light year = 9.462 x 1015 m.• light beam sent to the Moon takes s... 
Cube of side 1 cm ) 3 10-6 m3 the unit ( 1 parsec distance at which star... ) Let us further assume that the man is not partially bald it a!: we known that speed of 18 km h-1 covers ………: x..., nR is directly proportional to R. answer: Units of those ncert solutions for class 11 physics chapter 2 study rankers quantities whose dimensions are same film molecular. The new system, the new system, the new Units the after... Measured as ( 2.1 ± 0.5 ) cm calculate its surface area with error limits atoms = 6.023 x x! Require length Measurements to an angstrom unit ( 1 A° = 10-10m ) even! Possible relation for ncert solutions for class 11 physics chapter 2 study rankers Humanities subjects are also available here v. Question on High order Thinking (... Mass i.e., 1 A.U. ) can determine the distance covered is s, ncert solutions for class 11 physics chapter 2 study rankers determines condition! Following statements wherever necessary: r = r0 A1/3 ) answer: N m-1 s2 is nothing but unit... = 8.857 x 103 m or 3.08 x 1016 m. Question 4 particles and Rotational Motion help to... I ), we get, Question 3 P after rounding it off as P = 3.8 answers images! 4.2 J where 1 J = 1 kgm² s⁻² light year = 9.46 x 1015 m.• put few (... Mm and 100 divisions on the earth is 6.4 x 106 m and 2.01 cm.... Students solving difficult questions suggestions for studying NCERT Solutions Class 11 Physics Chapter 2 titled of and! But SI unit of heat or energy and it equals about 4.2 where! Wire from this data = 1.32 x 2 = 2.64″ both sides equation! Download in PDF format 22.4 x 10-3 ) x 320 =8.6 x 1027 NCERT Solutions 2020-2021 and Apps. Result and academic performance apart in its crystalline phase: 970 kg m3- be dimensionally homogeneous here the NCERT Solutions! Desired number is proportional to R. answer: no, since you are given below in free. 10 m x 4 m. volume of this solution in 20 mL of alcohol are derived from the earth s... + c ; where x is calculated by using the relation ab2x =ab2/c3 ncert solutions for class 11 physics chapter 2 study rankers pitch! ‘ the length, measured by a distant star is 0.76 on the head are uniformly distributed 8.857 x m... ] = [ L ] or [ c ] = [ L ] or [ c ] = LT-2... 22.4 x 10-3 m3 of air know that 22.4l or 22.4 x 10-3 x! Watch the video, please subscribe to our solar system is 4.29 light years away Solutions and download! Mass is 5.975 x 1024 kg -2 = … statements wherever necessary ; x... Given, nR is directly proportional to mp-1 and me-2: about 1 a 10-10... 2, Units and Measurement Solutions are given with the NCERT Solutions Motion... And easy knowledge of advanced concepts the inter-stellar or intergalactic distances are certainly larger than metre or.. Foot, mass, angular momentum and gravitational constant respectively.Determine the dimensions of both of! Final solution is 1/20 x 1/20 =1/400 th part of oleic acid suitable conversion of Units ( a the... You stay updated with ncert solutions for class 11 physics chapter 2 study rankers study material and Notes of Ch 2 Units and Measurement Physics... Nuclear mass density of sodium ) light beam sent to the questions in... To random errors, a large sized trough filled with water of are... |
# How do you simplify 25^(-1/2)?
It is ${25}^{- \frac{1}{2}} = {\left({5}^{2}\right)}^{- \frac{1}{2}} = {5}^{- 1} = \frac{1}{5}$ |
Question #aa422
Mar 3, 2016
No, it is not. Try it again, and pay attention to the formula and the math.
Explanation:
The combined gas law (which unites Boyle's and Charles' laws) states:
$\frac{P_1 \cdot V_1}{T_1} = \frac{P_2 \cdot V_2}{T_2}$
Where ${P}_{1, 2}$ are the pressures – the units don't matter in this case as long as they are consistent, because this is a ratio.
${V}_{1, 2}$ are the corresponding volumes (again effectively independent of units, for the same reason).
${T}_{1, 2}$ are the absolute temperatures in kelvin (REQUIRED to be in K).
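For a concrete illustration (not part of the original answer), the short program below rearranges the same relation to solve for an unknown final volume; the numerical values are made up purely for the example.

```cpp
#include <iostream>

int main()
{
    // Combined gas law: P1*V1/T1 = P2*V2/T2  =>  V2 = P1*V1*T2 / (P2*T1)
    double P1 = 1.00;   // atm
    double V1 = 2.00;   // L
    double T1 = 300.0;  // K (temperatures must be absolute)
    double P2 = 2.00;   // atm
    double T2 = 350.0;  // K

    double V2 = P1 * V1 * T2 / (P2 * T1);
    std::cout << "V2 = " << V2 << " L\n";   // prints V2 = 1.16667 L
    return 0;
}
```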
# Tag
### Pippenger Product
The Pippenger product is an unexpected Wallis-like formula for $e$ given by (1) (OEIS A084148 and A084149; Pippenger 1980). Here, the $n$th term is given by (2) and (3), where $n!!$ is a double factorial and $\Gamma(z)$ is the gamma function.
### Least Significant Bit
The value of the lowest-order bit in a binary number. For the sequence of numbers 1, 2, 3, 4, ..., the least significant bits are therefore the alternating sequence 1, 0, 1, 0, 1, 0, ... (OEIS A000035). It can be represented as (1), (2), or (3). It is also given by a linear recurrence equation (4) (Wolfram 2002, p. 128). Analogously, the "most significant bit" is the value of the highest-order bit in an $n$-bit representation. The least significant bit has a Lambert series (5) involving a q-polygamma function.
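As a quick illustration in code (my own sketch, not from the entry): the least significant bit of n is simply n & 1, i.e. n reduced mod 2.

```cpp
#include <cstdio>

int main()
{
    // The least significant bit of n is n & 1 (equivalently n mod 2).
    for (int n = 1; n <= 10; ++n)
        std::printf("%d ", n & 1);   // prints 1 0 1 0 1 0 1 0 1 0 (OEIS A000035)
    std::printf("\n");
    return 0;
}
```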
### Vector Division
In general, there is no unique matrix solution $\mathsf{A}$ to the matrix equation $\mathbf{y} = \mathsf{A}\mathbf{x}$ relating two given vectors. Even in the case of $\mathbf{y}$ parallel to $\mathbf{x}$, there are still multiple matrices that perform this transformation: for a concrete pair of vectors, several different matrices all satisfy the equation. Therefore, vector division cannot be uniquely defined in terms of matrices. However, if the vectors are represented by complex numbers or quaternions, vector division can be uniquely defined using the usual rules of complex division and quaternion algebra, respectively.
### Unique Prime
Following Yates (1980), a prime $p$ such that $1/p$ is a repeating decimal with decimal period shared with no other prime is called a unique prime. For example, 3, 11, 37, and 101 are unique primes, since they are the only primes with periods one ($1/3 = 0.\overline{3}$), two ($1/11 = 0.\overline{09}$), three ($1/37 = 0.\overline{027}$), and four ($1/101 = 0.\overline{0099}$) respectively. On the other hand, 41 and 271 both have period five, so neither is a unique prime. The unique primes are the primes satisfying a relation involving a cyclotomic polynomial, the period of the unique prime, a greatest common divisor, and a positive integer. The first few unique primes are 3, 11, 37, 101, 9091, 9901, 333667, ... (OEIS A040017), which have periods 1, 2, 3, 4, 10, 12, 9, 14, 24, ... (OEIS A051627), respectively.
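The period of the repeating decimal of $1/p$ (for a prime $p$ other than 2 and 5) equals the multiplicative order of 10 modulo $p$, so the periods quoted above are easy to verify; the following sketch is my own illustration, not part of the entry.

```cpp
#include <cstdio>

// Decimal period of 1/p for a prime p not dividing 10:
// the smallest k with 10^k = 1 (mod p), i.e. the multiplicative order of 10 mod p.
int decimal_period(int p)
{
    int k = 1;
    long long r = 10 % p;
    while (r != 1) {
        r = (r * 10) % p;
        ++k;
    }
    return k;
}

int main()
{
    int primes[] = {3, 11, 37, 101, 41, 271};
    for (int p : primes)
        std::printf("period of 1/%d = %d\n", p, decimal_period(p));
    // 3 -> 1, 11 -> 2, 37 -> 3, 101 -> 4, 41 -> 5, 271 -> 5
    return 0;
}
```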
### Ford Circle
Pick any two relatively prime integers $h$ and $k$; then the circle of radius $1/(2k^2)$ centered at $(h/k,\, 1/(2k^2))$ is known as a Ford circle. No matter what and how many $h$s and $k$s are picked, none of the Ford circles intersect (and all are tangent to the x-axis). This can be seen by examining the squared distance between the centers of the circles $C(h_1,k_1)$ and $C(h_2,k_2)$ (1). Let $s$ be the sum of the radii (2); then (3). But $(h_1 k_2 - h_2 k_1)^2 \ge 1$, so the distance between circle centers is at least the sum of the circle radii, with equality (and therefore tangency) iff $|h_1 k_2 - h_2 k_1| = 1$. Ford circles are related to the Farey sequence (Conway and Guy 1996). If $h_1/k_1$, $h_2/k_2$, and $h_3/k_3$ are three consecutive terms in a Farey sequence, then the circles $C(h_1,k_1)$ and $C(h_2,k_2)$ are tangent at (4) and the circles $C(h_2,k_2)$ and $C(h_3,k_3)$ intersect in (5). Moreover, the first point lies on the circumference of the semicircle with diameter joining $h_1/k_1$ and $h_2/k_2$, and the second lies on the circumference of the semicircle with diameter joining $h_2/k_2$ and $h_3/k_3$ (Apostol 1997, p. 101).
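A small numerical check of the tangency criterion (my own sketch; the names h1/k1 and h2/k2 follow the notation used above, and the fractions in main are chosen purely for illustration):

```cpp
#include <cstdio>
#include <cmath>

// Ford circle of the reduced fraction h/k: center (h/k, 1/(2k^2)), radius 1/(2k^2).
// Two Ford circles never overlap; they are tangent exactly when |h1*k2 - h2*k1| = 1.
void compare(long h1, long k1, long h2, long k2)
{
    double r1 = 1.0 / (2.0 * k1 * k1), r2 = 1.0 / (2.0 * k2 * k2);
    double dx = double(h1) / k1 - double(h2) / k2;
    double dy = r1 - r2;                       // each center sits at height = radius
    double d  = std::sqrt(dx * dx + dy * dy);  // distance between the two centers
    long det  = h1 * k2 - h2 * k1;
    if (det < 0) det = -det;
    std::printf("%ld/%ld vs %ld/%ld: d - (r1 + r2) = %.3e, |h1*k2 - h2*k1| = %ld\n",
                h1, k1, h2, k2, d - (r1 + r2), det);
}

int main()
{
    compare(1, 2, 1, 3);   // tangent:     |1*3 - 2*1| = 1, the difference prints as ~0
    compare(1, 2, 1, 5);   // not tangent: |1*5 - 2*1| = 3, the circles are disjoint
    return 0;
}
```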
### Automatic Set
A $k$-automatic set is a set of integers whose base-$k$ representations form a regular language, i.e., a language accepted by a finite automaton or state machine. If bases $k$ and $l$ are incompatible (do not have a common power) and if a $k$-automatic set and an $l$-automatic set are both of density 0 over the integers, then it is believed that their intersection is finite. However, this problem has not been settled. Some automatic sets, such as the 2-automatic set consisting of numbers whose binary representations contain at most two 1s: 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 16, 17, 18, ... (OEIS A048645) have a simple arithmetic expression. However, this is not the case for general $k$-automatic sets.
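The 2-automatic example quoted above — numbers whose binary representations contain at most two 1s — can be generated with a simple population-count test; the sketch below is my own illustration.

```cpp
#include <cstdio>

// Count the 1-bits in the binary representation of n.
int popcount(unsigned n)
{
    int c = 0;
    for (; n; n >>= 1) c += n & 1;
    return c;
}

int main()
{
    // Numbers whose binary representation has at most two 1s (OEIS A048645).
    for (unsigned n = 1; n <= 18; ++n)
        if (popcount(n) <= 2) std::printf("%u ", n);
    std::printf("\n");   // prints 1 2 3 4 5 6 8 9 10 12 16 17 18
    return 0;
}
```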
### Normal Number
A number is said to be simply normal to base $b$ if its base-$b$ expansion has each digit appearing with average frequency tending to $1/b$. A normal number is an irrational number for which any finite pattern of numbers occurs with the expected limiting frequency in the expansion in a given base (or all bases). For example, for a normal decimal number, each digit 0-9 would be expected to occur 1/10 of the time, each pair of digits 00-99 would be expected to occur 1/100 of the time, etc. A number that is normal in base-$b$ is often called $b$-normal. A number that is $b$-normal for every $b = 2$, 3, ... is said to be absolutely normal (Bailey and Crandall 2003). As stated by Kac (1959), "As is often the case, it is much easier to prove that an overwhelming majority of objects possess a certain property than to exhibit even one such object....It is quite difficult to exhibit a 'normal' number!" (Stoneham 1970). If a real number is $b$-normal, then it is also $b^k$-normal for every positive integer $k$ (Kuipers and Niederreiter 1974).
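Simple normality is a statement about limiting digit frequencies; the sketch below (my own illustration) tallies the frequencies of the decimal digits in a finite block of digits. A finite block can only suggest, never establish, normality; the digits used are the first 50 decimals of pi, which is widely believed but not proved to be normal.

```cpp
#include <cstdio>

// Tally the relative frequency of each decimal digit in a digit string.
// For a number simply normal to base 10, each frequency tends to 1/10
// as the length of the expansion grows.
void digit_frequencies(const char* digits)
{
    int count[10] = {0};
    int n = 0;
    for (const char* p = digits; *p; ++p)
        if (*p >= '0' && *p <= '9') { ++count[*p - '0']; ++n; }
    for (int d = 0; d < 10; ++d)
        std::printf("digit %d: %.3f\n", d, n ? double(count[d]) / n : 0.0);
}

int main()
{
    // First 50 decimal digits of pi after the decimal point.
    digit_frequencies("14159265358979323846264338327950288419716939937510");
    return 0;
}
```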
### Absolutely Normal
A real number that is $b$-normal for every base $b$ = 2, 3, 4, ... is said to be absolutely normal. As proved by Borel (1922, p. 198), almost all real numbers are absolutely normal (Niven 1956, p. 103; Stoneham 1970; Kuipers and Niederreiter 1974, p. 71; Bailey and Crandall 2002). The first specific construction of an absolutely normal number was by Sierpiński (1917), with another method presented by Schmidt (1962). These results were both obtained by complex constructive devices (Stoneham 1970), and such numbers are by no means easy to construct (Stoneham 1970, Sierpiński and Schinzel 1988).
### Ring of Fractions
The extension ring obtained from a commutative unit ring (other than the trivial ring) when allowing division by all non-zero divisors. The ring of fractions of an integral domain is always a field. The term "ring of fractions" is sometimes used to denote any localization of a ring. The ring of fractions in the above meaning is then referred to as the total ring of fractions, and coincides with the localization with respect to the set of all non-zero divisors. When defining addition and multiplication of fractions, all that is required of the denominators is that they be multiplicatively closed, i.e., if $s_1$ and $s_2$ are admissible denominators, then so is $s_1 s_2$; the sum and product of fractions are then given by (1) and (2). Given a multiplicatively closed set $S$ in a ring $R$, the ring of fractions is all elements of the form $a/s$ with $a \in R$ and $s \in S$. Of course, fractions of the form $a/s$ and $(ta)/(ts)$ with $t \in S$ must be considered equivalent. With the above definitions of addition and multiplication, this set forms a ring. The original ring may not embed in this ring of fractions.
### Complex Division
The division of two complex numbers can be accomplished by multiplying the numerator and denominator by the complex conjugate of the denominator. For example, with $z_1 = a + bi$ and $z_2 = c + di$, the quotient $z_1/z_2$ is given by
$$\frac{z_1}{z_2} = \frac{z_1 \bar{z}_2}{z_2 \bar{z}_2} = \frac{(a+bi)(c-di)}{c^2+d^2} = \frac{(ac+bd) + (bc-ad)i}{c^2+d^2},$$
where $\bar{z}$ denotes the complex conjugate. In component notation, the real and imaginary parts of the quotient are therefore $(ac+bd)/(c^2+d^2)$ and $(bc-ad)/(c^2+d^2)$.
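The same computation in code (my own sketch), once with the explicit component formula and once with std::complex, whose operator/ performs the equivalent operation:

```cpp
#include <complex>
#include <cstdio>

int main()
{
    // Divide x = a + bi by y = c + di by multiplying through by the conjugate
    // of the denominator:  x/y = ((ac+bd) + (bc-ad)i) / (c^2 + d^2).
    double a = 3, b = 4, c = 1, d = 2;
    double denom = c*c + d*d;
    double re = (a*c + b*d) / denom;
    double im = (b*c - a*d) / denom;
    std::printf("explicit formula: %g + %gi\n", re, im);   // 2.2 + -0.4i

    std::complex<double> x(a, b), y(c, d);
    std::complex<double> q = x / y;                        // library division for comparison
    std::printf("std::complex:     %g + %gi\n", q.real(), q.imag());
    return 0;
}
```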
Two complex numbers and are added together componentwise,In component form,(Krantz 1999, p. 1).
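In code, the conjugate trick amounts to the pair of formulas below; this short Python sketch (added here for illustration) implements componentwise addition and conjugate-based division and compares the result with Python's built-in complex type.

```python
def cadd(a, b, c, d):
    """(a + b i) + (c + d i), added componentwise."""
    return a + c, b + d

def cdiv(a, b, c, d):
    """(a + b i) / (c + d i) by multiplying numerator and denominator by c - d i."""
    denom = c * c + d * d
    return (a * c + b * d) / denom, (b * c - a * d) / denom

print(cadd(1.0, 2.0, 3.0, 4.0))       # (4.0, 6.0)
print(cdiv(1.0, 2.0, 3.0, 4.0))       # (0.44, 0.08)
print(complex(1, 2) / complex(3, 4))  # (0.44+0.08j), the same result
```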
### Base
The word "base" in mathematics is used to refer to a particular mathematical object that is used as a building block. The most common uses are the related concepts of the number system whose digits are used to represent numbers and the number system in which logarithms are defined. It can also be used to refer to the bottom edge or surface of a geometric figure.A real number can be represented using any integer number as a base (sometimes also called a radix or scale). The choice of a base yields to a representation of numbers known as a number system. In base , the digits 0, 1, ..., are used (where, by convention, for bases larger than 10, the symbols A, B, C, ... are generally used as symbols representing the decimal numbers 10, 11, 12, ...).The digits of a number in base (for integer ) can be obtained in the Wolfram Language using IntegerDigits[x, b].Let the base representation of a number be written(1)(e.g., ). Then, for example, the number 10 is..
### Archimedes' Axiom
Archimedes' axiom, also known as the continuity axiom or Archimedes' lemma, survives in the writings of Eudoxus (Boyer and Merzbach 1991), but the term was first coined by the Austrian mathematician Otto Stolz (1883). It states that, given two magnitudes having a ratio, one can find a multiple of either which will exceed the other. This principle was the basis for the method of exhaustion, which Archimedes invented to solve problems of area and volume.Symbolically, the axiom states thatiff the appropriate one of following conditions is satisfied for integers and : 1. If , then . 2. If , then . 3. If , then . Formally, Archimedes' axiom states that if and are two line segments, then there exist a finite number of points , , ..., on such thatand is between and (Itô 1986, p. 611). A geometry in which Archimedes' lemma does not hold is called a non-Archimedean Geometry...
### Binary Plot
A binary plot of an integer sequence is a plot of the binary representations of successive terms where each term is represented as a column of bits with 1s colored black and 0s colored white. The columns are then placed side-by-side to yield an array of colored squares. Several examples are shown above for the positive integers , square numbers , Fibonacci numbers , and binomial coefficients .Binary plots can be extended to rational number sequences by placing the binary representations of numerators on top, and denominators on bottom, as illustrated above for the sequence .Similarly, by using other bases and coloring the base- digits differently, binary plots can be extended to n-ary plots.
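The construction is straightforward to reproduce; the following Python/Matplotlib sketch (an illustration added here, with the bit width chosen arbitrarily) draws such a plot for the square numbers.

```python
import numpy as np
import matplotlib.pyplot as plt

def binary_plot(seq, width=16):
    """Each term becomes one column of its binary digits: 1 = black, 0 = white."""
    cols = [[int(b) for b in bin(n)[2:].zfill(width)] for n in seq]
    grid = np.array(cols).T                     # rows = bit positions, columns = terms
    plt.imshow(grid, cmap="binary", aspect="auto", interpolation="nearest")
    plt.xlabel("term index")
    plt.ylabel("bit (most significant at top)")
    plt.show()

binary_plot([n * n for n in range(1, 33)])      # binary plot of the square numbers
```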
### Product
The term "product" refers to the result of one or more multiplications. For example, the mathematical statement would be read " times equals ," where is the product.More generally, it is possible to take the product of many different kinds of mathematical objects, including those that are not numbers. For example, the product of two sets is given by the Cartesian product. In topology, the product of spaces can be defined by using the product topology. The product of two groups, vector spaces, or modules is given by the direct product. In category theory, the product of objects is given using the category product.The product symbol is defined by(1)Useful product identities include(2)(3)
### Polynomial Remainder
The remainder obtained when dividing a polynomial by another polynomial . The polynomial remainder is implemented in the Wolfram Language as PolynomialRemainder[p, q, x], and is related to the polynomial quotient byFor example, the polynomial remainder of and is , corresponding to polynomial quotient .
### Polynomial Quotient
The quotient of two polynomials and , discarding any polynomial remainder. Polynomial quotients are implemented in the Wolfram Language as PolynomialQuotient[p, q, x], and are related to the polynomial remainder byFor example, the polynomial quotient of and is , leaving remainder .
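Outside the Wolfram Language, the same quotient/remainder pair can be obtained with, for example, NumPy's polydiv; the example below (an added illustration with an arbitrarily chosen dividend and divisor) divides x^3 - 2x^2 + 5x - 1 by x - 3.

```python
import numpy as np

p = [1, -2, 5, -1]   # x^3 - 2x^2 + 5x - 1, coefficients from the highest power down
q = [1, -3]          # x - 3

quotient, remainder = np.polydiv(p, q)
print(quotient)   # [1. 1. 8.]  ->  quotient x^2 + x + 8
print(remainder)  # [23.]       ->  remainder 23
```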
### Synthetic Division
Synthetic division is a shortcut method for dividing two polynomials which can be used in place of the standard long division algorithm. This method reduces the dividend and divisor polynomials into a set of numeric values. After these values are processed, the resulting set of numeric outputs is used to construct the polynomial quotient and the polynomial remainder.For an example of synthetic division, consider dividing by . First, if a power of is missing from either polynomial, a term with that power and a zero coefficient must be inserted into the correct position in the respective polynomial. In this case the term is missing from the dividend while the term is missing from the divisor; therefore, is added between the quintic and the cubic terms of the dividend while is added between the cubic and the linear terms of the divisor:(1)and(2)respectively.Next, all the variables and their exponents () are removed from the dividend, leaving instead..
### Ruffini's Rule
Ruffini's rule is a shortcut method for dividing a polynomial by a linear factor of the form x - a which can be used in place of the standard long division algorithm. This method reduces the polynomial and the linear factor into a set of numeric values. After these values are processed, the resulting set of numeric outputs is used to construct the polynomial quotient and the polynomial remainder.Note that Ruffini's rule is a special case of the more generalized notion of synthetic division in which the divisor polynomial is a monic linear polynomial. Confusingly, Ruffini's rule is sometimes referred to as synthetic division, thus leading to the common misconception that the scope of synthetic division is significantly smaller than that of the long division algorithm.For an example of Ruffini's rule, consider divided by . First, if a power of is missing from the dividend, a term with that power and a zero coefficient must be inserted into the correct position..
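The bookkeeping itself fits in a few lines; here is a small Python sketch of Ruffini's rule (added for illustration, using the same dividend and divisor as the NumPy example above).

```python
def ruffini(coeffs, a):
    """Divide a polynomial by (x - a) with Ruffini's rule.

    coeffs lists the dividend's coefficients from the highest power down.
    Returns (quotient coefficients, remainder)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + a * out[-1])     # bring down, multiply by a, add
    return out[:-1], out[-1]

print(ruffini([1, -2, 5, -1], 3))       # ([1, 1, 8], 23), matching np.polydiv above
```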
# A question about eigenvalue equation of Hankel transform
When we think about the Fourier transform in two dimensional polar coordinates, the Hankel transform is the transformation with respect to the polar diameter. Now I have a question, why is the following expression invariant for the Hankel transform? And How to prove this equation?
$$\exp \left(-\frac{1}{2} x\right) x^{m / 2} L_{p}^{m}(x)=\frac{1}{2}(-1)^{p} \int_{0}^{+\infty} \mathrm{d} y \exp \left(-\frac{1}{2} y\right) J_{m}(\sqrt{x y}) y^{m / 2} L_{p}^{m}(y)$$
$$L_{p}^{m}(y)$$ is Laguerre function. m, p is order;
$$J_{m}(\sqrt{x y})$$ is Bessel function with order of m.
L. Yu et al., The Laguerre-Gaussian series representation of two-dimensional fractional Fourier transform. Journal of Physics A: Mathematical and General 31, 9353-9357 (1998).
• The first part is trivial (why is). The Hankel transform is its own inverse. Just apply the Hankel transform and you are done. I agree you should ask on math.se Jun 23 at 12:33
A: An explicit proof, of a more general identity, is in On a Hankel Transform Integral containing an Exponential Function and Two Laguerre Polynomials. See Equation (5), and take $$\sigma=0$$, $$m=n$$.
Another special case ($$m=n$$, $$\nu=2\sigma$$) was discussed at MSE, with a reference to a 1936 paper by Watson, An Integral Equation for the Square of a Laguerre Polynomial |
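For what it's worth, the identity can also be checked numerically before chasing the references. The sketch below (not from the original thread) uses SciPy's genlaguerre, jv and quad, truncating the infinite integral at y = 200, which is harmless because of the exp(-y/2) factor; the two printed columns should agree to several digits for small m and p.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import genlaguerre, jv

def lhs(x, m, p):
    return np.exp(-x / 2) * x ** (m / 2) * genlaguerre(p, m)(x)

def rhs(x, m, p, upper=200.0):
    integrand = lambda y: (np.exp(-y / 2) * jv(m, np.sqrt(x * y))
                           * y ** (m / 2) * genlaguerre(p, m)(y))
    val, _ = quad(integrand, 0.0, upper, limit=400)
    return 0.5 * (-1) ** p * val

for m, p, x in [(0, 1, 0.7), (1, 2, 1.3), (2, 3, 2.5)]:
    print(m, p, x, lhs(x, m, p), rhs(x, m, p))
```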
# ON DIFFERENTIAL INVARIANTS OF HYPERPLANE SYSTEMS ON NONDEGENERATE EQUIVARIANT EMBEDDINGS OF HOMOGENEOUS SPACES
• HONG, JAEHYUN (Department of Mathematical Sciences Seoul National University)
• Received : 2015.03.23
• Published : 2015.06.30
#### Abstract
Given a complex submanifold M of the projective space $\mathbb{P}$(T), the hyperplane system R on M characterizes the projective embedding of M into $\mathbb{P}$(T) in the following sense: for any two nondegenerate complex submanifolds $M{\subset}\mathbb{P}$(T) and $M^{\prime}{\subset}\mathbb{P}$(T'), there is a projective linear transformation that sends an open subset of M onto an open subset of M' if and only if (M,R) is locally equivalent to (M', R'). Se-ashi developed a theory for the differential invariants of these types of systems of linear differential equations. In particular, the theory applies to systems of linear differential equations that have symbols equivalent to the hyperplane systems on nondegenerate equivariant embeddings of compact Hermitian symmetric spaces. In this paper, we extend this result to hyperplane systems on nondegenerate equivariant embeddings of homogeneous spaces of the first kind.
#### Keywords
homogeneous spaces;fundamental forms
#### References
1. D. N. Akhiezer, Equivariant completions of homogeneous algebraic varieties by homogeneous divisors, Ann. Global Anal. Geom. 1 (1983), no. 1, 49-78. https://doi.org/10.1007/BF02329739
2. D. N. Akhiezer, Lie group actions in complex analysis, Aspects of Mathematics, E27. Friedr. Vieweg and Sohn, Braunschweig, 1995.
3. J.-M. Hwang and K. Yamaguchi, Characterization of Hermitian symmetric spaces by fundamental forms, Duke Math. J. 120 (2003), no. 3, 621-634. https://doi.org/10.1215/S0012-7094-03-12035-9
4. J. M. Landsberg, On the infinitesimal rigidity of homogeneous varieties, Compositio Math. 118 (1999), no. 2, 189-201. https://doi.org/10.1023/A:1017161326705
5. J. M. Landsberg, Griffiths-Harris rigidity of compact Hermitian symmetric spaces, J. Differential Geom. 74 (2006), no. 3, 395-405. https://doi.org/10.4310/jdg/1175266232
6. J. M. Landsberg and L. Manivel, Construction and classification of complex simple Lie algebras via projective geometry, Selecta Math. 8 (2002), no. 1, 137-159. https://doi.org/10.1007/s00029-002-8103-5
7. J. M. Landsberg and L. Manivel, On the projective geometry of rational homogeneous varieties, Comment. Math. Helv. 78 (2003), no. 1, 65-100. https://doi.org/10.1007/s000140300003
8. J. M. Landsberg and C. Robles, Fubini-Griffiths-Harris rigidity and Lie algebra cohomology, Asian J. Math. 16 (2012), no. 4, 561-586. https://doi.org/10.4310/AJM.2012.v16.n4.a1
9. J. M. Landsberg and C. Robles, Fubini-Griffiths-Harris rigidity of homogeneous varieties, Int. Math. Res. Not. 2013 (2013), no. 7, 1643-1664. https://doi.org/10.1093/imrn/rns016
10. B. Pasquier, On some smooth projective two-orbit varieties with Picard number 1, Math. Ann. 344 (2009), no. 4, 963-987. https://doi.org/10.1007/s00208-009-0341-9
11. T. Sasaki, K. Yamaguchi, and M. Yoshida, On the rigidity of differential systems modelled on Hermitian symmetric spaces and disproofs of a conjecture concerning modular interpretations of configuration spaces, Adv. Stud. Pure Math. 25 CR-geometry and overdetermined systems (1997), 318-354.
12. Y. Se-Ashi, On differential invariants of integrable finite type linear differential equations, Hokkaido Math. J. 17 (1988), no. 2, 151-195. https://doi.org/10.14492/hokmj/1381517803 |
Understanding and Working with Units
# 2 Linear Measurements
Click play on the following audio player to listen along as you read this section.
Linear measurement can be defined as a measure of length. The length
of a table, the length of a piece of pipe and the length of a football field are all examples of linear measurement. We might also refer to it as distance.
Linear measurements represent a single dimension. This means there is only one line or one plane being measured. Basically it means that it’s a line of some type, either straight, curved or wherever you want the line to go. It could be like a road in Saskatchewan, which is long and straight, or it could be a road in the interior of British Columbia, which can be narrow and winding. It doesn’t matter if the item or object you are measuring is straight or not. What you are measuring will only have a length.
Measuring length can be accomplished using many different types of units. You’ve heard of a mile, foot, yard and inch but have you ever heard of a furlong, link, pole or a league? Those are all examples of imperial linear measurement.
How about on the metric side? We have the metre, the centimetre and the millimetre. Those would all be familiar to us. But how about the micrometre, the nanometre, the petametre, the terametre and the exametre?
How we’ll work this section is to first define metric lengths of measurement and work with them and then we’ll move onto imperial lengths of measurement and work with them. After all that is done and settled we’ll move on to working between metric and imperial.
# The Metric System of Linear Measurement
If I were to ask you what is an example of a metric unit of measurement how would you respond? I think most of us might say a metre or a centimetre or even a kilometre.
One of the interesting things about the metric system of linear measurement is that it’s all based on measurements of 10 and quite often it’s referred to as a decimal based system.
For example, there are 10 millimetres in a centimetre, and there are 10 centimetres in a decimetres, and there are 10 decimetres in a metre. See the pattern. Once you get this pattern then working in metric actually becomes quite easy.
Another cool aspect to the metric system is that everything is derived from a base unit. All other units go from there using multiples of 10. Take a look at the table below to see how this works.
Unit Multiplier
kilometre 1,000
hectometre 100
decametre 10
metre (base unit) 1
decimetre 0.1
centimetre 0.01
millimetre 0.001
The idea with the above table is that the metre is the place where all the other numbers work back to. So, for example, to go from kilometres back to metres we would multiply by 1000. If we had a length of one kilometre that means we would have a length of 1000 metres.
If we were to go from centimetres to metres the chart tells us that a centimetre is 1/100 of a metre. Therefore if we had one centimetre we would multiply that by 0.01 to get metres.
You might be wondering at this point whether that is all there is to the metric system of linear measurement. In fact there are a number of other measurements based on the metre. Take a look at the crazy table below to see how far the measurement spreads out from the metre.
Common metric prefix Multiplier
yotta 1,000,000,000,000,000,000,000,000
zetta 1,000,000,000,000,000,000,000
exa 1,000,000,000,000,000,000
peta 1,000,000,000,000,000
tera 1,000,000,000,000
giga 1,000,000,000
mega 1,000,000
kilo 1,000
hecto 100
deca 10
metre (base unit) 1
deci 0.1
centi 0.01
milli 0.001
micro 0.000001
nano 0.000000001
pico 0.000000000001
femto 0.000000000000001
atto 0.000000000000000001
zepto 0.000000000000000000001
yocto 0.000000000000000000000001
Do you recognize any of these prefixes? You might see some of the larger ones such as mega, giga and terra used in computers when dealing with memory and speed.
Don’t worry though as we will generally never work with a lot of these in the trades and we will be sticking to the few that surround the metre.
What we want to do now is to work within the metric linear system. We want to be able to go from one unit of measurement to another and we will utilize the two tables up above for this.
Example
How many centimetres are there in 2.3 metres?
$\Large2.3 \text{ metres}= \text{X centimetres}$
Similar to how we did things in the first four chapters we will go about this in steps.
Step 1: Find the multiplier
What we see is that going from centimetres to metres the multiplier is 0.01. What this is saying is a centimetre is 1/100th of a metre or that there are 100 centimetres in a metre.
It’s important here to note that a centimetre is smaller than a metre and as this is the case then we would expect our answer to decrease.
Step 2: Build a ratio
$\Large \dfrac{1\text{ m}}{2.3\text{ m}} = \dfrac{100\text{ cm}}{\text{X cm}}$
What this ratio states is that if 1 metre is equal to 100 centimetres then 2.3 metres is equal to X centimetres.
Step 3: Cross multiply.
$\Large \begin{array}{c} \dfrac{1\text{ m}}{2.3\text{ m}} = \dfrac{100\text{ cm}}{\text{X cm}} \\ 1 \times \text{X} = 2.3 \times 100 \\ \text{X}=230 \\ \text{Answer}= 230\text{ centimetres}\end{array}$
We’ll try another example.
Example
How many kilometres are there in 1057 metres?
Step 1: Find the multiplier.
$\Large\text{multiplier} = 1000$
$\Large1 \text{ kilometre} = 1000 \text{ metres}$
Step 2: Build a ratio
$\Large\dfrac{1 \text{ km}}{\text{X km}} = \dfrac{1000 \text{ m}}{1057 \text{ m}}$
Step 3: Cross multiply.
$\Large\begin{array}{c} \dfrac{1 \text{ km}}{\text{X km}}= \dfrac{1000 \text{ m}}{1057 \text{ m}} \\ 1\times 1057 = \text{X} \times 1000 \\ \text{X} = \dfrac{1057}{1000}=1.057 \\ \text{Answer} = 1.057 \text{ kilometres}\end{array}$
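If you like, the ratio method can be automated. The short Python sketch below (an illustration, not part of the textbook) stores the multipliers from the table above and converts by passing through the base unit; it reproduces both worked examples (up to ordinary floating-point rounding).

```python
# How many metres each unit represents (from the table above).
TO_METRES = {
    "km": 1000.0, "hm": 100.0, "dam": 10.0, "m": 1.0,
    "dm": 0.1, "cm": 0.01, "mm": 0.001,
}

def convert(value, from_unit, to_unit):
    """Convert a length by going from the starting unit to metres, then to the target unit."""
    return value * TO_METRES[from_unit] / TO_METRES[to_unit]

print(convert(2.3, "m", "cm"))    # ~230.0 centimetres (the first worked example)
print(convert(1057, "m", "km"))   # 1.057 kilometres (the second worked example)
```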
# Practice Question
Try a couple practice questions yourself and check the video answers to see how you did. Make sure to follow the steps outlined above and think about whether your answer should be bigger or smaller.
Question 1
Barry owns a sheet metal company (Metal Sheet Incorporated) and he is making duct work for a heating system in a new video production studio under construction. The ducts are 0.79 metres wide by 0.45 metres deep. What is the depth of the ducts in centimetres?
# The Imperial System of Linear Measurement
The imperial system isn’t quite as straightforward as the metric system. If we were to try and follow the same principle as the metric system, we would think that 1 foot would be equal to 10 inches, but unfortunately it’s not. One foot is equal to 12 inches and one mile is equal to 5280 feet.
Any guesses why there are 5280 feet in a mile? It turns out that it stems from an ancient linear measurement used by the Romans. Back then one mile was equal to 5000 Roman feet. Then the British started using it and decided to relate it to what worked for them which was agriculture. In agriculture they liked to use furlongs as their length of measurement. A furlong was 660 feet and one mile was decided to have 8 furlongs. Well 8 times 660 is equal to 5280 feet.
A foot also has a historical significance and if you guessed that it was based on an average human foot you would be right. There are some who believe that it is actually based on the average human shoe length. Either way naming it a foot makes sense.
Take a look at the table below to get an idea of how the imperial system of linear measurement works.
Unit Name Equivalent Values
inch 0.083 feet, 0.028 yards
foot 12 inches, 0.333 yards
yard 3 feet, 36 inches
fathom 6 feet, 72 inches
rod 5.50 yards, 16.5 feet
furlong 660 feet, 220 yards, 1/8 mile
mile 5280 feet, 1760 yards, 320 rods
Nautical mile 6,076 feet, 1.151 miles
At first glance this may seem a little more confusing than metric and realistically if we were dealing with all those different length measurements it just might be. Lucky for us we are only going to deal with 3 of the measurements for the most part. Those three include inches, feet and miles. Once in a while we might see yards come in to play. For instance an American football field is 100 yards long and 120 if you include the 2 end zones.
Once again our task is to work within the imperial system and be able to work between values. Before we start I want to remind you to think about the answer you are trying to find. What I want you to think about is whether or not the answer is going to be bigger or smaller.
An example would be feet to inches. If we were to cut a piece of pipe 2 feet long, do you think it would end up being more than 2 inches long or less than 2 inches long? I think we all agree that it would be more than 2 inches, and in fact it is. It works out to be 24 inches. You might not be able to get 24 inches right away but you probably can figure out that 2 feet stated in inches should work out to be a greater number.
Let’s use that as our first example:
Example
How many inches are there in 2 feet?
Step 1: Find the number that states the relationship between inches and feet.
In this case:
$\Large1\text{ foot}= 12 \text{ inches}$
Step 2: Build a ratio
$\Large\dfrac{1 \text{ foot}}{2 \text{ feet}}=\dfrac{12 \text{ inches}}{\text{X inches}}$
Step 3: Cross multiply
$\Large\begin{array}{c}\dfrac{1 \text{ foot}}{2 \text{ feet}}=\dfrac{12 \text{ inches}}{\text{X inches}} \\ 1\times \text{X} = 2 \times 12 \\ \text{X} = 24 \\ \text{Answer} = 24 \text{ inches}\end{array}$
Although you might have been able to do that in your head, it’s important to follow the steps involved and think about the answer you expect to get. This will help when the numbers are more involved and not as easy to figure out.
Example
How many yards are there in 247 inches?
Step 1: Find the number that goes between yards and inches. Note that there are actually 2 numbers here to choose from. We could use:
$\Large 1 \text{ yard} = 36 \text{ inches}$
$\Large \text{OR}$
$\Large 1 \text{ inch}= 0.028 \text{ yards}$
In this question we are going from inches to yards so working with the number 0.028 will be easier for us.
Step 2: Build a ratio
$\Large\dfrac{ 1 \text{ inch}}{247 \text{ inches}}= \dfrac{0.028 \text{ yards}}{\text{X yards}}$
Step 3: Cross multiply
$\Large\begin{array}{c}\dfrac{ 1 \text{ inch}}{247 \text{ inches}}= \dfrac{0.028 \text{ yards}}{\text{X yards}} \\ 1 \times \text{X} = 247 \times 0.028 \\ \text{X}= 6.916 \\ \text{Answer} = 6.916 \text{ yards}\end{array}$
# Practice Question
Try a practice question yourself and check the video answers to see how you did. Make sure to follow the steps outlined above and think about whether your answer should be bigger or smaller.
Question 1
The length of the duct work that Barry, our sheet metal tradesperson, has to create for the video production studio is 193 yards. How many feet of the duct does Barry have to order to complete the job?
# Working between the Metric and Imperial Linear Measurements Systems
What happens when we have to work between the metric and imperial systems? It really works just the same but we need to learn a few new numbers.
The table below is a list of numbers which can be used to help translate between metric and imperial linear measurements. What you’ll note here is that the equivalent numbers represent units that are similar in length (or used similarly) in different situations.
An example would be kilometres and miles. Both are used to represent such things as distance travelled in a car, train, bus or airplane. We don’t go and measure those long distances using centimetres or its imperial equivalent which is inches. That just wouldn’t be convenient.
Likewise if we were to measure the length of a house we would most likely use the metres or its imperial equivalent which is feet.
Also note that we don’t translate every metric and imperial number to their equivalents. As we won’t be working with most of the units there is no real need to find all these numbers. Having said that, if you wanted to go through and find the numbers yourself on the internet or possibly even try and figure them out using the numbers in the tables above and below then it would most likely help with your understanding of how linear measurement units work with each other.
Metric Imperial Equivalent
1 metre 3.28 feet
1 kilometre 0.62 miles
1 centimetre 0.393 inches
1 millimetre 0.0394 inches
It might also be helpful to look at those numbers in the reverse.
Imperial Metric Equivalent
1 foot 0.305 metres
1 mile 1.61 kilometres
1 inch 2.54 centimetres
1 inch 25.4 millimetres
Even though we have a conversion number to go from miles to kilometres, and then a conversion number to go from kilometres to miles, we don’t actually have to remember both.
Whichever of the two numbers is easiest for you to remember is all you need to memorize. Once you know one you can get the other.
Here’s how it works.
We’ll use miles and kilometres for this exercise. What we know is that 1 mile is equal to 1.61 kilometres.
$\Large 1 \text{ mile} = 1.61 \text{ kilometres}$
Now what we need to figure out is the reverse. In this case how many miles there are in one kilometre. Once again ask yourself whether you think the answer should be bigger or smaller than 1.
So to figure out our answer we need to do the following.
$\Large\begin{array}{c} \# \text{ kilometres} = \# \text{ miles} \times 1.61 \\ \downarrow \\ \# \text{ miles} = \dfrac{\# \text{ kilometres}}{1.61} \\ \downarrow \\ 1 \text{ kilometre} = 0.62 \text{ miles}\end{array}$
So we end up with 1 kilometre equaling 0.62 miles.
We’ve just taken one constant to derive the other constant. You can do this with any of the numbers used to translate back and forth between metric and imperial.
Let’s move on. Now what we will do is start to work between the imperial and metric systems and the easiest way to do this is by going through some example questions.
Example
How many metres are there in 42 feet?
Step 1: Find the number you can work with.
We know that:
$\Large\begin{array}{c} 1\text{ metre} = 3.28 \text{ feet} \\ 1 \text{ foot} = 0.305 \text{ metres}\end{array}$
As we are going from feet to metres we’ll go with 1 foot = 0.305 metres.
Step 2: Build a ratio
$\Large \dfrac{1 \text{ foot}}{42 \text{ feet}}= \dfrac{0.305 \text{ metres}}{\text{X metres}}$
Step 3: Cross multiply
$\Large\begin{array}{c} \dfrac{1 \text{ foot}}{42 \text{ feet}}= \dfrac{0.305 \text{ metres}}{\text{X metres}} \\ 1\times \text{X} = 42 \times 0.305 \\ \text{X} = 12.81 \\ \text{Answer} = 12.81 \text{metres}\end{array}$
Example
How many inches are there in 100 centimetres?
Step 1: Find the number you can work with.
We know that:
$\Large\begin{array}{c} 1 \text{ centimetre}= 0.393 \text{ inches} \\ 1 \text{ inch} = 2.54 \text{ centimetres}\end{array}$
As we are going from centimetres to inches we’ll go with 1 centimetre = 0.393 inches
Step 2: Build a ratio
$\Large \dfrac{1 \text{ cm}}{100 \text{ cm}}=\dfrac{0.393 \text{ in}}{\text{X in}}$
Step 3: Cross multiply
$\Large\begin{array}{c} \dfrac{1 \text{ cm}}{100 \text{ cm}}=\dfrac{0.393 \text{ in}}{\text{X in}} \\ 1 \times \text{X} = 100 \times 0.393 \\ \text{X} = 39.3 \\ \text{Answer} = 39.3 \text{ inches}\end{array}$
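The same ratio method works across the two systems once the conversion factors are stored. Here is a small sketch (illustrative only, using the rounded factors from the tables above):

```python
# Rounded conversion factors from the tables above: metric units per imperial unit.
METRIC_PER_IMPERIAL = {
    ("foot", "metre"): 0.305,
    ("mile", "kilometre"): 1.61,
    ("inch", "centimetre"): 2.54,
    ("inch", "millimetre"): 25.4,
}

def to_metric(value, imperial_unit, metric_unit):
    return value * METRIC_PER_IMPERIAL[(imperial_unit, metric_unit)]

def to_imperial(value, metric_unit, imperial_unit):
    # The reverse factor is just the reciprocal, as derived earlier in this section.
    return value / METRIC_PER_IMPERIAL[(imperial_unit, metric_unit)]

print(to_metric(42, "foot", "metre"))          # ~12.81 metres (the example above)
print(to_imperial(100, "centimetre", "inch"))  # ~39.37 inches (39.3 above, using the rounded 0.393)
```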
# Practice Questions
Try a couple practice question for yourself. Make sure to go through the steps similar to the example questions above and also make sure to check the video answers to see if you are correct.
Question 1
Jakob is a carpenter who creates forms for concrete columns. The measurements for the column are in millimetres but Jakob would rather work in inches so he decides to translate the millimetres to inches. The columns are rectangular and are 400 mm by 250 mm. What are the measurements of the column in inches?
Question 2
Elias is a cabinetmaker from Sweden who is now an apprentice in Canada. He has been asked to order material for the job and it totals 427 feet of 1″ x 4″ wood. As he is used to working in metric he wants to change that to metres. How many metres of 1″ x 4″ is he going to need? |
Iberoamerican Webminar of Young Researchers in Singularity Theory and related topics
This webminar is intended to be an open place for discussion and interaction between young researchers in all aspects of Singularity Theory and related topics. The seminar is open to everybody and is composed by a a series of research talks by leading young and senior researchers. To attend a talk, please join the Mailing list bellow to receive the Google Meets link before the talk starts.
Events:
PhD course on "Mixed Hodge Structures on Alexander Modules" (from October 26th to December 2nd, Registration is now open).
Upcoming talks & mini-courses
Date Speaker Title
21 Oct 2020 at 17pm
Miruna-Ştefana Sorea
Max-Planck-Institut (Leizpig, Germany)
The shapes of level curves of real polynomials near strict local minima
Abstract↴
We consider a real bivariate polynomial function vanishing at the origin and exhibiting a strict local minimum at this point. We work in a neighbourhood of the origin in which the non-zero level curves of this function are smooth Jordan curves. Whenever the origin is a Morse critical point, the sufficiently small levels become boundaries of convex disks. Otherwise, these level curves may fail to be convex.
The aim of this talk is two-fold. Firstly, to study a combinatorial object measuring this non-convexity; it is a planar rooted tree. And secondly, we want to characterise all possible topological types of these objects. To this end, we construct a family of polynomial functions with non-Morse strict local minima realising a large class of such trees.
26 Oct 2020 at 17pm
U. of Michigan/Louisiana State (USA)
Mini-course: Mixed Hodge Structures on Alexander Modules I
Abstract↴
This is a course about our recent paper arXiv:2002.01589v3 (Joint with Christian Geske, Laurențiu Maxim and Botong Wang), on the construction and properties of a canonical mixed Hodge structure on the torsion part of the Alexander modules of a smooth connected complex algebraic variety. The course will roughly be divided in two halves.
The first half of the course will cover the necessary background material. We will give a historical introduction to (pure and mixed) Hodge structures, and the techniques developed to study them, focusing mainly on Deligne's mixed Hodge complexes. For this, we will need to introduce some basic concepts about sheaves. We will also give an introduction to Alexander modules on smooth algebraic varieties. For our purposes, they are defined as follows: let $U$ be a smooth connected complex algebraic variety and let $f\colon U\to \mathbb C^*$ be an algebraic map inducing an epimorphism in fundamental groups. The pullback of the universal cover of $\mathbb C^*$ by $f$ gives rise to an infinite cyclic cover $U^f$ of $U$. The Alexander modules of $(U,f)$ are by definition the homology groups of $U^f$. The action of the deck group $\mathbb Z$ on $U^f$ induces a $\mathbb Q[t^{\pm 1}]$-module structure on $H_*(U^f;\mathbb{Q})$, whose torsion submodule we call $A_*(U^f;\mathbb Q)$.
For the background in Hodge theory, we will follow Peters and Steenbrink's text Mixed Hodge Structures. For the sheaf theory, possible references include Maxim's Intersection Homology & Perverse Sheaves and Dimca's Sheaves in Topology.
28 Oct 2020 at 17pm
U. of Michigan/Louisiana State (USA)
Mini-course: Mixed Hodge Structures on Alexander Modules I
02 Nov 2020 at 17pm
U. of Michigan/Louisiana State (USA)
Mini-course: Mixed Hodge Structures on Alexander Modules I
04 Nov 2020 at 17pm
U. of Michigan/Louisiana State (USA)
Mini-course: Mixed Hodge Structures on Alexander Modules I
09 Nov 2020 at 17pm
U. of Michigan/Louisiana State (USA)
Mini-course: Mixed Hodge Structures on Alexander Modules I
11 Nov 2020 at 17pm
U. of Michigan/Louisiana State (USA)
Mini-course: Mixed Hodge Structures on Alexander Modules I
18 Nov 2020 at 17pm
Jose I. Cogolludo-Agustín
TBA
Abstract↴
...
23 Nov 2020 at 17pm
U. of Michigan/Louisiana State (USA)
Mini-course: Mixed Hodge Structures on Alexander Modules II
Abstract↴
This second part will consist of an overview of the construction of the mixed Hodge structure on Alexander modules using mixed Hodge complexes, together with a discussion of some of its desirable properties, such as its relation to other well-known mixed Hodge structures. We will see that the covering map $U^f \to U$ induces a mixed Hodge structure morphism $A_*(U^f;\mathbb Q)\to H_*(U;\mathbb Q)$. As applications of this fact, we can understand the mixed Hodge structure on the Alexander modules better, plus we can draw conclusions about the monodromy action on $A_*(U^f;\mathbb Q)$ that don't involve Hodge structures. For instance, we can show that this action is always semisimple on $A_1(U^f;\mathbb Q)$. Time permitting, we will also discuss the relation to the limit Mixed Hodge structure in the case where $f$ is proper.
25 Nov 2020 at 17pm
U. of Michigan/Louisiana State (USA)
Mini-course: Mixed Hodge Structures on Alexander Modules II
30 Nov 2020 at 17pm
U. of Michigan/Louisiana State (USA)
Mini-course: Mixed Hodge Structures on Alexander Modules II
02 Dec 2020 at 17pm
U. of Michigan/Louisiana State (USA)
Mini-course: Mixed Hodge Structures on Alexander Modules II
09 Dec 2020 at 17pm
Aurélio Menegon Neto
UF de Paraiba (João Pessoa, Brazil)
TBA
Abstract↴
...
Past talks
Date Speaker Title
14 Oct 2020 at 17pm
Irma Pallarés
BCAM (Bilbao, Spain)
The Brasselet-Schürmann-Yokura conjecture on $L$-classes
Abstract↴
The Brasselet-Schürmann-Yokura conjecture is a conjecture on characteristic classes of singular varieties, which predicts the equality between the Hodge L-class and the Goresky-MacPherson L-class for compact complex algebraic varieties that are rational homology manifolds. In this talk, we will illustrate our technique used in the proof of the conjecture by explaining the simple case of $3$-folds with an isolated singularity.
This is a joint work with Javier Fernández de Bobadilla.
07 Oct 2020 at 17pm
Edwin León-Cardenal
CIMAT (Zacatecas, Mexico)
Motivic zeta functions for ${\mathbb Q}$-Gorenstein varieties
Abstract↴
This is a joint work with Jorge Martín-Morales, Wim Veys & Juan Viu-Sos.
The study of zeta functions of hypersurfaces, allows one to determine some invariants of the singularity defining the hypersurface. A common strategy is to use a classical embedded resolution of the singularity, which gives a list of possible 'poles' from which some invariants can be read of. The list is usually very large and a major and difficult problem (closely connected with the Monodromy Conjecture) is determining the true poles. In this work we propose to use a partial resolution of singularities to deal with this problem. We use an embedded Q-resolution, where the final ambient space may contain quotient singularities. This machinery allows us to give some explicit formulas for motivic and topological zeta functions in terms of Q-resolutions, generalizing in particular some results of Veys for curves and providing in general a reduced list of candidate poles.
This webminar is sponsored by Instituto de Matemática Interdisciplinar (IMI)
- Website designed by Juan Viu Sos - |
# Contributed VACs¶
## Base Classes¶
class marvin.contrib.vacs.base.VACMixIn[source]
MixIn that allows VAC integration in Marvin.
This parent class provides common tools for downloading data using sdss_access or directly from the sandbox. get_vacs returns a container with properties pointing to all the VACs that subclass from VACMixIn. In general, VACs can be added to a class in the following way:
from marvin.contrib.vacs.base import VACMixIn

class Maps(MarvinToolsClass):

    def __init__(self, *args, **kwargs):
        ...
        self.vacs = VACMixIn.get_vacs(self)
and then the VACs can be accessed as properties in my_map.vacs.
check_vac(summary_file)[source]
Checks the summary file for existence
download_vac(name=None, path_params={}, verbose=True)[source]
file_exists(path=None, name=None, path_params={})[source]
Check whether a file exists locally
get_ancillary_file(name, path_params={})[source]
Get a path to an ancillary VAC file
get_path(name=None, path_params={})[source]
Returns the local VAC path or False if it does not exist.
get_target(parent_object)[source]
Returns VAC data that matches the parent_object target.
This method must be overridden in each subclass of VACMixIn. Details will depend on the exact implementation and the type of VAC, but in general each version of this method must:
• Check whether the VAC file exists locally.
• If it does not, download it using download_vac.
• Open the file using the appropriate library.
• Retrieve the VAC data matching parent_object. Usually one will use attributes in parent_object such as .mangaid or .plateifu to perform the match.
• Return the VAC data in whatever format is appropriate (a minimal sketch of such an override follows below).
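A minimal sketch of such an override is shown below. It is illustrative only: the VAC name 'myvac', the template name 'mangamyvac', the version string and the use of plateifu as the match key are all placeholder assumptions, not an actual Marvin VAC.

from marvin.contrib.vacs.base import VACMixIn, VACTarget

class MyVAC(VACMixIn):
    """Hypothetical VAC used only to illustrate the steps above."""

    name = 'myvac'                      # placeholder name
    description = 'illustrative only'

    def set_summary_file(self, release):
        # map the Marvin release to a (made-up) VAC version and tree path
        self.path_params = {'ver': '1.0.0'}
        self.summary_file = self.get_path('mangamyvac', path_params=self.path_params)

    def get_target(self, parent_object):
        # download the summary file on first use, then match on plateifu
        if not self.file_exists(self.summary_file):
            self.download_vac('mangamyvac', path_params=self.path_params)
        return VACTarget(parent_object.plateifu, self.summary_file)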
static get_vacs(parent_object)[source]
Returns a container with all the VACs subclassing from VACMixIn.
Because this method loops over VACMixIn.__subclasses__(), all the class that inherit from VACMixIn and that must be included in the container need to have been imported before calling get_vacs.
Parameters: parent_object (object) – The object to which the VACs are being attached. It will be passed to get_target when the subclass of VACMixIn is called.
Returns: vac_container (object) – An instance of a class that contains just a list of properties, one for each of the VACs that subclass from VACMixIn.
set_summary_file(release)[source]
Sets the VAC summary file
This method must be overridden in each subclass of VACMixIn. Details will depend on the exact implementation and the type of VAC, but in general each version of this method must:
• Access the version of your VAC matching the current release
• Define a dictionary of keyword parameters that defines the tree path
• Use get_path to construct the VAC path
• Set that path to the summary_file attribute
Setting a VAC summary file allows the VACs tool to load the full VAC data. If the VAC does not contain a summary file, this method should pass or return None.
update_path_params(params)[source]
Update the path_params dictionary with additional parameters
class marvin.contrib.vacs.base.VACTarget(targetid, vacfile, **kwargs)[source]
Customization Class to allow for returning complex target data
This parent class provides a framework for returning more complex data associated with a given target observation, for example ancillary spectral or image data. In these cases, returning a target row from the main VAC summary file, or a simple dictionary of values may not be sufficient. This class can be subclassed and customized to return any extra functionality or data.
When used, this class provides convenient access to the underlying VAC data as well as a boolean to indicate if the given target is included in the VAC.
Parameters: targetid (str) – The target id, usually plateifu or mangaid. Required. vacfile (str) – The path to the VAC summary file. Required.
Attributes: targetid (str) – The plateifu or mangaid target designation. data (row) – The extracted row VAC data for the provided targetid. _data (HDU) – The first data HDU of the summary VAC FITS file. _indata (bool) – A boolean indicating if the target is included in the VAC.
To use, subclass this class, add a new __init__ method. Make sure to call the original class’s __init__ method with super.
from marvin.contrib.vacs.base import VACTarget

class ExampleTarget(VACTarget):

    def __init__(self, targetid, vacfile):
        super(ExampleTarget, self).__init__(targetid, vacfile)
Further customization can now be done, e.g. adding new parameters in the initializtion of the object, adding new methods or attributes, or overriding existing methods, e.g. to customize the return data attribute.
To access a single HDU from the VAC, use the _get_data() method. If you need to access the entire file, use the _open_file() method.
data
The data row from a VAC for a specific targetid
## Available VACs¶
### Galaxy Zoo¶
class marvin.contrib.vacs.galaxyzoo.GZVAC[source]
VAC name: MaNGA Morphologies from Galaxy Zoo
Description: Returns Galaxy Zoo morphology for MaNGA galaxies. The Galaxy Zoo (GZ) data for SDSS galaxies has been split over several iterations of www.galaxyzoo.org, with the MaNGA target galaxies being spread over five different GZ data sets. In this value added catalog we bring all of these galaxies into one single catalog and re-run the debiasing code (Hart et al. 2016) in a consistent manner across the all the galaxies. This catalog includes data from Galaxy Zoo 2 (previously published in Willett et al. 2013) and newer data from Galaxy Zoo 4 (currently unpublished).
Authors: Coleman Krawczyk, Karen Masters and the rest of the Galaxy Zoo Team.
get_target(parent_object)[source]
Accesses VAC data for a specific target from a Marvin Tool object
set_summary_file(release)[source]
Sets the path to the GalaxyZoo summary file
### HI¶
class marvin.contrib.vacs.hi.HITarget(targetid, vacfile, specfile=None)[source]
A customized target class to also display HI spectra
This class handles data from both the HI summary file and the individual spectral files. Row data from the summary file for the given target is returned via the data property. Spectral data can be displayed via the the plot_spectrum method.
Parameters: targetid (str) – The plateifu or mangaid designation. vacfile (str) – The path of the VAC summary file. specfile (str) – The path to the HI spectra.
Attributes: data – The target row data from the main VAC file. targetid (str) – The target identifier.
plot_spectrum()[source]
Plot the HI spectrum
class marvin.contrib.vacs.hi.HIVAC[source]
VAC name: HI
Description: Returns HI summary data and spectra
Authors: David Stark and Karen Masters
get_target(parent_object)[source]
Accesses VAC data for a specific target from a Marvin Tool object
set_summary_file(release)[source]
Sets the path to the HI summary file
marvin.contrib.vacs.hi.plot_mass_fraction(vacdata_object)[source]
Plot the HI mass fraction
Computes and plots the HI mass fraction using the NSA elliptical Petrosian stellar mass from the MaNGA DRPall file. Only plots data for subset of targets in both the HI VAC and the DRPall file.
Parameters: vacdata_object (object) – The VACDataClass instance of the HI VAC
Example
>>> from marvin.tools.vacs import VACs
>>> v = VACs()
>>> hi = v.HI
>>> hi.plot_mass_fraction()
### Gema¶
class marvin.contrib.vacs.gema.GEMAVAC[source]
VAC name: GEMA
Description: The GEMA VAC contains many different quantifications of the local and the large-scale environments for MaNGA galaxies. Please visit the DATAMODEL at https://data.sdss.org/datamodel/files/MANGA_GEMA/GEMA_VER to see the description of each table composing the catalogue.
Authors: Maria Argudo-Fernandez, Daniel Goddard, Daniel Thomas, Zheng Zheng, Lihwai Lin, Ting Xiao, Fangting Yuan, Jianhui Lian, et al
get_target(parent_object)[source]
Accesses VAC data for a specific target from a Marvin Tool object
set_summary_file(release)[source]
Sets the path to the GEMA summary file
### Firefly¶
class marvin.contrib.vacs.firefly.FFlyTarget(targetid, vacfile, imagesz=None)[source]
A customized target class to also display Firefly 2-d maps
This class handles data the Firefly summary file. Row data from the summary file for the given target is returned via the data property. Specific Firefly parameters are available via the stellar_pops and stellar_gradients methods, respectively. 2-d maps from the Firefly data can be produced via the plot_map method.
TODO: fix e(b-v) and signal_noise in plot_maps
Parameters: targetid (str) – The plateifu or mangaid designation. vacfile (str) – The path of the VAC summary file. imagesz (int) – The original array shape of the target cube.
Attributes: data – The target row data from the main VAC file. targetid (str) – The target identifier.
list_parameters()[source]
List the parameters available for plotting
plot_map(parameter=None, mask=None)[source]
Plot map of stellar population properties
Plots a 2d map of the specified FIREFLY stellar population parameter using Matplotlib. Optionally mask the data when plotting using Numpy’s Masked Array. Default is to mask map values < -10.
Parameters: parameter (str) – The name of the VORONOI stellar pop. parameter. mask (nd-array) – A Numpy array of masked values to apply to the map.
Returns: The matplotlib axis image object.
stellar_gradients(parameter=None)[source]
Returns the gradient of stellar population properties
Returns the gradient of the stellar population property for a given stellar population parameter. If no parameter specified, returns the entire row.
Parameters: parameter (str) – The stellar population parameter to retrieve. Can be one of [‘lw_age’, ‘mw_age’, ‘lw_z’, ‘mw_z’]. The data from the FIREFLY summary file for the target galaxy
stellar_pops(parameter=None)[source]
Returns the global stellar population properties
Returns the global stellar population property within 1 Re for a given stellar population parameter. If no parameter specified, returns the entire row.
Parameters: parameter (str) – The stellar population parameter to retrieve. Can be one of [‘lw_age’, ‘mw_age’, ‘lw_z’, ‘mw_z’]. The data from the FIREFLY summary file for the target galaxy
class marvin.contrib.vacs.firefly.FIREFLYVAC[source]
VAC name: FIREFLY
Description: Returns integrated and resolved stellar population parameters fitted by FIREFLY
Authors: Jianhui Lian, Daniel Thomas, Claudia Maraston, and Lewis Hill
get_target(parent_object)[source]
Accesses VAC data for a specific target from a Marvin Tool object
set_summary_file(release)[source]
Sets the path to the Firefly summary file |
# NCERT Solutions for Class 7 Maths Chapter 5 Lines And Angles
Written by Team Trustudies
Updated at 2021-05-07
## NCERT solutions for class 7 Maths Chapter 5 Lines And Angles Exercise 5.1
Q.1 Find the complement of each of the following angles:
NCERT Solutions for Class 7 Maths Chapter 5 Lines And Angles
(i) Complement of 20° = 90° – 20° = 70°
(ii) Complement of 63° = 90° – 63° = 27°
(iii) Complement of 57° = 90° – 57° = 33°
Q.2 Find the supplement of each of the following angles:
NCERT Solutions for Class 7 Maths Chapter 5 Lines And Angles
(i) Supplement of 105° = 180° – 105° = 75°
(ii) Supplement of 87° = 180° – 87° = 93°
(iii) Supplement of 154° = 180° – 154° = 26°
Q.3 Identify which of the following pairs of angles are complementary and which are supplementary?
(i) 65°, 115°
(ii) 63°, 27°
(iii) 112°, 68°
(iv) 130°, 50°
(v) 45°, 45°
(vi) 80°, 10°
NCERT Solutions for Class 7 Maths Chapter 5 Lines And Angles
(i) 65° (+) 115° = 180°
They are supplementary angles.
(ii) 63° (+) 27° = 90°
They are complementary angles.
(iii) 112° (+) 68° = 180°
They are supplementary angles.
(iv) 130° (+) 50° = 180°
They are supplementary angles.
(v) 45° (+) 45° = 90°
They are complementary angles.
(vi) 80° (+) 10° = 90°
They are complementary angles.
Q.4 Find the angle which equal to its complement.
NCERT Solutions for Class 7 Maths Chapter 5 Lines And Angles
Let the required angle be x°.
its complement = (90 – x)°
According to question,
x = 90 – x, or x + x = 90, or 2x = 90, so x = 90/2 = 45°. Thus the required angle is 45°.
Q.5 Find the angle which is equal to its supplement.
NCERT Solutions for Class 7 Maths Chapter 5 Lines And Angles
Let the required angle be x°.
So, it supplement = (180 – x)°
Now, x = 180 – x
Or, x + x = 180
Or, 2x = 180°
Or x=180°/2=90°
Thus, the required angle is 90°.
Q.6 In the given figure, $\angle$1 and $\angle$2 are supplementary angles.
If $\angle$1 is decreased, what changes should take place in $\angle$2 so that both the angles still remain supplementary?
NCERT Solutions for Class 7 Maths Chapter 5 Lines And Angles
$\angle$1 + $\angle$2 = 180° (given)
If $\angle$1 is decreased by some degrees, then $\angle$2 will be increased by the same degrees so that the two angles still remain supplementary.
Q.7 Can two angles be supplementary if both of them are:
(i) acute?
(ii) obtuse?
(iii) right?
NCERT Solutions for Class 7 Maths Chapter 5 Lines And Angles
(i) No. If two angles are acute, meaning each is less than 90°, the two angles cannot be supplementary, because their sum will always be less than 180°.
(ii) No. If two angles are obtuse, meaning each is more than 90°, the two angles cannot be supplementary, because their sum will always be more than 180°.
(iii) Yes. If two angles are right, meaning both measure 90°, then the two angles can form a supplementary pair.
90° + 90° = 180°
Q.8 An angle is greater than 45°. Is its complementary angle greater than 45° or equal to 45° or less than 45 °?
NCERT Solutions for Class 7 Maths Chapter 5 Lines And Angles
Given angle is greater than 45°
Let the given angle be x°.
So, x > 45
Complement of x° = 90° – x° < 45° [ because x > 45°]
Thus the required angle is less than 45°.
Q.9 In the following figure:
(i) Is $\angle$1 adjacent to $\angle$2?
(ii) Is $\angle$AOC adjacent to $\angle$AOE?
(iii) Do $\angle$COE and $\angle$EOD form a linear pair?
(iv) Are $\angle$BOD and $\angle$DOA supplementary?
(v) Is $\angle$1 vertically opposite to $\angle$4?
(vi) What is the vertically opposite angle of $\angle$5?
NCERT Solutions for Class 7 Maths Chapter 5 Lines And Angles
(i) Yes, $\angle$1 and $\angle$2 are adjacent angles,
because one arm (OC) is common.
(ii) No, $\angle$AOC is not adjacent to $\angle$AOE
[because OC and OE do not lie on either side of the common arm OA].
(iii) Yes, $\angle$COE and $\angle$EOD form a linear pair of angles.
(iv) Yes, $\angle$BOD and $\angle$DOA are supplementary.
[Because $\angle$BOD + $\angle$DOA = 180°]
(v) Yes, $\angle$1 is vertically opposite to $\angle$4.
(vi) The vertically opposite angle of $\angle$5 is $\angle$2 + $\angle$3, i.e. $\angle$BOC.
Q.10 Indicate which pairs of angles are:
(i) Vertically opposite angles
(ii) Linear pairs
NCERT Solutions for Class 7 Maths Chapter 5 Lines And Angles
(i) Vertically opposite angles are $\angle$1 and $\angle$4,
$\angle$5 and ($\angle$2 + $\angle$3).
(ii) Linear pairs are
$\angle$1 and $\angle$5, $\angle$5 and $\angle$4.
Q.11 In the following figure, is $\angle$1 adjacent to $\angle$2? Give reasons.
NCERT Solutions for Class 7 Maths Chapter 5 Lines And Angles
No, $\angle$1 and $\angle$2 are not adjacent.
Reasons:
(i) They have no common vertex.
(ii) $\angle$1 + $\angle$2 is not equal to 180°.
Q.12 Find the values of the angles x, y and z in each of the following:
(i) (ii)
NCERT Solutions for Class 7 Maths Chapter 5 Lines And Angles
From Fig. 1. we have
$\angle$x = 55° (Vertically opposite angles)
$\angle$x + $\angle$y = 180° (Adjacent angles)
55° + $\angle$y = 180° (Linear pair of angles)
So, $\angle$y = 180° – 55° = 125°
$\angle$y = $\angle$z (Vertically opposite angles)
$\angle$z = 125°
Hence, $\angle$x = 55°, $\angle$y = 125° and $\angle$z = 125°
From Fig. ii . we have
25° + x + 40° = 180° (Sum of adjacent angles on straight line)
65° + x = 180°
So, x = 180° – 65° = 115°
40° + y = 180° (Linear pairs)
So, y = 180° – 40° = 140°
y + z = 180° (Linear pairs)
140° + z = 180°
So, z = 180° – 140° = 40°
Hence, x = 115°, y = 140° and z = 40°
Q.13 Fill in the blanks:
(i) If two angles are complementary, then the sum of their measures is ______ .
(ii) If two angles are supplementary, then the sum of their measures is ______ .
(iii) Two angles forming a linear pair are ______ .
(iv) If two adjacent angles are supplementary, they form a ______ .
(v) If two lines intersect at a point, then the vertically opposite angles are always ______ .
(vi) If two lines intersect at a point, and if one pair of vertically opposite angles are acute angles, then the other pair of vertically opposite angles are ______ .
NCERT Solutions for Class 7 Maths Chapter 5 Lines And Angles
(i) If two angles are complementary, then the sum of their measures is _90°_ .
(ii) If two angles are supplementary, then the sum of their measures is _180°_ .
(iii) Two angles forming a linear pair are _supplementary_ .
(iv) If two adjacent angles are supplementary, they form a _linear pair_.
(v) If two lines intersect at a point, then the vertically opposite angles are always _equal_ .
(vi) If two lines intersect at a point, and if one pair of vertically opposite angles are acute angles, then the other pair of vertically opposite angles are _obtuse angles_ .
Q.14 In the given figure, name the following pairs of angles.
(i) Obtuse vertically opposite angles.
(ii) Adjacent complementary angles.
(iii) Equal supplementary angles.
(iv) Unequal supplementary angles.
(v) Adjacent angles but do not form a linear pair.
NCERT Solutions for Class 7 Maths Chapter 5 Lines And Angles
(i) $\angle$BOC and $\angle$AOD are obtuse vertically opposite angles.
(ii) $\angle$AOB and $\angle$AOE are adjacent complementary angles.
(iii) $\angle$EOB and $\angle$EOD are equal supplementary angles.
(iv) $\angle$EOA and $\angle$EOC are unequal supplementary angles.
(v) $\angle$AOB and $\angle$AOE, $\angle$AOE and $\angle$EOD, $\angle$EOD and $\angle$COD are adjacent angles but do not form a linear pair.
## NCERT solutions for class 7 Maths Chapter 5 Lines And Angles Exercise 5.2
Q.1 State the property that is used in each of the following statements?
(i) If a $\parallel$ b, then $\angle$1 = $\angle$5
(ii) If $\angle$4 = $\angle$6, then a $\parallel$ b
(iii) If $\angle$4 + $\angle$5 = 180°, then a $\parallel$ b
NCERT Solutions for Class 7 Maths Chapter 5 Lines And Angles
(i) Given: a $\parallel$ b
So, $\angle$1 = $\angle$5 (Pair of corresponding angles)
(ii) Given: $\angle$4 = $\angle$6
So, a $\parallel$ b [If a pair of alternate angles are equal, then the lines are parallel]
(iii) Given: $\angle$4 + $\angle$5 = 180°
So, a $\parallel$ b [If the sum of interior angles is 180°, then the lines are parallel]
Q.2 In the given figure, identify
(i) the pairs of corresponding angles.
(ii) the pairs of alternate interior angles.
(iii) the pairs of interior angles on the same side of the transversal.
NCERT Solutions for Class 7 Maths Chapter 5 Lines And Angles
(i) The pairs of corresponding angles are $\angle$1 and $\angle$5, $\angle$2 and $\angle$6, $\angle$4 and $\angle$8, $\angle$3 and $\angle$7.
(ii) The pairs of alternate interior angles are $\angle$2 and $\angle$8, $\angle$3 and $\angle$5.
(iii) The pairs of interior angles on the same side of the transversal are $\angle$2 and $\angle$5, $\angle$3 and $\angle$8.
Q.3 In the given figure, p $\parallel$ q. Find the unknown angles.
NCERT Solutions for Class 7 Maths Chapter 5 Lines And Angles
$\angle$e + 125° = 180° (Linear pair)
$\therefore$ $\angle$e = 180° – 125° = 55°
$\angle$e = $\angle$f (Vertically opposite angles)
$\therefore$ $\angle$f = 55°
$\angle$a = $\angle$f = 55° (Alternate interior angles)
$\angle$c = $\angle$a = 55° (Vertically opposite angles)
$\angle$d = 125° (Corresponding angles)
$\angle$b = $\angle$d = 125° (Vertically opposite angles)
Thus, $\angle$a = 55°, $\angle$b = 125°, $\angle$c = 55°, $\angle$d = 125°, $\angle$e = 55°, $\angle$f = 55°.
Q.4 Find the value of x in each of the following figures if l $\parallel$ m.
NCERT Solutions for Class 7 Maths Chapter 5 Lines And Angles
(i) Let the angle opposite to 110° be y.
$\therefore$ y = 110° (Vertically opposite angles)
$\angle$x + $\angle$y = 180° (Sum of interior angles on the same side of the transversal)
$\angle$x + 110° = 180°
$\therefore$ $\angle$x = 180° – 110° = 70°
Thus x = 70°
(ii) $\angle$x = 110° (Pair of corresponding angles)
Q.5 In the given figure, the arms of two angles are parallel. If $\angle$ABC = 70°, then find
(i) $\angle$DGC
(ii) $\angle$DEF
NCERT Solutions for Class 7 Maths Chapter 5 Lines And Angles
Given:
AB $\parallel$ DE
BC $\parallel$ EF
$\angle$ABC = 70°
$\angle$DGC = $\angle$ABC
(i) $\angle$DGC = 70° (Pair of corresponding angles)
$\angle$DEF = $\angle$DGC
(ii) $\angle$DEF = 70° (Pair of corresponding angles)
Q.6 In the given figure below, decide whether l is parallel to m.
NCERT Solutions for Class 7 Maths Chapter 5 Lines And Angles
(i) Sum of interior angles on the same side of the transversal = 126° + 44° = 170° $\ne$ 180°
$\therefore$ l is not parallel to m.
(ii) Let the angle opposite to 75° be x.
x = 75° [Vertically opposite angles]
$\therefore$ Sum of interior angles on the same side of the transversal
= x + 75° = 75° + 75°
= 150° $\ne$ 180°
$\therefore$ l is not parallel to m.
(iii) Let the angle opposite to 57° be y.
$\therefore$ $\angle$y = 57° (Vertically opposite angles)
$\therefore$ Sum of interior angles on the same side of the transversal
= 57° + 123° = 180°
$\therefore$ l is parallel to m.
(iv) Let the angle opposite to 72° be z.
$\therefore$ z = 72° (Vertically opposite angles)
$\therefore$ Sum of interior angles on the same side of the transversal
= z + 98° = 72° + 98° = 170° $\ne$ 180°
$\therefore$ l is not parallel to m.
##### FAQs Related to NCERT Solutions for Class 7 Maths Chapter 5 Lines And Angles
There are a total of 20 questions in the NCERT solutions for Class 7 Maths Chapter 5, Lines and Angles.
There are a total of 0 long question/answers in the NCERT solutions for Class 7 Maths Chapter 5, Lines and Angles.
There are a total of 2 exercises in the NCERT solutions for Class 7 Maths Chapter 5, Lines and Angles.
# Energy flux of an EM wave
by guillefix
P: 77 Hello, The energy density of an electromagnetic wave is $ε_{0}E^{2}$. To calculate the energy flux, at least in the derivations I've seen, people just multiply by the speed of the wave, i.e., c. But doesn't this assume that the energy density is constant at all points? Yet E changes periodically! Why isn't it then the integral of the energy density over the corresponding volume, which would give something close to half of the usual answer I see!? Thanks in advance
# I need help factoring.
1. Jun 21, 2006
### Revolver
I haven't taken a math class since high school and I'm 23 now. I jumped right into Precalc 1 for the summer and the first chapter kicked my ass. I completely forgot how to factor and do LCD with algebraic equations and my professor just breezes by it like its nothing.
Can anyone explain the general principles of factoring trinomial equations? Example: x^2 + 5x + 6 = 0, factor that. (Or is the term called FOIL?)
LCD is a problem too, I know the concept is multiply both denominators together to get the LCD to cancel out... but what if the equation is complex like (2+5x)/(x^2+4x+3) + (3-4x)/(5x+8) = 7x^2 + 7
another one was x/8 + 2x/4 = 25
Any help and or practice problems would be greatly appreciated, i need to catch up!
2. Jun 22, 2006
### HallsofIvy
Staff Emeritus
FOIL is a mnemonic for multiplying, the opposite of factoring. To multiply (x+a)(x+b) you have to multiply each part of the first term by each part of the second term. F is "first terms" x*x = x^2, O is "outside" x(b) = bx, I is "inside" a(x) = ax, L is "last" a(b) = ab, but the effect is simply that you multiplied the "x" in the first term by both x and b and the "a" in the first term by both x and b: (x+a)(x+b) = x^2 + ax + bx + ab = x^2 + (a+b)x + ab. Now look carefully at the number multiplying x, a+b, and the constant term ab. They are just the sum and product of the two numbers. To factor x^2 + 5x + 6, work backwards. You know that ab must equal 6, so factor the 6 first: 6 = 3(2). It also happens that 5 = 3+2 = a+b. So x^2 + 5x + 6 = (x+3)(x+2).
You do not "multiply both denominators together to get the LCD to cancel out". The point of the LCD ("least common denominator") is to not have to do that much work (unless absolutely necessary). To solve
$$\frac{2+ 5x}{x^2+4x+3}+\frac{3-4x}{5x+8}= 7x^2+ 7$$
factor the denominators: 3 = 3(1) and 3 + 1 = 4, so x^2 + 4x + 3 = (x+3)(x+1). Since 5x + 8 is not either of those, the LCD does happen to be the product: multiply each term of the equation by x^2 + 4x + 3 and 5x + 8. Each of the denominators will cancel and you will have
$$(2+5x)(5x+8) + (3-4x)(x^2+4x+3) = (7x^2+7)(x^2+4x+3)(5x+8)$$
This will be a fifth degree equation which might be impossible to solve exactly. I assume you just made that up; it's not an actual problem you were expected to solve.
For x/8 + 2x/4 = 25, since 8 = 2(4), just multiply each term by the LCD, 8:
8(x/8) + 8(2x)/4 = 8(25), x + 4x = 200, 5x = 200, so x = 40.
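If it helps to check the algebra above, a symbolic package such as SymPy (assuming it is available) reproduces both answers:

```python
import sympy as sp

x = sp.symbols('x')

# Factoring the trinomial: find two numbers with product 6 and sum 5.
print(sp.factor(x**2 + 5*x + 6))             # (x + 2)*(x + 3)

# The LCD example: multiplying x/8 + 2x/4 = 25 through by 8 gives 5x = 200.
print(sp.solve(sp.Eq(x/8 + 2*x/4, 25), x))   # [40]
```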
A belt is a loop of flexible material used to link two or more rotating shafts mechanically. Belts may be used as a source of motion, to transmit power efficiently, or to track relative movement. Belts are looped over pulleys. In a two pulley system, the belt can either drive the pulleys in the same direction, or the belt may be crossed, so that the direction of the shafts is opposite. As a source of motion, a conveyor belt is one application where the belt is adapted to continuously carry a load between two points.
Power transmission
Belts are the cheapest utility for power transmission between shafts that may not be axially aligned. Power transmission is achieved by specially designed belts and pulleys. The demands on a belt drive transmission system are large and this has led to many variations on the theme. They run smoothly and with little noise, and cushion motor and bearings against load changes, albeit with less strength than gears or chains. However, improvements in belt engineering allow use of belts in systems that only formerly allowed chains or gears.
Power transmitted between a belt and a pulley is expressed as the product of difference of tension and belt velocity:[1]
$P = (T_1 - T_2)v$
where, T1 and T2 are tensions in the tight side and slack side of the belt respectively. They are related as:
$\frac{T_1}{T_2} = e^{\mu\alpha}$
where, μ is the coefficient of friction, and α is the angle subtended by contact surface at the centre of the pulley.
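As an illustration of how the two relations combine, the sketch below computes the slack-side tension and the transmitted power; all of the input numbers (tension, friction coefficient, wrap angle, belt speed) are hypothetical values chosen only for the example.

```python
import math

T1 = 400.0                    # tight-side tension, N (assumed)
mu = 0.3                      # coefficient of friction (assumed)
alpha = math.radians(160.0)   # angle of wrap at the pulley (assumed)
v = 10.0                      # belt velocity, m/s (assumed)

T2 = T1 / math.exp(mu * alpha)   # from T1/T2 = e^(mu * alpha)
P = (T1 - T2) * v                # transmitted power, W
print(round(T2, 1), round(P, 1))
```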
### Pros and cons
Belt drive is simple, inexpensive, and does not require axially aligned shafts. It helps protect the machinery from overload and jam, and it damps and isolates noise and vibration. Load fluctuations are shock-absorbed (cushioned). Belts need no lubrication and minimal maintenance. They have high efficiency (90-98%, usually 95%), high tolerance for misalignment, and are inexpensive if the shafts are far apart. Clutch action is activated by releasing belt tension. Different speeds can be obtained by step or tapered pulleys.
The angular-velocity ratio may not be constant or equal to that of the pulley diameters, due to slip and stretch. However, this problem has been largely solved by the use of toothed belts. Operating temperatures range from −31 °F (−35 °C) to 185 °F (85 °C). Adjustment of center distance or addition of an idler pulley is crucial to compensate for wear and stretch.
### Flat belts
The drive belt: used to transfer power from the engine's flywheel. Here shown driving a threshing machine.
Flat belts were used early in line shafting to transmit power in factories.[2] They were also used in countless farming, mining, and logging applications, such as bucksaws, sawmills, threshers, silo blowers, conveyors for filling corn cribs or haylofts, balers, water pumps (for wells, mines, or swampy farm fields), and electrical generators. The flat belt is a simple system of power transmission that was well suited for its day. It delivered high power for high speeds (500 hp for 10,000 ft/min), in cases of wide belts and large pulleys. These drives are bulky, requiring high tension leading to high loads, so vee belts have mainly replaced the flat-belts except when high speed is needed over power. The Industrial Revolution soon demanded more from the system, and flat belt pulleys needed to be carefully aligned to prevent the belt from slipping off. Because flat belts tend to climb towards the higher side of the pulley, pulleys were made with a slightly convex or "crowned" surface (rather than flat) to keep the belts centered. Flat belts also tend to slip on the pulley face when heavy loads are applied and many proprietary dressings were available that could be applied to the belts to increase friction, and so power transmission. Grip was better if the belt was assembled with the hair (i.e. outer) side of the leather against the pulley although belts were also often given a half-twist before joining the ends (forming a Möbius strip), so that wear was evenly distributed on both sides of the belt (DB). Belts were joined by lacing the ends together with leather thonging,[3][4] or later by steel comb fasteners.[5] A good modern use for a flat belt is with smaller pulleys and large central distances. They can connect inside and outside pulleys, and can come in both endless and jointed construction.
### Round belts
Round belts are belts of circular cross-section designed to run in a pulley with a 60 degree V-groove. Round grooves are only suitable for idler pulleys that guide the belt, or when (soft) O-ring type belts are used. The V-groove transmits torque through a wedging action, thus increasing friction. Nevertheless, round belts are for use in relatively low torque situations only and may be purchased in various lengths or cut to length and joined, either by a staple, a metallic connector (in the case of hollow plastic), gluing or welding (in the case of polyurethane). Early sewing machines utilized a leather belt, joined either by a metal staple or glued, to great effect.
### Vee belts
Belts on a Yanmar 2GM20 marine diesel engine.
A multiple-V-belt drive on an air compressor.
Vee belts (also known as V-belt or wedge rope) solved the slippage and alignment problem. It is now the basic belt for power transmission. They provide the best combination of traction, speed of movement, load of the bearings, and long service life. The V-belt was developed in 1917 by John Gates of the Gates Rubber Company. They are generally endless, and their general cross-section shape is trapezoidal. The "V" shape of the belt tracks in a mating groove in the pulley (or sheave), with the result that the belt cannot slip off. The belt also tends to wedge into the groove as the load increases — the greater the load, the greater the wedging action — improving torque transmission and making the V-belt an effective solution, needing less width and tension than flat belts. V-belts trump flat belts with their small center distances and high reduction ratios. The preferred center distance is larger than the largest pulley diameter, but less than three times the sum of both pulleys. Optimal speed range is 1000–7000 ft/min. V-belts need larger pulleys for their larger thickness than flat belts. They can be supplied at various fixed lengths or as a segmented section, where the segments are linked (spliced) to form a belt of the required length. For high-power requirements, two or more vee belts can be joined side-by-side in an arrangement called a multi-V, running on matching multi-groove sheaves. The strength of these belts is obtained by reinforcements with fibers like steel, polyester or aramid (e.g. Twaron or Kevlar). This is known as a multiple-V-belt drive (or sometimes a "classical V-belt drive"). When an endless belt does not fit the need, jointed and link V-belts may be employed. However they are weaker and only usable at speeds up to 4000 ft/min. A link v-belt is a number of rubberized fabric links held together by metal fasteners. They are length adjustable by disassembling and removing links when needed.
### Multi-groove belts
A multi-groove or polygroove belt[6] is made up of usually 5 or 6 "V" shapes alongside each other. This gives a thinner belt for the same drive surface, which is thus more flexible, although often wider. The added flexibility offers improved efficiency, as less energy is wasted in the internal friction of continually bending the belt. In practice this gain in efficiency is overshadowed by the reduced heating effect on the belt, as a cooler-running belt lasts longer in service.
A further advantage of the polygroove belt, and the reason they have become so popular, stems from the ability to be run over pulleys on the ungrooved back of the belt. Although this is sometimes done with vee belts and a single idler pulley for tensioning, a polygroove belt may be wrapped around a pulley on its back tightly enough to change its direction, or even to provide a light driving force.[7]
Any vee belt's ability to drive pulleys depends on wrapping the belt around a sufficient angle of the pulley to provide grip. Where a single-vee belt is limited to a simple convex shape, it can adequately wrap at most three or possibly four pulleys, so can drive at most three accessories. Where more must be driven, such as for modern cars with power steering and air conditioning, multiple belts are required. As the polygroove belt can be bent into concave paths by external idlers, it can wrap any number of driven pulleys, limited only by the power capacity of the belt.[7]
This ability to bend the belt at the designer's whim allows it to take a complex or "serpentine" path. This can assist the design of a compact engine layout, where the accessories are mounted more closely to the engine block and without the need to provide movable tensioning adjustments. The entire belt may be tensioned by a single idler pulley.
### Ribbed belt
A ribbed belt is a power transmission belt featuring lengthwise grooves. It operates from contact between the ribs of the belt and the grooves in the pulley. Its single-piece structure is reported to offer an even distribution of tension across the width of the pulley where the belt is in contact, a power range up to 600 kW, a high speed ratio, serpentine drives (possibility to drive off the back of the belt), long life, stability and homogeneity of the drive tension, and reduced vibration. The ribbed belt may be fitted on various applications: compressors, fitness bikes, agricultural machinery, food mixers, washing machines, lawn mowers, etc.
### Film belts
Though often grouped with flat belts, they are actually a different kind. They consist of a very thin belt (0.5-15 millimeters or 100-4000 micrometres) strip of plastic and occasionally rubber. They are generally intended for low-power (10 hp or 7 kW), high-speed uses, allowing high efficiency (up to 98%) and long life. These are seen in business machines, printers, tape recorders, and other light-duty operations.
### Timing belts
Timing belt
Belt-drive cog on a belt-driven bicycle
Timing belts, (also known as toothed, notch, cog, or synchronous belts) are a positive transfer belt and can track relative movement. These belts have teeth that fit into a matching toothed pulley. When correctly tensioned, they have no slippage, run at constant speed, and are often used to transfer direct motion for indexing or timing purposes (hence their name). They are often used in lieu of chains or gears, so there is less noise and a lubrication bath is not necessary. Camshafts of automobiles, miniature timing systems, and stepper motors often utilize these belts. Timing belts need the least tension of all belts, and are among the most efficient. They can bear up to 200 hp (150 kW) at speeds of 16,000 ft/min.
Timing belts with a helical offset tooth design are available. The helical offset tooth design forms a chevron pattern and causes the teeth to engage progressively. The chevron pattern design is self-aligning. The chevron pattern design does not make the noise that some timing belts make at certain speeds, and is more efficient at transferring power (up to 98%).
Disadvantages include a relatively high purchase cost, the need for specially fabricated toothed pulleys, less protection from overloading and jamming, and the lack of clutch action.
### Specialty belts
Belts normally transmit power on the tension side of the loop. However, designs for continuously variable transmissions exist that use belts that are a series of solid metal blocks, linked together as in a chain, transmitting power on the compression side of the loop. |
# ESP Biography
## CARL SCHILDKRAUT, ESP Teacher
Major: Mathematics
College/Employer: MIT
Not Available.
## Past Classes
(Clicking a class title will bring you to the course's section of the corresponding course catalog)
M15286: Catalan Numbers in Splash 2022 (Nov. 19 - 20, 2022)
How many ways can we write a valid sequence of $2n$ parentheses? How many rooted binary trees are there on $n$ vertices? How many up-right paths from $(0,0)$ to $(n,n)$ stay on or below the line $y=x$? In this class, we will explain what these questions mean and why they are all answered by a sequence called the Catalan Numbers.
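For anyone who wants to play with the sequence ahead of time, the closed form $C_n = \binom{2n}{n}/(n+1)$ is easy to compute; the snippet below is purely an illustration and not part of the course material.

```python
from math import comb

def catalan(n: int) -> int:
    """n-th Catalan number via the closed form C(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)

# The same numbers count balanced strings of 2n parentheses, rooted binary
# trees on n vertices, and up-right paths that stay on or below y = x.
print([catalan(n) for n in range(8)])   # [1, 1, 2, 5, 14, 42, 132, 429]
```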
M14357: Wrong?? Math in Splash 2020 (Nov. 14 - 15, 2020)
Some things in math look true but are false, and some things in math look false but are true.
L13158: Introduction to Esperanto in Splash 2019 (Nov. 23 - 24, 2019)
What's Esperanto? It's the most widely spoken invented language, actively spoken by around 200,000 people all over the world. It's really easy to learn! You'll learn more Esperanto in this hour than you'd learn German in ten hours. By the end of the class you'll be able to form basic sentences in Esperanto.
M13448: Catalan Numbers in Splash 2019 (Nov. 23 - 24, 2019)
How many ways can we write a valid sequence of $$2n$$ parenthesis? How many rooted binary trees are there on $$n$$ vertices? How many up-right paths from $$(0,0)$$ to $$(n,n)$$ stay below the line $$y=x$$? In this class, we will explain what these questions mean and why they are all answered by a sequence called the Catalan Numbers.
M13654: Fermat's Last Theorem in Splash 2019 (Nov. 23 - 24, 2019)
This course introduces the infamous Fermat's Last Theorem (FLT), which remained unsolved for over 350 years despite its popularity among mathematicians. FLT claims that there are no nontrivial solutions to the equation $$x^n+y^n=z^n$$ for $$n\ge 3$$. We begin by covering the historical progress on special cases of $$n$$. We finish by introducing the concept of elliptic curves, and briefly covering the machinery that led to Andrew Wiles' proof of FLT in 1994. |
# Generalized Eigen Decomposition
For the two-input generalized eigensolution version,
eigs(A, B; nev=6, ncv=max(20,2*nev+1), which=:LM, tol=0.0, maxiter=300, sigma=nothing, ritzvec=true, v0=zeros((0,))) -> (d,[v,],nconv,niter,nmult,resid)
the following keyword arguments are supported:
• nev: Number of eigenvalues
• ncv: Number of Krylov vectors used in the computation; should satisfy nev+1 <= ncv <= n for real symmetric problems and nev+2 <= ncv <= n for other problems, where n is the size of the input matrices A and B. The default is ncv = max(20,2*nev+1). Note that these restrictions limit the input matrix A to be of dimension at least 2.
• which: type of eigenvalues to compute. See the note below.
| `which` | type of eigenvalues |
| --- | --- |
| `:LM` | eigenvalues of largest magnitude (default) |
| `:SM` | eigenvalues of smallest magnitude |
| `:LR` | eigenvalues of largest real part |
| `:SR` | eigenvalues of smallest real part |
| `:LI` | eigenvalues of largest imaginary part (nonsymmetric or complex A only) |
| `:SI` | eigenvalues of smallest imaginary part (nonsymmetric or complex A only) |
| `:BE` | compute half of the eigenvalues from each end of the spectrum, biased in favor of the high end (real symmetric A only) |
• tol: relative tolerance used in the convergence criterion for eigenvalues, similar to tol in the eigs(A) method for the ordinary eigenvalue problem, but effectively for the eigenvalues of $B^{-1} A$ instead of $A$. See the documentation for the ordinary eigenvalue problem in eigs(A) and the accompanying note about tol.
• maxiter: Maximum number of iterations (default = 300)
• sigma: Specifies the level shift used in inverse iteration. If nothing (default), defaults to ordinary (forward) iterations. Otherwise, find eigenvalues close to sigma using shift and invert iterations.
• ritzvec: Returns the Ritz vectors v (eigenvectors) if true
• v0: starting vector from which to start the iterations
eigs returns the nev requested eigenvalues in d, the corresponding Ritz vectors v (only if ritzvec=true), the number of converged eigenvalues nconv, the number of iterations niter and the number of matrix vector multiplications nmult, as well as the final residual vector resid.
We can see the various keywords in action in the following examples:
julia> A = sparse(1.0I, 4, 4); B = Diagonal(1:4);
julia> λ, ϕ = eigs(A, B, nev = 2);
julia> λ
2-element Array{Float64,1}:
1.0000000000000002
0.5
julia> A = Diagonal([1, -2im, 3, 4im]); B = sparse(1.0I, 4, 4);
julia> λ, ϕ = eigs(A, B, nev=1, which=:SI);
julia> λ
1-element Array{Complex{Float64},1}:
-1.5720931501039814e-16 - 1.9999999999999984im
julia> λ, ϕ = eigs(A, B, nev=1, which=:LI);
julia> λ
1-element Array{Complex{Float64},1}:
0.0 + 4.000000000000002im
Note
The sigma and which keywords interact: the description of eigenvalues searched for by which does not necessarily refer to the eigenvalue problem $Av = Bv\lambda$, but rather to the linear operator constructed by the specification of the iteration mode implied by sigma.
| `sigma` | iteration mode | `which` refers to the problem |
| --- | --- | --- |
| `nothing` | ordinary (forward) | $Av = Bv\lambda$ |
| real or complex | inverse with level shift `sigma` | $(A - \sigma B )^{-1}B = v\nu$ |
# Is a prime subfield a set of integers?
By definition: If $F$ is a field and $K\subset F$ is the smallest field contained in $F$, we call $K$ the prime subfield of $F$. Denote the prime subfield of $F$ by $P(F)$ (hope you don't mind my introducing this notation).
We know that for any field $F$ with characteristic $p$, $P(F)=\{a\cdot b^{-1}, \text{where } a\in\{0,1,...,p-1\},b\in\{1,2,...,p-1\}\}$. I'd like to know if $P(F)$ is a set of integers, but it's not clear to me that it should be so (e.g. is $2(p-1)^{-1}$ an integer?).
For some context: I am trying to prove that if the characteristic of $F$ is $p$ for some $p$ prime, $P(F)$ is isomorphic to $\mathbb{F}_p$. I'm convinced that $\sigma:P(F)\to \mathbb{Z}_p$ with $\sigma(f)=\overline{f}\equiv f$ mod $p$ will give a homomorphism, and hence $\sigma$ is an isomorphism between $P(F)$ and $P(\mathbb{Z}_p)=\mathbb{Z}_p$. [I haven't yet verified that $P(\mathbb{Z}_p)=\mathbb{Z}_p$; it seems $\mathbb{Z}_2\subset \mathbb{Z}_p$, but $\mathbb{Z}_2$ is not a field with the addition inherited from $\mathbb{Z}_p$?] I know that the desired homomorphism properties - $\sigma(a+b)=\sigma(a)+\sigma(b); \sigma(ab)=\sigma(a)\sigma(b); \sigma(1)=1$ - hold if $a,b$ are integers, hence the initial question of the post.
If my wishful thinking is off and $P(F)$ is not a set of integers, I was thinking $P(F)$ is isomorphic to $\{0,1,...,M\}$ for some $M\le p^p$, and then we can define the above homomorphism from $\{0,1,...,M\} \to \mathbb{Z}_p$. But we no longer have the guarantee that $\{0,1,...,M\}$ is a field. So I'm not sure where I would go from there.
Thanks a bunch in advance! And please let me know if I should filter my posts more before I answer questions. I usually don't include much "thought-process" text - maybe I should not do so.
• Have you seen the first isomorphism theorem for rings or Bézout’s identity? – k.stm Aug 30 '16 at 19:35
• I have not. In fact, no discussion of rings yet. It seems rings are also useful for understanding polynomials (or vice versa? Both?), the latter of which features in my HW assignment (and the former not at all). I will read on rings and look out for this theorem. Thanks for the tip! – manofbear Aug 30 '16 at 19:41
• Rings are extremely useful and absolutely foundational to abstract algebra. In fact, I’d recommend studying some basic ring theory before or alongside field theory. – k.stm Aug 30 '16 at 19:42
• Excellent, thanks for that insight. I'm going through Artin Ch 10 (rings) as we speak. Any particular references you'd recommend as nice introductions without much in the way of background? – manofbear Aug 30 '16 at 19:43
• Sorry, I’m not too familiar with English textbooks on basic algebra/linear algebra, but there are plenty of questions about recommendations here, just search for questions with the reference-request tag and one of the abstract-algebra or linear-algebra tags. Lang’s Algebra is often cited I think. From what I can recall, it’s rather encyclopedic. I learnt my basic algebra from first-year introductory lectures, Bosch’s Linear Algebra and Algebra (in German) and a lot of Wikipedia. – k.stm Aug 30 '16 at 19:49
If my wishful thinking is off and P(F) is not a set of integers,
You're right to question your wishful thinking. The prime subfield is not a set of integers.
Think about the $p$-element field $\mathbb{Z}_p$, which is its own prime subfield. It has $p$ elements that it's convenient to name with the names $0, 1, \ldots , p-1$ of the first $p$ nonnegative integers, but it doesn't contain those integers, since their arithmetic isn't ordinary integer arithmetic, it's arithmetic mod $p$. Your question suggests that you sort of understand this.
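To make "the arithmetic is mod $p$" concrete, here is a tiny illustration (with $p=7$ chosen arbitrarily): the element named $2\cdot(p-1)^{-1}$ is computed with modular inverses and lands back in $\{0,\dots,p-1\}$, even though the rational number $2/(p-1)$ is not an integer.

```python
p = 7                     # any prime; the p-element field uses arithmetic mod p

a, b = 2, p - 1           # the element "2 * (p-1)^(-1)" from the question
b_inv = pow(b, -1, p)     # modular inverse of b (Python 3.8+)
print((a * b_inv) % p)    # 5 when p = 7 -- an element of {0, ..., p-1},
                          # even though the rational 2/6 is not an integer
```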
In answer to your last paragraph: you should think a question through as best you can before asking it here - but you need not arrive at perfect clarity. If you could, you'd probably have the answer.
• Ethan, the voice from the past says that I like this answer very much. – Lubin Sep 11 '16 at 23:45
• @Lubin Voices from the past are always welcome (and comforting). – Ethan Bolker Sep 11 '16 at 23:48
In characteristic $p$, the prime subfield is just $\mathbf Z/p\mathbf Z$. This is because for any commutative ring $R$, there's a canonical ring homomorphism which maps each $n\in \mathbf Z$ onto $n\cdot 1_R$, and if it is not injective, its kernel is generated by an integer $a>0$, whence by the 1st isomorphism theorem one obtains an injective ring homomorphism $\mathbf Z/a\mathbf Z\hookrightarrow R$.
Now if $R$ is an integral domain (e.g. a field $F$), the kernel is a prime ideal, generated by a prime number $p$. You now have your injection from $\mathbf Z/p\mathbf Z$ into the field $F$. |
Nonstandard analysis is a mathematical framework in which one extends the standard mathematical universe ${{\mathfrak U}}$ of standard numbers, standard sets, standard functions, etc. into a larger nonstandard universe ${{}^* {\mathfrak U}}$ of nonstandard numbers, nonstandard sets, nonstandard functions, etc., somewhat analogously to how one places the real numbers inside the complex numbers, or the rationals inside the reals. This nonstandard universe enjoys many of the same properties as the standard one; in particular, we have the transfer principle that asserts that any statement in the language of first order logic is true in the standard universe if and only if it is true in the nonstandard one. (For instance, because Fermat’s last theorem is known to be true for standard natural numbers, it is automatically true for nonstandard natural numbers as well.) However, the nonstandard universe also enjoys some additional useful properties that the standard one does not, most notably the countable saturation property, which is a property somewhat analogous to the completeness property of a metric space; much as metric completeness allows one to assert that the intersection of a countable family of nested closed balls is non-empty, countable saturation allows one to assert that the intersection of a countable family of nested satisfiable formulae is simultaneously satisfiable. (See this previous blog post for more on the analogy between the use of nonstandard analysis and the use of metric completions.) Furthermore, by viewing both the standard and nonstandard universes externally (placing them both inside a larger metatheory, such as a model of Zermelo-Frankel-Choice (ZFC) set theory; in some more advanced set-theoretic applications one may also wish to add some large cardinal axioms), one can place some useful additional definitions and constructions on these universes, such as defining the concept of an infinitesimal nonstandard number (a number which is smaller in magnitude than any positive standard number). The ability to rigorously manipulate infinitesimals is of course one of the most well-known advantages of working with nonstandard analysis.
To build a nonstandard universe ${{}^* {\mathfrak U}}$ from a standard one ${{\mathfrak U}}$, the most common approach is to take an ultrapower of ${{\mathfrak U}}$ with respect to some non-principal ultrafilter over the natural numbers; see e.g. this blog post for details. Once one is comfortable with ultrafilters and ultrapowers, this becomes quite a simple and elegant construction, and greatly demystifies the nature of nonstandard analysis.
On the other hand, nonprincipal ultrafilters do have some unappealing features. The most notable one is that their very existence requires the axiom of choice (or more precisely, a weaker form of this axiom known as the boolean prime ideal theorem). Closely related to this is the fact that one cannot actually write down any explicit example of a nonprincipal ultrafilter, but must instead rely on nonconstructive tools such as Zorn’s lemma, the Hahn-Banach theorem, Tychonoff’s theorem, the Stone-Cech compactification, or the boolean prime ideal theorem to locate one. As such, ultrafilters definitely belong to the “infinitary” side of mathematics, and one may feel that it is inappropriate to use such tools for “finitary” mathematical applications, such as those which arise in hard analysis. From a more practical viewpoint, because of the presence of the infinitary ultrafilter, it can be quite difficult (though usually not impossible, with sufficient patience and effort) to take a finitary result proven via nonstandard analysis and coax an effective quantitative bound from it.
There is however a “cheap” version of nonstandard analysis which is less powerful than the full version, but is not as infinitary in that it is constructive (in the sense of not requiring any sort of choice-type axiom), and which can be translated into standard analysis somewhat more easily than a fully nonstandard argument; indeed, a cheap nonstandard argument can often be presented (by judicious use of asymptotic notation) in a way which is nearly indistinguishable from a standard one. It is obtained by replacing the nonprincipal ultrafilter in fully nonstandard analysis with the more classical Fréchet filter of cofinite subsets of the natural numbers, which is the filter that implicitly underlies the concept of the classical limit ${\lim_{{\bf n} \rightarrow \infty} a_{\bf n}}$ of a sequence when the underlying asymptotic parameter ${{\bf n}}$ goes off to infinity. As such, “cheap nonstandard analysis” aligns very well with traditional mathematics, in which one often allows one’s objects to be parameterised by some external parameter such as ${{\bf n}}$, which is then allowed to approach some limit such as ${\infty}$. The catch is that the Fréchet filter is merely a filter and not an ultrafilter, and as such some of the key features of fully nonstandard analysis are lost. Most notably, the law of the excluded middle does not transfer over perfectly from standard analysis to cheap nonstandard analysis; much as there exist bounded sequences of real numbers (such as ${0,1,0,1,\ldots}$) which do not converge to a (classical) limit, there exist statements in cheap nonstandard analysis which are neither true nor false (at least without passing to a subsequence, see below). The loss of such a fundamental law of mathematical reasoning may seem like a major disadvantage for cheap nonstandard analysis, and it does indeed make cheap nonstandard analysis somewhat weaker than fully nonstandard analysis. But in some situations (particularly when one is reasoning in a “constructivist” or “intuitionistic” fashion, and in particular if one is avoiding too much reliance on set theory) it turns out that one can survive the loss of this law; and furthermore, the law of the excluded middle is still available for standard analysis, and so one can often proceed by working from time to time in the standard universe to temporarily take advantage of this law, and then transferring the results obtained there back to the cheap nonstandard universe once one no longer needs to invoke the law of the excluded middle. Furthermore, the law of the excluded middle can be recovered by adopting the freedom to pass to subsequences with regards to the asymptotic parameter ${{\bf n}}$; this technique is already in widespread use in the analysis of partial differential equations, although it is generally referred to by names such as “the compactness method” rather than as “cheap nonstandard analysis”.
Below the fold, I would like to describe this cheap version of nonstandard analysis, which I think can serve as a pedagogical stepping stone towards fully nonstandard analysis, as it is formally similar to (though weaker than) fully nonstandard analysis, but on the other hand is closer in practice to standard analysis. As we shall see below, the relation between cheap nonstandard analysis and standard analysis is analogous in many ways to the relation between probabilistic reasoning and deterministic reasoning; it also resembles somewhat the preference in much of modern mathematics for viewing mathematical objects as belonging to families (or to categories) to be manipulated en masse, rather than treating each object individually. (For instance, nonstandard analysis can be used as a partial substitute for scheme theory in order to obtain uniformly quantitative results in algebraic geometry, as discussed for instance in this previous blog post.)
In the previous set of notes, we introduced the notion of an ultra approximate group – an ultraproduct ${A = \prod_{n \rightarrow\alpha} A_n}$ of finite ${K}$-approximate groups ${A_n}$ for some ${K}$ independent of ${n}$, where each ${K}$-approximate group ${A_n}$ may lie in a distinct ambient group ${G_n}$. Although these objects arise initially from the “finitary” objects ${A_n}$, it turns out that ultra approximate groups ${A}$ can be profitably analysed by means of infinitary groups ${L}$ (and in particular, locally compact groups or Lie groups ${L}$), by means of certain models ${\rho: \langle A \rangle \rightarrow L}$ of ${A}$ (or of the group ${\langle A \rangle}$ generated by ${A}$). We will define precisely what we mean by a model later, but as a first approximation one can view a model as a representation of the ultra approximate group ${A}$ (or of ${\langle A \rangle}$) that is “macroscopically faithful” in that it accurately describes the “large scale” behaviour of ${A}$ (or equivalently, that the kernel of the representation is “microscopic” in some sense). In the next section we will see how one can use “Gleason lemma” technology to convert this macroscopic control of an ultra approximate group into microscopic control, which will be the key to classifying approximate groups.
Models of ultra approximate groups can be viewed as the multiplicative combinatorics analogue of the more well known concept of an ultralimit of metric spaces, which we briefly review below the fold as motivation.
The crucial observation is that ultra approximate groups enjoy a local compactness property which allows them to be usefully modeled by locally compact groups (and hence, through the Gleason-Yamabe theorem from previous notes, by Lie groups also). As per the Heine-Borel theorem, the local compactness will come from a combination of a completeness property and a local total boundedness property. The completeness property turns out to be a direct consequence of the countable saturation property of ultraproducts, thus illustrating one of the key advantages of the ultraproduct setting. The local total boundedness property is more interesting. Roughly speaking, it asserts that "large bounded sets" (such as ${A}$ or ${A^{100}}$) can be covered by finitely many translates of "small bounded sets" ${S}$, where "small" is in a topological group sense, implying in particular that large powers ${S^m}$ of ${S}$ lie inside a set such as ${A}$ or ${A^4}$. The easiest way to obtain such a property comes from the following lemma of Sanders:
Lemma 1 (Sanders lemma) Let ${A}$ be a finite ${K}$-approximate group in a (global) group ${G}$, and let ${m \geq 1}$. Then there exists a symmetric subset ${S}$ of ${A^4}$ with ${|S| \gg_{K,m} |A|}$ containing the identity such that ${S^m \subset A^4}$.
This lemma has an elementary combinatorial proof, and is the key to endowing an ultra approximate group with locally compact structure. There is also a closely related lemma of Croot and Sisask which can achieve similar results, and which will also be discussed below. (The locally compact structure can also be established more abstractly using the much more general methods of definability theory, as was first done by Hrushovski, but we will not discuss this approach here.)
By combining the locally compact structure of ultra approximate groups ${A}$ with the Gleason-Yamabe theorem, one ends up being able to model a large “ultra approximate subgroup” ${A'}$ of ${A}$ by a Lie group ${L}$. Such Lie models serve a number of important purposes in the structure theory of approximate groups. Firstly, as all Lie groups have a dimension which is a natural number, they allow one to assign a natural number “dimension” to ultra approximate groups, which opens up the ability to perform “induction on dimension” arguments. Secondly, Lie groups have an escape property (which is in fact equivalent to no small subgroups property): if a group element ${g}$ lies outside of a very small ball ${B_\epsilon}$, then some power ${g^n}$ of it will escape a somewhat larger ball ${B_1}$. Or equivalently: if a long orbit ${g, g^2, \ldots, g^n}$ lies inside the larger ball ${B_1}$, one can deduce that the original element ${g}$ lies inside the small ball ${B_\epsilon}$. Because all Lie groups have this property, we will be able to show that all ultra approximate groups ${A}$ “essentially” have a similar property, in that they are “controlled” by a nearby ultra approximate group which obeys a number of escape-type properties analogous to those enjoyed by small balls in a Lie group, and which we will call a strong ultra approximate group. This will be discussed in the next set of notes, where we will also see how these escape-type properties can be exploited to create a metric structure on strong approximate groups analogous to the Gleason metrics studied in previous notes, which can in turn be exploited (together with an induction on dimension argument) to fully classify such approximate groups (in the finite case, at least).
There are some cases where the analysis is particularly simple. For instance, in the bounded torsion case, one can show that the associated Lie model ${L}$ is necessarily zero-dimensional, which allows for an easy classification of approximate groups of bounded torsion.
Some of the material here is drawn from my recent paper with Ben Green and Emmanuel Breuillard, which is in turn inspired by a previous paper of Hrushovski.
Roughly speaking, mathematical analysis can be divided into two major styles, namely hard analysis and soft analysis. The precise distinction between the two types of analysis is imprecise (and in some cases one may use a blend of the two styles), but some key differences can be listed as follows.
• Hard analysis tends to be concerned with quantitative or effective properties such as estimates, upper and lower bounds, convergence rates, and growth rates or decay rates. In contrast, soft analysis tends to be concerned with qualitative or ineffective properties such as existence and uniqueness, finiteness, measurability, continuity, differentiability, connectedness, or compactness.
• Hard analysis tends to be focused on finitary, finite-dimensional or discrete objects, such as finite sets, finitely generated groups, finite Boolean combination of boxes or balls, or “finite-complexity” functions, such as polynomials or functions on a finite set. In contrast, soft analysis tends to be focused on infinitary, infinite-dimensional, or continuous objects, such as arbitrary measurable sets or measurable functions, or abstract locally compact groups.
• Hard analysis tends to involve explicit use of many parameters such as ${\epsilon}$, ${\delta}$, ${N}$, etc. In contrast, soft analysis tends to rely instead on properties such as continuity, differentiability, compactness, etc., which implicitly are defined using a similar set of parameters, but whose parameters often do not make an explicit appearance in arguments.
• In hard analysis, it is often the case that a key lemma in the literature is not quite optimised for the application at hand, and one has to reprove a slight variant of that lemma (using a variant of the proof of the original lemma) in order for it to be suitable for applications. In contrast, in soft analysis, key results can often be used as “black boxes”, without need of further modification or inspection of the proof.
• The properties in soft analysis tend to enjoy precise closure properties; for instance, the composition or linear combination of continuous functions is again continuous, and similarly for measurability, differentiability, etc. In contrast, the closure properties in hard analysis tend to be fuzzier, in that the parameters in the conclusion are often different from the parameters in the hypotheses. For instance, the composition of two Lipschitz functions with Lipschitz constant ${K}$ is still Lipschitz, but now with Lipschitz constant ${K^2}$ instead of ${K}$. These changes in parameters mean that hard analysis arguments often require more “bookkeeping” than their soft analysis counterparts, and are less able to utilise algebraic constructions (e.g. quotient space constructions) that rely heavily on precise closure properties.
In the lectures so far, focusing on the theory surrounding Hilbert’s fifth problem, the results and techniques have fallen well inside the category of soft analysis. However, we will now turn to the theory of approximate groups, which is a topic which is traditionally studied using the methods of hard analysis. (Later we will also study groups of polynomial growth, which lies on an intermediate position in the spectrum between hard and soft analysis, and which can be profitably analysed using both styles of analysis.)
Despite the superficial differences between hard and soft analysis, though, there are a number of important correspondences between results in hard analysis and results in soft analysis. For instance, if one has some sort of uniform quantitative bound on some expression relating to finitary objects, one can often use limiting arguments to then conclude a qualitative bound on analogous expressions on infinitary objects, by viewing the latter objects as some sort of “limit” of the former objects. Conversely, if one has a qualitative bound on infinitary objects, one can often use compactness and contradiction arguments to recover uniform quantitative bounds on finitary objects as a corollary.
Remark 1 Another type of correspondence between hard analysis and soft analysis, which is “syntactical” rather than “semantical” in nature, arises by taking the proofs of a soft analysis result, and translating such a qualitative proof somehow (e.g. by carefully manipulating quantifiers) into a quantitative proof of an analogous hard analysis result. This type of technique is sometimes referred to as proof mining in the proof theory literature, and is discussed in this previous blog post (and its comments). We will however not employ systematic proof mining techniques here, although in later posts we will informally borrow arguments from infinitary settings (such as the methods used to construct Gleason metrics) and adapt them to finitary ones.
Let us illustrate the correspondence between hard and soft analysis results with a simple example.
Proposition 1 Let ${X}$ be a sequentially compact topological space, let ${S}$ be a dense subset of ${X}$, and let ${f: X \rightarrow [0,+\infty]}$ be a continuous function (giving the extended half-line ${[0,+\infty]}$ the usual order topology). Then the following statements are equivalent:
• (i) (Qualitative bound on infinitary objects) For all ${x \in X}$, one has ${f(x) < +\infty}$.
• (ii) (Quantitative bound on finitary objects) There exists ${M < +\infty}$ such that ${f(x) \leq M}$ for all ${x \in S}$.
In applications, ${S}$ is typically a (non-compact) set of “finitary” (or “finite complexity”) objects of a certain class, and ${X}$ is some sort of “completion” or “compactification” of ${S}$ which admits additional “infinitary” objects that may be viewed as limits of finitary objects.
Proof: To see that (ii) implies (i), observe from density that every point ${x}$ in ${X}$ is adherent to ${S}$, and so given any neighbourhood ${U}$ of ${x}$, there exists ${y \in S \cap U}$. Since ${f(y) \leq M}$, we conclude from the continuity of ${f}$ that ${f(x) \leq M}$ also, and the claim follows.
Conversely, to show that (i) implies (ii), we use the “compactness and contradiction” argument. Suppose for sake of contradiction that (ii) failed. Then for any natural number ${n}$, there exists ${x_n \in S}$ such that ${f(x_n) \geq n}$. (Here we have used the axiom of choice, which we will assume throughout this course.) Using sequential compactness, and passing to a subsequence if necessary, we may assume that the ${x_n}$ converge to a limit ${x \in X}$. By continuity of ${f}$, this implies that ${f(x) = +\infty}$, contradicting (i). $\Box$
Remark 2 Note that the above deduction of (ii) from (i) is ineffective in that it gives no explicit bound on the uniform bound ${M}$ in (ii). Without any further information on how the qualitative bound (i) is proven, this is the best one can do in general (and this is one of the most significant weaknesses of infinitary methods when used to solve finitary problems); but if one has access to the proof of (i), one can often finitise or proof mine that argument to extract an effective bound for ${M}$, although often the bound one obtains in the process is quite poor (particularly if the proof of (i) relied extensively on infinitary tools, such as limits). See this blog post for some related discussion.
The above simple example illustrates that in order to get from an “infinitary” statement such as (i) to a “finitary” statement such as (ii), a key step is to be able to take a sequence ${(x_n)_{n \in {\bf N}}}$ (or in some cases, a more general net ${(x_\alpha)_{\alpha \in A}}$) of finitary objects and extract a suitable infinitary limit object ${x}$. In the literature, there are three main ways in which one can extract such a limit:
• (Topological limit) If the ${x_n}$ are all elements of some topological space ${S}$ (e.g. an incomplete function space) which has a suitable “compactification” or “completion” ${X}$ (e.g. a Banach space), then (after passing to a subsequence if necessary) one can often ensure the ${x_n}$ converge in a topological sense (or in a metrical sense) to a limit ${x}$. The use of this type of limit to pass between quantitative/finitary and qualitative/infinitary results is particularly common in the more analytical areas of mathematics (such as ergodic theory, asymptotic combinatorics, or PDE), due to the abundance of useful compactness results in analysis such as the (sequential) Banach-Alaoglu theorem, Prokhorov’s theorem, the Helly selection theorem, the Arzelá-Ascoli theorem, or even the humble Bolzano-Weierstrass theorem. However, one often has to take care with the nature of convergence, as many compactness theorems only guarantee convergence in a weak sense rather than in a strong one.
• (Categorical limit) If the ${x_n}$ are all objects in some category (e.g. metric spaces, groups, fields, etc.) with a number of morphisms between the ${x_n}$ (e.g. morphisms from ${x_{n+1}}$ to ${x_n}$, or vice versa), then one can often form a direct limit ${\lim_{\rightarrow} x_n}$ or inverse limit ${\lim_{\leftarrow} x_n}$ of these objects to form a limiting object ${x}$. The use of these types of limits to connect quantitative and qualitative results is common in subjects such as algebraic geometry that are particularly amenable to categorical ways of thinking. (We have seen inverse limits appear in the discussion of Hilbert’s fifth problem, although in that context they were not really used to connect quantitative and qualitative results together.)
• (Logical limit) If the ${x_n}$ are all distinct spaces (or elements or subsets of distinct spaces), with few morphisms connecting them together, then topological and categorical limits are often unavailable or unhelpful. In such cases, however, one can still tie together such objects using an ultraproduct construction (or similar device) to create a limiting object ${\lim_{n \rightarrow \alpha} x_n}$ or limiting space ${\prod_{n \rightarrow \alpha} x_n}$ that is a logical limit of the ${x_n}$, in the sense that various properties of the ${x_n}$ (particularly those that can be phrased using the language of first-order logic) are preserved in the limit. As such, logical limits are often very well suited for the task of connecting finitary and infinitary mathematics together. Ultralimit type constructions are of course used extensively in logic (particularly in model theory), but are also popular in metric geometry. They can also be used in many of the previously mentioned areas of mathematics, such as algebraic geometry (as discussed in this previous post).
The three types of limits are analogous in many ways, with a number of connections between them. For instance, in the study of groups of polynomial growth, both topological limits (using the metric notion of Gromov-Hausdorff convergence) and logical limits (using the ultralimit construction) are commonly used, and to some extent the two constructions are at least partially interchangeable in this setting. (See also these previous posts for the use of ultralimits as a substitute for topological limits.) In the theory of approximate groups, though, it was observed by Hrushovski that logical limits (and in particular, ultraproducts) are the most useful type of limit to connect finitary approximate groups to their infinitary counterparts. One reason for this is that one is often interested in obtaining results on approximate groups ${A}$ that are uniform in the choice of ambient group ${G}$. As such, one often seeks to take a limit of approximate groups ${A_n}$ that lie in completely unrelated ambient groups ${G_n}$, with no obvious morphisms or metrics tying the ${G_n}$ to each other. As such, the topological and categorical limits are not easily usable, whereas the logical limits can still be employed without much difficulty.
Logical limits are closely tied with non-standard analysis. Indeed, by applying an ultraproduct construction to standard number systems such as the natural numbers ${{\bf N}}$ or the reals ${{\bf R}}$, one can obtain nonstandard number systems such as the nonstandard natural numbers ${{}^* {\bf N}}$ or the nonstandard real numbers (or hyperreals) ${{}^* {\bf R}}$. These nonstandard number systems behave very similarly to their standard counterparts, but also enjoy the advantage of containing the standard number systems as proper subsystems (e.g. ${{\bf R}}$ is a subring of ${{}^* {\bf R}}$), which allows for some convenient algebraic manipulations (such as the quotient space construction to create spaces such as ${{}^* {\bf R} / {\bf R}}$) which are not easily accessible in the purely standard universe. Nonstandard spaces also enjoy a useful completeness property, known as countable saturation, which is analogous to metric completeness (as discussed in this previous blog post) and which will be particularly useful for us in tying together the theory of approximate groups with the theory of Hilbert’s fifth problem. See this previous post for more discussion on ultrafilters and nonstandard analysis.
In these notes, we lay out the basic theory of ultraproducts and ultralimits (in particular, proving Los’s theorem, which roughly speaking asserts that ultralimits are limits in a logical sense, as well as the countable saturation property alluded to earlier). We also lay out some of the basic foundations of nonstandard analysis, although we will not rely too heavily on nonstandard tools in this course. Finally, we apply this general theory to approximate groups, to connect finite approximate groups to an infinitary type of approximate group which we will call an ultra approximate group. We will then study these ultra approximate groups (and models of such groups) in more detail in the next set of notes.
Remark 3 Throughout these notes (and in the rest of the course), we will assume the axiom of choice, in order to easily use ultrafilter-based tools. If one really wanted to expend the effort, though, one could eliminate the axiom of choice from the proofs of the final “finitary” results that one is ultimately interested in proving, at the cost of making the proofs significantly lengthier. Indeed, there is a general result of Gödel that any result which can be stated in the language of Peano arithmetic (which, roughly speaking, means that the result is “finitary” in nature), and can be proven in set theory using the axiom of choice (or more precisely, in the ZFC axiom system), can also be proven in set theory without the axiom of choice (i.e. in the ZF system). As this course is not focused on foundations, we shall simply assume the axiom of choice henceforth to avoid further distraction by such issues.
This fall (starting Monday, September 26), I will be teaching a graduate topics course which I have entitled “Hilbert’s fifth problem and related topics.” The course is going to focus on three related topics:
• Hilbert’s fifth problem on the topological description of Lie groups, as well as the closely related (local) classification of locally compact groups (the Gleason-Yamabe theorem).
• Approximate groups in nonabelian groups, and their classification via the Gleason-Yamabe theorem (this is very recent work of Emmanuel Breuillard, Ben Green, Tom Sanders, and myself, building upon earlier work of Hrushovski);
• Gromov’s theorem on groups of polynomial growth, as proven via the classification of approximate groups (as well as some consequences to fundamental groups of Riemannian manifolds).
I have already blogged about these topics repeatedly in the past (particularly with regard to Hilbert’s fifth problem), and I intend to recycle some of that material in the lecture notes for this course.
The above three families of results exemplify two broad principles (part of what I like to call “the dichotomy between structure and randomness“):
• (Rigidity) If a group-like object exhibits a weak amount of regularity, then it (or a large portion thereof) often automatically exhibits a strong amount of regularity as well;
• (Structure) This strong regularity manifests itself either as Lie type structure (in continuous settings) or nilpotent type structure (in discrete settings). (In some cases, “nilpotent” should be replaced by sister properties such as “abelian“, “solvable“, or “polycyclic“.)
Let me illustrate what I mean by these two principles with two simple examples, one in the continuous setting and one in the discrete setting. We begin with a continuous example. Given an ${n \times n}$ complex matrix ${A \in M_n({\bf C})}$, define the matrix exponential ${\exp(A)}$ of ${A}$ by the formula
$\displaystyle \exp(A) := \sum_{k=0}^\infty \frac{A^k}{k!} = 1 + A + \frac{1}{2!} A^2 + \frac{1}{3!} A^3 + \ldots$
which can easily be verified to be an absolutely convergent series.
Exercise 1 Show that the map ${A \mapsto \exp(A)}$ is a real analytic (and even complex analytic) map from ${M_n({\bf C})}$ to ${M_n({\bf C})}$, and obeys the restricted homomorphism property
$\displaystyle \exp(sA) \exp(tA) = \exp((s+t)A) \ \ \ \ \ (1)$
for all ${A \in M_n({\bf C})}$ and ${s,t \in {\bf C}}$.
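As a quick numerical sanity check of the restricted homomorphism property (1) (purely illustrative; the matrix below is an arbitrary example), one can compare ${\exp(sA)\exp(tA)}$ with ${\exp((s+t)A)}$:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, 0.5]])   # an arbitrary 2x2 matrix
s, t = 0.3, 1.1

lhs = expm(s * A) @ expm(t * A)
rhs = expm((s + t) * A)
print(np.allclose(lhs, rhs))               # True: exp(sA) exp(tA) = exp((s+t)A)
```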
Proposition 1 (Rigidity and structure of matrix homomorphisms) Let ${n}$ be a natural number. Let ${GL_n({\bf C})}$ be the group of invertible ${n \times n}$ complex matrices. Let ${\Phi: {\bf R} \rightarrow GL_n({\bf C})}$ be a map obeying two properties:
• (Group-like object) ${\Phi}$ is a homomorphism, thus ${\Phi(s) \Phi(t) = \Phi(s+t)}$ for all ${s,t \in {\bf R}}$.
• (Weak regularity) The map ${t \mapsto \Phi(t)}$ is continuous.
Then:
• (Strong regularity) The map ${t \mapsto \Phi(t)}$ is smooth (i.e. infinitely differentiable). In fact it is even real analytic.
• (Lie-type structure) There exists a (unique) complex ${n \times n}$ matrix ${A}$ such that ${\Phi(t) = \exp(tA)}$ for all ${t \in {\bf R}}$.
Proof: Let ${\Phi}$ be as above. Let ${\epsilon > 0}$ be a small number (depending only on ${n}$). By the homomorphism property, ${\Phi(0) = 1}$ (where we use ${1}$ here to denote the identity element of ${GL_n({\bf C})}$), and so by continuity we may find a small ${t_0>0}$ such that ${\Phi(t) = 1 + O(\epsilon)}$ for all ${t \in [-t_0,t_0]}$ (we use some arbitrary norm here on the space of ${n \times n}$ matrices, and allow implied constants in the ${O()}$ notation to depend on ${n}$).
The map ${A \mapsto \exp(A)}$ is real analytic and (by the inverse function theorem) is a diffeomorphism near ${0}$. Thus, by the inverse function theorem, we can (if ${\epsilon}$ is small enough) find a matrix ${B}$ of size ${B = O(\epsilon)}$ such that ${\Phi(t_0) = \exp(B)}$. By the homomorphism property and (1), we thus have
$\displaystyle \Phi(t_0/2)^2 = \Phi(t_0) = \exp(B) = \exp(B/2)^2.$
On the other hand, by another application of the inverse function theorem we see that the squaring map ${A \mapsto A^2}$ is a diffeomorphism near ${1}$ in ${GL_n({\bf C})}$, and thus (if ${\epsilon}$ is small enough)
$\displaystyle \Phi(t_0/2) = \exp(B/2).$
We may iterate this argument (for a fixed, but small, value of ${\epsilon}$) and conclude that
$\displaystyle \Phi(t_0/2^k) = \exp(B/2^k)$
for all ${k = 0,1,2,\ldots}$. By the homomorphism property and (1) we thus have
$\displaystyle \Phi(qt_0) = \exp(qB)$
whenever ${q}$ is a dyadic rational, i.e. a rational of the form ${a/2^k}$ for some integer ${a}$ and natural number ${k}$. By continuity we thus have
$\displaystyle \Phi(st_0) = \exp(sB)$
for all real ${s}$. Setting ${A := B/t_0}$ we conclude that
$\displaystyle \Phi(t) = \exp(tA)$
for all real ${t}$, which gives existence of the representation and also real analyticity and smoothness. Finally, uniqueness of the representation ${\Phi(t) = \exp(tA)}$ follows from the identity
$\displaystyle A = \frac{d}{dt} \exp(tA)|_{t=0}.$
$\Box$
Exercise 2 Generalise Proposition 1 by replacing the hypothesis that ${\Phi}$ is continuous with the hypothesis that ${\Phi}$ is Lebesgue measurable (Hint: use the Steinhaus theorem.). Show that the proposition fails (assuming the axiom of choice) if this hypothesis is omitted entirely.
Note how one needs both the group-like structure and the weak regularity in combination in order to ensure the strong regularity; neither is sufficient on its own. We will see variants of the above basic argument throughout the course. Here, the task of obtaining smooth (or real analytic structure) was relatively easy, because we could borrow the smooth (or real analytic) structure of the domain ${{\bf R}}$ and range ${M_n({\bf C})}$; but, somewhat remarkably, we shall see that one can still build such smooth or analytic structures even when none of the original objects have any such structure to begin with.
Now we turn to a second illustration of the above principles, namely Jordan’s theorem, which uses a discreteness hypothesis to upgrade Lie type structure to nilpotent (and in this case, abelian) structure. We shall formulate Jordan’s theorem in a slightly stilted fashion in order to emphasise the adherence to the above-mentioned principles.
Theorem 2 (Jordan’s theorem) Let ${G}$ be an object with the following properties:
• (Group-like object) ${G}$ is a group.
• (Discreteness) ${G}$ is finite.
• (Lie-type structure) ${G}$ is contained in ${U_n({\bf C})}$ (the group of unitary ${n \times n}$ matrices) for some ${n}$.
Then there is a subgroup ${G'}$ of ${G}$ such that
• (${G'}$ is close to ${G}$) The index ${|G/G'|}$ of ${G'}$ in ${G}$ is ${O_n(1)}$ (i.e. bounded by ${C_n}$ for some quantity ${C_n}$ depending only on ${n}$).
• (Nilpotent-type structure) ${G'}$ is abelian.
A key observation in the proof of Jordan’s theorem is that if two unitary elements ${g, h \in U_n({\bf C})}$ are close to the identity, then their commutator ${[g,h] = g^{-1}h^{-1}gh}$ is even closer to the identity (in, say, the operator norm ${\| \|_{op}}$). Indeed, since multiplication on the left or right by unitary elements does not affect the operator norm, we have
$\displaystyle \| [g,h] - 1 \|_{op} = \| gh - hg \|_{op}$
$\displaystyle = \| (g-1)(h-1) - (h-1)(g-1) \|_{op}$
and so by the triangle inequality
$\displaystyle \| [g,h] - 1 \|_{op} \leq 2 \|g-1\|_{op} \|h-1\|_{op}. \ \ \ \ \ (2)$
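Here is a quick numerical illustration of the estimate (2) (a sketch only, assuming NumPy/SciPy; random unitaries near the identity are generated as ${\exp(i\epsilon H)}$ for Hermitian ${H}$, and the operator norm is the largest singular value):

```python
# Check that the commutator of two near-identity unitaries is quadratically small.
import numpy as np
from scipy.linalg import expm

def op_norm(M):
    return np.linalg.norm(M, 2)       # largest singular value

rng = np.random.default_rng(1)
n, eps = 4, 0.05
I = np.eye(n)

def small_unitary():
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    H = (X + X.conj().T) / 2          # Hermitian
    return expm(1j * eps * H)         # unitary, within O(eps) of the identity

g, h = small_unitary(), small_unitary()
comm = g.conj().T @ h.conj().T @ g @ h      # [g,h] = g^{-1} h^{-1} g h
lhs = op_norm(comm - I)
rhs = 2 * op_norm(g - I) * op_norm(h - I)
print(lhs, rhs, lhs <= rhs)           # lhs is O(eps^2), and bounded by rhs
```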
Now we can prove Jordan’s theorem.
Proof: We induct on ${n}$, the case ${n=1}$ being trivial. Suppose first that ${G}$ contains a central element ${g}$ which is not a multiple of the identity. Then, by definition, ${G}$ is contained in the centraliser ${Z(g)}$ of ${g}$, which by the spectral theorem is isomorphic to a product ${U_{n_1}({\bf C}) \times \ldots \times U_{n_k}({\bf C})}$ of smaller unitary groups. Projecting ${G}$ to each of these factor groups and applying the induction hypothesis, we obtain the claim.
Thus we may assume that ${G}$ contains no central elements other than multiples of the identity. Now pick a small ${\epsilon > 0}$ (one could take ${\epsilon=\frac{1}{10n}}$ in fact) and consider the subgroup ${G'}$ of ${G}$ generated by those elements of ${G}$ that are within ${\epsilon}$ of the identity (in the operator norm). By considering a maximal ${\epsilon}$-net of ${G}$ we see that ${G'}$ has index at most ${O_{n,\epsilon}(1)}$ in ${G}$. By arguing as before, we may assume that ${G'}$ has no central elements other than multiples of the identity.
If ${G'}$ consists only of multiples of the identity, then we are done. If not, take an element ${g}$ of ${G'}$ that is not a multiple of the identity, and which is as close as possible to the identity (here is where we crucially use that ${G}$ is finite). By (2), we see that if ${\epsilon}$ is sufficiently small depending on ${n}$, and if ${h}$ is one of the generators of ${G'}$, then ${[g,h]}$ lies in ${G'}$ and is closer to the identity than ${g}$, and is thus a multiple of the identity. On the other hand, ${[g,h]}$ has determinant ${1}$. Given that it is so close to the identity, it must therefore be the identity (if ${\epsilon}$ is small enough). In other words, ${g}$ is central in ${G'}$, and is thus a multiple of the identity. But this contradicts the hypothesis that there are no central elements other than multiples of the identity, and we are done. $\Box$
Commutator estimates such as (2) will play a fundamental role in many of the arguments we will see in this course; as we saw above, such estimates combine very well with a discreteness hypothesis, but will also be very useful in the continuous setting.
Exercise 3 Generalise Jordan’s theorem to the case when ${G}$ is a finite subgroup of ${GL_n({\bf C})}$ rather than of ${U_n({\bf C})}$. (Hint: The elements of ${G}$ are not necessarily unitary, and thus do not necessarily preserve the standard Hilbert inner product of ${{\bf C}^n}$. However, if one averages that inner product by the finite group ${G}$, one obtains a new inner product on ${{\bf C}^n}$ that is preserved by ${G}$, which allows one to conjugate ${G}$ to a subgroup of ${U_n({\bf C})}$. This averaging trick is (a small) part of Weyl’s unitary trick in representation theory.)
Exercise 4 (Inability to discretise nonabelian Lie groups) Show that if ${n \geq 3}$, then the orthogonal group ${O_n({\bf R})}$ cannot contain arbitrarily dense finite subgroups, in the sense that there exists an ${\epsilon = \epsilon_n > 0}$ depending only on ${n}$ such that for every finite subgroup ${G}$ of ${O_n({\bf R})}$, there exists a ball of radius ${\epsilon}$ in ${O_n({\bf R})}$ (with, say, the operator norm metric) that is disjoint from ${G}$. What happens in the ${n=2}$ case?
Remark 1 More precise classifications of the finite subgroups of ${U_n({\bf C})}$ are known, particularly in low dimensions. For instance, one can show that the only finite subgroups of ${SO_3({\bf R})}$ (which ${SU_2({\bf C})}$ is a double cover of) are isomorphic to either a cyclic group, a dihedral group, or the symmetry group of one of the Platonic solids.
I have blogged several times in the past about nonstandard analysis, which among other things is useful in allowing one to import tools from infinitary (or qualitative) mathematics in order to establish results in finitary (or quantitative) mathematics. One drawback, though, to using nonstandard analysis methods is that the bounds one obtains by such methods are usually ineffective: in particular, the conclusions of a nonstandard analysis argument may involve an unspecified constant ${C}$ that is known to be finite but for which no explicit bound is obviously available. (In many cases, a bound can eventually be worked out by performing proof mining on the argument, and in particular by carefully unpacking the proofs of all the various results from infinitary mathematics that were used in the argument, as opposed to simply using them as “black boxes”, but this is a time-consuming task and the bounds that one eventually obtains tend to be quite poor (e.g. tower exponential or Ackermann type bounds are not uncommon).)
Because of this fact, it would seem that quantitative bounds, such as polynomial type bounds ${X \leq C Y^C}$ that show that one quantity ${X}$ is controlled in a polynomial fashion by another quantity ${Y}$, are not easily obtainable through the ineffective methods of nonstandard analysis. Actually, this is not the case; as I will demonstrate by an example below, nonstandard analysis can certainly yield polynomial type bounds. The catch is that the exponent ${C}$ in such bounds will be ineffective; but nevertheless such bounds are still good enough for many applications.
Let us now illustrate this by reproving a lemma from this paper of Mei-Chu Chang (Lemma 2.14, to be precise), which was recently pointed out to me by Van Vu. Chang’s paper is focused primarily on the sum-product problem, but she uses a quantitative lemma from algebraic geometry which is of independent interest. To motivate the lemma, let us first establish a qualitative version:
Lemma 1 (Qualitative solvability) Let ${P_1,\ldots,P_r: {\bf C}^d \rightarrow {\bf C}}$ be a finite number of polynomials in several variables with rational coefficients. If there is a complex solution ${z = (z_1,\ldots,z_d) \in {\bf C}^d}$ to the simultaneous system of equations
$\displaystyle P_1(z) = \ldots = P_r(z) = 0,$
then there also exists a solution ${z \in \overline{{\bf Q}}^d}$ whose coefficients are algebraic numbers (i.e. they lie in the algebraic closure ${\overline{{\bf Q}}}$ of the rationals).
Proof: Suppose there was no solution to ${P_1(z)=\ldots=P_r(z)=0}$ over ${\overline{{\bf Q}}}$. Applying Hilbert’s nullstellensatz (which is available as ${\overline{{\bf Q}}}$ is algebraically closed), we conclude the existence of some polynomials ${Q_1,\ldots,Q_r}$ (with coefficients in ${\overline{{\bf Q}}}$) such that
$\displaystyle P_1 Q_1 + \ldots + P_r Q_r = 1$
as polynomials. In particular, we have
$\displaystyle P_1(z) Q_1(z) + \ldots + P_r(z) Q_r(z) = 1$
for all ${z \in {\bf C}^d}$. This shows that there is no solution to ${P_1(z)=\ldots=P_r(z)=0}$ over ${{\bf C}}$, as required. $\Box$
Remark 1 Observe that in the above argument, one could replace ${{\bf Q}}$ and ${{\bf C}}$ by any other pair of fields, with the latter containing the algebraic closure of the former, and still obtain the same result.
The above lemma asserts that if a system of rational equations is solvable at all, then it is solvable with some algebraic solution. But it gives no bound on the complexity of that solution in terms of the complexity of the original equation. Chang’s lemma provides such a bound. If ${H \geq 1}$ is an integer, let us say that an algebraic number has height at most ${H}$ if its minimal polynomial (after clearing denominators) consists of integers of magnitude at most ${H}$.
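As a quick computational illustration of this notion of height (a sketch only, not part of Chang's argument; it assumes SymPy's minimal_polynomial routine, which returns the minimal polynomial with integer coefficients):

```python
# Compute the height of an algebraic number from its minimal polynomial.
import sympy as sp

x = sp.symbols('x')

def height(alpha):
    p = sp.Poly(sp.minimal_polynomial(alpha, x), x)
    return max(abs(c) for c in p.all_coeffs())

print(height(sp.sqrt(2)))             # minimal polynomial x**2 - 2      -> height 2
print(height((1 + sp.sqrt(5)) / 2))   # minimal polynomial x**2 - x - 1  -> height 1
print(height(sp.root(2, 3) + 1))      # x**3 - 3*x**2 + 3*x - 3          -> height 3
```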
Lemma 2 (Quantitative solvability) Let ${P_1,\ldots,P_r: {\bf C}^d \rightarrow {\bf C}}$ be a finite number of polynomials of degree at most ${D}$ with rational coefficients, each of height at most ${H}$. If there is a complex solution ${z = (z_1,\ldots,z_d) \in {\bf C}^d}$ to the simultaneous system of equations
$\displaystyle P_1(z) = \ldots = P_r(z) = 0,$
then there also exists a solution ${z \in \overline{{\bf Q}}^d}$ whose coefficients are algebraic numbers of degree at most ${C}$ and height at most ${CH^C}$, where ${C = C_{D, d,r}}$ depends only on ${D}$, ${d}$ and ${r}$.
Chang proves this lemma by essentially establishing a quantitative version of the nullstellensatz, via elementary elimination theory (somewhat similar, actually, to the approach I took to the nullstellensatz in my own blog post). She also notes that one could also establish the result through the machinery of Gröbner bases. In each of these arguments, it was not possible to use Lemma 1 (or the closely related nullstellensatz) as a black box; one actually had to unpack one of the proofs of that lemma or nullstellensatz to get the polynomial bound. However, using nonstandard analysis, it is possible to get such polynomial bounds (albeit with an ineffective value of the constant ${C}$) directly from Lemma 1 (or more precisely, the generalisation in Remark 1) without having to inspect the proof, and instead simply using it as a black box, thus providing a “soft” proof of Lemma 2 that is an alternative to the “hard” proofs mentioned above.
Here’s how the proof works. Informally, the idea is that Lemma 2 should follow from Lemma 1 after replacing the field of rationals ${{\bf Q}}$ with “the field of rationals of polynomially bounded height”. Unfortunately, the latter object does not really make sense as a field in standard analysis; nevertheless, it is a perfectly sensible object in nonstandard analysis, and this allows the above informal argument to be made rigorous.
We turn to the details. As is common whenever one uses nonstandard analysis to prove finitary results, we use a “compactness and contradiction” argument (or more precisely, an “ultralimit and contradiction” argument). Suppose for contradiction that Lemma 2 failed. Carefully negating the quantifiers (and using the axiom of choice), we conclude that there exists ${D, d, r}$ such that for each natural number ${n}$, there is a positive integer ${H^{(n)}}$ and a family ${P_1^{(n)}, \ldots, P_r^{(n)}: {\bf C}^d \rightarrow {\bf C}}$ of polynomials of degree at most ${D}$ with rational coefficients of height at most ${H^{(n)}}$, such that there exists at least one complex solution ${z^{(n)} \in {\bf C}^d}$ to
$\displaystyle P_1^{(n)}(z^{(n)}) = \ldots = P_r^{(n)}(z^{(n)}) = 0, \ \ \ \ \ (1)$
but such that there does not exist any such solution whose coefficients are algebraic numbers of degree at most ${n}$ and height at most ${n (H^{(n)})^n}$.
Now we take ultralimits (see e.g. this previous blog post for a quick review of ultralimit analysis, which we will assume knowledge of in the argument that follows). Let ${p \in \beta {\bf N} \backslash {\bf N}}$ be a non-principal ultrafilter. For each ${i=1,\ldots,r}$, the ultralimit
$\displaystyle P_i := \lim_{n \rightarrow p} P_i^{(n)}$
of the (standard) polynomials ${P_i^{(n)}}$ is a nonstandard polynomial ${P_i: {}^* {\bf C}^d \rightarrow {}^* {\bf C}}$ of degree at most ${D}$, whose coefficients now lie in the nonstandard rationals ${{}^* {\bf Q}}$. Actually, due to the height restriction, we can say more. Let ${H := \lim_{n \rightarrow p} H^{(n)} \in {}^* {\bf N}}$ be the ultralimit of the ${H^{(n)}}$; this is a nonstandard natural number (which will almost certainly be unbounded, but we will not need to use this). Let us say that a nonstandard integer ${a}$ is of polynomial size if we have ${|a| \leq C H^C}$ for some standard natural number ${C}$, and say that a nonstandard rational number ${a/b}$ is of polynomial height if ${a}$, ${b}$ are of polynomial size. Let ${{\bf Q}_{poly(H)}}$ be the collection of all nonstandard rationals of polynomial height. (In the language of nonstandard analysis, ${{\bf Q}_{poly(H)}}$ is an external set rather than an internal one, because it is not itself an ultraproduct of standard sets; but this will not be relevant for the argument that follows.) It is easy to see that ${{\bf Q}_{poly(H)}}$ is a field, basically because the sum or product of two integers of polynomial size remains of polynomial size. By construction, it is clear that the coefficients of ${P_i}$ are nonstandard rationals of polynomial height, and thus ${P_1,\ldots,P_r}$ are defined over ${{\bf Q}_{poly(H)}}$.
Meanwhile, if we let ${z := \lim_{n \rightarrow p} z^{(n)} \in {}^* {\bf C}^d}$ be the ultralimit of the solutions ${z^{(n)}}$ in (1), we have
$\displaystyle P_1(z) = \ldots = P_r(z) = 0,$
thus ${P_1,\ldots,P_r}$ are solvable in ${{}^* {\bf C}}$. Applying Lemma 1 (or more precisely, the generalisation in Remark 1), we see that ${P_1,\ldots,P_r}$ are also solvable in ${\overline{{\bf Q}_{poly(H)}}}$. (Note that as ${{\bf C}}$ is algebraically closed, ${{}^*{\bf C}}$ is also (by Los’s theorem), and so ${{}^* {\bf C}}$ contains ${\overline{{\bf Q}_{poly(H)}}}$.) Thus, there exists ${w \in \overline{{\bf Q}_{poly(H)}}^d}$ with
$\displaystyle P_1(w) = \ldots = P_r(w) = 0.$
As ${\overline{{\bf Q}_{poly(H)}}^d}$ lies in ${{}^* {\bf C}^d}$, we can write ${w}$ as an ultralimit ${w = \lim_{n \rightarrow p} w^{(n)}}$ of standard complex vectors ${w^{(n)} \in {\bf C}^d}$. By construction, the coefficients of ${w}$ each obey a non-trivial polynomial equation of degree at most ${C}$ and whose coefficients are nonstandard integers of magnitude at most ${C H^C}$, for some standard natural number ${C}$. Undoing the ultralimit, we conclude that for ${n}$ sufficiently close to ${p}$, the coefficients of ${w^{(n)}}$ obey a non-trivial polynomial equation of degree at most ${C}$ whose coefficients are standard integers of magnitude at most ${C (H^{(n)})^C}$. In particular, these coefficients have height at most ${C (H^{(n)})^C}$. Also, we have
$\displaystyle P_1^{(n)}(w^{(n)}) = \ldots = P_r^{(n)}(w^{(n)}) = 0.$
But for ${n}$ larger than ${C}$, this contradicts the construction of the ${P_i^{(n)}}$, and the claim follows. (Note that as ${p}$ is non-principal, any neighbourhood of ${p}$ in ${{\bf N}}$ will contain arbitrarily large natural numbers.)
Remark 2 The same argument actually gives a slightly stronger version of Lemma 2, namely that the integer coefficients used to define the algebraic solution ${z}$ can be taken to be polynomials in the coefficients of ${P_1,\ldots,P_r}$, with degree and coefficients bounded by ${C_{D,d,r}}$.
I recently reposted my favourite logic puzzle, namely the blue-eyed islander puzzle. I am fond of this puzzle because in order to properly understand the correct solution (and to properly understand why the alternative solution is incorrect), one has to think very clearly (but unintuitively) about the nature of knowledge.
There is however an additional subtlety to the puzzle that was pointed out in comments, in that the correct solution to the puzzle has two components, a (necessary) upper bound and a (possible) lower bound (I’ll explain this further below the fold, in order to avoid blatantly spoiling the puzzle here). Only the upper bound is correctly explained in the puzzle (and even then, there are some slight inaccuracies, as will be discussed below). The lower bound, however, is substantially more difficult to establish, in part because the bound is merely possible and not necessary. Ultimately, this is because to demonstrate the upper bound, one merely has to show that a certain statement is logically deducible from an islander’s state of knowledge, which can be done by presenting an appropriate chain of logical deductions. But to demonstrate the lower bound, one needs to show that certain statements are not logically deducible from an islander’s state of knowledge, which is much harder, as one has to rule out all possible chains of deductive reasoning from arriving at this particular conclusion. In fact, to rigorously establish such impossibility statements, one ends up having to leave the “syntactic” side of logic (deductive reasoning), and move instead to the dual “semantic” side of logic (creation of models). As we shall see, semantics requires substantially more mathematical setup than syntax, and the demonstration of the lower bound will therefore be much lengthier than that of the upper bound.
To complicate things further, the particular logic that is used in the blue-eyed islander puzzle is not the same as the logics that are commonly used in mathematics, namely propositional logic and first-order logic. Because the logical reasoning here depends so crucially on the concept of knowledge, one must work instead with an epistemic logic (or more precisely, an epistemic modal logic) which can properly work with, and model, the knowledge of various agents. To add even more complication, the role of time is also important (an islander may not know a certain fact on one day, but learn it on the next day), so one also needs to incorporate the language of temporal logic in order to fully model the situation. This makes both the syntax and semantics of the logic quite intricate; to see this, one only needs to contemplate the task of programming a computer with enough epistemic and temporal deductive reasoning powers that it would be able to solve the islander puzzle (or even a smaller version thereof, say with just three or four islanders) without being deliberately “fed” the solution. (The fact, therefore, that humans can grasp the correct solution without any formal logical training is therefore quite remarkable.)
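(As an aside, here is a rough illustration of what such a computer program might look like for a small version of the puzzle. It is only a sketch of the possible-worlds semantics discussed below, with worlds encoded as tuples of eye colours and an islander's knowledge modelled by indistinguishability of worlds; it is not the formal temporal epistemic logic developed in this post, and it only exhibits the upper-bound dynamics.)

```python
# Possible-worlds simulation of a small blue-eyed islander puzzle (sketch only).
from itertools import product

def departures(world, worlds, n):
    """Islanders who, in `world`, can deduce from the common set `worlds`
    that their own eyes are blue (and hence leave that night)."""
    leaving = []
    for i in range(n):
        # Worlds islander i cannot distinguish from `world`: they agree on
        # everyone else's eye colour, but may differ on i's own.
        accessible = [w for w in worlds
                      if all(w[j] == world[j] for j in range(n) if j != i)]
        if accessible and all(w[i] == 'blue' for w in accessible):
            leaving.append(i)
    return tuple(leaving)

def simulate(n_islanders=4, n_blue=4, max_days=20):
    actual = tuple(['blue'] * n_blue + ['brown'] * (n_islanders - n_blue))
    # The foreigner's announcement makes "at least one blue" common knowledge.
    worlds = [w for w in product(['blue', 'brown'], repeat=n_islanders)
              if 'blue' in w]
    for day in range(1, max_days + 1):
        left_today = departures(actual, worlds, n_islanders)
        print(f"day {day}: islanders leaving = {left_today}")
        if left_today:
            return day
        # Nobody left; this fact is observed by all, pruning the common worlds.
        worlds = [w for w in worlds
                  if departures(w, worlds, n_islanders) == ()]
    return None

simulate(n_islanders=4, n_blue=4)   # the four blue-eyed islanders leave on day 4
```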
As difficult as the syntax of temporal epistemic modal logic is, though, the semantics is more intricate still. For instance, it turns out that in order to completely model the epistemic state of a finite number of agents (such as 1000 islanders), one requires an infinite model, due to the existence of arbitrarily long nested chains of knowledge (e.g. “${A}$ knows that ${B}$ knows that ${C}$ knows that ${D}$ has blue eyes”), which cannot be automatically reduced to shorter chains of knowledge. Furthermore, because each agent has only an incomplete knowledge of the world, one must take into account multiple hypothetical worlds, which differ from the real world but which are considered to be possible worlds by one or more agents, thus introducing modality into the logic. More subtly, one must also consider worlds which each agent knows to be impossible, but are not commonly known to be impossible, so that (for instance) one agent is willing to admit the possibility that another agent considers that world to be possible; it is the consideration of such worlds which is crucial to the resolution of the blue-eyed islander puzzle. And this is even before one adds the temporal aspect (e.g. “On Tuesday, ${A}$ knows that on Monday, ${B}$ knew that by Wednesday, ${C}$ will know that ${D}$ has blue eyes”).
Despite all this fearsome complexity, it is still possible to set up both the syntax and semantics of temporal epistemic modal logic in such a way that one can formulate the blue-eyed islander problem rigorously, and in such a way that one has both an upper and a lower bound in the solution. The purpose of this post is to construct such a setup and to explain the lower bound in particular. The same logic is also useful for analysing another well-known paradox, the unexpected hanging paradox, and I will do so at the end of the post. Note though that there is more than one way to set up epistemic logics, and they are not all equivalent to each other.
(On the other hand, for puzzles such as the islander puzzle in which there are only a finite number of atomic propositions and no free variables, one at least can avoid the need to admit predicate logic, in which one has to discuss quantifiers such as ${\forall}$ and ${\exists}$. A fully formed predicate temporal epistemic modal logic would indeed be of terrifying complexity.)
Our approach here will be a little different from the approach commonly found in the epistemic logic literature, in which one jumps straight to “arbitrary-order epistemic logic” in which arbitrarily long nested chains of knowledge (“${A}$ knows that ${B}$ knows that ${C}$ knows that \ldots”) are allowed. Instead, we will adopt a hierarchical approach, recursively defining for ${k=0,1,2,\ldots}$ a “${k^{th}}$-order epistemic logic” in which knowledge chains of depth up to ${k}$, but no greater, are permitted. The arbitrary-order epistemic logic is then obtained as a limit (a direct limit on the syntactic side, and an inverse limit on the semantic side, which is dual to the syntactic side) of the finite order epistemic logics.
I should warn that this is going to be a rather formal and mathematical post. Readers who simply want to know the answer to the islander puzzle would probably be better off reading the discussion at the puzzle’s own blog post instead.
One of the key difficulties in performing analysis in infinite-dimensional function spaces, as opposed to finite-dimensional vector spaces, is that the Bolzano-Weierstrass theorem no longer holds: a bounded sequence in an infinite-dimensional function space need not have any convergent subsequences (when viewed using the strong topology). To put it another way, the closed unit ball in an infinite-dimensional function space usually fails to be (sequentially) compact.
As compactness is such a useful property to have in analysis, various tools have been developed over the years to try to salvage some sort of substitute for the compactness property in infinite-dimensional spaces. One of these tools is concentration compactness, which was discussed previously on this blog. This can be viewed as a compromise between weak compactness (which is true in very general circumstances, but is often too weak for applications) and strong compactness (which would be very useful in applications, but is usually false), in which one obtains convergence in an intermediate sense that involves a group of symmetries acting on the function space in question.
Concentration compactness is usually stated and proved in the language of standard analysis: epsilons and deltas, limits and supremas, and so forth. In this post, I wanted to note that one could also state and prove the basic foundations of concentration compactness in the framework of nonstandard analysis, in which one now deals with infinitesimals and ultralimits instead of epsilons and ordinary limits. This is a fairly mild change of viewpoint, but I found it to be informative to view this subject from a slightly different perspective. The nonstandard proofs require a fair amount of general machinery to set up, but conversely, once all the machinery is up and running, the proofs become slightly shorter, and can exploit tools from (standard) infinitary analysis, such as orthogonal projections in Hilbert spaces, or the continuous-pure point decomposition of measures. Because of the substantial amount of setup required, nonstandard proofs tend to have significantly more net complexity than their standard counterparts when it comes to basic results (such as those presented in this post), but the gap between the two narrows when the results become more difficult, and for particularly intricate and deep results it can happen that nonstandard proofs end up being simpler overall than their standard analogues, particularly if the nonstandard proof is able to tap the power of some existing mature body of infinitary mathematics (e.g. ergodic theory, measure theory, Hilbert space theory, or topological group theory) which is difficult to directly access in the standard formulation of the argument.
Many structures in mathematics are incomplete in one or more ways. For instance, the field of rationals ${{\bf Q}}$ or the reals ${{\bf R}}$ are algebraically incomplete, because there are some non-trivial algebraic equations (such as ${x^2=2}$ in the case of the rationals, or ${x^2=-1}$ in the case of the reals) which could potentially have solutions (because they do not imply a necessarily false statement, such as ${1=0}$, just using the laws of algebra), but do not actually have solutions in the specified field.
Similarly, the rationals ${{\bf Q}}$, when viewed now as a metric space rather than as a field, are also metrically incomplete, because there exist sequences in the rationals (e.g. the decimal approximations ${3, 3.1, 3.14, 3.141, \ldots}$ of the irrational number ${\pi}$) which could potentially converge to a limit (because they form a Cauchy sequence), but do not actually converge in the specified metric space.
A third type of incompleteness is that of logical incompleteness, which applies now to formal theories rather than to fields or metric spaces. For instance, Zermelo-Frankel-Choice (ZFC) set theory is logically incomplete, because there exist statements (such as the consistency of ZFC) which could potentially be provable by the theory (because it does not lead to a contradiction, or at least so we believe, just from the axioms and deductive rules of the theory), but is not actually provable in this theory.
A fourth type of incompleteness, which is slightly less well known than the above three, is what I will call elementary incompleteness (and which model theorists call the failure of the countable saturation property). It applies to any structure that is describable by a first-order language, such as a field, a metric space, or a universe of sets. For instance, in the language of ordered real fields, the real line ${{\bf R}}$ is elementarily incomplete, because there exists a sequence of statements (such as the statements ${0 < x < 1/n}$ for natural numbers ${n=1,2,\ldots}$) in this language which are potentially simultaneously satisfiable (in the sense that any finite number of these statements can be satisfied by some real number ${x}$) but are not actually simultaneously satisfiable in this theory.
In each of these cases, though, it is possible to start with an incomplete structure and complete it to a much larger structure to eliminate the incompleteness. For instance, starting with an arbitrary field ${k}$, one can take its algebraic completion (or algebraic closure) ${\overline{k}}$; for instance, ${{\bf C} = \overline{{\bf R}}}$ can be viewed as the algebraic completion of ${{\bf R}}$. This field is usually significantly larger than the original field ${k}$, but contains ${k}$ as a subfield, and every element of ${\overline{k}}$ can be described as the solution to some polynomial equation with coefficients in ${k}$. Furthermore, ${\overline{k}}$ is now algebraically complete (or algebraically closed): every polynomial equation in ${\overline{k}}$ which is potentially satisfiable (in the sense that it does not lead to a contradiction such as ${1=0}$ from the laws of algebra), is actually satisfiable in ${\overline{k}}$.
Similarly, starting with an arbitrary metric space ${X}$, one can take its metric completion ${\overline{X}}$; for instance, ${{\bf R} = \overline{{\bf Q}}}$ can be viewed as the metric completion of ${{\bf Q}}$. Again, the completion ${\overline{X}}$ is usually much larger than the original metric space ${X}$, but contains ${X}$ as a subspace, and every element of ${\overline{X}}$ can be described as the limit of some Cauchy sequence in ${X}$. Furthermore, ${\overline{X}}$ is now a complete metric space: every sequence in ${\overline{X}}$ which is potentially convergent (in the sense of being a Cauchy sequence), is now actually convergent in ${\overline{X}}$.
In a similar vein, we have the Gödel completeness theorem, which implies (among other things) that for any consistent first-order theory ${T}$ for a first-order language ${L}$, there exists at least one completion ${\overline{T}}$ of that theory ${T}$, which is a consistent theory in which every sentence in ${L}$ which is potentially true in ${\overline{T}}$ (because it does not lead to a contradiction in ${\overline{T}}$) is actually true in ${\overline{T}}$. Indeed, the completeness theorem provides at least one model (or structure) ${{\mathfrak U}}$ of the consistent theory ${T}$, and then the completion ${\overline{T} = \hbox{Th}({\mathfrak U})}$ can be formed by interpreting every sentence in ${L}$ using ${{\mathfrak U}}$ to determine its truth value. Note, in contrast to the previous two examples, that the completion is usually not unique in any way; a theory ${T}$ can have multiple inequivalent models ${{\mathfrak U}}$, giving rise to distinct completions of the same theory.
Finally, if one starts with an arbitrary structure ${{\mathfrak U}}$, one can form an elementary completion ${{}^* {\mathfrak U}}$ of it, which is a significantly larger structure which contains ${{\mathfrak U}}$ as a substructure, and such that every element of ${{}^* {\mathfrak U}}$ is an elementary limit of a sequence of elements in ${{\mathfrak U}}$ (I will define this term shortly). Furthermore, ${{}^* {\mathfrak U}}$ is elementarily complete; any sequence of statements that are potentially simultaneously satisfiable in ${{}^* {\mathfrak U}}$ (in the sense that any finite number of statements in this collection are simultaneously satisfiable), will actually be simultaneously satisfiable. As we shall see, one can form such an elementary completion by taking an ultrapower of the original structure ${{\mathfrak U}}$. If ${{\mathfrak U}}$ is the standard universe of all the standard objects one considers in mathematics, then its elementary completion ${{}^* {\mathfrak U}}$ is known as the nonstandard universe, and is the setting for nonstandard analysis.
As mentioned earlier, completion tends to make a space much larger and more complicated. If one algebraically completes a finite field, for instance, one necessarily obtains an infinite field as a consequence. If one metrically completes a countable metric space with no isolated points, such as ${{\bf Q}}$, then one necessarily obtains an uncountable metric space (thanks to the Baire category theorem). If one takes a logical completion of a consistent first-order theory that can model true arithmetic, then this completion is no longer describable by a recursively enumerable schema of axioms, thanks to Gödel’s incompleteness theorem. And if one takes the elementary completion of a countable structure, such as the integers ${{\bf Z}}$, then the resulting completion ${{}^* {\bf Z}}$ will necessarily be uncountable.
However, there are substantial benefits to working in the completed structure which can make it well worth the massive increase in size. For instance, by working in the algebraic completion of a field, one gains access to the full power of algebraic geometry. By working in the metric completion of a metric space, one gains access to powerful tools of real analysis, such as the Baire category theorem, the Heine-Borel theorem, and (in the case of Euclidean completions) the Bolzano-Weierstrass theorem. By working in a logically and elementarily completed theory (aka a saturated model) of a first-order theory, one gains access to the branch of model theory known as definability theory, which allows one to analyse the structure of definable sets in much the same way that algebraic geometry allows one to analyse the structure of algebraic sets. Finally, when working in an elementary completion of a structure, one gains a sequential compactness property, analogous to the Bolzano-Weierstrass theorem, which can be interpreted as the foundation for much of nonstandard analysis, as well as providing a unifying framework to describe various correspondence principles between finitary and infinitary mathematics.
In this post, I wish to expand upon these above points with regard to elementary completion, and to present nonstandard analysis as a completion of standard analysis in much the same way as, say, complex algebra is a completion of real algebra, or real metric geometry is a completion of rational metric geometry.
This is the third in a series of posts on the “no self-defeating object” argument in mathematics – a powerful and useful argument based on formalising the observation that any object or structure that is so powerful that it can “defeat” even itself, cannot actually exist. This argument is used to establish many basic impossibility results in mathematics, such as Gödel’s theorem that it is impossible for any sufficiently sophisticated formal axiom system to prove its own consistency, Turing’s theorem that it is impossible for any sufficiently sophisticated programming language to solve its own halting problem, or Cantor’s theorem that it is impossible for any set to enumerate its own power set (and as a corollary, the natural numbers cannot enumerate the real numbers).
As remarked in the previous posts, many people who encounter these theorems can feel uneasy about their conclusions, and their method of proof; this seems to be particularly the case with regard to Cantor’s result that the reals are uncountable. In the previous post in this series, I focused on one particular aspect of the standard proofs which one might be uncomfortable with, namely their counterfactual nature, and observed that many of these proofs can be largely (though not completely) converted to non-counterfactual form. However, this does not fully dispel the sense that the conclusions of these theorems – that the reals are not countable, that the class of all sets is not itself a set, that truth cannot be captured by a predicate, that consistency is not provable, etc. – are highly unintuitive, and even objectionable to “common sense” in some cases.
How can intuition lead one to doubt the conclusions of these mathematical results? I believe that one reason is because these results are sensitive to the amount of vagueness in one’s mental model of mathematics. In the formal mathematical world, where every statement is either absolutely true or absolutely false with no middle ground, and all concepts require a precise definition (or at least a precise axiomatisation) before they can be used, then one can rigorously state and prove Cantor’s theorem, Gödel’s theorem, and all the other results mentioned in the previous posts without difficulty. However, in the vague and fuzzy world of mathematical intuition, in which one’s impression of the truth or falsity of a statement may be influenced by recent mental reference points, definitions are malleable and blurry with no sharp dividing lines between what is and what is not covered by such definitions, and key mathematical objects may be incompletely specified and thus “moving targets” subject to interpretation, then one can argue with some degree of justification that the conclusions of the above results are incorrect; in the vague world, it seems quite plausible that one can always enumerate all the real numbers “that one needs to”, one can always justify the consistency of one’s reasoning system, one can reason using truth as if it were a predicate, and so forth. The impossibility results only kick in once one tries to clear away the fog of vagueness and nail down all the definitions and mathematical statements precisely. (To put it another way, the no-self-defeating object argument relies very much on the disconnected, definite, and absolute nature of the boolean truth space $\{\hbox{true},\hbox{ false}\}$ in the rigorous mathematical world.)
One notable feature of mathematical reasoning is the reliance on counterfactual thinking – taking a hypothesis (or set of hypotheses) which may or may not be true, and following it (or them) to its logical conclusion. For instance, most propositions in mathematics start with a set of hypotheses (e.g. “Let $n$ be a natural number such that …”), which may or may not apply to the particular value of $n$ one may have in mind. Or, if one ever argues by dividing into separate cases (e.g. “Case 1: $n$ is even. … Case 2: $n$ is odd. …”), then for any given $n$, at most one of these cases would actually be applicable, with the other cases being counterfactual alternatives. But the purest example of counterfactual thinking in mathematics comes when one employs a proof by contradiction (or reductio ad absurdum) – one introduces a hypothesis that in fact has no chance of being true at all (e.g. “Suppose for sake of contradiction that $\sqrt{2}$ is equal to the ratio $p/q$ of two natural numbers.”), and proceeds to demonstrate this fact by showing that this hypothesis leads to absurdity.
Experienced mathematicians are so used to this type of counterfactual thinking that it is sometimes difficult for them to realise that this type of thinking is not automatically intuitive for students or non-mathematicians, who can anchor their thinking on the single, “real” world to the extent that they cannot easily consider hypothetical alternatives. This can lead to confused exchanges such as the following:
Lecturer: “Theorem. Let $p$ be a prime number. Then…”
Student: “But how do you know that $p$ is a prime number? Couldn’t it be composite?”
or
Lecturer: “Now we see what the function $f$ does when we give it the input of $x+dx$ instead. …”
Student: “But didn’t you just say that the input was equal to $x$ just a moment ago?”
This is not to say that counterfactual thinking is not encountered at all outside of mathematics. For instance, an obvious source of counterfactual thinking occurs in fictional writing or film, particularly in speculative fiction such as science fiction, fantasy, or alternate history. Here, one can certainly take one or more counterfactual hypotheses (e.g. “what if magic really existed?”) and follow them to see what conclusions would result. The analogy between this and mathematical counterfactual reasoning is not perfect, of course: in fiction, consequences are usually not logically entailed by their premises, but are instead driven by more contingent considerations, such as the need to advance the plot, to entertain or emotionally affect the reader, or to make some moral or ideological point, and these types of narrative elements are almost completely absent in mathematical writing. Nevertheless, the analogy can be somewhat helpful when one is first coming to terms with mathematical reasoning. For instance, the mathematical concept of a proof by contradiction can be viewed as roughly analogous in some ways to such literary concepts as satire, dark humour, or absurdist fiction, in which one takes a premise specifically with the intent to derive absurd consequences from it. And if the proof of (say) a lemma is analogous to a short story, then the statement of that lemma can be viewed as analogous to the moral of that story.
Another source of counterfactual thinking outside of mathematics comes from simulation, when one feeds some initial data or hypotheses (that may or may not correspond to what actually happens in the real world) into a simulated environment (e.g. a piece of computer software, a laboratory experiment, or even just a thought-experiment), and then runs the simulation to see what consequences result from these hypotheses. Here, proof by contradiction is roughly analogous to the “garbage in, garbage out” phenomenon that is familiar to anyone who has worked with computers: if one’s initial inputs to a simulation are not consistent with the hypotheses of that simulation, or with each other, one can obtain bizarrely illogical (and sometimes unintentionally amusing) outputs as a result; and conversely, such outputs can be used to detect and diagnose problems with the data, hypotheses, or implementation of the simulation.
Despite the presence of these non-mathematical analogies, though, proofs by contradiction are still often viewed with suspicion and unease by many students of mathematics. Perhaps the quintessential example of this is the standard proof of Cantor’s theorem that the set ${\bf R}$ of real numbers is uncountable. This is about as short and as elegant a proof by contradiction as one can have without being utterly trivial, and despite this (or perhaps because of this) it seems to offend the reason of many people when they are first exposed to it, to an extent far greater than most other results in mathematics. (The only other two examples I know of that come close to doing this are the fact that the real number $0.999\ldots$ is equal to 1, and the solution to the blue-eyed islanders puzzle.)
Some time ago on this blog, I collected a family of well-known results in mathematics that were proven by contradiction, and specifically by a type of argument that I called the “no self-defeating object” argument; that any object that was so ridiculously overpowered that it could be used to “defeat” its own existence, could not actually exist. Many basic results in mathematics can be phrased in this manner: not only Cantor’s theorem, but Euclid’s theorem on the infinitude of primes, Gödel’s incompleteness theorem, or the conclusion (from Russell’s paradox) that the class of all sets cannot itself be a set.
I presented each of these arguments in the usual “proof by contradiction” manner; I made the counterfactual hypothesis that the impossibly overpowered object existed, and then used this to eventually derive a contradiction. Mathematically, there is nothing wrong with this reasoning, but because the argument spends almost its entire duration inside the bizarre counterfactual universe caused by an impossible hypothesis, readers who are not experienced with counterfactual thinking may view these arguments with unease.
It was pointed out to me, though (originally with regards to Euclid’s theorem, but the same point in fact applies to the other results I presented) that one can pull a large fraction of each argument out of this counterfactual world, so that one can see most of the argument directly, without the need for any intrinsically impossible hypotheses. This is done by converting the “no self-defeating object” argument into a logically equivalent “any object can be defeated” argument, with the former then being viewed as an immediate corollary of the latter. This change is almost trivial to enact (it is often little more than just taking the contrapositive of the original statement), but it does offer a slightly different “non-counterfactual” (or more precisely, “not necessarily counterfactual”) perspective on these arguments which may assist in understanding how they work.
For instance, consider the very first no-self-defeating result presented in the previous post:
Proposition 1 (No largest natural number). There does not exist a natural number $N$ that is larger than all the other natural numbers.
This is formulated in the “no self-defeating object” formulation. But it has a logically equivalent “any object can be defeated” form:
Proposition 1′. Given any natural number $N$, one can find another natural number $N'$ which is larger than $N$.
Proof. Take $N' := N+1$. $\Box$
While Proposition 1 and Proposition 1′ are logically equivalent to each other, note one key difference: Proposition 1′ can be illustrated with examples (e.g. take $N = 100$, so that the proof gives $N'=101$ ), whilst Proposition 1 cannot (since there is, after all, no such thing as a largest natural number). So there is a sense in which Proposition 1′ is more “non-counterfactual” or “constructive” than the “counterfactual” Proposition 1.
In a similar spirit, Euclid’s theorem (which we give using the numbering from the previous post),
Proposition 3. There are infinitely many primes.
can be recast in “all objects can be defeated” form as
Proposition 3′. Let $p_1,\ldots,p_n$ be a collection of primes. Then there exists a prime $q$ which is distinct from any of the primes $p_1,\ldots,p_n$.
Proof. Take $q$ to be any prime factor of $p_1 \ldots p_n + 1$ (for instance, one could take the smallest prime factor, if one wished to be completely concrete). Since $p_1 \ldots p_n + 1$ is not divisible by any of the primes $p_1,\ldots,p_n$, $q$ must be distinct from all of these primes. $\Box$
One could argue that there was a slight use of proof by contradiction in the proof of Proposition 3′ (because one had to briefly entertain and then rule out the counterfactual possibility that $q$ was equal to one of the $p_1,\ldots,p_n$), but the proposition itself is not inherently counterfactual, as it does not make as patently impossible a hypothesis as a finite enumeration of the primes. Incidentally, it can be argued that the proof of Proposition 3′ is closer in spirit to Euclid’s original proof of his theorem, than the proof of Proposition 3 that is usually given today. Again, Proposition 3′ is “constructive”; one can apply it to any finite list of primes, say $2, 3, 5$, and it will actually exhibit a prime not in that list (in this case, $31$). The same cannot be said of Proposition 3, despite the logical equivalence of the two statements.
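The constructive nature of Proposition 3′ is easy to see computationally; the following sketch (illustrative only, using naive trial division) reproduces the example in the text:

```python
# Given a finite list of primes, exhibit a prime not on the list.
import math

def smallest_prime_factor(m):
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m                          # m itself is prime

def new_prime(primes):
    # Any prime factor of p_1 * ... * p_n + 1 is distinct from every p_i.
    return smallest_prime_factor(math.prod(primes) + 1)

print(new_prime([2, 3, 5]))               # 31, as in the text
print(new_prime([2, 3, 5, 7, 11, 13]))    # 59, since 30031 = 59 * 509
```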
[Note: the article below may make more sense if one first reviews the previous blog post on the “no self-defeating object”. For instance, the section and theorem numbering here is deliberately chosen to match that of the preceding post.]
# 3.1 Development of force concept
Page 1 / 5
• Understand the definition of force.
Dynamics is the study of the forces that cause objects and systems to move. To understand this, we need a working definition of force. Our intuitive definition of force —that is, a push or a pull—is a good place to start. We know that a push or pull has both magnitude and direction (therefore, it is a vector quantity) and can vary considerably in each regard. For example, a cannon exerts a strong force on a cannonball that is launched into the air. In contrast, Earth exerts only a tiny downward pull on a flea. Our everyday experiences also give us a good idea of how multiple forces add. If two people push in different directions on a third person, as illustrated in [link] , we might expect the total force to be in the direction shown. Since force is a vector, it adds just like other vectors, as illustrated in [link] (a) for two ice skaters. Forces, like other vectors, are represented by arrows and can be added using the familiar head-to-tail method or by trigonometric methods. These ideas were developed in Two-Dimensional Kinematics .
[link] (b) is our first example of a free-body diagram , which is a technique used to illustrate all the external forces acting on a body. The body is represented by a single isolated point (or free body), and only those forces acting on the body from the outside (external forces) are shown. (These forces are the only ones shown, because only external forces acting on the body affect its motion. We can ignore any internal forces within the body.) Free-body diagrams are very useful in analyzing forces acting on a system and are employed extensively in the study and application of Newton’s laws of motion.
A more quantitative definition of force can be based on some standard force, just as distance is measured in units relative to a standard distance. One possibility is to stretch a spring a certain fixed distance, as illustrated in [link] , and use the force it exerts to pull itself back to its relaxed shape—called a restoring force —as a standard. The magnitude of all other forces can be stated as multiples of this standard unit of force. Many other possibilities exist for standard forces. (One that we will encounter in Magnetism is the magnetic force between two wires carrying electric current.) Some alternative definitions of force will be given later in this chapter.
## Take-home experiment: force standards
To investigate force standards and cause and effect, get two identical rubber bands. Hang one rubber band vertically on a hook. Find a small household item that could be attached to the rubber band using a paper clip, and use this item as a weight to investigate the stretch of the rubber band. Measure the amount of stretch produced in the rubber band with one, two, and four of these (identical) items suspended from the rubber band. What is the relationship between the number of items and the amount of stretch? How large a stretch would you expect for the same number of items suspended from two rubber bands? What happens to the amount of stretch of the rubber band (with the weights attached) if the weights are also pushed to the side with a pencil?
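If you record your measurements, a short script can check whether the stretch grows in proportion to the number of identical weights. (The numbers below are made-up placeholder values; replace them with your own data.)

```python
# Fit a straight line to (number of weights, stretch) data; a good linear fit
# suggests the stretch is proportional to the applied force.
import numpy as np

n_items = np.array([1, 2, 4])              # number of identical items hung
stretch_cm = np.array([0.9, 2.1, 3.9])     # hypothetical measured stretches (cm)

slope, intercept = np.polyfit(n_items, stretch_cm, 1)
print(f"stretch per item: {slope:.2f} cm (intercept {intercept:.2f} cm)")

# Two identical rubber bands side by side share the load, so for the same
# number of items one would expect roughly half the stretch of a single band.
print(f"predicted stretch for 4 items on two bands: {slope * 4 / 2:.2f} cm")
```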
## Section summary
• Dynamics is the study of how forces affect the motion of objects.
• Force is a push or pull that can be defined in terms of various standards, and it is a vector having both magnitude and direction.
• External forces are any outside forces that act on a body. A free-body diagram is a drawing of all external forces acting on a body.
## Conceptual questions
Propose a force standard different from the example of a stretched spring discussed in the text. Your standard must be capable of producing the same force repeatedly.
What properties do forces have that allow us to classify them as vectors?
# How to manipulate figures while a script is running in Python Matplotlib?
To manipulate figures while a script is running in Python, we can take the following steps −
• Set the figure size and adjust the padding between and around the subplots.
• Create a new figure or activate an existing figure using figure() method.
• Get the current axis, ax, and show the current figure.
• Keep the figure responsive while the script runs by calling the plt.pause() method inside the loop, before the final plot.
• Plot the line using plot() method.
• To display the figure, use show() method.
## Example
import numpy as np
from matplotlib import pyplot as plt

plt.rcParams["figure.figsize"] = [7.50, 3.50]
plt.rcParams["figure.autolayout"] = True

fig = plt.figure()
ax = fig.gca()
fig.show()

for i in range(20):
   # Add a new random line and force a redraw while the script is running.
   ax.plot(np.random.randn(10, 1), ls='-')
   fig.canvas.draw()
   plt.pause(0.1)   # process pending GUI events so the window updates

plt.close(fig)

# A final, ordinary (blocking) plot after the animated figure is closed.
plt.plot([1, -2, 3, 5, 3, 1, 0])
plt.show()
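As a side note (not part of the original recipe), a common alternative is to switch on Matplotlib's interactive mode with plt.ion(), so that drawing calls update the window as the script runs. The sketch below assumes a GUI backend is available.

```python
import numpy as np
from matplotlib import pyplot as plt

plt.ion()                       # interactive mode: figures update without blocking
fig, ax = plt.subplots()
for _ in range(20):
   ax.plot(np.random.randn(10, 1), ls='-')
   plt.pause(0.1)               # flush GUI events so the window redraws
plt.ioff()                      # restore blocking behaviour for later plt.show()
plt.close(fig)
```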
# Realistic underground shelters for long time hibernation for mainly humans
Short variant of question:
1. I have a few big natural caves (really big).
2. I want to transform them into shelters capable of keeping ~2.000.000 humans in hibernation, sleep, suspended animation or some other state - so they can sleep for 5000-10.000 years and one day wake up mainly healthy and sane (and being THE SAME person who descended into sleep).
3. This system does not need to be fully automatic - we can wake up some members for maintenance of the sleepers' life support systems and for cave entrance guard duties.
4. Ideally, it can all be done with 20th-21st century technologies (we assume a few biology breakthroughs happened earlier, in the 19th century).
5. Shelter inhabitants are not 100% human; they have more skill in biology than us, and can alter themselves a little before going to sleep.
6. We need at least 80 percent of the sleeping population to survive 5000 years and at least 60% to survive 10000 years.
7. There is a bronze age population evolving on the surface; we need to keep it away from our shelters.
The question is - what breakthroughs in biology do we have to achieve in the 19th century for these shelters to be buildable in the 21st century?
Long variant of question:
In my story there is a human nation. Lets call them Forest Confederation.
They are at the technology level of the early 20th century (but with a few additional breakthroughs in biology) - their analogue of the First World War finished recently and they are victorious - they used quite clumsy, by our standards, aeroplanes loaded with a bacteriological weapon that spread airborne.
Of course the people of the Forest Confederation have all undergone proper vaccination, but not their rivals - the rivals lost 90% of their population, turned into short-lived turbozombies that killed the remaining 9% and spared the luckiest 1 percent. In the long run this 1% can recover at least to a Medieval level.
Unfortunately, the Forest Confederation has no choice but to perform this forced genocide, because their more industrially advanced rival would give them no chance.
Also, killing 99% of the population attracted the attention of a nearly omnipotent Mother Nature Goddess (the Forest Confederation has a strong faith).
Or at least the Forest Confederation interpreted a few dark omens as occurring by the will of their goddess.
So, the will of the goddess is clear - "I will perform major climate change in 200 years from now and all of you will either die or degrade back to stone ages."
The Forest Confederation treated this like real danger and started building underground shelters to survive the climate change. I have posted the requirements to this shelters in the short version of the question above.
I have read this question, but it has 23th century technology level and I want something more simple.
I do not want "You might want to consider transporting digitised copies of the people, and then 3D printing them new bodies at the other end." as suggested by Jnani Jenny Hale (https://worldbuilding.stackexchange.com/a/66404/2763),
I want something closer to Zxyrra's answer (https://worldbuilding.stackexchange.com/a/66452/2763) with a tech level close to present day, maybe with some handwavium.
I want the Forest Confederates to leave their shelter after 10.000 years being the same ones who entered it.
UPD 1: power is not such a drastic problem - we can obtain power from geothermal energy sources even with 20th-century technology.
• Keeping the surface dwellers out is easy. Any thick door will do. Add something that leaks radiation and you get a quick myth of toxic land. Or add a buzz on an irritating frequency etc. – Mormacil Mar 26 '17 at 14:13
• do you want to keep 2.000.000 humans or do you want to keep 2.000.000.000.000 humans? Your question states latter. – BlueWizard Mar 26 '17 at 19:59
• Keeping 2 million people in cryo or other suspended animation for 5000-10,000 years is going to require a lot of power. That seems like a harder problem than just having enough cavern space for them. Do you have a plan for that, or are you asking answers to address it? – Monica Cellio Mar 26 '17 at 20:33
• "It would not be difficult, Mein Führer! Nuclear reactors could, heh...I'm sorry, Mr. President. Nuclear reactors could provide power almost indefinitely. Greenhouses could maintain plant life. Animals could be bred and slaughtered. A quick survey would have to be made of all the available mine sites in the country, but I would guess that dwelling space for several hundred thousands of our people could easily be provided." Dr. Strangelove – MichaelK Mar 28 '17 at 22:26
If they aren't human, just make their reproductive system different: the creature has the capacity for sexual reproduction, but not the necessity.
Sexual reproduction creates a conjunction of the participating parents, a new being. Asexual reproduction essentially clones the existing material of the parent. You could include with this process a change in neural formation in the organism that creates a more definite and closely defined template than that of humans, to the point where their personality is essentially identical.
Or, instead of that, some 'handwavium' neuronal transfer so you're effectively the same person, but not actually the same body. Treat the memories of the species as a virus/prion, sorta.
If you want them to be biologically human in ancestry at least, use some epigenetic effect of the local environment or, like bacta tanks in Star Wars, a use of a natural resource rather than a technological discovery.
Actually making an underground settlement is pretty simple.
People need: Heat regulation & Regular provision of certain chemicals & compounds.
So you need: a heat source, heat exchangers, a water source, and organisms that live underground whose products or byproducts are useful. Upper chambers could be filled with a fungus/bacteria/tree roots that convert CO2, and there's nothing banning inland caves from having subterranean access to water tables or even the sea. Bred or natural organisms that can filter saline - no problem?
Thirdly, surround all your cave entrances with something that's extremely poisonous to whatever this bronze age species is (human also?). Learning how to, and having the will to, remove said poisonous flora could well take 10,000 yrs if the species is comparatively backward.
Bear in mind how long the move from the bronze age to powered flight took humanity, and probably nothing your 19th/20th century civ is going to do is going to last 10,000 yrs and remain effective: your poisonous organism might die from drought or evolve to be less poisonous, or bronze age peoples might find it's particularly useful for poisoning other bronze age peoples and start harvesting it with slaves for use in war or whatever.
If you want to build a structure to last for 10 000 years you really have only one option: pile rocks on top of rocks the way they did in Ancient Egypt; essentially, build a small hill. Nothing else will last that long. Build it in a hot desert -- temperate or subarctic latitudes are a no-no, especially with water around. Don't have anything electronic in it; no way the materials used in electronic circuits will last for 1000 years, let alone 10 000.
In short, the millions of hibernating humans will never get a chance to wake up.
Alternatively, don't build the facility to last 10 000 years. Instead, build a small country, complete with farms, factories, schools, universities, hospitals and an army. Task the country with the maintenance of the cryofacility. A population of a few million may be sufficient; say 10 million. In favorable conditions (good climate, good agricultural land, sufficient rain but not too much, access to the sea and natural resources such as iron and copper and tin and zinc and gold and petroleum), a country of 10 million people does not need more than 50–60 000 km², which is about 250×250 km or 150×150 miles. Small enough to make it plausible that the rest of the world can just ignore it; for example, the island of Madagascar is 500 000 km² and is very inaccessible for people lacking modern technology; in real history it was the last large landmass to be populated by humans, and that did not happen before the 3rd century BCE (or CE, opinions vary) and the human population remained very low until the 6th century CE.
How to make those millions of people to (1) not multiply to excess and (2) stay focused on the mission for 400 generations is another problem. Please note that the entire history, from the very first cuneiform tablets in Sumer to this day, is about 5000 years: your intention is to make a facility and support structures lasting for two times the duration of the entire history.
There's no way that anything more complex than a nail created with early 20th century technology would survive 100 years of continuous work, much less 5,000 or 10,000. Even if you have some people awake to do maintenance, they'll run out of spare parts and replacements quickly, not to mention power sources or food and waste disposal for those who are awake.
Keeping a specimen alive, and unchanged, for longer than its lifespan should be the main issue. That can be solved by:
• Self-replication: Making copies of oneself to survive longer (asexual reproduction, as already suggested)
• Negating aging: By means of cryogenization or suspended animation (if technology allows)
If we can get a source of water and convert whatever power we get (geothermal, for example) into something usable by the species, sustaining life wouldn't be a problem either.
# Browse College of Engineering by Title
• (2014)
Our daily digital life is full of algorithmically selected content such as social media feeds, recommendations and personalized search results. These algorithms have great power to shape users’ experiences yet users ...
application/pdf
PDF (796Kb)
• (2014-01-16)
Facilitating application development for distributed systems has been the focus of much research. Composing an application from existing components can simplify software development and has been adopted in a number of ...
application/pdf
PDF (9Mb)
• (2013-05)
This thesis seeks to develop a compact, low-cost, and reliable solution to characterize the I-V relationship of multiple photovoltaic panels. For this purpose, a custom relay circuit board is developed, which houses 12 ...
application/pdf
PDF (2Mb)
• (1954)
application/pdf
PDF (8Mb)
• (1955)
application/pdf
PDF (9Mb)
• (1988)
This thesis describes work on the thermodynamics and transport properties of photoexcited carriers in bulk and two-dimensional semiconductors. Two major topics are addressed: I. Excitonic Phase Diagram in Si: Evidence for ...
application/pdf
PDF (6Mb)
• (1961)
application/pdf
PDF (4Mb)
• application/pdf
PDF (2Mb)
• (1990)
We have used light scattering techniques to probe the vibrational properties of GaAs/AlAs superlattices and the dynamics of electrons in GaAs/AlxGa1-xAs multiple quantum well structures. In our study of the effects of ...
application/pdf
PDF (5Mb)
• (1990)
We have used light scattering techniques to probe the vibrational properties of GaAs/AlAs superlattices and the dynamics of electrons in GaAs/Al$\sb{\rm x}$Ga$\sb{\rm 1-x}$As multiple quantum well structures.
application/pdf
PDF (5Mb)
• (1984)
The work described in this thesis involves the study of optical and thermal properties of semiconductors and consists of three separate topics. 1. Resonant two-photon absorption by exciton polaritons in CuCl. A ...
application/pdf
PDF (5Mb)
• (2013-10)
We introduce a new paradigm for using black-box learning to synthesize invariants called ICE-learning that learns using examples, counter-examples, and implications, and show that it allows building honest teachers and ...
application/pdf
PDF (318Kb)
• (Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, 1990-09)
application/pdf
PDF (42Mb)
• (2007)
application/vnd.ms-powerpoint
Microsoft PowerPoint (12Mb)
• (2007)
application/vnd.ms-powerpoint
Microsoft PowerPoint (4Mb)
• (1993)
This thesis is concerned with study of behavior of pile to pile-cap connection with respect to lateral loads. In this connection prestressed precast concrete piles and reinforced concrete pile caps were given particular attention.
application/pdf
PDF (8Mb)
• (2014-09-16)
We propose a framework to infer influences between agents in a network using only observed time series. The framework is general---it does not require any particular class of models for the dynamics. It includes graphical ...
application/pdf
PDF (8Mb)
• (1994)
The identity and characteristics of aquatic soluble phosphorus (SUP) present in a small mid-western mesotropic lake were examined with 31-Phosphorus Fourier Transform Nuclear Magnetic Resonance Spectroscopy $\rm (\sp{31}P$ ...
application/pdf
PDF (10Mb)
• (1985)
A model of far infrared (FIR) dielectric response of shallow impurity states in a semiconductor has been developed and is presented for the specific case of the shallow donor transitions in high purity epitaxial GaAs. The ...
application/pdf
PDF (8Mb)
• (2013-05-28)
The main objective of this study was to develop an automated agricultural vehicle guidance system that can be easily transplanted from vehicle to vehicle. The proposed solution to this problem is to first perform a tractor ...
application/pdf
PDF (2Mb) |
## Kim, Taekyun
Author ID: kim.taekyun Published as: Kim, Taekyun; Kim, T.; Kim, Tae Kyun; Kim, TaeKyun; Kim, Tae-Kyun; Kim Taekyun; Kim, T.-K.; Taekyun; Kim, T. Further Spellings: Ким Таекун. External Links: ResearchGate · Math-Net.Ru · dblp · GND
Documents Indexed: 793 Publications since 1992, including 3 Books; 6 Contributions as Editor. Co-Authors: 123 Co-Authors with 658 Joint Publications; 3,050 Co-Co-Authors.
### Co-Authors
116 single-authored 339 Kim, Dae San 100 Rim, Seog-Hoon 72 Dolgiĭ, Dmitriĭ Viktorovich 59 Kwon, Hyuck In 52 Ryoo, Cheon Seoung 48 Jang, Gwan-Woo 44 Kwon, JongKyum 43 Lee, Sang Hun 41 Lee, Byungje 35 Kim, Young Hee Yun 28 Lee, Hyunseok 27 Mansour, Toufik 24 Kim, Hanyoung 23 Jang, Lee-Chae 16 Choi, Jongsung 16 Hwang, Kyung-Won 16 Park, Dal-Won 14 Komatsu, Takao 12 Bayad, Abdelmejid 11 Adiga, Chandrashekar 11 Kim, Minsoo 10 Jang, Leechae 10 Pak, Hong Kyung 10 Seo, Jong Jin 9 Kim, Hansoo 8 Agarwal, Ravi P. 8 Pyo, Sung-Soo 8 Son, Jin-Woo 7 Kim, Hyunmee 7 Simsek, Yilmaz 6 Kim, Dojin 5 Gupta, Vijay 5 Kim, Daeyeoul 5 Kim, Hyekyung 5 Park, Seongho 5 Shannon, Anthony Greville 5 Sotirova, Evdokia N. 4 Atanassov, Krassimir Todorov 4 Jeon, JongDuek 4 Kim, WongJu 4 Langova-Orozova, Daniela A. 4 Ma, Yuankui 4 Melo-Pinto, Pedro 4 Ro, Young Shick 3 Chung, Won-Sang 3 Kim, Wonjoo 3 Mahadeva Naika, Megadahalli Sidda 3 Petrounias, Ilias 3 Seo, Jongjin 3 Somashekara, D. D. 3 Yi, Heungsu 2 Arshad, Muhammad Sarmad 2 Ashchepkov, Leonid Timofeevich 2 Cho, Young-Ki 2 Choi, Junyong 2 Dolgy, Dmitry Victorovich 2 Fathima, Syeda Noor 2 He, Yuan 2 Jang, Yu Seon 2 Krawczak, Maciej 2 Lee, Si-Hyeon 2 Melliani, Said 2 Mubeen, Shahid 2 Park, Kyoung Ho 2 Pyung, In-Soo 2 Rahman, Gauhar 2 Sohn, Gyoyong Y. 2 Sooppy Nisar, Kottakkaran 2 Tuglu, Naim 1 Çekım, Bayram 1 Chandankumar, Sathyanarayana 1 Cho, Jin-Soon 1 Choi, Sangki 1 Chong, Wonyong 1 Dafa-Alla, Anour F. A. 1 Dolgy, Dmitry Y. 1 Glocker, Ben 1 Han, Hyeon-Ho 1 Han, Jung Hun 1 Herscovici, Orli 1 Hristova, Maria 1 Ikeda, Kazuo 1 Jang, Gawn Woo 1 Jang, Gwan-Joo 1 Jang, Gwang-Woo 1 Kacprzyk, Janusz 1 Kim, Byungki 1 Kim, Byungmoon 1 Kim, Dansan 1 Kim, Gwiyeon 1 Kim, Philsu 1 Kim, Sangjin 1 Kim, Seung Dong 1 Kim, Yung-Hwan 1 Kim, Yunjae 1 Kızılateş, Can 1 Koo, Jakyung 1 Kuş, Semra 1 Kwon, Huck-In 1 Lee, Hui Young ...and 26 more Co-Authors
### Serials
128 Advanced Studies in Contemporary Mathematics (Kyungshang) 112 Proceedings of the Jangjeon Mathematical Society 73 Advances in Difference Equations 68 Journal of Computational Analysis and Applications 56 Journal of Inequalities and Applications 43 Russian Journal of Mathematical Physics 27 Abstract and Applied Analysis 22 Journal of Nonlinear Science and Applications 13 Ars Combinatoria 13 Discrete Dynamics in Nature and Society 11 Bulletin of the Korean Mathematical Society 10 Journal of Mathematical Analysis and Applications 10 Applied Mathematics and Computation 10 Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A: Matemáticas. RACSAM 10 Symmetry 9 Filomat 8 Open Mathematics 6 International Journal of Mathematics and Mathematical Sciences 6 JP Journal of Algebra, Number Theory and Applications 5 Journal of Number Theory 5 Integral Transforms and Special Functions 5 Honam Mathematical Journal 5 Far East Journal of Applied Mathematics 4 Indian Journal of Pure & Applied Mathematics 4 Rocky Mountain Journal of Mathematics 4 Journal of the Korean Mathematical Society 4 Applied Mathematics Letters 4 Advanced Studies in Contemporary Mathematics (Pusan) 4 Journal of Nonlinear and Convex Analysis 3 Demonstratio Mathematica 3 Journal of Computational and Applied Mathematics 3 Kyungpook Mathematical Journal 3 Reports of the Faculty of Science and Engineering. Saga University. Mathematics 3 Utilitas Mathematica 3 International Journal of Computer Mathematics 3 Georgian Mathematical Journal 3 Tamsui Oxford Journal of Mathematical Sciences 3 Far East Journal of Mathematical Sciences 3 Journal of Nonlinear Mathematical Physics 3 Iranian Journal of Science and Technology. Transaction A: Science 3 Journal of Concrete and Applicable Mathematics 3 Science China. Mathematics 2 Discrete Mathematics 2 Advances in Applied Mathematics 2 Journal of Physics A: Mathematical and General 2 Neural, Parallel & Scientific Computations 2 Kyushu Journal of Mathematics 2 JIPAM. Journal of Inequalities in Pure & Applied Mathematics 2 International Mathematical Journal 2 International Journal of Mathematical Analysis (Ruse) 2 Journal of Applied Mathematics & Informatics 2 Mathematics 2 AIMS Mathematics 1 Bulletin of the Australian Mathematical Society 1 Computers & Mathematics with Applications 1 Journal of Mathematical Physics 1 Archiv der Mathematik 1 Glasgow Mathematical Journal 1 Mathematica Slovaca 1 Memoirs of the Faculty of Science. Series A. Mathematics 1 Proceedings of the Japan Academy. Series A 1 Quaestiones Mathematicae 1 Doklady Bolgarskoĭ Akademii Nauk 1 Algebra Colloquium 1 International Journal of Computer Vision 1 Journal of Difference Equations and Applications 1 Sbornik: Mathematics 1 Smarandache Notions Journal 1 Differential Equations and Dynamical Systems 1 Mathematical Inequalities & Applications 1 Italian Journal of Pure and Applied Mathematics 1 Nonlinear Functional Analysis and Applications 1 Journal of Analysis and Applications 1 Journal of the Indonesian Mathematical Society 1 Scientia Magna 1 International Journal of Applied Mathematics and Statistics 1 Contributions to Discrete Mathematics 1 Journal of Physics A: Mathematical and Theoretical 1 Journal of Applicable Functional Differential Equations 1 Applicable Analysis and Discrete Mathematics 1 Applied and Computational Mathematics 1 Journal of Mathematical Analysis 1 Axioms 1 Mathematical Sciences 1 Journal of Function Spaces 1 Korean Journal of Mathematics
### Fields
713 Number theory (11-XX) 260 Combinatorics (05-XX) 122 Special functions (33-XX) 30 Ordinary differential equations (34-XX) 26 Harmonic analysis on Euclidean spaces (42-XX) 12 Measure and integration (28-XX) 12 Mathematics education (97-XX) 11 Real functions (26-XX) 11 Computer science (68-XX) 10 General and overarching topics; collections (00-XX) 10 Approximations and expansions (41-XX) 9 Numerical analysis (65-XX) 7 Mathematical logic and foundations (03-XX) 6 Probability theory and stochastic processes (60-XX) 4 Calculus of variations and optimal control; optimization (49-XX) 3 Operator theory (47-XX) 3 Systems theory; control (93-XX) 3 Information and communication theory, circuits (94-XX) 2 Linear and multilinear algebra; matrix theory (15-XX) 2 Functions of a complex variable (30-XX) 2 Difference and functional equations (39-XX) 2 Abstract harmonic analysis (43-XX) 2 Integral transforms, operational calculus (44-XX) 2 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 1 History and biography (01-XX) 1 Field theory and polynomials (12-XX) 1 Algebraic geometry (14-XX) 1 Topological groups, Lie groups (22-XX) 1 Several complex variables and analytic spaces (32-XX) 1 Partial differential equations (35-XX) 1 Sequences, series, summability (40-XX) 1 Functional analysis (46-XX) 1 Statistics (62-XX) 1 Statistical mechanics, structure of matter (82-XX) 1 Operations research, mathematical programming (90-XX) 1 Biology and other natural sciences (92-XX)
### Citations contained in zbMATH Open
579 Publications have been cited 4,983 times in 1,035 Documents.
$$q$$-Volkenborn integration. Zbl 1092.11045
Kim, T.
2002
$$q$$-Bernoulli numbers and polynomials associated with multiple $$q$$-zeta functions and basic $$L$$-series. Zbl 1200.11018
Srivastava, H. M.; Kim, T.; Simsek, Y.
2005
Some identities on the $$q$$-Euler polynomials of higher order and $$q$$-Stirling numbers by the fermionic $$p$$-adic integral on $$\mathbb Z_p$$. Zbl 1192.05011
Kim, T.
2009
On the $$q$$-extension of Euler and Genocchi numbers. Zbl 1112.11012
Kim, Taekyun
2007
$$q$$-Euler numbers and polynomials associated with $$p$$-adic $$q$$-integrals. Zbl 1158.11009
Kim, Taekyun
2007
On a $$q$$-analogue of the $$p$$-adic log gamma functions and related integrals. Zbl 0941.11048
Kim, Taekyun
1999
Symmetry of power sum polynomials and multivariate fermionic $$p$$-adic invariant integral on $$\mathbb Z_p$$. Zbl 1200.11089
Kim, T.
2009
$$q$$-Bernoulli numbers and polynomials associated with Gaussian binomial coefficients. Zbl 1196.11040
Kim, Taekyun
2008
Some identities of extended degenerate $$r$$-central Bell polynomials arising from umbral calculus. Zbl 1439.11074
Kim, Taekyun; Kim, Dae San
2020
Symmetry $$p$$-adic invariant integral on $$\mathbb Z_p$$ for Bernoulli and Euler polynomials. Zbl 1229.11152
Kim, Taekyun
2008
Identities involving values of Bernstein, $$q$$-Bernoulli, and $$q$$-Euler polynomials. Zbl 1256.11013
2011
On the multiple $$q$$-Genocchi and Euler numbers. Zbl 1192.11011
Kim, Taekyun
2008
On the analogs of Euler numbers and polynomials associated with $$p$$-adic $$q$$-integral on $$\mathbb Z_{p}$$ at $$q= - 1$$. Zbl 1120.11010
Kim, Taekyun
2007
Degenerate Laplace transform and degenerate gamma function. Zbl 1377.44001
Kim, T.; Kim, D. S.
2017
Euler numbers and polynomials associated with zeta functions. Zbl 1145.11019
Kim, Taekyun
2008
Power series and asymptotic series associated with the $$q$$-analog of the two-variable $$p$$-adic $$L$$-function. Zbl 1190.11049
Kim, Taekyun
2005
Non-Archimedean $$q$$-integrals associated with multiple Changhee $$q$$-Bernoulli polynomials. Zbl 1072.11090
Kim, T.
2003
Identities involving Frobenius-Euler polynomials arising from non-linear differential equations. Zbl 1262.11024
Kim, Taekyun
2012
A note on degenerate Stirling polynomials of the second kind. Zbl 1377.11027
Kim, Taekyun
2017
On $$p$$-adic interpolating function for $$q$$-Euler numbers and its derivatives. Zbl 1160.11013
Kim, Taekyun
2008
On Euler-Barnes multiple zeta functions. Zbl 1038.11058
Kim, Taekyun
2003
Note on the Euler $$q$$-zeta functions. Zbl 1221.11231
Kim, Taekyun
2009
A note on a new type of degenerate Bernoulli numbers. Zbl 1468.11072
Kim, D. S.; Kim, T.
2020
On the analogs of Bernoulli and Euler numbers, related identities and zeta and $$L$$-functions. Zbl 1223.11145
Kim, Taekyun; Rim, Seog-Hoon; Simsek, Yilmaz; Kim, Daeyeoul
2008
New approach to $$q$$-Euler polynomials of higher order. Zbl 1259.11030
Kim, Taekyun
2010
Analytic continuation of multiple $$q$$-zeta functions and their values at negative integers. Zbl 1115.11068
Kim, Taekyun
2004
The modified $$q$$-Euler numbers and polynomials. Zbl 1172.11006
Kim, Taekyun
2008
Degenerate polyexponential functions and degenerate Bell polynomials. Zbl 07184981
Kim, Taekyun; Kim, Dae San
2020
Some identities for the Bernoulli, the Euler and the Genocchi numbers and polynomials. Zbl 1209.11026
Kim, Taekyun
2010
$$q$$-extension of the Euler formula and trigonometric functions. Zbl 1188.33001
Kim, Taekyun
2007
$$q$$-generalized Euler numbers and polynomials. Zbl 1163.11311
Kim, Taekyun
2006
A note on polyexponential and unipoly functions. Zbl 1470.33002
Kim, D. S.; Kim, T.
2019
A note on $$p$$-adic $$q$$-integral on $$\mathbb Z_p$$ associated with $$q$$-Euler numbers. Zbl 1132.11369
Kim, Taekyun
2007
Some identities of $$q$$-Euler polynomials arising from $$q$$-umbral calculus. Zbl 1372.05020
Kim, Dae San; Kim, Taekyun
2014
Barnes-type multiple $$q$$-zeta functions and $$q$$-Euler polynomials. Zbl 1213.11050
Kim, Taekyun
2010
On $$p$$-adic $$q$$-$$L$$-functions and sums of powers. Zbl 1007.11073
Kim, Taekyun
2002
Multiple $$p$$-adic $$L$$-function. Zbl 1140.11352
Kim, T.
2006
An identity of the symmetry for the Frobenius-Euler polynomials associated with the fermionic $$p$$-adic invariant $$q$$-integrals on $${\mathbf Z}_p$$. Zbl 1238.11022
Kim, Taekyun
2011
On explicit formulas of $$p$$-adic $$q$$-$$L$$-functions. Zbl 0817.11054
Kim, Taekyun
1994
An invariant $$p$$-adic integral associated with Daehee numbers. Zbl 1016.11008
Kim, Taekyun
2002
On the weighted $$q$$-Bernoulli numbers and polynomials. Zbl 1256.11017
Kim, Taekyun
2011
A note on nonlinear Changhee differential equations. Zbl 1344.34027
Kim, T.; Kim, D. S.
2016
A note on $$q$$-Bernstein polynomials. Zbl 1256.11018
Kim, Taekyun
2011
On Ramanujan’s cubic continued fraction and explicit evaluations of theta-functions. Zbl 1088.11009
2004
Sums of products of $$q$$-Bernoulli numbers. Zbl 0986.11010
Kim, Taekyun
2001
$$p$$-adic $$q$$-integrals associated with the Changhee-Barnes’ $$q$$-Bernoulli polynomials. Zbl 1135.11340
Kim, Taekyun
2004
Some identities of Bell polynomials. Zbl 1325.05031
Kim, Dae San; Kim, Taekyun
2015
An invariant $$p$$-adic $$q$$-integral on $$\mathbb Z _p$$. Zbl 1139.11050
Kim, Taekyun
2008
Some identities of Frobenius-Euler polynomials arising from umbral calculus. Zbl 1377.11025
Kim, Dae San; Kim, Taekyun
2012
Identities for the Bernoulli, the Euler and the Genocchi numbers and polynomials. Zbl 1209.11025
2010
A note on $$q$$-Euler and Genocchi numbers. Zbl 0997.11017
Kim, Taekyun; Jang, Lee-Chae; Pak, Hong Kyung
2001
On $$p$$-adic $$q$$-$$l$$-functions and sums of powers. Zbl 1154.11310
Kim, Taekyun
2007
A note on $$q$$-Volkenborn integration. Zbl 1174.11408
Kim, Taekyun
2005
Identities arising from higher-order Daehee polynomial bases. Zbl 1307.05019
Kim, Dae San; Kim, Taekyun
2015
Identities involving Laguerre polynomials derived from umbral calculus. Zbl 1314.33010
Kim, T.
2014
A note on poly-Bernoulli and higher-order poly-Bernoulli polynomials. Zbl 1318.11028
Kim, Dae San; Kim, Taekyun
2015
Degenerate $$r$$-Stirling numbers and $$r$$-Bell polynomials. Zbl 1391.11042
Kim, T.; Yao, Y.; Kim, D. S.; Jang, G.-W.
2018
Extended $$q$$-Euler numbers and polynomials associated with fermionic $$p$$-adic $$q$$-integral on $$\mathbb Z_{p}$$. Zbl 1132.33331
Kim, T.; Choi, J. Y.; Sug, J. Y.
2007
Some identities for Bernoulli numbers of the second kind arising from a non-linear differential equation. Zbl 1328.05017
Kim, Dae San; Kim, Taekyun
2015
Identities involving degenerate Euler numbers and polynomials arising from non-linear differential equations. Zbl 1333.05038
Kim, Taekyun; Kim, Dae San
2016
On the twisted $$q$$-zeta functions and $$q$$-Bernoulli polynomials. Zbl 1046.11009
Kim, Taekyun; Jang, Lee Chae; Rim, Seog-Hoon; Pak, Hong-Kyung
2003
Note on Dedekind type DC sums. Zbl 1203.11024
Kim, Taekyun
2009
A note on the $$q$$-Genocchi numbers and polynomials. Zbl 1188.11005
Kim, Taekyun
2007
Note on the Euler numbers and polynomials. Zbl 1171.11011
Kim, Taekyun
2008
A new approach to $$p$$-adic $$q$$-$$L$$-functions. Zbl 1084.11065
Kim, Taekyun
2006
New Changhee $$q$$-Euler numbers and polynomials associated with $$p$$-adic $$q$$-integrals. Zbl 1159.11049
Kim, Taekyun; Rim, Seog-Hoon
2007
Umbral calculus associated with Frobenius-type Eulerian polynomials. Zbl 1318.11037
Kim, Taekyun; Mansour, Toufik
2014
Bernoulli basis and the product of several Bernoulli polynomials. Zbl 1258.11041
Kim, Dae San; Kim, Taekyun
2012
$$\lambda$$-analogue of Stirling numbers of the first kind. Zbl 1420.11050
Kim, Taekyun
2017
An analogue of Bernoulli numbers and their congruences. Zbl 0802.11007
Kim, Taekyun
1994
Generalized Carlitz’s $$q$$-Bernoulli numbers in the $$p$$-adic number field. Zbl 1050.11020
Kim, Taekyun; Rim, Seog-Hoon
2000
Some identities on the $$q$$-Bernstein polynomials, $$q$$-Stirling numbers and $$q$$-Bernoulli numbers. Zbl 1262.11020
Kim, Taekyun; Choi, Jongsung; Kim, Young-Hee
2010
Higher recurrences for Apostol-Bernoulli-Euler numbers. Zbl 1248.11015
2012
Some new identities of Frobenius-Euler numbers and polynomials. Zbl 1332.11025
Kim, Dae San; Kim, Taekyun
2012
$$q$$-Bernoulli polynomials and $$q$$-umbral calculus. Zbl 1303.05015
Kim, Dae San; Kim, Tae Kyun
2014
Sums of powers of consecutive $$q$$-integers. Zbl 1069.11009
Kim, Taekyun
2004
A note on some formulae for the $$q$$-Euler numbers and polynomials. Zbl 1133.11318
Kim, Taekyun
2006
Some $$p$$-adic integrals on $$\mathbb{Z}_p$$ associated with trigonometric functions. Zbl 1433.11133
Kim, Dae San; Kim, Taekyun
2018
A numerical investigation of the roots of $$q$$-polynomials. Zbl 1090.65054
Ryoo, C. S.; Kim, T.; Agarwal, R. P.
2006
Fourier series of higher-order Bernoulli functions and their applications. Zbl 1371.11054
Kim, Taekyun; Kim, Dae San; Rim, Seog-Hoon; Dolgy, Dmitry V.
2017
Identities for degenerate Bernoulli polynomials and Korobov polynomials of the first kind. Zbl 1415.05018
Kim, Taekyun; Kim, Dae San
2019
Representing sums of finite products of Chebyshev polynomials of the second kind and Fibonacci polynomials in terms of Chebyshev polynomials. Zbl 1414.11022
Kim, Taekyun; Dolgy, Dmitry V.; Sim, Dae San
2018
On partially degenerate Bell numbers and polynomials. Zbl 1391.11043
Kim, Taekyun; Kim, Dae San; Dolgy, Dmitry V.
2017
Sums of products of generalized Bernoulli numbers. Zbl 1208.11030
2004
Some identities of degenerate special polynomials. Zbl 1347.11022
Kim, Dae San; Kim, Taekyun
2015
Some identities of higher order Euler polynomials arising from Euler basis. Zbl 1360.11040
Kim, Dae San; Kim, Taekyun
2013
Identities of symmetry for degenerate Euler polynomials and alternating generalized falling factorial sums. Zbl 1391.11053
Kim, Taekyun; Kim, Dae San
2017
A note on central Bell numbers and polynomials. Zbl 1435.11051
Kim, T.; Kim, D. S.
2020
On the twisted $$q$$-Euler numbers and polynomials associated with basic $$q-l$$-functions. Zbl 1173.11009
Kim, Taekyun; Rim, Seog-Hoon
2007
A note on the alternating sums of powers of consecutive $$q$$-integers. Zbl 1201.11108
Kim, Taekyun; Rim, Seog-Hoon; Simsek, Yilmaz
2006
On the rate of approximation by $$q$$ modified beta operators. Zbl 1211.41004
Gupta, Vijay; Kim, Taekyun
2011
A study on the $$q$$-Euler numbers and the fermionic $$q$$-integral of the product of several type $$q$$-Bernstein polynomials on $$\mathbb Z_p$$. Zbl 1275.11042
Kim, Taekyun
2013
Differential equations for Changhee polynomials and their applications. Zbl 1338.11035
Kim, Taekyun; Dolgy, Dmitry V.; Kim, Dae San; Seo, Jong Jin
2016
Barnes’ type multiple degenerate Bernoulli and Euler polynomials. Zbl 1338.11028
Kim, Tae Kyun
2015
Note on the degenerate gamma function. Zbl 1473.33001
Kim, T.; Kim, D. S.
2020
On $$\lambda$$-Bell polynomials associated with umbral calculus. Zbl 1423.05031
Kim, T.; Kim, D. S.
2017
A note on degenerate Bernoulli numbers and polynomials associated with $$p$$-adic invariant integral on $$\mathbb{Z}_p$$. Zbl 1390.11049
Kim, Dae San; Kim, Taekyun; Dolgy, Dmitry V.
2015
Higher-order Frobenius-Euler and poly-Bernoulli mixed-type polynomials. Zbl 1375.11025
Kim, Dae; Kim, Taekyun
2013
$$q$$-Riemann zeta function. Zbl 1122.11082
Kim, Taekyun
2004
A note on Boole polynomials. Zbl 1369.11020
Kim, Dae San; Kim, Taekyun
2014
On some degenerate differential and degenerate difference operators. Zbl 07500881
Kim, T.; Kim, D. S.
2022
Some identities involving degenerate $$r$$-Stirling numbers. Zbl 07583805
Kim, Taekyun; Kim, Dae San; Kwon, Jongkyum; Lee, Hyunseok
2022
Degenerate Whitney numbers of first and second kind of dowling lattices. Zbl 07584888
Kim, T.; Kim, D. S.
2022
Degenerate Sheffer sequences and $$\lambda$$-Sheffer sequences. Zbl 1471.11110
Kim, Dae San; Kim, Taekyun
2021
Degenerate zero-truncated Poisson random variables. Zbl 1470.60059
Kim, T.; Kim, D. S.
2021
Some identities on truncated polynomials associated with degenerate Bell polynomials. Zbl 1477.11042
Kim, T.; Kim, D. S.
2021
Representations of degenerate poly-Bernoulli polynomials. Zbl 07465037
Kim, Taekyun; Kim, Dae San; Kwon, Jongkyum; Lee, Hyunseok
2021
A note on degenerate derangement polynomials and numbers. Zbl 1484.11079
Kim, Taekyun; Kim, Dae San; Lee, Hyunseok; Jang, Lee-Chae
2021
A note on $$q$$-analogue of Catalan numbers arising from fermionic $$p$$-adic $$q$$-integral on $$\mathbb{Z}_p$$. Zbl 1492.11041
Kim, Taekyun; Kim, Dae San; Kwon, Jongkyum
2021
Generalized degenerate Bernoulli numbers and polynomials arising from Gauss hypergeometric function. Zbl 1494.11019
Kim, Taekyun; Kim, Dae San; Jang, Lee-Chae; Lee, Hyunseok; Kim, Hanyoung
2021
Multi-Lah numbers and multi-Stirling numbers of the first kind. Zbl 1494.11023
Kim, Dae San; Kim, Hye Kyung; Kim, Taekyun; Lee, Hyunseok; Park, Seongho
2021
Reciprocity of poly-Dedekind-type DC sums involving poly-Euler functions. Zbl 1485.11084
Ma, Yuankui; Kim, Dae San; Lee, Hyunseok; Kim, Hanyoung; Kim, Taekyun
2021
On the type 2 poly-Bernoulli polynomials associated with umbral calculus. Zbl 1496.11040
Kim, Taekyun; Kim, Dae San; Dolgy, Dmitry V.; Park, Jin-Woo
2021
Degenerate binomial and Poisson random variables associated with degenerate Lah-Bell polynomials. Zbl 1485.65008
Kim, Taekyun; Kim, Dae San; Dolgy, Dmitry V.; Park, Jin-Woo
2021
Some identities of extended degenerate $$r$$-central Bell polynomials arising from umbral calculus. Zbl 1439.11074
Kim, Taekyun; Kim, Dae San
2020
A note on a new type of degenerate Bernoulli numbers. Zbl 1468.11072
Kim, D. S.; Kim, T.
2020
Degenerate polyexponential functions and degenerate Bell polynomials. Zbl 07184981
Kim, Taekyun; Kim, Dae San
2020
A note on central Bell numbers and polynomials. Zbl 1435.11051
Kim, T.; Kim, D. S.
2020
Note on the degenerate gamma function. Zbl 1473.33001
Kim, T.; Kim, D. S.
2020
Degenerate polyexponential functions and type 2 degenerate poly-Bernoulli numbers and polynomials. Zbl 1482.11031
Kim, Taekyun; Kim, Dae San; Kwon, Jongkyum; Lee, Hyunseok
2020
Some identities for degenerate complete and incomplete $$r$$-bell polynomials. Zbl 07460804
Kwon, Jongkyum; Kim, Taekyun; Kim, Dae San; Kim, Han Young
2020
A note on degenerate $$r$$-Stirling numbers. Zbl 07461000
Kim, Taekyun; Kim, Dae San; Lee, Hyunseok; Park, Jin-Woo
2020
Some results on degenerate Daehee and Bernoulli numbers and polynomials. Zbl 1485.11046
Kim, Taekyun; Kim, Dae San; Kim, Han Young; Kwon, Jongkyum
2020
A note on degenerate Genocchi and poly-Genocchi numbers and polynomials. Zbl 07460885
Kim, Taekyun; Kim, Dae San; Kwon, Jongkyum; Kim, Han Young
2020
On degenerate Daehee polynomials and numbers of the third kind. Zbl 1429.11040
Jang, Lee-Chae; Kim, Wonjoo; Kwon, Hyuck-In; Kim, Taekyun
2020
Degenerate binomial coefficients and degenerate hypergeometric functions. Zbl 1482.33011
Kim, Taekyun; Kim, Dae San; Lee, Hyunseok; Kwon, Jongkyum
2020
On sums of finite products of balancing polynomials. Zbl 1434.11066
Kim, Dae San; Kim, Taekyun
2020
Degenerate Bell polynomials associated with umbral calculus. Zbl 07461001
Kim, Taekyun; Kim, Dae San; Kim, Han-Young; Lee, Hyunseok; Jang, Lee-Chae
2020
Some identities involving derangement polynomials and numbers and moments of gamma random variables. Zbl 1473.05032
Jang, Lee-Chae; Kim, Dae San; Kim, Taekyun; Lee, Hyunseok
2020
Degenerate poly-Bernoulli polynomials arising from degenerate polylogarithm. Zbl 1485.11047
Kim, Taekyun; Kim, Dansan; Kim, Han-Young; Lee, Hyunseok; Jang, Lee-Chae
2020
Some identities of Lah-Bell polynomials. Zbl 1486.11035
Ma, Yuankui; Kim, Dae San; Kim, Taekyun; Kim, Hanyoung; Lee, Hyunseok
2020
A note on negative $$\lambda$$-binomial distribution. Zbl 1486.05011
Ma, Yuankui; Kim, Taekyun
2020
A note on polyexponential and unipoly functions. Zbl 1470.33002
Kim, D. S.; Kim, T.
2019
Identities for degenerate Bernoulli polynomials and Korobov polynomials of the first kind. Zbl 1415.05018
Kim, Taekyun; Kim, Dae San
2019
Extended central factorial polynomials of the second kind. Zbl 1458.11048
Kim, Taekyun; Kim, Dae San; Jang, Gwan-Woo; Kwon, Jongkyum
2019
On central complete and incomplete Bell polynomials. I. Zbl 1416.11039
Kim, Taekyun; Kim, Dae San; Jang, Gwan-Woo
2019
Degenerate central Bell numbers and polynomials. Zbl 1435.11050
Kim, Taekyun; Kim, Dae San
2019
A note on type 2 degenerate Euler and Bernoulli polynomials. Zbl 1423.11052
Jang, Gwan-Woo; Kim, Taekyun
2019
Extended Stirling numbers of the first kind associated with Daehee numbers and polynomials. Zbl 1429.11049
Kim, Taekyun; Kim, Dae San
2019
Degenerate Bernstein polynomials. Zbl 1435.11053
Kim, Taekyun; Kim, Dae San
2019
Some identities involving special numbers and moments of random variables. Zbl 1418.05022
Kim, Taekyun; Yao, Yonghong; Kim, Dae San; Kwon, Hyuck-In
2019
Representing by several orthogonal polynomials for sums of finite products of Chebyshev polynomials of the first kind and Lucas polynomials. Zbl 1459.11063
Kim, Taekyun; Kim, Dae San; Jang, Lee-Chae; Dolgy, D. V.
2019
Degenerate central factorial numbers of the second kind. Zbl 1423.11066
Kim, Taekyun; Kim, Dae San
2019
Some identities on $$r$$-central factorial numbers and $$r$$-central Bell polynomials. Zbl 1459.11066
Kim, Dae San; Dolgy, Dmitry V.; Kim, Dojin; Kim, Taekyun
2019
Differential equations associated with degenerate Changhee numbers of the second kind. Zbl 1490.34019
Kim, Taekyun; Kim, Dae San
2019
A note on type 2 Changhee and Daehee polynomials. Zbl 1435.11044
Kim, Taekyun; Kim, Dae San
2019
Extended degenerate $$r$$-central factorial numbers of the second kind and extended degenerate $$r$$-central Bell polynomials. Zbl 1425.11038
Kim, Dae San; Dolgy, Dmitry V.; Kim, Taekyun; Kim, Dojin
2019
Identities of symmetry for type 2 Bernoulli and Euler polynomials. Zbl 1425.11039
Kim, Dae San; Kim, Han Young; Kim, Dojin; Kim, Taekyun
2019
Representation by several orthogonal polynomials for sums of finite products of Chebyshev polynomials of the first, third and fourth kinds. Zbl 1459.33008
Kim, Taekyun; Kim, Dae San; Dolgy, Dmitry V.; Kim, Dojin
2019
Extended $$r$$-central Bell polynomials with umbral calculus viewpoint. Zbl 1459.11065
Jang, Lee-Chae; Kim, Taekyun; Kim, Dae San; Kim, Han Young
2019
A note on degenerate central factorial polynomials of the second kind. Zbl 1423.11042
Dolgy, D. V.; Jang, Gwan-Woo; Kim, Taekyun
2019
On degenerate central complete Bell polynomials. Zbl 07472220
Kim, Taekyun; Kim, Dae San; Jang, Gwan-Woo
2019
Some applications of degenerate poly-Bernoulli numbers and polynomials. Zbl 1440.05042
Kim, Dae San; Kim, Taekyun
2019
Some identities of fully degenerate Bell polynomials arising from differential equations. Zbl 1423.11057
Pyo, Sung-Soo; Kim, Taekyun
2019
Connection problem for sums of finite products of Legendre and Laguerre polynomials. Zbl 1423.33012
Kim, Taekyun; Hwang, Kyung-Won; Kim, Dae San; Dolgy, Dmitry V.
2019
On $$r$$-central incomplete and complete Bell polynomials. Zbl 1425.11050
Kim, Dae San; Kim, Han Young; Kim, Dojin; Kim, Taekyun
2019
Differential equations associated with Mahler and Sheffer-Mahler polynomials. Zbl 1440.11027
Kim, Taekyun; Kim, Dae San; Kwon, Hyuck-In; Ryoo, Cheon Seoung
2019
Degenerate $$r$$-Stirling numbers and $$r$$-Bell polynomials. Zbl 1391.11042
Kim, T.; Yao, Y.; Kim, D. S.; Jang, G.-W.
2018
Some $$p$$-adic integrals on $$\mathbb{Z}_p$$ associated with trigonometric functions. Zbl 1433.11133
Kim, Dae San; Kim, Taekyun
2018
Representing sums of finite products of Chebyshev polynomials of the second kind and Fibonacci polynomials in terms of Chebyshev polynomials. Zbl 1414.11022
Kim, Taekyun; Dolgy, Dmitry V.; Sim, Dae San
2018
An identity of symmetry for the degenerate Frobenius-Euler polynomials. Zbl 1473.11052
Kim, Taekyun; Kim, Dae San
2018
Sums of finite products of Chebyshev polynomials of the third and fourth kinds. Zbl 1435.11056
Kim, Taekyun; Kim, Dae San; Dolgy, Dmitry V.; Kwon, Jongkyum
2018
Some explicit formulas of degenerate Stirling numbers associated with the degenerate special numbers and polynomials. Zbl 1401.11059
Dolgy, D. V.; Kim, Taekyun
2018
A note on degenerate gamma function and degenerate Stirling number of the second kind. Zbl 1425.33003
Kim, Taekyun; Jang, Gwan-Woo
2018
A note on central factorial numbers. Zbl 1439.11065
Kim, Taekyun
2018
Some identities for Euler and Bernoulli polynomials and their zeros. Zbl 1432.11021
Kim, Taekyun; Ryoo, Cheon Seoung
2018
On central Fubini polynomials associated with central factorial numbers of the second kind. Zbl 1439.11082
Kim, Dae San; Kwon, Jongkyum; Dolgy, Dmitry V.; Kim, Taekyun
2018
Degenerate Daehee polynomials of the second kind. Zbl 1429.11048
Kim, Dae San; Kim, Taekyun; Kwon, Hyuck-In; Jang, Gwan-Woo
2018
A note on degenerate Stirling numbers of the first kind. Zbl 1439.11073
Kim, Dae San; Kim, Taekyun; Jang, Gwang-Woo
2018
Sums of finite products of Legendre and Laguerre polynomials. Zbl 1446.11036
Kim, Taekyun; Kim, Dae San; Dolgy, Dmitry V.; Park, Jin-Woo
2018
Some identities on derangement and degenerate derangement polynomials. Zbl 1414.11036
Kim, Taekyun; Kim, Dae San
2018
Symmetric identities for Fubini polynomials. Zbl 1423.11067
Kim, Taekyun; Kim, Dae San; Jang, Gwan-Woo; Kwon, Jongkyum
2018
Some identities of derangement numbers. Zbl 1403.11021
Kim, Taekyun; Kim, Dae San; Dolgy, Dmitry V.; Kwon, Jongkyum
2018
A note on degenerate Stirling numbers and their applications. Zbl 1401.11060
Kim, Taekyun; Kim, Dae San; Kwon, Hyuck-In
2018
Extended degenerate Stirling numbers of the second kind and extended degenerate Bell polynomials. Zbl 1425.11046
Kim, Taekyun; Kim, Dae San; Jang, Lee-Chae; Kwon, Hyuck-In
2018
A note on some identities of derangement polynomials. Zbl 1383.05025
Kim, Taekyun; Kim, Dae San; Jang, Gwan-Woo; Kwon, Jongkyum
2018
Sums of finite products of Legendre and Laguerre polynomials by Chebyshev polynomials by Chebyshev polynomials. Zbl 1414.11038
Kim, Taekyun; Kim, Dae San; Kwon, Jongkyum; Jang, Gwan-Woo
2018
Two variable higher-order Fubini polynomials. Zbl 1429.11042
Kim, Dae San; Kim, Taekyun; Kwon, Hyuck-In; Park, Jin-Woo
2018
Fourier series for functions related to Chebyshev polynomials of the first kind and Lucas polynomials. Zbl 1425.42002
Kim, Taekyun; Kim, Dae San; Jang, Lee-Chae; Jang, Gwan-Woo
2018
Representation by Chebyshev polynomials for sums of finite products of Chebyshev polynomials. Zbl 1428.33011
Kim, Taekyun; Kim, Dae San; Jang, Lee-Chae; Dolgy, Dmitry V.
2018
Two variable higher-order degenerate Fubini polynomials. Zbl 1405.11029
Kim, Dae San; Jang, Gwan-Woo; Kwon, Hyuck-In; Kim, Taekyun
2018
On the extension of degenerate Stirling polynomials of the second kind and degenerate Bell polynomials. Zbl 1423.11059
Jang, Gwan-Woo; Kim, Taekyun; Kwon, Hyuck-In
2018
Degenerate Cauchy numbers of the third kind. Zbl 1379.05016
Pyo, Sung-Soo; Kim, Taekyun; Rim, Seog-Hoon
2018
Identities between harmonic, hyperharmonic and Daehee numbers. Zbl 07445884
Rim, Seog-Hoon; Kim, Taekyun; Pyo, Sung-Soo
2018
Connection problem for sums of finite products of Chebyshev polynomials of the third and fourth kinds. Zbl 1423.11064
Dolgy, Dmitry Victorovich; Kim, Dae San; Kim, Taekyun; Kwon, Jongkyum
2018
Fourier series of sums of products of higher-order Genocchi functions. Zbl 1423.11044
Kim, Dae San; Kim, Taekyun; Kwon, Hyuck-In; Kwon, Jongkyum
2018
Fourier series of $$r$$-derangement and higher-order derangement functions. Zbl 1413.11054
Kim, Dae San; Kim, Taekyun; Kwon, Hyuck-In
2018
Sums of products of two variable higher-order Fubini functions arising from Fourier series. Zbl 1414.11034
Jang, Gwan-Woo; Dolgy, Dmitry V.; Jang, Lee-Chae; Kim, Dae San; Kim, Taekyun
2018
Fourier series of functions related to two variable higher-order Fubini polynomials. Zbl 1414.11037
Kim, Taekyun; Kim, Dae San; Dolgy, Dmitry V.; Jang, Gwan-Woo; Kwon, Jongkyum
2018
Fourier series of higher-order Euler functions and their applications. Zbl 1430.11030
Kim, Dae San; Kim, Taekyun
2018
Degenerate Daehee numbers of the third kind. Zbl 1402.11038
Pyo, Sung-Soo; Kim, Taekyun; Rim, Seog-Hoon
2018
A higher-order convolution for Bernoulli polynomials of the second kind. Zbl 1426.11013
He, Yuan; Kim, Taekyun
2018
Fourier series of sums of products of $$r$$-derangement functions. Zbl 1438.11070
Kim, Taekyun; Kim, Dae San; Kwon, Huck-In; Jang, Lee-Chae
2018
Inequalities involving extended $$k$$-gamma and $$k$$-beta functions. Zbl 1403.33002
Rahman, G.; Nisar, K. S.; Kim, T.; Mubeen, S.; Arshad, M.
2018
Some identities of Eulerian polynomials arising from nonlinear differential equations. Zbl 1397.05021
Kim, Taekyun; Kim, Dae San
2018
Some identities of Fubini polynomials arising from differential equations. Zbl 1424.93094
Jang, Gwan-Woo; Kim, Taekyun
2018
Differential equations arising from the generating function of degenerate Bernoulli numbers of the second kind. Zbl 1428.05036
Kim, Taekyun; Kim, Dae San; Jang, Gwan-Woo; Dolgy, Dmitry V.
2018
Degenerate Laplace transform and degenerate gamma function. Zbl 1377.44001
Kim, T.; Kim, D. S.
2017
A note on degenerate Stirling polynomials of the second kind. Zbl 1377.11027
Kim, Taekyun
2017
$$\lambda$$-analogue of Stirling numbers of the first kind. Zbl 1420.11050
Kim, Taekyun
2017
...and 479 more Documents
# Electrochemical ferroelectric switching
Bristowe, N. C. and Stengel, Massimiliano and Littlewood, P. B and Pruneda, J. M and Artacho, Emilio (2012) Electrochemical ferroelectric switching. Physical Review B (Condensed Matter and Materials Physics), 85 (2).
Official URL: http://link.aps.org/doi/10.1103/PhysRevB.85.024106
## Abstract
Against expectations, robust switchable ferroelectricity has been recently observed in ultrathin (1 nm) ferroelectric films exposed to air [V. Garcia et al., Nature 460, 81 (2009)]. Based on first-principles calculations, we show that the system does not polarize unless charged defects or adsorbates form at the surface. We propose electrochemical processes as the most likely origin of this charge. The ferroelectric polarization of the film adapts to the bound charge generated on its surface by redox processes when poling the film. This, in turn, alters the band alignment at the bottom electrode interface, explaining the observed tunneling electroresistance. Our conclusions are supported by energetics calculated for varied electrochemical scenarios.
Item Type: Article
Subjects: 2011AREP; IA63; 03 - Mineral Sciences
Divisions: 03 - Mineral Sciences
Journal or Publication Title: Physical Review B (Condensed Matter and Materials Physics)
Volume: 85
Depositing User: Sarah Humbert
Date Deposited: 07 Oct 2011 17:17
Last Modified: 23 Jul 2013 10:02
URI: http://eprints.esc.cam.ac.uk/id/eprint/2190
## Question
Consider R³ as a manifold with the flat Euclidean metric, and coordinates {x, y, z}. Introduce spherical polar coordinates {r, θ, ϕ} related to {x, y, z} by
x = r sin θ cos ϕ (1)
y = r sin θ sin ϕ (2)
z = r cos θ (3)
so that the metric takes the form
ds² = dr² + r² dθ² + r² sin²θ dϕ² (4)
(a) A particle moves along a parameterised curve given by
x(λ) = cos λ , y(λ) = sin λ , z(λ) = λ (5)
Express the path of the curve in the {r, θ, ϕ} system.
(b) Calculate the components of the tangent vector to the curve in both the Cartesian and spherical polar coordinates.
It is fairly obvious that the curve is a helix. It has unit radius, the distance between each rung is 2π and so it goes up at an angle of 45°. It is shown below, compressed in the Z direction.
The helix, with some vector components
Calculating the components of the tangent vector d/dλ in polar coordinates was non-trivial for me. It involved the chain, the quotient and the cos⁻¹ rules of differentiation. I checked them by using the tensor transformation law from Cartesian to spherical polar components: the general law expresses the primed components in terms of the unprimed ones through the partial derivatives ∂x^μ'/∂x^μ.
In this case the b…z and α...ω indices disappear, so it became a bit simpler, but there were still almost 50 equations to step through and I had to add the tan⁻¹ rule of differentiation to my armoury. I discovered that this method gave a different result for the dθ/dλ component of my tangent vector. The possibility of errors had become enormous. Symbolab, a wonderful differential equation calculator, came to my rescue and I used it to check everything. It discovered a couple of minor errors in my 50 equation epic, but eventually pinned down the error to the first calculation of dθ/dλ from the equation of the curve in the {r, θ, ϕ} system. It just goes to show how important it is to have an 'independent' check.
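As another independent check (not part of the original 50-equation calculation), a short SymPy sketch can reproduce both sets of tangent-vector components symbolically; the variable names here are of course my own:

```python
# Minimal SymPy sketch: the helix in Cartesian and spherical polar form,
# and its tangent vector d/dlambda in both coordinate systems.
import sympy as sp

lam = sp.symbols('lambda', real=True)

# Cartesian form of the curve, equation (5)
x, y, z = sp.cos(lam), sp.sin(lam), lam

# Spherical polar form along the curve
r = sp.sqrt(x**2 + y**2 + z**2)   # = sqrt(1 + lambda^2)
theta = sp.acos(z / r)            # polar angle
phi = sp.atan2(y, x)              # azimuthal angle

cartesian_tangent = [sp.diff(c, lam) for c in (x, y, z)]
spherical_tangent = [sp.simplify(sp.diff(c, lam)) for c in (r, theta, phi)]

print(cartesian_tangent)   # [-sin(lambda), cos(lambda), 1]
print(spherical_tangent)   # [lambda/sqrt(lambda**2 + 1), -1/(lambda**2 + 1), 1]
```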
## More Gains
1) I have also now realised why the metric is sometimes written in the form like
ds² = dr² + r² dθ² + r² sin²θ dϕ²
and sometimes as a matrix. And how to get from one to the other.
2) In order to draw the helix, I developed a spreadsheet to draw the 3-D curve. I am inspired to do something more general to draw any 3-D curves from a specified perspective; a quick matplotlib equivalent is sketched below.
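For reference, the same curve can also be plotted with a few lines of matplotlib (a minimal sketch, independent of the spreadsheet):

```python
# Minimal matplotlib sketch of the helix x = cos(lambda), y = sin(lambda), z = lambda
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3d projection)

lam = np.linspace(0, 6 * np.pi, 500)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot(np.cos(lam), np.sin(lam), lam)
ax.set_xlabel('x'); ax.set_ylabel('y'); ax.set_zlabel('z')
plt.show()
```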
The full 11 page answer is at Ex 2.06 Helix.pdf. It includes a reference to the spreadsheet and from page 6 is mostly Differentiation in baby steps and other detailed calculations. |
** This is a modified version of a previous R benchmark that was done back in 2011. Click this link to see the original post.
After using R for quite some time, you get to know a little bit about its strengths and weaknesses. It structures data very well and has a huge library of statistical and data processing packages, which makes analysis a breeze. What it lacks is the ability to deal with really large data, and processing SPEED. We’re going to focus on the speed issue, especially since there are some easy ways to improve this.
I’m sure most people have heard of Revolution Analytics. They offer a free, enhanced version of R, called Revolution R Open (RRO), which allows multi-core processing (standard R is single-core) and is very easy to setup. There’s definitely some debate about whether or not RRO really does improve upon R. As you’ll see from the data below, in some cases it’s not very clear that it does and in some cases it is. We’re also going to look at the difference between running R/RRO locally on Mac OSX and on the cloud through Ubuntu.
My notebook setup:
• Mac OS X Yosemite 10.10.2
• 7 GHz Intel Core i5 (dual-core)
• 4 GB ram
Cloud server setup:
• Ubuntu 14.04
• Dual-core CPU
• 4 GB ram
For both the notebook and the cloud setup, I ran benchmarks for both R and RRO, so 4 different variations in total. The benchmark code that I used is a modification of the benchmark code provided in the link at the top. I added a section for matrix operations since that is one of the categories in which RRO really shines according to their website. See the code below.
# clear workspace
rm(list=ls())
# print system information
R.version
Sys.info()
# install non-core packages
install.packages(c('party', 'rbenchmark', 'earth'))
require(rbenchmark)
require(party)
require(earth)
require(rpart)
require(compiler)
# function from http://dirk.eddelbuettel.com/blog/2011/04/12/
k <- function(n, x=1) for (i in 1:n) x=1/{1+x}
# create random matrix
mat1 <- matrix(data = rexp(200, rate = 10), nrow = 3000, ncol = 3000)
mat2 <- matrix(data = rexp(200, rate = 10), nrow = 3000, ncol = 3000)
# prepare data set from UCI Repository
# see: http://archive.ics.uci.edu/ml/datasets/Credit+Approval
url="http://archive.ics.uci.edu/ml/machine-learning-databases/credit-screening/crx.data"
# run benchmark
results <- benchmark(ct=ctree(V16 ~ .,data=mydata),
e=earth(V16 ~ ., data=mydata),
rp=rpart(V16 ~ ., data=mydata),
mm=mat1%*%mat2,
k(1e6, x=1),
replications=20
)
results
Benchmarks – Table
|                  | ctree (s) | earth (s) | mm (s) | k (s) | rpart (s) |
|------------------|-----------|-----------|--------|-------|-----------|
| R_OSX_3.1.3      | 284       | 155       | 614    | 8     | 0.51      |
| RRO_OSX_3.1.2    | 297       | 147       | 39     | 10    | 0.47      |
| R_Ubuntu_3.0.2   | 182       | 127       | 810    | 15    | 0.45      |
| RRO_Ubuntu_3.1.2 | 130       | 119       | 28     | 8     | 0.42      |
Conclusion
For the most part, RRO performs significantly faster than standard R both locally and on the server. RRO performs really well on the matrix operations, as seen in the mm column (over 90% faster than standard R); this is probably due to the addition of the Intel Math Kernel Library. Standard R actually did better than RRO on the local machine for the ctree and k functions, which is definitely unexpected after all of the lofty claims made by Revolution Analytics. The difference isn't huge, so maybe we can attribute this to the randomness of the small sample. Both standard R and RRO perform much better on the Ubuntu server. This is most likely because the operating system on the server doesn't have all the extra bloatware that a PC operating system has. RRO performs better than standard R in all the tests I ran on the server, making it the clear winner on the server side.
Overall, it looks like cloud computing with a little help from RRO is definitely the way to go. Unfortunately this setup is definitely not the easiest for the average person to achieve. Good thing I’m working on a little side-project to help solve this issue:), …more to come about that in a future post. |
# Artificial Assistant
Ever wanted an artificial assistant that you control? Well now you can! With all the APIs floating around, it is extremely simple to build a Siri- or Alexa-like device that you can completely program and control in Python. This also avoids the somewhat uncomfortable idea that a company has a listening device in your room.
Update 3: The Python Telegram Bot API has been fixed and the dropout no longer exists.
Update 2: It appears that this process is growing on its own. I am on my fifth day of working on my digital assistant and there is always something new and exciting you can add. Visualisation, data, music, weather.
Update 1: Pressing a button wasn't as satisfying, so I decided that talking to the bot would make for more entertainment. I tried out a few platforms and have currently decided that Telegram is the best platform for me.
However, using the Python Telegram Bot API has been challenging because it cuts the connection every thirty minutes and stops responding. Unfortunately, while I have raised it as an issue, it is something that they have not fixed yet, and I have been unable to find another messaging app that provides voice functionality.
Following the success of my morning light prototype, I decided I wanted something more. I wanted to see if I could have things at a button's touch away.
##### Item List
1. Raspberry Pi 3 or computer as server
2. IoT Button (Amazon Dash or ESP8266)
3. ESP8266 (WeMos Mini)
4. 9g SG90 Micro Servo
5. Raspberry Pi Camera V2 or USB webcam
##### Building the Server
I had a Raspberry Pi 3 from Arrow's giveaway, so I wanted to use it to build something cool.
So why not use it as an access point? That sounded pretty cool, and because the Pi has onboard wifi, you don't need anything else other than the Pi itself to start building. I followed this tutorial from Phil Martin to set up my Pi as an access point.
##### The IoT button
I also had a couple of Amazon dash buttons that I had picked up for a dollar each when they went on sale with the intent of using them as IoT buttons. I poked around online and found this excellent tutorial by David Sikes on how to use Python 3 on a listening device (my Pi!) to sniff for Amazon packets.
I stripped out most of the code because I didn't need it:
import logging
logging.getLogger("scapy.runtime").setLevel(logging.ERROR)
from scapy.all import *  # for sniffing for the ARP packets

print('Init done.')

def arp_display(pkt):
    if pkt[ARP].op == 1:  # who-has (request)
        if pkt[ARP].psrc == '0.0.0.0':
            if pkt[ARP].hwsrc == '74:75:48:a5:33:be':
                print("Pushed Black Button 1")
            elif pkt[ARP].hwsrc == '10:ae:60:64:04:95':
                print("Pushed Black Button 2")
            else:
                print("ARP Probe from unknown device: " + pkt[ARP].hwsrc)

print(sniff(prn=arp_display, filter="arp", store=0))
I replaced the hardware address of the buttons with my own. I tested with my own button to check that it works before moving on.
##### Wake Computer from Shutdown
So now we've got a device that can react once we press a button. I decided to have it send a Wake-on-LAN packet to my computer, which is convenient because the power button is hard to reach. I tried to get scapy to send a magic packet, but I found a far more convenient solution in Mike Pennington's code on github. I moved the wol.py file into the same folder and modified the script as follows:
import wol
import logging
logging.getLogger("scapy.runtime").setLevel(logging.ERROR)
from scapy.all import *  # for sniffing for the ARP packets

obj = wol.WOL("00:30:1b:b0:f7:7d", intf="eth0")
print('Init done.')

def arp_display(pkt):
    if pkt[ARP].op == 1:  # who-has (request)
        if pkt[ARP].psrc == '0.0.0.0':
            if pkt[ARP].hwsrc == '74:75:48:a5:33:be':
                obj.raw()
                obj.udp4()
                obj.udp6()
            else:
                print("ARP Probe from unknown device: " + pkt[ARP].hwsrc)

print(sniff(prn=arp_display, filter="arp", store=0))
With luck your PC should turn on at a touch of a button that is not on the PC.
But this has the potential to go so much further. My love of the ESP8266 module has been well documented, and I thought it was perfectly suitable for this project. Switching on the lights in the morning can be a cold task, so while you struggle against your sheets, why not have the button turn the lights on for you and boot your computer and make your coffee...
##### Turning on the lights
But since I didn't want to mess with the high-voltage supply, we'll have to use a stand-in for our finger: something to flick the switch physically. The best way to do it is with a servo motor.
There are several implementations on how to do this. If you have a rocker switch, it is terribly simple: Stick it against the wall and adjust the angle until everything is great.
Figure 1. Rocker Light Switch
The instructable can be found here: http://www.instructables.com/id/Automatic-Light-Switch-2/
But if you've got a pointy-type switch, then it is more of a problem. It is not easy for a servo to flick the end of the switch. You could pull it, as some people have done:
Or you could build a custom designed case as bjkayani has done here:
Figure 2. Wifi Enabled Switch
While I like the design, I wasn't willing to incorporate the entire thing because there was stuff that I didn't really need.
However, I was pretty excited when I took a closer look at his design:
Figure 3. Inside of Switchifi
It uses a piece of 3D printed plastic to push and pull the switch. The switch and the servo fit into their own slots and there isn't a need to deal with slack lines. The required torque of the servo is also smaller in this case because the application of force can be closer to the servo axis as compared to using the arm of the servo to flick the switch.
The problem is that this design uses guide rails. That would not be easy to just strip out and use in another project.
Fortunately, after some experimentation, I discovered that I didn't need guide rails. I used a piece of cardboard and cut it to the shape I wanted. I made the holes small so that the servo's arm and the switch would fit snugly, and it was a small matter to calibrate the angles for comfortable on/off rest positions so that the servo doesn't click.
Figure 4. Prototype solution 1
##### Putting it together
Now to hook it up to our wonderful Pi. I didn't want to use an external service like the IFTTT channel because I didn't see the need to. All that is required is a single byte to be sent to turn the switch on or off.
So this tutorial by James Lewis really helped me put the final part of this project together. A quick note: on the Pi side the MQTT host was localhost, because I integrated MQTT directly into the Python script, while on the Arduino side I used the IP address of the Pi server as the host. I added the Servo example into the ESP8266 Arduino code and everything came together.
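For reference, the Pi-side publish boils down to a couple of lines. This is only a sketch of my setup: the topic name ("room/lights"), the "1" payload and the localhost broker are assumptions about my configuration, not something taken from the tutorial.

```python
# Minimal sketch: publish one byte to an MQTT topic that the ESP8266 listens on.
# "room/lights" and the payload convention ("1" = toggle) are assumed names.
import paho.mqtt.publish as publish

def toggle_lights():
    # The broker (Mosquitto) is assumed to run on the Pi itself.
    publish.single("room/lights", payload="1", hostname="localhost")
```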
### Building a Bot
Building a bot to control the room would be more entertaining, even if it wasn't efficient from a practical point of view. I use Telegram because it is very easy to build a bot on Telegram with python.
I used the Python-Telegram-Bot library as my platform. First you'll have to request for a bot from Telegram's BotFather. Using the API key, you can then start creating your own bot.
I used Python 2.7 for the scripting because it simply works. Many code examples and libraries (especially some of the older ones) are written for Python 2.7 and it is a headache to translate them to Python 3. However, Python 3 code works in Python 2.7 with only a few tweaks. Hence, it is simply easier to use Python 2.7 at this point. Zed Shaw has made a case against using Python 3 which echoes a lot of my issues before I just ran python instead of python3 to get things to work.
##### Lights
The most basic interaction that I wanted the bot to have is to control the lights. The implementation is almost identical to the button. Post a 1 to our topic from the script upon receiving a message like "Lights". This will trigger the ESP8266 which is listening on the same topic to flick the switch.
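A minimal sketch of what that handler looks like, assuming the older (pre-v12) python-telegram-bot callback style I was using and the same hypothetical "room/lights" topic as in the earlier sketch; TOKEN stands in for the key from BotFather.

```python
# Sketch only: a /lights command that publishes to MQTT and confirms to the user.
from telegram.ext import Updater, CommandHandler
import paho.mqtt.publish as publish

def lights(bot, update):
    publish.single("room/lights", payload="1", hostname="localhost")
    update.message.reply_text("Toggling the lights.")

updater = Updater("TOKEN")  # token obtained from BotFather
updater.dispatcher.add_handler(CommandHandler("lights", lights))
updater.start_polling()
```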
I made a few changes to the code so that the ESP8266 posts the position of the switch after it completes its flick. This goes to another topic that retains the message, so that whenever we subscribe to it we know the last position of the switch and can make decisions accordingly.
##### Weather
I always check the weather before I go out, so I thought it would be useful for my bot to be able to report the weather when I ask it to.
I used the Open Weather Map API with the pyowm library to allow my Bot to report the weather. The benefit of using the pyowm library is that I save time having to retrieve and parse the JSON code. Plus it makes the code neater as well.
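Roughly, the weather command reduces to something like the sketch below, assuming the pyowm 2.x API that was current at the time; the API key and city string are placeholders.

```python
# Sketch only: fetch the current weather via pyowm and format a short reply.
import pyowm

owm = pyowm.OWM("YOUR_OWM_API_KEY")  # placeholder key

def weather_report(city="Singapore,SG"):  # placeholder city
    w = owm.weather_at_place(city).get_weather()
    temps = w.get_temperature("celsius")
    return "%s, %.0f C" % (w.get_detailed_status(), temps["temp"])
```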
##### Music
Instead of using pianobar, I decided to store my own music locally and use vlc's python interface to play music.
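A minimal sketch with the python-vlc bindings; the file path is just a placeholder for wherever the music lives on the Pi.

```python
# Sketch only: play a local file; pause/stop can be wired to bot commands the same way.
import vlc

player = vlc.MediaPlayer("/home/pi/music/track01.mp3")  # placeholder path
player.play()
# player.pause() or player.stop() on the corresponding bot commands
```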
I previously used the pianobar application to play music from Pandora and I used Adafruit's instructions to build the application.
I then used the python wrapper to talk to pianobar through my python script. By issuing commands to my Bot, I can also control the music.
One caveat is that pianobar was kind of spotty for me: it would play three or four tracks and then stop. I'm not sure what is going on, although I have searched extensively for a solution.
##### Web-facing Dashboard
A dashboard is a pretty useful way to see the state of the system without needing to query it. I wanted something like the Magic Mirror project to display all the information I wanted:
But due to complexity issues (I did not know how to integrate Electron painlessly), I settled on using Flask for my web app. It took a while for me to get it working, mainly because I didn't know that I had to use two separate threads to keep the web server and the bot running, but once it is up, adding items is as simple as editing the HTML.
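For what it's worth, the two-thread arrangement looks roughly like this sketch; the route content is a stand-in, and the bot's polling loop is assumed to be started elsewhere.

```python
# Sketch only: run the Flask dashboard in a background thread so the bot can keep polling.
import threading
from flask import Flask

app = Flask(__name__)

@app.route("/")
def dashboard():
    return "<h1>Room status</h1>"  # items are added by editing this HTML

def run_dashboard():
    # use_reloader=False so Flask does not spawn a second process inside the thread
    app.run(host="0.0.0.0", port=5000, use_reloader=False)

threading.Thread(target=run_dashboard).start()
# ...start the Telegram bot's polling loop in the main thread here...
```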
I have also not bought a glass front for my project because it is extremely expensive, and I'm not quite done with adding what I want to this project yet.
##### Camera
Having a camera to verify the status of the room (and perform facial recognition) is pretty cool. So I added a Raspberry Pi camera (you can also use a USB camera) and simply used the python library to take pictures. This was the simplest part of the project. I merely had to write three additional lines to allow picture-taking capabilities.
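Those three lines are essentially the following, assuming the picamera library and the official camera module; the output path is a placeholder.

```python
# Sketch only: capture a still that the bot can send back as a photo.
from picamera import PiCamera

camera = PiCamera()
camera.capture("/home/pi/room.jpg")  # placeholder path
```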
### Conclusion
So this button can be placed anywhere, and it does two things: it turns on the computer and switches the light on or off depending on the last state. The total number of tutorials consulted was 5. Sometimes there aren't any solutions to the problems you are facing, and you learn something new. But sometimes when there are, you can build something really fun in a day and you learn something new too.
I can also talk to my Bot and ask it about the weather, the condition of the room, the songs it is playing, and so on. The best thing about a bot is that you only really need to work on the messaging side; everything else can be carried over to another platform if necessary. I tried Slack and Telegram, and both worked once I figured out how each platform's messaging API works.
I didn't post my code because I'm still working on it and it is really straightforward to just grab the code from the places I referenced and make a few tweaks (as in copy-paste, delete unused lines).
### Improvements
• Bus times
• Weather display
• Facial Recognition
• Independent turn-on switch from computer.
• PIR sensor to take a picture when someone enters the room. |
• Written By Gurudath
• Last Modified 19-07-2022
# Cuboid: Definition, Examples, Formula and Properties
Cuboid Definition: Cuboid shaped objects surround us in our day-to-day life. From television sets, books, carton boxes to bricks, mattresses, and shoeboxes, cuboid objects are all around us. In Geometry, a cuboid is a three-dimensional figure with six rectangular faces, twelve edges, and eight vertices. The cuboid shape comes with a closed three-dimensional figure surrounded by rectangular faces, which are plane regions of rectangles.
This article discusses cuboids, including their definitions and components, like their surface area and volume. Also, we will learn about the net of a cuboid. Read on to find out more.
With Embibe, students can get free CBSE mock tests for all topics. MCQs on Embibe are based on revised CBSE class books, paper patterns, and syllabus for the year 2022. This mock test series has a comprehensive selection of relevant questions and solutions. Candidates in the CBSE board can take these free mock tests to practice and find areas where they need to improve for their board exams.
## What is Cuboid?
Definition: A cuboid is a solid three-dimensional object which has six rectangular faces, eight vertices, and twelve edges. Since the cuboid has six faces, it can also be called a hexahedron.
### Examples of Cuboid
The textbooks we read, the lunch boxes we bring to school, the mattresses we sleep on and the bricks we use to build a house, and many other things, are well-known examples of cuboids in our environment.
### How Does a Cuboid Look?
As discussed earlier, the cuboid has $${\rm{6}}$$ faces, $${\rm{8}}$$ vertices, and $${\rm{12}}$$ edges.
In solid geometry, any $${\rm{3}}$$-dimensional figure has length, breadth, and height.
Let’s first learn about the basic parameters like face, vertex, and edge which play an important role in $${\rm{3}}$$-dimensional objects.
Faces: Any of the individual flat surfaces of a solid object is known as the face of that object.
Vertex: In a $${\rm{3}}$$-dimensional object, a point where two or more lines meet is known as a vertex. A corner may also be referred to as a vertex.
Edge: An edge is a line segment joining two vertices.
So, now let us try to imagine a cuboid with the above-said parameters. The shape of the cuboid appears to be as follows:
In the figure below, $${\rm{l,}}\,{\rm{b}}$$ and $${\rm{h}}$$ stand for length, breadth or width, and height respectively of the cuboid.
Solid geometry is associated with $${\rm{3 – D}}$$ shapes and figures with surface areas and volumes.
Now, let us learn about the surface area and volume of a cuboid.
## Cuboid Formula
Let us look at some of the formulas for cuboid:
### Surface Area of Cuboid
We know that area can be defined as the space enclosed by a flat shape or the surface of an object. The area of a figure is the number of unit squares that cover the surface of a closed figure.
Similarly, the total surface area of a solid is the sum of the areas of the total number of faces or surfaces of the solid.
The lateral surface area of a solid is the surface area of the solid excluding the top and base.
Now, let us find out the total surface area and a lateral surface area of a cuboid.
We have said that the cuboid has six rectangular faces.
In the above figure, let $${\rm{l}}$$ be the length, $${\rm{b}}$$ be the breadth, and $${\rm{h}}$$ be the height of a cuboid.
Therefore, $$AD = BC = GF = HE = l$$
$$AB = CD = GH = FE = b$$
$$CF = DE = BG = AH = h$$
Now, the lateral surface area of the cuboid $$=$$ Area of rectangular face $$ABGH +$$ Area of rectangular face $$DCEF +$$ Area of rectangular face $$ADEH +$$ Area of rectangular face $$BCGF$$
$$= \left( {AB \times BG} \right) + \left( {DC \times CF} \right) + \left( {AD \times DE} \right) + \left( {BC \times CF} \right)$$
$$= \left( {b \times h} \right) + \left( {b \times h} \right) + \left( {l \times h} \right) + \left( {l \times h} \right)$$
$$= 2\left( {b \times h} \right) + 2\left( {l \times h} \right)$$
$$= 2\,h\left( {l + b} \right)$$
Therefore, lateral surface area of a cuboid $$= 2h\left( {l + b} \right)\,{\rm{sq}}{\rm{.units}}$$
Now, the total surface area of a cuboid is the sum of the areas of all the faces or surfaces of the cuboid. The faces include the top and bottom (bases) and the remaining lateral surfaces.
Area of face $$ABCD =$$ Area of face $$EFGH = \left( {l \times b} \right)$$
Area of face $$CDEF =$$ Area of face $$ABGH = \left( {b \times h} \right)$$
Area of face $$BGCF =$$ Area of face $$ADHE = \left( {l \times h} \right)$$
Total surface area of the cuboid $$=$$ Area of $$\left( {ABCD + EFGH + CDEF + ABGH + BGCF + ADHE} \right)$$
$$= \left( {l \times b} \right) + \left( {l \times b} \right) + \left( {b \times h} \right) + \left( {b \times h} \right) + \left( {l \times h} \right) + \left( {l \times h} \right)$$
$$= 2\left( {l \times b} \right) + 2\left( {b \times h} \right) + 2\left( {l \times h} \right)$$
$$= 2\left( {lb + bh + lh} \right)$$
Therefore, the total surface area of a cuboid $$= 2\left( {lb + bh + lh} \right)\,{\rm{sq}}{\rm{.units}}$$
### Volume of a Cuboid
Volume is the amount of $$3$$-dimensional space occupied by a solid object, such as the space occupied or contained by a substance (solid, liquid, gas, or plasma). Volume is often measured numerically using the SI-derived unit, the cubic meter $${{\text{m}}^3}$$.
So, volume $${\rm{ = }}\left( {{\rm{Length \times Breadth \times Height}}} \right)$$ $$= l \times b \times h$$
Therefore, the volume of a cuboid $$= lbh$$
### Perimeter of a Cuboid
Perimeter is the sum of lengths of all the edges of a cuboid.
From the above figure, we know that,
$$AD = BC = GF = HE = l$$
$$AB = CD = GH = FE = b$$
$$CF = DE = BG = AH = h$$
So, perimeter of a cuboid $$= AD + BC + GF + HE + AB + CD + GH + FE + CF + DE + BG + AH$$
$$= \left( {l + l + l + l} \right) + \left( {b + b + b + b} \right) + \left( {h + h + h + h} \right)$$
$$= 4\left( {l + b + h} \right)$$
Therefore, the perimeter of a cuboid $$= 4\left( {l + b + h} \right)$$
### Formula to Find Length of the Diagonal of a Cuboid
The formula to find the length of the diagonal of a cuboid is given by $$\sqrt {{l^2} + {b^2} + {h^2}} .$$
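As a brief added sketch of where this comes from, apply the Pythagorean theorem twice, first across the base rectangle and then up the height (here $$d$$ denotes the space diagonal): $${d_{base}}^2 = {l^2} + {b^2}$$ and $${d^2} = {d_{base}}^2 + {h^2} = {l^2} + {b^2} + {h^2},$$ so that $$d = \sqrt {{l^2} + {b^2} + {h^2}} .$$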
## What are Nets?
1. A geometry net is a two-dimensional shape that can be folded to form a $$3$$-dimensional shape or a solid.
2. A net is a pattern made when the surface of a three-dimensional figure is put forward flat showing each face of the figure.
3. A solid may have different nets.
### How Many Nets of Cuboid are There?
A cuboid with three different edge lengths can be unfolded into $$54$$ different nets.
One of them is shown below:
### Properties of a Cuboid
Let us look at some of the properties of a cuboid:
1. A cuboid has $$6$$ rectangular faces.
2. A cuboid has $$8$$ corner points which are known as vertices.
3. A cuboid has $$12$$ line segments joining two vertices known as edges.
4. All angles in a cuboid are right angles.
5. The edges opposite to each other are parallel.
### Solved Examples – Cuboid
Let us look at some of the solved examples for cuboid:
Question 1: Find the total surface area (TSA) of a cuboid whose length, breadth, and height are $${\rm{6}}\,{\rm{cm,}}\,{\rm{4}}\,{\rm{cm}}$$ and $${\rm{2}}\,{\rm{cm}}$$ respectively.
Answer: Given: $$l = 6\,{\rm{cm}},\,b = 4\,{\rm{cm}}$$ and $$h = 2\,{\rm{cm}}$$
We know that the total surface area of a cuboid is $$2\left( {lb + bh + lh} \right)$$
So, the total surface area of the given cuboid $$= 2\left[ {\left( {6 \times 4} \right) + \left( {4 \times 2} \right) + \left( {2 \times 6} \right)} \right]{\rm{c}}{{\rm{m}}^{\rm{2}}}$$
$${\rm{ = 2}}\left[ {{\rm{24 + 8 + 12}}} \right]{\rm{c}}{{\rm{m}}^{\rm{2}}}$$
$${\rm{ = 2}}\left( {{\rm{44}}} \right)\,{\rm{c}}{{\rm{m}}^{\rm{2}}}$$
$${\rm{ = 88}}\,{\rm{c}}{{\rm{m}}^{\rm{2}}}$$
Therefore, the total surface area of cuboid $${\rm{ = 88}}\,{\rm{c}}{{\rm{m}}^{\rm{2}}}$$
Question 2: Find the total surface area of the following cuboid whose length, breadth and height are $${\rm{4}}\,{\rm{cm,}}\,{\rm{4}}\,{\rm{cm}}$$ and $${\rm{10}}\,{\rm{cm}}$$ respectively.
Answer: Given: $$l = 4\,{\rm{cm}},\,b = 4\,{\rm{cm}}$$ and $$h = 10\,{\rm{cm}}$$
We know that the total surface area of a cuboid is $$2\left( {lb + bh + lh} \right).$$
So, the total surface area of the given cuboid $$= 2\left[ {\left( {4 \times 4} \right) + \left( {4 \times 10} \right) + \left( {4 \times 10} \right)} \right]{\rm{c}}{{\rm{m}}^{\rm{2}}}$$
$$= 2\left[ {16 + 40 + 40} \right]{\rm{c}}{{\rm{m}}^{\rm{2}}}$$
$$= 2\left( {96} \right){\rm{c}}{{\rm{m}}^{\rm{2}}}$$
$${\rm{ = 192}}\,{\rm{c}}{{\rm{m}}^{\rm{2}}}$$
Therefore, the total surface area of cuboid $${\rm{ = 192}}\,{\rm{c}}{{\rm{m}}^{\rm{2}}}$$
Question 3: Calculate the lateral surface area of a cuboid of dimensions $${\rm{10}}\,{\rm{cm \times 6}}\,{\rm{cm \times 5}}\,{\rm{cm}}{\rm{.}}$$
Answer: Given: $$l = 10\,{\rm{cm}},\,b = 6\,{\rm{cm}}$$ and $$h = 5\,{\rm{cm}}$$
We know that the lateral surface area of a cuboid is $$2h\left( {l + b} \right)$$ $${\text{sq}}{\text{.units}}$$
So, LSA of the given cuboid $$= 2 \times 5\left( {10 + 6} \right)$$
$$= 10\left( {16} \right)\,{\rm{c}}{{\rm{m}}^{\rm{2}}}$$
$$= 160\,{\rm{c}}{{\rm{m}}^{\rm{2}}}$$
Therefore, LSA of the given Cuboid $$= 160\,{\rm{c}}{{\rm{m}}^{\rm{2}}}$$
Question 4: Find the length of the longest pole that can be put in a room of dimensions $${\rm{10}}\,{\rm{m \times 10}}\,{\rm{m \times 5}}\,{\rm{m}}$$
Answer: Given: $$l = 10\,{\rm{m}},\,b = 10\,{\rm{m}}$$ and $$h = 5\,{\rm{m}}$$
Length of the longest pole $$=$$ Diagonal of a Cuboid (room)
We know that the diagonal of a Room $$= \sqrt {{l^2} + {b^2} + {h^2}}$$
$$= \sqrt {{{10}^2} + {{10}^2} + {5^2}} {\rm{m}}$$
$$= \sqrt {100 + 100 + 25} \,{\rm{m}}$$
$$= \sqrt {225} \,{\rm{m}}$$
$$= 15\,{\rm{m}}$$
Therefore, the length of the longest pole is $$= 15\,{\rm{m}}{\rm{.}}$$
Question 5: A cuboid has dimensions $$60\;{\rm{cm}} \times 54\,{\rm{cm}} \times 30\,{\rm{cm}}.$$ Find the Volume of a cuboid.
Answer: Given: $$l = 60\,{\rm{cm}},\,b = 54\,{\rm{cm}}$$ and $$h = 30\,{\rm{cm}}$$
We know that the volume of a cuboid $$= l \times b \times h$$
$$= \left( {60 \times 54 \times 30} \right){\rm{c}}{{\rm{m}}^{\rm{3}}}$$
$$= 97200\,{\rm{c}}{{\rm{m}}^{\rm{3}}}$$
Therefore, the volume of a cuboid $$= 97200\,{\rm{c}}{{\rm{m}}^{\rm{3}}}$$
### Summary About Shape of Cuboid
From the above article, we learn how to define a cuboid, an example of a cuboid and what is the number of faces, edges and vertices of a cuboid. Also, we hope that we have helped you to learn how to find the volume and surface area of the cuboid with the given measures.
### Frequently Asked Questions About Cuboid
Q.1. Do cuboids have square faces?
Ans: Yes, a cuboid can have a square face. The cuboid is a solid $$3$$-dimensional object having $$6$$ rectangular faces and $$12$$ edges. If any two faces of a cuboid are square, then it is called a square cuboid.
Example: Square Prism.
Q.2. Can a cuboid have all rectangular faces?
Ans: Yes, the cuboid is a solid $$3$$-dimensional object whose $$6$$ faces are all rectangles.
Q.3. What is the cuboid formula?
Ans: The formula to find the various parameters of a cuboid is given below.
The total surface area of a cuboid is $$2\left( {lb + bh + lh} \right)\,{\rm{sq}}{\rm{.units}}$$
The lateral surface area of a cuboid is $$2h\left( {l + b} \right)\,{\rm{sq}}{\rm{.units}}$$
The length of a diagonal of a cuboid $$= \sqrt {{l^2} + {b^2} + {h^2}}$$
The volume of a cuboid $$= l \times b \times h\,{\rm{cubic}}\,{\rm{units}}$$
The perimeter of a cuboid $$= 4\left( {l + b + h} \right)$$
# glAlphaFunc
specify the alpha test function
## Signature
glAlphaFunc( GLenum func, GLclampf ref ) -> void
glAlphaFunc( func, ref )
## Parameters
| Variable | Description |
| --- | --- |
| func | Specifies the alpha comparison function. Symbolic constants GL_NEVER, GL_LESS, GL_EQUAL, GL_LEQUAL, GL_GREATER, GL_NOTEQUAL, GL_GEQUAL, and GL_ALWAYS are accepted. The initial value is GL_ALWAYS. |
| ref | Specifies the reference value that incoming alpha values are compared to. This value is clamped to the range $\left[0,1\right]$, where 0 represents the lowest possible alpha value and 1 the highest possible value. The initial reference value is 0. |
## Description
The alpha test discards fragments depending on the outcome of a comparison between an incoming fragment's alpha value and a constant reference value. glAlphaFunc specifies the reference value and the comparison function. The comparison is performed only if alpha testing is enabled. By default, it is not enabled. (See glEnable and glDisable with argument GL_ALPHA_TEST.)
func and ref specify the conditions under which the pixel is drawn. The incoming alpha value is compared to ref using the function specified by func . If the value passes the comparison, the incoming fragment is drawn if it also passes subsequent stencil and depth buffer tests. If the value fails the comparison, no change is made to the frame buffer at that pixel location. The comparison functions are as follows:
GL_NEVER
Never passes.
GL_LESS
Passes if the incoming alpha value is less than the reference value.
GL_EQUAL
Passes if the incoming alpha value is equal to the reference value.
GL_LEQUAL
Passes if the incoming alpha value is less than or equal to the reference value.
GL_GREATER
Passes if the incoming alpha value is greater than the reference value.
GL_NOTEQUAL
Passes if the incoming alpha value is not equal to the reference value.
GL_GEQUAL
Passes if the incoming alpha value is greater than or equal to the reference value.
GL_ALWAYS
Always passes (initial value).
glAlphaFunc operates on all pixel write operations, including those resulting from the scan conversion of points, lines, polygons, and bitmaps, and from pixel draw and copy operations. glAlphaFunc does not affect screen clear operations.
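As an illustrative sketch (not part of the reference page itself), typical PyOpenGL usage is to enable the test and then set the comparison function; this assumes a legacy, fixed-function GL context, since glAlphaFunc is deprecated in the core profile.

```python
# Sketch only: discard fragments whose alpha is <= 0.5.
from OpenGL.GL import glEnable, glAlphaFunc, GL_ALPHA_TEST, GL_GREATER

glEnable(GL_ALPHA_TEST)       # alpha testing is disabled by default
glAlphaFunc(GL_GREATER, 0.5)  # pass only fragments with alpha > 0.5
```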
## Notes
Alpha testing is performed only in RGBA mode.
## Errors
GL_INVALID_ENUM is generated if func is not an accepted value.
GL_INVALID_OPERATION is generated if glAlphaFunc is executed between the execution of glBegin and the corresponding execution of glEnd .
## Associated Gets
glGet with argument GL_ALPHA_TEST_FUNC
glGet with argument GL_ALPHA_TEST_REF
glIsEnabled with argument GL_ALPHA_TEST
## Sample Code References
The following code samples have been found which appear to reference the functions described here. Take care that the code may be old, broken or not even use PyOpenGL.
glAlphaFunc
OpenGLContext tests/arbwindowpos.py Lines: 90 |
# Does uniform convergence of the metrics imply uniform convergence of the radii of the smallest balls?
Let $X$ be a countable set and $d_n,d$ locally finite metrics on $X$. Denote by $R_x^n$ (resp. $R_x$) the radius of the smallest closed ball in the metric $d_n$ (resp. $d$) about $x$ which contains at least two points.
Question: Suppose that $d_n\rightarrow d$ uniformly. Is it true that also $R_x^n\rightarrow R_x$ uniformly?
P.S. If needed, we can add the hypothesis that $|C_n(x,R_x^n)|\leq C$ for a universal constant $C$ not depending on either $n$ or $x$ (here $C_n(x,R)$ stands for the closed ball in the metric $d_n$ of radius $R$ about $x$).
Sorry, it seems trivial but I have been stuck on it for three days... (I hope it's not terribly trivial)
-
Is $R_n$ just the infimum of the $d_n$-distances from $x$ to other points in $X$, or you meant something else? (formally what is written is a bit strange because the smallest ball may fail to exist, etc). – fedja Dec 5 '11 at 23:58
Well, wlog we may assume that $d_n$ is within $1$ of $d$. Then the point at $d_n$-distance $R^n_x$ to $x$ is contained in the punctured ball $B=B_d(x,R_x+2)\setminus x$. By assumption, there are finitely many points in this set. In particular, $R^n_x = \min_{y\in B} d_n(x,y)$ converges.
-
Clarification: "by assumption" refers to the assumption of local finiteness, not to the proposed assumption of a uniform bound on the cardinality of balls, which is unnecessary. – Lior Silberman Dec 9 '11 at 17:28
We do not need the assumption. Suppose $|d_n-d|<\epsilon$. I claim that $|R^n_x-R_x|\leq\epsilon$. Proof: Let $p$ be a point at distance $R_x$ (or $R_x+\delta$) from $x$ in $d$. Then its distance from $x$ in $d_n$ is at most $R_x+\epsilon$ (or $R_x+\epsilon+\delta$), so $R^n_x \leq R_x+\epsilon$.
Similarly, we can fix a point at distance $R^n_x+\delta$ from $x$ in $d_n$ and look at it in $d$; this gives $R_x \leq R^n_x+\epsilon$.
# Generalized Linear Mixed Models (GLMM)

Generalized linear mixed models (GLMMs) extend generalized linear models by allowing the linear predictor to contain random effects in addition to the usual fixed effects, and they inherit from GLMs the idea of extending linear mixed models to non-normal data: conditioned on the random effects, the response is assumed to be distributed according to an exponential family. GLMMs handle normal or non-normal data with random and/or repeated effects, and the GLMM is the general model, with LM, LMM and GLM being special cases. They provide a flexible approach for analyzing non-normal data when random effects are present.

The likelihood has no general closed form, and integrating over the random effects is usually extremely computationally intensive. In addition to numerically approximating this integral (e.g. via Gauss–Hermite quadrature), methods motivated by Laplace approximation have been proposed; for example, the penalized quasi-likelihood method, which essentially involves repeatedly fitting (i.e. doubly iterative) a weighted normal mixed model with a working variate, is implemented by various commercial and open-source statistical programs. Various approximate methods have been developed, but none has good properties for all possible models and data sets (ungrouped binary data are particularly problematic). The Akaike information criterion (AIC) is a common criterion for model selection; since fitting GLMMs via maximum likelihood (as via AIC) involves integrating over the random effects, estimates of AIC for GLMMs based on certain exponential family distributions have only recently been obtained.

Estimating and interpreting GLMMs (of which mixed-effects logistic regression is one) can be quite challenging, and the explosion of research on GLMMs in the last decade has generated considerable uncertainty for practitioners in ecology and evolution. On the software side, SAS offers PROC GLIMMIX for GLMMs (the MIXED procedure fits linear mixed models), and statsmodels currently supports estimation of binomial and Poisson GLIMMIX models using two Bayesian methods: the Laplace approximation to the posterior and a variational Bayes approximation to the posterior.
## Data to linear dynamics (Data2LD)
Let's consider a study on traumatic brain injury (TBI), which is caused by a blow to the head and contributes to just under a third (30.5\%) of all injury-related deaths in the US. Figure (1) illustrates the acceleration of the brain tissue before and after a series of five blows to the cranium.
Figure (1)
The laws of motion tell us that the acceleration f(t) can be modeled by a second-order linear differential equation (LDE) with a point impulse u(t) representing the blow to the cranium, shown as dashed lines in Figure (1).
This LDE
\begin{equation*}
\frac{\textrm{d}^2f}{\textrm{d}t^2} = \beta_{0} f + \beta_{1} \frac{\textrm{d}f}{\textrm{d}t} + \alpha_{0} u(t)
\end{equation*}
contains three parameters $\beta_{0},\beta_{1}$ and $\alpha_{0},$ and these convey the rate of the restoring force (as $t \rightarrow \infty,$ the acceleration will tend to revert back to zero), the rate of the friction force (as $t \rightarrow \infty,$ the oscillations in the acceleration reduce to zero) and the rate of the force from the point impulse.
While there are several methods for estimating LDE parameters from partially observed data, they are invariably subject to several problems, including high computational cost, sensitivity to initial values and large sampling variability.
We propose a method called Data2LD, for data to linear dynamics, that overcomes these issues and produces estimates of the LDE parameters that have less bias, a smaller sampling variance and a ten-fold improvement in computation time.
The final parameter estimates with 95\% confidence intervals are, $\hat{\beta_{0}} = -0.056 \pm 0.002,$ $\hat{\beta_{1}} = -0.150 \pm 0.018$ and $\hat{\alpha_{0}} = 0.395 \pm 0.032.$ This is an under-damped process; after the blow to the cranium the acceleration will oscillate with a decreasing amplitude that will quickly decay to zero.
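As an illustrative sketch (not part of Data2LD itself), the fitted LDE can be integrated numerically with the estimated parameters; the pulse timing and width below are assumptions rather than values from the study.

```python
# Sketch only: simulate the second-order LDE with the Data2LD parameter estimates.
import numpy as np
from scipy.integrate import solve_ivp

beta0, beta1, alpha0 = -0.056, -0.150, 0.395

def u(t, width=1.0):
    # stand-in for the point impulse: a short rectangular pulse at t = 0
    return 1.0 if 0.0 <= t <= width else 0.0

def rhs(t, y):
    f, df = y
    return [df, beta0 * f + beta1 * df + alpha0 * u(t)]

sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 0.0], max_step=0.1, dense_output=True)
# sol.y[0] oscillates with decaying amplitude, consistent with an under-damped process.
```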
Figure (2)
Figure (2) shows the accelerometer readings of the brain tissue, the fitted curve produced by Data2LD (solid line), the numerical approximation to the solution of the LDE with the parameters identified by Data2LD (dashed line) and the impulse function $u$ representing the blow to the cranium (dotted line). We can see the LDE solution with the parameters defined by Data2LD is very close to the fitted curve produced by Data2LD, which indicates that the LDE provides an adequate description of the acceleration of the brain tissue.
Papers:
Carey, M., Gath, E., Hayes, K. (2014) 'Frontiers in financial dynamics'. Research in International Business and Finance.
http://www.sciencedirect.com/science/article/pii/S0275531912000438
Carey, M., Gath, E., Hayes, K. (2016) 'A generalised smoother for linear ordinary differential equations'. Journal of Computational and Graphical Statistics.
https://doi.org/10.1080/10618600.2016.1265526
Carey, M., Ramsay J. (2018) 'Parameter Estimation and Dynamic Smoothing
with Linear Differential Equations'. Journal of Computational and Graphical Statistics. (in press)
## Dynamics 4 Genomic Big Data
The immune response to viral infection is a dynamic process, which is regulated by an intricate network of many genes and their products.
Understanding the dynamics of this network will help infer the mechanisms involved in regulating influenza infection and hence aid the development of antiviral treatments and preventive vaccines. There has been an abundance of literature on dynamic network construction, e.g., Hecker et al. (2009), Lu et al. (2011) and Wu et al. (2013).
My research involves the development of a new pipeline for dynamic network construction for high-dimensional time-course gene expression data. This pipeline allows us to discern the fundamental underlying biological processes and their dynamic features at the genetic level.
The pipeline includes:
Novel statistical methods and modelling approaches have been developed for the implementation of this new pipeline, which include a new approach for the selection of the smoothing parameter, a new clustering approach and a new method for model selection for high-dimensional ODEs.
Papers:
Carey, M., Wu, S., Gan, G. and Wu, H. (2016) 'Correlation-based iterative clustering methods for time course data: the identification of temporal gene response modules to influenza infection in humans'. Infectious Disease Modelling.
http://www.sciencedirect.com/science/article/pii/S2468042716300094
Song, J., Carey, M., Zhu, H., Miao, H., Ramırez, Juan and Wu, H. (2017) 'Identifying the dynamic gene regulatory network during latent HIV-1reactivation using high-dimensional ordinary differential equations'. International Journal of Computational Biology and Drug Design
http://www.inderscience.com/info/ingeneral/forthcoming.php?jcode=ijcbdd
Carey, M., Wu, S., Wu, H., 'A big data pipeline: Identifying dynamic gene regulatory networks from time-course Gene Expression Omnibus data with applications to influenza infection'. Statistical Methods in Medical Research (in press)
## Geo-Spatial functional data analysis
Geo-Spatial functional data analysis (FDA) concerns the quantitative analysis of spatial and spatio-temporal data, including their statistical dependencies, accuracy and uncertainties.
Figure (1)
It is used in
• mapping,
• assessing spatial data quality,
• sampling design optimisation,
• modelling of dependence structures,
• and drawing of valid inference from a limited set of spatio-temporal data.
Geo-Spatial functional data analysis
This new branch of Statistics can be used to better analyse, model and predict spatial data.
Key aspects of FDA include:
• smoothing
• data reduction,
• functional linear modelling
• and forecasting methods.
Spatial FDA accounts for attributes of the geometry of the physical problem such as irregularly shaped domains, external and internal boundary features and strong concavities.
These models can also include a priori information about the spatial structure of the phenomenon, described by partial differential equations (PDEs).
Island of Montreal.
We consider the problem of estimating population density over the Island of Montreal. Figure (1) shows the census tract locations (493 data points defined
by the centroids of census enumeration areas) over the Island of Montreal, Quebec, Canada, excluding an airport (in the south) and an industrial park with an oil refinery tank farm (in the north-east tip of the island). Population density is available at each census tract, measured in units of 1000 inhabitants per $km^2$, and a binary variable indicating whether a tract is predominantly residential or industrial/commercial is available as a covariate for estimating the distributions of census quantities.
Here in particular we are interested in population density, so the airport and the industrial park are not part of the domain of interest, since people cannot live in these two areas. Census quantities can be rather different on different sides of these uninhabited parts of the city; for instance, just south of the industrial park there is a densely populated area with medium-low income, whilst to its north-east there is, on the contrary, a rich neighbourhood characterised by low population density, and to its west there is a relatively wealthy cluster of condominiums (high population density).
Hence, whilst it seems reasonable to assume that population density features a smooth spatial variation over the inhabited parts of the island, there is no reason to assume similar spatial variation on either side of these uninhabited areas. Figure (1) also shows the island coasts as boundaries of the domain of interest; the parts of the boundary highlighted in red correspond to the harbour on the east shore and to two public parks on the south-west and north-east shores; no people live by the river banks along these stretches of coast.
Figure(2)
Figure (3)
Figures (2) and (3) show this estimate of the population density. Notice that the estimate complies with the imposed boundary conditions, dropping to zero along uninhabited stretches of coast. Also, the estimate has not artificially linked data points on either side of the uninhabited parts; compare, for instance, the densely populated area to the south of the oil refinery and purification plant with the low-population-density neighbourhood to the north-east of the industrial park. The $\beta$ coefficient corresponding to the binary covariate indicating whether a tract is predominantly residential or commercial/industrial is $1.30$; this means that census tracts that are predominantly residential are on average expected to have $1300$ more inhabitants per $km^2$ than those classified as mostly commercial/industrial.
# What is x in 12x^2 - (4/squarerootx) = 0?
• September 10th 2009, 01:02 PM
marie7
What is x in 12x^2 - (4/squarerootx) = 0?
What is x in 12x^2 - (4/squarerootx) = 0?
• September 10th 2009, 01:21 PM
e^(i*pi)
Quote:
Originally Posted by marie7
What is x in 12x^2 - (4/squarerootx) = 0?
$12x^2 - \dfrac{4}{\sqrt{x}} = 0$

Divide by 4 for the heck of it

$3x^2 = \dfrac{1}{\sqrt{x}}$

Multiply both sides by $\sqrt{x}$

$3x^{5/2} = 1 \implies x^{5/2} = \dfrac{1}{3}$

$x = \left(\dfrac{1}{3}\right)^{2/5} \approx 0.644$

Remember though, because of the $\sqrt{x}$ in the denominator of the original equation we need $x > 0$, which this solution satisfies.