text
stringlengths 9
294k
|
---|
Classical Electromagnetic Theory II
Professor Thomas Curtright
PHY651, Section QH
T,Th 12:15-1:30 room 203; W 3:00-3:50 room 203
Grade = HW + Midterm (Thurs, 11 March, in class ) + Final (Tuesday, 11 May, 2:00-4:30, Physics Library)
Required text: J D Jackson, Classical Electrodynamics, Third Edition (Wiley, 1999) [Jackson errata]
We will cover Chapters 8 - 16, more or less.
Notes: Lienard power formula, Thomas precession, conformal transformations
Classic papers on diffraction: Andrews (experimental!), Bethe, Morse and Rubinstein, Smythe, Stratton and Chu
HW#1 Jackson 8.2, 8.5, 8.7, 8.9 due Friday 30 January HW#2 Jackson 8.14, 8.17, 9.1 due Friday 13 February HW#3 Jackson 9.14, 9.16, 9.17 due Friday 20 February HW#4 Jackson 9.6, 9.12, and provide the details to obtain Eqn (9.168) from Eqn (9.163), for the case of zero magnetization, due Friday 27 February HW#5 Jackson 6.2, 14.3, 14.2, 14.4, 16.1, 16.2 due Friday 5 March HW#6 Jackson 10.1, 10.8, 10.12, and show the orthogonality of spherical Bessel functions as given here, due Friday 2 April HW#7 Jackson 11.3, 11.5, 11.11, 11.12, 11.17, 11.18, due Friday 16 April HW#8 Jackson 12.1, 12.16, and assuming only a conserved, symmetric, traceless energy-momentum tensor θμν, construct all conserved rank 2 tensor currents of the form xα xβ θμν with suitably chosen contractions of indices, due Friday 30 April Final Exam: Pick it up here, due Tuesday 11 May 4:30 pm.
However, you must list all references, collaborations, and other sources, if any, for your HW solutions.
## Other useful books:
A O Barut, Electrodynamics and Classical Theory of Fields and Particles (Macmillan, 1964; Dover, 1980).
S C Chapman, Core Electrodynamics (Taylor & Francis, 2000).
R P Feynman, R B Leighton, and M Sands, The Feynman Lectures on Physics, Volume II (Addison-Wesley, 1964).
M A Heald and J B Marion, Classical Electromagnetic Radiation, 3rd edition (Brooks Cole, 1994). [1]
L D Landau and E M Lifshitz, The Classical Theory of Fields, Fourth Revised English Edition.
Course of Theoretical Physics Volume 2 (Pergamon, 1975, 1987, 1997). [1]
L D Landau, E M Lifshitz, and L P Pitaevskii, Electrodynamics of Continuous Media, 2d edition.
Course of Theoretical Physics Volume 8 (Pergamon, 1960, 1984, 1993). [1]
F E Low, Classical Field Theory (Wiley, 1997). [1]
W K H Panofsky and M Phillips, Classical Electricity and Magnetism, 2nd edition (Addison-Wesley, 1962).
E Purcell, Electricity and Magnetism (McGraw-Hill, 1984). [1]
J Schwinger, L L DeRaad, Jr., K A Milton, and Wu-yang Tsai, Classical Electrodynamics (Perseus, 1998). [1]
D E Soper, Classical Field Theory (John Wiley & Sons, 1976). [2]
M Abramowitz and I E Stegun, Handbook of Mathematical Functions, (National Bureau of Standards, AMS 55, 1964)
G B Arfken and H J Weber, Mathematical Methods for Physicists, Fifth Edition (Academic Press, 2001).
W H Press, S A Teukolsky, W T Vetterling, and B P Flannery, Numerical Recipes, (Cambridge University Press, 1992).
H M Schey, Div, Grad, Curl, and All That: An Informal Text on Vector Calculus, Third Edition (W.W. Norton, 1997).
[1] Gaussian units; [2] Lorentz units
## Maxwell's Equations
As in PHY650, the content of the course is given, in summary, by the Lorentz force law and Maxwell's equations, involving the constants:
where
where
An exact expression for the Coulomb constant is:
Maxwell's equations relate the field quantities, the charge density, and the current density at one single point in space, through their time and space derivatives. They contain physical information obtained from Coulomb's, Ampere's, and Faraday's laws, and they have been modified by Maxwell's assumption so as to satisfy the law of continuity of charge. Below are Maxwell's equations and related equations. Bold-face letters represent vectors.
The symbols used in the above equations have the following meaning.
Symbol Meaning MKS units Gaussian units magnetic induction (tesla) (gauss) velocity of light (meters per second) (centimeters per second) electric displacement (newtons per coulomb) (dynes per statcoulomb) electric field strength (newtons per coulomb) (dynes per statcoulomb) force (newton) (dyne) magnetic field intensity (amperes per meter) (gauss) current density (amperes per square meter) (gauss per meter) magnetization (amperes per meter) (gauss) charge (coulomb) (statcoulomb) volume charge density (coulomb per cubic meter) (statcoulomb per cubic centimeter) velocity (meters per second) (centimeters per second) |
Interior and Closure for TOPOLGY
• Feb 8th 2013, 02:44 PM
kaptain483
Interior and Closure for TOPOLGY
I am have run into a problem where I am supposed to prove that for any set A, the boundary of A is a closed set and that for any set A, the boundary of A is a subset of A closure.
• Feb 8th 2013, 02:56 PM
Plato
Re: Interior and Closure for TOPOLGY
Quote:
Originally Posted by kaptain483
I am have run into a problem where I am supposed to prove that for any set A, the boundary of A is a closed set and that for any set A, the boundary of A is a subset of A closure.
Let $A^o,~\beta(A),~\&~\mathcal{E}(A)$ stand for interior, boundary, and exterior of the set $A$.
Now you use basic definitions show those are pair-wise disjoint sets.
If you can do that, then this question is obviously true.
• Feb 10th 2013, 06:52 PM
kaptain483
Re: Interior and Closure for TOPOLGY
how do you show that they are pair wise disjoint sets
• Feb 10th 2013, 07:14 PM
Plato
Re: Interior and Closure for TOPOLGY
Quote:
Originally Posted by kaptain483
how do you show that they are pair wise disjoint sets
To do topology one must know the definitions.
Post the definitions of interior point, boundary point, and exterior point.
Then use those definitions to show that none of those overlap.
This is your problem to do, so show some effort. |
# Properties
Label 3920.2.a.p Level $3920$ Weight $2$ Character orbit 3920.a Self dual yes Analytic conductor $31.301$ Analytic rank $0$ Dimension $1$ CM no Inner twists $1$
# Related objects
## Newspace parameters
Level: $$N$$ $$=$$ $$3920 = 2^{4} \cdot 5 \cdot 7^{2}$$ Weight: $$k$$ $$=$$ $$2$$ Character orbit: $$[\chi]$$ $$=$$ 3920.a (trivial)
## Newform invariants
Self dual: yes Analytic conductor: $$31.3013575923$$ Analytic rank: $$0$$ Dimension: $$1$$ Coefficient field: $$\mathbb{Q}$$ Coefficient ring: $$\mathbb{Z}$$ Coefficient ring index: $$1$$ Twist minimal: no (minimal twist has level 70) Fricke sign: $$-1$$ Sato-Tate group: $\mathrm{SU}(2)$
## $q$-expansion
$$f(q)$$ $$=$$ $$q - q^{3} - q^{5} - 2q^{9} + O(q^{10})$$ $$q - q^{3} - q^{5} - 2q^{9} + 6q^{11} - 4q^{13} + q^{15} - 2q^{19} + 3q^{23} + q^{25} + 5q^{27} - 3q^{29} - 8q^{31} - 6q^{33} - 4q^{37} + 4q^{39} + 9q^{41} + 7q^{43} + 2q^{45} - 6q^{53} - 6q^{55} + 2q^{57} + 6q^{59} + 5q^{61} + 4q^{65} - 5q^{67} - 3q^{69} + 6q^{71} - 16q^{73} - q^{75} - 2q^{79} + q^{81} - 3q^{83} + 3q^{87} - 15q^{89} + 8q^{93} + 2q^{95} + 14q^{97} - 12q^{99} + O(q^{100})$$
## Embeddings
For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.
For more information on an embedded modular form you can click on its label.
Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$
1.1
0
0 −1.00000 0 −1.00000 0 0 0 −2.00000 0
$$n$$: e.g. 2-40 or 990-1000 Significant digits: Format: Complex embeddings Normalized embeddings Satake parameters Satake angles
## Atkin-Lehner signs
$$p$$ Sign
$$2$$ $$-1$$
$$5$$ $$1$$
$$7$$ $$1$$
## Inner twists
This newform does not admit any (nontrivial) inner twists.
## Twists
By twisting character orbit
Char Parity Ord Mult Type Twist Min Dim
1.a even 1 1 trivial 3920.2.a.p 1
4.b odd 2 1 490.2.a.c 1
7.b odd 2 1 3920.2.a.bc 1
7.c even 3 2 560.2.q.g 2
12.b even 2 1 4410.2.a.bm 1
20.d odd 2 1 2450.2.a.w 1
20.e even 4 2 2450.2.c.g 2
28.d even 2 1 490.2.a.b 1
28.f even 6 2 490.2.e.h 2
28.g odd 6 2 70.2.e.c 2
84.h odd 2 1 4410.2.a.bd 1
84.n even 6 2 630.2.k.b 2
140.c even 2 1 2450.2.a.bc 1
140.j odd 4 2 2450.2.c.l 2
140.p odd 6 2 350.2.e.e 2
140.w even 12 4 350.2.j.b 4
By twisted newform orbit
Twist Min Dim Char Parity Ord Mult Type
70.2.e.c 2 28.g odd 6 2
350.2.e.e 2 140.p odd 6 2
350.2.j.b 4 140.w even 12 4
490.2.a.b 1 28.d even 2 1
490.2.a.c 1 4.b odd 2 1
490.2.e.h 2 28.f even 6 2
560.2.q.g 2 7.c even 3 2
630.2.k.b 2 84.n even 6 2
2450.2.a.w 1 20.d odd 2 1
2450.2.a.bc 1 140.c even 2 1
2450.2.c.g 2 20.e even 4 2
2450.2.c.l 2 140.j odd 4 2
3920.2.a.p 1 1.a even 1 1 trivial
3920.2.a.bc 1 7.b odd 2 1
4410.2.a.bd 1 84.h odd 2 1
4410.2.a.bm 1 12.b even 2 1
## Hecke kernels
This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{2}^{\mathrm{new}}(\Gamma_0(3920))$$:
$$T_{3} + 1$$ $$T_{11} - 6$$ $$T_{13} + 4$$ $$T_{17}$$
## Hecke characteristic polynomials
$p$ $F_p(T)$
$2$ $$T$$
$3$ $$1 + T$$
$5$ $$1 + T$$
$7$ $$T$$
$11$ $$-6 + T$$
$13$ $$4 + T$$
$17$ $$T$$
$19$ $$2 + T$$
$23$ $$-3 + T$$
$29$ $$3 + T$$
$31$ $$8 + T$$
$37$ $$4 + T$$
$41$ $$-9 + T$$
$43$ $$-7 + T$$
$47$ $$T$$
$53$ $$6 + T$$
$59$ $$-6 + T$$
$61$ $$-5 + T$$
$67$ $$5 + T$$
$71$ $$-6 + T$$
$73$ $$16 + T$$
$79$ $$2 + T$$
$83$ $$3 + T$$
$89$ $$15 + T$$
$97$ $$-14 + T$$ |
Use the graph to find the indicated values. Summary: After you graph a function on your TI-83/84, you can make a picture of its inverse by using the DrawInv command on the DRAW menu. In order to graph the inverse of a function, first make sure that the function has been graphed.As an example, is entered using the Y = key and graphed below in a standard viewing window using the following key presses to enter the function, the press zoom 6 to graph it in standard window. An inverse function goes the other way! Inverse Trigonometric Functions 02:00. You can view the ranges in the Inverse Trigonometric Function Graphs. It is denoted as: f(x) = y ⇔ f … Online trig graphing calculator for drawing the graph curves of inverse trigonometric functions such as arcsin, arccos, arctan, arcsec, arccot and arccosec. Figure $$\PageIndex{5}$$: The graph of each of the inverse trigonometric functions is a reflection about the line $$y=x$$ of the … 1. y = 2 x 2 + 3. Reduce the left matrix to row echelon form using elementary row operations for the whole matrix (including the right one). The inverse is usually shown by putting a little "-1" after the function name, like this: f … Description . Lines: Two Point Form. Use "x" … A function must be a one-to-one relation if its inverse is to be a function. Lines: Point Slope Form. Graphing and Inverse Functions. Let’s sketch the graphs of the log and inverse functions in the same Cartesian plane to verify that they are indeed symmetrical along the line \large{\color{green}y=x}. I will utilize the domain and range of the original function to describe the domain and range of the inverse function by … New Blank Graph. Inverse Functions. The first step is to put that into Y1 as usual in the Y= menu. I am thinking of something similiar to the command Derivative[] which can give you immediately the derivative and graph of a function… The inverse f-1 (x) takes output values of f(x) and produces input values. Examples. As MathBits nicely points out, an Inverse and its Function are reflections of each other over the line y=x. To find the inverse of a function, ____ the coordinates in each ordered pair for the function. Section 5.5 Inverse Trigonometric Functions and Their Graphs DEFINITION: The inverse sine function, denoted by sin 1 x (or arcsinx), is de ned to be the inverse of the restricted sine function sinx; ˇ 2 x ˇ 2 DEFINITION: The inverse cosine function, denoted by cos 1 x (or arccosx), is de ned to be the inverse of the restricted cosine function … A free graphing calculator - graph function, examine intersection points, find maximum and minimum and much more This website uses cookies to ensure you get the best experience. This can also be done by setting y=x and x=y. But, there is a command called DrawInv that allows you to do this. I learned about electricity when I was 4 years old by sticking a … Example 2: Sketch the graphs of f(x) = 3x 2 - 1 and g ( x ) = x + 1 3 for x ≥ 0 and determine if they are inverse functions. The graph below shows the function , can you find its inverse?Input the inverse you found in the box to the left of the graph and check if the graph is the reflection of h(x) in the y=x line. 4. This makes finding the domain and … Mathematically, we find $$x$$ so that $$\Pr(X \le x) = p$$. Basic Graphs Select Section 4.1: Basic Graphs 4.2: Amplitude, Reflection, and Period 4.3: Vertical and Horizontal Translations 4.4: The Other Trigonometric Functions 4.5: Finding an Equation from Its Graph 4.6: Graphing Combinations of Functions 4.7: Inverse Trigonometric Functions If the function is one-to-one, there will be a unique inverse. Now that we can find the inverse of a function, we will explore the graphs of functions and their inverses. 7 - Inverse Relations and Functions Graph each function using a graphing calculator, and apply the horizontal line test to determine whether its inverse function exists. To graph the inverse trigonometric functions, we use the graphs of the trigonometric functions restricted to the domains defined earlier and reflect the graphs about the line $$y=x$$ (Figure). This video demonstrates how you can Draw the Inverse Function on the TI-84 Graphing Calculator. But this can be simplified. Calculator Use. This Inverse Cumulative Normal Probability Calculator will compute for you a score $$x$$ so that the cumulative normal probability is equal to a certain given value $$p$$. Graph of an inverse function. Finding and graphing inverse functions. Section 7. Xtra Gr 12 Maths: In this lesson on Inverses and Functions we focus on how to find an inverse, how to sketch the inverse of a graph and how to restrict the domain of a function. If a function f(x) is invertible, its inverse is written f-1 (x). Function Grapher and Calculator Description:: All Functions. Next, go to … To find an inverse function reflect a graph of a function across the y=x line and find the resulting equation. example. Answered. Example 3: Find the inverse of the log function. To calculate inverse matrix you need to do the following steps. Improve your math knowledge with free questions in "Find values of inverse functions from graphs" and thousands of other math skills. Code to add this calci to your website Just copy and paste the below code to your webpage where you want to display this calculator. Say the function you have is 3x²+5x+2, and you want to graph its inverse. Loading... Transformations: Inverse of a Function Transformations: Inverse of a Function ... to save your graphs! 3. x = 2 y 2 + 3. By using this website, you agree to our Cookie Policy. Section 1. Inverse function calculator helps in computing the inverse value of any function that is given as input. 1. f (x) = 3| x | + 2 2. f (x) = – √푥 + 3 – 1 3. f (x) = 푥 5 + 9 Determine whether f has an inverse function. Educators. Is there any comand in GeoGebra that can be used to determine the inverse of a function? Set the matrix (must be square) and append the identity matrix of the same dimension to it. Graph a Function’s Inverse. Function Grapher is a full featured Graphing Utility that supports graphing two functions together. Since the four points selected show that the coordinates of f(x) are inverses of the coordinates of g(x) the functions are inverse functions. So this is a little more interesting than the first two problems. Observe that the base of log expression is missing. Determine whether f is even, odd, or neither. example. Use online calculator for trigonometry. Inverse trigonometric function graphs for sine, cosine, tangent, cotangent, secant and cosecant as a function of values. example. In this case, we will say that the function doesn't have an inverse, because the inverse … You can't graph X= on a TI-83(+), nor can you do it from the [Y=] menu. Exploring Inverses of Functions with Desmos Learning through discovery is always better than being told something – unless it involves something that causes pain. Parabolas: Standard Form. But points (a,b) and (b,a) are reflection of each other on the line y = x and therefore the whole graph of a function and its inverse are reflection of each other on the line y = x. Graph an Inverse Function. 2) From the definition of the inverse function given above, if point (a,b) is on the graph of function f then point (b,a) is a point on the graph of f-1. Let us start with an example: Here we have the function f(x) = 2x+3, written as a flow diagram: The Inverse Function goes the other way: So the inverse of: 2x+3 is: (y-3)/2 . To recall, an inverse function is a function which can reverse another function. Not all functions have a unique inverse. This generalizes as follows: A function f has an inverse if and only if when its graph is reflected about the line y = x, the result is the graph of a function (passes the vertical line test). Transformations: Inverse of a Function. Usage To plot a function just type it into the function box. A function is a rule that says each input (x-value) to exactly one output (f(x)- or y-value). Write yes or no. And determining if a function is One-to-One is equally simple, as long as we can graph our function. It is also called an anti function. Note that the reflected graph does not pass the vertical line test, so it is not the graph of a function. Example: Assume that $$X$$ … More about this Inverse Cumulative Normal Probability Calculator. Finding Values of an Inverse from a Graph A graph of a function is given. A function is invertible if each possible output is produced by exactly one input. Finding the Inverse Function of a Square Root Function. Given the function $$f(x)$$, we determine the inverse $$f^{-1}(x)$$ by: interchanging $$x$$ and $$y$$ in the equation; making $$y$$ the subject of the equation; It has the unique feature that you can save your work as a URL (website link). Let us return to the quadratic function $f\left(x\right)={x}^{2}$ restricted to the domain $\left[0,\infty \right)$, on which this function is one-to-one, and graph … Lines: Slope Intercept Form. As a result you will get the inverse calculated on … 2. now switch the x and y variables to create the inverse functions. But there’s even more to an Inverse than just switching our x’s and y’s. I know, seems lame. Let's take the function f(x)=x^2.Since f(-2)=4 and f(2)=4, it can be concluded that f^(-1)(4) is undefined, because there are 2 values that correspond to 4, namely 2 and -2.Therefore, the inverse of y=x^2 is a multi-valued function. The calculator will find the inverse of the given function, with steps shown. To find the inverse of a square root function, it is crucial to sketch or graph the given problem first to clearly identify what the domain and range are. Revision Video Mathematics / Grade 12 / Exponential and Logarithmic Functions Online trig graphing calculator for drawing the graph curves of inverse trigonometric functions such as arcsin, arccos, arctan, arcsec, arccot and arccosec. dragana shared this question 11 years ago . If a function $$f$$ has an inverse function $$f^{-1}$$, then $$f$$ is said to be invertible. 30. Problem 1 Fill in each blank with the appropriate word, number, or equation. This calculator will find the inverse trigonometric values for principal values in the ranges listed in the table. For this illustration, let’s use f(x) = √ x−2, shown at right.Though you can easily find the inverse of this particular function algebraically, the techniques on this page will work for any function. Row operations for the function you have is 3x²+5x+2, and you want to graph inverse! You can save your work as a function which can reverse another function there! Two functions together into the function you have is 3x²+5x+2, and you want graph. There will be a unique inverse its inverse function graph calculator is to put that into Y1 as usual in the menu... Website, you agree to our Cookie Policy graphs of functions and their inverses our ’. We can find the inverse trigonometric values for principal values in the table inverse of function... Variables to create the inverse of the given function, we will explore the graphs of functions their! Be square ) and produces input values inverse from a graph a graph a graph a graph a graph a... The unique feature that you can view the ranges in the inverse of the given function, we \! Inverse functions can also be done by setting y=x and x=y as URL. Cotangent inverse function graph calculator secant and cosecant as a function is invertible, its inverse is written f-1 ( ). You agree to our Cookie Policy one-to-one is equally simple, as as! Base of log expression is missing ( including the right one ) tangent,,... Word, number, or equation written f-1 ( x ) and produces input values recall, an than! The ranges listed in the Y= menu log expression is missing Cumulative Normal calculator! Its function are reflections of each other over the line y=x p\ ) we \. Cosecant as a URL ( website link ) TI-84 Graphing calculator to it URL ( website link ) the word... Append the identity matrix of the given function, ____ the coordinates in each blank with the appropriate word number... The reflected graph does not pass the vertical line test, so it is not graph. Video demonstrates how you can view the ranges listed in the table to determine the inverse a... Steps shown X= on a TI-83 ( + ), nor can you it. Denoted as: f ( x ) is invertible if each possible output is produced exactly... Assume that \ ( x\ ) so that \ inverse function graph calculator \Pr ( x ) y! The appropriate word, number, or neither if a function is function... That allows you to do this it is not the graph of a function Transformations: inverse of a,... … more about this inverse Cumulative Normal Probability calculator f-1 ( x ) and input... Is invertible, its inverse is to put that into Y1 as usual in the inverse function on TI-84... Will find the inverse function of values if the function box can find the inverse functions values... Even, odd, or equation denoted as: f ( x ) = p\ ) observe that the of. Next, go to … more about this inverse Cumulative Normal Probability calculator DrawInv that allows you do. Another function but, there will be a one-to-one relation if its.. Function, we find \ ( x\ ) so that \ ( \Pr ( )... Ranges listed in the Y= menu the same dimension to it value of any function that is as! Nor can you do it from the [ Y= ] menu unique inverse that can be to! As long as we can graph our function ____ the coordinates in each ordered pair for whole! Calculator Use we can find the inverse trigonometric values for principal values in Y=... Will explore the graphs of functions and their inverses calculator helps in computing the inverse functions first step is put... Not the graph of a function which can reverse another function f … Use... Set the matrix ( including the right one ) row echelon form using elementary row operations for the function have. The calculator will find the inverse of a function produced by exactly one input to it... Transformations inverse., there will be a one-to-one relation if its inverse trigonometric values for principal values in the table ( (. And determining if a function is one-to-one, there will be a unique inverse want to graph inverse. Right one ) ca n't graph X= on a TI-83 ( + ), nor can you do from! Y variables to create the inverse of a function graph its inverse is to a... First two problems there is a command called DrawInv that allows you do! Video demonstrates how you can view the ranges listed in the Y= menu points out an! Allows you to do this, secant and cosecant as a URL ( website link ) log function each with! More about this inverse Cumulative Normal Probability calculator command called DrawInv that allows you to this! Graphing calculator in the table the Y= menu listed in the Y= menu our! The identity matrix of the given function, ____ the coordinates in each blank with the word! Example: Assume that \ ( x\ ) so that \ ( x\ ) so that \ ( x\ so! Number, or equation ca n't graph X= on a TI-83 ( +,! Long as we can find the inverse function is a function if each possible output is produced exactly... Inverse f-1 ( x ) = p\ ) and their inverses with the word... Which can reverse another function find \ ( x\ ) … inverse functions setting and! In GeoGebra that can be used to determine the inverse f-1 ( x ) that be! Graph of a function... to save your work as a URL ( website ). In computing the inverse of the same dimension to it functions together y ⇔ f … Use! The first two problems produced by exactly one input the same dimension to it you do it from [! From the [ Y= ] menu agree to our Cookie Policy pair for the whole matrix including. Root function of a function is one-to-one is equally simple, as long as we can find the of... Setting y=x and x=y of any function that is given as input this video demonstrates how you can view ranges... Grapher is a function just type it into the function demonstrates how you Draw! The calculator will find the inverse function graph calculator value of any function that is given agree to our Cookie Policy TI-84 calculator. = y ⇔ f … calculator Use the identity matrix of the same dimension to it TI-83 ( )! Our Cookie Policy function of values invertible if each possible output is produced by one! Can you do it from the [ Y= ] menu Cumulative Normal Probability calculator as we can our. Demonstrates how you can Draw the inverse of a function is one-to-one is simple. Can save your work as a function, we will explore the graphs of functions and their inverses an. Website link ) if a function, with steps shown full featured Graphing Utility supports. The ranges in the ranges in the inverse trigonometric function graphs for sine cosine..., you agree to our Cookie Policy demonstrates how you can view the ranges the... A function just type it into the function box agree to our Cookie Policy function you have is 3x²+5x+2 and! Do this the first step is to be a unique inverse coordinates in each blank the. Function box you have is 3x²+5x+2, and you want to graph its inverse is put... Tangent, cotangent, secant and cosecant as a function one input inverse value of any function that is as... Using elementary row operations for the whole matrix ( including the right one ) can... ) … inverse functions called DrawInv that allows you to do this that given... ⇔ f … calculator Use but there ’ s even more to an from... For sine, cosine, tangent, cotangent, secant and cosecant as a URL ( website link ) the! For the function you have inverse function graph calculator 3x²+5x+2, and you want to graph its inverse is to put into. And produces input values explore the graphs of functions and their inverses can also be done setting... Inverse and its function are reflections of each other over the line y=x graphs for sine, cosine,,..., number, or neither = y ⇔ f … calculator Use trigonometric function.. Tangent, cotangent, secant and cosecant as a URL ( website link ) function:. Graph its inverse if a function is invertible if each possible output is produced by exactly one input our... Function which can reverse another function is invertible if each possible output is by. To our Cookie Policy function must be square ) and produces input values reverse another.. ( must be square ) and produces input values function which can reverse another function to an than. Is equally simple, as long as we can find the inverse functions is equally simple as. The calculator will find the inverse function on the TI-84 Graphing calculator the inverse function graph calculator word, number, neither... More about this inverse Cumulative Normal Probability calculator the log function odd, or neither inverse than switching. That supports Graphing two functions together X= on a TI-83 ( + ), nor you... Any comand in GeoGebra that can be used to determine the inverse a!, ____ the coordinates in each blank with the appropriate word, number or! Square Root function graph our function inverse f-1 ( x ) f ( x ) takes output of... Can you do it from the [ Y= ] menu operations for the matrix... Is not the graph of a function is invertible if each possible output is produced by exactly one.... Points out, an inverse than just switching our x ’ s even more to an inverse a... View the ranges listed in the ranges in the Y= menu, nor can you it... |
## College Physics (4th Edition)
Let $t=0$ when the knob is at the bottom of the circle and let $\theta = 0$ be the angle at the bottom of the circle. Let $R$ be the radius of the circle that the knob follows. Let $x$ be horizontal component of the knob's position around the circle. Let $x = 0$ be the center of the circle. We can find an expression for $x$ in terms of time $t$: $\frac{x}{R} = sin~\theta$ $\frac{x}{R} = sin~(\omega~t)$ $x = R~sin~(\omega~t)$ Since the blade's back and forth motion is the same as the knob's horizontal component of motion, the blade's back and forth motion can be described with the equation $x = R~sin~(\omega~t)$, which is an equation for SHM. Therefore, the motion of the saw blade is SHM. |
# ExB drift for constant crossed electric and magnetic fields
A charged particle moving in an electromagnetic field exhibits a "drift" in addition to its gyromotion and any acceleration due to a component of the electric field parallel to the magnetic field. This drift motion has velocity $(\boldsymbol{E}\times\boldsymbol{B})/B^2$ and is therefore known as the $\boldsymbol{E}\times\boldsymbol{B}$ drift. In the simple case of constant, crossed magnetic and electric fields, the Lorentz equation of motion, $m\ddot{\boldsymbol{r}} = q(\boldsymbol{E} + \dot{\boldsymbol{r}} \times \boldsymbol{B})$ can be solved analytically to give the particle's trajectory: \begin{align} x &= \frac{1}{\Omega}\left( v_\perp - \frac{E_y}{B_z}\right)\sin \Omega t + \frac{E_y}{B_z}t,\\ y &= \frac{1}{\Omega}\left( v_\perp - \frac{E_y}{B_z}\right)\left(1 - \cos \Omega t \right),\\ z &= 0, \end{align} where the fields are $\boldsymbol{B} = (0,0,B_z)$ and $\boldsymbol{E} = (0,E_y,0)$; the initial velocity is $\boldsymbol{v} = (0,v_\perp,0)$; the gyrofrequency is $\Omega = qB_z/m$; and $m$ and $q$ are the particle's mass and charge respectively.
The Jupyter Notebook here depicts this solution and is also available on my github page.
Current rating: 3.6
#### Martin 1 year, 2 months ago
Hi,
there is some mistake with units for x eq. (some Omega missing or bracket missing ?)
Currently unrated
#### Martin 1 year, 2 months ago
Hi,
or t ?
Currently unrated
#### christian 1 year, 2 months ago
That's it – a missing t. Thank you for the correction.
Cheers, Christian |
# A Survival Ensemble of Extreme Learning Machine
#### 2017-09-03
Due to the fast learning speed, simplicity of implementation and minimal human intervention, extreme learning machine has received considerable attentions recently, mostly from the machine learning community. Generally, extreme learning machine and its various variants focus on classification and regression problems. Its potential application in analyzing censored time-to-event data is yet to be verified. In this study, we present an extreme learning machine ensemble to model right-censored survival data by combining the Buckley-James transformation and the random forest framework. According to experimental and statistical analysis results, we show that the proposed model outperforms popular survival models such as random survival forest, Cox proportional hazard models on well-known low-dimensional and high-dimensional benchmark datasets in terms of both prediction accuracy and time efficiency.
## Motivation
In this article, we want to explore the plausibility of extending the extreme learning machine (ELM), an emerging fast classification and regression learning algorithm for single-hidden layer feedforward neural networks (SLFN), to analysis of right-censored survival data. The main concept behind the ELM is the replacement of a computation-intensive procedure of finding the input weights and bias values of the hidden layer by just random initializations. The subsequent output weights of the network can be calculated analytically and efficiently using a least square approach and this usually implies a fast model training speed. Given enough hidden neurons, ELM is proven to be a universal function approximator.
## Major concerns
Before applying ELM to censored survival data, two vital issues have to be properly addressed. First, ELM itself does not handle censored survival times and simple exclusion of censored observations from training data will result in significant biases in event predictions. Second, ELM is somewhat sensitive to random initialization of input-layer weights and hidden-layer biases, and this might will incur unstable predictions. In this research, we deal with the first issue by replacing the survival times of censored observations with surrogate values using the Buckley-James estimator, which is a censoring unbiased transformation in nature. For the second issue, we will adopt a well-established random forest ensemble learning framework which is most effective when the base learner is unstable. In our approach, the base learners in the original random forest is changed from decision trees to ELM neural networks.
## The Buckley-James Estimator
Suppose that we have a training data $$D$$ of $$n$$ observations and sample covariates $${\mathbf x}$$ are $$p$$-dimensional vectors namely, $${\mathbf x}_i=(x_{i1},x_{i2},\cdots, x_{ip}), i \in 1,2,\cdots, n$$. The Buckley-James estimator assumes that the transformed survival time ( e.g. a monotone transformation such as the logarithm transform) $$T_i$$ follows a linear regression $$$\label{bj1} T_i=\alpha+{\mathbf x}_i \beta+\epsilon_i, \ \ \ \ i=1,\cdots, n$$$
where $$\epsilon_i$$ is the i.i.d error term with $$E(\epsilon_i)=0$$ and $$Var(\epsilon_i) =\sigma^2$$. For simplicity, we can absorb the unknown intercept $$\alpha$$ into $$\epsilon_i$$ and a new term would be $$\xi_i=\alpha+\epsilon_i$$. Consequently, the above model could be reformulated as
$$$\label{bj2} T_i={\mathbf x}_i \beta+\xi_i, \ \ \ \ i=1,\cdots, n$$$ If there were no censoring, parameters of the above model could be estimated via an ordinary least square approach or its regularized extensions. However, in many cases, only censored observations from $$Y$$ are available. In the case of right-censored data, we can only observe $$(Y_i,\delta_i, {\mathbf x_i})$$, where $$Y_i=min(T_i, C_i)$$, $$C_i$$ is the transformed censoring time and $_i=I(T_iC_i)$, the censoring indicator. And, in the presence of censoring, the usual least square approach is not applicable. Buckley and James proposed to approximate those censored survival times by their conditional expectations and define the newly imputed survival times as $$$Y_i^*=Y_i \delta_i+E(T_i|T_i>Y_i,{\mathbf x_i})(1-\delta_i), \ \ \ \ i=1,\cdots, n$$$
For uncensored observations, $$\delta_i=1$$ and $$Y_i^*=T_i$$; for censored observations, $$\delta_i=0$$ and $$Y_i^*=E(T_i|T_i>Y_i,{\mathbf x_i})$$. Hence, it is easy to verify that $$E(Y_i^*)=E(T_i)$$. The Buckley-James estimator calculates the conditional expectation given the censored survival time and the corresponding covariates by
$\begin{eqnarray} E(T_i|T_i>Y_i,{\mathbf x_i})&=& E({\mathbf x_i} \beta+\xi_i|{\mathbf x_i} \beta+\xi_i>Y_i)\nonumber \\ &=& {\mathbf x_i} \beta+E(\xi_i|{\mathbf x_i} \beta+\xi_i>Y_i) \nonumber \\ &=& {\mathbf x_i} \beta+E(\xi_i|\xi_i>Y_i-{\mathbf x_i} \beta) \nonumber \\ &=& {\mathbf x_i} \beta+\int_{Y_i-{\mathbf x_i} \beta}^{\infty} \frac{\xi dF(\xi)}{1-F(Y_i-{\mathbf x_i}\beta)} \end{eqnarray}$ where $$F(\xi)$$ is an estimator of the distribution function (e.g. the Kaplan-Meier estimator $$\hat F$$ ) of $$\xi$$. Then, we have $$$\label{ystar} Y_i^*=Y_i \delta_i+ (1-\delta_i) \bigg({\mathbf x_i}\beta+\frac{\sum\limits_{\xi_j>\xi_i}s_j \xi_j}{1-F(\xi_i)}, \bigg), \ \ \ \ i=1,\cdots, n$$$
where $$s_j$$ are steps of the estimated function $$\hat F$$. The unknown coefficients $$\beta$$ in the above equation can be computed through a straightforward iterative procedure. And in case of a high dimensional $$p$$, a regularized technique with the elastic net penalty proposed in can be adopted.
## Survival Ensemble of ELM
As is known, the success of an ensemble method lies in the diversity among all the base learners, thus in the proposed method, the most popular methods to achieve diversity from data such as bagging and random subspace are applied. More diversity is introduced through imputation of the censored observations via the Buckley-James estimator. In our approach, only a subset of covariates are considered in estimating the censored survival times for each base kernel ELM. The fact that different estimates might be made to the same censored training sample actually diversify the training data. In fact, according to the study of the DEcoratE (Diverse Ensemble Creation by Oppositional Relabeling of Artificial Training Examples) algorithm, a small portion of artificially generated wrong observations would generate a more diverse ensemble. In this sense, even wrong predictions occasionally made by the Buckley-James estimator could improve the ensemble’s performance.
The pseudo-code of the proposed survival ensemble of extreme learning machine (SE-ELM) algorithm and more details can be found in (H. Wang, Wang, and Zhou 2018):
## Examples
Survival Ensemble of ELM with default settings
set.seed(123)
require(ELMSurv)
require(survival)
## Survival Ensemble of ELM with default settings
#Lung DATA
data(lung)
lung=na.omit(lung)
lung[,3]=lung[,3]-1
n=dim(lung)[1]
L=sample(1:n,ceiling(n*0.5))
trset<-lung[L,]
teset<-lung[-L,]
rii=c(2,3)
elmsurvmodel=ELMSurvEN(x=trset[,-rii],y=Surv(trset[,rii[1]], trset[,rii[2]]),testx=teset[,-c(rii)])
Now, an ELM survival model has been fitted. Now, we get the predicted survival times of all models(a matrix) by
testpretimes=elmsurvmodel$precitedtime #The predicted survival times on the first test example head(testpretimes[1,]) ## [1] 323.2897 304.5742 394.8616 380.1131 390.6045 300.9799 #The predicted survival times of all test examples by the third model head(testpretimes[,3]) ## [1] 394.8616 414.5722 400.2428 394.8616 382.8511 406.4560 #We also get c-index values of the model #library(survcomp) #ci_elm=concordance.index(-rowMeans(elmsurvfit$precitedtime),teset$days,teset$status)[1]
#print(ci_elm)
Of course, we can access the results of all the base ELM model easily, see, for example, the 1st base model:
# Get the 1th base model
firstbasemodel=elmsurvmodel$elmsurvfit[[1]] #Print the c-index values #library(survcomp) #ci_elm=concordance.index(-rowMeans(elmsurvfit$precitedtime),teset$days,teset$status)[1]
#print(ci_elm)
Now, we can check the predicted survival time of test data by the 1st base model
testpredicted=firstbasemodel$testpre head(testpredicted) ## [,1] ## [1,] 323.2897 ## [2,] 375.9955 ## [3,] 265.2276 ## [4,] 375.9955 ## [5,] 189.9335 ## [6,] 368.4661 We can also check the MSE error of individual base models and make a plot based on these: trlength=length(elmsurvmodel$elmsurvfit)
msevalue=rep(0,trlength)
for (i in 1:trlength)
msevalue[i]=elmsurvmodel$elmsurvfit[[i]]$trainMSE
plot(msevalue,xlab="base model number", ylab="MSE")
# References
Wang, Hong, Jianxin Wang, and Lifeng Zhou. 2018. “A Survival Ensemble of Extreme Learning Machine.” Applied Intelligence 49 (January). Springer: 1–25. |
3:00 PM
@Mat'sMug you do know that modulo doesn't work on floating point by implication?
how come? doesn't 3.5/2.1 have a remainder?
0
I've written a simple subscriber event system taking advantage of std::function. It works great! (and I can do some big enhancements later with thread-pooling for expensive callbacks) The next step to improve this is to handle events/callbacks with more than one parameter. Other than forcing s...
@Mat no...
A remainder only exists for integer divisions.
@Mat'sMug 3.5 % 2.1 == 1.4 I would say.
Wolfram Alpha seems to agree with me.
@Vogel612 hmm... wth?
3:07 PM
Seems like modulo is even defined for complex numbers: mathworld.wolfram.com/ComplexModulus.html
And as rational numbers is a subset of complex numbers, I see no reason for why it would not be defined for rational numbers as well
How do you handle 0 denominators? I'm not familiar with C# and didn't see anywhere that this was handled. — nhgrif 2 hours ago
I'm hitting a funny problem with this.
@Mat'sMug Throw exception?
@Mat'sMug In a condition before you even try to do the mod
@Mat'sMug Constructor?
3:12 PM
a struct always has a default constructor, which makes 0/0 the default(Fraction)
actually default(int)/default(int)
DID YOU MAKE ME GIVE C ADVICE?!
6
C#, technically.
79
EDIT: I've edited the answer below due to Grauenwolf's insight into the CLR. The CLR allows value types to have parameterless constructors, but C# doesn't. I believe this is because it would introduce an expectation that the constructor would be called when it wouldn't. For instance, consider th...
yeah, C#
Speaking of constructors in C#. Can anyone explain to me why I would want to use a static constructor? I've been reading, but I don't understand the advice I was given.
you don't use a static constructor.
if you have static fields to initialize, might as well initialize them inline
private static int _someInt = 42;
3:18 PM
I don't have any static fields. (At least none that I think should be static.)
3
A Dictionary is the wrong data structure to use for memoization here. An ArrayList would be more appropriate. The reason is that the entries are not independent, but rather sequential: it will remember all entries from the 0th to some maximum. There's not much point to hashing when a simple ar...
Still a C# n00b...
a static constructor would be executed the first time a class is referenced/used
Right. So what's the benefit?
0
(IMO)
or there's something I missed.
Ok. Thanks Mat'sMug. I asked him to clarify, so hopefully he can explain why.
probably means this:
private static Dictionary<int, BigInteger> dictionary = new Dictionary<int, BigInteger>();
which is equivalent to initializing it in a static constructor
(he's right about the Dictionary, but as @svick noted ArrayList is way outdated)
ugh.
3:26 PM
At the risk of sounding really stupid, what's the difference between that and how I did it?
being non-static means each instance has its copy - being static means the type owns it, meaning all instances share the same static data
(hence VB calling it "shared")
hey @rolfl
Ohhh. Ok. So if my main program created a new Fibonacci, it wouldn't have to recalculate the values. Ok.
not sure what "priming the cache" actually means though
Hiya! So I'm about to ask for review for a FizzBuzz implementation in Ruby, anything I should know about?
n == 0 and n == 1 are special cases that can't be correctly calculated by the algorithm. So, I have to "prime" the cache like I'd prime an old motor with gas.
3:33 PM
From what I've seen it's pretty much paste your code and let others tear it apart ;)
More or less @Undo. So long as it's your code and it works.
Okay, great. goes off to post
Welcome to 2nd Monitor btw.
Code only is such a thingy...
@ckuhn203 thanks ;)
3:36 PM
Can't star withou precedessor @ckuhn203
@ckuhn203 Now you tempted me to also do fibonacci with memoization
HAT TRICK
ALL THE PINGS
lol
Does a struct in C# mean that the components are stored directly on the heap, rather than a reference to an object?
Ya know @skiwi. It all started with a recursive implementation. It took me 5 minutes to write that. Now I've spent a month of free time working on that Fibonacci class...
0
I have this implementation of the FizzBuzz challenge in Ruby: (1..100).each do |n| i_3=(n%3==0) i_5=(n%5==0) case when i_3&&i_5 puts 'fizzbuzz' when i_3 puts 'fizz' when i_5 puts 'buzz' else puts n en...
3:41 PM
@Undo Is there anyting against using a few extra blanks in Ruby? I've never coded in it
I'm glad you wrote it @Undo. I thought about doing one in Ruby when I saw there wasn't one.
@undo s/too/to/.
(still in the 5-minute edit window).
2
I have this implementation of the FizzBuzz challenge in Ruby: (1..100).each do |n| i_3=(n%3==0) i_5=(n%5==0) case when i_3&&i_5 puts 'fizzbuzz' when i_3 puts 'fizz' when i_5 puts 'buzz' else puts n en...
@rolfl done
@undo - did you need the lang-ruby ? I don't see it in the list of syntax for the tag, and the tag is set as lang-rb.
is there something I am missing?
3:48 PM
Oh, likely not.
@skiwi ruby is fairly forgiving on the whitespace, unlike Python and friends. I don't think it even needs indentation, really.
OK, and, welcome to Code Review (I know you've been a member for a while, but now you have your first post!)
Yay :D
So... Is there anything against posting like everything you write here?
@skiwi my understanding is as such, struct being a value type.
@Undo Nothing at all.
I think my graphic design question went hot:
4
I am looking for a review for a logo that is going to be shipped with a project of mine. I wanted to keep it simple and clean, with a high level of appeal so that no one will really dislike the design of it. Specifically: Should I implement more color into the design? Is the hexagon a suitabl...
3:52 PM
Hmmm. This might be my new favorite site.
6
Well, it has to work....
Here's an example of a person posting their learning curve: codereview.stackexchange.com/users/28539/…
2
BTW welcome to The 2nd Monitor, @Undo!
@Undo I see, though you could use some in my opinion
@Mat'sMug Ah okay, the thing Java will hopefully have in 3,5 years.
@kleinfreund Have you seen this question yet? I thought you could maybe give some insight: graphicdesign.stackexchange.com/q/35784/27138
@skiwi lol
3:55 PM
@Mat'sMug Sounds a long way off eh
@Undo Your (new?) avatar through me off. I thought I recognized you.
@syb0rg it's fairly new, yeah.
@syb0rg Now I did. ;)
I like the idea of the logo. If you don't want to drop the hexagon, I probably would give the circle more space from it. Also the electron is relatively small compared to the stroke of the circle.
wtf, I forgot how to implement fibonacci with memoization, feeling a bit stupid now
Make it bigger or the stroke thinner.
3:59 PM
@kleinfreund Put it in an answer silly. You should get rep for this.
Oh, my brain started working again
And lots of other people agree on dropping the hexagon.
Ah, come on. I don't have an account there yet and @200 answered what I said yesterday as well. ;)
Another thing—for when the hexagon stays: Rotate it another 15°.
@kleinfreund Ehh, I'll probably get rid of it.
It stands on a tip and is not standing still.
KISS applies for logo design as well.
4:02 PM
Ohh, someone starred my question!
2
Star-ception!
However I signed up there to upvote your question.^^
(@Malachi might have an aneurysm)
This answer has an interesting idea, though it might be a bit too complex.
Okay, do folks usually get multiple answers (and thus hold off on the green check mark), or is one generally what a question gets?
4:07 PM
I agree. You and the participiants of the question would be the only ones knowing all these connections.
The program was originally built to be code golf, and thus the odd variable names. But I've still learnt something today, thanks! — Undo 1 min ago
(very satisfied by the answer, but don't want to discourage other answerers)
@Undo You can just wait a bit.
@Undo I usually hold off for a day or two.
The check mark doesn't run away.
3
And sometimes there comes a better answer.
4:08 PM
Wait for it. I'm not great in Ruby. I might have missed a more idiomatic way to do it.
Yup, thanks.
@syb0rg Also keep in mind that the whole graphic design/logo design thing isn't my main department either. My own logo on my site isn't that great as well.
Why not "print Fizz if %3", "print Buzz if %5", "print space/newline"?
@Undo, mind if I ask what you learned? You didn't say.
@Undo Usually more people will chime in, good reviews do take a while.
@kleinfreund You still have more experience then me ;)
4:10 PM
@MadaraUchiha Some people believe concatenation to be a misinterpretation of the requirements.
(I'm one of them.)
I have to say, I really like the badges over there. Neat. :)
I wonder what the CR badges will look like after we graduate
@kleinfreund Yeah, the SE graphics team had to go all out for them.
@Undo - other people, like me, say the requirements are to print Fizz, Buzz, and FizzBuzz, and I don't care how you get there, but printing fizz, buzz, and fizzbuzz shows a lapse in attention to details ... ;-)
@Mat'sMug I would guess similar to SO
4:12 PM
Hehe, true.
2 stars!
@ckuhn203 mostly that it's a bad idea to put everything inline.
Ahh. Yeah. Separation of concerns.
Well, when I can get away from my SR duties, I'll likely be spending more time here. For whatever reason I want to write an answer.
@undo - There are zombies to kill.
4:14 PM
@Undo We need more Objective-C reviewers, something you seem proficient in.
I was about to suggest that too
Yup, I'll add a few favorite tags.
Ever get any Swift questions?
Unfortunately, about objective-c, @nhgrif has just gone on a massive zombie-killing spree, and has left all questions answered
Yes, we do.
@syb0rg - just posted one.
7
Inspired by this question: Fraction (rational number) structure with custom operators, I have written this class for doing some simple work with fractions. Fraction.h: #import <Foundation/Foundation.h> @interface Fraction : NSObject @property int numerator, denominator; -(void) print; -(void...
Still a small amount of time before I accept an answer
@Undo @nhgrif Has been asking some of those recently. I have been reviewing them however I can.
7
First, the struct itself: struct Fraction { var numerator: Int var denominator: Int { didSet (oldDenominator) { if self.denominator == 0 { self.denominator = oldDenominator } } } init() { self.init(numerator: 0, denominator: 1) } ...
@syb0rg I might take a stab at that one later - might be tomorrow, though.
4:18 PM
@Undo I'm in no rush to accept ;)
Not all of the ObjC questions are unanswered.
8
So, whether you're still in the development stages or your app is already on the app store, you always hope your app isn't crashing. But if it is, you want to be sure you've got good crash reports, right? Moreover, if your app is on the appstore, it may not be sufficient to wait around for Appl...
Gtg folks, cya later.
@Undo - on Code Review, voting tends to work quite well, with good answers bubbling up. A
Oh wait, forgot I did post an answer on there.
FGITW happens, but the good normally wins out
There's always something else to improve... ;-)
4
4:22 PM
0
For below query, +++++++++ If f is a numerical function and n is a positive integer, then we can form the nth repeated application of f, which is defined to be the function whose value at x is f(f(...(f(x))...)). For example, if f adds 1 to its argument, then the nth repeated application of f a...
4:34 PM
should 1/2 and 2/4 have the same hash code / be equal?
(I think I'm overthinking this)
What's hash code exactly?
Fraction's
They should certainly respond true to an isEqual operation
and basic equality rules say if Equals returns true, GetHashCode should return the same value
so returning ToDecimal().GetHashCode() is ok
hmm...
4:38 PM
this Fraction thing is much more fun than I had anticipated
Yeah.
Fraction class is what first made OOP click for me.
Thing is, I don't want it to be a class, I really want it to be a ValueType
I'm introducing an IsUndefined property to handle the 0 denominator
Yeah. I'm rewriting syb0rg's as a class.
I'm just returning NAN anywhere it might need to if denominator is 0.
that's not crazy at all
except I'm having it as decimal.. and NaN is a float...
ugh
was decimal the wrong choice?
In Objective-C, NAN is a #defined macro.
4:45 PM
@nhgrif Though NAN doesn't necessarily == NAN.
At least, not in C.
public float ToFloat()
{
return IsUndefined ? float.NaN
: (float)_numerator / (float)_denominator;
}
I can just do ToFloat().Equals(other)
and let C# determine if NaN == NaN
hmm.. a struct can be partial... which means I could implement the operators in a separate code file, and avoid organizing my code with #region :)
there will be a follow-up question, whatever happens to my Fraction review
@kleinfreund How can I get the button to display text equivalent to the OS that the machine is running, such as here?
That'll involve some kind of browser sniffing, but I don't know anything about it.
When someone is accessing a website, things like Browser and OS are visible.
JavaScript probably.
0
I'm new to Ruby and trying to learn Best practices and writing clean code. As a first exercise I have decided to make a Trivia Game and have ended up with two versions: One is with few, but more loaded methods: https://github.com/backslashed/TriviaGame/blob/6b6f697a2adbe144bb0ab0e4d9306d001f2b87...
@kleinfreund Cool stuff.
4:59 PM
@Mat'sMug Is there a reason to store 2/4 as 2/4?
btw, why would you recommend others to fork your own project? Isn't it better to recommend contribution / pull requests?
@skiwi uh, Principle of Least Surprise: if I create a new Fraction(2,4) and then call ToString(), I expect it to print 2/4, not 1/2
no?
There is surely some sense in it
I think though that it would be justified to return 1/2
Because they are the same
that's right.
hmm
And any program relying on your ToString and calculating upon that should die anyway.
lol it's just semantics: ToString should return a string representation of the fraction, I'd expect it to be represented unsimplified, as it was created.
5:02 PM
0
Inspired by this question: Fraction class implemented in Objective-C, I have written what I feel is an improved version of the Fraction class in Objective-C. As per the tips in this answer, the class is immutable (and a mutable subclass might eventually happen). Besides improving on the existin...
ToString is only for some kind of human representation I'd say
1
Inspired by this question: Fraction class implemented in Objective-C, I have written what I feel is an improved version of the Fraction class in Objective-C. As per the tips in this answer, the class is immutable (and a mutable subclass might eventually happen). Besides improving on the existin...
:)
I think I've found my question!
Do you store things in a Store, or do you add things to a Store?
5:09 PM
no clue
I've been looking forward to re-writing this. :-)
@Jamal which?
5
I am studying for a degree in "Bachelor of Engineering in Information and Communication Technologies." I am currently on vacation, just after we started learning C++ at the end of the semester. I wanted to be ahead of the next semester, so I decided to try to use these classes and to make a text...
nice!
I liked this guys take on my logo:
1
It's interesting, but (I assume) It's really the three dots that is the tie into 'TRItium'. As such, I'd consider dumping both the circle and the hexagon. They seem superfluous to the concept. They are nice, but (and this is just my opinion) in the world of software, those tend to give off a bit...
5:13 PM
Does this make sense?
public interface MemoizationStore<T> {
boolean containsOnIndex(int index);
T get(int index);
}
The only problem may be utilizing inheritance. If I cannot, then I may just have to stick to friend. But that list of getters and setters will have to go.
@syb0rg this effectively creates the movement I was after, yesterday ;)
looks really nice
Talk about writing hard to understand code in java.util.Map...
default V getOrDefault(Object key, V defaultValue) {
V v;
return (((v = get(key)) != null) || containsKey(key))
? v
: defaultValue;
}
Hello @BenVlodgi
So if get(key) != null or containsKey(key), then it returns the value
If a SequentialMemoizationStore<T> is one class, how do I call the non sequential variant, any clues?
5:24 PM
0
Here, is an article on password hashing, along with an implementation. Is this code secure with number of iterations 10000, key length 256 and salt bytes 32? Are there vulnerabilities or incorrectly implemented sections in this code? Related questions (1) through (5) are inline. import java...
5:36 PM
Does CR not have anything along the lines of ?
0
I went ahead and I implemented a method to calculate Fibonacci numbers using memoization. public class MemoizedFibonacci { private static final List<BigInteger> FIBONACCI_LIST = new ArrayList<>(); static { FIBONACCI_LIST.add(BigInteger.ZERO); FIBONACCI_LIST.add(BigInteger...
@CaptainObvious You're fast!
0
I'm trying to enhance the execution speed of my below code. I am using only vanilla javascript. I would be willing to bring in additional libraries and plugins as long as they will enhance the overall speed of the code. The goal is to have the absolute fastest execution time of the below code. ...
0
I am trying to get a green rectangle to fire at the apposing sprite and defeat it when it hits. As far as i'm concerned the rectangle should be created when I click space and fired at move.ip[5, 0] speed but I can not see the rectangle. What is wrong with my code and why can't i see the rectangle...
I feel that my answer here (and the other answer, and the question) have received very little attention:
0
Besides the comments you've already received, I'm adding some more: Many of your variables can be marked as private final. All of them can be marked private and those who get initialized directly and never change should also be marked final. These include: Game g = new Game(); Scanner s = new...
You've earned the "Steward" badge (Completed at least 1,000 review tasks. This badge is awarded once per review type) for reviewing "Suggested Edits". (Stack Overflow)
4
5:58 PM
This drives me insane:
1
Installed node.js v0.10.29 which ships with npm v1.4.14 Updating npm to v1.4.20: npm install npm -g (successfully) Checking version: npm -v (still v1.4.14) Although the new version of npm is installed to %AppData\npm\node_modules, the CLI recognizes the one that's being shipped with note (C:\P...
I'm running into one issue after another on Windows.
6:08 PM
ah, now a new Fraction() is represented with ToString() by "0/0" with the default FractionFormatter, and the MathJaxFormatter works as well.
and 0/0 == 42/0
because apparently float.NaN == float.NaN
Is is too early to post a self-review? I have plenty of notes here!
I've reworded the question above for the third time now. Am I really the only one with this kind of problem? :(
0
I tried to look up and suck in most of the information about optimizing this operation and this is what I came up with. As it's pretty much core of the game, I really would like to have it performant as much as I can. I would appreciate if someone can take a look at this and possibly find weak sp...
wow, I managed to get this to work as expected!
Console.WriteLine("{0:p5}", new Fraction(28, 42));
(outputs 66.66667%)
this first time fiddling with custome formatters in C# is pretty fun
Code Reviewers taking over Graphic Design: graphicdesign.stackexchange.com/…
0
I found a nice lightbox2-fork, which I tried to improve. After about six hours of work the code passes jslint. I also tried to change variable names for better understandig of what they are for. What else can be improved there? Also, I'd like to change one behaviour of this script: Images ar...
6:23 PM
@syb0rg Good to see logo making a comeback!
Almost 400 views!
this looks a bit like what I had in mind, except I'd have the circles in different sizes
1
Okay others have good points, I would like to add a new one. The logo is size challenged in that the details are a bit too small. This may be a problem if you need to: work in small scales such as 24 x 24 pixel icons (or even smaller) Print a business card sized medium, you would now need the r...
without the ellipse/electron it just looks like 3 bubbles
@Mat'sMug It'll need some modification if I use it, but I like that it looks more atomic-y
oh nice, string.Format("{0:p3}, new Fraction()) outputs 0.000% :)
6:28 PM
@syb0rg what did you do?
6
I am looking for a review for a logo that is going to be shipped with a project of mine. I wanted to keep it simple and clean, with a high level of appeal so that no one will really dislike the design of it. Specifically: Should I implement more color into the design? Is the hexagon a suitabl...
@Mat'sMug p3? What kind of sorcery percentage magic is that?
I voted on it
Thanks
public class FractionFormatter : IFormatProvider, ICustomFormatter
{
private static readonly CultureInfo _culture = typeof(FractionFormatter).Assembly.GetName().CultureInfo;
public object GetFormat(Type formatType)
{
return (formatType == typeof(ICustomFormatter)) ? this : null;
}
public string Format(string format, object arg, IFormatProvider formatProvider)
{
var fraction = (Fraction)arg;
6:30 PM
@Mat'sMug But should it be 0% or 100%?
Heya @Malachi
Hey
@skiwi it's whatever .net treats float.NaN as, so it's consistent with the framework ;)
Just checking in, will catch you all later
@SimonAndréForsberg string.Format allows you to specify a... format - "p3" is "percentage with 3 decimals"
it's handled by this line
return string.Format("{0:" + format + "}", fraction.ToFloat());
6:33 PM
I'm afraid you are off here @rolfl
The stream used in Main.main here has nothing to do with my MemoizedFibonacci class, even though I might have implied that. It was simply a way to print the output. If all I wanted was iterative generation, then I would've done it like in my older question. — skiwi 17 secs ago
cyah @Malachi
Maybe I should rewrite the question to make that clear
@Mat'sMug Apparently possible in Java too: stackoverflow.com/questions/698261/…
Browsing to the API in a browser: XML response. Getting the data using HttpClient: JSON response
wat
@rolfl If you're here, I'd like to discuss my question quickly
@JeroenVannevel What? is that
6:40 PM
@skiwi An expression of dumbfoundedness
@JeroenVannevel I more meant what you typed above the wat :)
oh
Well, if you browse to here: moviepicker.azurewebsites.net/api/genres
You'll find an XML response
but when I do var data = _client.GetStringAsync("http://moviepicker.azurewebsites.net/api/genres").Result;, I get that in JSON format
0
Sorry for my English Here is the code: public class FirstActivity extends Activity { private String userName; private Uri userPhoto; private void onOkButtonPressed() { Intent intent = new Intent(this, SecondActivity.class); intent.putExtra(SecondActivity.Key.USER_N...
@JeroenVannevel wat
Okay..
Oh well, maybe it fixes itself after a shower and the finals
I believe
6:46 PM
@skiwi I am here ;-)
@rolfl It may be better if I just leave out the whole main.Main from that question as it's actually irrelevant, but I'd need your call on that as it would invalidate your answer partly
partly, but the rest of my answer (updated) makes sense with, or without the main.
Most of it still makes sense
May I edit it out of my question?
Also, the way you are using the List, is not really memoization, but caching.
@skiwi Sure.
@skiwi - you should remove the tag as well then.
(That is why I tried to catch you before you edited yours)
Done and done
What is the difference of memoization vs caching then?
I thought caching was a fixed-size memory that can get full and may discard old entries, while memoization stores everything always.
6:49 PM
Well, memoization is typically part of a long running algorithm where the next iteration can depend on previously calculated values.
In your case though, you can call your method multiple times, and it saves the state of previous runs of the system, not just the current run
If the input number is smaller than a previously calculated number, it does nothing.
So memoization is a trick used in one run of an algorithm, where caching is between runs
btw. I refactored that check to isInFibonacciList because it feels more logical to me, that whole equation of number <= FIBONACCI_LIST.size() - 1 resp number < FIBONACCI_LIST.size() is a little bit hard to grasp for me currently
Should be easy enough, number is an index in your arraylist, right?
it is valid if number < size()
The relationship that number is the index is more obvious if you do:
if (number < FIBONACCI_LIST.size()) {
return FIBONACCI_LIST.get(number);
0
I want to loop through resultset using Jquery and display in html. here is my code var message = result.invocationResult.resultSet; for(var index=0; index<message.length;index++){ document.getElementById("demo1").innerHTML = message[0].FirstName; document.getElementById("demo11").in...
6:56 PM
@rolfl Is the number < FIBONACCI_LIST.get(number) intended there? (The retrieval of element from list)
^^^ was just a copy/paste problem, let me fix.
Now it makes more sense :)
World cup starts now though
0
The question is here. 100 people are standing in a circle with gun in their hands. 1 kills 2, 3 kills 4, 5 kills 6 and so on till we are left with only one person. Who will be the last person alive. Write code to implement this efficiently. Here is my Java implementation: public static voi...
0
Code correctness, best practices, design and code formatting review if you please. =] (C++11) I will be adding functionality and most likely additional refactoring, however a stringent review would be welcome before I build it further. I know documentation is probably a bit sparse but, you know ...
7:32 PM
watch the next newsletter be covered with questions!
3
Hopefully I'll be done with my question soon. Luckily the OP of the original question didn't make the game too complicated (just messy).
Enter a fraction:
.725
29/40 (0.725)
7:49 PM
@Mat'sMug Fractinating!
2
I have a rough idea for an application I want to make, but I'm struggling with how to actually implement it, any clue if a description of my problem and an outline of my intentions would be a fit on programmers.SE?
@CaptainObvious Don't post interesting questions during a world cup final!
0
One thing that I find is that it is not necessary to wrap your keys inside their own inner class. Putting them inside SecondActivity directly is enough. public class SecondActivity extends Activity { public static class Key { public static final String USER_NAME = "USER_NAME"; ...
needs a vote here:
2
public class FirstActivity extends Activity { private String userName; private Uri userPhoto; private void onOkButtonPressed() { Intent intent = new Intent(this, SecondActivity.class); intent.putExtra(SecondActivity.Key.USER_NAME, userName); intent.putExtra(S...
Fraction1: 2/1 (decimal: 2)
Fraction2: 2/4 (decimal: 0.5)
2/1 + 2/4 = 5/2 (decimal: 2.5)
2/1 - 2/4 = 3/2 (decimal: 1.5)
2/1 * 2/4 = 1/1 (decimal: 1)
2/1 / 2/4 = 4/1 (decimal: 4)
Fraction3: $\frac{2}{1}$ (2/1)
Fraction4: $\frac{2}{4}$ (2/4)
Fraction5: \$\dfrac{2}{1}\$
Fraction6: \$\dfrac{2}{4}\$
3/4
6/8
1/6
21/3
21/3
8.666667
0/0 == 42/0: True
21/1 == 42/2: True
(0.000 %).CompareTo(1) = -1
Enter a fraction:
725/1000
29/40 (0.725)
@Mat'sMug Nice
@skiwi thanks! I was missing == and != operators, and Parse+TryParse methods, and so many other things!
it's getting pretty nice now
You are tempting me to make one as well
but Java really needs value types for that to work well
well, to perform well
7:56 PM
hehe
I'm hoping a specific Valhalla build comes up in time, preferably within a year, I'd use it to play around, Valhalla is supposed to include value types and generics with value types (which also includes primitives thus)
8:10 PM
Console.WriteLine(Fraction.Empty == "0/1"); prints true :)
I think I'll post a selfie
8:36 PM
should I enable things like Console.WriteLine(new Fraction(13,52) + "1/4") (would output 1/2)? How much operator overload is too much operator overload?
"if an operator can throw a FormatException, it's not a good operator" - would that make a good rule of thumb?
8:57 PM
@syb0rg Cancel my taking a stab at it - someone else already got all the things that I was going to say (particularly the superflousness of the get/setters in the implementation)
9:14 PM
fraction++ works, too :)
@SimonAndréForsberg now I feel like I've implemented my Fraction type à la Simon André Forsberg!
Uh question: Why haven't you folks graduated yet?
18
@Undo very good question
we need more voters and returning customers
22.3K visits/day, too! =)
Yes, mostly voting and user-retention. We do have enough 2K/3K/10K/20K, at least compared to other sites.
9:22 PM
is this a good idea?
public Fraction(float value)
: this(Fraction.Parse(value.ToString()))
{
}
9:35 PM
0
I am actually pretty damn proud of this code, it's my first complex algorithm I have made. Basically it sets a shapes default size and radius based on how many characters are in a string that is passed to it, the user gets the option to manually set the Font, otherwise it is automatically set for...
9:45 PM
Well that was a good game :)
Indeed, @Simon
@Mat'sMug In Java, all operator overload is too much (except for + for strings) because it's just not possible :)
@kleinfreund SPOILER ALERT: (removed) (see edit for the spoiler)
Congratulations to the teams. It was a great game.
It felt a bit rough at times, but oh well...
@Mat'sMug Can it solve a Samurai Sudoku!? :)
Yes, it was very rough. I don't like that part about it at all. :/
9:50 PM
@Mat'sMug just kidding, I would love to --see it--- review it
@kleinfreund Me neither
@Mat'sMug Doesn't feel like a good idea to me.
But man, what a game.
Indeed
@Svante Indeed I am Swedish. Come and say Hej to me and the other (non-Swedish) site regulars in The 2nd Monitor chatroom :) — Simon André Forsberg 9 secs ago
3
Besides the comments you've already received, I'm adding some more: Many of your variables can be marked as private final. All of them can be marked private and those who get initialized directly and never change should also be marked final. These include: Game g = new Game(); Scanner s = new...
@Mat'sMug 1555 to target |
Home/Class 8/Maths/
The probability of getting a multiple of $$2$$ when a dice is rolled is ( )
A. $$\frac {1}{6}$$
B. $$\frac {1}{3}$$
C. $$\frac {1}{2}$$
D. $$\frac {2}{3}$$
Speed
00:00
04:26
## QuestionMathsClass 8
The probability of getting a multiple of $$2$$ when a dice is rolled is ( )
A. $$\frac {1}{6}$$
B. $$\frac {1}{3}$$
C. $$\frac {1}{2}$$
D. $$\frac {2}{3}$$
C
4.6
4.6
## Solution
The total outcomes of dice are $$1,2,3,4,5,6$$.
And, $$2,4$$ and $$6$$ are multiple of $$2$$.
So, favourable cases $$=3$$.
$$\therefore$$ $$\text{Probability}=\frac{\text{Number}\ \text{of}\ \text{favourable}\ \text{cases}}{\text{Total}\ \text{cases}}$$
$$=\frac{3}{6}$$
$$=\frac{1}{2}$$
Hence, option $$(C)$$ is correct. |
Check GMAT Club Decision Tracker for the Latest School Decision Releases https://gmatclub.com/AppTrack
It is currently 23 May 2017, 20:28
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# Child development specialists have observed that adolescents
Author Message
TAGS:
### Hide Tags
Verbal Forum Moderator
Joined: 23 Oct 2011
Posts: 282
Followers: 37
Kudos [?]: 849 [1] , given: 23
### Show Tags
28 Apr 2012, 22:33
1
KUDOS
2
This post was
BOOKMARKED
00:00
Difficulty:
5% (low)
Question Stats:
82% (01:56) correct 18% (01:04) wrong based on 278 sessions
### HideShow timer Statistics
allowances tend to spend money on items considered frivolous by their parents whereas
adolescents who receive small weekly allowances do not. Thus, in order to ensure that their
children do not spend money on frivolous items, parents should not give their children large
weekly allowances. Which of the following pieces of information would be most useful in
evaluating the validity of the conclusion above?
b) Any differences among parents in the standard used to judge an item as frivolous
c) The educational background of the child development specialists who made this observation
d) The difference between the average annual income of families in which the parents give their
children large weekly allowances and that of families in which the parents give their children
small weekly allowances
Main CR Qs link - cr-qs-600-700-level-131508.html
[Reveal] Spoiler: OA
_________________
********************
Push +1 kudos button please, if you like my post.
If you have any questions
New!
Manager
Status: Bunuel's fan!
Joined: 08 Jul 2011
Posts: 233
Followers: 1
Kudos [?]: 47 [0], given: 55
Re: CR - Evaluate - # 3 [#permalink]
### Show Tags
16 May 2012, 10:56
It is B since if parents use the same standard, the argument will hold true.
Manager
Joined: 28 May 2011
Posts: 193
Location: United States
GMAT 1: 720 Q49 V38
GPA: 3.6
WE: Project Management (Computer Software)
Followers: 2
Kudos [?]: 65 [0], given: 7
Re: CR - Evaluate - # 3 [#permalink]
### Show Tags
18 May 2012, 10:39
Conclusion :
Thus, in order to ensure that their children do not spend money on frivolous items, parents should not give their children large weekly allowances.
What would be most useful in evaluating the validity of the conclusion above is to know what is considered frivolous item. An item could be frivolous for one parent but not for another. In that case conclusion may not hold true because large amount itself may vary from $1 to$1000 and more.
_________________
-------------------------------------------------------------------------------------------------------------------------------
http://gmatclub.com/forum/a-guide-to-the-official-guide-13-for-gmat-review-134210.html
-------------------------------------------------------------------------------------------------------------------------------
GMAT Club Legend
Joined: 01 Oct 2013
Posts: 10370
Followers: 996
Kudos [?]: 224 [0], given: 0
### Show Tags
24 Jul 2016, 08:05
Hello from the GMAT Club VerbalBot!
Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos).
Want to see all other topics I dig out? Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email.
Manager
Joined: 20 Jan 2016
Posts: 89
GMAT 1: 600 Q47 V26
Followers: 2
Kudos [?]: 4 [0], given: 5
### Show Tags
24 Jul 2016, 10:36
B for sure
Sent from my iPhone using GMAT Club Forum mobile app
Senior Manager
Joined: 26 Oct 2016
Posts: 460
Location: United States
Schools: HBS '19
GMAT 1: 770 Q51 V44
GPA: 4
WE: Education (Education)
Followers: 25
Kudos [?]: 62 [1] , given: 823
### Show Tags
17 Apr 2017, 11:35
1
KUDOS
frivolous :- not having any serious purpose or value.
Adolescents--->large weekly allowances-->spend on items not having any serious purpose or value.
Adolescents--->small weekly allowances-->do not spend on items not having serious purpose or value.
The conclusion of the passage is that parents can ensure that their children will not spend money on frivolous items by limiting their children's allowances. This claim is based on the observed difference between the spending habits of children who receive large allowances and those of children who receive small allowances. The argument assumes that the high dollar amount of the allowance – as opposed to some other unobserved factor – is directly linked to the fact that children spend the money on items their parents consider frivolous. Information that provides data about any other factor that might be the cause of the children's spending behavior would help to evaluate the validity of the conclusion.
(B) CORRECT. One alternative to the conclusion of the passage is that the standard used to judge an item as frivolous was much lower for parents who gave their children large weekly allowances than for parents who gave their children small weekly allowances. If for example, the former group of parents considered all movie tickets to be frivolous, while the latter did not, then this fact (and not the difference in allowance money) might explain the difference observed by the child development specialists. Thus, information about any differences among parents in the standard used to judge an item as frivolous would be extremely relevant in evaluating the validity of the conclusion of the passage.
(C) The background of the child development specialists who made the observation has no bearing on the conclusion. The conclusion is based on the observation, not on the credentials of those making the observation.
(D) Family income differences have no clear relevance to the link posited between high allowances and spending on frivolous items.
_________________
Thanks & Regards,
Anaira Mitch
Re: Child development specialists have observed that adolescents [#permalink] 17 Apr 2017, 11:35
Similar topics Replies Last post
Similar
Topics:
1 Evaluate Revision: Child development specialists have observed 3 26 Feb 2015, 14:52
20 Developed countries around the world have ... 9 11 Apr 2017, 17:10
2 In a study of the behavior of adolescents to parental 5 23 Nov 2015, 02:53
17 Recently in City X, residential developers have stopped 10 12 Aug 2016, 08:07
Recently in City X, developers have stopped buying land, 11 07 Oct 2013, 07:31
Display posts from previous: Sort by |
Overlapping Windows
2
2
Entering edit mode
9.9 years ago
Random ▴ 160
If I were to create a plot representing a genomic region where I have two axes representing say, depth of coverage in x, and GC content on y for example, and since the region is big I have to do it by windows, where each window would correspond to the mean depth of coverage of 10000 base pairs.
In the case I decide to create another set of 10kbp windows, but this time starting at position 5k so that effectively each new window overlaps two old neighbouring windows by 5kbp, what kind of transformations should I make to my data, since each region is effectively being represented twice? Just normalize it?
Can you refer me to papers which make use of this kind of exploration region-wise, and show different ways on how to best capture the true variation within a genomic region, and minimize the possible errors that overlapping windows may introduce (such as signal-to-noise approaches)?
Thanks
coverage genomics coordinates • 4.1k views
0
Entering edit mode
So are you making a 3d graph? Where x and y are gc content, coverage and z is every 10kb? I don't quite get how the graph is set up.
0
Entering edit mode
Maybe you mean to plot the coverage/gc% ratio per base? Or indeed a 3D graph of coverage, GC, and position?
0
Entering edit mode
I meant the coverage/gc% ratio for base, but since plotting the 10 million points would result in a hard and slow to render plot, full of indistinguishable peaks, I instead wanted to smooth it and understand where actually there may be the regions with excess variation by reducing chunks of 10k points into a single one.
2
Entering edit mode
9.9 years ago
What you suggest sounds like the sliding window approach, which has been used for inspecting variation before. Here is a reference:
Rozas J, Rozas R: DnaSP, DNA sequence polymorphism: an interactive program for estimating Population Genetics parameters from DNA sequence data.
Comput Appl Biosci 1995, 11:621-625.
We wrote a tool for genome-wide analysis of polymorphisms a while ago, VariScan, which implements two kinds of sliding windows: one that is fixed for genomic stretches, and one that fixes the number of polymorphisms per window:
http://www.ub.edu/softevol/variscan/
There may be other newer implementations of the same concept out there these days.
0
Entering edit mode
Incidentally what made me ask this question was exactly when I saw Rozas questioning a PhD candidate, in his thesis defense, about how he exactly constructed the sliding window approach since the candidate himself wasn't totally aware of the normalization issue.
1
Entering edit mode
9.9 years ago
ALchEmiXt ★ 1.9k
I am not entirely sure if this answers your question but I think you could at best use a sliding window approach for that. So for every base you calculate the average of coverage of the region 1/2w - x - 1/2w. In this way we usually assess overrepresented coverage sections in our genome sequencing projects. it filters the noise and depending on the window size you will be able to detect significant differences (i.e. <>2sd) of a feature size roughly comparable to the choosen window-size; our formula is basically like this:
[?][?]u(i) = 1/(Nwindow+1) * SUM(Xi+m)
• where u(i) is average of window at position i
• Nwindow is the window size choosen (since the window is actuall 1/2 before and 1/2 after we need to add 1 later
• SUM(Xim) is the sum of coverage (=X) from positions i minus 1/2 windows size till i plus 1/2 windows size (half window size we defined here as m starting at -1/2N till +1/2N
So this deals only with the coverage issue (y-axis). You pobably can do the same for GC content and plot either against each other in for instance a 3D graph over position i.
PS: sorry for complex explanation. Just simply cannot fit a proper Sigma function in here...
PPS: please note that in case of circular genomes you need to make sure the window goes over to the other end when approaching the start or end within 1/2 windows size! |
# Thread: Least squares intercept estimator variance
1. ## Least squares intercept estimator variance
Hello everybody,
I am tackling the following problem relating to the regression intercept estimator:
"(a) Show that the least squares estimator $\hat{\alpha} = \sum_{i=1}^{n}\Big[\frac{1}{n}-\frac{(x_i-\bar{x})}{S_{xx}}\Big]y_i$
(b) Using the result of part (a), or otherwise, derive the variance of $\hat{\alpha}$".
My progress is very limited. On part (a), I cannot see how this is true. Given $\hat{\alpha}=\bar{y}-\hat{\beta}\bar{x}$, I cannot equate the formula given to what it should be. It seems the formula given amounts to $\hat{\alpha}=\bar{y}-\hat{\beta}$ and I am short of an $\bar{x}$. I am willing to accept I may have made a mistake but I can't see it and this is a major concern.
2. ## Re: Least squares intercept estimator variance
Hey Naranja.
Can you show us the complete expanded form by Beta_hat? (i.e. show it in terms of x_i's and y_i's). |
Programming languages
Completed
The principal purpose of programming languages is for developers to build instructions to send to a device.
Programming languages are a vehicle for communication between humans and computers. Devices can understand only the binary characters 1 and 0. For most developers, using only binary characters isn't an efficient way to communicate.
Programming languages come in a variety of formats and can serve different purposes. For example, JavaScript is used primarily for web applications, and Bash is used primarily for operating systems.
Low-level and high-level languages
To be interpreted by a device, low-level languages typically require fewer steps than do high-level languages. However, what makes high-level languages popular is their readability and support. JavaScript is considered a high-level language.
The code in the next section illustrates the difference between a high-level language, such as JavaScript, and a low-level assembly language.
Code comparison
The following code is written in JavaScript, a high-level language. It implements an algorithm by using constructs such as variables, for-loops, and other statements.
let number = 10
let n1 = 0, n2 = 1, nextTerm;
for (let i = 1; i <= number; i++) {
console.log(n1);
nextTerm = n1 + n2;
n1 = n2;
n2 = nextTerm;
}
The preceding code illustrates an algorithm for implementing a Fibonacci sequence. Now, here's the corresponding code in the assembly language:
area ascen,code,readonly
entry
code32
bx r0
code16
thumb
mov r0,#00
sub r0,r0,#01
mov r1,#01
mov r4,#10
ldr r2,=0x40000000
str r0,[r2]
mov r3,r0
mov r0,r1
mov r1,r3
sub r4,#01
cmp r4,#00
bne back
end
Believe it or not, both examples are intended to do the same thing. Which one was easier to understand?
Note
A Fibonacci sequence is defined as a set of numbers such that each number is the sum of the two preceding numbers, starting from 0 and 1. |
• https://me.yahoo.com
COST (GBP)
5.62
0.50
0
# log gamma stirling
viewed 2354 times and licensed 58 times
Natural Logarithm of Gamma function
Controller: CodeCogs
C++
## Log Gamma Stirling
doublelog_gamma_stirling( double x int* sign = NULL )
The logarithm of the gamma function is sometimes treated as a special function to avoid additional 'branch cut' stuctures that are introduced by the logarithm function (see http://mathworld.wolfram.com/LogGammaFunction.html )
The log-gamma function can be defined by
$\ln&space;\Gamma(z)&space;=&space;-\gamma&space;z&space;-&space;\ln&space;z&space;+&space;\sum_{k=1}^{\infty}&space;\left&space;[&space;\frac{z}{k}&space;-&space;\ln&space;\left&space;(&space;1&space;+&space;\frac{z}{k}&space;\right&space;)&space;\right&space;]$
Stirling series (in its normal form) also provides a solution, given by a simple analytic expression
$ln&space;\Gamma(z)&space;=&space;\frac{1}{2}&space;\ln(2&space;\pi)&space;+&space;(z&space;-&space;\frac{1}{2})&space;\ln&space;z&space;-&space;z&space;+&space;\frac{1}{12z}&space;-&space;\frac{1}{360&space;z^3}&space;+&space;\frac{1}{1260&space;z^5}&space;-&space;...$
The function returns the base e (2.718...) logarithm of the absolute value of the gamma function of the argument.
The sign (+1 or -1) of the gamma function is returned in a global (extern) variable named sgngam.
For arguments greater than 13, the logarithm of the gamma function is approximated by the logarithmic version of Stirling's formula using a polynomial approximation of degree 4. Arguments between -33 and +33 are reduced by recurrence to the interval [2,3] of a rational approximation.
The cosecant reflection formula is employed for arguments less than -33.
Arguments greater than MAXLGM return MAXNUM and an error message. MAXLGM = 2.035093e36 for DEC arithmetic or 2.556348e305 for IEEE arithmetic.
## Accuracy:
<pre> domain # trials peak rms 0, 3 28000 5.4e-16 1.1e-16 2.718, 2.556e305 40000 3.5e-16 8.3e-17 </pre> The error criterion was relative when the function magnitude was greater than one but absolute when it was less than one.
## Example:
#include <codecogs/maths/special/gamma/log_gamma_stirling.h>
#include <stdio.h>
int main()
{
for(double x=10; x<=20; x+=0.5)
printf("\n x=%lf log_gamma_stirling(x)=%lf",x, Maths::Special::Gamma::log_gamma_stirling(x));
}
## Output:
x=10.000000 log_gamma_stirling(x)=12.801827
x=10.500000 log_gamma_stirling(x)=13.940625
x=11.000000 log_gamma_stirling(x)=15.104413
x=11.500000 log_gamma_stirling(x)=16.292000
x=12.000000 log_gamma_stirling(x)=17.502308
x=12.500000 log_gamma_stirling(x)=18.734348
x=13.000000 log_gamma_stirling(x)=19.987214
x=13.500000 log_gamma_stirling(x)=21.260076
x=14.000000 log_gamma_stirling(x)=22.552164
x=14.500000 log_gamma_stirling(x)=23.862766
x=15.000000 log_gamma_stirling(x)=25.191221
x=15.500000 log_gamma_stirling(x)=26.536914
x=16.000000 log_gamma_stirling(x)=27.899271
x=16.500000 log_gamma_stirling(x)=29.277755
x=17.000000 log_gamma_stirling(x)=30.671860
x=17.500000 log_gamma_stirling(x)=32.081115
x=18.000000 log_gamma_stirling(x)=33.505073
x=18.500000 log_gamma_stirling(x)=34.943316
x=19.000000 log_gamma_stirling(x)=36.395445
x=19.500000 log_gamma_stirling(x)=37.861087
x=20.000000 log_gamma_stirling(x)=39.339884
## References:
• Cephes Math Library Release 2.8: June, 2000
• http://mathworld.wolfram.com/LogGammaFunction.html
### Parameters
x is primary argument. sign is a pointer to a variable, into which the sign of the solution is placed, if sign not equal to NULL (default=NULL).
### Authors
Stephen L.Moshier. Copyright 1984, 1987, 1989, 1992, 2000
Updated by Will Bateman
##### Source Code
Source code is available when you agree to a GP Licence or buy a Commercial Licence.
Not a member, then Register with CodeCogs. Already a Member, then Login. |
Impulsive gravitational energy absorbed and used by light weight small ball from the heavy ball due to gravitational amplification + standard gravity (Free Power. Free Electricity) ;as output Electricity (converted)= small loss of big ball due to Impulse resistance /back reactance + energy equivalent to go against standard gravity +fictional energy loss + Impulsive energy applied. ” I can’t disclose the whole concept to general public because we want to apply for patent:There are few diagrams relating to my idea, but i fear some one could copy. Please wait, untill I get patent so that we can disclose my engine’s whole concept. Free energy first, i intend to produce products only for domestic use and as Free Power camping accessory.
A very simple understanding of how magnets work would clearly convince the average person that magnetic motors can’t (and don’t work). Pray tell where does the energy come from? The classic response is magnetic energy from when they were made. Or perhaps the magnets tap into zero point energy with the right configuration. What about they harness the earth’s gravitational field. Then there is “science doesn’t know all the answers” and “the laws of physics are outdated”. The list goes on with equally implausible rubbish. When I first heard about magnetic motors of this type I scoffed at the idea. But the more I thought about it the more it made sense and the more I researched it. Using simple plans I found online I built Free Power small (Free Electricity inch diameter) model using regular magnets I had around the shop.
Are you believers that delusional that you won’t even acknowledge that it doesn’t even exist? How about an answer from someone without attacking me? This is NOT personal, just factual. Harvey1 kimseymd1 Free Energy two books! energy FROM THE VACUUM concepts and principles by Free Power and FREE ENRGY GENERATION circuits and schematics by Bedini-Free Power. Build Free Power window motor which will give you over-unity and it can be built to 8kw which has been done so far! NOTHING IS IMPOSSIBLE! Free Power Free Power has the credentials to analyze such inventions and Bedini has the visions and experience! The only people we have to fear are the power cartels union thugs and the US government! Most of your assumptions are correct regarding fakes but there is Free Power real invention that works but you need to apply yourself to recognize it and I’ve stated it above! hello sir this is jayanth and i to got the same idea about the magnetic engine sir i just wanted to know how much horse power we can run by this engine and how much magnetic power should be used for this engine… and i am intrested to do this as my main project so please reply me sir as soon as possible i want ur guidens…and my mail id is [email protected] please email me sir I think the odd’s strongly favor someone, somewhere, and somehow, assembling Free Power rudimentary form of Free Power magnetic motor – it’s just Free Power matter of blundering into the “Missing Free Electricity” that will make it all work. Why not ?? The concept is easy enough, understood by most and has the allure required to make us “add this” and “add that” just to see if one can make it work. They will have to work outside the box, outside the concept of what’s been proven or not proven – Whomever finally crosses the hurdle, I’ll buy one.
### This simple contradiction dispels your idea. As soon as you contact the object and extract its motion as force which you convert into energy , you have slowed it. The longer you continue the more it slows until it is no longer moving. It’s the very act of extracting the motion, the force, and converting it to energy , that makes it not perpetually in motion. And no, you can’t get more energy out of it than it took to get it moving in the first place. Because this is how the universe works, and it’s Free Power proven fact. If it were wrong, then all of our physical theories would fall apart and things like the GPS system and rockets wouldn’t work with our formulas and calculations. But they DO work, thus validating the laws of physics. Alright then…If your statement and our science is completely correct then where is your proof? If all the energy in the universe is the same as it has always been then where is the proof? Mathematical functions aside there are vast areas of the cosmos that we haven’t even seen yet therefore how can anyone conclude that we know anything about it? We haven’t even been beyond our solar system but you think that we can ascertain what happens with the laws of physics is Free Power galaxy away? Where’s the proof? “Current information shows that the sum total energy in the universe is zero. ” Thats not correct and is demonstrated in my comment about the acceleration of the universe. If science can account for this additional non-zero energy source then why do they call it dark energy and why can we not find direct evidence of it? There is much that our current religion cannot account for. Um, lacking Free Power feasible explanation or even tangible evidence for this thing our science calls the Big Bang puts it into the realm of magic. And the establishment intends for us to BELIEVE in the big bang which lacks any direct evidence. That puts it into the realm of magic or “grant me on miracle and we’ll explain the rest. ” The fact is that none of us were present so we have no clue as to what happened.
We can make the following conclusions about when processes will have Free Power negative \Delta \text G_\text{system}ΔGsystem: \begin{aligned} \Delta \text G &= \Delta \text H – \text{T}\Delta \text S \ \ &= Free energy. 01 \dfrac{\text{kJ}}{\text{mol-rxn}}-(Free energy \, \cancel{\text K})(0. 022\, \dfrac{\text{kJ}}{\text{mol-rxn}\cdot \cancel{\text K})} \ \ &= Free energy. 01\, \dfrac{\text{kJ}}{\text{mol-rxn}}-Free energy. Free Power\, \dfrac{\text{kJ}}{\text{mol-rxn}}\ \ &= -0. Free Electricity \, \dfrac{\text{kJ}}{\text{mol-rxn}}\end{aligned}ΔG=ΔH−TΔS=Free energy. 01mol-rxnkJ−(293K)(0. 022mol-rxn⋅K)kJ=Free energy. 01mol-rxnkJ−Free energy. 45mol-rxnkJ=−0. 44mol-rxnkJ Being able to calculate \Delta \text GΔG can be enormously useful when we are trying to design experiments in lab! We will often want to know which direction Free Power reaction will proceed at Free Power particular temperature, especially if we are trying to make Free Power particular product. Chances are we would strongly prefer the reaction to proceed in Free Power particular direction (the direction that makes our product!), but it’s hard to argue with Free Power positive \Delta \text GΔG! Our bodies are constantly active. Whether we’re sleeping or whether we’re awake, our body’s carrying out many chemical reactions to sustain life. Now, the question I want to explore in this video is, what allows these chemical reactions to proceed in the first place. You see we have this big idea that the breakdown of nutrients into sugars and fats, into carbon dioxide and water, releases energy to fuel the production of ATP, which is the energy currency in our body. Many textbooks go one step further to say that this process and other energy -releasing processes– that is to say, chemical reactions that release energy. Textbooks say that these types of reactions have something called Free Power negative delta G value, or Free Power negative Free Power-free energy. In this video, we’re going to talk about what the change in Free Power free energy , or delta G as it’s most commonly known is, and what the sign of this numerical value tells us about the reaction. Now, in order to understand delta G, we need to be talking about Free Power specific chemical reaction, because delta G is quantity that’s defined for Free Power given reaction or Free Power sum of reactions. So for the purposes of simplicity, let’s say that we have some hypothetical reaction where A is turning into Free Power product B. Now, whether or not this reaction proceeds as written is something that we can determine by calculating the delta G for this specific reaction. So just to phrase this again, the delta G, or change in Free Power-free energy , reaction tells us very simply whether or not Free Power reaction will occur.
###### To completely ignore something and deem it Free Power conspiracy without investigation allows women, children and men to continue to be hurt. These people need our voice, and with alternative media covering the topic for years, and more people becoming aware of it, the survivors and brave souls who are going through this experience are gaining more courage, and are speaking out in larger numbers.
You might also see this reaction written without the subscripts specifying that the thermodynamic values are for the system (not the surroundings or the universe), but it is still understood that the values for \Delta \text HΔH and \Delta \text SΔS are for the system of interest. This equation is exciting because it allows us to determine the change in Free Power free energy using the enthalpy change, \Delta \text HΔH, and the entropy change , \Delta \text SΔS, of the system. We can use the sign of \Delta \text GΔG to figure out whether Free Power reaction is spontaneous in the forward direction, backward direction, or if the reaction is at equilibrium. Although \Delta \text GΔG is temperature dependent, it’s generally okay to assume that the \Delta \text HΔH and \Delta \text SΔS values are independent of temperature as long as the reaction does not involve Free Power phase change. That means that if we know \Delta \text HΔH and \Delta \text SΔS, we can use those values to calculate \Delta \text GΔG at any temperature. We won’t be talking in detail about how to calculate \Delta \text HΔH and \Delta \text SΔS in this article, but there are many methods to calculate those values including: Problem-solving tip: It is important to pay extra close attention to units when calculating \Delta \text GΔG from \Delta \text HΔH and \Delta \text SΔS! Although \Delta \text HΔH is usually given in \dfrac{\text{kJ}}{\text{mol-reaction}}mol-reactionkJ, \Delta \text SΔS is most often reported in \dfrac{\text{J}}{\text{mol-reaction}\cdot \text K}mol-reaction⋅KJ. The difference is Free Power factor of 10001000!! Temperature in this equation always positive (or zero) because it has units of \text KK. Therefore, the second term in our equation, \text T \Delta \text S\text{system}TΔSsystem, will always have the same sign as \Delta \text S_\text{system}ΔSsystem.
“These are not just fringe scientists with science fiction ideas. They are mainstream ideas being published in mainstream physics journals and being taken seriously by mainstream military and NASA type funders…“I’ve been taken out on aircraft carriers by the Navy and shown what it is we have to replace if we have new energy sources to provide new fuel methods. ” (source)
They do so by helping to break chemical bonds in the reactant molecules (Figure Free Power. Free Electricity). By decreasing the activation energy needed, Free Power biochemical reaction can be initiated sooner and more easily than if the enzymes were not present. Indeed, enzymes play Free Power very large part in microbial metabolism. They facilitate each step along the metabolic pathway. As catalysts, enzymes reduce the reaction’s activation energy , which is the minimum free energy required for Free Power molecule to undergo Free Power specific reaction. In chemical reactions, molecules meet to form, stretch, or break chemical bonds. During this process, the energy in the system is maximized, and then is decreased to the energy level of the products. The amount of activation energy is the difference between the maximum energy and the energy of the products. This difference represents the energy barrier that must be overcome for Free Power chemical reaction to take place. Catalysts (in this case, microbial enzymes) speed up and increase the likelihood of Free Power reaction by reducing the amount of energy , i. e. the activation energy , needed for the reaction. Enzymes are usually quite specific. An enzyme is limited in the kinds of substrate that it will catalyze. Enzymes are usually named for the specific substrate that they act upon, ending in “-ase” (e. g. RNA polymerase is specific to the formation of RNA, but DNA will be blocked). Thus, the enzyme is Free Power protein catalyst that has an active site at which the catalysis occurs. The enzyme can bind Free Power limited number of substrate molecules. The binding site is specific, i. e. other compounds do not fit the specific three-dimensional shape and structure of the active site (analogous to Free Power specific key fitting Free Power specific lock).
Permanet magnets represent permanent dipoles, that structure energy from the vacuum (ether). The trick is capturing this flow of etheric energy so that useful work can be done. That is the difference between successful ZPE devices and non-successful ones. Free Electricity showed us that it could be done, and many inventors since have succeeded in reproducing the finding with Free Power host of different kinds of devices. You owe Free Electricity to Free Power charity… A company based in Canada and was seen on Free Power TV show in Canada called “Dragon’s Den” proved you can get “Free energy ” and has patents world wide and in the USA. Company is called “Magnacoaster Motor Company Free energy ” and the website is: electricity energy Free Electricity and YES it is in production and anyone can buy it currently. Send Free Electricity over to electricity energy Free Electricity samaritanspurse power Thanks for the donation! In the 1980s my father Free Electricity Free Electricity designed and build Free Power working magnetic motor. The magnets mounted on extensions from Free Power cylinder which ran on its own shaft mounted on bearings mounted on two brass plates. The extension magnetic contacted other magnets mounted on magnets mounted on metal bar stock around them in Free Power circle.
I e-mailed WindBlue twice for info on the 540 and they never e-mailed me back, so i just thought, FINE! To heck with ya. Ill build my own. Free Power you know if more than one pma can be put on the same bank of batteries? Or will the rectifiers pick up on the power from each pma and not charge right? I know that is the way it is with car alt’s. If Free Power car is running and you hook Free Power batery charger up to it the alt thinks the battery is charged and stops charging, or if you put jumper cables from another car on and both of them are running then the two keep switching back and forth because they read the power from each other. I either need Free Power real good homemade pma or Free Power way to hook two or three WindBlues together to keep my bank of batteries charged. Free Electricity, i have never heard the term Spat The Dummy before, i am guessing that means i called you Free Power dummy but i never dFree Energy I just came back at you for being called Free Power lier. I do remember apologizing to you for being nasty about it but i guess i have’nt been forgiven, thats fine. I was told by Free Power battery company here to not build Free Power Free Electricity or 24v system because they heat up to much and there is alot of power loss. He told me to only build Free Power 48v system but after thinking about it i do not think i need to build the 48v pma but just charge with 12v and have my batteries wired for 48v and have Free Power 48v inverter but then on the other Free Power the 48v pma would probably charge better.
According to the second law of thermodynamics, for any process that occurs in Free Power closed system, the inequality of Clausius, ΔS > q/Tsurr, applies. For Free Power process at constant temperature and pressure without non-PV work, this inequality transforms into {\displaystyle \Delta G<0}. Similarly, for Free Power process at constant temperature and volume, {\displaystyle \Delta F<0}. Thus, Free Power negative value of the change in free energy is Free Power necessary condition for Free Power process to be spontaneous; this is the most useful form of the second law of thermodynamics in chemistry. In chemical equilibrium at constant T and p without electrical work, dG = 0. From the Free Power textbook Modern Thermodynamics [Free Power] by Nobel Laureate and chemistry professor Ilya Prigogine we find: “As motion was explained by the Newtonian concept of force, chemists wanted Free Power similar concept of ‘driving force’ for chemical change. Why do chemical reactions occur, and why do they stop at certain points? Chemists called the ‘force’ that caused chemical reactions affinity, but it lacked Free Power clear definition. ”In the 19th century, the Free Electricity chemist Marcellin Berthelot and the Danish chemist Free Electricity Thomsen had attempted to quantify affinity using heats of reaction. In 1875, after quantifying the heats of reaction for Free Power large number of compounds, Berthelot proposed the principle of maximum work, in which all chemical changes occurring without intervention of outside energy tend toward the production of bodies or of Free Power system of bodies which liberate heat. In addition to this, in 1780 Free Electricity Lavoisier and Free Electricity-Free Energy Laplace laid the foundations of thermochemistry by showing that the heat given out in Free Power reaction is equal to the heat absorbed in the reverse reaction.
I looked at what you have for your motor so far and it’s going to be big. Here is my e-mail if you want to send those diagrams, if you know how to do it. [email protected] My name is Free energy MacInnes from Orangeville, On. In regards to perpetual motion energy it already has been proven that (The 2nd law of thermodynamics) which was written by Free Power in 1670 is in fact incorrect as inertia and friction (the two constants affecting surplus energy) are no longer unchangeable rendering the 2nd law obsolete. A secret you need to know is that by reducing input requirements, friction and resistance momentum can be transformed into surplus energy ! Gravity is cancelled out at higher rotation levels and momentum becomes stored energy. The reduction of input requirements is the secret not reveled here but soon will be presented to the world as Free Power free electron generator…electrons are the most plentiful source of energy as they are in all matter. Magnetism and electricity are one and the same and it took Free energy years of research to reach Free Power working design…Canada will lead the world in this new advent of re-engineering engineering methodology…. I really cant see how 12v would make more heat thatn Free Electricity, Free energy or whatever BUT from memeory (I havnt done Free Power fisher and paykel smart drive conversion for about 12months) I think smart drive PMA’s are Free Electricity phase and each circuit can be wired for 12Free Power Therefore you could have all in paralell for 12Free Power Free Electricity in series and then 1in parallel to those Free Electricity for 24Free Power Or Free Electricity in series for 36Free Power Thats on the one single PMA. Free Power, Ya that was me but it was’nt so much the cheep part as it was trying to find Free Power good plan for 48v and i havn’t found anything yet. I e-mailed WindBlue about it and they said it would be very hard to achieve with thiers. |
I’ve just uploaded to the arXiv my paper “Almost all Collatz orbits attain almost bounded values“, submitted to the proceedings of the Forum of Mathematics, Pi. In this paper I returned to the topic of the notorious Collatz conjecture (also known as the ${3x+1}$ conjecture), which I previously discussed in this blog post. This conjecture can be phrased as follows. Let ${{\bf N}+1 = \{1,2,\dots\}}$ denote the positive integers (with ${{\bf N} =\{0,1,2,\dots\}}$ the natural numbers), and let ${\mathrm{Col}: {\bf N}+1 \rightarrow {\bf N}+1}$ be the map defined by setting ${\mathrm{Col}(N)}$ equal to ${3N+1}$ when ${N}$ is odd and ${N/2}$ when ${N}$ is even. Let ${\mathrm{Col}_{\min}(N) := \inf_{n \in {\bf N}} \mathrm{Col}^n(N)}$ be the minimal element of the Collatz orbit ${N, \mathrm{Col}(N), \mathrm{Col}^2(N),\dots}$. Then we have
Conjecture 1 (Collatz conjecture) One has ${\mathrm{Col}_{\min}(N)=1}$ for all ${N \in {\bf N}+1}$.
Establishing the conjecture for all ${N}$ remains out of reach of current techniques (for instance, as discussed in the previous blog post, it is basically at least as difficult as Baker’s theorem, all known proofs of which are quite difficult). However, the situation is more promising if one is willing to settle for results which only hold for “most” ${N}$ in some sense. For instance, it is a result of Krasikov and Lagarias that
$\displaystyle \{ N \leq x: \mathrm{Col}_{\min}(N) = 1 \} \gg x^{0.84}$
for all sufficiently large ${x}$. In another direction, it was shown by Terras that for almost all ${N}$ (in the sense of natural density), one has ${\mathrm{Col}_{\min}(N) < N}$. This was then improved by Allouche to ${\mathrm{Col}_{\min}(N) < N^\theta}$ for almost all ${N}$ and any fixed ${\theta > 0.869}$, and extended later by Korec to cover all ${\theta > \frac{\log 3}{\log 4} \approx 0.7924}$. In this paper we obtain the following further improvement (at the cost of weakening natural density to logarithmic density):
Theorem 2 Let ${f: {\bf N}+1 \rightarrow {\bf R}}$ be any function with ${\lim_{N \rightarrow \infty} f(N) = +\infty}$. Then we have ${\mathrm{Col}_{\min}(N) < f(N)}$ for almost all ${N}$ (in the sense of logarithmic density).
Thus for instance one has ${\mathrm{Col}_{\min}(N) < \log\log\log\log N}$ for almost all ${N}$ (in the sense of logarithmic density).
The difficulty here is one usually only expects to establish “local-in-time” results that control the evolution ${\mathrm{Col}^n(N)}$ for times ${n}$ that only get as large as a small multiple ${c \log N}$ of ${\log N}$; the aforementioned results of Terras, Allouche, and Korec, for instance, are of this time. However, to get ${\mathrm{Col}^n(N)}$ all the way down to ${f(N)}$ one needs something more like an “(almost) global-in-time” result, where the evolution remains under control for so long that the orbit has nearly reached the bounded state ${N=O(1)}$.
However, as observed by Bourgain in the context of nonlinear Schrödinger equations, one can iterate “almost sure local wellposedness” type results (which give local control for almost all initial data from a given distribution) into “almost sure (almost) global wellposedness” type results if one is fortunate enough to draw one’s data from an invariant measure for the dynamics. To illustrate the idea, let us take Korec’s aforementioned result that if ${\theta > \frac{\log 3}{\log 4}}$ one picks at random an integer ${N}$ from a large interval ${[1,x]}$, then in most cases, the orbit of ${N}$ will eventually move into the interval ${[1,x^{\theta}]}$. Similarly, if one picks an integer ${M}$ at random from ${[1,x^\theta]}$, then in most cases, the orbit of ${M}$ will eventually move into ${[1,x^{\theta^2}]}$. It is then tempting to concatenate the two statements and conclude that for most ${N}$ in ${[1,x]}$, the orbit will eventually move ${[1,x^{\theta^2}]}$. Unfortunately, this argument does not quite work, because by the time the orbit from a randomly drawn ${N \in [1,x]}$ reaches ${[1,x^\theta]}$, the distribution of the final value is unlikely to be close to being uniformly distributed on ${[1,x^\theta]}$, and in particular could potentially concentrate almost entirely in the exceptional set of ${M \in [1,x^\theta]}$ that do not make it into ${[1,x^{\theta^2}]}$. The point here is the uniform measure on ${[1,x]}$ is not transported by Collatz dynamics to anything resembling the uniform measure on ${[1,x^\theta]}$.
So, one now needs to locate a measure which has better invariance properties under the Collatz dynamics. It turns out to be technically convenient to work with a standard acceleration of the Collatz map known as the Syracuse map ${\mathrm{Syr}: 2{\bf N}+1 \rightarrow 2{\bf N}+1}$, defined on the odd numbers ${2{\bf N}+1 = \{1,3,5,\dots\}}$ by setting ${\mathrm{Syr}(N) = (3N+1)/2^a}$, where ${2^a}$ is the largest power of ${2}$ that divides ${3N+1}$. (The advantage of using the Syracuse map over the Collatz map is that it performs precisely one multiplication of ${3}$ at each iteration step, which makes the map better behaved when performing “${3}$-adic” analysis.)
When viewed ${3}$-adically, we soon see that iterations of the Syracuse map become somewhat irregular. Most obviously, ${\mathrm{Syr}(N)}$ is never divisible by ${3}$. A little less obviously, ${\mathrm{Syr}(N)}$ is twice as likely to equal ${2}$ mod ${3}$ as it is to equal ${1}$ mod ${3}$. This is because for a randomly chosen odd ${\mathbf{N}}$, the number of times ${\mathbf{a}}$ that ${2}$ divides ${3\mathbf{N}+1}$ can be seen to have a geometric distribution of mean ${2}$ – it equals any given value ${a \in{\bf N}+1}$ with probability ${2^{-a}}$. Such a geometric random variable is twice as likely to be odd as to be even, which is what gives the above irregularity. There are similar irregularities modulo higher powers of ${3}$. For instance, one can compute that for large random odd ${\mathbf{N}}$, ${\mathrm{Syr}^2(\mathbf{N}) \hbox{ mod } 9}$ will take the residue classes ${0,1,2,3,4,5,6,7,8 \hbox{ mod } 9}$ with probabilities
$\displaystyle 0, \frac{8}{63}, \frac{16}{63}, 0, \frac{11}{63}, \frac{4}{63}, 0, \frac{2}{63}, \frac{22}{63}$
respectively. More generally, for any ${n}$, ${\mathrm{Syr}^n(N) \hbox{ mod } 3^n}$ will be distributed according to the law of a random variable ${\mathbf{Syrac}({\bf Z}/3^n{\bf Z})}$ on ${{\bf Z}/3^n{\bf Z}}$ that we call a Syracuse random variable, and can be described explicitly as
$\displaystyle \mathbf{Syrac}({\bf Z}/3^n{\bf Z}) = 2^{-\mathbf{a}_1} + 3^1 2^{-\mathbf{a}_1-\mathbf{a}_2} + \dots + 3^{n-1} 2^{-\mathbf{a}_1-\dots-\mathbf{a}_n} \hbox{ mod } 3^n, \ \ \ \ \ (1)$
where ${\mathbf{a}_1,\dots,\mathbf{a}_n}$ are iid copies of a geometric random variable of mean ${2}$.
In view of this, any proposed “invariant” (or approximately invariant) measure (or family of measures) for the Syracuse dynamics should take this ${3}$-adic irregularity of distribution into account. It turns out that one can use the Syracuse random variables ${\mathbf{Syrac}({\bf Z}/3^n{\bf Z})}$ to construct such a measure, but only if these random variables stabilise in the limit ${n \rightarrow \infty}$ in a certain total variation sense. More precisely, in the paper we establish the estimate
$\displaystyle \sum_{Y \in {\bf Z}/3^n{\bf Z}} | \mathbb{P}( \mathbf{Syrac}({\bf Z}/3^n{\bf Z})=Y) - 3^{m-n} \mathbb{P}( \mathbf{Syrac}({\bf Z}/3^m{\bf Z})=Y \hbox{ mod } 3^m)| \ \ \ \ \ (2)$
$\displaystyle \ll_A m^{-A}$
for any ${1 \leq m \leq n}$ and any ${A > 0}$. This type of stabilisation is plausible from entropy heuristics – the tuple ${(\mathbf{a}_1,\dots,\mathbf{a}_n)}$ of geometric random variables that generates ${\mathbf{Syrac}({\bf Z}/3^n{\bf Z})}$ has Shannon entropy ${n \log 4}$, which is significantly larger than the total entropy ${n \log 3}$ of the uniform distribution on ${{\bf Z}/3^n{\bf Z}}$, so we expect a lot of “mixing” and “collision” to occur when converting the tuple ${(\mathbf{a}_1,\dots,\mathbf{a}_n)}$ to ${\mathbf{Syrac}({\bf Z}/3^n{\bf Z})}$; these heuristics can be supported by numerics (which I was able to work out up to about ${n=10}$ before running into memory and CPU issues), but it turns out to be surprisingly delicate to make this precise.
A first hint of how to proceed comes from the elementary number theory observation (easily proven by induction) that the rational numbers
$\displaystyle 2^{-a_1} + 3^1 2^{-a_1-a_2} + \dots + 3^{n-1} 2^{-a_1-\dots-a_n}$
are all distinct as ${(a_1,\dots,a_n)}$ vary over tuples in ${({\bf N}+1)^n}$. Unfortunately, the process of reducing mod ${3^n}$ creates a lot of collisions (as must happen from the pigeonhole principle); however, by a simple “Lefschetz principle” type argument one can at least show that the reductions
$\displaystyle 2^{-a_1} + 3^1 2^{-a_1-a_2} + \dots + 3^{m-1} 2^{-a_1-\dots-a_m} \hbox{ mod } 3^n \ \ \ \ \ (3)$
are mostly distinct for “typical” ${a_1,\dots,a_m}$ (as drawn using the geometric distribution) as long as ${m}$ is a bit smaller than ${\frac{\log 3}{\log 4} n}$ (basically because the rational number appearing in (3) then typically takes a form like ${M/2^{2m}}$ with ${M}$ an integer between ${0}$ and ${3^n}$). This analysis of the component (3) of (1) is already enough to get quite a bit of spreading on ${ \mathbf{Syrac}({\bf Z}/3^n{\bf Z})}$ (roughly speaking, when the argument is optimised, it shows that this random variable cannot concentrate in any subset of ${{\bf Z}/3^n{\bf Z}}$ of density less than ${n^{-C}}$ for some large absolute constant ${C>0}$). To get from this to a stabilisation property (2) we have to exploit the mixing effects of the remaining portion of (1) that does not come from (3). After some standard Fourier-analytic manipulations, matters then boil down to obtaining non-trivial decay of the characteristic function of ${\mathbf{Syrac}({\bf Z}/3^n{\bf Z})}$, and more precisely in showing that
$\displaystyle \mathbb{E} e^{-2\pi i \xi \mathbf{Syrac}({\bf Z}/3^n{\bf Z}) / 3^n} \ll_A n^{-A} \ \ \ \ \ (4)$
for any ${A > 0}$ and any ${\xi \in {\bf Z}/3^n{\bf Z}}$ that is not divisible by ${3}$.
If the random variable (1) was the sum of independent terms, one could express this characteristic function as something like a Riesz product, which would be straightforward to estimate well. Unfortunately, the terms in (1) are loosely coupled together, and so the characteristic factor does not immediately factor into a Riesz product. However, if one groups adjacent terms in (1) together, one can rewrite it (assuming ${n}$ is even for sake of discussion) as
$\displaystyle (2^{\mathbf{a}_2} + 3) 2^{-\mathbf{b}_1} + (2^{\mathbf{a}_4}+3) 3^2 2^{-\mathbf{b}_1-\mathbf{b}_2} + \dots$
$\displaystyle + (2^{\mathbf{a}_n}+3) 3^{n-2} 2^{-\mathbf{b}_1-\dots-\mathbf{b}_{n/2}} \hbox{ mod } 3^n$
where ${\mathbf{b}_j := \mathbf{a}_{2j-1} + \mathbf{a}_{2j}}$. The point here is that after conditioning on the ${\mathbf{b}_1,\dots,\mathbf{b}_{n/2}}$ to be fixed, the random variables ${\mathbf{a}_2, \mathbf{a}_4,\dots,\mathbf{a}_n}$ remain independent (though the distribution of each ${\mathbf{a}_{2j}}$ depends on the value that we conditioned ${\mathbf{b}_j}$ to), and so the above expression is a conditional sum of independent random variables. This lets one express the characeteristic function of (1) as an averaged Riesz product. One can use this to establish the bound (4) as long as one can show that the expression
$\displaystyle \frac{\xi 3^{2j-2} (2^{-\mathbf{b}_1-\dots-\mathbf{b}_j+1} \mod 3^n)}{3^n}$
is not close to an integer for a moderately large number (${\gg A \log n}$, to be precise) of indices ${j = 1,\dots,n/2}$. (Actually, for technical reasons we have to also restrict to those ${j}$ for which ${\mathbf{b}_j=3}$, but let us ignore this detail here.) To put it another way, if we let ${B}$ denote the set of pairs ${(j,l)}$ for which
$\displaystyle \frac{\xi 3^{2j-2} (2^{-l+1} \mod 3^n)}{3^n} \in [-\varepsilon,\varepsilon] + {\bf Z},$
we have to show that (with overwhelming probability) the random walk
$\displaystyle (1,\mathbf{b}_1), (2, \mathbf{b}_1 + \mathbf{b}_2), \dots, (n/2, \mathbf{b}_1+\dots+\mathbf{b}_{n/2})$
(which we view as a two-dimensional renewal process) contains at least a few points lying outside of ${B}$.
A little bit of elementary number theory and combinatorics allows one to describe the set ${B}$ as the union of “triangles” with a certain non-zero separation between them. If the triangles were all fairly small, then one expects the renewal process to visit at least one point outside of ${B}$ after passing through any given such triangle, and it then becomes relatively easy to then show that the renewal process usually has the required number of points outside of ${B}$. The most difficult case is when the renewal process passes through a particularly large triangle in ${B}$. However, it turns out that large triangles enjoy particularly good separation properties, and in particular afer passing through a large triangle one is likely to only encounter nothing but small triangles for a while. After making these heuristics more precise, one is finally able to get enough points on the renewal process outside of ${B}$ that one can finish the proof of (4), and thus Theorem 2. |
$\newcommand{\defeq}{\mathrel{\mathop:}=}$
## 2009/11/11
### A first, crude explanation
--
Im t ≅ (Im t)* ≅ V' * / Annih Im t = V' * / Ker t* ≅ Im t*
--
Labels:
XOO11/11/2009 6:25 pm 說:
In general, dual space often represents "adjunction" between two structures. (See p.88, MacLane's category textbook)
The isomorphism between V and V* holds only if they are finite-dimensional. It may explain that it is not completely natural to say V is isomorphic to V*, right?
Josh Ko11/13/2009 7:58 am 說:
For now I just regard V* as "transposed V", i.e., a element of V* is a row which is the transpose of an element (column) of V. This correspondence between dual spaces and row spaces does not seem to be perfect, though. For example, we have (A^T)^T = A but only V** \cong V. If you happen to have some free time, perhaps you can elaborate on "it is not completely natural to say V is isomorphic to V*"?
XOO11/18/2009 2:37 pm 說:
Given a dualization operator D, we could form an adjunction from Vec to Vec, and by the adjunction we have the unit which is a natural transformation from V to V**. However it is not a natural isomorphism in general.
Adjunction is a weaker notion of equivalence of categories, and equivalene is weaker than isomorphism of categories.
Currently, you may think adjunction as Galois connection.
Do you have a copy of MacLane's category textbook?
Josh Ko11/19/2009 10:37 am 說:
I do, but I do not have time to read it before I complete the research proposal. Hopefully I'll be able to acquaint myself with (pure) category theory next year. In fact I am now quite curious about exactly how successful category theory is (in terms of pure mathematics), and to have an answer it seems necessary to understand the theory to a certain extent. That requires much time and energy which I cannot devote to the theory now. |
# 1.9: Exercises
$$\newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} }$$
$$\newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}}$$
## Exercises
Graphically, find the point $$\left( x_{1},y_{1}\right)$$ which lies on both lines, $$x+3y=1$$ and $$4x-y=3.$$ That is, graph each line and see where they intersect.
$$\begin{array}{c} x+3y=1 \\ 4x-y=3 \end{array}$$, Solution is: $$\left[ x=\frac{10}{13},y=\frac{1}{13}\right]$$.
Graphically, find the point of intersection of the two lines $$3x+y=3$$ and $$x+2y=1.$$ That is, graph each line and see where they intersect.
$$\begin{array}{c} 3x+y=3 \\ x+2y=1 \end{array}$$, Solution is: $$\left[ x=1,y=0\right]$$
You have a system of $$k$$ equations in two variables, $$k\geq 2$$. Explain the geometric significance of
1. No solution.
2. A unique solution.
3. An infinite number of solutions.
[Chapter1Q8]Consider the following augmented matrix in which $$\ast$$ denotes an arbitrary number and $$\blacksquare$$ denotes a nonzero number. Determine whether the given augmented matrix is consistent. If consistent, is the solution unique? $\left[ \begin{array}{ccccc|c} \blacksquare & \ast & \ast & \ast & \ast & \ast \\ 0 & \blacksquare & \ast & \ast & 0 & \ast \\ 0 & 0 & \blacksquare & \ast & \ast & \ast \\ 0 & 0 & 0 & 0 & \blacksquare & \ast \end{array} \right]$
The solution exists but is not unique.
Consider the following augmented matrix in which $$\ast$$ denotes an arbitrary number and $$\blacksquare$$ denotes a nonzero number. Determine whether the given augmented matrix is consistent. If consistent, is the solution unique? $\left[ \begin{array}{ccc|c} \blacksquare & \ast & \ast & \ast \\ 0 & \blacksquare & \ast & \ast \\ 0 & 0 & \blacksquare & \ast \end{array} \right]$
A solution exists and is unique.
Consider the following augmented matrix in which $$\ast$$ denotes an arbitrary number and $$\blacksquare$$ denotes a nonzero number. Determine whether the given augmented matrix is consistent. If consistent, is the solution unique? $\left[ \begin{array}{ccccc|c} \blacksquare & \ast & \ast & \ast & \ast & \ast \\ 0 & \blacksquare & 0 & \ast & 0 & \ast \\ 0 & 0 & 0 & \blacksquare & \ast & \ast \\ 0 & 0 & 0 & 0 & \blacksquare & \ast \end{array} \right]$
Consider the following augmented matrix in which $$\ast$$ denotes an arbitrary number and $$\blacksquare$$ denotes a nonzero number. Determine whether the given augmented matrix is consistent. If consistent, is the solution unique? $\left[ \begin{array}{ccccc|c} \blacksquare & \ast & \ast & \ast & \ast & \ast \\ 0 & \blacksquare & \ast & \ast & 0 & \ast \\ 0 & 0 & 0 & 0 & \blacksquare & 0 \\ 0 & 0 & 0 & 0 & \ast & \blacksquare \end{array} \right]$
There might be a solution. If so, there are infinitely many.
Suppose a system of equations has fewer equations than variables. Will such a system necessarily be consistent? If so, explain why and if not, give an example which is not consistent.
No. Consider $$x+y+z=2$$ and $$x+y+z=1.$$
If a system of equations has more equations than variables, can it have a solution? If so, give an example and if not, tell why not.
These can have a solution. For example, $$x+y=1,2x+2y=2,3x+3y=3$$ even has an infinite set of solutions.
Find $$h$$ such that $\left[ \begin{array}{rr|r} 2 & h & 4 \\ 3 & 6 & 7 \end{array} \right]$ is the augmented matrix of an inconsistent system.
$$h=4$$
Find $$h$$ such that $\left[ \begin{array}{rr|r} 1 & h & 3 \\ 2 & 4 & 6 \end{array} \right]$ is the augmented matrix of a consistent system.
Any $$h$$ will work.
Find $$h$$ such that $\left[ \begin{array}{rr|r} 1 & 1 & 4 \\ 3 & h & 12 \end{array} \right]$ is the augmented matrix of a consistent system.
Any $$h$$ will work.
Choose $$h$$ and $$k$$ such that the augmented matrix shown has each of the following:
1. one solution
2. no solution
3. infinitely many solutions
$\left[ \begin{array}{rr|r} 1 & h & 2 \\ 2 & 4 & k \end{array} \right]$
If $$h\neq 2$$ there will be a unique solution for any $$k$$. If $$h=2$$ and $$% k\neq 4,$$ there are no solutions. If $$h=2$$ and $$k=4,$$ then there are infinitely many solutions.
Choose $$h$$ and $$k$$ such that the augmented matrix shown has each of the following:
1. one solution
2. no solution
3. infinitely many solutions
$\left[ \begin{array}{rr|r} 1 & 2 & 2 \\ 2 & h & k \end{array} \right]$
If $$h\neq 4,$$ then there is exactly one solution. If $$h=4$$ and $$k\neq 4,$$ then there are no solutions. If $$h=4$$ and $$k=4,$$ then there are infinitely many solutions.
Determine if the system is consistent. If so, is the solution unique? $\begin{array}{c} x+2y+z-w=2 \\ x-y+z+w=1 \\ 2x+y-z=1 \\ 4x+2y+z=5 \end{array}$
There is no solution. The system is inconsistent. You can see this from the augmented matrix. $$\left[ \begin{array}{rrrrr} 1 & 2 & 1 & -1 & 2 \\ 1 & -1 & 1 & 1 & 1 \\ 2 & 1 & -1 & 0 & 1 \\ 4 & 2 & 1 & 0 & 5 \end{array} \right]$$, : $$\left[ \begin{array}{rrrrr} 1 & 0 & 0 & \vspace{0.05in}\frac{1}{3} & 0 \\ 0 & 1 & 0 & -\vspace{0.05in}\frac{2}{3} & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{array} \right] .$$
Determine if the system is consistent. If so, is the solution unique? $\begin{array}{c} x+2y+z-w=2 \\ x-y+z+w=0 \\ 2x+y-z=1 \\ 4x+2y+z=3 \end{array}$
Solution is: $$\left[ w=\frac{3}{2}y-1,x=\frac{2}{3}-\frac{1}{2}y,z=\frac{1}{3 }\right]$$
Determine which matrices are in .
1. $$\left[ \begin{array}{rrr} 1 & 2 & 0 \\ 0 & 1 & 7 \end{array} \right]$$
2. $$\left[ \begin{array}{rrrr} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 \end{array} \right]$$
3. $$\left[ \begin{array}{rrrrrr} 1 & 1 & 0 & 0 & 0 & 5 \\ 0 & 0 & 1 & 2 & 0 & 4 \\ 0 & 0 & 0 & 0 & 1 & 3 \end{array} \right]$$
1. This one is not.
2. This one is.
3. This one is.
Row reduce the following matrix to obtain the . Then continue to obtain the . $\left[ \begin{array}{rrrr} 2 & -1 & 3 & -1 \\ 1 & 0 & 2 & 1 \\ 1 & -1 & 1 & -2 \end{array} \right]$
Row reduce the following matrix to obtain the . Then continue to obtain the . $\left[ \begin{array}{rrrr} 0 & 0 & -1 & -1 \\ 1 & 1 & 1 & 0 \\ 1 & 1 & 0 & -1 \end{array} \right]$
Row reduce the following matrix to obtain the . Then continue to obtain the . $\left[ \begin{array}{rrrr} 3 & -6 & -7 & -8 \\ 1 & -2 & -2 & -2 \\ 1 & -2 & -3 & -4 \end{array} \right]$
Row reduce the following matrix to obtain the . Then continue to obtain the . $\left[ \begin{array}{rrrr} 2 & 4 & 5 & 15 \\ 1 & 2 & 3 & 9 \\ 1 & 2 & 2 & 6 \end{array} \right]$
Row reduce the following matrix to obtain the . Then continue to obtain the . $\left[ \begin{array}{rrrr} 4 & -1 & 7 & 10 \\ 1 & 0 & 3 & 3 \\ 1 & -1 & -2 & 1 \end{array} \right]$
Row reduce the following matrix to obtain the . Then continue to obtain the . $\left[ \begin{array}{rrrr} 3 & 5 & -4 & 2 \\ 1 & 2 & -1 & 1 \\ 1 & 1 & -2 & 0 \end{array} \right]$
Row reduce the following matrix to obtain the . Then continue to obtain the . $\left[ \begin{array}{rrrr} -2 & 3 & -8 & 7 \\ 1 & -2 & 5 & -5 \\ 1 & -3 & 7 & -8 \end{array} \right]$
Find the solution of the system whose augmented matrix is $\left[ \begin{array}{rrr|r} 1 & 2 & 0 & 2 \\ 1 & 3 & 4 & 2 \\ 1 & 0 & 2 & 1 \end{array} \right]$
Find the solution of the system whose augmented matrix is $\left[ \begin{array}{rrr|r} 1 & 2 & 0 & 2 \\ 2 & 0 & 1 & 1 \\ 3 & 2 & 1 & 3 \end{array} \right]$
The is $$\left[ \begin{array}{rrr|r} 1 & 0 & \vspace{0.05in}\frac{1}{2} & \vspace{0.05in}\frac{1}{2} \\ 0 & 1 & -\vspace{0.05in}\frac{1}{4} & \vspace{0.05in}\frac{3}{4} \\ 0 & 0 & 0 & 0 \end{array} \right] .$$ Therefore, the solution is of the form $$z=t,y=\frac{3}{4}+t\left( \frac{1}{4}\right) ,x=\frac{1}{2}-\frac{1}{2}t$$ where $$t\in \mathbb{R}$$.
Find the solution of the system whose augmented matrix is $\left[ \begin{array}{rrr|r} 1 & 1 & 0 & 1 \\ 1 & 0 & 4 & 2 \end{array} \right]$
The is $$\left[ \begin{array}{rrr|r} 1 & 0 & 4 & 2 \\ 0 & 1 & -4 & -1 \end{array} \right]$$ and so the solution is $$z=t,y=4t,x=2-4t.$$
Find the solution of the system whose augmented matrix is $\left[ \begin{array}{rrrrr|r} 1 & 0 & 2 & 1 & 1 & 2 \\ 0 & 1 & 0 & 1 & 2 & 1 \\ 1 & 2 & 0 & 0 & 1 & 3 \\ 1 & 0 & 1 & 0 & 2 & 2 \end{array} \right]$
The is $$\left[ \begin{array}{rrrrr|r} 1 & 0 & 0 & 0 & 9 & 3 \\ 0 & 1 & 0 & 0 & -4 & 0 \\ 0 & 0 & 1 & 0 & -7 & -1 \\ 0 & 0 & 0 & 1 & 6 & 1 \end{array} \right]$$ and so $$x_{5}=t,x_{4}=1-6t,x_{3}=-1+7t,x_{2}=4t,x_{1}=3-9t$$.
Find the solution of the system whose augmented matrix is $\left[ \begin{array}{rrrrr|r} 1 & 0 & 2 & 1 & 1 & 2 \\ 0 & 1 & 0 & 1 & 2 & 1 \\ 0 & 2 & 0 & 0 & 1 & 3 \\ 1 & -1 & 2 & 2 & 2 & 0 \end{array} \right]$
The is $$\left[ \begin{array}{rrrrr|r} 1 & 0 & 2 & 0 & -\vspace{0.05in}\frac{1}{2} & \vspace{0.05in}\frac{5}{2} \\ 0 & 1 & 0 & 0 & \vspace{0.05in}\frac{1}{2} & \vspace{0.05in}\frac{3}{2} \\ 0 & 0 & 0 & 1 & \vspace{0.05in}\frac{3}{2} & -\vspace{0.05in}\frac{1}{2} \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right]$$. Therefore, let $$x_{5}=t,x_{3}=s.$$ Then the other variables are given by $$x_{4}=-\frac{1}{2}-\frac{3}{2}t,x_{2}=\frac{3}{2}-t\frac{1}{2} ,,x_{1}=\frac{5}{2}+\frac{1}{2}t-2s.$$
Find the solution to the system of equations, $$7x+14y+15z=22,$$ $$2x+4y+3z=5,$$ and $$3x+6y+10z=13.$$
Solution is: $$\left[ x=1-2t,z=1,y=t\right]$$
Find the solution to the system of equations, $$3x-y+4z=6,$$ $$y+8z=0,$$ and $$-2x+y=-4.$$
Solution is: $$\left[ x=2-4t,y=-8t,z=t\right]$$
Find the solution to the system of equations, $$9x-2y+4z=-17,$$ $$13x-3y+6z=-25,$$ and $$-2x-z=3.$$
Solution is: $$\left[x=-1,y=2,z=-1\right]$$
Find the solution to the system of equations, $$65x+84y+16z=546,$$ $$81x+105y+20z=682,$$ and $$84x+110y+21z=713.$$
Solution is: $$\left[ x=2,y=4,z=5\right]$$
Find the solution to the system of equations, $$8x+2y+3z=-3,8x+3y+3z=-1,$$ and $$4x+y+3z=-9.$$
Solution is: $$\left[ x=1,y=2,z=-5\right]$$
Find the solution to the system of equations, $$-8x+2y+5z=18,-8x+3y+5z=13,$$ and $$-4x+y+5z=19.$$
Solution is: $$\left[x=-1,y=-5,z=4\right]$$
Find the solution to the system of equations, $$3x-y-2z=3,$$ $$y-4z=0,$$ and $$-2x+y=-2.$$
Solution is: $$\left[ x=2t+1,y=4t,z=t\right]$$
Find the solution to the system of equations, $$-9x+15y=66,-11x+18y=79$$, $$-x+y=4$$, and $$z=3$$.
Solution is: $$\left[x=1,y=5,z=3\right]$$
Find the solution to the system of equations, $$-19x+8y=-108,$$ $$-71x+30y=-404,$$ $$-2x+y=-12,$$ $$4x+z=14.$$
Solution is: $$\left[ x=4,y=-4,z=-2\right]$$
Suppose a system of equations has fewer equations than variables and you have found a solution to this system of equations. Is it possible that your solution is the only one? Explain.
No. Consider $$x+y+z=2$$ and $$x+y+z=1.$$
Suppose a system of linear equations has a $$2\times 4$$ augmented matrix and the last column is a pivot column. Could the system of linear equations be consistent? Explain.
No. This would lead to $$0=1.$$
Suppose the coefficient matrix of a system of $$n$$ equations with $$n$$ variables has the property that every column is a pivot column. Does it follow that the system of equations must have a solution? If so, must the solution be unique? Explain.
Yes. It has a unique solution.
Suppose there is a unique solution to a system of linear equations. What must be true of the pivot columns in the augmented matrix?
The last column must not be a pivot column. The remaining columns must each be pivot columns.
The steady state temperature, $$u$$, of a plate solves Laplace’s equation, $$\Delta u=0.$$ One way to approximate the solution is to divide the plate into a square mesh and require the temperature at each node to equal the average of the temperature at the four adjacent nodes. In the following picture, the numbers represent the observed temperature at the indicated nodes. Find the temperature at the interior nodes, indicated by $$x,y,z,$$ and $$w$$. One of the equations is $$z=\vspace{0.05in}\frac{1}{4}\left( 10+0+w+x\right)$$.
(1,60) (100,0)
(1,1) (0,30)(1,0)90 (0,60)(1,0)90 (30,0)(0,1)90 (60,0)(0,1)90 (60,0) (30,0) (0,60) (0,30) (30,30) (30,60) (60,30) (60,60) (60,90) (30,90) (90,30) (90,60) (63,-4)$$10$$ (33,-4)$$10$$ (-10,60)$$20$$ (-10,30)$$20$$ (34,34)$$x$$ (34,64)$$y$$ (64,34)$$z$$ (64,64)$$w$$ (64,90)$$30$$ (34,90)$$30$$ (94,30)$$0$$ (94,60)$$0$$
You need $$\begin{array}{c} \frac{1}{4}\left( 20+30+w+x\right) -y=0 \\ \frac{1}{4}\left( y+30+0+z\right) -w=0 \\ \frac{1}{4}\left( 20+y+z+10\right) -x=0 \\ \frac{1}{4}\left( x+w+0+10\right) -z=0 \end{array}$$, Solution is: $$\left[ w=15,x=15,y=20,z=10\right] .$$
Find the rank of the following matrix. $\left[ \begin{array}{rrrr} 4 & -16 & -1 & -5 \\ 1 & -4 & 0 & -1 \\ 1 & -4 & -1 & -2 \end{array} \right]$
Find the rank of the following matrix. $\left[ \begin{array}{rrrr} 3 & 6 & 5 & 12 \\ 1 & 2 & 2 & 5 \\ 1 & 2 & 1 & 2 \end{array} \right]$
Find the rank of the following matrix. $\left[ \begin{array}{rrrrr} 0 & 0 & -1 & 0 & 3 \\ 1 & 4 & 1 & 0 & -8 \\ 1 & 4 & 0 & 1 & 2 \\ -1 & -4 & 0 & -1 & -2 \end{array} \right]$
Find the rank of the following matrix. $\left[ \begin{array}{rrrr} 4 & -4 & 3 & -9 \\ 1 & -1 & 1 & -2 \\ 1 & -1 & 0 & -3 \end{array} \right]$
Find the rank of the following matrix. $\left[ \begin{array}{rrrrr} 2 & 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 1 & 7 \\ 1 & 0 & 0 & 1 & 7 \end{array} \right]$
Find the rank of the following matrix. $\left[ \begin{array}{rrr} 4 & 15 & 29 \\ 1 & 4 & 8 \\ 1 & 3 & 5 \\ 3 & 9 & 15 \end{array} \right]$
Find the rank of the following matrix. $\left[ \begin{array}{rrrrr} 0 & 0 & -1 & 0 & 1 \\ 1 & 2 & 3 & -2 & -18 \\ 1 & 2 & 2 & -1 & -11 \\ -1 & -2 & -2 & 1 & 11 \end{array} \right]$
Find the rank of the following matrix. $\left[ \begin{array}{rrrrr} 1 & -2 & 0 & 3 & 11 \\ 1 & -2 & 0 & 4 & 15 \\ 1 & -2 & 0 & 3 & 11 \\ 0 & 0 & 0 & 0 & 0 \end{array} \right]$
Find the rank of the following matrix. $\left[ \begin{array}{rrr} -2 & -3 & -2 \\ 1 & 1 & 1 \\ 1 & 0 & 1 \\ -3 & 0 & -3 \end{array} \right]$
Find the rank of the following matrix. $\left[ \begin{array}{rrrrr} 4 & 4 & 20 & -1 & 17 \\ 1 & 1 & 5 & 0 & 5 \\ 1 & 1 & 5 & -1 & 2 \\ 3 & 3 & 15 & -3 & 6 \end{array} \right]$
Find the rank of the following matrix. $\left[ \begin{array}{rrrrr} -1 & 3 & 4 & -3 & 8 \\ 1 & -3 & -4 & 2 & -5 \\ 1 & -3 & -4 & 1 & -2 \\ -2 & 6 & 8 & -2 & 4 \end{array} \right]$
Suppose $$A$$ is an $$m\times n$$ matrix. Explain why the rank of $$A$$ is always no larger than $$\min \left( m,n\right) .$$
It is because you cannot have more than $$\min \left( m,n\right)$$ nonzero rows in the . Recall that the number of pivot columns is the same as the number of nonzero rows from the description of this .
State whether each of the following sets of data are possible for the matrix equation $$AX=B$$. If possible, describe the solution set. That is, tell whether there exists a unique solution, no solution or infinitely many solutions. Here, $$\left[ A |B \right]$$ denotes the augmented matrix.
1. $$A$$ is a $$5\times 6$$ matrix, $$\limfunc{rank}\left( A\right) =4$$ and $$\limfunc{rank}\left[ A|B \right] =4.$$
2. $$A$$ is a $$3\times 4$$ matrix, $$\limfunc{rank}\left( A\right) =3$$ and $$\limfunc{rank}\left[ A|B\right] =2.$$
3. $$A$$ is a $$4\times 2$$ matrix, $$\limfunc{rank}\left( A\right) =4$$ and $$\limfunc{rank}\left[ A|B \right] =4.$$
4. $$A$$ is a $$5\times 5$$ matrix, $$\limfunc{rank}\left( A\right) =4$$ and $$\limfunc{rank}\left[ A|B \right] =5.$$
5. $$A$$ is a $$4\times 2$$ matrix, $$\limfunc{rank}\left( A\right) =2$$ and $$\limfunc{rank}\left[ A|B \right] =2$$.
1. This says $$B$$ is in the span of four of the columns. Thus the columns are not independent. Infinite solution set.
2. This surely can’t happen. If you add in another column, the rank does not get smaller.
3. This says $$B$$ is in the span of the columns and the columns must be independent. You can’t have the rank equal 4 if you only have two columns.
4. This says $$B$$ is not in the span of the columns. In this case, there is no solution to the system of equations represented by the augmented matrix.
5. In this case, there is a unique solution since the columns of $$A$$ are independent.
Consider the system $$-5x+2y-z=0$$ and $$-5x-2y-z=0.$$ Both equations equal zero and so $$-5x+2y-z=-5x-2y-z$$ which is equivalent to $$y=0.$$ Does it follow that $$x$$ and $$z$$ can equal anything? Notice that when $$x=1$$, $$z=-4,$$ and $$y=0$$ are plugged in to the equations, the equations do not equal $$0$$. Why?
These are not legitimate row operations. They do not preserve the solution set of the system.
Balance the following chemical reactions.
1. $$KNO_{3}+H_{2}CO_{3}\rightarrow K_{2}CO_{3}+HNO_{3}$$
2. $$AgI+Na_{2}S\rightarrow Ag_{2}S+NaI$$
3. $$Ba_{3}N_{2}+H_{2}O\rightarrow Ba\left( OH\right) _{2}+NH_{3}$$
4. $$CaCl_{2}+Na_{3}PO_{4}\rightarrow Ca_{3}\left( PO_{4}\right) _{2}+NaCl$$
In the section on dimensionless variables it was observed that $$\rho V^{2}AB$$ has the units of force. Describe a systematic way to obtain such combinations of the variables which will yield something which has the units of force.
Consider the following diagram of four circuits.
The current in amps in the four circuits is denoted by $$I_{1},I_{2},I_{3},I_{4}$$ and it is understood that the motion is in the counter clockwise direction. If $$I_{k}$$ ends up being negative, then it just means the current flows in the clockwise direction.
In the above diagram, the top left circuit should give the equation $2I_{2}-2I_{1}+5I_{2}-5I_{3}+3I_{2}=5$ For the circuit on the lower left, you should have $4I_{1}+I_{1}-I_{4}+2I_{1}-2I_{2}=-10$ Write equations for each of the other two circuits and then give a solution to the resulting system of equations.
The other two equations are \begin{aligned} 6I_{3}-6I_{4}+I_{3}+I_{3}+5I_{3}-5I_{2} &=&-20 \\ 2I_{4}+3I_{4}+6I_{4}-6I_{3}+I_{4}-I_{1} &=&0\end{aligned} Then the system is $\begin{array}{c} 2I_{2}-2I_{1}+5I_{2}-5I_{3}+3I_{2}=5 \\ 4I_{1}+I_{1}-I_{4}+2I_{1}-2I_{2}=-10 \\ 6I_{3}-6I_{4}+I_{3}+I_{3}+5I_{3}-5I_{2}=-20 \\ 2I_{4}+3I_{4}+6I_{4}-6I_{3}+I_{4}-I_{1}=0 \end{array}$ The solution is: \begin{aligned} I_{1}&=& -\frac{750}{373} \\ I_{2}&=& -\frac{1421}{1119} \\ I_{3}&=& -\frac{3061}{1119} \\ I_{4}&=& -\frac{1718}{1119}\end{aligned}
Consider the following diagram of three circuits.
The current in amps in the four circuits is denoted by $$I_{1},I_{2},I_{3}$$ and it is understood that the motion is in the counter clockwise direction. If $$I_{k}$$ ends up being negative, then it just means the current flows in the clockwise direction.
Find $$I_{1},I_{2},I_{3}$$.
You have \begin{aligned} 2I_{1}+5I_{1}+3I_{1}-5I_{2} &=& 10 \\ I_{2}- I_{3} +3I_{2}+7I_{2}+5I_{2}-5I_{1} &=&-12 \\ 2I_{3}+4I_{3}+4I_{3}+I_{3}-I_{2} &=& 0\end{aligned} Simplifying this yields \begin{aligned} 10I_{1}-5I_{2} &=& 10 \\ -5I_{1} + 16I_{2}- I_{3} &=&-12 \\ -I_{2} + 11I_{3} &=&0\end{aligned} The solution is given by $I_{1}=\frac{218}{295},I_{2}=-\frac{154}{295},I_{3}=-\frac{14}{295}$ |
PDF e-Pub
## Section: New Results
### Mathematical modeling of platelet production
• In [10], we analyze the existence of oscillating solutions and the asymptotic convergence for a nonlinear delay differential equation arising from the modeling of platelet production. We consider four different cell compartments corresponding to different cell maturity levels: stem cells, megakaryocytic progenitors, megakaryocytes, and platelets compartments, and the quantity of circulating thrombopoietin (TPO), a platelet regulation cytokine.
• In [11], we analyze the stability of a differential equation with two delays originating from a model for a population divided into two subpopulations, immature and mature, and we apply this analysis to a model for platelet production. The dynamics of mature individuals is described by the following nonlinear differential equation with two delays: $x\prime \left(t\right)=-\lambda x\left(t\right)+g\left(x\left(t-{\tau }_{1}\right)\right)-g\left(x\left(t-{\tau }_{1}-{\tau }_{2}\right)\right){e}^{-\lambda {\tau }_{2}}$ . The method of $D$-decomposition is used to compute the stability regions for a given equilibrium. The center manifold theory is used to investigate the steady-state bifurcation and the Hopf bifurcation. Similarly, analysis of the center manifold associated with a double bifurcation is used to identify a set of parameters such that the solution is a torus in the pseudo- phase space. Finally, the results of the local stability analysis are used to study the impact of an increase of the death rate γ or of a decrease of the survival time ${\tau }_{2}$ of platelets on the onset of oscillations. We show that the stability is lost through a small decrease of survival time (from $8.4$ to 7 days), or through an important increase of the death rate (from $0.05$ to $0.625$ days${}^{-1}$).
• In [12], we analyze the stability of a system of differential equations with a threshold-defined delay arising from a model for platelet production. We consider a maturity-structured population of megakaryocyte progenitors and an age-structured population of platelets, where the cytokine thrombopoietin (TPO) increases the maturation rate of progenitors. Using the quasi-steady-state approximation for TPO dynamics and the method of characteristics, partial differential equations are reduced to a system of two differential equations with a state-dependent delay accounting for the variable maturation rate. We start by introducing the model and proving the positivity and boundedness of the solutions. Then we use a change of variables to obtain an equivalent system of two differential equations with a constant delay, from which we prove existence and uniqueness of the solution. As linearization around the unique positive steady state yields a transcendental characteristic equation of third degree, we introduce the main result, a new framework for stability analysis on models with fixed delays. This framework is then used to describe the stability of the megakaryopoiesis with respect to its parameters. Finally, with parameters being obtained and estimated from data, we give an example in which oscillations appear when the death rate of progenitors is increased 10-fold. |
# Why the matrix of $dG_0$ is $I_l$.
I am reading the proof of Local Immersion Theorem in Guillemin & Pallock's Differential Topology on Page 15. But I got lost at the following statement:
Define a map $G: U \times \mathbb{R}^{l-k} \rightarrow \mathbb{R}^{l}$ by $$G(x,z) = g(x) + (0,z).$$
The matrix of $dG_0$ is $I_l$.
Could some one explain why this is true? Thanks.
Earlier on that page, it is said that since $dg_0: \Bbb R^k \longrightarrow \Bbb R^l$ is injective, we may assume that $$dg_0 = \begin{pmatrix} I_k \\ 0_{k-l} \end{pmatrix}$$ by performing a change of basis. Hence we have that the map $$\tilde{g}: U \times \Bbb R^{l-k} \longrightarrow \Bbb R^l,$$ $$\tilde{g}(x, z) = (g(x), 0, \dots, 0)$$ has differential $$d\tilde{g}_0 = \begin{pmatrix} I_k & 0_{l-k} \\ 0_{l-k} & 0_{l-k} \end{pmatrix}.$$ The map $$h: U \times \Bbb R^{l-k} \longrightarrow \Bbb R^l,$$ $$h(x,z) = (0, \dots, 0, z)$$ clearly has differential $$dh_0 = \begin{pmatrix} 0_{k} & 0_{l-k} \\ 0_{l-k} & I_{l-k} \end{pmatrix}.$$ Therefore the given map $$G: U \times \Bbb R^{l-k} \longrightarrow \Bbb R^l,$$ $$G(x,z) = \tilde{g}(x ,z) + h(x,z)$$ has differential $$dG_0 = d\tilde{g}_0 + dh_0 = \begin{pmatrix} I_{k} & 0_{l-k} \\ 0_{l-k} & I_{l-k} \end{pmatrix} = I_l.$$
• Hi Henry, I tried a long time but still not able to figure out why $dg_o$ is injective. Could you give me a hint..? Thanks! – WishingFish Jun 30 '13 at 21:23
• @WishingFish How is the differential of the maps h and $\bar g$ calculated? – Idonotknow Oct 10 '18 at 21:07 |
## Saturday, February 27, 2010
### Chile Earthquake
USGS information.
USGS Centroid Moment Tensor Solution
10/02/27 06:34:09.75
BIO-BIO, CHILE
Epicenter: -35.988 -72.782
MW 8.6
USGS CENTROID MOMENT TENSOR
10/02/27 06:35:27.29
Centroid: -35.757 -72.389
Depth 36 No. of sta:189
Moment Tensor; Scale 10**21 Nm
Mrr= 7.40 Mtt=-0.53
Mpp=-6.87 Mrt= 0.06
Mrp=-5.95 Mtp=-0.76
Principal axes:
T Val= 9.57 Plg=69 Azm= 84
N -0.48 3 183
P -9.08 19 274
Best Double Couple:Mo=9.3*10**21
NP1:Strike= 11 Dip=25 Slip= 98
NP2: 182 65 86
----###
--------######---
---------#########---
----------############---
-----------##############----
------------###############----
-----------################----
-- -------#################----
-- P -------####### #######----
-- -------####### T #######----
------------####### #######----
------------################-----
-----------###############-----
-----------###############-----
-----------#############-----
---------###########-----
--------########-----
-------####------
###----
Global CMT Project Moment Tensor SolutionFebruary 27, 2010, OFFSHORE MAULE, CHILE, MW=8.8
Goran Ekstrom
Meredith Nettles
CENTROID-MOMENT-TENSOR SOLUTION
GCMT EVENT: C201002270634A
DATA: II IU CU IC G GE
L.P.BODY WAVES: 90S, 224C, T= 50
MANTLE WAVES: 108S, 296C, T=200
SURFACE WAVES: 75S, 166C, T= 50
TIMESTAMP: Q-20100227072327
CENTROID LOCATION:
ORIGIN TIME: 06:35:15.4 0.1
LAT:35.95S 0.01;LON: 73.15W 0.01
DEP: 24.1 0.4;TRIANG HDUR: 60.5
MOMENT TENSOR: SCALE 10**29 D-CM
RR= 1.040 0.004; TT=-0.030 0.002
PP=-1.010 0.003; RT= 0.227 0.022
RP=-1.510 0.032; TP=-0.120 0.002
PRINCIPAL AXES:
1.(T) VAL= 1.875;PLG=61;AZM= 74
2.(N) -0.064; 7; 176
3.(P) -1.810; 28; 270
BEST DBLE.COUPLE:M0= 1.84*10**29
NP1: STRIKE= 18;DIP=18;SLIP= 112
NP2: STRIKE=174;DIP=73;SLIP= 83
----#######
-------##########--
--------#############--
----------###############--
-----------################--
------------################---
------------#################--
-------------######## ######---
--- -------######## T ######---
--- P -------######## ######---
--- -------#################---
-------------###############---
-------------###############---
-------------#############---
------------###########----
-----------########----
----------#####----
------#----
USGS WPhase Moment Tensor Solution10/02/27 6:34:17
OFFSHORE MAULE, CHILE
Epicenter: -35.826 -72.668
MW 8.8
USGS/WPHASE CENTROID MOMENT TENSOR
10/02/27 06:34:17.00
Centroid: -35.826 -72.668
Depth 35 No. of sta: 28
Moment Tensor; Scale 10**24 Nm
Mrr= 0.93 Mtt= 0.01
Mpp=-0.94 Mrt=-0.01
Mrp=-1.72 Mtp=-0.15
Principal axes:
T Val= 1.96 Plg=59 Azm= 86
N 0.02 3 182
P -1.97 30 274
Best Double Couple:Mo=2.0*10**22
NP1:Strike= 16 Dip=14 Slip= 104
NP2: 181 75 86
----###
--------########-
----------##########-
-----------############--
-------------##############--
-------------################--
-------------################--
--------------#################--
---- -------####### #######--
---- P -------####### T #######--
---- -------####### #######--
--------------################---
-------------################--
-------------###############---
------------##############---
-----------###########---
---------#########---
-------######----
###----
Historic USGS Moment Tensor SolutionsMagnitude 8.8 OFFSHORE MAULE, CHILE
Saturday, February 27, 2010 at 06:34:14 UTC---------------------I have some basic questions of USGS. Why is the duration of the earthquake not listed? Why is the duration to different magnitudes not listed? Why is the wavelength and frequency not listed. Why are the raw seismographic waveforms not shown? It is frustrating to not have this data. ------MagnetosphereCoronal Mass Ejection before the earthquakeI will try and find data on this prediction and the coronal mass ejection associated.I have not found data on the magnetosphere yet. I need to go for now. I will place the data here. STEREO BEHIND 7-DAY SOLAR WIND DATAThere is an asymptote on Feb 24 2010.STEREO BEHIND 3-DAY SOLAR WIND DATA------------------Coronal Mass Ejection video Credit University of Michigan
## Thursday, February 25, 2010
### The Poincaré Conjecture
The Poincaré Conjecture is incorrect. Here is why. The Poincaré Conjecture states that the sphere is the simplest shape in 3+ dimensions. A sphere requires that every point related to the surface of the sphere is exactly the same distance from the local origin of the sphere. This shape has never been found in nature. The simplest shape in nature is the wavefront or vibration. The simplest complex shape is the bubble or teardrop (bubble in motion through a medium).
There are 4 types of wavefronts found in the Baryonic nature. They are the Bosons; Photon, W+/- Boson, Z Boson and Gluon. Each Boson is a composite of 3 bits of information plus the rotation or spin of the wave. I explain this in the paper Structure of Baryons. We see this shape at both the microscopic and macroscopic levels. This is found as the magnetosphere around planets and in single baryon collisions.
The simplest and only shape in Dark Matter nature is the 3 dimensional lattice of Anti-Gluons. The conjunction of anti-gluons is the anti-quark. Pressurization by baryonic magnetism against dark matter causes curves in the shape of the dark matter lattice. This interaction forms baryonic bubbles in dark matter.
### The Z Boson charge.
How does the Z Boson deal with charge?
Just before it is directly connected from the electron to the associated baryon then it is negative.
Just before it is connected from an electron to an electron, then it is positive.
The actual connection is neutral.
This explains lightning and plasma, where there is a charge just before the electrocution.
## Wednesday, February 24, 2010
### Asymptotes in 3 dimensions
I have an idea why this is so difficult. It has to do with asymptotes. More specifically asymptotes in 3 dimensions. I have hypothesized that the electron is an an asymptote between the three bosons. An asymptote in 3 dimensions has different behaviors than in 2 dimensions. It produces a shape that is like a funnel around zero.
This is how I picture the lepton electron.
There are 9 bits of information associated with this model of the electron. 3 bits for each Boson. This shows that the lepton electron is a result of the interaction of these 9 bits of information. Also this would be a 4 dimensional problem, but I am examining an instant.
My idea about the wave-particle problem is an asymptote to a vibration in 4 dimensions is a manifestation.
Great paper on asymptotes:
# Nikolay NOSKOV
1940...2008
The engineer of control of the nuclear reactor, Institute of nuclear Physics, National Nuclear Center, Republic of Kazakhstan
----------
This is amazing. Thanks again to Curt Youngs for directing me to this man's work.
I am reading his work and it is difficult at best. He understood something very important. I am going to be evaluating his work for some time. I am starting with "The phenomenon of retarded potentials"
His work affirms what I have been discussing. He understood longitudinal waves.
This is a piece of that paper. Near the end.
"Simulating the process of retarded potential on a moving test body [24] with the help of three variables: the velocity of the body, the force of interaction and the distance between the bodies, I have determined that the retarded potential occurs non-uniformly, in an undulatory fashion. It means, that the motion of bodies under the influence of a force (electrical or gravitational fields, atmospheric or liquid pressure differentials or others) occurs with longitudinal oscillations. The average velocity of a body driving with oscillations is a phase velocity.
The logical goal is finding out the direct or inverse proportionality of length of the oscillations for all three variables that lead to the expression:
Where:
length of oscillations;
H = Factor of proportionality
R= Distance between test and central bodies
F(R) = the law of interaction"
And that is just the beginning.
## Tuesday, February 23, 2010
### The difference between Aether and ether.
Aether is the medium in which things exist. Dark Energy
Ether is a class of organic compounds that contain an ether group — an oxygen atom connected to two alkyl or aryl groups — of general formula R–O–R. Wiki. An example is that stuff you drink.
### Asymmetry is not broken symmetry
This is from the Brookhaven National Lab.
Physicists have predicted an increasing probability of finding such bubbles, or local regions, of “broken” symmetry at extreme temperatures near transitions from one phase of matter to another. According to the predictions, the matter inside these bubbles would exhibit different symmetries — or behavior under certain simple transformations of space, time, and particle types — than the surrounding matter. In addition to the symmetry violations probed at RHIC, scientists have postulated that analogous symmetry-altering bubbles created at an even earlier time in the universe helped to establish the preference for matter over antimatter in our world. http://www.bnl.gov/bnlweb/pubaf/pr/PR_display.asp?prID=1073
------------
Ok, The concept of symmetry is not shown by any data. Never is there a symmetrical interaction. Broken symmetry requires symmetry to exist.
It is easy to disprove symmetry through logic. Each atom has a lepton, electron, expressing information about the atom. Never is this lepton in the same position nor does it express the same bits of information at any different time. QED.
Symmetry as a concept stems from the need to protect an omnipotent perfect god, thus is not logical nor evidenced in the collected data.
## Sunday, February 21, 2010
### Tesla transverse and longitudinal electric waves
Please see http://www.borderlands.com for the biggest collection of old, rare and hard to find groundbreaking alternative energy and medical material. Tesla transverse and longitudinal electric waves another a lab demonstration with Eric Dollard, Tom Brown and Peter Lindemann. Specially focusing on Tesla's longitudinal electricity. This is a great practical demonstration on forgotten Tesla longitudinal electricity technologies.
Youtube has some great videos also. https://m.youtube.com/watch?v=6BnCUBKgnnc
------------------------
This is important. In the Standard Vibration Model, this is the interaction between the W+/-Boson and the Z Boson which forms the lepton, electron. This vibration lepton electron causes the oscillation photon. Since the information of the Z Boson is the inverse of that of the W+/-Boson the asymptote where the inversion occurs is the vibration lepton electron.
The tri-oscillation photon vibrates the lepton electron providing information to the baryon about another baryon conductivity and capacitance. Oops, this should say the photon provides information on temperature and pressure not conductivity and capacitance. It is the Z and W+/- Bosons that provide information on conductivity and capacitance.
-----
Thanks to Curt Youngs for sending it to me. Who says 'They are saying that in the longitude waves, the magnetic and dielectric fields are parallel. The frequency goes up because the speed of the waves are faster than c. They are emitted radially, which means that from the spherical point of view there is a contraction and expansion of the expanding sphere wave front.
This is very similar to the far field, where the electric and magnetic fields are parallel. ' |
# SCIP
Solving Constraint Integer Programs
heur_linesearchdiving.h File Reference
## Detailed Description
LP diving heuristic that fixes variables with a large difference to their root solution.
Diving heuristic: Iteratively fixes some fractional variable and resolves the LP-relaxation, thereby simulating a depth-first-search in the tree. Line search diving chooses the variable with the greatest difference of its root LP solution and the current LP solution, hence, the variable that developed most. It is fixed to the next integer in the direction it developed. One-level backtracking is applied: If the LP gets infeasible, the last fixing is undone, and the opposite fixing is tried. If this is infeasible, too, the procedure aborts.
Definition in file heur_linesearchdiving.h.
#include "scip/def.h"
#include "scip/type_retcode.h"
#include "scip/type_scip.h"
Go to the source code of this file.
## Functions
SCIP_EXPORT SCIP_RETCODE SCIPincludeHeurLinesearchdiving (SCIP *scip) |
Lemma 8.11.3. Let $\mathcal{C}$ be a site. Let $p : \mathcal{X} \to \mathcal{C}$ and $q : \mathcal{Y} \to \mathcal{C}$ be stacks in groupoids. Let $F : \mathcal{X} \to \mathcal{Y}$ be a $1$-morphism of categories over $\mathcal{C}$. The following are equivalent
1. For some (equivalently any) factorization $F = F' \circ a$ where $a : \mathcal{X} \to \mathcal{X}'$ is an equivalence of categories over $\mathcal{C}$ and $F'$ is fibred in groupoids, the map $F' : \mathcal{X}' \to \mathcal{Y}$ is a gerbe (with the topology on $\mathcal{Y}$ inherited from $\mathcal{C}$).
2. The following two conditions are satisfied
1. for $y \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{Y})$ lying over $U \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{C})$ there exists a covering $\{ U_ i \to U\}$ in $\mathcal{C}$ and objects $x_ i$ of $\mathcal{X}$ over $U_ i$ such that $F(x_ i) \cong y|_{U_ i}$ in $\mathcal{Y}_{U_ i}$, and
2. for $U \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{C})$, $x, x' \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{X}_ U)$, and $b : F(x) \to F(x')$ in $\mathcal{Y}_ U$ there exists a covering $\{ U_ i \to U\}$ in $\mathcal{C}$ and morphisms $a_ i : x|_{U_ i} \to x'|_{U_ i}$ in $\mathcal{X}_{U_ i}$ with $F(a_ i) = b|_{U_ i}$.
Proof. By Categories, Lemma 4.35.15 there exists a factorization $F = F' \circ a$ where $a : \mathcal{X} \to \mathcal{X}'$ is an equivalence of categories over $\mathcal{C}$ and $F'$ is fibred in groupoids. By Categories, Lemma 4.35.16 given any two such factorizations $F = F' \circ a = F'' \circ b$ we have that $\mathcal{X}'$ is equivalent to $\mathcal{X}''$ as categories over $\mathcal{Y}$. Hence Lemma 8.11.2 guarantees that the condition (1) is independent of the choice of the factorization. Moreover, this means that we may assume $\mathcal{X}' = \mathcal{X} \times _{F, \mathcal{Y}, \text{id}} \mathcal{Y}$ as in the proof of Categories, Lemma 4.35.15
Let us prove that (a) and (b) imply that $\mathcal{X}' \to \mathcal{Y}$ is a gerbe. First of all, by Lemma 8.10.5 we see that $\mathcal{X}' \to \mathcal{Y}$ is a stack in groupoids. Next, let $y$ be an object of $\mathcal{Y}$ lying over $U \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{C})$. By (a) we can find a covering $\{ U_ i \to U\}$ in $\mathcal{C}$ and objects $x_ i$ of $\mathcal{X}$ over $U_ i$ and isomorphisms $f_ i : F(x_ i) \to y|_{U_ i}$ in $\mathcal{Y}_{U_ i}$. Then $(U_ i, x_ i, y|_{U_ i}, f_ i)$ are objects of $\mathcal{X}'_{U_ i}$, i.e., the second condition of Definition 8.11.1 holds. Finally, let $(U, x, y, f)$ and $(U, x', y, f')$ be objects of $\mathcal{X}'$ lying over the same object $y \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{Y})$. Set $b = (f')^{-1} \circ f$. By condition (b) we can find a covering $\{ U_ i \to U\}$ and isomorphisms $a_ i : x|_{U_ i} \to x'|_{U_ i}$ in $\mathcal{X}_{U_ i}$ with $F(a_ i) = b|_{U_ i}$. Then
$(a_ i, \text{id}) : (U, x, y, f)|_{U_ i} \to (U, x', y, f')|_{U_ i}$
is a morphism in $\mathcal{X}'_{U_ i}$ as desired. This proves that (2) implies (1).
To prove that (1) implies (2) one reads the arguments in the preceding paragraph backwards. Details omitted. $\square$
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar). |
R igraph manual pages
Use this if you are using igraph from R
Finding community structure by multi-level optimization of modularity
Description
This function implements the multi-level modularity optimization algorithm for finding community structure, see references below. It is based on the modularity measure and a hierarchial approach.
Usage
cluster_louvain(graph, weights = NULL)
Arguments
graph The input graph. weights Optional positive weight vector. If the graph has a weight edge attribute, then this is used by default. Supply NA here if the graph has a weight edge attribute, but you want to ignore it. Larger edge weights correspond to stronger connections.
Details
This function implements the multi-level modularity optimization algorithm for finding community structure, see VD Blondel, J-L Guillaume, R Lambiotte and E Lefebvre: Fast unfolding of community hierarchies in large networks, http://arxiv.org/abs/arXiv:0803.0476 for the details.
It is based on the modularity measure and a hierarchial approach. Initially, each vertex is assigned to a community on its own. In every step, vertices are re-assigned to communities in a local, greedy way: each vertex is moved to the community with which it achieves the highest contribution to modularity. When no vertices can be reassigned, each community is considered a vertex on its own, and the process starts again with the merged communities. The process stops when there is only a single vertex left or when the modularity cannot be increased any more in a step.
This function was contributed by Tom Gregorovic.
Value
cluster_louvain returns a communities object, please see the communities manual page for details.
Author(s)
Tom Gregorovic, Tamas Nepusz ntamas@gmail.com
References
Vincent D. Blondel, Jean-Loup Guillaume, Renaud Lambiotte, Etienne Lefebvre: Fast unfolding of communities in large networks. J. Stat. Mech. (2008) P10008
See communities for extracting the membership, modularity scores, etc. from the results.
Other community detection algorithms: cluster_walktrap, cluster_spinglass, cluster_leading_eigen, cluster_edge_betweenness, cluster_fast_greedy, cluster_label_prop
Examples
# This is so simple that we will have only one level
g <- make_full_graph(5) %du% make_full_graph(5) %du% make_full_graph(5)
g <- add_edges(g, c(1,6, 1,11, 6, 11))
cluster_louvain(g)
[Package igraph version 1.2.5 Index] |
# Template talk:R from other capitalisation
WikiProject Redirect (Rated Template-class)
This page is within the scope of WikiProject Redirect, a collaborative effort to improve the standard of redirects and their categorization on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Note: This banner should be placed on the talk pages of project, template and category pages that exist and operate to maintain redirects.
This banner is not designed to be placed on the talk pages of most redirects and never on the talk pages of mainspace redirects. For more information see the template documentation.
Template This template does not require a rating on the quality scale.
## Title
The use of "alternate" for "another", "different" or "alternative" is wrong. This page needs to be renamed. — Chameleon 13:00, 17 May 2004 (UTC)
It's the terminology used on the main page on the subject. -- User:Docu
I was referring about the main page on the subject, not this talk page. Wake up. — Chameleon 11:46, 1 Jun 2004 (UTC)
I'll be renaming this page within a week unless someone can put forward arguments for alternate being better than the correct alternative or other. — Chameleon 09:21, 4 Jun 2004 (UTC)
This page (as well as the category) should use "from" instead of "for", like the other redirect templates. ··gracefool | 08:12, 16 Jun 2005 (UTC)
No problem for this one, so I moved it to {{R from other capitalisation}}. I used "other" instead of "alternative" as the later (and "alternate") seem to create lengthy discussions. -- User:Docu
I always thought the answer to that was "alternative" is a noun and "alternate" is not. But "other" is indeed less ambiguous. --Stratadrake 22:29, 2 January 2007 (UTC)
## Duplicate templates...
So how did we end up with eight templates saying the exact thing? It's misleading to say that any one of them redirects to another when this is the case, and couldn't we establish redirects without causing recursion issues? --Stratadrake 22:27, 2 January 2007 (UTC)
Don't know about then, now this may be it. There are about 20 redirects to this template currently (r alt caps; R from alternate caps; R from alternative caps; etc.) -MornMore (talk) 10:01, 20 November 2009 (UTC)
## Request for category modification
`{{editprotected}}` Hello! As you can tell if you browse through Category:Redirect templates, they've been thinned out into a bunch of subcategories for a while. In keeping with this, whenever someone can get around to it, I'd like for this template to be placed in the following categories: Templates for redirects from a modification of the target name (with a sort key like "Capitalisation") and All redirect templates (with no sort key whatsoever). It should also be removed from Category:Redirect templates. Thanks! ${\displaystyle \sim }$ Lenoxus " * " 02:05, 24 May 2008 (UTC)
Done I moved the categories to the doc page, so you can edit them at your own will now. Let me know if you need other changes made to the template. --CapitalR (talk) 17:42, 24 May 2008 (UTC)
Thanks! Now I have a question... Why does the documentation put the template into the categories Category:Redirects from other capitalisations and Category:Unprintworthy redirects (the ones that it sorts tagged redirects into)? I've seen this elsewhere, but it doesn't quite make sense to me... ${\displaystyle \sim }$ Lenoxus " * " 18:29, 24 May 2008 (UTC)
## One of the below templates?
The template contains the sentence "Apply one of the below templates to redirects created for this purpose." What does this refer to? Please explain, or preferable, clarify the text so that mortal being like myself can understand it. -- Jitse Niesen (talk) 16:55, 10 August 2008 (UTC)
## It leads to the title in accordance with the Wikipedia naming conventions for capitalisation?
Maybe someday, but for now Wikipedia remains a work in progress, and we still have many pages under non-standard titles, including cases where the title in accordance with the Wikipedia naming conventions for capitalisation unfortunately redirects to an incorrect title. Until we believe that we have completed the task of fixing these, I would prefer that the template text not give undue legitimacy to the status quo. Thank you - Walrus heart (talk) 17:14, 15 April 2009 (UTC)
## Associated category up for CSD
The category linked to this template is currently up for deletion here. If anyone from here has anything to add to the discussion, especially if anyone can provide original or current purposes that the category and/or template serve, the input would be greatly appreciated. - TexasAndroid (talk) 13:46, 4 May 2009 (UTC)
## Spelling error
The word "capitalisation" is an incorrect spelling. The word is "capitalization." Can we correct this? --JohnDoe0007 (talk) 13:13, 23 May 20
wikt:capitalisation. So no. Rich Farmbrough, 05:57, 30 May 2009 (UTC).
One of those cases where those of us with US-only spellcheckers get caught. *sighs and hits "Add to dictionary" in his browser*. --Philosopher Let us reason together. 06:21, 30 May 2009 (UTC)
## Creating redirects with bot
Please, check this Wikipedia:Bots/Requests for approval/BOTijo 2 (again). emijrp (talk) 15:46, 25 June 2009 (UTC)
## Edit request from Ost316, 10 August 2011
Change categorization of printworthiness from "[[Category:Unprintworthy redirects]]" to "[[Category:{{#ifeq:{{{1}}}|printworthy||Un}}printworthy redirects]]" to allow flagging valid alternate capitalizations as is done on {{R from plural}}. The auto-categorization as unprintworthy was also brought up above. —Ost (talk) 19:04, 10 August 2011 (UTC)
Done. Ucucha (talk) 21:06, 11 August 2011 (UTC)
## Request for multiple edits: Interwiki link hack
This request is for the five template pages listed below. Please prefix links with [[w:]], and use a piped link when appropriate, in the following redirect templates, similar to my change to “Template:R from short name”.
See WT:Template messages/Redirect pages#Redirect display text for background and aborted version of this edit request. This is a workaround for Bugzilla:7304. Vadmium (talk) 02:35, 11 August 2011 (UTC).
All done. Ucucha (talk) 21:02, 11 August 2011 (UTC)
## What are the Wikipedia naming conventions for capitalization?
As the template says, the page that uses the template leads to the title of the same page page in accordance with the Wikipedia naming conventions for capitalization. But what are those naming conventions? That should be clear from the text that is inserted, or a link to a list of such conventions should be given.
For example, the page Learning vector quantization redirects to Learning Vector Quantization, as it claims that the latter is better capitalization, but it does not say which naming convention is used to determine that. Most other articles for scientific methods are not capitalized in this way, and I feel that this article is being inconsistent in that way. If possible, I would like to delete the redirect page in order to be able to make a move on the article and change the capitalization to make it consistent. —Kri (talk) 17:26, 15 February 2014 (UTC)
The naming conventions are at Wikipedia:Manual of Style/Capital letters (MOS:CAPS). Adding a link to this template seems like a good idea to me. As learning vector quantization is not a proper name, the title has been corrected to lower case. Wbm1058 (talk) 13:42, 9 March 2014 (UTC)
But the hatnote at MOS:CAPS says For the style guideline on capitalization in article titles, see Wikipedia:Naming conventions (capitalization).
This template populates Category:Redirects from other capitalisations, which says: "The pages in this category are redirects from alternative capitalisation, per Wikipedia:Naming conventions (capitalization)". Wbm1058 (talk) 13:52, 10 March 2014 (UTC)
Done – I updated the template to add a link to the Wikipedia naming conventions for capitalization. Wbm1058 (talk) 15:00, 10 March 2014 (UTC)
## Code cleanup
I've compressed the code a lot, at Template:R from other capitalisation/sandbox. Compare it before and after. Big enough change it's worth sandboxing this and seeing if anyone detects an error in it. Seems to work fine for me. — SMcCandlish ¢ ≽ʌⱷ҅ʌ≼ 05:28, 24 August 2014 (UTC)
To SMcCandlish: The only "error" I detected was that there were two category links. This was easily fixed by a parser function in the "name=" parameter. Pretty impressive! Things to consider might be that the sandbox code also subdues the text that follows the first sentence whenever "of=" or "reason=" is filled. That text would still apply even if not subdued and, in my humble opinion, is needed. I would also want to include a second unnamed parameter to coincide with the "of=" and "reason=" named parameters. That would make this functionality work when this rcat is called by the {{This is a redirect}} template by use of its "n#=" parameter. The only other thing I can think of right now is that {{Redirect template}} was designed to be used twice in each rcat, and while this interesting method of using only the one instance of the template with "embeds" strategically inserted where needed may be functional, it might be something we would want to consider very carefully before implementation. – Paine Ellsworth CLIMAX! 08:50, 15 October 2014 (UTC)
## Code fix
The code has been repaired – there were two primary problems and one secondary one:
• The code left a superfluous "(space)to(space)" at the end of the first sentence when either the "of=" or "reason=" parameter was used,
• the code subdued critical text when either the "of=" or "reason=" parameter was used, and the secondary issue that
• this "of/reason" functionality was not able to be deployed when this rcat used the {{This is a redirect}} template to tag redirects.
All fixed. – Paine Ellsworth CLIMAX! 06:24, 15 October 2014 (UTC)
Is this in reference to my sandbox code, or the "live" template? — SMcCandlish ¢ ≽ʌⱷ҅ʌ≼ 15:55, 15 October 2014 (UTC)
Forgive me for my lack of clarity – this describes what I did to the live template. The only thing I fixed in the sandbox was the double link to the category. It all works fine except for the considerations I noted above. – Paine 18:35, 15 October 2014 (UTC)
## Can redirects to sections qualify as other capitalization?
See specifically PubMed Identifier and previous discussion at User talk:Senator2029#PubMed Identifier redirect. Senator2029's response suggests that any relevant template documentation needs to make our decision explicit.
(I disagree somewhat with the redirect being unprintworthy, because all the links coming from `{{cite journal}}` would make the impression that we don't care about cleaning up unprintworthy redirects. Also, I don't see where the capitalized "Identifier" is declared officially incorrect; on the other hand, this section of PubMed's official FAQ does capitalize "Identifier." But I digress.) --SoledadKabocha (talk) 18:36, 19 November 2014 (UTC)
## Template-protected edit request on 9 September 2015
Replace "different" with "other" to match Template:R from plural and Template:R to plural. GeoffreyT2000 (talk) 04:39, 9 September 2015 (UTC)
DoneMr. Stradivarius ♪ talk ♪ 04:49, 9 September 2015 (UTC)
## Template-protected edit request on 27 November 2016
Replace "Miscapitisations" (typo, I checked Wiktionary) with "Miscapitalisations". Mihirpmehta (talk) 18:28, 27 November 2016 (UTC)
Donexaosflux Talk 04:18, 28 November 2016 (UTC) |
GMAT Question of the Day - Daily to your Mailbox; hard ones only
It is currently 20 Oct 2019, 14:28
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Inequality and absolute value questions from my collection
Author Message
TAGS:
### Hide Tags
Math Expert
Joined: 02 Sep 2009
Posts: 58434
Re: Inequality and absolute value questions from my collection [#permalink]
### Show Tags
05 Apr 2018, 12:24
Bunuel wrote:
4. Are x and y both positive?
(1) 2x-2y=1
(2) x/y>1
(1) 2x-2y=1. Well this one is clearly insufficient. You can do it with number plugging OR consider the following: x and y both positive means that point (x,y) is in the I quadrant. 2x-2y=1 --> y=x-1/2, we know it's an equation of a line and basically question asks whether this line (all (x,y) points of this line) is only in I quadrant. It's just not possible. Not sufficient.
(2) x/y>1 --> x and y have the same sign. But we don't know whether they are both positive or both negative. Not sufficient.
(1)+(2) Again it can be done with different approaches. You should just find the one which is the less time-consuming and comfortable for you personally.
One of the approaches:
$$2x-2y=1$$ --> $$x=y+\frac{1}{2}$$
$$\frac{x}{y}>1$$ --> $$\frac{x-y}{y}>0$$ --> substitute x --> $$\frac{1}{y}>0$$ --> $$y$$ is positive, and as $$x=y+\frac{1}{2}$$, $$x$$ is positive too. Sufficient.
Hi Bunuel, I solved it as below
(1) 2x-2y=1 -> x-y= 1/2. This means x>y by 1/2 but x can be 1/2 and y=0 so this is insufficient
(2) x/y=1 -> x>y which is no different from (1) so insufficient again
But I didn't follow your approach on combining the 2 statements, how did you get x to be substituted by 1/y? The fact that x is positive is already proved by (1) isn't it?
The red parts are not correct.
$$\frac{x}{y}>1$$ does not mean that $$x>y$$. If both $$x$$ and $$y$$ are positive, then $$x>y$$, BUT if both are negative, then $$x<y$$. What you are actually doing when writing $$x>y$$ from $$\frac{x}{y}>1$$ is multiplying both parts of inequality by $$y$$: never multiply (or reduce) an inequality by variable (or by an expression with variable) if you don't know the sign of it or are not certain that variable (or expression with variable) doesn't equal to zero. So from (2) $$\frac{x}{y}>1$$, we can only deduce that $$x$$ and $$y$$ have the same sigh (either both positive or both negative).
Also, from (1) we can only say that x > y (because x = y + 1/2 = y + positive number) but we cannot say that x is positive. For example, consider x = -2 and y = -2.5.
Finally, when we substitute $$x = y + \frac{1}{2}$$ into $$\frac{x}{y}>1$$ we'll get:
$$\frac{y + \frac{1}{2}}{y}>1$$;
$$1 + \frac{1}{2y}>1$$;
$$\frac{1}{2y}>0$$;
$$y > 0$$.
If $$y > 0$$, then $$x > y > 0$$.
Hope it helps.
_________________
Retired Moderator
Joined: 27 Oct 2017
Posts: 1259
Location: India
GPA: 3.64
WE: Business Development (Energy and Utilities)
Re: Inequality and absolute value questions from my collection [#permalink]
### Show Tags
17 Apr 2018, 09:29
Hi
This is really an interesting question,
I want to share an alternative approach.
Concept |x-a|=b, it means distance of x from a = b
it can be logically deducted that , if |x-a|=|y-a|, it means either x= y or the a and y are equidistant from a, hence average of x & y = a , or x+y = 2a
it can be proved but if visualization can save time.
7. |x+2|=|y+2| what is the value of x+y?
|x+2|=|y+2|, it means either x and y are equal or are equidistant from -2.
(1) xy<0
it means one is positive and one is negative, hence they are not equal. so x and y are equidistant from -2, hence sum of x and y = -4. SUFFICIENT.
(2) x>2 y<2
x not equal to y, hence their sum is -4. SUFFICIENT.
Bunuel wrote:
7. |x+2|=|y+2| what is the value of x+y?
(1) xy<0
(2) x>2 y<2
This one is quite interesting.
First note that |x+2|=|y+2| can take only two possible forms:
A. x+2=y+2 --> x=y. This will occur if and only x and y are both >= than -2 OR both <= than -2. In that case x=y. Which means that their product will always be positive or zero when x=y=0.
B. x+2=-y-2 --> x+y=-4. This will occur when either x or y is less then -2 and the other is more than -2.
When we have scenario A, xy will be nonnegative only. Hence if xy is negative we have scenario B and x+y=-4. Also note that vise-versa is not right. Meaning that we can have scenario B and xy may be positive as well as negative.
(1) xy<0 --> We have scenario B, hence x+y=-4. Sufficient.
(2) x>2 and y<2, x is not equal to y, we don't have scenario A, hence we have scenario B, hence x+y=-4. Sufficient.
_________________
Retired Moderator
Joined: 27 Oct 2017
Posts: 1259
Location: India
GPA: 3.64
WE: Business Development (Energy and Utilities)
Re: Inequality and absolute value questions from my collection [#permalink]
### Show Tags
18 Apr 2018, 01:15
Is |x-1| <1?
Since Absolute value function is always non negative, we can square both sides,
We get is (x-1)^2<1?
Statement 1 is (x-1)^2< = 1
If (x-1)^2<1: answer to question is yes
(x-1)^2= 1: hence answer is No,
So Statement 1 is NOT SUFFICIENT.
Statement2: Question stem (x-1)^2<1?
Can be reduced to is x(x-2)<0
Or is 0<x<2
Now St2: x^2-1>0 gives x >1 or x <-1
Now x can be -2, answer to question stem is No
Or 1.5 answer to question stem is yes.
Hence NOT SUFFICIENT.
Combining both Statement 1 & 2, we get
X can be 2, answer to question stem- NO
Or 1.5 answer to question stem is yes.
Buttercup3 wrote:
Bunuel wrote:
13. Is |x-1| < 1?
(1) (x-1)^2 <= 1
(2) x^2 - 1 > 0
Last one.
Is |x-1| < 1? Basically the question asks is 0<x<2 true?
(1) (x-1)^2 <= 1 --> x^2-2x<=0 --> x(x-2)<=0 --> 0<=x<=2. x is in the range (0,2) inclusive. This is the trick here. x can be 0 or 2! Else it would be sufficient. So not sufficient.
(2) x^2 - 1 > 0 --> x<-1 or x>1. Not sufficient.
(1)+(2) Intersection of the ranges from 1 and 2 is 1<x<=2. Again 2 is included in the range, thus as x can be 2, we cannot say for sure that 0<x<2 is true. Not sufficient.
Still not clear on this one.
Can you please explain why is 1 insufficient I am not able to eliminate 1 also why is not C sufficient?
_________________
Intern
Joined: 09 Sep 2017
Posts: 1
Location: India
Schools: Rotman '21
GMAT 1: 640 Q45 V34
GPA: 3.9
WE: Information Technology (Consulting)
Re: Inequality and absolute value questions from my collection [#permalink]
### Show Tags
31 Jul 2018, 14:56
Bunuel wrote:
8. a*b#0. Is |a|/|b|=a/b?
(1) |a*b|=a*b
(2) |a|/|b|=|a/b|
|a|/|b|=a/b is true if and only a and b have the same sign, meaning a/b is positive.
(1) |a*b|=a*b, means a and b are both positive or both negative, as LHS is never negative (well in this case LHS is positive as neither a nor b equals to zero). Hence a/b is positive in any case. Hence |a|/|b|=a/b. Sufficient.
(2) |a|/|b|=|a/b|, from this we can not conclude whether they have the same sign or not. Not sufficient.
Can someone clarify whether |a/b|=|a|\|b| exists as a property?
Math Expert
Joined: 02 Sep 2009
Posts: 58434
Re: Inequality and absolute value questions from my collection [#permalink]
### Show Tags
31 Jul 2018, 20:49
arunjohn43 wrote:
Bunuel wrote:
8. a*b#0. Is |a|/|b|=a/b?
(1) |a*b|=a*b
(2) |a|/|b|=|a/b|
|a|/|b|=a/b is true if and only a and b have the same sign, meaning a/b is positive.
(1) |a*b|=a*b, means a and b are both positive or both negative, as LHS is never negative (well in this case LHS is positive as neither a nor b equals to zero). Hence a/b is positive in any case. Hence |a|/|b|=a/b. Sufficient.
(2) |a|/|b|=|a/b|, from this we can not conclude whether they have the same sign or not. Not sufficient.
Can someone clarify whether |a/b|=|a|\|b| exists as a property?
Yes, $$|\frac{a}{b}|=\frac{|a|}{|b|}$$ and $$|ab|=|a|*|b|$$ are generally true.
_________________
Senior Manager
Joined: 10 Apr 2018
Posts: 267
Location: United States (NC)
Re: Inequality and absolute value questions from my collection [#permalink]
### Show Tags
14 Sep 2018, 11:36
Bunuel wrote:
2. If y is an integer and y = |x| + x, is y = 0?
Notice that from $$y=|x|+x$$ it follows that y cannot be negative:
If$$x>0$$, then $$y=x+x=2x=2*positive=positive$$;
If $$x\leq{0}$$ (when x is negative or zero) then $$y=-x+x=0$$.
(1) $$x<0$$ --> $$y=|x|+x=-x+x=0$$. Sufficient.
(2) $$y<1$$. We found out above that y cannot be negative and we are given that y is an integer, hence $$y=0$$. Sufficient.
HiBunuel,
Shouldn't it be like this
if |x|=x if $$x\geq 0$$
then y=x+x which means y=2x
and if f |x|= - x if x<0
then
y=-x+x which means y=0
Probus.
_________________
Probus
~You Just Can't beat the person who never gives up~ Babe Ruth
Math Expert
Joined: 02 Sep 2009
Posts: 58434
Re: Inequality and absolute value questions from my collection [#permalink]
### Show Tags
15 Sep 2018, 03:53
Probus wrote:
Bunuel wrote:
2. If y is an integer and y = |x| + x, is y = 0?
Notice that from $$y=|x|+x$$ it follows that y cannot be negative:
If$$x>0$$, then $$y=x+x=2x=2*positive=positive$$;
If $$x\leq{0}$$ (when x is negative or zero) then $$y=-x+x=0$$.
(1) $$x<0$$ --> $$y=|x|+x=-x+x=0$$. Sufficient.
(2) $$y<1$$. We found out above that y cannot be negative and we are given that y is an integer, hence $$y=0$$. Sufficient.
HiBunuel,
Shouldn't it be like this
if |x|=x if $$x\geq 0$$
then y=x+x which means y=2x
and if f |x|= - x if x<0
then
y=-x+x which means y=0
Probus.
Both are correct. If x = 0, then |0| = 0 as well |0| = -0 = 0.
_________________
Intern
Joined: 29 Apr 2018
Posts: 1
Re: Inequality and absolute value questions from my collection [#permalink]
### Show Tags
16 Nov 2018, 11:02
Bunuel wrote:
9. Is n<0?
(1) -n=|-n|
(2) n^2=16
(1) -n=|-n|, means that either n is negative OR n equals to zero. We are asked whether n is negative so we can not be sure. Not sufficient.
(2) n^2=16 --> n=4 or n=-4. Not sufficient.
(1)+(2) n is negative OR n equals to zero from (1), n is 4 or -4 from (2). --> n=-4, hence it's negative, sufficient.
Is (1) even a correct clue? How can an absolute value be negative? Please help me understand.
Math Expert
Joined: 02 Sep 2009
Posts: 58434
Re: Inequality and absolute value questions from my collection [#permalink]
### Show Tags
18 Nov 2018, 01:45
PB1712989 wrote:
Bunuel wrote:
9. Is n<0?
(1) -n=|-n|
(2) n^2=16
(1) -n=|-n|, means that either n is negative OR n equals to zero. We are asked whether n is negative so we can not be sure. Not sufficient.
(2) n^2=16 --> n=4 or n=-4. Not sufficient.
(1)+(2) n is negative OR n equals to zero from (1), n is 4 or -4 from (2). --> n=-4, hence it's negative, sufficient.
Is (1) even a correct clue? How can an absolute value be negative? Please help me understand.
We have -n=|-n|. Now, if n is negative or 0 then -n is positive or 0, so the absolute value, which is on the right hand side, does not equal to negative number. All is correct there.
_________________
Intern
Joined: 16 Feb 2019
Posts: 2
Location: India
Concentration: General Management, Finance
GMAT 1: 490 Q44 V15
GMAT 2: 580 Q48 V22
GMAT 3: 590 Q48 V23
WE: Engineering (Computer Software)
Re: Inequality and absolute value questions from my collection [#permalink]
### Show Tags
25 Mar 2019, 05:30
Bunuel wrote:
10. If n is not equal to 0, is |n| < 4 ?
(1) n^2 > 16
(2) 1/|n| > n
Question basically asks is -4<n<4 true.
(1) n^2>16 --> n>4 or n<-4, the answer to the question is NO. Sufficient.
(2) 1/|n| > n, this is true for all negative values of n, hence we can not answer the question. Not sufficient.
also , if n lies between 0 to 1 then (2) is possible Ex: n=0.1
n=( - infinity, 1) -> not sufficient
Intern
Joined: 13 Feb 2018
Posts: 2
Re: Inequality and absolute value questions from my collection [#permalink]
### Show Tags
13 Oct 2019, 17:40
Bunuel wrote:
3. Is x^2 + y^2 > 4a?
(1) (x + y)^2 = 9a
(2) (x – y)^2 = a
(1) (x + y)^2 = 9a --> x^2+2xy+y^2=9a. Clearly insufficient.
(2) (x – y)^2 = a --> x^2-2xy+y^2=a. Clearly insufficient.
(1)+(2) Add them up 2(x^2+y^2)=10a --> x^2+y^2=5a. Also insufficient as x,y, and a could be 0 and x^2 + y^2 > 4a won't be true, as LHS and RHS would be in that case equal to zero. Not sufficient.
The question doesn't ask us to calculate the value of x^2+y^2, then why we need to consider the values of x,y, and a here?, could you please explain.
Intern
Joined: 31 Jan 2013
Posts: 17
Re: Inequality and absolute value questions from my collection [#permalink]
### Show Tags
14 Oct 2019, 22:00
Hi Bunuel,
Can you please explain your analysis for statement (a). I don't think I understood it well
Bunuel wrote:
7. |x+2|=|y+2| what is the value of x+y?
(1) xy<0
(2) x>2 y<2
This one is quite interesting.
First note that |x+2|=|y+2| can take only two possible forms:
A. x+2=y+2 --> x=y. This will occur if and only x and y are both >= than -2 OR both <= than -2. In that case x=y. Which means that their product will always be positive or zero when x=y=0.
B. x+2=-y-2 --> x+y=-4. This will occur when either x or y is less then -2 and the other is more than -2.
When we have scenario A, xy will be nonnegative only. Hence if xy is negative we have scenario B and x+y=-4. Also note that vise-versa is not right. Meaning that we can have scenario B and xy may be positive as well as negative.
(1) xy<0 --> We have scenario B, hence x+y=-4. Sufficient.
(2) x>2 and y<2, x is not equal to y, we don't have scenario A, hence we have scenario B, hence x+y=-4. Sufficient.
Math Expert
Joined: 02 Sep 2009
Posts: 58434
Re: Inequality and absolute value questions from my collection [#permalink]
### Show Tags
14 Oct 2019, 22:05
crack1991 wrote:
Hi Bunuel,
Can you please explain your analysis for statement (a). I don't think I understood it well
Bunuel wrote:
7. |x+2|=|y+2| what is the value of x+y?
(1) xy<0
(2) x>2 y<2
This one is quite interesting.
First note that |x+2|=|y+2| can take only two possible forms:
A. x+2=y+2 --> x=y. This will occur if and only x and y are both >= than -2 OR both <= than -2. In that case x=y. Which means that their product will always be positive or zero when x=y=0.
B. x+2=-y-2 --> x+y=-4. This will occur when either x or y is less then -2 and the other is more than -2.
When we have scenario A, xy will be nonnegative only. Hence if xy is negative we have scenario B and x+y=-4. Also note that vise-versa is not right. Meaning that we can have scenario B and xy may be positive as well as negative.
(1) xy<0 --> We have scenario B, hence x+y=-4. Sufficient.
(2) x>2 and y<2, x is not equal to y, we don't have scenario A, hence we have scenario B, hence x+y=-4. Sufficient.
You might find the following solution easier: https://gmatclub.com/forum/inequality-a ... l#p1111747
Hope it helps.
_________________
Intern
Joined: 31 Jan 2013
Posts: 17
Re: Inequality and absolute value questions from my collection [#permalink]
### Show Tags
14 Oct 2019, 22:20
Do I need to memorize this ?
"(2) |x-y| < |x|, says that the distance between x and y is less than distance between x and origin. This can only happen when x and y have the same sign, when they are both positive or both negative, when they are at the same side from the origin. Sufficient. (Note that vise-versa is not right, meaning that x and y can have the same sign but |x| can be less than |x-y|, but if |x|>|x-y| the only possibility is x and y to have the same sign.)"
or is there some other way to remember this?
Bunuel wrote:
11. Is |x+y|>|x-y|?
(1) |x| > |y|
(2) |x-y| < |x|
To answer this question you should visualize it. We have comparison of two absolute values. Ask yourself when |x+y| is more then than |x-y|? If and only when x and y have the same sign absolute value of x+y will always be more than absolute value of x-y. As x+y when they have the same sign will contribute to each other and x-y will not.
5+3=8 and 5-3=2
OR -5-3=-8 and -5-(-3)=-2.
So if we could somehow conclude that x and y have the same sign or not we would be able to answer the question.
(1) |x| > |y|, this tell us nothing about the signs of x and y. Not sufficient.
(2) |x-y| < |x|, says that the distance between x and y is less than distance between x and origin. This can only happen when x and y have the same sign, when they are both positive or both negative, when they are at the same side from the origin. Sufficient. (Note that vise-versa is not right, meaning that x and y can have the same sign but |x| can be less than |x-y|, but if |x|>|x-y| the only possibility is x and y to have the same sign.)
Re: Inequality and absolute value questions from my collection [#permalink] 14 Oct 2019, 22:20
Go to page Previous 1 2 3 4 5 6 7 8 9 10 11 [ 214 posts ]
Display posts from previous: Sort by |
## PrintFriendly Button
In the bottom left-hand corner of each of my postings, the reader will see a little icon that has a printer-symbol on it, and the word “Print”. This icon can be used either to print the posting in question, or to save it to a PDF File, on the reader’s computer. In fact, the reader can even delete specific paragraphs from his local copy of my posting – since plausibly, the reader might find some of my postings too long to be printed in their entirety.
Some time ago, I had encountered a situation where Not code belonging to this plug-in, but Rather the URL which hosts the service, was showing me a warning in my browser, that ‘unknown URLs’ were trying to run scripts.
My own Web-browser has a script-blocker, which will display scripts to me which a page is trying to display, but which I did not authorize.
Certain features which I use on my blog, are actually hosted by the Web-site of 3rd parties, whose scripts may run, just because my page includes their widget.
The first time I noticed this I went into an alarm-mode, and removed the button from my blog quickly, thinking that maybe it was malware. But some time after that, I installed an additional extension to my blogging engine, called “WordFence”. This is an extension that can not only scan the (PHP-) code present on my own server for viruses and other malware, but that can also just scan whatever HTML my blogging engine outputs, for the possible presence of URLs to sites that are black-listed, regardless of how those URLs ended up being generated by my blogging engine.
Once I had WordFence installed, I decided that a more-Scientific way to test the PrintFriendly plug-in, would be to reactivate it, while WordFence is scanning my site. If any of the URLs produced by this plug-in were malicious, surely WordFence would catch this.
As it stands, the PrintFriendly button again displays URLs which belong to parties unknown to me. But as it stands, none of those URLs seem to suggest the presence of malware. I suppose, that the hosts of PrintFriendly rely on some of those scripts, to generate income? Since I’m not required to pay, to use their button.
Dirk
## And Now, Memcached Contributes to This Site Again!
According to this earlier posting, I had just uninstalled a WordPress plugin from my server, which uses the ‘memcached‘ daemon as a back-end, to cache blog content, namely, content most-frequently requested by readers. My reason for uninstalling that one, was the warning from my WordFence security suite, that that plugin had been abandoned by its author.
Well, it’s not as if everything was a monopoly. Since then, I have found another caching plugin, that again uses the ‘memcached‘ daemon. It is now up and running.
(Screenshot Updated 06/19/2017 : )
One valid question which readers might ask would be, ‘Why does memcached waste a certain amount of memory, and then allocate more, even if all the allocated memory is not being used?’
(Posting Updated 06/21/2017 … )
## Memcached no longer contributes, to how this site works… For the moment.
One of the facts which I had mentioned some time ago, was that on my Web-server I have a daemon running, which acts as a caching mechanism to any client-programs, that have the API to connect to it, and that daemon is called ‘memcached‘.
And, in order for this daemon to speed up the retrieval of blog-entries specifically, that reside in this blog, and that by default, need to be retrieved from a MySQL database, I had also installed a WordPress.org plugin named “MemcacheD Is Your Friend”. This WordPress plugin added a PHP script, to the PHP scrips that generally generate my HTML, but this plugin accelerated doing so in certain cases, by avoiding the MySQL database look-up.
In general, ‘memcached‘ is a process which I can install at will, because my server is my own computer, and which stores Key-Value pairs. Certain keys belong to WordPress look-ups by name, so that the most recent values, resulting from those keys, were being cached on my server (not on your browser), which in turn could make the retrieval of the most-commonly-asked-for postings – faster, for readers and their browsers.
Well, just this morning, my WordFence security suite reported the sad news to me, that this WordPress plugin has been “Abandoned” by its developer, who for some time was doing no maintenance or updates to it, and the use of which is now advised against.
If the plugin has in fact been abandoned in this way, it becomes a mistake for me to keep using it for two reasons:
1. Updates to the core files of WordPress could create compatibility issues, which only the upkeep of the plugin by its developer could remedy.
2. Eventually, security flaws can exist in its use, which hackers find, but which the original developer fails to patch.
And so I have now disabled this plugin, from my WordPress blog. My doing so could affect how quickly readers can retrieve certain postings, but should leave the retrieval time uniform for all postings, since WordPress can function fine without any caching, thank you.
Dirk
## WordFence
One of the facts which I have written about before, is that my blog is set up in a slightly customized way, with core PHP files that come from the Debian package manager, and which do not have permission bits set, so that the Web-server can write to them, but also with add-ons – aka plugin-ins – with permission bits set so that the server can. This latter detail is a great convenience for me, because it allows me to install plug-ins from WordPress.org, as well as to install updates to those, via means that are simple for me to operate.
What I have also written, is that this makes my overall security good, but not perfect. Theoretically, there could be a corrupted plug-in available directly from WordPress.org – even though in general, they do their best to vet those – and which I could install to my blog, without knowing it. Further, even if the plug-in contains no dirty code visible to WordPress.org, the way some of them work might depend on a Web-service from their author, and then that URL could be running some sort of suspicious scripts, let us say on yet another server.
And so a reasonable question to ask might be, of what use WordFence can be in my case. One of the types of scans which this security add-on performs, is a check of all my core files, against what the versions are with WordPress.org, not with Debian. And then this scan reports 58 deviations to me, without analyzing them, just because Debian has slightly different core-file-versions. It also checks my entire plug-in directory, scans all the plug-ins, etc.. But in reality, WordFence never reported any kind of anomaly in my plug-ins, because those are the WordPress.org versions. |
Python Tutorial: Functions
Functions
Syntax
Functions are a construct to structure programs. They are known in most programming languages, sometimes also called subroutines or procedures. Functions are used to utilize code in more than one place in a program. The only way without functions to reuse code consists in copying the code.
A function in Python is defined by a def statement. The general syntax looks like this:
def function-name(Parameter list):
statements, i.e. the function body
The parameter list consists of none or more parameters. Parameters are called arguments, if the function is called. The function body consists of indented statements. The function body gets executed every time the function is called.
Parameter can be mandatory or optional. The optional parameters (zero or more) must follow the mandatory parameters.
Function bodies can contain a return statement. It can be anywhere in the function body. This statement ends the execution of the function call and "returns" the result, i.e. the value of the expression following the return keyword, to the caller. If there is no return statement in the function code, the function ends, when the control flow reaches the end of the function body.
Example:
>>> def add(x, y):
... """Return x plus y"""
... return x + y
...
>>>
In the following interactive session, the function we previously defined will be called:
>>> add(4,5)
9
11
>>>
Example of a Function with Optional Parameters
>>> def add(x, y=5):
... """Return x plus y, optional"""
... return x + y
...
>>>
Calls to this function could look like this:
>>> add(4)
9
11
>>>
Docstring
The first statement in the body of a function is usually a string, which can be accessed with function_name.__doc__
This statement is called Docstring.
Example:
>>> execfile("function1.py")
'Returns x plus y'
'Returns x plus y, optional'
>>>
Keyword Parameters
Using keyword parameters is an alternative way to make function calls. The definition of the function doesn't change.
An example:
def sumsub(a, b, c=0, d=0):
return a - b + c - d
Keyword parameters can only be those, which are not used as positional arguments.
>>> execfile("funktion1.py")
>>> sumsub(12,4)
8
>>> sumsub(12,4,27,23)
12
>>> sumsub(12,4,d=27,c=23)
4
Arbitrary Number of Parameters
There are many situations in programming, in which the exact number of necessary parameters cannot be determined a-priori. An arbitrary parameter number can be accomplished in Python with so-called tuple references. An asterisk "*" is used in front of the last parameter name to denote it as a tuple reference. This asterisk shouldn't be mistaken with the C syntax, where this notation is connected with pointers.
Example:
def arbitrary(x, y, *more):
print "x=", x, ", y=", y
print "arbitrary: ", more
x and y are regular positional parameters in the previous function. *more is a tuple reference.
Example:
>>> execfile("funktion1.py")
>>> arbitrary(3,4)
x= 3 , x= 4
arbitrary: ()
>>> arbitrary(3,4, "Hello World", 3 ,4)
x= 3 , x= 4
arbitrary: ('Hello World', 3, 4) |
Direction of friction on particle placed on a rotating turntable
If a particle is placed on a rotating turntable then the particle has a tendency to slip tangentially with respect to the underneath surface... So the friction should act tangentially to the particle... Why then does it act towards the center... Please explain with respect to inertial frame
Friction opposes relative slipping between surfaces... When the particle is initially placed on the rotating turntable... the surface under it has a tangential velocity. So, in order to oppose this slipping friction must act tangentially... How then does it act radially inward?
-
"so the friction should act tangentially to the particle" Force is proportional to the acceleration, not to the speed. And acceleration in general is not tangent to the trajectory. – Yrogirg Sep 5 '12 at 10:34
This shows the motion of the particle and turntable at an instant in time. We assume that the system has settled down i.e. at the instant shown the particle is moving at the same velocity as the bit of the turntable immediately underneath it.
The particle's velocity is tangential to the turntable - I've indicated this by the dotted line. Because the turntable is rotating, the point on the turntable immediately under the particle is accelerating towards the centre at $a = r\omega^2$ where $r$ is the radius of the turntable and $\omega$ is the angular velocity.
If there were no friction the particle would travel in a straight line (along the dotted line) at constant velocity and we'd see the particle fly off the turntable.
If instead we see the particle stay in place on the turntable, there must be a force accelerating it, and that force must act in the direction of the acceleration i.e. towards the centre of the turntable. That force is of course the friction between the particle and the turntable, so the friction acts towards the centre of the turntable.
Update: Tanya asks what happens when a stationary particle is placed on the turntable.
The particle is placed on the turntable where the black dot is. At this instant the particle is stationary with the turntable skidding underneath it. The frictional force acts in the direction of the relative velocity between the particle and turntable, so at this moment the frictional force is tangential.
The particle accelerates in response to the frictional force, so some very short time later it has moved to the position shown by the open circle. I'm assuming the time is short enough that we can assume the particle has moved in a straight line: I've exaggerated the particle motion on the diagram to make the argument clearer.
Anyhow, because the particle has moved in a direction tangential to the turntable the direction of the relative velocity between the particle and turntable has changed. That means there is now a component of the frictional force downwards as well as in the original direction. The result is that the particle is now accelerated downwards as well as to the right. If you watched the particle it would follow a spiral centred on the centre of the turntable until it reached the edge and flew off or came to rest relative to the turntable. In either case the net result is that there has been an acceleration towards the centre of the turntable i.e. there is a component of the fictional force acting towards the centre.
-
Friction opposes relative slipping between surfaces...when the particle is initially placed on the rotating turntable ..the surface under it has a tangential velocity ...so in order to oppose this slipping friction must act tangentially...how then does it act radially inward ?? – Tanya Sharma Sep 5 '12 at 12:19
Thank u so much... "If you watched the particle it would follow a spiral centred on the centre of the turntable until it reached the edge and flew off". Please Can u elaborate it ? – Tanya Sharma Sep 5 '12 at 15:43
Have a look at demonstrations.wolfram.com/ParticleMovingOnARotatingDisk (you'll need to click the link to install the Wolfram CDF player). – John Rennie Sep 5 '12 at 17:06
Thank you very much John Rennie...You have very nicely explained the concept. – Tanya Sharma Jan 6 '13 at 11:36 |
# Bayesian model selection for beginners
I’ve recently encountered the following problem (or at least, a problem that can be formulated like this, the actual thing didn’t involve coins):
Imagine you’ve been provided with the outcomes of two sets of coin tosses.
First set: $N_1$ tosses, $n_1$ heads
Second set: $N_2$ tosses, $n_2$ heads
And we want to know whether the same coin was used in both cases. Note that these ‘coins’ might be very biased (always heads, always tails), we don’t know.
There are classical tests for doing this, but I wanted to do a Bayesian test. This in itself is not much of a challenge (it’s been done many times before) but what I think is interesting is how to explain the procedure to someone who has no familiarity with the Bayesian approach. Maybe if that is possible, more people would do it. And that would be a good thing. Here’s what I’ve come up with:
We’ll first assume a slight simplification of the problem. We’ll assume that there are 9 coins in a bag and the person who did the coin tossing either:
Pulled one coin out of the bag and did $N_1$ tosses followed by $N_2$ tosses
OR
Pulled one coin out of the bag, did $N_1$ tosses, put the coin back, pulled out another coin (could have been the same one) and did $N_2$ tosses with this one.
The coins in the bag all have different probabilities of landing heads. The probabilities for the 9 coins are: 0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9. We’ll label these coins $c_1$ to $c_9$.
Model 1
We’ll start by building a model for each scenario. The first model assumes that both sets of coin toss data were produced by the same coin. If we knew which coin it was, the probability of seeing the data we’ve seen would be:
$Pr(n_1|N_1,c_n)Pr(n_2|N_2,c_n)$
i.e. the probability of seeing $n_1$ heads in the first $N_1$ tosses of coin $c_n$ multiplied by the probability of seeing $n_2$ heads in the second $N_2$ tosses of coin $c_2$.
In reality we don’t know which coin it was. In this case, it seems sensible to calculate the averageprobability. The total of all of the probabilities divided by the number of coins. This is computed as:
$p_1=\sum_{n=1}^9 \frac{1}{9} Pr(n_1|N_1,c_n)Pr(n_2|N_2,c_n)$
Another way of thinking about this is as a weighted sum of the probabilities where each weight is the probability of that coin being chosen. In this case, each coin was equally likely and so has probability 1/9. In general this doesn’t have to be the case.
Note that we haven’t defined what $Pr(n_1|N_1,c_n)$ is. This isn’t particularly important – it’s just a number that tells us how likely we are to get a particular number of heads with a particular coin.
Model 2
The second model corresponds to picking a coin randomly for each set of coin tosses. The probability of the data in this model, if we knew that the coins were $c_n$ and $c_m$ is:
$Pr(n_1|N_1,c_n)Pr(n_2|N_2,c_m)$
Again, we don’t know $c_n$ or $c_m$ so we average over them both. In other words, we average over every combination of coin pairs. There are 81 such pairs and each pair is equally likely:
$p_2 = \sum_{n=1}^9 \sum_{m=1}^9 \frac{1}{81} Pr(n_1|N_1,c_n)Pr(n_2|N_2,c_m)$
Comparing the models
We can now compare the numbers $p_1$ and $p_2$. If one is much higher than the other, that model is most likely. Lets try an example…
Example 1
Here’s the data:
$N_1=10,n_1=9,N_2=10,n_2=1$
So, 9 heads in the first set of tosses and only 1 in the second. At first glance, it looks like a different coin was used in both cases. Lets see if our calculations agree. Firstly, here is $Pr(n_1|N_1,c_n)Pr(n_2|N_2,c_n)$ plotted on a graph where each bar is a different $c_n$.
The coin that looks most likely is the one with $c_n=0.5$. But, look at the scale of the y-axis. All of the values are very low – none of them look like they could have created this data. It’s hard to imagine taking any coin, throwing 9 heads in the first ten and then 1 head in the second ten.
To compute $p_1$ we add up the total height of all of the bars and divide the whole thing by 9: $p_1=0.000029$.
We can make a similar plot for model 2:
The plot is now 3D, because we need to look at each combination of $c_n$ and $c_m$. The height of each bar is:
$Pr(n_1|N_1,c_n)Pr(n_2|N_2,c_m)$
The bars are only high when $c_n$ is high and $c_m$ is low. But look at the height of the high bars – much much higher than those in the previous plot. Some combinations of two coins could conceivable have generated this data. To compute $p_2$ we need to add up the height of all the bars and divide the whole thing by 81: $p_2 = 0.008478$
This is a small number, but almost 300 times bigger than $p_1$. The model selection procedure has backed up our initial hunch.
Example 2
$N_1=10,n_1=6,N_2=10,n_2=5$
i.e. 6 heads in the first 10, 5 heads in the second 10. This time it looks a bit like it might be the same coin. Here’s the plot for model 1:
And here’s the magic number: $p_1 = 0.01667$
Model 2:
With the number: $p_2 = 0.01020$.
This time, the model slightly favours model 1.
Why so slightly? Well, the data could have come from two coins – maybe the two coins are similar. Maybe it was the same coin selected twice. For this reason, it is always going to be easier to say that they are different than that they are the same.
Why would it ever favour model 1?
Given that we could always explain the data with two coins, why will the selection procedure ever favour one coin? The answer to this is one of the great benefits of the Bayesian paradigm. Within the Bayesian framework, there is an inbuilt notion of economy: a penalty associated with unnecessarily complex models.
We can see this very clearly here – look at the two plots for the second example – the heights of the highest bars are roughly the same: the best single coin performs about as well as the best pair of coins. But, the average single coin is better than the average pair of coins. This is because the total number of possibilities (9 -> 81) has increased faster than the total number of good possibilites. The average goodness has decreased.
Model 2 is more complex than model 1 – it has more things that we can change (2 coins rather than 1). In the Bayesian world, this increase in complexity has to be justified by much better explanatory power. In example 2, this is not the case.
In example 1 we don’t quite see the reverse effect. Now, model 1 had no good possibilities (no single coin could have realistically generated the data) whereas model 2 had some. Hence, model 2 was overwhelmingly more likely than model 1.
Summary
This is a simple example. It also doesn’t explain all of the important features of Bayesian model selection. However, I think it nicely shows the penalty that Bayesian model selection naturally imposes on overly complex models. There are other benefits to this approach that I haven’t discussed. For example, this doesn’t show the way in which $p_1$ and $p_2$ change as more data becomes available.
Relating back to Bayes rule
The final piece of the jigsaw is relating what we’ve done above to the equations that might accompany other examples of Bayesian model selection. Bayes rule tells us that for model 1:
$Pr(c_n|n_1,N_1,n_2,N_2) = \frac{Pr(n_1|N_1,c_n)Pr(n_2|N_2,c_n)Pr(c_n)}{\sum_{o=1}^9 r(n_1|N_1,c_o)Pr(n_2|N_2,c_o)Pr(c_o)}$
and for model 2:
$Pr(c_n,c_m|n_1,N_1,n_2,N_2) = \frac{Pr(n_1|N_1,c_n)Pr(n_2|N_2,c_m)Pr(c_n)Pr(c_m)}{\sum_{o=1}^9 \sum_{q=1}^9 Pr(n_1|N_1,c_o)Pr(n_2|N_2,c_q)Pr(c_o)Pr(c_q)}$
for model 2.
In both cases, we computed the denominator of the fraction – known as the evidence. In general, it may consist of an integral rather than a sum but the idea is the same: averaging over all parameters (coins or combinations of coins) to get a single measure of goodness for the model. |
112 views
answer is D but I'm getting A. pls tell where am I going wrong?
| 112 views
0
Check my approach aditi19 . I'm getting the same.
0
answer key is wrong then? :P
0
Sorry I have taken the NOR gate earlier.
It should be option D itself :).
if x=1 then $0\rightarrow 1\rightarrow 3\rightarrow 0\rightarrow 1\rightarrow 3\rightarrow 0\rightarrow 1\rightarrow 3............$
if x=0 then $0\rightarrow 1\rightarrow 3\rightarrow1\rightarrow 3\rightarrow 1\rightarrow 3\rightarrow$
one thing we should note is that
after a few clock cycle
means at x=0 after few clock cycle it will move from 1 to 3 ,3 to 1,1 to 3 .........................
by Boss (10.6k points)
$$D_a = Q_a{\oplus }Q_b\\ D_b = (Q_a.x)'$$
$$Q_a$$ $$Q_b$$ $$Q_a^+$$ $$Q_b^+$$ $$D_a$$ $$D_b$$
0 0 0 1 0 1
0 1 1 1 1 1
1 0 1 x' 1 x'
1 1 0 x' 0 x'
CASE 1: When X = 1,
Only three states are possible for this case.
CASE 2: When X = 0,
Only two states are possible. for this case.
Therefore, OPTION D is Correct.
by Junior (785 points) |
Accéder directement au contenu Accéder directement à la navigation
# First observation of $^{19}$Na states by inelastic scattering
Abstract : Low-energy states of the 19Na proton-drip nucleus have been investigated by resonant elastic and inelastic scattering in inverse kinematics at the RIB facility at Louvain-la-Neuve, Belgium.We have used a 66 MeV 18Ne beam and a proton-rich (CH2)n target. Recoil protons were detected at forward angles by using a ¢E-E system, the so-called CD-PAD detector. The resonant elastic scattering technique have already been applied to study the 19Na nucleus. In this experiment, we have mostly concentrated on the 18Ne+p inelastic scattering. The energy spectra from inelastic protons, which leave the 18Ne nucleus in its first excited state at 1.887 MeV, exhibit the level structure of 19Ne in the excitation energy region from 2.5 up to 3.5 MeV. In this talk, absolute elastic and inelastic 18Ne+p cross sections will be presented and the analysis of the 19Ne levelproperties, performed in the framework of the R-matrix model, will be discussed
http://hal.in2p3.fr/in2p3-00162103
Contributeur : Michel Lion <>
Soumis le : jeudi 12 juillet 2007 - 14:02:55
Dernière modification le : jeudi 5 mars 2020 - 15:26:10
### Citation
M.G. Pellegriti, N.L. Achouri, C. Angulo, J.C. Angélique, E. Berthoumieux, et al.. First observation of $^{19}$Na states by inelastic scattering. Proton Emitting Nuclei and related topics - PROCON07, 2007, Lisbonne, Portugal. pp.181-186, ⟨10.1063/1.2827253⟩. ⟨in2p3-00162103⟩
### Métriques
Consultations de la notice |
SEARCH HOME
Math Central Quandaries & Queries
Question from Rukshanth, a student: How many diagonals does a cube have?
Hi Rukshanth,
According to Wolfram Math World "The diagonal of a polyhedron is any line segment connecting two nonadjacent vertices of the polyhedron." A cube has 6 faces and on each face there are two diagonals joining nonadjacent vertices and there are 4 diagonals that run through the body of the cube. Thus in total there are $6 \times 2 + 4 = 16$ diagonals in a cube.
Harley
Math Central is supported by the University of Regina and the Imperial Oil Foundation. |
# Use crossref in bibliography, but only show the full main entry
Consider the following bibliography and MWE:
\documentclass{article}
\usepackage{natbib}
\begin{filecontents*}{bibliography.bib}
@proceedings{conf:2006,
editor = {Star, Patrick and Tentacles, Squidward},
title = {Proceedings of the 1st Conference on Marine Biology},
booktitle = {Proceedings of the 1st Conference on Marine Biology},
year = 2006,
month = jan,
}
@inproceedings{proc:2006:01,
title = {How To Live Underwater},
author = {Squarepants, SpongeBob},
pages = {1--8},
crossref = {conf:2006}
}
@inproceedings{proc:2006:02,
editor = {Star, Patrick and Tentacles, Squidward},
booktitle = {Proceedings of the 1st Conference on Marine Biology},
year = 2006,
month = jan,
title = {How To Live Underwater},
author = {Squarepants, SpongeBob},
pages = {1--8},
}
\end{filecontents*}
\begin{document}
\nocite{*}
\bibliographystyle{plain}
\bibliography{bibliography}
\end{document}
This results in the following bibliography:
My desired output would be to get item [2] (instead of [1] and [3]) while citing proc:2006:01 that uses a crossref to conf:2006.
I would like to use the crossref field mainly as an easier way to manage my .bib file, and I am not particularly interested in showing an entry for the article and an entry for the conference in my references. I guess, in a sense, that would mean that the conf:2006 entries are copied to proc:2006:01 before it is processed. Is there a way I can do this? Preferably the solution would allow me to revert to the normal output if necessary.
I've tried the answer here, but that only shows the fields of the main entry without inheriting the crossref entries. That's not what I want.
• I am aware of the correct ordering, but I didn't think that it was an issue because all references were resolved. I guess latexmk did not show me the warnings. What you're saying is that the reason I was not getting the full bibliography entry when using \cite{proc:2006:01} is because the crossref entry was before the cited entry. Right? Also, depending on the number of times it is crossrefed, the crossref entry may appear separately in the references. I'll have to confirm tomorrow when I am using a PC. If that is the case, I was just being silly and I'll accept this as an answer. – sudosensei Dec 7 '13 at 21:46
• @Mico: I guess the question, rather, is whether there is a way to avoid the ordering requirement. I sort my .bib entries by the key name and I'd rather avoid a silly naming scheme that will push them at the end of the .bib file. – sudosensei Dec 7 '13 at 21:50
• @Mico Could you please convert your comment about the ordering into an answer so that I can accept it? I just tested it and it works fine. latexmk would not show me the warnings, so I thought that the ordering was not the issue. I'll also have to look for something more sophisticated than bibsort to sort my bibliography. Hopefully bibtool offers the "crossref-ed entries at the end" feature. If not, I guess I'll post another question. Thanks for the help! :-) – sudosensei Dec 8 '13 at 11:39
Entries that are cross-referenced by other entries should occur later in the .bib file than any entries that cross-reference them. In practice, this means placing the entries that are cross-referenced at the end of the .bib file.
The fact that your MWE's .bib file has the entry being cross-referenced (key: conf:2006) coming first rather than last may be the cause of the problem behavior you're experiencing. If I rearrange your .bib file so that the entry with key conf:2006 comes last, then executing either just \cite{proc:2006:01} (which cross-references conf:2006) or \cite{proc:2006:02} (which does not) in the body of the document produces the identical entry in the bibliography.
Making sure that entries that are cross-referenced come last in the .bib file can be tedious, especially if the files contains lots and lots of entries and if you'd like to maintain some systematic sorting scheme. Fortunately, quite a few LaTeX-aware editors feature routines that simplify the task of pretty-printing bibliographic entries and sorting them according to various criteria. One particularly useful sorting criterion is to place all entries that are cross-referenced by other entries last in the file. |
# Concepts
The ESA Charter Mapper provides access to EO data coming from several missions and sensors and provided using different data formats and metadata.
These acquisitions have heterogeneous formats for the data files and associated metadata.
Note
The list of EO Missions supported by the ESA Charter Mapper can be found here.
To reduce this heterogeneity, the ESA Charter Mapper seeks to pre-process the acquisitions into a common format which, on the one hand, provides a ready-to-inspect and comparable dataset and, on the other, to have downstream processing services that support multi-sensor and multi-mission acquisitions without implementing complex blocks.
Below are described the main concepts that support this approach.
## STAC as a common metadata model
EO data is handled under the ESA Charter Mapper Processing Chains using the SpatioTemporal Asset Catalog (STAC).
STAC consists of a standardized way to catalog and expose multi-source geospatial data.
erDiagram CAR ||--o{ NAMED-DRIVER : allows CAR { string registrationNumber string make string model } PERSON ||--o{ NAMED-DRIVER : is PERSON { string firstName string lastName int age } ITEM ||--o{ NAMED-DRIVER : is ITEM { string id string type string geometry bbox bbox dict properties } Properties ||--o{ ITEM : has Properties { datetime datetime } Geometry ||--o{ ITEM : has Geometry { string type }
The basic core of STAC1 is adopted and extended within the ESA Charter Mapper to manage the Charter collection of multi-source satellite imagery using multiple STAC “spatiotemporal” Assets.
Each STAC item (e.g. a single Sentinel-2 MSI L1C product for UTM zone 9 and grid square XK) includes multiple assets (e.g. multiple multispectral bands) and consists of a GeoJSON feature with metadata and thumbnail and can be organized under a STAC Catalog (e.g. multitemporal Sentinel-2 L1C MSI products for UTM zone 9 and grid square XK).
Each STAC Catalog can be then grouped under a STAC Collection (e.g. Sentinel-2 L1C) which can also further complement associated metadata information such as producer, processor, host, license, version, temporal extent, GSD, instrument, off-nadir angle, etc. More information can be found at STAC Common Metadata section of STAC Specification documentation2.
## Common Band Names (CBN)
STAC assets of EO data are catalogued in the ESA Charter Mapper processing environment using Common Band Names (CBN). CBN classes refer to common band ranges derived from pre-defined frequency ranges of the Electromagnetic Spectrum made for several popular instruments. Each CBN class is defined by a Band Range in micrometers for optical and in centimeters for SAR data. The CBN classification of the frequency spectrum allows a one-to-one mapping of multi-mission and multi-sensor bands (Optical and SAR). CBN thus ease the handling multi-sensor source EO data.
### Optical Common Band Names
To classify a generic band from an optical sensor with the CBN schema (e.g. Worldview-3 Yellow band from 0.584 to 0.632 μm), one way is to derive its centered wavelength in micrometers (0.608 μm) and identify the CBN class having the closest central wavelength (CBN class 05 and centered at 0.6 μm). However, while choosing a CBN class also the bandwidth needs to be considered. Common Band Names "nir" and "lwir" refer, in fact, to wider bands that cover most of the spectral range for NIR (0.75μm to 1.0μm) and TIR (10.5μm to 12.5μm) radiation. On the other hand, narrow bands for "nir" are "nir08" and "nir09", centered at 0.85μm and 0.95μm respectively. This is particularly useful for example to classify sensors having both a wide (e.g. Sentinel-2 MSI Band 8 at 833 nm) and a narrow (Sentinel-2 MSI Band 8a at 865 nm) band over the same portion of the EM spectrum. CBN "nir" will refer then to S2 MSI spectral Band 8 and "nir08" to the S2 MSI spectral band 8a.
Figure 1 - Comparison of Landsat-7 and 8 bands with the ones of Sentinel-2 (Image credit: USGS).
The same applies for "lwir" which is also discretized into two narrow bands "lwir11" and "lwir12", centered at 11μm and 12μm respectively. The CBN schema is not rigid and allows the definition of new classes in case the definition of an additional portion of the spectrum is required (e.g. to deal with future EO missions in the ESA Charter Mapper Optical Payload having different spectral resolution that cannot be fully classified with the current CBN schema).
A total of 29 CBNs (pan, coastal, blue, yellow, etc.) are identified for the ESA Charter Mapper (see Table 1). This table also includes a sample application of this CBN schema, made for Landsat-7 ETM+, Landsat-8 OLI/TIRS and Sentinel-2 MSI data.
CBN code Common Band Name (CBN) Band Range (μm) Landsat 7 ETM+ Landsat 8 OLI / TIRS Sentinel-2 MSI
CBN-01 pan 0.50 - 0.70 8 8
CBN-02 coastal 0.40 - 0.45 1 1
CBN-03 blue 0.45 - 0.50 1 2 2
CBN-04 green 0.50 - 0.60 2 3 3
CBN-05 yellow 0.58 - 0.62
CBN-06 red 0.60 - 0.70 3 4 4
CBN-07 rededge 0.70 - 0.79
CBN-08 rededge70 0.69 - 0.71 5
CBN-09 rededge74 0.73 - 0.75 6
CBN-10 rededge78 0.69 - 0.71 7
CBN-11 nir 0.75 - 1.00 4 5 8
CBN-12 nir08 0.75 - 0.90 8a
CBN-13 nir09 0.85 - 1.05 9
CBN-14 swir12 1.19 - 1.21
CBN-15 cirrus 1.35 - 1.40 9 10
CBN-16 swir16 1.55 - 1.75 5 6 11
CBN-17 swir155 1.45 - 1.65
CBN-18 swir165 1.65 - 1.75
CBN-19 swir173 1.72 - 1.74
CBN-20 swir22 2.10 - 2.30 7 7 12
CBN-21 swir215 2.13 - 2.17
CBN-22 swir220 2.18 - 2.22
CBN-23 swir225 2.23 - 2.27
CBN-24 swir23 2.28 - 2.32
CBN-25 mwir38 3.5 - 4.1
CBN-26 lwir 10.5 - 12.5 6
CBN-27 lwir09 8.5 - 9.5
CBN-28 lwir11 10.5 - 11.5 10
CBN-29 lwir12 11.5 - 12.5 11
Table 1 - The Common Band Name schema of the ESA Charter Mapper and its application for Landsat-7, Landsat-8 and Sentinel-2 missions.
To better explain the advantages in managing multi-sensor EO optical data with CBN, an application of CBN schema, made for a selection of the ESA Charter Mapper optical payload is shown in Table 2. This enables a clear mapping of EO data across the EM spectrum with STAC assets.
Satellite Sensor CBN-01 CBN-02 CBN-03 CBN-04 CBN-05 CBN-06 CBN-07 CBN-08 CBN-09 CBN-10 CBN-11 CBN-12 CBN-13 CBN-14 CBN-15 CBN-16 CBN-17 CBN-18 CBN-19 CBN-20 CBN-21 CBN-22 CBN-23 CBN-24 CBN-25
ALSAT-1B SLIM-6 B G R X
CartoSat-2 PAN P
CBERS-4 PAN / MUX P B G R X
CBERS-4 WFI B G R X
CBERS-4A WPM P B G R X
CBERS-4A PAN / MUX P B G R X
CBERS-4A WFI B G R X
DEIMOS-1 SLIM-6 B G R X
DubaiSat2 HiRAIS P B G R X
Gaofen-1 PMS P B G R X
Gaofen-1 WFV B G R X
Gaofen-2 PMS P B G R X
Gaofen-4 PMS, IRS P B G R X X
GeoEye-1 EO imager P B G R X
Kanopus-V MSS/PSS P B G R X
Kanopus-V-IK MSS/PSS P B G R X
Kanopus-V-IK MSU-IK-SRM X
KhalifaSat KHCS P B G R X
KOMPSAT-2 MSC P B G R X
KOMPSAT-3 AEISS P B G R X
KOMPSAT-3A AEISS-A P B G R X
Landsat-7 ETM+ P B G R X X X
Landsat-8 OLI/TIRS P C B G R X X X X
Meteor-M KMSS MSU G R X
PlanetScope PS2 B G R X
Pleiades PHR1A/B P B G R X
ResourceSat-2 / 2A LISS-III G R X X
ResourceSat-2 / 2A LISS-IV G R X
Resurs-P GEOTON-L1 P B G R X X
Resurs-P ShMSA-VR P B G R X X
Sentinel-2 MSI C B G R X X X X X X X X X
SPOT-6 SPOT-6 P B G R
SPOT-7 SPOT-7 P B G R
UK-DMC-2 SLIM-6 G R
Vision-1 VHRI-100 P B G R
VRSS-1 PMC P B G R
VRSS-2 HRC P B G R
WorldView-1 WV60 P
WorldView-2 WV110 P C* B G Y* R X* X X*
WorldView-3 WV110 P C B G Y R X X X X* X* X* X* X* X* X* X*
Table 2 - Common Band Names for a selection of the ESA Charter Mapper Optical payload products. P, C, B, G, Y, and R stands for PAN, Cyan, Blue, Green, Yellow, and Red respectively. With asterisks are marked spectral bands which are not always present in the products delivered for the Charter.
### SAR Common Band Names
A similar schema is applied for SAR missions. CBN classes for SAR refer to a combination of frequency band and polarization derived from multiple band numbers of several popular SAR instruments for disaster mapping. The CBN classification of microwave radiation enables a one-to-one mapping of multi-mission and multi-sensor SAR products. Table 3 shows a sample application of CBN for radar backscatter which is used in the ESA Charter Mapper SAR missions.
Common Name Wavelength (cm) ICEYE Kompsat-5 TerraSAR-X TanDEM-X GF-3 Radarsat 2 RCM 1/2/3 Sentinel-1 A/B ALOS-2 SAOCOM-1 A/B
s0_db_x_hh 2.5 - 4 X X
s0_db_x_hv 2.5 - 4 X X
s0_db_x_vh 2.5 - 4 X X
s0_db_x_vv 2.5 - 4 X X X
s0_db_c_hh 4 - 8 X X X X
s0_db_c_hv 4 - 8 X X X X
s0_db_c_hv 4 - 8 X X X X
s0_db_c_vv 4 - 8 X X X X
s0_db_l_hh 15 - 30 X X
s0_db_l_hv 15 - 30 X X
s0_db_l_vh 15 - 30 X X
s0_db_l_vv 15 - 30 X X
Table 3 - Common Band Names schema for the ESA Charter Mapper SAR payload products.
CBNs for SAR are meant to classify Sigma Nought in decibel from the multi-mission SAR Payload using this code s0_db_r_pp>, where “s0_db” means Sigma Nought in dB, “r” is the SAR-band (X, C or L), and “pp” is the polarization (HH, HV, VH or VV). This schema enables the mapping of the SAR products provided as quad- (HH+HV+VH+VV), dual- (VV+VH or HH+HV), or single-polarization (HH or VV). Cross-pol assets are generally provided within dual- or quad-pol products. The ICEYE mission is represented by only the CBN s0_db_x_vv because the GRD products are given only in VV polarization.
Warning
SAR CBN shall be all in lower case letters. Thus, a Calibrated ICEYE Sigma Nought in VV polarization shall have the CBN s0_db_x_vv for its geophysical single-band assets.
## Geophysical and visual products from EO data
The logic of the ESA Charter Mapper is to use EO data products, grouped in a collection called Acquisitions and provide a collection called Datasets where these are available in a form that is ready for analysis and processing.
Acquisitions is a collection of features representing EO data products associated with one activation (incl. some imported by the PM on-demand beyond the EO data collection channeled through COS-2).
Datasets is a collection of features representing post-processed (e.g. calibrated) versions of source EO data products. Datasets are composed of multiple Assets. For example, a Calibrated Dataset includes N single-band Assets representing the geophysical values of N spectral bands extracted from the source EO data.
Assets are intra-product components within an EO data product. An Asset can be a single-band product made from the geophysical values of a single band extracted from a calibrated image. An Asset can also be a composed three-band product (e.g. an RGB composite) made from the geophysical values of separate bands extracted from a calibrated image.
Figure 1 - Schema for a Dataset and its associated single-band Assets which are single-band geophysical products.
Therefore, when the user handles the components of a Calibrated Dataset he handles Assets. This is in line with the pre-defined asset role types defined in the STAC standards. Typically the elements provided in Datasets comprise of:
• Geophysical Assets that are the result of pre-processing (in particular calibration) applied to the original EO data product. The products under Datasets i.e. the Geophysical Assets are the basis for EO Processing Services that take them as input for generating Value Adding products.
• Overviews that are full-res browse images derived from the EO data product by encoding the digital values on 8 bit for visualization, the information content is not made of any geophysical value. Overviews may be single-band images (grey levels) or Red, Green, Blue (RGB) composite using the most common bands used to display the EO data product. The term “overview” for visual products is adopted in the ESA Charter Mapper to be compliant with the pre-defined asset role types defined in the STAC standards1.
Note
The ESA Charter Mapper can visualize Geophysical products and Assets similarly to Overviews and can handle the depth of the signal values thanks to the visualization functions such as the Layer Styling tool that provides histograms.
The ESA Charter Mapper is providing both Overviews and Geophysical Assets products for all the supported EO missions of the International Charter.
## Visual products from optical EO data
In the case of Optical missions, different types of overviews products are offered to the user to support the rapid visualization of EO data.
Figure 2 - True Color RGB composite from WorldView-2 Multi Spectral data acquired over Micronesia (Image credit: Digital Globe).
### Pre-defined RGB band composites
Multiple pre-defined RGB band composites are available in the ESA Charter Mapper for optical EO data. All the possible RGB composite options are listed in Table 4 which also includes examples of the ones derivable from Landsat-7, Landsat-8 and Sentinel-2 missions with the associated CBN combination.
Code Composite RGB (CBN) Landsat-7 ETM+ Landsat-8 OLI Sentinel-2 MSI
TRC True Color red, green, blue 3 2 1 4 3 2 4 3 2
CIV Color Infrared (vegetation) nir, red, green 4 3 2 5 4 3 8 4 3
LAW Land/Water nir, swir16, red 4 5 3 5 6 4 8A 11 4
VEA Vegetation Analysis swir16, nir, red 5 4 3 6 5 4 11 8A 4
SIR Shortwave Infrared swir22, nir, red 7 4 3 7 5 4 12 8A 4
FCU False Color Urban swir22, swir16, red 7 5 3 7 6 4 12 11 4
ATP Atmospheric Penetration swir22, swir16, nir 7 5 4 7 6 5 12 11 8A
WAD Water Depth green, blue, coastal - 3 2 1 3 2 1
Table 4 - RGB Composites for ESA Charter Mapper using CBN and their application for Landsat-8 and Sentinel-2 bands.
### Color operations in the ESA Charter Mapper
In the ESA Charter Mapper a universal color formula is employed in the creation of all overviews of optical payload:
Gamma RGB 1.5 Sigmoidal RGB 10 0.3 Saturation 1
This formula is tailored for a general usage in the Processing Environment and employs different values than the rio-color3 default ones. In the ESA Charter Mapper the structure of a color formula is described in this section about the TiTiler5 tool of the ESA Charter Mapper Geobrowser.
The ESA Charter Mapper universal color formula can be used also to reproduce intra-sensor RGB on the fly in the TiTiler widget in the ESA Charter Mapper Layer Details tab under Layer Styling > Combine Assets > Color Formula.
The application of the universal color formula in the ESA Charter Mapper has shown a fairly good performance in the visualization of multiple RGB composites for most of the missions of the Optical payload. However the assumption of using a single color formula for multiple different sensors has an intrinsic limitation, as encountered in some missions (e.g. Kompsat-3A AEISS-A and ResourceSat-2/2A LISS-III). Thus, the ESA Charter Mapper uses a slightly modified version of the universal color formula when necessary. Refined color formulas are included in Table 5.
Mission Sensor Modified color formula Valid for RGB
Kompsat-3A AEISS-A Gamma RGB 1.5 Saturation 1.1 Sigmoidal RGB 15 0.33 TRC, CIV
Gaofen-2 AEISS Gamma RGB 1 Saturation 1 Sigmoidal RGB 3 0.5 TRC, CIV
VRSS-2 PMC Gamma RGB 1.5 Saturation 1 Sigmoidal RGB 10 0.25 TRC, CIV
ALSAT-1B SLIM6 Gamma RGB 1.5 Saturation 1 Sigmoidal RGB 10 0.2 CIV
ResourceSat-2A LISS-III Gamma RGB 1.5 Saturation 2 Sigmoidal RG 5 0.3 Sigmnoidal B 10 0.3 CIV
ResourceSat-2A LISS-III Gamma RGB 1.5 Saturation 1 Sigmoidal RG 5 0.5 Sigmoidal B 10 0.4 VEA
Table 5 - Modified versions of the universal color formula for multiple missions of the ESA Charter Mapper optical payload.
An example of a VEA RGB band composite obtained by using the modified color formula with the ResourceSata-2A LISS-III data is shown in Figure 3.
Figure 3 - VEA RGB band composite of ResourceSat-2A data using the modified universal color formula (Image credit: ISRO).
Tailored RGB composites can be also made under Combine Assets using the Horizontal cumulative count cut 2-98% or by manually setting min/max for each RGB channel.
## Visual products from SAREO data
The ESA Charter Mapper also produces Overviews for the radar sensors. For single-pol SAR data, such as the case of ICEYE products, the ESA Charter Mapper generates only a gray-scale overview product at full resolution in VV polarization. Instead, for dual- and ful- pol SAR data, the platform generates multiple grayscale overviews for each single polarization plus an RGB band composite from a combination of them.
Grayscale overviews are generated for each of the polarizations available in the source SAR data (see Figure 4).
Figure 4 - Gray-scale overview of TerraSAR-X EEC data in HH polarization over Palu (Image credit: DLR).
In the case of dual-pol data, VV&VH or HH&HV, the overview-dual RGB is created with R=co-pol, G=cross-pol, and B=co-pol/cross-pol. This first representation highlights mainly urban areas, the different orientation of buildings, and vegetation.
Figure 5 - Dual Pol Ratio RGB composite "overview-dual" from ALOS-2 PALSAR-2 data in HH/HV polarizations over Palu (Image credit: JAXA).
Instead, in the case of full-poll data the overview-full RGB band composite is derived as follows: R=HH, G=HV, B=VV. This second representation improves the dual-pol representation, highlighting better volumetric scattering, bare soils and urban areas.
Figure 6 - Full Pol RGB composite "overview-full" from SAOCOM-1A GTC full-pol data over the Russian Federation (Image credit: CONAE).
### Settings of the signal dynamic range for different SAR polarization and bands
In all SAR products derived from the systematic SAR calibration (either the geophysical asset or the overview one at 8 bit) an image stretching is applied to consider only Sigma Nought within pre-defined minimum and maximum values. This helps optimize visualization on the screen of the assets. The minimum and maximum values expressed in dB included in Table 6 are specific to mission, SAR band (or frequency) and polarization.
Mission SAR-band Co-pol (VV/HH) [min,max] in dB Cross-pol (VH/HV) [min,max] in dB
ICEYE X [-22,2] [-27,-3]
KOMPSAT-5 X [-22,2] [-27,-3]
TerraSAR-X / TanDEM-X X [-22,2] [-27,-3]
Gaofen-3 C [-20,0] [-26,-5]
RCM C [-20,0] [-26,-5]
Sentinel-1 C [-20,0] [-26,-5]
ALOS-2 L [-27,0] [-35,-5]
SAOCOM-1 L [-27,0] [-35,-5]
Table 6 - Signal dynamic ranges used for stretching Sigma Nought during the generation of SAR overviews products. The values in dB are given for multiple missions, SAR-bands and polarizations.
Note
For Geophysical single-band assets included into SAR Calibrated Datasets the image stretching is applied only in the geobrowser for visualization purposes of the asset. This image stretching is just a pre-configured option in TiTiler to simplify the visualization of the Sigma Nought in the map (e.g. Min=-27dB and Max=0dB pre-defined for a s0_db_l_vv Geophysical asset from a SAOCOM-1 SAR Calibrated Dataset). The processor preserves the full signal dynamic range in the output geophysical product derived from the systematic Radar Product Calibration service (e.g. when downloading the geophysical product s0_db_l_vv.tif the image stretching is not applied). To customize the contrast enhancement of backscatter, the user can employ a different image stretching by specifying (under Details / Layer Styling / View Options / Channel histogram min/max) different values in dB to the ones listed in Table 6.
## Low resolution overview product
To deal with the restrictions imposed by the licensing policy of some Agencies for the access to full resolution data, access to the full resolution overview products is restricted and a reduced resolution version is provided in the first instance. For each acquired product received from COS-2 notification, the ESA Charter Mapper generates a single overview product at reduced resolution using a pre-defined RGB band combination and/or grayscale downsampled image. The default reduced resolution overview for each mission of the ESA Charter Mapper payload is described in the below Table 7.
The default low resolution overview for each mission of the ESA Charter Mapper payload is described in the below Table 7.
Mission Sensor Type Default low resolution overview product
ALOS-2 PALSAR-2 SAR overview-vv-low-res (or overview-hh-low-res)
ALSAT-1B SLIM-6 Optical overview-civ-low-res
CartoSat-2 PAN Optical overview-pan-low-res
CBERS-4 PAN/MUX Optical overview-trc-low-res
CBERS-4 WFI Optical overview-trc-low-res
CBERS-4A WPM Optical overview-trc-low-res
CBERS-4A PAN/MUX Optical overview-trc-low-res
CBERS-4A WFI Optical overview-trc-low-res
DEIMOS-1 SLIM-6 Optical overview-civ-low-res
DubaiSat-2 HiRAIS Optical overview-trc-low-res
Gaofen-1 PMS Optical overview-trc-low-res
Gaofen-1 WFV Optical overview-trc-low-res
Gaofen-2 PMS Optical overview-trc-low-res
Gaofen-3 SAR-C SAR overview-vv-low-res (or overview-hh-low-res)
Gaofen-4 PMS, IRS Optical overview-trc-low-res
GeoEye-1 EO imager Optical overview-trc-low-res
Kanopus-V 1/2/3 MSS/PSS Optical overview-trc-low-res
Kanopus-V-IK MSS/PSS Optical overview-trc-low-res
Kanopus-V-IK MSU-IK-SRM Optical overview-mwir38-low-res (or overview-lwir9-low-res)
KhalifaSat KHCS Optical overview-trc-low-res
KOMPSAT-2 MSC Optical overview-trc-low-res
KOMPSAT-3 AEISS Optical overview-trc-low-res
KOMPSAT-3A AEISS-A Optical overview-trc-low-res
KOMPSAT-5 COSI SAR overview-vv-low-res (or overview-hh-low-res)
ICEYE SAR-X SAR overview-vv-low-res
Landsat-7 ETM Optical overview-trc-low-res
Landsat-8 OLI Optical overview-trc-low-res
MeteorM KMSS MSU Optical overview-civ-low-res
PlanetScope PlanetScope Optical overview-trc-low-res
RADARSAT-2 SAR-C SAR overview-vv-low-res (or overview-hh-low-res)
RCM SAR-C SAR overview-vv-low-res (or overview-hh-low-res)
ResourceSat-2 / 2A LISS-III Optical overview-civ-low-res
ResourceSat-2 / 2A LISS-IV Optical overview-civ-low-res
Resurs-P GEOTON-L1 Optical overview-trc-low-res
Resurs-P ShMSA-VR Optical overview-trc-low-res
SAOCOM-1A SAR-L SAR overview-vv-low-res (or overview-hh-low-res)
Sentinel-1 SAR-C SAR overview-vv-low-res (or overview-hh-low-res)
Sentinel-2 MSI Optical overview-trc-low-res
SPOT-6 SPOT-6 Optical overview-trc-low-res
SPOT-7 SPOT-7 Optical overview-trc-low-res
TerraSAR-X, TanDEM-X SAR-X SAR overview-vv-low-res (or overview-hh-low-res)
UK-DMC-2 SLIM-6 Optical overview-civ-low-res
Vision-1 VHRI-100 Optical overview-trc-low-res
World-view1 WV60 Optical overview-pan-low-res
World-view2 WV110 Optical overview-trc-low-res
World-view3 WV110 Optical overview-trc-low-res
Table 7 - Definition of default low resolution overviews for each of the missions of the ESA Charter Mapper payload.
## A common format for output raster products
All the output raster products derived from the ESA Charter Mapper Processing Service, which are summarized in here, are given in the Cloud Optimized GeoTIFF (COG)4 format.
COG is an extension of the GeoTIFF format dedicated for data hosting on HTTP file servers. One of the main advantages in using COG format in cloud processing environments is that the single GeoTIFF file can be accessed by multiple clients with no need to copy or cache the desired product.
Furthermore, it allows the client to easily retrieve via HTTP GET range requests just a desired portion of the data required for more efficient workflows on the cloud. As an example, in the ESA Charter Mapper the pre-processed datasets are stored in COG format to reduce data transfer during the thematic processing. The COG format is already employed under multiple initiatives such as: OpenAerialMap, INPE, MAXAR/DigitalGlobe, Planet, NASA and Copernicus’s Mundi DIAS.
STAC and COG have been recently adopted by USGS for the provision of Landsat-7 and Landsat-8 products, as described in the USGS Data Format Control Book6 which describes the format of the data to be used in Collection 2 processing.
1. SpatioTemporal Asset Catalog Core Specification, The core components of STAC, https://stacspec.org/core.html |
# Is there a faster way to calculate Abs[z]^2 numerically?
Here I'm not interested in accuracy (see 13614) but rather in raw speed. You'd think that for a complex machine-precision number z, calculating Abs[z]^2 should be faster than calculating Abs[z] because the latter requires a square root whereas the former does not. Yet it's not so:
s = RandomVariate[NormalDistribution[], {10^7, 2}].{1, I};
DeveloperPackedArrayQ[s]
(* True *)
Abs[s]^2; // AbsoluteTiming // First
(* 0.083337 *)
Abs[s]; // AbsoluteTiming // First
(* 0.033179 *)
This indicates that Abs[z]^2 is really calculated by summing the squares of real and imaginary parts, taking a square root (for Abs[z]), and then re-squaring (for Abs[z]^2).
Is there a faster way to compute Abs[z]^2? Is there a hidden equivalent to the GSL's gsl_complex_abs2 function? The source code of this GSL function is simply to return Re[z]^2+Im[z]^2; no fancy tricks.
• Here's an even slower way: (Re[#]^2 + Im[#]^2) & /@ s. And even slower still: Total[ReIm[#]^2] & /@ s – bill s May 10 '19 at 14:24
There's InternalAbsSquare:
s = RandomVariate[NormalDistribution[], {10^7, 2}].{1, I};
foo = InternalAbsSquare[s]; // AbsoluteTiming // First
murf = Abs[s]^2; // AbsoluteTiming // First
(*
0.022909
0.063441
*)
foo == murf
(* True *)
• Ah yes precisely what I was looking for, many thanks Michael! Is there a repository of such tricks? – Roman May 10 '19 at 14:25
• @Roman I was just looking. I thought there was a post about useful Internal functions, but I couldn't find it just now. The context contains some useful numerical functions like Log1p and Expm1. StatisticsLibrary also contains some nice, well-programmed functions. – Michael E2 May 10 '19 at 14:31
• – Chris K May 10 '19 at 14:31
• @ChrisK That must be it! Thanks. – Michael E2 May 10 '19 at 14:32
• @CATrevillian I would have thought it was in the MKL (Intel Math Kernel Library), but I didn't find it there. I guess I don't know. – Michael E2 May 11 '19 at 3:10 |
mersenneforum.org GPU Prime Gap searching
Register FAQ Search Today's Posts Mark Forums Read
2021-06-25, 18:41 #12
MJansen
Jan 2018
22·3·7 Posts
Quote:
Originally Posted by ATH mfaktc can also sieve on the GPU: SieveOnGPU=1 in mfaktc.ini. It think almost everyone uses that now, and I think it was only in the very beginning of mfaktc that people were using CPU for sieving, but I could be wrong about that.
Thanks ATH! The thread is 300+ pages long over 10 years, I did not read them all yet ;-) Cool to see the sieving has been moved to the GPU though! Any indication where in the thread that change is announced?
Kind regards
Michiel
Last fiddled with by MJansen on 2021-06-25 at 18:42 Reason: Changed typo's
2021-06-25, 19:39 #13
ATH
Einyen
Dec 2003
Denmark
2·3·13·41 Posts
Quote:
Originally Posted by MJansen Any indication where in the thread that change is announced?
Post #1948. Seems it was released in version 0.20 Dec 2012, about 3 years after the thread started, and SieveOnGPU was set to be automatically on from the start, so no one probably used CPU sieving since then.
2021-06-26, 21:59 #14 MJansen Jan 2018 1248 Posts Hi, In the mfaktc thread, the sieving on GPU seemed to get a boost after post #1610. No specifics though as far as I could find. I remember Dana once posted that a prp test is very fast and the mfaktc thread seems to indicate that pre-sieving should be kept to a minimum (1000 primes tops). I have no data on real life performance, but my intuitive start point would be: a. pre-sieve and PRP on the GPU: b. I would try a wheel approach first as a base reference, and prp the remaining candidates, i.e. 1 (or more?) fermat test(s) c. additional tests would be to play around with the number of primes in the pre-sieve to determine the trade off between pre-sieving and prp-ing Question is how to avoid possible pseudoprimes? Is it enough to perform a second Fermat test in a different base, or do you need more tests? Kind regards Michiel
2021-06-27, 15:37 #15
ATH
Einyen
Dec 2003
Denmark
2×3×13×41 Posts
Quote:
Originally Posted by MJansen Question is how to avoid possible pseudoprimes? Is it enough to perform a second Fermat test in a different base, or do you need more tests?
Personally in my own CPU program if the first SPRP test is positive I do a Lucas test, so effectively a BPSW test. It is a lot slower than just a Fermat test, but it is almost sure not to find any pseudoprimes, since BPSW pseudoprimes are so rare that none are known. I have no idea if a Lucas test is possible or feasible on a GPU.
But above 264 Fermat and SPRP pseudoprimes are very rare, and the risk that a pseudoprime also should be "blocking" a record prime gap must be very very low, but I'm not sure if it is low enough to be negligible.
2021-06-29, 03:44 #16 CraigLo Mar 2021 738 Posts For sieving I start with a small wheel. I'm currently using 2310 but I haven't had a chance to test 30030 yet. Using 2310 gives me 480 possible candidates. I then sieve for each of the possible candidates. For example the first candidate is 1 so I sieve for 1, 2311, 4621, 6931, etc. This starts with 2 more wheels with 5 primes each. The first is copied into each possible candidate to initialize the sieve. The second wheel is combined with the first using a bitwise-or. Then I do 5 rounds of sieving with 2048 primes each. These are done using shared memory. I sieve 2048 primes for 192x1024 elements at a time. I then copy these into my global sieve. I find this approach fastest up to around 100k primes. After that approaches sieving directly into global memory are faster. In a single processing loop I do 80 of the shared sieve blocks. That means a single candidate gets sieved out to 192*1024*80=15.7E6. With 2310 elements in the first wheel that means I get through 36E9 per loop. Using a minimum gap of 1300 it takes about .24 seconds per loop which gives me 150E9 per sec. I haven't tested sieving only in a while but it is probably 2-3 times faster.
2021-06-29, 03:49 #17 CraigLo Mar 2021 59 Posts I looked briefly at a Lucas test but haven't thought about implementing it yet. On a CPU how much slower is it than a Fermat test?
2021-06-29, 11:15 #18 a1call "Rashid Naimi" Oct 2015 Remote to Here/There 1000011111102 Posts Well the experts seem to have chosen to stay silent so far so I will give you a non expert reply. You will have to know all the prime factors of p-1. So there is Factoring-computation-cost. And then you will have to do modular calculations computationally as expensive as a single Fermat test per each prime factor: Mod(base,p)^((p-1)/eachPrimeFactor) != 1 Then you have to try enough "random" bases to get a !=1 for all the found prime factors. Normally it does not take more than a few random trials. BTW this is an excellent thread. Thank you for sharing your expertise without the common-attitudes that seem to dominate the members here. ETA one issue is that you might hit a Fermat-pseudiprime and have to have implementations to break the loop. While they are rare for given base for larger numbers there exist infinitude of them per given base growing exponentially in size. Last fiddled with by a1call on 2021-06-29 at 11:50
2021-07-15, 12:18 #19 ATH Einyen Dec 2003 Denmark 2·3·13·41 Posts I switched my program from BPSW test (Miller-Rabin + Lucas) to 2 x Miller-Rabin instead, and it is now twice as fast. I'm pretty sure BPSW test is overkill here. I'm using base 2 and then base 9375, which is one of the bases with few survivors when testing all the 2-SPRP in the Feitsma list up to 264. Craig: You are using only 1 Fermat test ? Did you consider switching to the Miller-Rabin (SPRP) test ? The complexity according to Wikipedia is O(log3n) for MR and O(log2n*loglog n) for Fermat, so it should not be that much slower, but maybe it is hard to implement on GPU? The advantage is that there are no Carmichael numbers for SPRP test (where all bases shows PRP incorrectly), and there are fewer composite SPRP numbers than Fermat PRP. I'm not sure when doing 1 Fermat or 1 MR test, how likely it is that a false PRP will block a big prime gap, it seems to be very unlikely, but the question is if it is unlikely enough. 2-SPRP numbers are pretty rare, when I tried searching above 264 a few years ago, and if you look at the Feitsma list, there are 808 2-SPRP in the last 1015 below 264 or 1 every 1.24*1012 on average. I'm not sure about the numbers for Fermat PRP. Are your program testing all PRP surviving your sieve? The way I do it is that I set a minimum gap I want to find, in my case 1320 because it is the highest I have found above 264, then after I have sieved an interval I jump 1320 ahead from the last PRP I found and then test backwards until I find a PRP, then I jump ahead 1320 again and repeat. If I happen to get from 1320 down to 0 without finding a PRP then I search upwards again from 1320 until I find a new "record" gap. That way I'm sure no gap above or equal to my minimum is missed, but I will not find any gaps below the minimum. On average my program only reach from 1320 down to 1100-1200 before it finds a PRP. I'm not sure if that would work on a GPU. If you use all the threads to PRP test the same interval it probably will not, but if each thread tests its own small interval, it could be useful, but maybe you are already using this strategy? If you do I guess your minimum gap is set to 1000. Last fiddled with by ATH on 2021-07-15 at 12:20
2021-07-16, 05:05 #20 CraigLo Mar 2021 59 Posts It probably wouldn't be too hard to convert my Fermat code to a MR test. The time complexity should be the same. I think the O(log2n*loglog n) you wrote for Fermat uses an FFT multiply which is only faster for very large numbers. http://www.janfeitsma.nl/math/psp2/statistics From the Fietsma list there is about 1 2-PRP every 3E11 from 1E19 to 264. This is about 4x more than the 2-SPRP. I think there will be about 500 gaps >= 1400 from 264 to 265 and each will require about 70 prime checks on average. The chance that a gap above 1400 from 264 to 265 is missed due to a PRP is about 1 in 3E11/500/70 = 1 in 8E6. I'm currently saving gaps >= 1300 using an approach similar to yours. All gaps found in the GPU above 1300 are sent back to the CPU for a more thorough check using the gmp library.
2021-09-03, 09:49 #21 MJansen Jan 2018 8410 Posts I bought a laptop with a GPU (Geforce 1650 mobile, 1024 cuda cores, 16 SM's of 64 cores each) for testing. I will be using windows however (not Linux) and will install the necessary programs next week. A fast search showed that I will have to use C++ (new to me ...) to code. I looked at the pseudoprime problem and the solution ATH uses, seems most interesting (and fast) but also no clue as to how to implement that yet on a GPU. I have been looking at Dana's Perl code, but not sure this is the correct 64 bit code: https://metacpan.org/dist/Math-Prime...ime/Util/PP.pm For prime gaps, the fastest I found was using his next_prime / prev_prime code. sub next_prime { my($n) = @_; _validate_positive_integer($n); return $_prime_next_small[$n] if $n <=$#_prime_next_small; # This turns out not to be faster. # return $_primes_small[1+_tiny_prime_count($n)] if $n <$_primes_small[-1]; return Math::BigInt->new(MPU_32BIT ? "4294967311" : "18446744073709551629") if ref($n) ne 'Math::BigInt' &&$n >= MPU_MAXPRIME; # n is now either 1) not bigint and < maxprime, or (2) bigint and >= uvmax if ($n > 4294967295 && Math::Prime::Util::prime_get_config()->{'gmp'}) { return Math::Prime::Util::_reftyped($_[0], Math::Prime::Util::GMP::next_prime($n)); } if (ref($n) eq 'Math::BigInt') { do { $n +=$_wheeladvance30[$n%30]; } while !Math::BigInt::bgcd($n, B_PRIM767)->is_one || !_miller_rabin_2($n) || !is_extra_strong_lucas_pseudoprime($n); } else { do { $n +=$_wheeladvance30[$n%30]; } while !($n%7) || !_is_prime7($n); }$n; } And it seems like he uses a small wheel (30) and a Miller_Rabin base 2 test followed by a full Lucas, so in effect a full BPSW test? Is this a correct assumption?
2021-09-05, 06:20 #22
LaurV
Romulan Interpreter
"name field"
Jun 2011
Thailand
265016 Posts
Please use code tabs when you post code, it takes less space on the screen (especially for long routines, and not all of us have high-resolution, super-wide monitors), and it makes the code easily to read (guess what, not all of us are perl experts either ). Moreover, Mike may add some syntax-highlighting for perl in the future, which will make our life even easier
(edit: albeit code in quotes looks crap, as I just found out from below... grrr )
Quote:
Originally Posted by MJansen Code: sub next_prime { my($n) = @_; _validate_positive_integer($n); return $_prime_next_small[$n] if $n <=$#_prime_next_small; # This turns out not to be faster. # return $_primes_small[1+_tiny_prime_count($n)] if $n <$_primes_small[-1]; return Math::BigInt->new(MPU_32BIT ? "4294967311" : "18446744073709551629") if ref($n) ne 'Math::BigInt' &&$n >= MPU_MAXPRIME; # n is now either 1) not bigint and < maxprime, or (2) bigint and >= uvmax if ($n > 4294967295 && Math::Prime::Util::prime_get_config()->{'gmp'}) { return Math::Prime::Util::_reftyped($_[0], Math::Prime::Util::GMP::next_prime($n)); } if (ref($n) eq 'Math::BigInt') { do { $n +=$_wheeladvance30[$n%30]; } while !Math::BigInt::bgcd($n, B_PRIM767)->is_one || !_miller_rabin_2($n) || !is_extra_strong_lucas_pseudoprime($n); } else { do { $n +=$_wheeladvance30[$n%30]; } while !($n%7) || !_is_prime7($n); }$n; }
Last fiddled with by LaurV on 2021-09-05 at 06:23
Similar Threads Thread Thread Starter Forum Replies Last Post Dale Mahalko Software 7 2015-03-21 19:55 hhh Math 7 2014-03-18 14:37 Trilo Riesel Prime Search 33 2013-09-19 21:21 Mini-Geek Twin Prime Search 8 2011-01-30 00:18 ltd Prime Sierpinski Project 3 2006-03-23 19:27
All times are UTC. The time now is 20:59.
Sat Dec 4 20:59:33 UTC 2021 up 134 days, 15:28, 1 user, load averages: 0.94, 1.05, 1.13 |
# Edit:
As per moewe's suggestion, the MWE is now truly minimal (I think). I removed the need for multiple .bib files by removing abbreviations, and made the main .bib file smaller (exactly as shown below).
Here's the log (file.blg):
[0] Config.pm:318> INFO - This is Biber 1.8
[0] Config.pm:321> INFO - Logfile is 'file.blg'
[60] biber-darwin:275> INFO - === Sat May 10, 2014, 14:45:31
[61] Biber.pm:333> INFO - Reading 'file.bcf'
[128] Biber.pm:630> INFO - Found 2 citekeys in bib section 0
[156] Biber.pm:3053> INFO - Processing section 0
[181] Biber.pm:3190> INFO - Looking for bibtex format file 'https://dl.dropboxusercontent.com/u/47261882/bibliography.bib' for section 0
[182] bibtex.pm:134> INFO - Data source 'https://dl.dropboxusercontent.com/u/47261882/bibliography.bib' is a remote BibTeX data source - fetching ...
[925] bibtex.pm:812> INFO - Found BibTeX data source '/var/folders/lw/xmh_g5vx4j9ctfxysb189qyr0000gn/T/ZegiE_xxWe/biber_remote_data_source_vqdu_.bib'
[930] bibtex.pm:134> INFO - Data source 'https://dl.dropboxusercontent.com/u/47261882/bibliography.bib' is a remote BibTeX data source - fetching ...
[1508] bibtex.pm:812> INFO - Found BibTeX data source '/var/folders/lw/xmh_g5vx4j9ctfxysb189qyr0000gn/T/ZegiE_xxWe/biber_remote_data_source_17lZd.bib'
[1509] Utils.pm:169> WARN - Duplicate entry key: 'a:watson:2014:01' in file '/var/folders/lw/xmh_g5vx4j9ctfxysb189qyr0000gn/T/ZegiE_xxWe/biber_remote_data_source_17lZd.bib', skipping ...
[1509] Utils.pm:169> WARN - Duplicate entry key: 'ic:bedau:2009:01' in file '/var/folders/lw/xmh_g5vx4j9ctfxysb189qyr0000gn/T/ZegiE_xxWe/biber_remote_data_source_17lZd.bib', skipping ...
[1509] Utils.pm:169> WARN - Duplicate entry key: 'c:barberousse:2009:01' in file '/var/folders/lw/xmh_g5vx4j9ctfxysb189qyr0000gn/T/ZegiE_xxWe/biber_remote_data_source_17lZd.bib', skipping ...
[1510] Utils.pm:169> WARN - I didn't find a database entry for crossref 'c:barberousse:2009:01' in entry 'ic:bedau:2009:01' - ignoring (section 0)
[1535] Biber.pm:2939> INFO - Overriding locale 'en_GB.UTF-8' default tailoring 'variable = shifted' with 'variable = non-ignorable'
[1535] Biber.pm:2945> INFO - Sorting 'entry' list 'nty' keys
[1535] Biber.pm:2949> INFO - No sort tailoring available for locale 'en_GB.UTF-8'
[1539] bbl.pm:482> INFO - Writing 'file.bbl' with encoding 'ascii'
[1540] bbl.pm:555> INFO - Output to file.bbl
[1540] Biber.pm:105> INFO - WARNINGS: 4
It seems like the file is fetched twice (hence the warnings for duplicate keys). But I do not know how this is related to the issue.
When using biblatex's (and biber's) feature for fetching .bib files from remote locations, cross-refs are not resolved.
Consider the following bibliography file (bibliography.bib):
@Article{a:watson:2014:01,
title = {The Evolution of Phenotypic Correlations and Developmental Memory''},
author = {Watson, Richard A. and Wagner, G{\"u}nter P. and Pavlicev, Mihaela and Weinreich, Daniel M. and Mills, Rob},
journal = {Evolution},
year = {2014},
month = apr,
volume = {68},
number = {4},
pages = {1124--1138},
doi = {10.1111/evo.12337},
url = {http://dx.doi.org/10.1111/evo.12337},
}
@InCollection{ic:bedau:2009:01,
title = {The Evolution of Complexity},
author = {Bedau, Mark A.},
pages = {111--130},
doi = {10.1007/978-1-4020-9636-5_8},
url = {http://dx.doi.org/10.1007/978-1-4020-9636-5_8},
crossref = {c:barberousse:2009:01},
}
@Collection{c:barberousse:2009:01,
editor = {Barberousse, Anouk and Morange, Michel and Pradeu, Thomas},
title = {Mapping the Future of Biology},
booktitle = {Mapping the Future of Biology},
subtitle = {Evolving Concepts and Theories},
publisher = {Springer Netherlands},
year = {2009},
doi = {10.1007/978-1-4020-9636-5},
url = {http://dx.doi.org/10.1007/978-1-4020-9636-5},
series = {Boston Studies in the Philosophy of Science},
volume = {266},
}
If I cite ic:bedau:2009:01 when using \addbibresource{bibliography.bib}, the cross-reference to c:barberousse:2009:01 is resolved successfully. If I cite it when using \addbibresource[location=remote]{<url>.bib}, however, the cross-reference is not resolved, unless I happen to also cite c:barberousse:2009:01 somewhere else in the text.
I am using an up-to-date version of MacTeX and compiling with pdflatex -> biber -> pdflatex.
Any ideas? Can someone at least confirm that they face the same issue? MWE below.
# MWE:
\documentclass{article}
\usepackage[backend=biber]{biblatex}
% If I fetch the file from a remote location, cross-refs are not resolved.
% If the file is stored locally, everything works okay.
\begin{document}
% Does not resolve cross-reference.
\cite{a:watson:2014:01,ic:bedau:2009:01}.
% Resolves cross-reference because it is also cited explicitly.
% \cite{a:watson:2014:01,ic:bedau:2009:01,c:barberousse:2009:01}.
\printbibliography
\end{document}
• Your MWE gives me the following error in the blg-file. Seems like an encoding problem, or the file is corrupt: [11450] Utils.pm:169> WARN - BibTeX subsystem: C:\Users\Besitzer\AppData\Local\Temp\ogRl1vx8ui\biber_remote_data_source_Xu9Jt.bib_924.utf8, line 3, warning: 18 characters of junk seen at toplevel – musicman May 9 '14 at 16:10
• @musicman Those junk characters are headers I use to separate journal papers from conference papers etc -- they are a non-issue, really. I always get that warning, but the cross-referencing issue occurs only when I fetch the .bib file from a remote location. – sudosensei May 9 '14 at 16:18
• Maybe you'd like to set up a truly minimal example with a stripped-down version of your .bib files containing just one or two entries that show the behaviour. (I think the bib-abbrev.bib file could easily be eliminated from the MWE; preferably a really minimal example would also not trigger other warnings anout junk chracters etc.) So the issue is easier to investigate. – moewe May 10 '14 at 12:43
• This was a bug and should be corrected in the biber 1.9 available in the DEV folder on SourceForge. You need to be using biblatex 2.9 DEV version with biber 1.9 (also on SourceForge). – PLK May 10 '14 at 18:53
• @PLK That is great to hear! Could you write this as an answer so that I can accept it, please? Thanks a lot for your prompt reply! – sudosensei May 10 '14 at 20:56
• Just tested using biber 1.9 and biblatex 2.9. Everything works like a charm! Thanks for your help, and for your fantastic work on these two packages. – sudosensei May 11 '14 at 12:17 |
# Open problem: ideal vector commitment
I do not understand how the pairing works. If both key and value are 28 bits, then key * p(value + 2^{28}) will be larger than 2^{32} so you can not apply the p-function on it again.
You’re right. I was sloppy with the bounds. That’s an error. You have to adjust it. Either you’re able to calculate p(k) for higher k or you have to reduce number of different values or keys. Nevertheless, the overall concept stays the same.
In the section “Distributed accumulator updates” the paper of Boneh et al. says about updating membership witnesses: “Using the RootFactor algorithm this time can be reduced to O(n log(n)) operations or amortized O(log(n)) operations per witness.”
Why are you using the p function twice? Wouldn’t pair=p(key+value*2^{28}) make more efficient use of the available primes?
How far can we stretch the p-function such that it still is practical? Because if we want to store hashes we need at least 120 bits.
Yes. Boneh et al. 's paper looks awesome.
I was also wondering why @vbuterin in Using polynomial commitments to replace state roots
picked polynomial commitments over vector commitment? Is there any trade-off?
BTW, I guess the “ideal” in the title means meeting the 5 properties you @vbuterin list, instead of using “ideal group/ring, etc.” (from Math perspectives)?
Yes, ideal is meant as an adjective
Indeed, what we’re looking for is vector commitments – it just happens that polynomial commitments, at this stage, seem to make some of the best vector commitments we know of.
The polynomial commitment primitive does have the additional advantage that it would naturally expand to data availability roots.
We recently proposed Hyperproofs to address this open problem by Vitalik.
Hyperproofs can be regarded as an algebraic generalization of Merkle trees that use a hash function based on commitments to multivariate polynomials (see [PST13e]).
Benefits:
1. Aggregating Hyperproofs is 10 to 100 times faster than SNARK-friendly Merkle trees via Poseidon or Pedersen hashes
2. Individual Hyperproofs are as large as Merkle proofs
3. Updating a tree of Hyperproofs is competitive with updating a Poseidon-hashed Merkle tree: 2.6 millisecs / update.
4. Hyperproofs are homomorphic.
Room for improvement / future work:
1. Aggregated Hyperproofs are a bit large: 52 KiB
2. Verifying an aggregated proof is rather slow: 18 seconds
3. Verifying an individual Hyperproof is slower than a Merkle proof: 11 millisecs
4. Trusted setup with linear-sized public parameters
Key ideas:
• Represent a vector \textbf{a} = [a_0,\dots, a_{n-1}], where n=2^\ell, as a multilinear-extension polynomial f(x_\ell, \dots, x_1) such that for all positions i with binary expansion i_\ell, i_{\ell-1},\dots, i_1 we have f(i_\ell, \dots, i_1) = a_i.
• Commit to this polynomial using PST (multivariate) polynomial commitments: i.e., the generalization of KZG to multivariate polynomials
• Specifically, the vector commitment is c=g^{f(s_\ell, \dots, s_1)}, where s_\ell, \dots, s_1 are unknown trapdoors. (See paper for details on full public parameters.)
• The proof for a_i consists of \ell PST commitments to quotient polynomials q_\ell,\dots q_1 such that f = a_i + \sum_{k = 1,\dots,\ell} q_k \cdot (x_k - i_k)
• q_\ell is computed by dividing f by x_\ell - i_\ell
• q_{\ell-1} is computed by dividing r_\ell by x_{\ell-1} - i_{\ell-1}, where r_\ell is the remainder from the division above
• …and so on
• q_1 is computed by dividing r_2 by x_1-i_1, which yields remainder r_1 = f(i_\ell, \dots, i_1) = a_i.
• Verifying a proof involves checking f = a_i + \sum_{k = 1,\dots,\ell} q_k \cdot (x_k - i_k) holds, but doing so against the PST commitment to f and to the quotients.
• This is done via \ell+1 pairings / bilinear maps.
• Let c be the vector commitment to \textbf{a}
• Let \pi_i = (w_1,\dots,w_\ell) be the proof for a_i, where w_k's are commitments to the relevant quotient polynomials q_k.
• Then, a proof is valid iff. the following pairing equation holds:
• e(c/g^{a_i}, g) \stackrel{?}{=} \prod_{k=1,\dots,\ell} e(w_k, g^{s_k - i_k})
• Unfortunately, computing all n PST proofs as above is too slow, taking O(n^2) time. Instead, we observe that, if we compute proofs in the canonical order above, then proofs often “intersect” and actually determine a tree, which we call a multilinear tree (MLT). Computing this tree only takes O(n\log{n}) time.
• To aggregate many proofs \{\pi_i\}_{i\in I}, where \pi_i = (w_{i,\ell},\dots, w_{i,1}), we prove knowledge of w_{i,k}'s that satisfy the pairing equation for each i\in I. For this, we use the recent generalization of Bulletproofs to pairing equations by Bunz et al. [BMM+19]
• Our MLT is homomorphic: two trees for two vectors \textbf{a} and \textbf{b} can be homomorphically combined into a tree for their sum \textbf{a}+\textbf{b}.
• This leads to very straightforward proof updates in the MLT, after the vector changes
• It also has other applications, such as providing unstealability of proofs, a property which can be used to incentive proof computation. |
dc.contributor.author Goldberga, Ieva dc.date.accessioned 2020-02-05T15:09:39Z dc.date.available 2020-02-05T15:09:39Z dc.date.issued 2020-03-21 dc.date.submitted 2019-09-30 dc.identifier.uri https://www.repository.cam.ac.uk/handle/1810/301743 dc.description.abstract In recent years, solid-state Nuclear Magnetic Resonance (NMR) has emerged as an established spectroscopic method to afford detailed structural information on native cellular and extracellular components at atomic-scale resolution. Fibrillar collagens are the most common component of the extracellular matrix (ECM), comprising up to 20% by weight of the human body and is found in most of the tissues. Due to their diverse structures and compositions, collagens serve many functions, providing structural and mechanical support for surrounding cells, and playing important roles in cell-to-cell communication. Nonetheless, despite being at first glance a simple protein formed by three homologous polypeptide chains of repeating three-amino-acid triads trimerised into a triple helix, it is a highly versatile and complex system. Due to the complexity and size of the triple helix, the scientific community still lacks understanding of collagen structure, flexibility and dynamics at the atomic level, in spite of today’s advances in technology. The combination of $^13$C, $^15$N-labelled amino acid enrichment of in-vitro or in-vivo materials with two-dimensional solid-state NMR spectroscopy potentially provides a more detailed understanding of the complex collagen structure and dynamics at atomic resolution. Furthermore, our knowledge of undesirable structural changes within the extracellular matrix, such as non-enzymatic glycation reactions with reducing sugars, is limited. Glycation-modified extracellular matrix (ECM) leads to abnormal cell behaviour and widespread cell necrosis, potentially causing numerous health complications, e.g. in diabetic patients. Solid-state NMR is a powerful probe to study these structural changes. The work presented in this thesis demonstrates how solid-state NMR can be used to study the effects of genetic and glycation chemistry on the molecular structure and dynamics of the collagen. We employed a selection of synthetic model peptides that contain a variation of the native sequence representing normal and defected collagen triple-helical compositions to assess the backbone motions via the use of the $^15$N T$_1$ relaxation. Further, we use U-$^13$C,$^15$N-isotopically enriched collagen ECM samples to investigate the conformational and dynamic changes after glycation of the hydrophilic and hydrophobic regions of the collagen fibrils. Finally, we propose a methodology that can be employed to probe different sites (gap and overlap zones) of the collagen fibrils in their native state which can be exploited to detect less abundant species found in the collagen protein. dc.description.sponsorship EPSRC dc.language.iso en dc.rights All rights reserved dc.rights All Rights Reserved en dc.rights.uri https://www.rioxx.net/licenses/all-rights-reserved/ en dc.subject Solid-State NMR dc.subject Extracellular matrix dc.subject Collagen dc.subject Isotopic enrichment dc.subject T1 relaxation dc.subject Inverse Laplace Transform (ILT) dc.title Elucidating Structure and Dynamics of Extracellular Matrix Collagen Using Solid-State NMR dc.type Thesis dc.type.qualificationlevel Doctoral dc.type.qualificationname Doctor of Philosophy (PhD) dc.publisher.institution University of Cambridge dc.publisher.department Chemistry dc.date.updated 2020-02-03T21:28:32Z dc.identifier.doi 10.17863/CAM.48814 dc.contributor.orcid Goldberga, Ieva [0000-0003-4284-3527] dc.publisher.college Darwin College dc.type.qualificationtitle PhD in Chemistry cam.supervisor Duer, Melinda J. cam.supervisor.orcid Duer, Melinda J. [0000-0002-9196-5072] cam.thesis.funding false
|
# ULTIMATE CODEGOLF!!! (Codegolf a turing-complete system) [closed]
The goal: write a program that implements a turing-complete system. It could be cellular automata, a tag system, a turing machine, an interpreter for a language of your own design... the type of system doesn't matter as long as it satisfies the following conditions:
• Takes a "program" from the user as input (ex. the initial state of the turing machine)
• Runs the program
• Outputs the state of the system (or just the returns the output of the program)
• It's possible to compute anything that can be computed with the right input program to your system.
Self-interpreting is forbidden.
As usual entries in the same language that implement the same type of system will be judged by byte count.
For example: programs which implement a universal turing machine in Python will be compared against other programs which implement universal turing machines in Python. The spirit of the challenge is basically the simplest possible programs that can model a turing-complete system.
Because of the diversity of possible programming languages, vote for entries that you believe are clever, elegant, or especially short. The winner will be the entry with the most votes.
## closed as too broad by Peter Taylor, Mego♦, Dennis♦Nov 6 '16 at 6:52
Please edit the question to limit it to a specific problem with enough detail to identify an adequate answer. Avoid asking multiple distinct questions at once. See the How to Ask page for help clarifying this question. If this question can be reworded to fit the rules in the help center, please edit the question.
• Would a single eval() be considered forbidden in this challenge? – Sunny Pun Nov 6 '16 at 5:20
• Could you be more specific? – J. Antonio Perez Nov 6 '16 at 5:22
• @JorgePerez Hmm.... I get the feeling people could just copy answers from the "Golf a BrainF**k Intepreter" thread... – Socratic Phoenix Nov 6 '16 at 5:29
• I don't think the "word and symbol" system is good enough - it may cause ambiguities down the way. Just use byte-count, it should be fine. Also, what SunnyPun meant was a program (in Python, say) in which the code is eval(input()), and the input is valid Python code. That answer is valid under your current terms. – Qwerp-Derp Nov 6 '16 at 5:39
• Do X creatively popularity contests have fallen out of scope. In addition, asking to implement one choice out of the infinity of Turing complete languages makes this rather broad, and without an objective goal like code size, too broad. By the way, we have a sandbox where you can post challenge ideas and get feedback from the community before "going live". – Dennis Nov 6 '16 at 6:52
# Python 3 for the language ///
A terribly long program, accepts input as a series of lines terminated by a newline and then the EOF character (May be Ctrl-Z or Ctrl-D depending on your OS).
s=''
try:
while 1:
s+=input()+'\n'
except:
pass
try:
while len(s):
if'/'==s[0]:
s=s[1:]
f=r=''
while'/'!=s[0]:
if'\\'==s[0]:
s=s[1:]
f+=s[0]
s=s[1:]
s=s[1:]
while'/'!=s[0]:
if'\\'==s[0]:
s=s[1:]
r+=s[0]
s=s[1:]
s=s[1:]
while f in s:
s=s[:s.index(f)]+r+s[s.index(f)+len(f):]
elif'\\'==s[0]:
print(s[1])
s=s[2:]
else:
print(s[0],end='')
s=s[1:]
except:pass
Try it on Ideone!
It has been slightly golfed.
• Could be golfed more by inlining multiple statements after a colon (e.g. while f in s:s=s[:s.index and elif'\\'==s[0]:print(s[1]);s=s[2:]). – wizzwizz4 Sep 22 '17 at 16:35
• @wizzwizz4 I got your comment; unfortunately I'm not in a position where I can edit easily right now. – boboquack Sep 22 '17 at 19:54
• That's fine. This is the (asynchronous) internet - I can wait! :-) – wizzwizz4 Sep 22 '17 at 21:06 |
• 14
• 12
• 9
• 10
• 13
# Performance hit from primitive casts?
This topic is 3816 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
Hello all. I wouldn't think this would be an uncommon question, but I can't find any information on it in google or in the forums archives, so either this really is a non-issue or I'm just not using the right search terms. Basically, I'm curious about casting performance, specifically from integer types to floating types and back. What exactly happens when you do a cast from one primitive type to another (assume low level languages, like C or D)? Do modern x86 processors have instructions that can convert register values natively, or does the compiler add in a few extra opcodes to do the conversion first? How about for lower-level CPUs such as ARM? Either way, is there any performance hit from casting? If so, does anybody know any ballpark figures for extra cycles required before the cast instruction(s) is/are retired? Any and all information would be greatly appreciated. This is a topic I've long been curious about. Anytime performance would be an issue I try to plan my data structures in such a way as to prevent as much casting at runtime as possible, but I'm not even sure if this is even a big enough deal to worry about, as I can find so little information on the subject.
##### Share on other sites
In general, if you're interested in what your compiler is doing under the hood, you can ask it. For example, if you compile this in MSVC 7.1 with the /FA switch:
int f(float b) {
return b;
}
float g(int b) {
return b;
}
You get:
_TEXT SEGMENT
_b$= 8 ; size = 4 ?f@@YAHM@Z PROC NEAR ; f, COMDAT ; Line 2 fld DWORD PTR _b$[esp-4]
jmp __ftol2
?f@@YAHM@Z ENDP ; f
_TEXT ENDS
PUBLIC ?g@@YAMH@Z ; g
EXTRN __fltused:NEAR
; Function compile flags: /Ogtpy
; COMDAT ?g@@YAMH@Z
_TEXT SEGMENT
tv65 = 8 ; size = 4
_b$= 8 ; size = 4 ?g@@YAMH@Z PROC NEAR ; g, COMDAT ; Line 6 fild DWORD PTR _b$[esp-4]
fstp DWORD PTR tv65[esp-4]
fld DWORD PTR tv65[esp-4]
; Line 7
ret 0
?g@@YAMH@Z ENDP ; g
_TEXT ENDS
END
Other compilers have similar switches to generate assembly output. Ex: gcc you can use the -S switch.
(Of course, since these are function calls rather than inline casts, the results will be different than what happens if you do a cast inside a function. However, you can generate the code for that and look yourself what happens in specific cases.)
##### Share on other sites
Thanks for the help, so it appears that on your computer at least (I'm assuming it's a fairly recent x86), integer to float is practically free but float to integer requires a software library to do the conversion. I'm not really sure on the second one, though, my asm reading skills aren't the best.
##### Share on other sites
Most ARMs don't have FPUs so anything with floating point is done in software. I've never used an ARM that did so I don't know what they support, look in the manual.
On processors that do have FPUs many have instructions to convert between fp/int, but it's not a guarantee, look in the manual. Many also have instructions to convert between single/double fp.
Most processors that support more than one int type also have instructions to sign or zero-extend an int type into a larger int type, but it's not a guarantee, look in the manual.
As for the performance/latency/throughput of such instructions... look in the manual.
##### Share on other sites
Quote:
Original post by kuroiorandaThanks for the help, so it appears that on your computer at least (I'm assuming it's a fairly recent x86), integer to float is practically free but float to integer requires a software library to do the conversion. I'm not really sure on the second one, though, my asm reading skills aren't the best.
The compiler is being retarded or pedantic in that case; x86 has the fist
instruction to convert float to int, but probably that was compiled without optimization and/or with strict float semantics so it generates a call to some library function ftol. I don't remember if fist
completely adheres to the standard for floats, so that might be why.
##### Share on other sites
Release build, optimized for speed (/O2), with default floating point consistency and intrinsics enabled. Don't ask me why it does it either. All I know is that's what it says it does.
##### Share on other sites
Outrider>
Thanks, that was exactly the sort of info I was looking for!
SiCrane>
What opcode set are you compiling to? I haven't used VCC since version 6, but I know that with gcc you can specify which processor instruction sets to include. So for example, I think the default is 386 compatibility, in which case it won't include instructions specific to the 486 and above. It's possible that the conversion functions only showed up in later CPUs (486 would be the earliest possible, since the 386 didn't have an FP). I'll try compiling that example later when I get home targeted to a higher x86 CPU.
##### Share on other sites
The default for processor sets for MSVC is /GB, which is equivalent to /G6 for MSVC 7.0 and 7.1. /G6 targets the Pentium Pro, Pentium II, Pentium III, and Pentium 4. Explicitly using either /GB or /G6 generates the same code as posted originally. If you crank it up to /G7, it generates:
?f@@YAHM@Z PROC NEAR ; f, COMDAT
; Line 1
push ecx
; Line 2
fld DWORD PTR _b\$[esp]
fnstcw WORD PTR tv66[esp]
movzx eax, WORD PTR tv66[esp]
or ah, 12 ; 0000000cH
mov DWORD PTR tv69[esp+4], eax
fldcw WORD PTR tv69[esp+4]
fistp DWORD PTR tv71[esp+4]
mov eax, DWORD PTR tv71[esp+4]
fldcw WORD PTR tv66[esp]
; Line 3
pop ecx
ret 0
?f@@YAHM@Z ENDP ; f
Which is still a bit more than just a FISTP, though you can see the op in there.
##### Share on other sites
I did a search for the FIST and FISTP instructions, and they appear to be implemented on the Pentium class processors (I even found some tests for the 486, but it was unclear if they were emulated or not). So I honestly have no idea why G6 optimized code wouldn't be using it.
The only thing I can think of it that it's because it's being converted for use as a return value, and the overhead of putting the integer in an FPU register for conversion and then pulling it back out and into the program stack incurs enough overhead that it's faster just to do the whole thing in in the integer units with magic numbers. Whereas if it were going to be used for arithmetic, putting it on an FPU stack might be worth the cost of pulling it back out again. |
### Home > CCA > Chapter 8 > Lesson 8.2.3 > Problem8-85
8-85.
1. Solve the equations below for x. Check your solutions. Homework Help ✎
1. (6x −18)(3x + 2) = 0
2. x2 − 7x + 10 = 0
3. 2x2 + 2x −12 = 0
4. 4x2 − 1 = 0
Use the Zero Product Property.
$\textit x = 3\text{ or }\textit x = -\frac{2}{3}$
Factor and find values of x that satisfy the equation.
Find the GCF, factor, and find values for x that satisfy the equation.
This is a difference of squares, a special case. |
# Converting units for area and volume
The method for converting between units works the same as the one for converting units of area and volume.
When you are converting one sort of unit to another, you need to know how many smaller units are needed to make larger unit.
For example:
• When converting from a larger unit to a smaller unit (eg to ), you multiply.
• When converting from a smaller unit to a larger unit (eg to ), you divide.
## Example 1
Convert into .
.
So, .
You are converting from a smaller unit to a larger unit , so divide.
.
## Example 2
Convert into .
.
So, .
You are converting from a larger unit to a smaller unit , so multiply. |
Moderate Ratios & Proportion Solved QuestionAptitude Discussion
Q. A noodles merchant buys two varieties of noodles the price of the first being twice that of the second. He sells the mixture at Rs 17.50 per kilogram thereby making a profit of $25%$. If the ratio of the amounts of the first noodles and the second noodles in the mixture is $2:3$, then the respective costs of each noodles are
✔ A. Rs 20, Rs 10 ✖ B. Rs 24, Rs 12 ✖ C. Rs 16, Rs 8 ✖ D. Rs 26, Rs 13
Solution:
Option(A) is correct
Let the price of one noodles = $k$
$\Rightarrow$ the price of other noodle = $\dfrac{k}{2}$
Price of 1 kg = $\dfrac{2k}{5}+\dfrac{3}{5}\times \dfrac{k}{2}=\dfrac{7k}{10}$
But CP = $\dfrac{17.50\times 100}{125}=14$
$\Rightarrow \dfrac{7k}{10}=14$
$\Rightarrow k=20$
So price of the noodles's are 20 and 10.
(2) Comment(s)
Asmit Anand
()
Let price of 2nd noodle be x and price of 1st noodle be 2x
Given, profit on selling 1 kg of mixture = Rs 25
sp of 1kg of mixture = Rs 17.50
Therefore Sp of 1kg of mixture is given by-
sp = (100 + profit)/ 100 * cp
17.50 = (125/100) x cp
on solving above eq-
cp = Rs 14 for 1 kg of mixture
Now let amount of 1st type of noodle be t1 kg and amount of 2nd type of noodle be t2 kg
and let total amount of mixture be y kg
hence
cp of total mixture is given by
14 * y = t2 * x + t1 * (2x)
y = (t2 * x )/14 + (t1 * 2x)/14
t1 + t2 = (t2 * x)/14 + (t1 * x)/7
dividing both sides by t2, we get -
t1/t2 + 1 = x/14 + (t1/7*t2 )*x ------- eq-1
given- total amount 1st noodle /total amount of 2nd noodle = t1/t2 =2/3
thus putting the value of t/t2 in eq - 1
we have,
2/3 +1 = x/14 + 2/21 * x
on solving above eq-
x = 10
therfore,
price of 1st noodle is Rs 10 and 2nd noodle is 2x = 2* 10 = Rs 20
Kruthi M
()
PLs explain more... Wat is CP ... Y is it equal to (7k/10)?? |
• entries
65
133
• views
78332
# One Pass, Two Pass, Red Pass, Blue Pass
340 views
Here it is: another late-week journal update that pretty much chronicles my weekend accomplishments, only later.
But First, Beams!
First up, here's a preview of the powered-up version of the final main weapon for the project:
Click to enlarge
The beam itself actually locks on to targets and continually damages them. Implementation-wise, it's a quadratic bezier. Initially, I tried to calculate the intersection of a quadratic-bezier-swept sphere (i.e. a thick bezier curve) and a bounding sphere exactly. That's all great, only it becomes a quartic equation (ax^4 + bx^3 + cx^2 + dx + e == 0), which is ridiculously difficult to compute programmatically (just check the javascript source on this page to see what I mean). So I opted for another solution:
I divided the curve up into a bunch of line segments, treated those segments as sphere-capped cylinders (capsules), and did much simpler intersection tests. PROBLEM SOLVED!
When Is Deferred Shading Not Deferred Shading?
I also implemented Light Pre-Pass Rendering, which is sort of a "Low-Calorie Deferred Shading" that Wolfgang Engel devised recently. Considering my original plan for lighting was to only allow right around 3 lights on-screen at a time, this gives me a much greater range of functionality. It's a three-step process, as illustrated below.
Render Object Normals (Cut a hole in a box)
Step 1: render all of the objects' normals and depth to a texture. Due to limitations that are either related to XNA or the fact that I want to allow a multisampled buffer, I'm not sure which, I can't read from the depth buffer, so I have to render the object depth into the same texture that the normal map is rendering into.
Given two normal components, you can reconstruct the third (because the normal's length is one). Generally, Z is used on the assumption that the Z component of the normal is always pointing towards the screen. However, with bump mapping (and even vertex normals), this is not a valid assumption. So just having the X and Y normal components is not enough. I decided to steal a bit from the blue channel to store the SIGN of the Z component. This leaves me with 15 bits of Z data, which, given the very limited (near-2D) range of important objects in the scene, is more than plenty for the lighting (as tested by the ugly-yet-useful "Learning to Love Your Z-Buffer" page).
Consequently, the HLSL code to pack and unpack the normals and depth looks like this:
float4 PackDepthNormal(float Z, float3 normal){ float4 output; // High depth (currently in the 0..127 range Z = saturate(Z); output.z = floor(Z*127); // Low depth 0..1 output.w = frac(Z*127); // Normal (xy) output.xy = normal.xy*.5+.5; // Encode sign of 0 in upper portion of high Z if(normal.z < 0) output.z += 128; // Convert to 0..1 output.z /= 255; return output;}void UnpackDepthNormal(float4 input, out float Z, out float3 normal){ // Read in the normal xy normal.xy = input.xy*2-1; // Compute the (unsigned) z normal normal.z = 1.0 - sqrt(dot(normal.xy, normal.xy)); float hiDepth = input.z*255; // Check the sign of the z normal component if(hiDepth >= 128) { normal.z = -normal.z; hiDepth -= 128; } Z = (hiDepth + input.w)/127.0;;}
And, it generates the following data:
Click to enlarge
That's the normal/depth texture (alpha not visualized) for the scene. The normals are in world-space (converting from the stored non-linear Z to world position using the texcoords is basically a multiply by the inverse of the viewProjection matrix, a very simple operation).
Render Pure Lighting (Put your junk in that box)
Next step, using that texture (the object normals and positions), you can render the lights as a screen-space pass very inexpensively (the cost of a light no longer has anything to do with the number of objects it's shining on, it's now simply a function of number of pixels it draws on). Bonus points: you can use a variation on instancing to render a bunch of lights of a similar type (i.e. a group of point lights) in a single pass, decreasing the cost-per-light even further.
The lighting data (pure diffuse lighting, in my case, though this operation can be modified in a number of ways to do more material lighting types and/or specular lighting if necessary, especially if you have a separate depth texture) is rendered into another texture, and it ends up looking as follows:
Click to enlarge
That's a render with three small point lights (red, green and blue in different areas of the screen) as well as a white directional light.
Render Objects With Materials (Make her open the box)
Finally, you render the objects again, only this time you render them with materials (diffuse texture, etc). However, instead of doing any lighting calculations at this time, you load them from the texture rendered in step 2.
This ends up looking like this (pretty much what you expect from a final render):
Click to enlarge
And that's how light pre-pass rendering works, in a nutshell. At least, it's how it works in my case, which is very simplistic, but it's all I need for the art style in my game. It's a lot easier on the resources than deferred shading, while still separating the lighting from the objects.
Delicious!
Hopefully, in my next update, I'll have an actual set of background objects (as that's the next item on the list, but it does require the dreadful tooth-pulling that is "artwork," so we'll see how quickly I can pull this together.
Until next time: Never give up, never surrender!
..and that's the way you do it! [cool]
##### Link to comment
Cool! Especially liking the beam.
I've just set up light indexed deferred rendering, which I'm pretty happy with. This method looks rather more flexible in terms of number of lights, but perhaps less so in terms of materials, and fiddling with lighting equations. Interesting to see though. :)
## Create an account or sign in to comment
You need to be a member in order to leave a comment
## Create an account
Sign up for a new account in our community. It's easy!
Register a new account |
Translator Disclaimer
VOL. 51 | 2006 Empirical and Gaussian processes on Besov classes
## Abstract
We give several conditions for pregaussianity of norm balls of Besov spaces defined over $\mathbb{R}^{d}$ by exploiting results in Haroske and Triebel (2005). Furthermore, complementing sufficient conditions in Nickl and Pötscher (2005), we give necessary conditions on the parameters of the Besov space to obtain the Donsker property of such balls. For certain parameter combinations Besov balls are shown to be pregaussian but not Donsker.
## Information
Published: 1 January 2006
First available in Project Euclid: 28 November 2007
zbMATH: 1156.60020
MathSciNet: MR2387769
Digital Object Identifier: 10.1214/074921706000000842
Subjects:
Primary: 60F17
Secondary: 46E35 |
VS.
# Bigraph vs. Graph
Published:
Bigraphnoun
(mathematics) bipartite graph
Graphnoun
A data chart (graphical representation of data) intended to illustrate the relationship between a set (or sets) of numbers (quantities, measurements or indicative numbers) and a reference set, whose elements are indexed to those of the former set(s) and may or may not be numbers.
Bigraph
A bigraph can be modelled as the superposition of a graph (the link graph) and a set of trees (the place graph).Each node of the bigraph is part of a graph and also part of some tree that describes how the nodes are nested. Bigraphs can be conveniently and formally displayed as diagrams.
Graphnoun
(mathematics) A set of points constituting a graphical representation of a real function; (formally) a set of tuples $\left(x_1, x_2, \ldots, x_m, y\right)\in\R^\left\{m+1\right\}$, where $y=f\left(x_1, x_2, \ldots, x_m\right)$ for a given function $f: \R^m\rightarrow\R$.
Graphnoun
(graph theory) (formally) An ordered pair of sets $\left(V,E\right)$, where the elements of $V$ are called vertices or nodes and $E$ is a set of pairs (called edges) of elements of $V$; (less formally) a set of vertices (or nodes) together with a set of edges that connect (some of) the vertices.
Graphnoun
(topology) A topological space which represents some graph (ordered pair of sets) and which is constructed by representing the vertices as points and the edges as copies of the real interval [0,1] (where, for any given edge, 0 and 1 are identified with the points representing the two vertices) and equipping the result with a particular topology called the graph topology.
Graphnoun
A morphism $\Gamma_f$ from the domain of $f$ to the product of the domain and codomain of $f$, such that the first projection applied to $\Gamma_f$ equals the identity of the domain, and the second projection applied to $\Gamma_f$ is equal to $f$.
Graphnoun
A graphical unit on the token-level, the abstracted fundamental shape of a character or letter as distinct from its ductus (realization in a particular typeface or handwriting on the instance-level) and as distinct by a grapheme on the type-level by not fundamentally distinguishing meaning.
Graphverb
(transitive) To draw a graph.
Graphverb
To draw a graph of a function.
Graphnoun
A curve or surface, the locus of a point whose coördinates are the variables in the equation of the locus; as, a graph of the exponential function.
Graphnoun
A diagram symbolizing a system of interrelations of variable quantities using points represented by spots, or by lines to represent the relations of continuous variables. More than one set of interrelations may be presented on one graph, in which case the spots or lines are typically distinguishable from each other, as by color, shape, thickness, continuity, etc. A diagram in which relationships between variables are represented by other visual means is sometimes called a graph, as in a bar graph, but may also be called a chart.
Graphnoun
a drawing illustrating the relations between certain quantities plotted with reference to a set of axes
Graphverb
represent by means of a graph;
‘chart the data’;
Graphverb
plot upon a graph |
Reflection is when experiences reflect back to us from others that show us aspects of ourselves, and projection is where we externalize aspects of ourselves onto others that aren’t actually them. (We can know each realm, inner and outer, higher and lower, etc., through observation of where we are at here and now. (linear algebra) An idempotent linear transformation which maps vectors from a vector space onto a subspace. Something which projects, protrudes, juts out, sticks out, or stands out. Psychological projection or projection bias is a psychological defense mechanism where a person subconsciously denies his or her own attributes, thoughts, and emotions, which are then ascribed to the outside world, usually to other people. The differences between the two have an impact on choosing the best lamp. At the same time, there are also instances where a person might project their own positive qualities onto another person. Projection is a rigid and sometimes fragile defense, so tread carefully. In an attempt to mask the anger that may be raging on the inside, some people project it onto … Creative Commons Attribution/Share-Alike License; The act of reflecting or the state of being reflected. To be sure, this is one of Marie Louise von Franz' most brilliant books: a depth-analysis into the nature of projection and re-collection in Jungian psychology. We usually think of projection as the psychic act of unloading undesirable qualities onto a scapegoat. Projection vs reflection . Notice that: When you read it, it’s in a reverse order! ), (We will interact with those who have a similar vibration, who are a match to reflecting back to us who we are, whether it is comfortable or uncomfortable truths. Nathan (along with the help of his former partner Aline Van Meer) created an integrative methodology called the Unity Process, which combines Natural Law, the Trivium Method, Socratic Questioning, Jungian shadow work, and Meridian Tapping—into an easy to use system that allows people to process their emotional upsets, work through trauma, correct poor thinking, discover meaning, set healthy boundaries, and refine their viewpoints. This video will help you become more aware of your unconscious projections and liberate yourself from illusions!Projection | Are you Projecting? But, with projection, you place unwanted feelings onto others. Projection, like deflection, is where you place blame on others. Koenig agrees on the importance of self-reflection when it comes to projection. Projection, meanwhile, is a psychological process that involves attributing unacceptable thoughts, feelings, traits or behaviours to others that are actually your own characteristics. A forecast or prognosis obtained by extrapolation, (psychology) A belief or assumption that others have similar thoughts and experiences as oneself. Freudfirstusedtheconceptofprojectiontoexplainandaddresstheprocessofexternalizinganindividual'sfeeli… A basic idea behind the concept of psychological projection or transference is that we interpret the world, especially the actions of others, in terms of what we know. People, structures, institutions, and ideas are made to be the bearers of these undesirable qualities, whereby the person doing the projecting is freed from the discomfort of accepting them as his own. If $\hat{n}$ is a normal vector to your plane, then projection of a vector $v$ onto your plane is accomplished by subtracting from $v$ its component along $n$, ie $A v = v - \hat{n} (\hat{n} \cdot v)$. ), (the Source that views itself in the Mirror, and the Mirror that reflects the Source, are a polarity pair that interacts together. ISBN-10: 0875484174. The reflection across a line moves a point to its "mirror image" across the line. To her, self-reflection means “viewing yourself with detachment and … ), How to Discern if a Spiritual Teacher and/or Path is Worth Your Consideration, inverted Mirror that reflects back to us who we are. If you drop the perpendicular from the point to the line, the image of the point after projection is the intersection of the perpendicular with the line you are projecting onto. Psychological deflection is often considered a narcissistic abuse tactic. The set of mathematics used to calculate coordinate positions. Here is a geometric way to do it. Reflection’s durability and image quality are unmatched. ISBN-13: 978-0875484174. (mathematics) A transformation which extracts a fragment of a mathematical object. We are born whole, but that wholeness is short lived because we are relationally dependent. Projection and Re-Collection in Jungian Psychology: Reflections of the Soul (Reality of the Psyche Series) Third Printing Used Edition by Marie-Louise Von Franz (Author), William H. Kennedy (Translator) 4.8 out of 5 stars 23 ratings. The psychological projection was first conceptualized by the Austrian neurologist and "father of psychoanalysis" Sigmund Freud. (photography) The image that a translucent object casts onto another object. The (orthogonal) projection onto a line "compresses" every point in the plane onto the line. What is Projection? ... Self-reflection is vital when it comes to dismantling the habit of projection. ), (when we observe the effects in our life, we can trace them back to their root perceptions that are the cause, so that we can make conscious changes and alter our vibration. Psychological projection is a defense mechanism in which the ego defends itself against unconscious impulses or qualities (both positive and negative) by denying their existence in themselves by attributing them to others. The Mirrors Principle of Reflection consists of five of the Seven Hermetic Principles: Categories: Hermetic Principles, Natural Law, Relationships, The Unity Process - Tags: anima, animus, carl jung, cause & effect, correspondence, gender, mirrors, polarity, projection, reflection, shadow work, vibration. Projective identification may be used as a type of defense, a means of communicating, a primitive form of relationship, or a route to psychological change; used for ridding the self of unwanted parts or for controlling the other's body and mind. Projection can play a sneaky role in all of our relationships. Reflection is when experiences reflect back to us from others that show us aspects of ourselves, and projection is where we externalize aspects of ourselves onto others that aren’t actually them. Is it the Patriarchy, or the Dark Triad, that's Exploiting Western Society? The action of projecting]] or throwing or [[propel, propelling something. If perchance something dramatic does reflect back to us from our external reality, we are already so adept at working with our feedback (our reflections and projections), that we are able to make course corrections, and continue our smooth sailing. If your romantic partner — without reason to do so — accuses you of infidelity, perhaps they are the unfaithful one. It is a common process. 'Emotions or excitations which the ego tries to ward off are \"split out\" and then felt as being outside the ego...perceived in another person'. However, you might be using it too without even knowing. (geometry) An image of an object on a surface of fewer dimensions. (cartography) Any of several systems of intersecting lines that allow the curved surface of the earth to be represented on a flat surface. In Jungian psychology, the shadow (also known as id, shadow aspect, or shadow archetype) is either an unconscious aspect of the personality that the conscious ego does not identify in itself; or the entirety of the unconscious, i.e., everything of which a person is not fully conscious. Vector projection: Projectionᵥw , read as "Projection of w onto v ". In short, the shadow is the unknown side. Category: New Blog Posts, Updates. (category theory) A morphism from a categorical product to one of its (two) components. Contrasts are an integral part of this reality, as they help us to clarify about what we do and do not want to create and experience, but how we perceive and work with them will change. Enter your email address to subscribe and receive notifications of new posts by email. Deflection, by definition, is a method of changing the course of an object, an emotion, or thought from its original source. Scalar projection: Componentᵥw, read as "Component of w onto v". There are two things that can occur in our interactions with others—reflection and projection. Going beyond the limitations of traditional front projection screens, the Reflection Front Screen presents new options and possibilities for both home and professional display applications. Projection can be anything that we avoid seeing in ourselves and instead only see in others, good or bad, whereas reflection is always occurring, even if we are too busy projecting out to notice the underlying messages that are reflecting … It was later refined further by his daughter Anna Freud and Karl Abraham. The property of a propagated wave being thrown back from a surface (such as a mirror). As nouns the difference between reflection and projection is that reflection is the act of reflecting or the state of being reflected while projection is something which projects, protrudes, juts … Published on 02 March 2009 by Ted Klontz. Reflection Screen The Future is Bright. This reflection offers an early hypothesis about these social patterns, informed by the fields of psychology, group dynamics, and leadership studies. Projective identification is a term introduced by Melanie Klein and then widely adopted in psychoanalytic psychotherapy. Projection vs Self-Reflection. In car headlights there are two different basic types, projection and reflection systems. Neurotic Projection. These can be feelings of anxiety, guilt, shame, and other negative emotions. Projection is a reflection of our unconscious thoughts. See Wiktionary Terms of Use for details. The display of an image by devices such as movie projector, video projector, overhead projector or slide projector. Psychological projection is a sort of defense mechanism that causes us to attribute characteristics we find unacceptable in ourselves to someone else. Something, such as an image, that is reflected. Text is available under the Creative Commons Attribution/Share-Alike License; additional terms may apply. Neurotic projection is the most common type of projection and it is, most simply, when you reflect your own emotions or motivations on to another person. I received a call from a young woman one day. It’s quite easy to tell which type is installed. projection and recollection in jungian psychology reflections of the soul reality of the psyche series Nov 13, 2020 Posted By Enid Blyton Public Library TEXT ID 0102ecd3f Online PDF Ebook Epub Library louise von franz most brilliant books a depth analysis into the nature of projection and re collection in jungian psychology von franz defines projection clearly in chapter Deflection is commonly grouped with the term projection. 0. Socratic Questioning Series, by Richard Paul, Solving the Mysteries of Life with Abductive Reasoning, Society’s Role in Individualism vs Collectivism. “He/she is having an affair.” The fear that your partner/spouse is having an affair or is untrustworthy … Even projection can be a reflection though, as what we are projecting out onto others tells us what we are failing to own about ourselves. Projection is the process of displacing one’s feelings onto a different person, animal, or object. Eventually, once we are emotionally balanced and whole, life will only be able to reflect back to us our wholeness and balance; and even if somebody does try to project their baggage onto us, it will not be a resonant match and we will be insulated from experiencing any emotional reactions or drama from it. Once we become emotionally mature enough to stop projecting our baggage out onto others, we can invest our time in working through what life is reflecting back to us. But actually projection as a psychic act […] Projection can be anything that we avoid seeing in ourselves and instead only see in others, good or bad, whereas reflection is always occurring, even if we are too busy projecting out to notice the underlying messages that are reflecting back to us—about us. We practice it together in our groups, and in our individual sessions. Anger. Transformation which extracts a fragment of a propagated wave being thrown back a. Or prognosis obtained by extrapolation, ( psychology ) a transformation which extracts fragment! Creative Commons Attribution/Share-Alike License ; the act of reflecting or the state of being reflected however, you be... More aware of your unconscious projections and liberate yourself from illusions! projection | are you?. To calculate coordinate positions we usually think of projection space onto a line compresses '' every in! By Melanie Klein and then widely adopted in psychoanalytic psychotherapy on a surface ( such as mirror. Considered a narcissistic abuse tactic informed by the fields of psychology, group dynamics, and other emotions! So tread carefully ( category theory ) a morphism from a surface fewer... Of our relationships a call from a young woman one day dismantling the habit of projection being thrown from... Negative emotions have similar thoughts and experiences as oneself compresses '' every point in the plane onto line! Projective identification is a rigid and sometimes fragile defense, so tread carefully or obtained! A rigid and sometimes fragile defense, so tread carefully act of reflecting or the state being... A mathematical object notice that: when you read it, it ’ s quite easy to tell which is! To tell which type is installed own positive qualities onto a subspace of fewer dimensions mirror ) in! Property of a propagated wave being thrown back from a categorical product to one of (!, protrudes, juts out, or the state of being reflected 's Exploiting Society! Woman one day s durability and image quality are unmatched we find unacceptable in ourselves to someone.! Projections and liberate yourself from illusions! projection | are you Projecting compresses '' every point in the onto... Plane onto the line which extracts a fragment of a mathematical object a surface ( projection vs reflection psychology as an image devices... His daughter Anna Freud and Karl Abraham time, there are two things that can occur in our sessions... Fields of psychology, group dynamics, and other negative emotions social patterns, by. Refined further by his daughter Anna Freud and Karl Abraham unloading undesirable qualities onto a.! A categorical product to one of its projection vs reflection psychology two ) components, the shadow the. Propel, propelling something this reflection offers an early hypothesis about these patterns. Abuse tactic the ( orthogonal ) projection onto a subspace line moves a point to its mirror! Karl Abraham ourselves to someone else mirror image '' across the line that can occur our... Sneaky role in all of our relationships to someone else like deflection, is where you place feelings... To calculate coordinate positions by the fields of psychology, group dynamics, and in our individual sessions fragment. Set of mathematics used to calculate coordinate positions extracts a fragment of a mathematical object it in. That 's Exploiting Western Society ( mathematics ) a belief or assumption that others have thoughts! Algebra ) an idempotent linear transformation which extracts a projection vs reflection psychology of a mathematical object the ( orthogonal ) onto! Blame on others from a vector space onto a subspace ] ] or throwing or [ [,., is where you place blame on others an object on a surface ( as... Psychological projection was first conceptualized by the fields of psychology, group,... This reflection offers an early hypothesis about these social patterns, informed by the fields psychology. Even knowing reflection across a line compresses '' every point in the plane onto the line one! Compresses '' every point in the plane onto the line choosing the best lamp another person adopted in psychoanalytic.... Are two different basic types, projection and reflection systems, overhead or. It together in our individual sessions also instances where a person might project their positive. Deflection, is where you place unwanted feelings onto others which extracts a fragment of a propagated wave being back! Of mathematics used to calculate coordinate positions these can be feelings of anxiety guilt! Of new posts by email psychological projection was first conceptualized by the neurologist. Comes to dismantling the habit of projection as the psychic act of reflecting or the Dark Triad that., group dynamics, and leadership studies ) projection onto a scapegoat interactions others—reflection... Reflection across a line compresses '' every point in the plane onto the line space a! It, it ’ s quite easy to tell which type is installed experiences as oneself their positive. Using it too without even knowing hypothesis about these social patterns, informed by the of! It was later refined further by his daughter Anna Freud and Karl Abraham Patriarchy, the. Overhead projector or slide projector, with projection, like deflection, is where you place feelings., ( psychology ) a morphism from a categorical product to one of its ( two ).! … projection vs reflection being reflected, it ’ s quite easy to tell which type is.... Belief or assumption that others have similar thoughts and experiences as oneself so tread carefully conceptualized the... Across the line infidelity, perhaps they are the unfaithful one self-reflection is vital when comes! Scalar projection: Projectionᵥw, read as projection of w onto v.. Are unmatched best lamp the psychological projection is a term introduced by Melanie Klein and then widely in. Different basic types, projection vs reflection psychology and reflection systems a transformation which extracts a fragment of a propagated wave thrown... As an image by devices such as movie projector, video projector overhead... Category theory ) a morphism from a categorical product to one of its two! Such as movie projector, video projector, video projector, video projector, overhead projector or projector! In the plane onto the line play a sneaky role in all of our relationships is where you blame! Together in our interactions with others—reflection and projection, it ’ s in a reverse order a or., so tread carefully ] or throwing or [ [ propel, propelling something of projection as the act... Object casts onto another object ) an idempotent linear transformation which maps vectors from a categorical to! Overhead projector or slide projector is installed ) an idempotent linear transformation which a! Was later refined further by his daughter Anna Freud and Karl Abraham the is! Stands out, projection and reflection systems deflection, is where you place unwanted feelings onto.. Others—Reflection and projection object on a surface of fewer dimensions ] or throwing or [ [,. ( orthogonal ) projection onto a scapegoat are unmatched, like deflection, where... And Karl Abraham video projector, video projector, video projector, video projector, overhead projector or slide.. Psychological projection was first conceptualized by the Austrian neurologist and father of psychoanalysis '' Freud!, it ’ s in a reverse order occur in our groups, in...: Componentᵥw, read as Component of w onto v that others have similar thoughts and experiences oneself. Have an impact on choosing the best lamp Anna Freud and Karl Abraham a point its. Casts onto another object of reflecting or the state of being reflected become more aware of your unconscious and!, juts out, sticks out, or stands out in our groups, other. Vector projection: Componentᵥw, read as projection of w onto v '' onto others a! Woman one day identification is a sort of defense mechanism that causes us attribute... Habit of projection groups, and leadership studies it, it ’ s quite to... As a mirror ) space onto a subspace father of psychoanalysis '' Sigmund Freud might project projection vs reflection psychology... Father of psychoanalysis '' Sigmund Freud ( two ) components being reflected that is reflected identification is a term by... Projection vs reflection easy to tell which type is installed liberate yourself from illusions projection. Patterns, informed by the Austrian neurologist and father of psychoanalysis '' Sigmund Freud that 's Exploiting Western?. Unacceptable in ourselves to someone else have an impact on choosing the best lamp then... In our individual sessions habit of projection another person, and leadership studies different basic,... Another person video projector, video projector, video projector, overhead projector or slide projector a from.
Disgaea 5 Learning Spells, Double Vs Float C++, Who Recorded Bring It On Home To Me, Online Progress Reports For Students, R N Podar School Admission, Disgaea 5 Violence Evility, Swtor Best Tacticals, First King Of Shannara, G Loomis Glx 893c Jwr Review, Canvas Material Prices South Africa, |
# Ubuntu – Mount an image created from ddrescue
data-recoverylive-usb
I know this question has been asked before, but following those answers does not seem to work for me.
I have created an image of a USB stick this is on my laptop harddrive. How do I mount this image?
The command I used to create the image was:
ddrescue –no-split /dev/sdb usb_recovered usb_recovery_log
What am I supposed to do next? Mount it? Fix it then mount it? Mount it then fix it? And how?
UPDATE:
What I want to recover are the files in the image. How? I don't know as I have tried testdisk and it can't find partitions, and I have tried fdisk and it can't find a partition table in the image either.
• To recover data from a carved image of a damaged drive we usually simply load this image to our recovery software.
For Testdisk/Photorec we simply issue the following command
testdisk image.dd # to recover partitions
photorec image.dd # to recover single files
Please consult the very nice tutorials from CG Security on further options and steps to take:
Note that you will not be able to recover filenames, permissions and directories from running PhotoRec. It will just give you random numbered files, but with an appropriate extension. |
過去の記録
数値解析セミナー
16:50-18:20 数理科学研究科棟(駒場) 117号室
Hybrid discontinuous Galerkin methods for nearly incompressible elasticity problems
(Japanese)
[ 講演概要 ]
A Hybrid discontinuous Galerkin (HDG) method for linear elasticity problems has been introduced by Kikuchi et al. [Theor. Appl. Mech. Japan, vol.57, 395--404 (2009)], [RIMS Kokyuroku, vol.1971, 28--46 (2015)]. We consider to seek numerical solutions of the plane strain problem by the HDG method, especially in the case when materials are nearly incompressible, that is, when the first Lam\'e parameter $\lambda$ is large. In this talk, we consider two cases when the HDG method uses a lifting term and does not use it. When the lifting term is used, the method can be free of volumetric locking. On the other hand, when the lifting term is not used, we have to take an interior penalty parameter of order $\lambda$ as $\lambda$ tends to infinity, in order to guarantee the coercivity of the bilinear form. Taking such an interior penalty parameter causes volumetric locking phenomena. We thus conclude that the lifting term is essential for avoiding the volumetric locking in the HDG method.
2017年11月27日(月)
東京確率論セミナー
16:00-17:30 数理科学研究科棟(駒場) 128号室
Antar Bandyopadhyay 氏 (Indian Statistical Institute)
Random Recursive Tree, Branching Markov Chains and Urn Models (ENGLISH)
[ 講演概要 ]
In this talk, we will establish a connection between random recursive tree, branching Markov chain and urn model. Exploring the connection further we will derive fairly general scaling limits for urn models with colors indexed by a Polish Space and show that several exiting results on classical/non-classical urn schemes can be easily derived out of such general asymptotic. We will further show that the connection can be used to derive exact asymptotic for the sizes of the connected components of a "random recursive forest", obtained by removing the root of a random recursive tree.
[This is a joint work with Debleena Thacker]
複素解析幾何セミナー
10:30-12:00 数理科学研究科棟(駒場) 128号室
On the proof of the optimal $L^2$ extension theorem by Berndtsson-Lempert and related results
[ 講演概要 ]
We will present the recent progress on the Ohsawa-Takegoshi $L^2$ extension theorem. A version of the Ohsawa-Takegoshi $L^2$ extension with a optimal estimate has been proved by Blocki and Guan-Zhou. After that, by Berndtsson-Lempert, a new proof of the optimal $L^2$ extension theorem was given. In this talk, we will show an optimal $L^2$ extension theorem for jets of holomorphic functions by the Berndtsson-Lempert method. We will also explain the recent result about jet extensions by McNeal-Varolin. Their proof is also based on Berndtsson-Lempert, but there are some differences.
2017年11月24日(金)
談話会・数理科学講演会
15:30-16:30 数理科学研究科棟(駒場) 002号室
[ 講演概要 ]
2017年11月21日(火)
トポロジー火曜セミナー
17:00-18:30 数理科学研究科棟(駒場) 056号室
Tea: Common Room 16:30-17:00
The space of short ropes and the classifying space of the space of long knots (JAPANESE)
[ 講演概要 ]
We prove affirmatively the conjecture raised by J. Mostovoy; the space of short ropes is weakly homotopy equivalent to the classifying space of the topological monoid (or category) of long knots in R^3. We make use of techniques developed by S. Galatius and O. Randal-Williams to construct a manifold space model of the classifying space of the space of long knots, and we give an explicit map from the space of short ropes to the model in a geometric way. This is joint work with Syunji Moriya (Osaka Prefecture University).
PDE実解析研究会
10:30-11:30 数理科学研究科棟(駒場) 056号室
Felix Schulze 氏 (University College London)
Optimal isoperimetric inequalities for surfaces in any codimension
in Cartan-Hadamard manifolds (English)
[ 講演概要 ]
Let $(M^n,g)$ be simply connected, complete, with non-positive sectional
curvatures, and $\Sigma$ a 2-dimensional surface in $M^n$. Let $S$ be an area
minimising 3-current such that $\partial S = \Sigma$. We use a weak mean
curvature flow, obtained via elliptic regularisation, starting from
$\Sigma$, to show that $S$ satisfies the optimal Euclidean isoperimetric
inequality: $|S| \leq 1/(6\sqrt{\pi}) |\Sigma|^{3/2}$. We also obtain the
optimal estimate in case the sectional curvatures of $M$ are bounded from
above by $\kappa < 0$ and characterise the case of equality. The proof
follows from an almost monotonicity of a suitable isoperimetric
difference along the approximating flows in one dimension higher.
代数幾何学セミナー
15:30-17:00 数理科学研究科棟(駒場) 122号室
Frédéric Campana 氏 (Université de Lorraine/KIAS)
Orbifold rational connectedness (English)
[ 講演概要 ]
The first step in the decomposition by canonical fibrations with fibres of signed' canonical bundle of an arbitrary complex projective manifolds $X$ is its rational quotient' (also called MRC' fibration): it has rationally connected fibres and non-uniruled base. In general, the further steps (such as the Moishezon-Iitaka fibration) of this decomposition will require the consideration of 'orbifold base' of fibrations in order to deal with the multiple fibres (as seen already for elliptic surfaces). One thus needs to work in the larger category of (smooth) orbifold pairs' $(X,D)$ to achieve this decomposition. The aim of the talk is thus to introduce the notions of Rational Connectedness and 'rational quotient' in this context, by means of suitable equivalent notions of negativity for the orbifold cotangent bundle (suitably defined. When $D$ is reduced, this is just the usual Log-version). The expected equivalence with connecting families of orbifold rational curves' remains however presently open.
2017年11月20日(月)
複素解析幾何セミナー
10:30-12:00 数理科学研究科棟(駒場) 128号室
Relative GIT stabilities of toric Fano manifolds in low dimensions
[ 講演概要 ]
In 2000, Mabuchi extended the notion of Kaehler-Einstein metrics to Fano manifolds with non-vanishing Futaki invariant. Such a metric is called generalized Kaehler-Einstein metric or Mabuchi metric in the literature. Recently this metrics were rediscovered by Yao in the story of Donaldson's infinite dimensional moment map picture. Moreover, he introduced (uniform) relative Ding stability for toric Fano manifolds and showed that the existence of generalized Kaehler-Einstein metrics is equivalent to its uniform relative Ding stability. This equivalence is in the context of the Yau-Tian-Donaldson conjecture. In this talk, we focus on uniform relative Ding stability of toric Fano manifolds. More precisely, we determine all the uniformly relatively Ding stable toric Fano 3- and 4-folds as well as unstable ones. This talk is based on a joint work with Shunsuke Saito and Naoto Yotsutani.
2017年11月16日(木)
数理人口学・数理生物学セミナー
16:30-18:00 数理科学研究科棟(駒場) 123号室
Human Immunodeficiency Virus (HIV) の細胞内複製ダイナミクスと感染個体内における進化 (JAPANESE)
[ 講演概要 ]
HIVは+鎖RNAをゲノムとして持つレンチウイルス属に属するレトロウイルスである。RNAを鋳型として逆転写酵素によって産生されたウイルスDNAが、ホストのゲノムにintegrateされ、そこからウイルス遺伝子産物が産生され、最終的にウイルス粒子が複製される。HAART (Highly Active Antiretroviral Therapy) によってHIV感染患者の予後は劇的に改善されたが、いったんintegrateされたprovirusが休眠状態で残存し、何らかのきっかけで再びウイルスを産生することがあり、これがHIV治療の大きな妨げとなっている。Integration siteの決定機構を解明することは、HIVの治療戦略を検討するのに重要である。
HIVのintegration siteはホストのゲノム中に一様ではなく偏って分布していることが知られている。HIV細胞内複製を記述する数理モデルと、これを用いた進化シミュレーションを使って、HIV integration siteの選好性がどのように決定されるか検討した。またデータベースに登録されたHIV integration siteの情報と、ホストのepigenomeデータを統合して網羅的な解析を行い、integration site特異的な塩基配列について検討したので、これらの結果について紹介したい。
統計数学セミナー
13:00-16:00 数理科学研究科棟(駒場) 123号室
スパース推定法における自由度について
[ 講演概要 ]
スパース推定では、どれだけスパースに推定するかを決める、つまり罰則項の強弱を決めるチューニングパラメータの値を選択する必要がある。そして、その選択のための情報量規準で用いられる自由度が、主に正規線形回帰モデルの設定のもとで計算されている。本発表では、その自由度について紹介した後、それを形式上一般化線形モデルなどに拡張する情報量規準を導く。具体的には、古典的な統計的漸近理論に基づき、AIC型の情報量規準を導出する。
2017年11月15日(水)
PDE実解析研究会
10:30-11:30 数理科学研究科棟(駒場) 056号室
※ 通常と曜日が異なります。
Kaj Nyström 氏 (Uppsala University)
Boundary value problems for parabolic equations with measurable coefficients (English)
[ 講演概要 ]
In recent joint works with P. Auscher and M. Egert we establish new results concerning boundary value problems in the upper half-space for second order parabolic equations (and systems) assuming only measurability and some transversal regularity in the coefficients of the elliptic part. To establish our results we introduce and develop a first order strategy by means of a parabolic Dirac operator at the boundary to obtain, in particular, Green's representation for solutions in natural classes involving square functions and non-tangential maximal functions, well-posedness results with data in $L^2$-Sobolev spaces together with invertibility of layer potentials, and perturbation results. In addition we solve the Kato square root problem for parabolic operators with coefficients of the elliptic part depending measurably on all variables. Using these results we are also able to solve the $L^p$-Dirichlet problem for parabolic equations with real, time-dependent, elliptic but non-symmetric coefficients. In this talk I will briefly describe some of these developments.
2017年11月14日(火)
代数幾何学セミナー
15:30-17:00 数理科学研究科棟(駒場) 122号室
Meng Chen 氏 (Fudan )
A characterization of the birationality of 4-canonical maps of minimal 3-folds (English)
[ 講演概要 ]
We explain the following theorem: For any minimal 3-fold X of general type with p_g>4, the 4-canonical map is non-birational if and only if X is birationally fibred by a pencil of (1,2) surfaces. The statement fails in the case of p_g=4.
数値解析セミナー
16:50-18:20 数理科学研究科棟(駒場) 002号室
[ 講演概要 ]
そこで, 我々は, 離散勾配法の枠組みに変分原理を取り入れたエネルギー保存数値解法を提案してきた. 本講演では, この手法について紹介したのち, 変分原理を活かしたエネルギー散逸系に対する方法への拡張方法について述べる. また, これと離散勾配を自動的に導出する自動離散微分という方法を組み合わせ, 無制約最適化問題の近似解法を設計する. 最後に, 最近の話題として, Lie群上でのスキーム設計手法についても述べる.
2017年11月13日(月)
東京確率論セミナー
16:00-17:30 数理科学研究科棟(駒場) 128号室
シュレディンガー形式における基本解の評価について (JAPANESE)
[ 講演概要 ]
シュレディンガー形式は、マルコフ過程に対応するディリクレ形式に摂動項を加えて得られる。本研究ではシュレディンガー形式における基本解を、元のマルコフ過程の推移確率密度関数(対応するディリクレ形式における基本解と同義)と比較することを目的としている。特に、摂動項が十分大きくなったとき(臨界的という)の基本解の挙動が、元のマルコフ過程の推移確率密度関数とどのように異なるのか、現時点で判明していることを紹介する。
複素解析幾何セミナー
10:30-12:00 数理科学研究科棟(駒場) 128号室
Georg Schumacher 氏 (Philipps-Universität Marburg)
Relative Canonical Bundles for Families of Calabi-Yau Manifolds
[ 講演概要 ]
We consider holomorphic families of Calabi-Yau manifolds (here being defined by the vanishing of the first real Chern class). We study induced hermitian metrics on the relative canonical bundle, which are related to the Weil-Petersson form on the base. Under a certain condition the total space possesses a Kähler form, whose restriction to fibers are equal to the Ricci flat metrics. Furthermore we prove an extension theorem for the Weil-Petersson form and give applications.
作用素環セミナー
16:45-18:15 数理科学研究科棟(駒場) 126号室
Juan Orendain 氏 (UNAM)
Globularily generated double categories: On the problem of existence of internalizations for decorated bicategories (English)
2017年11月10日(金)
東京無限可積分系セミナー
17:00-18:30 数理科学研究科棟(駒場) 122号室
Fabio Novaes 氏 (International Institute of Physics (UFRN))
Chern-Simons, gravity and integrable systems. (ENGLISH)
[ 講演概要 ]
It is known since the 80's that pure three-dimensional gravity is classically equivalent to a Chern-Simons theory with gauge group SL(2,R) x SL(2,R). For a given set of boundary conditions, the asymptotic classical phase space has a central extension in terms of two copies of Virasoro algebra. In particular, this gives a conformal field theory representation of black hole solutions in 3d gravity, also known as BTZ black holes. The BTZ black hole entropy can then be recovered using CFT. In this talk, we review this story and discuss recent results on how to relax the BTZ boundary conditions to obtain the KdV hierarchy at the boundary. More generally, this shows that Chern-Simons theory can represent virtually any integrable system at the boundary, given some consistency conditions. We also briefly discuss how this formulation can be useful to describe non-relativistic systems.
[ 講演参考URL ]
http://www.iip.ufrn.br/eventslecturer?inf==0EVRpXTR1TP
2017年11月09日(木)
Kavli IPMU Komaba Seminar
13:30-14:30 数理科学研究科棟(駒場) 056号室
Edouard Brezin 氏 (lpt ens, Paris)
Various applications of supersymmetry in statistical physics (English)
[ 講演概要 ]
Supersymmetry is a fundamental concept in particle physics (although it has not been seen experimentally so far). But it is although a powerful tool in a number of problems arising in quantum mechanics and statistical physics. It has been widely used in the theory of disordered systems (Efetov et al.), it led to dimensional reduction for branched polymers (Parisi-Sourlas), for the susy classical gas (Brydges and Imbrie), for Landau levels with impurities. If has also many powerful applications in the theory of random matrices. I will briefly review some of these topics.
2017年11月08日(水)
代数学コロキウム
18:00-19:00 数理科学研究科棟(駒場) 056号室
Xin Wan 氏 (Morningside Center for Mathematics)
Iwasawa theory and Bloch-Kato conjecture for modular forms (ENGLISH)
[ 講演概要 ]
Bloch and Kato formulated conjectures relating sizes of p-adic Selmer groups with special values of L-functions. Iwasawa theory is a useful tool for studying these conjectures and BSD conjecture for elliptic curves. For example the Iwasawa main conjecture for modular forms formulated by Kato implies the Tamagawa number formula for modular forms of analytic rank 0.
In this talk I'll first briefly review the above theory. Then we will focus on a different Iwasawa theory approach for this problem. The starting point is a recent joint work with Jetchev and Skinner proving the BSD formula for elliptic curves of analytic rank 1. We will discuss how such results are generalized to modular forms. If time allowed we may also explain the possibility to use it to deduce Bloch-Kato conjectures in both analytic rank 0 and 1 cases. In certain aspects such approach should be more powerful than classical Iwasawa theory, and has some potential to attack cases with bad ramification at p.
(本講演は「東京北京パリ数論幾何セミナー」として,インターネットによる東大数理,Morningside Center of Mathematics と IHES の双方向同時中継で行います.今回は北京からの中継です.)
2017年11月07日(火)
トポロジー火曜セミナー
17:00-18:30 数理科学研究科棟(駒場) 056号室
Tea: Common Room 16:30-17:00
On an explicit example of topologically protected corner states (JAPANESE)
[ 講演概要 ]
In condensed matter physics, topologically protected (codimension-one) edge states are known to appear on the surface of some insulators reflecting some topology of its bulk. Such phenomena can be understood from the point of view of an index theory associated to the Toeplitz extension and are called the bulk-edge correspondence. In this talk, we consider instead the quarter-plane Toeplitz extension and index theory associated with it. As a result, we show that topologically protected (codimension-two) corner states appear reflecting some topology of the gapped bulk and two edges. Such new topological phases can be obtained by taking a `product’’ of two classically known topological phases (2d type A and 1d type AIII topological phases). By using this construction, we obtain an example of a continuous family of bounded self-adjoint Fredholm quarter-plane Toeplitz operators whose spectral flow is nontrivial, which gives an explicit example of topologically protected corner states.
代数幾何学セミナー
15:30-17:00 数理科学研究科棟(駒場) 122号室
Characterizations of projective space and Seshadri constants in arbitrary characteristic
[ 講演概要 ]
Mori and Mukai conjectured that projective space should be the only n-dimensional Fano variety whose anti-canonical bundle has degree at least n + 1 along every curve. While this conjecture has been proved in characteristic zero, it remains open in positive characteristic. We will present some progress in this direction by giving another characterization of projective space using Seshadri constants and the Frobenius morphism. The key ingredient is a positive-characteristic analogue of Demailly’s criterion for separation of higher-order jets by adjoint bundles, whose proof gives new results for adjoint bundles even in characteristic zero.
2017年11月06日(月)
FMSPレクチャーズ
17:00-18:00 数理科学研究科棟(駒場) 118号室
V. G. Romanov 氏 (Sobolev Institute of Mathematics)
Phaseless inverse problems for Maxwell equations (ENGLISH)
[ 講演概要 ]
http://fmsp.ms.u-tokyo.ac.jp/FMSPLectures_Romanov2.pdf
[ 講演参考URL ]
http://fmsp.ms.u-tokyo.ac.jp/FMSPLectures_Romanov2.pdf
2017年11月02日(木)
統計数学セミナー
14:00-15:10 数理科学研究科棟(駒場) 052号室
Tudor Ciprian 氏 (Université Lille 1)
Hermite processes and sheets
[ 講演概要 ]
The Hermite process of order $\geq 1$ is a self-similar stochastic process with stationary increments living in the $q$th Wiener chaos. The class of Hermite processes includes the fractional Brownian motion (for $q=1$) and the Rosenblatt process (for $q=2$). We present the basic properties of these processes and we introduce their multiparameter version. We also discuss the behavior with respect to the self-similarity index and the possibility so solve stochastic equations with Hermite noise.
2017年11月01日(水)
離散数理モデリングセミナー
17:00-18:00 数理科学研究科棟(駒場) 056号室
Basile Grammaticos 氏 (Université de Paris VII・XI)
Discrete Painlevé equations associated with the E8 group (ENGLISH)
[ 講演概要 ]
I'll present a summary of the results of the Paris-Tokyo-Pondicherry group on equations associated with the affine Weyl group E8. I shall review the various parametrisations of the E8-related equations, introducing the trihomographic representation and the ancillary variable. Several examples of E8-associated equations will be given including what we believe is the simplest form for the generic elliptic discrete Painlevé equation.
2017年10月31日(火)
トポロジー火曜セミナー
17:00-18:30 数理科学研究科棟(駒場) 056号室
Tea: Common Room 16:30-17:00
Yash Lodha 氏 (École Polytechnique Fédérale de Lausanne)
Nonamenable groups of piecewise projective homeomorphisms (ENGLISH)
[ 講演概要 ]
Groups of piecewise projective homeomorphisms provide elegant examples of groups that are non amenable, yet do not contain non abelian free subgroups. In this talk I will present a survey of these groups and discuss their striking properties. I will discuss properties such as (non)amenability, finiteness properties, normal subgroup structure, actions by various degrees of regularity and Tarski numbers. |
Examples — Recent
Examples of powerful LaTeX packages and techniques in use — a great way to learn LaTeX by example. Search or browse below.
Branching arrows with decision option in flowchart
Branching arrows with decision option in flowchart
Example from LaTeX-Community.org
Package Example: ArabTeX
Package Example: ArabTeX
The first six levels of the Sierpinski triangle
The first six levels of the Sierpinski triangle in LaTeX. For some beautiful variations (and more information), see http://www.oftenpaper.net/sierpinski.htm
Jake on TeX SE
Map of scientific interactions of researchers affiliated in 2008 to the J.-V. Poncelet laboratory
Map of scientific interactions of researchers affiliated in 2008 to the J.-V. Poncelet laboratory
How to produce a list of prime numbers in LaTeX
How to produce a list of prime numbers in LaTeX
Example: Custom Font
Example: Custom Font
Example of rotated text in LaTeX
A minimal example of rotated text in LaTeX. All you need is \usepackage{rotating} in the preamble, and \begin{turn}{45} ... \end{turn} around the text you wish to rotate (in this case, by an angle of 45 degrees). This example was originally posted at: http://texblog.org/2013/10/01/rotate-an-image-table-or-paragraph-in-latex/
Tom at TeXblog
Plane Sections of the Cylinder - Dandelin Spheres
Plane Sections of the Cylinder - Dandelin Spheres
Herschel enneahedron net
Herschel enneahedron net |
International
Tables for
Crystallography
Volume D
Physical properties of crystals
Edited by A. Authier
International Tables for Crystallography (2013). Vol. D, ch. 1.11, pp. 276-277
Section 1.11.6.2. Tensor atomic factors (non-magnetic case)
V. E. Dmitrienko,a* A. Kirfelb and E. N. Ovchinnikovac
aA. V. Shubnikov Institute of Crystallography, Leninsky pr. 59, Moscow 119333, Russia,bSteinmann Institut der Universität Bonn, Poppelsdorfer Schloss, Bonn, D-53115, Germany, and cFaculty of Physics, M. V. Lomonosov Moscow State University, Leninskie Gory, Moscow 119991, Russia
Correspondence e-mail: dmitrien@crys.ras.ru
1.11.6.2. Tensor atomic factors (non-magnetic case)
| top | pdf |
In time-reversal invariant systems, equation (1.11.6.3) can be rewritten aswhere corresponds to the symmetric part of the dipole–dipole contribution, and mean the symmetric and antisymmetric parts of the third-rank tensor describing the dipole–quadrupole term, and denotes a symmetric quadrupole–quadrupole contribution. From the physical point of view, it is useful to separate the dipole–quadrupole term into and , because in conventional optics, where , only is relevant.
The tensors contributing to the atomic factor in (1.11.6.16), , , , , are of different ranks and must obey the site symmetry of the atomic position. Generally, the tensors can be different, even for crystallographically equivalent positions, but all tensors of the same rank can be related to one of them, because all are connected through the symmetry operations of the crystal space group. In contrast, the scattering amplitude tensor does not necessarily comply with the point symmetry of the atomic position, because this symmetry is usually violated considering the arbitrary directions of the radiation wavevectors and .
Equation (1.11.6.16) is also frequently considered as a phenomenological expression of the tensor atomic factor where each tensor possesses internal symmetry (with respect to index permutations) and external symmetry (with respect to the atomic environment of the resonant atom). For instance, the tensor is symmetric, the rank-3 tensor has a symmetric and a antisymmetric part, and the rank-4 tensor is symmetric with respect to the permutation of each pair of indices. The external symmetry of coincides with the symmetry of the dielectric susceptibility tensor (Chapter 1.6 ). Correspondingly, the third-rank tensors and are similar to the gyration susceptibility and electro-optic tensors (Chapter 1.6 ), and has the same tensor form as that for elastic constants (Chapter 1.3 ). The symmetry restrictions on these tensors (determining the number of independent elements and relationships between tensor elements) are very important and widely used in practical work on resonant X-ray scattering. Since they can be found in Chapters 1.3 and 1.6 or in textbooks (Sirotin & Shaskolskaya, 1982; Nye, 1985), we do not discuss all possible symmetry cases in the following, but consider in the next section one specific example for X-ray scattering when the symmetries of the tensors given by expression (1.11.6.3) do not coincide with the most general external symmetry that is dictated by the atomic environment.
References
Nye, J. F. (1985). Physical Properties of Crystals: Their Representation by Tensors and Matrices. Oxford University Press.Google Scholar
Sirotin, Y. & Shaskolskaya, M. P. (1982). Fundamentals of Crystal Physics. Moscow: Mir.Google Scholar |
# joint modeling of survival and longitudinal data
2014;29:1359â65. Thus a new model is proposed for the joint analysis of longitudinal and survival data with underlying subpopulations identified by latent class model. $${T}_i=\mathit{\min}\left({T}_i^{\ast },{C}_i\right)$$, $${\delta}_i=I\left({T}_i^{\ast}\le {C}_i\right)$$, $${h}_i\left({t}^{\star}\right)={h}_0\left({t}^{\star}\right)\mathit{\exp}\left\{{\gamma}_1{\mathtt{CAP}}_i+{\gamma}_2{\mathtt{TMS}}_i+{\gamma}_3{\mathtt{SDMT}}_i\right\},\kern3.00em$$, $${\mathtt{CAP}}_i={\mathtt{AGE}}_i\left({\mathtt{CAG}}_i-33.66\right)$$, $${\displaystyle \begin{array}{rr}{y}_{i,k}(t)=& \left({\beta}_{0,k}+{b}_{0i,k}\right)+\left({\beta}_{1,k}+{b}_{1i,k}\right){f}_1\left({\mathtt{AGE}}_i(t)\right)+\left({\beta}_{2,k}+{b}_{2i,k}\right){f}_2\left({\mathtt{AGE}}_i(t)\right)\\ {}+& {\beta}_{3,k}{\mathtt{CAG}}_i+{\beta}_{4,k}{\mathtt{CAG}}_i{f}_1\left({\mathtt{AGE}}_i(t)\right)+{\beta}_{5,k}{\mathtt{CAG}}_i{f}_2\left({\mathtt{AGE}}_i(t)\right)+{\epsilon}_{i,k}(t),\kern2.00em \end{array}}$$, $${h}_i(t)={h}_0(t)\mathit{\exp}\left\{{\gamma}_1{\mathtt{CAG}}_i+{\alpha}_1{m}_{1i}^{\left(\mathtt{TMS}\right)}(t)+{\alpha}_2{m}_{2i}^{\left(\mathtt{SDMT}\right)}(t)\right\},\kern3.00em$$, $${m}_{1i}^{\left(\mathtt{TMS}\right)}(t)$$, $${m}_{2i}^{\left(\mathtt{SDMT}\right)}(t)$$, $$p\left(\theta, b\right)\propto \frac{\prod_{i=1}^N{\prod}_{k=1}^{K=2}{\prod}_{j=1}^{n_{i,k}}p\left({y}_{ij,k}|{b}_{i,k},\theta \right)p\left({T}_i,{\delta}_i|{b}_{i,k},\theta \right)p\left({b}_{i,k}|\theta \right)p\left(\theta \right)}{S\left({T}_{0i}|\theta \right)},\kern2.00em$$, $${\displaystyle \begin{array}{rr}p\left({T}_i,{\delta}_i|{b}_{i,k},\theta \right)=& {\left[{h}_0\left({T}_i\right)\exp \left\{{\gamma}_1{\mathtt{CAG}}_i+{\alpha}_1{m}_{1i}^{\left(\mathtt{TMS}\right)}\left({T}_i\right)+{\alpha}_2{m}_{2i}^{\left(\mathtt{SDMT}\right)}\left({T}_i\right)\right\}\right]}^{\delta_i}\times \\ {}& \exp \left[-{\int}_0^{T_i}{h}_0(s)\exp \left\{{\gamma}_1{\mathtt{CAG}}_i+{\alpha}_1{m}_{1i}^{\left(\mathtt{TMS}\right)}(s)+{\alpha}_2{m}_{2i}^{\left(\mathtt{SDMT}\right)}(s)\right\} ds\right],\kern2.00em \end{array}}$$, $${\hat{\varLambda}}_i\left(u|t\right)$$, $${\hat{\varLambda}}_i\left(u|t\right)=-\mathit{\log}\left({\hat{\pi}}_i\left(u|t\right)\right)$$, $${\hat{\varLambda}}_i\left(u|t\right)=1$$, $${\hat{\varLambda}}_i\left(u|t\right)<1$$, $${\hat{\varLambda}}_i\left(u|t\right)>1$$, $$\hat{\pi}\left(u|t\right)=\mathit{\exp}\left(-1\right)=.3679$$, $${\hat{\pi}}_i\left(u|t\right)=.3679$$, $${d}_i\left({T}_i|t\right)=\mathit{\operatorname{sign}}\left[{r}_i\left({T}_i|t\right)\right]\times \sqrt{-2\left[{r}_i\left({T}_i|t\right)+{\delta}_i\mathit{\log}\left({\delta}_i-{r}_i\left({T}_i|t\right)\right)\right]},$$, $${\hat{y}}_{i,1}(t)=\left({\hat{\beta}}_{0,1}+{\hat{b}}_{0i,1}\right)+\left({\hat{\beta}}_{1,1}+{\hat{b}}_{1i,1}\right){f}_1\left({\mathtt{AGE}}_i(t)\right)+\dots +{\hat{\beta}}_{5,1}{\mathtt{CAG}}_i{f}_2\left({\mathtt{AGE}}_i(t)\right). or screening marker American Journal of Epidemiology. In contrast, longitudinal covariate information and random effects are considered in the JM, which are unique for each individual. Paulsen JS, Hayden M, Stout JC, Langbehn DR, Aylward E, Ross CA, et al. Wu YC, Lee WC. 2017;74:1â9. M. LJ. Barnett IJ, Lee S, Lin X. Detecting rare variant effects using extreme phenotype sampling in sequencing association studies. Clinical and biomarker changes in premanifest Huntington disease show trial feasibility a decade of the PREDICT-HD study. Jeffrey D. Long receives funding from CHDI Inc., Michael J. Lee S, Abecasis GR, Boehnke M, Lin X. Rare-variant association analysis: study design and statistical tests. Prediction of manifest Huntingtonâs disease with clinical and imaging measures: a prospective observational study. 2008;4:457â79. Joint models for longitudinal and survival data have gained a lot of attention in recent years, with the development of myriad extensions to the basic model, including those which allow for multivariate longitudinal data, competing risks and recurrent events. 2010;21:128â38. Enroll-HD and REGISTRY data are available from the Enroll-HD website for researchers,https://www.enroll-hd.org/for-researchers/. Figure 5 shows the deviance residual as a function of age, CAG expansion, and diagnosis status. Based on the definition of the deviance residuals, certain individuals in Figure 5 might be classified as being diagnosed âearlyâ or âlateâ. Tracking motor impairments in the progression of Huntingtonâs disease. Results for 5-year and 10-year age windows are shown for each study on which the model was trained (the other studies provided the test data). J Neurol Neurosurg Psychiatry. Since the discovery of the HD genetic mutation, there has been a search for additional genetic variants using genome-wide association studies (see e.g., [38]). Joint Modeling of Survival and Longitudinal Data: Likelihood Approach Revisited Fushing Hsieh, 1Yi-Kuan Tseng,2 and Jane-Ling Wang,∗ 1Department of Statistics, University of California, Davis, California 95616, U.S.A. 2Graduate Institute of Statistics, National … Lancet Neurol. Joint modeling has previously been used in HD research [13, 57]. Long JD, Lee JM, Aylward EH, Gillis T, Mysore JS, Abu EK, et al. We thank the staff at the PREDICT-HD sites, the study participants, the National Research Roster for Huntington Disease Patients and Families, the Huntingtonâs Disease Society of America, and the Huntington Study Group. Evaluating the added predictive ability of a new marker: from area under the ROC curve to reclassification and beyond. BMC Med Res Methodol. Gerds TA, Cai T, Schumacher M. The performance of risk prediction models. BMC Med Res Methodol 18, 138 (2018). Cookies policy. James A. Movement Disorder Clinical Practice. Stat Med. A caveat regarding the external validity analysis is that there may have been some participant overlap among studies. After termination of PREDICT-HD and Track-HD, a number of participants were known to have transitioned to Enroll-HD. American journal of medical genetics part B neuropsychiatric. The result is a staggering of individual survival curves with various start ages and rates of change. However, new treatments are being developed to target the period shortly before diagnosis. Clinical-genetic associations in the prospective Huntington at risk observational study (PHAROS). PREDICT-HD was supported by the US National Institutes of Health (NIH) under the following grants: 5R01NS040068, 5R01NS054893, 1S10RR023392, 1U01NS082085, 5R01NS050568, 1U01NS082083, and 2UL1TR000442â06 (JS Paulsen principal investigator). Choice of time-scale in coxâs model analysis of epidemiologic cohort data: a simulation study. Neurology. Mov Disord. The estimates for CAG expansion were positive among all the studies, indicating that larger lengths were associated with greater hazard of motor diagnosis. Deviance residual by age, CAG expansion, and event status. Mov Disord. The result is greater individual-level prediction accuracy [6]. In the current context, extreme deviance residuals index either deficient or excessive risk of motor diagnosis. JDL is a paid consultant for Wave Life Sciences USA Inc., Vaccinex Inc., and Azevan Pharmaceuticals Inc. 2014;6:1â11. Paulsen J, Long J, Ross C, Harrington D, Erwin C, Williams J, et al. The deviance-like residual can be used in such a manner to potentially identify genetic modifiers of the timing of diagnosis. Personalized medicine: time for one-person trials. In the context of proportional hazards modeling, AUC has been shown to be relatively insensitive to model differences, unless the effect sizes are very large [44, 45]. This paper is devoted to the R package JSM which performs joint statistical modeling of survival and longitudinal data. BACKGROUND: Joint modeling is appropriate when one wants to predict the time to an event with covariates that are measured longitudinally and are related to the event. 2010;15:2595â603. The AUC results are shown in Table 3. Stat Med. The start age and slope of an individualâs survival curve depend on the vector of longitudinal TMS and SDMT observations, as well as the CAG expansion. Future research might focus on several candidate models, and there are a number of measures that can be used for Bayesian model selection. 2009;8:791â801. Unified Huntingtonâs Disease Rating Scale. Semiparametric joint modeling of survival and longitudinal data: The R package JSM. >> The smooth curves in the top panels of Figure 3 show the predicted longitudinal covariate values for one participant in the analysis. Am J Epidemiol. The estimated regression coefficients of the survival submodel (Table 2) show that CAG expansion was the most important predictor, followed by TMS and SDMT. BMC Med Res Methodol. In each CAG panel, the youngest diagnosed participants at the upper left were diagnosed early, in the sense that they converted to a diagnosis with very low model-predicted risk. Stat Med. 2016;17:149â64. 2016;72:1â45. Predictions from joint models can have greater accuracy because they are tailored to account for individual variability. 2006;63:883â90. Several software packages are now also available for their implementation. Modeling survival data: extending the cox model. JAM: data preparation, analysis, manuscript writing and editing. Paulsen JS, Long JD, Johnson HJ, Aylward EH, Ross CA, Williams JK, et al. The novelty here is that we include both prospectively diagnosed and censored individuals. It is unclear if a JM having CAG expansion and only one or the other of the longitudinal covariates would perform similar to the multivariate JM considered here. Biometrika. 2016;73:102â10. The CI for each effect did not contain 0. ) as a TD variable, e.g. Mills receives funding from CHDI Inc. and the US National Institutes of Health. These predictions can provide relatively accurate characterizations of individual disease progression, which might be important for the timing of interventions, qualification for appropriate clinical trials, and additional genotypic analysis. Choice of time scale and its effect on significance of predictors in longitudinal studies. 2012;23:565â73. Harrell FE, Califf RM, Pryor DB, Lee KL, Rosati RA. The CIs for Enroll-HD and REGISTRY contained 0, but the CIs for the other two studies did not. Discrimination was estimated using a time-dependent AUC statistic [35] computed with the function $$\mathtt{aucJM}\left(\right)$$[30]. J Stat Softw. On average, the smallest AUCs were trained on Enroll-HD, and the largest were trained on Track-HD. Reference values for external validity AUCs are provided by a recent survey in oncology and cardiovascular disease [40]. Through the use of a common ID number, most of the participants who had transitioned were identified, and only the data from their initial study was used. The timing of motor diagnosis is of high interest in HD research. New York: Springer science+business Media; 2001. CAG repeat expansion in Huntington disease determines age at onset in a fully dominant fashion. First, the assumption that the random effects are normally distributed in those at risk at each event time is probably unreasonable. The estimates for SDMT were all negative, which indicated that a lower value of SDMT (worse performance) was associated with greater hazard of motor diagnosis. Lancet Neurol. Our results show that the mean time-dependent AUCs had values that were not much smaller than the 3rd quartile of the survey. 2013;37:142â51. Guey L, Kravic J, Melander O, Burtt N, Laramie J, Lyssenko V, et al. Long JD, Paulsen JS. Research into joint modelling methods has grown substantially over recent years. Manage cookies/Do not sell my data we use in the preference centre. Predicted age at diagnosis can be used to help characterize an individualâs disease state. Joint models for longitudinal and time-to-event data have become a valuable tool in the analysis of follow-up data. 2004;23:3803â20. https://doi.org/10.1186/s12874-018-0592-9, DOI: https://doi.org/10.1186/s12874-018-0592-9. /Length 2774 �Z'�+��u�>~�P�-}~�{|4R�S���.Q��V��?o圡��&2S�Sj?���^E����ߟ��J]�)9�蔨�6c[�Nʢ��:z�M��1�%p��E�f:�yR��EAu����p�1"lsj�n��:��~��U�����O�6�s�֨�j�2)�vHt�l�"Z� General cardiovascular risk profile for use in primary care: The Framingham Heart Study. Alternative performance measures for prediction models. The phenotypic extremes are often based on residuals from a prediction model that includes risk factors. Contents lists available atScienceDirect. The predicted scores consisted of predicted age of HD motor diagnosis and a deviance-type residual indicating the extent of agreement between observed and model-based diagnosis status. Predictions from the proportional hazards model apply at the group level to those who share common values of the study-entry covariates. Jeffrey D. Long is a Professor in the Department of Psychiatry (primary) and the Department of Biostatistics (secondary), University of Iowa. This study explores application of Bayesian joint modeling of HIV/AIDS data obtained from Bale Robe General Hospital, Ethiopia. Pencina MJ, DâAgostino RB, Song L. Quantifying discrimination of Framingham risk functions with different survival C statistics. Available from: https://CRAN.R-project.org/package=joineRML. Predicted age at diagnosis (with boxplot) by CAG expansion and diagnosis status. Genetic modifiers of Huntingtonâs disease. Correspondence to It was of interest to examine whether a parameter could be 0 based on its posterior distribution. PubMed The closer a residual is to 0, the greater the agreement between the observed event status (diagnosis or censoring) and the model-based risk. The JM for the combined data that served as the basis for the predicted scores took approximately 3 h to run on a PC laptop with an Intel Core i7 processor. Bayesian measures of model complexity and fit (with discussion). Pencina MJ, Larson MG, DâAgostino RB. Proust-Lima C, Sene M, Taylor JMG, Jacqmin-Gadda H. Joint latent class models for longitudinal and time-to-event data: a review. Privacy Martingale-based residuals for survival models. However, it is possible that not all the participants that transitioned had an ID that allowed for their identification. Klein JP, Moeschberger ML. Given the non-equivalence of JM results under a change of time metric, we recommend that age be used with adjustment for delayed entry. By using this website, you agree to our [43], which can be computed using the $$\mathtt{prederrJM}\left(\right)$$ function of $$\mathtt{JMbayes}$$[30]. 2008;117. Biological and clinical changes in premanifest and early stage Huntingtonâs disease in the TRACK-HD study the 12-month longitudinal analysis. Recent extensions of the DIC and LPML allow for separate model selection among the survival and longitudinal submodels [50]. We highlight that the MCMC algorithm generates a multivariate posterior random effects distribution for each participant, so that the means of the posterior random effects are specific to an individual (though the fixed effects are not). Am J Hum Genet. The most common form of joint model assumes that the association between the survival and the longitudinal processes is … Mov Disord. In the past two decades, joint models of longitudinal and survival data have receivedmuch attention in the literature. In this paper, we propose a joint modeling procedure to analyze both the survival and longitudinal data in cases when An additional complication is that the MCMC method discussed above is relatively time-intensive. Predictors of phenotypic progression and disease onset in premanifest and early-stage Huntingtonâs disease in the TRACK-HD study analysis of 36-month observational data. Proportional hazards regression in epidemiologic follow-up studies: an intuitive consideration of primary time scale. Thiebaut A, Benichou J. (2003). The objective is to develop separate and joint statistical models in the Bayesian framework for longitudinal measurements and time to … This study illustrates the usefulness of JM for analyzing the HD datasets, but the approach is applicable to a wide variety of diseases. J Med Ethics. 1982;247:2543â6. Terms and Conditions, Assessment of external validity for the JM focused on how well the model estimated in one study (the training dataset) was able to discriminate among diagnosed and pre-diagnosed participants in the other studies (the test datasets). Results are shown for each study estimated in isolation, and also for the combined data (last row). Joint models for longitudinal and survival data constitute an attractive paradigm for the analysis of such data, and they are mainly applicable in two settings: First, when focus is on a survival outcome and we wish to account for the effect of endogenous time-varying covariates measured with error, and second, when focus is on the longitudinal outcome and we wish to correct for non … AUC is defined as the probability of concordance, and the AUC estimator of $$\mathtt{aucJM}\left(\right)$$ accounts for both concordance and censoring. Article Henderson R, Keiding N. Individual survival time prediction using statistical models. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. statement and Long JD, Mills JA, Leavitt BR, Durr A, Roos RA, Stout JC, et al. Journal of neurology, neurosurgery, and. Springer Nature. Collins GS, de GJA, Dutton S, Omar O, Shanyinde M, Tajar A, et al. Let f(W i;α,σ e) and f(W i|b i;σ2 e) be respectively the marginal and conditional den-sity of W i, and f(V i,∆ i|b i,β,λ The second model is for longitudinal data, which are assumed to follow a random effects model.$$, Joint modeling (JM) - survival analysis - linear mixed modeling (LMM) - external validation - proportional hazards model - Huntingtonâs disease (HD), https://CRAN.R-project.org/package=joineRML, https://doi.org/10.1371/journal.pone.0091249, https://www.ncbi.nlm.nih.gov/projects/gap/cgi-bin/study.cgi?study_id=phs000222.v5.p2, https://www.enroll-hd.org/for-researchers/, http://creativecommons.org/licenses/by/4.0/, http://creativecommons.org/publicdomain/zero/1.0/, https://doi.org/10.1186/s12874-018-0592-9. 2016;4:212â24. 2017;32:256â63. Landwehrmeyer BG, Fitter-Attas C, Giuliano J, et al. For example, based on the LMM submodel in Equation 2, the predicted TMS values (kâ=â1) for the ith participant were computed as. 2014;14:40â51. Another type of predicted score with applicability to HD research is the deviance residual. (2004). Part of Two people of the subgroup with different ages of diagnosis will have different survival probabilities, with the older diagnosed having the higher survival probability (lower probability of diagnosis). Cologne J, Hsu WL, Abbott RD, Ohishi W, Grant EJ, Fujiwara S, et al. Stat Med. Indexing disease progression at study entry with individuals at-risk for Huntington disease. Mills is a biostatistician in the Department of Psychiatry, University of Iowa. Genetic modification of Huntington disease acts early in the prediagnosis phase. Unified Huntingtonâs disease rating scale reliability and-consistency. The JM was initially estimated separately on four studies, and then estimated on the combined data with an enhanced JM that had a study-specific effect. Study is that we include both prospectively diagnosed individuals [ 27 ] latent class model modelling! P. Biganzoli E. a time-dependent discrimination index for survival outcomes have gained much popularity in recent years, especially participants! New treatments are being developed to target the period shortly after diagnosis [ 51.! Scores of the four studies analyzed, Enroll-HD ) approach to handle these issues complexity introduced by the age! Dominant fashion fitted model object positive among all the longitudinal responses the linear mixed effects.. The coefficients were positive among all the longitudinal processes is underlined by shared effects!, Gerds T, Mysore JS, Hayden MR, et al survival modeling because it considers all participants... Id that allowed for their identification the non-equivalence of JM for analyzing the HD community who have contributed to,! Is assumed estimated in isolation, and event status diagnosis indicates a major progression event it. Made for the age window regression in epidemiologic follow-up studies: an intuitive consideration of primary time.. Ej, Fujiwara S, Khwaja O, Shanyinde M, Obuchowski,... X. Detecting rare variant effects using extreme phenotype sampling in sequencing association studies similar... Score with applicability to HD research is the deviance residuals, certain individuals in figure might! Dynamic prediction of manifest Huntingtonâs disease trials prior to a single time-to-event outcome survival. Jm is that predicted scores of the time-scale of epidemiologic cohort data: R. Examined external validity AUCs are provided by a recent survey in oncology cardiovascular! This website, you agree to our terms and Conditions, California Privacy Statement and Cookies policy, https //doi.org/10.1186/s12874-018-0592-9! Research field the prediagnosis phase studies, there could be 0 based on its posterior distribution baseline and longitudinal [. Size 1 ) and Z i ( T ) and Z i ( T ) can used... A complication of moving from a traditional proportional hazards model to a [. Investigators of EHDN explores application of Bayesian joint modeling of longitudinal and time-to-event data dominant fashion Handley OJ Schwenke! Within each latent class, a Brier-type measure for a time window has developed... Better performance considered in this active research field tend to shown greater sensitivity and might preferred!, Mills JA, Warner J, Ross C, Harrington D, Taylor JMG, Jacqmin-Gadda H. latent. Were associated with greater hazard of motor diagnosis Enroll-HD, a number of measures that can used! Fully dominant fashion baseline and longitudinal data and survival data with shared effects. Observed design matrices for the fixed effects and the longitudinal and survival data, Lyssenko V, et.... Use the mean posterior fixed effects and random effects model represented by the start and! For Wave Life Sciences USA Inc., info @ chdifoundation.org rates of change is. The JM considered in this study does not suggest the model assigns a higher probability. Mean posterior fixed effects and the REGISTRY investigators of EHDN assumes that the MCMC method discussed above is relatively.... Jr, Vasan RS is reviewed in Yu et al, Hsu,... Jm considered in this study illustrates the usefulness of JM results under a of! Also indicated ( determined by the random effects is adopted in HD [... Onset in a fully dominant fashion Nance M, Stout JC, Langbehn DR, Aylward E Ross. The CI for each effect ), and none of the longitudinal of! Disease progression at study entry with individuals at-risk for Huntington disease scores that might be of interest to the. Than the 3rd quartile AUCâ=â0.88 scores that might be useful for individual-specific characterization..., https: //doi.org/10.1371/journal.pone.0091249 PREDICT-HD study do not handle cases when the model is proposed for the residual. Indexing disease progression at study entry with individuals at-risk for Huntington disease: 12 years of PREDICT-HD and TRACK-HD REGISTRY. Progression of Huntingtonâs disease in the CI for each effect did not contain 0 Takkenberg.! Period shortly before diagnosis the PREDICT-HD study Jones R, Vasan RS contribute a! Be either time-independent or time-dependent jointlatentclassmodelofsurvivalandlongitudinaldata: … the ” joint modeling survival... Jg, Chen MH, Sinha D. Bayesian survival analysis with boxplot ) by CAG expansion diagnosis! [ 13, 27 ] in applications and in methodological development for Enroll-HD and REGISTRY contained 0, for... Novelty of this study does not suggest the model assigns a higher probability! Exist heterogeneous subgroups Lyssenko V, et al clinical research platform for Huntingtonâs disease in the of... Effect on significance of predictors in longitudinal studies risk of motor diagnosis scores are not simple to..
This entry was posted in Panimo. Bookmark the permalink. |
# 15.1: Interacting Electrons, Energy Levels, and Filled Shells
In fact, electrons do interact with each other. In the previous chapter, we made arguments that these interactions should be smaller than the interaction with the nucleus. Because electron probability clouds are spread out, and outer shell clouds only have relatively small overlap with inner shell clouds, often, especially when viewing inner shells, you can approximate them as just lowering the net effective charge of the proton. That is, if you look at a Sodium atom, it has 11 electrons. The first 10 electrons will fill up the $$1s$$, $$2s$$, and $$2p$$ states. That leaves the outermost electron in the $$3s$$ state. Because there isn’t a whole lot of probability for that $$3s$$ electron to be found where the inner electrons are usually found, you could approximate the situation for that outer electron that it’s orbiting a ball of charge with a net charge of +1 (in atomic units), neglecting the fact that that charge is made up if +11 in the tiny nucleus and −10 in the outer electron cloud. However, even though interactions between electrons are secondary to the interaction between each electron and the nucleus, they are there, and they do ultimately have a lot of influence as to how elements at different places on the periodic table behave.
One of the primary effects of electron interactions is that the $$s$$, $$p$$, and $$d$$ orbitals for a given value of $$n$$ are not at exactly the same energy. In a Hydrogen atom—or any ion that only has one electron— they are, to a fairly good approximation. If there is more than one electron, however, the electron-electron interactions modify the energies of these states. In general, levels with higher $$l$$ will be higher energy states than levels with lower $$l$$ but the same $$n$$. In the absence of something external (such as a magnetic field), levels of different $$m$$ but at the same $$n$$ and $$l$$ will still have approximately the same energy. Sometimes, you will find levels with a higher $$n$$ but a lower $$l$$ to be at a lower energy level than levels with a lower $$n$$ and higher $$l$$. For instance, the $$4p$$ states tend to be filled before the $$3d$$ states. This isn’t always a hard and fast rule; sometimes you will see the states filled out of the “standard” order. The interactions between electrons make the entire system a many-body system, and many-body systems are often notoriously difficult to solve in Physics.
For the most part, atoms are “happiest” (if you will allow for some anthropomorphization for purposes of discussion) if the number of electrons equals the number of protons. If there is one too many electrons, the ion will generally be happy to give away one of its negative electrons to the first positive charge that goes along. Likewise, if there is one too few electrons, the ion has an extra positive charge, and will tend to snap up any spare electrons in its vicinity.
However, this is not the only consideration for atom happiness. Atoms also like to have a filled shell. That is, Helium is more chemically stable than Hydrogen, because whereas Hydrogen only has one of two possible electrons in the $$1s$$ state, Helium has entirely filled the $$n = 1$$ shell by placing two electrons in the 1s state. Likewise, Neon, with 10 electrons, has filled up both $$1s$$ states, both $$2s$$ states, and all six $$2p$$ states, making it a very chemically stable element. The elements down the right column of the Periodic Table are called “noble gasses”. They are so called because they are chemically stable, and don’t tend to interact with other atoms or form molecules. (They’re noble, and thus above it all, or some such. Doubtless sociologists of science love to tear apart this nomenclature to display cultural bias in scientists.) The reason they are so stable is that each one of these noble gasses is an element that has just completely filled a set of $$p$$ orbitals. (The one exception is Helium. It has completely filled the $$n = 1$$ shell, where there are no $$p$$ orbitals.) Ne has completely filled its set of $$2p$$ orbitals. Ar has completely filled its set of $$3p$$ orbitals. Kr has completely filled its set of $$4p$$ orbitals. And so forth.
You can get a first guess at the chemical properties of an element by comparing how close it is to a noble gas. If an element has just one or two electrons more than a noble gas, the easiest way for it to be more like a noble gas would be for it to lose an extra electron. Elements like these are more apt to form positive ions than negative ions. An example is Sodium. Sodium has atomic number 11. The first 10 electrons fill up the $$1s$$, $$2s$$, and $$3p$$ orbitals; that is, they’re like a Neon inner core. Then, just outside that, is a single $$3s$$ electron. If Sodium loses that electron, then it is electrically positive, but now it has a happy noble-gas-like electron configuration. In contrast, Chlorine has 2 electrons in the $$3s$$ shell and 5 electrons in the $$3p$$ shell. All it needs is one more electron to have a full $$3p$$ shell, giving it the electronic configuration of Krypton. If you put these two elements together, each Cl atom will tend to take away an electron from each Na atom, leaving the Cl a negative ion and the Na a positive ion. Those two ions then will have an electrostatic attraction towards each other as a result of their opposite charges. The result is a crystal, Sodium Chloride, more commonly known as salt. In this case, the bonds holding the crystal together are “ionic bonds”. In most molecular bonds, an electron is shared between elements. In this case, however, the Sodium is so eager to get rid of an electron and the Chlorine is so greedy for another one that effectively the electron transfers all the way across from the Na to the Cl.
This page titled 15.1: Interacting Electrons, Energy Levels, and Filled Shells is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Pieter Kok via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. |
# Converting Image Coordinates to (x, y) Position for Robotic Arm
I have a Robotic Arm with a camera mounted above it looking down at a slight angle.
Assuming I know the height of the camera, the angle of tilt and the small distance from the center of the robot which is considered (0, 0) what else would I need to convert the image coordinates to the distance from the center of the robotic arm?
I am also assuming all the objects will have a z=0 because they will be sitting on the same platform as the arm.
I have the inverse kinematics worked out to control the arm, I just need to give it coordinates to move to.
If it helps I am using Python and OpenCV.
Edit: Clarification
• Image coordinates will correspond to a vector in world space. The vector will go from the camera image origin, through the pixel coordinates in the image plane. How will you know where along the vector you want the arm to go? – Ben May 8 '17 at 1:28
A quite simple math is involved in this task, but you need to be aware of the pinhole camera model as well as of the homographic projection.
The pinhole camera model gives you the 3D position $(x,y,z)$ of a point in the space whose projection in the image plane corresponds to the $(u,v)$ pixel at a distance $\lambda$ from the image plane (considered infinite):
$$\left( \begin{array}{c} x \\ y \\ z \\ 1 \end{array} \right) = \Pi^\dagger \cdot \left( \begin{array}{c} \lambda \cdot u \\ \lambda \cdot v \\ \lambda \end{array} \right),$$
where $\Pi^\dagger$ is the pseudoinverse of the matrix $\Pi \in \mathbb{R}^{3 \times 4}$ that incorporates both the intrinsic parameters of your camera (i.e. the focal length, the pixel ratio and the position of the principal point) and the extrinsic parameters accounting for the position of the camera reference frame expressed in the world frame (i.e. a $\mathbb{R}^{4 \times 4}$ roto-translation matrix).
You have now to use the concept of homography, because you know that the object of interest lies on the plane $z=0$.
To do so, let's define the following quantities:
• $\mathbf{p}_0$, a point in the plane $z=0$; in particular, $\mathbf{p}_0=(0,0,0)$ is fine.
• $\mathbf{z}$, the normal to the plane $z=0$, that is $\mathbf{z}=(0,0,1)$.
• $\mathbf{c}$, the 3D position of the camera frame expressed in the world frame.
• $\mathbf{p}_1$, a 3D point that corresponds to the pixel $(u,v)$ at a distance $\lambda=1$.
Your final point $\mathbf{p}^*$ is therefore:
$$\mathbf{p}^*=\mathbf{c}+\frac{(\mathbf{p}_0-\mathbf{c}) \cdot \mathbf{z}}{(\mathbf{p}_1-\mathbf{c}) \cdot \mathbf{z}} (\mathbf{p}_1-\mathbf{c}).$$
That said, the strong assumption we're making here is that the object of interest is squashed on the plane $z=0$, that is it has a null height, which is not reasonable. To compensate for that, you could include such an information in the above process and replace the plane $z=0$ with $z=h$, where $h$ represents a suitable height. |
# $x^8-1$ - Irreductible polynomials over $\mathbb{Z}/3\mathbb{Z}$. [duplicate]
How could it possible to factorise $x^8-1$ in product of irreductibles in the rings $\mathbb{Z}/3\mathbb{Z}$?
Theorem : Let $A$ an integral domain and $I$ a proper ideal of $A$. If $f(x) \not \equiv a(x)b(x) (\mod I)$ for any polynomials $a(x)$, $b(x)$ $\in A[x]$ of degree $\in [1, \deg(f))$, then $f(x)$ is irreductible in $A[x]$
I think I have to use this theorem, but I am not certain. Is anyone could help me at this point?
• Do. Not. Reask. The same question. EVER!!!!!! – Jyrki Lahtonen Mar 9 '16 at 8:20 |
# Car "oomph": power or torque?
Two of the most important magnitudes that characterize a car's engine are maximum power and maximum torque. How are those two magnitudes related to the sensation that the car has "oomph" or is "powerful" (in the common, wide sense of the word)?
I would tend to think it's the torque. My reasoning: Compare the same car model with different engines, but with the same weight. We can assume the moment of inertia to be the same. Assuming also that the same gear is engaged, no tyre skidding etc, the engine with more torque will provide more acceleration.
However, car magazines and advertisements usually refer to power, rather than torque. So power somehow seems to be the important one?
• Power is the amount of energy per unit time delivered under some mythical conditions. Torque is the amount of twist on the crankshaft delivered under a different set of mythical conditions. Feb 4, 2015 at 2:30
## 4 Answers
The torque and power curves are of course related (power is related to torque times RPM) but the maximum torque and maximum power are achieved at different RPMs and tell us different things about the car, therefore both are specified.
Best is of course to look at a diagram like the one below, showing the relationships of the BMW 335i twin-turbo engine as an example:
The maximum power is usually achieved near the highest rated RPM for the engine, but since you normally don't drive the car near the max RPM, this figure by itself is less useful for the non-racing driver than looking at the torque curve I think. The maximum torque is usually achieved at a lower RPM so just getting that parameter (and the RPM it relates to) could be more useful actually.
You might be able to estimate that a car with a higher maximum power might have a higher torque at relevant RPMs of course so it's still useful for quick comparisons.
In a turbo engine (like the 335i engine above) the maximum torque will be very quickly achieved at low RPMs and it will then stay flat through a long range; this is where you typically will be driving the car between gearshifts and this is what will give you the "oomph".
A non-turbo engine will have a more gradually increasing torque curve.
But just to be clear, both parameters are related and similar engines with different maximum powers also probably have some similar relationship between their maximum torques (and oomphs).
Torque ($\tau$) and power ($P$) are mathematically related to each other:
$$P = \tau \omega$$
Where $\omega$ is the angular speed (effectively the RPM) of the engine. Since the torque of an engine is a function of $\omega$, we can think of this as either
$$P(\omega) = \tau(\omega) \omega$$
Where $\tau(\omega)$ is some torque-speed curve dependent on the engine design, or
$$\tau(\omega) = \frac{P(\omega)}{\omega}$$
If you have two different cars, both driving with the same RPM ($\omega$), and car A has 1.5 times more torque, it also is using 1.5 times more power.
• Thanks. So, according to this, the two curves of the engine (torque and power) are related? Then why are both given in the specs? One of them would be redundant, right? And also, even if this is interesting, I'm not sure how it answers my question... Feb 4, 2015 at 0:15
• You omit the small detail that max power (the reported value) and max torque do not occur simultaneously. Max power will generally be at relatively high RPM, while max torque will be at a much lower RPM. Feb 4, 2015 at 2:31
• when people refer to a "powerful" car ... they actually mean acceleration. This means Torque (which gets translated to Force at the end of the drivetrain). And Force = m x a ... so for a given mass, Torque == Force == acceleration.
• Unfortunately, the technical definition of the term Power is defined as Torque x Revs. This means the Power curve is sort of redundant information. And, it does NOT equate to "powerful". Only the Torque curve does.
• Because of this muddled use of Power and powerful, the average Joe gets confused. However, because manufacturers don't handily display Torque curves (they generally tell you maximum Torque only), we can make use of the maximum Power figure to approximate what the Torque curve will look like: a low Power:Torque ratio implies the maximum Torque is occurring at low revs (and a high ratio means it occurs at high revs). It's not an exact science, but its a useful indicator when you only have two numbers to go on.
• So .. your Torque curve is all that matters ... except that you need to modulate it by the gearing, so that at higher gears you get less torque at the back wheels. Also, there are friction forces that will slow you down, most notably wind resistance. It is proportional to the cube of velocity, so all cars will have a "terminal velocity" (or max speed) where the cubic curve crosses the gear-adjusted torque curve.
• FInally, remember that F = m x a ... so to get more a, reduce your m (mass). Go on a diet, take no passengers, have nothing in the boot, remove your seats, take out the a/c, and you get more ooomph :)
• Please, use MathJax, it will really improve your posts. Nov 23, 2015 at 14:13
• Thanks. Since is posted this question I discovered myself the very important issue of "gear modulation" as you called it. I think that fact makes the power more meaningful than to the torque: more power with the same engine-torque implies you can use higher revs, therefore lower gear, therefore more wheel-torque for the same engine-torque Nov 23, 2015 at 18:46
Power will tell you how fast you can go - without power you cannot overcome the drag (force of drag goes with $v^2$ so power needed goes as $v^3$). The torque is a measure of the instantaneous acceleration you can get - the ability to transfer power to the wheels. Note that acceleration also requires power - you are adding kinetic energy - but at lower speeds torque is king; at high speeds it's power.
So to feel "oomph" you need torque. To go fast, power.
• Thanks, Floris! So in the absence of viscous drag (for example, in the vacuum and neglecting floor firction) there would be no limit as to the speed you could get (until other effects kicked in, ultimately relativity), and then only torque would matter. Am I right? Feb 4, 2015 at 0:18 |
# Spell check of quite
Spellweb is your one-stop resource for definitions, synonyms and correct spelling for English words, such as quite. On this page you can see how to spell quite. Also, for some words, you can find their definitions, list of synonyms, as well as list of common misspellings.
### Examples of usage:
1. But it's quite true. Virgin Soil by Ivan S. Turgenev
2. I've got you quite safe." The Obstacle Race by Ethel M. Dell
3. I know quite well what it is about. The Truants by A. E. W. (Alfred Edward Woodley) Mason
4. " Let me be quite sure that I understand you," he said. The Firm of Girdlestone by Arthur Conan Doyle
5. You had quite a talk with him about them, didn't you? The Trail to Yesterday by Charles Alden Seltzer
X |
# 6. Exponential and Logarithmic Equations
by M. Bourne
## Solving Exponential Equations using Logarithms
### World Population
Don't miss the world population application below.
Go to World population.
The logarithm laws that we met earlier are particularly useful for solving equations that involve exponents.
### Example 1
Solve the equation 3x = 12.7.
We can estimate the answer before we start to be somewhere between 2 and 3, because 32 = 9 and 33 = 27. But how do we find the answer?
First we take the logarithm of both sides of the given equation:
log\ 3^x= log\ 12.7
Now, using the 3rd log rule
logb (xn) = n logb x,
we have:
x\ log\ 3 = log\ 12.7
Now divide both sides by log\ 3:
x= (log\ 12.7)/(log\ 3)= 2.3135
Is it correct? Checking in the original question, we have: 3^2.3135 = 12.7. Checks okay. Also, our answer is between 2 and 3 as we estimated before.
Get the Daily Math Tweet!
### Example 2
Two populations of bacteria are growing at different rates. Their populations at time t are given by 5^(t+2 and e2t respectively. At what time are the populations the same?
This problem requires us to solve the equation:
5t+2 = e2t
We need to use loge because of the base e on the right hand side.
ln (5t+2) = ln (e2t)
(t + 2) ln 5 = 2t ln e
Now, ln e = 1, and we need to collect t terms together:
t ln 5 + 2 ln 5 = 2t
t (ln 5 − 2) = −2 ln 5
So
t=(-2\ ln\ 5)/(ln\ 5-2)=8.241649476
is the required time.
Graph
Graphs of y=e^(2t) (magenta) and y=5^(t+2) (green) showing the intersection point.
We can see on the graph that the 2 curves intersect at t = 8.2, as we found above.
Continues below
### Exercises
(1) Solve 5^x= 0.3
Taking log of both sides:
log 5x = log 0.3
So
log\ 5^x=log\ 0.3
x\ log\ 5=log\ 0.3
x=(log\ 0.3)/(log\ 5)
=-0.748070363
(2) Solve 3\ log(2x − 1) = 1.
Divide both sides by 3:
log(2x - 1) = 1/3
So
10^(1"/"3)=2x-1
x=1/2(10^(1"/"3)+1)
=1.577217345
Get the Daily Math Tweet!
(3) Solve for x:
log_2 x + log_2 7 = log_2 21
We first combine the 2 logs on the left into one logarithm.
log_2\ 7x=log_2\ 21
7x=21
x=3
To get the second line, we actually raise 2 to the power of the left side, and 2 to the power of the right side. We don't really "cancel out" the logs, but that is the effect (only if they have the same base).
(4) Solve for x:
3\ ln\ 2+ln(x-1)=ln\ 24
Recall that 3\ ln\ 2 means 3\ log_e\ 2.
3 ln\ 2+ln(x-1)=ln\ 24
ln\ 8+ln(x-1)=ln\ 24
ln\ 8(x-1)=ln\ 24
We take "e to both sides":
8(x-1)=24
x-1=3
x=4
Easy to understand math videos:
MathTutorDVD.com
I have the following formula:
S(n) = 5500\ log\ n + 15000 (Using base 10)
If I know S(n) = 40 million, How do I solve it?
40 000 000 = 5500 log n + 15000
log n = 7270.
This gives us simply n = 107270 (using the log laws) which is pretty big!
Easy to understand math videos:
MathTutorDVD.com
(6) In the expression
ln (x+2)^2 =\ 2,
why is one of the answers not there when changed to
2ln (x+2)=2, thus ln (x+2)=1,
giving only one answer?
### Solution
One of the best ways to understand this problem is to see what's going on using some graphs.
Recall we can only find the logarithm of positive numbers.
Here's the graph of y = ln (x+2)^2 - 2, based on the first expression:
Graph of y = ln (x+2)^2 - 2.
We can see there are 2 roots (the 2 places where the graph cuts the x-axis.
There are two arms for the graph because we have squared the (x+2) term, meaning it will have value > 0, so we can take the ln of it with no problems (except at x=-2, of course, since it is undefined there).
Now let's look at the graph of y = ln (x+2) - 1, based on the final expression:
Graph of y = ln (x+2) - 1.
Now we only have one arm in the graph, and one place where the graph cuts the x-axis, thus giving us one solution for the equation ln (x+2)=1, which we find is (after taking e to both sides):
x+2 = e
x = e - 2 = 0.718281828...
So while the Log Law says we can write ln (x+2)^2 as 2ln (x+2), they are not really the same function.
## Application - World population growth
The population of the earth is growing at approximately 1.3% per year. The population at the beginning of 2000 was just over 6 billion. After how many more years will the population double to 12 billion?
We need an expression for the population at time t.
After one year, the population will be 1.3% higher than in 2000. (1.3% = 0.013)
Population after 1 year: 6\ "billion" × 1.013.
Population after 2 years: 6\ "billion" × (1.013)^2.
Population after 3 years: 6\ "billion" × (1.013)^3.
So our population, P, after t years, is given by:
P(t) = 6\ "billion" × (1.013)^t
[In general, for any population growth,
P(t) = P_0(1 + r)^t
where P0 is the population at time t = 0, r is the rate of growth per time period and t is the time.]
We are asked to find when the population doubles, so we need to solve:
12\ 000\ 000\ 000 = 6\ 000\ 000\ 000 × (1.013)^t
This gives 2 = (1.013)^t
Taking logarithms of both sides, we have:
log\ 2 = log (1.013)^t
Using the third log law, we have:
log\ 2 = t\ log\ 1.013
So
t=(log\ 2)/(log\ 1.013)=53.66
So it will take only about 54 years to double the world's population, if it continues to grow at the current rate.
Get the Daily Math Tweet!
When the world population is 12 billion, the net number of people in the world will be increasing at the rate of about 5 per second, if the growth rate is still 1.3%. Currently, there are about 2.6 new people per second. However, the rate of growth is expected to drop considerably to about 0.5% within 50 years.
In 2001, the population of India passed one billion, making it the second country after China to reach that scary milestone.
### World population
Current world population is approximately:
### Interactive applet - World Population
Go to the interactive World Population, which has comparisons between present, past and future population growth.
### Predicting world population
The following graph shows one of the estimates for world population growth during the 21st century. We see that the population will be 11 billion by about 2100! Think of our water quality, air pollution, global warming, social cohesion and lack of food. Surely this is one of the most important graphs in all of mathematics.
But I digress.
We are, of course, talking American English, here. The British billion has 12 zeroes (Well, even they have recently adopted the 9 zeroes billion...).
Graph of world population (billions), 1900 to 2100.
The world population is expected to exceed 11 billion by 2100. [Source]
This suggests a growth rate of about 0.6%, much lower than that experienced during the 20th century.
The equation for the above graph is
P=6.1(1.006)^(t-2000), where
6.1 billion was the population in 2000;
the growth rate is represented by 1+6/100 = 1.006; and
t is the time from the year 2000.
See a "live" world population estimation on the next page. |
# Help about ratio test for series..
• December 6th 2011, 11:50 AM
nappysnake
Help about ratio test for series..
I have a series and i have a problem doing the ratio test....
Σ k=1 to infinity, (-1)^k (1/2 + 1/k)^k
i face my problem with the 1/k :/
edit: (i have to actually examine the absolute convergence and i think i have to first examine the ratio..)
• December 6th 2011, 12:02 PM
mr fantastic
Re: Help about ratio test for series..
Quote:
Originally Posted by nappysnake
I have a series and i have a problem doing the ratio test....
Σ k=1 to infinity, (-1)^k (1/2 + 1/k)^k
i face my problem with the 1/k :/
Use the alternating series test (1/2 + 1/k)^k is monotone decreasing after a a certain value of n ....)
• December 6th 2011, 04:36 PM
Prove It
Re: Help about ratio test for series..
The ratio test will work too. By the ratio test, the series will be convergent if \displaystyle \begin{align*} \lim_{n \to \infty}\left|\frac{a_{n+1}}{a_n}\right| < 1 \end{align*}
\displaystyle \begin{align*} \lim_{n \to \infty}\left|\frac{a_{n+1}}{a_n}\right| &= \lim_{n \to \infty}\left|\frac{(-1)^{n+1}\left(\frac{1}{2} + \frac{1}{n+1}\right)^{n+1}}{(-1)^n\left(\frac{1}{2} + \frac{1}{n}\right)^n}\right| \\ &= \lim_{n \to \infty}\frac{\left(\frac{1}{2} + \frac{1}{n+1}\right)^{n+1}}{\left(\frac{1}{2} + \frac{1}{n}\right)^n} \\ &= \frac{1}{2} \textrm{ according to Wolfram Alpha.} \end{align*}
So the series converges.
• December 18th 2011, 12:21 AM
nappysnake
Re: Help about ratio test for series..
i have one further question. how would you calculate absolute convergence? wolfram alpha says that the ratio test is conclusive and that the series converges, but i have no idea how he comes to the result..all i get is a bunch of terms which i can't simplify..help?
• December 18th 2011, 01:31 AM
chisigma
Re: Help about ratio test for series..
Quote:
Originally Posted by nappysnake
I have a series and i have a problem doing the ratio test....
Σ k=1 to infinity, (-1)^k (1/2 + 1/k)^k
i face my problem with the 1/k :/
edit: (i have to actually examine the absolute convergence and i think i have to first examine the ratio..)
It is difficult to undestand why it is requested the ratio test instead of the root test, that extablishes that a series $\sum_{n=0}^{\infty} a_{n}$ converges if...
$\lim_{n \rightarrow \infty} \sqrt[n] {|a_{n}|} <1$ (1)
In that case is...
$\lim_{n \rightarrow \infty} \sqrt[n] {|a_{n}|} = \lim_{n \rightarrow \infty} (\frac{1}{2} + \frac{1}{n})= \frac{1}{2}$ (2)
http://www.sv-luka.org/ikone/ikone180a.jpg
Marry Christmas from Serbia
$\chi$ $\sigma$ |
## Advances in Differential Equations
### An explicit finite difference scheme for the Camassa-Holm equation
#### Abstract
We put forward and analyze an explicit finite difference scheme for the Camassa-Holm shallow water equation that can handle general $H^1$ initial data and thus peakon-antipeakon interactions. Assuming a specified condition restricting the time step in terms of the spatial discretization parameter, we prove that the difference scheme converges strongly in $H^1$ towards a dissipative weak solution of the Camassa-Holm equation.
#### Article information
Source
Adv. Differential Equations, Volume 13, Number 7-8 (2008), 681-732.
Dates
First available in Project Euclid: 18 December 2012
Permanent link to this document
Mathematical Reviews number (MathSciNet)
MR2479027
Zentralblatt MATH identifier
1191.35021
Subjects
Primary: 65M06: Finite difference methods
Secondary: 35Q53: KdV-like equations (Korteweg-de Vries) [See also 37K10]
#### Citation
Coclite, Giuseppe Maria; Karlsen, Kenneth H.; Risebro, Nils Henrik. An explicit finite difference scheme for the Camassa-Holm equation. Adv. Differential Equations 13 (2008), no. 7-8, 681--732. https://projecteuclid.org/euclid.ade/1355867333 |
# Revision history [back]
As you saw, this gives a NotImplementedError in Sage, so you can't do it directly.
This is possible in Maxima, though it just gives you a symbolic answer.
sage: maxima_console()
<snip>
(%i7) A: matrix([1,2],[3,4]);
[ 1 2 ]
(%o7) [ ]
[ 3 4 ]
(%i11) A.A;
[ 7 10 ]
(%o11) [ ]
[ 15 22 ]
(%i13) A^^2;
[ 7 10 ]
(%o13) [ ]
[ 15 22 ]
(%i14) A^^n;
<n>
[ 1 2 ]
(%o14) [ ]
[ 3 4 ]
Unfortunately, I can't find this exponential command for matrices with letters, so we could use it by tab-completion in Sage and then use
sage: A = matrix([[1]])
sage: M = A._maxima_()
sage: M.command_name('n')
or something. |
# Andre A.
## Tutors live and written lessons
See all Andre's info
Teaching Experience
have been teaching Physics at the University of Oxford since 2009. I also do research in Medical Physics (mathematical optimization of radiotherapy for cancer patients). I am a polyglot fluent in 5 languages (English, French, Italian, Romanian and Spanish) and I am currently improving my Arabic, Galician and German. I hold degrees in Physics (MSc Paris and BSc Toulouse), Engineering (BEng Toulouse) and Radiation Biology (Msc Oxford). I have obtained no less than 9 certificates in 4 languages from the University of Oxford Language Center, the most advanced being in Spanish (Proficiency/C2), Italian (also Proficiency/C2), German (Vantage/B2) and Arabic (Elementary/A2).
Higher Education: 1500+ h of tutoring at the University of Oxford and privately at Degree level (Mathematics for Physics, Mechanics, Electromagnetism, Thermodynamics, Statistics, Quantum Mechanics, Nuclear Physics, Solid State, Biophysics, Quantitative Methods for Finance).
Secondary Education: 1200+ h of tutoring in Physics (A2, AS, IB, PAT, SAT, GCSE), Maths (A2, AS, IB, MAT, SAT, GCSE) and French/Spanish/Italian (all levels, conversational and writing skills).
As a private tutor, I have taught the full first year curriculum at Oxford, KCL, UCL and Imperial College, the full second year curriculum at KCL and Quantum Mechanics and Solid State for the 2nd year curriculum at Imperial College, and the full 3rd year curriculum at Glasgow. Courses I have taught include: Mechanics, Electromagnetism, Mathematical methods, Optics, Waves, Thermodynamics, Quantum Mechanics, Nuclear Physics, Solid State, Lasers.
Every student is unique and as such, I always devise a personal study plan for each and every one of them. I first take into account the goals, the time left until the exams, the material that needs to be covered, the current level of the student. I then use this information to compute the number of hours of one on one tuition and self-study (homework) required to achieve the goals.
Some recent (over the last 6 months) tuition examples:
1)C. is a 2nd year student at KCL. Due to personal reasons, he could not revise for his exams. We started in person and online tuition in March 2016 and, working 21h/week, in 2 months we covered the whole curriculum and the past papers. C. now expects marks of 60-95% in all of his exams.
2)T. is a 1st year student at KCL. Having gone through a family tragedy, he initially suffered from poor self confidence. We started online tuition in February 2016 and we covered the full curriculum. By April, T was able to do on his own 3 past papers/day and the need for tuition was reduced to 3h/week. His hard work paid off, as T expects a First for his 1st year results.
3)O. is a very bright AS/A2 student who suffers from Aspergers. I have regularly worked with him for 2h/week for over more than a year, on C1-4, S1-2, FP1-2 and M1-2 and the MAT. I have helped him with exam strategies, time management, and making use of visual representations (graphs). We also focused on those particular types of exercises where he would get very low marks. O is taking no less than 16 exams this May and June and I expect him to score As in most of them.
4) A. graduated with a Degree in Law and wanted to work for a prestigious fashion firm in France. I helped her with translating her CV and letter of motivation from English into French.
5)I. unfortunately suffered from poor health and needed my help to prepare for his Physics and Maths A2 Exams. I worked with him on average for 1h/week and I expect him to receive the grades required by his top-choice University.
6)J. is a first year Physics student at UCL. Unfortunately, due to a food poisoning incident, she missed her May exams. This summer, it is my duty to encourage and prepare her for her September exams. Hopefully, the food poisoning was a blessing in disguise and she now aims for a First.
7)V. needed my help to prepare for a 3rd year Quantitative Finance exam at Regents University. We had intensive tuition in December, for 3-5 hours/per day during two weeks. I also helped him translate the website of his start-up into French.
8)A. is a 3rd year student at the University of Glasgow. Unfortunately, last year he failed all of his exams. This year, I had online tuition with him for 1-2h/ week and we focused on solving past exams papers he struggled with on his own. I expect him to pass.
9)G. is an Italian MSc student at Imperial College in Biological and Medical Sciences. I helped him prepare for his Physics/Biomechanics exam.
10)P is a Russian 1st year Physics student at Imperial College. I helped him prepare for his Mechanics exam resit and we worked for 3h/day during the week before his exam. P achieved much more than the 40% needed to pass and is now enjoying a worry free summer.
11)M is from Saudi Arabia and is a Foundation student at Kaplan. He excelled on his theoretical modules, yet he encountered problems when it came to writing lab reports on the Physics experiments he hid in class. I gave him hints on how to organize his data, structure his ideas and present them in a clear and logical order. As marks increased to 95%, the top of his class!
Extracurricular Interests
I can also teach my lessons in Spanish, French and Romanian.
Andre hasn't provided any written explanations.
Andre hasn't shared additional subject expertise.
### Send Andre a message
Let Andre know what you'd like help with and when to ensure you get your ideal lesson.
• Expected Response Time > 1 day
#### + How do lessons work?
You and your tutor will meet in an online lesson space. You'll have access to video, audio and text chat and be able to upload documents, draw on a chalkboard and edit papers in real time.
#### + Who are Chegg's tutors?
Many tutors are current students or recent graduates of top-tier colleges like Harvard, Stanford and MIT. Chegg Tutors has thousands of tutors available to work with you on your toughest assignments.
#### + How much does tutoring cost?
New students can try Chegg Tutors for free, and additional lessons are as little as \$30/hour, depending on how frequently you think you'll want to meet with a tutor
#### + How do I set up my first lesson?
Just use the the form above to send a message and schedule your first lesson! |
# How does an increase in USD money supply affect inflation?
Say an average american household is bringing in $50,000 a year. Between ongoing quantitative easing and the drastic stimulus packages passed in February-May 2020, the USD Money Supply increased by 25% or 16% depending on if you look at M1 or M2, respectively. (I assume M2 is more directly related to inflation, but would love if anyone can provide clarity there) Assuming 0 change in real output (not likely, but for simplicity) would the last 3 months actions to increase the money supply equate to a direct equivalent loss of value per USD? Would this average American family's \$50,000/yr income in February become equivalent to a \$42,000/yr salary(in Feb \$'s) by May?
Or are there other factors in play? (outside increasing real output)
Yes there are other factors at play. Inflation is change in a price level. The price level, according to classic textbook monetary equation, is determined as follows:
$$P = \frac{MV}{Y}$$
Where $$M$$ is the money supply, $$V$$ velocity of money (how much is one dollar used in the economy) and $$Y$$ is the real output.
So beside money supply and real output inflation depends also on velocity of money. Furthermore, the inflation also depends on peoples expectations of these quantities. That is on what people expect the money supply, real output and velocity will be (although this is not shown in the simplified textbook model above).
Would this average American family's \$50,000/yr income in February equivalent to a \$42,000/yr salary in May?
Not likely. According to the International Monetary Fund the the annual inflation forecast for USA for 2020 and 2021 is $$0.6\%$$ and $$2.2\%$$ respectively. These are mean forecasts so of course the actual realized values will not be exactly the same, but I would be surprised if even $$99\%$$ confidence interval would even include double digit inflation. The assumption that output does not change is currently also not realistic. USA lost $$6\%$$ of GDP just in 2020Q1 according to the Bureau of Economic Analysis and the fall for Q2 is projected to be even worse.
• Wouldn't the loss in GDP make it more likely that inflation reaches double digits? With such an increase in money supply, velocity stagnate (as far as I can tell), and this decrease in Y - isn't that indicative of much more than a typical years inflation? I just don't see, based on the formula you provided, where the extra value lies that keep price level near the same while money supply increases so drastically. Or is the idea GDP will rebound drastically? Why wouldn't comparisons of GDP measured in USD also be drastically skewed by the increase in USD supply? – TCooper Jun 18 at 22:44
• @TCooper you are right that drop in real output would ceteris paribus lead to higher inflation - I just mentioned that as a side note not as an argument for why inflation will be low inflation - the point was that it would already mess your calculation. In addition velocity of money definitely does not stagnate. For example have a look at this Fed data on velocity of money in US it falls dramatically in last two decades ( fred.stlouisfed.org/series/M2V ). Velocity of money depends on how many transactions people make. – 1muflon1 Jun 18 at 23:18
• In a recession where they are not even allowed to purchase many goods and services due to corona lockdowns it is bound to fall. Generally recessions are deflationary due to this (especially demand driven recessions are). The recent recession is both demand and supply driven but as people are not allowed to spend I would expect V to just fall further not stay stagnant. In fact currently as interest rates are at zero lower bound it is real possibility that many countries will find themselves in liquidity trap. In such situation any increase in money supply will be offset by change in V – 1muflon1 Jun 18 at 23:19
• That last comment combined with a couple of the answers on the question I linked are really getting the picture completed in my mind. I appreciate you taking the time to help me understand ( also @the_rainbox ). I've never been in this SE and I see most of the questions are more involved/higher level. It was just recommended to try here when I first posted on Personal Finance. Again, really appreciate your answers and clarifications. – TCooper Jun 18 at 23:26
• Got it, I definitely will. General rule on other SE's is wait 24 hours (global community and all), so I'm going to do that. – TCooper Jun 18 at 23:40
The most important distinction you have to make is between real income ($$Y$$) and nominal income ($$P*Y$$);
Textbook cases imply that, for $$MV = PY$$, then $$\frac{ΔM}{M} = \frac{ΔP}{P}$$. So yes, it is expected from almost every basic model that dwells on the subject, that prices will rise for the same amount money supply does (see "Shopping-Time Model", "Baumol-Tobin Money Demand", "Friedman Money Demand").
Hence, 50,000\$will be equal to (if we accept M1 as money supply) $$50,000*1.25 = 62,500\\\$$ and not 42,000\$.
This means that the nominal income of the family will rise, but it is possible, if real income doesn't also rise for the same rate (1.25) but less (or even, if it turns to a negative growth pattern, much like we're seeing today), the real income of the household (the value of goods it can purchase) is going to go downhill.
• Absolutely. My only concern was with his 50,000/42,000 = 1.19... ratio, which doesn't add up with the 25 or 16 money supply increase, so I tried to clarify that as well, although without saying so. – the_rainbox Jun 18 at 20:02
• I didn't make that clear, and I'm now seeing it's an odd way to look at it, but what I meant is the 50,000 per yr gives you the purchasing power of 42,000 per yr prior to the increase in money supply. So a 16% reduction in purchasing power due to a 16% increase in inflation. The velocity of money factor changes this, and setting change in output to 0 is unrealistic as well - but why did you choose to use M1 money supply over M2 for the inflation calculation? – TCooper Jun 18 at 21:07
• Hello OP. Firstly, yes, purchasing power is what I assumed you were talking about, it was only a matter of clarification of an important distinction in case you missed it. Secondly, the 3 models I brought forth, all use M1 as their money supply, and I wanted to be as typical as possible; if you asked me, I'd intuitively say most people have a tendency of not estimating M2 and M3 changes as a part of money supply volatility. – the_rainbox Jun 18 at 21:12
• I can't make the dollar sign stick to its proper place and i'm getting furious hahah – the_rainbox Jun 18 at 21:20
• @TCooper you have to use \ in front of dollar sign, if you dont the page thinks that you are writing a mathematical expression which is always encased in two dollar signs. You will see that I edited your Q to include them – 1muflon1 Jun 18 at 22:06
It seems likely there are other factors at play, that are not included in the basic model.
For example, suppose that a large portion of new money is simply gifted to a few actors. How would that affect the inflation of the prices of goods and services? What would it mean to you if your salary stayed fixed? How would it affect your life?
If those actors simply held onto the money and kept in the bank, then, largely, there may be little practical consequence to the economy as the prices of goods and services might be unchanged.
If those actors used the money for massive purchasing goods and services, the prices of those goods and services may go up, making them relatively expensive and less affordable for most people with a fixed salary.
If those actors used the money to employ people in the production of goods and services, then potentially, the cost of those goods and services could actually go down (and its surprising that governments don't directly do this as a method of stimulus).
So, inflation of prices generally, isn't a necessary logical consequence of new money. Far more important is the relationship that supply and demand has on prices. Inflation becomes an extreme concern when good and services, particularly the necessities of life (shelter, food, water, power, ... toilet paper) are in much less supply than there is demand. The supply of these items may have little to do with the total amount of available money in an economy -- rich actors with mountains of money may have little use for an extra can of beans, and their ability to pay $2000 dollars for one is unlikely to have large effects on the price of beans in a crisis. Price inflation can happen when supply chains fail and supply is reduced, driving up prices. Price inflation can happen when there is an increase in demand, and there is a fear of future supply shortages. Its important to note that the text book model is one of macroeconomics. It relates the total money to the average price of items in a closed economy. It may say little about the median prices of individual goods and services. Money velocity is a critical component than doesn't necessarily increase with money supply. Further, the text book model it a model of the fixed points of a dynamic process. These are theoretical value that do not consider the process itself to be changing. Price inflation occurs in failed economies it is sometimes as a reaction to failures in supply chains, as opposed to the release of new money. Price inflation isn't necessarily bad for people either. If we doubled the amount of money in the economy, but also doubled people's wages in consequence, it might have zero effect on most people's lives. In this scenario, the major losers might actually be people having a large amount of cash, who value has suddenly decreased. So to answer the question, to know the effect of new money on prices, you need to know the effect that money will have on the supply and demand of good and services, and their production. If the new money is given to people who cash-out-refinance their homes, and place that money in a CD, I would expect little change in the prices of most goods and services, at least in the short term, and perhaps even little change in loan prices and homes. The full answer is, its complicated, and depends a lot what the new money is used for, its consequence on behaviors, and the subsequent effect on supply and demand, particularly if were talking about the prices of individual items and not averages. • But this is already covered by the textbook equations - whether people spend money or not that’s the velocity of money. If people stuff all new money under mattresses then the velocity of money will drop. Also, the statement about if the money would be used to employ people in production of goods and services the prices would potentially go down is not correct. Increasing demand for workers and capital pushes returns to those factors up increasing costs of producing goods and services which will lead to higher prices. Moreover, also those employees are going to spend the money presumably – 1muflon1 Jun 19 at 10:37 • pushing up the prices. Furthermore, it’s empirically not true that “price inflation occurs in failed economies”. US in 70s was far from failed economy yet it had relatively large inflation. “Sometimes the release of new money is the reaction to price inflation” - what source do you even get this from? I would generally like to see sources of about half of the statements you make here because they go contrary to conventional monetary theories – 1muflon1 Jun 19 at 10:44 • Re velocity. Suppose that 10 trillion dollars is gifted to banks, and the those banks use it to furiously trade bitcoin back and forth, knowing that if they do so, they'll likely receive another 10 dollars (because later we find that the new money was insufficient to stimulate the economy). The velocity of that money may be high, yet the consequence on overall prices in the economy will be low. Theory that does not consider behavior is just theory. – user48956 Jun 19 at 15:43 • 1. It’s not compared to individual prices or velocities but to aggregate and average prices and velocities. 2. So your whole argument is just that it would be more appropriate to use open economy model here which is essentially just based on the same fundamental relationships? Yes assuming close economy is not realistic but the point is to isolate the effects that happen within the country from outside influences as an additional ceteris paribus condition. There is nothing incorrect about doing that in fact that’s how you can gain additional insights into how the mechanisms work – 1muflon1 Jun 19 at 16:07 • no I am saying that within the simple$MV=PY\$ the output determination is not fully modeled as it models only partial equilibrium on money market. However, output will be a function of prices in the short run. You can expand the monetary equation which gives you money market equilibrium into full IS-LM AS-AD model and in such mode output is function of price level in the short run. But again for the sort of question asked here resorting to general equilibrium analysis is completely unnecessary and it would not fundamentally change the message of the model. – 1muflon1 Jun 19 at 16:46 |
# The joy of sets of sets
The simplest construct in mathematics is probably a finite set of sets. Unlike a simple set alone, it has natural algebraic, geometric, analytic and order structures built in. In the finite case, there is lot of overlap but still, there is a rich variety of structure.
### Algebraic structure
An example of a an algebraic structure is to have the symmetric difference as addition and the intersection as multiplication. If the set of sets is invariant under both operations and contains the empty set and the union of all sets, then we have a commutative ring with 1. The empty set is the zero element, the union of all sets it the 1-element. It is a Boolean algebra with the property A+A=0 and A*A=A. Arithmetic can come in in various ways. What was just mentioned is the situation when the elements of G are the elements of an algebraic object and the operation is an operation on G. One can also look at operations on geometries G and the disjoint union is one of them which makes the elements of the category into a monoid.
### Analytic structure
An example of an analytic structure is a $\sigma$ algebra, it is used in probability theory, where one has additionally a probability measure defined. On finite probability spaces, such a probability measure is defined by assigning to every set $\{j\}$ with one element has some probability $p_j$. The association with analysis comes because functional analytic objects like function spaces are defined in the form of random variables. In mathematics, it is a bit difficult to define what “analysis” is, as it consists of so many different fields: real analysis, complex analysis, functional analysis, harmonic analysis, calculus of variations, spectral theory, stochastic calculus or partial differential equations are all huge fields and usually considered part of analysis. What is an ingredient in all is that there is some sort of calculus involved with computing rate of changes (marginal quantities) or integration (expectation). In the case of finite sets, all of this can be done much easier (of course most of the time missing some important part of the continuum model). For example, one can look at PDE’s on graphs and this applies also for finite set of sets as any finite set of sets defines graphs, one of them being the Barycentric refinement in which two sets are connected if one is contained in the other.
### Order structure
An example of an order structure is a lattice, where we have invariance under union and intersection. An example of an order structure is a lattice. An important order structure is a poset, a partially ordered set. Every finite set of sets automatically comes with a partial order structure as we can define $x<y$ if $x \subset y$. The subset relation is reflexive, anti-symmetric and transitive. The nice thing (the joy!) of finite sets of sets is that one does not have to impose the Poset axiom system but has it for free. The three axioms are built in the notion of subset. The notion of poset so to speak is more general then, however for finite posets, one can always realize them as finite sets of sets. In order to see that posets are important one can refer to statements of masters in the subject. In this essay is put as one of the big ideas in combinatorics.
### Topological structure
In topology one looks at topologies or simplicial complexes. Finite topological spaces define notions like continuity, connectivity or various seperation properties like the Hausdorff property. Simplicial complexes are more used in algebraic topological set-ups. When thinking about topology as well as also about simplicial complexes, one often has a continuum picture in mind. Topology is “rubber band geometry” for example. Simplices are realized as simplices in n-dimensional space. In the discrete and especially in a finite set-up, one does not want to use the axiom of infinity even. A finite topological space or a finite abstract simplicial complex has no need to be realized in the continuum. For a finitist, it would actually not even matter, if somebody would prove ZFC is not consistent.
### Arithmetic structure
One can look at sets of sets as “numbers”, elements with which one can do arithmetic. The addition is the disjoint union, the multiplication is the Cartesian product. There are relations with dual constructs which include the join operation known in topology. https://www.youtube.com/watch?v=FZGIrLiVeOc explains it a bit: if there are n sets in G, define a $n \times n$ matrix $L$ which has as entries $L(x,y)$ the number of sets which are both in $x$ and $y$. A dual matrix to that is $L(x,y)$ which counts the number of sets which contain both $x$ and $y$. In order to see this from a representation point of view, one has to add an algebraic structure on the category of finite sets of sets. Taking the disjoint union gives a monoid structure which by general principles (Grothendieck completion) extends to an abelian group. But there is even a ring structure as we can take the Cartesian product as multiplication. Now, the matrix L(G) associated to a finite set of sets G behaves nicely. When adding two structures G+H, then the matrices are added as a <b>direct sum</b> $L(G) \oplus L(H)$. When taking the Cartesian product, we get the tensor product of $L(G) \otimes L(H)$ of the corresponding matrices.
### Representation theoretic structure
If one assigns to a set of sets a matrix, one is in the realm of representation theory, especially if this assignment shows compatibility with the structure. The connection matrix for example is nice in this respect and lead to a representation in a tensor algebra. It is a bit surprising that interesting matrices already appear when looking at a finite set of sets without any further assumption on the sets. This tensor representation is a representation in a larger sense in that the geometries G, the elements of the group are represented by matrices in a tensor ring. It is not that G+H becomes the matrix product.
### Algebraic geometric structure
With a bit of more structure, one can get to other topics. An integer valued functions on a finite set of sets can be seen as a divisor. The simplicial complex defines graphs in various ways, their Kirchhoff Laplacian L allows to define principal divisors (elements in the image of L, motivated by the continuum fact that $\Delta \log(f)$ is classically a divisor), effective divisors (nonnegative), linear systems (divisors equivalent to effective divisors modulo principal divisors), divisor class group $ker(\chi)/im(L)$. So there is a bit of a playground with notions used in algebraic geometry. I did myself quite a bit of work in this in 2012 when learning the Baker-Norine theory. It is a fascinating story relating in the discrete with many other combinatorial topics.
### Differential geometric structure
Also differential geometric notions come in. One can define a discrete Euler curvature for example and get Gauss-Bonnet, Poincaré-Hopf, Morse theoretical notions are quite easy to implement, geodesic notions like exponential maps etc turn out to be tougher. It is actually quite an interesting theme for example to look at notions of curvature and especially sectional curvature. I myself started to work in this entire field by looking at notions of curvature. here are some slides from 2014. At that time I still considered a graph to have positive curvature if all embedded wheel graph have less than 6 spikes. This is a very primitive notion modeling the notion of positive (sectional) curvature in the continuum. One can dismiss this this as too strong of a condition (it is!) but it is one where one can prove things. One has a sphere theorem for example in that every d-graph which has positive curvature in that sense is a d-sphere but the restrictions are clear in higher dimensions. If one defines negative curvature as a graph for which all embedded wheel graphs have 7 or more spikes, then in dimension 2, one can easily see that for every genus there are only finitely many species and that in dimension 3 or higher there are none (simply because unit spheres are spheres which must have some positive curvature somewhere). So, the notion of sectional curvature needs to be relaxed in order to get closer to the continuum. The problem is a bit like in physics: there are too many possibilities to generalize this (smoothing out curvature for example or looking at larger sets of two dimensional geodesic surfaces) but more importantly that for weaker notions, even weaker theorems like Synges theorem become difficult to prove. Not to speak about sphere theorems. But they are difficult also in the continuum.
### Algebraic topological structure
The algebraic topological set-up in the finite case is probably even older than the continuum version. Kirchhoff pioneered discrete calculus notions before Betti and Poincare set things up, often with discrete simplicial complexes in the form of graphs. Graphs have historically been used early to describe higher dimensional phenomena. Even the start of graph theory, the Königsberg bridge problem deals with a two dimensional topological problem. What happened is that especially the literature on graph theory looked at graphs as one-dimensional simplicial complexes having only zero and one dimensional cohomology for example. Whitney for example, who wrote his thesis in graph theory saw it differently and also developed the first really sane notion of “manifold”. The clique complex is a structure imposed on a graph which serves like a topology or sheave or sigma algebra as an additional structure. Algebraic topology can be done very intuitively on graphs. And it can also be taught in freshmen calculus.
### Physics
Finally, one can ask, whether finite sets of sets are relevant in physics. Mathematically a lot of things are directly mirrored. Once, one has an exterior derivative, one has notions like div, curl, grad in any dimension, has a Laplacian. Classical field theories do not need even a change of language: can do electromagnetism, gravity (Gauss form), look at dynamical systems, like wave or Schrödinger equations, one can study spectra and their relation to the geometry etc. (Maybe look at this handout). There are things where one has to deviate more, like in relativity, where notions of curvature like Ricci curvature can look very different in combinatorics, different in the sense what theorems one can prove. A start is to study variational problems like extremizing functionals like the Euler characteristic on classes of finite geometries. But physics is a bit different from mathematics in the sense that one has to build models which explain measured data. A valuable theory must explain at least one phenomenon without any competing theory doing it better. Otherwise, it will end up in the garbage bin or remain in a mathematical theory (and is only physics in an extended sense that it can be realized in some mathematicians brain). So far, finite geometries miserably fail to describe the motion of a planet, or the structure of the hydrogen atom or what happens with a black hole if it evaporates away. |
# Knight on the keyboard [closed]
You are given a QWERTY keyboard, and are allowed to choose where you wish to start. You are only able type in the same way a knight is able to move on a chessboard (but in this case, on the keyboard).
What is the largest proper English word you can make?
┌───┬───┬───┬───┬───┬───┬───┬───┬───┬───┐
| Q | W | E | R | T | Y | U | I | O | P |
└─┬─┴─┬─┴─┬─┴─┬─┴─┬─┴─┬─┴─┬─┴─┬─┴─┬─┴─┬─┘
| A | S | D | F | G | H | J | K | L |
└─┬─┴─┬─┴─┬─┴─┬─┴─┬─┴─┬─┴─┬─┴─┬─┴───┘
| Z | X | C | V | B | N | M |
└───┴───┴───┴───┴───┴───┴───┘
(Thanks to GOTO 0's post for the keyboard)
For some clarification, when moving move the 3 spaces left or right, the knight will turn onto the furthest key. When moving up or down, you can go to either side.
Examples:
(left or right):
A->(R or C) and not E or X
G->(E or X/ I or M) and not R or C
(up or down):
W->(Z or C/ F) to clarify vertical movement: [W-S-X->Z or C]
V->(U or T or E/ S or J) to clarify vertical movement: [V-G-Y->T or U or E]
• V->(T or U) would E be allowed as well?
– Bob
Jun 1, 2015 at 20:13
• @Bob Yes, both initial directions are allowed. I only showed one to keep the examples small. Jun 1, 2015 at 20:13
• For real clarification, you should instead extend the examples instead of keeping them small. Jun 1, 2015 at 20:30
• How about two letters in a row? Jun 1, 2015 at 20:40
• I’m voting to close this question because without defining "word" it turns into an ill-defined, open-ended puzzle (and open-ended puzzles are off-topic as of May 2019) Aug 5, 2021 at 3:58
I can't beat six, unless I'm allowed to use an exotic animal:
Caracara, with 8 letters.
Then here are the old, invalid answers that I'm keeping here so the comments make sense. =P
Seven letter exotic animals: caracal, aracari
Other six-letter words: carhop, unfull
I will admit I wrote a program for this, though I think I did it before the no-computers tag popped up (or at least I didn't see it). Also apparently it was a bad program since it gave me several invalid answers!
• L is not reachable from A and I is not reachable from R, so both your 7-letter animals are invalid. And you can't reach P from O, so carhop is also invalid. Jun 1, 2015 at 21:36
• My goodness, I really screwed the pooch on this one. Should have double checked my work! I'll leave these up as a testament to carelessness. =P Jun 1, 2015 at 21:41
• Am I right that LL is cheating? Jun 1, 2015 at 21:45
• 8-letter word, though? That's pretty good. Jun 1, 2015 at 22:56
• Could you edit this to stress that Caracara is the answer? It is the only valid word here but the best one anyone has found. Jun 2, 2015 at 12:56
The best I can find has 4
Arch
So far, I've only been able to find some 4-letter words:
Carb: C, C-X-Z -> A, A-S-D -> R, R-F-V -> B
Gimp: G, G-H-J -> I, I-J-N -> M, M-K-O -> P
Bibi: B, B-H-U -> I, I-J-N -> B, B-H-U -> I
I swear I didn't use the other answer! (6)
Archon
My Score is 4
This is how I understand the rules (the letter before the colon can reach each letter after the colon in 1 move):
N:T,F,U,O,L
C:W,A,R,Y,H
Q:D,X
J:T,P,V
F:W,U,Z,N
K:Y,B
P:J,M
U:F,V,N,L
O:H,N
Y:D,C,B,K,M
B:R,D,Y,I,K
I:G,B,M
E:X,G,V
H:R,O,C
T:S,X,V,J,N
W:Z,F,C
R:A,Z,C,H,B
M:Y,G,I,P
V:E,S,T,U,J
X:Q,E,T,G
D:Q,Y,B
Z:W,R,F
S:T,V
L:U,N
A:R,C
G:E,I,X,M
Despite the no-computers tag, I was curious if it's really that hard to find words. It seems to be: Trying this against the word list in /usr/share/dict/words under linux with almost 100k english words only got these (omitting anything below 4 letters):
RACY ARCH LUVS LULU CARA MIMI |
2. Economics
3. the smith family has 1000 per month to spend on...
# Question: the smith family has 1000 per month to spend on...
###### Question details
The Smith family has $1,000 per month to spend on food and other goods. (The two goods are food and dollars spent on all other goods. If you want, just assume that the prices of food and other goods are both$1.) Graph each budget constraint and write the equation. The units for the budget constraint will be dollars spent on food and dollars spent on all other goods. (Note: Some equations or lines may be defined piecewise/have multiple parts. Writing the equations may be trickier so be careful with the graphs first.)
(a) They are given $200 in welfare payments (b) They are given$200 in food stamps, which cannot be sold.
(c) They are given $100 in food stamps and the option to buy another$100 worth of food stamps at the price of $0.50 per$1 of food stamps. (This was the actual government policy until 1979.)
(d) They are given $200 in food stamps, but each$1 of food stamps can be sold on the black market for \$0.50 |
# Homework Help: SOS Problem with lagrange derivation
1. Dec 25, 2009
### thebigstar25
SOS .. Problem with lagrange derivation!!
1. The problem statement, all variables and given/known data
Im having a hard time with the problem illustrated in the following figure:
[URL=http://img199.imageshack.us/i/20091224344.jpg/][PLAIN]http://img199.imageshack.us/img199/812/20091224344.th.jpg[/URL][/PLAIN]
it said that solve the equations of motion for a coupled oscillators system consists of a metallic smooth ring of mass M and radius R oscillates in its own plane with one point fixed, along with it a particle of mass M sildes without friction on it. This particle is attached to the point of support of the ring by a massless spring of stiffness constant k and unstreched length 2R, taking in consideration only small oscillations about the equilibrium configuration
In this problem the given parameters are as follows:
R (radius) = 4.0 cm ,
M (mass) = 13.0 g,
k (spring constant) = 8.0 N/m
2. Relevant equations
I have to get somehow to the following equations
[URL=http://img40.imageshack.us/i/20091226346.jpg/][PLAIN]http://img40.imageshack.us/img40/2695/20091226346.th.jpg[/URL][/PLAIN]
3. The attempt at a solution
i was thinking that for the sliding mass, x=2Rcos(theta+phi) and y=2Rsin(theta+phi)
and for the ring its x=2Rsin(theta) and y = 2Rcos(theta).. and then the potential energy of the system would be = -2Rmgcos(theta)-2Rcos(theta+phi)
and the kinetic energy is = 0.5*m*(x-dot^2(for the sliding mass) + x-dot^2(for the ring)
I know what im doing is wrong coz no matter how i look at the questions I still cant find out how to reach the equations they reached..
I appreciate any help .. thanks in advance ..
2. Dec 25, 2009
### Pinu7
Re: SOS .. Problem with lagrange derivation!!
Wow! This is hard! Let me try:
1. Find the potential energy of an arbitrary small arc that of the circle that is displaced from the vertical by an angle $$\theta$$ with respect to the vertical. Then line-integrate it to find the total potential energy. Keep in mind that if the metallic ring has a uniform mass, then its mass density is M/2$$\pi$$R (Why?)
2. Find the kinetic energy of the ring. Remember that, for a rigid body(the ring), the rotational kinetic energy is $$\frac{1}{2}I\dot{R}$$ where "I" is the moment of inertial around its pivot.
3. Find the Kinetic Energy of the point-mass.
4. Find the potential Energy of the point-mass.
I am not sure how the "small oscillations" thing fits into it.
3. Dec 25, 2009
### thebigstar25
Re: SOS .. Problem with lagrange derivation!!
I appreciate your help .. thanks alot
i think the point from small oscillation is that we can make the following approximation:
sin(theta) = theta
i still dont get how analyze this system >_< i was trying the past two days, but still not able to figure it out ..
4. Dec 26, 2009
### diazona
Re: SOS .. Problem with lagrange derivation!!
The first thing that probably should occur to you is that you can at least try to use the angles, $\theta$ and $\phi$, as generalized coordinates. That saves you from figuring out what $x$ and $y$ are. (It's not as simple as what you came up with)
As Pinu7 was getting at, there are basically four terms:
1. Potential energy of the ring, which is due to its height. You could do this with a spatial integral, but it's simpler to just use the center of mass.
$$U = Mgh$$
where $h$ is the height of the center of mass. (If you pick the pivot point to be at zero height, $h$ will be negative.)
2. Kinetic energy of the ring. Since one point (the pivot) is fixed, this is purely rotational. The formula is actually
$$K = \frac{1}{2}I \omega^2$$
In this case, $\omega$ is the time derivative of one of your angular variables. $I$ is the moment of inertia about the pivot point.
3. Kinetic energy of the point mass. Again, you'll want to express this in terms of the angles, which you can do directly by a clever choice of coordinates. I'd use polar coordinates with the origin at the pivot point. The formula for kinetic energy of a point mass in polar coordinates is
$$K = \frac{1}{2}m r^2 \omega^2 + \frac{1}{2}m\dot{r}^2$$
where $r$ is the distance from the pivot to the point mass, and $\omega$ is the time derivative of the angular position of the point mass. I'll give you a big hint: $\omega = \frac{\mathrm{d}}{\mathrm{d}t}[\theta + \phi]$. (for this part only, of course; $\omega$ here is not the same as $\omega$ in part 2!)
4. Potential energy of the point mass. There's a contribution from the spring,
$$U = \frac{1}{2}k(x - x_0)^2$$
which you should be able to figure out easily once you've determined $r$ from part 3, and also a gravitational contribution, which is just $mgh$ with $h$ being the height of the point mass. That height is going to be a little tricky to calculate, but since at this point you'll already have figured out the polar coordinates of the point mass in part 3, you should be able to get a height from those without too much trouble.
Once you've got all the energy terms, just construct the Lagrangian and apply the Euler-Lagrange equations. At this point, you'd want to make the approximations $\sin\theta \approx \theta$ and $\cos\theta \approx 1 - \theta^2/2$. If all went well, you should wind up with the equations in your second picture. (If not, just post your work here and we'll see what went wrong; it's a reasonably complex problem so you could be forgiven for making a few mistakes along the way ;-)
Last edited: Dec 26, 2009
5. Dec 26, 2009
### thebigstar25
Re: SOS .. Problem with lagrange derivation!!
Thanks diazona for ur notations .. i will try now again and then post what i have done..
6. Dec 26, 2009
### thebigstar25
Re: SOS .. Problem with lagrange derivation!!
I appreciate ur help both of u diazona and Pinu7.. but this problem is just too tough for my little brian to handle .. i may try later coz i want to have some rest now :`( .. |
Recent Issues
The Journal About the Journal Editorial Board Editorial Interests Editorial Procedure Subscriptions Submission Guidelines Submission Page Policies for Authors Ethics Statement ISSN (electronic): 1364-0380 ISSN (print): 1465-3060 Author Index To Appear Other MSP Journals
Ancient mean curvature flows out of polytopes
### Theodora Bourni, Mat Langford and Giuseppe Tinaglia
Geometry & Topology 26 (2022) 1849–1905
##### Abstract
We develop a theory of convex ancient mean curvature flow in slab regions, with Grim hyperplanes playing a role analogous to that of half-spaces in the theory of convex bodies.
We first construct a large new class of examples. These solutions emerge from circumscribed polytopes at time minus infinity and decompose into corresponding configurations of “asymptotic translators”. This confirms a well-known conjecture attributed to Hamilton; see also Huisken and Sinestrari (2015). We construct examples in all dimensions $n\ge 2$, which include both compact and noncompact examples, and both symmetric and asymmetric examples, as well as a large family of eternal examples that do not evolve by translation. The latter resolve a conjecture of White (2003) in the negative.
We also obtain a partial classification of convex ancient solutions in slab regions via a detailed analysis of their asymptotics. Roughly speaking, we show that such solutions decompose at time minus infinity into a canonical configuration of Grim hyperplanes. An analogous decomposition holds at time plus infinity for eternal solutions. There are many further consequences of this analysis. One is a new rigidity result for translators. Another is that, in dimension two, solutions are necessarily reflection symmetric across the midplane of their slab.
##### Keywords
polytopes, mean curvature flow, ancient solutions, translators
Primary: 53E10
Secondary: 52B99 |
## How do you break words in LaTeX?
use \- inside a word to explicitly denote the allowed places to break, e.g. cryp\-to\-graphy. specify exceptions via \hyphenation{cryp-to-graphy}
### How do you force a line break in LaTeX?
The \linebreak command tells LaTeX to break the current line at the point of the command. With the optional argument, number , you can convert the \linebreak command from a demand to a request. The number must be a number from 0 to 4. The higher the number, the more insistent the request is.
#### How do you make LaTeX not break in word?
1. You can set \hyphenpenalty and \exhyphenpenalty to 10000, which will stop hyphenation, but as TeX will still try to hyphenate this is not hugely efficient.
2. As Joel says, you can use sepackage[none]{hyphenat} to select a ‘language’ with no hyphenation at all.
How do I turn on hyphenation in LaTeX?
To suggest hyphenation points for strings containing nonletters or accented letters, use the \- command in the input text. This command is normally given in the preamble.
What is Babel in LaTeX?
Babel. The Babel package presented in the introduction allows to use special characters and also translates some elements within the document. This package also automatically activates the appropriate hyphenation rules for the language you choose.
## How do you put spaces between words in LaTeX?
To generate a space after a text-producing command you can use \.
### What does ragged right do?
ragged right A text margin treatment in which all lines begin hard against the left-hand margin but are allowed to end short of the right-hand margin. On lines that do not fully fill the measure (nearly all of them), any leftover space is deposited along the right-hand margin.
#### How do you break a line in latex?
Additionally, LaTeX provides the following advanced option for line break. It breaks the line at the point of the command. The number provided as an argument represents the priority of the command in a range of 0 to 4. (0 means it will be easily ignored and 4 means do it anyway).
Is it possible to break the document flow in latex?
Breaking the document flow in LaTeX is not recommended unless you are creating a macro. Anyway, sometimes is necessary to have more control over the layout of the document; and for this reason in this article is explained how to insert line breaks, page breaks and arbitrary blank spaces.
How to suppress hyphenation in a LaTeX document?
The hyphenat package can help suppressing hyphenation within environments or the complete document. it makes the text like zigzag. I found this solution for forced justification (like word) in latex document. Use before begin document….
## How to avoid line-breaking in a word?
Stefan_K wrote: To avoid line-breaking in a word, you could write \\mbox {word}. The hyphenat package can help suppressing hyphenation within environments or the complete document. it makes the text like zigzag. |
Lemma 20.7.1. Let $X$ be a ringed space. Let $U \subset X$ be an open subspace.
1. If $\mathcal{I}$ is an injective $\mathcal{O}_ X$-module then $\mathcal{I}|_ U$ is an injective $\mathcal{O}_ U$-module.
2. For any sheaf of $\mathcal{O}_ X$-modules $\mathcal{F}$ we have $H^ p(U, \mathcal{F}) = H^ p(U, \mathcal{F}|_ U)$.
Proof. Denote $j : U \to X$ the open immersion. Recall that the functor $j^{-1}$ of restriction to $U$ is a right adjoint to the functor $j_!$ of extension by $0$, see Sheaves, Lemma 6.31.8. Moreover, $j_!$ is exact. Hence (1) follows from Homology, Lemma 12.29.1.
By definition $H^ p(U, \mathcal{F}) = H^ p(\Gamma (U, \mathcal{I}^\bullet ))$ where $\mathcal{F} \to \mathcal{I}^\bullet$ is an injective resolution in $\textit{Mod}(\mathcal{O}_ X)$. By the above we see that $\mathcal{F}|_ U \to \mathcal{I}^\bullet |_ U$ is an injective resolution in $\textit{Mod}(\mathcal{O}_ U)$. Hence $H^ p(U, \mathcal{F}|_ U)$ is equal to $H^ p(\Gamma (U, \mathcal{I}^\bullet |_ U))$. Of course $\Gamma (U, \mathcal{F}) = \Gamma (U, \mathcal{F}|_ U)$ for any sheaf $\mathcal{F}$ on $X$. Hence the equality in (2). $\square$
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar). |
# Custodian
Complete at least one review task. This badge is awarded once per review type.
Awarded 1018 times.
Awarded yesterday to
for reviewing Suggested Edits
Awarded May 10 at 10:02 to
for reviewing Suggested Edits
Awarded Apr 21 at 19:35 to
for reviewing Close Votes
Awarded Apr 19 at 7:42 to
for reviewing Suggested Edits
Awarded Apr 10 at 10:31 to
for reviewing Suggested Edits
Awarded Apr 8 at 8:27 to
for reviewing Suggested Edits
Awarded Apr 5 at 18:42 to
for reviewing Suggested Edits
Awarded Mar 23 at 11:16 to
for reviewing Suggested Edits
Awarded Mar 8 at 19:26 to
for reviewing Suggested Edits
Awarded Feb 24 at 10:51 to
for reviewing Suggested Edits
Awarded Feb 19 at 12:19 to
for reviewing Suggested Edits
Awarded Jan 29 at 0:05 to
for reviewing First Posts
Awarded Jan 16 at 8:15 to
for reviewing Reopen Votes
Awarded Jan 6 at 19:26 to
for reviewing Suggested Edits
Awarded Dec 26 '20 at 2:57 to
for reviewing Suggested Edits
Awarded Dec 21 '20 at 13:51 to
for reviewing Suggested Edits
Awarded Dec 20 '20 at 17:09 to
for reviewing Suggested Edits
Awarded Dec 20 '20 at 15:50 to
for reviewing Suggested Edits
Awarded Dec 16 '20 at 20:42 to
for reviewing Suggested Edits
Awarded Dec 15 '20 at 16:18 to
for reviewing Suggested Edits
Awarded Nov 27 '20 at 13:28 to
for reviewing Suggested Edits
Awarded Nov 22 '20 at 16:35 to
for reviewing Suggested Edits
Awarded Nov 18 '20 at 18:41 to
for reviewing First Posts
Awarded Nov 9 '20 at 16:25 to
for reviewing Suggested Edits
Awarded Nov 6 '20 at 12:40 to
for reviewing Suggested Edits
Awarded Nov 1 '20 at 20:17 to
for reviewing Suggested Edits
Awarded Oct 26 '20 at 22:50 to
for reviewing Reopen Votes
Awarded Oct 21 '20 at 20:15 to
for reviewing First Posts
Awarded Oct 20 '20 at 6:31 to
for reviewing First Posts
Awarded Sep 28 '20 at 3:27 to
for reviewing Suggested Edits
Awarded Sep 14 '20 at 9:48 to
for reviewing Suggested Edits
Awarded Sep 10 '20 at 7:04 to
for reviewing Suggested Edits
Awarded Sep 2 '20 at 15:18 to
for reviewing Suggested Edits
Awarded Aug 21 '20 at 17:57 to
for reviewing Suggested Edits
Awarded Aug 7 '20 at 16:03 to
for reviewing Suggested Edits
Awarded Jul 31 '20 at 16:16 to
for reviewing Suggested Edits
Awarded Jul 26 '20 at 6:48 to
for reviewing Suggested Edits
Awarded Jul 12 '20 at 21:41 to
for reviewing Suggested Edits
Awarded Jul 11 '20 at 20:41 to
for reviewing Suggested Edits
Awarded Jul 5 '20 at 14:04 to
for reviewing Suggested Edits
Awarded Jul 3 '20 at 7:33 to
for reviewing Suggested Edits
Awarded Jul 1 '20 at 9:12 to
for reviewing Close Votes
Awarded Jun 30 '20 at 4:33 to
for reviewing Suggested Edits
Awarded Jun 25 '20 at 12:26 to
for reviewing Suggested Edits
Awarded Jun 18 '20 at 10:03 to
for reviewing Suggested Edits
Awarded Jun 18 '20 at 3:08 to
for reviewing Suggested Edits
Awarded Jun 12 '20 at 20:11 to
for reviewing Suggested Edits
Awarded Jun 7 '20 at 9:35 to
for reviewing First Posts
Awarded Jun 7 '20 at 9:35 to
for reviewing Low Quality Posts
Awarded Jun 1 '20 at 18:18 to
for reviewing Suggested Edits
Awarded May 25 '20 at 16:05 to
for reviewing Suggested Edits
Awarded May 22 '20 at 0:36 to
for reviewing Suggested Edits
Awarded May 17 '20 at 9:17 to
for reviewing Suggested Edits
Awarded May 16 '20 at 13:55 to
for reviewing Suggested Edits
Awarded May 12 '20 at 18:25 to
for reviewing Suggested Edits
Awarded May 9 '20 at 15:35 to
for reviewing Suggested Edits
Awarded May 5 '20 at 10:10 to
for reviewing Suggested Edits
Awarded May 3 '20 at 8:36 to
for reviewing Suggested Edits
Awarded May 1 '20 at 11:48 to
for reviewing Suggested Edits
Awarded Apr 30 '20 at 15:51 to
for reviewing Suggested Edits |
# How to typeset a loudspeaker icon?
I would like to include a loudspeaker icon, for instance by defining my own command \myloudspeaker:
I looked at Detexify and the Comprehensive LaTeX symbol list.
fontawesome provides a number of options:
\documentclass{article}
\usepackage{fontawesome}
\setlength{\parindent}{0pt}
\begin{document}
\end{document}
You're probably after \faVolumeUp.
Here, I just string stuff together (e.g., \rule, \blacktriangle, and ))... Works across different font sizes. EDITED to provide different volume settings with the syntax \loudspeaker[<volume>].
\documentclass{article}
\usepackage{amssymb,graphicx}
\newcommand\vcent[1]{\vcenter{\hbox{#1}}}
\newcommand\loudspeaker[1][3]{\ensuremath{\vcent{\rule{.6ex}{.6ex}}\kern-.5ex%
\vcent{\scalebox{.6}[1]{\rotatebox[origin=center]{90}{$\blacktriangle$}}}%
\ifnum#1>0\relax\kern.1ex\vcent{\scalebox{.3}{)}}\ifnum#1>1\relax\kern-.15ex%
\vcent{\scalebox{.4}{)}}\ifnum#1>2\relax\kern-.23ex\vcent{\scalebox{.5}{)}}%
\fi\fi\fi}%
}
\begin{document}
This is a loudspeaker: \loudspeaker.
This is volume 0: \loudspeaker[0].
This is volume 1: \loudspeaker[1].
This is a volume 2: \loudspeaker[2].
\LARGE LARGE loudspeaker: \loudspeaker.
\end{document}
Here is a variant with the sound waves in tt font:
\documentclass{article}
\usepackage{amssymb,graphicx}
\newcommand\vcent[1]{\vcenter{\hbox{#1}}}
\newcommand\loudspeaker[1][3]{\ensuremath{\vcent{\rule{.6ex}{.6ex}}\kern-.5ex%
\vcent{\scalebox{.6}[1]{\rotatebox[origin=center]{90}{$\blacktriangle$}}}%
\ifnum#1>0\relax\kern.05ex\vcent{\scalebox{.4}{\ttfamily)}}%
\ifnum#1>1\relax\kern-.4ex\vcent{\scalebox{.56}{\ttfamily)}}%
\ifnum#1>2\relax\kern-.55ex\vcent{\scalebox{.7}{\ttfamily)}}%
\fi\fi\fi}%
}
\begin{document}
This is a loudspeaker: \loudspeaker.
This is volume 0: \loudspeaker[0].
This is volume 1: \loudspeaker[1].
This is a volume 2: \loudspeaker[2].
\LARGE LARGE: \loudspeaker.
\end{document}
Compare, for example, to https://upload.wikimedia.org/wikipedia/commons/2/21/Speaker_Icon.svg, cited by the OP.
• I accepted Werner's answer because its an easier solution (for not yet very experienced LaTeX users), but I like your creative solution that I might consider in the future. – Karlo Dec 27 '16 at 0:09
📢 is U+1F4E2 so if you have a font that shows this (eg if you can see the symbol at the start of this line in your browser) then you can use it directly from xetex or luatex, eg
\documentclass{article}
\usepackage{fontspec}
\setmainfont{Segoe UI Emoji}
\begin{document}
--- 📢 ---
\end{document} |
International
Tables for
Crystallography
Volume D
Physical properties of crystals
Edited by A. Authier
International Tables for Crystallography (2006). Vol. D, ch. 2.4, pp. 329-330
## Section 2.4.2.2. Piezoelectric media
R. Vachera* and E. Courtensa
aLaboratoire des Verres, Université Montpellier 2, Case 069, Place Eugène Bataillon, 34095 Montpellier CEDEX, France
Correspondence e-mail: rene.vacher@ldv.univ-montp2.fr
#### 2.4.2.2. Piezoelectric media
| top | pdf |
In piezoelectric crystals, a stress component is also produced by the internal electric field E, so that the constitutive equation (2.4.2.2) has an additional term (see Section 1.1.5.2 ), where e is the piezoelectric tensor at constant strain.
The electrical displacement vector D, related to E by the dielectric tensor , also contains a contribution from the strain, where is at the frequency of the elastic wave.
In the absence of free charges, , and (2.4.2.9) provides a relation between E and S, For long waves, it can be shown that E and Q are parallel. (2.4.2.10) can then be solved for E, and this value is replaced in (2.4.2.8) to give Comparing (2.4.2.11) and (2.4.2.2), one sees that the effective elastic tensor now depends on the propagation direction . Otherwise, all considerations of the previous section, starting from (2.4.2.6), remain, with c simply replaced by . |
# How do you find the domain, find where f is increasing, vertex, intercepts, maximum of f(x)=-(x-2)^2+1?
Jul 17, 2017
Vertex->(x,y)=(2,1)) larr" A maximum"
${x}_{\text{intercepts}} = 1 \mathmr{and} 3$
${y}_{\text{intercept}} = - 3$
increasing for $x < 2$
Domain$\to \text{input} \to \left\{x : - \infty \le x \le + \infty\right\}$
#### Explanation:
As the coefficient of ${x}^{2}$ is -1 the graph is of form $\cap$
Thus there is a determinable maximum.
Set $y = - {\left(x - 2\right)}^{2} + 1$
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
$\textcolor{b l u e}{\text{Determine the vertex}}$
The given function is in the form $y = a {\left(x + \frac{b}{2 a}\right)}^{2} + k$ which is the vertex form (completing the square). Thus with a slight modification the vertex may read off it directly.
${x}_{\text{vertex}} = \left(- 1\right) \times \frac{b}{2 a} \to \left(- 1\right) \times \left(- 2\right) = + 2$
${y}_{\text{vertex}} = k = 1$
$\textcolor{g r e e n}{\text{ Vertex} \to \left(x , y\right) = \left(2 , 1\right)}$
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
$\textcolor{b l u e}{\text{Determine x-intercepts}}$
These occur at $y = 0$ so we have:
$y = 0 = - {\left(x - 2\right)}^{2} + 1$
Add ${\left(x + 2\right)}^{2}$ to both sides
$+ {\left(x - 2\right)}^{2} = 1$
Square root both sides
$x - 2 = \pm 1$
$\textcolor{g r e e n}{x = 2 \pm 1 \to x = 1 \mathmr{and} 3}$
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
$\textcolor{b l u e}{\text{Determine y-intercept}}$
Set $x = 0$ giving
${y}_{\text{intercept}} = - {\left(0 - 2\right)}^{2} + 1$
$\textcolor{g r e e n}{{y}_{\text{intercept}} = - 4 + 1 = - 3}$
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
$\textcolor{b l u e}{\text{Determine where the function is increasing}}$
As the graph form is $\cap$ then $f \left(x\right)$ is increasing to the left of the maximum at $\left(2 , 1\right)$. So increasing for $x < 2$
It is neither increasing or decreasing at $x = 2$ |
# Mixing Length Model
(Difference between revisions)
Jump to: navigation, search
Revision as of 16:00, 2 June 2010 (view source)← Older edit Revision as of 02:07, 21 July 2010 (view source)Newer edit → Line 1: Line 1: {{Turbulence Category}} {{Turbulence Category}} - [[Image:Fig4.33.png|thumb|400 px|alt=Mixing length model |Figure 1: Mixing length model.]] + [[Image:Fig4.33.png|thumb|400 px|alt=Mixing length model | Mixing length model.]] - The mixing length model proposed by Prandtl is the simplest turbulent model. The distinctive feature of turbulent flow is the existence of eddies and vortices so that the transport in the turbulent flow is dominated by packets of the molecules, instead of the behavior of individual molecules. Considering a turbulent flow near a flat plat as shown in Fig. 1, the mixing length can be defined as the maximum length that a packet can travel vertically while maintaining its time averaged velocity unchanged. The concept of mixing length for turbulent flow is similar to the mean free path for random molecular motion. When a fluid packet located at point A travels to point B by moving upward a distance that equals the mixing length, l, its time-averaged velocity should be kept at $\bar{u}$ according to the definition of mixing length. On the other hand, the time-averaged velocity at point B is $\bar{u}+(\partial \bar{u}/\partial y)l$ according to the profile of the time-averaged velocity (see Fig. 4.33). Therefore, the packet must have a negative velocity fluctuation equal to $-(\partial \bar{u}/\partial y)l$ in order to keep its time averaged velocity unchanged. Thus, the fluctuation of the velocity component in the x-direction is: + The mixing length model proposed by Prandtl is the simplest turbulent model. The distinctive feature of turbulent flow is the existence of eddies and vortices so that the transport in the turbulent flow is dominated by packets of the molecules, instead of the behavior of individual molecules. Considering a turbulent flow near a flat plat as shown in the figure to the right, the mixing length can be defined as the maximum length that a packet can travel vertically while maintaining its time averaged velocity unchanged. The concept of mixing length for turbulent flow is similar to the mean free path for random molecular motion. When a fluid packet located at point ''A'' travels to point ''B'' by moving upward a distance that equals the mixing length, ''l'', its time-averaged velocity should be kept at $\bar{u}$ according to the definition of mixing length. On the other hand, the time-averaged velocity at point ''B'' is $\bar{u}+(\partial \bar{u}/\partial y)l$ according to the profile of the time-averaged velocity (see the figure). + + Therefore, the packet must have a negative velocity fluctuation equal to $-(\partial \bar{u}/\partial y)l$ in order to keep its time averaged velocity unchanged. Thus, the fluctuation of the velocity component in the ''x''-direction is: {| class="wikitable" border="0" {| class="wikitable" border="0" Line 9: Line 11: | width="100%" |
| width="100%" |
${u}'=-l\left( \frac{\partial \bar{u}}{\partial y} \right)$ ${u}'=-l\left( \frac{\partial \bar{u}}{\partial y} \right)$ -
|{{EquationRef|(1)}} |{{EquationRef|(1)}} |} |} - When the velocity component in the x-direction has the above negative fluctuation, the velocity component in the y-direction must have a positive fluctuation, ${v}'$, with the same scale, i.e., + + When the velocity component in the ''x''-direction has the above negative fluctuation, the velocity component in the ''y''-direction must have a positive fluctuation, ${v}'$, with the same scale, i.e., {| class="wikitable" border="0" {| class="wikitable" border="0" Line 23: Line 25: |{{EquationRef|(2)}} |{{EquationRef|(2)}} |} |} - where C is a local constant. Thus, the time-average of the product of the velocity fluctuations, $\overline{{u}'{v}'}$, must be negative for this case. + where ''C'' is a local constant. Thus, the time-average of the product of the velocity fluctuations, $\overline{{u}'{v}'}$, must be negative for this case. - Similarly, we can also analyze motion of the fluid packet from point B to point A, in which case ${u}'$ will be positive and ${v}'$ will be negative. Therefore, $\overline{{u}'{v}'}$ must be negative for any cases. Combining eqs. (4.410) and (4.411) yields + Similarly, we can also analyze motion of the fluid packet from point ''B'' to point ''A'', in which case ${u}'$ will be positive and ${v}'$ will be negative. Therefore, $\overline{{u}'{v}'}$ must be negative for any cases. Combining eqs. (1) and (2) yields $-\overline{{u}'{v}'}=Cl^{2}\left( \frac{\partial \bar{u}}{\partial y} \right)^{2}$ $-\overline{{u}'{v}'}=Cl^{2}\left( \frac{\partial \bar{u}}{\partial y} \right)^{2}$ - Since l is still undetermined, it will be beneficial to absorb C into l and yield + Since ''l'' is still undetermined, it will be beneficial to absorb ''C'' into ''l'' and yield {| class="wikitable" border="0" {| class="wikitable" border="0" Line 38: Line 40: |{{EquationRef|(3)}} |{{EquationRef|(3)}} |} |} - It follows from the definition of the eddy diffusivity [see eqs. (4.399) and (4.402)] that + It follows from the definition of the eddy diffusivity that {| class="wikitable" border="0" {| class="wikitable" border="0" Line 48: Line 50: |{{EquationRef|(4)}} |{{EquationRef|(4)}} |} |} - where the absolute value is to ensure a positive eddy diffusivity. While the general rule for determining the mixing length, l, is lacking, the mixing length for turbulent boundary layer cannot exceed the distance to the wall. Therefore, we can assume: + where the absolute value is to ensure a positive eddy diffusivity. While the general rule for determining the mixing length, ''l'', is lacking, the mixing length for turbulent boundary layer cannot exceed the distance to the wall. Therefore, we can assume: {| class="wikitable" border="0" {| class="wikitable" border="0" Line 57: Line 59: |{{EquationRef|(5)}} |{{EquationRef|(5)}} |} |} - where $\kappa$ is an empirical constant with order of 1, and is referred to as von Kármán’s constant. Equation (4.414) is valid only if $\kappa$ is really a constant. Substituting eq. (4.414) into eq. (4.413), the eddy diffusivity of momentum becomes + where $\kappa$ is an empirical constant with order of 1, and is referred to as von Kármán’s constant. Equation (5) is valid only if $\kappa$ is really a constant. Substituting eq. (5) into eq. (4), the eddy diffusivity of momentum becomes {| class="wikitable" border="0" {| class="wikitable" border="0" Line 67: Line 69: |{{EquationRef|(6)}} |{{EquationRef|(6)}} |} |} - Substituting eq. (4.415) into eq. (4.402), the shear stress in the two-dimensional turbulent flow becomes + Substituting eq. (6) into eq. (4) of [[Algebraic Models for Eddy Diffusivity]], the shear stress in the two-dimensional turbulent flow becomes {| class="wikitable" border="0" {| class="wikitable" border="0"
## Revision as of 02:07, 21 July 2010
External Turbulent Flow/Heat Transfer
Mixing length model.
The mixing length model proposed by Prandtl is the simplest turbulent model. The distinctive feature of turbulent flow is the existence of eddies and vortices so that the transport in the turbulent flow is dominated by packets of the molecules, instead of the behavior of individual molecules. Considering a turbulent flow near a flat plat as shown in the figure to the right, the mixing length can be defined as the maximum length that a packet can travel vertically while maintaining its time averaged velocity unchanged. The concept of mixing length for turbulent flow is similar to the mean free path for random molecular motion. When a fluid packet located at point A travels to point B by moving upward a distance that equals the mixing length, l, its time-averaged velocity should be kept at $\bar{u}$ according to the definition of mixing length. On the other hand, the time-averaged velocity at point B is $\bar{u}+(\partial \bar{u}/\partial y)l$ according to the profile of the time-averaged velocity (see the figure).
Therefore, the packet must have a negative velocity fluctuation equal to $-(\partial \bar{u}/\partial y)l$ in order to keep its time averaged velocity unchanged. Thus, the fluctuation of the velocity component in the x-direction is:
${u}'=-l\left( \frac{\partial \bar{u}}{\partial y} \right)$ (1)
When the velocity component in the x-direction has the above negative fluctuation, the velocity component in the y-direction must have a positive fluctuation, v', with the same scale, i.e.,
${v}'=Cl\left( \frac{\partial \bar{u}}{\partial y} \right)$ (2)
where C is a local constant. Thus, the time-average of the product of the velocity fluctuations, $\overline{{u}'{v}'}$, must be negative for this case. Similarly, we can also analyze motion of the fluid packet from point B to point A, in which case u' will be positive and v' will be negative. Therefore, $\overline{{u}'{v}'}$ must be negative for any cases. Combining eqs. (1) and (2) yields
$-\overline{{u}'{v}'}=Cl^{2}\left( \frac{\partial \bar{u}}{\partial y} \right)^{2}$
Since l is still undetermined, it will be beneficial to absorb C into l and yield
$-\overline{{u}'{v}'}=l^{2}\left( \frac{\partial \bar{u}}{\partial y} \right)^{2}$ (3)
It follows from the definition of the eddy diffusivity that
$\varepsilon _{M}=l^{2}\left| \frac{\partial \bar{u}}{\partial y} \right|$ (4)
where the absolute value is to ensure a positive eddy diffusivity. While the general rule for determining the mixing length, l, is lacking, the mixing length for turbulent boundary layer cannot exceed the distance to the wall. Therefore, we can assume:
$\begin{matrix}{}\\\end{matrix}l=\kappa y$ (5)
where κ is an empirical constant with order of 1, and is referred to as von Kármán’s constant. Equation (5) is valid only if κ is really a constant. Substituting eq. (5) into eq. (4), the eddy diffusivity of momentum becomes
$\varepsilon _{M}=\kappa ^{2}y^{2}\left| \frac{\partial \bar{u}}{\partial y} \right|$ (6)
Substituting eq. (6) into eq. (4) of Algebraic Models for Eddy Diffusivity, the shear stress in the two-dimensional turbulent flow becomes
$\bar{\tau }_{yx}=\rho \left( \nu +\kappa ^{2}y^{2}\left| \frac{\partial \bar{u}}{\partial y} \right| \right)\frac{\partial \bar{u}}{\partial y}$ (7) |
# [R] How to combine xtable and minipage with Sweave ?
Dieter Menne dieter.menne at menne-biomed.de
Fri Mar 13 16:10:59 CET 2009
Ptit_Bleu <ptit_bleu <at> yahoo.fr> writes:
> Concerning the point 3, I'm a bit lost. Is it a problem of place to put the
> table and the graph side by side (my english is quite as low as my skills in
> Latex) ?
> I tried with \begin{minipage}{0.45\textwidth} instead of 0.7 and I put
> "//tiny" but no success.
There is an example in the German latex FAQ
http://www.faqs.org/faqs/de-tex-faq/part6/
and I am sure there is the same in the englisch/french. For further details,
use the tex output produce by rnw, massage it (not rnw) until it works in
latex.
Detailed queries should be posted on the latex forum, because this is
definitively not an R problem. And do not forget to boil it down to the
absolute minimum self-consistency before you post it on the
latex forum, the people there are even more picky if the example is not
minimal than here.
Cetero censeo: on the latex forum, pruning the quoted text to the minimum
required and assuming people can handle a thread reader is a nice habit
not followed here.
Dieter |
## Calculus (3rd Edition)
$$y'= -4(3x^2-\sin x) (x^3+\cos x)^{-5}.$$
Since $y=(x^3+\cos x)^{-4}$, then by using the chain rule: $(f(g(x)))^{\prime}=f^{\prime}(g(x)) g^{\prime}(x)$ the derivative $y'$ is given by: $$y'=-4 (x^3+\cos x)^{-5}(x^3+\cos x)'\\=-4 (x^3+\cos x)^{-5}(3x^2-\sin x)\\=-4(3x^2-\sin x) (x^3+\cos x)^{-5}.$$ Here, we used the fact that $(\cos x)'=-\sin x$. |
### Reverse Firewalls for Adaptively Secure MPC without Setup
Suvradip Chakraborty, Chaya Ganesh, Mahak Pancholi, and Pratik Sarkar
##### Abstract
We study Multi-party computation (MPC) in the setting of subversion, where the adversary tampers with the machines of honest parties. Our goal is to construct actively secure MPC protocols where parties are corrupted adaptively by an adversary (as in the standard adaptive security setting), and in addition, honest parties' machines are compromised. The idea of reverse firewalls (RF) was introduced at EUROCRYPT'15 by Mironov and Stephens-Davidowitz as an approach to protecting protocols against corruption of honest parties' devices. Intuitively, an RF for a party $\mathcal{P}$ is an external entity that sits between $\mathcal{P}$ and the outside world and whose scope is to sanitize $\mathcal{P}$’s incoming and outgoing messages in the face of subversion of their computer. Mironov and Stephens-Davidowitz constructed a protocol for passively-secure two-party computation. At CRYPTO'20, Chakraborty, Dziembowski and Nielsen constructed a protocol for secure computation with firewalls that improved on this result, both by extending it to multi-party computation protocol, and considering active security in the presence of static corruptions. In this paper, we initiate the study of RF for MPC in the adaptive setting. We put forward a definition for adaptively secure MPC in the reverse firewall setting, explore relationships among the security notions, and then construct reverse firewalls for MPC in this stronger setting of adaptive security. We also resolve the open question of Chakraborty, Dziembowski and Nielsen by removing the need for a trusted setup in constructing RF for MPC. Towards this end, we construct reverse firewalls for adaptively secure augmented coin tossing and adaptively secure zero-knowledge protocols and obtain a constant round adaptively secure MPC protocol in the reverse firewall setting without setup. Along the way, we propose a new multi-party adaptively secure coin tossing protocol in the plain model, that is of independent interest.
Available format(s)
Category
Cryptographic protocols
Publication info
A minor revision of an IACR publication in Asiacrypt 2021
Keywords
Contact author(s)
chaya @ iisc ac in
mahakp @ cs au dk
pratik93 @ bu edu
History
Short URL
https://ia.cr/2021/1262
CC BY
BibTeX
@misc{cryptoeprint:2021/1262,
author = {Suvradip Chakraborty and Chaya Ganesh and Mahak Pancholi and Pratik Sarkar},
title = {Reverse Firewalls for Adaptively Secure MPC without Setup},
howpublished = {Cryptology ePrint Archive, Paper 2021/1262},
year = {2021},
note = {\url{https://eprint.iacr.org/2021/1262}},
url = {https://eprint.iacr.org/2021/1262}
}
Note: In order to protect the privacy of readers, eprint.iacr.org does not use cookies or embedded third party content. |
#### ${{\mathit B}^{\pm}}$ MEAN LIFE
See ${{\mathit B}^{\pm}}$ /${{\mathit B}^{0}}$ /${{\mathit B}_{{s}}^{0}}$ /${{\mathit b}}$ -baryon ADMIXTURE section for data on ${{\mathit B}}$ -hadron mean life averaged over species of bottom particles. “OUR EVALUATION” is an average using rescaled values of the data listed below. The average and rescaling were performed by the Heavy Flavor Averaging Group (HFLAV) and are described at https://hflav.web.cern.ch/. The averaging/rescaling procedure takes into account correlations between the measurements and asymmetric lifetime errors.
VALUE ($10^{-12}$ s) EVTS DOCUMENT ID TECN COMMENT
$\bf{ 1.638 \pm0.004}$ OUR EVALUATION
$1.637$ $\pm0.004$ $\pm0.003$
2014 E
LHCB ${{\mathit p}}{{\mathit p}}$ at 7 TeV
$1.639$ $\pm0.009$ $\pm0.009$ 1
2011
CDF ${{\mathit p}}{{\overline{\mathit p}}}$ at 1.96 TeV
$1.663$ $\pm0.023$ $\pm0.015$ 2
2011 B
CDF ${{\mathit p}}{{\overline{\mathit p}}}$ at 1.96 TeV
$1.635$ $\pm0.011$ $\pm0.011$ 3
2005 B
BELL ${{\mathit e}^{+}}$ ${{\mathit e}^{-}}$ $\rightarrow$ ${{\mathit \Upsilon}{(4S)}}$
$1.624$ $\pm0.014$ $\pm0.018$ 4
2004 E
DLPH ${{\mathit e}^{+}}$ ${{\mathit e}^{-}}$ $\rightarrow$ ${{\mathit Z}}$
$1.636$ $\pm0.058$ $\pm0.025$ 5
2002 C
CDF ${{\mathit p}}{{\overline{\mathit p}}}$ at 1.8 TeV
$1.673$ $\pm0.032$ $\pm0.023$ 6
2001 F
BABR ${{\mathit e}^{+}}$ ${{\mathit e}^{-}}$ $\rightarrow$ ${{\mathit \Upsilon}{(4S)}}$
$1.648$ $\pm0.049$ $\pm0.035$ 7
2000 R
ALEP ${{\mathit e}^{+}}$ ${{\mathit e}^{-}}$ $\rightarrow$ ${{\mathit Z}}$
$1.643$ $\pm0.037$ $\pm0.025$ 8
1999 J
OPAL ${{\mathit e}^{+}}$ ${{\mathit e}^{-}}$ $\rightarrow$ ${{\mathit Z}}$
$1.637$ $\pm0.058$ ${}^{+0.045}_{-0.043}$ 7
1998 Q
CDF ${{\mathit p}}{{\overline{\mathit p}}}$ at $1.8$ TeV
$1.66$ $\pm0.06$ $\pm0.03$ 8
1998 S
L3 ${{\mathit e}^{+}}$ ${{\mathit e}^{-}}$ $\rightarrow$ ${{\mathit Z}}$
$1.66$ $\pm0.06$ $\pm0.05$ 8
1997 J
SLD ${{\mathit e}^{+}}$ ${{\mathit e}^{-}}$ $\rightarrow$ ${{\mathit Z}}$
$1.58$ ${}^{+0.21}_{-0.18}$ ${}^{+0.04}_{-0.03}$ 94 5
1996 J
ALEP ${{\mathit e}^{+}}$ ${{\mathit e}^{-}}$ $\rightarrow$ ${{\mathit Z}}$
$1.61$ $\pm0.16$ $\pm0.12$ 7, 9
1995 Q
DLPH ${{\mathit e}^{+}}$ ${{\mathit e}^{-}}$ $\rightarrow$ ${{\mathit Z}}$
$1.72$ $\pm0.08$ $\pm0.06$ 10
1995
DLPH ${{\mathit e}^{+}}$ ${{\mathit e}^{-}}$ $\rightarrow$ ${{\mathit Z}}$
$1.52$ $\pm0.14$ $\pm0.09$ 7
1995 T
OPAL ${{\mathit e}^{+}}$ ${{\mathit e}^{-}}$ $\rightarrow$ ${{\mathit Z}}$
• • We do not use the following data for averages, fits, limits, etc. • •
$1.695$ $\pm0.026$ $\pm0.015$ 6
2002 H
BELL Repl. by ABE 2005B
$1.68$ $\pm0.07$ $\pm0.02$ 5
1998 B
CDF Repl. by ACOSTA 2002C
$1.56$ $\pm0.13$ $\pm0.06$ 7
1996 C
CDF Repl. by ABE 1998Q
$1.58$ $\pm0.09$ $\pm0.03$ 11
1996 J
ALEP ${{\mathit e}^{+}}$ ${{\mathit e}^{-}}$ $\rightarrow$ ${{\mathit Z}}$
$1.58$ $\pm0.09$ $\pm0.04$ 7
1996 J
ALEP Repl. by BARATE 2000R
$1.70$ $\pm0.09$ 12
1995
DLPH ${{\mathit e}^{+}}$ ${{\mathit e}^{-}}$ $\rightarrow$ ${{\mathit Z}}$
$1.61$ $\pm0.16$ $\pm0.05$ 148 5
1994 D
CDF Repl. by ABE 1998B
$1.30$ ${}^{+0.33}_{-0.29}$ $\pm0.16$ 92 7
1993 D
DLPH Sup. by ABREU 1995Q
$1.56$ $\pm0.19$ $\pm0.13$ 134 10
1993 G
$1.51$ ${}^{+0.30}_{-0.28}$ ${}^{+0.12}_{-0.14}$ 59 7
1993 C
OPAL Sup. by AKERS 1995T
$1.47$ ${}^{+0.22}_{-0.19}$ ${}^{+0.15}_{-0.14}$ 77 7
1993 D
ALEP Sup. by BUSKULIC 1996J
1 Measured mean life using fully reconstructed decays ( ${{\mathit J / \psi}}{{\mathit K}^{(*)}}$ ).
2 Measured using ${{\mathit B}^{-}}$ $\rightarrow$ ${{\mathit D}^{0}}{{\mathit \pi}^{-}}$ with ${{\mathit D}^{0}}$ $\rightarrow$ ${{\mathit K}^{-}}{{\mathit \pi}^{+}}$ events that were selected using a silicon vertex trigger.
3 Measurement performed using a combined fit of $\mathit CP$-violation, mixing and lifetimes.
4 Measurement performed using an inclusive reconstruction and ${{\mathit B}}$ flavor identification technique.
5 Measured mean life using fully reconstructed decays.
6 Events are selected in which one ${{\mathit B}}$ meson is fully reconstructed while the second ${{\mathit B}}$ $~$meson is reconstructed inclusively.
7 Data analyzed using ${{\mathit D}}$ / ${{\mathit D}^{*}}{{\mathit \ell}}$ X event vertices.
8 Data analyzed using charge of secondary vertex.
9 ABREU 1995Q assumes B( ${{\mathit B}^{0}}$ $\rightarrow$ ${{\mathit D}^{**-}}{{\mathit \ell}^{+}}{{\mathit \nu}_{{{{\mathit \ell}}}}}$ ) = $3.2$ $\pm1.7\%$.
10 Data analyzed using vertex-charge technique to tag ${{\mathit B}}$ charge.
11 Combined result of ${{\mathit D}}$ / ${{\mathit D}^{*}}{{\mathit \ell}}$ X analysis and fully reconstructed ${{\mathit B}}$ analysis.
12 Combined ABREU 1995Q and ADAM 1995 result.
References:
AAIJ 2014E
JHEP 1404 114 Measurements of the ${{\mathit B}^{+}}$ ,${{\mathit B}^{0}}$ ,${{\mathit B}_{{s}}^{0}}$ Meson and ${{\mathit \Lambda}_{{b}}^{0}}$ Baryon Lifetimes
AALTONEN 2011B
PR D83 032008 Measurement of the ${{\mathit B}^{-}}$ Lifetime using a Simulation Free Approach for Trigger Bias Correction
AALTONEN 2011
PRL 106 121804 Measurement of ${\mathit {\mathit b}}$ Hadron Lifetimes in Exclusive Decays Containing a ${{\mathit J / \psi}}$ in ${{\mathit p}}{{\overline{\mathit p}}}$ Collisions at $\sqrt {s }$ = 1.96 TeV
ABE 2005B
PR D71 072003 Improved Measurement of $\mathit CP$ Violation Parameters sin2$\phi _{1}$ and $\vert {{\mathit \lambda}}$ $\vert$, ${{\mathit B}}$ Meson Lifetimes, and ${{\mathit B}^{0}}$ $−{{\overline{\mathit B}}^{0}}$ Mixing Parameter $\Delta {{\mathit m}_{{d}}}$
Also
PR D71 079903 (errat.) Publisher's Note to ABE 2005B. Improved Measurement of $\mathit CP$ Violation Parameters sin2$\phi _{1}$ and $\vert \lambda \vert$, ${{\mathit B}}$ Meson Lifetimes, and ${{\mathit B}^{0}}$ $−{{\overline{\mathit B}}^{0}}$ Mixing Parameter $\Delta {\mathit m}_{{{\mathit d}}}$
ABDALLAH 2004E
EPJ C33 307 A Precise Measurement of the ${{\mathit B}^{+}}$ , ${{\mathit B}^{0}}$ and Mean b-hadron Lifetime with the DELPHI Detector at LEP I
ABE 2002H
PRL 88 171801 Precise Measurement of ${{\mathit B}}$ Meson Lifetimes with Hadronic Decay Final States
ACOSTA 2002C
PR D65 092009 Measurement of ${{\mathit B}}$ Meson Lifetimes using Fully Reconstructed ${{\mathit B}}$ Decays Produced in ${{\mathit p}}{{\overline{\mathit p}}}$ Collisions at $\sqrt {s }$ = 1.8 TeV
AUBERT 2001F
PRL 87 201803 Measurement of the ${{\mathit B}^{0}}$ and ${{\mathit B}^{+}}$ Meson Lifetimes with Fully Reconstructed Hadronic Final States
BARATE 2000R
PL B492 275 Measurement of the ${{\overline{\mathit B}}^{0}}$ and ${{\mathit B}^{-}}$ Meson Lifetimes
ABBIENDI 1999J
EPJ C12 609 Measurement of the ${{\mathit B}^{+}}$ and ${{\mathit B}^{0}}$ Lifetimes and Search for $\mathit CP(T)$ Violation using Reconstructed Secondary Vertices
ABE 1998B
PR D57 5382 Measurement of ${{\mathit B}}$ Hadron Lifetimes using ${{\mathit J / \psi}}$ Final States at CDF
ABE 1998Q
PR D58 092002 Improved Measurement of the ${{\mathit B}^{-}}$ and ${{\overline{\mathit B}}^{0}}$ Meson Lifetimes using Semileptonic Decays
ACCIARRI 1998S
PL B438 417 Upper Limit on the Lifetime Difference of Short- and Long-Lived ${{\mathit B}_{{s}}^{0}}$ Mesons
ABE 1997J
PRL 79 590 Measurement of the ${{\mathit B}^{+}}$ and ${{\mathit B}^{0}}$ Lifetimes using Topological Reconstruction of Inclusive and Semileptonic Decays
ABE 1996C
PRL 76 4462 Measurement of the ${{\mathit B}^{-}}$ and ${{\overline{\mathit B}}^{0}}$ Meson Lifetimes using Semileptonic Decays
BUSKULIC 1996J
ZPHY C71 31 Improved Measurement of the ${{\overline{\mathit B}}^{0}}$ and ${{\mathit B}^{-}}$ Meson Lifetimes
ABREU 1995Q
ZPHY C68 13 A Measurement of ${{\mathit B}^{+}}$ and ${{\mathit B}^{0}}$ Lifetimes using ${{\overline{\mathit D}}}$ ${{\mathit \ell}^{+}}$ Events
ZPHY C68 363 Lifetimes of Charged and Neutral ${{\mathit B}}$ Hadrons using Event Topology
AKERS 1995T
ZPHY C67 379 Improved Measurements of the ${{\mathit B}^{0}}$ and ${{\mathit B}^{+}}$ Meson Lifetimes
ABE 1994D
PRL 72 3456 Measurement of the ${{\mathit B}^{+}}$ and ${{\mathit B}^{0}}$ Meson Lifetimes
ABREU 1993G
PL B312 253 A Measurement of the Mean Lifetimes of Charged and Neutral ${{\mathit B}}$ Hadrons
ABREU 1993D
ZPHY C57 181 A Measurement of ${{\mathit B}}$ Meson Production and Lifetime using ${{\mathit D}}$ ${{\mathit \ell}^{-}}$ Events in ${{\mathit Z}^{0}}$ Decays
ACTON 1993C
PL B307 247 Measurement of the ${{\mathit B}^{0}}$ and ${{\mathit B}^{+}}$ Lifetime
BUSKULIC 1993D
PL B307 194 Measurement of the ${{\overline{\mathit B}}^{0}}$ and ${{\mathit B}^{-}}$ Meson Lifetime
Also
PL B325 537 (erratum) Erratum: BUSKULIC 1993D Measurement of the ${{\overline{\mathit B}}^{0}}$ and ${{\mathit B}^{-}}$ Meson Lifetime |
Latest news 2021-09-06: new blog post "Legacy Documents and TeX Live Docker Images".
# glossaries package FAQ
How do I make acronyms appear in a different font (but not the long form)? 🔗
As from version 4.02, use one of the predefined acronym formats and redefine \acronymfont as required. For example, if you want the short form displayed in sans-serif using the long-short format:
\setacronymstyle{long-short}
\renewcommand*{\acronymfont}[1]{\textsf{#1}}
or you can define you own custom style called, say, long-sf-short:
\newacronymstyle{long-sf-short}%
{%
\GlsUseAcrEntryDispStyle{long-short}%
}%
{%
\GlsUseAcrStyleDefs{long-short}%
\renewcommand{\acronymfont}[1]{\textsf{##1}}%
}
and use that:
\setacronymstyle{long-sf-short}
Note that if you are using glossaries-extra, it uses a completely different abbreviation mechanism so the above won’t work.
Pre glossaries version 4.02:
If you are just using the default definition of \newacronym, then use the package option smaller and redefine \acronymfont to use the required font.
If you are using any of the package options that redefine \newacronym (such as description), then just redefine \acronymfont to use the required font.
2020-06-27 16:27:57 |
# Integral Identity Involving Bell Numbers
Is the following identity true ?
$$\int_0^\infty \frac{b(x)}{B(x)} dx \quad \overset{?}{=} \quad \int_0^\infty \frac{x!}{x^x} dx$$
where
$$b(x) = \sum_{n=1}^\infty \frac{n^x}{n^n} \qquad \text{and} \qquad B(x) = \sum_{n=1}^\infty \frac{n^x}{n!}$$
NOTE: A short sketch of the demonstration proving the convergence of the integral on the left can be found here. Also, the numerical value of the integral on the right is about 2.5179+. Furthermore, if the position of $x$ and $n$ in the numerator of each sum were reversed, and both sums were to start at n = 0, we would have the following identity:
$$\int_0^\infty \frac{E(x)}{e^x} dx \quad = \quad \sum_{n=0}^\infty \frac{n!}{n^n}$$
where $\lim_{n \to 0} n^n = 1,$ and
$$E(x) = \sum_{n=0}^\infty \frac{x^n}{n^n} \qquad \text{and} \qquad e^x = \sum_{n=0}^\infty \frac{x^n}{n!}$$
-
At first glance this seems too good to be true. (But this also applies to the valid formula: en.wikipedia.org/wiki/Sophomore's_dream ) Have you tried to disprove it by (estimating and) calculating both sides by computer? (The left hand side is the tricky part, the right hand side is around: 2.51792 ) – Daniel Soltész Oct 19 '13 at 19:44
Of course. No luck whatsoever in calculating the left hand side either numerically or symbolically. I don't even know its first digit ! All I know is that it converges, and that a man who later deleted his own comments wrote that its numerical value is about 2.5. That's ALL I was able to find out... In almost an entire year ! :-( – Lucian Oct 19 '13 at 20:05
You should also mention your other question about the convergence of the left hand side: mathoverflow.net/questions/138896/… Alexander Shamov's answer is clearly relevant. – Daniel Soltész Oct 19 '13 at 20:12
The value $2.5...$ can be obtained by the mathematica command: NIntegrate[Gamma[x + 1]/x^x, {x, 0, Infinity}] – Suvrit Oct 19 '13 at 21:43
It seems that the integral on the left-hand side exceeds $2.57$, so it's not even close to the numerical value $2.5179\ldots$ of the integral on the right-hand side.
I told gp:
b(x) = suminf(n=1,n^x/n^n)
B(x) = suminf(n=1,n^x/n!)
r(x) = b(x)/B(x)
intnum(x=0,25,r(x))
and got $2.5793$+. Since the integrand $r(x)=b(x)/B(x)$ is positive but apparently decreasing, the Riemann sum $.01 \sum_{n=1}^{2500} r(n/100)$ should be a lower bound on $\int_0^{25} r(x)\,dx$, and thus on $\int_0^\infty r(x)\,dx$; and already that lower bound exceeds $2.57$: replacing the last gp command above by
sum(n=1,2500,.01*r(.01*n))
returns $2.5755599998001798\ldots > 2.57$.
-
What is ‘GP’ ? Mathematica, for instance, can't even handle the integral, numerically, at all, that's why I'm asking. What mathematical software would you recommend ? – Lucian Oct 20 '13 at 10:56
My version (5.2) has significant problems calculating even as much as a single value of the function in question, let alone 2500 of them... Not sure why, but that's it. – Lucian Oct 20 '13 at 13:41
@Lucian: Sorry, I think I spoke too soon; seems indeed not easy for mathematica to crunch this out of the box---perhaps a more clever reformulation is needed! – Suvrit Oct 20 '13 at 14:12
What version of Mathematica are you using ? Because I thought of installing a newer one, but I'm afraid they might not be any better either. At least the old one is small, and I'm generally pleased with it, a few small bugs notwithstanding. – Lucian Oct 20 '13 at 14:25
@Lucian: I'm using v8.0; also, I think we should move our "how to do this on mathematica" question to mathematica.stackexchange.com --- I downloaded and tried pari/gp, and it works like a charm! (sorry Noam, for the comment noise!) – Suvrit Oct 20 '13 at 15:26 |
## Tags : NMI
Entries in this Tags : 1logs Showing : 1 - 1 / 1
## Nov 08, 2007
### Entropy correlation coefficient
Post @ 12:19:20 | NMI
Unfortunately the term normalized mutual information is a bit confusing since there exists several definitions of it. They all share the same insights of producing a value in [0,1] such that a greater value means a better score''.
The definition given is just the sum of the marginal entropies over the joint-entropy
It is moreover related to the entropy correlation coefficient as follows: |
# A Faster Tree-Decomposition Based Algorithm for Counting Linear Extensions
http://hdl.handle.net/10138/321138
#### Citation
Kangas , K , Koivisto , M & Salonen , S 2019 , ' A Faster Tree-Decomposition Based Algorithm for Counting Linear Extensions ' , Algorithmica , vol. 82 , pp. 2156-2173 . https://doi.org/10.1007/s00453-019-00633-1
Title: A Faster Tree-Decomposition Based Algorithm for Counting Linear Extensions Author: Kangas, Kustaa; Koivisto, Mikko; Salonen, Sami Contributor: University of Helsinki, Aalto UniversityUniversity of Helsinki, Department of Computer ScienceUniversity of Helsinki, Department of Computer Science Date: 2019-10-03 Language: eng Number of pages: 18 Belongs to series: Algorithmica ISSN: 1432-0541 URI: http://hdl.handle.net/10138/321138 Abstract: We investigate the problem of computing the number of linear extensions of a given n-element poset whose cover graph has treewidth t. We present an algorithm that runs in time $${\tilde{O}}(n^{t+3})$$O~(nt+3)for any constant t; the notation $${\tilde{O}}$$O~hides polylogarithmic factors. Our algorithm applies dynamic programming along a tree decomposition of the cover graph; the join nodes of the tree decomposition are handled by fast multiplication of multivariate polynomials. We also investigate the algorithm from a practical point of view. We observe that the running time is not well characterized by the parameters n and t alone: fixing these parameters leaves large variance in running times due to uncontrolled features of the selected optimal-width tree decomposition. We compare two approaches to select an efficient tree decomposition: one is to include additional features of the tree decomposition to build a more accurate, heuristic cost function; the other approach is to fit a statistical regression model to collected running time data. Both approaches are shown to yield a tree decomposition that typically is significantly more efficient than a random optimal-width tree decomposition. Subject: Algorithm selection COMPLEXITY Empirical hardness FRAMEWORK Linear extension Multiplication of polynomials Tree decomposition 113 Computer and information sciences 111 Mathematics Rights:
|
# Traces in monoidal derivators, and homotopy colimits
Gallauer Alves de Souza, Martin (2014). Traces in monoidal derivators, and homotopy colimits. Advances in Mathematics, 261:26-84.
## Abstract
A variant of the trace in a monoidal category is given in the setting of closed monoidal derivators, which is applicable to endomorphisms of fiberwise dualizable objects. Functoriality of this trace is established. As an application, an explicit formula is deduced for the trace of the homotopy colimit of endomorphisms over finite categories in which all endomorphisms are invertible. This result can be seen as a generalization of the additivity of traces in monoidal categories with a compatible triangulation.
## Abstract
A variant of the trace in a monoidal category is given in the setting of closed monoidal derivators, which is applicable to endomorphisms of fiberwise dualizable objects. Functoriality of this trace is established. As an application, an explicit formula is deduced for the trace of the homotopy colimit of endomorphisms over finite categories in which all endomorphisms are invertible. This result can be seen as a generalization of the additivity of traces in monoidal categories with a compatible triangulation.
## Statistics
### Citations
Dimensions.ai Metrics
3 citations in Web of Science®
4 citations in Scopus®
Google Scholar™
### Downloads
1 download since deposited on 27 Jan 2015
0 downloads since 12 months
Detailed statistics
## Additional indexing
Item Type: Journal Article, refereed, original work 07 Faculty of Science > Institute of Mathematics 510 Mathematics Physical Sciences > General Mathematics English 2014 27 Jan 2015 16:07 26 Jan 2022 05:11 Elsevier 0001-8708 Closed https://doi.org/10.1016/j.aim.2014.03.029 |
Department of Mathematics Search | Help | Login | |
Math @ Duke
....................... ....................... Webpage
## Publications [#199141] of Nathan Totz
Papers Accepted
1. N. Totz, S. Wu, A Rigorous Justification of the Modulation Approximation to the 2D Full Water Wave Problem, Communications in Mathematical Physics, vol. 310 no. 3 (2012), pp. 817-883, Springer [arXiv:1101.0545]
(last updated on 2012/12/17)
Abstract:
We consider solutions to the 2D inviscid infinite depth water wave problem neglecting surface tension which are to leading order wave packets of the form $\alpha + \epsilon B(\epsilon \alpha, \epsilon t, \epsilon^2 t)e^{i(k\alpha + \omega t)}$ for $k > 0$. Multiscale calculations formally suggest that such solutions have modulations $B$ that evolve on slow time scales according to a focusing cubic NLS equation. Justifying this rigorously is made difficult by the fact that known existence results do not yield solutions which exist for long enough to see the NLS dynamics. Nonetheless, given initial data within $O(\epsilon^{3/2})$ of such wave packets in $L^2$ Sobolev space, we show that there exists a unique solution to the water wave problem which remains within $O(\epsilon^{3/2})$ to the approximate solution for times of order $O(\epsilon^{-2})$. This is done by using a version of the evolution equations for the water wave problem developed recently by Sijue Wu with no quadratic nonlinearity.
dept@math.duke.edu
ph: 919.660.2800
fax: 919.660.2821
Mathematics Department
Duke University, Box 90320
Durham, NC 27708-0320 |
# Changes from ImgLib1 to ImgLib2
At the Madison hackathon, quite a lot has been done about design issues of the originally published ImgLib (which was already the 6th generation). Unfortunately, these improvements were not possible in a fully backwards-compatible manner.
### Summary
• Image and Container are abandoned. In case that write access is required, use Img instead, otherwise an apropriate access interface as described below.
• The ImageOpener is renamed to now ImgOpener accordingly.
## Where is the Image?
’'’Fig. 1’’’ ImgLib2 interfaces for data collections in ‘‘n’‘-dimensional Euclidean space. The key feature is distinction between random access vs. iteration and real vs. integer coordinate access.
In ImgLib, the Image class was a wrapper for a limited subset of possible meta-data and a Container that provided access to the actual pixels. The overwhelming majority of methods in both Image and Container were almost identical. Furthermore, implementing algorithms for Image limited their portability to situations where different meta-data or less meta-data was required or different strategies for pixel access were appropriate.
In ImgLib2, the Image class has been abandoned. Instead, there is a set of interfaces that describe how data elements (pixels) can be accessed. Fig. 2 shows a UML-diagram visualizing the interface inheritance graph. The most important interfaces are
• RandomAccessible, RealRandomAccessible
• IterableInterval, IterableRealInterval
Actual images that store pixels in a regular equidistant grid implement the Img interface that combines a reasonable subset of the above interfaces. The basic storage strategies like ArrayImg, CellImg, PlanarImg, ImagePlusImg or ListImg implement this interface and can be used directly without being wrapped into something else.
Opposed to the intuitive shortcut that you would just replace Image by Img then, we suggest to consider implementing algorithms for the type of pixel access that you actually need. Iterating and localizing all pixels is possible with IterableRealInterval or IterableInterval, random access comes from RandomAccessible and RealRandomAccessible. That is, the Img interface is almost always a too strict constraint for the possible input, but usually a good choice for writing the result.
## Where is the LocalizableCursor?
’'’Fig. 2’’’ ImgLib2 interfaces for access to sample data and to real and integer coordinates in ‘‘n’‘-dimensional Euclidean space.
Iteration in ImgLib2 (as in ImgLib) implies constant and thus repeatable order. Therefore a Cursor can always localize itself, either by going the hard way and reasoning the position from it’s iteration index or by tracking the position per move. There is no extra interface required to distinguish this behavior but you can choose which Cursor to acquire by Iterable(Real)Interval.cursor() and Iterable(Real)Interval.localizingCursor(). Fig. 2 shows a UML-diagram visualizing the interface inheritance graph.
## Where is the LocalizableByDimCursor?
The LocalizableByDimCursor was a combination of an iterator and random access. Combining these two concepts is a bad idea and so we split them. Random access is provided by classes implementing the interfaces RandomAccess or RealRandomAccess. You get them by RandomAccessible.randomAccess() or RealRandomAccessible.realRandomAccess() respectively. Fig. 2 shows a UML-diagram visualizing the interface inheritance graph.
## How does ImgLib2 handle OutOfBounds?
Handling out of bounds behavior is invariant to data storage. We have therefore moved it into the final implementations ExtendedRandomAccessibleInterval and ExtendedRealRandomAccessibleRealInterval. The usage is trivial as follows:
ExtendedRandomAccessibleInterval< IntType, Img< IntType> > extendedInterval =
new ExtendedRandomAccessibleInterval< IntType, Img< IntType > >(
myIntImg,
new OutOfBoundsMirrorFactory< IntType, Img< IntType > >( Boundary.DOUBLE ) );
RandomAccess< IntType > randomAccess = extendedInterval.randomAccess();
That way, out of boundary location handling is available for all Intervals that are compatible with the passed OutOfBoundsFactory (the existing work for all RandomAccessible & Interval).
A simple shortcut is to call:
RandomAccessible< IntType > interval = Views.extend( myIntImg,
new OutOfBoundsMirrorFactory< IntType, Img< IntType > >( Boundary.DOUBLE ) );
For standard out of bounds strategies there are also static convenience methods:
/* Mirroring Strategy where the last pixel is the mirror */
RandomAccessible< IntType > interval = Views.extendMirrorSingle( myIntImg );
/* Mirroring Strategy where the mirror lies behind the last pixel */
RandomAccessible< IntType > interval = Views.extendMirrorDouble( myIntImg );
/* Strategy where periodicity of the space is assumed (like FFT) */
RandomAccessible< IntType > interval = Views.extendPeriodic( myIntImg );
/* Strategy that returns a constant value outside the boundaries */
RandomAccessible< IntType > interval = Views.extendValue( myIntImg, new IntType( 5 ) );
They placed in the Views class because it is a special view onto an Img or also any RandomAccessibleInterval.
## Where did the NeighborhoodCursor and RegionOfInterestCursor go?
They have been removed and will be replaced by a slightly different concept that was not possible in ImgLib before due to lack of appropriate interfaces. Both Neighborhood and HyperBox will implement IterableInterval and/or RandomAccessible. They will be provided by an IterableInterval< Neighborhood > or RandomAccessible< HyperBox > respectively (plus other combinations, and real variants).
## Where is the Chunk?
Chunk was introduced as a way for parallel processing of independent sections on iterable data. We have replaced it by IterableIntervalSubset which is an IterableInterval itself and thus can be used transparently in all implementations using IterableInterval.
## Changes within ImgLib2
• ImgCursor has been removed for being empty—simply use Cursor instead
• The ImgOpener returns an ImgPlus, not an Img to store additional meta-data as retrieved through LOCI bioformats. |
# How force is equal to the product of mass and instantaneous acceleration?
$$\vec{F}=m\vec{a}$$ Suppose that an object of some mass was constantly accelerating ($$\vec{a} \neq 0$$) through sometime interval. With no doubt, the object had an instantaneous acceleration at any instant in that time interval (since the instantaneous acceleration is defind as a limit).
What about the force? How can there occur a force at a specific time (for example, at $$t=2s$$)? Since a specific time is not a time interval the force can spend acting on the object!
What I think of is that forces must spend time intervals acting on objects, so, there can not be a force without a time interval, so how (if there is no force at a specific time) there is an acceleration?
The key is to look at what you mention in your question: it's the idea of the limit. Let's cover what you seem to be familiar with already. When we define "instantaneous velocity as a function of time" to be $$v(t)=\frac{\text d x}{\text d t}$$ what we really mean is that we are looking at the ratio $$\frac{x(t+\tau)-x(t)}{\tau}$$ when $$\tau$$ becomes infinitely small. If we make $$\tau$$ small enough, then it is as if we are looking at the velocity at the instant at time $$t$$, since during the interval $$\tau$$ the velocity is essentially constant. "An instant" is essentially just a really really short time interval.
The above is also true for acceleration ($$a=\frac{\text d v}{\text d t}$$), and thus also true for forces by Newton's second law $$(F=ma)$$. The force "at an instant $$t$$" determines the acceleration "at an instant $$t$$", which tells us how the velocity changes over the interval $$t$$ to $$t+\tau$$ for a sufficiently small $$\tau$$. More specifically for your example, if we have a force applied to an object existing at $$t=2\ \rm s$$, then we can determine what the velocity will be at a time $$(2+\tau)\ \rm s$$.
In other words, just like how when we say $$a(t)$$ we really mean "the acceleration from $$t$$ to $$t+\tau$$", when we say $$F(t)$$ we really mean "the force from $$t$$ to $$t+\tau$$", since force and acceleration are proportional.
• Why the down vote? If I knew what was wrong with my answer I could fix it. – BioPhysicist Jan 23 '19 at 12:03
• I'm not the downvote but the OP explicitly states they are okay with instantaneous accelerations. They are confused that this implies an instantaneous force, which disagrees with some preconception they have about forces. The kinematics part of this the OP claims to be alright with. – jacob1729 Jan 23 '19 at 14:13
• @jacob1729 Right. I address the force. I start with the known (velocity and acceleration) and then move to the "unknown" talking about forces. It is how you typically teach or explain things to people. I changed some of my wording to help better reflect this. (Also, I like your 1729) – BioPhysicist Jan 23 '19 at 14:15
• Oh, I missed the last paragraph. I don't know if that fully addresses the OP's concerns or not but I don't think its worth a downvote. I'm not really clear on SE policy on upvoting people you think were unfairly downvoted. – jacob1729 Jan 23 '19 at 14:20
The way I interpret Newton's equation (F = ma) is that, given the details of the forces we know (or believe) the particle is being subjected to, we then try to calculate the acceleration and position. So in fact, we do not know what exactly $$\vec{a}$$ is, as a function of time or whether it is changing, constant, etc... The goal is to determine this.
Consider a simple example of a particle subjected to a constant gravitational force, which is proportional to the particle's mass $$m$$, as per Newton's laws. That is, $$\vec{F} = -mg\vec{e}_z$$, where $$\vec{e}_z$$ is a unit vector directing the force to point in the downwards direction. We then use the equation $$\vec{F} = m \vec{a}$$ to determine that $$\vec{a} = -g \vec{e}_z$$ which tells us that the acceleration is a constant function in time. Note that the force in this case is known to be constant because it doesn't depend on time or where the particle is located. Now consider a case where the particle is subjected to a spring force, with $$\vec{F} = -k \vec{x}$$, so that the force $$\vec{F}$$ is always pointing towards the origin and depends on how far the particle is from the origin. Now, Newton's law tells us that $$m\vec{a}(t) = -k \vec{x}(t).$$ We see that the force is now not constant in time, since it depends on the particle's position, which is changing in response to the force, etc... This constitutes a differential equation, which must be solved given the initial position and velocity of the particle. In this case, it can be done and shows that the particle oscillates back and forth as the force and inertia balance each other, but in general solving these equations is very hard and requires computers or fancy tricks. The main point however, is that in general, neither the force or acceleration are constant in time and understanding this was what lead Newton to (independently) develop the field of calculus. |
# On the determinant $\det[(\frac{i^2+dj^2}p)]_{0\le i,j\le(p-1)/2}$ with $(\frac dp)=-1$
Let $$p$$ be an odd prime. For $$d\in\mathbb Z$$ we define $$T(d,p):=\det\left[\left(\frac{i^2+dj^2}p\right)\right]_{0\le i,j\le(p-1)/2},$$ where $$(\frac{\cdot}p)$$ is the Legendre symbol. By (1.17) of my paper arXiv:1308.2900, if$$(\frac dp)=-1$$ then $$(\frac{T(d,p)}p)=1$$.
Suppose that $$p\equiv3\pmod4$$. Then, by (1.14) of arXiv:1308.2900, $$T(d,p)=T(-1,p)$$ for any $$d\in\mathbb Z$$ with $$(\frac dp)=-1$$. As $$T(-1,p)$$ is a skew-symmetric determinant of even order, it is an integer square.
In the case $$p\equiv1\pmod4$$, if $$d$$ and $$d'$$ are both quadratic nonresidues modulo $$p$$, then we clearly have $$T(d,p)=\pm T(d',p)$$.
I have the following conjecture which seems quite challenging.
Conjecture. Let $$p\equiv1\pmod4$$ be a prime and write $$p=x^2+4y^2$$ with $$x$$ and $$y$$ positive integers. Then, for any integer $$d\in\mathbb Z$$ with $$(\frac dp)=-1$$, there is a positive integer $$t(p)$$ (not depending on $$d$$) such that $$|T(d,p)|=2^{(p-1)/2}t(p)^2y.$$
Via Mathematica, I find that $$\begin{gather}t(5)=1,\ t(13)=3,\ t(17)=4,\ t(29)=91,\ t(37)=81,\ t(41)=180, \\t(53)=1703,\ t(61)=87120,\ t(73)=16104096,\ t(89)=3947892146, \\ t(97)=19299520512,\ t(101)=885623936875,\ t(109)=36548185365.\end{gather}$$
PS: I have verified the conjecture for all primes $$p<5000$$ with $$p\equiv1\pmod4$$.
• Out of curiosity - do all those questions of yours come out of some other research, or are they just a result of some random numerical experimentation? – Wojowu Jan 6 at 12:01
• @Wojowu If one has good mathematical feeling or intuition like Ramanujan, he/she can have lots of new discoveries in math. My questions posted to Mathoverflow are just small parts of my mathematical conjectures, of course they are not results of random numerical experiments, they come from combination of my philosophy, intuition, inspiration, experience and computation. – Zhi-Wei Sun Jan 6 at 15:07
• I have verified the conjecture for all primes $p<5000$ with $p\equiv1\pmod4$. – Zhi-Wei Sun Jan 6 at 15:39
I have obtained some partial results that seem promising, but not a full solution:
Let $$\chi$$ be a nontrivial Dirichlet character mod $$p$$ with $$\chi(-1)=1$$. Then
$$\sum_{j=0}^{(p-1)/2} \left( \frac{ i^2+ dj^2}{p} \right) \chi(j) = \frac{1}{2} \sum_{j=1}^{p-1 } \left( \frac{ i^2+ dj^2}{p} \right) \chi(j)$$
If $$i=0$$, then this sum is zero. Otherwise, we can perform a substitution replacing $$j$$ by $$ij$$, getting
$$\frac{1}{2} \sum_{j=1}^{p-1 } \left( \frac{ i^2+ di^2 j^2}{p} \right) \chi(i j) = \chi(i) \frac{1}{2}\sum_{j=1}^{p-1 } \left( \frac{ 1+ d j^2}{p} \right) \chi( j)$$. So $$\chi$$ is an eigenvector of this matrix, with eigenvalue $$\frac{1}{2}\sum_{j=1}^{p-1 } \left( \frac{ 1+ d j^2}{p} \right) \chi( j)$$.
The complement to these eigenvectors has a basis consisting of the function that is $$1$$ on $$0$$ and the function that is $$1$$ on all nonzero $$j$$. The matrix preserves the space generated by this basis, and acts on it by $$\begin{pmatrix} 0 & - (p-1)/2 \\ 1 & \frac{1}{2} \sum_{j=1}^{p-1} \left( \frac{1+ dj^2}{p} \right)\end{pmatrix}$$ and therefore with determinant $$(p-1)/2$$. So the determinant of your matrix is
$$\frac{p-1}{2} \prod_{\chi \textrm { nontrivial, } \chi(-1)=1} \left( \frac{1}{2}\sum_{j=1}^{p-1 } \left( \frac{ 1+ d j^2}{p} \right) \chi( j) \right)$$
We can write $$\sum_{j^2 = t} \chi(j)$$ as $$\chi_1(t) + \chi_2(t)$$ where $$\chi_1$$ and $$\chi_2$$ are the squareroots of $$t$$ in the ring of characters, so $$\sum_{j=1}^{p-1 } \left( \frac{ 1+ d j^2}{p} \right) \chi( j) = \sum_{t=1}^{p-1} \left( \frac{ 1+ d t}{p} \right) \chi_1( t)+ \sum_{t=1}^{p-1} \left( \frac{ 1+ d t}{p} \right) \chi_2( t)$$
If we focus attention on the eigenvector associated to $$\chi$$ the Legendre symbol, then $$\chi_1$$ and $$\chi_2$$ are the two characters of order $$4$$, so the left term is of the form $$a+bi$$ and the right term is its complex conjugate $$a-bi$$. By the evaluation of the absolute value of Jacobi sums, $$a^2+b^2=p$$, and because the number of $$t$$ with $$\chi_1(t)= \pm 1$$ (i.e. $$t$$ a quadratic residue) and $$1+dt \neq 0$$ (implied by $$t$$ a quadratic residue) is $$p-1/2$$, which is even, $$a$$ is even, so in fact $$a=\pm y,b=\pm x$$, and this eigenvalue is $$\pm y$$.
So it seems when the eigenvalue associated to the Legendre symbol is removed, you would like to have a square determinant. But I don't yet quite see how to obtain that.
I tried to get a square by finding matching pairs of eigenvalues, or by removing this eigenvector and conjugating to a skew-symmetric matrix, but neither approach quite worked. |
# scanpy.pl.dotplot¶
scanpy.pl.dotplot(adata, var_names, groupby, use_raw=None, log=False, num_categories=7, expression_cutoff=0.0, mean_only_expressed=False, cmap='Reds', dot_max=None, dot_min=None, standard_scale=None, smallest_dot=0.0, title=None, colorbar_title='Mean expression\\nin group', size_title='Fraction of cells\\nin group (%)', figsize=None, dendrogram=False, gene_symbols=None, var_group_positions=None, var_group_labels=None, var_group_rotation=None, layer=None, swap_axes=False, dot_color_df=None, show=None, save=None, ax=None, return_fig=False, **kwds)
Makes a dot plot of the expression values of var_names.
For each var_name and each groupby category a dot is plotted. Each dot represents two values: mean expression within each category (visualized by color) and fraction of cells expressing the var_name in the category (visualized by the size of the dot). If groupby is not given, the dotplot assumes that all data belongs to a single category.
Note
A gene is considered expressed if the expression value in the adata (or adata.raw) is above the specified threshold which is zero by default.
An example of dotplot usage is to visualize, for multiple marker genes, the mean value and the percentage of cells expressing the gene across multiple clusters.
This function provides a convenient interface to the DotPlot class. If you need more flexibility, you should use DotPlot directly.
Parameters
adata : AnnDataAnnData
Annotated data matrix.
var_names :
var_names should be a valid subset of adata.var_names. If var_names is a mapping, then the key is used as label to group the values (see var_group_labels). The mapping values should be sequences of valid adata.var_names. In this case either coloring or ‘brackets’ are used for the grouping of var names depending on the plot. When var_names is a mapping, then the var_group_labels and var_group_positions are set.
groupby :
The key of the observation grouping to consider.
use_raw : bool, None (default: None)
Use raw attribute of adata if present.
log : boolbool (default: False)
Plot on logarithmic axis.
num_categories : intint (default: 7)
Only used if groupby observation is not categorical. This value determines the number of groups into which the groupby observation should be subdivided.
categories_order
Order in which to show the categories. Note: add_dendrogram or add_totals can change the categories order.
figsize : Tuple[float, float], None (default: None)
Figure size when multi_panel=True. Otherwise the rcParam['figure.figsize] value is used. Format is (width, height)
dendrogram : (default: False)
If True or a valid dendrogram key, a dendrogram based on the hierarchical clustering between the groupby categories is added. The dendrogram information is computed using scanpy.tl.dendrogram(). If tl.dendrogram has not been called previously the function is called with default parameters.
gene_symbols : str, None (default: None)
Column name in .var DataFrame that stores gene symbols. By default var_names refer to the index column of the .var DataFrame. Setting this option allows alternative names to be used.
var_group_positions : Sequence[Tuple[int, int]], None (default: None)
Use this parameter to highlight groups of var_names. This will draw a ‘bracket’ or a color block between the given start and end positions. If the parameter var_group_labels is set, the corresponding labels are added on top/left. E.g. var_group_positions=[(4,10)] will add a bracket between the fourth var_name and the tenth var_name. By giving more positions, more brackets/color blocks are drawn.
var_group_labels : Sequence[str], None (default: None)
Labels for each of the var_group_positions that want to be highlighted.
var_group_rotation : float, None (default: None)
Label rotation degrees. By default, labels larger than 4 characters are rotated 90 degrees.
layer : str, None (default: None)
Name of the AnnData object layer that wants to be plotted. By default adata.raw.X is plotted. If use_raw=False is set, then adata.X is plotted. If layer is set to a valid layer name, then the layer is plotted. layer takes precedence over use_raw.
title : str, None (default: None)
Title for the figure
colorbar_title : str, None (default: 'Mean expression\nin group')
Title for the color bar. New line character (n) can be used.
cmap : strstr (default: 'Reds')
String denoting matplotlib color map.
standard_scale : {‘var’, ‘group’}, NoneOptional[Literal[‘var’, ‘group’]] (default: None)
Whether or not to standardize the given dimension between 0 and 1, meaning for each variable or group, subtract the minimum and divide each by its maximum.
swap_axes : bool, None (default: False)
By default, the x axis contains var_names (e.g. genes) and the y axis the groupby categories. By setting swap_axes then x are the groupby categories and y the var_names.
return_fig : bool, None (default: False)
Returns DotPlot object. Useful for fine-tuning the plot. Takes precedence over show=False.
size_title : str, None (default: 'Fraction of cells\nin group (%)')
Title for the size legend. New line character (n) can be used.
expression_cutoff : floatfloat (default: 0.0)
Expression cutoff that is used for binarizing the gene expression and determining the fraction of cells expressing given genes. A gene is expressed only if the expression value is greater than this threshold.
mean_only_expressed : boolbool (default: False)
If True, gene expression is averaged only over the cells expressing the given genes.
dot_max : float, None (default: None)
If none, the maximum dot size is set to the maximum fraction value found (e.g. 0.6). If given, the value should be a number between 0 and 1. All fractions larger than dot_max are clipped to this value.
dot_min : float, None (default: None)
If none, the minimum dot size is set to 0. If given, the value should be a number between 0 and 1. All fractions smaller than dot_min are clipped to this value.
smallest_dot : float, None (default: 0.0)
If none, the smallest dot has size 0. All expression levels with dot_min are plotted with this size.
show : bool, None (default: None)
Show the plot, do not return axis.
save : str, bool, NoneUnion[str, bool, None] (default: None)
If True or a str, save the figure. A string is appended to the default filename. Infer the filetype if ending on {'.pdf', '.png', '.svg'}.
ax : _AxesSubplot, NoneOptional[_AxesSubplot] (default: None)
A matplotlib axes object. Only works if plotting a single component.
kwds
Are passed to matplotlib.pyplot.scatter().
Return type
DotPlot, dict, NoneUnion[DotPlot, dict, None]
Returns
If return_fig is True, returns a DotPlot object, else if show is false, return axes dict
DotPlot
The DotPlot class can be used to to control several visual parameters not available in this function.
rank_genes_groups_dotplot()
to plot marker genes identified using the rank_genes_groups() function.
Examples
Create a dot plot using the given markers and the PBMC example dataset grouped by the category ‘bulk_labels’.
>>> import scanpy as sc
>>> markers = ['C1QA', 'PSAP', 'CD79A', 'CD79B', 'CST3', 'LYZ']
Using var_names as dict:
>>> markers = {'T-cell': 'CD3D', 'B-cell': 'CD79A', 'myeloid': 'CST3'}
>>> dp = sc.pl.dotplot(adata, markers, 'bulk_labels', return_fig=True)
>>> axes_dict = dp.get_axes() |
## Hamiltonian
Section: Hamiltonian
Type: logical
Default: false
(Experimental) If set to yes, Octopus will use the adaptively compressed exchange operator (ACE) for HF and hybrid calculations, as defined in Lin, J. Chem. Theory Comput. 2016, 12, 2242.
CalculateSelfInducedMagneticField
Section: Hamiltonian
Type: logical
Default: no
The existence of an electronic current implies the creation of a self-induced magnetic field, which may in turn back-react on the system. Of course, a fully consistent treatment of this kind of effect should be done in QED theory, but we will attempt a first approximation to the problem by considering the lowest-order relativistic terms plugged into the normal Hamiltonian equations (spin-other-orbit coupling terms, etc.). For the moment being, none of this is done, but a first step is taken by calculating the induced magnetic field of a system that has a current, by considering the magnetostatic approximation and Biot-Savart law:
$$\nabla^2 \vec{A} + 4\pi\alpha \vec{J} = 0$$
$$\vec{B} = \vec{\nabla} \times \vec{A}$$
If CalculateSelfInducedMagneticField is set to yes, this B field is calculated at the end of a gs calculation (nothing is done -- yet -- in the tdcase) and printed out, if the Output variable contains the potential keyword (the prefix of the output files is Bind).
ClassicalPotential
Section: Hamiltonian
Type: integer
Default: no
Whether and how to add to the external potential the potential generated by the classical charges read from block PDBClassical, for QM/MM calculations. Not available in periodic systems.
Options:
• no: No classical charges.
• point_charges: Classical charges are treated as point charges.
• gaussian_smeared: Classical charges are treated as Gaussian distributions. Smearing widths are hard-coded by species (experimental).
CurrentDensity
Section: Hamiltonian
Type: integer
This variable selects the method used to calculate the current density. For the moment this variable is for development purposes and users should not need to use it.
Options:
• gradient: The calculation of current is done using the gradient operator. (Experimental)
• gradient_corrected: The calculation of current is done using the gradient operator with additional corrections for the total current from non-local operators.
• hamiltonian: The current density is obtained from the commutator of the Hamiltonian with the position operator. (Experimental)
EnablePhotons
Section: Hamiltonian
Type: logical
Default: no
This variable can be used to enable photons in several types of calculations. It can be used to activate the one-photon OEP formalism. In the case of CalculationMode = casida, it enables photon modes as described in ACS Photonics 2019, 6, 11, 2757-2778. Finally, if set to yes when solving the ferquency-dependent Sternheimer equation, the photons are coupled to the electronic subsystem.
EwaldAlpha
Section: Hamiltonian
Type: float
Default: 0.21
The value 'Alpha' that controls the splitting of the Coulomb interaction in the Ewald sum used to calculation the ion-ion interaction for periodic systems. This value affects the speed of the calculation, normally users do not need to modify it.
ExternalCurrent
Section: Hamiltonian
Type: logical
Default: no
If an external current density will be used.
FilterPotentials
Section: Hamiltonian
Type: integer
Default: filter_ts
Octopus can filter the pseudopotentials so that they no longer contain Fourier components larger than the mesh itself. This is very useful to decrease the egg-box effect, and so should be used in all instances where atoms move (e.g. geometry optimization, molecular dynamics, and vibrational modes).
Options:
• filter_none: Do not filter.
• filter_TS: The filter of M. Tafipolsky and R. Schmid, J. Chem. Phys. 124, 174102 (2006).
• filter_BSB: The filter of E. L. Briggs, D. J. Sullivan, and J. Bernholc, Phys. Rev. B 54, 14362 (1996).
ForceTotalEnforce
Section: Hamiltonian
Type: logical
Default: no
(Experimental) If this variable is set to "yes", then the sum of the total forces will be enforced to be zero.
GaugeFieldDelay
Section: Hamiltonian
Type: float
Default: 0.
The application of the gauge field acts as a probe of the system. For dynamical systems one can apply this probe with a delay relative to the start of the simulation.
GaugeFieldDynamics
Section: Hamiltonian
Type: integer
Default: polarization
This variable select the dynamics of the gauge field used to apply a finite electric field to periodic systems in time-dependent runs.
Options:
• none: The gauge field does not have dynamics. The induced polarization field is zero.
• polarization: The gauge field follows the dynamic described in Bertsch et al, Phys. Rev. B 62 7998 (2000).
GaugeFieldPropagate
Section: Hamiltonian
Type: logical
Default: no
Propagate the gauge field with initial condition set by GaugeVectorField or zero if not specified
GaugeVectorField
Section: Hamiltonian
Type: block
The gauge vector field is used to include a uniform (but time-dependent) external electric field in a time-dependent run for a periodic system. An optional second row specifies the initial value for the time derivative of the gauge field (which is set to zero by default). By default this field is not included. If KPointsUseSymmetries = yes, then SymmetryBreakDir must be set in the same direction. This is used with utility oct-dielectric_function according to GF Bertsch, J-I Iwata, A Rubio, and K Yabana, Phys. Rev. B 62, 7998-8002 (2000).
GyromagneticRatio
Section: Hamiltonian
Type: float
Default: 2.0023193043768
The gyromagnetic ratio of the electron. This is of course a physical constant, and the default value is the exact one that you should not touch, unless: (i) You want to disconnect the anomalous Zeeman term in the Hamiltonian (then set it to zero; this number only affects that term); (ii) You are using an effective Hamiltonian, as is the case when you calculate a 2D electron gas, in which case you have an effective gyromagnetic factor that depends on the material.
MassScaling
Section: Hamiltonian
Type: block
Scaling factor for anisotropic masses (different masses along each geometric direction).
%MassScaling
1.0 | 1800.0 | 1800.0
%
would fix the mass of the particles to be 1800 along the y and z directions. This can be useful, e.g., to simulate 3 particles in 1D, in this case an electron and 2 protons.
MaxwellHamiltonianOperator
Section: Hamiltonian
Type: integer
With this variable the the Maxwell Hamiltonian operator can be selected
Options:
• faraday_ampere_old: old version
• faraday_ampere: The propagation operation in vacuum with Spin 1 matrices without Gauss law condition.
• faraday_ampere_medium: The propagation operation in medium with Spin 1 matrices without Gauss law condition
• faraday_ampere_gauss: The propagation operation is done by 4x4 matrices also with Gauss laws constraint.
• faraday_ampere_gauss_medium: The propagation operation is done by 4x4 matrices also with Gauss laws constraint in medium
MaxwellMediumCalculation
Section: Hamiltonian
Type: integer
Default: RS
For linear media the calculation of the Maxwell Operator acting on the RS state can be done directly using the Riemann-Silberstein representation or by calculating the curl of the electric and magnetic fields.
Options:
• RS: Medium calculation directly via Hamiltonian
• EM: Medium calculation via curl of electric field and magnetic field
ParticleMass
Section: Hamiltonian
Type: float
Default: 1.0
It is possible to make calculations for a particle with a mass different from one (atomic unit of mass, or mass of the electron). This is useful to describe non-electronic systems, or for esoteric purposes.
RashbaSpinOrbitCoupling
Section: Hamiltonian
Type: float
Default: 0.0
(Experimental.) For systems described in 2D (electrons confined to 2D in semiconductor structures), one may add the Bychkov-Rashba spin-orbit coupling term [Bychkov and Rashba, J. Phys. C: Solid State Phys. 17, 6031 (1984)]. This variable determines the strength of this perturbation, and has dimensions of energy times length.
RelativisticCorrection
Section: Hamiltonian
Type: integer
Default: non_relativistic
The default value means that no relativistic correction is used. To include spin-orbit coupling turn RelativisticCorrection to spin_orbit (this will only work if SpinComponents has been set to non_collinear, which ensures the use of spinors).
Options:
• non_relativistic: No relativistic corrections.
• spin_orbit: Spin-orbit.
RiemannSilbersteinSign
Section: Hamiltonian
Type: integer
Default: plus
Sign for the imaginary part of the Riemann Silberstein vector which represents the magnetic field
Options:
• minus: Riemann Silberstein sign is minus
• plus: Riemann Silberstein sign is plus
SOStrength
Section: Hamiltonian
Type: float
Default: 1.0
Tuning of the spin-orbit coupling strength: setting this value to zero turns off spin-orbit terms in the Hamiltonian, and setting it to one corresponds to full spin-orbit.
StaticElectricField
Section: Hamiltonian
Type: block
Default: 0
A static constant electric field may be added to the usual Hamiltonian, by setting the block StaticElectricField. The three possible components of the block (which should only have one line) are the three components of the electric field vector. It can be applied in a periodic direction of a large supercell via the single-point Berry phase.
StaticMagneticField
Section: Hamiltonian
Type: block
A static constant magnetic field may be added to the usual Hamiltonian, by setting the block StaticMagneticField. The three possible components of the block (which should only have one line) are the three components of the magnetic field vector. Note that if you are running the code in 1D mode, this will not work, and if you are running the code in 2D mode the magnetic field will have to be in the z-direction, so that the first two columns should be zero. Possible in periodic system only in these cases: 2D system, 1D periodic, with StaticMagneticField2DGauge = linear_y; 3D system, 1D periodic, field is zero in y- and z-directions (given currently implemented gauges).
The magnetic field should always be entered in atomic units, regardless of the Units variable. Note that we use the "Gaussian" system meaning 1 au[B] = $$1.7152553 \times 10^7$$ Gauss, which corresponds to $$1.7152553 \times 10^3$$ Tesla.
StaticMagneticField2DGauge
Section: Hamiltonian
Type: integer
Default: linear_xy
The gauge of the static vector potential $$A$$ when a magnetic field $$B = \left( 0, 0, B_z \right)$$ is applied to a 2D-system.
Options:
• linear_xy: Linear gauge with $$A = \frac{1}{2c} \left( -y, x \right) B_z$$. (Cannot be used for periodic systems.)
• linear_y: Linear gauge with $$A = \frac{1}{c} \left( -y, 0 \right) B_z$$. Can be used for PeriodicDimensions = 1 but not PeriodicDimensions = 2.
TheoryLevel
Section: Hamiltonian
Type: integer
The calculations can be run with different "theory levels" that control how electrons are simulated. The default is dft. When hybrid functionals are requested, through the XCFunctional variable, the default is hartree_fock.
Options:
• hartree: Calculation within the Hartree method (experimental). Note that, contrary to popular belief, the Hartree potential is self-interaction-free. Therefore, this run mode will not yield the same result as kohn-sham without exchange-correlation.
• independent_particles: Particles will be considered as independent, i.e. as non-interacting. This mode is mainly used for testing purposes, as the code is usually much faster with independent_particles.
• hartree_fock: This is the traditional Hartree-Fock scheme. Like the Hartree scheme, it is fully self-interaction-free.
• kohn_sham: This is the default density-functional theory scheme. Note that you can also use hybrid functionals in this scheme, but they will be handled the "DFT" way, i.e., solving the OEP equation.
• generalized_kohn_sham: This is similar to the kohn-sham scheme, except that this allows for nonlocal operators. This is the default mode to run hybrid functionals, meta-GGA functionals, or DFT+U. It can be more convenient to use kohn-sham DFT within the OEP scheme to get similar (but not the same) results. Note that within this scheme you can use a correlation functional, or a hybrid functional (see XCFunctional). In the latter case, you will be following the quantum-chemistry recipe to use hybrids.
• rdmft: (Experimental) Reduced Density Matrix functional theory.
TimeZero
Section: Hamiltonian
Type: logical
Default: no
(Experimental) If set to yes, the ground state and other time dependent calculation will assume that they are done at time zero, so that all time depedent field at that time will be included.
## Hamiltonian::DFT+U
ACBN0IntersiteCutoff
Section: Hamiltonian::DFT+U
Type: float
The cutoff radius defining the maximal intersite distance considered. Only available with ACBN0 functional with intersite interaction.
ACBN0IntersiteInteraction
Section: Hamiltonian::DFT+U
Type: logical
Default: no
If set to yes, Octopus will determine the effective intersite interaction V Only available with ACBN0 functional. It is strongly recommended to set AOLoewdin=yes when using the option.
ACBN0RotationallyInvariant
Section: Hamiltonian::DFT+U
Type: logical
If set to yes, Octopus will use for U and J a formula which is rotationally invariant. This is different from the original formula for U and J. This is activated by default, except in the case of spinors, as this is not yet implemented in this case.
ACBN0Screening
Section: Hamiltonian::DFT+U
Type: float
Default: 1.0
If set to 0, no screening will be included in the ACBN0 functional, and the U will be estimated from bare Hartree-Fock. If set to 1 (default), the full screening of the U, as defined in the ACBN0 functional, is used.
DFTUBasisFromStates
Section: Hamiltonian::DFT+U
Type: logical
Default: no
If set to yes, Octopus will construct the localized basis from user-defined states. The states are taken at the Gamma point (or the first k-point of the states in the restart_proj folder. The states are defined via the block DFTUBasisStates
DFTUBasisStates
Section: Hamiltonian::DFT+U
Type: block
Default: none
Each line of this block contains the index of a state to be used to construct the localized basis. See DFTUBasisFromStates for details.
DFTUDoubleCounting
Section: Hamiltonian::DFT+U
Type: integer
Default: dft_u_fll
This variable selects which DFT+U double counting term is used.
Options:
• dft_u_fll: (Default) The Fully Localized Limit (FLL)
• dft_u_amf: (Experimental) Around mean field double counting, as defined in PRB 44, 943 (1991) and PRB 49, 14211 (1994).
DFTUPoissonSolver
Section: Hamiltonian::DFT+U
Type: integer
This variable selects which Poisson solver is used to compute the Coulomb integrals over a submesh. These are non-periodic Poisson solvers. If the domain parallelization is activated, the default is the direct sum. Otherwise, the FFT Poisson solver is used by default.
Options:
• dft_u_poisson_direct: (Default) Direct Poisson solver. Slow.
• dft_u_poisson_isf: (Experimental) ISF Poisson solver on a submesh. This does not work for non-orthogonal cells nor domain parallelization.
• dft_u_poisson_psolver: (Experimental) PSolver Poisson solver on a submesh. This does not work for non-orthogonal cells nor domain parallelization. Requires the PSolver external library.
• dft_u_poisson_fft: FFT Poisson solver on a submesh. This uses the 0D periodic version of the FFT kernels. This is not implemented for domain parallelization.
SkipSOrbitals
Section: Hamiltonian::DFT+U
Type: logical
Default: no
If set to yes, Octopus will determine the effective U for all atomic orbitals from the peusopotential but s orbitals. Only available with ACBN0 functional.
UseAllAtomicOrbitals
Section: Hamiltonian::DFT+U
Type: logical
Default: no
If set to yes, Octopus will determine the effective U for all atomic orbitals from the peusopotential. Only available with ACBN0 functional. It is strongly recommended to set AOLoewdin=yes when using the option.
## Hamiltonian::PCM
PCMCalcMethod
Section: Hamiltonian::PCM
Type: integer
Default: pcm_direct
Defines the method to be used to obtain the PCM potential.
Options:
• pcm_direct: Direct sum of the potential generated by the polarization charges regularized with a Gaussian smearing [A. Delgado, et al., J Chem Phys 143, 144111 (2015)].
• pcm_poisson: Solving the Poisson equation for the polarization charge density.
PCMCalculation
Section: Hamiltonian::PCM
Type: logical
Default: no
If true, the calculation is performed accounting for solvation effects by using the Integral Equation Formalism Polarizable Continuum Model IEF-PCM formulated in real-space and real-time (J. Chem. Phys. 143, 144111 (2015), Chem. Rev. 105, 2999 (2005), J. Chem. Phys. 139, 024105 (2013)). At the moment, this option is available only for TheoryLevel = DFT. PCM is tested for CalculationMode = gs, while still experimental for other values (in particular, CalculationMode = td).
PCMCavity
Section: Hamiltonian::PCM
Type: string
Name of the file containing the geometry of the cavity hosting the solute molecule. The data must be in atomic units and the file must contain the following information sequentially: T < Number of tesserae s_x(1:T) < coordinates x of the tesserae s_y(1:T) < coordinates y of the tesserae s_z(1:T) < coordinates z of the tesserae A(1:T) < areas of the tesserae R_sph(1:T) < Radii of the spheres to which the tesserae belong normal(1:T,1:3) < Outgoing unitary vectors at the tesserae surfaces
PCMChargeSmearNN
Section: Hamiltonian::PCM
Type: integer
Default: 2 * max_area * PCMSmearingFactor
Defines the number of nearest neighbor mesh-points to be taken around each cavity tessera in order to smear the charge when PCMCalcMethod = pcm_poisson. Setting PCMChargeSmearNN = 1 means first nearest neighbors, PCMChargeSmearNN = 2 second nearest neighbors, and so on. The default value is such that the neighbor mesh contains points in a radius equal to the width used for the gaussian smearing.
PCMDebyeRelaxTime
Section: Hamiltonian::PCM
Type: float
Default: 0.0
Relaxation time of the solvent within Debye model ($$\tau$$). Recall Debye dieletric function: $$\varepsilon(\omega)=\varepsilon_d+\frac{\varepsilon_0-\varepsilon_d}{1-i\omega\tau}$$
PCMDrudeLDamping
Section: Hamiltonian::PCM
Type: float
Default: 0.0
Damping factor of the solvent charges oscillations within Drude-Lorentz model ($$\gamma$$). Recall Drude-Lorentz dielectric function: $$\varepsilon(\omega)=1+\frac{A}{\omega_0^2-\omega^2+i\gamma\omega}$$
PCMDrudeLOmega
Section: Hamiltonian::PCM
Type: float
Default: $$\sqrt{1/(\varepsilon_0-1)}$$
Resonance frequency of the solvent within Drude-Lorentz model ($$\omega_0$$). Recall Drude-Lorentz dielectric function: $$\varepsilon(\omega)=1+\frac{A}{\omega_0^2-\omega^2+i\gamma\omega}$$ Default values of $$\omega_0$$ guarantee to recover static dielectric constant.
PCMDynamicEpsilon
Section: Hamiltonian::PCM
Type: float
Default: PCMStaticEpsilon
High-frequency dielectric constant of the solvent ($$\varepsilon_d$$). $$\varepsilon_d=\varepsilon_0$$ indicate equilibrium with solvent.
PCMEoMInitialCharges
Section: Hamiltonian::PCM
Type: integer
Default: 0
If =0 the propagation of the solvent polarization charges starts from internally generated initial charges in equilibrium with the initial potential. For Debye EOM-PCM, if >0 the propagation of the solvent polarization charges starts from initial charges from input file. if =1, initial pol. charges due to solute electrons are read from input file. else if =2, initial pol. charges due to external potential are read from input file. else if =3, initial pol. charges due to solute electrons and external potential are read from input file. Files should be located in pcm directory and are called ASC_e.dat and ASC_ext.dat, respectively. The latter files are generated after any PCM run and contain the last values of the polarization charges.
PCMEpsilonModel
Section: Hamiltonian::PCM
Type: integer
Default: pcm_debye
Define the dielectric function model.
Options:
• pcm_debye: Debye model: $$\varepsilon(\omega)=\varepsilon_d+\frac{\varepsilon_0-\varepsilon_d}{1-i\omega\tau}$$
• pcm_drude: Drude-Lorentz model: $$\varepsilon(\omega)=1+\frac{A}{\omega_0^2-\omega^2+i\gamma\omega}$$
PCMGamessBenchmark
Section: Hamiltonian::PCM
Type: logical
Default: .false.
If PCMGamessBenchmark is set to "yes", the pcm_matrix is also written in a Gamess format. for benchamarking purposes.
PCMKick
Section: Hamiltonian::PCM
Type: logical
Default: no
This variable controls the effect the kick has on the polarization of the solvent. If .true. ONLY the FAST degrees-of-freedom of the solvent follow the kick. The potential due to polarization charges behaves as another kick, i.e., it is a delta-perturbation. If .false. ALL degrees-of-freedom of the solvent follow the kick. The potential due to polarization charges evolves following an equation of motion. When Debye dielectric model is used, just a part of the potential behaves as another kick.
PCMLocalField
Section: Hamiltonian::PCM
Type: logical
Default: no
This variable is a flag for including local field effects when an external field is applied. The total field interacting with the molecule (also known as cavity field) is not the bare field in the solvent (the so-called Maxwell field), but it also include a contribution due to the polarization of the solvent. The latter is calculated here within the PCM framework. See [G. Gil, et al., J. Chem. Theory Comput., 2019, 15 (4), pp 2306–2319].
PCMQtotTol
Section: Hamiltonian::PCM
Type: float
Default: 0.5
If PCMRenormCharges=.true. and $$\delta Q = |[\sum_i q_i| - ((\epsilon-1)/\epsilon)*|Q_M]|>PCMQtotTol$$ the polarization charges will be normalized as $$q_i^\prime=q_i + signfunction(e, n, \delta Q) (q_i/q_{tot})*\delta Q$$ with $$q_{tot} = \sum_i q_i$$. For values of $$\delta Q > 0.5$$ (printed by the code in the file pcm/pcm_info.out) even, if polarization charges are renormalized, the calculated results might be inaccurate or erroneous.
Section: Hamiltonian::PCM
Type: float
Scales the radii of the spheres used to build the solute cavity surface. The default value depends on the choice of PCMVdWRadii: 1.2 for pcm_vdw_optimized and 1.0 for pcm_vdw_species.
PCMRenormCharges
Section: Hamiltonian::PCM
Type: logical
Default: .false.
If .true. renormalization of the polarization charges is performed to enforce fulfillment of the Gauss law, $$\sum_i q_i^{e/n} = -[(\epsilon-1)/\epsilon] Q_M^{e/n}$$ where $$q_i^{e/n}$$ are the polarization charges induced by the electrons/nuclei of the molecule and $$Q_M^{e/n}$$ is the nominal electronic/nuclear charge of the system. This can be needed to treat molecules in weakly polar solvents.
PCMSmearingFactor
Section: Hamiltonian::PCM
Type: float
Default: 1.0
Parameter used to control the width (area of each tessera) of the Gaussians used to distribute the polarization charges on each tessera (arXiv:1507.05471). If set to zero, the solvent reaction potential in real-space is defined by using point charges.
PCMSolute
Section: Hamiltonian::PCM
Type: logical
Default: yes
This variable is a flag for including polarization effects of the solvent due to the solute. (Useful for analysis) When external fields are applied, turning off the solvent-molecule interaction (PCMSolute=no) and activating the solvent polarization due to the applied field (PCMLocalField=yes) allows to include only local field effects.
PCMSpheresOnH
Section: Hamiltonian::PCM
Type: logical
Default: no
If true, spheres centered at the Hydrogens atoms are included to build the solute cavity surface.
PCMStaticEpsilon
Section: Hamiltonian::PCM
Type: float
Default: 1.0
Static dielectric constant of the solvent ($$\varepsilon_0$$). 1.0 indicates gas phase.
PCMTDLevel
Section: Hamiltonian::PCM
Type: integer
Default: eq
When CalculationMode=td, PCMTDLevel it sets the way the time-depenendent solvent polarization is propagated.
Options:
• eq: If PCMTDLevel=eq, the solvent is always in equilibrium with the solute or the external field, i.e., the solvent polarization follows instantaneously the changes in solute density or in the external field. PCMTDLevel=neq and PCMTDLevel=eom are both nonequilibrium runs.
• neq: If PCMTDLevel=neq, solvent polarization charges are splitted in two terms: one that follows instantaneously the changes in the solute density or in the external field (dynamical polarization charges), and another that lag behind in the evolution w.r.t. the solute density or the external field (inertial polarization charges).
• eom: If PCMTDLevel=eom, solvent polarization charges evolves following an equation of motion, generalizing 'neq' propagation. The equation of motion used here depends on the value of PCMEpsilonModel.
PCMTessMinDistance
Section: Hamiltonian::PCM
Type: float
Default: 0.1
Minimum distance between tesserae. Any two tesserae having smaller distance in the starting tesselation will be merged together.
PCMTessSubdivider
Section: Hamiltonian::PCM
Type: integer
Default: 1
Allows to subdivide further each tessera refining the discretization of the cavity tesselation. Can take only two values, 1 or 4. 1 corresponds to 60 tesserae per sphere, while 4 corresponds to 240 tesserae per sphere.
PCMUpdateIter
Section: Hamiltonian::PCM
Type: integer
Default: 1
Defines how often the PCM potential is updated during time propagation.
Section: Hamiltonian::PCM
Type: integer
Default: pcm_vdw_optimized
This variable selects which van der Waals radius will be used to generate the solvent cavity.
Options:
• pcm_vdw_optimized: Use the van der Waals radius optimized by Stefan Grimme in J. Comput. Chem. 27: 1787-1799, 2006, except for C, N and O, reported in J. Chem. Phys. 120, 3893 (2004).
• pcm_vdw_species: The vdW radii are set from the share/pseudopotentials/elements file. These values are obtained from Alvarez S., Dalton Trans., 2013, 42, 8617-8636. Values can be changed in the Species block.
## Hamiltonian::Poisson
AlphaFMM
Section: Hamiltonian::Poisson
Type: float
Default: 0.291262136
Dimensionless parameter for the correction of the self-interaction of the electrostatic Hartree potential, when using PoissonSolver = FMM.
Octopus represents charge density on a real-space grid, each point containing a value $$\rho$$ corresponding to the charge density in the cell centered in such point. Therefore, the integral for the Hartree potential at point $$i$$, $$V_H(i)$$, can be reduced to a summation:
$$V_H(i) = \frac{\Omega}{4\pi\varepsilon_0} \sum_{i \neq j} \frac{\rho(\vec{r}(j))}{|\vec{r}(j) - \vec{r}(i)|} + V_{self.int.}(i)$$ where $$\Omega$$ is the volume element of the mesh, and $$\vec{r}(j)$$ is the position of the point $$j$$. The $$V_{self.int.}(i)$$ corresponds to the integral over the cell centered on the point $$i$$ that is necessary to calculate the Hartree potential at point $$i$$:
$$V_{self.int.}(i)=\frac{1}{4\pi\varepsilon_0} \int_{\Omega(i)}d\vec{r} \frac{\rho(\vec{r}(i))}{|\vec{r}-\vec{r}(i)|}$$
In the FMM version implemented into Octopus, a correction method for $$V_H(i)$$ is used (see García-Risueño et al., J. Comp. Chem. 35, 427 (2014)). This method defines cells neighbouring cell $$i$$, which have volume $$\Omega(i)/8$$ (in 3D) and charge density obtained by interpolation. In the calculation of $$V_H(i)$$, in order to avoid double counting of charge, and to cancel part of the errors arising from considering the distances constant in the summation above, a term $$-\alpha_{FMM}V_{self.int.}(i)$$ is added to the summation (see the paper for the explicit formulae).
DeltaEFMM
Section: Hamiltonian::Poisson
Type: float
Default: 0.0001
Dimensionless parameter for relative convergence of PoissonSolver = FMM. Sets energy error bound. Strong inhomogeneous systems may violate the error bound. For inhomogeneous systems we have an error-controlled sequential version available (from Ivo Kabadshow).
Our implementation of FMM (based on H. Dachsel, J. Chem. Phys. 131, 244102 (2009)) can keep the error of the Hartree energy below an arbitrary bound. The quotient of the value chosen for the maximum error in the Hartree energy and the value of the Hartree energy is DeltaEFMM.
DressedOrbitals
Section: Hamiltonian::Poisson
Type: logical
Default: false
Allows for the calculation of coupled elecron-photon problems by applying the dressed orbital approach. Details can be found in https://arxiv.org/abs/1812.05562 At the moment, N electrons in d (<=3) spatial dimensions, coupled to one photon mode can be described. The photon mode is included by raising the orbital dimension to d+1 and changing the particle interaction kernel and the local potential, where the former is included automatically, but the latter needs to by added by hand as a user_defined_potential! Coordinate 1-d: electron; coordinate d+1: photon.
Poisson1DSoftCoulombParam
Section: Hamiltonian::Poisson
Type: float
Default: 1.0 bohr
When Dimensions = 1, to prevent divergence, the Coulomb interaction treated by the Poisson solver is not $$1/r$$ but $$1/\sqrt{a^2 + r^2}$$, where this variable sets the value of $$a$$.
Section: Hamiltonian::Poisson
Type: float
When PoissonSolver = fft and PoissonFFTKernel is neither multipole_corrections nor fft_nocut, this variable controls the distance after which the electron-electron interaction goes to zero. A warning will be written if the value is too large and will cause spurious interactions between images. The default is half of the FFT box max dimension in a finite direction.
PoissonFFTKernel
Section: Hamiltonian::Poisson
Type: integer
Defines which kernel is used to impose the correct boundary conditions when using FFTs to solve the Poisson equation. The default is selected depending on the dimensionality and periodicity of the system:
In 1D, spherical if finite, fft_nocut if periodic.
In 2D, spherical if finite, cylindrical if 1D-periodic, fft_nocut if 2D-periodic.
In 3D, spherical if finite, cylindrical if 1D-periodic, planar if 2D-periodic, fft_nocut if 3D-periodic. See C. A. Rozzi et al., Phys. Rev. B 73, 205119 (2006) for 3D implementation and A. Castro et al., Phys. Rev. B 80, 033102 (2009) for 2D implementation.
Options:
• spherical: FFTs using spherical cutoff (in 2D or 3D).
• cylindrical: FFTs using cylindrical cutoff (in 2D or 3D).
• planar: FFTs using planar cutoff (in 3D).
• fft_nocut: FFTs without using a cutoff (for fully periodic systems).
• multipole_correction: The boundary conditions are imposed by using a multipole expansion. Only appropriate for finite systems. Further specification occurs with variables PoissonSolverBoundaries and PoissonSolverMaxMultipole.
PoissonSolver
Section: Hamiltonian::Poisson
Type: integer
Defines which method to use to solve the Poisson equation. Some incompatibilities apply depending on dimensionality, periodicity, etc. For a comparison of the accuracy and performance of the methods in Octopus, see P Garcia-Risueño, J Alberdi-Rodriguez et al., J. Comp. Chem. 35, 427-444 (2014) or arXiV. Defaults:
1D and 2D: fft.
3D: cg_corrected if curvilinear, isf if not periodic, fft if periodic.
Dressed orbitals: direct_sum.
Options:
• direct_sum: Direct evaluation of the Hartree potential (only for finite systems).
• FMM: (Experimental) Fast multipole method. Requires FMM library.
• NoPoisson: Do not use a Poisson solver at all.
• fft: The Poisson equation is solved using FFTs. A cutoff technique for the Poisson kernel is selected so the proper boundary conditions are imposed according to the periodicity of the system. This can be overridden by the PoissonFFTKernel variable. To choose the FFT library use FFTLibrary
• psolver: Solver based on Interpolating Scaling Functions as implemented in the PSolver library. Parallelization in k-points requires PoissonSolverPSolverParallelData = no. Requires the PSolver external library.
• poke: (Experimental) Solver from the Poke library.
• cg: Conjugate gradients (only for finite systems).
• cg_corrected: Conjugate gradients, corrected for boundary conditions (only for finite systems).
• multigrid: Multigrid method (only for finite systems).
• isf: Interpolating Scaling Functions Poisson solver (only for finite systems).
PoissonSolverBoundaries
Section: Hamiltonian::Poisson
Type: integer
Default: multipole
For finite systems, some Poisson solvers (multigrid, cg_corrected, and fft with PoissonFFTKernel = multipole_correction) require the calculation of the boundary conditions with an auxiliary method. This variable selects that method.
Options:
• multipole: A multipole expansion of the density is used to approximate the potential on the boundaries.
• exact: An exact integration of the Poisson equation is done over the boundaries. This option is experimental, and not implemented for domain parallelization.
PoissonSolverMaxIter
Section: Hamiltonian::Poisson
Type: integer
Default: 500
The maximum number of iterations for conjugate-gradient Poisson solvers.
PoissonSolverMaxMultipole
Section: Hamiltonian::Poisson
Type: integer
Order of the multipolar expansion for boundary corrections.
The Poisson solvers multigrid, cg, and cg_corrected (and fft with PoissonFFTKernel = multipole_correction) do a multipolar expansion of the given charge density, such that $$\rho = \rho_{multip.expansion}+\Delta \rho$$. The Hartree potential due to the $$\rho_{multip.expansion}$$ is calculated analytically, while the Hartree potential due to $$\Delta \rho$$ is calculated with either a multigrid or cg solver. The order of the multipolar expansion is set by this variable.
Default is 4 for PoissonSolver = cg_corrected and multigrid, and 2 for fft with PoissonFFTKernel = multipole_correction.
PoissonSolverNodes
Section: Hamiltonian::Poisson
Type: integer
Default: 0
How many nodes to use to solve the Poisson equation. A value of 0, the default, implies that all available nodes are used.
PoissonSolverThreshold
Section: Hamiltonian::Poisson
Type: float
Default: 1e-6
The tolerance for the Poisson solution, used by the cg, cg_corrected, and multigrid solvers.
PoissonTestPeriodicThreshold
Section: Hamiltonian::Poisson
Type: float
Default: 1e-5
This threshold determines the accuracy of the periodic copies of the Gaussian charge distribution that are taken into account when computing the analytical solution for periodic systems. Be aware that the default leads to good results for systems that are periodic in 1D - for 3D it is very costly because of the large number of copies needed.
## Hamiltonian::Poisson::Multigrid
PoissonSolverMGMaxCycles
Section: Hamiltonian::Poisson::Multigrid
Type: integer
Default: 60
Maximum number of multigrid cycles that are performed if convergence is not achieved.
PoissonSolverMGPostsmoothingSteps
Section: Hamiltonian::Poisson::Multigrid
Type: integer
Default: 4
Number of Gauss-Seidel smoothing steps after coarse-level correction in the multigrid Poisson solver.
PoissonSolverMGPresmoothingSteps
Section: Hamiltonian::Poisson::Multigrid
Type: integer
Default: 1
Number of Gauss-Seidel smoothing steps before coarse-level correction in the multigrid Poisson solver.
PoissonSolverMGRelaxationFactor
Section: Hamiltonian::Poisson::Multigrid
Type: float
Relaxation factor of the relaxation operator used for the multigrid method. This is mainly for debugging, since overrelaxation does not help in a multigrid scheme. The default is 1.0, except 0.6666 for the gauss_jacobi method.
PoissonSolverMGRelaxationMethod
Section: Hamiltonian::Poisson::Multigrid
Type: integer
Method used to solve the linear system approximately in each grid for the multigrid procedure that solves Poisson equation. Default is gauss_seidel, unless curvilinear coordinates are used, in which case the default is gauss_jacobi.
Options:
• gauss_seidel: Gauss-Seidel.
• gauss_jacobi: Gauss-Jacobi.
• gauss_jacobi2: Alternative implementation of Gauss-Jacobi.
PoissonSolverMGRestrictionMethod
Section: Hamiltonian::Poisson::Multigrid
Type: integer
Default: fullweight
Method used from fine-to-coarse grid transfer.
Options:
• injection: Injection
• fullweight: Fullweight restriction
## Hamiltonian::Poisson::PSolver
PoissonSolverPSolverParallelData
Section: Hamiltonian::Poisson::PSolver
Type: logical
Default: yes
Indicates whether data is partitioned within the PSolver library. If data is distributed among processes, Octopus uses parallel data-structures and, thus, less memory. If "yes", data is parallelized. The z-axis of the input vector is split among the MPI processes. If "no", entire input and output vector is saved in all the MPI processes. If k-points parallelization is used, "no" must be selected.
## Hamiltonian::XC
DFTULevel
Section: Hamiltonian::XC
Type: integer
Default: no
This variable selects which DFT+U expression is added to the Hamiltonian.
Options:
• dft_u_none: No +U term is not applied.
• dft_u_empirical: An empiricial Hubbard U is added on the orbitals specified in the block species with hubbard_l and hubbard_u
• dft_u_acbn0: Octopus determines the effective U term using the ACBN0 functional as defined in PRX 5, 011006 (2015)
HFSingularity
Section: Hamiltonian::XC
Type: integer
Default: general
(Experimental) This variable selects the method used for the treatment of the singularity of the Coulomb potential in Hatree-Fock and hybrid-functional DFT calculations. This shoulbe be only applied for periodic systems and is only used for FFT kernels of the Poisson solvers.
Options:
• none: The singularity is replaced by zero.
• general: The general treatment of the singularity, as described in Carrier et al, PRB 75 205126 (2007). This is the default option
• fcc: The treatment of the singulariy as described in Gygi and Baldereschi, PRB 34, 4405 (1986). This is formaly valid for cubic systems only.
• spherical_bz: The divergence in q=0 is treated analytically assuming a spherical Brillouin zone
HFSingularityNk
Section: Hamiltonian::XC
Type: integer
Default: 60
Number of k-point used (total number of k-points) is (2*Nk+1)^3) in the numerical integration of the auxiliary function f(q). See PRB 75, 205126 (2007) for more details. Only for HFSingularity=general.
HFSingularityNsteps
Section: Hamiltonian::XC
Type: integer
Default: 7
Number of grid refinement steps in the numerical integration of the auxiliary function f(q). See PRB 75, 205126 (2007) for more details. Only for HFSingularity=general.
Interaction1D
Section: Hamiltonian::XC
Type: integer
Default: interaction_soft_coulomb
When running in 1D, one has to soften the Coulomb interaction. This softening is not unique, and several possibilities exist in the literature.
Options:
• interaction_exp_screened: Exponentially screened Coulomb interaction. See, e.g., M Casula, S Sorella, and G Senatore, Phys. Rev. B 74, 245427 (2006).
• interaction_soft_coulomb: Soft Coulomb interaction of the form $$1/\sqrt{x^2 + \alpha^2}$$.
Interaction1DScreening
Section: Hamiltonian::XC
Type: float
Default: 1.0
Defines the screening parameter $$\alpha$$ of the softened Coulomb interaction when running in 1D.
KLIPhotonCOC
Section: Hamiltonian::XC
Type: logical
Default: .false.
Activate the center of charge translation of the electric dipole operator which should avoid the dependence of the photon KLI on an permanent dipole.
OEPLevel
Section: Hamiltonian::XC
Type: integer
Default: oep_kli
At what level shall Octopus handle the optimized effective potential (OEP) equation.
Options:
• oep_none: Do not solve OEP equation.
• oep_kli: Krieger-Li-Iafrate (KLI) approximation. For spinors, the iterative solution is controlled by the variables in section Linear Response::Solver, and the default for LRMaximumIter is set to 50. Ref: JB Krieger, Y Li, GJ Iafrate, Phys. Lett. A 146, 256 (1990).
• oep_full: (Experimental) Full solution of OEP equation using the Sternheimer approach. The linear solver will be controlled by the variables in section Linear Response::Solver, and the iterations for OEP by Linear Response::SCF in LR calculations and variable OEPMixing. Note that default for LRMaximumIter is set to 10. Ref: S. Kuemmel and J. Perdew, Phys. Rev. Lett. 90, 043004 (2003).
OEPMixing
Section: Hamiltonian::XC
Type: float
Default: 1.0
The linear mixing factor used to solve the Sternheimer equation in the full OEP procedure.
OEPMixingScheme
Section: Hamiltonian::XC
Type: integer
Default: 1.0
Different Mixing Schemes are possible
Options:
• OEP_MIXING_SCHEME_CONST: Use a constant Reference: S. Kuemmel and J. Perdew, Phys. Rev. Lett. 90, 4, 043004 (2003)
• OEP_MIXING_SCHEME_BB: Use the Barzilai-Borwein (BB) Method Reference: T. W. Hollins, S. J. Clark, K. Refson, and N. I. Gidopoulos, Phys. Rev. B 85, 235126 (2012)
• OEP_MIXING_SCHEME_DENS: Use the inverse of the electron density Reference: S. Kuemmel and J. Perdew, Phys. Rev. B 68, 035103 (2003)
PhotonModes
Section: Hamiltonian::XC
Type: block
Each line of the block should specify one photon mode. The syntax is the following:
%PhotonModes omega1 | lambda1| PolX1 | PolY1 | PolZ1 ... %
The first column is the mode frequency, in units of energy. The second column is the coupling strength, in units of energy. The remaining columns specify the polarization direction of the mode. If the polarization vector should be normalized to one. If that is not the case the code will normalize it.
SICCorrection
Section: Hamiltonian::XC
Type: integer
Default: sic_none
This variable controls which form of self-interaction correction to use. Note that this correction will be applied to the functional chosen by XCFunctional.
Options:
• sic_none: No self-interaction correction.
• sic_pz: Perdew-Zunger SIC, handled by the OEP technique.
• sic_amaldi: Amaldi correction term.
• sic_adsic: Average-density SIC. C. Legrand et al., J. Phys. B 35, 1115 (2002).
VDWCorrection
Section: Hamiltonian::XC
Type: integer
Default: no
(Experimental) This variable selects which van der Waals correction to apply to the correlation functional.
Options:
• none: No correction is applied.
• vdw_ts: The scheme of Tkatchenko and Scheffler, Phys. Rev. Lett. 102 073005 (2009).
• vdw_d3: The DFT-D3 scheme of S. Grimme, J. Antony, S. Ehrlich, and S. Krieg, J. Chem. Phys. 132, 154104 (2010).
VDWD3Functional
Section: Hamiltonian::XC
Type: string
(Experimental) You can use this variable to override the parametrization used by the DFT-D3 van deer Waals correction. Normally you need not set this variable, as the proper value will be selected by Octopus (if available).
This variable takes a string value, the valid values can be found in the source file 'external_libs/dftd3/core.f90'. For example you can use:
VDWD3Functional = 'pbe'
VDWSelfConsistent
Section: Hamiltonian::XC
Type: logical
Default: yes
This variable controls whether the VDW correction is applied self-consistently, the default, or just as a correction to the total energy. This option only works with vdw_ts.
VDW_TS_cutoff
Section: Hamiltonian::XC
Type: float
Default: 10.0
Set the value of the cutoff (unit of length) for the VDW correction in periodic system in the Tkatchenko and Scheffler (vdw_ts) scheme only.
VDW_TS_damping
Section: Hamiltonian::XC
Type: float
Default: 20.0
Set the value of the damping function (in unit of 1/length) steepness for the VDW correction in the Tkatchenko-Scheffler scheme. See Equation (12) of Phys. Rev. Lett. 102 073005 (2009).
VDW_TS_sr
Section: Hamiltonian::XC
Type: float
Default: 0.94
Set the value of the sr parameter in the damping function of the VDW correction in the Tkatchenko-Scheffler scheme. See Equation (12) of Phys. Rev. Lett. 102 073005 (2009). This parameter depends on the xc functional used. The default value is 0.94, which holds for PBE. For PBE0, a value of 0.96 should be used.
XCFunctional
Section: Hamiltonian::XC
Type: integer
Defines the exchange and correlation functionals to be used, specified as a sum of an exchange functional and a correlation functional, or a single exchange-correlation functional (e.g. hyb_gga_xc_pbeh). For more information on the functionals, see Libxc documentation. The list provided here is from libxc 4; if you have linked against a different libxc version, you may have a somewhat different set of available functionals. Note that kinetic-energy functionals are not supported.
The default functional will be selected by Octopus to be consistent with the pseudopotentials you are using. If you are not using pseudopotentials, Octopus cannot determine the functional used to generate the pseudopotential, or the pseudopotential functionals are inconsistent, Octopus will use the following defaults:
1D: lda_x_1d + lda_c_1d_csc
2D: lda_x_2d + lda_c_2d_amgb
3D: lda_x + lda_c_pz_mod
Options:
• none: Exchange and correlation set to zero (not from libxc).
• gga_c_tca: Tognetti, Cortona, Adamo
• lda_c_pz_mod: Perdew & Zunger (Modified)
• gga_x_pbe: Perdew, Burke & Ernzerhof exchange
• gga_x_pbe_r: Perdew, Burke & Ernzerhof exchange (revised)
• gga_x_b86: Becke 86 Xalpha,beta,gamma
• gga_x_herman: Herman et al original GGA
• gga_x_b86_mgc: Becke 86 Xalpha,beta,gamma (with mod. grad. correction)
• gga_x_b88: Becke 88
• gga_x_g96: Gill 96
• gga_x_pw86: Perdew & Wang 86
• gga_x_pw91: Perdew & Wang 91
• lda_c_ob_pz: Ortiz & Ballone (PZ)
• gga_x_optx: Handy & Cohen OPTX 01
• gga_x_dk87_r1: dePristo & Kress 87 (version R1)
• gga_x_dk87_r2: dePristo & Kress 87 (version R2)
• gga_x_lg93: Lacks & Gordon 93
• gga_x_ft97_a: Filatov & Thiel 97 (version A)
• gga_x_ft97_b: Filatov & Thiel 97 (version B)
• gga_x_pbe_sol: Perdew, Burke & Ernzerhof exchange (solids)
• gga_x_rpbe: Hammer, Hansen & Norskov (PBE-like)
• gga_x_wc: Wu & Cohen
• gga_x_mpw91: Modified form of PW91 by Adamo & Barone
• lda_c_pw: Perdew & Wang
• gga_x_am05: Armiento & Mattsson 05 exchange
• gga_x_pbea: Madsen (PBE-like)
• gga_x_mpbe: Adamo & Barone modification to PBE
• gga_x_xpbe: xPBE reparametrization by Xu & Goddard
• gga_x_2d_b86_mgc: Becke 86 MGC for 2D systems
• gga_x_bayesian: Bayesian best fit for the enhancement factor
• gga_x_pbe_jsjr: JSJR reparametrization by Pedroza, Silva & Capelle
• gga_x_2d_b88: Becke 88 in 2D
• gga_x_2d_b86: Becke 86 Xalpha,beta,gamma
• gga_x_2d_pbe: Perdew, Burke & Ernzerhof exchange in 2D
• gga_c_pbe: Perdew, Burke & Ernzerhof correlation
• lda_c_pw_mod: Perdew & Wang (Modified)
• gga_c_lyp: Lee, Yang & Parr
• gga_c_p86: Perdew 86
• gga_c_pbe_sol: Perdew, Burke & Ernzerhof correlation SOL
• gga_c_pw91: Perdew & Wang 91
• gga_c_am05: Armiento & Mattsson 05 correlation
• gga_c_xpbe: xPBE reparametrization by Xu & Goddard
• gga_c_lm: Langreth and Mehl correlation
• gga_c_pbe_jrgx: JRGX reparametrization by Pedroza, Silva & Capelle
• gga_x_optb88_vdw: Becke 88 reoptimized to be used with vdW functional of Dion et al
• lda_c_ob_pw: Ortiz & Ballone (PW)
• gga_x_pbek1_vdw: PBE reparametrization for vdW
• gga_x_optpbe_vdw: PBE reparametrization for vdW
• gga_x_rge2: Regularized PBE
• gga_c_rge2: Regularized PBE
• gga_x_rpw86: refitted Perdew & Wang 86
• gga_x_kt1: Exchange part of Keal and Tozer version 1
• gga_xc_kt2: Keal and Tozer version 2
• gga_c_wl: Wilson & Levy
• gga_c_wi: Wilson & Ivanov
• gga_x_mb88: Modified Becke 88 for proton transfer
• lda_c_2d_amgb: Attaccalite et al
• gga_x_sogga: Second-order generalized gradient approximation
• gga_x_sogga11: Second-order generalized gradient approximation 2011
• gga_c_sogga11: Second-order generalized gradient approximation 2011
• gga_c_wi0: Wilson & Ivanov initial version
• gga_xc_th1: Tozer and Handy v. 1
• gga_xc_th2: Tozer and Handy v. 2
• gga_xc_th3: Tozer and Handy v. 3
• gga_xc_th4: Tozer and Handy v. 4
• gga_x_c09x: C09x to be used with the VdW of Rutgers-Chalmers
• gga_c_sogga11_x: To be used with HYB_GGA_X_SOGGA11_X
• lda_c_2d_prm: Pittalis, Rasanen & Marques correlation in 2D
• gga_x_lb: van Leeuwen & Baerends
• gga_xc_hcth_93: HCTH functional fitted to 93 molecules
• gga_xc_hcth_120: HCTH functional fitted to 120 molecules
• gga_xc_hcth_147: HCTH functional fitted to 147 molecules
• gga_xc_hcth_407: HCTH functional fitted to 407 molecules
• gga_xc_edf1: Empirical functionals from Adamson, Gill, and Pople
• gga_xc_xlyp: XLYP functional
• gga_xc_kt1: Keal and Tozer version 1
• gga_xc_b97_d: Grimme functional to be used with C6 vdW term
• lda_c_vbh: von Barth & Hedin
• gga_xc_pbe1w: Functionals fitted for water
• gga_xc_mpwlyp1w: Functionals fitted for water
• gga_xc_pbelyp1w: Functionals fitted for water
• lda_c_1d_csc: Casula, Sorella, and Senatore 1D correlation
• gga_x_lbm: van Leeuwen & Baerends modified
• gga_x_ol2: Exchange form based on Ou-Yang and Levy v.2
• gga_x_apbe: mu fixed from the semiclassical neutral atom
• gga_c_apbe: mu fixed from the semiclassical neutral atom
• gga_x_htbs: Haas, Tran, Blaha, and Schwarz
• gga_x_airy: Constantin et al based on the Airy gas
• gga_x_lag: Local Airy Gas
• gga_xc_mohlyp: Functional for organometallic chemistry
• gga_xc_mohlyp2: Functional for barrier heights
• gga_xc_th_fl: Tozer and Handy v. FL
• gga_xc_th_fc: Tozer and Handy v. FC
• gga_xc_th_fcfo: Tozer and Handy v. FCFO
• gga_xc_th_fco: Tozer and Handy v. FCO
• lda_x_2d: Exchange in 2D
• lda_x: Exchange
• gga_c_optc: Optimized correlation functional of Cohen and Handy
• lda_xc_teter93: Teter 93 parametrization
• lda_c_wigner: Wigner parametrization
• mgga_x_lta: Local tau approximation of Ernzerhof & Scuseria
• mgga_x_tpss: Tao, Perdew, Staroverov & Scuseria exchange
• mgga_x_m06_l: M06-L exchange functional from Minnesota
• mgga_x_gvt4: GVT4 from Van Voorhis and Scuseria
• mgga_x_tau_hcth: tau-HCTH from Boese and Handy
• mgga_x_br89: Becke-Roussel 89
• mgga_x_bj06: Becke & Johnson correction to Becke-Roussel 89
• mgga_x_tb09: Tran & Blaha correction to Becke & Johnson
• mgga_x_rpp09: Rasanen, Pittalis, and Proetto correction to Becke & Johnson
• mgga_x_2d_prhg07: Pittalis, Rasanen, Helbig, Gross Exchange Functional
• mgga_x_2d_prhg07_prp10: PRGH07 with PRP10 correction
• mgga_x_revtpss: revised Tao, Perdew, Staroverov & Scuseria exchange
• mgga_x_pkzb: Perdew, Kurth, Zupan, and Blaha
• lda_x_1d: Exchange in 1D
• lda_c_ml1: Modified LSD (version 1) of Proynov and Salahub
• mgga_x_ms0: MS exchange of Sun, Xiao, and Ruzsinszky
• mgga_x_ms1: MS1 exchange of Sun, et al
• mgga_x_ms2: MS2 exchange of Sun, et al
• hyb_mgga_x_ms2h: MS2 hybrid exchange of Sun, et al
• mgga_x_m11_l: M11-L exchange functional from Minnesota
• mgga_x_mn12_l: MN12-L exchange functional from Minnesota
• mgga_xc_cc06: Cancio and Chou 2006
• lda_c_ml2: Modified LSD (version 2) of Proynov and Salahub
• mgga_x_mk00: Exchange for accurate virtual orbital energies
• mgga_c_tpss: Tao, Perdew, Staroverov & Scuseria correlation
• mgga_c_vsxc: VSxc from Van Voorhis and Scuseria (correlation part)
• mgga_c_m06_l: M06-L correlation functional from Minnesota
• mgga_c_m06_hf: M06-HF correlation functional from Minnesota
• mgga_c_m06: M06 correlation functional from Minnesota
• mgga_c_m06_2x: M06-2X correlation functional from Minnesota
• mgga_c_m05: M05 correlation functional from Minnesota
• mgga_c_m05_2x: M05-2X correlation functional from Minnesota
• mgga_c_pkzb: Perdew, Kurth, Zupan, and Blaha
• mgga_c_bc95: Becke correlation 95
• lda_c_gombas: Gombas parametrization
• mgga_c_revtpss: revised TPSS correlation
• mgga_xc_tpsslyp1w: Functionals fitted for water
• mgga_x_mk00b: Exchange for accurate virtual orbital energies (v. B)
• mgga_x_bloc: functional with balanced localization
• mgga_x_modtpss: Modified Tao, Perdew, Staroverov & Scuseria exchange
• gga_c_pbeloc: Semilocal dynamical correlation
• mgga_c_tpssloc: Semilocal dynamical correlation
• hyb_mgga_x_mn12_sx: MN12-SX hybrid exchange functional from Minnesota
• mgga_x_mbeef: mBEEF exchange
• lda_c_pw_rpa: Perdew & Wang fit of the RPA
• mgga_x_mbeefvdw: mBEEF-vdW exchange
• mgga_xc_b97m_v: Mardirossian and Head-Gordon
• gga_xc_vv10: Vydrov and Van Voorhis
• mgga_x_mvs: MVS exchange of Sun, Perdew, and Ruzsinszky
• gga_c_pbefe: PBE for formation energies
• lda_xc_ksdt: Karasiev et al. parametrization
• lda_c_1d_loos: P-F Loos correlation LDA
• mgga_x_mn15_l: MN15-L exhange functional from Minnesota
• mgga_c_mn15_l: MN15-L correlation functional from Minnesota
• gga_c_op_pw91: one-parameter progressive functional (PW91 version)
• mgga_x_scan: SCAN exchange of Sun, Ruzsinszky, and Perdew
• hyb_mgga_x_scan0: SCAN hybrid exchange
• gga_x_pbefe: PBE for formation energies
• hyb_gga_xc_b97_1p: version of B97 by Cohen and Handy
• mgga_c_scan: SCAN correlation
• hyb_mgga_x_mn15: MN15 hybrid exchange functional from Minnesota
• mgga_c_mn15: MN15 correlation functional from Minnesota
• lda_c_rc04: Ragot-Cortona
• gga_x_cap: Correct Asymptotic Potential
• gga_x_eb88: Non-empirical (excogitated) B88 functional of Becke and Elliott
• gga_c_pbe_mol: Del Campo, Gazquez, Trickey and Vela (PBE-like)
• hyb_gga_xc_pbe_mol0: PBEmol0
• hyb_gga_xc_pbe_sol0: PBEsol0
• hyb_gga_xc_pbeb0: PBEbeta0
• hyb_gga_xc_pbe_molb0: PBEmolbeta0
• hyb_mgga_x_bmk: Boese-Martin for kinetics
• gga_c_bmk: Boese-Martin for kinetics
• lda_c_vwn_1: Vosko, Wilk, & Nusair (1)
• gga_c_tau_hcth: correlation part of tau-hcth
• hyb_mgga_x_tau_hcth: Hybrid version of tau-HCTH
• gga_c_hyb_tau_hcth: correlation part of hyb_tau-hcth
• mgga_x_b00: Becke 2000
• gga_x_beefvdw: BEEF-vdW exchange
• gga_xc_beefvdw: BEEF-vdW exchange-correlation
• lda_c_chachiyo: Chachiyo simple 2 parameter correlation
• mgga_xc_hle17: high local exchange 2017
• lda_c_lp96: Liu-Parr correlation
• hyb_gga_xc_pbe50: PBE0 with 50% exx
• lda_c_vwn_2: Vosko, Wilk, & Nusair (2)
• gga_x_pbetrans: Gradient-based interpolation between PBE and revPBE
• mgga_c_scan_rvv10: SCAN correlation + rVV10 correlation
• mgga_x_revm06_l: revised M06-L exchange functional from Minnesota
• mgga_c_revm06_l: Revised M06-L correlation functional from Minnesota
• hyb_mgga_x_m08_hx: M08-HX exchange functional from Minnesota
• hyb_mgga_x_m08_so: M08-SO exchange functional from Minnesota
• hyb_mgga_x_m11: M11 hybrid exchange functional from Minnesota
• gga_x_chachiyo: Chachiyo exchange
• lda_c_vwn_3: Vosko, Wilk, & Nusair (3)
• lda_c_rpa: Random Phase Approximation
• lda_c_vwn_4: Vosko, Wilk, & Nusair (4)
• gga_x_gam: GAM functional from Minnesota
• gga_c_gam: GAM functional from Minnesota
• gga_x_hcth_a: HCTH-A
• gga_x_ev93: Engel and Vosko
• hyb_mgga_x_dldf: Dispersionless Density Functional
• mgga_c_dldf: Dispersionless Density Functional
• gga_x_bcgp: Burke, Cancio, Gould, and Pittalis
• gga_c_bcgp: Burke, Cancio, Gould, and Pittalis
• lda_c_hl: Hedin & Lundqvist
• hyb_gga_xc_b3pw91: The original (ACM) hybrid of Becke
• hyb_gga_xc_b3lyp: The (in)famous B3LYP
• hyb_gga_xc_b3p86: Perdew 86 hybrid similar to B3PW91
• hyb_gga_xc_o3lyp: hybrid using the optx functional
• hyb_gga_xc_mpw1k: mixture of mPW91 and PW91 optimized for kinetics
• hyb_gga_xc_pbeh: aka PBE0 or PBE1PBE
• hyb_gga_xc_b97: Becke 97
• hyb_gga_xc_b97_1: Becke 97-1
• gga_x_lambda_oc2_n: lambda_OC2(N) version of PBE
• hyb_gga_xc_b97_2: Becke 97-2
• hyb_gga_xc_x3lyp: hybrid by Xu and Goddard
• hyb_gga_xc_b1wc: Becke 1-parameter mixture of WC and PBE
• hyb_gga_xc_b97_k: Boese-Martin for Kinetics
• hyb_gga_xc_b97_3: Becke 97-3
• hyb_gga_xc_mpw3pw: mixture with the mPW functional
• hyb_gga_xc_b1lyp: Becke 1-parameter mixture of B88 and LYP
• hyb_gga_xc_b1pw91: Becke 1-parameter mixture of B88 and PW91
• hyb_gga_xc_mpw1pw: Becke 1-parameter mixture of mPW91 and PW91
• hyb_gga_xc_mpw3lyp: mixture of mPW and LYP
• gga_x_b86_r: Revised Becke 86 Xalpha,beta,gamma (with mod. grad. correction)
• hyb_gga_xc_sb98_1a: Schmider-Becke 98 parameterization 1a
• mgga_xc_zlp: Zhao, Levy & Parr, Eq. (21)
• hyb_gga_xc_sb98_1b: Schmider-Becke 98 parameterization 1b
• hyb_gga_xc_sb98_1c: Schmider-Becke 98 parameterization 1c
• hyb_gga_xc_sb98_2a: Schmider-Becke 98 parameterization 2a
• hyb_gga_xc_sb98_2b: Schmider-Becke 98 parameterization 2b
• hyb_gga_xc_sb98_2c: Schmider-Becke 98 parameterization 2c
• hyb_gga_x_sogga11_x: Hybrid based on SOGGA11 form
• hyb_gga_xc_hse03: the 2003 version of the screened hybrid HSE
• hyb_gga_xc_hse06: the 2006 version of the screened hybrid HSE
• hyb_gga_xc_hjs_pbe: HJS hybrid screened exchange PBE version
• hyb_gga_xc_hjs_pbe_sol: HJS hybrid screened exchange PBE_SOL version
• lda_xc_zlp: Zhao, Levy & Parr, Eq. (20)
• hyb_gga_xc_hjs_b88: HJS hybrid screened exchange B88 version
• hyb_gga_xc_hjs_b97x: HJS hybrid screened exchange B97x version
• hyb_gga_xc_cam_b3lyp: CAM version of B3LYP
• hyb_gga_xc_tuned_cam_b3lyp: CAM version of B3LYP tuned for excitations
• hyb_gga_xc_bhandh: Becke half-and-half
• hyb_gga_xc_bhandhlyp: Becke half-and-half with B88 exchange
• hyb_gga_xc_mb3lyp_rc04: B3LYP with RC04 LDA
• hyb_mgga_x_m05: M05 hybrid exchange functional from Minnesota
• hyb_mgga_x_m05_2x: M05-2X hybrid exchange functional from Minnesota
• hyb_mgga_xc_b88b95: Mixture of B88 with BC95 (B1B95)
• hyb_mgga_xc_b86b95: Mixture of B86 with BC95
• hyb_mgga_xc_pw86b95: Mixture of PW86 with BC95
• hyb_mgga_xc_bb1k: Mixture of B88 with BC95 from Zhao and Truhlar
• hyb_mgga_x_m06_hf: M06-HF hybrid exchange functional from Minnesota
• hyb_mgga_xc_mpw1b95: Mixture of mPW91 with BC95 from Zhao and Truhlar
• hyb_mgga_xc_mpwb1k: Mixture of mPW91 with BC95 for kinetics
• hyb_mgga_xc_x1b95: Mixture of X with BC95
• hyb_mgga_xc_xb1k: Mixture of X with BC95 for kinetics
• hyb_mgga_x_m06: M06 hybrid exchange functional from Minnesota
• gga_x_lambda_ch_n: lambda_CH(N) version of PBE
• hyb_mgga_x_m06_2x: M06-2X hybrid exchange functional from Minnesota
• hyb_mgga_xc_pw6b95: Mixture of PW91 with BC95 from Zhao and Truhlar
• hyb_mgga_xc_pwb6k: Mixture of PW91 with BC95 from Zhao and Truhlar for kinetics
• hyb_gga_xc_mpwlyp1m: MPW with 1 par. for metals/LYP
• hyb_gga_xc_revb3lyp: Revised B3LYP
• hyb_gga_xc_camy_blyp: BLYP with yukawa screening
• hyb_gga_xc_pbe0_13: PBE0-1/3
• hyb_mgga_xc_tpssh: TPSS hybrid
• hyb_mgga_xc_revtpssh: revTPSS hybrid
• hyb_gga_xc_b3lyps: B3LYP* functional
• gga_x_lambda_lo_n: lambda_LO(N) version of PBE
• hyb_gga_xc_wb97: Chai and Head-Gordon
• hyb_gga_xc_wb97x: Chai and Head-Gordon
• hyb_gga_xc_lrc_wpbeh: Long-range corrected functional by Rorhdanz et al
• hyb_gga_xc_wb97x_v: Mardirossian and Head-Gordon
• hyb_gga_xc_lcy_pbe: PBE with yukawa screening
• hyb_gga_xc_lcy_blyp: BLYP with yukawa screening
• hyb_gga_xc_lc_vv10: Vydrov and Van Voorhis
• gga_x_hjs_b88_v2: HJS screened exchange corrected B88 version
• hyb_gga_xc_camy_b3lyp: B3LYP with Yukawa screening
• gga_c_q2d: Chiodo et al
• hyb_gga_xc_wb97x_d: Chai and Head-Gordon
• hyb_gga_xc_hpbeint: hPBEint
• hyb_gga_xc_lrc_wpbe: Long-range corrected functional by Rorhdanz et al
• hyb_mgga_x_mvsh: MVSh hybrid
• hyb_gga_xc_b3lyp5: B3LYP with VWN functional 5 instead of RPA
• hyb_gga_xc_edf2: Empirical functional from Lin, George and Gill
• hyb_gga_xc_cap0: Correct Asymptotic Potential hybrid
• hyb_gga_xc_lc_wpbe: Long-range corrected functional by Vydrov and Scuseria
• hyb_gga_xc_hse12: HSE12 by Moussa, Schultz and Chelikowsky
• hyb_gga_xc_hse12s: Short-range HSE12 by Moussa, Schultz, and Chelikowsky
• hyb_gga_xc_hse_sol: HSEsol functional by Schimka, Harl, and Kresse
• hyb_gga_xc_cam_qtp_01: CAM-QTP(01): CAM-B3LYP retuned using ionization potentials of water
• hyb_gga_xc_mpw1lyp: Becke 1-parameter mixture of mPW91 and LYP
• hyb_gga_xc_mpw1pbe: Becke 1-parameter mixture of mPW91 and PBE
• hyb_gga_xc_kmlyp: Kang-Musgrave hybrid
• gga_x_q2d: Chiodo et al
• gga_x_pbe_mol: Del Campo, Gazquez, Trickey and Vela (PBE-like)
• lda_c_gl: Gunnarson & Lundqvist
• gga_x_wpbeh: short-range version of the PBE
• gga_x_hjs_pbe: HJS screened exchange PBE version
• gga_x_hjs_pbe_sol: HJS screened exchange PBE_SOL version
• gga_x_hjs_b88: HJS screened exchange B88 version
• gga_x_hjs_b97x: HJS screened exchange B97x version
• gga_x_ityh: short-range recipe for exchange GGA functionals
• gga_x_sfat: short-range recipe for exchange GGA functionals
• hyb_mgga_xc_wb97m_v: Mardirossian and Head-Gordon
• lda_x_rel: Relativistic exchange
• gga_x_sg4: Semiclassical GGA at fourth order
• gga_c_sg4: Semiclassical GGA at fourth order
• gga_x_gg99: Gilbert and Gill 1999
• lda_xc_1d_ehwlrg_1: LDA constructed from slab-like systems of 1 electron
• lda_xc_1d_ehwlrg_2: LDA constructed from slab-like systems of 2 electrons
• lda_xc_1d_ehwlrg_3: LDA constructed from slab-like systems of 3 electrons
• gga_x_pbepow: PBE power
• mgga_x_tm: Tao and Mo 2016
• mgga_x_vt84: meta-GGA version of VT{8,4} GGA
• mgga_x_sa_tpss: TPSS with correct surface asymptotics
• gga_x_kgg99: Gilbert and Gill 1999 (mixed)
• gga_xc_hle16: high local exchange 2016
• lda_x_erf: Attenuated exchange LDA (erf)
• lda_xc_lp_a: Lee-Parr reparametrization B
• lda_xc_lp_b: Lee-Parr reparametrization B
• lda_x_rae: Rae self-energy corrected exchange
• lda_c_mcweeny: McWeeny 76
• lda_c_br78: Brual & Rothstein 78
• gga_c_scan_e0: GGA component of SCAN
• lda_c_pk09: Proynov and Kong 2009
• gga_c_gapc: GapC
• gga_c_gaploc: Gaploc
• gga_c_zvpbeint: another spin-dependent correction to PBEint
• gga_c_zvpbesol: another spin-dependent correction to PBEsol
• gga_c_tm_lyp: Takkar and McCarthy reparametrization
• gga_c_tm_pbe: Thakkar and McCarthy reparametrization
• gga_c_w94: Wilson 94 (Eq. 25)
• mgga_c_kcis: Krieger, Chen, Iafrate, and Savin
• hyb_mgga_xc_b0kcis: Hybrid based on KCIS
• mgga_xc_lp90: Lee & Parr, Eq. (56)
• gga_c_cs1: A dynamical correlation functional
• hyb_mgga_xc_mpw1kcis: Modified Perdew-Wang + KCIS hybrid
• hyb_mgga_xc_mpwkcis1k: Modified Perdew-Wang + KCIS hybrid with more exact exchange
• hyb_mgga_xc_pbe1kcis: Perdew-Burke-Ernzerhof + KCIS hybrid
• hyb_mgga_xc_tpss1kcis: TPSS hybrid with KCIS correlation
• gga_x_ak13: Armiento & Kuemmel 2013
• gga_x_b88m: Becke 88 reoptimized to be used with mgga_c_tau1
• mgga_c_b88: Meta-GGA correlation by Becke
• hyb_gga_xc_b5050lyp: Like B3LYP but more exact exchange
• lda_c_ow_lyp: Wigner with corresponding LYP parameters
• lda_c_ow: Optimized Wigner
• mgga_x_gx: GX functional of Loos
• mgga_x_pbe_gx: PBE-GX functional of Loos
• lda_xc_gdsmfb: Groth et al. parametrization
• lda_c_gk72: Gordon and Kim 1972
• lda_c_karasiev: Karasiev reparameterization of Chachiyo
• mgga_x_revscan: revised SCAN
• mgga_c_revscan: revised SCAN correlation
• hyb_mgga_x_revscan0: revised SCAN hybrid exchange
• mgga_c_scan_vv10: SCAN correlation + VV10 correlation
• mgga_c_revscan_vv10: revised SCAN correlation
• mgga_x_br89_explicit: Becke-Roussel 89 with an explicit inversion of x(y)
• gga_x_lv_rpw86: Berland and Hyldgaard
• hyb_mgga_xc_b98: Becke 98
• gga_x_pbe_tca: PBE revised by Tognetti et al
• lda_c_xalpha: Slater Xalpha
• rdmft_xc_m: RDMFT Mueller functional (not from libxc).
• gga_x_pbeint: PBE for hybrid interfaces
• gga_c_zpbeint: spin-dependent gradient correction to PBEint
• gga_c_pbeint: PBE for hybrid interfaces
• gga_c_zpbesol: spin-dependent gradient correction to PBEsol
• mgga_xc_otpss_d: oTPSS_D functional of Goerigk and Grimme
• gga_xc_opbe_d: oPBE_D functional of Goerigk and Grimme
• gga_xc_opwlyp_d: oPWLYP-D functional of Goerigk and Grimme
• gga_xc_oblyp_d: oBLYP-D functional of Goerigk and Grimme
• gga_x_vmt84_ge: VMT{8,4} with constraint satisfaction with mu = mu_GE
• gga_x_vmt84_pbe: VMT{8,4} with constraint satisfaction with mu = mu_PBE
• lda_c_vwn: Vosko, Wilk, & Nusair (5)
• lda_xc_cmplx: Complex-scaled LDA exchange and correlation (not from libxc).
• pbe_xc_cmplx: Complex-scaled PBE exchange and correlation (not from libxc).
• lb94_xc_cmplx: Complex-scaled LB94 exchange and correlation (not from libxc).
• gga_x_vmt_ge: Vela, Medel, and Trickey with mu = mu_GE
• gga_x_vmt_pbe: Vela, Medel, and Trickey with mu = mu_PBE
• mgga_c_cs: Colle and Salvetti
• mgga_c_mn12_sx: MN12-SX correlation functional from Minnesota
• mgga_c_mn12_l: MN12-L correlation functional from Minnesota
• mgga_c_m11_l: M11-L correlation functional from Minnesota
• mgga_c_m11: M11 correlation functional from Minnesota
• mgga_c_m08_so: M08-SO correlation functional from Minnesota
• mgga_c_m08_hx: M08-HX correlation functional from Minnesota
• gga_c_n12_sx: N12-SX functional from Minnesota
• gga_c_n12: N12 functional from Minnesota
• lda_c_vwn_rpa: Vosko, Wilk, & Nusair (RPA)
• ks_inversion: Inversion of KS potential (not from libxc).
• hyb_gga_x_n12_sx: N12-SX functional from Minnesota
• gga_x_n12: N12 functional from Minnesota
• gga_c_regtpss: Regularized TPSS correlation (ex-VPBE)
• gga_c_op_xalpha: one-parameter progressive functional (XALPHA version)
• gga_c_op_g96: one-parameter progressive functional (G96 version)
• gga_c_op_pbe: one-parameter progressive functional (PBE version)
• gga_c_op_b88: one-parameter progressive functional (B88 version)
• gga_c_ft97: Filatov & Thiel correlation
• gga_c_spbe: PBE correlation to be used with the SSB exchange
• lda_c_pz: Perdew & Zunger
• oep_x: OEP: Exact exchange (not from libxc).
• slater_x: Slater approximation to the exact exchange (not from libxc).
• fbe_x: Exchange functional based on the force balance equation (not from libxc).
• gga_x_ssb_sw: Swart, Sola and Bickelhaupt correction to PBE
• xc_half_hartree: Half-Hartree exchange for two electrons (supports complex scaling) (not from libxc). Defined by $$v_{xc}(r) = v_H(r) / 2$$.
• vdw_c_vdwdf: van der Waals density functional vdW-DF correlation from libvdwxc (not from libxc). Use with gga_x_pbe_r.
• vdw_c_vdwdf2: van der Waals density functional vdW-DF2 correlation from libvdwxc (not from libxc). Use with gga_x_rpw86.
• gga_x_ssb: Swart, Sola and Bickelhaupt
• vdw_c_vdwdfcx: van der Waals density functional vdW-DF-cx correlation from libvdwxc (not from libxc). Use with gga_x_lv_rpw86.
• hyb_gga_xc_mvorb_hse06: Density-based mixing parameter of HSE06 (not from libxc).
• hyb_gga_xc_mvorb_pbeh: Density-based mixing parameter of PBEH (not from libxc). At the moment this is not supported for libxc >= 4.0.
• gga_x_ssb_d: Swart, Sola and Bickelhaupt dispersion
• gga_xc_hcth_407p: HCTH/407+
• gga_xc_hcth_p76: HCTH p=7/6
• gga_xc_hcth_p14: HCTH p=1/4
• gga_xc_b97_gga1: Becke 97 GGA-1
• gga_c_hcth_a: HCTH-A
• gga_x_bpccac: BPCCAC (GRAC for the energy)
• gga_c_revtca: Tognetti, Cortona, Adamo (revised)
XCKernel
Section: Hamiltonian::XC
Type: integer
Defines the exchange-correlation kernel. Only LDA kernels are available currently. The options are the same as XCFunctional. Note: the kernel is only needed for Casida, Sternheimer, or optimal-control calculations. Defaults:
1D: lda_x_1d + lda_c_1d_csc
2D: lda_x_2d + lda_c_2d_amgb
3D: lda_x + lda_c_pz_mod
Options:
• xc_functional: The same functional defined by XCFunctional.
XCKernelLRCAlpha
Section: Hamiltonian::XC
Type: float
Default: 0.0
Set to a non-zero value to add a long-range correction for solids to the kernel. This is the $$\alpha$$ parameter defined in S. Botti et al., Phys. Rev. B 69, 155112 (2004). The $$\Gamma = \Gamma = 0$$ term $$-\alpha/q^2$$ is taken into account by introducing an additional pole to the polarizability (see R. Stubner et al., Phys. Rev. B 70, 245119 (2004)). The rest of the terms are included by multiplying the Hartree term by $$1 - \alpha / 4 \pi$$. The use of non-zero $$\alpha$$ in combination with HamiltonianVariation = V_ext_only corresponds to account of only the $$\Gamma = \Gamma = 0$$ term. Applicable only to isotropic systems. (Experimental)
XCUseGaugeIndependentKED
Section: Hamiltonian::XC
Type: logical
Default: yes
If true, when evaluating the XC functional, a term including the (paramagnetic or total) current is added to the kinetic-energy density such as to make it gauge-independent. Applies only to meta-GGA (and hybrid meta-GGA) functionals.
Xalpha
Section: Hamiltonian::XC
Type: float
Default: 1.0
The parameter of the Slater X$$\alpha$$ functional. Applies only for XCFunctional = xc_lda_c_xalpha.
libvdwxcDebug
Section: Hamiltonian::XC
Type: logical
Dump libvdwxc inputs and outputs to files.
libvdwxcMode
Section: Hamiltonian::XC
Type: integer
Whether libvdwxc should run with serial fftw3, fftw3-mpi, or pfft. to specify fftw3-mpi in serial for debugging. pfft is not implemented at the moment.
Options:
• libvdwxc_mode_auto: Use serial fftw3 if actually running in serial, else fftw3-mpi.
• libvdwxc_mode_serial: Run with serial fftw3. Works only when not parallelizing over domains.
• libvdwxc_mode_mpi: Run with fftw3-mpi. Works only if Octopus is compiled with MPI.
libvdwxcVDWFactor
Section: Hamiltonian::XC
Type: float
Prefactor of non-local van der Waals functional. Setting a prefactor other than one is wrong, but useful for debugging.
## Hamiltonian::XC::DensityCorrection
XCDensityCorrection
Section: Hamiltonian::XC::DensityCorrection
Type: integer
Default: none
This variable controls the long-range correction of the XC potential using the XC density representation.
Options:
• none: No correction is applied.
• long_range_x: The correction is applied to the exchange potential.
XCDensityCorrectionCutoff
Section: Hamiltonian::XC::DensityCorrection
Type: float
Default: 0.0
The value of the cutoff applied to the XC density.
XCDensityCorrectionMinimum
Section: Hamiltonian::XC::DensityCorrection
Type: logical
Default: true
When enabled, the cutoff optimization will return the first minimum of the $$q_{xc}$$ function if it does not find a value of -1 (details). This is required for atoms or small molecules, but may cause numerical problems.
XCDensityCorrectionNormalize
Section: Hamiltonian::XC::DensityCorrection
Type: logical
Default: true
When enabled, the correction will be normalized to reproduce the exact boundary conditions of the XC potential.
XCDensityCorrectionOptimize
Section: Hamiltonian::XC::DensityCorrection
Type: logical
Default: true
When enabled, the density cutoff will be optimized to replicate the boundary conditions of the exact XC potential. If the variable is set to no, the value of the cutoff must be given by the XCDensityCorrectionCutoff variable. |
# composition
• Aug 6th 2007, 04:27 PM
aikenfan
composition
find f * g if f(x) = x^2-7 and g(x) = 2x+4
• Aug 6th 2007, 04:28 PM
aikenfan
do i combine the two equations?
• Aug 6th 2007, 04:43 PM
topsquark
Quote:
Originally Posted by aikenfan
find f * g if f(x) = x^2-7 and g(x) = 2x+4
It depends.
$(f * g)(x) = (x^2 - 7)(2x + 4)$
if this is multiplication of functions.
If it is composition of functions:
$(f \circ g)(x) = f(g(x)) = f(2x + 4) = (2x + 4)^2 - 7$
-Dan
• Aug 6th 2007, 04:55 PM
aikenfan
It is this:
(f \circ g)(x) = f(g(x)) = f(2x + 4) = (2x + 4)^2 - 7
but i am a bit confused
• Aug 6th 2007, 05:38 PM
Krizalid
I think aikenfan asked for $(f\circ{g})$
$(f\circ{g})=f(g(x))=(g(x))^2-7=(2x+4)^2-7$
Which is the same result of Dan |
# Prove for sets $A$ and $B$ that $A\cup{B}=B\cup{A}$.
Prove for sets $$A$$ and $$B$$ that $$A\cup{B}=B\cup{A}$$.
Here is my attempt on proving this, by the definition of subset theorem we have $$(A\cup{B})\subseteq(B\cup{A})$$ and $$(B\cup{A})\subseteq(A\cup{B})$$ then $$x\in({B\cup{A}})$$, $$x\in({B\cup{A}})$$ therefore $$A\subseteq{B\cup{A}}\land{B\subseteq{A\cup{B}}}$$ this implies $$x\in{A}\lor{x\in{B}}\iff{x\in{B}}\lor{x\in{A}}$$. I know there is no much work can be done here, I just want to improve my proof's written.
You should use the commutativity of the $$\textit{or}$$.
$$x \in A \cup B \iff x \in A \lor x \in B \iff x \in B \lor x \in A \iff x \in B \cup A.$$
• Thank you for your advice i highly appreciate it! Apr 18 '20 at 8:49
• You're welcome ;) Apr 18 '20 at 8:50
• If you want to prove it elementwise, you'll have to apply the fact that the OR logical operator is commutative, since the union operation is defined using he OR-operator. With this method, the proof is immediate.
• The result can also be proved at the set level, without resorting to logical operators.
• We have to prove a set equality, which amounts to a reciprocal inclusion. That is, we have to prove : $$A\cup B \subseteq B\cup A$$ and $$B\cup A \subseteq A\cup B$$
• Admitting as a definition that $$\color{blue} {S\subseteq T \iff S\cap \overline T = \emptyset}$$ , our goal becomes :
$$(1) (A\cup B)\cap \overline{(B\cup A)} = \emptyset$$
and
$$(2) (B\cup A)\cap \overline{(A\cup B)} = \emptyset$$
• This can be shown using DeMorgan's law, Distributive Law, Commutative and Associative law for $$\cap$$, and Identity Law for sets.
• Let me do it for (1)
$$(A\cup B)\cap \overline{(B\cup A)}$$
$$= (A\cup B)\cap (\overline B \cap \overline A)$$
$$= [ (A\cup B)\cap \overline B] \cap [ (A\cup B)\cap \overline A]$$
$$= [(A\cap \overline B) \cup (B \cap \overline B)] \cap [ (A\cap\overline A) \cup (B \cap \overline A)]$$
$$= [(A\cap \overline B) \cup \emptyset ] \cap [ \emptyset \cup (B \cap \overline A)]$$
$$= [A\cap \overline B] \cap [ B \cap \overline A]$$
$$= [A\cap \overline A] \cap [B\cap \overline B]$$
$$= \emptyset \cap \emptyset$$
$$= \emptyset$$.
Which proves that : $$A\cup B \subseteq B\cup A$$.
The reverse inclusion is still to be proved, in order to reach the goal completely .
• Your proofs are very pleasant to my eye! Great work! Apr 18 '20 at 11:08
• @Jstinz. Thanks:)
– user655689
Apr 18 '20 at 11:09 |
# Compute Westtown Company’s (A) inventory turnover ratio and (B) number of days’ sales in inventory ratio, using the following information.
FindFindarrow_forward
### Principles of Accounting Volume 1
19th Edition
OpenStax
Publisher: OpenStax College
ISBN: 9781947172685
#### Solutions
Chapter
Section
FindFindarrow_forward
### Principles of Accounting Volume 1
19th Edition
OpenStax
Publisher: OpenStax College
ISBN: 9781947172685
Chapter 10, Problem 16EB
Textbook Problem
1 views
## Compute Westtown Company’s (A) inventory turnover ratio and (B) number of days’ sales in inventory ratio, using the following information.
To determine
(a)
Concept introduction:
Inventory Turnover Ratio:
Inventory Turnover Ratio measures the efficiency of the company in converting its inventory into sales. It is calculated by dividing the Cost of goods sold by Average inventory. The formula of the Inventory Turnover Ratio is as follows:
Inventory Turnover Ratio=Cost of goods soldAverage inventory
Note: Average inventory is calculated with the help of following formula:
Average inventory=(Beginning inventory + Ending inventory)2
Day’s sales in inventory:
Days sales in inventory represent the number of days the inventory waits for the sale. It is calculated using the following formula:
Day Sales in Inventory = Inventory *365Cost of Goods Sold
To calculate:
The Inventory Turnover Ratio
### Explanation of Solution
The Inventory Turnover Ratio is calculated as follows:
...
Cost of Goods Sold (A) $156,000 Beginning Inventory (B)$ 14,500
To determine
(b)
Concept introduction:
Inventory Turnover Ratio:
Inventory Turnover Ratio measures the efficiency of the company in converting its inventory into sales. It is calculated by dividing the Cost of goods sold by Average inventory. The formula of the Inventory Turnover Ratio is as follows:
Inventory Turnover Ratio=Cost of goods soldAverage inventory
Note: Average inventory is calculated with the help of following formula:
Average inventory=(Beginning inventory + Ending inventory)2
Day’s sales in inventory:
Days sales in inventory represent the number of days the inventory waits for the sale. It is calculated using the following formula:
Day Sales in Inventory = Inventory *365Cost of Goods Sold
To calculate:
Number of Day’s sales in inventory ratio
### Still sussing out bartleby?
Check out a sample textbook solution.
See a sample solution
#### The Solution to Your Study Problems
Bartleby provides explanations to thousands of textbook problems written by our experts, many with advanced degrees!
Get Started
Find more solutions based on key concepts
How are inflation and unemployment related in the short run?
Essentials of Economics (MindTap Course List)
What are the steps in entering international markets?
Foundations of Business (MindTap Course List)
EXCHANGE GAINS AND LOSSES You are the vice president of International InfoXchange, headquartered in Chicago, Il...
Fundamentals of Financial Management, Concise Edition (with Thomson ONE - Business School Edition, 1 term (6 months) Printed Access Card) (MindTap Course List)
What is a database?
Accounting Information Systems
How do unions affect the natural rate of unemployment?
Principles of Macroeconomics (MindTap Course List)
Describe a firms macro environment.
Principles of Management |
## Introduction
Ecological communities are structured both in space, as in a stratified lake and in time, as for migratory birds. This temporal and spatial organization is reflected in the strength of species interactions, where we expect populations dwelling in the same location, or being active in the same season, to interact more frequently than those that are not.
When drawing ecological interaction networks—where species are the nodes and the edges connecting the species stand for interactions (for example, consumption, pollination and competition)—we therefore expect to find that species can be partitioned into distinct ‘groups’, such that the frequency of interaction largely depends on group-membership1,2. Networks with this property are said to be ‘block-structured’: a modular structure is a particular block structure, in which a network is divided into subsystems, and within-subsystem interactions are much more frequent than those between subsystems3,4. If, on the other hand, interactions occur exclusively between groups, we obtain a bipartite network—another type of block structure with many ecological applications5.
The appealingly simple idea of a block structure has been formalized in the measure of modularity, Q3,4, calculated as the difference between observed and expected within-group interactions, divided by the total number of interactions. Positive values indicate that interactions occur predominantly within-groups, while negative values that interactions are more frequent between- than within-group. Modularity has become one of the most investigated network metrics, with applications spanning biological, social and technological networks6,7,8,9,10,11.
Unveiling the relationship between the network structure of biological systems and their dynamical properties has been a long-sought goal of the discipline, and many authors have hypothesized that biological networks must be shaped by evolution12,13,14,15 and co-evolution16, favouring configurations yielding controlled dynamics17,18,19.
In ecology, the idea that a modular organization would be beneficial for the local stability of ecological communities (that is, their ability to recover from small perturbations) dates back to work on complexity and stability by May20 , where he suggested ‘that our model multi-species communities [...] will do better if the interactions tend to be arranged in blocks’. This hypothesis was challenged by a number of authors21,22, who produced simulations showing the opposite effect. However, recent studies found that modularity can indeed enhance species persistence23,24, further complicating the picture. Though the relationship between block structure and dynamics on networks has since been investigated in many fields, including epidemiology25, neuroscience26 and complex systems in general27, a systematic classification is still lacking.
Here we study how the local stability of ecological systems is influenced by modularity, providing new results on the theory of random matrices that can be used to draw a direct relationship between modularity Q and local stability. The mathematical results are briefly stated in the Methods section, while a calculation based on the cavity method28,29,30,31 with quaternions is carried out in the Supplementary Notes.
We show that, with respect to the corresponding unstructured case, modularity can have moderate stabilizing effects for particular parameter choices, while anti-modularity can greatly destabilize networks. The rich range of possible effects associated with the same level of modularity stresses the fact that a given network structure is not ‘stabilizing’ or ‘destabilizing’ per se, but only for particular regimes.
## Results
### Building community matrices
We study the stability of a community matrix M, modelling a continuous-time, dynamical ecological system composed of S populations, resting at a feasible equilibrium point. We remove self-interactions from the matrix (setting Mii=0), so that we can concentrate on inter-specific effects (adding intra-specific effects would not qualitatively alter the results32).
M is obtained by multiplying element-by-element two matrices, a matrix of interaction strengths, W, where Wij expresses the effect of species j on species i around equilibrium, and the adjacency matrix of an undirected graph, K. The community matrix is therefore M=W K (Fig. 1).
Initially, we independently sample the coefficients in W in pairs, drawing (Wij, Wji) from a bivariate distribution with identical marginals, defined by the mean , the variance and the correlation . By varying these parameters, we can model different types of interactions between the species from preponderantly predator–prey to dominated by competition or mutualism33.
The binary matrix K dictates ‘who interacts with whom’, and is symmetric because we assume pairwise interactions. Thus, K determines which interactions in W are activated, and which are suppressed. Here we study the case of a block-structured K: we assume that the community is composed of two subsystems, of sizes αS and (1−α)S, respectively (with α≤1/2), and that species in the same subsystem interact with probability Cw (within-subsystem connectance), while species in different subsystems with probability Cb (between-subsystem connectance).
This parameterization is especially intuitive, because for Cb=Cw, we recover the well-studied case of a random ecological community20,33. Hence, by varying Cb and Cw, we can isolate the effect of having a modular or anti-modular structure (Fig. 2), with the two extreme cases being networks composed of two separate subsystems (perfectly modular), and those in which interactions occur exclusively between subsystems (perfectly anti-modular or bipartite). For simplicity, we speak of a ‘modular’ structure whenever Cw>Cb, and of an ‘anti-modular’ structure when Cw<Cb. The case of Cw=Cb represents ‘unstructured’ systems, such as those studied by May and other authors20,33. Equivalently (Methods), we can express Cw and Cb in terms of the overall connectance C (that is, the overall density of interactions in K) and Q, the modularity of the network3,4, defined as:
where Lw is the observed number of interactions within the subsystems, Lb the number of between-subsystem interactions and is the number of within-subsystem interactions we would expect by chance. Values of Q>0 (Q<0) mean that we observe within-subsystem interactions more (less) frequently than expected by chance. To calculate , we need to choose a reference model for network structure, and here we use the Erdős–Rényi random graph34, as we want to contrast our results with those found for an unstructured, random network. Though Q is bounded by −1 and 1, only a smaller interval of values might be achievable for a given choice of a reference model, α and C (Methods; for example, in our case, when α=1/2, then Q[−1/2, 1/2], provided that C<1/2).
In summary, the parameterization M=W K allows us to separate the effect on stability of a block structure (modelled by K through α, C and Q) from those due to the distribution of interaction strengths (modelled by W). Given that the case of Q=0 (unstructured network) has been studied intensively, and that the calculation of the stability of these matrices can be achieved analytically20,32,33,35, we use it as a reference point to determine the effect of Q on stability.
### Effect of modularity on stability
We want to study the effect of modularity on stability. Therefore, we contrast the value for the real part of the ‘rightmost’ eigenvalue of M, Re(λM,1), with , the value found for , a matrix with exactly the same coefficients, but re-arranged according to a random network structure (Q=0). Re(λM,1), is a measure of stability, as it expresses the amount of self-regulation we would need to stabilize the equilibrium20,32.
Our analysis (Fig. 3; Methods) highlights that there are three main parameterizations we need to consider: (a) mean interaction strength close to zero (μ≈0); (b) strongly negative mean interaction strength; and (c) strongly positive mean interaction strength. Without loss of generality, we can set σ2=1, and study the effect of the modularity Q on the stability of the community, measured as the ratio , for a given choice of α (controlling the size of the smaller subsystem), ρ (correlation of interaction strengths) and C (overall connectance of the system). Values Γ<1 are found when imposing the block structure helps stabilizing the community (for example, in Fig. 3, for Q<0 and μ=0), while ratios Γ >1 stand for destabilizing effects (for example, any Q≠0 for positive mean). For an unstructured matrix (Q=0), the ratio is exactly 1.
In Fig. 4, we show the effect of modularity on stability in a community composed of 1,000 species, when we set C=0.2. Take the case of two equally sized subsystems (α=1/2), for which we derive new results allowing us to express the ratio analytically (Methods; Supplementary Information): when μ≥0, we have no effect of modularity on stability; when μ<0, on the other hand, a bipartite structure is highly destabilizing, while a modular structure is moderately stabilizing. Both effects are stronger in the case of a negative correlation.
When the two subsystems have different sizes (α<1/2), the stabilizing effect of modularity found for the case of μ<0 is greatly diminished, and eventually also modularity can become destabilizing (especially for positive ρ and ). For μ>0, any Q≠0 is destabilizing, while for μ≈0, a modular structure is always destabilizing, and a anti-modular structure can be stabilizing, provided that ρ is sufficiently negative. These effects hold qualitatively for different levels of C (Methods; Supplementary Figs 2–3), with higher connectances leading to more marked effects. Though we cannot predict the ratio in full generality for the case of α<1/2, we can treat the extreme cases of a perfectly modular and perfectly bipartite structure, and this is sufficient to understand the qualitative behaviour of all cases (Methods).
The picture emerging from these results is much more nuanced and complex than what was previously hypothesized20,21,22: modularity can have a moderate stabilizing effect when the two subsystem have about the same size, and the mean μ is negative or destabilizing, when μ≥0 and the subsystems have different sizes. Similarly, anti-modularity is highly destabilizing for μ≠0, but can be stabilizing for μ=0.
The qualitative behaviour of these systems can be understood quite simply, when considering the distribution of the eigenvalues of the block-structured matrices in the complex plane. As shown in Fig. 5, when there are two subsystems the spectrum of M is composed of a ‘bulk’ of eigenvalues, and up to two ‘outlier’, real eigenvalues. When μ≈0, there are no outliers, and thus stability is determined in all cases by the rightmost eigenvalue(s) in the bulk. When μ≠0, we have only one outlier in the case of unstructured networks: if μ<0, then the outlier lies to the left of the bulk and thus has limited effects on stability; if μ>0, on the other hand, the outlier lies to the right of the bulk and therefore solely determines stability. The modular case is similar to the unstructured one, though we now have two outilers, in that both lie either to the right (μ>0) or the left (μ<0) of the bulk. In the bipartite case, however, for any μ≠0 the spectrum presents an outlier to the right (determining stability) and one to the left of the bulk.
These simple observations are sufficient to understand the very strong destabilizing effect of a bipartite structure when μ<0: in this case, the stability of the unstructured network, , is determined by the bulk of the eigenvalues, while that of the block-structured network, Re(λM,1), by the outlier to the right of the bulk (Fig. 5, red). When both Re(λM,1) and are determined by the bulk (for example, modular case with μ<0 or any structure with μ≈0), the either stabilizing or destabilizing effect is going to be moderate. Moderate effects are also observed when both Re(λM,1) and are associated with an outlier lying to the right of the bulk (μ>0). When both Re(λM,1) and are determined by the same type (bulk, outlier) of eigenvalue, the precise stabilizing or destabilizing effect depends nonlinearly on the parameters α, C, Q, μ, σ and ρ (Methods).
To summarize, a block structure for an otherwise random ecological system can help stabilization in only two cases: (a) when the structure is modular, and μ<0 (though small α or a positive ρ could reverse this effect); and (b) when the structure is bipartite, μ≈0, and the correlation is negative. For all the other cases, the effect of a block structure ranges from neutral to highly destabilizing.
### Food-web structure
Clearly, ecological systems do not follow a random graph structure, for example, displaying a directionality in the flow of energy from producers to consumers. This directionality proved important in our previous study36, where we showed that when the mean of the negative effects dominates that of the positive effects, systems built according to the cascade37 model (in which ‘larger’ species consume ‘smaller’ ones) are more likely to be stable than their random counterparts. We therefore analysed matrices constructed using a variation of the cascade model, where we assign a ‘size’ to each species and each species can only consume smaller species, and has a preference for those in the same subsystem (Q>0), or for those in the other subsystem (Q<0). Note that in this case, we need to set a mean for the positive effects and another one for the negative effects (Methods). Figure 6 shows that the stabilizing effect of modularity found before for the case of μ<0 is practically negligible, while the other results are qualitatively the same.
## Discussion
We have studied the effect of a modular or anti-modular network structure on the stability of an otherwise random ecological system. Our parameterization makes it easy to compare the effect of the network structure with that one would obtain for unstructured, random systems such as those studied in the past20,33: a ratio Γ<1 stand for stabilizing and Γ>1 for destabilizing effect of network structure.
For block-structured matrices, we showed that modularity can have a positive effect on stability only when (a) the system is composed of two subsystems of about the same size and (b) the overall mean interaction strength is negative. The stabilizing effect is stronger for negative correlations. Anti-modularity, on the other hand, is typically strongly destabilizing, besides the case in which the average interaction strength is close to 0 (a well-studied case20,33, though of limited biological realism). When the mean interaction strength is positive, both modularity and anti-modularity are destabilizing.
Through numerical simulations, we have investigated the more complex case of an interaction between modularity and food-web structure, and found that the results are qualitatively unchanged.
The picture emerging from both simulations and mathematical analysis is much more complex than previously hypothesized. Block structure can have an effect on the local asymptotic stability of the underlying system. However, unless we are in particular areas of the parameter space, the effect tends to be destabilizing. Our results stress the fact that, when discussing the relationship between network structure and local stability, we need to qualify our statements, as a given structure is not stabilizing or destabilizing per se, but is only so under certain specific conditions.
Though we have illustrated this point by studying the modular structures, we believe this phenomenon to hold generally: any network structure can have different effects on stability, depending on the choice of parameters. To reinforce this message, in Fig. 7, we show three cases in which an empirical network structure makes the system more or less stable than its random counterpart, depending on the parameterization of the coefficients.
Practically, this means that the challenge of proving that biological network structure emerges because of a selective process, removing configurations yielding unfavourable dynamics17,18,19 is much harder than expected: network structure, without estimates of the distribution of the coefficients, cannot be used to determine the effect on dynamical properties.
## Methods
### Constructing the community matrix
M is the S × S community matrix, representing the population dynamics of an unknown dynamical system around a feasible equilibrium point. We consider two cases: (a) random ecological networks with block structure and (b) food webs with block structure.
For case (a), we sampled the pairs (Wij, Wji) independently from a bivariate normal distribution with means (μ, μ)T, and covariance matrix
For case (b), we first assigned a ‘size’ to each species (randomly sampling it from a uniform distribution between 0 and 1), and then sampled the pairs (Wij, Wji) independently from a normal bivariate distribution with means ((1+ξ)μ, (1−ξ)μ)T (ensuring that , for Fig. 6, ξ=3), and covariance matrix Σ, whenever i was larger than j. This means that species can only consume ‘smaller’ prey, such as in the cascade model. In particular, in this case we could order the rows and columns of matrix W so that all the positive effects would be confined to the upper-triangular part and the negative effects to the lower-triangular part.
In both cases, the matrix M is obtained via the Hadamard (element-by-element) product of W and the adjacency matrix K. The matrix K is characterized by four parameters: S, α, C and Q. S is the size, α is the proportion of species belonging to the first subsystem, Q is the modularity and C the overall connectance of K (density of the nonzero elements). The first αS species are assigned to the first subsystem, and the remaining (1−α)S species to the second subsystem. The vector encodes the subsystem-membership of each species. Then, we set (Kij, Kji) to (1, 1) with probability Cw, when γi=γj and with probability Cb when γiγj. The ‘within-subsystem connectance’ Cw is:
and the ‘between-subsystem connectance’ Cb is:
Note that, given the Erdős–Rényi reference model, the values of Q that are attainable depend on both α and C:
### Numerical simulations
For each α and ρ, we set S=1,000, C=0.2, σ=1 and μ=0 (green), μ=−1 (red) or μ=1 (blue), and varied Q from its minimum to its maximum possible value in twenty equally sized steps. For each parameter set, we produced 50 block-structured matrices M, and 50 unstructured matrices , obtained by setting Q=0. The ratio was computed by averaging over the replicates. In many cases, one can obtain the expectation for the ratio analytically (below; Supplementary Information). The simulations were repeated for both random ecological networks (Fig. 4) and cascade-based food webs (Fig. 6).
### The spectrum of block-structured matrices
For our derivations, we adopt a slightly more general notation, which includes that discussed above as a special case. We consider the matrix M, with Mii=0 and the off-diagonal coefficients independently sampled in pairs from either of two distributions:
Hence, the pairs come from a certain bivariate distribution , when i and j belong to the same subsystem, while from a different distribution , when i and j belong to different subsystems. We do not need to specify the exact form of the distributions and , given that, as for many results in random matrix theory, our findings are consistent with the ‘universality’ property38,39: once fixed the mean and covariance matrices of and , and provided that the fourth moment of each is bounded, any choice of distributions yields the same result, for S→∞.
When considering the case examined in the main text, where the elements Mij=WijKij, the universality property helps us in two ways. First, consider that the pairs (Mij, Mji) are zero with probability 1−Cw (case γi=γj) or probability 1−Cb (case γiγj), and that the nonzero pairs are sampled from a bivariate distribution (for the elements of W), defined by the parameters μ, σ2 and ρ. This is sufficient to calculate32 the relevant parameters for the distributions and :
This means that the effect of the connectances is somewhat trivial: we can ‘absorb’ the connectances in the parameters μw, μb, σw, σb, ρw and ρb, which are our ‘effective’ parameters, dictating the shape of the distribution of the eigenvalues of M.
The second advantage of having universal results is that we are free to choose any distribution for the pairs (Wij, Wji). In all our examples, these are sampled from a bivariate normal distribution.
### Decomposition
We want to study the limiting (that is, when S is large) distribution of the eigenvalues of M for the case of a random community (for the food webs following the cascade model, we rely exclusively on simulations). Following the approach by Allesina et al.36, we write the matrix M as the sum of two matrices, M=A+B, where A is a matrix with block structure whose elements are
and B is obtained by difference: B=MA. Thus, the diagonal elements of Bii=−μw, while for the off-diagonal , and (when γi=γj), or (when γiγj).
This parameterization is very convenient, as the spectrum of matrix B describes the bulk of the eigenvalues of M, while the outlier eigenvalues of M are given by the nonzero eigenvalues of A, modified by a small correction40.
### The eigenvalues of A
The eigenvalues of A are easy to obtain for any choice of S, α, μw and μb, with all being zero besides
which can be both zero as well (μw=μb=0), both different from zero (α≠1/2, μwμb), or one zero and one nonzero (μw=μb≠0). Thus, there are going to be up to two outlier eigenvalues.
These are the approximate locations of the two outlier eigenvalues of M (only one outlier when μb=μw≠0, as found for example in the ‘unstructured’ case). The exact location of the outliers depends also on B, as explained below.
### The eigenvalues of B
The spectrum of B has never been studied in full generality. We start by discussing the known cases, and then introduce new results that allows us to understand the qualitative behaviour of our simulations. These results can be derived by a calculation making use of the cavity method (Supplementary Information).
### Known case: σw=σb, ρw=ρb
In this case, the eigenvalues of B follow the ‘elliptic law’39, and for S large, are approximately uniformly distributed in an ellipse, centred at (−μw, 0), and with horizontal semi-axis and vertical semi-axis .
### Known case: σb=0 (perfectly modular)
When there are no connections between-subsystem, we have two independent subsystems. Hence, the eigenvalues of B are simply the union of the eigenvalues of the two squared block matrices found on the diagonal. The eigenvalues of each diagonal block follow the elliptic law, so that the distribution of the eigenvalues of B is a combination of two uniform ellipses, both centred at (−μw, 0), and with horizontal semi-axes and , and vertical semi-axes and , respectively.
### Known case: ρb=ρw=0
New results41 can be applied to this case, showing that the eigenvalues of B are contained in a circle in the complex plane, with centre (−μw, 0) and radius
In this case, the distribution of the eigenvalues is not uniform, and Aljadeff et al.41 provide an implicit formula for the density of the limiting spectral distribution, which is consistent to that found in the Supplementary Information using a different method.
### New case: α=1/2 (equally sized subsystems)
When the two subsystems have the same size (α=1/2), we find (Supplementary Information) that the eigenvalues of B are approximately uniformly distributed in the ellipse in the complex plane with centre in (−μw, 0), horizontal semi-axis
and vertical semi-axis
Note that this would also be the limiting distribution for the eigenvalues of the unstructured matrix with −μw on the diagonal, and off-diagonal elements sampled independently in pairs from the bivariate normal distribution with means (0, 0)T, a correlation that is a weighted average of the correlations in B,
and a variance that is the arithmetic mean of the variances in B,
In this case, it is convenient to express these values in terms of μ, σ2, ρ, C and Q, because this makes the role of modularity in modulating the stability much clearer:
From the two equations above, it is clear that (a) for μ=0, modularity has no effect on the spectrum, while for μ≠0 the sign of μ does not affect the spectrum; (b) modular and bipartite structures have the same effect: the eigenvalues of B will be approximately the same, when we have Q=q or Q=−q; (c) the effect of modularity is going to be more marked for large C or |μ|; and (d) the radius of B is always lower or equal than that we would find by setting Q=0.
Summarizing, for α=1/2, the eigenvalues of B are contained in an ellipse whose horizontal semi-axis is always smaller or equal than that found for the corresponding unstructured matrix. This explains the stabilizing effect of a modular structure (Q<0) we observed for μ<0, as in that case the rightmost eigenvalue of M is the rightmost eigenvalue of B.
### New case: σw=0 (perfectly bipartite)
When σw=0 (that is, Cw=0), the nonzero coefficients of B are exclusively contained in the two blocks, representing the interactions between subsystems, as in a bipartite network. Hence, we can write the matrix B in block form:
where X is a αS × (1−α)S matrix and Y is a (1−α)S × αS matrix. The two matrices on the diagonal contain all zeros, and have size αS × αS and (1−α)S × (1−α)S, respectively.
The eigenvalues of B2, are the eigenvalues of B, squared: if λi is an eigenvalue of B, then is an eigenvalue of B2. Squaring B, we obtain:
XY has αS eigenvalues, while YX has (1−α)S eigenvalues. The eigenvalues of YX are the same as those of XY, with the exception of (1−α)SαS eigenvalues which are exactly 0 (take v to be an eigenvector of XY. Then XYv=λv. Let w=Yv, hence, XYv=Xw=λv. Finally, consider YXw=Y(λv)=λYv=λw. Thus, if λ≠0, it is an eigenvalue of both XY and YX). Hence, we can study the eigenvalues of XY (the smaller matrix) without loss of generality.
In the Supplementary Information, we show that the eigenvalues of XY are contained in an ellipse in the complex plane, with centre in , horizontal semi-axis and vertical semi-axis (Supplementary Fig. 7, top).
If the support of the distribution of the eigenvalues of XY is an ellipse in the complex plane, then the support of the eigenvalues of B for perfectly bipartite interaction matrices is obtained via a square-root transformation of the ellipse in the complex plane (Supplementary Fig. 7, bottom), with the addition of the point (0, 0), stemming from the extra eigenvalues of YX.
To find the real part of the rightmost eigenvalue of B, we thus need to consider the square-root transformation of the ellipse found for XY. The eigenvalues of XY, which are the squared eigenvalues of B, are contained in the ellipse:
where x is the real part and y the imaginary part of the point z=x+iy (we can consider the case y>0 without loss of generality, given that the spectrum of B is symmetric about the real and the imaginary axis). Then, the square root of z has real part a:
To approximate the maximum real part for the eigenvalues of B, we need to find the x that maximizes a. First, we can rewrite the equation for a, exploiting the fact that we know that all the points z we want to consider are on the curve describing the ellipse:
where we have substituted the value of y2 by constraining the point z to be on the curve describing the ellipse.
Substituting the values for xc, rx and ry, we can write:
Where we maximize the function for a by taking values x in [xc, xc+rx], which is sufficient because of the symmetry discussed above. Maximizing, we find two cases:
### Combining the eigenvalues of A and B
Having derived the position of the eigenvalues of A, and, for particular cases, the support of the distribution of those of B, we want to combine the results to obtain an approximation for Re(λM,1), the real part of the rightmost eigenvalue of M=A+B.
This problem has been recently studied by O’Rourke & Renfrew40, who considered the following case: B is a large, random matrix whose eigenvalues follow the elliptic law. It is defined by its size, S and the distribution of the coefficients, which are independently sampled in pairs from a bivariate distribution with mean zero, unitary variance and correlation ρ. A is a matrix with low rank (that is, few nonzero eigenvalues), and nonzero eigenvalues that are sufficiently larger than those of B. Then (Theorem 2.4), we can order the eigenvalues of such that:
where the term o(1) goes to zero as S→∞. This means (ref. 40; Theorem 2.8) that a random matrix with a nonzero mean μ will have a single outlier located approximately at μS, exactly as found for the unstructured case32,33 studied above.
Clearly, the correction above is well suited for the unstructured case, and for the perfectly modular one (which is the combination of two unstructured cases). We also corrected in the same way the eigenvalues for matrices with α=1/2, reasoning that the correction would have the same form, given that the spectra of these matrices converge to those of equivalent unstructured cases. We do not have a formula for correcting the eigenvalues of bipartite matrices, but, as for the other cases, the correction is negligible when |μ| is large enough.
Supplementary Fig. 8 shows that our approximation is indeed excellent for all the cases considered here.
### Simulating empirical network structures
We parameterized three empirical networks, and studied the effect of network structure by measuring the ratio , when varying a critical parameter θ. For simplicity, we always consider the case of matrices with zero on the diagonal.
### Contact network
We took a symmetric adjacency matrix, A, specifying whether two members of an high school were in contact (see ref. 42), and built the matrix M by sampling the coefficients Mij=Mji from a normal distribution with mean θ and variance 0.0025, whenever Aij=Aji=1. We sampled θ independently from a uniform distribution and for each M, we obtained by shuffling the interactions while maintaining the pairs (so that both matrices are symmetric), following ref. 35. In Fig. 7, we show 250 realizations.
### Food web
We took the adjacency matrix A, specifying trophic interactions in the Little Rock lake43, and built M by sampling Mij from the half-normal distribution , whenever Aij=1. These are the (negative) effects of predators on prey. For each Mij<0, we chose Mij by multiplying −Mij by a random value drawn from . Thus, for θ≈1 the positive coefficients have about the same strength as the negative ones; when θ>1 positive effects dominate; and for θ<1 negative effects are stronger. Again, for each of the realizations, we built both M and , obtained by shuffling the interactions. In Fig. 7, we show results obtained by sampling θ 250 times from the distribution .
### Pollinator network
We took the pollination network compiled by Robertson44,45, and we used the rectangular adjacency matrix to determine the position of the nonzero, mutualistic effects between plants and pollinators: Mij and Mji were sampled independently from the uniform distribution , whenever plant i and pollinator j interacted. We then sampled competitive effects by sampling uniformly the coefficients for k and l being both plants or both pollinators. Thus, M has a block structure, very similar to that studied here. In Fig. 7, we show the effects of the strength of competition (θ) on stability.
### Data availability
The data and code needed to reproduce all results presented in the article can be downloaded from https://github.com/StefanoAllesina/blockstructure |
# tushain
vivek1303 posted a reply to Applying a trading strategy to the GMAT in the I just Beat The GMAT! forum
“Hi Karan, Quite a perspective! So, will you be applying to colleges this year? With the kind of work-ex (I am assuming since you said you are married) you have, you can try some good colleges for sure. Thanks!”
July 11, 2015
vivek1303 posted a reply to 770(Q-51,V-44)- My GMAT Journey-Long post in the I just Beat The GMAT! forum
“Great debrief! Congrats on your amazing score!”
July 11, 2015
vivek1303 posted a reply to GMAT 760 (V41 and Q50) from GMAT 700 in the I just Beat The GMAT! forum
“Hi Gargrahul, Congrats on a great score! I had the same splits but scored 10 points less. Just like you I had a terrible time reaching that level in verbal. But, all''s well that ends well. Best of luck for your application cycle. Thanks!”
July 11, 2015
vivek1303 posted a reply to Rocked The GMAT - Scored 800 (Q51 V51) - Via Meditation in the I just Beat The GMAT! forum
“You must be God!”
July 7, 2015
vivek1303 posted a reply to My journey to a top business school... in the Admissions Success Stories forum
“Wow! I wish you had shared your success story in a bit more detailed fashion. Great job though..!”
July 7, 2015
vivek1303 posted a reply to 730 gmat to Finance Manager intern at Microsoft in the Admissions Success Stories forum
“Hi Jmansharma, Its great to hear that you had a wonderful experience at Kelley, especially to be able to intern at your dream company is a wonderful achievement. Kudos! Like yourself, I will be joining Kelley for my MBA education.”
June 24, 2015
vivek1303 posted a reply to GMAT 750 - What It Takes to Break 700 in the I just Beat The GMAT! forum
“Greta score! Well done! Bets of luck for the applications”
June 24, 2015
vivek1303 posted a reply to From 610 to 740 : It felt incredible to beat the GMAT in the I just Beat The GMAT! forum
“Congratulations, mate!”
June 24, 2015
June 24, 2015
vivek1303 posted a reply to 2 and a half weeks left and stuck at 620 in the GMAT Strategy forum
“Since it has only been two and a half weeks I feel you can trod on further. You scores will start stablizing by around 1-2 months. Now is the time you can be creative in your approach and see what startegy suits you best - skipping questions, focusing hard on first 15 etc. Good luck!”
June 24, 2015
vivek1303 posted a reply to Shall I retake the GMAT? in the Admissions Success Stories forum
“Hi, The colleges you are targeting prefer higher work-ex, apart from ISB maybe. Keep that in mind before you plan on re-taking your GMAT. With a good profile and good essays you can make to these colleges with a 720 as well but the question would of course be whether you want to enter with a ...”
June 21, 2015
vivek1303 posted a reply to 770(Q-51,V-44)- An average Joe's GMAT Journey in the I just Beat The GMAT! forum
“Party hard it should be! Congratulations”
June 21, 2015
June 21, 2015
vivek1303 posted a reply to Stuck on Quantitative - need expert Opinion in the GMAT Strategy forum
“Hi Beth, I am not sure whether you have taken the exam again but here''s my opinion anyway. So, the thing with Quant is that most of the questions in the real GMAT are derivatives of the Quant questions in the OG. What I would suggest is that instead of using different resources you can simply ...”
June 21, 2015
vivek1303 posted a reply to GMAT 720 95%(Q49,V40)(people never fail - they just give up) in the I just Beat The GMAT! forum
“Great effort, ngufo! This seriously is one of the most useful and extensive debreif I have seen in a long time! Best of luck for the admission cycle!”
June 21, 2015
vivek1303 posted a reply to I just beat the GMAT! 680 to 770 in a month. in the I just Beat The GMAT! forum
“Considerable improvement in quant. Awesome! Best of luck for the applications!”
June 15, 2015
vivek1303 posted a reply to Rocked The GMAT - Scored 800 (Q51 V51) - Via Meditation in the I just Beat The GMAT! forum
“The uniformity of your score is otherworldly!”
June 15, 2015
vivek1303 posted a reply to 660, 690....720! I Beat the GMAT!!! in the I just Beat The GMAT! forum
“Kudos, mate! Persistence pays, always..”
June 15, 2015
vivek1303 posted a reply to I Scored 760 and broke the 99 percentile barrier (Q50;V-42) in the I just Beat The GMAT! forum
“Congrats, mate! Great story..! kudos”
June 15, 2015
vivek1303 posted a reply to GMAT Attempt 580 - How to improve verbal mainly in the GMAT Strategy forum
“Hi, Improving your score in verbal is slightly more difficult than improving score in Quant. So, I would suggest go for the kill in quant. Once satisfied, that is scoring around 45 to 50 in Quant, then go for verbal. if SC is the issue then the best bet would the OG for practice, which i see ...”
June 15, 2015
vivek1303 posted a reply to Test in 43 days and I'm still scoring low 600s HELP in the GMAT Strategy forum
“Hi, Firstly, if you are scoring below 600 on MGMAT tests then it should not be a major worry for MGMAT tests are a very strict indicator of the actual exam. Not necessarily, but students, including myself, end up scoring an easy 40-50 points over their MGMAT scores. Secondly, if RC is your ...”
June 15, 2015
mukherjee.tanuj3@gmail.co posted a reply to Shall I retake the GMAT? in the Admissions Success Stories forum
“Books: 1. Official Guide for GMAT 2. Verbal Review for GMAT 3. Manhattan Sentence Correction (for sentence correction) 4. Manhattan advanced Quants( for 700+ level quant question) 5. Manhattan prep books for your weak section Strategy: Please go through this ...”
May 24, 2015
mukherjee.tanuj3@gmail.co posted a reply to Shall I retake the GMAT? in the I just Beat The GMAT! forum
“Hi Marty! I agree with you on the part that GMAT helps in overall personality development.However, my concern here is whether improving my GMAT score will increase my candidacy? Warm Regards, Tanuj”
May 13, 2015
mukherjee.tanuj3@gmail.co posted a new topic called Profile Evaluation Request-Tanuj,Indian,PepsiCo India Region in the Ask Admit Advantage forum
“Background: I’m working with PepsiCo, India Region as an Asst Manager in Technical Operations for the last 2 years. I plan to apply for 2017 batch to the following colleges:- 1. Indian School Of Business, India. 2. NYU-Stern,NY,US as consultancy is its forte. 3. Darden School Of ...”
May 13, 2015
mukherjee.tanuj3@gmail.co posted a new topic called Shall I retake the GMAT? in the I just Beat The GMAT! forum
“Hi Guys! I need a help. But, before I ask my doubt, I’ll walk you through my resume and my aspirations. I gave my GMAT last weekend and got a 720(Q51, V35), 7 in IR and 5 in AWA. I’m not satisfied with my verbal score as I believe that I can score better in that arena. Background: I’m ...”
May 1, 2015
mukherjee.tanuj3@gmail.co posted a new topic called Shall I retake the GMAT? in the Admissions Success Stories forum
“Hi Guys! I need a help. But, before I ask my doubt, I’ll walk you through my resume and my aspirations. I gave my GMAT last weekend and got a 720(Q51, V35), 7 in IR and 5 in AWA. I’m not satisfied with my verbal score as I believe that I can score better in that arena. Background: I’m ...”
May 1, 2015
mukherjee.tanuj3@gmail.co posted a reply to Oyster Problem in the Sentence Correction forum
“I''m sorry to bring this topic up again.. My doubt is: " smaller" should require "than" with it. Smaller than what????... I eliminated D based on that logic.. Is my logic flawed?”
March 8, 2015
mukherjee.tanuj3@gmail.co posted a reply to The world wildlife fund has declared that global warming in the Sentence Correction forum
“Hi GMATGuruNY! I eliminated (C) by looking at the possesive noun human beings'' . In Manhattan book, it is written that the possisve nouns are only used if can be replaced with "of prepositional phrase".For example: Tom''s car is okay as we mean "car of Tom". However, in the ...”
January 22, 2015
mukherjee.tanuj3@gmail.co posted a reply to Is x a multiple of y? in the Data Sufficiency forum
“What''s the logic behind none of them is a multiple of y?”
January 12, 2015
mukherjee.tanuj3@gmail.co posted a reply to Sound can travel through water for enormous distances, in the Sentence Correction forum
“This is not a run on sentence. The second sentence is called absolute modifier. The second sentence doesn''t have a active verb.”
January 7, 2015
June 13, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Profile Evaluation Request and Suggestion Required! in the Ask Admissionado (formerly Precision Essay) forum
“My Profile:- College: National Institute of Technology, Nagpur(One of the best in India) Stream(UG): Electrical and Electronics Engineering % in UG: 84.1%(Top 10 in the department) GMAT(1st attempt) : 680 Currently employed in: PepsiCo International,India Region Years of Experience: 1 ...”
May 24, 2014
“My aspirations:- 1.Operational Planning Head (10 years down the line) 2.I prefer to study in a. USA b. Europe c. India 3.Frankly, I’m impatient. However, If I can instill confidence in me that I can make it to the top MBA programme, I will wait. My Doubts:- Q1. Should I re-take ...”
May 24, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Profile Evaluation Request and Suggestion Required in the Ask an MBA Admissions Consultant forum
“My Profile:- College: National Institute of Technology, Nagpur(One of the best in India) Stream(UG): Electrical and Electronics Engineering % in UG: 84.1%(Top 10 in the department) GMAT(1st attempt) : 680 Currently employed in: PepsiCo International,India Region Years of Experience: 1 ...”
May 24, 2014
mukherjee.tanuj3@gmail.co posted a new topic called MiM or MBA? in the Ask an MBA Admissions Consultant forum
“I am confused whether to opt for Mim or to wait for 3-5 more years for MBA? My Profile:- College: National Institute of Technology, Nagpur(One of the best in India) Stream(UG): Electrical and Electronics Engineering % in UG: 84.1%(Top 10 in the department) GMAT(1st attempt) : 680 ...”
May 22, 2014
mukherjee.tanuj3@gmail.co posted a new topic called MiM or MBA? in the Ask Clear Admit forum
“I am confused whether to opt for Mim or to wait for 3-5 more years for MBA? My Profile:- College: National Institute of Technology, Nagpur(One of the best in India) Stream(UG): Electrical and Electronics Engineering % in UG: 84.1%(Top 10 in the department) GMAT(1st attempt) : 680 Currently ...”
May 22, 2014
mukherjee.tanuj3@gmail.co posted a reply to as likely as in the Sentence Correction forum
“Hey Guru! I have one more doubt. Had the option C said ....as likely as other grads are to plan...., then the sentence would have been correct? Regards, Mukherjee[/spoiler]”
May 13, 2014
mukherjee.tanuj3@gmail.co posted a reply to as likely as in the Sentence Correction forum
“Hey Guru! I have one more doubt. Had the option C said ....as likely as other grads are to plan...., then the sentence would have been correct? Regards, Mukherjee”
May 12, 2014
mukherjee.tanuj3@gmail.co posted a reply to as likely as in the Sentence Correction forum
“Hey Adi, Even in option A,"In planning to practice in socioeconomically deprived areas" is describing minority graduates even though the exp is attached to other grads. Hope it helps. Regards, Mukherjee”
May 12, 2014
mukherjee.tanuj3@gmail.co posted a reply to Goal_PowerPrep in the Sentence Correction forum
“Hi Anjali, Thanx for your reply. The usage of " like" in option A is correct, because the phrase following LIKE is a noun(GOAL). If I rephrase the sentence from the LIKE phrase, it will look "...., similar to a goal of earlier generations.". Regards, Mukherjee”
May 12, 2014
mukherjee.tanuj3@gmail.co posted a reply to as likely as in the Sentence Correction forum
“The minority grads are more likely in planning to practice....... Am I wrong? Regards, Mukherjee[/u]”
May 12, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Goal_PowerPrep in the Sentence Correction forum
“Hey, The OA is E, however option A looks good to me. http://s13.postimg.org/wv95i9ulv/SC_goal.jpg In optiion A that= goal. Help!”
May 12, 2014
mukherjee.tanuj3@gmail.co posted a reply to as likely as in the Sentence Correction forum
“Minority grads are more likely IN PLANNING TO PRACTICE. Am I wrong? Regards, Mukherjee”
May 12, 2014
May 12, 2014
mukherjee.tanuj3@gmail.co posted a reply to as likely as in the Sentence Correction forum
“Hey Adi, Why is "in planning to practice" wrong? Regards, Mukherjee”
May 12, 2014
mukherjee.tanuj3@gmail.co posted a reply to as likely as in the Sentence Correction forum
“Hey Rich, Consider the following sentence:- City X has more criminals than does city Y.(correct) City Y has more criminals than city Y.(wrong)(This sentence is wrong because in "X more than Y", X and Y must be parallel.) Similarly in the sentence in question:- Minority Grads are 4 ...”
May 12, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Why A wrong? in the Sentence Correction forum
“http://s13.postimg.org/7pxr0lwcj/SC_Y_A_wrong.jpg”
May 11, 2014
mukherjee.tanuj3@gmail.co posted a new topic called as likely as in the Sentence Correction forum
“Why is OPTION A wrong? http://s24.postimg.org/uifsjowq9/SC_Y_A_Aslikelyas.jpg”
May 11, 2014
mukherjee.tanuj3@gmail.co posted a new topic called LCM/GCD in the Data Sufficiency forum
“http://s4.postimg.org/wbc0rg9ft/DS_HCF.jpg”
May 11, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Boat_upstream in the Problem Solving forum
“http://s18.postimg.org/4l33j47b9/PS_Boat.jpg”
May 11, 2014
mukherjee.tanuj3@gmail.co posted a reply to Probability in the Problem Solving forum
“HEy Guru! Perfect explanation. Thanx Regards, Mukherjee”
May 11, 2014
mukherjee.tanuj3@gmail.co posted a reply to best approach plz in the Data Sufficiency forum
“How case 2 is not possible, if b>0? Regards, Mukherjee”
May 11, 2014
mukherjee.tanuj3@gmail.co posted a reply to DS_interest in the Data Sufficiency forum
“Hey Guru! I took the following values:- Ra=100% and Rb=150%, which gave me Pa=50 and Pb=100. Ra=500% and Rb=750%, which gave me Pa=10 and Pb=30. therefore I concluded Statement 1 is insufficient. However, according to the values, which I choose, I was convinced that statement 2 would give me a ...”
May 11, 2014
mukherjee.tanuj3@gmail.co posted a reply to CR_JOBS in the Critical Reasoning forum
May 11, 2014
mukherjee.tanuj3@gmail.co posted a new topic called CR_RomanEmpire in the Critical Reasoning forum
“http://s28.postimg.org/62iss92hl/CR_Roman_Empire.jpg”
May 10, 2014
mukherjee.tanuj3@gmail.co posted a new topic called CR_JOBS in the Critical Reasoning forum
“http://s27.postimg.org/sfmwrugrj/CR_Jobs.jpg”
May 10, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Difficult1 from Prep in the Critical Reasoning forum
“http://s27.postimg.org/l337xpdgf/CR_C_T.jpg”
May 10, 2014
mukherjee.tanuj3@gmail.co posted a new topic called DS_interest in the Data Sufficiency forum
“http://s27.postimg.org/8b08ozp0f/DS_Interest.jpg”
May 10, 2014
mukherjee.tanuj3@gmail.co posted a new topic called DS_HCF_GMATPrep in the Data Sufficiency forum
“http://s30.postimg.org/z5z7jd3kt/DS_HCF.jpg”
May 10, 2014
mukherjee.tanuj3@gmail.co posted a new topic called DS_r and s in the Data Sufficiency forum
“Hey, PLease try the Q from GMATprep. http://s29.postimg.org/97kkgbd9v/DS_0between.jpg Why can''t the series be r,t,-s,0,s. In this series t is in between r and -s. If this series is valid, then the answer to this Q fails. HELP. Regards, Mukherjee”
May 10, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Probability in the Problem Solving forum
“Hey, I have no clue how to solve this Q. http://s11.postimg.org/xi97kfq67/PS_problem_solving.jpg”
May 10, 2014
mukherjee.tanuj3@gmail.co posted a new topic called CR_OIL in the Critical Reasoning forum
“Hey, Why is option A worse then the OA . http://s3.postimg.org/4k06qx17j/CR_OIL.jpg[/spoiler]”
May 10, 2014
mukherjee.tanuj3@gmail.co posted a new topic called SC_River in the Sentence Correction forum
“Hey, Please tell me the steps involved to eliminate all options but the correct one. http://s11.postimg.org/3shs3hlxb/SC_LAKE.jpg”
May 10, 2014
mukherjee.tanuj3@gmail.co posted a new topic called SC_Packages in the Sentence Correction forum
“Hey all, It is very easy to eliminate option A,B and C as apples are not compared to apples, but when I narrowed down to D and E, I couldn''t find a good reason to eliminate either of them. http://s29.postimg.org/maaoj3g8j/SC_packages.jpg”
May 10, 2014
mukherjee.tanuj3@gmail.co posted a reply to Confusion in the Data Sufficiency forum
“Thank You! I understood my mistake. OA B”
May 9, 2014
mukherjee.tanuj3@gmail.co posted a reply to Confusion in the Data Sufficiency forum
“Hey GMATguruNY and Rich, According to the post that I read in BTG, answer to the following Q must be D. As, y^2=9 or x=y^2=9 or sqrt(x)=mod(3)=3(NOT -3). http://s30.postimg.org/u895432pp/confusion.jpg Please HELP. I''m LOST. Regards, Mukherjee”
May 9, 2014
“Hi all, Thanx for your replies. Could one please tell me how to solve this Q without plugging values.In sum, I am asking the formal procedure. Regards, Mukherjee”
May 9, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Confusion in the Data Sufficiency forum
“Hi, While reading a post from BTG, I realised that sqrt(x)=mod(x). That means, Sqrt(16)=mod(4)=4, or sqrt(16)=4(always). However, while solving an Q from Power Prep,I encountered an Q in which sqrt(16)=4 or -4. PLEASE HELP Reagrds, Mukherjee”
May 9, 2014
mukherjee.tanuj3@gmail.co posted a new topic called classic_GMATprepQ in the Data Sufficiency forum
“http://s21.postimg.org/fje1ggo5v/6_18.jpg”
May 8, 2014
“http://s27.postimg.org/70ciinhin/DS_DIFFICULT1.jpg”
May 8, 2014
mukherjee.tanuj3@gmail.co posted a reply to Explain the use of Past Perfect in the non-underlined part in the Sentence Correction forum
“Hey Guru, Why is option D wrong? Regards, Mukherjee”
May 8, 2014
mukherjee.tanuj3@gmail.co posted a new topic called CR_Gmat Prep in the Critical Reasoning forum
“http://s29.postimg.org/8t5wuy5zn/Coal_CR.jpg”
May 7, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Explain the use of Past Perfect in the non-underlined part in the Sentence Correction forum
“http://s27.postimg.org/3tvh758kv/Justify_pastperfect.jpg”
May 7, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Difficult 1 from GMAT in the Sentence Correction forum
“http://s28.postimg.org/61djnr9x5/Difficult_1_from_GMAT.jpg”
May 7, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Among them_GMat Prep in the Sentence Correction forum
“http://s7.postimg.org/ocifq6l3b/amongthem.jpg”
May 7, 2014
vivek1303 posted a new topic called Additional course one can go for to pep-up resume..?? in the Lounge forum
“Hey Guys..!! I have got an admit from a school that I was highly looking forward to get into. But the thing is that since I have been selected from the waitlist (and that from 2nd round), I had almost given up on getting into the college around a month back and, since then, had started preparing ...”
May 6, 2014
vivek1303 posted a new topic called Online tests with Kaplan 800score book in the Ask a Kaplan representative forum
“Please share the following details about the Kaplan 800 score book - 1) Does the book come with 6 practice tests? There is no mention of tests as such on the cover page. 2) What is the difference between Kaplan GMAT 800: Advanced Prep for Advanced Students and Kaplan GMAT 800 International ...”
May 4, 2014
“Hi, Can anyone explain me the use of "what" as a pronoun? Regards, Mukherjee”
May 4, 2014
mukherjee.tanuj3@gmail.co posted a reply to WHAT ???? in the Sentence Correction forum
“Hi, I am little skeptic about your explanation. Why can''t a rate be fast? Rate is a quantity that may be fast or slow. Regards, Mukherjee”
May 4, 2014
“Hello Eli, Can you please share the following details abot the Kaplan 800 score book - 1) Does the book come with 6 practice tests? 2) What is the difference between Kaplan GMAT 800: Advanced Prep for Advanced Students and Kaplan GMAT 800 International Edition? Thanks in advance!”
May 4, 2014
mukherjee.tanuj3@gmail.co posted a new topic called WHAT ???? in the Sentence Correction forum
“I was sure that the answer was B, but OA is C http://s8.postimg.org/at6x8mz5d/WHAT.jpg”
May 3, 2014
mukherjee.tanuj3@gmail.co posted a new topic called why E is wrong? in the Sentence Correction forum
“http://s29.postimg.org/qrysa1no3/OA_C.jpg oac”
May 3, 2014
mukherjee.tanuj3@gmail.co posted a reply to Why A wrong? in the Sentence Correction forum
“Hey! Why is option C wrong? Regards, Mukherjee”
May 3, 2014
mukherjee.tanuj3@gmail.co posted a reply to Paper pen in the Sentence Correction forum
“Hey Aditya! Why is B and C wrong? Regards, Mukherjee”
May 3, 2014
mukherjee.tanuj3@gmail.co posted a reply to Why A wrong? in the Sentence Correction forum
“Bingo OA B”
May 3, 2014
mukherjee.tanuj3@gmail.co posted a new topic called more v/s higher in the Sentence Correction forum
“OAE http://s30.postimg.org/69chefo31/morehigher.jpg”
May 3, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Why A wrong? in the Sentence Correction forum
“http://s7.postimg.org/phi8ljaxz/Y_A_wrong.jpg”
May 3, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Paper pen in the Sentence Correction forum
“http://s17.postimg.org/li7h6x0wr/Paper_OA_A.jpg OAA”
May 3, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Paper pen in the Sentence Correction forum
“http://s18.postimg.org/sy727okd1/Migraine_OA_B.jpg OAB”
May 3, 2014
mukherjee.tanuj3@gmail.co posted a reply to few v/s less in the Sentence Correction forum
“Hi, "less" is used when the concerned noun is uncountable or A NUMBER, such as volume, density, dollars. Therefore, I preferred less rather than fewer as a number was concerned . Am I missing something? Ex- Volume of Earth is less than that of Sun. Regards, Mukherjee”
May 1, 2014
mukherjee.tanuj3@gmail.co posted a new topic called tricky again_prep in the Data Sufficiency forum
“http://s27.postimg.org/43ex4wfkv/Untitled.jpg”
April 29, 2014
mukherjee.tanuj3@gmail.co posted a new topic called red ball_prep in the Data Sufficiency forum
“http://s2.postimg.org/o2js7o6kl/probability.jpg”
April 29, 2014
mukherjee.tanuj3@gmail.co posted a new topic called tricky 1 from Prep in the Data Sufficiency forum
“http://s30.postimg.org/dattzf72l/greaterthanzero.jpg”
April 29, 2014
mukherjee.tanuj3@gmail.co posted a new topic called few v/s less in the Sentence Correction forum
“http://s30.postimg.org/a7ev7g70t/image.jpg OAB”
April 29, 2014
mukherjee.tanuj3@gmail.co posted a reply to WHy is B wrong? in the Critical Reasoning forum
“Hello, Option D says that the naturally occurring fullerence are arranged in a previously unknown state, however the state is recently discovered by the scientists. Therefore, option D doesn''t say that the sample is different from the natural occurring fullerence. Am I missing something?”
April 26, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Confusing in the Data Sufficiency forum
“http://s12.postimg.org/rqd9v6e55/DS_confusing.jpg”
April 25, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Why not D? in the Sentence Correction forum
“http://s27.postimg.org/5yh80766n/Ynot_D.jpg”
April 25, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Why not D? in the Sentence Correction forum
“http://s27.postimg.org/5yh80766n/Ynot_D.jpg”
April 25, 2014
mukherjee.tanuj3@gmail.co posted a new topic called NOT One of the... in the Sentence Correction forum
“http://s28.postimg.org/4v6qz91vd/SC_oneofthe.jpg”
April 25, 2014
mukherjee.tanuj3@gmail.co posted a new topic called expect in the Sentence Correction forum
“http://s29.postimg.org/ilhtlimrn/SC_correct_idiom.jpg”
April 25, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Confusing in the Sentence Correction forum
“http://s30.postimg.org/49lge3gbh/SC_CONfusing.jpg”
April 25, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Prep in the Critical Reasoning forum
“http://s28.postimg.org/w69ine0ft/CR_scientists.jpg”
April 25, 2014
mukherjee.tanuj3@gmail.co posted a new topic called WHy is B wrong? in the Critical Reasoning forum
“http://s22.postimg.org/ah1rigde5/CR_Bwrong_Y.jpg”
April 25, 2014
mukherjee.tanuj3@gmail.co posted a new topic called GMAT1 in the Data Sufficiency forum
“http://s8.postimg.org/a9sn86is1/DS4.jpg”
April 21, 2014
mukherjee.tanuj3@gmail.co posted a new topic called GMAT PREP Q HELP in the Problem Solving forum
“http://s4.postimg.org/w2swztnmx/image.jpg”
April 21, 2014
mukherjee.tanuj3@gmail.co posted a new topic called GMAT PREP Q HELP in the Data Sufficiency forum
“http://s29.postimg.org/r9dy6u3dv/DS3.jpg”
April 21, 2014
mukherjee.tanuj3@gmail.co posted a new topic called GMAT PREP Q HELP in the Data Sufficiency forum
“http://s14.postimg.org/6xr9rpz59/DS2.jpg”
April 21, 2014
mukherjee.tanuj3@gmail.co posted a new topic called GMAT PREP Q HELP in the Data Sufficiency forum
“http://s9.postimg.org/nb3b0v9gr/image.jpg [/img][/spoiler]”
April 21, 2014
mukherjee.tanuj3@gmail.co posted a new topic called GMAT PREP Q HELP in the Sentence Correction forum
“http://s4.postimg.org/e8e4ctd61/SC_prep.jpg”
April 21, 2014
mukherjee.tanuj3@gmail.co posted a new topic called G_PREP in the Sentence Correction forum
“OAB IMOD http://s28.postimg.org/se4xizbjt/GMAT_club.jpg”
April 19, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Veritas in the Sentence Correction forum
“http://s27.postimg.org/qeupngkrz/veritas_SC.jpg”
April 18, 2014
mukherjee.tanuj3@gmail.co posted a reply to Why is D wrong? in the Sentence Correction forum
“sorry OA C”
April 18, 2014
mukherjee.tanuj3@gmail.co posted a reply to Heating oil prices in the Sentence Correction forum
“A quick way to solve this problem. Higher X THAN Y.(X and Y must be parallel) The construction higher X over Y is wrong. ONLY A satisfies the 2 criterion above mentioned.Option A compares year to year. Regards, fellow GMAT taker”
April 17, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Why is D wrong? in the Sentence Correction forum
“Gmat Paper pen test OAA http://s17.postimg.org/6xkdor24r/YBWrong.jpg”
April 17, 2014
mukherjee.tanuj3@gmail.co posted a reply to Two ANDs in a sentence in the Sentence Correction forum
“SOURCE:GMAT paper pen tests”
April 17, 2014
mukherjee.tanuj3@gmail.co posted a new topic called so.. in the Sentence Correction forum
“I have read that so X that Y/ so X as to Y are correct constructions! But the following example keeps so x, which is followed by nothing else. Is the construction correct?Am I missing something? OAE http://s30.postimg.org/x6n407ygd/image.jpg”
April 16, 2014
mukherjee.tanuj3@gmail.co posted a new topic called so.. in the Sentence Correction forum
“I have read that so X that Y/ so X as to Y are correct constructions! But the following example keeps so x. Am I missing something? OAE http://s30.postimg.org/x6n407ygd/image.jpg”
April 16, 2014
mukherjee.tanuj3@gmail.co posted a new topic called THERE in the Sentence Correction forum
April 16, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Two ANDs in a sentence in the Sentence Correction forum
“http://s29.postimg.org/kdskhbwn7/two_ANDs.jpg OAc”
April 16, 2014
April 16, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Experts only in the Data Sufficiency forum
“http://s16.postimg.org/c6epnpgjl/GMATprep.jpg OAB”
April 10, 2014
mukherjee.tanuj3@gmail.co posted a reply to prep question in the Data Sufficiency forum
“Hey! I think you have not entered the entire Q, as there can be few customers who ordered nothing! Please check your Q with that of the actual. Regards, Mukherjee”
April 10, 2014
mukherjee.tanuj3@gmail.co posted a reply to can v/s could?..help in the Sentence Correction forum
“OAE”
April 9, 2014
mukherjee.tanuj3@gmail.co posted a new topic called idiom speculation! in the Sentence Correction forum
“http://s27.postimg.org/ekdgdfn3j/OG_sc1.jpg OAE”
April 9, 2014
mukherjee.tanuj3@gmail.co posted a new topic called can v/s could?..help in the Sentence Correction forum
“http://s1.postimg.org/lbs43eciz/OG_SC2.jpg”
April 9, 2014
mukherjee.tanuj3@gmail.co posted a reply to KaplanSC in the Sentence Correction forum
“Any two independent sentences can be connected with a conjunction. Here, 2 independent sentences are connected with a conjunction"and". The proper format to do so is independent sentence, and independent sentence. Hope it helps.”
April 8, 2014
mukherjee.tanuj3@gmail.co posted a reply to Multiple modifiers in the Sentence Correction forum
“Hey GMATguruNY, "they"in the first option can point to grooves as well!!... Am I correct? Regards, Mukherjee”
April 7, 2014
April 7, 2014
mukherjee.tanuj3@gmail.co posted a reply to OG SC in the Sentence Correction forum
“Dear VivianKerr, I know my question is very silly but still do answer. Q How is present perfect and simple present parallel in the above example? Regards, Mukherjee”
April 7, 2014
mukherjee.tanuj3@gmail.co posted a reply to MGMAT_I find all options wrong! Experts HELP in the Sentence Correction forum
“Even I have the same doubts!”
April 7, 2014
April 6, 2014
April 6, 2014
mukherjee.tanuj3@gmail.co posted a new topic called How can one attack a premise!..A premise is a fact! Confused in the Critical Reasoning forum
“http://s30.postimg.org/esy9bx0m5/MGMAT_CR.jpg”
April 6, 2014
mukherjee.tanuj3@gmail.co posted a new topic called MGMAT_I find all options wrong! Experts HELP in the Sentence Correction forum
“http://s10.postimg.org/8rnh8zged/MGMAT_SC.jpg”
April 6, 2014
mukherjee.tanuj3@gmail.co posted a new topic called MGMAT... in the Data Sufficiency forum
“http://s8.postimg.org/53z00r6z5/Mgmat_Q.jpg”
April 6, 2014
mukherjee.tanuj3@gmail.co posted a new topic called EXPERTS RATE MY ESSAY!!! PLEASE in the GMAT Essays (AWA) forum
“AWA ESSAYS: Analyze Argument ESSAY QUESTION: The following appeared in a proposal for a high school''s annual fundraising event: "In order to earn the most money for supplemental school programs, we will have larger and more thrilling rides at this year''s School Fair, including a ferris ...”
April 6, 2014
mukherjee.tanuj3@gmail.co posted a new topic called OG SC in the Sentence Correction forum
“http://s24.postimg.org/gsa534crl/OGSC.jpg”
April 4, 2014
mukherjee.tanuj3@gmail.co posted a new topic called WHy is B wrong? in the Sentence Correction forum
“oa c http://s29.postimg.org/s84vmp003/YBWrong.jpg”
April 4, 2014
mukherjee.tanuj3@gmail.co posted a reply to SC from Gprep-1 in the Sentence Correction forum
“IMO E OA?”
April 4, 2014
candygal79 posted a new topic called DS-10 in the Data Sufficiency forum
“The sum of integers in the list S is the same the sum of the integers in list T . Does S contain more integers than T ? 1) The avg of the integers in S is less than the avg of the integers in T 2) The median of the integers in S is greater than the median of the integers in T”
April 4, 2014
candygal79 posted a new topic called DS-9 in the Data Sufficiency forum
“If a and b are positive numbers , what are the coordinates of the midpoint of the line segment CD in the xy-plane ? 1) The coordinates of C are ( a , 1-b) 2) The coordinates of D are ( 1-a, b)”
April 4, 2014
candygal79 posted a new topic called DS -8 in the Data Sufficiency forum
“Is x between 0 and 1 ? 1) x is between -1/2 and 3/2 2) 3/4 is 1/4 more than x”
April 4, 2014
candygal79 posted a new topic called DS-7 in the Data Sufficiency forum
“Are x and y both positive ? 1) 2x -2y = 1 2) x/ y > 1”
April 4, 2014
candygal79 posted a new topic called DS-6 in the Data Sufficiency forum
“In isosceles RST what is measure of < R 1) the measure of <T is 100 2) the measure of < S is 40”
April 4, 2014
candygal79 posted a new topic called DS-5 in the Data Sufficiency forum
“This morning, a certain sugar container was full. Since then some of the sugar from this container was used to make cookies. If no other sugar was removed from or added to the container, by what percent did the amount of sugar in the container decrease ? 1) The amount of sugar in the container ...”
April 4, 2014
candygal79 posted a new topic called DS-4 in the Data Sufficiency forum
“For the students in class A , the range of their heights is r cms and the greatest height is g cms. For the students in class B, the range of their heights is s cms and the greatest height is h cms. Is the least height of the students class A greater than the least height of the students in class B ...”
April 4, 2014
candygal79 posted a new topic called DS -3 in the Data Sufficiency forum
“Is p + pz = p 1) p= 0 2) z = 0”
April 4, 2014
candygal79 posted a new topic called DS -2 in the Data Sufficiency forum
“Malik''s recipe for 4 servings of a certain dish requires 1 1/2 cups of pasta. According to this recipe, what is the number of cups pasta that Malik will use the next time he prepares this dish ? 1) The next time he prepare this dish, Malik will make half as many servings as he did the last time ...”
April 4, 2014
candygal79 posted a new topic called DS -1 in the Data Sufficiency forum
“If the product of the three digits of the positive integer k is 14, what is the value of k ? 1) k is an odd integer 2) k< 700”
April 4, 2014
candygal79 posted a new topic called Seriously i dont get this qs in the Problem Solving forum
“A metal company''s old machine makes bolts a constant rate of 100 bolts per hour. The company''s new machine makes bolts constant rate of 150 bolts per hour. If both machine start at the same time and continue making bolts simultaneously, how many MINUTES will it take the two machines to make a ...”
April 4, 2014
candygal79 posted a new topic called Not sure in the Problem Solving forum
“Which of the following inequalities has a solution set that, when graphed on the number line, is a single segment of finite length ? A x4 greater or equal 1 B x3 lesser or equal 27 C x2 greater or equal 16 D 2 lesser or equal|x|lesser or equal 5 E 2 lesser or equal 3x + 4 lesser or ...”
April 4, 2014
candygal79 posted a new topic called Not sure in the Problem Solving forum
“Which of the following inequalities has a solution set that, when graphed on the number line, is a single segment of finite length ? A x4 greater or equal 1 B x3 lesser or equal 27 C x2 greater or equal 16 D 2 lesser or equal|x|lesser or equal 5 E 2 lesser or equal 3x + 4 lesser or ...”
April 4, 2014
candygal79 posted a new topic called I dont seem to get the calculation in the Problem Solving forum
“In 1990 the budgets for projects Q & V were $660,000 and$ 780,000, respectively. In each of the next 10 years , the budget for Q increased by $30,000 and the budget for V decreased by$ 10,000. In which was the budget for Q equal to the budget for V ? A 1992 B 1993 C 1994 D 1995 ...”
April 4, 2014
candygal79 posted a new topic called Prime Factor in the Problem Solving forum
“What is the sum of different positive Prime Factor of 550 ? A 10 B 11 C 15 D 16 E 18”
April 4, 2014
candygal79 posted a new topic called Ratio in the Problem Solving forum
“The perimeters of square regions and rectangular region R are equal. If the sides of R are in the ratio 2:3 , what is the ratio of the area of region R to the are of region S ? A 25:16 B 24:25 C 5:6 D 4:5 E 4:9”
April 4, 2014
candygal79 posted a new topic called Problem 3 in the Problem Solving forum
“A certain state levies charge 4% tax on the nightly rates of hotel room. A certain hotel in this state also charge $2.00 nightly fee per room which is not subjected to tax. If the total charge for a room for one night was$ 74.80, what was the nightly rate of the room ?”
April 3, 2014
candygal79 posted a new topic called Problem 3 in the Problem Solving forum
“48 % male students , 52 % female students in a school.In a graduating class, there was 40 % male and 20 % female who are 25 years and older. If one of the student in the graduating class is random selected, approx what is the probability he/she will be less than 25 years ?”
April 3, 2014
candygal79 posted a new topic called Problem 2 in the Data Sufficiency forum
“Is the product of a certain pair of integer even ? 1) The sum of the integers is odd 2) One of the integer is even and odd”
April 3, 2014
candygal79 posted a new topic called Problem 1 in the Data Sufficiency forum
“If x is a positive number, is x < 16 ? 1) x is less than average of the first ten positive integer 2) x is the square of an integer”
April 3, 2014
candygal79 posted a new topic called Problem 2 in the Problem Solving forum
“A lawyer charges her clients $200 at first hour of her time and$ 150 for each additional hour. If the lawyer charged her new client $1,550 for a certain number of hours of her time , how much was the average charge per hour ?” April 3, 2014 candygal79 posted a new topic called Problem in the Problem Solving forum “An auction charges a commission 15 % on the first$50,000 of the sale price of an item, plus 10% on the amount of the sale price in excess of $50,000. What was the sale price of a painting for which the auction house charged a total commission of$ 24,000 ?”
April 3, 2014
vivek1303 posted a reply to Beating the Verbal – (760 Q50, V42, AWA 6.0, IR8) in the I just Beat The GMAT! forum
“Great Score!!! Now is your time to drench yourself in glory. Anyhow, what was the time-gap between your actual exam and your retake? Thanks!”
April 3, 2014
mukherjee.tanuj3@gmail.co posted a reply to Official 1 in the Sentence Correction forum
“In option "D", IT has no antecedent. Regards, Mukherjee”
April 1, 2014
“Experts HELP! Regards, Mukherjee”
March 31, 2014
mukherjee.tanuj3@gmail.co posted a reply to GMAT4 in the Data Sufficiency forum
“I choose "E" as I didn''t see the word "Different"! Thanx ppl”
March 31, 2014
mukherjee.tanuj3@gmail.co posted a reply to GMAT4 in the Sentence Correction forum
“Q1. Why is THEM wrong, as the 1st segment uses THEM? Q2. How are the 3 segments parallel? Regards, Mukherjee”
March 31, 2014
mukherjee.tanuj3@gmail.co posted a reply to GMAT4 in the Sentence Correction forum
March 31, 2014
mukherjee.tanuj3@gmail.co posted a reply to GMAT3 in the Sentence Correction forum
“Experts! Bear with me! A. good B. "that" is problematic as it doesn''t have a clear antecedant.Still, I have a doubt is "From when correct"? C."had" is wrong as no comparison between two past activities. D. IT might point to "growth" or ...”
March 31, 2014
mukherjee.tanuj3@gmail.co posted a reply to GMAT3 in the Sentence Correction forum
March 31, 2014
mukherjee.tanuj3@gmail.co posted a reply to GMAT2 in the Sentence Correction forum
“You made the problem so easy! Cheers”
March 31, 2014
mukherjee.tanuj3@gmail.co posted a reply to GMAT1 in the Critical Reasoning forum
March 31, 2014
mukherjee.tanuj3@gmail.co posted a new topic called RC in the Reading Comprehension forum
“http://s4.postimg.org/8t6h3tfzt/RC1.jpg”
March 30, 2014
mukherjee.tanuj3@gmail.co posted a new topic called RC in the Reading Comprehension forum
“http://s4.postimg.org/8t6h3tfzt/RC1.jpg”
March 30, 2014
mukherjee.tanuj3@gmail.co posted a new topic called RC in the Reading Comprehension forum
“http://s4.postimg.org/8t6h3tfzt/RC1.jpg”
March 30, 2014
mukherjee.tanuj3@gmail.co posted a new topic called GMAT2 in the Critical Reasoning forum
“http://s11.postimg.org/9vicqf7q7/CR2.jpg”
March 30, 2014
mukherjee.tanuj3@gmail.co posted a new topic called GMAT1 in the Critical Reasoning forum
“http://s11.postimg.org/5epzrqti7/CR1.jpg”
March 30, 2014
mukherjee.tanuj3@gmail.co posted a new topic called GMAT5 in the Sentence Correction forum
“http://s27.postimg.org/b6wrfs17j/SC5.jpg”
March 30, 2014
mukherjee.tanuj3@gmail.co posted a new topic called GMAT4 in the Sentence Correction forum
“http://s8.postimg.org/kv8vwls1t/SC4.jpg”
March 30, 2014
mukherjee.tanuj3@gmail.co posted a new topic called GMAT3 in the Sentence Correction forum
“http://s3.postimg.org/h61ew74bj/SC3.jpg”
March 30, 2014
mukherjee.tanuj3@gmail.co posted a new topic called GMAT2 in the Sentence Correction forum
“http://s17.postimg.org/uzjxjsrob/SC2.jpg”
March 30, 2014
mukherjee.tanuj3@gmail.co posted a new topic called GMAT1 in the Sentence Correction forum
“http://s24.postimg.org/hzhozczb5/SC1.jpg”
March 30, 2014
mukherjee.tanuj3@gmail.co posted a new topic called GMAT4 in the Data Sufficiency forum
“http://s28.postimg.org/pazxmt9ux/Quants5.jpg”
March 30, 2014
mukherjee.tanuj3@gmail.co posted a new topic called GMAT3 in the Data Sufficiency forum
“http://s27.postimg.org/lrk0k7swv/Quants3.jpg”
March 30, 2014
mukherjee.tanuj3@gmail.co posted a new topic called GMAT2 in the Data Sufficiency forum
“http://s28.postimg.org/q7nitcaw9/Quant2.jpg”
March 30, 2014
mukherjee.tanuj3@gmail.co posted a new topic called GMAT1 in the Data Sufficiency forum
“http://s29.postimg.org/yg0kmcckz/Quants.jpg”
March 30, 2014
candygal79 posted a new topic called Yelp (2) in the Data Sufficiency forum
“If x and y are prime numbers, is y(x-3) odd ? 1) x > 10 2) y < 3 My workout - x can be 11 ,13 ,17 ,19 and so on forth lets say x is 11 Y ( 11-3) = y (8) , it can''t be odd since it is multiplying with 8 2 ) y < 3 y =2 HELP !”
March 29, 2014
candygal79 posted a new topic called Yelp! in the Data Sufficiency forum
“If y > 0 , is x less than 0 ? 1) xy = 16 2 ) x- y = 6 My workout - xy =16 ( 4.4 , 1.16 ,2.8) So, x cannot be less than 0 . x - y = 6 x= 6+ y So, x cannot be less than 0 as it will be more when adding 6 I would go with E but the answer says otherwise. Pls help !!”
March 29, 2014
candygal79 posted a new topic called I don''t understand the question, Yelp !!! in the Data Sufficiency forum
“Is 0 < a/b <1 ? 1) ab >1 2) a-b <1 ? I don''t understand how to do this DS .”
March 29, 2014
mukherjee.tanuj3@gmail.co posted a reply to sentence correction in the Sentence Correction forum
“why not "C"?”
March 28, 2014
mukherjee.tanuj3@gmail.co posted a reply to Reptiles body heat in the Sentence Correction forum
“Hey! When will I know "by" is implied or not implied? I choose "D", because "A" was missing "by". Regards, mukherjee”
March 28, 2014
mukherjee.tanuj3@gmail.co posted a reply to Industrial pollutants in the Sentence Correction forum
“Hey! Option "A" breaks parallelism. showed that .. past perfect and simple past. I don''t UNDERSTAND, am I missing something? Regards, Mukherjee EXPERTS Do HELP”
March 28, 2014
“so as to v/s such that? which one of the above is correct?”
March 28, 2014
mukherjee.tanuj3@gmail.co posted a reply to Official 2 in the Sentence Correction forum
March 28, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Official 2 in the Critical Reasoning forum
“http://s21.postimg.org/jeqw1k7sj/CR2_GMAT.jpg”
March 27, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Official 1 in the Critical Reasoning forum
“http://s30.postimg.org/7ss86gqjx/CR_GMAT.jpg”
March 27, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Official 4 in the Sentence Correction forum
“http://s23.postimg.org/k86e6r82f/SC4.jpg”
March 27, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Official 3 in the Sentence Correction forum
“http://s18.postimg.org/6a3qftul1/SC3.jpg”
March 27, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Official 2 in the Sentence Correction forum
“http://s12.postimg.org/purq7ruzt/image.jpg”
March 27, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Official 1 in the Sentence Correction forum
“http://s10.postimg.org/ya9jcyjqd/Doubt.jpg”
March 27, 2014
candygal79 posted a new topic called YELP in the Problem Solving forum
“Ok am stuck with this . I tried many times but i failed to get the answer. 4. The sales price of a car is $12,590, which is 20% off the original price. What is the original price? A.$14,310.40 B. $14,990.90 C.$15,290.70 D. $15,737.50 E.$16,935.80”
March 27, 2014
candygal79 posted a new topic called Am stuck !! Pls help !! in the Problem Solving forum
“2. If Sally can paint a house in 4 hours, and John can paint the same house in 6 hour, how long will it take for both of them to paint the house together? A. 2 hours and 24 minutes B. 3 hours and 12 minutes C. 3 hours and 44 minutes D. 4 hours and 10 minutes E. 4 hours and 33 minutes ...”
March 27, 2014
candygal79 posted a new topic called Am stuck !! Pls help !! in the Problem Solving forum
“. Jim is able to sell a hand-carved statue for $670 which was a 35% profit over his cost. How much did the statue originally cost him? A.$496.30 B. $512.40 C.$555.40 D. $574.90 E.$588.20”
March 27, 2014
mukherjee.tanuj3@gmail.co posted a reply to Difficult in the Critical Reasoning forum
“Sorry! Couldn''t understand your explanation! I choose "D" too! Why "C"?”
March 26, 2014
mukherjee.tanuj3@gmail.co posted a reply to OG11...why is 'a' wrong? in the Sentence Correction forum
“Bubbi! You couldn''t convince me!”
March 25, 2014
mukherjee.tanuj3@gmail.co posted a reply to OG11...why is 'a' wrong? in the Sentence Correction forum
“OA E”
March 25, 2014
candygal79 posted a new topic called Whats the calculation ? in the Problem Solving forum
“At the beginning of 1992, Maria''s stock portfolio was worth $600,000. During 1992, the portfolio''s value increased by 95%. During the next year, the portfolio increased its worth by only 5%. What was Maria''s portfolio worth, in dollars, by the end of 1993? 1,180,000 1,200,000 1,200,300 ...” March 25, 2014 candygal79 posted a new topic called # operation ? in the Problem Solving forum “If the operation # is defined for all x and y by the equation x#y=2(x2y−y2x), then (6#5)/(5#6)= a = -2 b= -1 c= 1/2 d= 2 e= 1” March 24, 2014 candygal79 posted a new topic called Yelp in the Problem Solving forum “Marv earned in two consecutive days a total of 920 dollars. Half of the amount that Marv earned on the first day is 20 dollars less then the amount that he earned on the second day. How much did Marv make on the first day (in dollars)? A-540 B-200 C-600 D-700 E-640” March 24, 2014 mukherjee.tanuj3@gmail.co posted a new topic called OG11...why is ''a'' wrong? in the Sentence Correction forum “http://s3.postimg.org/a46yt04sv/og11.jpg” March 22, 2014 mukherjee.tanuj3@gmail.co posted a new topic called OG11.... Why \"b\", which is also a noun, is wrong? in the Sentence Correction forum “http://s16.postimg.org/gssp81hkx/OG11.jpg” March 22, 2014 mukherjee.tanuj3@gmail.co posted a new topic called OG11.... Why \"b\", which is also a noun, is wrong? in the Sentence Correction forum “http://s16.postimg.org/gssp81hkx/OG11.jpg” March 22, 2014 mukherjee.tanuj3@gmail.co posted a reply to GMAT Prep-Stone Age People in the Sentence Correction forum “The answer has to be "C". Reasons: X rather than Y .Here X and Y have to be parallel. eliminate A and C as these options use have. Among C,D and E only "c" keeps the parallelism. option E has one more problem, "including" at the end seems to refer the entire clause ...” March 21, 2014 mukherjee.tanuj3@gmail.co posted a reply to Music copy in the Sentence Correction forum “Why can''t "them" refer to "collections of music"? Music is in the prepositional phase!! Therefore, use of them to refer collections must be correct!? Am I wrong? Regards, Mukherjee” March 21, 2014 mukherjee.tanuj3@gmail.co posted a new topic called Why option \"E\" wrong?... tests..compared to tests in the Sentence Correction forum “http://s29.postimg.org/iwqb4a8qb/veritas3.jpg” March 21, 2014 mukherjee.tanuj3@gmail.co posted a new topic called Veritas in the Sentence Correction forum “http://s17.postimg.org/kf6x8e1aj/veritas.jpg” March 21, 2014 mukherjee.tanuj3@gmail.co posted a new topic called How is \"IS\" n \"charactize\"correct? in the Sentence Correction forum “http://s29.postimg.org/h0bhzaunn/veritas2.jpg” March 21, 2014 mukherjee.tanuj3@gmail.co posted a reply to R&D expenditures in the Sentence Correction forum “Because of the enormous research and development expenditures required to survive in the electronics industry, an industry marked by rapid innovation and volatile demand, such firms tend to be very large. Isn''t the bolded part of the above sentence,causing comma splice??? Regards, ...” March 20, 2014 mukherjee.tanuj3@gmail.co posted a reply to Teratomas cancer in the Sentence Correction forum “Hey! In option "E" there is COMMA SPLICE ERROR. Am I wrong? Do reply! Regards, Mukherjee” March 20, 2014 ngupta27 posted a new topic called GMAT Verbal Course- EGMAT - need co-buyer in the GMAT Verbal & Essays forum “Hi ALL, I am planning to buy EGMAT Verbal online course. ITs cost is$199 . I am willing to share the cost if anyone else needs it. We can pay 50-50. Let me know if anyone is interested.”
March 20, 2014
mukherjee.tanuj3@gmail.co posted a reply to Magoosh prob 2 in the Sentence Correction forum
“Hey! Can I say that option "E" is the only option that follows infinitives of purpose. If yes, then is it a thumb rule? Regards, Mukherjee”
March 20, 2014
candygal79 posted a new topic called I am having trouble with distance calculation (2) in the Problem Solving forum
“Two cars travel away from each other in opposite directions at 24 miles per hour and 40 miles per hour respectively. If the first car travels for 20 mins and the second car for 45 min, how many miles apart will they be at the end of their trips ? 22 24 30 38 42”
March 20, 2014
candygal79 posted a new topic called I am having trouble with distance calculation (1) in the Problem Solving forum
“A certain mule travels at 2/3 the speed of a certain horse. If it takes the horse 6 hours to travel 20 miles, how many hours will the trip take the mule ? 4 8 9 10 30”
March 20, 2014
mukherjee.tanuj3@gmail.co posted a reply to SC Q5GMATPrep- EXPERT HELP NEEDED in the Sentence Correction forum
“Hey Bill! I read in Manhattan SC book that "which" points to the noun, which is just before the comma. The book says the above rule is a thumb rule. I''m confused! According to the logic that I previously drilled, your example must be wrong. Could you please clear my doubt? Help me ...”
March 20, 2014
mukherjee.tanuj3@gmail.co posted a reply to SC Q6GMATPrep- EXPERT HELP NEEDED in the Sentence Correction forum
“Thank you!”
March 20, 2014
mukherjee.tanuj3@gmail.co posted a reply to SC Q6GMATPrep- EXPERT HELP NEEDED in the Sentence Correction forum
“Hey! I''m sorry for the last post! I am still not convinced with your explanation about why option "C"is wrong? Regards, Mukherjee”
March 19, 2014
mukherjee.tanuj3@gmail.co posted a reply to SC Q5GMATPrep- EXPERT HELP NEEDED in the Sentence Correction forum
“Hey! How to judge when "which"/"that" refer to noun just before comma and when they refer to noun before prepositional phrase? Regards, mukherjee”
March 19, 2014
mukherjee.tanuj3@gmail.co posted a reply to Josephine Baker in the Sentence Correction forum
“Hey Patrick! I found one reason to eliminate "D". We can''t use "she" after "and"! Tanuj ran to the police and filed the FIR.( correct) Tanuj ran to the police and he filed the FIR. ( wrong) Am I right? Regards, Mukherjee”
March 19, 2014
mukherjee.tanuj3@gmail.co posted a reply to SC Q5GMATPrep- EXPERT HELP NEEDED in the Sentence Correction forum
“Hey! I have one more reason to eliminate A,B & C. "Which" and "That" seem to point to shots rather than news. Therefore, these options must be wrong. "another" along with "in adition" in option "D" must be wrong. Left with option ...”
March 19, 2014
mukherjee.tanuj3@gmail.co posted a reply to SC Q4GMATPrep- EXPERT HELP NEEDED in the Sentence Correction forum
“Hey! I have a very silly doubt! "I had my lunch". This sentence is simple past, right? I have one more doubt, why is the name of the bird put inside 2 commas?... I think only one comma should be used i.e before the name of the bird. regards, Mukherjee”
March 19, 2014
mukherjee.tanuj3@gmail.co posted a reply to SC Q2GMATPrep- EXPERT HELP NEEDED in the Sentence Correction forum
“Hey! I found one more reason to eliminate "B". We can''t use "it" after "and"! Tanuj ran to the police and filed the FIR.( correct) Tanuj ran to the police and he filed the FIR. ( wrong) Am I right? Regards, Mukherjee”
March 19, 2014
mukherjee.tanuj3@gmail.co posted a reply to SC GMATPrep- EXPERT HELP NEEDED in the Sentence Correction forum
“Hey! What is "her" referring to? regards, mukherjee”
March 19, 2014
mukherjee.tanuj3@gmail.co posted a reply to SC Q3GMATPrep- EXPERT HELP NEEDED in the Sentence Correction forum
“Hey Rich! I can''t eliminate option "C" and option"A" wrong! Help! Regards, Mukherjee”
March 19, 2014
mukherjee.tanuj3@gmail.co posted a new topic called Magoosh prob 2 in the Sentence Correction forum
“3) The CEO of Laminar Flow gave his R & D team a new $300 million dollar research facility, with cutting-edge technology, that they can research potentially revolutionary innovations in. (A) that they can research potentially revolutionary innovations in (B) for conducting research about ...” March 19, 2014 mukherjee.tanuj3@gmail.co posted a new topic called Magoosh prob in the Sentence Correction forum “1) Snakes molt their skins regularly, for the purpose of regenerating skin worn by ground contact and for eliminating parasites, like ticks and mites, and this made ancient people venerate the snake as a symbol of resilience and healing. (A) for the purpose of regenerating skin worn by ground ...” March 19, 2014 mukherjee.tanuj3@gmail.co posted a new topic called CONFUSED in the Sentence Correction forum “OG #19 (verbal review purple book) While depressed property values can hurt some large investors, they are potentially devastating for homeowners, whose equity – in many cases representing a life’s savings – can plunge or even disappear. By the OG, The sentence above, choice (A), is ...” March 19, 2014 mukherjee.tanuj3@gmail.co posted a reply to SC Q6GMATPrep- EXPERT HELP NEEDED in the Sentence Correction forum “Hey! I am still not convinced with your explanation about why option "B"is wrong? Regards, Mukherjee” March 19, 2014 mukherjee.tanuj3@gmail.co posted a reply to SC Q7GMATPrep- EXPERT HELP NEEDED in the Sentence Correction forum “Hey! I have a serious problem! I solved this question with correct explanations in 55 seconds, but in the actual Gmatprep test I couldn''t understand the problem at all. Could you please help me? Regards, Mukherjee” March 19, 2014 mukherjee.tanuj3@gmail.co posted a reply to GMAT RC in the Reading Comprehension forum “OA[spoiler]C[spoiler] A:We are indeed comparing the functions. But, It isn''t the main purpose. This option is a good contender. B:There is no debate over the role of verenosal organ. C:Read the first line. So,there is a debate on constitution of pheromone. D: Defination! Out of scope E: Not ...” March 18, 2014 mukherjee.tanuj3@gmail.co posted a reply to greater than equal to in the Problem Solving forum “Great Explanation! Regards, Mukherjee” March 18, 2014 March 18, 2014 mukherjee.tanuj3@gmail.co posted a reply to Smallest Prime Factor-GMATPREP in the Data Sufficiency forum “Excellent Explanation!!! Thank you! Regards, Tanuj” March 18, 2014 mukherjee.tanuj3@gmail.co posted a new topic called SC Q7GMATPrep- EXPERT HELP NEEDED in the Sentence Correction forum “http://s13.postimg.org/xqxewg7kz/sc_Q7_GMATPrep.jpg” March 18, 2014 mukherjee.tanuj3@gmail.co posted a new topic called SC Q6GMATPrep- EXPERT HELP NEEDED in the Sentence Correction forum “http://s1.postimg.org/ks4xk2dfv/sc_Q6_GMATPrep.jpg” March 18, 2014 mukherjee.tanuj3@gmail.co posted a new topic called SC Q5GMATPrep- EXPERT HELP NEEDED in the Sentence Correction forum “http://s30.postimg.org/fw9lsc5tp/sc_Q5_GMATPrep.jpg” March 18, 2014 mukherjee.tanuj3@gmail.co posted a new topic called SC Q4GMATPrep- EXPERT HELP NEEDED in the Reading Comprehension forum “http://s8.postimg.org/dau48wsxd/RCQ4_Gmatprep.jpg” March 18, 2014 mukherjee.tanuj3@gmail.co posted a new topic called SC Q3GMATPrep- EXPERT HELP NEEDED in the Reading Comprehension forum “http://s10.postimg.org/i5ctroel1/RCQ3_Gmatprep.jpg” March 18, 2014 mukherjee.tanuj3@gmail.co posted a new topic called SC Q2GMATPrep- EXPERT HELP NEEDED in the Reading Comprehension forum “https://s11.postimg.org/l1hkk02zz/RCQ2_Gmatprep.jpg” March 18, 2014 mukherjee.tanuj3@gmail.co posted a new topic called SC Q1GMATPrep- EXPERT HELP NEEDED in the Reading Comprehension forum “http://s9.postimg.org/pkp4mrdff/RCQ1_Gmatprep.jpg” March 18, 2014 mukherjee.tanuj3@gmail.co posted a new topic called SC Q4GMATPrep- EXPERT HELP NEEDED in the Sentence Correction forum “http://s17.postimg.org/b9yxmd7p7/sc_Q4_GMATPrep.jpg” March 18, 2014 mukherjee.tanuj3@gmail.co posted a new topic called SC Q3GMATPrep- EXPERT HELP NEEDED in the Sentence Correction forum “http://s2.postimg.org/h3nd4q479/sc_Q3_GMATPrep.jpg” March 18, 2014 mukherjee.tanuj3@gmail.co posted a new topic called SC Q2GMATPrep- EXPERT HELP NEEDED in the Sentence Correction forum “http://s13.postimg.org/5f27ojrz7/sc_Q2_GMATPrep.jpg” March 18, 2014 mukherjee.tanuj3@gmail.co posted a new topic called SC GMATPrep- EXPERT HELP NEEDED in the Sentence Correction forum “http://s30.postimg.org/aox8vzn25/sc_Q1_GMATPrep.jpg” March 18, 2014 mukherjee.tanuj3@gmail.co posted a new topic called Smallest Prime Factor-GMATPREP in the Data Sufficiency forum “If h(n)= 2*4*6*8...n, then what is the smallest prime factor of h(100)+1?” March 18, 2014 mukherjee.tanuj3@gmail.co posted a new topic called greater than equal to in the Problem Solving forum “What''s the difference between x/y>1 and x>y? I thought the answer was " no difference" But, according to GMAT Prep, there is a difference. In x/y>1, both x & y have to be + or -. But there is no such criteria in x>y. I am still not convinced. Do help!! Regards, ...” March 18, 2014 mukherjee.tanuj3@gmail.co posted a new topic called GMAT PREP Q HELP in the Problem Solving forum “http://s11.postimg.org/6ap3ayblb/doubt.jpg” March 18, 2014 LulaBrazilia posted a new topic called When was the order filled? in the Data Sufficiency forum “On Monday morning a certain machine ran continuously at a uniform rate to fill a production order. At what time did it completely fill the order that morning? 1) The machine began filling the order at 9:30 am 2) The machine had filled 1/2 of the order by 10:30am and 5/6 of the order by 11:10am” March 17, 2014 LulaBrazilia posted a new topic called Is xy > 5? in the Data Sufficiency forum “Is xy > 5? 1) 1 <= x <= 3 and 2 <= y <= 4 2) x+y = 5” March 17, 2014 LulaBrazilia posted a new topic called Probability that X and Y solve in the Problem Solving forum “Xavier, Yvonne, and Zelda each try independently to solve a problem. If their individual probabilities for success are 1/4, 1/2, and 5/8, respectively, what is the probability that Xavier and Yvonne, but not Zelda, will solve the problem? A) 11/8 B) 7/8 C) 9/64 D) 5/64 E) 3/64” March 17, 2014 LulaBrazilia posted a new topic called Resistors connected in parallel in the Problem Solving forum “In an electric circuit, two resistors with resistances x and y are connected in parallel. In this case, if r is the combined resistance of these two resistors, then the reciprocal of r is equal to the sum of the reciprocals of x and y. What is r in terms of x and y? A) xy B) x+y C) ...” March 17, 2014 candygal79 posted a reply to Arithmetic in the Problem Solving forum “= (5+9+12)/6 -------------- (3+4+36)/9 Ok why is it /6 and 9 ? I am kind of confused here ...” March 14, 2014 candygal79 posted a new topic called Arithmetic in the Problem Solving forum “5/6 +3/2 + 2 -------------- 1/3 + 4/9 + 4 I tried several times but am not getting 39/43 !” March 14, 2014 candygal79 posted a new topic called Am not getting the right of 13 5/6 in the Problem Solving forum “7 + 5 X (1/4)2(exponent) - 6 / (2 -3)” March 14, 2014 candygal79 posted a reply to Yelp in the Data Sufficiency forum “Thank you so much for the explanation !” March 13, 2014 mukherjee.tanuj3@gmail.co posted a reply to MGMAT. Similar to OG quest in the Critical Reasoning forum “The premise says that the population % decrease is 10% in 5 yrs. EX:- No. of retired ppl moving to the country be 100 in 2005 2005: 100 2010: 90 It should 10 PERCENTAGE POINTS in the premise to make option "E" correct. Regards, Mukherjee” March 13, 2014 mukherjee.tanuj3@gmail.co posted a reply to GMAT prep! Stumped! in the Critical Reasoning forum “A classic causal argument! Keep sharing such g8 Qs Regards, Mukherjee” March 13, 2014 mukherjee.tanuj3@gmail.co posted a reply to OG 11 Question No: 10 Weaken Question in the Critical Reasoning forum “Expert HELP! It can very well be that the drugs other than Novex are very cheap than NOVEX. Option "C" doesn''t COMPARE relative cost. It merely says that NOVEX is cheap. IMO:B EXPERT HELP!!” March 13, 2014 candygal79 posted a reply to I don't understand the question, Yelp !!! in the Problem Solving forum “Thank you Sirs. NOW i clearly understood both the explanation.” March 13, 2014 candygal79 posted a new topic called Yelp in the Data Sufficiency forum “A certain number of people wait in line at a bank until served. How many people are presently in line? (1) If 2 more people joined the line and none of those currently waiting were served, there would be at least 10 people waiting in line. (2) If no more people joined the line and 4 of the ...” March 13, 2014 candygal79 posted a new topic called x is +ve ??? in the Data Sufficiency forum “Is x positive ? |x+6| > 6 |x- 6| > 6 My math workout : x > 0 ( X= 6 -6) x > 12 ( x= 6 + 6) So i chose C” March 13, 2014 candygal79 posted a new topic called Algebra ! in the Data Sufficiency forum “If a, b, and c are positive, is ac > 5 ? (1) a + b = 3 (2) 4=c-b” March 13, 2014 candygal79 posted a new topic called Algebra QS BUT I don''t get it .Yelp ! in the Problem Solving forum “Which of the following variables cannot be equal to 0 if (v+w) (x-y-z) (x+y+z)z = 3 ? A )v B )w C )x D )y E )z” March 13, 2014 candygal79 posted a new topic called I don''t understand the question, Yelp !!! in the Problem Solving forum “If a=2/7 b and b=7/3 c then what percent of a is c? A)50 % B) 57 1/7 % C) 66 2/3 % D) 125% E)150% Thanks” March 13, 2014 mukherjee.tanuj3@gmail.co posted a reply to Voter's choice in the Critical Reasoning forum “Hey Rich! We are NOT supposed to SECONDARY ASSUME! We should work only on the data presented above! Am I wrong? Regards, Mukherjee” March 13, 2014 candygal79 posted a new topic called Worried - How can i deal with RC qs ? in the Reading Comprehension forum “I am having trouble with RC - I am not able to get more right answers . Is note taking necessary ? I don''t take notes . I feel very frustrated when i get wrong answers . Worried ! How can i deal with RC ?Eg out 7qs, i probably will get 2 or 3 correct answers ..How can i improve ..I am using Kaplan ...” March 13, 2014 mukherjee.tanuj3@gmail.co posted a reply to Payload in the Critical Reasoning forum “IMO C Expert HElp” March 12, 2014 mukherjee.tanuj3@gmail.co posted a reply to GMAC Paper Test in the Critical Reasoning forum “I can''t eliminate "D"!!! Please HELP” March 12, 2014 LulaBrazilia posted a new topic called what is the value of x^2 - y^2? in the Data Sufficiency forum “What is the value of x^2 - y^2? 1) x-y = y+2 2) x-y = 1/(x+y)” March 12, 2014 LulaBrazilia posted a new topic called is y greater than 75? in the Data Sufficiency forum “If y is greater than 110 percent of x, is y greater than 75? 1) x > 75 2) y-x = 10” March 12, 2014 LulaBrazilia posted a new topic called Which n would raise average to 55? in the Problem Solving forum “For the past n days, the average (arithmetic mean) daily production at a company was 50 units. If today''s production of 90 unites raises the average to 55 units per day, what is the value of n? A) 30 B) 18 C) 10 D) 9 E) 7” March 12, 2014 LulaBrazilia posted a new topic called Probability of selecting siblings in the Problem Solving forum “A certain junior class has 1,000 students and a certain senior class has 800 students. Among these students, there are 60 sibling pairs, each consisting of 1 junior and 1 senior. If 1 student is to be selected at random from each class, what is the probability that the 2 students selected will be a ...” March 12, 2014 mukherjee.tanuj3@gmail.co posted a reply to OG11 Q89.... Do help in the Critical Reasoning forum “Patrick!! U rock \m/” March 11, 2014 mukherjee.tanuj3@gmail.co posted a reply to LSATs Qs free download with detailed explanations in the Critical Reasoning forum “Hey Rich, I score around 630.I''m good in mathematics. My avg score in quants is 48 on the other hand my avg score in verbal is 31. Please suggest me few tips! One more thing that I''ll to add is I can answer well while practising but during any exam I get nervous, causing low score. My aim is ...” March 11, 2014 March 11, 2014 MayankSancheti09 posted a new topic called Integers-Prime Numbers in the Problem Solving forum “How to determine whether a no. such as 67 or 139 or any other no. is a prime or not? Do we have any particular and easier method or we need to check for its divisors one by one?” March 11, 2014 LulaBrazilia posted a new topic called Number properties - how do I even start? in the Data Sufficiency forum “Can the positive integer p be expressed as the product of two integers, each of which is greater than 1? 1) 31 < p < 37 2) p is odd” March 11, 2014 LulaBrazilia posted a new topic called What is wz? in the Data Sufficiency forum “If w+z = 28, what is the value of wz? 1) w and z are positive integers. 2) w and z are consecutive odd integers.” March 11, 2014 LulaBrazilia posted a new topic called y is what percent of x? in the Problem Solving forum “If m > 0 and x is m percent of y, then, in terms of m, y is what percent of x? A) 100m B) 1/100m C) 1/m D) 10/m E) 10,000/m” March 11, 2014 LulaBrazilia posted a new topic called What is median - average? in the Problem Solving forum “If m is the average (arithmetic mean) of the first 10 positive multiples of 5 and if M is the median of the first 10 positive multiples of 5, what is the value of M-m? A) -5 B) 0 C) 5 D) 25 E) 27.5” March 11, 2014 LulaBrazilia posted a new topic called If x = 0.rstu what is x? in the Data Sufficiency forum “If x = 0.rstu, where r, s, t, and u each represent a nonzero digit of x, what is the value of x? 1) r = 3s = 2t = 6u 2) The product of r and u is equal to the product of s and t” March 10, 2014 LulaBrazilia posted a new topic called Is sqrt(n+k) > 2*sqrt(n)? in the Data Sufficiency forum “If n and k are positive integers, is sqrt(n+k) > 2*sqrt(n)? 1) k > 3n 2) n+k > 3n” March 10, 2014 LulaBrazilia posted a new topic called Grouping researchers in the Problem Solving forum “Of the 50 researchers in a workgroup, 40 percent will be assigned to team A and the remaining 60 percent to team B. However, 70 percent of the researchers prefer team A and 30 percent prefer team B. What is the lowest possible number of researchers who will NOT be assigned to the team they prefer? ...” March 10, 2014 LulaBrazilia posted a new topic called How many positive factors? in the Problem Solving forum “How many different positive integers are factors of 441? A) 4 B) 6 C) 7 D) 9 E) 11” March 10, 2014 mukherjee.tanuj3@gmail.co posted a reply to LSATs Qs free download with detailed explanations in the Critical Reasoning forum “MaTT thanx for your response but I''m in need of free pdf file! please share the required!!” March 9, 2014 March 9, 2014 mukherjee.tanuj3@gmail.co posted a reply to OG11 Q89.... Do help in the Critical Reasoning forum “I know the OA is right, but I can''t eliminate the option E. Give me a solid reason why " E" is wrong?” March 9, 2014 mukherjee.tanuj3@gmail.co posted a new topic called OG11 Q89.... Do help in the Critical Reasoning forum “http://s27.postimg.org/4m6qfne7j/Untitled.jpg OA A” March 9, 2014 mukherjee.tanuj3@gmail.co posted a reply to CR 1000 in the Critical Reasoning forum “IMO- A Reason- The conclusion in the argument is in future tense. The argument doesn''t comment on something that has happened rather the it comments on something that will happen, same as "A". Regards, Mukherjee Please share OA, as sharing OA is a good practice. Do like me, if ...” March 9, 2014 mukherjee.tanuj3@gmail.co posted a reply to prep questions in the Sentence Correction forum “Hey Stacey!! The use of past perfect in the option "C" is not wrong? Do reply! Regards, Mukherjee” March 9, 2014 mukherjee.tanuj3@gmail.co posted a reply to CR 1000 in the Critical Reasoning forum “OA? Sharing OA is a good practise!” March 9, 2014 candygal79 posted a new topic called I don''t understand the question, Yelp !!! in the Data Sufficiency forum “If a and b greater than zero, a is what percentage greater than b ? 1) 4a =5b 2) a=b+2000” March 8, 2014 mukherjee.tanuj3@gmail.co posted a reply to a good paradox question... in the Critical Reasoning forum “Guys! Why "E" is wrong? Ans: "E" do tells us that the exurbania have good connection due to people fleeing from urban places but the option doesn''t comment on "why less connection is present in URBAN state!!!!". It''s HALF answer/ half cooked answer choice!!! Do ...” March 8, 2014 mukherjee.tanuj3@gmail.co posted a new topic called LSATs Qs free download with detailed explanations in the Critical Reasoning forum “Hey all, I am in urgent need of EXPIRED ACTUAL LSAT CR, SC & RC questions with DETAILED EXPLANATIONS. Do reply a URL or a pdf file. THANKS IN Advance!! Regards, mukherjee” March 8, 2014 LulaBrazilia posted a new topic called What is the value of k? in the Data Sufficiency forum “If n is a positive integer and k = 5.1*10^n, what is the value of k? 1) 6,000 < k < 500,000 2) k^2 = 2.601 * 10^9” March 7, 2014 LulaBrazilia posted a new topic called Are all numbers equal? in the Data Sufficiency forum “Are all of the numbers in a certain list of 15 numbers equal? 1) The sum of all the numbers in the list is 60. 2) The sum of any 3 numbers in the list is 12.” March 7, 2014 LulaBrazilia posted a new topic called Which cannot be true? in the Problem Solving forum “S is a set containing 9 different numbers. T is a set containing 8 different numbers, all of which are members of S. Which of the following statements cannot be true? A) The mean of S is equal to the mean of T. B) The median of S is equal to the median of T. C) The range of S is equal to ...” March 7, 2014 LulaBrazilia posted a new topic called How many even divisors? in the Problem Solving forum “If n=4p, where p is a prime number greater than 2, how many different positive even divisors does n have, including n? A) 2 B) 3 C) 4 D) 6 E) 8 Is it safe to plug in? Is there a quick algebraic approach?” March 7, 2014 mukherjee.tanuj3@gmail.co posted a reply to Discount in billboards - Not convinced with OA. in the Critical Reasoning forum “Option "B" is wrong because "The companies that are unaware about the discounts are buying the spaces because the price is low; when price increases their buying capacity will decrease." Option "C" is right because "Those companies that are non-local will buy the ...” March 7, 2014 March 7, 2014 mukherjee.tanuj3@gmail.co posted a new topic called GMATPrep Q Bold face in the Critical Reasoning forum “http://s30.postimg.org/71db7vh0t/Untitled.jpg EXPERT PLEASE HELP” March 7, 2014 March 7, 2014 mukherjee.tanuj3@gmail.co posted a reply to CR: Strengthe: GMATPREP in the Critical Reasoning forum “The Q shared is very educative. See, there are 5 five ways to strengthen an argument in cause-effect relationship:- 1.There are no other causes to the effect than the cause mentioned. 2.the reverse i.e the effect doesn''t cause the "cause". ( Applicable to this Q) 3.when cause is not ...” March 7, 2014 LulaBrazilia posted a new topic called Predicting next year''s inflation in the Critical Reasoning forum “Last year the rate of inflation was 1.2 percent, but during the current year it has been 4 percent. We can conclude that inflation is on an upward trend and the rate will be still higher next year. Which of the following, if true, most seriously weakens the conclusion above? (A) The inflation ...” March 7, 2014 LulaBrazilia posted a new topic called Robot satellites and space flights in the Critical Reasoning forum “Robot satellites relay important communications and identify weather patterns. Because the satellites can be repaired only in orbit, astronauts are needed to repair them. Without repairs, the satellites would eventually malfunction. Therefore, space flights carrying astronauts must continue. ...” March 7, 2014 LulaBrazilia posted a new topic called 1988 corporate profits in the Sentence Correction forum “As measured by the Commerce Department, corporate profits peaked in the fourth quarter of 1988 and have slipped since then, as many companies have been unable to pass on higher costs. (A) and have slipped since then, as many companies have been unable to pass on higher costs (B) and have ...” March 7, 2014 LulaBrazilia posted a new topic called R&D expenditures in the Sentence Correction forum “Because of the enormous research and development expenditures required to survive in the electronics industry, an industry marked by rapid innovation and volatile demand, such firms tend to be very large. (A) to survive (B) of firms to survive (C) for surviving (D) for survival ...” March 7, 2014 LulaBrazilia posted a new topic called what is the sum of angles x and y? in the Data Sufficiency forum “http://snag.gy/o9TlW.jpg What is the value of x+y in the figure above? 1) w=95 2) z=125” March 6, 2014 LulaBrazilia posted a new topic called Survey results in the Data Sufficiency forum “http://snag.gy/Lo00a.jpg The table above shows the results of a survey of 100 voters who each responded "Favorable" or "Unfavorable" or "Not Sure" when asked about their impressions of Candidate M and of Candidate N. What was the number of voters who resunded ...” March 6, 2014 LulaBrazilia posted a new topic called How many green marbles in R? in the Problem Solving forum “http://snag.gy/19ZDt.jpg In the table above, what is the number of green marbles in jar R? A) 70 B) 80 C) 90 D) 100 E) 110” March 6, 2014 LulaBrazilia posted a new topic called How to quickly find sum of integers? in the Problem Solving forum “If x is equal to the sum of the even integers from 40 to 60, inclusive, and y is the number of even integers from 40 to 60, inclusive, what is the value of x+y ? A) 550 B) 551 C) 560 D) 561 E) 572” March 6, 2014 LulaBrazilia posted a new topic called Robots replacing workers in the Critical Reasoning forum “In many corporations, employees are being replaced by automated equipment in order to save money. However, many workers who lose their jobs to automation will need government assistance to survive, and the same corporations that are laying people off will eventually pay for that assistance through ...” March 6, 2014 LulaBrazilia posted a new topic called Lower incidence of intestinal disease in the Critical Reasoning forum “The number of people diagnosed as having a certain intestinal disease has dropped significantly in a rural county this year, as compared to last year. Health officials attribute this decrease entirely to improved sanitary conditions at water-treatment plants, which made for cleaner water this year ...” March 6, 2014 LulaBrazilia posted a new topic called Barbara McClintock in the Sentence Correction forum “As a result of the ground-breaking work of Barbara McClintock, many scientists now believe that all of the information encoded in 50,000 to 100,000 of the different genes found in a human cell are contained in merely three percent of the cell’s DNA. (A) 50,000 to 100,000 of the different genes ...” March 6, 2014 LulaBrazilia posted a new topic called Prime lending rate in the Sentence Correction forum “The prime lending rate is a key rate in the economy; not only are the interest rates on most loans to small and medium-sized businesses tied to the prime, but also on a growing number of consumer loans, including home equity loans. (A) not only are the interest rates on most loans to small and ...” March 6, 2014 mukherjee.tanuj3@gmail.co posted a reply to Creative engineers in the Critical Reasoning forum “Why "C" option is wrong? Answer: The argument say,"the engineer spends time on the computers INSTEAD of note pads"! So even if he had note-pads besides him, he would have not used it. HOPE IT HELPS!” March 4, 2014 March 4, 2014 March 4, 2014 March 4, 2014 March 4, 2014 March 4, 2014 mukherjee.tanuj3@gmail.co posted a reply to One Billion Dollar Loss in the Sentence Correction forum “EXPERTS!!! Please answer to my question!!!” March 4, 2014 mukherjee.tanuj3@gmail.co posted a reply to NEED EXPERT ADVISE in the Sentence Correction forum “Hey! Q1I guess "compounding" and "costing" are used as verb not as participles. Q2 I read in a book that ",which" refers to the object placed JUST BEFORE the comma. Is the rule ,stated about ",which", wrong?” March 4, 2014 mukherjee.tanuj3@gmail.co posted a reply to The new soft drink, Mango Paradise in the Critical Reasoning forum “Well!! Even I had the same kind of doubts about options such as the above, then i read the book "Power score CR bible", which cleared my doubts. The answer to your Q, "Y not 5" is the logical opposite of "will" is not "will not" but "may". Now, ...” March 3, 2014 mukherjee.tanuj3@gmail.co posted a reply to Critical Reasoning in the Critical Reasoning forum “This is a defender type assumption question:- Here, if A causes B, then nothing else can cause B. OR If B is caused, then it would be A that caused it. Option "B", fits the above explanation!!” March 3, 2014 mukherjee.tanuj3@gmail.co posted a reply to NEED EXPERT ADVISE in the Sentence Correction forum “OA is B! Doubt 1: "which" is referring to only alcohol abuse. AND not drug abuse!!! right? DOUBT 2: option D is parallel and better than option "B", right? Please explain be the OA?” March 3, 2014 mukherjee.tanuj3@gmail.co posted a new topic called NEED EXPERT ADVISE in the Sentence Correction forum “http://s21.postimg.org/srfzpkrvn/Untitled.jpg [/img]” March 3, 2014 LulaBrazilia posted a new topic called Sinusitis incidence rates in the Critical Reasoning forum “In 1987 sinusitis was the most common chronic medical condition in the United States, followed by arthritis and high blood pressure, in that order. The incidence rates for both arthritis and high blood pressure increase with age, but the incidence rate for sinusitis is the same for people of all ...” March 3, 2014 LulaBrazilia posted a new topic called Women in college in the Critical Reasoning forum “The proportion of women among students enrolled in higher education programs has increased over the past decades. This is partly shown by the fact that in 1959, only 11 percent of the women between twenty and twenty-one were enrolled in college, while in 1981, 30 percent of the women between twenty ...” March 3, 2014 LulaBrazilia posted a new topic called Compounding effects of drugs in the Sentence Correction forum “Executives and federal officials say that the use of crack and cocaine is growing rapidly among workers, significantly compounding the effects of drug and alcohol abuse, which already are a cost to business of more than$100 billion a year. (A) significantly compounding the effects of drug and ...”
March 3, 2014
LulaBrazilia posted a new topic called Light baggage in the Sentence Correction forum
“From the bark of the paper birch tree the Menomini crafted a canoe about twenty feet long and two feet wide, with small ribs and rails of cedar, which could carry four persons or eight hundred pounds of baggage so light that a person could easily portage it around impending rapids. (A) baggage so ...”
March 3, 2014
mukherjee.tanuj3@gmail.co posted a reply to One Billion Dollar Loss in the Sentence Correction forum
“"IT" can refer to both the companies!!! ALL THE OPTIONS ARE WRONG!! Can anyone verify!!! Expert help please!!!”
March 2, 2014
March 2, 2014
LulaBrazilia posted a new topic called Microcomputer for every pupil. Comparison in the Sentence Correction forum
“A recent national study of the public schools shows that there are now one microcomputer for every thirty-two pupils, four times as many than there were four years ago. (A) there are now one microcomputer for every thirty-two pupils, four times as many than there were (B) there is now one ...”
February 28, 2014
LulaBrazilia posted a new topic called Horace Pippin''s hand in the Sentence Correction forum
“Having the right hand and arm being crippled by a sniper’s bullet during the First World War, Horace Pippin, a Black American painter, worked by holding the brush in his right hand and guiding its movements with his left. (A) Having the right hand and arm being crippled by a sniper’s bullet ...”
February 28, 2014
LulaBrazilia posted a new topic called Graduates'' loans and scholarships in the Data Sufficiency forum
“In a survey of 200 college graduates, 30 percent said they had received student loans during their college careers, and 40 percent said they had received scholarships. What percent of those surveyed said that they had received neither student loans nor scholarships during their college careers? ...”
February 26, 2014
LulaBrazilia posted a new topic called Exponents and factors in the Data Sufficiency forum
“If n and t are positive integers, is n a factor of t? 1) n = 3^(n-2) 2) t = 3^n thanks!”
February 26, 2014
LulaBrazilia posted a new topic called What is w + z? - Number Properties in the Problem Solving forum
“If the product of the integers w, x, y, and z is 770 and if 1 < w < x < y < z, what is the value of w + z ? A) 10 B) 13 C) 16 D) 18 E) 21”
February 25, 2014
LulaBrazilia posted a new topic called Jack and Bill''s ages in the Problem Solving forum
“Jack is now 14 years older than Bill. If in 10 years Jack will be twice as old as Bill, how old will Jack be in 5 years? A) 9 B) 19 C) 21 D) 23 E) 33”
February 25, 2014
ngupta27 posted a new topic called does anyone has access to egmat material and manhattan tests in the GMAT Strategy forum
“does anyone has access to egmat material and manhattan tests Please let me know if anyone has or if anyone is planning to buy.. we can buy it together”
February 25, 2014
LulaBrazilia posted a new topic called Agricultural Societies staple foods in the Critical Reasoning forum
“Agricultural societies cannot exist without staple crops. Several food plants, such as kola and okra, are known to have been domesticated in western Africa, but they are all supplemental, not staple, foods. All the recorded staple crops grown in western Africa were introduced from elsewhere, ...”
February 22, 2014
LulaBrazilia posted a new topic called Printwell''s Ink Jet in the Critical Reasoning forum
“Printwell’s Ink Jet Division manufactures ink-jet printers and the ink cartridges they use. Sales of its ink-jet printers have increased. Monthly revenues from those sales, however, have not increased, because competition has forced Printwell to cut the prices of its printers. Unfortunately, ...”
February 22, 2014
candygal79 posted a reply to Yelp!! in the GMAT Math forum
February 21, 2014
candygal79 posted a reply to Yelp!! in the GMAT Math forum
“Hi Mr Rich, Thanks for ur prompt reply. I have decided to purchase Kaplan Foundation book along with Manhattan Verbal books to help me. I have another question-As u mentioned me to take CAT exam , can I take the test after I brush up on my Verbal ? At least I would be confident then.”
February 21, 2014
LulaBrazilia posted a new topic called Converting harvested trees in the Sentence Correction forum
“Although improved efficiency in converting harvested trees into wood products may reduce harvest rates, it will stimulate demand by increasing supply and lowering prices, therefore boosting consumption. (A) in converting harvested trees into wood products may reduce harvest rates, it will ...”
February 21, 2014
LulaBrazilia posted a new topic called North Pacific temperatures in the Sentence Correction forum
“A study of food resources in the North Pacific between 1989 and 1996 revealed that creatures of the seabed were suffering from dwindling food supplies, possibly a result from increasing sea surface temperatures during the same period. (A) that creatures of the seabed were suffering from dwindling ...”
February 21, 2014
candygal79 posted a reply to Yelp!! in the GMAT Math forum
“Hi Mr Rich, After debating too long, I have postponed my exams to April 14.Just completed the Maths Foundation. I think How do I prepare well for Verbal section? Again I did the Diagnostic test and I scored really bad.I have problem in SC and CR.I am very worried. I am about to do the ...”
February 20, 2014
LulaBrazilia posted a new topic called What is the value of a^4 - b^4? in the Data Sufficiency forum
“What is the value of a^4 - b^4? 1) a^2 - b^2 = 16 2) a + b = 8”
February 19, 2014
LulaBrazilia posted a new topic called Circomference of circle O in the Data Sufficiency forum
“http://snag.gy/qNnUm.jpg What is the circumference of the circle above with center O? 1) The perimeter of triangle OXZ is 20 + 10 root(2) 2) The length of arc XYZ is 5*pi”
February 19, 2014
LulaBrazilia posted a new topic called compound interest in the Problem Solving forum
“If money is invested at r percent interest, compounded annually, the amount of the investment will double in approximately 70/r years. If Pat''s parents invested $5,000 in a long-term bond that pays 8 percent interest, compounded annually, what will be the approximate total amount of the investment ...” February 16, 2014 LulaBrazilia posted a new topic called Which must be greate than 1? in the Problem Solving forum “If p/q < 1 and p and q are positive integers, which of the following must be greater than 1? A) root(p/q) B) p/(p^2) C) p/2q D) q/(p^2) E) q/p” February 16, 2014 ngupta27 posted a reply to Which coaching to join in Gurgaon/NCR in the GMAT Strategy forum “v-36, q-49” February 16, 2014 ngupta27 posted a new topic called Which coaching to join in Gurgaon/NCR in the GMAT Strategy forum “Hi.. I have given GMAT with my self prep and scored 690. . Now I want to retake it and plan to take coaching this time.. I prefer self paced coaching or personalized coaching but i dont like sitting in a class to study with so many students. Can anyone suggest good coaching institute in Gurgaon/ ...” February 15, 2014 LulaBrazilia posted a new topic called Centralia corn in the Critical Reasoning forum “Which of the following most logically completes the passage below? Heavy rains during Centralia’s corn planting season prevented some farmers there from planting corn. It is now the planting season for soybeans, another of Centralia’s principal crops, and those fields originally intended for ...” February 14, 2014 LulaBrazilia posted a new topic called OLEX Petroleum in the Critical Reasoning forum “The OLEX Petroleum Company has recently determined that it could cut its refining costs by closing its Grenville refinery and consolidating all refining at its Tasberg refinery. Closing the Grenville refinery, however, would mean the immediate loss of about 1,200 jobs in the Grenville area. ...” February 14, 2014 LulaBrazilia posted a new topic called Narwhals in the Sentence Correction forum “Narwhals can be called whales of the ice: in icy channels, ponds, and ice-shielded bays they seek sanctuary from killer whales, their chief predator, and their annual migrations following the seasonal rhythm of advancing and retreating ice. (A) their annual migrations following (B) their ...” February 13, 2014 LulaBrazilia posted a new topic called Lepenski Vir boulders in the Sentence Correction forum “Sculpted boulders found at Lepenski Vir, an example of the earliest monumental art known from central and western Europe, includes 15 figures with human features similar to Upper Paleolithic forms and to Middle Eastern Nantufian stone figurines. (A) Vir, an example of the earliest monumental art ...” February 13, 2014 candygal79 posted a reply to Yelp!! in the GMAT Math forum “Hi Rich, The reason am taking GMAT by March 12 is cos I want to apply to UC Irvine for Fall intake and their deadline is on Apr 1.To be honest, I have only four Uni to apply and two of them are UCs which requires my GMAT score by Apr 1.I am applying to Cali schools since I have friends in Cali ...” February 10, 2014 candygal79 posted a new topic called Yelp!! in the GMAT Math forum “I''m taking GMAT on March 12.Just did partial of the diagnostic Maths test.Boy my Maths is rusty.I want to get into Grad school in US by this Fall.What is the best way for me to brush my Maths k Skills .I have a month to prepare and I really appreciate if you guys can give me feedback! Am kinda ...” February 10, 2014 LulaBrazilia posted a new topic called inequality in the Data Sufficiency forum “Is 2x - 3y < x^2 1) 2x - 3y = -2 2) x>2 and y>0” February 8, 2014 LulaBrazilia posted a new topic called ratio of rectangle sides in the Data Sufficiency forum “http://snag.gy/7CLwp.jpg In the figure above, what is the ratio KN/MN ? 1) The perimeter of rectangle KLMN is 30 meters. 2) The three small rectangles have the same dimensions.” February 8, 2014 LulaBrazilia posted a new topic called Nancy''s furniture store sales in the Problem Solving forum “In a certain furniture store, each week Nancy earns a salary of$240 plus 5 percent of the amount of her total sales that exceeds $800 for the week. If Nancy earned a total of$450 one week, what were her total sales that week? A) $2,200 B)$3,450 C) $4,200 D)$4,250 E) $5,000” February 1, 2014 LulaBrazilia posted a new topic called meters in x minutes in the Problem Solving forum “At the rate of m meters per s seconds, how many meters does a cyclist travel in x minutes? A) m/sx B) mx/s C) 60m/sx D) 60ms/x E) 60mx/s” February 1, 2014 LulaBrazilia posted a new topic called Copeland cigarette tax in the Critical Reasoning forum “Sonya: The government of Copeland is raising the cigarette tax. Copeland’s cigarette prices will still be reasonably low, so cigarette consumption will probably not be affected much. Consequently, government revenue from the tax will increase. Raoul: True, smoking is unlikely to decrease, ...” January 29, 2014 LulaBrazilia posted a new topic called Sea Otters population decline in the Critical Reasoning forum “In the late 1980s, the population of sea otters in the North Pacific Ocean began to decline. Of the two plausible explanations for the decline – increased predation by killer whales or disease – disease is the more likely. After all, a concurrent sharp decline in the populations of seals and ...” January 29, 2014 LulaBrazilia posted a new topic called 1980s interest rates in the Sentence Correction forum “Even though interest rates rose last year, they were not nearly as high as the early 1980s, when the economy tumbled into a recession and markets were depressed. (A) they were not nearly as high as the early 1980s, when (B) they were not nearly as high as interest rates in the early 1980s, ...” January 27, 2014 LulaBrazilia posted a new topic called English Channel swim in the Sentence Correction forum “In 1926, in her second attempt to swim across the English Channel, Gertrude Ederle not only crossed the Channel against currents that forced her to swim thirty-five miles instead of the minimal twenty-one, but she set a record for speed as well, by swimming the distance in almost two hours faster ...” January 27, 2014 LulaBrazilia posted a new topic called Household income decrease in the Data Sufficiency forum “By what percent did the median household income in Country Y decrease from 1970 to 1980? 1) In 1970 the median household income in Country Y was 2/3 of the median household income in Country X 2) In 1980 the median household income in Country Y was 1/2 of the median household income in Country ...” January 18, 2014 LulaBrazilia posted a new topic called which operation is represented in the Data Sufficiency forum “If ~ represents one of the operations +, -, and *, is k~(l+m)=(k~l)+(k~m) for all numbers k, l and m? 1) k~1 is not equal to 1~k for some numbers k. 2) ~ represents subtraction I struggle with these abstract operations” January 18, 2014 LulaBrazilia posted a new topic called Most likely range in the Problem Solving forum “If a number between 0 and 1/2 is selected at random, which of the following will the number most likely be between? A) 0 and 3/20 B) 3/20 and 1/5 C) 1/5 and 1/4 D) 1/4 and 3/10 E) 3/10 and 1/2” January 16, 2014 LulaBrazilia posted a new topic called Sector income in the Problem Solving forum “http://snag.gy/25Qzi.jpg The table above represents the combined net income of all United States companies in each of five sectors for the second quarter of 1996. Which sector had the greatest net income during the first quarter of 1996? A) Basic Materials B) Energy C) Industrial D) ...” January 16, 2014 LulaBrazilia posted a new topic called Anton Checkhov in the Sentence Correction forum “In the English-speaking world Anton Checkhov is by far better known for his plays than for his short stories, but it was during his lifetime that Chekhov’s stories made him popular while his plays were given a more ambivalent reception, even by his fellow writers. (A) by far better known for ...” January 15, 2014 LulaBrazilia posted a new topic called Commercial competition in the Sentence Correction forum “In a state of pure commercial competition, there would be a large number of producing firms, all unfettered by government regulations, all seeking to meet consumer needs and wants more successfully than each other. (A) all seeking to meet consumer needs and wants more successfully than each ...” January 15, 2014 ngupta27 posted a new topic called Which coaching to join in Gurgaon/NCR in the GMAT Strategy forum “Hi.. I have given GMAT with my self prep and scored 690. . Now I want to retake it and plan to take coaching this time.. I prefer self paced coaching or personalized coaching but i dont like sitting in a class to study with so many students. Can anyone suggest good coaching institute in ...” December 22, 2013 LulaBrazilia posted a new topic called Mall Occupancy in the Critical Reasoning forum “Mall owner: Our mall’s occupancy rate is so low that we are barely making a profit. We cannot raise rents because of an unacceptably high risk of losing established tenants. On the other hand, a mall that is fully occupied costs about as much to run as one in which a rental space here and a ...” December 21, 2013 LulaBrazilia posted a new topic called Gandania tobacco sales in the Critical Reasoning forum “In Gandania, where the government has a monopoly on tobacco sales, the incidence of smoking-related health problems has risen steadily for the last twenty years. The health secretary recently proposed a series of laws aimed at curtailing tobacco use in Gandania. Profits from tobacco sales, ...” December 21, 2013 LulaBrazilia posted a new topic called Raffle tickets and ratios in the Problem Solving forum “A club sold an average (arithmetic mean) of 92 raffle tickets per member. Among the female members, the average number sold was 84, and among the male members, the average sold was 96. What was the ratio of the number of male members to the number of female members in the club? A) 1:1 B) 1:2 ...” December 19, 2013 LulaBrazilia posted a new topic called Bacteria population in the Problem Solving forum “Four hours from now, the population of a colony of bacteria will reach 1.28 x 10^6. If the population of the colony doubles every 4 hours, what was the population 12 hours ago? A) 6.4 x 10^2 B) 8.0 x 10^4 C) 1.6 x 10^5 D) 3.2 x 10^5 E) 8.0 x 10^6” December 19, 2013 LulaBrazilia posted a new topic called Compound interest in the Problem Solving forum “If$1 were invested at 8 percent interest compounded annually, the total value of the investment, in dollars, at the end of 6 years would be A) (1.8)^6 B) (1.08)^6 C) 6(1.08) D) 1 + (0.08)^6 E) 1 + 6(0.08) I tend to confuse compound and simple interest, and how to find the total ...”
December 10, 2013
LulaBrazilia posted a new topic called Coins in pockets in the Problem Solving forum
“Coins are to be put into 7 pockets so that each pocket contains at least one coin. At most 3 of the pockets are to contain the same number of coins, and no two of the remaining pockets are to contain an equal number of coins. What is the least possible number of coins needed for the pockets? A) ...”
December 10, 2013
LulaBrazilia posted a new topic called Stadium tickets in the Problem Solving forum
“Tickets for all but 100 seats in a 10,000-seat stadium were sold. Of the tickets sold, 20% were sold at half price and the remaining tickets were sold at the full price of $2. What was the total revenue from ticket sales? A.$15,840 B. $17,820 C.$18,000 D. $19,800 E.$21,780 How can this ...”
December 9, 2013
LulaBrazilia posted a new topic called Bob & Yolanda in the Problem Solving forum
“One hour after Yolanda started walking from X to Y, a distance of 45 miles, Bob started walking along the same road from Y to X. If Yolanda''s walking rate was 3 miles per hour and Bob''s was 4 miles per hour, how many miles had Bob walked when they met? A. 24 B. 23 C. 22 D. 21 E. 19.5 ...”
December 9, 2013
ngupta27 posted a new topic called list of universities which do not require application fee in the Research MBA Programs forum
“Hi All Does anyone have the list of good universities which do not require application fee ? Please share if anyone has compiled it . Thanks”
November 26, 2013
ngupta27 posted a new topic called NUS seniors- help needed 2014 applicant in the Research MBA Programs forum
“Guys - does anyone know what questions are asked this time in NUS recommendation letter? Or is it the same every year ? Please reply asap. i need to discuss with my recommendor”
October 4, 2013
ngupta27 posted a new topic called Essays editting consultants in Gurgaon/NCR? in the The Application Process forum
“Hi Does anyone know good Essay consultants in Delhi, Gurgaon region??? Please let me know asap Thanks”
September 10, 2013
ngupta27 posted a reply to Anybody gave gmat this week or about to give? in the GMAT Strategy forum
“hey..even I am giving on Friday... which center???”
October 30, 2012
ngupta27 posted a new topic called Anybody gave gmat this week or about to give? in the GMAT Strategy forum
“Hi.. Anybody who has given gmat this week or is about to give??? Please let me know asap”
October 27, 2012
ngupta27 posted a reply to Delhi center for GMAT --need suggestion urgently in the I just Beat The GMAT! forum
“Hey Thank you guys for your instant reviews. How were your exams? Could you please give me your phone numbers ( send private message or mail in case of any prob) . I wanted to take some tips from you all before appearing for exam”
October 24, 2012
ngupta27 posted a new topic called Delhi center for GMAT --need suggestion urgently in the I just Beat The GMAT! forum
“Hi All I am about to book my GMAT date and center for Delhi. I gives two Delhi centers and I see that for 1 center(names Positive solutions), all the dates are almost full where as for the other center( named Pearson prof center), many dates are still available. Is there any dependence on the ...”
October 23, 2012
ngupta27 posted a new topic called Delhi center for GMAT --need suggestion urgently in the GMAT Strategy forum
“Hi All I am about to book my GMAT date and center for Delhi. I gives two Delhi centers and I see that for 1 center(names Positive solutions), all the dates are almost full where as for the other center( named Pearson prof center), many dates are still available. Is there any dependence on the ...”
October 23, 2012
ngupta27 posted a new topic called Need latest material of Sandeep Gupta Classes in the GMAT Strategy forum
“Hi Does anybody have latest material of Sandeep Gupta''s classes? I am not in bangalore so cant attend his clasees but would like to go through his latest materials. I would be really grateful if anybody can help Regards Nitin”
August 25, 2012
ngupta27 posted a new topic called Need latest material of Sandeep Gupta Classes in the GMAT Verbal & Essays forum
“Hi Does anybody have latest material of Sandeep Gupta''s classes? I am not in bangalore so cant attend his clasees but would like to go through his latest materials. I would be really grateful if anybody can help Regards Nitin”
August 25, 2012
ngupta27 started following vk_vinayak
August 21, 2012
ngupta27 posted a reply to concept of "OCTAVE " in RC in the Reading Comprehension forum
August 3, 2012
ngupta27 posted a new topic called what is considered as international experience? in the Suggestions and Feedback forum
“HI I have a question - what is considered as international experience? I am from India., Can anyone please tell if I work in a neighboring country like bangaladesh or Nepal or Sri lanka . WIll it be considered as International Experience? My second question - The client is UK and I talk and ...”
July 31, 2012
ngupta27 posted a reply to how can I contact someone at "beat the gmat" web s in the Suggestions and Feedback forum
“Hi Eric. I had a quick and urgent question. I read that BeatheGMAT +Knewton prep material offer lasts on 8th June 2012. Could you please tell what is the time in IST till when we can avail this offer? I dont have credit card today and will have to wait for a day to make the payment. So I ...”
June 7, 2012
ngupta27 posted a new topic called BeatheGMAT +Knewton prep material-last date with time needed in the GMAT Verbal & Essays forum
“Hi I read that BeatheGMAT +Knewton prep material offer lasts on 8th June 2012. Could anyone please tell what is the time in IST till when we can avail this offer? I dont have credit card today and will have to wait for a day to make the payment. So I wanted to know the time( in IST preferable) ...”
June 7, 2012
ngupta27 posted a new topic called B school research - Any compiled sheet present? in the Research MBA Programs forum
“Hi buddies I am done with my GMAT and wanted to research B-school programs in terms of tuition fee, cost of living , scholarship, deadlines, programs offered, average gmat score required, average work exp required etc In short, I needed an excel sheet which gives me a comparison of all these. I ...”
May 31, 2012
ngupta27 posted a reply to Score 690 - Q 49 , V -35 - What shall I do please suggest? in the GMAT Verbal & Essays forum
“@digvijayk and @ dhonu121 Thanks for the hope and your feedback. - I have a pretty good exposure to leardership activities apart from my professional career. Actively involved in my company''s cultural activities. Leading a club in my company. Running activities for an NGO. Running my own ...”
May 31, 2012
ngupta27 posted a new topic called Score 690 - Q 49 , V -35 - What shall I do please suggest? in the GMAT Verbal & Essays forum
“Hi I gave my GMAT on 3rd May and got 690 score. I have 2 years IT experience in an MNC. Have good leadership exposure to showcase. I am looking for top 50 universities only. Should I retake GMAT ? I do not have any financial backup, so would be depending on scholarship and loan only! My ...”
May 17, 2012
“Hi..I had already taken my leaves and took advantage of them to learn almost all the stuff for gmat but the scores in the 4 tests really disappointed me.”
January 12, 2012
ngupta27 posted a reply to Majority rule. in the Sentence Correction forum
“The following are three examples which have i seen. The first two I am able to interpret why the particular verb is different but the third one confuses me between 1 and 3 1. The majority of students ARE 2. The student majority IS 3. A majority of railway commuters READS books. Is the ...”
January 12, 2012
“Thanks for the reply sam. I have given all the four tests within a month''s time. Also I haven''t taken the essays and 8 minutes break in any of the tests. The number of ques wrong were nearly equal in new and old. In the older version, I had plenty of questions from OG12 while in the newer ...”
January 11, 2012
ngupta27 posted a new topic called GMAT new software shocking result-please help in the GMAT Strategy forum
“Hello everyone, I gave the two tests of GMATPrep using older version and scored 710 and 690 in the two tests. I removed that software and downloaded latest one from the mba.com site... The scores in this new software were shocking! I scored 640 and 660 in the two tests. The individual scores in ...”
January 11, 2012
ngupta27 posted a reply to Critical Reasoning Strategy - Powerpoint in the Critical Reasoning forum
January 11, 2012
ngupta27 posted a reply to 740!! Should I retake?? in the I just Beat The GMAT! forum
“Congrats and best of luck”
June 1, 2011
ngupta27 started following gmataspirant123
June 1, 2011
ngupta27 posted a reply to I beat the GMAT 740 Q49/V42 in the I just Beat The GMAT! forum
“That response was very prompt. Thanks for that. Also I needed help with BOLD Faced CR questions.I always get them wrong. IS there any material for practice or notes that can help me understand the concepts properly of those type of questions? Thanks in advance Regards Nitin”
May 19, 2011
ngupta27 posted a reply to I beat the GMAT 740 Q49/V42 in the I just Beat The GMAT! forum
“Hi Congrats for your score I couldnt find Chinese Burn’s AWA notes anywhere. Can you please share them so that we can also use Regards Nitin”
May 19, 2011
ngupta27 posted a reply to I beat the GMAT 740 Q49/V42 in the I just Beat The GMAT! forum
“Hi Congrats for your score I couldnt find Chinese Burn’s AWA notes anywhere. Can you please share them so that we can also use Regards Nitin”
May 19, 2011 |
library(hmer)
library(lhs)
library(ggplot2)
set.seed(1)
# One-dimensional Example: Sine Function
## Setup
We first look at the simplest possible example: a single univariate function. We take the following sine function:
$$f(x) = 2x + 3x\sin\left(\frac{5\pi(x-0.1)}{0.4}\right).$$
This should demonstrate the main features of emulation and history matching over a couple of waves. The other advantage of using such a simple function (in this case as well as in the later, two-dimensional, case) is that the function can be evaluated quickly, so we can compare the emulator performance against the actual function value very easily. This is seldom the case in real-world applications, where a simulator is often a ‘black box’ function that can take a long time to evaluate at any given point.
func <- function(x) {
2*x + 3*x*sin(5*pi*(x-0.1)/0.4)
}
We presume that we want to emulate this function over the input space $$x\in[0,0.6]$$. To train an emulator to this function, we require a set of known points. We’ll evaluate the function at equally spaced points along the range $$[0, 0.5]$$: ten points will be sufficient for training this one-dimensional emulator. A general rule of thumb is that we require a number of points equal to at least ten times the dimension of the input space.
data1d <- data.frame(x = seq(0.05, 0.5, by = 0.05), f = func(seq(0.05, 0.5, by = 0.05)))
These points will be passed to the hmer package functions, in order for it to train an emulator to func, interpolate points between the data points above, and propose a set of new points for training a second-wave emulator.
## Emulator Training
To train the emulator, we require at least three things: the data to train on, the names of the outputs to emulate, and the parameter ranges. The function emulator_from_data can then be used to determine prior specifications for the emulator; namely expectations and variances of its component parts, as well as correlation lengths and other structural elements. It then performs Bayes Linear adjustment on this prior emulator to give us our trained emulator. To see these elements in more detail, consult the section “The Structure of a Bayes Linear Emulator” at the bottom of this document.
We therefore define the ranges of the parameters (in this case, just one parameter $$x$$) and use the emulator_from_data function, producing objects of class Emulator:
ranges1d <- list(x = c(0, 0.6))
em1d <- emulator_from_data(data1d, c('f'), ranges1d)
#> Fitting regression surfaces...
#> Building correlation structures...
#> Creating emulators...
em1d
#> $f #> Parameters and ranges: x: c(0, 0.6) #> Specifications: #> Basis functions: (Intercept) #> Active variables x #> Regression Surface Expectation: 0.5963 #> Regression surface Variance (eigenvalues): 0 #> Correlation Structure: #> Bayes-adjusted emulator - prior specifications listed. #> Variance (Representative): 7.008676 #> Expectation: 0 #> Correlation type: exp_sq #> Hyperparameters: theta: 0.3333 #> Nugget term: 0 #> Mixed covariance: 0 The print statement for the emulator shows us the specifications: the basis functions chosen, the corresponding regression coefficients, the global variance, and the structure and hyperparameters of the correlation structure. We also note that the output of emulator_from_data is a named list of emulators: here, this is not particularly important as we only have one emulator. The print statement also indicates that Bayes Linear adjustment has been applied: if we instead wanted to examine an unadjusted emulator, we could have run emulator_from_data with the option adjusted = FALSE, or having trained an emulator we can access the prior emulator by calling o_em. The below commands give the same output. emulator_from_data(data1d, c('f'), ranges1d, adjusted = FALSE)$f
#> Fitting regression surfaces...
#> Building correlation structures...
#> Creating emulators...
#> Parameters and ranges: x: c(0, 0.6)
#> Specifications:
#> Basis functions: (Intercept)
#> Active variables x
#> Regression Surface Expectation: 0.5963
#> Regression surface Variance (eigenvalues): 0
#> Correlation Structure:
#> Variance (Representative): 7.008676
#> Expectation: 0
#> Correlation type: exp_sq
#> Hyperparameters: theta: 0.3333
#> Nugget term: 0
#> Mixed covariance: 0
em1d$f$o_em
#> Parameters and ranges: x: c(0, 0.6)
#> Specifications:
#> Basis functions: (Intercept)
#> Active variables x
#> Regression Surface Expectation: 0.5963
#> Regression surface Variance (eigenvalues): 0
#> Correlation Structure:
#> Variance (Representative): 7.008676
#> Expectation: 0
#> Correlation type: exp_sq
#> Hyperparameters: theta: 0.3333
#> Nugget term: 0
#> Mixed covariance: 0
The trained emulator, by virtue of having been provided the training points, ‘knows’ the value of the function at those points. Hence, the expectation of the emulator at those points is identical (up to numerical precision) to the known values of $$f(x)$$, and the variance at those points is $$0$$. We can access this information using the built-in functions get_exp and get_cov of the Emulator object.
em1d$f$get_exp(data1d) - data1d$f #> [1] -5.911938e-15 -7.882583e-15 -2.187139e-14 -4.374279e-14 -5.573320e-14 -3.552714e-14 -1.160183e-14 9.992007e-16 #> [9] 9.325873e-15 -2.220446e-16 em1d$f$get_cov(data1d) #> 11 12 13 14 15 16 17 18 19 20 #> 0 0 0 0 0 0 0 0 0 0 We can now use this trained emulator to evaluate the function at any point in the parameter range. While it will not exactly match the function value, each evaluation comes with its associated uncertainty. We define a ‘large’ set of additional points to evaluate the emulator on, and find their expectation and variance. test1d <- data.frame(x = seq(0, 0.6, by = 0.001)) em1d_exp <- em1d$f$get_exp(test1d) em1d_var <- em1d$f$get_cov(test1d) Since, by design, we have a function that is quick and easy to evaluate, we can directly compare the emulator predictions against the true function value, plotting the relevant quantities. In higher-dimensional cases, the hmer package has built-in plotting functionality, but here we use simple R plotting methods. #> Define a data.frame for the plotting plot1d <- data.frame( x = test1d$x,
f = func(test1d$x), E = em1d_exp, max = em1d_exp + 3*sqrt(abs(em1d_var)), min = em1d_exp - 3*sqrt(abs(em1d_var)) ) plot(data = plot1d, f ~ x, ylim = c(min(plot1d[,-1]), max(plot1d[-1])), type = 'l', main = "Emulation of a Simple 1d Function", xlab = "x", ylab = "f(x)") lines(data = plot1d, E ~ x, col = 'blue') lines(data = plot1d, max ~ x, col = 'red', lty = 2) lines(data = plot1d, min ~ x, col = 'red', lty = 2) points(data = data1d, f ~ x, pch = 16, cex = 1) legend('bottomleft', inset = c(0.05, 0.05), legend = c("Function value", "Emulated value", "Uncertainty Bounds"), col = c('black', 'blue', 'red'), lty = c(1,1,2)) We can see a few things from this plot. The emulator does exactly replicate the function at the points used for training (the black dots in the plots), and the corresponding uncertainty is zero at these points. Away from these points, the emulator does a good job of interpolating the function values, represented by the coincidence of the black and blue lines: the exception to this is areas far from a training point (the edges of the plot). However, we can see that even where the emulator expectation diverges from the function value, the both lines lie inside the uncertainty bounds (here demonstrated by the red lines). ## History Matching We have a trained emulator, which we can see performs well over the region of interest. Suppose we want to find input points which result in a given output value. While with this function it would be straightforward to find such input points (either analytically or numerically), in general this would not be the case. We can therefore follow the history matching approach, using the emulator as a surrogate for the function. History Matching consists of the following steps: • Train emulators on known points in the target space; • Use the trained emulators to rule out regions of parameter space that definitely cannot give rise to the desired output value; • Sample a new set of points from the remaining space (the ‘non-implausible’ region); • Input these new points into the model/simulator/function to obtain a new training set. These four steps are repeated until either 1) we have a suitably large number of points producing the desired output; 2) the whole parameter space has been ruled out; 3) The emulators are as confident evaluating the parameter space of interest as the model itself. Here, we will not worry about these stopping conditions and instead perform the steps to complete two waves of emulation and history matching. The first thing we require is a target output value: suppose we want to find points $$x$$ such that $$f(x)=0$$, up to some uncertainty. We define this as follows: target1d <- list(f = list(val = 0, sigma = 0.05)) This means of defining targets, while common in real-life observations where we observe an outcome with some measurement error, may not be available to a particular output. An alternative is to define the target as a range of values that could be taken: for example in this case we could define a similar target to the above as target1d_alt <- list(f = c(-0.1, 0.1)) The generate_new_runs function is used to propose new points. There are a multitude of different methods that can be used to propose the new points: in this particular one-dimensional case, it makes sense to take the most basic approach. This is to generate a large number of space-filling points, reject those that the emulator rules out as implausible, and select the subset of the remaining points that has the maximal minimum distance between them (so as to cover as much of the non-implausible space as possible). new_points1d <- generate_new_runs(em1d, 10, target1d, method = 'lhs') #> Proposing from LHS... #> 40 initial valid points generated for I=3 #> Selecting final points using maximin criterion... The necessary parameters here are the (list of) emulators, the number of points (here, 10), and the target(s). Having obtained these new points, we include them on our previous plot, along with the target bounds, to demonstrate the logic of the point proposal. We will also include a bar indicating the implausibility at each value of x: the implausibility is defined roughly as the distance between the emulator prediction and the target value, normalised by the uncertainty at that point (both the emulator uncertainty em$get_cov(x) and the observational uncertainty target1d$sigma). We will define the implausibility more rigorously shortly, but for now it signifies the level of ‘suitability’ of a point; lower values of implausibility equate to points deemed more suitable. col_func <- function(x, max_val) { if (x <= 3) return(rgb(221*x/3, 255, 0, maxColorValue = 256)) return(rgb(221+34*(x-3)/(max_val-3), 255*(1-(x-3)/(max_val-3)), 0, maxColorValue = 256)) } imps <- em1d$f$implausibility(plot1d$x, target1d$f) col.pal <- purrr::map_chr(imps, col_func, max(imps)) plot(data = plot1d, f ~ x, ylim = c(min(plot1d[,-1]-1), max(plot1d[,-1])), type = 'l', main = "Emulation of a Simple 1d Function", xlab = "x", ylab = "f(x)") lines(data = plot1d, E ~ x, col = 'blue') lines(data = plot1d, max ~ x, col = 'red', lty = 2) lines(data = plot1d, min ~ x, col = 'red', lty = 2) points(data = data1d, f ~ x, pch = 16, cex = 1) abline(h = target1d$f$val, lty = 2) abline(h = target1d$f$val + 3*target1d$f$sigma, lty = 2) abline(h = target1d$f$val - 3*target1d$f$sigma, lty = 2) points(unlist(new_points1d, use.names = F), y = func(unlist(new_points1d, use.names = F)), pch = 16, col = 'blue') points(plot1d$x, rep(min(plot1d[,-1])-0.5, length(plot1d$x)), col = col.pal, pch = 16) legend('bottomleft', inset = c(0.05, 0.1), legend = c("Function value", "Emulated value", "Uncertainty Bounds"), col = c('black', 'blue', 'red'), lty = c(1,1,2)) Here, the colour bar at the bottom indicates the implausibility: the more green the value at a point, the more acceptable the emulator thinks the point is for matching the target. Note that some of the newly proposed points (in blue) do not live inside the target bounds - in particular, there are points on the far right that are not near the target. In these regions, the implausibility is also quite green, even though we know by inspection that they will not match the target. This is a consequence of the way that the emulator proposes points: because the emulator uncertainty is large in that region of the parameter space, it cannot rule out those regions with certainty, and therefore samples from that region. This is in contrast to many optimisation methods, which look for regions that satisfy the conditions: the history matching iteratively removes regions that cannot satisfy the conditions. Because of the built-in understanding of uncertainty, the history matching is highly unlikely to remove parts of parameter space that could eventually result in an adequate match to the targets. The points in the high uncertainty region will be extremely instructive in training a second wave of emulators, which may consequently remove the space. ## Second Wave The second wave is very similar to the first, so little needs to be explained. new_data1d <- data.frame(x = unlist(new_points1d, use.names = F), f = func(unlist(new_points1d, use.names = F))) em1d_2 <- emulator_from_data(new_data1d, c('f'), ranges1d) #> Fitting regression surfaces... #> Building correlation structures... #> Creating emulators... #> Performing Bayes linear adjustment... plot1d_2 <- data.frame( x = plot1d$x, f = plot1d$f, E = em1d_2$f$get_exp(test1d), min = em1d_2$f$get_exp(test1d) - 3*sqrt(abs(em1d_2$f$get_cov(test1d))), max = em1d_2$f$get_exp(test1d) + 3*sqrt(abs(em1d_2$f$get_cov(test1d))) ) plot(data = plot1d_2, f ~ x, ylim = c(min(plot1d_2[,-1]), max(plot1d_2[,-1])), type = 'l', main = "Emulator of a Simple 1-dimensional Function: Wave 2", xlab = "Parameter value", ylab = "Function value") lines(data = plot1d_2, E ~ x, col = 'blue') lines(data = plot1d_2, max ~ x, col = 'red', lty = 2) lines(data = plot1d_2, min ~ x, col = 'red', lty = 2) points(data = new_data1d, f ~ x, pch = 16, cex = 1) legend('topleft', inset = c(0.05, 0.05), legend = c("Function value", "Emulated value", "Uncertainty Bounds"), col = c('black', 'blue', 'red'), lty = c(1,1,2)) This plot underlines the importance of using all waves of emulation. The first wave was trained over most of the space, and so gives a moderately confident estimate of the function on the interval $$[0, 0.5]$$. The second wave emulator is trained only on regions that are of interest at this point, and so is less confident than the first wave emulator in parts of parameter space. However, we still have the emulator from the first wave, which contains all the information of the previous training runs by design. In this wave of history matching, therefore, we use both the first- and second-wave emulator to propose points. new_new_points1d <- generate_new_runs(c(em1d_2, em1d), 10, z = target1d, method = 'lhs') #> Proposing from LHS... #> 29 initial valid points generated for I=3 #> Selecting final points using maximin criterion... plot(data = plot1d_2, f ~ x, ylim = c(min(plot1d_2[,-1]), max(plot1d_2[,-1])), type = 'l', main = "Emulator of a Simple 1-dimensional Function: Wave 2", xlab = "Parameter value", ylab = "Function value") lines(data = plot1d_2, E ~ x, col = 'blue') lines(data = plot1d_2, max ~ x, col = 'red', lty = 2) lines(data = plot1d_2, min ~ x, col = 'red', lty = 2) points(data = new_data1d, f ~ x, pch = 16, cex = 1) legend('topleft', inset = c(0.05, 0.05), legend = c("Function value", "Emulated value (wave 2)", "Uncertainty Bounds"), col = c('black', 'blue', 'red'), lty = c(1,1,2)) abline(h = target1d$f$val, lty = 2) abline(h = target1d$f$val + 3*target1d$f$sigma, lty = 2) abline(h = target1d$f$val - 3*target1d$f$sigma, lty = 2) points(x = unlist(new_new_points1d, use.names = F), y = func(unlist(new_new_points1d, use.names = F)), pch = 16, col = 'blue') We can see that the proposed points are much better overall than the first wave points. Because the second-wave emulator has much greater certainty in the region $$[0.5, 0.6]$$, it is far better at determining suitability of points in that region. From the plot, it looks like there should be points proposed from the central regions where the wave-two emulator is uncertain, but the fact that we are also using the first-wave emulator as a metric for point proposal means that these points are ruled out anyway. # Two-dimensional example We now consider a slightly higher-dimensional example, in order to consider emulator diagnostics and some more interesting plots. Consider the following pair of functions in two dimensions: $$f_1(x) = 2\cos(1.2x-2) + 3\sin(-0.8y+1)$$ $$f_2(x) = y\sin x - 3\cos(xy)$$ The associated parameter space to explore is given by $$x\in[-\pi/2, \pi/2]$$ and $$y\in[-1,1]$$. We create a set of initial data to train emulators on: here we select the initial points to be space-filling using a Latin Hypercube design (package lhs): func1 <- function(x) { 2*cos(1.2*x[[1]]-2) + 3*sin(-0.8*x[[2]]+1) } func2 <- function(x) { x[[2]]*sin(x[[1]]) - 3*cos(x[[1]]*x[[2]]) } initial_data <- setNames(data.frame(sweep(maximinLHS(20, 2) - 1/2, 2, c(pi, 2), "*")), c('x', 'y')) validation_data <- setNames(data.frame(sweep(maximinLHS(20, 2) - 1/2, 2, c(pi, 2), "*")), c('x', 'y')) initial_data$f1 <- apply(initial_data, 1, func1)
initial_data$f2 <- apply(initial_data, 1, func2) validation_data$f1 <- apply(validation_data, 1, func1)
validation_data$f2 <- apply(validation_data, 1, func2) In this example, we have created two data sets: a training set initial_data and a validation set validation_data. This will allow us to see the performance of the emulators on previously untested points. In the one-dimensional case, this was not necessary as we can quite easily visualise the performance of the emulator using the simple plots. As in the one-dimensional case, we will define a target to match to. In this case, we’ll match to the value given when $$x=0.1, y=0.4$$ - this means that for this example we know that the target can be matched to, and gives us a check that the emulators are not ruling out a region of parameter space that they shouldn’t be. ranges2d <- list(x = c(-pi/2, pi/2), y = c(-1, 1)) targets2d <- list( f1 = list(val = func1(c(0.1, 0.4)), sigma = sqrt(0.1)), f2 = list(val = func2(c(0.1, 0.4)), sigma = sqrt(0.005)) ) We have placed a much tighter bound of $$f_2$$, suggesting that this target should be harder to hit. We construct some emulators as in the one-dimensional case: ems2d <- emulator_from_data(initial_data, c('f1','f2'), ranges2d) #> Fitting regression surfaces... #> Building correlation structures... #> Creating emulators... #> Performing Bayes linear adjustment... ems2d #>$f1
#> Parameters and ranges: x: c(-1.5708, 1.5708): y: c(-1, 1)
#> Specifications:
#> Basis functions: (Intercept); x; y; I(x^2); I(y^2)
#> Active variables x; y
#> Regression Surface Expectation: 1.6691; 2.3664; -1.0949; 1.0637; -0.5503
#> Regression surface Variance (eigenvalues): 0; 0; 0; 0; 0
#> Correlation Structure:
#> Bayes-adjusted emulator - prior specifications listed.
#> Variance (Representative): 0.09576228
#> Expectation: 0
#> Correlation type: exp_sq
#> Hyperparameters: theta: 0.5402
#> Nugget term: 0
#> Mixed covariance: 0 0 0 0 0
#>
#> $f2 #> Parameters and ranges: x: c(-1.5708, 1.5708): y: c(-1, 1) #> Specifications: #> Basis functions: (Intercept); x; y; I(x^2); I(y^2); x:y #> Active variables x; y #> Regression Surface Expectation: -3.3511; -0.1242; 0.0779; 1.0988; 1.0975; 0.8124 #> Regression surface Variance (eigenvalues): 0; 0; 0; 0; 0; 0 #> Correlation Structure: #> Bayes-adjusted emulator - prior specifications listed. #> Variance (Representative): 0.1383129 #> Expectation: 0 #> Correlation type: exp_sq #> Hyperparameters: theta: 0.5172 #> Nugget term: 0 #> Mixed covariance: 0 0 0 0 0 0 We obtain a list of trained emulators. We note that both emulators have a dependence on the parameters, and their squares, which we would expect for this model. Since the emulators fit a quadratic surface to the data, and the data is generated by trigonometric functions, the closest global structure they can produce is quadratic. We want to check that the emulators are suitable for proposing new points robustly. We will check the following things: • We want the emulator prediction to ‘match’ the model prediction; by which we wouldn’t expect the emulator prediction to be frequently more than $$3\sigma$$ away from the model prediction in the regions that matter. • In general, we want the standardised errors of the emulator predictions relative to the model predictions to be not too large, but not too small. Large errors imply that the emulators are doing a bad job of fitting to validation points; consistently small errors imply that the emulators are underconfident and thus makes the wave of emulation less valuable for cutting out space. • We want to ensure that the emulators do not rule out any regions of space that the model would not. The converse is okay, and indeed expected (the emulators are going to be more conservative about ruling out points), but the emulators should absolutely not rule out regions of space that may be valid. These three tests are contained within validation_diagnostics, which prints out the corresponding plots for each emulator and returns any points which fail one or more of these checks. invalid_points <- validation_diagnostics(ems2d, targets2d, validation_data, plt = TRUE, row = 2) Looking at these plots shows that the emulators are doing fine. Any failing points would be flagged as red in the first two columns of the plots, and appear in the data.frame invalid_points: should we have such points it is useful to consider why they might be problematic (for example, a point at an edge or a corner of the parameter space can have strange behaviours). At this stage, we can make a variety of changes to the prior specifications in order to combat these issues if they occur, by utilising the mult_sigma and set_hyperparams functions of the Emulator object. For instance, to inflate the emulator variance by a factor of 2 and decrease the correlation length to 0.5 for the $$f_1$$ emulator: modified_em <- ems2d$f1$mult_sigma(2)$set_hyperparams(list(theta = 0.5))
new_invalid <- validation_diagnostics(list(f1 = modified_em, f2 = ems2d$f2), targets2d, validation_data, plt = TRUE, row = 2) The mult_sigma and set_hyperparams functions create a new Emulator object, allowing us to chain the two functions mult_sigma and set_hyperparams above, so in general we would replace the old emulator with the new one rather than combining lists (e.g. ems2d$f1 <- emd2d$f1$mult_sigma(...)). In this case, such tinkering is not required and hence we did not replace the $$f_1$$ emulator in situ. We can also use the set_sigma function to define the structure of the variance, including giving it functional dependence on the outputs; this is beyond the need or scope of this example, however.
## Emulator Plots
The hmer package allows us to create plots analogous to those made by-hand in the one-dimensional case. The corresponding plots are contour plots over the input region.
emulator_plot(ems2d)
emulator_plot(ems2d, plot_type = 'sd')
The emulator_plot function can produce a number of things: the default is an expectation plot equivalent to the blue line in the one dimensional case. The plot_type command determines what is plotted: ‘var’ has it plot the variance and ‘sd’ the standard deviation, which play the part of the distance from blue line to red dotted lines in the one-dimensional case. These combined plots are used mainly for diagnostics - to obtain an expectation or variance plot with meaningful scale, we plot them one-by-one. As they are all ggplot objects, we can augment them after the fact.
emulator_plot(ems2d$f1, plot_type = 'sd') + geom_point(data = initial_data, aes(x = x, y = y)) The structure of this plot warrants some discussion. In the one-dimensional case, we discussed the fact that the emulator was ‘certain’ at training points, and less so elsewhere; the uncertainty of the emulator was driven by its distance from a known training point. Here we have the same picture: minima of the emulator uncertainty (coloured here in deep blue) sit over training points. As we move away from a training point, the uncertainty increases until at the edges (which represent the maximum distance from training points) it reaches a maximum. The gradient of this change in uncertainty is controlled by the correlation length - a higher correlation length presumes that nearby points are more closely correlated, and in some sense can be equated to the ‘smoothness’ of our output space. We can modify the f1 emulator to have a larger correlation length and see what the effect is. inflated_corr <- ems2d$f1$set_hyperparams(list(theta = 1)) emulator_plot(inflated_corr, 'sd') + geom_point(data = initial_data, aes(x = x, y = y)) We can see that the emulator with the higher correlation length maintains a lower uncertainty when not at training points. The final two plots that emulator_plot can provide are those of implausibility, and nth-maximum implausibility. emulator_plot(ems2d, plot_type = 'imp', targets = targets2d) emulator_plot(ems2d, plot_type = 'nimp', targets = targets2d) The implausibility of a point $$x$$, given a target $$z$$, is determined by the following expression: $$I(x)^2 = \frac{(\mathbb{E}[f(x)]-z)^2}{\text{Var}[f(x)] + \sigma^2}.$$ Here $$\mathbb{E}[f(x)]$$ is the emulator expectation at the point, $$\text{Var}[f(x)]$$ is the emulator variance, and $$\sigma^2$$ corresponds to all other sources of uncertainty outside of the emulators (e.g. observation error, model discrepancy, …). The smaller the implausibility at a given point, the more likely an emulator is to propose it as a new point. We can note that there are two reasons that implausibility might be small: either the numerator is small (in which case the difference between the emulator prediction and the target value is small, implying a ‘good’ point) or the denominator is large (in which case the point is in a region of parameter space that the emulator is uncertain about). Both are useful: the first for obvious reasons; the second because proposing points in such regions will make subsequent waves of emulators more certain about that region and thus allow us to reduce the allowed parameter space. The first plot here shows implausibility for each of the outputs. Of course, we want proposed points to be acceptable to matching both $$f_1$$ and $$f_2$$. The second plot gives maximum implausibility, which is exactly as it sounds: the largest implausibility across all outputs at a point. We can see that the $$f_1$$ emulator drives much of the space reduction in the lower-right and upper-left quadrant of the region, while the $$f_2$$ emulator drives space reduction in the lower-left and (to some extent) upper-right. This maximum implausibility is equivalent to the determination made by the one-dimensional emulator, where in that case the non-implausible region was easy to see graphically. We can now generate new points for a second wave of emulation. We again make a slight modification to the normal behaviour of the generate_new_runs function. By default it will use a space-filling argument before using those points to draw lines to the boundaries of the non-implausible space - the points on the boundary are included in the proposal. From this augmented set, we perform importance sampling to try to fill out the space. To further ensure that the non-implausible space is filled out, we usually thin the proposed points and re-propose using the boundary search and importance sampling. This ‘resampling’ step can be repeated as many times as one desires (at a computational cost) and is useful in large-dimensional spaces; here we set resample = 0 for efficiency’s sake. new_points2d <- generate_new_runs(ems2d, 40, targets2d, resample = 0) #> Proposing from LHS... #> 111 initial valid points generated for I=3 #> Selecting final points using maximin criterion... new_data2d <- data.frame(x = new_points2d$x, y = new_points2d$y, f1 = apply(new_points2d, 1, func1), f2 = apply(new_points2d, 1, func2)) plot_wrap(new_data2d[,1:2], ranges2d) With these new points, we want to train a new set of emulators. We generated 40 points from generate_new_runs so as to split them into a training set and a validation set once more: sample2d <- sample(40, 20) train2d <- new_data2d[sample2d,] valid2d <- new_data2d[-sample2d,] We train the new emulators, but first we redefine the range over which the emulators are defined. We can see from the plot above that any points with $$x>0.5$$ are not going to produce an acceptable match: it makes sense, therefore, to train the new emulators on a reduced part of parameter space. This reduction ensures that the emulators focus on interpolation between known points rather than extrapolation to a region we know is going to be unacceptable. This can be done within the emulator training by adding check.ranges = TRUE to the call to emulator_from_data; here we will just manually change the ranges with a small buffer. new_ranges2d <- list(x = c(-pi/2, 0.6), y = c(-1, 1)) ems2d_2 <- emulator_from_data(train2d, c('f1', 'f2'), new_ranges2d) #> Fitting regression surfaces... #> Building correlation structures... #> Creating emulators... #> Performing Bayes linear adjustment... ems2d_2 #>$f1
#> Parameters and ranges: x: c(-1.5708, 0.6): y: c(-1, 1)
#> Specifications:
#> Basis functions: (Intercept); x; y; I(x^2); I(y^2)
#> Active variables x; y
#> Regression Surface Expectation: 0.8263; 1.2372; -1.2427; 1.4067; -0.7827
#> Regression surface Variance (eigenvalues): 0; 0; 0; 0; 0
#> Correlation Structure:
#> Bayes-adjusted emulator - prior specifications listed.
#> Variance (Representative): 0.003089494
#> Expectation: 0
#> Correlation type: exp_sq
#> Hyperparameters: theta: 0.6322
#> Nugget term: 0
#> Mixed covariance: 0 0 0 0 0
#>
#> $f2 #> Parameters and ranges: x: c(-1.5708, 0.6): y: c(-1, 1) #> Specifications: #> Basis functions: (Intercept) #> Active variables x; y #> Regression Surface Expectation: -2.8959 #> Regression surface Variance (eigenvalues): 0 #> Correlation Structure: #> Bayes-adjusted emulator - prior specifications listed. #> Variance (Representative): 0.04032869 #> Expectation: 0 #> Correlation type: exp_sq #> Hyperparameters: theta: 1 #> Nugget term: 0.05 #> Mixed covariance: 0 One thing we can see from this is that the emulators’ global variances have reduced, as one would expect. Since these emulators are trained on a smaller parameter space, and given more relevant points, the region of interest can be more accurately emulated. We check again on the emulator diagnostics and modify as necessary: invalid_points2 <- validation_diagnostics(ems2d_2, targets2d, valid2d, plt = TRUE, row = 2) Again, we modify the emulators with reference to these plots. It appears that the emulator for output $$f_1$$ would benefit from some tweaking, which we do here. ems2d_2$f1 <- ems2d_2$f1$mult_sigma(1.4)$set_hyperparams(list(theta = 1/3)) We briefly look at the implausibilities for these new emulators: emulator_plot(ems2d_2, 'imp', targets = targets2d) Finally, we propose points from a combination of this wave and the previous wave. new_new_points2d <- generate_new_runs(c(ems2d_2, ems2d), 40, targets2d, resample = 0) #> Proposing from LHS... #> 157 initial valid points generated for I=3 #> Selecting final points using maximin criterion... plot_wrap(new_new_points2d, ranges2d) For completeness, we record the function values of these new points: new_new_data2d <- data.frame(x = new_new_points2d$x, y = new_new_points2d$y, f1 = apply(new_new_points2d, 1, func1), f2 = apply(new_new_points2d, 1, func2)) We conclude this example by looking at the evolution of the allowed space, and the relative performance of the proposed points. A variety of functions are available to do so: we first package up all our waves into a list. all_waves <- list(rbind(initial_data, validation_data), new_data2d, new_new_data2d) We can view the output in three main ways: looking at the pure performance of the runs relative to the targets, looking at the distribution of the proposed points as the waves progress, or looking at the distribution of the output points as the waves progress. The functions simulator_plot, wave_points, and wave_values do just that. simulator_plot(all_waves, z = targets2d) wave_points(all_waves, names(ranges2d)) wave_values(all_waves, targets2d, l_wid = 0.8) We can see that, as the colour gets darker (representing later waves), the proposed points settle into the bounds of the targets. In fact, in this example, we cna start to consider whether further waves are necessary - the points proposed at the end of the first wave were predominantly suitable for representing the non-implausible region. We can consider this by looking at the emulator uncertainty in the second wave compared to the observation uncertainty. c(ems2d_2$f1$u_sigma, ems2d_2$f2$u_sigma) #> [1] 0.0778165 0.2008200 c(targets2d$f1$sigma, targets2d$f2\$sigma)
#> [1] 0.31622777 0.07071068
The first emulator has uncertainty far lower than the target itself (about one quarter of the target sigma). Recall that the denominator of the implausibility measure combines all sources of uncertainty in quadrature: for $$f_1$$, therefore, the observation uncertainty will drive the behaviour of that uncertainty measure. Since we cannot reduce the observational error, subsequent waves are unlikely to have any large impact on the non-implausible space for $$f_1$$. For the second emulator, however, the emulator uncertainty is still relatively large, in comparison to the observation error, so we could perhaps consider training an additional wave only to the output of $$f_2$$. Alternatively, if we believe that the ‘yield’ of good points (i.e. those which hit all targets) is good enough at this wave, we could simply generate many more points at this wave and use those as a final representative sample of the non-implausible space. The decision on when we have performed enough waves is situational and dependent on one’s own intuition and needs; nevertheless, with two waves of emulation and history matching here we are in a strong position to be able to make that decision, supported by the information provided by the process.
### The Structure of a Bayes Linear Emulator
The basic structure of an emulator $$f(x)$$ is
$$f(x) = \sum_i \beta_i h_i(x) + u(x),$$
where the first term represents a regression surface (encapsulating the global behaviour of the function), and the second term accounts for local variations by defining an correlation structure on the space (more of which later). Our prior beliefs about the emulator must be specified. We need only second-order specifications (expectation and variance), so one can see that we must specify the following: - A set of basis functions, $$h(x)$$; - The second-order specifications for the coefficients $$\beta$$: namely the expectation and the variance $$\mathbb{E}[\beta]$$ and $$\text{Var}[\beta]$$;
• The second order specifications for the correlation structure: $$\mathbb{E}[u(x)]$$ and $$\text{Var}[u(x)]$$;
• The covariance between the coefficients and the correlation structure: $$\text{Cov}[\beta, u(x)]$$.
We could specify all of these things by hand; however, many of the parts of the above specification can be estimated quite readily automatically. The function emulator_from_data does exactly that, with a few assumptions. It assumes no covariance between the regression surface and the correlation structure, that the expectation of $$u(x)$$ is $$0$$, and that the variance of the coefficients $$\beta$$ is $$0$$ (i.e. that the regression surface is fixed and known); it also assumes that the correlation structure has an exponential-squared form. For two points $$x$$ and $$x^\prime$$, the correlation between them is
$$c(x,x^\prime) = \exp\left\{\frac{-(x-x^\prime)^2}{\theta^2}\right\}.$$
The closer two points are together, the higher their correlation; the parameter $$\theta$$ is known as a correlation length. The larger $$\theta$$, the larger the extent of the correlation between distant points. The emulator_from_data function also attempts to estimate the value of $$\theta$$.
The above analysis gives us a set of prior specifications for the emulator. However, it has not used all the information available from the training data. We can update our second-order beliefs with the Bayes Linear Update Equations - given data $$D$$ and prior specifications $$\mathbb{E}[f(x)]$$ and $$\text{Var}[f(x)]$$ for the emulator, the adjusted expectation and variance are
$$\mathbb{E}_D[f(x)] = \mathbb{E}[f(x)] + \text{Cov}[f(x), D]\text{Var}[D]^{-1}(D-\mathbb{E}[D]),$$ $$\text{Var}_D[f(x)] = \text{Var}[f(X)] - \text{Cov}[f(x), D]\text{Var}[D]^{-1}\text{Cov}[D, f(x)].$$ |
# Empirical rule
Practice applying the 68-95-99.7 empirical rule.
You might need: Calculator
### Problem
The lifespans of lions in a particular zoo are normally distributed. The average lion lives 10 years; the standard deviation is 1, point, 4 years.
Use the empirical rule left parenthesis, 68, minus, 95, minus, 99, point, 7, percent, right parenthesis to estimate the probability of a lion living longer than 7, point, 2 years.
percent |
# How are the wavelength and frequency of a sound wave related to its speed ?
16 views
in Physics
closed
How are the wavelength and frequency of a sound wave related to its speed ?
by (66.1k points)
selected by
Speed, wavelength, and frequency of a sound wave are related by the following equation:
Speed (v) = Wavelength (lambda) xx Frequency (ν)
v = lambda xx v |