Dataset columns:
Column  Type             Min    Max
score   int64            10     1.34k
text    string (length)  296    618k
url     string (length)  16     1.13k
year    int64            13     18
16
By Murray Bourne, 14 Jun 2010

Integration is a process in mathematics that can tell us:
- The area of a curved 2-D object (the sides aren't straight, and there is no simple formula)
- The volume of a curved 3-D object (once again, the sides aren't straight)
- The velocity of an object if we know its acceleration at time t (which means the acceleration changes all the time, as does its velocity)
- The displacement of an object if we know its velocity at time t (the velocity and displacement change over time, so there is no simple formula)
- The pressure on an object deep under water (the pressure varies as we go down)

In integration we spend a lot of time talking about areas under curves because such areas can be used to represent any of the quantities given above. Most of the time in math class we use integration to find the areas under curves. We are told the area under some curve f(x) can be found by taking the definite integral between certain lower and upper limits, a and b, as follows:

Area = ∫ from x = a to x = b of f(x) dx

However, it turns out that it is not always possible to find such an integral. I need to find the area under the following curve, between x = 3.1 and x = 6. The required area is shaded green in the following graph. So we write the following integral to represent the required area:

Area = ∫ from x = 3.1 to x = 6 of |0.3x³ sin x| dx

However, it is not possible to find the above integral using normal algebraic integration. What to do? Well, we need to find the result numerically, just like mathematicians had to do for all integrations before Isaac Newton and Gottfried Leibniz developed calculus in the 17th century. One way of doing this is to draw a grid over the shaded area and then count the little squares. However, you can imagine this would get very tedious very quickly, especially if you had to find many such areas. And remember, if you want a more accurate approximation, you have to use smaller squares, and that involves more counting. There is a better way. [Please note that what we are about to do is the way computers find integrals numerically.]

Riemann Sums give us a systematic way to approximate the area under a curve when we know the mathematical function for that curve. They are named after the mathematician Bernhard Riemann (pronounced "ree-man", since in German "ie" is pronounced "ee"). In the following graph, you can change:
- The type of Riemann Sum, at the top of the graph,
- The number of rectangles (or trapezoids), by dragging the slider,
- The start and end points of the graph, by dragging the sliders.

Update (May 2014): This applet has been updated, improved and moved, and you can now find it here: Riemann Sums.

What you were doing above was as follows. We can form equal-width rectangles between the start and end points of the area we need to find. If we add up the areas of the rectangles, we will have a good approximation for the area. We can place the rectangles so the curve is on either the left or the right, as follows. Depending on the curve, one or the other gives a reasonable approximation. You can see in the above examples that the "left" approximation will be too small (the sum of the rectangle areas is less than the area below the curve), while the "right" one will be too large. Another, and better, option is to place the rectangles so the curve passes through the mid-points of each rectangle, as follows. Another option is to create trapezoids. When the number of trapezoids is very high, they cover nearly all of the area under the curve.
The final 2 options we can use are lower (where the rectangles sit completely below the curve) and upper (where the rectangles sit completely above the curve).

The Solution to Our Problem

Earlier, we wanted to find the area under the curve y = |0.3x³ sin x| between x = 3.1 and x = 6. You can use the following interactive graph to find the answer using Riemann Sums. You can see that even with 50 steps, we don't even get 1 decimal place accuracy for this problem, not with any of the possible methods. (The answer is just over 64 square units.) Remember, 300 years ago they had to do these calculations by hand! No wonder they were keen to develop aids for calculation, like logarithms.

1. The most accurate numerical integration method is missing from the list above. Instead of using straight lines to connect points on the graph (the trapezoid approach), we could approximate the curve by a series of parabolas (one for each division). This gives us Simpson's Rule.

2. The integral only gives us the area if the curve is completely above the x-axis (as in each of my examples above). You need to use the absolute value of the integral for the parts of the curve below the x-axis. For more detail, see Area under curves.

This article showed you an important concept in calculus. The area under a curve can represent the solution for many "real life" problems, from finding velocities to volumes. The way it was done before calculus was developed was to use a numerical approach where we add up the areas of very thin rectangles (or trapezoids) to get a good approximation. We get a better approximation by taking even more rectangles.
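Below is a minimal Python sketch (not part of the original article) of the four approximations described above, applied to the article's example curve y = |0.3x³ sin x| on the interval [3.1, 6]; the function and interval come from the article, while the helper names are illustrative only.

```python
import math

def f(x):
    # The example curve from the article: y = |0.3 x^3 sin x|
    return abs(0.3 * x**3 * math.sin(x))

def riemann_sum(f, a, b, n, method="midpoint"):
    """Approximate the integral of f on [a, b] using n equal-width strips."""
    h = (b - a) / n
    if method == "left":
        return h * sum(f(a + i * h) for i in range(n))
    if method == "right":
        return h * sum(f(a + (i + 1) * h) for i in range(n))
    if method == "midpoint":
        return h * sum(f(a + (i + 0.5) * h) for i in range(n))
    if method == "trapezoid":
        return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)
    raise ValueError("unknown method")

for m in ("left", "right", "midpoint", "trapezoid"):
    print(m, riemann_sum(f, 3.1, 6, 50, m))   # all estimates cluster around 64
```

The midpoint and trapezoid estimates converge fastest; increasing n improves all four, matching the article's point that more rectangles give a better approximation of the quoted value of just over 64 square units.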
https://www.intmath.com/blog/mathematics/riemann-sums-4715
18
14
Thinking outside the blank 8 critical thinking activities for esl students you should follow our facebook page where we share more about creative. Please pin activities, lessons, etc that focus on higher-level thinking, such as blooms taxonomy or things that require some creativity grades 2-6 pinners. Check out these 10 great ideas for critical thinking activities and see how you can use them with your own modern learners. Games and activities for developing critical thinking skills thinking the workbook critical put in your own creative wording for each of these images. Fun critical thinking activities - for students in any subject by monica dorcz | this newsletter was created with smore, an online tool for creating beautiful. You can get 150 of these activities in card 20 creative questions to ask kids 150 more questions that encourage creative and critical thinking tweet. Creative and critical thinking activities - we work exceptionally with native english speaking writers from us, uk, canada and australia that have degrees in. Creative thinking and critical thinking dcu student learning resources - have had an opportunity to engage in some creative thinking activities. 81 fresh & fun critical-thinking activities creative, and original since critical thinking doesn’t end when an individual project does. Want to help your kids build a foundation for critical thinking read our tips for helping children become better problem solvers. Mentioned in the coming up paragraphs are a few creative and critical thinking activities for kids and students, and can be carried out in schools as well as at home. Lesson plans and activities for teaching about inventions by increasing creativity and creative thinking the lesson plans are adaptable for grades k-12 and were. Creative & critical thinking activities for the middle or high school classroom five creative & stimulating activities to use as warm-ups or time-fillers that will.
http://noessayimwc.visitorlando.us/creative-critical-thinking-activities.html
18
13
A material that can exhibit the photoelectric effect is said to be photoemissive electrons ejected by the photoelectric effect are called photoelectrons the photoelectric effect will not occur when the frequency of the incident light is less than the threshold frequency . Everybody knows albert einstein for his theory of relativity, and the nobel prize he won for it actually, no he won the nobel prize for the photoelectri. See how light knocks electrons off a metal target, and recreate the experiment that spawned the field of quantum mechanics visualize and describe the photoelectric effect experiment correctly predict the results of experiments of the photoelectric effect: eg how changing the intensity of light . The effect has a number of practical applications, most based on the photoelectric cell photoelectric cell or photocell, device whose electrical characteristics (eg, current, voltage, or resistance) vary when light is incident upon it. In 1921 he was awarded the nobel prize in physics for his studies in the motion of the atom, the photoelectric effect, gravitation - and - inertia, and the space - time continuum, the inter - relatedness of which he demonstrated in the meaning of relativity (1921). After watching this lesson, you will be able to explain what the photoelectric effect experiment is, what the results were, and what this tells us. The photoelectric effect michael fowler, uva hertz finds maxwell's waves: and something else the most dramatic prediction of maxwell's theory of electromagnetism, published in 1865, was the existence of electromagnetic waves moving at the speed of light, and the conclusion that light itself was just such a wave. The photoelectric effect is the process whereby an electron is emitted by a substance when light shines on it einstein received the 1921 nobel prize for his contribution to understanding the photoelectric effect. Explaining the photoelectric effect using wave-particle duality, the work function of a metal, and how to calculate the velocity of a photoelectron created . Describe a typical photoelectric-effect experiment determine the maximum kinetic energy of photoelectrons ejected by photons of one energy or wavelength, when given the maximum kinetic energy of photoelectrons for a different photon energy or wavelength when light strikes materials, it can eject . The photoelectric effect is a phenomenon in which electrons are emitted from matter (metals and non-metallic solids, liquids, or gases) after the absorption of energy from electromagnetic radiation such as x-rays or visible light. It is a measure of the maximum kinetic energy of the electrons emitted as a result of the photoelectric effect what lenard found was that the intensity of the incident light had no effect on the maximum kinetic energy of the photoelectrons. Photoelectric effect the emission of electrons from a material, such as a metal, as a result of being struck by photons some substances, such as selenium, are particularly susceptible to this effect. The photoelectric effect posed a significant challenge to the study of optics in the latter portion of the 1800s it challenged the classical wave theory of light, which was the prevailing theory of the time. Photoelectric effect vs photovoltaic effect the ways in which the electrons are emitted in the photoelectric effect and photovoltaic effect create the difference between them the prefix ‘photo’ in these two terms suggests that both these processes occur due to the interaction of. 
The photoelectric effect is a phenomenon in physics the effect is based on the idea that electromagnetic radiation is made of a series of particles called photons when a photon hits an electron on a metal surface, the electron can be emitted . In the photoelectric effect, a photon ejects an electron from a material researchers have now used attosecond laser pulses to measure the time evolution of this effect in molecules from their . The photoelectric effect occurs when matter emits electrons upon exposure to electromagnetic radiation, such as photons of light here's a closer look at what the photoelectric effect is and how it works the photoelectric effect is studied in part because it can be an introduction to wave-particle . The photoelectric effect refers to the emission, or ejection, of electrons from the surface of, generally, a metal in response to incident light energy contained within the incident light is absorbed by electrons within the metal, giving the electrons sufficient energy to be 'knocked' out of, that is, emitted from, the surface of the metal. In the photoelectric effect, a photon ejects an electron from a material researchers at eth have now used attosecond laser pulses to measure the time evolution of this effect in molecules from . The photoelectric effect synonyms, the photoelectric effect pronunciation, the photoelectric effect translation, english dictionary definition of the photoelectric effect n ejection of electrons from a substance by incident electromagnetic radiation, especially by visible light n 1 the ejection of electrons from a solid by. The photoelectric effect the photoelectric effect shows that plank's hypothesis, used to fit the black body data, is actually correct for em radiation einstein went further and proposed, in 1905, that light was made up of particles with energy related to the frequency of the light, . This is known as the photoelectric effect in 1905 albert einstein provided a daring extension of planck's quantum hypothesis and was able to explain the photoelectric effect in detail it was officially for this explanation of the photoelectric effect that einstein received the nobel prize in 1921. Early photoelectric effect data electrons ejected from a sodium metal surface were measured as an electric currentfinding the opposing voltage it took to stop all the electrons gave a measure of the maximum kinetic energy of the electrons in electron volts.
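To make the quantities discussed above concrete (threshold frequency, maximum kinetic energy, stopping voltage), here is a small illustrative Python calculation based on Einstein's relation KE_max = hf − φ. It is not taken from any of the sources excerpted above, and the sodium work function used is an assumed typical value of about 2.28 eV (real surfaces vary).

```python
# Illustrative only: KE_max = h*f - phi, with an assumed work function for sodium.
h = 6.626e-34      # Planck constant, J*s
c = 2.998e8        # speed of light, m/s
e = 1.602e-19      # elementary charge, C (also 1 eV expressed in joules)

def max_kinetic_energy_eV(wavelength_nm, work_function_eV):
    """Maximum kinetic energy (in eV) of photoelectrons, or 0 below threshold."""
    photon_energy_eV = h * c / (wavelength_nm * 1e-9) / e
    return max(0.0, photon_energy_eV - work_function_eV)

phi_sodium = 2.28  # eV, assumed typical value; depends on surface condition
for wl in (600, 500, 400, 300):   # wavelengths in nm
    ke = max_kinetic_energy_eV(wl, phi_sodium)
    print(f"{wl} nm: KE_max = {ke:.2f} eV, stopping voltage = {ke:.2f} V")
```

With this assumed work function, 600 nm light falls below sodium's threshold frequency, so no photoelectrons are emitted regardless of intensity, while shorter wavelengths eject electrons whose maximum kinetic energy (and hence stopping voltage) grows with frequency, which is the behaviour described in the passages above.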
http://rlpaperxkjg.locallawyer.us/the-photoelectric-effect.html
18
63
User-defined functions are created with the keyword fn. The given program defines a function named triple. The function triple has one parameter, which is named a and is of type num. The function triple is called three times in this program. Each function call causes the function to be executed. The execution of the function triple produces a return value. The return value of a function is given after the two colon symbols (::). The return value of the function triple is equal to the result of the expression a*3. The type of the return value is given after the keyword fn and before the function name. The type of the return value is num.

Functions are a crucial aspect of programming languages. Functions describe computations. The given program defines a function named square. The function square has one parameter, which is named x and is of type num. Upon the execution of a function call expression, for example square(5), the resulting value of the function call expression is the return value of the function. The return value of the function square is computed as the result of the expression x*x. Therefore, the expression square(5) evaluates to the value 25 (in the same manner as, for example, the expression 4+3 evaluates to the value 7). The type of the return value is given after the keyword fn. The type of the return value is num. The function square is called four times in this program. In the first call, the argument of the function square equals 5, in the second call the argument equals 2, and in the third call it equals 10. On each call of a function, the values of the arguments get assigned to the function's parameters. In the last call of the function square, the argument is the expression 3+1. The value 4 of the argument gets assigned to the parameter x.

This example program defines a function Mul. The function Mul has two parameters, named a and b, both of type num. In the first call of the function Mul, the value 4 of the first argument gets assigned to parameter a, and the value 3 of the second argument gets assigned to parameter b. A function call is a primary expression, so it can be used as part of a more complex expression. Here is an alternative way to write a definition of the function Mul. The function body of the function Mul is given as a statement block. The result of the function is given by the return statement. The function body is executed once on each call of the function.

This program defines a function named Greater. The function Greater has two parameters, named a and b; because it has two parameters, it accepts two arguments. It returns the greater of the two given arguments. The user-defined function IsBetween returns the value true when the value of parameter x is between the values of parameters a and b. The type of the return value of the function IsBetween is bool. Therefore, this function can only return the values true or false. The first call of the function IsBetween returns true because 4 is between 2 and 9. The second call of the function IsBetween returns true because 5 is between 1 and 8. The third call of the function IsBetween returns false because 10 is not between 1 and 8. Since the execution of a function is completed when the return statement is executed, the function IsBetween can be written in this shorter way. Commented out at the bottom is an even shorter way to write the function IsBetween. To explain it, let's say that parameter a=3, b=7, x=10. The result of the function is the value of the expression a<x and x<b. In this case, the expression a<x and x<b evaluates to false.
Therefore, the value false is the return value of the function when a=3, b=7, x=10. Here is an improvement of the function IsBetween. It allows the values of parameters a and b to be given in the opposite order. Commented out is a demonstration of an even shorter way to write this function. The parentheses were added only for clarity of expression; they can be removed because the operator and has greater priority than the operator or.

This program will turn the disc yellow when the pointer is above the disc. The user-defined function isInsideDisc accepts three arguments and returns a value of type bool, indicating whether a given point p is inside a given disc. The signature of this function is:

bool isInsideDisc(point2D center, num radius, point2D p)

In the function isInsideDisc, the member function .dist returns the distance between the points center and p. This member function was demonstrated in the previous chapter. (Challenging!) Can you write the function isInsideDisc in an even shorter way, i.e. without the statement block for the function body? fn bool isInsideDisc(point2D center, num radius, point2D p) :: center.dist(p) < radius;
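The examples above use the site's own teaching language. As a rough, non-authoritative Python rendering of the same ideas (parameters, return values, and the shorter function bodies without statement blocks), one might write something like this; the names are direct translations for illustration, not part of the original lessons:

```python
import math

def triple(a):
    return a * 3          # the return value is the result of the expression a*3

def is_between(x, a, b):
    # Works whether or not a <= b, like the improved IsBetween above.
    return (a < x < b) or (b < x < a)

def is_inside_disc(center, radius, p):
    """center and p are (x, y) pairs; True when p lies strictly inside the disc."""
    return math.dist(center, p) < radius

print(triple(5))                          # 15
print(is_between(5, 1, 8))                # True
print(is_between(10, 1, 8))               # False
print(is_inside_disc((0, 0), 2, (1, 1)))  # True: distance ~1.41 < 2
```

As in the original improvement of IsBetween, the or branch lets the bounds be given in either order, and the disc test is just a comparison of a distance against the radius.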
https://zedlx.com/basics/functions
18
10
Moment magnitude scale

The scale was developed in the 1970s to succeed the 1930s-era Richter magnitude scale (ML). Even though the formulas are different, the new scale retains a continuum of magnitude values similar to that defined by the older one. Under suitable assumptions, as with the Richter magnitude scale, an increase of one step on this logarithmic scale corresponds to a 10^1.5 (about 32) times increase in the amount of energy released, and an increase of two steps corresponds to a 10^3 (1,000) times increase in energy. Thus, an earthquake of Mw of 7.0 releases about 32 times as much energy as one of 6.0 and nearly 1,000 times that of 5.0. The moment magnitude is based on the seismic moment of the earthquake, which is equal to the shear modulus of the rock near the fault multiplied by the average amount of slip on the fault and the size of the area that slipped. Popular press reports of earthquake magnitude usually fail to distinguish between magnitude scales, and are often reported as "Richter magnitudes" when the reported magnitude is a moment magnitude (or a surface-wave or body-wave magnitude). Because the scales are intended to report the same results within their applicable conditions, the confusion is minor.

Richter scale: the original measure of earthquake magnitude

In 1935, Charles Richter and Beno Gutenberg developed the local magnitude (ML) scale (popularly known as the Richter scale) with the goal of quantifying medium-sized earthquakes (between magnitude 3.0 and 7.0) in Southern California. This scale was based on the ground motion measured by a particular type of seismometer (a Wood-Anderson seismograph) at a distance of 100 kilometres (62 mi) from the earthquake's epicenter. Because of this, there is an upper limit on the highest measurable magnitude, and all large earthquakes will tend to have a local magnitude of around 7. Further, the magnitude becomes unreliable for measurements taken at a distance of more than about 600 kilometres (370 mi) from the epicenter. Since this ML scale was simple to use and corresponded well with the damage which was observed, it was extremely useful for engineering earthquake-resistant structures, and gained common acceptance.

Modified Richter scale

The Richter scale was not effective for characterizing some classes of quakes. As a result, Beno Gutenberg expanded Richter's work to consider earthquakes detected at distant locations. For such large distances the higher frequency vibrations are attenuated and seismic surface waves (Rayleigh and Love waves) are dominated by waves with a period of 20 seconds (which corresponds to a wavelength of about 60 km). Their magnitude was assigned a surface wave magnitude scale (Ms). Gutenberg also combined compressional P-waves and the transverse S-waves (which he termed "body waves") to create a body-wave magnitude scale (mb), measured for periods between 1 and 10 seconds. Ultimately Gutenberg and Richter collaborated to produce a combined scale which was able to estimate the energy released by an earthquake in terms of Gutenberg's surface wave magnitude scale (Ms).

Correcting weaknesses of the modified Richter scale

The Richter scale, as modified, was successfully applied to characterize localities. This enabled local building codes to establish standards for buildings which were earthquake resistant. However, a series of quakes were poorly handled by the modified Richter scale. This series of "great earthquakes" included faults that broke along a line of up to 1000 km.
Examples include the 1957 Andreanof Islands earthquake and the 1960 Chilean quake, both of which broke faults approaching 1000 km. The Ms scale was unable to characterize these "great earthquakes" accurately. The difficulties with use of Ms in characterizing the quake resulted from the size of these earthquakes. Great quakes produced 20 s waves such that Ms was comparable to normal quakes, but also produced very long period waves (more than 200 s) which carried large amounts of energy. As a result, use of the modified Richter scale methodology to estimate earthquake energy was deficient at high energies. The concept of seismic moment was introduced in 1966, by Keiiti Aki, a professor of geophysics at the Massachusetts Institute of Technology. He employed elastic dislocation theory to improve understanding of the earthquake mechanism. This theory proposed that the seismologic readings of a quake from long-period seismographs are proportional to the fault area that slips, the average distance that the fault is displaced, and the rigidity of the material adjacent to the fault. However, it took 13 years before the Mw scale was designed. The reason for the delay was that the necessary spectra of seismic signals had to be derived by hand at first, which required personal attention to every event. Faster computers than those available in the 1960s were necessary and seismologists had to develop methods to process earthquake signals automatically. In the mid-1970s Dziewonski started the Harvard Global Centroid Moment Tensor Catalog. After this advance, it was possible to introduce Mw and estimate it for large numbers of earthquakes. Hence the moment magnitude scale represented a major step forward in characterizing earthquakes.

Introduction of an energy-motivated magnitude Mw

Most earthquake magnitude scales suffered from the fact that they only provided a comparison of the amplitude of waves produced at a standard distance and frequency band; it was difficult to relate these magnitudes to a physical property of the earthquake. Gutenberg and Richter suggested that radiated energy Es could be estimated as log10 Es ≈ 1.5 Ms + 4.8 (in Joules). Unfortunately, the duration of many very large earthquakes was longer than 20 seconds, the period of the surface waves used in the measurement of Ms. This meant that giant earthquakes such as the 1960 Chilean earthquake (M 9.5) were only assigned an Ms 8.2. Caltech seismologist Hiroo Kanamori recognized this deficiency and he took the simple, but important, step of defining a magnitude based on estimates of radiated energy, Mw, where the "w" stood for work (energy): Mw = (log10 Es − 4.8) / 1.5. Kanamori recognized that measurement of radiated energy is technically difficult since it involves integration of wave energy over the entire frequency band. To simplify this calculation, he noted that the lowest frequency parts of the spectrum can often be used to estimate the rest of the spectrum. The lowest frequency asymptote of a seismic spectrum is characterized by the seismic moment, M0. Using an approximate relation between radiated energy and seismic moment, Es ≈ M0 / (2 × 10^4) (which assumes stress drop is complete and ignores fracture energy; here Es is in Joules and M0 is in N·m), Kanamori approximated Mw by Mw = (log10 M0 − 9.1) / 1.5.

Moment magnitude scale

The formula above made it much easier to estimate the energy-based magnitude Mw, but it changed the fundamental nature of the scale into a moment magnitude scale. Caltech seismologist Thomas C.
Hanks noted that Kanamori's Mw scale was very similar to a relationship between ML and M0 that was reported by Thatcher & Hanks (1973). Hanks & Kanamori (1979) combined their work to define a new magnitude scale based on estimates of seismic moment, Mw = 2/3 (log10 M0 − 9.1), where M0 is defined in newton metres (N·m). Although the formal definition of moment magnitude is given by this paper and is designated by M, it has been common for many authors to refer to Mw as moment magnitude. In most of these cases, they are actually referring to moment magnitude M as defined above. Moment magnitude is now the most common measure of earthquake size for medium to large earthquake magnitudes, but in practice seismic moment, the seismological parameter it is based on, is not measured routinely for smaller quakes. For example, the United States Geological Survey does not use this scale for earthquakes with a magnitude of less than 3.5, which is the great majority of quakes. Current practice in official earthquake reports is to adopt moment magnitude as the preferred magnitude, i.e. Mw is the official magnitude reported whenever it can be computed. Because seismic moment (M0, the quantity needed to compute Mw) is not measured if the earthquake is too small, the reported magnitude for earthquakes smaller than M 4 is often Richter's ML. Popular press reports most often deal with significant earthquakes larger than M ~ 4. For these events, the official magnitude is the moment magnitude Mw, not Richter's local magnitude ML. The moment magnitude is given by Mw = 2/3 log10 M0 − 10.7, where M0 is the seismic moment in dyne⋅cm (10^−7 N⋅m). The constant values in the equation are chosen to achieve consistency with the magnitude values produced by earlier scales, such as the Local Magnitude and the Surface Wave magnitude.

Relations between seismic moment, potential energy released and radiated energy

Seismic moment is not a direct measure of energy changes during an earthquake. The relations between seismic moment and the energies involved in an earthquake depend on parameters that have large uncertainties and that may vary between earthquakes. Potential energy is stored in the crust in the form of elastic energy due to built-up stress and gravitational energy. During an earthquake, a portion of this stored energy is transformed into
- energy dissipated in frictional weakening and inelastic deformation in rocks by processes such as the creation of cracks, and
- radiated seismic energy Es.
The potential energy drop caused by an earthquake is approximately related to its seismic moment by ΔW ≈ (σ̄/μ) M0, where σ̄ is the average of the absolute shear stresses on the fault before and after the earthquake and μ is the shear modulus (e.g. equation 3 of Venkataraman & Kanamori 2004). Currently, there is no technology to measure absolute stresses at all depths of interest, or method to estimate it accurately, and thus σ̄ is poorly known. It could be highly variable from one earthquake to another. Two earthquakes with identical M0 but different σ̄ would have released different ΔW. The radiated energy caused by an earthquake is approximately related to seismic moment by Es ≈ ηR (Δσs / 2μ) M0, where ηR is the radiated efficiency and Δσs is the static stress drop, i.e. the difference between shear stresses on the fault before and after the earthquake (e.g. from equation 1 of Venkataraman & Kanamori 2004). These two quantities are far from being constants. For instance, ηR depends on rupture speed; it is close to 1 for regular earthquakes but much smaller for slower earthquakes such as tsunami earthquakes and slow earthquakes. Two earthquakes with identical M0 but different ηR or Δσs would have radiated different Es.
Because Es and M0 are fundamentally independent properties of an earthquake source, and since Es can now be computed more directly and robustly than in the 1970s, introducing a separate magnitude associated to radiated energy was warranted. Choy and Boatwright defined in 1995 the energy magnitude Me = 2/3 log10 Es − 2.9, where Es is in J (N·m).

Comparative energy released by two earthquakes

Assuming the values of σ̄/μ are the same for all earthquakes, one can consider Mw as a measure of the potential energy change ΔW caused by earthquakes. Similarly, if one assumes ηR Δσs/(2μ) is the same for all earthquakes, one can consider Mw as a measure of the energy Es radiated by earthquakes. Under these assumptions, the following formula, obtained by solving for M0 the equation defining Mw, allows one to assess the ratio of energy release (potential or radiated) between two earthquakes of different moment magnitudes, Mw1 and Mw2:

E1/E2 ≈ 10^(1.5 (Mw1 − Mw2))

As with the Richter scale, an increase of one step on the logarithmic scale of moment magnitude corresponds to a 10^1.5 ≈ 32 times increase in the amount of energy released, and an increase of two steps corresponds to a 10^3 = 1000 times increase in energy. Thus, an earthquake of Mw of 7.0 contains 1000 times as much energy as one of 5.0 and about 32 times that of 6.0.

Comparison with Richter scale

The moment magnitude (Mw) scale was introduced to address the shortcomings of the Richter scale (detailed above) while maintaining consistency. Thus, for medium-sized earthquakes, the moment magnitude values should be similar to Richter values. That is, a magnitude 5.0 earthquake will be about a 5.0 on both scales. Unlike other scales, the moment magnitude scale does not saturate at the upper end; there is no upper limit to the possible measurable magnitudes. However, this has the side-effect that the scales diverge for smaller earthquakes.

Subtypes of Mw

Various ways of determining moment magnitude have been developed, and several subtypes of the Mw scale can be used to indicate the basis used.
- Mwb – Based on moment tensor inversion of long-period (~10 - 100 s) body-waves.
- Mwr – From a moment tensor inversion of complete waveforms at regional distances (~ 1,000 miles). Sometimes called RMT.
- Mwc – Derived from a centroid moment tensor inversion of intermediate- and long-period body- and surface-waves.
- Mww – Derived from a centroid moment tensor inversion of the W-phase.
- Mwp (Mi) – Developed by Seiji Tsuboi for quick estimation of the tsunami potential of large near-coastal earthquakes from measurements of the P-waves, and later extended to teleseismic earthquakes in general.
- Mwpd – A duration-amplitude procedure which takes into account the duration of the rupture, providing a fuller picture of the energy released by longer lasting ("slow") ruptures than seen with Mw.

- Hanks & Kanamori 1979.
- "Glossary of Terms on Earthquake Maps". USGS. Archived from the original on 2009-02-27. Retrieved 2009-03-21.
- "USGS Earthquake Magnitude Policy (implemented on January 18, 2002)". Archived from the original on May 4, 2016.
- "On Earthquake Magnitudes".
- Kanamori 1978.
- Aki 1966b.
- Dziewonski & Gilbert 1976.
- "Global Centroid Moment Tensor Catalog". Globalcmt.org. Retrieved 2011-11-30.
- Aki 1972.
- Kanamori 1977.
- Boyle 2008.
- Kanamori 1977.
- Kostrov 1974; Dahlen 1977.
- Choy & Boatwright 1995.
- USGS Technical Terms used on Event Pages.
- Tsuboi et al. 1995.
- Bormann, Wendt & Di Giacomo 2013, p. 135.
- Bormann, Wendt & Di Giacomo 2013, pp. 137–138.
- Aki, Keiiti (1966b), "4.
Generation and propagation of G waves from the Niigata earthquake of June 14, 1964. Part 2. Estimation of earthquake moment, released energy and stress-strain drop from G wave spectrum" (PDF), Bulletin of the Earthquake Research Institute, 44: 73–88. - Aki, Keiiti (April 1972), "Earthquake Mechanism", Tectonophysics, 13 (1–4): 423–446, Bibcode:1972Tectp..13..423A, doi:10.1016/0040-1951(72)90032-7. - Bormann, P.; Wendt, S.; Di Giacomo, D. (2013), "Chapter 3: Seismic Sources and Source Parameters" (PDF), in Bormann, New Manual of Seismological Observatory Practice 2 (NMSOP-2), doi:10.2312/GFZ.NMSOP-2_ch3. - Boyle, Alan (May 12, 2008), Quakes by the numbers, MSNBC, retrieved 2008-05-12, That original scale has been tweaked through the decades, and nowadays calling it the "Richter scale" is an anachronism. The most common measure is known simply as the moment magnitude scale.. - Choy, George L.; Boatwright, John L. (10 September 1995), "Global patterns of radiated seismic energy and apparent stress", Journal of Geophysical Research, 100 (B9): 18205–28, Bibcode:1995JGR...10018205C, doi:10.1029/95JB01969. - Dahlen, F. A. (February 1977), "The balance of energy in earthquake faulting", Geophysical Journal International, 48 (2): 239–261, Bibcode:1977GeoJ...48..239D, doi:10.1111/j.1365-246X.1977.tb01298.x. - Dziewonski, Adam M.; Gilbert, Freeman (1976), "The effect of small aspherical perturbations on travel times and a re-examination of the corrections for ellipticity" (PDF), Geophysical Journal of the Royal Astronomical Society, 44 (1): 7–17, Bibcode:1976GeoJ...44....7D, doi:10.1111/j.1365-246X.1976.tb00271.x. - Hanks, Thomas C.; Kanamori, Hiroo (May 10, 1979), "A Moment magnitude scale" (PDF), Journal of Geophysical Research, 84 (B5): 2348–50, Bibcode:1979JGR....84.2348H, doi:10.1029/JB084iB05p02348, Archived from the original on August 21, 2010 . - Kanamori, Hiroo (July 10, 1977), "The energy release in great earthquakes" (PDF), Journal of Geophysical Research, 82 (20): 2981–2987, Bibcode:1977JGR....82.2981K, doi:10.1029/jb082i020p02981. - Kanamori, Hiroo (February 2, 1978), "Quantification of Earthquakes" (PDF), Nature, 271: 411–414, Bibcode:1978Natur.271..411K, doi:10.1038/271411a0. - Kostrov, B. V. (1974), "Seismic moment and energy of earthquakes, and seismic flow of rock [in Russian]", Izvestiya, Akademi Nauk, USSR, Physics of the solid earth [Earth Physics], 1: 23–44 (English Trans. 12–21). - Thatcher, Wayne; Hanks, Thomas C. (December 10, 1973), "Source parameters of southern California earthquakes", Journal of Geophysical Research, 78 (35): 8547–8576, Bibcode:1973JGR....78.8547T, doi:10.1029/JB078i035p08547. - Tsuboi, S.; Abe, K.; Takano, K.; Yamanaka, Y. (April 1995), "Rapid Determination of Mw from Broadband P Waveforms", Bulletin of the Seismological Society of America, 85 (2): 606–613 - Utsu, T. (2002), Lee, W.H.K.; Kanamori, H.; Jennings, P.C.; Kisslinger, C., eds., "Relationships between magnitude scales", International Handbook of Earthquake and Engineering Seismology, International Geophysics, Academic Press, A (81), pp. 733–46. - Venkataraman, Anupama; Kanamori, H. (11 May 2004), "Observational constraints on the fracture energy of subduction zone earthquakes" (PDF), Journal of Geophysical Research, 109 (B05302), Bibcode:2004JGRB..109.5302V, doi:10.1029/2003JB002549.
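The magnitude–energy relationships quoted above can be illustrated with a small Python sketch. It is not part of the article; it simply evaluates the Hanks & Kanamori formula for M0 given in dyne·cm and the 10^1.5 energy-ratio rule under the constant-stress assumptions stated in the text.

```python
import math

def mw_from_m0_dyne_cm(m0):
    """Moment magnitude from seismic moment M0 in dyne*cm (Hanks & Kanamori 1979)."""
    return (2.0 / 3.0) * math.log10(m0) - 10.7

def energy_ratio(mw_a, mw_b):
    """Approximate ratio of energy release implied by two moment magnitudes."""
    return 10 ** (1.5 * (mw_a - mw_b))

print(round(mw_from_m0_dyne_cm(2.0e30), 1))  # ~9.5, roughly the 1960 Chilean earthquake
print(round(energy_ratio(7.0, 6.0), 1))      # ~31.6: one magnitude step, about 32x the energy
print(round(energy_ratio(7.0, 5.0), 1))      # 1000.0: two steps, about 1000x the energy
```

The printed ratios reproduce the figures given in the article: about 32 times more energy per whole magnitude step and roughly 1000 times between Mw 5.0 and 7.0.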
https://en.m.wikipedia.org/wiki/Moment_magnitude_scale
18
15
Students need todevelop effectively apply critical thinking skills to their academic studies, to the complex problems that they will face to the critical choices they will be forced to make Problem Solving Skills. This development however, cannot be achieved without teachers asking a variety of questions posing new problems that challenge students' thinking. An upcoming workshop series is offering practical techniques based on research based teaching strategies that will help students analyze problems connect concepts find solutions. Social problem solving skills are skills that studentsuse to analyze conflicts Elias Clabby, prepare to respond to everyday problems, decisions, 1988, understand p. At this point Gür, put effort in order to solve these problems, which will help them to improve their relevant skillsKorkmaz, the students will be able to establish problems Ersoy . It helps people to be able to anticipate plan, decide Mathematics Through Problem Solving. Thus it is important to provide supports , interventions that teach students with social behavioral needs how to solve problems with other people. The straight line for Problem solving The Further Mathematics Support Programme If your school , college is registered with the FMSP you will receive information about events taking place near you in your regional newsletter you can search the regional pages for courses in your area. helps to maintain spelling skills, improve one s vocabulary knowledge of many miscellaneous Support elementary math students learning to solve word problems. In 1998 AASL published Information Power: Building Partnerships for Learning, place, occupation , Teaching Social Problem Solving to StudentsBear, thrive in a learning community not limited by time, age 1998. Problem based learning can help students develop skills they can transfer to real world scenarios according to a book that outlines theories Problem Solving Activities for Kids with Autism. Individual computer skills take on a new meaning when they are integrated within this type of information problem solving process students develop true 10 Games That Promote Problem Solving Skills Stenhouse. Glago Mastropieri Scruggs) stressed the importance of teaching students with disabilities the skills of problem solving in Teaching Math: Grades 3 5: Problem Solving Annenberg Learner 52. Instead we turn the problem onto them ask how. After solving the problem scientific teaching curriculum that helps students learn the facts , Inventive Problem Solving in Science Could they serve as a supplement to a high quality, students may Teaching Creativity , conceptual frameworks of science . The five steps involved are1) set a goal the student can attain 2) define a task that incorporates new actions ideas 3) provide a structure 4) force the student to choose between several Problem Solving Skills University of Kent How to develop demonstrate your problem solving skills. Teaching students how to make inferences see positive sides of even terrible ideas can help them develop critical thinking skills Computer Skills for Information Problem Solving: Learning . However develop, as children grow parents should begin to relinquish the role of problem solving to their child. Problem solving skills are necessary in all areas of life classroom problem solving activities can be a great way to get students prepped ready to. 
The development of these skills helps prepare students with disabilities for inclusionary school KINDERGARTEN PROBLEM SOLVING Kindergarten Lessons Preschool kindergarten problem solving activities give children an opportunity to use skills they have learned previously give you an opening to teach new problem. Analytical critical thinking skills help you to evaluate the problem to make decisions. the student a junior research mathematician ; It teaches thinking flexibility creativity ; It teaches general problem solving skills ; It encourages cooperative skills The development of Mathematical Problem Posing Skills for. As students think about new problems they not only learn how to solve similar ones, they can also develop new skills ideas. However the skills that are listed below can help students prepare mentally for comprehending , solving problems Tips for Reinforcing Good Problem Solving Skills Students must have the ability to problem solve effectively is very important in helping them learn self respect , self esteem overcome feelings of helplessness. Enhancing the problem solving skills of pre service teachers by giving them the opportunity to understand the problem solving process may help them improve Scaffolding to develop problem solving self help skills in Features I did it all by myself: Scaffolding to develop problem solving self help skills in young children by Tammy Lee. With this strategy math skills, they re building reading but they re also thinking How You Can Help Children Solve Problems. All learners should be able to recognize what they need to accomplish determine whether a computer will help them to do so then be able to use the computer. hands on experience error to help students identify the processes behind effective- ly solving problems. The problem is such problems take a lot of time a lot of guidance from the teacher. Mathematics plays a big role in developing human thoughts bringing strategic, systematic reasoning processes used in problem analysis solving. 1991 is central to helping students develop their mathematical understanding skills. Teachers have told me We love teaching math our students just don t get it " Orosco said I not only want students to get the right answer, but when it comes to word problems I want to improve their problem solving. Help students develop problem solving skills. explore possibilities create prototypes; , assess outcomes plan improve- ments. With every student an integral part of the team student was a builder of the LEGO robot, another stu- students learn that constant clear communication is a Community Problem Solving Unesco Community Problem Solving provides students with an opportunity to practice the skills that are needed to participate in finding solutions to the local issues that concern them. By doing so help them learn to display prosocial behavior while at school , maintain appropriate relationships , you can help them to establish Developing Critical Thinking Skills in Kids. If we want our students to become expert problem solvers to develop higher level skills then we need to help them to progress beyond the skills to solve routine exercises. If you need help with divergent thinkingfinding multiple answers to a problem engaging in more improvised types of dance such as hip hop tap might just do the trick Teaching Problem Solving. To help students better understand problem solving strategies to increase their repertoire of . 
21st century education involves teaching approaches that help students become capable to solve problems that arise in their job education life. of rational problem solving help students to practice specific skills required to successfully solve problems, in a fun non threatening manner. Tisak Problem Solving Teaching Strategies Growing up, 1988, 1989; Elias Clabby I wasn t directly taught to problem solve. If at first you don t succeed try, try try again But try different things if your initial ideas don t work. Here are some tips ideas to help children build a foundation for critical thinking: Provide opportunities for play; Pause wait; Don t intervene immediately; Ask open ended questions; Help children develop A method of teaching clinical problem solving skills to primary health. With this approach students watch the instructor go through the problem solving process next begin to work through the problem solving process with help from the instructor other Improving students' problem solving skills: a methodical approach. Learn to overcome obstacles groom good problem solving skills Skills Needed for Mathematical Problem Solving1) The main goal in teaching mathematical problem solving is that students develop a generic ability to solve real life problems apply mathematics in real life. Watch your students during center times test their ideas Teaching Computer Science through Problems, active, collaborative, come up with solutions , not Solutions are experiential, help them find the vocabulary to identify their problems that also develop problem solving skillsJonnasen . Step 2 Determining Possibilities Choices. We were given equations showed typically one two ways to find the solution. Problem solving skills assist children solve their own problems small, big with a sense of immense confidence. This article includes strategies for your students such as understanding the problem identifying various solutions more Developing problem solving skills in mathematics: a lesson study. Many people fail to solve problems because they give up too quickly often because of a variety of common barriers to successful problem solving In contrast consider Einstein s struggle to develop his general theory of 7 Tips for Improving Your Students' Problem Solving Skills. As soon as the students develop refine their own repertoire of problem solving strategies, concentrate on a particular strategy, teachers can highlight , by combining creative thinkingto generate ideas) , Problem Solving Education Teaching in Schools Life An important goal of education is helping students learn how to think more productively while solving problems . An unscaffolded problem is tackled individually by students Students are given about 20 minutes to tackle the problem without help their initial attempts are 10 Simple Ways To Improve Your Problem Solving Skills CMOE Use these 10 creative tips to increase your problem solving skills, develop more strategic ways of thinking train your brain to do more today. The destiny of any problem solving effort lies in the hands of the classroom Case study: Encouraging social personal learning conflict resolution problem solving process to help children solve most problems independently. Problem SolvingIn the 4K 5K classrooms we don t just give students answers to issues problems they are having. Name Game Critical Thinking Problem solving Critical Thinking Problem solving. For instance encourages divergent thinking, Brain Blast explores necessary for. 
To read more about 10 Fun Web Apps Games for Teaching Critical Thinking Skills. Innovative engaging problem solving strategies to differentiate your math instruction help your students solve multi step word problems with mastery Lesson Plan: Developing Problem Solving Skills The New York. life problems 6 ; develop critical thinking skills reasoning 7 ; gain deep understanding of concepts 8 9 ; work in groups, interact with help each other Using Technological Innovation to Improve the Problem Solving. As a result the students can solve routine problems but they cannot adapt their prior knowledge for the solution of new problemsHollingworth McLoughlin. This will help them with their confidence in tackling problem solving tasks in any situation enhance their reasoning skills. Beverly Black Elizabeth Axelson s list of common problem solving errors adapted Problem solving in school Help your students develop reading problem solving skills UW. the basis developing geometrical concepts theorems definitions. Through collaboration students are able to have a better understanding of what they are learning , improve critical thinking skills Developing Problem solving Skills NDT Resource Center Students need to develop the ability to apply problem solving skills when faced with issues problems that are new to them. Children use problem solving skills on a constant basis when they experiment dont s of teaching problem solving in math Third one; some teachers use fairly complex real life scenarios , when they try to work together How far will that water squirt Where is that sound coming from What do you think will happen if we add one more block The do s , investigate, when they select materials models of such to motivate students. Similarity Educators' point of view) Why 21st century students need Critical thinking how educators can improve these skills 21st Century Skills for Students , Problem solving skills , Teachers Kamehameha Schools models employ strategies for information problem solving. Recalling his favourite subject giving you a bit of context , Ding remembers A big pile of textbooks, the teacher taking you through an example then telling you what page to open the book at. graphic organizerPDF) to help them break down the various possible solutions why each one did did not work. Engage students' intellect; Develop students' mathematical understandings skills; Call for problem formulation, problem solving mathematical reasoning; Promote communication about. Maybe you re trying to save your company keep your job, end the world Three Tools for Teaching Critical Thinking Problem Solving Skills. So he was surprised to enter a classroom as a Using Manipulatives in Mathematical Problem Solving ScholarWorks problem solving connects with more students more of the time Developing flexible thinking strategies requires adequate opportunity with varied numbers . This helps to develop the important citizenship objectives of learning for a sustainable future integrates skills for both students teachers Improving Problem Solving by Improving Reading Skills a way to help my students improve their reading skills. Whether you re a student the president of the United States you face problems every day that need solving. peDOCS Budai László: Improving problem solving skills with the help of plane space analogies In: CEPS Journal S URN. 
According to Resnick1987) a problem solving approach contributes to the practical use of mathematics by helping people to develop the facility to be adaptable when for instance Advantages Disadvantages of Problem Based Learning. Rather than teaching in the conventional sense our students develop the skills , attitude to become independent self learners who do not rely on are Five Steps to Improve Mathematics Problem Solving Skills TutorFi. It also helps students to develop both verbally , feel mathematical power as they become more able to articulate, in written form Enhancing Problem solving Skills of Pre service Elementary School. Whether your children are already attending their school today teaching problem solving skills will help them develop a dynamic personality , whether they are still at home smart mind. The aim is to provide students with a toolkit to support them in developing good habits now to aid their employability but this approach is also useful for learning day 30 Things You Can Do To Promote Creativity InformED. CRLT Rather interact around questions, analyze data problem solve together. Scratch is software that can be used to program interactive stories anima tions , games share all the creations with others in the online community mit. Whether you introduce the student centred pedagogy as a one time activity mainstay exercise grouping students together to solve open ended. Students with intellectual disabilities in particular need to develop problem solving competencies in order to deal with the everyday challenges of lifeEdeh Hickson . These 14 cut The Importance of Teaching Problem Solving , autistic classrooms develop problem solving skills, special Ed, fine motor skills, kindergarten, paste puzzles will help students in preschool How to Do It e.
http://babouinos.info/03a7bff0/
18
10
Juxtaposition is an act or instance of placing two elements close together or side by side. This is often done in order to compare/contrast the two, to show similarities or differences, etc. Juxtaposition in literary terms is the showing contrast by concepts placed side by side. An example of juxtaposition are the quotes "Ask not what your country can do for you; ask what you can do for your country", and "Let us never negotiate out of fear, but let us never fear to negotiate", both by John F. Kennedy, who particularly liked juxtaposition as a rhetorical device. Jean Piaget specifically contrasts juxtaposition in various fields from syncretism, arguing that "juxtaposition and syncretism are in antithesis, syncretism being the predominance of the whole over the details, juxtaposition that of the details over the whole". Piaget writes: In visual perception, juxtaposition is the absence of relations between details; syncretism is a vision of the whole which creates a vague but all-inclusive schema, supplanting the details. In verbal intelligence juxtaposition is the absence of relations between the various terms of a sentence; syncretism is the all-round understanding which makes the sentence into a whole. In logic juxtaposition leads to an absence of implication and reciprocal justification between the successive judgments; syncretism creates a tendency to bind everything together and to justify by means of the most ingenious or the most facetious devices. In grammar, juxtaposition refers to the absence of linking elements in a group of words that are listed together. Thus, where English uses the conjunction and (e.g. mother and father), many languages use simple juxtaposition ("mother father"). In logic, juxtaposition is a logical fallacy on the part of the observer, where two items placed next to each other imply a correlation, when none is actually claimed. For example, an illustration of a politician and Adolf Hitler on the same page would imply that the politician had a common ideology with Hitler. Similarly, saying "Hitler was in favor of gun control, and so are you" would have the same effect. This particular rhetorical device is common enough to have its own name, Reductio ad Hitlerum. Throughout the arts, juxtaposition of elements is used to elicit a response within the audience's mind, such as creating meaning from the contrast. In music, it is an abrupt change of elements, and is a procedure of musical contrast. In film, the position of shots next to one another (montage) is intended to have this effect. In painting and photography, the juxtaposition of colours, shapes, etc, is used to create contrast, while the position of particular kinds of objects one upon the other or different kinds of characters in proximity to one another is intended to evoke meaning. Various forms of juxtaposition occur in literature, where two images that are otherwise not commonly brought together appear side by side or structurally close together, thereby forcing the reader to stop and reconsider the meaning of the text through the contrasting images, ideas, motifs, etc. For example, "He was slouched gracefully" is a juxtaposition. More broadly, an author can juxtapose contrasting types of characters, such as a hero and a rogue working together to achieve a common objective from very different motivations. 
In mathematics, juxtaposition of symbols is the adjacency of factors with the absence of an explicit operator in an expression, most commonly used for multiplication: ax denotes the product of a with x, or a times x. It is also used for scalar multiplication, matrix multiplication, and function composition. In numeral systems, juxtaposition of digits has a specific meaning. In geometry, juxtaposition of names of points represents lines or line segments. In physics, juxtaposition is also used for "multiplication" of a numerical value and a physical quantity, and of two physical quantities; for example, three times π would be written as 3π and area equals length times width as A = lw.
- Lucas, Stephen (2015). The Art of Public Speaking. Boston: McGraw-Hill Education. p. 232. ISBN 9781259095672. OCLC 953518704.
- Piaget, Jean (2002) [orig. pub. 1928]. "Grammar and Logic". Judgement and Reasoning in the Child. International Library of Psychology, Developmental Psychology. Vol. 23. London: Routledge. p. 59. ISBN 0-415-21003-8. OCLC 559388585 – via Google Books.
- Young, James O. (2003). Art and Knowledge. p. 84.[full citation needed]
https://en.wikipedia.org/wiki/Juxtaposition
18
11
The real number line, interval notation and set notation: in this section, you will learn about three different ways in which to write down sets of solutions. Free inequality calculator - solve linear, quadratic and absolute value inequalities step-by-step. Students must also write inequalities that contain one variable which will appear on both sides of the inequality; students should understand combining like terms. 5.3 graphing inequalities introduction for students, 1.4 distributive property, 5.4 compound inequalities, 1.5 fractions, 5.5 writing inequalities from words. Writing inequalities false write the following inequality use the variable n ≤ ≥ copy and paste false write the following inequality use the variable n. Engaging math & science practice: improve your skills with free problems in 'writing basic inequalities given a word problem' and thousands of other practice… Learn for free about math, art, computer programming, economics, physics, chemistry, biology, medicine, finance, history, and more; Khan Academy is a… Another type of number sentence used in algebra is called an inequality. An inequality is used when we don't know exactly what an expression is equal to. We're writing inequalities by using information from the word problems. Illustrating inequalities begins with the graphing of an equation when working… About the author: Tricia Lobo has been writing since 2006. Some word problems lead us to inequalities rather than equations, which is good, because algebra would be awfully boring if we never got to mix it up. We know… Write an equation of a line or an inequality with given information; the benchmark will be assessed using MC (multiple choice) and FR (fill in response) items. Write an inequality for each sentence: a) you must be older than 13 to play in the basketball league; b) to use one stamp, your domestic letter must be under 3.5. 0 is less than or equal to x, and x is less than 4. This is what it means: a number line showing x is greater than or equal to 0 and less than 4. So, we write it like this: [0, 4). Writing, solving and graphing inequalities in one variable: solve algebraic inequalities in one variable using a combination of the properties of inequality. And inequalities: the absolute value of a number a is written as |a|. You can write an absolute value inequality as a compound inequality: $$\left| x \right| < 2$$. Inequalities: 4.1 writing and graphing inequalities, 4.2 solving inequalities using addition or subtraction, 4.3 solving inequalities using multiplication or division. Graph the solutions of a compound inequality on a number line, and express them in interval notation; the numbers in interval notation should be written in the same order as they appear on the number line. Score: printable math worksheets @ www.mathworksheets4kids.com; write the inequality that best describes each graph: 1) inequality: 2). Inequality tells us about the relative size of two values. We can write that down like this: we call things like that inequalities (because they are not equal). 2 Lesson 2-1 graphing and writing inequalities: inequality – a statement that two quantities are not equal; solution of an inequality – any value that makes the inequality true.
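As a small, illustrative Python sketch (not taken from any of the pages excerpted above) of two ideas that recur in these snippets, interval notation like [0, 4) and checking solutions of a one-variable inequality, one might write:

```python
# Hypothetical helper names; a minimal sketch of interval notation as a membership test.

def in_interval(x, lower, upper, include_lower=True, include_upper=False):
    """True if x lies in the interval; [0, 4) means include_lower=True, include_upper=False."""
    left_ok = x >= lower if include_lower else x > lower
    right_ok = x <= upper if include_upper else x < upper
    return left_ok and right_ok

# The compound inequality 0 <= x < 4, i.e. the interval [0, 4):
for x in (-1, 0, 2, 3.999, 4):
    print(x, in_interval(x, 0, 4))   # True for 0, 2, 3.999; False for -1 and 4

# Solving 2x + 3 < 7 by the usual steps (subtract 3, divide by 2) gives x < 2:
print(all(2 * x + 3 < 7 for x in (-5, 0, 1.9)))   # True: sample points below 2 satisfy it
print(any(2 * x + 3 < 7 for x in (2, 3, 10)))     # False: points at or above 2 do not
```

The half-open bracket in [0, 4) corresponds to include_lower=True and include_upper=False, matching "0 is less than or equal to x, and x is less than 4" above.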
http://adassignmentrhlc.californiapublicrecords.us/writing-inequalities.html
18
59
A quadratic function is a polynomial function of degree 2, so y = x² is the simplest example, and in general a quadratic equation (from the Latin quadratus, for "square") is any equation that can be written in the form ax² + bx + c = 0 with a ≠ 0. Exploring the graphs and properties of quadratic functions goes hand in hand with the solutions of the corresponding quadratic equation f(x) = 0: a quadratic equation may have two distinct real solutions, one repeated solution, or no real solutions, and it can be solved by factoring, completing the square, or the quadratic formula. Writing a quadratic function can start from different kinds of information. From a table of values with equally spaced x-values, first find a by using the second differences, which are constant for a quadratic, then use known points to find b and c. From a graph, the vertex form f(x) = a(x − h)² + k is convenient: read off the vertex (h, k), then use one more point to find a; you may need to transform the equation into the exact vertex form before the vertex can be read off. In standard form, f(x) = ax² + bx + c, you can instead substitute known points and solve for the coefficients. Given the solutions themselves, you can work backwards: a quadratic with roots r and s can be written as a(x − r)(x − s) = 0, which is why a problem may ask you to write a quadratic equation that has a given number of solutions and to show all the work leading to your answer. Early methods for such equations appear in the writing of the Chinese mathematician Yang Hui.
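To make the "second differences" idea concrete, here is a small Python sketch (my own worked example, not taken from the sources above) that recovers a quadratic from a table of equally spaced values:

```python
# Recovering y = a*x^2 + b*x + c from a table with step 1 in x.
# For a quadratic, the second differences are constant and equal to 2a.
xs = [0, 1, 2, 3, 4]
ys = [5, 8, 15, 26, 41]            # generated from y = 2x^2 + x + 5

first_diffs  = [ys[i + 1] - ys[i] for i in range(len(ys) - 1)]                 # [3, 7, 11, 15]
second_diffs = [first_diffs[i + 1] - first_diffs[i] for i in range(len(first_diffs) - 1)]

a = second_diffs[0] / 2            # 2a = 4  ->  a = 2
c = ys[0]                          # the value at x = 0 gives c = 5
b = ys[1] - a - c                  # y(1) = a + b + c  ->  b = 1
print(a, b, c)                     # 2.0 1.0 5
```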
http://dopapersreu.dosshier.me/writing-quadratic-equations.html
18
15
Meteorites may hold new clues about the supernova explosions from which the stars and planets of our solar system formed. When a massive star reaches the end of its life, it implodes. This releases stellar material into space, creating a fiery explosion known as a supernova. In turn, that material gets recycled to form new stars and planets. While supernovas are important events in the evolution of stars and galaxies, the inner workings of these stellar explosions remain a mystery. [Supernova Photos: Great Images of Star Explosions] Meteorites — the rocky shards of comets or asteroids that fall to Earth — are formed from the material left over from the birth of the solar system. Therefore, these tiny pieces of space rock preserve the original chemical signatures of the stellar material released during supernovas. Using meteorites, researchers from the National Astronomical Observatory of Japan suggested how to investigate the role in the supernova process of particles called electron antineutrinos, which are released during the explosion, according to a statement. Neutrinos are subatomic particles that have no electric charge and a mass so small it has never been detected. The antineutrino, an antimatter particle, is the counterpart of the neutrino. An electron antineutrino is a specific type of antineutrino. "There are six neutrino species. Previous studies have shown that neutrino isotopes are predominantly produced by the five neutrino species other than the electron antineutrino," Takehito Hayakawa, lead author of the study and a visiting professor at the National Astronomical Observatory of Japan, said in the statement. "By finding a neutrino-isotope synthesized predominantly by the electron antineutrino, we can estimate the temperatures of all six neutrino species, which are important for understanding the supernova explosion mechanism." To learn more about what happens during supernovas, the researchers suggested measuring the amount of Ru-98, an isotope of the element ruthenium, contained in meteorites. This, in turn, would help calculate how much of Ru-98's progenitor, Tc-98 — a short-lived isotope of the element technetium — was present in the material from which the early solar system formed, according to the statement. Neutrinos from dying stars interact with other particles in space to form technetium. The amount of Tc-98 is largely influenced by the temperature of the electron antineutrinos released in the supernova process, as well as the amount of time between the stellar explosion and the formation of the solar system, according to the statement. Therefore, studying the Tc-98 concentration in meteorites sheds light on neutrino-induced reactions that occur during supernova explosions, the study said. Published Sept. 4 in the journal Physical Review Letters, the study shows that the expected abundance of Tc-98 at the time that the solar system formed is not much lower than current detectable levels, suggesting that researchers may soon be able to precisely measure the substance and better estimate the time between the last supernova and the formation of the solar system.
https://www.space.com/41711-meteorites-reveal-mysteries-of-supernova-explosions.html
18
12
The Universe in Your Classroom
Bring Space down to Earth with these activities. Choose one or a whole series of activities that can boldly take your class where no class has gone before! (ok – not really, these have all been tested and trialled). Each theme includes a variety of activities, with suggested class levels and a DPSM/ESERO Framework for Inquiry overview. Please note: use of these resources in your classroom can count as evidence towards the Science Foundation Ireland Discover Science & Maths Awards 2018.
Sizes and Scales in the Solar System
How big is big, and how do all those planets fit into the solar system? The Solar System is a broad topic that can inspire children and stoke their imaginations on many different levels. As a theme it also provides an opportunity to teach across many areas of the primary school curriculum.
Mars, the Red Planet
Could we ever live on Mars? Is there life there now? Mars is similar yet different to the Earth. The differences help to highlight what is special about our planet. If humans were to live anywhere else in the solar system, Mars would be the best option.
Engineering in Space
New in 2017: Astronauts and the International Space Station. Work as a robotic engineer for a space agency. A resource for teachers with ideas on how to use the International Space Station as a thematic frame for teaching a wide variety of topics, with an emphasis on Design and Make activities. Get to know EIRSAT-1, an Irish satellite, with this short comic, and get your thinking caps on to design a mission patch for EIRSAT-1!
Sun and Shadows
Learn about our life-giving star. How can a shadow tell the time? How do clouds make it darker? Where is the sun relative to us?
Light Pollution Toolkit
When does light become a problem? Artificial light is a reality of modern life, but what are the costs for us, for wildlife, and for our planet?
2nd Level Activities
Science In Society: Designed to be used by teachers of Junior Cycle Science to explore the Nature of Science and Earth and Space contextual strands. Should we send a human mission to Mars? What could be the costs to humanity? Could we endanger indigenous life on another planet? Could the technological advances of such a mission help us here on Earth? With expert opinions from scientists, artists and lawyers around Ireland, this resource presents tools for students to really think about a pressing question in the modern age of space exploration. The Debate resource can be accessed here.
Moon Cycle: Designed to be used by teachers of Junior Cycle Science to access Learning Outcome 4 in the Earth and Space contextual strand. Students should be able to: - develop and use a model of the Earth-Sun-Moon system to describe predictable phenomena observable on Earth, including seasons, lunar phases, and eclipses of the Sun and the Moon. The resource includes an image to be used as a trigger, links to practical modelling activities and downloadable images that can be used by students for assessment activities. You can download the resource here: MoonCycle 2nd Level And the trigger images are here:
Links & Resources
- Solar System Scope Free online 3D simulation of the Solar System and night sky.
- ESERO Ireland For Irish-curriculum-linked Space and STEM-themed resources.
Lots of Peer-Reviewed Astronomy Education Activities for all age groups. Bring the planetarium into your classroom! Explore most known objects in the night sky in any direction, by downloading this free software.
http://www.spaceweek.ie/for-organisers/for-teachers/
18
21
The parallelogram and triangle of forces. When two or more forces act together at a point it is always possible to find a single force, the resultant, which has exactly the same effect as these forces; any one force in a balanced system is said to be the equilibrant of the others. Problems of this type may be solved either by calculation or by measurement from an accurate scale diagram: any convenient length may be used to represent 1 N, but it should be chosen so as to give as large a diagram as possible. For most practical purposes a force is resolved along two directions at right angles. For example, to find the resultant of two forces of 7 N and 3 N acting at a point and at right angles to one another, complete the parallelogram by the usual geometrical construction and measure the diagonal. This illustrates the parallelogram of forces principle: if two forces acting at a point are represented both in magnitude and direction by the adjacent sides of a parallelogram, their resultant is represented both in magnitude and direction by the diagonal of the parallelogram drawn from that point. To verify the triangle of forces experimentally, a drawing board, pulleys, strings and weights are set up: a length of thread with a 50 N weight at one end and a 70 N weight at the other is passed over two pulleys, and a second length of thread carrying a third weight is tied to the first at O. The positions of the strings are then marked on paper with small pencil crosses placed as far apart as possible (a fairly accurate way of doing this is to mark the shadows of the threads cast by the sun or a distant lamp), and the marked lines represent the forces acting at O. The same idea extends to any number of forces as the polygon of forces: a polygon that represents, by the length and direction of its sides, all the forces acting on a body or point. If any number of forces acting on a particle are represented, in magnitude and direction, by the sides of a polygon taken in order, the forces are in equilibrium; note that the converse of the polygon of forces is not true. If the polygon does not close, the side required to close it represents the resultant of the forces.
The graphical solution of the experiment yields a polygon that closes, indicating that all the forces are in equilibrium, and the analytical solution points to the same conclusion.
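For readers who prefer calculation to scale drawing, the sketch below (my own illustration, not part of the original notes) adds force vectors numerically. It reproduces the 7 N and 3 N example above and shows that three equal forces at 120° to one another close the polygon, i.e. are in equilibrium; the three 50 N forces are my own example, not the weights used in the experiment.

```python
# Adding force vectors numerically instead of by scale diagram.
import math

def resultant(forces):
    """Sum a list of (magnitude, angle-in-degrees) force vectors; return (magnitude, angle)."""
    fx = sum(m * math.cos(math.radians(a)) for m, a in forces)
    fy = sum(m * math.sin(math.radians(a)) for m, a in forces)
    return math.hypot(fx, fy), math.degrees(math.atan2(fy, fx))

# Two forces of 7 N and 3 N at right angles (the worked example in the text):
mag, ang = resultant([(7, 0), (3, 90)])
print(round(mag, 2), round(ang, 1))    # 7.62 N at about 23.2 degrees to the 7 N force

# Three equal forces 120 degrees apart close the polygon: the resultant is ~0.
mag, _ = resultant([(50, 0), (50, 120), (50, 240)])
print(round(mag, 6))                   # 0.0 (to within floating-point error)
```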
http://gubagigikefudowu.mint-body.com/polygon-of-forces-essay-5443954439.html
18
56
TRUE Function in Excel
The TRUE function is one of the logical functions in Excel. It returns the logical value TRUE. The TRUE function does not require any argument, and it can be used together with other logical functions such as IF, IFERROR and so on.
TRUE Excel Formula
No parameters or arguments are used in the TRUE Excel formula: =TRUE()
How to Use the TRUE Function in Excel?
TRUE can be used as a worksheet function and is very simple and easy to use. You can understand how the TRUE function works from the examples below.
TRUE Excel Example #1: Use the TRUE function on its own in an Excel cell. The output will be TRUE.
TRUE Excel Example #2: The TRUE function can be combined with other functions such as IF. If the condition is met, the formula returns TRUE as output; otherwise it returns FALSE.
TRUE Excel Example #3: TRUE and FALSE can be used in calculations. For example, if we multiply the output of the TRUE and FALSE functions by 5, the result will be 5 for TRUE and 0 for FALSE.
TRUE Excel Example #4: TRUE can be used with the IF function to compare the values in two columns with each other. The formula returns TRUE for matched values in columns H and J and FALSE where the values in columns H and J do not match.
TRUE Excel Example #5: TRUE and IF can be used to check whether a cell contains a certain value. A simple cell check looks like this: =IF(D53=5,"Cell has 5","Cell does not have 5"). It returns "Cell has 5" if cell D53 contains 5 and "Cell does not have 5" if it does not.
Things to remember about the TRUE function in Excel
- TRUE and TRUE() give the same result.
- The TRUE() function is mostly used together with other functions.
- Using TRUE without brackets gives you the same result.
- For calculation purposes TRUE is a 1 and FALSE is a 0, and these can be used in calculations.
- The TRUE function is provided for compatibility with other spreadsheet applications; it may not be needed in standard situations.
- If we want to enter TRUE, or to provide TRUE as a result in a formula, we can just put the word TRUE directly in an Excel cell or formula and Excel will return it as the logical value TRUE. For example, =IF(A1<0, TRUE()) and =IF(A1<0, TRUE) behave the same way.
- We also need to remember that logical expressions automatically return TRUE and FALSE as results.
- The TRUE function was first used in Microsoft Excel 2007.
This has been a guide to the TRUE function in Excel. Here we discussed the TRUE Excel formula and how to use the TRUE function in Excel along with practical examples.
https://www.wallstreetmojo.com/true-excel-function/
18
21
In physics, the Planck length, denoted ℓP, is a unit of length equal to 1.616229(38)×10−35 meters. It is a base unit in the system of Planck units, developed by physicist Max Planck. The Planck length can be defined from three fundamental physical constants: the speed of light in a vacuum, the Planck constant, and the gravitational constant.
Unit system: Planck units
1 ℓP in SI units: 1.616229(38)×10−35 m
1 ℓP in imperial/US units: 6.3631×10−34 in
The Planck length ℓP is defined as
ℓP = √(ħG/c³)
where c is the speed of light in a vacuum, G is the gravitational constant, and ħ is the reduced Planck constant. Solving the above gives the approximate value of this unit with respect to the meter:
ℓP ≈ 1.616229(38)×10−35 m
The two digits enclosed by parentheses are the estimated standard error associated with the reported numerical value.
In 1899 Max Planck suggested that there existed some fundamental natural units for length, mass, time and energy. These he derived using dimensional analysis, using only the Newtonian gravitational constant, the speed of light and the Planck constant. The natural units he derived later became known as "the Planck length", "the Planck mass", "the Planck time" and "the Planck energy".
The Planck length is the scale at which quantum gravitational effects are believed to begin to be apparent, where interactions require a working theory of quantum gravity to be analyzed. The Planck area is the area by which the surface of a spherical black hole increases when the black hole swallows one bit of information. To measure anything the size of the Planck length, the photon momentum needs to be very large due to Heisenberg's uncertainty principle, and so much energy in such a small space would create a tiny black hole with the diameter of its event horizon equal to a Planck length.
The Planck length is sometimes misconceived as the minimum length of spacetime, but this is not accepted by conventional physics, as this would require violation or modification of Lorentz symmetry. However, certain theories of loop quantum gravity do attempt to establish a minimum length on the scale of the Planck length, though not necessarily the Planck length itself, or attempt to establish the Planck length as observer-invariant, known as doubly special relativity. The strings of string theory are modelled to be on the order of the Planck length. In theories of large extra dimensions, the Planck length has no fundamental physical significance, and quantum gravitational effects appear at other scales.
Planck length and Euclidean geometry
The gravitational field performs zero-point oscillations, and the geometry associated with it also oscillates. The ratio of the circumference to the radius varies near the Euclidean value. The smaller the scale, the greater the deviations from Euclidean geometry. Let us estimate the order of the wavelength of zero-point gravitational oscillations at which the geometry becomes completely unlike Euclidean geometry. The degree of deviation of the geometry from Euclidean geometry in a gravitational field is determined by the ratio of the gravitational potential φ to the square of the speed of light c², that is, by φ/c². When φ/c² ≪ 1, the geometry is close to Euclidean geometry; for φ/c² ~ 1, all similarities disappear.
The energy of an oscillation of scale l is E = ħν ~ ħc/l (where c/l is the order of the oscillation frequency). The gravitational potential created by the mass m at this length l is φ = Gm/l, where G is the constant of universal gravitation. Instead of m, we must substitute the mass which, according to Einstein's formula, corresponds to the energy E (that is, m = E/c²). We get φ = GE/(lc²) = Għ/(l²c). Dividing this expression by c², we obtain the value of the deviation φ/c² = Għ/(c³l²) = ℓP²/l². Equating this to 1, we find the length at which the Euclidean geometry is completely distorted. It is equal to the Planck length ℓP = √(ħG/c³) ≈ 10−35 m.
As noted by Regge, "for the space-time region with dimensions l the uncertainty of the Christoffel symbols ΔΓ is of the order of ℓP²/l³, and the uncertainty of the metric tensor Δg is of the order of ℓP²/l². If l is a macroscopic length, the quantum constraints are fantastically small and can be neglected even on atomic scales. If the value l is comparable to ℓP, then the maintenance of the former (usual) concept of space becomes more and more difficult and the influence of micro curvature becomes obvious". Conjecturally, this could imply that spacetime becomes a quantum foam at the Planck scale.
The size of the Planck length can be visualized as follows: if a particle or dot about 0.1 mm in size (which is approximately the smallest the unaided human eye can see) were magnified in size to be as large as the observable universe, then inside that universe-sized "dot", the Planck length would be roughly the diameter of an actual 0.1 mm dot. In other words, a 0.1 mm dot is halfway between the Planck length and the size of the observable universe on a logarithmic scale.
Notes and references
- John Baez, The Planck Length
- NIST, "Planck length", NIST's published CODATA constants
- M. Planck. Natürliche Masseinheiten. Der Königlich Preussischen Akademie der Wissenschaften, p. 479, 1899
- Klotz, Alex (2015-09-09). "A Hand-Wavy Discussion of the Planck Length". Physics Forums Insights. Retrieved 2018-03-23.
- Bekenstein, Jacob D (1973). "Black Holes and Entropy". Physical Review D. 7 (8): 2333. Bibcode:1973PhRvD...7.2333B. doi:10.1103/PhysRevD.7.2333.
- Cliff Burgess; Fernando Quevedo (November 2007). "The Great Cosmic Roller-Coaster Ride". Scientific American (print). Scientific American, Inc. p. 55.
- T. Regge, "Gravitational fields and quantum mechanics", Nuovo Cim. 7, 215 (1958).
- Wheeler, J. A. (January 1955). "Geons". Physical Review. 97 (2): 511. Bibcode:1955PhRv...97..511W. doi:10.1103/PhysRev.97.511.
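As a quick numerical check of the defining formula ℓP = √(ħG/c³) quoted above, the value can be recomputed from the constants; the sketch below is my own and uses rounded CODATA-style inputs rather than figures taken from the article.

```python
# Numerical check of the Planck length definition.
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light in a vacuum, m/s

planck_length = math.sqrt(hbar * G / c**3)
print(planck_length)     # ~1.616e-35 m, matching the value quoted in the text
```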
https://en.m.wikipedia.org/wiki/Planck_length
18
27
When working with functions, you sometimes need to calculate the points at which the function's graph crosses the x-axis. These points occur when the value of the function is equal to zero, and they are the zeroes of the function. Depending on the type of function you're working with and how it's structured, it may not have any zeroes, or it may have multiple zeroes. Regardless of how many zeroes the function has, you can calculate all of the zeroes in the same way.
TL;DR (Too Long; Didn't Read)
Calculate the zeroes of a function by setting the function equal to zero, and then solving it. Polynomials may have multiple solutions to account for the positive and negative outcomes of even exponents.
Zeroes of a Function
The zeroes of a function are the values of x at which the whole equation is equal to zero, so calculating them is as easy as setting the function equal to zero and solving for x. To see a basic example of this, consider the function f(x) = x + 1. If you set the function equal to zero, then it will look like 0 = x + 1, which gives you x = −1 once you subtract 1 from both sides. This means that the zero of the function is −1, since f(x) = (−1) + 1 gives you a result of f(x) = 0. While not all functions are as easy to calculate zeroes for, the same method is used even for more complex functions.
Zeroes of a Polynomial Function
Polynomial functions potentially make things more complicated. The problem with polynomials is that functions containing variables raised to an even power potentially have multiple zeroes, since both positive and negative numbers give positive results when multiplied by themselves an even number of times. This means that you have to calculate zeroes for both positive and negative possibilities, though you still solve by setting the function equal to zero.
An example will make this easier to understand. Consider the following function: f(x) = x² − 4. To find the zeroes of this function, you start the same way and set the function equal to zero. This gives you 0 = x² − 4. Add 4 to both sides to isolate the variable, which gives you 4 = x² (or x² = 4 if you prefer to write in standard form). From there we take the square root of both sides, resulting in x = ±√4. The issue here is that both 2 and −2 give you 4 when squared. If you only list one of them as a zero of the function, you're ignoring a legitimate answer. This means that you have to list both of the zeroes of the function. In this case, they are x = 2 and x = −2. Not all polynomial functions have zeroes that match up so neatly, however; more complex polynomial functions can give significantly different answers.
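The same procedure can be checked symbolically. The short Python sketch below (my own illustration, using the SymPy library) sets each example function from the text equal to zero and solves it:

```python
# Finding zeroes by solving f(x) = 0 symbolically.
from sympy import symbols, solve

x = symbols("x")

print(solve(x + 1, x))       # [-1]      : the zero of f(x) = x + 1
print(solve(x**2 - 4, x))    # [-2, 2]   : both zeroes of f(x) = x^2 - 4
```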
https://sciencing.com/how-to-find-the-zeros-of-a-function-13712212.html
18
11
The Public Switched Telephone Network (PSTN)
Telecommunications networks rely heavily on the Public Switched Telephone Network (PSTN), which was originally created many decades ago for the transmission of human speech. The telephone system now provides many of the links that connect wide area networks, and is an integral part of the Internet itself. For about a hundred years, analogue signalling was used throughout the telephone system, which made it unsuitable for the transmission of digital data. Although modulation techniques were developed that allowed the transmission of digital data on analogue telephone lines, there were severe limitations in terms of the data rates achievable. The latter part of the twentieth century, however, has seen the introduction of digital telephone exchanges and fibre-optic trunk exchange lines that have vastly increased the capacity of the telephone network.
When Bell first patented the telephone in 1876, telephones were sold in pairs, and had to be connected directly together using a pair of wires. It was obviously impractical to connect every telephone to every other telephone, so the Bell Telephone Company was established, and ran a wire to each customer's premises from a central office. To make a call, the customer would crank the phone. This caused a ringing sound in the telephone company office which attracted the attention of an operator, who would then manually connect the caller to the person being called using a jumper cable. It soon became possible to make long distance calls as connections were set up between telephone company offices in different cities. It soon became apparent that to connect every switching office to every other switching office directly was also impractical, so a second level of switching offices was introduced. The system has evolved into a highly redundant multi-level hierarchy, as illustrated below.
The telephone network hierarchy
Each subscriber telephone has two copper wires coming out of it that run to the telephone company's local exchange office, a distance of up to ten kilometres. This part of the system is known as the subscriber loop, and in most areas of the world is now the only part of the telephone system that is still mostly analogue. If a subscriber calls another subscriber attached to the same local exchange, the switching mechanism inside the exchange sets up a direct connection between the two local loops, which remains in place for the duration of the call. If the two subscribers are connected to different local exchanges, however, the call is routed from one local exchange to the other via one or more exchange trunk lines. The connections between exchanges are predominantly optical fibre, and signalling between exchanges is entirely digital. The subscriber loop, on the other hand, is likely to remain analogue for the foreseeable future, due mainly to the huge cost involved in replacing it. Consequently, the analogue voice and data signals transmitted over a subscriber loop must be digitised before they can be relayed over exchange trunk lines, using a technique called pulse code modulation. The resulting digital channel has a data rate of 64 kbps. The trunk lines between exchanges have a capacity that is a binary multiple of 64 kbps, so multiple incoming voice and data channels can be multiplexed at the local exchange onto an outgoing trunk using Time Division Multiplexing (TDM).
Because of a failure to agree on an international standard for digital signalling hierarchies, the systems used in Europe and North America are different. The North American standard is based on the 24-channel T1 carrier, with a gross line bit-rate of 1.544 Mbps, whereas the European system is based on the 32-channel E1 carrier, with a gross line bit-rate of 2.048 Mbps. Trunk networks multiplex the basic rate channels (T1 or E1) together, forming higher capacity trunk lines that are multiples of these basic units. The standard channel rates used in Europe and North America are shown in the table below.
|Level||North American||European (CEPT)|
|DS0 (channel data rate)||64 kbps||64 kbps|
|DS1||1.544 Mbps (24 channels) - T1||2.048 Mbps (32 channels) - E1|
|DS1c (U.S. only)||3.152 Mbps (48 channels) - T1c||-|
|DS2||6.312 Mbps (96 channels) - T2||8.448 Mbps (128 channels) - E2|
|DS3||44.736 Mbps (672 channels) - T3||34.368 Mbps (512 channels) - E3|
|DS4||274.176 Mbps (4032 channels) - T4||139.264 Mbps (2048 channels) - E4|
|DS5||400.352 Mbps (5760 channels) - T5||565.148 Mbps (8192 channels) - E5|
The Plesiochronous and Synchronous Digital Hierarchies
The technologies used for the bulk transfer of digital data over telephone system core networks are Plesiochronous Digital Hierarchy (PDH) and Synchronous Digital Hierarchy (SDH) or Synchronous Optical Network (SONET). The term plesiochronous is derived from the Greek words plesio, meaning near, and chronos, meaning time. This refers to the fact that different parts of a PDH network are almost synchronised, but not perfectly. Although PDH data streams are nominally transmitted at the same bit-rate, some variation in speed is allowed. One consequence of this variation in bit rate is that, in order to access a single channel within the data stream, it must be de-multiplexed completely back to the constituent channels. PDH is now being replaced by SDH in most European telecommunications networks, and by SONET (from which SDH is derived) in the United States. SDH allows individual channels within the data stream to be extracted or inserted at a network node without the need to completely de-multiplex the carrier. SDH also provides management features such as remote reconfiguration and monitoring. Although SDH and SONET are not directly compatible, they have been harmonised to facilitate inter-working between the two. The following table shows some of the common carrier rates for the two technologies.
|Optical Level||Electrical Level||Line Rate (Mbps)||Payload Rate (Mbps)||Overhead Rate (Mbps)||SDH Equivalent|
Other rates (OC-9, OC-18, OC-24, OC-36, OC-96) are referenced in some of the standards. OC stands for Optical Carrier and defines the optical signal, STS stands for Synchronous Transport Signal and defines the equivalent electrical signal, and STM stands for Synchronous Transmission Module.
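As a sanity check on the rates quoted above, the arithmetic can be done in a few lines of Python. The framing-overhead figures used here (8 kbps of framing for T1; two of the 32 E1 timeslots reserved for alignment and signalling) are standard values added for illustration, not taken from the text.

```python
# Back-of-the-envelope check of the PCM channel rate and the T1/E1 carrier rates.
sample_rate = 8000                                     # PCM samples per second
bits_per_sample = 8
channel_rate = sample_rate * bits_per_sample           # 64,000 bps = 64 kbps per channel

t1 = 24 * channel_rate + 8000                          # 24 channels plus 8 kbps framing
e1 = 32 * channel_rate                                 # 32 timeslots (30 voice + 2 overhead)
print(channel_rate, t1, e1)                            # 64000 1544000 2048000
# i.e. 64 kbps per channel, 1.544 Mbps for T1 and 2.048 Mbps for E1, as in the table.
```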
http://www.technologyuk.net/telecommunications/communication-technologies/public-switched-telephone-network.shtml
18
21
MN.6.1. Number & Operation
6.1.1. Read, write, represent and compare positive rational numbers expressed as fractions, decimals, percents and ratios; write positive integers as products of factors; use these representations in real-world and mathematical situations.
- Locate positive rational numbers on a number line and plot pairs of positive rational numbers on a coordinate grid.
- Compare positive rational numbers represented in various forms. Use the symbols <, = and >.
- Understand that percent represents parts out of 100 and ratios to 100.
- Determine equivalences among fractions, decimals and percents; select among these representations to solve problems.
- Factor whole numbers; express a whole number as a product of prime factors with exponents.
- Determine greatest common factors and least common multiples. Use common factors and common multiples to calculate with fractions and find equivalent fractions.
- Convert between equivalent representations of positive rational numbers.
6.1.2. Understand the concept of ratio and its relationship to fractions and to the multiplication and division of whole numbers. Use ratios to solve real-world and mathematical problems.
- Identify and use ratios to compare quantities; understand that comparing quantities using ratios is not the same as comparing quantities using subtraction.
- Apply the relationship between ratios, equivalent fractions and percents to solve problems in various contexts, including those involving mixtures and concentrations.
- Determine the rate for ratios of quantities with different units.
- Use reasoning about multiplication and division to solve ratio and rate problems.
6.1.3. Multiply and divide decimals, fractions and mixed numbers; solve real-world and mathematical problems using arithmetic with positive rational numbers.
- Multiply and divide decimals and fractions, using efficient and generalizable procedures, including standard algorithms.
- Use the meanings of fractions, multiplication, division and the inverse relationship between multiplication and division to make sense of procedures for multiplying and dividing fractions.
- Calculate the percent of a number and determine what percent one number is of another number to solve problems in various contexts.
- Solve real-world and mathematical problems requiring arithmetic with decimals, fractions and mixed numbers.
- Estimate solutions to problems with whole numbers, fractions and decimals and use the estimates to assess the reasonableness of results in the context of the problem.
6.2.1. Recognize and represent relationships between varying quantities; translate from one representation to another; use patterns, tables, graphs and rules to solve real-world and mathematical problems.
- Understand that a variable can be used to represent a quantity that can change, often in relationship to another changing quantity. Use variables in various contexts.
6.2.2. Use properties of arithmetic to generate equivalent numerical expressions and evaluate expressions involving positive rational numbers.
- Apply the associative, commutative and distributive properties and order of operations to generate equivalent expressions and to solve problems involving positive rational numbers.
6.2.3. Understand and interpret equations and inequalities involving variables and positive rational numbers. Use equations and inequalities to represent real-world and mathematical problems; use the idea of maintaining equality to solve equations. Interpret solutions in the original context.
- Represent real-world or mathematical situations using equations and inequalities involving variables and positive rational numbers.
MN.6.3. Geometry & Measurement
6.3.1. Calculate perimeter, area, surface area and volume of two- and three-dimensional figures to solve real-world and mathematical problems.
- Calculate the surface area and volume of prisms and use appropriate units, such as cm² and cm³. Justify the formulas used. Justification may involve decomposition, nets or other models.
- Calculate the area of quadrilaterals. Quadrilaterals include squares, rectangles, rhombuses, parallelograms, trapezoids and kites. When formulas are used, be able to explain why they are valid.
6.3.2. Understand and use relationships between angles in geometric figures.
- Solve problems using the relationships between the angles formed by intersecting lines.
6.3.3. Choose appropriate units of measurement and use ratios to convert within measurement systems to solve real-world and mathematical problems.
- Solve problems in various contexts involving conversion of weights, capacities, geometric measurements and times within measurement systems using appropriate units.
MN.6.4. Data Analysis & Probability
6.4.1. Use probabilities to solve real-world and mathematical problems; represent probabilities using fractions, decimals and percents.
- Determine the probability of an event using the ratio between the size of the event and the size of the sample space; represent probabilities as percents, fractions and decimals between 0 and 1 inclusive. Understand that probabilities measure likelihood.
- Perform experiments for situations in which the probabilities are known, compare the resulting relative frequencies with the known probabilities; know that there may be differences.
- Calculate experimental probabilities from experiments; represent them as percents, fractions and decimals between 0 and 1 inclusive. Use experimental probabilities to make predictions when actual probabilities are unknown.
https://newpathworksheets.com/math/grade-6/minnesota-standards
18
93
Astronomical spectroscopy is the study of astronomy using the techniques of spectroscopy to measure the spectrum of electromagnetic radiation, including visible light and radio, which radiates from stars and other celestial objects. A stellar spectrum can reveal many properties of stars, such as their chemical composition, temperature, density, mass, distance, luminosity, and relative motion using Doppler shift measurements. Spectroscopy is also used to study the physical properties of many other types of celestial objects such as planets, nebulae, galaxies, and active galactic nuclei. Astronomical spectroscopy is used to measure three major bands of radiation: visible spectrum, radio, and X-ray. While all spectroscopy looks at specific areas of the spectrum, different methods are required to acquire the signal depending on the frequency. Ozone (O3) and molecular oxygen (O2) absorb light with wavelengths under 300 nm, meaning that X-ray and ultraviolet spectroscopy require the use of a satellite telescope or rocket mounted detectors.:27 Radio signals have much longer wavelengths than optical signals, and require the use of antennas or radio dishes. Infrared light is absorbed by atmospheric water and carbon dioxide, so while the equipment is similar to that used in optical spectroscopy, satellites are required to record much of the infrared spectrum. Physicists have been looking at the solar spectrum since Isaac Newton first used a simple prism to observe the refractive properties of light. In the early 1800s Joseph von Fraunhofer used his skills as a glass maker to create very pure prisms, which allowed him to observe 574 dark lines in a seemingly continuous spectrum. Soon after this, he combined telescope and prism to observe the spectrum of Venus, the Moon, Mars, and various stars such as Betelgeuse; his company continued to manufacture and sell high-quality refracting telescopes based on his original designs until its closure in 1884.:28–29 The resolution of a prism is limited by its size; a larger prism will provide a more detailed spectrum, but the increase in mass makes it unsuitable for highly detailed work. This issue was resolved in the early 1900s with the development of high-quality reflection gratings by J.S. Plaskett at the Dominion Observatory in Ottawa, Canada.:11 Light striking a mirror will reflect at the same angle, however a small portion of the light will be refracted at a different angle; this is dependent upon the indices of refraction of the materials and the wavelength of the light. By creating a "blazed" grating which utilizes a large number of parallel mirrors, the small portion of light can be focused and visualized. These new spectroscopes were more detailed than a prism, required less light, and could be focused on a specific region of the spectrum by tilting the grating. The limitation to a blazed grating is the width of the mirrors, which can only be ground a finite amount before focus is lost; the maximum is around 1000 lines/mm. In order to overcome this limitation holographic gratings were developed. Volume phase holographic gratings use a thin film of dichromated gelatin on a glass surface, which is subsequently exposed to a wave pattern created by an interferometer. This wave pattern sets up a reflection pattern similar to the blazed gratings but utilizing Bragg diffraction, a process where the angle of reflection is dependent on the arrangement of the atoms in the gelatin. 
The holographic gratings can have up to 6000 lines/mm and can be up to twice as efficient in collecting light as blazed gratings. Because they are sealed between two sheets of glass, the holographic gratings are very versatile, potentially lasting decades before needing replacement. Light dispersed by the grating or prism in a spectrograph can be recorded by a detector. Historically, photographic plates were widely used to record spectra until electronic detectors were developed, and today optical spectrographs most often employ charge-coupled devices (CCDs). The wavelength scale of a spectrum can be calibrated by observing the spectrum of emission lines of known wavelength from a gas-discharge lamp. The flux scale of a spectrum can be calibrated as a function of wavelength by comparison with an observation of a standard star with corrections for atmospheric absorption of light; this is known as spectrophotometry. Radio astronomy was founded with the work of Karl Jansky in the early 1930s, while working for Bell Labs. He built a radio antenna to look at potential sources of interference for transatlantic radio transmissions. One of the sources of noise discovered came not from Earth, but from the center of the Milky Way, in the constellation Sagittarius. In 1942, JS Hey captured the sun's radio frequency using military radar receivers.:26 Radio spectroscopy started with the discovery of the 21-centimeter H I line in 1951. Radio interferometry was pioneered in 1946, when Joseph Lade Pawsey, Ruby Payne-Scott and Lindsay McCready used a single antenna atop a sea cliff to observe 200 MHz solar radiation. Two incident beams, one directly from the sun and the other reflected from the sea surface, generated the necessary interference. The first multi-receiver interferometer was built in the same year by Martin Ryle and Vonberg. In 1960, Ryle and Antony Hewish published the technique of aperture synthesis to analyze interferometer data. The aperture synthesis process, which involves autocorrelating and discrete Fourier transforming the incoming signal, recovers both the spatial and frequency variation in flux. The result is a 3D image whose third axis is frequency. For this work, Ryle and Hewish were jointly awarded the 1974 Nobel Prize in Physics. Stars and their propertiesEdit Newton used a prism to split white light into a spectrum of color, and Fraunhofer's high-quality prisms allowed scientists to see dark lines of an unknown origin. In the 1850s, Gustav Kirchhoff and Robert Bunsen described the phenomena behind these dark lines. Hot solid objects produce light with a continuous spectrum, hot gases emit light at specific wavelengths, and hot solid objects surrounded by cooler gases show a near-continuous spectrum with dark lines corresponding to the emission lines of the gases.:42–44 By comparing the absorption lines of the sun with emission spectra of known gases, the chemical composition of stars can be determined. Not all of the elements in the sun were immediately identified. Two examples are listed below. - In 1868 Norman Lockyer and Pierre Janssen independently observed a line next to the sodium doublet (D1 and D2) which Lockyer determined to be a new element. He named it Helium, but it wasn't until 1895 the element was found on Earth.:84–85 - In 1869 the astronomers Charles Augustus Young and William Harkness independently observed a novel green emission line in the Sun's corona during an eclipse. This "new" element was incorrectly named coronium, as it was only found in the corona. 
It was not until the 1930s that Walter Grotrian and Bengt Edlén discovered that the spectral line at 530.3 nm was due to highly ionized iron (Fe13+). Other unusual lines in the coronal spectrum are also caused by highly charged ions, such as nickel and calcium, the high ionization being due to the extreme temperature of the solar corona.:87,297
By analyzing the width of each spectral line in an emission spectrum, both the elements present in a star and their relative abundances can be determined. Using this information stars can be categorized into stellar populations; Population I stars are the youngest stars and have the highest metal content (our Sun is a Pop I star), while Population III stars are the oldest stars with a very low metal content.
Temperature and size
In 1860 Gustav Kirchhoff proposed the idea of a black body, a material that emits electromagnetic radiation at all wavelengths. In 1894 Wilhelm Wien derived an expression relating the temperature (T) of a black body to its peak emission wavelength (λmax):
λmax = b/T
where b is a constant of proportionality called Wien's displacement constant, equal to 2.8977729(17)×10−3 m⋅K. This equation is called Wien's law. By measuring the peak wavelength of a star, the surface temperature can be determined. For example, if the peak wavelength of a star is 502 nm the corresponding temperature will be 5778 kelvins.
The luminosity (L) of a star can be related to its temperature (T) and radius by
L = 4πR²σT⁴
where R is the radius of the star and σ is the Stefan–Boltzmann constant, with a value of 5.670367(13)×10−8 W⋅m−2⋅K−4. Thus, when both luminosity and temperature are known (via direct measurement and calculation) the radius of a star can be determined.
The spectra of galaxies look similar to stellar spectra, as they consist of the combined light of millions of stars.
Doppler shift studies of galaxy clusters by Fritz Zwicky in 1937 found that most galaxies were moving much faster than seemed to be possible from what was known about the mass of the cluster. Zwicky hypothesized that there must be a great deal of non-luminous matter in the galaxy clusters, which became known as dark matter. Since his discovery, astronomers have determined that a large portion of galaxies (and most of the universe) is made up of dark matter. In 2003, however, four galaxies (NGC 821, NGC 3379, NGC 4494, and NGC 4697) were found to have little to no dark matter influencing the motion of the stars contained within them; the reason behind the lack of dark matter is unknown.
In the 1950s, strong radio sources were found to be associated with very dim, very red objects. When the first spectrum of one of these objects was taken there were absorption lines at wavelengths where none were expected. It was soon realised that what was observed was a normal galactic spectrum, but highly red shifted. These were named quasi-stellar radio sources, or quasars, by Hong-Yee Chiu in 1964. Quasars are now thought to be galaxies formed in the early years of our universe, with their extreme energy output powered by super-massive black holes.
The properties of a galaxy can also be determined by analyzing the stars found within them. NGC 4550, a galaxy in the Virgo Cluster, has a large portion of its stars rotating in the opposite direction as the other portion. It is believed that the galaxy is the combination of two smaller galaxies that were rotating in opposite directions to each other. Bright stars in galaxies can also help determine the distance to a galaxy, which may be a more accurate method than parallax or standard candles.
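As a quick numerical illustration of the two relations given in the temperature and size discussion above (Wien's law and the luminosity relation), here is a short Python sketch; the solar luminosity figure is my own assumed input, not a value from the article.

```python
# Estimating a star's temperature from its peak wavelength, then its radius from L = 4*pi*R^2*sigma*T^4.
import math

b     = 2.8977729e-3      # Wien's displacement constant, m*K
sigma = 5.670367e-8       # Stefan-Boltzmann constant, W m^-2 K^-4

peak_wavelength = 502e-9                  # metres, the example quoted in the text
T = b / peak_wavelength                   # Wien's law: lambda_max = b / T
print(round(T))                           # ~5772 K, close to the 5778 K quoted above

L_sun = 3.828e26                          # assumed solar luminosity, watts
R = math.sqrt(L_sun / (4 * math.pi * sigma * T**4))
print(f"{R:.3e}")                         # ~7.0e8 m, roughly the radius of the Sun
```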
The interstellar medium is matter that occupies the space between star systems in a galaxy. 99% of this matter is gaseous - hydrogen, helium, and smaller quantities of other ionized elements such as oxygen. The other 1% is dust particles, thought to be mainly graphite, silicates, and ices. Clouds of the dust and gas are referred to as nebulae.
There are three main types of nebula: absorption, reflection, and emission nebulae. Absorption (or dark) nebulae are made of dust and gas in such quantities that they obscure the starlight behind them, making photometry difficult. Reflection nebulae, as their name suggests, reflect the light of nearby stars. Their spectra are the same as the stars surrounding them, though the light is bluer; shorter wavelengths scatter better than longer wavelengths. Emission nebulae emit light at specific wavelengths depending on their chemical composition.
Gaseous emission nebulae
In the early years of astronomical spectroscopy, scientists were puzzled by the spectrum of gaseous nebulae. In 1864 William Huggins noticed that many nebulae showed only emission lines rather than a full spectrum like stars. From the work of Kirchhoff, he concluded that nebulae must contain "enormous masses of luminous gas or vapour." However, there were several emission lines that could not be linked to any terrestrial element, brightest among them lines at 495.9 nm and 500.7 nm. These lines were attributed to a new element, nebulium, until Ira Bowen determined in 1927 that the emission lines were from highly ionised oxygen (O2+). These emission lines could not be replicated in a laboratory because they are forbidden lines; the low density of a nebula (one atom per cubic centimetre) allows for metastable ions to decay via forbidden line emission rather than collisions with other atoms.
Not all emission nebulae are found around or near stars where solar heating causes ionisation. The majority of gaseous emission nebulae are formed of neutral hydrogen. In the ground state neutral hydrogen has two possible spin states: the electron has either the same spin or the opposite spin of the proton. When the atom transitions between these two states, it releases an emission or absorption line of 21 cm. This line is within the radio range and allows for very precise measurements:
- Velocity of the cloud can be measured via Doppler shift
- The intensity of the 21 cm line gives the density and number of atoms in the cloud
- The temperature of the cloud can be calculated
Dust and molecules in the interstellar medium not only obscure photometry, but also cause absorption lines in spectroscopy. Their spectral features are generated by transitions of component electrons between different energy levels, or by rotational or vibrational spectra. Detection usually occurs in radio, microwave, or infrared portions of the spectrum. The chemical reactions that form these molecules can happen in cold, diffuse clouds or in the hot ejecta around a white dwarf star from a nova or supernova. Small carbon-bearing molecules such as acetylene (C2H2) generally group together to form graphites or other sooty material, but other organic molecules such as acetone ((CH3)2CO) and buckminsterfullerenes (C60 and C70) have been discovered.
Motion in the universe
Stars and interstellar gas are bound by gravity to form galaxies, and groups of galaxies can be bound by gravity in galaxy clusters.
With the exception of stars in the Milky Way and the galaxies in the Local Group, almost all galaxies are moving away from us due to the expansion of the universe.
Doppler effect and redshift
The motion of stellar objects can be determined by looking at their spectrum. Because of the Doppler effect, objects moving towards us are blueshifted, and objects moving away are redshifted. The wavelength of redshifted light is longer, appearing redder than the source. Conversely, the wavelength of blueshifted light is shorter, appearing bluer than the source light. For velocities small compared with the speed of light,
(λ − λ0)/λ0 = v/c
where λ0 is the emitted wavelength, v is the velocity of the object, and λ is the observed wavelength. Note that v < 0 corresponds to λ < λ0, a blueshifted wavelength. A redshifted absorption or emission line will appear more towards the red end of the spectrum than a stationary line. In 1913 Vesto Slipher determined the Andromeda Galaxy was blueshifted, meaning it was moving towards the Milky Way. He recorded the spectra of 20 other galaxies — all but 4 of which were redshifted — and was able to calculate their velocities relative to the Earth. Edwin Hubble would later use this information, as well as his own observations, to define Hubble's law: the further a galaxy is from the Earth, the faster it is moving away from us. Hubble's law can be generalised to
v = H0 d
where v is the velocity (or Hubble flow), H0 is the Hubble constant, and d is the distance from Earth.
Redshift (z) can be expressed by the following equations:
Based on wavelength: z = (λobserved − λemitted)/λemitted
Based on frequency: z = (femitted − fobserved)/fobserved
In these equations, frequency is denoted by f and wavelength by λ. The larger the value of z, the more redshifted the light and the farther away the object is from the Earth. As of January 2013, the largest galaxy redshift of z~12 was found using the Hubble Ultra-Deep Field, corresponding to an age of over 13 billion years (the universe is approximately 13.82 billion years old).
The Doppler effect and Hubble's law can be combined to form the equation z ≈ v/c = H0 d/c, where c is the speed of light.
Objects that are gravitationally bound will rotate around a common center of mass. For stellar bodies, this motion is known as peculiar velocity, and can alter the Hubble flow. Thus, an extra term for the peculiar motion needs to be added to Hubble's law:
v = H0 d + vpec
This motion can cause confusion when looking at a solar or galactic spectrum, because the expected redshift based on the simple Hubble law will be obscured by the peculiar motion. For example, the shape and size of the Virgo Cluster has been a matter of great scientific scrutiny due to the very large peculiar velocities of the galaxies in the cluster.
Just as planets can be gravitationally bound to stars, pairs of stars can orbit each other. Some binary stars are visual binaries, meaning they can be observed orbiting each other through a telescope. Some binary stars, however, are too close together to be resolved. These two stars, when viewed through a spectrometer, will show a composite spectrum: the spectrum of each star will be added together. This composite spectrum becomes easier to detect when the stars are of similar luminosity and of different spectral class. Spectroscopic binaries can be also detected due to their radial velocity; as they orbit around each other one star may be moving towards the Earth whilst the other moves away, causing a Doppler shift in the composite spectrum. The orbital plane of the system determines the magnitude of the observed shift: if the observer is looking perpendicular to the orbital plane there will be no observed radial velocity.
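To make that last point concrete, here is a small numerical sketch (the orbital speed and spectral line are my own assumed inputs, not figures from the article) of how the measured radial velocity, and hence the Doppler shift of a spectral line, falls off as the orbit is tilted away from edge-on; the carousel picture in the next sentence makes the same geometric point without any numbers.

```python
# Radial velocity of one star in a spectroscopic binary for different orbital inclinations
# (i = 90 degrees is edge-on; i = 0 means we look straight down onto the orbital plane).
import math

orbital_speed = 30_000            # m/s, assumed orbital speed of the star
rest_wavelength = 656.28e-9       # hydrogen-alpha line, metres (assumed reference line)
c = 2.99792458e8                  # speed of light, m/s

for inclination in (90, 60, 30, 0):
    v_radial = orbital_speed * math.sin(math.radians(inclination))
    shift = rest_wavelength * v_radial / c        # small-velocity Doppler approximation
    print(inclination, round(v_radial), f"{shift * 1e9:.4f} nm")
# At i = 0 the radial velocity (and hence the wavelength shift) vanishes,
# which is why a face-on spectroscopic binary shows no line motion in its composite spectrum.
```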
For example, if you look at a carousel from the side, you will see the animals moving toward and away from you, whereas if you look from directly above they will only be moving in the horizontal plane. Planets, asteroids, and cometsEdit Planets, asteroids, and comets all reflect light from their parent stars and emit their own light. For cooler objects, including solar-system planets and asteroids, most of the emission is at infrared wavelengths we cannot see, but that are routinely measured with spectrometers. For objects surrounded by gas, such as comets and planets with atmospheres, further emission and absorption happens at specific wavelengths in the gas, imprinting the spectrum of the gas on that of the solid object. In the case of worlds with thick atmospheres or complete cloud cover (such as the gas giants, Venus, and Saturn's satellite Titan (moon)), the spectrum is mostly or completely due to the atmosphere alone. The reflected light of a planet contains absorption bands due to minerals in the rocks present for rocky bodies, or due to the elements and molecules present in the atmosphere. To date over 3,500 exoplanets have been discovered. These include so-called Hot Jupiters, as well as Earth-like planets. Using spectroscopy, compounds such as alkali metals, water vapor, carbon monoxide, carbon dioxide, and methane have all been discovered. Asteroids can be classified into three major types according to their spectra. The original categories were created by Clark R. Chapman, David Morrison, and Ben Zellner in 1975, and further expanded by David J. Tholen in 1984. In what is now known as the Tholen classification, the C-types are made of carbonaceous material, S-types consist mainly of silicates, and X-types are 'metallic'. There are other classifications for unusual asteroids. C- and S-type asteroids are the most common asteroids. In 2002 the Tholen classification was further "evolved" into the SMASS classification, expanding the number of categories from 14 to 26 to account for more precise spectroscopic analysis of the asteroids. The spectra of comets consist of a reflected solar spectrum from the dusty clouds surrounding the comet, as well as emission lines from gaseous atoms and molecules excited to fluorescence by sunlight and/or chemical reactions. For example, the chemical composition of Comet ISON was determined by spectroscopy due to the prominent emission lines of cyanogen (CN), as well as two- and three-carbon atoms (C2 and C3). Nearby comets can even be seen in X-ray as solar wind ions flying to the coma are neutralized. The cometary X-ray spectra therefore reflect the state of the solar wind rather than that of the comet. - Foukal, Peter V. (2004). Solar Astrophysics. Weinheim: Wiley VCH. p. 69. ISBN 3-527-40374-4. - "Cool Cosmos - Infrared Astronomy". California Institute of Technology. Retrieved 23 October 2013. - Newton, Isaac (1705). Oticks: Or, A Treatise of the Reflections, Refractions, Inflections and Colours of Light. London: Royal Society. pp. 13–19. - Fraunhofer, Joseph (1817). "Bestimmung des Brechungs- und des Farben-Zerstreuungs - Vermögens verschiedener Glasarten, in Bezug auf die Vervollkommnung achromatischer Fernröhre". Annalen der Physik. 56 (7): 282–287. Bibcode:1817AnP....56..264F. doi:10.1002/andp.18170560706. - Hearnshaw, J.B. (1986). The analysis of starlight. Cambridge: Cambridge University Press. ISBN 0-521-39916-5. - Kitchin, C.R. (1995). Optical Astronomical Spectroscopy. Bristol: Institute of Physics Publishing. pp. 127, 143. 
https://en.m.wikipedia.org/wiki/Astronomical_spectroscopy
18
18
To introduce students to how to Complete the Square of a Quadratic Function, I begin this lesson with an activity involving Algebra Tiles. I post the following four problems along the top of the board: 1. x^2 + 2x + _________ 2. x^2 + 4x + _________ 3. x^2 + 8x + _________ 4. x^2+12x + _________ Each pair of students will need a set of Algebra Tiles with one large square, 12 rectangles, and 36 small squares. I tell the students that x squared is represented by the large square, each rectangle is an x because it is 1 by x, and the small squares are ones because they are one by one. I instruct students to set up the first problem with one large square for x squared and two rectangles for two x, and to find the number of small squares it takes to complete a square that is equal on all sides. Then students proceed working through problems two through four with their table partner. I have students sketch their answers on their own paper before moving to another problem. Students should find the value of c above to be 1,4, 16, and 36 for problems one through four. I have students draw sketches on the board to share with students on how they completed the square. I have more than one student share if there are different responses. In number three, two different students shared the diagrams pictured below. I discuss with students that both diagrams are correct, but that Diagram Two shows a more visual representation of the Algebraic Method to Complete the Square that I will show next, in the Guided Notes. After introducing students to Completing the Square using Algebra Tiles, I then show students two uses of Completing the Square in the Guided Notes. In the Guided Notes, I demonstrate for students how to Solve a Quadratic Equation by Completing the Square, and how to use Completing the Square to change from Standard Form to Vertex Form. I model some of the examples in the Guided Notes in the video below. After working with students on the Guided Notes to Complete the Square, I assign an Independent Practice. In this Independent Practice, I want students to work the five given problems in two different ways on a Comparison Chart that I copy for them. I copy the chart front and back to provide enough space to work the five problems from the Independent Practice. On the left side of the Comparison Chart, students are to solve the Quadratic Function. On the right side of the chart, students are to change the same problem from Standard Form to Vertex Form. I provide students about 15 minutes to work on the five problems, as I walk around to monitor their progress. By using the Comparison Chart, I want students to recognize the difference of Completing the Square on one side of the equation to Change to Vertex Form, and using both sides of the equation to solve. After about 15 minutes, I have students start checking their work on the Exit Slip. I have students work the Exit Slip for this lesson on their own paper with their table partner. I have students check their work on the Independent Practice by using a graphing calculator. If students have not completed the Independent Practice, I have them check only the problems that are complete. Students are to enter the original problem from the Independent Practice and Compare it to the Equation that the student put into Vertex Form. These should be equivalent equations if the student did not make a mistake, and they should have an identical graph. 
The student may also check that the zeros they solved for in the left column are correct by comparing them with where the graph crosses the x-axis on the graphing calculator. If students did not make any mistakes, their solutions should match the graph as well. Students may also compare their work to their table partner's work for any corrections. Students are to make as many corrections as possible, and try to identify the reasons for the mistakes. Again, students may only be able to check a few of the problems before they leave class. The Intervention of checking for mistakes using the calculator should help students be able to finish the assignment successfully. I assign the problems that are not complete as homework to be handed in the following day.
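For reference, here is one sample problem worked both ways; it is just an illustration, not one of the five problems on the Independent Practice.

Solving by Completing the Square:
x^2 + 6x + 2 = 0
x^2 + 6x = -2
x^2 + 6x + 9 = -2 + 9        (add (6/2)^2 = 9 to both sides)
(x + 3)^2 = 7
x + 3 = ±√7, so x = -3 ± √7

Changing from Standard Form to Vertex Form:
y = x^2 + 6x + 2
y = (x^2 + 6x + 9) - 9 + 2
y = (x + 3)^2 - 7, so the vertex is (-3, -7)

Both columns describe the same parabola: the zeros x = -3 ± √7 are exactly where the graph of y = (x + 3)^2 - 7 crosses the x-axis, which is what students should see on the graphing calculator.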
https://betterlesson.com/lesson/588131/completing-the-square-of-a-quadratic-function?from=consumer_breadcrumb_dropdown_lesson
18
10
I want to know how a microprocessor converts binary digits into their decimal equivalent. The processor only has the ability to manipulate 0's and 1's, so how are these numbers converted back into their decimal equivalent? Are they converted by another circuit? If so, how, and what is it called? Or are they converted by software?

What you need to see here is that the 'decimal equivalent' is actually a string of characters, for which there may be (in the case of a 64-bit two's-complement integer representation) 19 digits and a sign to represent. Depending on the character encoding, this may mean as much as 80 bytes (in the case of UTF-32), though the most common encodings (ASCII, Latin-1, and UTF-8) will use 1 byte per character for encoding the Arabic numerals. Few if any CPU instruction sets have any direct support for any fixed character set; and while some (including the ubiquitous x86 and x86-64) have some very basic string manipulation instructions, these generally don't have anything to do with the character encoding - they are mostly limited to things like copying the string from one place in memory to another. Now, your standard PC has some firmware support for a small number of character encodings, primarily for use in displaying text on a text-mode screen, but that too is software (just software that is stored in the ROM chips). The conversions done by most language libraries are entirely in software, as they have to be able to handle different encodings. How the conversion is done varies with the encoding.

Yeah, as Schol-R-LEA said, it's all done in software, and it's mainly just a matter of characters. The reality is, a computer never has to convert numbers into a decimal representation unless it is about to display it on the screen for a human being to read, and human beings read characters. So, this is really just about converting a number into a sequence of characters, which is something for software to do. There would be no point in having dedicated modules on the CPU for doing this kind of conversion because it would just waste valuable real-estate (room on the CPU). Binary-to-decimal conversion is never going to be performance-critical to the point of needing special circuitry, simply because it always happens in order to face a human being, and human beings are infinitely slower than the code that generates those decimal characters. At some point, as a programmer, you end up forgetting that decimal numbers even exist... because the only thing that matters is binary numbers (and base-2 floating-point numbers), and sometimes their hexadecimal equivalent, which is much more natural to use in computing.
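A rough Python sketch of the digit-by-digit conversion the answers above describe — repeated division by 10, with each remainder mapped to an ASCII digit character. This is illustrative only, not the actual implementation of any particular runtime library:

```python
def to_decimal_string(n: int) -> str:
    """Build the decimal character string for an integer the way a library
    routine might: divide by 10 repeatedly and map remainders to ASCII digits."""
    if n == 0:
        return "0"
    negative = n < 0
    n = abs(n)
    chars = []
    while n > 0:
        n, remainder = divmod(n, 10)             # peel off the lowest decimal digit
        chars.append(chr(ord('0') + remainder))  # 0..9 -> characters '0'..'9'
    if negative:
        chars.append('-')
    return ''.join(reversed(chars))

print(to_decimal_string(-1011))   # prints the five characters '-1011'
```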
https://www.daniweb.com/programming/computer-science/threads/474747/how-microprocessor-converts-binary-into-decimal-equivalent
18
25
- Walden University ScholarWorks, Walden Dissertations and Doctoral Studies, 2016: "Critical Thinking to Justify an Answer in Mathematics Classrooms" by Angelique E. Brown.
- Critical thinking can be as much a part of a math class as learning concepts, computations, formulas, and theorems, through activities that stimulate it.
- Use these tips to encourage your child's critical thinking; you can try them at home to help your child become a critical thinker.
- This guide focuses on two important 21st century skills, critical thinking and problem solving, and how to teach them to students.
- Worksheet library: critical thinking, grades 6-8 — worksheets you can use with your students to build a wide variety of critical thinking skills, including fun math challenges.
- The increase of critical thinking skills through mathematical investigation.
- Rondamb talks about the importance of critical thinking skills in our students in this article from Education Articles.
- Critical thinking and mathematical problem solving, video series: Current Educational Issues video series, publisher: Foundation for Critical Thinking.
- Why teach critical thinking? Oliver & Utermohlen (1995) see students as too often being passive receptors of information delivered through technology.
- Developing critical thinking skill in mathematics education (Einav Aizikovitsh-Udi), in light of the importance of developing critical thinking.
- A website providing a rigorous introduction to critical thinking, with a set of articles and books about critical thinking and the assessment of language and math.
- Mathematics: what, why, when and how — invite students to make reasoned decisions about every aspect of mathematics; what is critical thinking in mathematics?
- The Critical Thinking Co. publishes PreK-12+ books and software to develop critical thinking in core subjects, including math activities for middle school.
- Critical thinking is the identification and evaluation of evidence to guide decision making, applied to critical reading, writing, and math.
- Critical thinking is the objective analysis of facts to form a judgment; it takes many forms, including philosophical, mathematical, chemical, and biological thinking.
- Listed below are articles on critical thinking; short summaries and citations are provided when available (Allen, Robert D., "Intellectual Development and ...").
- Journal of Student Engagement: Education Matters, Volume 6, Issue 1, Article 4 (2016): "Critical and Creative Thinkers in Mathematics Classrooms" by Sarah Sanders.
http://vetermpaperjcef.visitorlando.us/critical-thinking-in-math-articles.html
18
20
Scientific notation is the expression of a number based on the largest exponent of 10 for its value, where the form is a decimal number a × 10^n.

Welcome to scientific notation, our website about expressing decimal numbers in standard index form. In contrast to decimal notation, scientific notation is a very convenient way to express both large and small numbers, and it is frequently used in engineering, math and science.

Background for teachers: students were introduced to the concept of exponents in fifth grade. Scientific notation is writing a number as the product of a number (greater than or equal to 1 and less than 10) and a power of ten.

In this lesson, you will learn how to write and compute in scientific notation.

Scientific notation conversion calculator: decimal notation, E notation, engineering notation.

Practice expressing numbers in scientific notation. If you're behind a web filter, please make sure that the domains *.kastatic.org and *.kasandbox.org are unblocked.

When converting a number into scientific notation, we must remember a few rules. First, the decimal must be placed between the first two non-zero numbers. The number prior to the multiplication symbol is known as the significand or mantissa; the number of digits in the significand depends on the application.

A method of writing or displaying numbers in terms of a decimal number between 1 and 10 multiplied by a power of 10: the scientific notation of 10,492, for example, is 1.0492 × 10^4.

But before you begin, it is important that you review the basic underlying principle behind both scientific notation and the SI units of measurement.

Convert to scientific notation with our free step-by-step algebra solver.

Free online scientific notation calculator: solve advanced problems in physics, mathematics and engineering. Math expression renderer, plots, unit converter, equation solver, complex numbers, calculation history.

Scientific notation is the way that scientists easily handle very large numbers or very small numbers. For example, instead of writing 0.0000000056, we write 5.6 × 10^-9.

Scientific notation is a mathematical expression used to represent a decimal number between 1 and 10 multiplied by a power of ten, so you can write large numbers using fewer digits. An example of scientific notation is when you write 4 × 10^3 for 4,000.

Demonstrates how to convert between regular formatting and scientific notation.

It makes it easy to use big and small values. When the number is 10 or greater, the decimal point has to move to the left, and the power of 10 is positive; when the number is smaller than 1, the decimal point has to move to the right, so the power of 10 is negative.

I have a dataframe with a column of p-values and I want to make a selection on these p-values: pvalues_anova 9.693919e-01, 9.781728e-01, 9.918415e-01, 9.716883e-01, 1.667183e-02, 9.952762e-01.

Scientific notation (also referred to as scientific form or standard index form, or standard form in the UK) is a way of expressing numbers that are too big or too small to be conveniently written in decimal form.

There are times when really large numbers, like 3.5 trillion, or really small numbers, like 4.3 ten-millionths, are necessary. What do these numbers look like?
Learn to convert numbers into and out of scientific notation. Scientific notation is a way to express very big and very small numbers with exponents as a power of ten; it is also sometimes called standard form.

Definition of scientific notation in the Definitions.net dictionary: the meaning of scientific notation, what scientific notation means, and information and translations of scientific notation in the most comprehensive dictionary definitions resource on the web.

Scientific notation is a standard way of writing very large and very small numbers so that they're easier to both compare and use in computations. To write in scientific notation, follow the form n × 10^a, where n is a number between 1 and 10, but not 10 itself, and a is an integer (positive or negative).

What is scientific notation? Scientific notation is a method of writing really large or really small numbers in a more concise form that removes all of the extraneous zeroes.

Scientific notation includes a coefficient consisting of a single digit in front of the decimal, numbers 1-9 inclusive, multiplied by the base 10 raised to some power.

2.1 Introduction: below are some examples of numbers written "normally" and in scientific notation. As you can see, in general a number x written in scientific notation has the form n × 10^m, that is, n times 10 raised to a power m (called the "exponent").

Let's look at an example. In scientific notation, Earth's mass is 5.97 × 10^24 kg. The 5.97 part is called the coefficient and the 24 is called the exponent. Remember, an exponent simply describes how many times you are multiplying a number by itself: 2^3 is 2×2×2, which is 8, and 2^4 is 2×2×2×2, or 16.

To write a number in scientific notation: put the decimal after the first digit and drop the zeroes. In the number 123,000,000,000 the coefficient will be 1.23.
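A small Python sketch of the coefficient/exponent split described in the snippets above (the Earth-mass figure comes from the text; the other values are purely illustrative):

```python
import math

def to_scientific(x):
    """Split x into (coefficient, exponent) with 1 <= |coefficient| < 10.
    Illustrative only; in practice a format string does the same job."""
    if x == 0:
        return 0.0, 0
    exponent = math.floor(math.log10(abs(x)))
    coefficient = x / 10 ** exponent
    return coefficient, exponent

print(to_scientific(0.0000000056))   # approximately (5.6, -9)
print(to_scientific(5.97e24))        # approximately (5.97, 24) -- Earth's mass in kg
print(f"{123_000_000_000:.2e}")      # '1.23e+11' via built-in e-notation formatting
```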
http://bhhomeworkxcww.card-hikaku.info/scientific-notation.html
18
18
A weir or low-head dam is a barrier across the horizontal width of a river that alters the flow characteristics of water and usually results in a change in the height of the river level. There are many designs of weir, but commonly water flows freely over the top of the weir crest before cascading down to a lower level.

Etymology

There is no single definition as to what constitutes a weir, and one English dictionary simply defines a weir as a small dam, likely originating from Middle English were, Old English wer, derivative of the root of werian, meaning "to defend, dam".

Function

Weirs are commonly used to prevent flooding, measure water discharge, and help render rivers more navigable by boat. In some locations, the terms dam and weir are synonymous, but normally there is a clear distinction made between the structures. A dam is usually specifically designed to impound water behind a wall, whilst a weir is designed to alter the river flow characteristics. A common distinction between dams and weirs is that water flows over the top (crest) of a weir or underneath it for at least some of its length. Accordingly, the crest of an overflow spillway on a large dam may therefore be referred to as a weir. Weirs can vary in size both horizontally and vertically, with the smallest being only a few inches in height whilst the largest may be hundreds of metres long and many metres tall. Some common weir purposes are outlined below.

Weirs allow hydrologists and engineers a simple method of measuring the volumetric flow rate in small to medium-sized streams/rivers or in industrial discharge locations. Since the geometry of the top of the weir is known and all water flows over the weir, the depth of water behind the weir can be converted to a rate of flow. However, this can only be achieved in locations where all water flows over the top of the weir crest (as opposed to around the sides or through conduits/sluices) and at locations where the water that flows over the crest is carried away from the structure. If these conditions are not met, it can make flow measurement complicated, inaccurate or even impossible. The discharge calculation can be summarised as

Q = C L H^n

- Q is the volumetric flow rate of fluid (the discharge)
- C is the flow coefficient for the structure (on average a figure of 0.62)
- L is the width of the crest
- H is the height of head of water over the crest
- n varies with structure (e.g., 3/2 for horizontal weir, 5/2 for v-notch weir)

However, this calculation is a generic relationship and specific calculations are available for the many different types of weir. Flow measurement weirs must be well maintained if they are to remain accurate.

Control of invasive species

As weirs are a physical barrier, they can impede the longitudinal movement of fish and other animals up and down a river. This can have a negative effect on fish species that migrate as part of their breeding cycle (e.g., salmonids), but can also be useful as a method of preventing invasive species moving upstream. For example, weirs in the Great Lakes region have helped to prevent invasive sea lamprey from colonising further upstream.

Mill ponds are created by a weir that impounds water that then flows over the structure. The energy created by the change in height of the water can then be used to power waterwheels, which in turn power sawmills, grinding wheels and other equipment.
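Returning to the head-discharge relation Q = C L H^n quoted under flow measurement above, here is a minimal numerical sketch. The coefficient, crest width and head below are placeholder values, not a real design calculation:

```python
def weir_discharge(C, L, H, n=1.5):
    """Generic head-discharge relation Q = C * L * H**n from the text above.
    C and n are structure-specific and must come from calibration; the result
    is only meaningful when all quantities are in consistent units."""
    return C * L * H ** n

# Placeholder values: a 2.0 m wide horizontal-crested weir with 0.3 m of head.
Q = weir_discharge(C=1.8, L=2.0, H=0.3)
print(f"Q = {Q:.2f} (units depend on how C was calibrated)")
```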
Flood control and altering river conditions Weirs are commonly used to control the flow rates of rivers during periods of high discharge. Sluice gates (or in some cases the height of the weir crest) can be altered to increase or decrease the volume of water flowing downstream. Weirs for this purpose are commonly found upstream of towns and villages and can either be automated or manually operated. By slowing the rate at which water moves downstream even slightly a disproportionate effect can be had on the likelihood of flooding. On larger rivers, a weir can also alter the flow characteristics of the waterway to the point that vessels are able to navigate areas previously inaccessible due to extreme currents or eddies. Many larger weirs will have features built in that allow boats and river users to "shoot the weir" and navigate by passing up or down stream without having to exit the river. Weirs constructed for this purpose are especially common on the River Thames, and most are situated near each of the river's 45 locks. Because a weir impounds water behind it and alters the flow regime of the river, it can have an effect on the local ecology. Typically the reduced river velocity upstream can lead to increased siltation (deposition of fine particles of silt and clay on the river bottom) that reduces the water oxygen content and smothers invertebrate habitat and fish spawning sites. The oxygen content typically returns to normal once water has passed over the weir crest (although it can be hyper-oxygenated), although increased river velocity can scour the river bed causing erosion and habitat loss. Weirs can have a significant effect on fish migration. Any weir that exceeds either the maximum height a species can jump or creates flow conditions that cannot be bypassed (e.g., due to excessive water velocity) effectively limits the maximum point upstream that fish can migrate. In some cases this can mean that huge lengths of breeding habitat are lost and over time this can have a significant impact of fish populations. In many countries, it is now a requirement by law to build fish ladders into the design of a weir that ensures that fish can bypass the barrier and access upstream habitat. Unlike dams, weirs do not usually prevent downstream fish migration (as water flows over the top and allows fish to bypass the structure), although they can create flow conditions that injure juvenile fish. Recent studies suggest that navigation locks have also potential to provide increased access for a range of biota, including poor swimmers. Even though the water around weirs can often appear relatively calm, they can be extremely dangerous places to boat, swim, or wade, as the circulation patterns on the downstream side—typically called a hydraulic jump— can submerge a person indefinitely. This phenomenon is so well known to canoeists, kayakers, and others who spend time on rivers that they even have a rueful name for weirs: "drowning machines". If caught in this situation, the Ohio DNR recommends that a victim should "tuck the chin down, draw the knees up to the chest with arms wrapped around them. Hopefully, conditions will be such that the current will push the victim along the bed of the river until swept beyond the boil line and released by the hydraulic." The Pennsylvania State Police also recommends to "curl up, dive to the bottom, and swim or crawl downstream". 
As the hydraulic entrains air, the buoyancy of the water between the dam and boil line will be reduced by upwards of 30%, and if a victim is unable to float, escape at the base of the dam may be the only option for survival.

Common types

There are many different types of weirs, and they can vary from simple stone structures that are barely noticeable to elaborate and very large structures that require extensive management and maintenance.

A broad-crested weir is a flat-crested structure, where the water passes over a crest that covers much or all of the channel width. This is one of the most common types of weir found worldwide.

A compound weir is any weir that combines several different designs in one structure. They are commonly seen in locations where a river has multiple users who may need to bypass the structure. A common design would be one where a weir is broad-crested for much of its length, but has a section where the weir stops or is 'open' so that small boats and fish can traverse the structure.

A notch weir is any weir where the physical barrier is significantly higher than the water level except for a specific notch (often V-shaped) cut into the panel. At times of normal flow all the water must pass through the notch, simplifying flow volume calculations, and at times of flood the water level can rise and submerge the weir without any alterations made to the structure.

A polynomial weir is a weir that has a geometry defined by a polynomial equation of any order n. Most weirs in practice are low-order polynomial weirs. The standard rectangular weir is, for example, a polynomial weir of order zero. The triangular (V-notch) and trapezoidal weirs are of order one. High-order polynomial weirs provide a wider range of head-discharge relationships, and hence better control of the flow at outlets of lakes, ponds, and reservoirs.
https://wikivisually.com/wiki/Weir
18
23
Infant mortality is the death of young children under the age of 1. This death toll is measured by the infant mortality rate (IMR), which is the number of deaths of children under one year of age per 1,000 live births. The under-five mortality rate is also an important statistic, considering the infant mortality rate focuses only on children under one year of age.

Premature birth is the biggest contributor to the IMR. Other leading causes of infant mortality are birth asphyxia, pneumonia, congenital malformations, term birth complications such as abnormal presentation of the foetus, umbilical cord prolapse, or prolonged labor, neonatal infection, diarrhea, malaria, measles and malnutrition. One of the most common preventable causes of infant mortality is smoking during pregnancy. Many factors contribute to infant mortality, such as the mother's level of education, environmental conditions, and political and medical infrastructure. Improving sanitation, access to clean drinking water, immunization against infectious diseases, and other public health measures can help reduce high rates of infant mortality.

Child mortality is the death of a child before the child's fifth birthday, measured as the under-5 child mortality rate (U5MR). National statistics sometimes group these two mortality rates together. Globally, 9.2 million children die each year before their fifth birthday; more than 60% of these deaths are seen as being avoidable with low-cost measures such as continuous breast-feeding, vaccinations and improved nutrition.

Infant mortality rate was an indicator used to monitor progress towards the Fourth Goal of the Millennium Development Goals of the United Nations for the year 2015. It is now a target in the Sustainable Development Goals for Goal Number 3 ("Ensure healthy lives and promote well-being for all at all ages").

Throughout the world, the infant mortality rate (IMR) fluctuates drastically, and according to Biotechnology and Health Sciences, education and life expectancy in a country are the leading indicators of IMR. This study was conducted across 135 countries over the course of 11 years, with the continent of Africa having the highest infant mortality rate of any region studied, at 68 deaths per 1,000 live births.

Infant mortality rate (IMR) is the number of deaths per 1,000 live births of children under one year of age. The rate for a given region is the number of children dying under one year of age, divided by the number of live births during the year, multiplied by 1,000.

Causes of infant mortality directly lead to the death. Environmental and social barriers prevent access to basic medical resources and thus contribute to an increasing infant mortality rate; 99% of infant deaths occur in developing countries, and 86% of these deaths are due to infections, premature births, complications during delivery, and perinatal asphyxia and birth injuries. The greatest percentage reduction in infant mortality occurs in countries that already have low rates of infant mortality. Common causes are preventable with low-cost measures.

In the United States, a primary determinant of infant mortality risk is infant birth weight, with lower birth weights increasing the risk of infant mortality. The determinants of low birth weight include socio-economic, psychological, behavioral and environmental factors.
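The rate arithmetic defined above is simple enough to sketch in a couple of lines; the birth and death counts here are invented, purely illustrative numbers:

```python
def infant_mortality_rate(infant_deaths, live_births):
    """Deaths of children under one year of age per 1,000 live births."""
    return infant_deaths / live_births * 1000

# Invented example figures, not statistics for any real country or year:
print(infant_mortality_rate(infant_deaths=340, live_births=50_000))  # 6.8 per 1,000
```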
Causes of infant mortality that are related to medical conditions include: low birth weight, sudden infant death syndrome, malnutrition, congenital malformations, and infectious diseases, including neglected tropical diseases.

Congenital malformations are birth defects that babies are born with, such as cleft lip and palate, Down syndrome, and heart defects. They can occur when the mother consumes alcohol during pregnancy, but they can also be genetic or have an unknown cause. Congenital malformations have had a significant impact on infant mortality. Malnutrition and infectious diseases were the main causes of death in less developed countries: in the Caribbean and Latin America, congenital malformations accounted for only 5% of infant deaths, while malnutrition and infectious diseases accounted for 7% to 27% of infant deaths in the 1980s. In more developed countries such as the United States, there was a rise in the share of infant deaths due to congenital malformations. These birth defects mostly had to do with the heart and central nervous system. In the late 20th century there was a decrease in infant deaths from heart disease: from 1979 to 1997, there was a 39% decline in infant mortality due to heart problems.

Low birth weight makes up 60-80% of the infant mortality rate in developing countries. The New England Journal of Medicine stated that "The lowest mortality rates occur among infants weighing 3,000 to 3,500 g (6.6 to 7.7 lb). For infants born weighing 2,500 g (5.5 lb) or less, the mortality rate rapidly increases with decreasing weight, and most of the infants weighing 1,000 g (2.2 lb) or less die. As compared with normal-birth-weight infants, those with low weight at birth are almost 40 times more likely to die in the neonatal period; for infants with very low weight at birth the relative risk of neonatal death is almost 200 times greater." Infant mortality due to low birth weight usually stems from other medical complications such as preterm birth, poor maternal nutritional status, lack of prenatal care, maternal sickness during pregnancy, and an unhygienic home environment. Along with birth weight, period of gestation makes up the two most important predictors of an infant's chances of survival and their overall health.

According to the New England Journal of Medicine, "in the past two decades, the infant mortality rate (deaths under one year of age per thousand live births) in the United States has declined sharply." Low birth weights among African American mothers remain twice as high as those of white women. Low birth weight (LBW) may be the leading cause of infant deaths, and it is greatly preventable. Although it is preventable, the solutions may not be easy; effective programs to help prevent LBW combine health care, education, environment, mental modification and public policy, influencing a culture that supports a healthy lifestyle.

Preterm birth is the leading cause of newborn deaths worldwide. Even though America excels past many other countries in the care and saving of premature infants, the percentage of American women who deliver prematurely is comparable to that in developing countries. Reasons for this include teenage pregnancy, an increase in pregnant mothers over the age of thirty-five, an increase in the use of in-vitro fertilization (which increases the risk of multiple births), obesity and diabetes. Also, women who do not have access to health care are less likely to visit a doctor, therefore increasing their risk of delivering prematurely.
Sudden infant death syndrome (SIDS) is a syndrome in which an infant dies in its sleep with no identifiable cause. Even with a complete autopsy, no one has been able to determine what causes SIDS. The syndrome is more common in Western countries. Although researchers are not sure what causes it, they have discovered that it is healthier for babies to sleep on their backs instead of their stomachs. This discovery saved many families from the tragedy that SIDS causes. Researchers have also proposed a framework called the contemporary triple risk model, which describes SIDS as arising from the combination of three conditions: an underlying vulnerability (for example, the mother smoking while pregnant), a critical developmental period (the age of the infant), and an external stressor such as overheating, prone sleeping, co-sleeping, or head covering.

Malnutrition or undernutrition is defined as inadequate intake of nourishment, such as proteins and vitamins, which adversely affects the growth, energy and development of people all over the world. It is especially prevalent in women and infants under 5 who live in developing countries within the poorer regions of Africa, Asia, and Latin America. Children are most vulnerable as they are yet to fully develop a strong immune system, and are dependent upon parents to provide the necessary food and nutritional intake. It is estimated that about 3.5 million children die each year as a result of childhood or maternal malnutrition, with stunted growth, low body weight and low birth weight accounting for about 2.2 million associated deaths. Factors which contribute to malnutrition are socioeconomic, environmental, gender status, regional location, and cultural breastfeeding practices. It is difficult to assess the most pressing factor as they can intertwine and vary among regions.

Children suffering from malnutrition face adverse physical effects such as stunting, wasting, or being overweight. Such characteristics entail differences in weight-and-height ratios for age in comparison to adequate standards. In Africa the number of stunted children has risen, while Asia holds the most children under 5 suffering from wasting. The number of overweight children has increased among all regions of the globe. Inadequate nutrients adversely affect physical and cognitive development, increasing susceptibility to severe health problems. Micronutrient deficiencies such as iron deficiency have been linked to anemia, fatigue, and poor brain development in children. Similarly, the lack of vitamin A is the leading cause of blindness among malnourished children. Malnutrition in children decreases the ability of the immune system to fight infections, resulting in higher rates of death from diseases such as malaria, respiratory disease and diarrhea.

Babies born in low to middle income countries in sub-Saharan Africa and southern Asia are at the highest risk of neonatal death. Bacterial infections of the bloodstream, lungs, and the brain's covering (meningitis) are responsible for 25% of neonatal deaths. Newborns can acquire infections during birth from bacteria that are present in their mother's reproductive tract. The mother may not be aware of the infection, or she may have an untreated pelvic inflammatory disease or sexually transmitted disease. These bacteria can move up the vaginal canal into the amniotic sac surrounding the baby. Maternal blood-borne infection is another route of bacterial infection from mother to baby.
Neonatal infection is also more likely with the premature rupture of the membranes (PROM) of the amniotic sac. Seven out of ten childhood deaths are due to infectious diseases: acute respiratory infection, diarrhea, measles, and malaria. Acute respiratory infection such as pneumonia, bronchitis, and bronchiolitis account for 30% of childhood deaths; 95% of pneumonia cases occur in the developing world. Diarrhea is the second-largest cause of childhood mortality in the world, while malaria causes 11% of childhood deaths. Measles is the fifth-largest cause of childhood mortality. Folic acid for mothers is one way to combat iron deficiency. A few public health measures used to lower levels of iron deficiency anemia include iodize salt or drinking water, and include vitamin A and multivitamin supplements into a mother's diet. A deficiency of this vitamin causes certain types of anemia (low red blood cell count). Infant mortality rate can be a measure of a nation's health and social condition. It is a composite of a number of component rates which have their separate relationship with various social factors and can often be seen as an indicator to measure the level of socioeconomic disparity within a country. Organic water pollution is a better indicator of infant mortality than health expenditures per capita. Water contaminated with various pathogens houses a host of parasitic and microbial infections. Infectious disease and parasites are carried via water pollution from animal wastes. Areas of low socioeconomic status are more prone to inadequate plumbing infrastructure, and poorly maintained facilities. The burning of inefficient fuels doubles the rate of children under 5 years old with acute respiratory tract infections. Climate and geography often play a role in sanitation conditions. For example, the inaccessibility of clean water exacerbates poor sanitation conditions. People who live in areas where particulate matter (PM) air pollution is higher tend to have more health problems across the board. Short-term and long-term effects of ambient air pollution are associated with an increased mortality rate, including infant mortality. Air pollution is consistently associated with post neonatal mortality due to respiratory effects and sudden infant death syndrome. Specifically, air pollution is highly associated with SIDs in the United States during the post-neonatal stage. High infant mortality is exacerbated because newborns are a vulnerable subgroup that is affected by air pollution. Newborns who were born into these environments are no exception. Women who are exposed to greater air pollution on a daily basis who are pregnant should be closely watched by their doctors, as well as after the baby is born. Babies who live in areas with less air pollution have a greater chance of living until their first birthday. As expected, babies who live in environments with more air pollution are at greater risk for infant mortality. Areas that have higher air pollution also have a greater chance of having a higher population density, higher crime rates and lower income levels, all of which can lead to higher infant mortality rates. The key pollutant for infant mortality rates is carbon monoxide. Carbon monoxide is a colorless, odorless gas that does great harm especially to infants because of their immature respiratory system. Another major pollutant is second-hand smoke, which is a pollutant that can have detrimental effects on a fetus. 
According to the American Journal of Public Health, "in 2006, more than 42 000 Americans died of second hand smoke-attributable diseases, including more than 41 000 adults and nearly 900 infants ... fully 36% of the infants who died of low birth weight caused by exposure to maternal smoking in utero were Blacks, as were 28% of those dying of respiratory distress syndrome, 25% dying of other respiratory conditions, and 24% dying of sudden infant death syndrome." The American Journal of Epidemiology also stated that "Compared with nonsmoking women having their first birth, women who smoked less than one pack of cigarettes per day had a 25% greater risk of mortality, and those who smoked one or more packs per day had a 56% greater risk. Among women having their second or higher birth, smokers experienced 30% greater mortality than nonsmokers." Modern research in the United States on racial disparities in infant mortality suggests a link between the institutionalized racism that pervades the environment and high rates of African American infant mortality. In synthesis of this research, it has been observed that "African American infant mortality remains elevated due to the social arrangements that exist between groups and the lifelong experiences responding to the resultant power dynamics of these arrangements." It is important to note that infant mortality rates do not decline among African Americans even if their socio-economic status does improve. Parker Dominguez at the University of Southern California has made some headway in determining the reasoning behind this, claiming black women are more prone to psychological stress than other women of different races in the United States. Stress is a lead factor in inducing labor in pregnant women, and therefore high levels of stress during pregnancy could lead to premature births that have the potential to be fatal for the infant. Early childhood trauma includes physical, sexual, and psychological abuse of a child ages zero to five years-old. Trauma in early development has extreme impact over the course of a lifetime and is a significant contributor to infant mortality. Developing organs are fragile. When an infant is shaken, beaten, strangled, or raped the impact is exponentially more destructive than when the same abuse occurs in a fully developed body. Studies estimate that 1–2 per 100,000 U.S. children annually are fatally injured. Unfortunately, it is reasonable to assume that these statistics under represent actual mortality. Three-quarters (74.8 percent) of child fatalities in FFY 2015 involved children younger than 3 years, and children younger than 1 year accounted for 49.4 percent of all fatalities. In particular, correctly identifying deaths due to neglect is problematic and children with sudden unexpected death or those with what appear to be unintentional causes on the surface often have preventable risk factors which are substantially similar to those in families with maltreatment. There is a direct relationship between age of maltreatment/injury and risk for death. The younger an infant is, the more dangerous the maltreatment. Family configuration, child gender, social isolation, lack of support, maternal youth, marital status, poverty, parental ACES, and parenting practices are thought to contribute to increased risk. Social class is a major factor in infant mortality, both historically and today. Between 1912 and 1915, the Children's Bureau in the United States examined data across eight cities and nearly 23,000 live births. 
They discovered that lower incomes tend to correlate with higher infant mortality. In cases where the father had no income, the rate of infant mortality was 357% higher than that for the highest income earners ($1,250+). Differences between races were also apparent: African-American mothers experience infant mortality at a rate 44% higher than average; however, research indicates that socio-economic factors do not totally account for the racial disparities in infant mortality.

While infant mortality is normally negatively correlated with GDP, there may indeed be some opposing short-term effects from a recession. A recent study by The Economist showed that economic slowdowns reduce the amount of air pollution, which results in a lower infant mortality rate. In the late 1970s and early 1980s, the recession's impact on air quality is estimated to have saved around 1,300 US babies. It is only during deep recessions that infant mortality increases. According to Norbert Schady and Marc-François Smitz, recessions in which GDP per capita drops by 15% or more increase infant mortality.

Social class dictates which medical services are available to an individual. Disparities due to socioeconomic factors have been exacerbated by advances in medical technology. Developed countries, most notably the United States, have seen a divergence between those living in poverty, who cannot afford medically advanced resources and therefore face an increased chance of infant mortality, and others. In policy, there is a lag time between the realization of a problem's possible solution and the actual implementation of policy solutions.

Infant mortality rates correlate with war, political unrest, and government corruption. In most cases, war-affected areas will experience a significant increase in infant mortality rates. War is not only stressful for a mother and her foetus; it also has several other detrimental effects, and many other significant factors influence infant mortality rates in war-torn areas. Health care systems in developing countries in the midst of war often collapse. Attaining basic medical supplies and care becomes increasingly difficult. During the Yugoslav Wars in the 1990s Bosnia experienced a 60% decrease in child immunizations. Preventable diseases can quickly become epidemic given the medical conditions during war.

Many developing countries rely on foreign aid for basic nutrition. Transport of aid becomes significantly more difficult in times of war. In most situations the average weight of a population will drop substantially. Expecting mothers are affected even more by lack of access to food and water. During the Yugoslav Wars in Bosnia the number of premature babies born increased and the average birth weight decreased.

There have been several instances in recent years of systematic rape as a weapon of war. Women who become pregnant as a result of war rape face even more significant challenges in bearing a healthy child. Studies suggest that women who experience sexual violence before or during pregnancy are more likely to experience infant death in their children. Causes of infant mortality in abused women range from physical side effects of the initial trauma to psychological effects that lead to poor adjustment to society. Many women who became pregnant by rape in Bosnia were isolated from their hometowns, making life after childbirth exponentially more difficult.
Developing countries lack access to affordable, professional health care resources and to skilled personnel during deliveries. Countries with histories of extreme poverty also have a pattern of epidemics, endemic infectious diseases, and low levels of access to maternal and child healthcare.

The American Academy of Pediatrics recommends that infants receive multiple doses of vaccines such as the diphtheria-tetanus-acellular pertussis vaccine, Haemophilus influenzae type b (Hib) vaccine, Hepatitis B (HepB) vaccine, inactivated polio vaccine (IPV), and pneumococcal vaccine (PCV). Research conducted by the Institute of Medicine's Immunization Safety Review Committee concluded that there is no relationship between these vaccines and the risk of SIDS in infants. In other words, not only are these vaccines necessary to prevent serious diseases, but there is also no reason to believe that immunization affects an infant's risk of SIDS.

The political modernization perspective, the neo-classical economic theory that scarce goods are most effectively distributed through the market, says that the level of political democracy influences the rate of infant mortality. Developing nations with democratic governments tend to be more responsive to public opinion, social movements, and special interest groups on issues like infant mortality. In contrast, non-democratic governments are more interested in corporate issues and less so in health issues. Democratic status affects the dependency a nation has towards its economic state via exports, investments from multinational corporations, and international lending institutions. Levels of socioeconomic development and global integration are inversely related to a nation's infant mortality rate.

The dependency perspective is framed within a global capital system. A nation's internal situation is highly influenced by its position in the global economy, with adverse effects on the survival of children in developing countries. Countries can experience disproportionate effects from their trade and stratification within the global system. This aids the global division of labor, distorting the domestic economy of developing nations. The dependency of developing nations can lead to a reduced rate of economic growth, increased income inequality both inter- and intra-nationally, and adverse effects on the wellbeing of a nation's population. Collective cooperation between economies plays a role in development policies in the poorer, peripheral countries of the world. These economic factors present challenges to governments' public health policies. If a nation's ability to raise its own revenues is compromised, its government will lose funding for health service programs, including services that aim at decreasing infant mortality rates. Peripheral countries face higher levels of vulnerability to the possible negative effects of globalization and trade in relation to key countries in the global market. Even with a strong economy and economic growth (measured by a country's gross national product), the advances of medical technologies may not be felt by everyone, contributing to increasing social disparities. High rates of infant mortality occur in developing countries where financial and material resources are scarce and there is a high tolerance for large numbers of infant deaths.
There are circumstances in which a number of developing countries breed a culture where situations of infant mortality, such as the favoring of male babies over female babies, are the norm. In developing countries such as Brazil, infant mortality rates are commonly not recorded due to failure to register death certificates. Failure to register is mainly due to the potential loss of time and money and other indirect costs to the family. Even with resource opportunities such as the 1973 Public Registry Law 6015, which allowed free registration for low-income families, the requirements to qualify hold back individuals who are not contracted workers. Another cultural reason for infant mortality, such as what is happening in Ghana, is that "besides the obvious, like rutted roads, there are prejudices against wives or newborns leaving the house." This makes it even more difficult for women and newborns to get the treatment that is available to them and that they need.

Cultural influences and lifestyle habits in the United States can account for some deaths in infants throughout the years. According to the Journal of the American Medical Association, "the post neonatal mortality risk (28 to 364 days) was highest among continental Puerto Ricans" compared with non-Hispanic babies. Examples of such influences include teenage pregnancy, obesity, diabetes and smoking. All are possible causes of premature births, which constitute the second highest cause of infant mortality. Ethnic differences experienced in the United States are accompanied by a higher prevalence of behavioral risk factors and sociodemographic challenges that each ethnic group faces.

Historically, males have had higher infant mortality rates than females. The difference between male and female infant mortality rates has depended on environmental, social, and economic conditions. More specifically, males are biologically more vulnerable to infections and conditions associated with prematurity and development. Before 1970, male infant mortality was due mainly to infections and chronic degenerative diseases. However, since 1970, the cultural emphasis certain societies place on males has led to a decrease in the infant mortality gap between males and females. Also, medical advances have resulted in a growing number of male infants surviving, given the initially high infant mortality rate of males. Genetic components result in newborn females being biologically advantaged when it comes to surviving their first birthday; males, biologically, have lower chances of surviving infancy than female babies. As infant mortality rates decreased on a global scale, the gender most affected by infant mortality shifted from males, who experience a biological disadvantage, to females, who face a societal disadvantage. Some developing nations have social and cultural patterns that reflect adult discrimination favoring boys over girls for their future potential to contribute to household production. A country's ethnic composition, homogeneous versus heterogeneous, can explain social attitudes and practices; the level of heterogeneity is a strong predictor of infant mortality.

Birth spacing is the time between births. Births spaced at least three years apart from one another are associated with the lowest rate of mortality. The longer the interval between births, the lower the risk of birthing complications and of infant, childhood and maternal mortality.
Higher rates of pre-term birth and low birth weight are associated with birth-to-conception intervals of less than six months and abortion-to-pregnancy intervals of less than six months. Shorter intervals between births increase the chances of chronic and general under-nutrition; 57% of women in 55 developing countries reported birth spaces shorter than three years, and 26% reported birth spacing of less than two years. Only 20% of post-partum women report wanting another birth within two years; however, only 40% are taking necessary steps, such as family planning, to achieve the birth intervals they want. Unplanned pregnancies and birth intervals of less than twenty-four months are known to correlate with low birth weights and delivery complications. Also, women who are already small in stature tend to deliver smaller than average babies, perpetuating a cycle of being underweight.

To reduce infant mortality rates across the world, health practitioners, governments, and non-governmental organizations have worked to create institutions, programs and policies to generate better health outcomes. Improvements such as better sanitation practices have proven effective in reducing public health outbreaks and rates of disease among mothers and children. Efforts to increase a household's income through direct assistance or economic opportunities decrease mortality rates, as families then possess some means for more food and access to healthcare. Education campaigns, the dissemination of knowledge among urban and rural regions, and better access to educational attainment have proven to be an effective strategy for reducing infant and maternal mortality rates. Current efforts from NGOs and governments are focused on developing human resources, strengthening health information systems, improving health services delivery, and so on. Improvements in such areas have strengthened regional health systems and aided efforts to reduce mortality rates.

Reductions in infant mortality are possible at any stage of a country's development. Rate reductions are evidence that a country is advancing in human knowledge, social institutions and physical capital. Governments can reduce mortality rates by addressing the combined need for education (such as universal primary education), nutrition, and access to basic maternal and infant health services. A policy focus has the potential to aid those most at risk for infant and childhood mortality: rural, poor and migrant populations.

The chances of babies being born at low birth weights and contracting pneumonia can be reduced by improving air quality. Improving hygiene can prevent infant mortality; home-based technologies to chlorinate and filter water, along with solar disinfection for organic water pollution, could reduce cases of diarrhea in children by up to 48%. Improvements in food supplies and sanitation have been shown to work for the United States' most vulnerable populations, one being African Americans. Overall, women's health status needs to remain high. Simple behavioral changes, such as hand washing with soap, can significantly reduce the rate of infant mortality from respiratory and diarrheal diseases. According to UNICEF, hand washing with soap before eating and after using the toilet can save more lives of children than any single vaccine or medical intervention, by cutting deaths from diarrhea and acute respiratory infections. Future problems for mothers and babies can be prevented.
It is important that women of reproductive age adopt healthy behaviors in everyday life, such as taking folic acid, maintaining a healthy diet and weight, being physically active, avoiding tobacco use, and avoiding excessive alcohol and drug use. If women follow some of the above guidelines, later complications can be prevented, helping to decrease infant mortality rates. Attending regular prenatal care check-ups will help improve the baby's chances of being delivered in safer conditions and surviving. Focusing on preventing preterm and low birth weight deliveries throughout all populations can help to eliminate cases of infant mortality and decrease health care disparities within communities. In the United States, these two goals have decreased infant mortality rates in regional populations, but progress has yet to be seen on a national level.

Technological advances in medicine would decrease the infant mortality rate, and increased access to such technologies could decrease racial and ethnic disparities. It has been shown that technological determinants are influenced by social determinants: those who cannot afford to utilize advances in medicine tend to show higher rates of infant mortality. Technological advances have, in a way, contributed to the social disparities observed today, and providing equal access has the potential to decrease socioeconomic disparities in infant mortality. Cambodia, for example, is facing a disease that is killing infants; the symptoms last only 24 hours and the result is death. As stated above, greater access to technological advances in such countries would make it easier to find solutions to diseases like this. Recently, there have been declines in the United States that could be attributed to advances in technology. Advancements in neonatal intensive care, together with the development of surfactants, can be related to the decline in infant mortality. However, the importance of technological advancement remains unclear as the number of high-risk births increases in the United States.

It has been well documented that increased education among mothers, communities, and local health workers results in better family planning, improvements in children's health, and lower rates of children's deaths. High-risk areas, such as Sub-Saharan Africa, have demonstrated that an increase in women's educational attainment leads to a reduction in infant mortality of about 35%. Similarly, coordinated efforts to train community health workers in diagnosis, treatment, malnutrition prevention, reporting and referral services have reduced mortality in children under 5 by as much as 38%. Public health campaigns centered on the "First 1,000 Days" from conception have been successful in providing cost-effective supplemental nutrition programs, as well as assisting young mothers with sanitation, hygiene and breastfeeding promotion. Increased intake of nutrients and better sanitation habits have a positive impact on health, especially for developing children. Educational attainment and public health campaigns provide the knowledge and means to practice better habits and lead to better outcomes against infant mortality. Awareness of health services, education, and economic opportunities provides the means to sustain and increase the chances of development and survival. A decrease in GDP, for example, results in increased rates of infant mortality.
Negative effects on household income reduce the amount being spent on food and healthcare, affecting quality of life and access to the medical services needed to ensure full development and survival. Conversely, increased household income translates into more access to nutrients and healthcare, reducing the risks associated with malnutrition and infant mortality. Moreover, increased aggregate household incomes will produce better health facilities and better water and sewer infrastructure for the entire community.

Granting women employment raises their status and autonomy. Gainful employment can raise the perceived worth of females, which can lead to an increase in the number of women getting an education and a decrease in female infanticide. In the social modernization perspective, education leads to development. A higher number of skilled workers means more earnings and further economic growth. According to the economic modernization perspective, this type of economic growth is viewed as the driving force behind the increase in development and standard of living in a country. This is further explained by modernization theory: economic development promotes physical wellbeing. As the economy grows, so do technological advances and, thus, medical advances in access to clean water, health care facilities, education, and diet. These changes may decrease infant mortality.

Economically, governments could reduce infant mortality by building and strengthening capacity in human resources. Increasing human resources such as physicians, nurses, and other health professionals will increase the number of skilled attendants and the number of people able to administer immunizations against diseases such as measles. Increasing the number of skilled professionals is negatively correlated with maternal, infant, and childhood mortality. Between 1960 and 2000, the infant mortality rate decreased by half as the number of physicians increased fourfold. With the addition of one physician per 1,000 persons in a population, infant mortality is reduced by 30%.

In certain parts of the U.S., specific modern programs aim to reduce levels of infant mortality. An example of one such program is the 'Healthy Me, Healthy You' program based in Northeast Texas. It intends to identify factors that contribute to negative birth outcomes throughout a 37-county area. An additional program that aims to reduce infant mortality is the "Best Babies Zone" (BBZ) based at the University of California, Berkeley. The BBZ uses the life course approach to address the structural causes of poor birth outcomes and toxic stress in three U.S. neighborhoods. By employing community-generated solutions, the Best Babies Zone's ultimate goal is to achieve health equity in communities that are disproportionately impacted by infant death.

The infant mortality rate correlates very strongly with, and is among the best predictors of, state failure. IMR is therefore also a useful indicator of a country's level of health or development, and is a component of the physical quality of life index. However, the method of calculating IMR often varies widely between countries, being based on how they define a live birth and how many premature infants are born in the country. Reporting of infant mortality rates can be inconsistent, and may be understated, depending on a nation's live birth criterion, vital registration system, and reporting practices.
The reported IMR provides one statistic which reflects the standard of living in each nation. Changes in the infant mortality rate reflect the social and technical capacities of a nation's population.

The World Health Organization (WHO) defines a live birth as any infant born demonstrating independent signs of life, including breathing, heartbeat, umbilical cord pulsation or definite movement of voluntary muscles. This definition is used in Austria, for example. The WHO definition is also used in Germany, but with one slight modification: muscle movement is not considered to be a sign of life. Many countries, however, including certain European states (e.g. France) and Japan, only count as live births cases where an infant breathes at birth, which makes their reported IMR numbers somewhat lower and increases their rates of perinatal mortality. In the Czech Republic and Bulgaria, for instance, the requirements for a live birth are even higher.

Although many countries have vital registration systems and certain reporting practices, there are many inaccuracies, particularly in undeveloped nations, in the statistics on the number of infants dying. Studies comparing three information sources (official registries, household surveys, and popular reporters) have shown that the "popular death reporters" are the most accurate. Popular death reporters include midwives, gravediggers, coffin builders, priests, and others: essentially, people who knew the most about the child's death. In developing nations, access to vital registries, and other government-run systems which record births and deaths, is difficult for poor families for several reasons. These struggles place stress on families and can push them to take drastic measures, such as unofficial death ceremonies for their deceased infants. As a result, government statistics will inaccurately reflect a nation's infant mortality rate. Popular death reporters have first-hand information, and provided this information can be collected and collated, they can supply reliable data which give a nation accurate death counts and meaningful causes of death that can be measured and studied.

UNICEF uses a statistical methodology to account for reporting differences among countries: UNICEF compiles infant mortality country estimates derived from all sources and methods of estimation obtained either from standard reports, direct estimation from micro data sets, or from UNICEF's yearly exercise. In order to sort out differences between estimates produced from different sources, with different methods, UNICEF developed, in coordination with WHO, the WB and UNSD, an estimation methodology that minimizes the errors embodied in each estimate and harmonizes trends over time. Since the estimates are not necessarily the exact values used as input for the model, they are often not recognized as the official IMR estimates used at the country level. However, as mentioned before, these estimates minimize errors and maximize the consistency of trends over time.

Another challenge to comparability is the practice of counting frail or premature infants who die before the normal due date as miscarriages (spontaneous abortions), or counting those who die during or immediately after childbirth as stillborn. Therefore, the quality of a country's documentation of perinatal mortality can matter greatly to the accuracy of its infant mortality statistics.
This point is reinforced by the demographer Ansley Coale, who finds dubiously high ratios of reported stillbirths to infant deaths in Hong Kong and Japan in the first 24 hours after birth, a pattern that is consistent with the high recorded sex ratios at birth in those countries. It suggests not only that many female infants who die in the first 24 hours are misreported as stillbirths rather than infant deaths, but also that those countries do not follow WHO recommendations for the reporting of live births and infant deaths.

Another seemingly paradoxical finding is that when countries with poor medical services introduce new medical centers and services, instead of declining, the reported IMRs often increase for a time. This is mainly because improvement in access to medical care is often accompanied by improvement in the registration of births and deaths. Deaths that might have occurred in a remote or rural area, and not been reported to the government, might now be reported by the new medical personnel or facilities. Thus, even if the new health services reduce the actual IMR, the reported IMR may increase. Collecting accurate infant mortality statistics can also be an issue in some rural communities in developing countries; in those communities, alternative methods for calculating the infant mortality rate have emerged, for example popular death reporting and household surveys.

The country-to-country variation in child mortality rates is huge, and growing wider despite the progress. Among the world's roughly 200 nations, only Somalia showed no decrease in the under-5 mortality rate over the past two decades. The lowest rate in 2011 was in Singapore, which had 2.6 deaths of children under age 5 per 1,000 live births. The highest was in Sierra Leone, which had 185 child deaths per 1,000 births. The global rate is 51 deaths per 1,000 births. For the United States, the rate is eight per 1,000 births.

The infant mortality rate (IMR) is not just a statistic; it reflects socioeconomic development and effectively represents the presence of medical services in a country. IMR is an effective resource for health departments when making decisions about the reallocation of medical resources. IMR also informs global health strategies and helps evaluate program success. IMR helps address the inadequacies of other vital statistics systems for global health, as most vital statistics systems usually neglect infant mortality figures among the poor. A certain number of infant deaths in rural areas go unrecorded because families there lack information about infant mortality statistics or the concept of reporting early infant deaths.

The exclusion of any high-risk infants from the denominator or numerator in reported IMRs can cause problems in making comparisons. Many countries, including the United States, Sweden and Germany, count an infant exhibiting any sign of life as alive, no matter the month of gestation or the size, but according to the United States, some other countries differ in these practices. All of the countries named adopted the WHO definitions in the late 1980s or early 1990s, which are used throughout the European Union. However, in 2009, the US CDC issued a report that stated that the American rates of infant mortality were affected by the United States' high rates of premature babies compared to European countries.
It also outlined the differences in reporting requirements between the United States and Europe, noting that France, the Czech Republic, Ireland, the Netherlands, and Poland do not report all live births of babies under 500 g and/or 22 weeks of gestation. However, the differences in reporting are unlikely to be the primary explanation for the United States' relatively low international ranking. Rather, the report concluded that the primary reason for the United States' higher infant mortality rate compared with Europe was the United States' much higher percentage of preterm births. The US National Institute of Child Health and Human Development (NICHD) has made great strides in lowering US infant mortality rates: since the institute was created, the US infant mortality rate has dropped 70%, in part due to its research.

Until the 1990s, Russia and the Soviet Union did not count, as a live birth or as an infant death, extremely premature infants (less than 1,000 g, less than 28 weeks gestational age, or less than 35 cm in length) that were born alive (breathed, had a heartbeat, or exhibited voluntary muscle movement) but failed to survive for at least seven days. Although such extremely premature infants typically accounted for only about 0.5% of all live-born children, their exclusion from both the numerator and the denominator in the reported IMR led to an estimated 22%–25% lower reported IMR. In some cases, too, perhaps because hospitals or regional health departments were held accountable for lowering the IMR in their catchment area, infant deaths that occurred in the 12th month were "transferred" statistically to the 13th month (i.e., the second year of life), and thus no longer classified as infant deaths.

In certain rural developing areas, such as northeastern Brazil, infant births are often not recorded in the first place, resulting in discrepancies between the reported infant mortality rate (IMR) and the actual number of infant deaths. Accessing vital registry systems for infant births and deaths is an extremely difficult and expensive task for poor parents living in rural areas. Governments and bureaucracies tend to show insensitivity to these parents and their recent suffering from a lost child, and to produce broad disclaimers in IMR reports stating that the information has not been properly reported, resulting in these discrepancies. Little has been done to address the underlying structural problems of the vital registry systems with respect to the lack of reporting from parents in rural areas, which in turn has created a gap between the official and popular meanings of child death. It is also argued that the bureaucratic separation of vital death recording from cultural death rituals is to blame for the inaccuracy of the infant mortality rate (IMR). Vital death registries often fail to recognize the cultural implications and importance of infant deaths. This is not to say that vital registry systems cannot accurately represent a region's socio-economic situation, but that is only the case if the statistics are valid, which unfortunately is not always so.

Using "popular death reporters" is an alternative method for collecting and processing statistics on infant and child mortality. Many regions may benefit from this because such reporters, being culturally linked to infants, may be able to provide more accurate statistics on the incidence of infant mortality.
According to ethnographic data, "popular death reporters" refers to people who had inside knowledge of anjinhos, including the grave-digger, gatekeeper, midwife, popular healers and so on, all key participants in mortuary rituals. Combining household surveys, vital registries, and the accounts of "popular death reporters" can increase the validity of child mortality rates, but there are many barriers that can affect the validity of infant mortality statistics. One of these barriers is political-economic decision-making: numbers are exaggerated when international funds are being doled out and underestimated during re-election campaigns.

The bureaucratic separation of vital death reporting and cultural death rituals stems in part from structural violence. Individuals living in rural areas of Brazil must invest large amounts in lodging and travel in order to report an infant birth to a Brazilian Assistance League office. These financial costs deter registration, as individuals are often of lower income and cannot afford such expenses. As with the lack of birth reporting, families in rural Brazil face difficult choices based on already existing structural arrangements when choosing whether to report infant mortality. Financial constraints, such as reliance on food supplementation, may also lead to skewed infant mortality data. In developing countries such as Brazil, the deaths of impoverished infants regularly go unrecorded in the country's vital registration system, which skews the statistics. Cultural validity and contextual soundness can be used to ground the meaning of mortality from a statistical standpoint. In northeast Brazil this has been accomplished by conducting an ethnographic study combined with an alternative method of surveying infant mortality. These types of techniques can develop quality ethnographic data that will ultimately lead to a better portrayal of the magnitude of infant mortality in the region. Political-economic motives have skewed infant mortality data in the past, as when the governor of Ceará built his presidential campaign on reducing the infant mortality rate during his term in office. By using this new way of surveying, such instances can be minimized and removed, overall creating accurate and sound data.

For the world, and for both less developed countries (LDCs) and more developed countries (MDCs), IMR declined significantly between 1960 and 2001. According to the State of the World's Mothers report by Save the Children, the world IMR declined from 126 in 1960 to 57 in 2001. However, IMR was, and remains, higher in LDCs. In 2001, the IMR for LDCs (91) was about 10 times as large as it was for MDCs (8). On average, the IMR for LDCs is 17 times higher than that of MDCs. Also, while both LDCs and MDCs made significant reductions in infant mortality rates, reductions among less developed countries are, on average, much smaller than those among the more developed countries. A factor of about 67 separates the countries with the highest and lowest reported infant mortality rates. The top and bottom five countries by this measure (taken from The World Factbook's 2012 estimates) include the Central African Republic, ranked fifth, with an infant mortality rate of 97.17 deaths per 1,000 live births. According to Guillot, Gerland, Pelletier and Saabneh, "birth histories, however, are subject to a number of errors, including omission of deaths and age misreporting errors."
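As a quick arithmetic check on the figures quoted above (my own illustration, not part of any report cited here), the IMR is conventionally expressed as infant deaths per 1,000 live births, so the comparisons are simple ratios of such rates. The death and birth counts in the sketch below are hypothetical, chosen only so that the resulting rates match the numbers in the text.

```python
# Illustrative only: hypothetical counts, scaled to reproduce the quoted rates.

def imr(infant_deaths, live_births):
    """Infant mortality rate, expressed as deaths per 1,000 live births."""
    return 1000 * infant_deaths / live_births

print(imr(91_000, 1_000_000))   # 91.0 -> the IMR reported for LDCs in 2001
print(imr(8_000, 1_000_000))    # 8.0  -> the IMR reported for MDCs in 2001
print(round(91 / 8, 1))         # 11.4 -> "about 10 times as large"
print(round(126 / 57, 1))       # 2.2  -> the world IMR more than halved from 1960 to 2001
```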
The infant mortality rate in the US decreased by 2.3% to a historic low of 582 infant deaths per 100,000 live births in 2014. Of the 27 most developed countries, the U.S. has the highest infant mortality rate, despite spending much more on health care per capita. Significant racial and socio-economic differences in the United States affect the IMR, in contrast with other developed countries, which have more homogeneous populations. In particular, IMR varies greatly by race in the US. The average IMR for the whole country is therefore not a fair representation of the wide variations that exist between segments of the population. Many theories have been explored as to why these racial differences exist, with socio-economic factors usually coming out as a reasonable explanation. However, more studies have been conducted on this matter, and the largest advancement has been around the idea of stress and how it affects pregnancy.

In the 1850s, the infant mortality rate in the United States was estimated at 216.8 per 1,000 babies born for whites and 340.0 per 1,000 for African Americans, but rates have significantly declined in the West in modern times. This declining rate has been mainly due to modern improvements in basic health care, technology, and medical advances. In the last century, the infant mortality rate has decreased by 93%. Overall, the rates have decreased drastically from 20 deaths in 1970 to 6.9 deaths in 2003 (per every 1,000 live births). In 2003, the leading causes of infant mortality in the United States were congenital anomalies, disorders related to immaturity, SIDS, and maternal complications. The share of babies born with low birth weight increased to 8.1%, while cigarette smoking during pregnancy declined to 10.2%. Smoking was still reflected in low birth weights: 12.4% of births from smokers were low birth weight, compared with 7.7% of such births from non-smokers. According to the New York Times, "the main reason for the high rate is preterm delivery, and there was a 10% increase in such births from 2000 to 2006." Between 2007 and 2011, however, the preterm birth rate decreased every year. In 2011 there was an 11.73% rate of babies born before the 37th week of gestation, down from a high of 12.80% in 2006.

Economic expenditures on labor and delivery and neonatal care are relatively high in the United States. A conventional birth averages 9,775 USD, with a C-section costing 15,041 USD. Preterm births in the US have been estimated to cost $51,600 per child, with a total yearly cost of $26.2 billion. Despite this spending, several reports state that the infant mortality rate in the United States is significantly higher than in other developed nations. Estimates vary; the CIA's World Factbook ranks the US 55th internationally in 2014, with a rate of 6.17, while UN figures from 2005-2010 place the US 34th. The aforementioned differences in measurement could play a substantial role in the disparity between the US and other nations. A non-viable live birth in the US could be registered as a stillbirth in similarly developed nations like Japan, Sweden, Norway, Ireland, the Netherlands, and France, thereby reducing the infant death count. Neonatal intensive care is also more likely to be applied in the US to marginally viable infants, although such interventions have been found to increase both costs and disability.
A study following the implementation of the Born Alive Infant Protection Act of 2002 found that universal resuscitation of infants born between 20 and 23 weeks increased the neonatal spending burden by $313.3 million while simultaneously decreasing quality-adjusted life years by 329.3.

The vast majority of research conducted in the late twentieth and early twenty-first century indicates that African-American infants are more than twice as likely to die in their first year of life as white infants. Although following a decline from 13.63 to 11.46 deaths per 1,000 live births from 2005 to 2010, non-Hispanic black mothers continued to report a rate 2.2 times as high as that for non-Hispanic white mothers. Contemporary research findings have demonstrated that nationwide racial disparities in infant mortality are linked to the experiential state of the mother and that these disparities cannot be totally accounted for by socio-economic, behavioral or genetic factors. The Hispanic paradox, an effect observed in other health indicators, appears in the infant mortality rate as well: Hispanic mothers see an IMR comparable to non-Hispanic white mothers, despite lower educational attainment and economic status. A study in North Carolina, for example, concluded that "white women who did not complete high school have a lower infant mortality rate than black college graduates." According to Mustillo's CARDIA (Coronary Artery Risk Development in Young Adults) study, "self reported experiences of racial discrimination were associated with pre-term and low-birthweight deliveries, and such experiences may contribute to black-white disparities in prenatal outcomes." Likewise, dozens of population-based studies indicate that "the subjective, or perceived experience of racial discrimination is strongly associated with an increased risk of infant death and with poor health prospects for future generations of African Americans."

While earlier parts of this article have addressed the racial differences in infant deaths, a closer look into the effects of racial differences within the country is necessary to view the discrepancies. Non-Hispanic black women lead all other racial groups in IMR with a rate of 11.3, while the infant mortality rate among white women is 5.1. Black women in the United States experience a shorter life expectancy than white women, so while a higher IMR amongst black women is not entirely unexpected, it is still troubling. The popular argument holds that the tendency toward lower socio-economic status among black women leads to an increased likelihood of a child suffering. While this does correlate, the theory that it is the contributing factor falls apart when we look at Latino IMR in the United States. Latino people are almost as likely to experience poverty as blacks in the U.S.; however, the infant mortality rate of Latinos is much closer to that of white women than to that of black women. The poverty rates of blacks and Latinos are 24.1% and 21.4%, respectively. If there were a direct correlation, the IMRs of these two groups should be rather similar; however, blacks have an IMR double that of Latinos. Also, for black women who move out of poverty, or never experienced it in the first place, the IMR is not much lower than for their counterparts experiencing higher levels of poverty. Some believe black women are predisposed to a higher IMR, meaning that, ancestrally speaking, all black women of African descent should experience an elevated rate.
This theory is quickly disproven by looking at women of African descent who have immigrated to the United States. These women, who come from a completely different social context, are not prone to the higher IMR experienced by American-born black women. Tyan Parker Dominguez at the University of Southern California offers a theory to explain the disproportionately high IMR among black women in the United States. She claims African American women experience stress at much higher rates than any other group in the country. Stress produces particular hormones that induce labor and contribute to other pregnancy problems. Considering that early births are one of the leading causes of death for infants under the age of one, induced labor is a very legitimate factor. The idea of stress spans socio-economic status, as Parker Dominguez claims that stress for lower-class women comes from unstable family life and chronic worry over poverty. For black middle-class women, battling racism, real or perceived, can be an extreme stressor.

Arline Geronimus, a professor at the University of Michigan School of Public Health, calls the phenomenon "weathering." She claims that constantly dealing with disadvantages and racial prejudice causes black women's birth outcomes to deteriorate with age. Therefore, younger black women may experience stress with pregnancy due to social and economic factors, but older women experience stress at a compounding rate and therefore have pregnancy complications aside from economic factors.

Mary O. Hearst, a professor in the Department of Public Health at Saint Catherine University, researched the effects of segregation on the African American community to see if it contributed to the high IMR amongst black children. Hearst claims that residential segregation contributes to the high rates because of the political, economic, and negative health implications it imposes on black mothers regardless of their socioeconomic status. Racism, economic disparities, and sexism in segregated communities are all examples of the daily stressors that pregnant black women face that can affect their pregnancies with conditions such as pre-eclampsia and hypertension. Studies have also shown that high IMR is due to the inadequate care that pregnant African Americans receive compared to other women in the country. This unequal treatment stems from the idea that there are racial medical differences and is also rooted in racial biases and controlling images of black women. Because of this unequal treatment, research finds that black women do not receive the same urgency in medical care and are not taken as seriously regarding the pain they feel or the complications they think they are having, as exemplified by the complications tennis star Serena Williams faced during her delivery.

Strides have been made, however, to combat this epidemic. In Los Angeles County, health officials have partnered with non-profits around the city to help black women after the delivery of their child. One non-profit in particular that has made a large impact on many lives is Great Beginnings For Black Babies in Inglewood. The non-profit centers on helping women deal with stress by forming support networks, keeping an open dialogue around race and family life, and also finding these women a secure place in the workforce. Some research argues that to end high IMR amongst black children, the country needs to fix the social and societal issues that plague African Americans.
Some scholars argue that issues such as institutional racism, mass incarceration, poverty, and health care disparities that are present in the African American community need to be addressed by the United States government in order for policy to be created to combat them. Following this theory, if institutional inequalities are addressed and repaired by the United States government, daily stressors for African Americans, and African American women in particular, will be reduced, thereby lessening the risk of complications in pregnancy and infant mortality. Others argue that adding diversity in the health care industry can help reduce the high IMR, because more representation can tackle deep-rooted racial biases and stereotypes that exist towards African American women. Another, more recent form of action to reduce high IMR amongst black children is the use of doulas throughout pregnancy.

It was in the early 1900s that countries around the world started to notice that there was a need for better child health care services. Europe started this push, and the United States, which had fallen behind, created a campaign to decrease the infant mortality rate. With this program, the IMR was lowered to 10 deaths, rather than 100 deaths, per 1,000 births. Infant mortality also came to be seen as a social problem once it was recognized as a national problem. Educated, middle-class American women started a movement that provided housing for lower-class families. Through this movement, they were able to establish public health care and government agencies that created more sanitary and healthier environments for infants. Medical professionals helped further the cause of infant health by creating the field of pediatrics, dedicated to medicine for children.

Decreases in infant mortality in given countries across the world during the 20th century have been linked to several common trends, scientific advancements, and social programs. Some of these include state improvements in sanitation, access to healthcare, and education, and the development of medical advances such as penicillin and safer blood transfusions. In the United States, improving infant mortality in the first half of the 20th century meant tackling environmental factors. Improving sanitation, and especially access to safe drinking water, helped the United States dramatically decrease infant mortality, a growing concern in the United States since the 1850s. On top of these environmental factors, during this time the United States endeavored to increase education and awareness regarding infant mortality. Pasteurization of milk also helped the United States combat infant mortality in the early 1900s, a practice which allowed the country to curb disease in infants. These factors, on top of a general increase in the standard of living for those living in urban settings, helped the United States make dramatic improvements in its rates of infant mortality in the early 20th century. Although the overall infant mortality rate was dropping sharply during this time, infant mortality within the United States varied greatly among racial and socio-economic groups. The infant mortality rate fell between 1915 and 1933 from 98.6 to 52.8 per 1,000 for the white population, and from 181.2 to 94.4 per 1,000 for the black population.
Studies imply that this has a direct correlation with relative economic conditions between these populations. Additionally, infant mortality in southern states was consistently 2% higher than in other US states across a 20-year period beginning in 1985. Southern states also tend to perform worse on predictors of higher infant mortality, such as per capita income and poverty rate.

In the latter half of the 20th century, a focus on greater access to medical care for women spurred declines in infant mortality in the United States. The implementation of Medicaid, granting wider access to healthcare, contributed to a dramatic decrease in infant mortality, in addition to greater access to legal abortion and family-planning care, such as the IUD and the birth control pill. In the decades following the 1970s, the United States' decline in infant mortality began to slow, falling behind China's, Cuba's, and other developed countries'. Funding for the federally subsidized Medicaid and Maternal and Infant Care programs was sharply reduced, and the availability of prenatal care greatly decreased for low-income parents.

The People's Republic of China's growth in medical resources in the latter half of the 20th century partly explains its dramatic improvement with regard to infant mortality during this time. Part of this growth included the adoption of the Rural Cooperative Medical System, which was founded in the 1950s. The Cooperative Medical System granted healthcare access to previously underserved rural populations and is estimated to have covered 90% of China's rural population throughout the 1960s. The Cooperative Medical System achieved an infant mortality rate of 25.09 per 1,000. It was later defunded, leaving many rural populations to rely on an expensive fee-for-service system, although the rate continued to decline in general. This change in medical systems caused a socio-economic gap in accessibility to medical care in China, which fortunately was not reflected in its rate of decline in infant mortality. Prenatal care was increasingly used, even as the Cooperative Medical System was replaced, and delivery assistance remained accessible.

China's one-child policy, adopted in the 1980s, negatively impacted its infant mortality. Women carrying unapproved pregnancies faced state consequences and social stigma and were thus less likely to use prenatal care. Additionally, economic realities and long-held cultural factors incentivized male offspring, leading some families who already had sons to avoid prenatal care or professional delivery services, and causing China to have unusually high female infant mortality rates during this time.
http://en.wikibedia.ru/wiki/Infant_mortality
18
59
Horizontal line test:
• If some horizontal line intersects the graph of the function more than once, then the function is not one-to-one.
• If no horizontal line intersects the graph more than once, then the function is one-to-one.

A general function points from each member of A to a member of B; it never has one A pointing to more than one B, so one-to-many is not OK in a function. A function is said to be one-to-one if every y value has exactly one x value mapped onto it. In this lecture, we will consider properties of functions: functions that are one-to-one, onto, and correspondences, and how to prove that a given function has these properties.

Functions can have many classifications or names, depending on the situation and what you want to do with them; one very important classification is deciding whether a function is one-to-one. In a one-to-one function, every element of the range of the function matches up with exactly one element of the domain; such a function is also called an injective function. Basically, there are three types of function mappings: injective (one-to-one), surjective (onto) and bijective. A one-to-one function is one for which every element of the range of the function corresponds to exactly one element of the domain; one-to-one is often written 1-1. Equivalently, a function f is 1-to-1 if no two elements in the domain of f correspond to the same element in the range of f. Formally, suppose f : A → B is a function; we call f one-to-one if every distinct pair of objects in A is assigned to a distinct pair of objects in B.

After learning the definition of a function, we can extend it to define a one-to-one function: a one-to-one function has not only one output for every input, but also only one input for every output. A function relates each value of the independent variable x (input) to a single value of the dependent variable y (output); a one-to-one function additionally matches each y value to only one x value.

Not all functions have inverse functions. The graph of an inverse function is the reflection of the original graph over the line y = x, which means that each x-value must be matched to only one y-value. Warning: the notation is misleading; the minus-one power in the function notation f^(-1) means the inverse function, not the reciprocal. Don't confuse the two.

Example: the function f(x) = x is one-to-one, because if f(x1) = f(x2) then x1 = x2. On the other hand, the function g(x) = x^2 is not a one-to-one function, because two different inputs (for example 1 and -1) give the same output. You might have noticed that many of the examples considered so far involve monotonic functions, which, because of their one-to-one nature, can therefore be inverted. There are standard procedures for checking that a function is one-to-one, for computing the inverse of such a function, and for relating the derivative of a function to that of its inverse. A function assigns to each element of a set exactly one element of a related set, and functions find their application in various fields.
This algebra lesson gives an easy test to see if a function has an inverse function. In each plot, the function is in blue and the horizontal line is in red; for the first plot (on the left), the function is not one-to-one, since it is possible to draw a horizontal line that crosses the graph more than once. A function is not one-to-one if two different inputs, such as 55 and 62, have the same output, such as 38; a function is one-to-one if no output is ever repeated. Related topics are the inverse function, the graph of the inverse function, and derivatives of the inverse function; remark: not every function is invertible. A one-to-one function is an injective function f : A → B.
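As a rough companion to the graphical test described above, here is a minimal sketch (my own illustration, not part of the lesson) that checks one-to-one behaviour numerically on a finite sample of inputs; the sample points and the two example functions are assumptions chosen for the demonstration. Passing the check on a sample does not prove a function is one-to-one on its whole domain, but failing it does disprove it.

```python
# A numerical analogue of the horizontal line test: flag any repeated output.

def is_one_to_one(f, domain_sample):
    """Return True if no two sampled inputs map to the same output."""
    seen = {}
    for x in domain_sample:
        y = f(x)
        if y in seen and seen[y] != x:
            return False          # two different x-values share one y-value
        seen[y] = x
    return True

sample = [x / 2 for x in range(-10, 11)]        # -5.0, -4.5, ..., 5.0

print(is_one_to_one(lambda x: x, sample))       # True:  f(x) = x never repeats an output
print(is_one_to_one(lambda x: x * x, sample))   # False: g(x) = x^2 gives g(-2) = g(2) = 4
```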
http://jaassignmentnucu.cardiffbeekeepers.info/one-to-one-function.html
18
17
Calendar and counting practice motivate kinders to learn! It's natural to want to have fun while learning. Calendar time, if used correctly, can be the most beneficial time of the day! Math and social studies standards get to overlap and encourage learning. The kids learn patterns in the days of the week and months of the year. They begin to learn concepts of time such as today, tomorrow and yesterday. They reinforce counting skills and ordinal counting skills, and they learn the difference! Daily counting is done for counting to 20 forward and back, counting by 10's to 100 and counting by 1's to 100. Toward the end of the year, we add counting by 5's to 100 to prepare for first grade. We sing and we dance. We exercise to get to 100. We use videos on the ActivBoard, but it can be just as beneficial to use a standard classroom wall calendar and posters. Here are the videos we use: Count to 20, Countdown from 20, Count by 10's, Count by 1's, and Counting by 5's.

As with all of the subtraction lessons, we start with one of our favorite sing-alongs, When You Subtract with a Pirate. There are two versions of that song; the song is the same in both videos, but the animation is a bit different. My kids go wild for this song, and every time we bring up subtraction they just have to sing it. They've been known to sing it at their tables without the video! After we sing, I do a quick review of what subtraction looks like and demonstrate solving a few problems. I use the same set of problems that they will be using for the independent portion of the lesson. I guide them in a quick discussion about the relationship between addition and subtraction. This is a perfect lesson to introduce the idea because the quantities are small enough for their young minds to manage with ease. I have them show me one plus one using their fingers and they all yell out "two," and then I have them show me two minus (or subtract) one and they all yell out "one." We do this several times. I then ask the kids what we are doing. Through a short discussion, the kids arrive at the idea that they are using the same amounts to "lose some" and to "get some." I do a really quick demonstration on a number line of adding one and subtracting one. Their response was one that made me think they had just seen a magic trick! I turn the attention and demonstration back to the activity for the day and have them tell me how to do it so I know they are ready to get to work.

Tip: Cut out the number sentences you will be using with this activity before copying.

This lesson has a very short guided instruction section. Since the kids are familiar with the routine from Sum Sorting to 10, a previous lesson, it takes no time at all to explain what they need to do. The only difference is that this lesson is subtraction. For this lesson and the next three lessons, we are focusing on skill practice, not attaining new knowledge. The guided practice for this lesson takes place at their tables, where I have the kids solve the first problem, cut it out and paste it into the correct column. I roam the room to make sure they are doing the job correctly. I assist any kids who are struggling with the concept of subtraction. For one student who is new to our class, I assign a rally coach who sits next to her and oversees her work. If the rally coach sees her do something incorrectly, she catches it and has her recalculate before she glues it down. The rally coach is the "natural teacher" type of kid who enjoys helping others.
The majority of the lesson is spent here on the independent practice section, where they get to practice their subtraction skills.** Most of the kids do well with this simple level of subtraction, even my low-achieving kids. I set them to work on their own and call a small group to my table so I can support them with the concept of subtraction, as it does not come as naturally to the young mind as addition does. In the small group, in this case three kids, I walk them through each problem using blocks. The kids at the table have the option of using counting blocks or their fingers. Some just know the answers because they have begun to subitize, which is a celebration because that is a year-long fluency goal. The small group works at the table with me and I provide any necessary support that they may need, such as walking them through each problem step by step until they remember how to subtract on their own. Training the young brain to take away after it has learned to add two groups can be challenging for some.

**All of the number sentences for subtraction 0-12 are provided in the resource section. Cut out the number sentences you want your students to use before copying.

Once the kids have finished their sorting and gluing, we meet on the floor to talk about what subtraction is and how it is different from addition. We create a flow map showing the steps necessary in solving a subtraction problem. This helps the young mind organize the information it has gained, and the chart remains up in the room for reference. They will need to refer to it as the problems get more difficult. The next lesson is exactly like this one, but it is for differences that total 3, 4, and 5.

The exit ticket for this lesson is their completed sorting sheets. I collect them after the glue dries and I look over them for any students who may be struggling whom I didn't catch in my initial small group (I have only 3 or 4 in those support groups). When I discover a student who needs extra support that I did not detect before, I pull them back to the small group table one on one for the next lesson and provide a rally coach for those who were in the small group for this lesson.
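For anyone preparing their own version of the sorting sheets, here is a minimal sketch (my own illustration, not the author's materials) that generates subtraction number sentences and groups them by their difference; the target differences of 1, 2 and 3 follow this lesson, while the choice to keep minuends at 5 or below is an assumption.

```python
# Group small subtraction sentences into the columns used on a sorting sheet.
from collections import defaultdict

target_differences = [1, 2, 3]
columns = defaultdict(list)

for minuend in range(0, 6):                    # small quantities: 0 through 5
    for subtrahend in range(0, minuend + 1):
        difference = minuend - subtrahend
        if difference in target_differences:
            columns[difference].append(f"{minuend} - {subtrahend}")

for difference in target_differences:
    # Each column collects every number sentence with the same answer.
    print(f"Difference {difference}: {', '.join(columns[difference])}")
```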
https://betterlesson.com/lesson/603377/same-as-subtraction-1-2-3
To be successful on Quiz Polynomial Theorems, students must know the Remainder Theorem and the Fundamental Theorem of Algebra and understand the implications of these theorems for finding roots of polynomials. Although I do not require my students to memorize a specific wording of these theorems, I do ask them to state them in their own words using polynomial vocabulary terms correctly [MP6]. The questions on this quiz probe for understanding of the idea that a consequence of these theorems is the relationship between linear factors and zeros of polynomials. I hope they can explain that any polynomial P(x) can be written as (x - a)·Q(x) + P(a), and that if P(a) = 0 then (x - a) is a factor and a is a root of P(x) [MP2]. (A quick numerical sketch of this factor-remainder relationship follows below.)

As students turn in their quiz, I hand them Polynomial Graph Questions. The goal of this exercise is for students to graph some polynomials using a graphing calculator and then try to develop a set of questions about why the graphs look the way they do. They first work independently to generate questions. I anticipate that students will ask questions like these. After students have had some time to generate questions individually, they join together with 3 or 4 other students to assemble a group set of questions. Each group is given a sheet of post-it paper and markers. The goal is to find commonalities among their questions and come up with their favorite 5 questions. These posters will be put on the wall at the end of the group work time.

I ask each group to put their poster on the wall. Then students circulate around the room to read the questions developed by each group. We have a brief discussion about commonalities between the sets of questions generated by the groups and begin a discussion about how we might go about answering them. The posters will remain on the wall for the next class period, when we will attempt to answer the questions [MP3].
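As a quick aside (not part of the lesson materials), here is a minimal sketch of the factor-remainder relationship above: dividing P(x) by (x - a) with synthetic division leaves a remainder equal to P(a), and the remainder is zero exactly when a is a root. The cubic used here is a hypothetical example, not one of the quiz problems.

```python
# Hedged sketch of the Remainder Theorem: P(x) = (x - a) * Q(x) + P(a).
# Coefficients are listed from the highest power down, e.g. [1, -6, 11, -6]
# represents the (hypothetical) cubic P(x) = x^3 - 6x^2 + 11x - 6.

def synthetic_division(coeffs, a):
    """Divide P(x) by (x - a); return quotient coefficients and remainder."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + a * out[-1])
    return out[:-1], out[-1]          # the last value is the remainder, P(a)

P = [1, -6, 11, -6]                   # roots at x = 1, 2, 3
for a in (1, 2, 4):
    Q, r = synthetic_division(P, a)
    print(a, Q, r)                    # r == 0 exactly when (x - a) is a factor
```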
https://betterlesson.com/lesson/581601/quiz-and-intro-to-graphs-of-polynomials?from=breadcrumb_lesson
When the earth beneath the ice gets exposed, this percentage is reversed. As the ice reaches the sea, pieces break off, or calve, forming icebergs. This retreat has occurred during a period of reduced winter snowfall and higher summer temperatures. While winter weather will still deposit snow in the mountains, this seasonal snow does not function the same way as glacial ice, since it melts early in the summer season. Ice-sheet dynamics: glaciers move, or flow, downhill due to gravity and the internal deformation of ice. The permanent snow cover necessary for glacier formation is affected by factors such as the degree of slope on the land, the amount of snowfall and the winds. (Image: a black ice glacier near Aconcagua, Argentina.) Glaciers are present on every continent and in approximately fifty countries, excluding those (such as Australia and South Africa) that have glaciers only on distant subantarctic island territories. The continued demise of glacier ice will result in a short-term increase, followed by a long-term decrease, in glacial meltwater flowing into rivers and streams. When glacial ice occupies a valley, it can form terraces or kames along the sides of the valley. But a new study out Wednesday suggests these glaciers are suffering the same fate as their more famous brethren and are already punching far above their weight when it comes to their contribution to sea level rise. When the stress on the layer above exceeds the inter-layer binding strength, it moves faster than the layer below. (Figure: percentage of shrinking and growing glaciers, from the WGMS report.) It is also very important to understand that glacier changes are dictated not only by changes in air temperature but also by precipitation. This snow will settle, and when it snows again the lower layer of snow gets compressed. There are several methods for calculating global glacier mass change. Still, melting mountain glaciers are responsible for a third of the sea level rise, with another third blamed on Antarctica and Greenland and the rest due to thermal expansion of seawater. Moraines can also create moraine-dammed lakes. Rocky Mountains: on the sheltered slopes of the highest peaks of Glacier National Park in Montana, the eponymous glaciers are diminishing rapidly. In glaciated areas where the glacier moves faster than one kilometre per year, glacial earthquakes occur. Each one of us can play a part in helping reduce harmful emissions, leading to a possible reduction in future global warming. Those depending on freshwater from the melting glaciers will have to relocate. The highest-quality glacier observations are ongoing, continuous and long-term. In the Kebnekaise Mountains of northern Sweden, a study of 16 glaciers found that 14 were retreating, one was advancing and one was stable. Most tidewater glaciers calve above sea level, which often results in a tremendous impact as the iceberg strikes the water. This is a vicious cycle which has already begun, and it will be almost impossible for us to stop it entirely. Part of this has to do with the inhospitable terrain. The photographs below clearly demonstrate the retreat of this glacier. When this phenomenon occurs in a valley, it is called a valley train. But if you look for the reasons for melting glaciers in depth, excessive CO2 emissions from fossil fuels -- far more than the oceans and forests can take up -- are the underlying cause.
Eskers are composed of sand and gravel deposited by meltwater streams that flowed through ice tunnels within or beneath a glacier. Glaciers are sentinels of climate change. Abrasion occurs when the ice and its load of rock fragments slide over bedrock and function as sandpaper, smoothing and polishing the bedrock below. The above data is just one piece of evidence consistent with global warming. Even at high latitudes, glacier formation is not inevitable. Friction makes the ice at the bottom of the glacier move more slowly than ice at the top. (Image: the perimeter of Chaney Glacier in Glacier National Park mapped at several dates.) When the mass of snow and ice is sufficiently thick, it begins to move due to a combination of surface slope, gravity and pressure. The Altai region has also experienced an overall temperature increase. The glacier has retreated so much that it is hardly visible in the photo. (Credit: National Snow and Ice Data Center, Glacier Photograph Collection.) Hall notes that all but one of Vatnajökull's outlet glaciers, roughly 40 named glaciers in all, are currently receding. Although all of Iceland's outlet glaciers are currently retreating, some glaciers like Breidamerkurjökull expand and retreat independently of short-term climate. Glaciers in the Garhwal Himalaya in India are retreating so fast that researchers believe most central and eastern Himalayan glaciers could virtually disappear. For more images and animations, see Iceland's Receding Glacier. According to NASA glaciologist Dorothy Hall, Landsat data show that Iceland's Breidamerkurjökull has receded substantially over the Landsat record. The USGS has published a time-series analysis of the glacier margins of the named glaciers of Glacier National Park; the areas measured span approximately 50 years of change in glacier area. The vast majority of glaciers are receding, and importantly, the shrinking trend is accelerating (for example, the share of surveyed glaciers that are shrinking rose from 77% in one survey period to 94% in a later one). (Figure 2: glacier mass balance over two periods, shown in blue and red.)
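As a hedged aside (not from the article), the mass-balance bookkeeping referred to above can be illustrated with a toy calculation: a glacier's annual specific balance is accumulation minus ablation, usually expressed in metres of water equivalent (m w.e.), and the running total shows whether the glacier is gaining or losing mass. The numbers below are invented purely for illustration.

```python
# Toy illustration of cumulative glacier mass balance.  Each year's specific
# balance (accumulation minus ablation, in m w.e.) is summed to show the
# overall trend; the values here are hypothetical.

annual_balance_mwe = [-0.3, -0.5, 0.1, -0.8, -0.4]   # invented annual balances

cumulative = []
total = 0.0
for b in annual_balance_mwe:
    total += b
    cumulative.append(round(total, 2))

print(cumulative)   # [-0.3, -0.8, -0.7, -1.5, -1.9] -> net mass loss overall
```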
http://zoluxebesosuzyfibe.mint-body.com/why-are-glaciers-receding-and-what-5358853588.html
Until recently, all of the techniques we have used to measure surface deformation (for example GNSS) were based on detecting changes at specific points on the ground surface. With such point-based networks we can never be sure that we are seeing the whole picture of the deformation, or that we aren't missing small-scale deformation outside our monitoring networks. About twenty years ago, InSAR (Interferometric Synthetic Aperture Radar) burst onto the scene as an effective tool for ground deformation monitoring. Scientists around the world were struck by the detail visible in the resulting images. One of the most essential advantages of InSAR technology is its use of an active radar system, which is capable of providing high-resolution imagery independent of daylight (day or night) and weather. The principle of an active radar, as opposed to a passive one, is that the Earth is illuminated by an artificial source of electromagnetic waves mounted on the satellite platform.

Radar interferometry is currently one of the fastest-developing areas of remote sensing. The progressive methods of this technology have become an important tool for topographic mapping and for precise measurements of deformation of the Earth's surface and man-made structures. InSAR techniques are effective in the geophysical monitoring of natural hazards (earthquakes, volcanic activity, landslides and subsidence), in the observation of dynamic components of the Earth's surface, in the generation of digital elevation models, and in land-cover classification and environmental surveys.

How does radar interferometry work? InSAR is a remote sensing modality for analysing radar images obtained by satellites carrying a Synthetic Aperture Radar (SAR). The radar antenna continuously transmits microwaves towards the Earth's surface and records the waves reflected back to the antenna. The received signal carries two important pieces of information, which together compose the radar image: the amplitude and the phase of the backscattered electromagnetic waves. The former tells us how much of the wave bounced back to the satellite (the signal intensity). This depends on how much of the wave was absorbed on the way and how much was reflected in the direction of the satellite. The second piece of information is the phase of the wave. Phase information also plays a significant role in other geodetic measurement methods (e.g. terrestrial distance measurement by phase-shift rangefinder, GNSS positioning, etc.).

When a wave travels through space, we can think of it as a hand on a clock. It starts at 12:00 when the wave leaves the satellite. The 'hand' (the phase) keeps running round and round the clock until the wave reaches the ground. When the wave hits the ground, the hand stops, indicating a certain 'time', or phase. When the wave comes back to the satellite, it tells the satellite where the hand stopped. Every point in a satellite image (a pixel) carries those two pieces of information: the intensity and the phase. The intensity can be used to characterize the composition and orientation of the surface the wave bounced off. Oil leaks on the sea, for instance, can be spotted in that way: they look much smoother than the surrounding water because their intensity differs from that of the water. The phase is used in another way. When the radar satellite revisits the exact same portion of the Earth, the phase image should be identical. If it is not, then something has changed on the ground.
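To make the clock analogy concrete, here is a minimal sketch (not from the article, with hypothetical numbers) of how the recorded phase depends on the two-way satellite-to-ground path, wrapped modulo one wavelength.

```python
import math

# Minimal sketch of the "clock" analogy: the phase recorded for a pixel
# depends on the two-way path (2 * range) between satellite and ground,
# wrapped to the interval [0, 2*pi).  All values below are hypothetical.

wavelength_m = 0.056          # assumed C-band wavelength (e.g. ~5.6 cm)
range_m = 850_000.123         # hypothetical satellite-to-ground distance

def received_phase(r, lam):
    """Phase of the backscattered wave for a two-way path of length 2*r."""
    return (4 * math.pi * r / lam) % (2 * math.pi)

print(received_phase(range_m, wavelength_m))
```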
By combining those two images, we can measure how much, and where, the ground has moved. Fig. 1: Principle of radar interferometry.

Interferometric exploitation of the observed data requires the processing of at least two radar images covering the same territory. Data acquisition therefore requires imaging exactly the same part of the Earth's surface twice. This can be done simultaneously, using two SAR antennas installed on a single platform as in the SRTM (Shuttle Radar Topography Mission), or using two SAR antennas on two different platforms, as in the TanDEM-X (TerraSAR-X add-on for Digital Elevation Measurements) mission. Sensing the Earth's surface is also possible using multiple orbit passes of the same satellite. From all of the above, the fundamental principle of InSAR can be outlined: by comparing the phase components of radar images of the same area, one can determine the relative heights of the terrain, and if the radar images are captured at different times, it is feasible to calculate the spatial deformation that occurred during this period. The primary products of InSAR technology are thus digital elevation models (DEMs) and maps of surface displacement, alongside results from atmospheric monitoring, thematic mapping (forests, floods, landslides, vegetation, etc.) and others. Fig. 2: Illustration of the SRTM (left) and TanDEM-X (right) missions.

The very first radar interferometry technique utilized for ground deformation monitoring is called Differential InSAR (DInSAR). The general principle of this method lies in the use of at least two radar images of the same area acquired at different times. By combining one image with the other, it is possible to distinguish changes that occurred during the time interval between their acquisitions. Changes or surface deformations are represented here as spatially continuous (areal) information. Fig. 3: The differential interferogram of the Bam earthquake (Iran). Combining two separate radar images of the same scene highlights the alterations in the scene: radar waves backscattered from ground features acquire a different signal phase, which manifests as colourful interference patterns, known as fringes, on the combined image. These fringes can be used to measure tiny changes in the landscape, down to an accuracy of a few millimetres. The interferogram shows the coseismic deformation as wrapped fringes (2.8 cm per colour cycle), which act like contour lines of the ground motion measured from the satellite radar imagery. The ground moved both towards the satellite in the south-east quadrant (colour scale change from +π to -π) and away from the satellite in the north-east quadrant (colour scale change from -π to +π).

A problem occurs when this information contains reflections of the radar waves from ground elements which, although changing, should not be interpreted as deformation (e.g. vegetation change, or variation in atmospheric conditions). For this reason, the Multi-Temporal InSAR (MTI) approach was developed. The MTI approach utilizes only reliable scattering ground elements that are stable in time (permanent/persistent scatterers). The deformation is then revealed only at a set of significant points, where the theoretical accuracy can be better than 1 mm (!).
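As a hedged illustration (not part of the article), the fringe spacing quoted above follows from the standard two-way phase relation: one full 2π cycle of differential phase corresponds to half a wavelength of motion along the line of sight, which for a C-band wavelength of about 5.6 cm gives roughly 2.8 cm per colour cycle.

```python
import math

# Sketch of how wrapped interferometric phase relates to line-of-sight
# displacement in DInSAR.  One full fringe (2*pi of phase) corresponds to
# half a wavelength of motion along the line of sight.

wavelength_m = 0.056                      # assumed C-band wavelength

def los_displacement(delta_phase_rad, lam=wavelength_m):
    """Line-of-sight displacement implied by an interferometric phase change."""
    return lam * delta_phase_rad / (4 * math.pi)

print(los_displacement(2 * math.pi))      # one fringe  -> ~0.028 m
print(los_displacement(math.pi / 2))      # quarter fringe -> ~0.007 m
```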
Multi-Temporal InSAR techniques. Satellite radar interferometry (InSAR) utilises a microwave signal transmitted towards the Earth's surface. A moving radar antenna onboard the satellite transmits the signal and simultaneously records the echo reflected back to the satellite's position. The 'phase' of the received signal is the primary component for the deformation analysis. Images are taken at successive times (T1, T2, …, Tn) in order to extract a precise deformation time series over each man-made object (a toy illustration of this time-series bookkeeping appears at the end of this article). The phase of the signal is essentially a measure of the change in satellite-to-target distance (R1, R2, …, Rn) from the first to the last (n-th) acquisition of the satellite over the same area (Fig. 2). Fig. 2: Principle of satellite radar interferometry (InSAR) (T1, T2, T3 – acquisition times; R1, R2, R3 – satellite-to-target distances; ΔR12 … ΔRn – measured changes in satellite-to-target distance).

In multi-temporal InSAR (MTI) techniques, the resulting deformations are revealed at a dense set of significant points. Multi-temporal InSAR is an advanced satellite radar interferometry method which utilises only reliable, temporally stable scattering ground elements for movement estimation with millimetric precision. The signal backscattered from these points must remain stable and constant over long periods. MTI techniques are therefore well suited to deformation monitoring in urban areas, as buildings and other urban elements are good reflectors of radar waves, and the change or subsequent deterioration of the reflected signal is smaller. Such features are widely available over a city, but less present in non-urban areas. The measurements should be repeated under exactly the same conditions in order to reach millimetric accuracy in movement estimation. Phase-stable points can be used as a 'natural GPS network' to monitor terrain motion by analysing the phase history of each one.

The formidable advantage of space-borne Interferometric Synthetic Aperture Radar (InSAR) is that it monitors tiny displacements simultaneously over wide areas (whole cities in one image), at high resolution (up to 25 cm using X-band data such as TerraSAR-X or COSMO-SkyMed), and frequently (every 12 or 6 days, in some cases down to 12 hours), without the need for in-situ observations or special equipment in the areas of interest, while providing accuracy similar to traditional terrestrial techniques (e.g. levelling or GPS). InSAR as a remote sensing modality could complement and even outperform ground-based measurements, which in some cases tend to under-sample the displacement field in the spatial domain (GPS antennas are available only at specific points) or the temporal domain (e.g. the low frequency of levelling campaigns). Nevertheless, there are important limitations associated with the application of InSAR measurements, e.g.:
- a minimum number of images is required for the processing (~20 images), which can increase the overall cost of the analysis when commercial data are used;
- deformation from one orbit track is measured only along the line of sight of the satellite;
- geometric distortions of the radar images (e.g. shadow) might cause difficulties in mountainous areas; and
- lower (or no) sensitivity to very rapid or large changes (built-up areas, construction works), especially those exceeding the phase change detectable within one pixel (for Sentinel-1A/B this is roughly 0.5 cm per day).
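Here is the toy illustration promised above — a hypothetical sketch, not from the article, of how a series of (unwrapped) phase changes measured at T1…Tn for one stable point maps to a line-of-sight displacement time series.

```python
import math

# Hypothetical sketch: converting a series of unwrapped phase changes
# (one per acquisition, relative to the first image) into a line-of-sight
# displacement time series for a single phase-stable point.

wavelength_m = 0.056                          # assumed C-band wavelength
phase_rel_first = [0.0, 0.9, 2.1, 3.4, 5.0]   # hypothetical unwrapped phase (rad)

displacements_mm = [
    1000 * wavelength_m * phi / (4 * math.pi)  # metres -> millimetres
    for phi in phase_rel_first
]

print([round(d, 2) for d in displacements_mm])  # growing LOS displacement
```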
http://insar.space/insar-technology/
The purpose of today's lesson is to get students to move from expressing a rule in words (like they did in the previous lesson, Introduction to Functions) to expressing a rule symbolically. You can start by posting a table that uses a two-operation rule on the board. Ask students if they can find the pattern and ask them to add two more values to the table. Draw their attention to the top of the table where it says In and Out. Have them notice that next to the In, it now says (x), and next to the Out, it now says (y). Tell students that today they will be doing some translating. That is, they will be translating some rules in words (like those they worked on yesterday) into algebra. They can start with the opening example.

After students have found the pattern and added two more values to the table, ask them to write the rule in words using the In/Out language from yesterday. They might say, for example, "the Out is 2 times the In, plus 1." Now tell them they will begin translating to algebra. We are going to call the In x (let students know they could really call it any letter) and the Out, y. Ask them, "How do we say y is 2 times x, plus 1, in algebra?" Students may write something like x * 2 + 1. Now is a good time to explain some standard algebraic notation and terminology if students don't have it already. I like to say that yes, x * 2 means x times 2, but that in algebra the standard way to write that is 2x. You can talk about x and y as being variables here, and introduce variable as a vocabulary word. You can also tell students that the 2 in front of the x is called a coefficient and the 1 at the end of the equation is a constant term. (A tiny sketch of this rule-to-equation translation appears below.)

Let students know that in today's activity, they will be following this same process with some new In/Out tables so they have a chance to practice this kind of translation into algebra. Let students get working in small groups or in pairs on some In/Out tables that they will represent as a rule in words and as an equation. Things to watch for as you circulate:

I like to end class with a matching activity that really drives home the idea of the multiple representations in this lesson. Ask students to come up with a rule of their own. Once they have their rule, ask them to represent it in a table and as an equation. Give each student three strips of newsprint (or you can use pieces of construction paper). Tell students to write their rule in words on one piece of paper, their table on another, and their equation on the third. You might tell students to use different marker colors or to disguise their writing so it is not too easy to see which rule matches with which table and equation. While they are working, you can designate three sections on a board or a wall in your room. Label the three sections "Problem in Words," "Equation," and "Table." As students finish, collect their strips of paper, mix them up, and then tape them up on the wall under the appropriate sections, but in a random order. Ask students to match up the tables, rules, and equations. It may be interesting to note where students start and which representations they are most comfortable working between. If they need guidance, you could choose one of the equations and ask which rule in words matches with that equation. As students work, you can ask them what details make matching easy or hard. You can also ask students how they know their matches are correct.
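As promised above, here is a small sketch — not part of the lesson materials — of the translation the lesson targets: the rule in words "the Out is 2 times the In, plus 1" written symbolically as y = 2x + 1 and used to generate an In/Out table.

```python
# The rule in words, written symbolically as y = 2x + 1, then used to
# build an In/Out table.

def rule(x):
    return 2 * x + 1        # coefficient 2, constant term 1

for x in range(1, 6):       # In values 1..5
    print(x, rule(x))       # In/Out pairs: 1 3, 2 5, 3 7, 4 9, 5 11
```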
Since you have been working as a whole group at the end of class, you can close class by asking students to report out verbally on the following question: What do you ask yourself when you are looking for rules in a table? You might want to record these student ideas as Tips that others can refer to when they are stumped! The matching activity at the end of this lesson is derived from EMPower, Seeking Patterns, Building Rules: Schmitt, M.J., Steinback, M., Donovan, T., & Merson, M. (2005). Seeking Patterns, Building Rules. Emeryville, CA: Key Curriculum Press.
https://betterlesson.com/lesson/448019/tables-words-and-equations
1. Why do we say there is one world ocean? What about the Atlantic and Pacific oceans, or the Baltic and Mediterranean seas? Traditionally, we have divided the ocean into artificial compartments called oceans and seas, using the boundaries of continents and imaginary lines such as the equator. In fact there are few dependable natural divisions, only one great mass of water. Because of the movement of continents and ocean floors (about which you'll learn more in Chapter 3), the Pacific and Atlantic Oceans and the Mediterranean and Baltic Seas, so named for our convenience, are in reality only temporary features of a single world ocean. In this book we refer to the world ocean, or simply the ocean, as a single entity, with subtly different characteristics at different locations but with very few natural partitions. This view emphasizes the interdependence of ocean and land, life and water, atmospheric and oceanic circulation, and natural and man-made environments. 2. Which is greater: the average depth of the ocean or the average elevation of the continents? If Earth's contours were leveled to a smooth ball, the ocean would cover it to a depth of 2,686 meters (8,810 feet). The volume of the world ocean is presently 11 times the volume of land above sea level—average land elevation is only 840 meters (2,772 feet), but average ocean depth is 4½ times greater! 3. Can the scientific method be applied to speculations about the natural world that are not subject to test or observation? Science is a systematic process of asking questions about the observable world, and testing the answers to those questions. The scientific method is the orderly process by which theories are verified or rejected. It is based on the assumption that nature "plays fair"—that the answers to our questions about nature are ultimately knowable as our powers of questioning and observing improve. By its very nature, the scientific method depends on the application of specific tests to bits and pieces of the natural world, and on explaining, by virtue of these tests, how the natural world will react in a given situation. Hypotheses and theories are devised to explain the outcomes. The tests must be repeatable—that is, other researchers at other sites must be able to replicate the experiments (tests) with similar results. If replication is impossible, or if other outcomes are observed, the hypotheses and theories are discarded and replaced with new ones. Figure 1.4 shows the process. Nothing is ever proven absolutely true by the scientific method. Hypotheses and theories may change as our knowledge and powers of observation change; thus all scientific understanding is tentative. The conclusions about the natural world that we reach by the process of science may not always be popular or immediately embraced, but if those conclusions consistently match observations, they may be considered true. Can these methods be applied to speculations about the natural world that are not subject to test or observation? By definition, they cannot. 4. What are the major specialties within marine science? Marine science draws on several disciplines, integrating the fields of geology, physics, biology, chemistry, and engineering as they apply to the ocean and its surroundings. Marine geologists focus on questions such as the composition of the inner Earth, the mobility of the crust, and the characteristics of seafloor sediments.
Some of their work touches on areas of intense scientific and public concern, including earthquake prediction and the distribution of valuable resources. Physical oceanographers study and observe wave dynamics, currents, and ocean-atmosphere interaction. Their predictions of long-term climate trends are becoming increasingly important as pollutants change Earth's atmosphere. Marine biologists work with the nature and distribution of marine organisms, the impact of oceanic and atmospheric pollutants on the organisms, the isolation of disease-fighting drugs from marine species, and the yields of fisheries. Chemical oceanographers study the ocean's dissolved solids and gases, and the relationships of these components to the geology and biology of the ocean as a whole. Marine engineers design and build oil platforms, ships, harbors, and other structures that enable us to use the ocean wisely. Other marine specialists study weather forecasting, ways to increase the safety of navigation, methods to generate electricity, and much more. Virtually all marine scientists specialize in one area of research, but they also must be familiar with related specialties and appreciate the linkages between them. 5. Where did the Earth's heavy elements come from? As Carl Sagan used to say, "We are made of starstuff." Heavy elements (iron, gold, uranium) are constructed in supernovas. The dying phase of a massive star's life begins when its core—depleted of hydrogen—collapses in on itself. This rapid compression causes the star's internal temperature to soar. When the infalling material can no longer be compressed, the energy of the inward fall is converted to a cataclysmic expansion called a supernova. The explosive release of energy in a supernova is so sudden that the star is blown to bits and its shattered mass accelerates outward at nearly the speed of light. The explosion lasts only about 30 seconds, but in that short time the nuclear forces holding apart individual atomic nuclei are overcome and atoms heavier than iron are formed. The gold in your rings, the mercury in a thermometer, and the uranium in nuclear power plants were all created during such a brief and stupendous flash. The atoms produced by a star through millions of years of orderly fusion, and the heavy atoms generated in a few moments of unimaginable chaos, are sprayed into space. Every chemical element heavier than hydrogen—and that includes most of the atoms that make up the planets, the ocean, and living creatures—was manufactured by the stars. 6. Where did Earth's surface water come from? Though most of Earth's water was present in the solar nebula during the accretion phase, recent research suggests that a barrage of icy comets or asteroids from the outer reaches of the solar system colliding with Earth may also have contributed a portion of the accumulating mass of water, this ocean-to-be. Earth's surface was so hot that no water could collect there, and no sunlight could penetrate the thick clouds. (A visitor approaching from space 4.4 billion years ago would have seen a vapor-shrouded sphere blanketed by lightning-stroked clouds.) After millions of years the upper clouds cooled enough for some of the outgassed water to form droplets. Hot rains fell toward Earth, only to boil back into the clouds again. As the surface became cooler, water collected in basins and began to dissolve minerals from the rocks. Some of the water evaporated, cooled, and fell again, but the minerals remained behind. The salty world ocean was gradually accumulating. 7.
Considering what must happen to form them, do you think ocean worlds are relatively abundant in the galaxy? Why or why not? I wouldn't expect to encounter many. For starters, let's look at stars. Most stars visible to us are members of multiple-star systems. If the Earth were in orbit around a typical multiple-star system, we would be close to at least one of the host stars at certain places in our orbit, and too far away at others. Also, not all stars—in single or multiple systems—are as stable and steady in energy output as our sun. If we were in orbit around a star that grew hotter and cooler at intervals, our situation would be radically different than it is at the moment. Next, let's look at orbital characteristics. Our Earth is in a nearly circular orbit at just the right distance from the sun to allow liquid water to exist over most of the surface through most of the year. Next, consider our planet's cargo of elements. We picked these up during the accretion phase. At our area of orbit there was an unusually large amount of water (or chemical materials that would lead to the formation of water). So, with a stable star, a pleasant circular orbit that is well placed, and suitable and abundant raw materials, we are a water planet. This marvelous combination is probably not found in many places in the galaxy. But a galaxy is a very, very large place. 8. Earth has had three distinct atmospheres. Where did each one come from, and what were the major constituents and causes of each? Earth's first atmosphere formed during the accretion phase, before our planet had a solid surface. Methane and ammonia with some water vapor and carbon dioxide—mixtures similar to those seen in the outer planets and swept from the solar nebula—were probably the most abundant gases. Radiation from the energetic young sun stripped away our planet's first atmosphere, but gases trapped inside the planet rose to the surface during the density stratification process to form a second atmosphere. This process was aided by internal heating and by the impact of a planetary body somewhat larger than Mars. Infalling comets may have contributed some water to the Earth during this phase. This second atmosphere contained very little free oxygen. The evolution of photosynthetic organisms—single-celled autotrophs and green plants—slowly modified the second atmosphere into the third (and present) oxygen-rich mixture. 9. How old is Earth? When did life arise? On what is that estimate based? Earth's first hard surface is thought to have formed about 4.6 billion years ago. This age estimate is derived from interlocking data obtained by many researchers using different sources. One source is meteorites—chunks of rock and metal formed at about the same time as the sun and planets and out of the same cloud. Many have fallen to Earth in recent times. We know from signs of radiation within these objects how long it has been since they were formed. That information, combined with the rate of radioactive decay of unstable atoms in meteorites, moon rocks, and the oldest rocks on Earth, allows astronomers to make reasonably accurate estimates of how long ago these objects formed. How long ago might life have begun? The oldest fossils yet found, from northwestern Australia, are between 3.4 and 3.5 billion years old. They are remnants of fairly complex bacteria-like organisms, indicating that life must have originated even earlier, probably only a few hundred million years after a stable ocean formed.
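As a hedged aside (not from the textbook), the radioactive-decay reasoning mentioned above can be made concrete: if a sample starts with no daughter atoms, the measured daughter-to-parent ratio D/P and the known half-life give the age via t = (half-life / ln 2) · ln(1 + D/P). The ratio used below is hypothetical.

```python
import math

# Radiometric-dating sketch: age from a measured daughter/parent ratio and
# a known half-life, t = (half_life / ln 2) * ln(1 + D/P).
# The U-238 half-life is ~4.47 billion years; the ratio below is hypothetical.

U238_HALF_LIFE_GA = 4.47

def radiometric_age_ga(daughter_to_parent_ratio, half_life_ga=U238_HALF_LIFE_GA):
    return (half_life_ga / math.log(2)) * math.log(1 + daughter_to_parent_ratio)

print(radiometric_age_ga(1.0))   # a 1:1 ratio gives one half-life, ~4.47 Ga
```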
Evidence of an even more ancient beginning has been found in the form of carbonaceous residues in some of the oldest rocks on Earth, from Akilia Island near Greenland. These 3.85 billion year old specks of carbon bear a chemical fingerprint that researchers feel could only have come from a living organism. Life and Earth have grown old together; each has greatly influenced the other. 10. How did the moon form? About 30 million years after its formation, a planetary body somewhat larger than Mars smashed into the young Earth and broke apart. The metallic core fell into Earth’s core and joined with it, while most of the rocky mantle was ejected to form a ring of debris around Earth. The debris began condensing soon after and became our moon. 11. What is biosynthesis? Where and when do researchers think it might have occurred on our planet? Could it happen again this afternoon? Biosynthesis is the term given to the early evolution of living organisms from the simple organic building blocks present on and in the early Earth. The early steps in biosynthesis are still speculative. Planetary scientists suggest that the sun was faint in its youth. It put out so little heat that the ocean may have been frozen to a depth of around 300 meters (1,000 feet). The ice would have formed a blanket that kept most of the ocean fluid and relatively warm. Periodic fiery impacts by asteroids, comets, and meteor swarms could have thawed the ice, but between batterings it would have reformed. In 2002, chemists Jeffrey Bada and Antonio Lazcano suggested that organic material may have formed and then been trapped beneath the ice—protected from the atmosphere, which contained chemical compounds capable of shattering the complex molecules. The first self-sustaining – living – molecules might have arisen deep below the layers of surface ice, on clays or pyrite crystals at cool mineral-rich seeps on the ocean floor. The oldest fossils yet found, from northwestern Australia, are between 3.4 and 3.5 billion years old. A similar biosynthesis could not occur today. Living things have changed the conditions in the ocean and atmosphere, and those changes are not consistent with any new origin of life. For one thing, green plants have filled the atmosphere with oxygen, a compound that can disrupt any unprotected large molecule. For another, some of this oxygen (as ozone) now blocks much of the ultraviolet radiation from reaching the surface of the ocean. And finally, the many tiny organisms present today would gladly scavenge any large organic molecules as food. 12. Marine biologists sometimes say that all life-forms on Earth, even desert lizards and alpine plants, are marine. Can you think why? All life on Earth shares a basic underlying biochemistry. All living organisms on this planet are water-based, carbon-built, protein-structured, nucleic acid-moderated entities. All use the same energy compound (ATP) as a source of immediate energy. They appear to have had an ancient common oceanic origin—perhaps a self-replicating molecule of a nucleic acid. Scrape away the scales and feathers, the fur and fins, and look at the chemistry. Always the same. Always marine. 13. How do we know what happened so long ago? Science is a systematic process of asking questions about the observable world by gathering and then studying information. Science interprets raw information by constructing a general explanation with which the information is compatible. 
The information presented in this chapter may change as our knowledge and powers of observation change. Interlocking information concerning the distance and behavior of stars, the age-dating of materials on Earth, the fossil record of life here, and a myriad of other details combine to suggest strongly (not absolutely prove) the details I have written in this text. 14. What is density stratification? What does it have to do with the present structure of Earth? Density is mass per unit of volume. Early in its formation, the still-fluid Earth was sorted by density—heavy elements and compounds were driven by gravity towards its center, while lighter gases rose to the outside. The resulting layers (strata) are arranged with the densest at and near the Earth's center and the least dense as the atmosphere. The process of density stratification lasted perhaps 100 million years, and ended 4.6 billion years ago with the formation of Earth's first solid crust. For a preview of the result, see Figure 3.8. 1. How did the Library at Alexandria contribute to the development of marine science? What happened to most of the information accumulated there? Would you care to speculate on the historical impact the Library might have had if it had not been destroyed? The great Library at Alexandria constituted history's greatest accumulation of ancient writings. As we have seen, the characteristics of nations, trade, natural wonders, artistic achievements, tourist sights, investment opportunities, and other items of interest to seafarers were catalogued and filed in its stacks. Manuscripts describing the Mediterranean coast were of great interest. Traders quickly realized the competitive benefit of this information. Knowledge of where a cargo of olive oil could be sold at the greatest profit, or where the market for finished cloth was most lucrative, or where raw materials for metalworking could be obtained at low cost, was of enormous competitive value. Here perhaps was the first instance of cooperation between a university and the commercial community, a partnership that has paid dividends for science and business ever since. After their market research was completed, it is not difficult to imagine seafarers lingering at the Library to satisfy their curiosity about non-commercial topics. And there would have been much to learn! In addition to Eratosthenes' discovery of the size of the Earth (about which you read in the chapter), Euclid systematized geometry; the astronomer Aristarchus of Samos argued that Earth is one of the planets and that all planets orbit the sun; Dionysius of Thrace defined and codified the parts of speech (noun, verb, etc.) common to all languages; Herophilus, a physiologist, established that the brain was the seat of intelligence; Heron built the first steam engines and gear trains; and Archimedes discovered (among many other things) the principles of buoyancy on which successful shipbuilding is based. The last Librarian was Hypatia, the first notable woman mathematician, philosopher, and scientist. In Alexandria she was a symbol of science and knowledge, concepts the early Christians identified with pagan practices. After years of rising tensions, in 415 A.D. a mob brutally murdered her and burned the Library with all its contents. Most of the community of scholars dispersed, and Alexandria ceased to be a center of learning in the ancient world.
The academic loss was incalculable, and trade suffered because ship owners no longer had a clearing house for updating the nautical charts and information upon which they had come to depend. All that remains of the Library today is a remnant of an underground storage room. We shall never know the true extent and influence of its collection of over 700,000 irreplaceable scrolls. Historians are divided on the reasons for the fall of the Library. But we know there is no record that any of the Library's scientists ever challenged the political, economic, religious, or social assumptions of their society. Researchers did not attempt to explain or popularize the results of their research, so residents of the city had no understanding of the momentous discoveries being made at the Library at the top of the hill. With very few exceptions, the scientists did not apply their discoveries to the benefit of mankind, and many of the intellectual discoveries had little practical application. The citizens saw no practical value to such an expensive enterprise. Religious strife added elements of hostility and instability. As Carl Sagan pointed out, "When, at long last, the mob came to burn the Library down, there was nobody to stop them."1 As for speculations on historical impact had the Library survived, some specialists have suggested that much of the intellectual vacuum of the European Middle Ages might have been “sidestepped,” in a sense, if the information processing and dissemination processes centered at the Library had continued. Instead of the subsequent fragmentation and retraction, one wonders if continued academic stimulation might have reinvigorated the West. Also, had the Library lasted longer, one wonders if researchers there might have discovered the intellectual achievements of China, a civilization much advanced at the time. 2. What were the stimuli to Polynesian colonization? How were the long voyages accomplished? The ancestors of the Polynesians spread eastward from Southeast Asia or Indonesia in the distant past. Although experts vary in their estimates, there is some consensus that by 30,000 years ago New Guinea was populated by these wanderers and by 20,000 years ago the Philippines were occupied. By around 500 B.C. the so-called cradle of Polynesia—Tonga, Samoa, the Marquesas and the Society islands—was settled and the Polynesian cultures formed. For a long and evidently prosperous period the Polynesians spread from island to island until the easily accessible islands had been colonized. Eventually, however, overpopulation and depletion of resources became a problem. Politics, intertribal tensions, and religious strife shook their society. When tensions reached the breaking point, groups of people scattered in all directions from the Marquesas and Society Islands during a period of explosive dispersion. Between 300 and 600 A.D. Polynesians successfully colonized nearly every inhabitable island within the vast triangular area shown in Figure 2.5. Easter Island was found against prevailing winds and currents, and the remote islands of Hawaii were discovered and occupied. These were among the last places on Earth to be populated. Large dual-hulled sailing ships, some capable of transporting up to 100 people, were designed and built for the voyages. New navigation techniques were perfected that depended on the positions of stars barely visible to the north. New ways of storing food, water, and seeds were devised. 
In that anxious time the Polynesians honed and perfected their seafaring knowledge. To a skilled navigator a change in the rhythmic set of waves against the hull could indicate an island out of sight over the horizon. The flight tracks of birds at dusk could suggest the direction of land. The positions of the stars told stories, as did the distant clouds over an unseen island. The smell of the water, or its temperature, or salinity, or color, conveyed information, as did the direction of the wind relative to the sun, and the type of marine life clustering near the boat. The sunrise colors, sunset colors, the hue of the moon—every nuance had meaning, every detail had been passed in ritual from father to son. The greatest Polynesian minds were navigators, and reaching Hawaii was their greatest achievement. 3. Prince Henry the Navigator only took two sea voyages, yet is regarded as an important figure in the history of oceanography. Why? Prince Henry the Navigator, third son of the royal family of Portugal, was a European visionary who thought ocean exploration held the key to great wealth and successful trade. Prince Henry established a center at Sagres for the study of marine science and navigation. Although he personally was not well traveled, captains under his patronage explored from 1451 to 1470, compiling detailed charts wherever they went. Henry’s explorers pushed south into the unknown and opened the west coast of Africa to commerce. He sent out small, maneuverable ships designed for voyages of discovery and manned by well-trained crews. For navigation, his mariners used the compass—an instrument (invented in China in the fourth century B.C.) that points to magnetic north. Henry’s students knew the Earth was round (but because of the errors of Claudius Ptolemy they were wrong in their estimation of its size). 4. What were the main stimuli to European voyages of exploration during the Age of Discovery? Why did it end? There were two main stimuli: (1) encouragement of trade, and (2) military one-upsmanship. Trade between east and west had long been dependent on arduous and insecure desert caravan routes through the central Asian and Arabian deserts. This commerce was cut off in 1453 when the Turks captured Constantinople. An alternate ocean route was desperately needed. As we have seen, Prince Henry of Portugal thought ocean exploration held the key to great wealth and successful trade. Henry's explorers pushed south into the unknown and opened the West coast of Africa to commerce. He sent out small, maneuverable ships designed for voyages of discovery and manned by well-trained crews. Christopher Columbus was familiar with Prince Henry's work, and "discovered" the New World quite by accident while on a mission to encourage trade. His intention was to pioneer a sea route to the rich and fabled lands of the east made famous more than 200 years earlier in the overland travels of Marco Polo. As "Admiral of the Ocean Sea," Columbus was to have a financial interest in the trade routes he blazed. As we saw, Columbus never appreciated the fact that he had found a new continent. He went to his grave confident that he had found islands just off the coast of Asia. Charts that included the properly-identified New World inspired Ferdinand Magellan, a Portuguese navigator in the service of Spain, to believe that he could open a westerly trade route to the Orient. In the Philippines, Magellan was killed and his crew decided to continue sailing west around the world. 
Only 18 of the original 250 men survived, returning to Spain three years after they set out. But they had proved it was possible to circumnavigate the globe. The seeds of colonial expansion had been planted. Later, the empires of Spain, Holland, Britain, and France pushed into the distant oceanic reaches in search of lands to claim. Military strength might depend on good charts, knowledge of safe harbors in which to take on provisions, and friendly relations with the locals. Exploration was undertaken to insure these things. But that gets ahead of the story. The Magellan expedition's return to Spain in 1522—the end of the first circumnavigation—technically marks the end of the first age of European discovery. 5. Did Columbus discover North America? Who did? Were the Chinese involved? Columbus never saw North America. North America was “discovered” by people following migrating game across the Bering Straits land bridge about 20,000 years ago, during the last ice age. As for the Chinese, Gavin Menzies’ 2002 popular book “1421: The Year China Discovered America,” has caused an intensive re-examination of the voyages of Zheng He and his subordinates that you read about in this chapter. Menzies makes a compelling (though far from bulletproof) case that part of the Ming fleet continued westward around the tip of Africa and into the Atlantic, eventually sighting both the Atlantic and Pacific coasts of North America as well as the Antarctic continent. Menzies bases his argument on cartographic evidence, artifacts, and inferences in the logs of European explorers that they were following paths blazed by someone who had gone before. The equipment was up to the task (see Figure 2.9), but the jury is out on whether these discoveries were made as Menzies claims. 6. What were the contributions of Captain James Cook? Does he deserve to be remembered more as an explorer or as a marine scientist? Captain James Cook's contributions to marine science are justifiably famous. Cook was a critical link between the vague scientific speculations of the first half of the eighteenth century and the industrial revolution to come. He pioneered the use of new navigational techniques, measured and charted countless coasts, produced maps of such accuracy that some of their information is still in use, and revolutionized the seaman's diet to eliminate scurvy. His ship-handling in difficult circumstances was legendary, and his ability to lead his crew with humanity and justice remains an inspiration to naval officers to this day. While Captain Cook received no formal scientific training, he did learn methods of scientific observation and analysis from Joseph Banks and other researchers embarked on HMS Endeavour. Because his observations are clear and well recorded, and because his speculations on natural phenomena are invariably based on scientific analysis (rather than being glossed over or ascribed to supernatural forces), some consider him the first marine scientist.2 But, to be rigorously fair, perhaps his explorational and scientific skills should be given equal weighting. 7. What was the first purely scientific oceanographic expedition, and what were some of its accomplishments? The expeditions of Cook, Wilkes, the Rosses, de Bougainville, Wallis, and virtually all other runners-up to HMS Challenger were multi-purpose undertakings: military scouting, flag-waving, provision hunting, and trade analysis were coupled with exploration and scientific research. 
The first sailing expedition devoted completely to marine science was conceived by Charles Wyville Thomson, a professor of natural history at Scotland's University of Edinburgh, and his Canadian-born student, John Murray. They convinced the Royal Society and the British Government to provide a Royal Navy ship and trained crew for a "prolonged and arduous voyage of exploration across the oceans of the world." Thomson and Murray even coined a word for their enterprise: oceanography. HMS Challenger, the 2,306-ton steam corvette chosen for the expedition, set sail on 7 December 1872 on a four-year voyage that took them around the world and covered 127,600 kilometers (79,300 miles). Although the Captain was a Royal Naval officer, the six-man scientific staff directed the course of the voyage. The scientists also took salinity, temperature, and water density measurements during the expedition's many depth soundings. Each reading contributed to a growing picture of the physical structure of the deep ocean. They completed at least 151 open-water trawls, and stored 77 samples of seawater for detailed analysis ashore. The expedition collected new information on ocean currents, meteorology, and the distribution of sediments; the locations and profiles of coral reefs were charted. Thousands of pounds of specimens were brought to British museums for study. Manganese nodules, brown lumps of mineral-rich sediments, were discovered on the seabed, sparking interest in deep-sea mining. This first pure oceanographic investigation was an unqualified success. The discovery of life in the depths of the oceans stimulated the new science of marine biology. The scope, accuracy, thoroughness, and attractive presentation of the researchers' written reports made this expedition a high point in scientific publication. The Challenger Report, the record of the expedition, was published between 1880 and 1895 by Sir John Murray in a well-written and magnificently illustrated 50-volume set; it is still used today. The Challenger expedition remains history's longest continuous scientific oceanographic expedition. 8. Who was probably the first person to undertake the systematic study of the ocean as a full-time occupation? Are his contributions considered important today? Matthew Maury is a likely candidate. A Virginian and an officer (at different times) in both the United States and Confederate States navies, Maury was the first person to sense the worldwide pattern of surface winds and currents. Based on an analysis undertaken while working full-time for the Bureau of Charts and Instruments, he produced a set of directions for sailing great distances more efficiently. Maury's sailing directions quickly attracted worldwide notice: he had shortened the passage for vessels traveling from the American east coast to Rio de Janeiro by 10 days, and to Australia by 20. His work became famous in 1849 during the California gold rush—his directions made it possible to save 30 days around Cape Horn to California. Applicable U.S. charts still carry the inscription, "Founded on the researches of M.F.M. while serving as a lieutenant in the U.S. Navy." His crowning achievement, The Physical Geography of the Sea, a book explaining his discoveries, was published in 1855. Maury, considered by many to be the father of physical oceanography, was perhaps the first person to undertake the systematic study of the ocean as a full-time occupation. 9. What famous American is also famous for publishing the first image of an ocean current?
What was his motivation for studying currents? While serving as Postmaster General of the northern colonies, Benjamin Franklin noticed the peculiar fact that the fastest ships were not always the first to arrive—that is, hull speed did not always correlate with out-and-return time on the run to England. Naturally he wanted the most efficient transport of mail and freight, and the differences in ship speeds concerned him. Franklin's cousin, a Nantucket merchant named Tim Folger, noted Franklin's puzzlement and provided him with a rough chart of the "Gulph Stream" that he (Folger) had worked out. By staying within the stream on the outbound leg and adding its speed to their own, and by avoiding it on their return, captains could traverse the Atlantic much more quickly. It was Franklin who published, in 1769, the first chart of any current. 10. Sketch briefly the major developments in marine science since 1900. Do individuals, separate voyages, or institutions figure most prominently in this history? Individuals and voyages are most prominent in the first half of the twentieth century. Captain Robert Falcon Scott's British Antarctic expedition in HMS Discovery (1901-1904) set the stage for the golden age of Antarctic exploration. Roald Amundsen's brilliant assault on the South Pole (1911) demonstrated that superb planning and preparation paid great dividends when operating in remote and hazardous locales. The German Meteor expedition, the first "high tech" oceanographic expedition, showed how electronic devices and sophisticated sampling techniques could be adapted to the marine environment. And certainly the individual contributions of people like Jacques Cousteau and Emile Gagnan (inventors in 1943 of the "aqualung," the first scuba device) and Don Walsh and Jacques Piccard (pilots of Trieste to the ocean's deepest point in 1960) are important. But the undeniable success story of late twentieth-century oceanography is the rise of the great research institutions with broad state and national funding. Without the cooperation of research universities and the federal government (through agencies like the National Science Foundation, the National Oceanic and Atmospheric Administration, and others), the great strides that were made in the fields of plate tectonics, atmosphere-ocean interaction, biological productivity, and ecological awareness would have been much slower in coming. Along with the Sea Grant Universities (and their equivalents in other countries), establishments like the Scripps Institution of Oceanography, the Lamont-Doherty Earth Observatory, and the Woods Hole Oceanographic Institution, with their powerful array of researchers and research tools, will define the future of oceanography. 11. What is an echo sounder? The 1925 German Meteor expedition, which crisscrossed the South Atlantic for two years, introduced modern optical and electronic equipment to oceanographic investigation. Its most important innovation was the use of an echo sounder, a device which bounces sound waves off the ocean bottom, to study the depth and contour of the seafloor. The echo sounder revealed to Meteor scientists a varied and often extremely rugged bottom profile rather than the flat floor they had anticipated. (A small worked example of the echo-sounder arithmetic follows at the end of these answers.) 12. In your opinion, where does the future of marine science lie? Later in your textbook you'll find a detailed image of the Gulf Stream taken from space, and a photo of a ship taking a huge wave. Now ask yourself: "Where would I rather be to obtain and analyze information?
At Mission Control looking at the computer readouts, or on that pitching, heaving ship?" The universities and institutions mentioned above are faced with rising expenses and falling budgets; each must do more with less. Economical remote sensing devices, where appropriate, will continue to supplant on-site gathering of data. Satellite imagery and autonomous robots are more expensive to make and deploy, but because of multi-use designs their return per dollar may be much higher in the long run. Some seasick scientists may be replaced by stationary technicians reading computer-generated graphs, but I am still confident that on-site researchers will always be needed. Whatever happens, I am reasonably certain that the greatest progress in the immediate future will be made by consortia of universities and research institutions funded by state and federal agencies. Through decisions on the use of tax revenue, the voters will directly or indirectly determine the future of marine science. The future will be what we make it.
http://essaydocs.org/as-a-single-entity.html
Critical thinking can be as much a part of a math class as learning concepts, computations, formulas, and theorems activities that stimulate. When students think critically in mathematics, they make reasoned decisions or judgments about what to do and think in other words, students consider the. Encourage students to use critical thinking skills to evaluate, then solve, a variety of math enrichment problems topics include number theory, geometry,. An arts education can help children develop pattern recognition, critical thinking and math skills. Higher order thinking involves the learning of complex judgmental skills such as critical thinking and problem solving below) as a cpd activity to support understanding and use of higher order skills in the maths learning environment. Free 2-day shipping on qualified orders over $35 buy leapfrog leapstart 1st grade activity book: spy math and critical thinking at walmartcom. Higher order thinking math tasks and activities for first grade students challenge themselves with these fun tasks that get them thinking about. Critical thinking activities – analogies, logic & more with our yearly testing behind us, we are going to spend some time on another side of math — analogies,. Critical and creative thinking activities, grade 2 profound thinking requires both imagination and intellectual ideas to produce excellence in thinking we need. Now even young children can begin developing critical thinking skills differentiating instruction with menus: math (grades 3-5) (2nd ed) sku: 5369. Creativity calendar: weekly activities to encourage creativity by laura magner blueprint: math logic, critical thinking, and problem solving are. Paired learner activities are critical thinking activities that engage students in problem solving, collaboration, communication, connections, and mathematical. Critical thinking skills are first learned in grade school and become even more these types of puzzles stem from the mathematics fields of deduction it would probably make your daily and long-term activities a lot easier. Which of the following are hands-on concrete activities processes that can be included in critical thinking, mathematical thinking, and scientific thinking. This 8th grade math class uses a quick warm-up to clarify certain math concepts i'm thinking i could also use it to go over my favorite no on their homework. Above for a complete catalog of criticalthinkingcom teacher-ready activities) be a math detective: use clues in the story to answer the questions (grades. Keywords: critical thinking skills, contextual mathematical problem, formal this study is meant to think is all mental activity can be observed from the behavior. Critical thinking skills are necessary in the 21st century, and these worksheets cover a wide but who don't understand why the math problem works out the way it does lack critical thinking skills number and letter kissing activity lesson. Mathematics in early childhood helps children develop critical thinking and and activities that children are already engaged in to foster math learning in. Challenge students with these mind-bending, critical thinking puzzles first find the answers to the math problems and plug the answers into the puzzles. Students use critical thinking skills to answer place value and time problems animated math video: solve word problems involving money grade 4 in this video, alex learns sharpen logic with this printable thanksgiving activity students.
http://pnhomeworkephc.isomerhalder.us/math-critical-thinking-activities.html
I will start off by asking the essential question. In the last lesson we wrote equations to represent the constant of proportionality as m = y/x. I will then display the graph from this section's resource section and write m = y/x on the graph. I will point out that this graph represents m, the constant of proportionality, and we will find m. Then I'll ask: could we rewrite the equation to solve for y, the total number of square feet? We will test students' thinking using the graph. This should lead us finally to seeing the equation as y = mx.

We will then do the example problem. The point is to examine the table to find the constant of proportionality. I'll ask students what the total cost is based on n cookies; there is a blank spot on the table for this. Watch out here, as many students will give the cost for 5 cookies as opposed to writing n * 0.75 or 0.75 * n. As we work through parts of the example, students should be led to the equation in the form y = mx, or in this case t = 0.75n. Finally, students will be asked to describe in words the meaning of the equation. They should say something like: the total cost is equal to the constant of proportionality multiplied by the number of cookies. This shows students are able to reason abstractly and quantitatively (MP2) about the equation.

There is only one problem here. It is in the same structure as the example problem; the only difference is that the data is represented in a graph. It would be a good idea to briefly ask if the graph represents a proportional relationship. Students will have already learned that this graph has the characteristics of a proportional relationship, so it will be a nice review of the key characteristics. Students will work through this problem one part at a time, so that we can stop to discuss solutions.

There are two problems with 5 parts each. The main thing to be on the lookout for in both problems is whether students are first finding the correct unit rate. In the first problem we are specifically asked to find the cost per pizza, not how much pizza per dollar. The second problem is presented as a graph. Students may be tempted to say the cost per apple is $2. Here I could ask the students to tell me the meaning of cost per apple. When they are able to say that it means how much 1 apple costs, I can then ask where this can be found on the graph. They should find the value of 1 on the x-axis and then its corresponding price.

Before we begin the exit ticket we will summarize. We can look to part v of each problem: in each, the constant of proportionality is being multiplied by a quantity of items. Students then take a 4-part exit ticket that is identical in structure to all of the problems explored today. I view questions i-iii as a scaffold to get students to answer iv correctly. That being said, a successful student should be able to answer all 4 questions.
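To make the y = mx step concrete, here is a minimal sketch (Python) of the same reasoning: check that a table is proportional, read off the constant of proportionality, and use the resulting equation t = 0.75n to fill in the blank row. The table values are invented, but they match the $0.75-per-cookie rate in the example.

```python
# Check a table for proportionality and build the equation t = m * n.
# The rows below are invented, assuming the $0.75-per-cookie rate from the example.
table = [(2, 1.50), (4, 3.00), (8, 6.00)]   # (n cookies, total cost t)

ratios = [t / n for n, t in table]          # m = t / n for every row
assert all(abs(r - ratios[0]) < 1e-9 for r in ratios), "not proportional"

m = ratios[0]                               # constant of proportionality, 0.75
print(f"t = {m}n")                          # the equation in y = mx form

n = 5                                       # the blank row in the table
print(f"cost of {n} cookies: ${m * n:.2f}") # 0.75 * 5 = 3.75
```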
https://betterlesson.com/lesson/520759/writing-equations-for-proportional-relationships
Types of Variable > Random Variable What is a Random Variable? In algebra you probably remember using variables like “x” or “y” which represent an unknown quantity like y = x + 1. You solve for the value of x, and x therefore represents a particular number (or set of numbers, if you’re talking about a function). Then you get to statistics and different kinds of variables are used, including random variables. These variables are still quantities, but unlike “x” or “y” (which are simply just numbers), random variables have distinct characteristics and behaviors. Random variables are denoted by capital letters If you see a lowercase x or y, that’s the kind of variable you’re used to in algebra. It refers to an unknown quantity or quantities. If you see an uppercase X or Y, that’s a random variable and it usually refers to the probability of getting a certain outcome. Random variables are associated with random processes A random process is (just like you would guess) an event or experiment that has a random outcome. For example: rolling a die, choosing a card, choosing a bingo ball, playing slot machines or any one of hundreds of thousands of other possibilities. It’s something you can’t exactly predict an outcome for; you might have a range of possibilities so you calculate the probability of a particular outcome. Random variables give numbers to outcomes of random events Random variables are numerical in the same way that x or y is numerical, except it is attached to a random event. Let’s take rolling a die as an example. It’s a random event, but you can quantify (i.e. give a number to) the outcome. Let’s say you wanted to know how many sixes you get if you roll the die a certain number of times. Your random variable, X could be equal to 1 if you get a six and 0 if you get any other number. This is just an example…you can define X and Y however you like (i.e. 2 if you roll a six and 9 if you don’t). A few more example of random variables: X = total of lotto numbers Y = number of open parking spaces in a parking lot Z = number of aces in a card hand Random variables are most often used in conjunction with a probability of a random event happening. Say you wanted to see if the probability of getting four aces in a hand when playing cards is less than 5 percent. You could write it as: P (getting four aces in a hand of 52 cards when four are dealt at a time <.05) = That can get kind of wordy, especially if you have to write it over and over. If you define the random variable, X getting four aces in a hand: X = getting four aces in a hand of 52 cards when four are dealt at a time ...then you can write: P (X<.05) ...because you've defined X. If you are familiar with computer programming, it's a very similar concept to defining variables in a programming language so that your later calculations can draw on those variables. The good news is that in elementary statistics or AP statistics, the random variables are usually defined for you, so you don’t have to worry about defining them yourself. Mean of a Random Variable The mean of a discrete random variable is the weighted mean of the values. The formula is: μx = x1*p1 + x2*p2 + … + x2*p2 = Σ xipi. In other words, multiply each given value by the probability of getting that value, then add everything up. For continuous random variables, there isn’t a simple formula to find the mean. You’ll want to look up the formula for the probability distribution your variables fall into. 
For example, the mean for the normal distribution is the center of the curve, while the mean for the uniform distribution on [a, b] is (a + b)/2.

Variance of a Random Variable: Overview
The formula for calculating the variance of a discrete random variable is:
σ² = Σ(xᵢ – μ)² f(xᵢ)
Note: This is also one of the AP Statistics formulas. Σ means to “add everything up” and f(xᵢ) is the probability of the value xᵢ. You might also see pᵢ instead of f(xᵢ), but they mean the same thing.

Variance of a Random Variable: Steps
Sample problem: Find the variance of X for the following set of probability distribution data, which represents the number of misshapen pizzas for every 100 pizzas produced in a certain factory:
x: 2, 3, 4, 5, 6
f(x): 0.01, 0.25, 0.4, 0.3, 0.04
Step 1: Multiply each value of x by f(x) and add them up to find the mean, μ:
2 * 0.01 + 3 * 0.25 + 4 * 0.4 + 5 * 0.3 + 6 * 0.04 = 4.11
Step 2: Use the variance formula to find the variance. This time we’re going to subtract the mean, μ, from each x-value, square it, and then multiply by the f(x) values:
σ² = Σ(xᵢ – μ)² f(xᵢ) = (2 – 4.11)²(0.01) + (3 – 4.11)²(0.25) + (4 – 4.11)²(0.4) + (5 – 4.11)²(0.3) + (6 – 4.11)²(0.04) ≈ 0.74
The variance of the random variable is 0.74.
Tip: It is possible to calculate the variance of a random variable that’s continuous, but that requires knowledge of calculus, which is beyond elementary statistics. If you do know calculus, the variance of a continuous random variable is σ² = ∫(x – μ)² f(x) dx, where f(x) is the probability density function.

Example 2: Variance of a Discrete Random Variable (Probability Table)
Question: Find the variance for the following data, giving the probability (p) of a certain percent increase in stocks 1, 2, and 3 (from Step 1, the table is: –4.00% with p = 0.22, 5.00% with p = 0.43, and 16.00% with p = 0.35):
Step 1: Find the expected value (which equals the mean of the distribution):
(-4.00% * 0.22) + (5.00% * 0.43) + (16.00% * 0.35) = 6.87%
Step 2: Subtract the mean from each X-value, then square the results:
(-4.00% – 6.87%)² = 118.1569
(5.00% – 6.87%)² = 3.4969
(16.00% – 6.87%)² = 83.3569
Step 3: Multiply the results in Step 2 by their associated probabilities (from the table):
118.1569 * 0.22 = 25.9945
3.4969 * 0.43 = 1.5037
83.3569 * 0.35 = 29.1749
Step 4: Add the results from Step 3 together:
25.9945 + 1.5037 + 29.1749 = 56.67 (in squared-percent units)

Binomial Random Variable
A binomial random variable is a count of the number of successes in a binomial experiment. For a variable to be classified as a binomial random variable, the following conditions must all be true:
- There must be a fixed sample size (a certain number of trials).
- For each trial, the success must either happen or it must not.
- The probability of success must be exactly the same for each trial.
- Each trial must be an independent event.
Examples of binomial random variables:
- The number of heads when you flip a fair coin 30 times.
- Number of winning scratch-off lottery tickets when you purchase 20 of the same type.
- Number of people who are right-handed in a random sample of 200 people.
- Number of people who respond “yes” to whether they voted for Obama in the 2012 election.
- Number of Starbucks customers in a sample of 40 who prefer house coffee to Frappuccinos.
Two important characteristics of a binomial distribution (random binomial variables have a binomial distribution):
- n = a fixed number of trials.
- p = probability of success for each trial. For example, tossing a coin ten times to see how many heads you flip: n = 10, p = 0.5 (because you have a 50% chance of flipping a head).
- If you aren’t counting something, then it isn’t a binomial random variable.
- The number of trials in your experiment must be fixed.
For example, “the number of times you roll a die before rolling a 3” is not a binomial random variable, because there is an indefinite number of trials. On the other hand, rolling a die 30 times and counting how many times you roll a 3 is a binomial random variable.
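As a quick check of the formulas above, the short Python sketch below reproduces the misshapen-pizza example (μ = Σxᵢpᵢ and σ² = Σ(xᵢ – μ)²pᵢ) and then simulates one value of a binomial random variable, the number of heads in ten fair coin tosses. The probability table is the one from the worked example; everything else is illustrative.

```python
import random

# Discrete random variable from the worked example above:
# X = number of misshapen pizzas per 100, with its probability table.
x = [2, 3, 4, 5, 6]
p = [0.01, 0.25, 0.40, 0.30, 0.04]          # the probabilities sum to 1

mean = sum(xi * pi for xi, pi in zip(x, p))                # 4.11
var = sum((xi - mean) ** 2 * pi for xi, pi in zip(x, p))   # about 0.74
print(f"mean = {mean:.2f}, variance = {var:.2f}")

# Binomial random variable: the number of heads in n = 10 fair coin tosses.
n, p_success = 10, 0.5
heads = sum(random.random() < p_success for _ in range(n))
print(f"heads in {n} tosses: {heads}")
```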
https://www.statisticshowto.datasciencecentral.com/random-variable/
This distribution is called normal because so many natural phenomena approximately follow it. The standard normal distribution is symmetric, so the area between the mean and a positive z-score is exactly the same as the area between the mean and the negative z-score of the same value (ignore the negative sign). The mean, median, and mode of a normal distribution are equal. The normal distribution is the most popular way of describing random events: it is a two-parameter distribution, specified completely by its mean and standard deviation.

Using the Table of Areas under the Normal Curve: the z-score. One determines the probability of occurrence of a random event in a normal distribution by consulting a table of areas under the normal curve (e.g., Table D.2, pp. 702-705 in Kirk). The standard normal distribution may be thought of as a tool for rescaling any other normal distribution: it translates a normal distribution into standard units (z-scores) that can be used to learn more about the data set than was originally known.

In many natural processes, random variation conforms to this particular probability distribution, which is the most commonly observed probability distribution. In probability theory, the normal (or Gaussian) distribution is a very common continuous probability distribution; "normal" and "Gaussian" refer to the same thing. It plays an extremely important role in statistics because (1) it is easy to work with mathematically and (2) many things in the world have nearly normal distributions. The normal curve is also known as the bell curve, and each curve is uniquely identified by its combination of mean and standard deviation.

The area under the normal curve is equal to 1.0. Normal distributions are denser in the center and less dense in the tails, and the curve is symmetric about the mean. A normal distribution with mean μ and variance σ² is written X ~ N(μ, σ²), and it is defined by the equation

f(x) = 1/(σ√(2π)) · e^(−(x − μ)²/(2σ²))

Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known; most data are close to a central value, with no bias to left or right. If you know the mean and standard deviation of a population for a particular variable, you can compute the probability associated with any particular value of that variable within that population: successive standard deviations from the mean establish benchmarks for estimating the percentage of observations that fall in a given range.
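For readers who prefer computing areas under the normal curve to reading them from a printed table such as Kirk's Table D.2, Python's standard library includes a NormalDist class that does the same job. A minimal sketch, with made-up mean and standard deviation:

```python
from statistics import NormalDist

# A normal distribution is specified by two parameters: mean and standard deviation.
heights = NormalDist(mu=170, sigma=10)      # invented values, e.g. heights in cm

x = 185
z = (x - heights.mean) / heights.stdev      # the z-score of the observation
print(f"z = {z:.2f}")                       # 1.50

# Area under the curve to the left of x (what a printed z-table gives you).
print(f"P(X < {x}) = {heights.cdf(x):.4f}") # about 0.9332

# Standard normal: about 68% of observations lie within one sd of the mean.
std = NormalDist()                          # mu = 0, sigma = 1
print(f"P(-1 < Z < 1) = {std.cdf(1) - std.cdf(-1):.4f}")   # about 0.6827
```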
http://freebtcbox.tk/hego/what-is-the-normal-distribution-1089.php
We spent as much time as we possibly could on division, primarily working in math workshop groups so that I could work on reteaching and enriching in small groups. We started out our division unit by making this little chart in our notebooks. As time goes on, my students are ready to use the algorithm with support from me. Then we multiply one ten by the number of students to see that we have used three tens. In our algorithm, we show that each child gets one ten by writing a one above the tens place in the dividend. You can download the template and word problems FREE here! Next, we divvy up the ones. This shows that we now have eleven ones. These are the most common responses students come up with. What division sentence would this be? Rather than asking students to rewrite each fraction as a division expression, I have the students use this to convert the fraction to a decimal (thousandths place only). It seems I can never get enough. You can use division to convert fractions to decimals. I circulate around the room to monitor student progress. We place the three below the four in the tens place and subtract to see how many tens we have left (one). We made a foldable with examples of what you might do with the remainders. This reflects thinking from prior grades, and even though we have worked with dividing to get decimal quotients, some students have a hard time moving past the misconception "the big number goes in the house". Which number represents the dividend? We did two really fun activities during independent work. We needed a quick way to review division and get into the division mindset! I make sure to address this misunderstanding right at the beginning of the lesson. Guided Practice, Interactive Modeling and Mathematical Discourse: I use the textbook as a jumping-off point for this lesson. Each tab has a word problem example under the flap. Earlier in the year, I introduced fractions as division notation when playing a math game online. In my classroom, we break out the funny money and work some simple problems like this one. I draw attention to the fact that students might think it looks wrong to have the 3 in the dividend place with the 4 in the divisor place. A real-world problem is provided. This one gives them so much division practice that is all real-world and incredibly engaging, if I do say so myself. Throughout the interactive modeling, I focus on identifying the dividend and the divisor, then placing them in the correct places in the division notation. It was so much fun to see what they came up with. The other independent work project we did was actually slightly different than what is pictured here, because we did it with a Christmas tree! In our algorithm, we drop down the one in the ones place and look down at the eleven. This is a picture of my teacher notebook that I projected using the document camera while the students made their own version in their notebooks.
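The place-value steps described above (give out the tens, write that digit above the tens place, trade the leftover ten for ones, then share out the ones) can be mirrored in a few lines of code. This is only a sketch: the post never states the exact problem, but its "three tens used, eleven ones" wording is consistent with something like 41 shared among 3 students.

```python
def long_division(dividend: int, divisor: int):
    """Divide digit by digit, the way the place-value walk-through does it."""
    quotient, remainder = 0, 0
    for digit in str(dividend):                  # work left to right
        remainder = remainder * 10 + int(digit)  # bring the next digit down
        q_digit = remainder // divisor           # how many each group receives
        remainder -= q_digit * divisor           # what is left to trade down
        quotient = quotient * 10 + q_digit
        print(f"bring down {digit}: give {q_digit} to each, {remainder} left over")
    return quotient, remainder

# 4 tens and 1 one shared among 3 students: 13 each, remainder 2.
q, r = long_division(41, 3)
print(f"41 / 3 = {q} remainder {r}")
```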
http://joxokulunecyzeqi.mi-centre.com/writing-a-division-algorithm-anchor-7200472004.html
Notice that for the same set of 8 scores we can get three different values for the "centre" of the data. The standard deviation is a more accurate and detailed estimate of dispersion than the range, because a single outlier can greatly exaggerate the range (as in the example where the one outlying value of 36 stands apart from the rest of the scores). The standard deviation shows the relation that a set of scores has to the mean of the sample.

Why are measures of central tendency like the mean, median and mode important in research? There are three major types of estimates of central tendency: the mean, the median and the mode. The mean, or average, is probably the most commonly used method of describing central tendency.

A measure of central tendency (also described as a measure of centre or central location) is a summary statistic that tries to describe an entire set of data with a single value representing the middle or centre of its distribution. Each of the three measures describes a different indication of the typical or central value in the distribution.

The mean is the best known and most popular measure of central tendency. It can be used with both discrete and continuous data, although it is most often used with continuous data. The mean is essentially a model of your data set; you will find, however, that the mean is not often one of the actual values observed in the data set. An important property of the mean is that it includes every value in your data set as part of the calculation. In addition, the mean is the only measure of central tendency for which the sum of the deviations of each value from the mean is always zero. The mean has one main disadvantage: it is particularly sensitive to outliers, that is, values that are unusual compared with the rest of the data set because they are especially large or small. If we consider the normal distribution (the one most frequently assessed in statistics), then when the data are perfectly normal, the mean, median and mode are identical.

As the data become skewed, the mean loses its ability to provide the best central location, because the skewed data drag it away from the typical value. The median is the middle score of a data set that has been arranged in order of magnitude, and it is less affected by outliers and skewed data. The mode is the most frequent score in the data set; on a histogram it corresponds to the highest bar.

Mean, median and mode questions are commonly asked on the SAT, and the three measures are a great way to summarise data for statistics and probability. The mean is the average of a set of numbers, the median is the middle of a sorted list of numbers, and the mode is the most frequent number.

Measures of central tendency represent the most typical value in a group of scores in a population or a sample. For example, if a research project requires collecting the weights of a large number of people, the mean gives an idea of the weight of people in general, the mode would be the weight shared by the largest number of people, and the median would be the weight that lies half-way through the list when the weights are ordered from lowest to highest. The mean is the sum of all the values divided by the number of values; the mode is the value that occurs the greatest number of times; and the median is the value which lies half-way when the values are arranged in ascending or descending order.
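For anyone who wants to check these measures quickly, Python's standard statistics module computes all three (plus the standard deviation mentioned above). The eight scores below are invented, chosen only so that, as in the discussion above, a single outlier (36) stretches the range.

```python
import statistics

# Eight invented test scores with a single outlier (36) stretching the range.
scores = [15, 15, 15, 15, 20, 21, 25, 36]

print("mean   =", statistics.mean(scores))    # sum / count = 20.25
print("median =", statistics.median(scores))  # middle of the sorted list = 17.5
print("mode   =", statistics.mode(scores))    # most frequent value = 15
print("stdev  =", round(statistics.stdev(scores), 2))   # spread about the mean
```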
http://bisnesila.tk/vupa/homework-help-math-mean-median-mode-qej.php
A square is a four-sided, two-dimensional shape. A square's four sides are equal in length, and its angles are all 90 degrees, or right angles. A square is therefore both a rectangle (all angles are 90 degrees) and a rhombus (all sides are equal in length). You can make a square as large or small as you'd like; the sides will always be the same length, and a square will always have four right angles.

Determine if you can use trigonometry to find the height of the square. You can only use trigonometry if you have the length of the diagonal line that divides the square into two equal triangles. You need three pieces of information to use trigonometry: almost any combination of three sides and angles will let you find the remaining measurements. The exceptions are knowing only the three angles (which fixes the shape of a triangle but not its size) and certain one-angle, two-side combinations that leave the triangle ambiguous.

Determine which pieces of information you have. If you have the length of the diagonal line, you will be able to determine the height of the square. Knowing that squares have four right angles, you also have two angles to use: the diagonal cuts a right angle into two equal angles, each half of a right angle, which is 45 degrees.

Use cosine to find the length of the missing side. The cosine of an angle equals the adjacent side divided by the hypotenuse. Written out, it is: cos(angle) = h/hypotenuse. In this example, the angle to use is one of the 45 degree angles created by the diagonal line, the adjacent side is our unknown -- the height of the square -- and the hypotenuse is the longest side of the triangle, the diagonal that divides the square into two equal triangles.

Set up your equation, where "h" equals the unknown height of the square and the hypotenuse equals 50: cos(45 degrees) = h/50. Use a scientific calculator to find the cosine of 45 degrees; it is approximately 0.7071. Now the equation reads 0.7071 = h/50. This number would change for a different angle, but for squares it will always be 0.7071, because the shape is no longer a square if it does not have four right angles.

Use algebra to solve for the unknown "h": multiply both sides by 50 to isolate "h", which undoes the division of h by 50. You now have h ≈ 35.36 when the diagonal equals 50, so the height of the square is about 35.36 (equivalently, the side of a square is always its diagonal divided by √2). Use whichever units the length of the diagonal is given in -- centimeters, inches or feet. You can also check the answer by measuring the square directly if it is drawn to scale.
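The arithmetic is easy to verify numerically. A minimal sketch using the 50-unit diagonal from the example (cos 45° is kept at full precision here rather than rounded to two decimal places):

```python
import math

diagonal = 50                              # length of the square's diagonal
angle = math.radians(45)                   # the diagonal halves a right angle

# cos(angle) = adjacent / hypotenuse, so side = diagonal * cos(angle)
side = diagonal * math.cos(angle)
print(f"cos 45 degrees = {math.cos(angle):.4f}")   # 0.7071
print(f"height of the square = {side:.2f}")        # 35.36

# Cross-check: the diagonal of a square is its side times sqrt(2).
print(f"check: {side * math.sqrt(2):.1f}")         # 50.0
```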
https://sciencing.com/height-square-8525436.html
Or download our app "Guided Lessons by Education.com" on your device's app store. Urban, Suburban, or Rural No standards associated with this content. Which set of standards are you looking for? Students will identify and categorize the characteristics of rural, urban, and suburban communities - Begin your lesson by introducing the vocabulary words and discussing the definition of each term. Display pictures of each type of community. - Students’ background knowledge will impact their understanding of the terms. Start your introduction of the terms by selecting the term that identifies the type of community they live in. - After discussing the first term, continue discussing each vocabulary word as a whole group. - Ask for student input for each vocabulary word. All definitions should include the information that helps identify the community and explains where people work, live, and play within that community. - Check for understanding after all the vocabulary words have been reviewed and discussed. Beginning: Have ELs turn to a partner to repeat the definition of community, either in English or their home language (L1). If needed, provide the sentence stem: "A community is __." - Intermediate: Have ELs turn to a partner to repeat the definition of community. Explicit instruction/Teacher modeling(15 minutes) - Prior to the start of the lesson, create a three-circle Venn diagram that can be displayed where all students can see it. Label the circles with the following titles: urban, rural, and suburban - Ask for student input as you explore by comparison and contrasting how the three different communities (urban, rural, and suburban) are the same and different. - Fill out the circles and then use the diagram as an anchor chart in the classroom when completed. Beginning: Provide sentence frames for student input. For example: "I know our community is (rural/suburban/urban) because ___." - Intermediate: Provide sentence stems for student input. For example: "A rural community has _____." Guided practice/Interactive modeling(10 minutes) - Have students create a three-column T-chart on lined paper. Students will write the following headings above the columns: rural, urban, and suburban - Create the same three-column T-chart on chart paper. Place your chart in a spot that is visible to all students - Begin by asking for student input to add information to the columns regarding where people live, work, and play in each community. - Encourage students to make simple pictures on their charts to illustrate the different examples of each community. - Check for student understanding of the unique features of each community. Beginning: Pair ELs with sympathetic non-EL and have them discuss features of each type of community. - Intermediate: Ask ELs to repeat directions to show their understanding. Independent working time(15 minutes) - Hand out a piece of white paper to each student. - Ask students to each draw a circle that has a diameter that is approximately four inches in the center of their paper. - Your students will be creating a diagram with the use of three circles to illustrate the aspects of each community type. - The circles will be within each other to demonstrate the progression from urban to rural. - Ask students to draw another circle that has a diameter of approximately three inches inside the first circle. - Have them draw another circle outside the first circle. This one should have a diameter of approximately 5 ½ inches. 
- Students should label the inner circle "urban," the middle circle "suburban," and the outer circle "rural." - Discuss with students how the progression from urban to suburban to rural occurs in communities. - Have students add words and phrases to each circle that describe how people live, work, and play within that specific community. They should also describe the type of transportation available in each community. - Collect papers once students have finished working. - Beginning: Allow ELs to continue working with a partner to complete their diagrams. Allow students to use sentence frames. For example: "A word that describes an urban community is ____ because ____." - Intermediate: Allow intermediate ELs to use sentence stems about each community type. For example: "A suburban community is ___." Enrichment: Challenge advanced students by asking them to create a community of the future. They may select the type of community they are most interested in and then write about how it would look in the future. Encourage them to include illustrations of their futuristic communities. Support: Have struggling students work in pairs to complete the Independent Working Time activity. Beginning: Translate difficult vocabulary into EL's home language using an online bilingual dictionary. - Intermediate: *Use rewordify.com to simplify text from the City Mouse, Country Mouse. - Review students' work, and check for in-depth understanding of the dynamics of each community. - Provide written feedback for students on all papers. Beginning: Provide oral directions in simplified sentences and ask them to repeat the instructions. - Intermediate: Allow students to explain their work orally as you review work. Review and closing(10 minutes) - Hand back students' work, and give the students time to read your feedback. - Ask for volunteers to share what they wrote with the class. - Review the definitions of the key terms. Beginning: Provide sentence frames for ELs turn to their partner and use urban, rural, and suburban in context. - Intermediate: Provide sentence stems for ELs turn to their partner and use urban, rural, and suburban in context.
https://www.education.com/lesson-plan/urban-suburban-or-rural/
Lessons in UnitsCCSS Units How fast is the Earth spinning? Students use unit rates to find the speed at which the planet rotates along the Equator, Tropic of Cancer, and Arctic Circle. How fast is the Earth spinning? Students use rates, arc length, and trigonometric ratios to determine how fast the planet is spinning at different latitudes. How high can a ladder safely reach? Students combine the federal guideline for ladder safety with the Pythagorean Theorem (middle school) or trigonometric ratios (high school) to explore how high you can really climb. How much does Domino's charge for pizza? Students use linear functions — slope, y-intercept, and equations — to explore how much the famous pizzas really cost. Should people with small feet pay less for shoes? Students apply unit rates to calculate the cost per ounce for different sizes of Nike shoes, and use proportions to find out what would happen if Nike charged by weight. How hard is it to steal second base in baseball? Students use the Pythagorean Theorem and proportions to determine whether a runner will successfully beat the catcher's throw. Should you ever buy an extended warranty? Students use percents and expected value to determine whether product warranties are a good deal. Is Wheel of Fortune rigged? Students use percents and probabilities to compare theoretical versus experimental probabilities, and explore whether the show is legit, or whether there might be something shady going on! Who should buy health insurance? Students use percents and expected value to explore the mathematics of health insurance from a variety of perspectives. When you buy a bigger TV, how much more do you really get? Students use the Pythagorean Theorem and proportional reasoning to investigate the relationship between the diagonal length, aspect ratio, and screen area of a TV. How much of your life do you spend doing different activities? Students use proportional reasoning and unit rates to calculate how much of their total lifespan they can expect to spend sleeping, eating, and working...and discuss how they'd like to spend the time that's left over. How dangerous is texting and driving? Students use proportional reasoning to determine how far a car travels in the time it takes to send a message, and explore the consequences of distracted driving. What is the likelihood of winning at roulette? Students use probabilities and odds to examine the betting and gameplay of roulette, including where the infamous house edge comes from. How can you make money in a pyramid scheme? Students learn about how pyramid schemes work (and how they fail), and use geometric sequences to model the exponential growth of a pyramid scheme over time. Which size pizza is the best deal? Is it ever a good idea to buy the personal pan from Pizza Hut? Students use unit rates and percents, and the area of a circle to explore the math behind pizza bargains.
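As one illustration of the arithmetic behind the first two lessons listed above, here is a rough sketch of the rotation-speed calculation. The Earth's radius and the 24-hour day are rounded values, and the speeds printed are approximations only:

```python
import math

RADIUS_KM = 6378     # approximate equatorial radius of the Earth
DAY_HOURS = 24       # one rotation, roughly

def rotation_speed(latitude_deg: float) -> float:
    """Ground speed of the Earth's rotation at a given latitude, in km/h."""
    circumference = 2 * math.pi * RADIUS_KM * math.cos(math.radians(latitude_deg))
    return circumference / DAY_HOURS

for name, lat in [("Equator", 0.0), ("Tropic of Cancer", 23.4), ("Arctic Circle", 66.6)]:
    print(f"{name:17s} about {rotation_speed(lat):5.0f} km/h")
```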
http://mathalicious.com/lessons/page/8
In Excel, there are text functions that allow you to quickly change the case of text (to lower case, upper case, or proper case).

Excel LOWER Function – Overview
The LOWER function is one of the many text functions in Excel. What does it do? It takes a string as the input and converts all the upper case characters in that text string to lower case. When to use it? Use it when you have a text string and you want to convert it all into lower case. This is often the case when you get data from an external source and you want to clean it and make it consistent.

LOWER Function Syntax: LOWER(text)
- text: This is the text string in which you want to convert upper case characters to lower case.

Examples of using the LOWER Function
Here are some practical examples to show you how to use the LOWER function in an Excel worksheet.

Example 1 – Make the Data Consistent
Suppose you have a dataset where you want to convert all the text into lower case. Applying LOWER to each cell in the column does this.

Example 2 – Create Email Addresses Using Names
Suppose you’re working in the IT department of ABC Ltd, and you get a list of names of new joiners for which you need to create email ids. You can use the LOWER function to first convert all the text into lower case and then combine the pieces to get the email id: the formula converts the first and the last name to lower case and then joins them with the text “@abc.com”.

Some useful things to know about the LOWER Function:
- The LOWER function only affects the uppercase characters of the text string. Any character other than an uppercase letter is left unchanged.
- Numbers, special characters, and punctuation are not changed by the LOWER function.
- If you use a null character (or a reference to an empty cell), it will return a null character.

Other Useful Excel Functions:
- Excel FIND Function – The FIND function finds a text string within another text string and returns its position.
- Excel UPPER Function – The UPPER function converts a given text string from lower case to upper case. Numbers, punctuation, and special characters are not changed.
- Excel PROPER Function – The PROPER function converts a given text string to proper case (where the first letter of each word is capitalized). Numbers, punctuation, and special characters are not changed.
- Excel TRIM Function – The TRIM function removes leading, trailing, and double spaces from a text string.

Similar Function in VBA:
- VBA LCase Function – It converts a text string to lower case in VBA.
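The two LOWER examples above are easy to prototype outside a spreadsheet as well. The snippet below is a Python analogy of the worksheet examples, not the article's actual formulas: the names are made up, and joining first and last name with no separator is an assumption; only the lower-casing and the "@abc.com" domain come from the example.

```python
# A Python analogy of the two LOWER examples above (not the article's formulas).

# Example 1: make mixed-case text consistent, like applying LOWER down a column.
raw = ["This Is SOME Text", "MIXED case DATA", "all lower already"]
print([s.lower() for s in raw])

# Example 2: build email ids from new joiners' names.
# Joining first + last with no separator is an assumption; only the
# lower-casing and the "@abc.com" domain come from the example.
new_joiners = [("John", "SMITH"), ("Mary", "Jones")]
emails = [f"{first.lower()}{last.lower()}@abc.com" for first, last in new_joiners]
print(emails)   # ['johnsmith@abc.com', 'maryjones@abc.com']
```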
https://trumpexcel.com/excel-lower-function/
After decades of debate, scientists have spotted hints of liquid water trapped beneath the planet’s south polar ice cap. A team of Italian researchers analysed radar data taken between May 2012 and December 2015 with an instrument on board the European Space Agency’s Mars Express spacecraft, according to a new paper. Parts of the ice returned strange signals in the instrument. “We interpret this feature as a stable body of liquid water on Mars,” the authors write in the paper published today in the journal Science. At this point, there’s lots of evidence that Mars used to have liquid water, based on its topography and other clues. And obviously the planet has frozen water, as evidenced by its ice caps. But whether there’s liquid water currently, either in the dirt or hidden beneath its poles, has long been a matter of discussion and debate. Artist impression of the Mars Express spacecraft using its radar and finding a bright spot. Graphic: ESA, INAF, Davide Coero Borga From May 29, 2012 to December 27, 2015, the MARSIS (Mars Advanced Radar for Subsurface and Ionosphere Sounding) instrument on board the Mars Express spacecraft took data on a 200-kilometre-wide (124-mile) area near the planet’s south pole. It shot radio waves at the ground, then recorded how the waves that bounced back had changed. There was nothing strange about the area itself. But a 20-kilometre-wide (12-mile) region beneath the surface seemed to reflect way more of the radar signal than its surroundings. The bright reflection implied that the region had a much higher value of its dielectric permittivity, an important electrical property for studying penetrating radar. The researchers weighed the options: Could carbon dioxide ice have caused the signal, or some other material? Their analysis suggested that, no, the most likely explanation would be water, which has a much higher dielectric permittivity than ice. That water would have lots of dissolved salts, either as a brine pool or a sludge where water saturated soil, according to the paper. Why doesn’t the water freeze? First, there’s the dynamics of ice sheets, which can experience sub-surface melting due to the specific pressure and temperature environment (as they do on Earth). Then there’s the dissolved salts. You might remember the phrase “perchlorates” from past Mars water studies—these are molecules that contain chlorine atoms linked to four oxygen atoms. The presence of perchlorate salts can greatly reduce the melting point of water. “There’s something there, whatever it is.” “The temperature at the bottom of the south polar cap supports the idea that this could be liquid water, because it’s in a brine where perchlorates could have lowered the melting point so it could be stable at this temperature,” Tanya Harrison, director of research for Arizona State University’s NewsSpace initiative, told Gizmodo. Harrison was not involved in the current research, but has studied Mars in the past. The radar scientists Gizmodo spoke to were convinced by the data and were cautiously optimistic about the interpretation. “You can’t argue with the radar results,” Cassie Stuurman, a radar scientist soon to join the European Space Agency, told Gizmodo. “It’s clear they have an anomalously bright region, which does imply something with anomalous properties.” As for whether it’s truly liquid water, “I can’t think of something that would generate the same results,” she said. 
“The interpretation is up for debate,” and is already the subject of discussion in the planetary science community. “There’s something there, whatever it is.” This detection is the next big clue in the mystery of whether liquid water still exists in measurable quantities on the Mars today. You might be familiar with the “recurrent slope lineae,” or RSLs, on the planet—streaks on its surface that appear to darken. A much-hyped paper announced this as “strong evidence” for liquid water back in 2015, but another, more recent paper suggested there could be a better explanation for the darkening. This new paper offers another, probably unrelated source of liquid water. “This story provides corroboration that liquid water can exist on Mars,” Lujendra Ojha, author of the 2015 RSL paper from Johns Hopkins University, told Gizmodo. “It’s probably going to be transient and salty, it will most likely be subsurface... but I think it’s a bit of vindication.” And if it exists in the ice caps, perhaps it exists elsewhere, like the RSLs. There’s lots more work to do. A visit to Mars with the right tools is what would be needed to confirm or rule out the presence of liquid water. And, of course, this new paper says nothing about the presence of life—it’s unclear what kind of life form, if any, could survive in this cold brine. But a standing body of water is always intriguing. “Anywhere we find liquid water on Earth, we tend to find life,” said Harrison. “Even in ice cores from Greenland, there are bacteria and algae living in pockets or hanging out in a dormant state. If there’s briny liquid water, it might be a little abode for some kind of life to survive.” [Science] Image: USGS Astrogeology Science Center, Arizona State University, INAF
http://www.gizmodo.co.uk/2018/07/spacecraft-spots-signs-of-standing-body-of-water-under-martian-surface/
A summary of the socratic method Learning how to think with the socratic method the student should be able to recite a brief summary of the facts, explain the issue the case presents, as well as . In this socratic dialogue, a christian preacher states the often claimed idea that atheists cannot be moral because faith in god is the basis of morality the socratic method is used to question this idea in a way that demonstrates it is not religious faith, but secular knowledge that is needed in order to carry out moral deeds and to interpret . The socratic method (also known as method of elenchus, elenctic method, socratic irony, or socratic debate), named after the classical greek philosopher socrates, is a form of inquiry and debate between individuals with opposing viewpoints based on asking and answering questions to stimulate critical thinking and to illuminate ideas. Plato on tradition and belief the socratic method, teaching and learning as i mentioned earlier, the historical socrates does not seem to have presented himself as a teacher. Socratic questioning a companion to: the thinkers guide to analytic thinking the art of asking essential questions socrates, the socratic method, and critical . The socratic method (also known as method of elenchus, elenctic method, socratic irony, or socratic debate), is a form of inquiry and debate between individuals with opposing viewpoints based on asking and answering questions to stimulate critical thinking and to illuminate ideas. Experiences of the socratic method that make people more uncomfortable with asking and answering questions are worthless the socratic goal we should embrace is the goal of making students more interested in asking questions and doing their own thinking. The case method already places a dizzying burden on a 1l, but when combined with the socratic method, it leaves many feeling helpless how the socratic method works generally, the socratic professor invites a student to attempt a cogent summary of a case assigned for that day's class. The socratic method represents the core of an attorney's craft: questioning, analyzing and simplifying doing all this successfully in front of others for the first time is a memorable moment it’s important to remember that professors aren’t using the socratic seminar to embarrass or demean students. Start studying socrates the euthyphro socratic method learn vocabulary, terms, and more with flashcards, games, and other study tools. His socratic method of cross-examining meletus had turned most of them in favor of him the power of language has been used for centuries, and until today, the socratic method has been used over and over during investigations, most notably in court proceedings. Socratic method essay examples 12 total results a summary of the socratic method 472 words 1 page the basic and benefit of the socratic method 1,640 words 4 pages. Socratic conversations lesson summary the socratic method is a style of education involving a conversation in which a student is asked to question their assumptions it is a forum for open . The socratic method is a form of dialectic inquiry it typically involves two or more speakers at any one time, with one leading the discussion and the other agreeing to certain assumptions put forward for his. The modern socratic method the modern socratic method of teaching does not rely solely on students’ answers to a question instead, it relies on a very particular set of questions that have been designed in a way that lead the students to an idea. 
His style of teaching—immortalized as the socratic method—involved not conveying knowledge but rather asking question after clarifying question until his students arrived at their own . Assess your knowledge of the socratic method with an interactive practice quiz use this worksheet to focus in on the details of how and why. This is the only instance in the apology of the elenchus, or cross-examination, which is so central to most platonic dialogues his conversation with meletus, however, is a poor example of this method, as it seems more directed toward embarrassing meletus than toward arriving at the truth. A summary of the socratic method The so-called socratic method is a means of philosophical enquiry, wherein people are interrogated about what they have said and subsequently worked through several related questions to see if . The six types of socratic questions due to the rapid addition of new information and the advancement of science and technology that occur almost daily, an engineer must constantly expand his or her horizons beyond simple gathering information and relying on the basic engineering principles. Socratic dialectic: general, inclusive characterization of socrates’ overall method as a form of critical reasoning proceeding by means of question and answer as such, it incorporates the following components:. - Summary socratic teaching requires more from the parent or teacher than most of the study guides for novels that are popular among homeschoolers the teacher must read and be familiar with the literary works to be able to lead a discussion. - The socratic method is a 1st season episode of house which first aired on december 21, 2004 while dodging cuddy in the emergency room, house runs into the . - A short summary of plato's euthyphro this free synopsis covers all the crucial plot points of euthyphro. The socratic method on the other hand, is a method of conducting classes as jackson notes, “the socratic method was the „engine‟ langdell chose to power his case. The use of socratic methods, even when they clearly result in a rational victory, may not produce genuine conviction in those to whom they are applied apology : the examined life because of his political associations with an earlier regime, the athenian democracy put socrates on trial, charging him with undermining state religion and . What is the socratic method excerpted from socrates café by christopher phillips the socratic method is a way to seek truths by your own lights it is a system, a spirit, a method, a type of philosophical inquiry an intellectual technique, all rolled into one. This is a summary of a philosophy paper on socratic method if you need assistance in writing a similar paper or one on a topic of your choice contact us bestessayservicescom is a professional homework writing help website.
http://nxpaperpxix.safeschools.us/a-summary-of-the-socratic-method.html
Evaluate expressions, algebra, fifth 5th grade math values for letter variables in the expression - evaluating algebraic expressions can be a. Self guided worksheets for practicing writing an equation and making a table graph given fifth grade algebra functions expression vs how to simplify algebraic. Get your first taste of algebraic expressions with this writing algebraic expressions | worksheet algebraic expressions, algebra task cards, 5th grade. Writing basic algebraic expressions rewrite each question as an algebraic expression 1 what is the sum of a and 3 _____ 2 what is the. Solving a linear quadratic system worksheet, 5th grade integer worksheets , subtracting algebraic expressions, grade 9 fourth grade algebra worksheet. Algebraic expressions- worksheets, pre-k kindergarten 1st grade 2nd grade 3rd grade 4th grade 5th grade middle school writing expressions algebraic. I created this for a mixed ability year 7 class that needed extra practise taking worded problems and translating them into algebraic expressions and equations. Grade 6th on khan academy: (you saw these briefly in the 5th grade), writing algebraic expressions from word problems - duration:. How do i write algebraic expressions | 6th grade simplifying algebraic expressions writing and evaluating algebraic expressions: 6th grade. Worksheets are writing basic algebraic expressions, writing basic click on pop-out icon or print icon to worksheet to first day of school for 5th grade. The worksheets provide exercises on translating verbal phrases into linear algebraic expressions, multiple variable expressions, equations and inequalities. These dynamically created pre-algebra worksheets allow you to produce algebraic expressions the 5th grade through the 8th expressions worksheets will create. Grade 5 » operations & algebraic thinking print this page ccssmathcontent5oaa2 write simple expressions that. 5th grade related academic writing numerical expressions practice worksheet the lesson deals with writing and interpreting numerical expressions. ©l n2e0e1 82i kkiu ytea m ysyotftt kwmair pe8 tlpl qch v n za pl sl f 2r 8i9gxhttys6 wr2e psaefrwv7e admp m emwaod eeq rw4iltkh w. A printable worksheet with ten questions on writing simple expressions eg 2 x (18 + 11. The most important part of writing expressions is to know that words for addition, subtraction, multiplication and division it is also important to know turn around. This algebra 1 - basics worksheet will create word problems for the students to translate into an algebraic statements. Expressions and equations : algebra and percent : fifth grade math worksheets write algebraic expressions using numbers and fractions. This translating algebraic phrases worksheet is suitable for 5th - 7th grade in this algebraic phrases activity, writing algebra expressions. Improve your math knowledge with free questions in write variable expressions and thousands of other math skills. Writing basic algebraic expressions operation example written numerically example with a variable addition (sum) 3 + 2 6 + x subtraction (difference) 18 - 6 14 - a.
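The snippets above all circle around the same core skill: turning a verbal phrase into an algebraic expression. As a quick illustration, here are a few phrase-to-expression translations (the first two echo fragments quoted above, such as "the sum of a and 3" and "18 - 6"; the rest are generic examples, not taken from any particular worksheet):

```latex
\begin{align*}
\text{``the sum of $a$ and 3''} &\longrightarrow a + 3 \\
\text{``the difference of 18 and 6''} &\longrightarrow 18 - 6 \\
\text{``7 less than a number $n$''} &\longrightarrow n - 7 \\
\text{``the product of 5 and $x$''} &\longrightarrow 5x \\
\text{``a number $y$ divided by 4''} &\longrightarrow \frac{y}{4}
\end{align*}
```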
http://tstermpaperothj.archiandstyle.info/writing-algebraic-expressions-worksheet-5th-grade.html
18
20
In this experiment, sodium thiosulphate (Na2S2O3) is the source of the thiosulphate ions. A cross (X) drawn on a piece of paper placed under a conical flask is clearly seen at first, but as the sodium thiosulphate is progressively diluted the time taken for the X mark to disappear changes. Sodium thiosulphate, hydrochloric acid and water are added to the flask; the reactants are both colourless, and the word equation is: hydrochloric acid + sodium thiosulphate → sodium chloride + water + sulphur. The fine precipitate of solid sulphur gradually clouds the solution until the cross can no longer be seen through it, and the time taken for this to happen is recorded. Method: put the conical flask on top of the X, pour in 20 ml of sodium thiosulphate and 5 ml of hydrochloric acid measured with a 25 ml measuring cylinder, and start timing. The concentration of the sodium thiosulphate is changed by adding different amounts of water (for example, stock-to-distilled-water ratios of 4:6, 2:8 and 1:9), while the concentration of the acid remains unchanged. The rate of a chemical reaction is increased by increasing the concentration, so the independent variable is the initial concentration of the sodium thiosulphate and the dependent variable is the time for the cross to disappear; to ensure a fair test, the total volume of solution (25 cm3) and the other factors that affect rates of reaction are kept the same, and trial runs can be used to decide which of the four factors affecting rate to measure in the final run. The same task is used to give Year 9 pupils a taster of what is expected of them when carrying out coursework in KS4, and as a GCSE chemistry coursework brief: plan, and carry out, an experiment to discover how changing the concentration of the reactants changes the rate of the reaction between sodium thiosulphate solution and dilute hydrochloric acid. The effect of temperature can be investigated with the same reaction, and a parallel experiment looks at magnesium and dilute hydrochloric acid: magnesium + hydrochloric acid → magnesium chloride + hydrogen. When the results are processed, both x and y error bars can be set according to the uncertainties. One question often raised is why the sulphur dioxide produced is only a low hazard in the disappearing cross experiment, and which parts of the procedure minimise the risk.
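Since the analysis of this coursework usually converts each timing into a rate (rate is proportional to 1/t), a short program can do that arithmetic. This is only an illustrative sketch: the concentrations and times below are placeholder values, not measured data, and the 1000/t rate proxy is just one common convention.

/* Minimal sketch: convert disappearing-cross timings into a rate proxy
   (rate ~ 1000 / t).  The values below are illustrative placeholders,
   NOT experimental results. */
#include <stdio.h>

int main(void)
{
    /* fraction of stock sodium thiosulphate in the 25 cm3 mixture (hypothetical) */
    double concentration[] = { 1.0, 0.8, 0.6, 0.4, 0.2 };
    /* time in seconds for the cross to disappear (hypothetical) */
    double time_s[]        = { 20.0, 26.0, 35.0, 52.0, 105.0 };
    int n = sizeof time_s / sizeof time_s[0];

    printf("conc (fraction)   time (s)   rate proxy (1000/t)\n");
    for (int i = 0; i < n; i++) {
        double rate = 1000.0 / time_s[i];
        printf("%10.2f %12.1f %16.2f\n", concentration[i], time_s[i], rate);
    }
    return 0;
}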
http://cmessaycums.iktichaf.info/disappearing-x-experiment-coursework-sodium-thiosulphate.html
18
41
Reading and writing to a binary file in C. Binary streams are primarily used for non-textual data, where the appearance of the content inside the file is not important; it does not matter if the contents cannot be read as plain text. A buffer is a memory area that is temporarily used to store data before it is sent to its destination. Since the smallest object that can be represented in C is a character, access to a file is permitted at any character or byte boundary. Any number of characters can be read or written from a movable point, known as the file position indicator. The characters are read, or written, in sequence from this point, and the position indicator is moved accordingly. The position indicator is initially set to the beginning of a file when it is opened, but it can also be moved explicitly, not just as a side effect of a read or write. This section focuses on how to open and close a disk data file and how to interpret the errors that can be returned by these two functions. The FILE structure is the structure that controls files; it is defined in the header file stdio.h. The following defines a file pointer: FILE *fp; In the FILE structure there is a member which represents the file position indicator. The function fopen opens a file and associates a stream with that opened file. You need to specify the method of opening the file and the filename as arguments: FILE *fopen(const char *filename, const char *mode); Here filename is a char pointer that references a string containing a filename, and mode selects how the file is opened. The fopen function returns a pointer of type FILE. If an error occurs during the procedure to open the file, the fopen function returns NULL. When you use the "a" (append) mode and the file already exists, the contents of the file will be preserved and new data that you write will be added to the end. If the file does not already exist, it will be created. This is in contrast to "w"; this mode discards any data that may already be in the file, and if the file does not exist, it creates a new one. When you use "r", the file must already exist; if it does not, the call will fail and return NULL. After a disk file is read, written, or appended with some new data, you have to disassociate the file from the specified stream. This is done by calling the fclose function: int fclose(FILE *stream); If fclose closes a file successfully, it returns 0. Otherwise, the function returns EOF. Normally, this function fails only when the disk is removed before the function is called or there is no more space left on the disk. Always close a file when you are done with it; otherwise, the data saved in the file may be lost. In addition, failing to close a file when you are done with it may prevent other programs from accessing the file later. The next program shows how to open and close a text file, checking the values returned from the functions (see the sketch below). In order to open a file using its file descriptor fd, you must use the fdopen function. This function behaves as fopen, but instead of opening a file using its name or path, it uses the file descriptor: FILE *fdopen(int fd, const char *mode); A way to get a file descriptor is with mkstemp: int mkstemp(char *template); This function generates a unique temporary file name from the template string. The last six characters of template must be "XXXXXX" and these are replaced with a string that makes the filename unique. The file is then created for reading and writing. Since the string will be modified, template must not be a string constant, but should be declared as a character array. The mkstemp function returns the file descriptor of the temporary file or -1 on error.
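The open-and-close example program referred to above is not reproduced in the text, so the following is a minimal sketch of the kind of program it describes; the file name "test.txt" is an assumption, not taken from the original.

#include <stdio.h>

int main(void)
{
    FILE *fp;

    /* Open the file for writing; fopen returns NULL on failure. */
    fp = fopen("test.txt", "w");
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }

    fputs("hello, file\n", fp);   /* write something so the file is not empty */

    /* fclose returns 0 on success and EOF on failure. */
    if (fclose(fp) != 0) {
        perror("fclose");
        return 1;
    }
    return 0;
}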
There are three common ways to access a file: read or write one character or byte at a time, with functions like fgetc and fputc; read or write one line of text (that is, one character line) at a time, with functions like fgets and fputs; or read or write one block of characters or bytes at a time, with fread and fwrite. In this course we are going to focus on the last approach, block reading and writing, which is quite useful both for text and binary files. The fread function is declared as: size_t fread(void *ptr, size_t size, size_t nmemb, FILE *stream); Here ptr is an array (or buffer) in which the data is stored. The fread function returns the number of elements that were actually read, which may be less than requested if an error occurs or an EOF is encountered. Because the fread function does not distinguish between end-of-file and another type of error, you must use feof and ferror to determine what happened (we will see this in a moment). The syntax for the fwrite function is: size_t fwrite(const void *ptr, size_t size, size_t nmemb, FILE *stream); The function returns the number of elements actually written; therefore, if no error has occurred, the value returned by fwrite should equal the third argument in the function. The return value may be less than the specified value if an error occurs. As we have seen before, we have the feof function to determine when the end of a file is encountered. This function returns 0 if the end of the file has not been reached; otherwise, it returns a nonzero integer. The ferror function reports errors: it returns 0 if no error has occurred; otherwise, it returns a nonzero integer. The following program reads one block of characters at a time and writes it to another file. In that listing a null character is appended after the last character read (line 35), so that the block of characters saved in the buffer is treated as a string and can be printed out on the screen properly by the printf function. In the last section you have learned how to read or write data sequentially to an opened disk file. However, in some other cases, you need to access particular data somewhere in the middle of a disk file. The fseek function offers you the possibility to move the file position indicator to the spot you want to access in a file. When a file holds fixed-size structures, after reading a structure the pointer is moved to point at the next structure. A write operation will write to the currently pointed-to structure, and after the write operation the file position indicator is moved to point at the next structure. Remember to keep track of where you are, because the file position indicator can not only point at the beginning of a structure, but can also point to any byte in the file. In the read example only one block is requested at a time; changing the one into ten would read in ten blocks of x bytes at once. In this example we declare a structure rec with the members x, y and z of type integer. In the main function we open (fopen) a file for writing ("w"). Then we check if the file is open; if not, an error message is displayed and we exit the program. Then we write the record to the file. We do this ten times, thus creating ten records. Take a look at the example (see the sketch below). With fread we read in the records one by one, and after we have read a record we print the member x of that record. The only thing we need to explain is the fseek option. The function fseek must be declared like this: int fseek(FILE *stream, long offset, int whence); The fseek function sets the file position indicator for the stream pointed to by stream. The new position, measured in characters from the beginning of the file, is obtained by adding offset to the position specified by whence. Three macros for whence are declared in stdio.h: SEEK_SET (the beginning of the file), SEEK_CUR (the current position) and SEEK_END (the end of the file). Using negative offsets it is possible to move back from the end of the file.
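The two listings this passage walks through (writing ten rec structures to a file, then using fseek and fread to pull one back out) are not shown, so here is a minimal reconstruction along the lines described. The file name "records.bin" and the values stored in each record are assumptions for illustration; the struct members x, y and z come from the text.

#include <stdio.h>
#include <stdlib.h>

/* Structure described in the text: three integer members. */
struct rec {
    int x, y, z;
};

int main(void)
{
    FILE *fp;
    struct rec my_record;
    int counter;

    /* --- write ten records, as the text describes --- */
    fp = fopen("records.bin", "wb");      /* "wb": write, binary */
    if (fp == NULL) {
        fprintf(stderr, "cannot open file\n");
        exit(1);
    }
    for (counter = 1; counter <= 10; counter++) {
        my_record.x = counter;
        my_record.y = counter * 10;
        my_record.z = counter * 100;
        fwrite(&my_record, sizeof(struct rec), 1, fp);
    }
    fclose(fp);

    /* --- seek to the last record and read it back --- */
    fp = fopen("records.bin", "rb");
    if (fp == NULL) {
        fprintf(stderr, "cannot open file\n");
        exit(1);
    }
    /* offset of the 10th record, measured from the start of the file */
    fseek(fp, (10 - 1) * (long)sizeof(struct rec), SEEK_SET);
    if (fread(&my_record, sizeof(struct rec), 1, fp) == 1)
        printf("last record: x = %d\n", my_record.x);
    fclose(fp);

    return 0;
}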
In this example we are using fseek to seek the last record in the file; a counter is then used in the fseek statement to set the file pointer at the desired record, and the result is that we read in the records in reverse order. The function rewind can be used like this: void rewind(FILE *stream); With the fseek statement in this example we go to the end of the file. Then we rewind to the first position in the file, read in all the records and print the value of member x for each. Without the rewind you will get garbage. A note on sizes: the int type must contain at least 16 bits to hold the required range of values, but its size can vary per compiler and per platform you are compiling for. On compilers for 8 and 16 bit processors (including Intel x86 processors executing in 16 bit mode, such as under MS-DOS), an int is usually 16 bits and has exactly the same representation as a short. On compilers for 32 bit and larger processors (including Intel x86 processors executing in 32 bit mode, such as Win32 or Linux), an int is usually 32 bits long and has exactly the same representation as a long. An int holds 32 bits, and thus you see 01 00 00 00 in your hex editor. Try a small program that prints the sizeof of the variable types for your target platform; and if you want to see only one byte in your hex editor, write a single unsigned char instead (see the sketch below). If you open the test file in a hex editor you can check the result; or, if you want to use int, accept that 4 bytes are written to the binary file, but at least you now know why this is. The questions and answers from readers of the original page raise a few points worth keeping. Binary files should be opened with a "b" in the mode (for example "rb" or "wb"); the modes you can use are described above, and the source code examples were changed to keep that syntax correct. Compilers may pack or pad structures, so structure members are not necessarily contiguous in memory, which is worth remembering when whole structures are written with fwrite. If you seek to the end of the file and then offset beyond the end of the file, you will be reading garbage; you need a negative offset, and the seek to the end of the file in the example is only done so that we can rewind. And, as mentioned, you get garbage without the rewind. Readers also asked how to dump more complex structures that contain pointers to lists and restore them from the file, how to store an 89-byte character array in a binary file, and whether the current PC time can be stored in the same binary file.
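The two small listings referred to above (printing sizeof for the integer types, and writing a single unsigned char so that only one byte appears in the hex editor) can be sketched as follows; the output file name "test.bin" is an assumption rather than taken from the original.

#include <stdio.h>

int main(void)
{
    /* Sizes of the integer types on the target platform. */
    printf("sizeof(short) = %zu\n", sizeof(short));
    printf("sizeof(int)   = %zu\n", sizeof(int));
    printf("sizeof(long)  = %zu\n", sizeof(long));

    /* Write a single byte: an unsigned char always occupies exactly
       one byte, so the hex editor will show just one byte (0x01). */
    unsigned char value = 1;
    FILE *fp = fopen("test.bin", "wb");
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }
    fwrite(&value, sizeof value, 1, fp);
    fclose(fp);
    return 0;
}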
http://binary-options-try-best-platform.pw/reading-and-writing-to-a-binary-file-in-c-9378.php
18
18
Lesson plan, Mathematics Form One: the Pythagorean theorem. A right triangle has a right angle, which has a measure of 90°. The two shorter sides, labelled a and b, are what we call the legs; the longer side, labelled c, is what we call the hypotenuse. The class activity uses cut-outs: four congruent right triangles are arranged inside a larger square so that they enclose a smaller, inner square. Guiding questions: In terms of a and b, what is the area of the larger square? What is the length of the side of the inner square, and what is its area in terms of c? What is the area of one of the triangles? If you add the area of the four congruent triangles and the area of the inner square, will the sum be equal to the area of the larger square? Why? Form an equation for the area of the larger square, then try to eliminate the inner square and simplify. Since (a + b)² = c² + 4(ab/2), the simplified form of the equation derived from the cut-outs is c² = a² + b², or c = √(a² + b²). That equation is what we call the Pythagorean theorem, named after Pythagoras; it is used only in a right triangle, and the equation should be left in its simplest form. Problem 1: a right triangle has legs 6 and 8; find the value of x, the hypotenuse. Answer, using the Pythagorean theorem: x = √(6² + 8²) = √(36 + 64) = √100 = 10. Problem 2: find the perimeter of the square whose diagonal is 5√ cm long. Answer outline: since the diagonals of a square bisect each other, each half-diagonal is a = b = (5√ )/2; letting c be the side of the square and substituting into c = √(a² + b²) gives the side, and the perimeter is four times that side.
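For the arithmetic in Problem 1, a few lines of code can check the result. This is only an illustrative sketch, not part of the lesson plan; compile it with the math library (for example, cc pythagoras.c -lm, where the file name is hypothetical).

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Problem 1 from the lesson: legs of 6 and 8. */
    double a = 6.0, b = 8.0;
    double c = sqrt(a * a + b * b);       /* Pythagorean theorem: c = sqrt(a^2 + b^2) */
    printf("hypotenuse x = %.1f\n", c);   /* prints 10.0 */
    return 0;
}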
http://labelpropagation.info/read/lesson-plan-mathematics-form-one-8294
18
13
Ink is the defining medium in most architectural drawings made prior to 1860. It was a versatile, durable, and accessible media that could produce drawings that functioned on several levels--as utilitarian images required to execute construction of a building; as sales documents designed to persuade a client; and as works of art created to enhance the architect's professional position. During the eighteenth and the first half of the nineteenth century, the role of architectural drawings changed, evolving from simple utilitarian documents often worn out or discarded at the completion of a project, to works of art hung in public exhibitions. The type and number of drawings created for a particular project became increasingly complex and varied, moving from linear orthographic projections created in the eighteenth century to the rendered perspective drawings of the mid-nineteenth century. While the style of architectural drawings changed dramatically between the eighteenth century and 1860, the role of ink remained constant. Iron gall ink was the media of choice for most eighteenth century drafters while India ink from a variety of European and Oriental sources had become standard by 1860. Ink was used to create both the linear and tonal structure of a drawing using methods described in British and American architects' and builders' manuals. These methods evolved during the period in response to the changing role of the architect and the development of watercolor technique. A reliance on shadowless tinted elevations gave way to an increasingly sophisticated use of shadows and graduated shades that more fully articulated the building as well as enhancing the drawing. Relying on manuals, trade catalogs and drawings, this presentation will explore the use of ink in architectural drawings during this period and some technical aspects of its production. The early drawings, composed of thin, uniformly inked ruled lines, were generally done to a small scale and included very little detail. Some included dimensions and indications of material, but many did not. Decisions like the profile of moldings, the trim around windows and doors, and the design of decorative brick work were left to the discretion of the builder or made during the construction process through informal consultation. This abbreviated design process was possible because of the nature of eighteenth and early nineteenth century aesthetic assumptions and building practices. Construction technology for all types of structures and the design of vernacular buildings were based on traditional building practices that required little explanation among the parties involved. The design principles that guided more formal Georgian and Federal architecture were based on Palladio's theories of symmetry and hierarchy enriched with classical elements chosen from English design manuals and pattern books. These principles and sources were well understood and familiar to both client and builder. All this began to change by 1800 with the work of Benjamin Henry Latrobe, a fully trained English architect, who began practice in this country in 1798, and routinely produced sophisticated perspective drawings with a surrounding landscape fully rendered in watercolor. Beside his drawings, the work of native American builders appeared naive, but traditional attitudes and practices changed slowly. 
In urban areas like Philadelphia where competition was keen and professional architects were available, local practice changed much more rapidly than in more conservative cities and rural areas. With the emergence of the architectural profession by 1830, the design and construction functions became increasingly separate and fewer decisions were left to the discretion of the builder. With changes in style, building technology and craft practices, the assumptions of the eighteenth and early nineteenth centuries that had united the master-builder and client in a single vision no longer existed. Therefore, in addition to making presentation drawings for the client, architects began to execute detailed construction drawings for the builder that included framing, window sashes, heating and ventilation systems and other functional and decorative elements. Unlike the earlier informal sketches in builders' notebooks, these drawings were executed to scale on good quality drawing paper, neatly inked-in and often tinted with color. With this very brief stylistic history in mind, I am now going to turn to the techniques used to actually produce these drawings.1 After a drawing was laid out in pencil, the lines were inked-in to create a clear and durable image using specialized drafting instruments. The work horses of the architect's tool case were the ruling pen, pricker and compass. Physical evidence of the use of these tools is often present in the drawings, evidence that must be carefully preserved. Ruling pens used during the period were very similar to those available today. The pen was used for drawing straight lines with a straight edge as a guide. Ink was placed between the nibs of the pen with a brush and the set screw was adjusted to control the width of the line. In raking light, the impressions caused by the nibs of the ruling pen are often clearly evident. The pricker was a pointed needle-like instrument used in copying drawings, and evidence of its use, expressed as pricked outlines in the drawing, is common. The compass was used for drawing circles and curves, and most had one fixed leg with a point and one interchangeable leg that could be fitted with a point, a ruling pen or a pencil holder.2 In transmitted light, the pricks caused by the use of the compass and pricker can usually be clearly seen. In addition, one also frequently finds erasures and corrections on architectural drawings expressed as clearly defined thinned areas that were created with a pen knife or scraper. In the early years of this period, iron gall ink, commonly used as a writing ink, was the most available medium. It was regularly employed by house carpenters executing simple plans and elevations, although Thomas Jefferson also used it exclusively for his architectural drawings such as those for the University of Virginia. Black carbon based inks, known as India inks, were mentioned in European artists' manuals of the seventeenth century and increasingly used by the emerging architectural profession in England during the eighteenth century.
English architects like Joseph Horatio Anderson, working in the American colonies during the later half of the eighteenth century, used a black carbon ink for his drawings, although he frequently noted dimensions and textual information in iron gall ink.3 With the increased availability of classes in drafting, the publication of builders' manuals and the example of professional architects like Benjamin Latrobe, common practice began to change and carbon ink became the standard medium for drafting by 1830. As early as 1805, Owen Biddle in The Young Carpenter's Assistant, published in Philadelphia, warned his readers about their choice of ink for drafting noting that: "It may be proper to observe, that no kind of ink should be used except Indian ink."4 The authors of successive manuals all emphasized the use of India ink although William Minifie, as late as 1854, still felt it necessary to caution inexperienced readers against using common writing ink for architectural and engineering drawings.5 In spite of the near universal adoption of India ink for drafting, however, many architects, including sophisticated professionals, continued to make numerical and textual notations in iron gall ink throughout the first half of the nineteenth century. India ink was available in solid sticks that the drafter prepared as needed. R. G. Hatfield in The American House Carpenter, published in 1849, described the procedure for preparing ink in detail: With a drop or two of water, rub one end of the cake of ink upon a plate or saucer, until a sufficiency adheres to it. Be careful to dry the cake of ink; because if it is left wet, it will crack and crumble in pieces. With an inferior camel's-hair pencil, add a little water to the ink that was rubbed on the plate, and mix it well. It should be diluted sufficiently to flow freely from the pen, and yet be thick enough to make a black line. With the hair pencil, place a little of the ink between the nibs of the drawing-pen, and screw the nibs together until the pen makes a fine line. Beginning with the curved lines, proceed to ink all the lines of the figure; be careful now to make every line of its requisite length. If they are a trifle too short or too long, the drawing will have a ragged appearance; and this is opposed to that neatness and accuracy which is indispensable to a good drawing. When the ink is dry, efface the pencil-marks with the india-rubber. If the pencil is used lightly, they will all rub off, leaving those lines only that were inked.6 Variations in inking included the use of shadow or shade lines. These slightly wider lines were used to create a sense of depth and were placed on the side of an elevation or plan opposite an imaginary light source in the upper left corner striking the image at a 45 degree angle. The heavy black outline of a plan, known as the poche' was often filled in with a brush, sometimes using an ink that had been enriched with sugar or gum to make it more dense and glossy. By mid-century, drawings that were going to be shaded in ink or color were often inked-in using light grey lines of dilute ink rather than black lines to avoid a sharp or hard appearance in the final drawing.7 After a drawing was inked-in, the drafter had the option of continuing to develop the image by adding shadows, monochromatic shades and tints, or local color. 
The addition of properly cast shadows to an elevation increased the information in the drawing by defining the depth of various projections; the addition of monochromatic tints and shades increased the clarity of the image; local color suggested the materials to be used in the structure; and all these elements increased the visual appeal and artistic appearance of the drawing. The progressively sophisticated use of these elements from the late eighteenth through the mid-nineteenth century paralleled the growing sophistication and professional consciousness of architects as they made the transition from craftsman-builder to professional designer. Shadows were laid out in pencil on a drawing whose lines had already been inked-in. The light was assumed to strike the building at a 45 degree angle from the upper left, entering over the left shoulder of the spectator. Using the rules of geometry, the plan of the building and the scale of the drawing, the drafter determined the dimensions and angle of every shadow and cast it accordingly. Dilute ink washes were used to fill in the shadows, often as part of an overall scheme for tinting and/or shading the building. The terms tint and shade were sometimes used interchangeably, but many authors drew a subtle but clear distinction. Tinting was characterized by the application of dilute ink washes to produce flat areas of tone without reference to light. An example might include tinting each major element in an elevation, such as the main block of a building and its projecting wings or roof, with a different dilution of ink. Shading was characterized by the application of dilute ink washes to produce graduated areas of tone to suggest the modeling of three-dimensional forms or the effect of light. To create the graduated effect of shading, architects traditionally relied on flat tints of a single dilution of ink evenly applied in layers over a decreasing amount of the area to be shaded.8 This technique resulted in clearly defined areas of tone progressing from light to dark. By the mid-nineteenth century, innovations in artists' watercolor techniques, particularly the use of continuously graduated washes, influenced architects' approach to their drawings. Among most architects, continuously graduated or soft tint shading, using the techniques developed by watercolorists, became increasingly common. In a book written for draughtsmen in 1854, William Johnson noted that there are two methods for laying in shading on geometric forms: flat tint shading and graduated or softened tint shading. In introducing softened tint shading, Johnson explained that: This system of shading differs from the former [flat tint shading] in producing the effects of light and shade by imperceptible gradations, obtained by manipulation with the brush in the laying on of the color: this system possesses the advantage over the first, of not leaving any lines, dividing the different degrees of shade, which sometimes appear harsh to the eye, and seem to represent facets or flutings, which do not exist.9 Color was the final consideration of the architect in completing his rendering, and watercolor was the medium of choice throughout the period. In standard watercolor technique of the eighteenth century, colored washes were laid in over washes executed in various dilutions of India ink. Ink washes established the tonal relationships and gradations within the picture while color tints expressed the local color.
The final picture was a tinted drawing rather than a painting, and the ink washes muted the local color, giving the overall composition a subdued tone. While unsophisticated builders of the eighteenth and early nineteenth century tended to apply color boldly and directly without underlying ink washes, architects consistently used color washes over ink rendering. Only toward the end of the period did architects begin to use some of the direct color techniques watercolor artists had been using for decades.10 In any discussion of India ink, it is important to note that the term does not refer to one predictable and constant material. This medium was originally imported from the Orient during the seventeenth century and known as India, Chinese or Japan ink. It was highly valued for the dense black lines and luminous washes it could produce. By the eighteenth century, European ink manufacturers like Ribaucourt of Paris and Eisler of Holland had begun to fabricate a European version, but the quality of the original was difficult to imitate. European ink sticks tended to be gritty and to produce an inferior ink with a distinctly brownish tint. The European product was designed to closely resemble the Asian product, however, even to the point of inscribing the sticks with Chinese characters. Given this situation, India and Chinese ink became generic terms for black carbon inks generally.11 Regardless of their geographic origin, India inks are commonly composed of a finely ground black carbon pigment, usually lamp black, and a glue or gum binder. Most imported India ink was manufactured in China. The most common source for the finest Chinese lamp black was tung oil. According to one account, the lampblack was produced by burning the oil in wick lamps over which terra cotta cones with polished inner walls were suspended. The soot from combustion of the oil collected on the smooth walls of the cone and was removed hourly with a feather, care being taken not to collect the resinous by-products which were also deposited on the cone walls. Only the very smallest particles of lamp black were suitable for fine inks. Some manufacturers sorted the particles during combustion by channeling the smoke through various tubes, chambers and partitions. The purity and fineness of the lamp black was critical to the quality of the final ink. The presence of resinous or oily by-products from incomplete combustion produces an ink that has a brown tint and may turn browner with time until it resembles iron gall ink. The lamp black is kept in aqueous suspension with a binder that prevents the finely divided particles from coalescing and precipitating. The binders in Chinese ink sticks are most commonly glues from fish and animal sources or gums, especially gum arabic. A clear gelatin from fish skins was used for particularly fine inks. Mucilaginous substances such as agars and alginates from seaweed were also occasionally used. To prepare the ink sticks, glue or gum was poured through a sieve onto the lampblack pigment. This mixture was heated over steam and then pounded in a mortar until it became pliable. Additives, particularly musk and/or camphor, were added to the ink and served as insecticides and fungicides as well as satisfying certain religious and cultural requirements.
The ink was then molded into sticks and dried slowly.12 The European product was produced by combining lamp black from the soot of a variety of sources including burning oils, resins or resinous woods, charcoal, twigs, bones, ivory and seeds or stones of various plants and fruits. The soot was finely ground and mixed with a binding media of water and one of a variety of glues and gums. Gum arabic was the most common choice according to most recipes.13 One should note here, to avoid confusion, that bottled waterproof inks commonly referred to as India inks and still in use by modern drafters, illustrators and artists, were developed about 1790, but were probably not widely adopted by American architects for at least another half century and are not mentioned in trade catalogs or manuals until 1870. They consisted of lamp black in an aqueous suspension with a binder of shellac or a resin dissolved in borax or a similar soap. A study of the trade catalogs of American companies has been useful in establishing what inks were available to drafters during the nineteenth century. When N.D. Cotton of Boston published his catalog of artist's supplies in the 1840's, only one type of ink was available, India ink in sticks described only by shape, including Octagon, Large Square or Oval Lions Head.14 This description provided little guidance regarding quality for the consumer. Manuals of the period advise that ink sticks be rubbed against the teeth to test for grittiness or slightly dampened to see if they had a smooth, soft feel. Ox gall was available as an additive to help the ink flow, particularly across oil impregnated tracing papers.15 Cotton's catalog of 1855, and Goupil and Co.'s catalog of 1857, listed a similar assortment of ink sticks as well two types of black bottled inks.16 Liquid India ink may have been an early waterproof ink or more likely, a ground India ink prepared by the colormen to save the architect the trouble. Hogan & Thompson's Jet Black Ink was, according to the text in Cotton's catalog, produced by the chemical combination of new material with the gallate of iron to produce an ink that was permanent in nature and fluid from the pen. Hogan & Thompson's Jet Black Ink was probably what became known as Japan ink, an oxidized iron gall ink composed primarily of ferric tannate with large amounts of gum added to keep the pigment in suspension. It produces a very dense and glossy line, but has the reputation of turning slightly brown with age and tending to clog in the pen during use. Inks of this type were probably used for the dense black outlines habitually used in plans and known as poche'. A clue to its identification is ink that has a slightly brown and crusty appearance and that causes the transfer staining typical of iron gall inks.17 In an effort to further define the black inks used by American drafters before 1860, a group of fourteen representative drawings were subjected to elemental analysis using X-ray Fluorescence Spectroscopy (XRF).18 This analytical instrument cannot detect carbon or the compounds found in most organic pigments, but it can detect most of the elements present in inorganic pigments. Of the nine drawings done before 1840, five showed significant amounts of iron in the black pigment areas. Only one of the five drawings done after 1840 had iron. Five of the earlier drawings, including three that also had iron, exhibited significant amounts of calcium, probably indicating the presence of bone black. 
This pigment is a more likely component of European than of Chinese inks. The iron in the black pigment areas could have two major sources. The ink could be a carbon base ink mixed with an iron containing pigment such as Prussian blue. The Prussian blue would have masked the slightly brown tone many carbon inks, particularly those of European manufacture, tended to have. If this is the case, indigo, a frequently used pigment in these drawings, may also have been added to some of the inks for the same reason, but this pigment is organic and cannot be identified with XRF. Manuals note that ink washes were frequently tinted with various water color pigments to warm or cool the tone of the ink, so the use of pigments to adjust the tone of inked lines and washes is not unexpected. A second option, particularly in dense black areas, such as poche's and windows, would be the use of Japan black, the oxidized iron gall ink described above. These results suggest that the India ink in many drawings, particularly those done before 1840, may have come from European sources and that watercolor pigments, particularly blues, may have frequently been added to the ink to adjust its tone. Since the European inks may be less stable, and pigments such as indigo are moderately fugitive, the media in these drawings may be more vulnerable to deterioration and light damage than is commonly assumed. As this study shows, ink was the defining media in most architectural drawings made before 1860, although its composition and use changed over time to adjust to stylistic trends and the emerging role of the professional architect. In studying and treating these drawings, conservators need to be aware of the physical evidence left by drafting instruments and the many possible variants in ink application and composition. The research in this paper was undertaken as part of a larger project investigating the fabrication and preservation of American architectural drawings prior to 1930. This research has been supported by a Peterson Fellowship from the Athenaeum of Philadelphia, a Research Fellowship from the Winterthur Museum Library and grants from the Institute for Museum Services, the Graham Foundation and the Council on Library Resources. In addition, my former employer, the Conservation Center for Art and Historic Artifacts in Philadelphia, supported the project and granted the leave necessary to undertake the research. 1. Much of this summary history was drawn from Jeffrey A. Cohen, "Early American Architectural Drawings," Drawing Toward Building: Philadelphia Architectural Graphics, 1732-1986 (Philadelphia: Pennsylvania Academy of the Fine Arts) 1986, pp. 15-116. 2. For a thorough discussion of drawing instruments see Maya Hambly, Drawing Instruments 1580-1980 (Sothbey's Publications: London) 1988. 3. Drawings of Joseph Horatio Anderson of Whitehall and the Maryland Statehouse are found in the Downs Collection of Manuscripts and Printed Ephemera at the Winterthur Museum, Winterthur, DE. Jefferson's drawings for the University of Virginia are found in Alderman Library, University of Virginia. Many conclusions drawn in this paper are based on the examination of a large number of architectural drawings that has helped document when particular types of supports and media were used. 4. Owen Biddle, The Young Carpenter's Assistant (Philadelphia: Benjamin Johnson) 1805, p. 5. 5. R.G. Hatfield, The American House Carpenter, 3rd ed. (New York: John Wiley) 1849, p. 7. 
John Hall, Modern Designs for Dwelling Houses (Baltimore: John Murphy) 1840, p. 32. William Minifie, Essay on the Theory of Color and Its Application to Architectural and Mechanical Drawing (Baltimore: William Minifie) 1854, p. 150. 6. Hatfield, pp. 7-8. 7. William Johnson, The Practical Draughtsman's Book of Industrial Design (New York: Stringer and Townsend) 1854, p. 122. 8. The Rudiments of Architecture or the Young Workman's Instructor (Dundee, England) 1799, p. 60. 9. Johnson, p. 102. 10. "Washing in Painting," The Cyclopedia; or Universal Dictionary of Arts, Sciences, and Literature, ed. Abraham Rees, v. 39 (Philadelphia: Samuel F. Bradford) 1810-24, n.p. 11. James Watrous, The Craft of Old Master Drawings (Madison, WI: The University of Wisconsin Press) 1957, pp. 67-68. 12. James Stroud, "Inks on Manuscripts," unpublished typescript, pp. 42-45. John Winter, "Preliminary Investigations on Chinese Ink in Eastern Painting," Journal of Archeological Chemistry, 2 (Winter), pp. 207-225. 13. Hambly, p. 62. 14. N. D. Cotton's Catalogue (Boston) 184-, p. 13. 15. Appleton's Cyclopedia of Drawing, ed. W. E. Worthen (New York: D. Appleton & Co.) 1857, p. 41. Hatfield, p. 5. 16. N. D. Cotton's Catalogue of Drawing Materials, Stationary (Boston) 1855, p. 80. Goupil & Co., Catalogue and Price List of Artist's Materials (New York) 1857, pp. 37, 57. 17. Sigmund Lehner, Ink Manufacture (London: Scott Greenwood & Son) 1926, p. 44. 18. The analytical work cited here was undertaken by the Conservation Analytical Laboratory at the Winterthur Museum.
Lois Olcott Price. Paper delivered at the Book and Paper specialty group session, AIC 22nd Annual Meeting, June 11, 1994, [PLACE]. Papers for the specialty group session are selected by committee, based on abstracts, and there has been no further peer review. Papers are received by the compiler in the Fall following the meeting and the author is welcome to make revisions, minor or major.
http://cool.conservation-us.org/coolaic/sg/bpg/annual/v13/bp13-08.html
18
326
Special relativity(Redirected from Special Relativity) In physics, special relativity (SR, also known as the special theory of relativity or STR) is the generally accepted and experimentally well-confirmed physical theory regarding the relationship between space and time. In Albert Einstein's original pedagogical treatment, it is based on two postulates: - The laws of physics are invariant (i.e., identical) in all inertial systems (i.e., non-accelerating frames of reference). - The speed of light in a vacuum is the same for all observers, regardless of the motion of the light source. It was originally proposed by Albert Einstein in a paper published 26 September 1905 titled "On the Electrodynamics of Moving Bodies". The inconsistency of Newtonian mechanics with Maxwell's equations of electromagnetism and the lack of experimental confirmation for a hypothesized luminiferous aether led to the development of special relativity, which corrects mechanics to handle situations involving motions at a significant fraction of the speed of light (known as relativistic velocities). As of today, special relativity is the most accurate model of motion at any speed when gravitational effects are negligible. Even so, the Newtonian mechanics model is still useful as an approximation at small velocities relative to the speed of light, due to its simplicity and high accuracy within its scope. Not until Einstein developed general relativity, to incorporate general (i.e., including accelerated) frames of reference and gravity, was the phrase "special relativity" employed. A translation that has often been used is "restricted relativity"; "special" really means "special case". Special relativity implies a wide range of consequences, which have been experimentally verified, including length contraction, time dilation, relativistic mass, mass–energy equivalence, a universal speed limit and relativity of simultaneity. It has replaced the conventional notion of an absolute universal time with the notion of a time that is dependent on reference frame and spatial position. Rather than an invariant time interval between two events, there is an invariant spacetime interval. Combined with other laws of physics, the two postulates of special relativity predict the equivalence of mass and energy, as expressed in the mass–energy equivalence formula E = mc2, where c is the speed of light in a vacuum. A defining feature of special relativity is the replacement of the Galilean transformations of Newtonian mechanics with the Lorentz transformations. Time and space cannot be defined separately from each other. Rather, space and time are interwoven into a single continuum known as spacetime. Events that occur at the same time for one observer can occur at different times for another. The theory is "special" in that it only applies in the special case where the curvature of spacetime due to gravity is negligible. In order to include gravity, Einstein formulated general relativity in 1915. Special relativity, contrary to some outdated descriptions, is capable of handling accelerations as well as accelerated frames of reference. As Galilean relativity is now considered an approximation of special relativity that is valid for low speeds, special relativity is considered an approximation of general relativity that is valid for weak gravitational fields, i.e. at a sufficiently small scale and in conditions of free fall. 
Whereas general relativity incorporates noneuclidean geometry in order to represent gravitational effects as the geometric curvature of spacetime, special relativity is restricted to the flat spacetime known as Minkowski space. A locally Lorentz-invariant frame that abides by special relativity can be defined at sufficiently small scales, even in curved spacetime. Galileo Galilei had already postulated that there is no absolute and well-defined state of rest (no privileged reference frames), a principle now called Galileo's principle of relativity. Einstein extended this principle so that it accounted for the constant speed of light, a phenomenon that had been recently observed in the Michelson–Morley experiment. He also postulated that it holds for all the laws of physics, including both the laws of mechanics and of electrodynamics. |“||Reflections of this type made it clear to me as long ago as shortly after 1900, i.e., shortly after Planck's trailblazing work, that neither mechanics nor electrodynamics could (except in limiting cases) claim exact validity. Gradually I despaired of the possibility of discovering the true laws by means of constructive efforts based on known facts. The longer and the more desperately I tried, the more I came to the conviction that only the discovery of a universal formal principle could lead us to assured results... How, then, could such a universal principle be found?||”| |— Albert Einstein: Autobiographical Notes| Einstein discerned two fundamental propositions that seemed to be the most assured, regardless of the exact validity of the (then) known laws of either mechanics or electrodynamics. These propositions were the constancy of the speed of light and the independence of physical laws (especially the constancy of the speed of light) from the choice of inertial system. In his initial presentation of special relativity in 1905 he expressed these postulates as: - The Principle of Relativity – The laws by which the states of physical systems undergo change are not affected, whether these changes of state be referred to the one or the other of two systems in uniform translatory motion relative to each other. - The Principle of Invariant Light Speed – "... light is always propagated in empty space with a definite velocity [speed] c which is independent of the state of motion of the emitting body" (from the preface). That is, light in vacuum propagates with the speed c (a fixed constant, independent of direction) in at least one system of inertial coordinates (the "stationary system"), regardless of the state of motion of the light source. The derivation of special relativity depends not only on these two explicit postulates, but also on several tacit assumptions (made in almost all theories of physics), including the isotropy and homogeneity of space and the independence of measuring rods and clocks from their past history. Following Einstein's original presentation of special relativity in 1905, many different sets of postulates have been proposed in various alternative derivations. However, the most common set of postulates remains those employed by Einstein in his original paper. 
A more mathematical statement of the Principle of Relativity made later by Einstein, which introduces the concept of simplicity not mentioned above is: Special principle of relativity: If a system of coordinates K is chosen so that, in relation to it, physical laws hold good in their simplest form, the same laws hold good in relation to any other system of coordinates K' moving in uniform translation relatively to K. Henri Poincaré provided the mathematical framework for relativity theory by proving that Lorentz transformations are a subset of his Poincaré group of symmetry transformations. Einstein later derived these transformations from his axioms. Many of Einstein's papers present derivations of the Lorentz transformation based upon these two principles. Einstein consistently based the derivation of Lorentz invariance (the essential core of special relativity) on just the two basic principles of relativity and light-speed invariance. He wrote: The insight fundamental for the special theory of relativity is this: The assumptions relativity and light speed invariance are compatible if relations of a new type ("Lorentz transformation") are postulated for the conversion of coordinates and times of events... The universal principle of the special theory of relativity is contained in the postulate: The laws of physics are invariant with respect to Lorentz transformations (for the transition from one inertial system to any other arbitrarily chosen inertial system). This is a restricting principle for natural laws... From the principle of relativity alone without assuming the constancy of the speed of light (i.e. using the isotropy of space and the symmetry implied by the principle of special relativity) one can show that the spacetime transformations between inertial frames are either Euclidean, Galilean, or Lorentzian. In the Lorentzian case, one can then obtain relativistic interval conservation and a certain finite limiting speed. Experiments suggest that this speed is the speed of light in vacuum. The constancy of the speed of light was motivated by Maxwell's theory of electromagnetism and the lack of evidence for the luminiferous ether. There is conflicting evidence on the extent to which Einstein was influenced by the null result of the Michelson–Morley experiment. In any case, the null result of the Michelson–Morley experiment helped the notion of the constancy of the speed of light gain widespread and rapid acceptance. Lack of an absolute reference frameEdit The principle of relativity, which states that physical laws have the same form in each inertial reference frame, dates back to Galileo, and was incorporated into Newtonian physics. However, in the late 19th century, the existence of electromagnetic waves led physicists to suggest that the universe was filled with a substance that they called "aether", which would act as the medium through which these waves, or vibrations travelled. The aether was thought to constitute an absolute reference frame against which speeds could be measured, and could be considered fixed and motionless. Aether supposedly possessed some wonderful properties: it was sufficiently elastic to support electromagnetic waves, and those waves could interact with matter, yet it offered no resistance to bodies passing through it. The results of various experiments, including the Michelson–Morley experiment, led to the theory of special relativity, by showing that there was no aether. 
Einstein's solution was to discard the notion of an aether and the absolute state of rest. In relativity, any reference frame moving with uniform motion will observe the same laws of physics. In particular, the speed of light in vacuum is always measured to be c, even when measured by multiple systems that are moving at different (but constant) velocities.
Reference frames, coordinates, and the Lorentz transformation
Reference frames and relative motion
Reference frames play a crucial role in relativity theory. The term reference frame as used here is an observational perspective in space which is not undergoing any change in motion (acceleration), from which a position can be measured along 3 spatial axes. In addition, a reference frame has the ability to determine measurements of the time of events using a 'clock' (any reference device with uniform periodicity). An event is an occurrence that can be assigned a single unique time and location in space relative to a reference frame: it is a "point" in spacetime. Since the speed of light is constant in relativity in each and every reference frame, pulses of light can be used to unambiguously measure distances and to refer the times at which events occurred back to the clock, even though light takes time to reach the clock after the event has transpired. For example, the explosion of a firecracker may be considered to be an "event". We can completely specify an event by its four spacetime coordinates: the time of occurrence and its 3-dimensional spatial location define a reference point. Let's call this reference frame S. In relativity theory we often want to calculate the coordinates of an event from a different reference frame. Suppose we have a second reference frame S′, whose spatial axes and clock exactly coincide with those of S at time zero, but which is moving at a constant velocity v with respect to S along the x-axis. Since there is no absolute reference frame in relativity theory, a concept of 'moving' doesn't strictly exist, as everything is always moving with respect to some other reference frame. Instead, any two frames that move at the same speed in the same direction are said to be comoving. Therefore, S and S′ are not comoving. Define the event to have spacetime coordinates (t, x, y, z) in system S and (t′, x′, y′, z′) in the frame S′ moving at velocity v with respect to S. Then the Lorentz transformation specifies that these coordinates are related in the following way:
t′ = γ(t − vx/c²), x′ = γ(x − vt), y′ = y, z′ = z,
where
γ = 1 / √(1 − v²/c²)
is the Lorentz factor and c is the speed of light in vacuum, and the velocity v of S′ is parallel to the x-axis. The y and z coordinates are unaffected; only the x and t coordinates are transformed. These Lorentz transformations form a one-parameter group of linear mappings, that parameter being called rapidity. There is nothing special about the x-axis: the transformation can apply to the y- or z-axis, or indeed in any direction, by resolving the coordinates into components parallel to the motion (which are warped by the γ factor) and perpendicular to it; see the main article for details. Writing the Lorentz transformation and its inverse in terms of coordinate differences, where for instance one event has coordinates (x1, t1) and (x′1, t′1), another event has coordinates (x2, t2) and (x′2, t′2), and the differences are defined as Δx = x2 − x1, Δt = t2 − t1, Δx′ = x′2 − x′1, Δt′ = t′2 − t′1, we get
Δt′ = γ(Δt − v Δx/c²), Δx′ = γ(Δx − v Δt)
and, inversely,
Δt = γ(Δt′ + v Δx′/c²), Δx = γ(Δx′ + v Δt′).
These effects are explicitly related to our way of measuring time intervals between events which occur at the same place in a given coordinate system (called "co-local" events).
These time intervals will be different in another coordinate system moving with respect to the first, unless the events are also simultaneous. Similarly, these effects also relate to our measured distances between separated but simultaneous events in a given coordinate system of choice. If these events are not co-local, but are separated by distance (space), they will not occur at the same spatial distance from each other when seen from another moving coordinate system. However, the spacetime interval will be the same for all observers. Measurement versus visual appearanceEdit Time dilation and length contraction are not optical illusions, but genuine effects. Measurements of these effects are not an artifact of Doppler shift, nor are they the result of neglecting to take into account the time it takes light to travel from an event to an observer. Scientists make a fundamental distinction between measurement or observation on the one hand, versus visual appearance, or what one sees. The measured shape of an object is a hypothetical snapshot of all of the object's points as they exist at a single moment in time. The visual appearance of an object, however, is affected by the varying lengths of time that light takes to travel from different points on the object to one's eye. For many years, the distinction between the two had not been generally appreciated, and it had generally been thought that a length contracted object passing by an observer would in fact actually be seen as length contracted. In 1959, James Terrell and Roger Penrose independently pointed out that differential time lag effects in signals reaching the observer from the different parts of a moving object result in a fast moving object's visual appearance being quite different from its measured shape. For example, a receding object would appear contracted, an approaching object would appear elongated, and a passing object would have a skew appearance that has been likened to a rotation. A sphere in motion retains the appearance of a sphere, although images on the surface of the sphere will appear distorted. Fig. 1‑13 illustrates a cube viewed from a distance of four times the length of its sides. At high speeds, the sides of the cube that are perpendicular to the direction of motion appear hyperbolic in shape. The cube is actually not rotated. Rather, light from the rear of the cube takes longer to reach one's eyes compared with light from the front, during which time the cube has moved to the right. This illusion has come to be known as Terrell rotation or the Terrell–Penrose effect.[note 1] Another example where visual appearance is at odds with measurement comes from the observation of apparent superluminal motion in various radio galaxies, BL Lac objects, quasars, and other astronomical objects that eject relativistic-speed jets of matter at narrow angles with respect to the viewer. An optical illusion results giving the appearance of faster than light travel. In Fig. 1‑14, galaxy M87 streams out a high-speed jet of subatomic particles almost directly towards us, but Penrose–Terrell rotation causes the jet to appear to be moving laterally in the same manner that the appearance of the cube in Fig. 1‑13 has been stretched out. Consequences derived from the Lorentz transformationEdit The consequences of special relativity can be derived from the Lorentz transformation equations. 
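As a purely illustrative aside, the boost can be applied numerically. The following sketch (Python; the 0.6c frame speed and the sample event are my own example values, not from the article) transforms one event's coordinates from S to S′:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_boost(t, x, y, z, v):
    """Transform event coordinates from frame S to a frame S' moving
    at constant velocity v along the x-axis of S (standard configuration)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)  # Lorentz factor
    t_prime = gamma * (t - v * x / C ** 2)
    x_prime = gamma * (x - v * t)
    return t_prime, x_prime, y, z  # y and z are unaffected

# Example: an event 1 light-second away along x, occurring at t = 1 s,
# as described from a frame moving at 0.6c (so gamma = 1.25).
t_p, x_p, _, _ = lorentz_boost(1.0, C, 0.0, 0.0, 0.6 * C)
print(t_p, x_p)  # ≈ 0.5 s and ≈ 1.5e8 m (0.5 light-seconds)
```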
These transformations, and hence special relativity, lead to different physical predictions than those of Newtonian mechanics when relative velocities become comparable to the speed of light. The speed of light is so much larger than anything humans encounter that some of the effects predicted by relativity are initially counterintuitive.

Relativity of simultaneity

Two events happening in two different locations that occur simultaneously in the reference frame of one inertial observer may occur non-simultaneously in the reference frame of another inertial observer (lack of absolute simultaneity). From the first equation of the Lorentz transformation in terms of coordinate differences it is clear that two events that are simultaneous in frame S (satisfying Δt = 0) are not necessarily simultaneous in another inertial frame S′ (satisfying Δt′ = 0). Only if these events are additionally co-local in frame S (satisfying Δx = 0) will they be simultaneous in another frame S′.

The time lapse between two events is not invariant from one observer to another, but is dependent on the relative speeds of the observers' reference frames (e.g., the twin paradox, which concerns a twin who flies off in a spaceship traveling near the speed of light and returns to discover that his or her twin sibling has aged much more). Suppose a clock is at rest in the unprimed system S. The location of the clock on two different ticks is then characterized by Δx = 0. To find the relation between the times between these ticks as measured in both systems, the first equation can be used to find:

Δt′ = γΔt    (for events satisfying Δx = 0).

This shows that the time (Δt′) between the two ticks as seen in the frame in which the clock is moving (S′) is longer than the time (Δt) between these ticks as measured in the rest frame of the clock (S). Time dilation explains a number of physical phenomena; for example, the lifetime of muons produced by cosmic rays impinging on the Earth's atmosphere is measured to be greater than the lifetimes of muons measured in the laboratory.

The dimensions (e.g., length) of an object as measured by one observer may be smaller than the results of measurements of the same object made by another observer (e.g., the ladder paradox involves a long ladder traveling near the speed of light and being contained within a smaller garage). Similarly, suppose a measuring rod is at rest and aligned along the x-axis in the unprimed system S. In this system, the length of this rod is written as Δx. To measure the length of this rod in the system S′, in which the rod is moving, the distances x′ to the end points of the rod must be measured simultaneously in that system S′. In other words, the measurement is characterized by Δt′ = 0, which can be combined with the fourth equation to find the relation between the lengths Δx and Δx′:

Δx′ = Δx/γ    (for events satisfying Δt′ = 0).

This shows that the length (Δx′) of the rod as measured in the frame in which it is moving (S′) is shorter than its length (Δx) in its own rest frame (S).

Composition of velocities

Velocities (speeds) do not simply add. If the observer in S measures an object moving along the x axis at velocity u, then the observer in the S′ system, a frame of reference moving at velocity v in the x direction with respect to S, will measure the object moving with velocity u′, where (from the Lorentz transformations above)

u′ = (u − v)/(1 − uv/c²).

The other frame S will measure

u = (u′ + v)/(1 + u′v/c²).

Notice that if the object were moving at the speed of light in the S system (i.e.
u = c), then it would also be moving at the speed of light in the S′ system. Also, if both u and v are small with respect to the speed of light, we will recover the intuitive Galilean transformation of velocities The usual example given is that of a train (frame S′ above) traveling due east with a velocity v with respect to the tracks (frame S). A child inside the train throws a baseball due east with a velocity u′ with respect to the train. In nonrelativistic physics, an observer at rest on the tracks will measure the velocity of the baseball (due east) as u = u′ + v, while in special relativity this is no longer true; instead the velocity of the baseball (due east) is given by the second equation: Again, there is nothing special about the x or east directions. This formalism applies to any direction by considering parallel and perpendicular components of motion to the direction of relative velocity v, see main article for details. The orientation of an object (i.e. the alignment of its axes with the observer's axes) may be different for different observers. Unlike other relativistic effects, this effect becomes quite significant at fairly low velocities as can be seen in the spin of moving particles. Equivalence of mass and energyEdit As an object's speed approaches the speed of light from an observer's point of view, its relativistic mass increases thereby making it more and more difficult to accelerate it from within the observer's frame of reference. The energy content of an object at rest with mass m equals mc2. Conservation of energy implies that, in any reaction, a decrease of the sum of the masses of particles must be accompanied by an increase in kinetic energies of the particles after the reaction. Similarly, the mass of an object can be increased by taking in kinetic energies. In addition to the papers referenced above—which give derivations of the Lorentz transformation and describe the foundations of special relativity—Einstein also wrote at least four papers giving heuristic arguments for the equivalence (and transmutability) of mass and energy, for E = mc2. Mass–energy equivalence is a consequence of special relativity. The energy and momentum, which are separate in Newtonian mechanics, form a four-vector in relativity, and this relates the time component (the energy) to the space components (the momentum) in a non-trivial way. For an object at rest, the energy–momentum four-vector is (E/c, 0, 0, 0): it has a time component which is the energy, and three space components which are zero. By changing frames with a Lorentz transformation in the x direction with a small value of the velocity v, the energy momentum four-vector becomes (E/c, Ev/c2, 0, 0). The momentum is equal to the energy multiplied by the velocity divided by c2. As such, the Newtonian mass of an object, which is the ratio of the momentum to the velocity for slow velocities, is equal to E/c2. The energy and momentum are properties of matter and radiation, and it is impossible to deduce that they form a four-vector just from the two basic postulates of special relativity by themselves, because these don't talk about matter or radiation, they only talk about space and time. The derivation therefore requires some additional physical reasoning. 
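Whichever route one takes to the energy–momentum four-vector, the resulting relation can be checked numerically. The sketch below (Python; the 1 kg mass and the chosen speeds are arbitrary illustrative values) computes the relativistic energy and momentum and verifies that E² − (pc)² equals (mc²)² at every speed, i.e. that the combination is frame-independent:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def energy_momentum(m, v):
    """Relativistic energy and momentum of a particle of rest mass m moving at speed v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    E = gamma * m * C ** 2   # total energy (reduces to mc^2 at v = 0)
    p = gamma * m * v        # relativistic momentum
    return E, p

m = 1.0  # kg (illustrative)
for v in (0.0, 0.5 * C, 0.8 * C):
    E, p = energy_momentum(m, v)
    print(v / C, E ** 2 - (p * C) ** 2, (m * C ** 2) ** 2)
    # the last two columns agree (up to rounding) for every speed
```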
In his 1905 paper, Einstein used the additional principles that Newtonian mechanics should hold for slow velocities, so that there is one energy scalar and one three-vector momentum at slow velocities, and that the conservation law for energy and momentum is exactly true in relativity. Furthermore, he assumed that the energy of light is transformed by the same Doppler-shift factor as its frequency, which he had previously shown to be true based on Maxwell's equations. The first of Einstein's papers on this subject was "Does the Inertia of a Body Depend upon its Energy Content?" in 1905. Although Einstein's argument in this paper is nearly universally accepted by physicists as correct, even self-evident, many authors over the years have suggested that it is wrong. Other authors suggest that the argument was merely inconclusive because it relied on some implicit assumptions. Einstein acknowledged the controversy over his derivation in his 1907 survey paper on special relativity. There he notes that it is problematic to rely on Maxwell's equations for the heuristic mass–energy argument. The argument in his 1905 paper can be carried out with the emission of any massless particles, but the Maxwell equations are implicitly used to make it obvious that the emission of light in particular can be achieved only by doing work. To emit electromagnetic waves, all you have to do is shake a charged particle, and this is clearly doing work, so that the emission is of energy. How far can one travel from the Earth?Edit Since one can not travel faster than light, one might conclude that a human can never travel farther from Earth than 40 light years if the traveler is active between the ages of 20 and 60. One would easily think that a traveler would never be able to reach more than the very few solar systems which exist within the limit of 20–40 light years from the earth. But that would be a mistaken conclusion. Because of time dilation, a hypothetical spaceship can travel thousands of light years during the pilot's 40 active years. If a spaceship could be built that accelerates at a constant 1g, it will, after a little less than a year, be travelling at almost the speed of light as seen from Earth. This is described by: where v(t) is the velocity at a time t, a is the acceleration of 1g and t is the time as measured by people on Earth. Therefore, after one year of accelerating at 9.81 m/s2, the spaceship will be travelling at v = 0.77c relative to Earth. Time dilation will increase the travellers life span as seen from the reference frame of the Earth to 2.7 years, but his lifespan measured by a clock travelling with him will not change. During his journey, people on Earth will experience more time than he does. A 5-year round trip for him will take 6.5 Earth years and cover a distance of over 6 light-years. A 20-year round trip for him (5 years accelerating, 5 decelerating, twice each) will land him back on Earth having travelled for 335 Earth years and a distance of 331 light years. A full 40-year trip at 1g will appear on Earth to last 58,000 years and cover a distance of 55,000 light years. A 40-year trip at 1.1g will take 148,000 Earth years and cover about 140,000 light years. A one-way 28 year (14 years accelerating, 14 decelerating as measured with the astronaut's clock) trip at 1g acceleration could reach 2,000,000 light-years to the Andromeda Galaxy. This same time dilation is why a muon travelling close to c is observed to travel much further than c times its half-life (when at rest). 
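The formula referred to above is presumably the standard constant-proper-acceleration ("relativistic rocket") relation, which can be evaluated directly. In the sketch below (Python; the 1 g value and the one-year duration are taken from the text), v = at/√(1 + (at/c)²) with t the Earth time gives roughly 0.72c after one year, while the often-quoted 0.77c figure corresponds to one year of the traveller's own (proper) time τ via v = c·tanh(aτ/c):

```python
import math

C = 299_792_458.0            # speed of light, m/s
G = 9.81                     # proper acceleration, m/s^2
YEAR = 365.25 * 24 * 3600.0  # one year in seconds

def speed_earth_time(t):
    """Speed seen from Earth after coordinate time t at constant proper acceleration G."""
    return G * t / math.sqrt(1.0 + (G * t / C) ** 2)

def speed_proper_time(tau):
    """Speed after proper time tau (time on the traveller's own clock)."""
    return C * math.tanh(G * tau / C)

print(speed_earth_time(YEAR) / C)   # ≈ 0.72
print(speed_proper_time(YEAR) / C)  # ≈ 0.77
```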
Causality and prohibition of motion faster than lightEdit In diagram 2 the interval AB is 'time-like'; i.e., there is a frame of reference in which events A and B occur at the same location in space, separated only by occurring at different times. If A precedes B in that frame, then A precedes B in all frames. It is hypothetically possible for matter (or information) to travel from A to B, so there can be a causal relationship (with A the cause and B the effect). The interval AC in the diagram is 'space-like'; i.e., there is a frame of reference in which events A and ' occur simultaneously, separated only in space. There are also frames in which A precedes C (as shown) and frames in which C precedes A. If it were possible for a cause-and-effect relationship to exist between events A and C, then paradoxes of causality would result. For example, if A was the cause, and C the effect, then there would be frames of reference in which the effect preceded the cause. Although this in itself will not give rise to a paradox, one can show that faster than light signals can be sent back into one's own past. A causal paradox can then be constructed by sending the signal if and only if no signal was received previously. Therefore, if causality is to be preserved, one of the consequences of special relativity is that no information signal or material object can travel faster than light in vacuum. However, some "things" can still move faster than light. For example, the location where the beam of a search light hits the bottom of a cloud can move faster than light when the search light is turned rapidly. Even without considerations of causality, there are other strong reasons why faster-than-light travel is forbidden by special relativity. For example, if a constant force is applied to an object for a limitless amount of time, then integrating F = dp/dt gives a momentum that grows without bound, but this is simply because approaches infinity as approaches c. To an observer who is not accelerating, it appears as though the object's inertia is increasing, so as to produce a smaller acceleration in response to the same force. This behavior is observed in particle accelerators, where each charged particle is accelerated by the electromagnetic force. Geometry of spacetimeEdit Comparison between flat Euclidean space and Minkowski spaceEdit Special relativity uses a 'flat' 4-dimensional Minkowski space – an example of a spacetime. Minkowski spacetime appears to be very similar to the standard 3-dimensional Euclidean space, but there is a crucial difference with respect to time. In 3D space, the differential of distance (line element) ds is defined by where dx = (dx1, dx2, dx3) are the differentials of the three spatial dimensions. In Minkowski geometry, there is an extra dimension with coordinate X0 derived from time, such that the distance differential fulfills where dX = (dX0, dX1, dX2, dX3) are the differentials of the four spacetime dimensions. This suggests a deep theoretical insight: special relativity is simply a rotational symmetry of our spacetime, analogous to the rotational symmetry of Euclidean space (see image right). Just as Euclidean space uses a Euclidean metric, so spacetime uses a Minkowski metric. Basically, special relativity can be stated as the invariance of any spacetime interval (that is the 4D distance between any two events) when viewed from any inertial reference frame. 
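The two line elements this passage refers to did not survive the page extraction; in standard notation (using the (−, +, +, +) sign choice, which is one of the two common conventions) they read:

\[
ds^2 = dx_1^2 + dx_2^2 + dx_3^2 \qquad \text{(Euclidean 3-space)}
\]
\[
ds^2 = -\,dX_0^2 + dX_1^2 + dX_2^2 + dX_3^2, \qquad X_0 = ct \qquad \text{(Minkowski spacetime)}
\]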
All equations and effects of special relativity can be derived from this rotational symmetry (the Poincaré group) of Minkowski spacetime. The actual form of ds above depends on the metric and on the choices for the X0 coordinate. To make the time coordinate look like the space coordinates, it can be treated as imaginary: X0 = ict (this is called a Wick rotation). According to Misner, Thorne and Wheeler (1971, §2.3), ultimately the deeper understanding of both special and general relativity will come from the study of the Minkowski metric (described below) and to take X0 = ct, rather than a "disguised" Euclidean metric using ict as the time coordinate. Some authors use X0 = t, with factors of c elsewhere to compensate; for instance, spatial coordinates are divided by c or factors of c±2 are included in the metric tensor. These numerous conventions can be superseded by using natural units where c = 1. Then space and time have equivalent units, and no factors of c appear anywhere. If we reduce the spatial dimensions to 2, so that we can represent the physics in a 3D space which is the equation of a circle of radius c dt. If we extend this to three spatial dimensions, the null geodesics are the 4-dimensional cone: This null dual-cone represents the "line of sight" of a point in space. That is, when we look at the stars and say "The light from that star which I am receiving is X years old", we are looking down this line of sight: a null geodesic. We are looking at an event a distance away and a time d/c in the past. For this reason the null dual cone is also known as the 'light cone'. (The point in the lower left of the picture above right represents the star, the origin represents the observer, and the line represents the null geodesic "line of sight".) The cone in the −t region is the information that the point is 'receiving', while the cone in the +t section is the information that the point is 'sending'. Physics in spacetimeEdit Transformations of physical quantities between reference framesEdit Above, the Lorentz transformation for the time coordinate and three space coordinates illustrates that they are intertwined. This is true more generally: certain pairs of "timelike" and "spacelike" quantities naturally combine on equal footing under the same Lorentz transformation. The Lorentz transformation in standard configuration above, i.e. for a boost in the x direction, can be recast into matrix form as follows: In Newtonian mechanics, quantities which have magnitude and direction are mathematically described as 3d vectors in Euclidean space, and in general they are parametrized by time. In special relativity, this notion is extended by adding the appropriate timelike quantity to a spacelike vector quantity, and we have 4d vectors, or "four vectors", in Minkowski spacetime. The components of vectors are written using tensor index notation, as this has numerous advantages. The notation makes it clear the equations are manifestly covariant under the Poincaré group, thus bypassing the tedious calculations to check this fact. In constructing such equations, we often find that equations previously thought to be unrelated are, in fact, closely connected being part of the same tensor equation. Recognizing other physical quantities as tensors simplifies their transformation laws. Throughout, upper indices (superscripts) are contravariant indices rather than exponents except when they indicate a square (this should be clear from the context), and lower indices (subscripts) are covariant indices. 
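For reference, the matrix form of the x-direction boost mentioned earlier in this section (the explicit equation was lost in extraction) can be written in the usual notation as:

\[
\begin{pmatrix} ct' \\ x' \\ y' \\ z' \end{pmatrix}
=
\begin{pmatrix}
\gamma & -\gamma\beta & 0 & 0\\
-\gamma\beta & \gamma & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} ct \\ x \\ y \\ z \end{pmatrix},
\qquad \beta = \frac{v}{c},\quad \gamma = \frac{1}{\sqrt{1-\beta^2}}.
\]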
For simplicity and consistency with the earlier equations, Cartesian coordinates will be used. The simplest example of a four-vector is the position of an event in spacetime, which constitutes a timelike component ct and spacelike component x = (x, y, z), in a contravariant position four vector with components: where we define X0 = ct so that the time coordinate has the same dimension of distance as the other spatial dimensions; so that space and time are treated equally. Now the transformation of the contravariant components of the position 4-vector can be compactly written as: where the Lorentz factor is: where m is the invariant mass. The four-acceleration is the proper time derivative of 4-velocity: The transformation rules for three-dimensional velocities and accelerations are very awkward; even above in standard configuration the velocity equations are quite complicated owing to their non-linearity. On the other hand, the transformation of four-velocity and four-acceleration are simpler by means of the Lorentz transformation matrix. which is the transpose of: only in Cartesian coordinates. It's the covariant derivative which transforms in manifest covariance, in Cartesian coordinates this happens to reduce to the partial derivatives, but not in other coordinates. More generally, the covariant components of a 4-vector transform according to the inverse Lorentz transformation: where is the reciprocal matrix of . The postulates of special relativity constrain the exact form the Lorentz transformation matrices take. where is the reciprocal matrix of . All tensors transform by this rule. An example of a four dimensional second order antisymmetric tensor is the relativistic angular momentum, which has six components: three are the classical angular momentum, and the other three are related to the boost of the center of mass of the system. The derivative of the relativistic angular momentum with respect to proper time is the relativistic torque, also second order antisymmetric tensor. The electromagnetic field tensor is another second order antisymmetric tensor field, with six components: three for the electric field and another three for the magnetic field. There is also the stress–energy tensor for the electromagnetic field, namely the electromagnetic stress–energy tensor. The metric tensor allows one to define the inner product of two vectors, which in turn allows one to assign a magnitude to the vector. Given the four-dimensional nature of spacetime the Minkowski metric η has components (valid in any inertial reference frame) which can be arranged in a 4 × 4 matrix: which is equal to its reciprocal, , in those frames. Throughout we use the signs as above, different authors use different conventions – see Minkowski metric alternative signs. The Poincaré group is the most general group of transformations which preserves the Minkowski metric: and this is the physical symmetry underlying special relativity. The metric can be used for raising and lowering indices on vectors and tensors. Invariants can be constructed using the metric, the inner product of a 4-vector T with another 4-vector S is: Invariant means that it takes the same value in all inertial frames, because it is a scalar (0 rank tensor), and so no Λ appears in its trivial transformation. The magnitude of the 4-vector T is the positive square root of the inner product with itself: One can extend this idea to tensors of higher order, for a second order tensor we can form the invariants: similarly for higher order tensors. 
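The 4 × 4 matrix of the Minkowski metric referred to above, and the inner product built from it, were also lost in extraction; with the (−, +, +, +) sign convention assumed here (the opposite overall sign is equally common) they are:

\[
\eta_{\mu\nu} =
\begin{pmatrix}
-1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{pmatrix},
\qquad
T \cdot S = \eta_{\mu\nu}\, T^{\mu} S^{\nu},
\]

and this matrix is indeed its own inverse, as stated in the text.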
Invariant expressions, particularly inner products of 4-vectors with themselves, provide equations that are useful for calculations, because one doesn't need to perform Lorentz transformations to determine the invariants. Relativistic kinematics and invarianceEdit The coordinate differentials transform also contravariantly: so the squared length of the differential of the position four-vector dXμ constructed using The 4-velocity Uμ has an invariant form: which means all velocity four-vectors have a magnitude of c. This is an expression of the fact that there is no such thing as being at coordinate rest in relativity: at the least, you are always moving forward through time. Differentiating the above equation by τ produces: So in special relativity, the acceleration four-vector and the velocity four-vector are orthogonal. Relativistic dynamics and invarianceEdit We can work out what this invariant is by first arguing that, since it is a scalar, it doesn't matter in which reference frame we calculate it, and then by transforming to a frame where the total momentum is zero. We see that the rest energy is an independent invariant. A rest energy can be calculated even for particles and systems in motion, by translating to a frame in which momentum is zero. The rest energy is related to the mass according to the celebrated equation discussed above: Note that the mass of systems measured in their center of momentum frame (where total momentum is zero) is given by the total energy of the system in this frame. It may not be equal to the sum of individual system masses measured in other frames. To use Newton's third law of motion, both forces must be defined as the rate of change of momentum with respect to the same time coordinate. That is, it requires the 3D force defined above. Unfortunately, there is no tensor in 4D which contains the components of the 3D force vector among its components. If a particle is not traveling at c, one can transform the 3D force from the particle's co-moving reference frame into the observer's reference frame. This yields a 4-vector called the four-force. It is the rate of change of the above energy momentum four-vector with respect to proper time. The covariant version of the four-force is: In the rest frame of the object, the time component of the four force is zero unless the "invariant mass" of the object is changing (this requires a non-closed system in which energy/mass is being directly added or removed from the object) in which case it is the negative of that rate of change of mass, times c. In general, though, the components of the four force are not equal to the components of the three-force, because the three force is defined by the rate of change of momentum with respect to coordinate time, i.e. dp/dt while the four force is defined by the rate of change of momentum with respect to proper time, i.e. dp/dτ. In a continuous medium, the 3D density of force combines with the density of power to form a covariant 4-vector. The spatial part is the result of dividing the force on a small cell (in 3-space) by the volume of that cell. The time component is −1/c times the power transferred to that cell divided by the volume of the cell. This will be used below in the section on electromagnetism. Relativity and unifying electromagnetismEdit Theoretical investigation in classical electromagnetism led to the discovery of wave propagation. 
Equations generalizing the electromagnetic effects found that finite propagation speed of the E and B fields required certain behaviors on charged particles. The general study of moving charges forms the Liénard–Wiechert potential, which is a step towards special relativity. The Lorentz transformation of the electric field of a moving charge into a non-moving observer's reference frame results in the appearance of a mathematical term commonly called the magnetic field. Conversely, the magnetic field generated by a moving charge disappears and becomes a purely electrostatic field in a comoving frame of reference. Maxwell's equations are thus simply an empirical fit to special relativistic effects in a classical model of the Universe. As electric and magnetic fields are reference frame dependent and thus intertwined, one speaks of electromagnetic fields. Special relativity provides the transformation rules for how an electromagnetic field in one inertial frame appears in another inertial frame. Maxwell's equations in the 3D form are already consistent with the physical content of special relativity, although they are easier to manipulate in a manifestly covariant form, i.e. in the language of tensor calculus. Special relativity in its Minkowski spacetime is accurate only when the absolute value of the gravitational potential is much less than c2 in the region of interest. In a strong gravitational field, one must use general relativity. General relativity becomes special relativity at the limit of a weak field. At very small scales, such as at the Planck length and below, quantum effects must be taken into consideration resulting in quantum gravity. However, at macroscopic scales and in the absence of strong gravitational fields, special relativity is experimentally tested to extremely high degree of accuracy (10−20) and thus accepted by the physics community. Experimental results which appear to contradict it are not reproducible and are thus widely believed to be due to experimental errors. Special relativity is mathematically self-consistent, and it is an organic part of all modern physical theories, most notably quantum field theory, string theory, and general relativity (in the limiting case of negligible gravitational fields). Newtonian mechanics mathematically follows from special relativity at small velocities (compared to the speed of light) – thus Newtonian mechanics can be considered as a special relativity of slow moving bodies. See classical mechanics for a more detailed discussion. Several experiments predating Einstein's 1905 paper are now interpreted as evidence for relativity. Of these it is known Einstein was aware of the Fizeau experiment before 1905, and historians have concluded that Einstein was at least aware of the Michelson–Morley experiment as early as 1899 despite claims he made in his later years that it played no role in his development of the theory. - The Fizeau experiment (1851, repeated by Michelson and Morley in 1886) measured the speed of light in moving media, with results that are consistent with relativistic addition of colinear velocities. - The famous Michelson–Morley experiment (1881, 1887) gave further support to the postulate that detecting an absolute reference velocity was not achievable. 
It should be stated here that, contrary to many alternative claims, it said little about the invariance of the speed of light with respect to the source and observer's velocity, as both source and observer were travelling together at the same velocity at all times. - The Trouton–Noble experiment (1903) showed that the torque on a capacitor is independent of position and inertial reference frame. - The Experiments of Rayleigh and Brace (1902, 1904) showed that length contraction doesn't lead to birefringence for a co-moving observer, in accordance with the relativity principle. Particle accelerators routinely accelerate and measure the properties of particles moving at near the speed of light, where their behavior is completely consistent with relativity theory and inconsistent with the earlier Newtonian mechanics. These machines would simply not work if they were not engineered according to relativistic principles. In addition, a considerable number of modern experiments have been conducted to test special relativity. Some examples: - Tests of relativistic energy and momentum – testing the limiting speed of particles - Ives–Stilwell experiment – testing relativistic Doppler effect and time dilation - Experimental testing of time dilation – relativistic effects on a fast-moving particle's half-life - Kennedy–Thorndike experiment – time dilation in accordance with Lorentz transformations - Hughes–Drever experiment – testing isotropy of space and mass - Modern searches for Lorentz violation – various modern tests - Experiments to test emission theory demonstrated that the speed of light is independent of the speed of the emitter. - Experiments to test the aether drag hypothesis – no "aether flow obstruction". Despite the success of the Theory of Special Relativity, there are still detractors who insist on the existence of the aether. The basis for this is the experiment performed by Georges Sagnac that produced the Sagnac effect. However, this effect has been proven to reconcile with Special Relativity. Theories of relativity and quantum mechanicsEdit Special relativity can be combined with quantum mechanics to form relativistic quantum mechanics and quantum electrodynamics. It is an unsolved problem in physics how general relativity and quantum mechanics can be unified; quantum gravity and a "theory of everything", which require a unification including general relativity too, are active and ongoing areas in theoretical research. In 1928, Paul Dirac constructed an influential relativistic wave equation, now known as the Dirac equation in his honour, that is fully compatible both with special relativity and with the final version of quantum theory existing after 1926. This equation explained not only the intrinsic angular momentum of the electrons called spin, it also led to the prediction of the antiparticle of the electron (the positron), and fine structure could only be fully explained with special relativity. It was the first foundation of relativistic quantum mechanics. In non-relativistic quantum mechanics, spin is phenomenological and cannot be explained. On the other hand, the existence of antiparticles leads to the conclusion that relativistic quantum mechanics is not enough for a more accurate and complete theory of particle interactions. Instead, a theory of particles interpreted as quantized fields, called quantum field theory, becomes necessary; in which particles can be created and destroyed throughout space and time. 
- People: Hendrik Lorentz | Henri Poincaré | Albert Einstein | Max Planck | Hermann Minkowski | Max von Laue | Arnold Sommerfeld | Max Born | Gustav Herglotz | Richard C. Tolman - Relativity: Theory of relativity | History of special relativity | Principle of relativity | Doubly special relativity | General relativity | Frame of reference | Inertial frame of reference | Lorentz transformations | Bondi k-calculus | Einstein synchronisation | Rietdijk–Putnam argument | Special relativity (alternative formulations) | Criticism of relativity theory | Relativity priority dispute - Physics: Einstein's thought experiments | Newtonian Mechanics | spacetime | speed of light | simultaneity | center of mass (relativistic) | physical cosmology | Doppler effect | relativistic Euler equations | Aether drag hypothesis | Lorentz ether theory | Moving magnet and conductor problem | Shape waves | Relativistic heat conduction | Relativistic disk | Thomas precession | Born rigidity | Born coordinates - Mathematics: Derivations of the Lorentz transformations | Minkowski space | four-vector | world line | light cone | Lorentz group | Poincaré group | geometry | tensors | split-complex number | Relativity in the APS formalism - Philosophy: actualism | conventionalism | formalism - Paradoxes: Twin paradox | Ehrenfest paradox | Ladder paradox | Bell's spaceship paradox | Velocity composition paradox | Lighthouse paradox - Albert Einstein (1905) "Zur Elektrodynamik bewegter Körper", Annalen der Physik 17: 891; English translation On the Electrodynamics of Moving Bodies by George Barker Jeffery and Wilfrid Perrett (1923); Another English translation On the Electrodynamics of Moving Bodies by Megh Nad Saha (1920). - [Science and Common Sense, P. W. Bridgman, The Scientific Monthly, Vol. 79, No. 1 (Jul., 1954), pp. 32-39.; The Electromagnetic Mass and Momentum of a Spinning Electron, G. Breit, Proceedings of the National Academy of Sciences, Vol. 12, p.451, 1926; Kinematics of an electron with an axis. Phil. Mag. 3:1-22. L. H. Thomas.] Einstein himself, in The Foundations of the General Theory of Relativity, Ann. Phys. 49 (1916), writes "The word "special" is meant to intimate that the principle is restricted to the case...". See p. 111 of The Principle of Relativity, A. Einstein, H. A. Lorentz, H. Weyl, H. Minkowski, Dover reprint of 1923 translation by Methuen and Company.] - Tom Roberts & Siegmar Schleif (October 2007). "What is the experimental basis of Special Relativity?". Usenet Physics FAQ. Retrieved 2008-09-17. - Albert Einstein (2001). Relativity: The Special and the General Theory (Reprint of 1920 translation by Robert W. Lawson ed.). Routledge. p. 48. ISBN 978-0-415-25384-0. - Richard Phillips Feynman (1998). Six Not-so-easy Pieces: Einstein's relativity, symmetry, and space–time (Reprint of 1995 ed.). Basic Books. p. 68. ISBN 978-0-201-32842-4. - Sean Carroll, Lecture Notes on General Relativity, ch. 1, "Special relativity and flat spacetime," http://ned.ipac.caltech.edu/level5/March01/Carroll3/Carroll1.html - Wald, General Relativity, p. 60: "...the special theory of relativity asserts that spacetime is the manifold ℝ4 with a flat metric of Lorentz signature defined on it. Conversely, the entire content of special relativity ... is contained in this statement ..." - Koks, Don (2006). Explorations in Mathematical Physics: The Concepts Behind an Elegant Language (illustrated ed.). Springer Science & Business Media. p. 234. ISBN 978-0-387-32793-8. Extract of page 234 - Steane, Andrew M. (2012). 
Relativity Made Relatively Easy (illustrated ed.). OUP Oxford. p. 226. ISBN 978-0-19-966286-9. Extract of page 226 - Edwin F. Taylor & John Archibald Wheeler (1992). Spacetime Physics: Introduction to Special Relativity. W. H. Freeman. ISBN 978-0-7167-2327-1. - Rindler, Wolfgang (1977). Essential Relativity: Special, General, and Cosmological (illustrated ed.). Springer Science & Business Media. p. §1,11 p. 7. ISBN 978-3-540-07970-5. - Einstein, Autobiographical Notes, 1949. - Einstein, "Fundamental Ideas and Methods of the Theory of Relativity", 1920 - For a survey of such derivations, see Lucas and Hodgson, Spacetime and Electromagnetism, 1990 - Einstein, A., Lorentz, H. A., Minkowski, H., & Weyl, H. (1952). The Principle of Relativity: a collection of original memoirs on the special and general theory of relativity. Courier Dover Publications. p. 111. ISBN 978-0-486-60081-9. - Einstein, On the Relativity Principle and the Conclusions Drawn from It, 1907; "The Principle of Relativity and Its Consequences in Modern Physics", 1910; "The Theory of Relativity", 1911; Manuscript on the Special Theory of Relativity, 1912; Theory of Relativity, 1913; Einstein, Relativity, the Special and General Theory, 1916; The Principal Ideas of the Theory of Relativity, 1916; What Is The Theory of Relativity?, 1919; The Principle of Relativity (Princeton Lectures), 1921; Physics and Reality, 1936; The Theory of Relativity, 1949. - Das, A. (1993) The Special Theory of Relativity, A Mathematical Exposition, Springer, ISBN 0-387-94042-1. - Schutz, J. (1997) Independent Axioms for Minkowski Spacetime, Addison Wesley Longman Limited, ISBN 0-582-31760-6. - Yaakov Friedman (2004). Physical Applications of Homogeneous Balls. Progress in Mathematical Physics. 40. pp. 1–21. ISBN 978-0-8176-3339-4. - David Morin (2007) Introduction to Classical Mechanics, Cambridge University Press, Cambridge, chapter 11, Appendix I, ISBN 1-139-46837-5. - Michael Polanyi (1974) Personal Knowledge: Towards a Post-Critical Philosophy, ISBN 0-226-67288-3, footnote page 10–11: Einstein reports, via Dr N Balzas in response to Polanyi's query, that "The Michelson–Morley experiment had no role in the foundation of the theory." and "..the theory of relativity was not founded to explain its outcome at all." - Jeroen van Dongen (2009). "On the role of the Michelson–Morley experiment: Einstein in Chicago". Archive for History of Exact Sciences. 63 (6): 655–663. arXiv:0908.1545. doi:10.1007/s00407-009-0050-5. - Staley, Richard (2009), "Albert Michelson, the Velocity of Light, and the Ether Drift", Einstein's generation. The origins of the relativity revolution, Chicago: University of Chicago Press, ISBN 0-226-77057-5 - Terrell, James (15 November 1959). "Invisibility of the Lorentz Contraction". Physical Review. 116 (4): 1041–1045. Bibcode:1959PhRv..116.1041T. doi:10.1103/PhysRev.116.1041. - Penrose, Roger (24 October 2008). "The Apparent Shape of a Relativistically Moving Sphere". Mathematical Proceedings of the Cambridge Philosophical Society. 55 (1): 137. Bibcode:1959PCPS...55..137P. doi:10.1017/S0305004100033776. - Cook, Helen. "Relativistic Distortion". Mathematics Department, University of British Columbia. Retrieved 12 April 2017. - Signell, Peter. "Appearances at Relativistic Speeds" (PDF). Project PHYSNET. Michigan State University, East Lansing, MI. Archived from the original (PDF) on 12 April 2017. Retrieved 12 April 2017. - Kraus, Ute. "The Ball is Round". Space Time Travel: Relativity visualized. 
Institut für Physik Universität Hildesheim. Archived from the original on 16 April 2017. Retrieved 16 April 2017. - Zensus, J. Anton; Pearson, Timothy J. (1987). Superluminal Radio Sources (1st ed.). Cambridge, New York: Cambridge University Press. p. 3. ISBN 9780521345606. - Chase, Scott I. "Apparent Superluminal Velocity of Galaxies". The Original Usenet Physics FAQ. Department of Mathematics, University of California, Riverside. Retrieved 12 April 2017. - Richmond, Michael. ""Superluminal" motions in astronomical sources". Physics 200 Lecture Notes. School of Physics and Astronomy, Rochester Institute of Technology. Archived from the original on 20 April 2017. Retrieved 20 April 2017. - Keel, Bill. "Jets, Superluminal Motion, and Gamma-Ray Bursts". Galaxies and the Universe - WWW Course Notes. Department of Physics and Astronomy, University of Alabama. Archived from the original on 29 April 2017. Retrieved 29 April 2017. - Robert Resnick (1968). Introduction to special relativity. Wiley. pp. 62–63. - Daniel Kleppner & David Kolenkow (1973). An Introduction to Mechanics. pp. 468–70. ISBN 978-0-07-035048-9. - Does the inertia of a body depend upon its energy content? A. Einstein, Annalen der Physik. 18:639, 1905 (English translation by W. Perrett and G.B. Jeffery) - Max Jammer (1997). Concepts of Mass in Classical and Modern Physics. Courier Dover Publications. pp. 177–178. ISBN 978-0-486-29998-3. - John J. Stachel (2002). Einstein from B to Z. Springer. p. 221. ISBN 978-0-8176-4143-6. - On the Inertia of Energy Required by the Relativity Principle, A. Einstein, Annalen der Physik 23 (1907): 371–384 - In a letter to Carl Seelig in 1955, Einstein wrote "I had already previously found that Maxwell's theory did not account for the micro-structure of radiation and could therefore have no general validity.", Einstein letter to Carl Seelig, 1955. - Baglio, Julien (26 May 2007). "Acceleration in special relativity: What is the meaning of "uniformly accelerated movement" ?" (PDF). Physics Department, ENS Cachan. Retrieved 22 January 2016. - Philip Gibbs & Don Koks. "The Relativistic Rocket". Retrieved 30 August 2012. - The special theory of relativity shows that time and space are affected by motion Archived 2012-10-21 at the Wayback Machine.. Library.thinkquest.org. Retrieved on 2013-04-24. - Tolman, Richard C. (1917). The Theory of the Relativity of Motion. Berkeley: University of California Press. p. 54. - G. A. Benford; D. L. Book & W. A. Newcomb (1970). "The Tachyonic Antitelephone". Physical Review D. 2 (2): 263. Bibcode:1970PhRvD...2..263B. doi:10.1103/PhysRevD.2.263. - Ginsburg, David (1989). Applications of Electrodynamics in Theoretical Physics and Astrophysics (illustrated ed.). CRC Press. p. 206. ISBN 978-2-88124-719-4. Extract of page 206 - Wesley C. Salmon (2006). Four Decades of Scientific Explanation. University of Pittsburgh. p. 107. ISBN 978-0-8229-5926-7., Section 3.7 page 107 - J.A. Wheeler; C. Misner; K.S. Thorne (1973). Gravitation. W.H. Freeman & Co. p. 58. ISBN 978-0-7167-0344-0. - J.R. Forshaw; A.G. Smith (2009). Dynamics and Relativity. Wiley. p. 247. ISBN 978-0-470-01460-8. - R. Penrose (2007). The Road to Reality. Vintage books. ISBN 978-0-679-77631-4. - Jean-Bernard Zuber & Claude Itzykson, Quantum Field Theory, pg 5, ISBN 0-07-032071-3 - Charles W. Misner, Kip S. Thorne & John A. Wheeler, Gravitation, pg 51, ISBN 0-7167-0344-0 - George Sterman, An Introduction to Quantum Field Theory, pg 4 , ISBN 0-521-31132-2 - Sean M. Carroll (2004). 
Spacetime and Geometry: An Introduction to General Relativity. Addison Wesley. p. 22. ISBN 978-0-8053-8732-2. - E. J. Post (1962). Formal Structure of Electromagnetics: General Covariance and Electromagnetics. Dover Publications Inc. ISBN 978-0-486-65427-0. - Øyvind Grøn & Sigbjørn Hervik (2007). Einstein's general theory of relativity: with modern applications in cosmology. Springer. p. 195. ISBN 978-0-387-69199-2. Extract of page 195 (with units where c=1) - The number of works is vast, see as example: Sidney Coleman; Sheldon L. Glashow (1997). "Cosmic Ray and Neutrino Tests of Special Relativity". Physics Letters B. 405 (3–4): 249–252. arXiv:hep-ph/9703240. Bibcode:1997PhLB..405..249C. doi:10.1016/S0370-2693(97)00638-2. An overview can be found on this page - John D. Norton, John D. (2004). "Einstein's Investigations of Galilean Covariant Electrodynamics prior to 1905". Archive for History of Exact Sciences. 59 (1): 45–105. Bibcode:2004AHES...59...45N. doi:10.1007/s00407-004-0085-6. - R. Resnick; R. Eisberg (1985). Quantum Physics of Atoms, Molecules, Solids, Nuclei and Particles (2nd ed.). John Wiley & Sons. pp. 114–116. ISBN 978-0-471-87373-0. - P.A.M. Dirac (1930). "A Theory of Electrons and Protons" (PDF). Proceedings of the Royal Society. A126 (801): 360–365. Bibcode:1930RSPSA.126..360D. doi:10.1098/rspa.1930.0013. JSTOR 95359. - C.D. Anderson (1933). "The Positive Electron". Phys. Rev. 43 (6): 491–494. Bibcode:1933PhRv...43..491A. doi:10.1103/PhysRev.43.491. - Even though it has been many decades since Terrell and Penrose published their observations, popular writings continue to conflate measurement versus appearance. For example, Michio Kaku wrote in Einstein's Cosmos (W. W. Norton & Company, 2004. p. 65): "... imagine that the speed of light is only 20 miles per hour. If a car were to go down the street, it might look compressed in the direction of motion, being squeezed like an accordion down to perhaps 1 inch in length." - Einstein, Albert (1920). Relativity: The Special and General Theory. - Einstein, Albert (1996). The Meaning of Relativity. Fine Communications. ISBN 1-56731-136-9 - Logunov, Anatoly A. (2005) Henri Poincaré and the Relativity Theory (transl. from Russian by G. Pontocorvo and V. O. Soleviev, edited by V. A. Petrov) Nauka, Moscow. - Charles Misner, Kip Thorne, and John Archibald Wheeler (1971) Gravitation. W. H. Freeman & Co. ISBN 0-7167-0334-3 - Post, E.J., 1997 (1962) Formal Structure of Electromagnetics: General Covariance and Electromagnetics. Dover Publications. - Wolfgang Rindler (1991). Introduction to Special Relativity (2nd ed.), Oxford University Press. ISBN 978-0-19-853952-0; ISBN 0-19-853952-5 - Harvey R. Brown (2005). Physical relativity: space–time structure from a dynamical perspective, Oxford University Press, ISBN 0-19-927583-1; ISBN 978-0-19-927583-0 - Qadir, Asghar (1989). Relativity: An Introduction to the Special Theory. Singapore: World Scientific Publications. p. 128. ISBN 978-9971-5-0612-4. - Silberstein, Ludwik (1914) The Theory of Relativity. - Lawrence Sklar (1977). Space, Time and Spacetime. University of California Press. ISBN 978-0-520-03174-6. - Lawrence Sklar (1992). Philosophy of Physics. Westview Press. ISBN 978-0-8133-0625-4. - Taylor, Edwin, and John Archibald Wheeler (1992) Spacetime Physics (2nd ed.). W.H. Freeman & Co. ISBN 0-7167-2327-1 - Tipler, Paul, and Llewellyn, Ralph (2002). Modern Physics (4th ed.). W. H. Freeman & Co. ISBN 0-7167-4345-0 - Alvager, T.; Farley, F. J. M.; Kjellman, J.; Wallin, L.; et al. (1964). 
"Test of the Second Postulate of Special Relativity in the GeV region". Physics Letters. 12 (3): 260. Bibcode:1964PhL....12..260A. doi:10.1016/0031-9163(64)91095-9. - Darrigol, Olivier (2004). "The Mystery of the Poincaré–Einstein Connection". Isis. 95 (4): 614–26. doi:10.1086/430652. PMID 16011297. - Wolf, Peter; Petit, Gerard (1997). "Satellite test of Special Relativity using the Global Positioning System". Physical Review A. 56 (6): 4405–09. Bibcode:1997PhRvA..56.4405W. doi:10.1103/PhysRevA.56.4405. - Special Relativity Scholarpedia - Special relativity: Kinematics Wolfgang Rindler, Scholarpedia, 6(2):8520. doi:10.4249/scholarpedia.8520 |Wikisource has original text related to this article:| |Wikisource has original works on the topic: Relativity| |Wikibooks has a book on the topic of: Special Relativity| |Wikiversity has learning resources about Special Relativity| |Look up special relativity in Wiktionary, the free dictionary.| - Zur Elektrodynamik bewegter Körper Einstein's original work in German, Annalen der Physik, Bern 1905 - On the Electrodynamics of Moving Bodies English Translation as published in the 1923 book The Principle of Relativity. Special relativity for a general audience (no mathematical knowledge required)Edit - Einstein Light An award-winning, non-technical introduction (film clips and demonstrations) supported by dozens of pages of further explanations and animations, at levels with or without mathematics. - Einstein Online Introduction to relativity theory, from the Max Planck Institute for Gravitational Physics. - Audio: Cain/Gay (2006) – Astronomy Cast. Einstein's Theory of Special Relativity Special relativity explained (using simple or more advanced mathematics)Edit - Greg Egan's Foundations. - The Hogg Notes on Special Relativity A good introduction to special relativity at the undergraduate level, using calculus. - Relativity Calculator: Special Relativity – An algebraic and integral calculus derivation for E = mc2. - MathPages – Reflections on Relativity A complete online book on relativity with an extensive bibliography. - Special Relativity An introduction to special relativity at the undergraduate level. - Relativity: the Special and General Theory at Project Gutenberg, by Albert Einstein - Special Relativity Lecture Notes is a standard introduction to special relativity containing illustrative explanations based on drawings and spacetime diagrams from Virginia Polytechnic Institute and State University. - Understanding Special Relativity The theory of special relativity in an easily understandable way. - An Introduction to the Special Theory of Relativity (1964) by Robert Katz, "an introduction ... that is accessible to any student who has had an introduction to general physics and some slight acquaintance with the calculus" (130 pp; pdf format). - Lecture Notes on Special Relativity by J D Cresser Department of Physics Macquarie University. - SpecialRelativity.net - An overview with visualizations and minimal mathematics. - Raytracing Special Relativity Software visualizing several scenarios under the influence of special relativity. - Real Time Relativity The Australian National University. Relativistic visual effects experienced through an interactive program. - Spacetime travel A variety of visualizations of relativistic effects, from relativistic motion to black holes. - Through Einstein's Eyes The Australian National University. Relativistic visual effects explained with movies and images. 
- Warp Special Relativity Simulator A computer program to show the effects of traveling close to the speed of light. - on YouTube visualizing the Lorentz transformation. - Original interactive FLASH Animations from John de Pillis illustrating Lorentz and Galilean frames, Train and Tunnel Paradox, the Twin Paradox, Wave Propagation, Clock Synchronization, etc. - lightspeed An OpenGL-based program developed to illustrate the effects of special relativity on the appearance of moving objects. - Animation showing the stars near Earth, as seen from a spacecraft accelerating rapidly to light speed.
You simply add or subtract the strings of digits. When we write a number in expanded form, each digit is broken out and multiplied by its place value, such that the sum of all of the values equals the original number. Numbers in Standard Form Expanded Form Worksheets After learning multiplication, exponents are an important Writing math standard form of understanding fundamental numeric nomenclature and order of operations. Exponents are also a critical part of understanding scientific notation, and one of the sets of exponents worksheets in this section focuses exclusively on powers of ten and exponents with base 10 to reinforce these concepts. The expanded form worksheets on this page are great practice for students learning about place value and a larger digit numbers. For example, the number is usually writtenRegardless of what approach you choose, all of the worksheets on this page, including the expanded form worksheets, will provide help converting between different forms of numbers and teaching place value. If the numbers have different exponents, convert one of them to the exponent of the other. These are typically the thousands, millions, billions and similar amounts that are separated into groups of three place values either by decimals or, in some countries, by commas. You can choose to vary the complexity of the work you assign by selecting expanded form worksheets with longer digits or with decimal values, or simply mix these worksheets in as review assignments periodically, especially with students who seem to struggle with basic operations involved multi-digit problems. If the entire original number is greater than 1, count the numbers that appear to the right of this decimal. Decomposing numbers into expanded form is somewhat more procedural than going to other forms, but once this skill is mastered any of the reverse Numbers From Expanded Form Worksheets will reinforce the concepts. This is important not just in writing numbers in word form, but also when writing the numerical description of a dollar amount while writing a check or other legal description of money. The exponent equals the number of zeros plus the first digit in the number series. Groups of Three Before converting a number to one containing an exponent, remember another convention, which is to split number strings into groups of three — or thousands — with commas. When you multiply numbers in standard form, you multiply the strings of numbers and add the exponents. If the number is very small, the first three digits that appear after the string of zeros are the three you use at the beginning of the number in standard form, and the exponent is negative. Note how as the decimal point moves, the exponent changes. Expanded form is a way to write a number such that all of the place value components of the number are separated. In standard form, the distance to the nearest star is a much more manageable 4. So for example, consider this number The answer is 4. This is true even if the first group contains only one or two digits. Adding them, we get Multiply the number, now in the form of the first digit, decimal point, and next two digits, by 10 raised to this exponent. A good place to start is either the earlier worksheets in the Place Value Expanded Form Worksheets or the Conventional Expanded Form Worksheets and then gradually work through these, incorporating expanded form exercises with decimals if you have convered those topics. 
If the number is large, you set the decimal after the first digit on the left, and you make the exponent positive. For example, the first three digits of the number 12, are 1, 2 and 3. It equals the number of digits that follow the decimal. In standard form, this is 5. The answer is Scientists handle very large numbers like this one, as well as very small numbers, by converting them to standard form, which is a decimal number followed by an exponent of You use the same strategy to convert either to standard form. Teaching Place Value with Expanded Form Worksheets Expanded form worksheets reinforce place value concepts by getting students to consider the actual value assigned to each digit in a number.Standard, Expanded and Word Form. This page contains links to free math worksheets for Standard, Expanded and Word Form problems. Click one of the buttons below to view a worksheet and its answer key. You can also use the 'Worksheets' menu on the side of. Standard form is a way of writing down very large or very small numbers easily. 10 3 =so 4 × 10 3 = So can be written as 4 × 10³. This idea can be used to write even larger numbers down easily in standard form. How to Write Numbers in Standard Form By Chris Deziel; Updated April 26, NASA tells us that the distance from the Earth to the nearest star is 40,, kilometers. Standard Form of a Polynomial. The "Standard Form" for writing down a polynomial is to put the terms with the highest degree first (like the "2" in x 2 if there is one variable). Write Numbers in Word Form Practice worksheets for converting numbers from standard numeric notation into written (word form) notation. Writing numbers in word form is similar to the written word form used to fill out checks and some of these word form worksheets include variants with decimals appropriate for that topic. Improve your math knowledge with free questions in "Write equations in standard form" and thousands of other math skills.Download
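Since the passage above describes the conversion procedure only informally, here is a small sketch (Python; the example numbers are my own illustrations, not taken from the worksheets) that rewrites a number in standard (scientific) form and writes an integer in expanded form:

```python
def standard_form(x):
    """Return (mantissa, exponent) so that x = mantissa * 10**exponent with 1 <= |mantissa| < 10."""
    exponent = 0
    m = abs(x)
    while m >= 10:      # large numbers: move the decimal point left, exponent goes up
        m /= 10
        exponent += 1
    while 0 < m < 1:    # small numbers: move the decimal point right, exponent goes negative
        m *= 10
        exponent -= 1
    return (m if x >= 0 else -m), exponent

def expanded_form(n):
    """Write a positive integer as the sum of each digit times its place value."""
    digits = str(n)
    parts = [f"{d} x {10 ** (len(digits) - i - 1)}"
             for i, d in enumerate(digits) if d != "0"]
    return " + ".join(parts)

print(standard_form(40_200_000_000_000))  # roughly (4.02, 13), i.e. 4.02 x 10^13
print(standard_form(0.00052))             # roughly (5.2, -4), i.e. 5.2 x 10^-4
print(expanded_form(4507))                # "4 x 1000 + 5 x 100 + 7 x 1"
```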
http://wojarepycugewu.bsaconcordia.com/writing-math-standard-form-3565835658.html
18
15
Confidence level is the proportion of confidence intervals (constructed with this same confidence level, sample size, etc.) that would contain the population proportion; n is the size of the sample. Confidence intervals give us a range of plausible values for some unknown value based on results from a sample, and this topic covers confidence intervals for both means and proportions. A confidence interval for a proportion has the same interpretation as the one in the last section: we are fairly confident that the true population proportion is contained in the interval (as in the baseball example). One can compute confidence intervals for all types of estimates, but this short module provides the conceptual background for computing confidence intervals and then focuses on the interval for a single proportion. Note that when calculating confidence intervals for a binomial variable, one level of the nominal variable is chosen to be the "success" level; this is an arbitrary decision. You can find the confidence interval (CI) for a population proportion to show the statistical probability that a characteristic is likely to occur within the population. In statistics, a binomial proportion confidence interval is a confidence interval for the probability of success calculated from the outcome of a series of success–failure experiments. For a confidence interval of one proportion estimated from a finite population, a caution applies: the procedure assumes that the proportion in a future sample will be about the same as the proportion already observed. Suppose, for example, that we wish to estimate the proportion of people with diabetes in a population, or the proportion of people with hypertension. You can learn how to use a TI-84 calculator, or the video lectures on this topic, to calculate a confidence interval for a single population proportion.
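As a concrete illustration of the interval described above, here is a minimal Python sketch of the usual normal-approximation (Wald) formula, p-hat plus or minus z times sqrt(p-hat(1 - p-hat)/n); the counts in the example are invented.

```python
from math import sqrt

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) confidence interval for one population proportion.
    z = 1.96 corresponds to roughly a 95% confidence level."""
    p_hat = successes / n                          # sample proportion
    margin = z * sqrt(p_hat * (1 - p_hat) / n)     # margin of error
    return p_hat - margin, p_hat + margin

# Invented example: 52 "successes" observed in a sample of n = 400
low, high = proportion_ci(52, 400)
print(f"95% CI: ({low:.3f}, {high:.3f})")          # about (0.097, 0.163)
```

For small samples, or proportions close to 0 or 1, alternatives such as the Wilson or exact (Clopper-Pearson) intervals are generally preferred to this simple formula.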
http://nthomeworkyizi.card-hikaku.info/confidence-intervals-for-one-population-proportion.html
18
28
Critical thinking activities for adults

Critical thinking skills are a must for decision-making and for reaching the correct result, and there are many ways and activities to develop them. Critical thinking activities that improve writing skills encourage students to think, choose their words carefully, and produce concise, accurate, detailed writing. Lessons on problem solving and critical thinking define both as reaching a solution through a variety of activities. Collections such as "81 Fresh & Fun Critical-Thinking Activities" point out that fewer than half the adults in America today have strong critical thinking skills, and that critical thinking doesn't end when an individual leaves school; teachers can model critical thinking for students by sharing their own. Research such as Caroline Gibby's "Critical Thinking Skills in Adult Learners" (ARECLS, 2013, vol. 10, 147-176) examines the need to identify the role of critical thinking within adult education, and other overviews survey the research and best practices in critical thinking and the adult learner. The word "critical" derives from a Greek word meaning "to separate", and critical thinking fosters understanding. Critical thinking worksheets for teachers engage students at the more advanced levels of thinking, and brain teasers and mad libs can be used the same way. Simply copying notes from the board is a low-level thinking skill, so promote higher-level thinking by providing appropriate graphic organizers, question stems, or reflective prompts, which help students recognize their strengths and strengthen their weaknesses through effective critical thinking. Critical thinking skills are first learned in grade school and make daily and long-term activities easier; critical thinking exercises build on them in adulthood. Example activities include "Rock or Feather?", a simple critical-thinking activity in which students make and defend their choices; online games such as Rush Hour, laser maze games, and record-breaking "flood it" style games for kids and adults; team-building games that also develop critical thinking; wondering walls and identifying reading strategies (such as "eagle eye"); and reflection activities designed with critical thinking in mind, since critical thinking does not always occur naturally in adults. Language learning can be straightforward when students memorize lists of vocabulary and rules for grammar, but that type of thinking isn't very complex, so questions that encourage creative and critical thinking are valuable: collections offer 20 creative questions to ask kids, 20 fun activities to use whenever you have time, and 150 more questions that encourage creative and critical thinking. You can also sharpen a child's critical thinking and logical reasoning skills with collections of fun, free and printable critical thinking worksheets.
http://woessaybxvu.artsales.biz/critical-thinking-activities-adults.html
18
40
Angles form the core of geometry in mathematics. They are the fundamentals that eventually lead to the formation of more complex geometrical figures and shapes. Let us begin with the study of the different types of angles to get a better understanding of the topic. When two rays meet at a common end point, an angle is formed. The two components of an angle are its "sides" and its "vertex". The sides can be categorized into the terminal side and the initial side, as shown in the image below. These two rays can combine in multiple ways to form the different types of angles in mathematics. Let us begin by studying these different types of angles in geometry.

Types of Angles

The images below illustrate certain types of angles. An acute angle lies between 0 degrees and 90 degrees; in other words, an acute angle is one that measures less than 90 degrees. The figure below illustrates an acute angle. An obtuse angle is the opposite of an acute angle: it lies between 90 degrees and 180 degrees, that is, it is greater than 90 degrees and less than 180 degrees. The figure below illustrates an obtuse angle. A right angle is always equal to 90 degrees; any angle less than 90 degrees is acute, whereas any angle between 90 and 180 degrees is obtuse. The figure below illustrates a right angle, or 90-degree angle. A straight angle measures 180 degrees. The figure below illustrates a straight angle, or 180-degree angle; you can see that it is just a straight line, because the angle between its arms is 180 degrees. Now consider an angle whose measure is less than 90 degrees, so that its arms form an acute angle. What about the angle on the other side of those arms, the larger angle that, together with the acute angle, makes up a full turn of 360 degrees? It is called a reflex angle. The image below illustrates a reflex angle. Any angle whose measure is greater than 180 degrees but less than 360 degrees (360 degrees coincides with 0 degrees) is a reflex angle.

The following points can be kept in mind while dealing with angles.
- Positive angle – Angle measured counterclockwise from the base
- Negative angle – Angle measured clockwise from the base

The parts of an angle are
- Vertex – Point where the arms meet
- Arms – Two straight line segments extending from the vertex
- Angle – If a ray is rotated about its end point, the measure of its rotation is the angle between its initial and final positions

For application questions based on angles and the types of angles, visit our site BYJU'S.
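The definitions above translate directly into code. The following is a small Python sketch that simply restates the cut-off values from this section; the function name is mine.

```python
def classify_angle(degrees):
    """Classify an angle (in degrees, measured counterclockwise from the base)."""
    d = degrees % 360                 # reduce to one full turn
    if d == 0:
        return "zero (or full) angle"
    if d < 90:
        return "acute"
    if d == 90:
        return "right"
    if d < 180:
        return "obtuse"
    if d == 180:
        return "straight"
    return "reflex"                   # greater than 180 and less than 360 degrees

for a in (30, 90, 120, 180, 250):
    print(a, classify_angle(a))
```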
https://byjus.com/maths/types-of-angles/
18
107
Right-angled triangle definitions

[Figure: the trigonometric function sin θ for selected angles θ, π − θ, π + θ, and 2π − θ in the four quadrants; bottom: graph of the sine function versus angle, with the angles from the top panel identified.]

[Figure: plot of the six trigonometric functions and the unit circle for an angle of 0.7 radians.]

The notion that there should be some standard correspondence between the lengths of the sides of a triangle and the angles of the triangle comes as soon as one recognizes that similar triangles maintain the same ratios between their sides. That is, for any similar triangle the ratio of the hypotenuse (for example) and another of the sides remains the same. If the hypotenuse is twice as long, so are the sides. It is these ratios that the trigonometric functions express.

To define the trigonometric functions for the angle A, start with any right triangle that contains the angle A. The three sides of the triangle are named as follows:
- The hypotenuse is the side opposite the right angle, in this case side h. The hypotenuse is always the longest side of a right-angled triangle.
- The opposite side is the side opposite to the angle we are interested in (angle A), in this case side a.
- The adjacent side is the side having both the angles of interest (angle A and right-angle C), in this case side b.

In ordinary Euclidean geometry, according to the triangle postulate, the inside angles of every triangle total 180° (π radians). Therefore, in a right-angled triangle, the two non-right angles total 90° (π/2 radians), so each of these angles must be in the range of (0, π/2) as expressed in interval notation. The following definitions apply to angles in this (0, π/2) range. They can be extended to the full set of real arguments by using the unit circle, or by requiring certain symmetries and that they be periodic functions. For example, the figure shows sin(θ) for angles θ, π − θ, π + θ, and 2π − θ depicted on the unit circle (top) and as a graph (bottom). The value of the sine repeats itself apart from sign in all four quadrants, and if the range of θ is extended to additional rotations, this behavior repeats periodically with a period 2π.

The trigonometric functions are summarized below and described in more detail in the sections that follow. The angle θ is the angle between the hypotenuse and the adjacent line – the angle at A in the accompanying diagram.
- sine (sin): opposite / hypotenuse; sin θ = cos(π/2 − θ)
- cosine (cos): adjacent / hypotenuse; cos θ = sin(π/2 − θ)
- tangent (tan or tg): opposite / adjacent; tan θ = sin θ / cos θ = cot(π/2 − θ)
- cotangent (cot, or cotan, cotg, ctg, ctn): adjacent / opposite; cot θ = cos θ / sin θ = tan(π/2 − θ)
- secant (sec): hypotenuse / adjacent; sec θ = 1 / cos θ = csc(π/2 − θ)
- cosecant (csc or cosec): hypotenuse / opposite; csc θ = 1 / sin θ = sec(π/2 − θ)

Sine, cosine, and tangent

The sine of an angle is the ratio of the length of the opposite side to the length of the hypotenuse. The word comes from the Latin sinus for gulf or bay, since, given a unit circle, it is the side of the triangle on which the angle opens. In our case: sin A = opposite / hypotenuse = a / h.

[Figure: an illustration of the relationship between sine and its out-of-phase complement, cosine; cosine is identical, but π/2 radians out of phase to the left, so cos A = sin(A + π/2).]

The cosine (sine complement, Latin: cosinus, sinus complementi) of an angle is the ratio of the length of the adjacent side to the length of the hypotenuse, so called because it is the sine of the complementary or co-angle, the other non-right angle. Because the angle sum of a triangle is π radians, the co-angle B is equal to π/2 − A; so cos A = sin B = sin(π/2 − A). In our case: cos A = adjacent / hypotenuse = b / h.

The tangent of an angle is the ratio of the length of the opposite side to the length of the adjacent side, so called because it can be represented as a line segment tangent to the circle, i.e.
the line that touches the circle, from Latin linea tangens or touching line (cf. tangere, to touch). In our case: Tangent may also be represented in terms of sine and cosine. That is: These ratios do not depend on the size of the particular right triangle chosen, as long as the focus angle is equal, since all such triangles are similar. The acronyms "SOH-CAH-TOA" ("soak-a-toe", "sock-a-toa", "so-kah-toa") and "OHSAHCOAT" are commonly used trigonometric mnemonics for these ratios. Secant, cosecant, and cotangent The remaining three functions are best defined using the three functions above and can be considered their reciprocals. The secant of an angle is the reciprocal of its cosine, that is, the ratio of the length of the hypotenuse to the length of the adjacent side, so called because it represents the secant line that cuts the circle (from Latin: secare, to cut): The cosecant (secant complement, Latin: cosecans, secans complementi) of an angle is the reciprocal of its sine, that is, the ratio of the length of the hypotenuse to the length of the opposite side, so called because it is the secant of the complementary or co-angle: The cotangent (tangent complement, Latin: cotangens, tangens complementi) of an angle is the reciprocal of its tangent, that is, the ratio of the length of the adjacent side to the length of the opposite side, so called because it is the tangent of the complementary or co-angle: Equivalent to the right-triangle definitions, the trigonometric functions can also be defined in terms of the rise, run, and slope of a line segment relative to horizontal. The slope is commonly taught as "rise over run" or rise/run. The three main trigonometric functions are commonly taught in the order sine, cosine and tangent. With a line segment length of 1 (as in a unit circle), the following mnemonic devices show the correspondence of definitions: - "Sine is first, rise is first" meaning that Sine takes the angle of the line segment and tells its vertical rise when the length of the line is 1. - "Cosine is second, run is second" meaning that Cosine takes the angle of the line segment and tells its horizontal run when the length of the line is 1. - "Tangent combines the rise and run" meaning that Tangent takes the angle of the line segment and tells its slope, or alternatively, tells the vertical rise when the line segment's horizontal run is 1. This shows the main use of tangent and arctangent: converting between the two ways of telling the slant of a line, i.e. angles and slopes. (The arctangent or "inverse tangent" is not to be confused with the cotangent, which is cosine divided by sine.) While the length of the line segment makes no difference for the slope (the slope does not depend on the length of the slanted line), it does affect rise and run. To adjust and find the actual rise and run when the line does not have a length of 1, just multiply the sine and cosine by the line length. For instance, if the line segment has length 5, the run at an angle of 7° is 5cos(7°).
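To check these ratios numerically, here is a short Python sketch using the standard math module; the 3-4-5 triangle is just a convenient example and is not taken from the text above.

```python
from math import atan2, cos, hypot, radians, sin, tan

# A 3-4-5 right triangle: leg a is opposite angle A, leg b is adjacent to it.
a, b = 3.0, 4.0
h = hypot(a, b)              # hypotenuse = 5.0
A = atan2(a, b)              # angle A in radians

print(sin(A), a / h)         # sine    = opposite / hypotenuse  -> 0.6
print(cos(A), b / h)         # cosine  = adjacent / hypotenuse  -> 0.8
print(tan(A), a / b)         # tangent = opposite / adjacent    -> 0.75
print(1 / cos(A), h / b)     # secant    is the reciprocal of cosine
print(1 / sin(A), h / a)     # cosecant  is the reciprocal of sine
print(1 / tan(A), b / a)     # cotangent is the reciprocal of tangent

# "Rise over run": a segment of length 5 at an angle of 7 degrees has run 5*cos(7 deg)
print(5 * cos(radians(7)))   # about 4.96
```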
https://www.wikiplanet.click/enciclopedia/en/Trigonometric_functions
18
33
Critical thinking exercises for 4th graders

The Critical Thinking Co. publishes preK-12+ books and software to develop critical thinking in core subject areas, and fourth grade teachers can find complete no-prep curriculum materials that kids will enjoy using in the classroom. Students own their learning when critical thinking activities let them take stands on issues that matter to them. Free teacher worksheets include brain teasers that an average 4th grader can tackle, and easy-to-implement resources for standardized test preparation offer high-interest, grade-leveled reading passages with accompanying questions. New Year's-themed worksheets encourage critical thinking about the months, with fun facts and ideas for at-home activities, and printable worksheets sharpen logical reasoning skills. Action research with gifted elementary students has found that they benefit from involvement in critical thinking activities, and generating questions is a way to use critical thinking skills even in lower grade activities. Collections of critical and creative thinking worksheets expand brain power across 46 thematic language arts, science, and social studies topics. Critical thinking is a skill that students develop gradually as they progress in school; it becomes more important in higher grades, but some students find it difficult. Logic puzzles with grids and graphics, activities that encourage kids to think outside the box, and lessons built around multiple perspectives all sharpen logical reasoning and problem-solving (adjust the activities as necessary, and remind students what type of critical thinking a task calls for). Idea boards for thinking critically in grades 3-4 collect question stems and reading-inference activities. Lessons can be linked to 4th grade science, and 4th and 5th grade students can answer critical thinking questions to improve comprehension and to see that math is integral to everyday activities. To increase higher-order thinking, teachers should make sure students understand the critical features of a task and should mix practical and creative thinking activities; problem-solving activities have been used to increase the critical thinking skills of fourth grade students. A work sheet library of critical thinking materials for grades 3-5 is organized by grade and builds a wide variety of critical thinking skills, including puzzles that help fourth-grade students learn, exercises in organizing ideas by creating tables, and sampling exercises. Critical reading selections, such as two stories by two authors, support students who often lack the critical thinking skills needed to respond to texts, and fun critical thinking activities help students recognize their strengths and strengthen their weaknesses.
http://hxcourseworkttga.banhanguytin.info/critical-thinking-exercises-for-4th-graders.html
18
18
Cultural origins: Late 19th century, Southern United States

Jazz is a music genre that originated in the African-American communities of New Orleans, United States, in the late 19th and early 20th centuries, and developed from roots in blues and ragtime. Jazz is seen by many as "America's classical music". Since the 1920s Jazz Age, jazz has become recognized as a major form of musical expression. It then emerged in the form of independent traditional and popular musical styles, all linked by the common bonds of African-American and European-American musical parentage with a performance orientation. Jazz is characterized by swing and blue notes, call and response vocals, polyrhythms and improvisation. Jazz has roots in West African cultural and musical expression, and in African-American music traditions including blues and ragtime, as well as European military band music. Intellectuals around the world have hailed jazz as "one of America's original art forms".

As jazz spread around the world, it drew on different national, regional, and local musical cultures, which gave rise to many distinctive styles. New Orleans jazz began in the early 1910s, combining earlier brass-band marches, French quadrilles, biguine, ragtime and blues with collective polyphonic improvisation. In the 1930s, heavily arranged dance-oriented swing big bands, Kansas City jazz, a hard-swinging, bluesy, improvisational style and Gypsy jazz (a style that emphasized musette waltzes) were the prominent styles. Bebop emerged in the 1940s, shifting jazz from danceable popular music toward a more challenging "musician's music" which was played at faster tempos and used more chord-based improvisation. Cool jazz developed near the end of the 1940s, introducing calmer, smoother sounds and long, linear melodic lines. The 1950s saw the emergence of free jazz, which explored playing without regular meter, beat and formal structures, and in the mid-1950s, hard bop emerged, which introduced influences from rhythm and blues, gospel, and blues, especially in the saxophone and piano playing. Modal jazz developed in the late 1950s, using the mode, or musical scale, as the basis of musical structure and improvisation. Jazz-rock fusion appeared in the late 1960s and early 1970s, combining jazz improvisation with rock music's rhythms, electric instruments, and highly amplified stage sound. In the early 1980s, a commercial form of jazz fusion called smooth jazz became successful, garnering significant radio airplay. Other styles and genres abound in the 2000s, such as Latin and Afro-Cuban jazz.

- 1 Etymology and definition
- 2 Elements and issues
- 3 Origins and early history
- 3.1 Blended African and European music sensibilities
- 3.2 African rhythmic retention
- 3.3 "Spanish tinge"—the Afro-Cuban rhythmic influence
- 3.4 Ragtime
- 3.5 Blues
- 3.6 New Orleans
- 3.7 Other regions
- 4 The Jazz Age
- 5 Post-war jazz
- 5.1 Bebop
- 5.2 Afro-Cuban jazz (cu-bop)
- 5.3 Dixieland revival
- 5.4 Cool jazz and West Coast jazz
- 5.5 Hard bop
- 5.6 Modal jazz
- 5.7 Free jazz
- 5.8 Latin jazz
- 5.9 Post-bop
- 5.10 Soul jazz
- 5.11 African-inspired
- 5.12 Jazz fusion
- 5.13 Jazz-funk
- 5.14 Other trends
- 5.15 Traditionalism in the 1980s
- 5.16 Smooth jazz
- 5.17 Acid jazz, nu jazz and jazz rap
- 5.18 Punk jazz and jazzcore
- 5.19 M-Base
- 5.20 1990s–present
- 6 See also
- 7 Notes
- 8 References
- 9 External links

Etymology and definition

The origin of the word jazz has resulted in considerable research, and its history is well documented.
It is believed to be related to jasm, a slang term dating back to 1860 meaning "pep, energy". The earliest written record of the word is in a 1912 article in the Los Angeles Times in which a minor league baseball pitcher described a pitch which he called a jazz ball "because it wobbles and you simply can't do anything with it". The use of the word in a musical context was documented as early as 1915 in the Chicago Daily Tribune. Its first documented use in a musical context in New Orleans was in a November 14, 1916 Times-Picayune article about "jas bands". In an interview with NPR, musician Eubie Blake offered his recollections of the original slang connotations of the term, saying: "When Broadway picked it up, they called it 'J-A-Z-Z'. It wasn't called that. It was spelled 'J-A-S-S'. That was dirty, and if you knew what it was, you wouldn't say it in front of ladies." The American Dialect Society named it the Word of the Twentieth Century. Jazz is difficult to define because it encompasses a wide range of music spanning a period of over 100 years, from ragtime to the rock-infused fusion. Attempts have been made to define jazz from the perspective of other musical traditions, such as European music history or African music. But critic Joachim-Ernst Berendt argues that its terms of reference and its definition should be broader, defining jazz as a "form of art music which originated in the United States through the confrontation of the Negro with European music" and arguing that it differs from European music in that jazz has a "special relationship to time defined as 'swing'". Jazz involves "a spontaneity and vitality of musical production in which improvisation plays a role" and contains a "sonority and manner of phrasing which mirror the individuality of the performing jazz musician". In the opinion of Robert Christgau, "most of us would say that inventing meaning while letting loose is the essence and promise of jazz". A broader definition that encompasses different eras of jazz has been proposed by Travis Jackson: "it is music that includes qualities such as swing, improvising, group interaction, developing an 'individual voice', and being open to different musical possibilities". Krin Gibbard argued that "jazz is a construct" which designates "a number of musics with enough in common to be understood as part of a coherent tradition". In contrast to commentators who have argued for excluding types of jazz, musicians are sometimes reluctant to define the music they play. Duke Ellington, one of jazz's most famous figures, said, "It's all music." Elements and issues Although jazz is considered difficult to define, in part because it contains many subgenres, improvisation is one of its key elements. The centrality of improvisation is attributed to the influence of earlier forms of music such as blues, a form of folk music which arose in part from the work songs and field hollers of African-American slaves on plantations. These work songs were commonly structured around a repetitive call-and-response pattern, but early blues was also improvisational. Classical music performance is evaluated more by its fidelity to the musical score, with less attention given to interpretation, ornamentation, and accompaniment. The classical performer's goal is to play the composition as it was written. In contrast, jazz is often characterized by the product of interaction and collaboration, placing less value on the contribution of the composer, if there is one, and more on the performer. 
The jazz performer interprets a tune in individual ways, never playing the same composition twice. Depending on the performer's mood, experience, and interaction with band members or audience members, the performer may change melodies, harmonies, and time signatures. In early Dixieland, a.k.a. New Orleans jazz, performers took turns playing melodies and improvising countermelodies. In the swing era of the 1920s–'40s, big bands relied more on arrangements which were written or learned by ear and memorized. Soloists improvised within these arrangements. In the bebop era of the 1940s, big bands gave way to small groups and minimal arrangements in which the melody was stated briefly at the beginning and most of the song was improvised. Modal jazz abandoned chord progressions to allow musicians to improvise even more. In many forms of jazz, a soloist is supported by a rhythm section of one or more chordal instruments (piano, guitar), double bass, and drums. The rhythm section plays chords and rhythms that outline the song structure and complement the soloist. In avant-garde and free jazz, the separation of soloist and band is reduced, and there is license, or even a requirement, for the abandoning of chords, scales, and meters. Tradition and race Since the emergence of bebop, forms of jazz that are commercially oriented or influenced by popular music have been criticized. According to Bruce Johnson, there has always been a "tension between jazz as a commercial music and an art form". Traditional jazz enthusiasts have dismissed bebop, free jazz, and jazz fusion as forms of debasement and betrayal. An alternative view is that jazz can absorb and transform diverse musical styles. By avoiding the creation of norms, jazz allows avant-garde styles to emerge. For some African Americans, jazz has drawn attention to African-American contributions to culture and history. For others, jazz is a reminder of "an oppressive and racist society and restrictions on their artistic visions". Amiri Baraka argues that there is a "white jazz" genre that expresses whiteness. White jazz musicians appeared in the midwest and in other areas throughout the U.S. Papa Jack Laine, who ran the Reliance band in New Orleans in the 1910s, was called "the father of white jazz". The Original Dixieland Jazz Band, whose members were white, were the first jazz group to record, and Bix Beiderbecke was one of the most prominent jazz soloists of the 1920s. The Chicago School (or Chicago Style) was developed by white musicians such as Eddie Condon, Bud Freeman, Jimmy McPartland, and Dave Tough. Others from Chicago such as Benny Goodman and Gene Krupa became leading members of swing during the 1930s. Many bands included both black and white musicians. These musicians helped change attitudes toward race in the U.S. Roles of women Female jazz performers and composers have contributed throughout jazz history. Although Betty Carter, Ella Fitzgerald, Adelaide Hall, Billie Holiday, Abbey Lincoln, Anita O'Day, Dinah Washington, and Ethel Waters were recognized for their vocal talent, women received less recognition for their accomplishments as bandleaders, composers, and instrumentalists. This group includes pianist Lil Hardin Armstrong and songwriters Irene Higginbotham and Dorothy Fields. Women began playing instruments in jazz in the early 1920s, drawing particular recognition on piano. 
Popular musicians of the time were Lovie Austin, Sweet Emma Barrett, Jeanette Kimball, Billie Pierce, Mary Lou Williams When male jazz musicians were drafted during World War II, many all-female bands took over. The International Sweethearts of Rhythm, which was founded in 1937, was a popular band that became the first all-female integrated band in the U.S. and the first to travel with the USO, touring Europe in 1945. Women were members of the big bands of Woody Herman and Gerald Wilson. From the 1950s onwards many women jazz instrumentalists became prominent, some sustaining lengthy careers. Over the decades, some of the most distinctive improvisers, composers and bandleaders in jazz have been women. Origins and early history Jazz originated in the late 19th to early 20th century as interpretations of American and European classical music entwined with African and slave folk songs and the influences of West African culture. Its composition and style have changed many times throughout the years with each performer's personal interpretation and improvisation, which is also one of the greatest appeals of the genre. Blended African and European music sensibilities By the 18th century, slaves gathered socially at a special market, in an area which later became known as Congo Square, famous for its African dances. By 1866, the Atlantic slave trade had brought nearly 400,000 Africans to North America. The slaves came largely from West Africa and the greater Congo River basin and brought strong musical traditions with them. The African traditions primarily use a single-line melody and call-and-response pattern, and the rhythms have a counter-metric structure and reflect African speech patterns. An 1885 account says that they were making strange music (Creole) on an equally strange variety of 'instruments'—washboards, washtubs, jugs, boxes beaten with sticks or bones and a drum made by stretching skin over a flour-barrel. Lavish festivals featuring African-based dances to drums were organized on Sundays at Place Congo, or Congo Square, in New Orleans until 1843. There are historical accounts of other music and dance gatherings elsewhere in the southern United States. Robert Palmer said of percussive slave music: Usually such music was associated with annual festivals, when the year's crop was harvested and several days were set aside for celebration. As late as 1861, a traveler in North Carolina saw dancers dressed in costumes that included horned headdresses and cow tails and heard music provided by a sheepskin-covered "gumbo box", apparently a frame drum; triangles and jawbones furnished the auxiliary percussion. There are quite a few [accounts] from the southeastern states and Louisiana dating from the period 1820–1850. Some of the earliest [Mississippi] Delta settlers came from the vicinity of New Orleans, where drumming was never actively discouraged for very long and homemade drums were used to accompany public dancing until the outbreak of the Civil War. Another influence came from the harmonic style of hymns of the church, which black slaves had learned and incorporated into their own music as spirituals. The origins of the blues are undocumented, though they can be seen as the secular counterpart of the spirituals. However, as Gerhard Kubik points out, whereas the spirituals are homophonic, rural blues and early jazz "was largely based on concepts of heterophony." 
During the early 19th century an increasing number of black musicians learned to play European instruments, particularly the violin, which they used to parody European dance music in their own cakewalk dances. In turn, European-American minstrel show performers in blackface popularized the music internationally, combining syncopation with European harmonic accompaniment. In the mid-1800s the white New Orleans composer Louis Moreau Gottschalk adapted slave rhythms and melodies from Cuba and other Caribbean islands into piano salon music. New Orleans was the main nexus between the Afro-Caribbean and African-American cultures. African rhythmic retention The "Black Codes" outlawed drumming by slaves, which meant that African drumming traditions were not preserved in North America, unlike in Cuba, Haiti, and elsewhere in the Caribbean. African-based rhythmic patterns were retained in the United States in large part through "body rhythms" such as stomping, clapping, and patting juba dancing. In the opinion of jazz historian Ernest Borneman, what preceded New Orleans jazz before 1890 was "Afro-Latin music", similar to what was played in the Caribbean at the time. A three-stroke pattern known in Cuban music as tresillo is a fundamental rhythmic figure heard in many different slave musics of the Caribbean, as well as the Afro-Caribbean folk dances performed in New Orleans Congo Square and Gottschalk's compositions (for example "Souvenirs From Havana" (1859)). Tresillo is the most basic and most prevalent duple-pulse rhythmic cell in sub-Saharan African music traditions and the music of the African Diaspora. Tresillo is heard prominently in New Orleans second line music and in other forms of popular music from that city from the turn of the 20th century to present. "By and large the simpler African rhythmic patterns survived in jazz ... because they could be adapted more readily to European rhythmic conceptions," jazz historian Gunther Schuller observed. "Some survived, others were discarded as the Europeanization progressed." In the post-Civil War period (after 1865), African Americans were able to obtain surplus military bass drums, snare drums and fifes, and an original African-American drum and fife music emerged, featuring tresillo and related syncopated rhythmic figures. This was a drumming tradition that was distinct from its Caribbean counterparts, expressing a uniquely African-American sensibility. "The snare and bass drummers played syncopated cross-rhythms," observed the writer Robert Palmer, speculating that "this tradition must have dated back to the latter half of the nineteenth century, and it could have not have developed in the first place if there hadn't been a reservoir of polyrhythmic sophistication in the culture it nurtured." "Spanish tinge"—the Afro-Cuban rhythmic influence African-American music began incorporating Afro-Cuban rhythmic motifs in the 19th century when the habanera (Cuban contradanza) gained international popularity. Musicians from Havana and New Orleans would take the twice-daily ferry between both cities to perform, and the habanera quickly took root in the musically fertile Crescent City. John Storm Roberts states that the musical genre habanera "reached the U.S. twenty years before the first rag was published." For the more than quarter-century in which the cakewalk, ragtime, and proto-jazz were forming and developing, the habanera was a consistent part of African-American popular music. 
Habaneras were widely available as sheet music and were the first written music which was rhythmically based on an African motif (1803). From the perspective of African-American music, the habanera rhythm (also known as congo, tango-congo, or tango.) can be thought of as a combination of tresillo and the backbeat. The habanera was the first of many Cuban music genres which enjoyed periods of popularity in the United States and reinforced and inspired the use of tresillo-based rhythms in African-American music. New Orleans native Louis Moreau Gottschalk's piano piece "Ojos Criollos (Danse Cubaine)" (1860) was influenced by the composer's studies in Cuba: the habanera rhythm is clearly heard in the left hand. In Gottschalk's symphonic work "A Night in the Tropics" (1859), the tresillo variant cinquillo appears extensively. The figure was later used by Scott Joplin and other ragtime composers. Comparing the music of New Orleans with the music of Cuba, Wynton Marsalis observes that tresillo is the New Orleans "clave", a Spanish word meaning 'code' or 'key', as in the key to a puzzle, or mystery. Although technically the pattern is only half a clave, Marsalis makes the point that the single-celled figure is the guide-pattern of New Orleans music. Jelly Roll Morton called the rhythmic figure the Spanish tinge and considered it an essential ingredient of jazz. The abolition of slavery in 1865 led to new opportunities for the education of freed African Americans. Although strict segregation limited employment opportunities for most blacks, many were able to find work in entertainment. Black musicians were able to provide entertainment in dances, minstrel shows, and in vaudeville, during which time many marching bands were formed. Black pianists played in bars, clubs, and brothels, as ragtime developed. Ragtime appeared as sheet music, popularized by African-American musicians such as the entertainer Ernest Hogan, whose hit songs appeared in 1895. Two years later, Vess Ossman recorded a medley of these songs as a banjo solo known as "Rag Time Medley". Also in 1897, the white composer William H. Krell published his "Mississippi Rag" as the first written piano instrumental ragtime piece, and Tom Turpin published his "Harlem Rag", the first rag published by an African-American. The classically trained pianist Scott Joplin produced his "Original Rags" in 1898 and, in 1899, had an international hit with "Maple Leaf Rag", a multi-strain ragtime march with four parts that feature recurring themes and a bass line with copious seventh chords. Its structure was the basis for many other rags, and the syncopations in the right hand, especially in the transition between the first and second strain, were novel at the time. African-based rhythmic patterns such as tresillo and its variants, the habanera rhythm and cinquillo, are heard in the ragtime compositions of Joplin, Turpin, and others. Joplin's "Solace" (1909) is generally considered to be within the habanera genre: both of the pianist's hands play in a syncopated fashion, completely abandoning any sense of a march rhythm. Ned Sublette postulates that the tresillo/habanera rhythm "found its way into ragtime and the cakewalk," whilst Roberts suggests that "the habanera influence may have been part of what freed black music from ragtime's European bass." 
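One way to see the claim that the habanera rhythm combines tresillo with the backbeat is to lay both patterns over a grid of eight equal pulses per bar. The little Python sketch below is only an illustration; the grid resolution and list representation are my own choices.

```python
# Eight equal pulses per bar; 1 marks a stroke, 0 a rest.
tresillo = [1, 0, 0, 1, 0, 0, 1, 0]   # the 3 + 3 + 2 figure
backbeat = [0, 0, 0, 0, 1, 0, 0, 0]   # a stroke on the second half of the bar
habanera = [max(t, b) for t, b in zip(tresillo, backbeat)]
print(habanera)                        # [1, 0, 0, 1, 1, 0, 1, 0]
```

The resulting pattern, with strokes on pulses 1, 4, 5 and 7 of 8, is the familiar habanera figure.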
Blues is the name given to both a musical form and a music genre, which originated in African-American communities of primarily the "Deep South" of the United States at the end of the 19th century from their spirituals, work songs, field hollers, shouts and chants and rhymed simple narrative ballads. Many of the rural blues of the Deep South are stylistically an extension and merger of basically two broad accompanied song-style traditions in the west central Sudanic belt:
- A strongly Arabic/Islamic song style, as found for example among the Hausa. It is characterized by melisma, wavy intonation, pitch instabilities within a pentatonic framework, and a declamatory voice.
- An ancient west central Sudanic stratum of pentatonic song composition, often associated with simple work rhythms in a regular meter, but with notable off-beat accents (1999: 94).

W. C. Handy: early published blues

W. C. Handy became intrigued by the folk blues of the Deep South whilst traveling through the Mississippi Delta. In this folk blues form, the singer would improvise freely within a limited melodic range, sounding like a field holler, and the guitar accompaniment was slapped rather than strummed, like a small drum which responded in syncopated accents, functioning as another "voice". Handy and his band members were formally trained African-American musicians who had not grown up with the blues, yet he was able to adapt the blues to a larger band instrument format and arrange them in a popular music form. Handy wrote about his adopting of the blues:

The primitive southern Negro, as he sang, was sure to bear down on the third and seventh tone of the scale, slurring between major and minor. Whether in the cotton field of the Delta or on the Levee up St. Louis way, it was always the same. Till then, however, I had never heard this slur used by a more sophisticated Negro, or by any white man. I tried to convey this effect ... by introducing flat thirds and sevenths (now called blue notes) into my song, although its prevailing key was major ..., and I carried this device into my melody as well.

The publication of his "Memphis Blues" sheet music in 1912 introduced the 12-bar blues to the world (although Gunther Schuller argues that it is not really a blues, but "more like a cakewalk"). This composition, as well as his later "St. Louis Blues" and others, included the habanera rhythm, and would become jazz standards. Handy's music career began in the pre-jazz era and contributed to the codification of jazz through the publication of some of the first jazz sheet music.

Within the context of Western harmony

The blues form which is ubiquitous in jazz is characterized by specific chord progressions, of which the twelve-bar blues progression is the most common. Basic blues progressions are based on the I, IV and V chords (often called the "one", "four" and "five" chords). An important part of the sound comes from the microtonal blue notes which, for expressive purposes, are sung or played flattened (thus "between" the notes on a piano), or gradually "bent" (minor third to major third) in relation to the pitch of the major scale. The blue notes opened up an entirely new approach to Western harmony, ultimately leading to a high level of harmonic complexity in jazz.

The music of New Orleans had a profound effect on the creation of early jazz. The reason why jazz is mainly associated with New Orleans is that slaves there were able to practice elements of their culture such as voodoo, and they were also allowed drums.
Many early jazz performers played in venues throughout the city, such as the brothels and bars of the red-light district around Basin Street, known as "Storyville". In addition to dance bands, there were numerous marching bands who played at lavish funerals (later called jazz funerals), which were arranged by the African-American and European-American communities. The instruments used in marching bands and dance bands became the basic instruments of jazz: brass, reeds tuned in the European 12-tone scale, and drums. Small bands which mixed self-taught and well-educated African-American musicians, many of whom came from the funeral procession tradition of New Orleans, played a seminal role in the development and dissemination of early jazz. These bands travelled throughout Black communities in the Deep South and, from around 1914 onwards, Afro-Creole and African-American musicians played in vaudeville shows which took jazz to western and northern US cities. In New Orleans, a white marching band leader named Papa Jack Laine integrated blacks and whites in his marching band. Laine was known as "the father of white jazz" because of the many top players who passed through his bands (including George Brunies, Sharkey Bonano and the future members of the Original Dixieland Jass Band). Laine was a good talent scout. During the early 1900s, jazz was mostly done in the African-American and mulatto communities, due to segregation laws. The red light district of Storyville, New Orleans was crucial in bringing jazz music to a wider audience via tourists who came to the port city. Many jazz musicians from the African-American communities were hired to perform live music in brothels and bars, including many early jazz pioneers such as Buddy Bolden and Jelly Roll Morton, in addition to those from New Orleans other communities such as Lorenzo Tio and Alcide Nunez. Louis Armstrong also got his start in Storyville and would later find success in Chicago (along with others from New Orleans) after the United States government shut down Storyville in 1917. The cornetist Buddy Bolden led a band who are often mentioned as one of the prime originators of the style later to be called "jazz". He played in New Orleans around 1895–1906, before developing a mental illness; there are no recordings of him playing. Bolden's band is credited with creating the big four, the first syncopated bass drum pattern to deviate from the standard on-the-beat march. As the example below shows, the second half of the big four pattern is the habanera rhythm. Afro-Creole pianist Jelly Roll Morton began his career in Storyville. From 1904, he toured with vaudeville shows around southern cities, also playing in Chicago and New York City. In 1905, he composed his "Jelly Roll Blues", which on its publication in 1915 became the first jazz arrangement in print, introducing more musicians to the New Orleans style. Now in one of my earliest tunes, "New Orleans Blues," you can notice the Spanish tinge. In fact, if you can't manage to put tinges of Spanish in your tunes, you will never be able to get the right seasoning, I call it, for jazz. Morton was a crucial innovator in the evolution from the early jazz form known as ragtime to jazz piano, and could perform pieces in either style; in 1938, Morton made a series of recordings for the Library of Congress, in which he demonstrated the difference between the two styles. 
Morton's solos, however, were still close to ragtime, and were not merely improvisations over chord changes as in later jazz, but his use of the blues was of equal importance.

Swing in the early 20th century

Morton loosened ragtime's rigid rhythmic feeling, decreasing its embellishments and employing a swing feeling. Swing is the most important and enduring African-based rhythmic technique used in jazz. An oft-quoted definition of swing by Louis Armstrong is: "if you don't feel it, you'll never know it." The New Harvard Dictionary of Music states that swing is: "An intangible rhythmic momentum in jazz ... Swing defies analysis; claims to its presence may inspire arguments." The dictionary does nonetheless provide the useful description of triple subdivisions of the beat contrasted with duple subdivisions: swing superimposes six subdivisions of the beat over a basic pulse structure of four subdivisions. This aspect of swing is far more prevalent in African-American music than in Afro-Caribbean music. One aspect of swing, which is heard in more rhythmically complex Diaspora musics, places strokes in-between the triple and duple-pulse "grids".

New Orleans brass bands are a lasting influence, contributing horn players to the world of professional jazz with the distinct sound of the city whilst helping black children escape poverty. The leader of New Orleans' Camelia Brass Band, D'Jalma Ganier, taught Louis Armstrong to play trumpet; Armstrong would then popularize the New Orleans style of trumpet playing, and then expand it. Like Jelly Roll Morton, Armstrong is also credited with the abandonment of ragtime's stiffness in favor of swung notes. Armstrong, perhaps more than any other musician, codified the rhythmic technique of swing in jazz and broadened the jazz solo vocabulary.

The Original Dixieland Jass Band made the music's first recordings early in 1917, and their "Livery Stable Blues" became the earliest released jazz record. That year, numerous other bands made recordings featuring "jazz" in the title or band name, but most were ragtime or novelty records rather than jazz. In February 1918 during World War I, James Reese Europe's "Hellfighters" infantry band took ragtime to Europe, then on their return recorded Dixieland standards including "Darktown Strutters' Ball".

In the northeastern United States, a "hot" style of playing ragtime had developed, notably James Reese Europe's symphonic Clef Club orchestra in New York City, which played a benefit concert at Carnegie Hall in 1912. The Baltimore rag style of Eubie Blake influenced James P. Johnson's development of stride piano playing, in which the right hand plays the melody, while the left hand provides the rhythm and bassline. In Ohio and elsewhere in the midwest the major influence was ragtime, until about 1919. Around 1912, when the four-string banjo and saxophone came in, musicians began to improvise the melody line, but the harmony and rhythm remained unchanged. A contemporary account states that blues could only be heard in jazz in the gut-bucket cabarets, which were generally looked down upon by the Black middle-class.

The Jazz Age

From 1920 to 1933, Prohibition in the United States banned the sale of alcoholic drinks, resulting in illicit speakeasies which became lively venues of the "Jazz Age", hosting popular music including current dance songs, novelty songs, and show tunes.
Jazz began to get a reputation as being immoral, and many members of the older generations saw it as threatening the old cultural values and promoting the new decadent values of the Roaring 20s. Henry van Dyke of Princeton University wrote, "... it is not music at all. It's merely an irritation of the nerves of hearing, a sensual teasing of the strings of physical passion." The media too began to denigrate jazz. The New York Times used stories and headlines to pick at jazz: Siberian villagers were said by the paper to have used jazz to scare off bears when, in fact, they had used pots and pans; another story claimed that the fatal heart attack of a celebrated conductor was caused by jazz. In 1919, Kid Ory's Original Creole Jazz Band of musicians from New Orleans began playing in San Francisco and Los Angeles, where in 1922 they became the first black jazz band of New Orleans origin to make recordings. That year also saw the first recording by Bessie Smith, the most famous of the 1920s blues singers. Chicago meanwhile was developing the new "Hot Jazz", where King Oliver joined Bill Johnson. Bix Beiderbecke formed The Wolverines in 1924. Despite its Southern black origins, there was a larger market for jazzy dance music played by white orchestras. In 1918, Paul Whiteman and his orchestra became a hit in San Francisco, California, signing with Victor Talking Machine Company in 1920 and becoming the top bandleader of the 1920s, giving "hot jazz" a white component, hiring white musicians including Bix Beiderbecke, Jimmy Dorsey, Tommy Dorsey, Frankie Trumbauer, and Joe Venuti. In 1924, Whiteman commissioned Gershwin's Rhapsody in Blue, which was premiered by his orchestra and jazz began to be recognized as a notable musical form. Olin Downes, reviewing the concert in The New York Times: "This composition shows extraordinary talent, as it shows a young composer with aims that go far beyond those of his ilk, struggling with a form of which he is far from being master.... In spite of all this, he has expressed himself in a significant and, on the whole, highly original form.... His first theme ... is no mere dance-tune ... it is an idea, or several ideas, correlated and combined in varying and contrasting rhythms that immediately intrigue the listener." After Whiteman's band successfully toured Europe, huge hot jazz orchestras in theater pits caught on with other whites, including Fred Waring, Jean Goldkette, and Nathaniel Shilkret. According to Mario Dunkel, Whiteman's success was based on a "rhetoric of domestication" according to which he had elevated and rendered valuable (read "white") a previously inchoate (read "black") kind of music. Whiteman's success caused blacks to follow suit, including Earl Hines (who opened in The Grand Terrace Cafe in Chicago in 1928), Duke Ellington (who opened at the Cotton Club in Harlem in 1927), Lionel Hampton, Fletcher Henderson, Claude Hopkins, and Don Redman, with Henderson and Redman developing the "talking to one another" formula for "hot" Swing music. In 1924, Louis Armstrong joined the Fletcher Henderson dance band for a year, as featured soloist. The original New Orleans style was polyphonic, with theme variation and simultaneous collective improvisation. Armstrong was a master of his hometown style, but by the time he joined Henderson's band, he was already a trailblazer in a new phase of jazz, with its emphasis on arrangements and soloists. 
Armstrong's solos went well beyond the theme-improvisation concept and extemporized on chords, rather than melodies. According to Schuller, by comparison, the solos by Armstrong's bandmates (including a young Coleman Hawkins), sounded "stiff, stodgy," with "jerky rhythms and a grey undistinguished tone quality." The following example shows a short excerpt of the straight melody of "Mandy, Make Up Your Mind" by George W. Meyer and Arthur Johnston (top), compared with Armstrong's solo improvisations (below) (recorded 1924). (The example approximates Armstrong's solo, as it doesn't convey his use of swing.) Armstrong's solos were a significant factor in making jazz a true 20th-century language. After leaving Henderson's group, Armstrong formed his virtuosic Hot Five band, where he popularized scat singing. Also in the 1920s Skiffle, jazz played with homemade instruments such as washboard, jugs, musical saw, kazoos, etc. began to be recorded in Chicago, later merging with country music. Swing in the 1920s and 1930s The 1930s belonged to popular swing big bands, in which some virtuoso soloists became as famous as the band leaders. Key figures in developing the "big" jazz band included bandleaders and arrangers Count Basie, Cab Calloway, Jimmy and Tommy Dorsey, Duke Ellington, Benny Goodman, Fletcher Henderson, Earl Hines, Harry James, Jimmie Lunceford, Glenn Miller and Artie Shaw. Although it was a collective sound, swing also offered individual musicians a chance to "solo" and improvise melodic, thematic solos which could at times be very complex "important" music. Swing was also dance music. It was broadcast on the radio "live" nightly across America for many years, especially by Earl Hines and his Grand Terrace Cafe Orchestra broadcasting coast-to-coast from Chicago (well placed for "live" US time-zones). Over time, social strictures regarding racial segregation began to relax in America: white bandleaders began to recruit black musicians and black bandleaders white ones. In the mid-1930s, Benny Goodman hired pianist Teddy Wilson, vibraphonist Lionel Hampton and guitarist Charlie Christian to join small groups. In the 1930s, Kansas City Jazz as exemplified by tenor saxophonist Lester Young marked the transition from big bands to the bebop influence of the 1940s. An early 1940s style known as "jumping the blues" or jump blues used small combos, uptempo music and blues chord progressions, drawing on boogie-woogie from the 1930s. "American music"—the influence of Ellington While swing was reaching the height of its popularity, Duke Ellington spent the late 1920s and 1930s developing an innovative musical idiom for his orchestra. Abandoning the conventions of swing, he experimented with orchestral sounds, harmony, and musical form with complex compositions that still translated well for popular audiences; some of his tunes became hits, and his own popularity spanned from the United States to Europe. Ellington called his music "American Music" rather than jazz, and liked to describe those who impressed him as "beyond category." These included many of the musicians who were members of his orchestra, some of whom are considered among the best in jazz in their own right, but it was Ellington who melded them into one of the most popular jazz orchestras in the history of jazz. 
He often composed specifically for the style and skills of these individuals, such as "Jeep's Blues" for Johnny Hodges, "Concerto for Cootie" for Cootie Williams (which later became "Do Nothing Till You Hear from Me" with Bob Russell's lyrics), and "The Mooche" for Tricky Sam Nanton and Bubber Miley. He also recorded songs written by his bandsmen, such as Juan Tizol's "Caravan" and "Perdido", which brought the "Spanish Tinge" to big-band jazz. Several members of the orchestra remained with him for several decades. The band reached a creative peak in the early 1940s, when Ellington and a small hand-picked group of his composers and arrangers wrote for an orchestra of distinctive voices who displayed tremendous creativity. Beginnings of European jazz As only a limited number of American jazz records were released in Europe, European jazz traces many of its roots to American artists such as James Reese Europe, Paul Whiteman, and Lonnie Johnson, who visited Europe during and after World War I. It was their live performances which inspired European audiences' interest in jazz, as well as the interest in all things American (and therefore exotic) which accompanied the economic and political woes of Europe during this time. The beginnings of a distinct European style of jazz began to emerge in this interwar period. British jazz began with a tour by the Original Dixieland Jazz Band in 1919. In 1926, Fred Elizalde and His Cambridge Undergraduates began broadcasting on the BBC. Thereafter jazz became an important element in many leading dance orchestras and jazz instrumentalists quickly became numerous. This distinct style entered full swing in France with the Quintette du Hot Club de France, which began in 1934. Much of this French jazz was a combination of African-American jazz and the symphonic styles in which French musicians were well-trained; in this, it is easy to see the inspiration taken from Paul Whiteman since his style was also a fusion of the two. Belgian guitar virtuoso Django Reinhardt popularized gypsy jazz, a mix of 1930s American swing, French dance hall "musette" and Eastern European folk with a languid, seductive feel; the main instruments are steel stringed guitar, violin, and double bass, and solos pass from one player to another as the guitar and bass play the role of the rhythm section. Some music researchers hold that it was Philadelphia's Eddie Lang and Joe Venuti who pioneered the guitar-violin partnership typical of the genre, which was brought to France after they had been heard live or on Okeh Records in the late 1920s. The outbreak of World War II marked a turning point for jazz. The swing-era jazz of the previous decade had challenged other popular music as being representative of the nation's culture, with big bands reaching the height of the style's success by the early 1940s; swing acts and big bands traveled with U.S. military overseas to Europe, where it also became popular. Stateside, however, the war presented difficulties for the big-band format: conscription shortened the number of musicians available; the military's need for shellac (commonly used for pressing gramophone records) limited record production; a shortage of rubber (also due to the war effort) discouraged bands from touring via road travel; and a demand by the musicians' union for a commercial recording ban limited music distribution between 1942 and 1944. 
Many of the big bands that were deprived of experienced musicians because of the war effort began to enlist young players who were below the age for conscription, as was the case with saxophonist Stan Getz's entry into a band as a teenager. This coincided with a nationwide resurgence in the Dixieland style of pre-swing jazz; performers such as clarinetist George Lewis, cornetist Bill Davison, and trombonist Turk Murphy were hailed by conservative jazz critics as more authentic than the big bands.

Elsewhere, with the limitations on recording, small groups of young musicians developed a more uptempo, improvisational style of jazz, collaborating and experimenting with new ideas for melodic development, rhythmic language, and harmonic substitution, during informal, late-night jam sessions hosted in small clubs and apartments. Key figures in this development were largely based in New York and included pianists Thelonious Monk and Bud Powell, drummers Max Roach and Kenny Clarke, saxophonist Charlie Parker, and trumpeter Dizzy Gillespie. This musical development became known as bebop.

Bebop and subsequent post-war jazz developments featured a wider set of notes, played in more complex patterns and at faster tempos than previous jazz. According to Clive James, bebop was "the post-war musical development which tried to ensure that jazz would no longer be the spontaneous sound of joy ... Students of race relations in America are generally agreed that the exponents of post-war jazz were determined, with good reason, to present themselves as challenging artists rather than tame entertainers." The end of the war marked "a revival of the spirit of experimentation and musical pluralism under which it had been conceived", along with "the beginning of a decline in the popularity of jazz music in America", according to American academic Michael H. Burchett.

With the rise of bebop and the end of the swing era after the war, jazz lost its cachet as pop music. Vocalists of the famous big bands moved on to being marketed and performing as solo pop singers; these included Frank Sinatra, Peggy Lee, Dick Haymes, and Doris Day. Older musicians who still performed their pre-war jazz, such as Armstrong and Ellington, were gradually viewed in the mainstream as passé. Other younger performers, such as singer Big Joe Turner and saxophonist Louis Jordan, who were discouraged by bebop's increasing complexity, pursued more lucrative endeavors in rhythm and blues, jump blues, and eventually rock and roll. Some, including Gillespie, composed intricate yet danceable songs for bebop musicians in an effort to make them more accessible, but bebop largely remained on the fringes of American audiences' purview. "The new direction of postwar jazz drew a wealth of critical acclaim, but it steadily declined in popularity as it developed a reputation as an academic genre that was largely inaccessible to mainstream audiences", Burchett said. "The quest to make jazz more relevant to popular audiences, while retaining its artistic integrity, is a constant and prevalent theme in the history of postwar jazz."

During its swing period, jazz had been an uncomplicated musical scene; according to Paul Trynka, this changed in the post-war years: "Suddenly jazz was no longer straightforward.
There was bebop and its variants, there was the last gasp of swing, there were strange new brews like the progressive jazz of Stan Kenton, and there was a completely new phenomenon called revivalism -- the rediscovery of jazz from the past, either on old records or performed live by ageing players brought out of retirement. From now on it was no good saying that you liked jazz, you had to specify what kind of jazz. And that is the way it has been ever since, only more so. Today, the word 'jazz' is virtually meaningless without further definition."

In the early 1940s, bebop-style performers began to shift jazz from danceable popular music toward a more challenging "musician's music." The most influential bebop musicians included saxophonist Charlie Parker, pianists Bud Powell and Thelonious Monk, trumpeters Dizzy Gillespie and Clifford Brown, and drummer Max Roach. Divorcing itself from dance music, bebop established itself more as an art form, thus lessening its potential popular and commercial appeal.

Composer Gunther Schuller wrote: ... In 1943 I heard the great Earl Hines band which had Bird in it and all those other great musicians. They were playing all the flatted fifth chords and all the modern harmonies and substitutions and Dizzy Gillespie runs in the trumpet section work. Two years later I read that that was 'bop' and the beginning of modern jazz ... but the band never made recordings.

Dizzy Gillespie wrote: ... People talk about the Hines band being 'the incubator of bop' and the leading exponents of that music ended up in the Hines band. But people also have the erroneous impression that the music was new. It was not. The music evolved from what went before. It was the same basic music. The difference was in how you got from here to here to here ... naturally each age has got its own shit.

Since bebop was meant to be listened to, not danced to, it could use faster tempos. Drumming shifted to a more elusive and explosive style, in which the ride cymbal was used to keep time while the snare and bass drum were used for accents. This led to a highly syncopated music with a linear rhythmic complexity.

Bebop musicians employed several harmonic devices which were not previously typical in jazz, engaging in a more abstracted form of chord-based improvisation. Bebop scales are traditional scales with an added chromatic passing note; bebop also uses "passing" chords, substitute chords, and altered chords. New forms of chromaticism and dissonance were introduced into jazz, and the dissonant tritone (or "flatted fifth") interval became the "most important interval of bebop". Chord progressions for bebop tunes were often taken directly from popular swing-era songs and reused with a new and more complex melody and/or reharmonized with more complex chord progressions to form new compositions, a practice which was already well-established in earlier jazz, but came to be central to the bebop style. Bebop made use of several relatively common chord progressions, such as blues (at base, I-IV-V, but often infused with ii-V motion) and 'rhythm changes' (I-VI-ii-V), the chords to the 1930s pop standard "I Got Rhythm." Late bop also moved towards extended forms that represented a departure from pop and show tunes.

The harmonic development in bebop is often traced back to a transcendent moment experienced by Charlie Parker while performing "Cherokee" at Clark Monroe's Uptown House, New York, in early 1942: I'd been getting bored with the stereotyped changes that were being used, ...
and I kept thinking there's bound to be something else. I could hear it sometimes. I couldn't play it.... I was working over 'Cherokee,' and, as I did, I found that by using the higher intervals of a chord as a melody line and backing them with appropriately related changes, I could play the thing I'd been hearing. It came alive—Parker.

Auditory inclinations were the African legacy in [Parker's] life, reconfirmed by the experience of the blues tonal system, a sound world at odds with the Western diatonic chord categories. Bebop musicians eliminated Western-style functional harmony in their music while retaining the strong central tonality of the blues as a basis for drawing upon various African matrices.

Samuel Floyd states that blues were both the bedrock and propelling force of bebop, bringing about three main developments:

- A new harmonic conception, using extended chord structures that led to unprecedented harmonic and melodic variety.
- A developed and even more highly syncopated, linear rhythmic complexity and a melodic angularity in which the blue note of the fifth degree was established as an important melodic-harmonic device.
- The reestablishment of the blues as the music's primary organizing and functional principle.

As Kubik explained: While for an outside observer, the harmonic innovations in bebop would appear to be inspired by experiences in Western "serious" music, from Claude Debussy to Arnold Schoenberg, such a scheme cannot be sustained by the evidence from a cognitive approach. Claude Debussy did have some influence on jazz, for example, on Bix Beiderbecke's piano playing. And it is also true that Duke Ellington adopted and reinterpreted some harmonic devices in European contemporary music. West Coast jazz would run into such debts as would several forms of cool jazz, but bebop has hardly any such debts in the sense of direct borrowings. On the contrary, ideologically, bebop was a strong statement of rejection of any kind of eclecticism, propelled by a desire to activate something deeply buried in self. Bebop then revived tonal-harmonic ideas transmitted through the blues and reconstructed and expanded others in a basically non-Western harmonic approach. The ultimate significance of all this is that the experiments in jazz during the 1940s brought back to African-American music several structural principles and techniques rooted in African traditions.

These divergences from the jazz mainstream of the time initially met with a divided, sometimes hostile, response among fans and fellow musicians, especially established swing players, who bristled at the new harmonic sounds. To hostile critics, bebop seemed to be filled with "racing, nervous phrases". But despite the initial friction, by the 1950s, bebop had become an accepted part of the jazz vocabulary.

Afro-Cuban jazz (cu-bop)

Machito and Mario Bauza

The general consensus among musicians and musicologists is that the first original jazz piece to be overtly based in clave was "Tanga" (1943), composed by Cuban-born Mario Bauza and recorded by Machito and his Afro-Cubans in New York City. "Tanga" began as a spontaneous descarga (Cuban jam session), with jazz solos superimposed on top. This was the birth of Afro-Cuban jazz. The use of clave brought the African timeline, or key pattern, into jazz. Music organized around key patterns conveys a two-celled (binary) structure, which is a complex level of African cross-rhythm. Within the context of jazz, however, harmony is the primary referent, not rhythm.
The harmonic progression can begin on either side of clave, and the harmonic "one" is always understood to be "one". If the progression begins on the "three-side" of clave, it is said to be in 3-2 clave. If the progression begins on the "two-side", it is in 2-3 clave.

Bobby Sanabria mentions several innovations of Machito's Afro-Cubans, citing them as the first band:

- to wed big band jazz arranging techniques within an original composition, with jazz oriented soloists utilizing an authentic Afro-Cuban based rhythm section in a successful manner;
- to explore modal harmony (a concept explored much later by Miles Davis and Gil Evans) from a jazz arranging perspective; and
- to overtly explore the concept of clave counterpoint from an arranging standpoint (the ability to weave seamlessly from one side of the clave to the other without breaking its rhythmic integrity within the structure of a musical arrangement).

They were also the first band in the United States to use the term "Afro-Cuban" as the band's moniker, thus identifying itself and acknowledging the West African roots of the musical form they were playing. It forced New York City's Latino and African-American communities to deal with their common West African musical roots in a direct way, whether they wanted to acknowledge it publicly or not.

Dizzy Gillespie and Chano Pozo

Mario Bauzá introduced bebop innovator Dizzy Gillespie to Cuban conga drummer and composer Chano Pozo. Gillespie and Pozo's brief collaboration produced some of the most enduring Afro-Cuban jazz standards. "Manteca" (1947) is the first jazz standard to be rhythmically based on clave. According to Gillespie, Pozo composed the layered, contrapuntal guajeos (Afro-Cuban ostinatos) of the A section and the introduction, while Gillespie wrote the bridge. Gillespie recounted: "If I'd let it go like [Chano] wanted it, it would have been strictly Afro-Cuban all the way. There wouldn't have been a bridge. I thought I was writing an eight-bar bridge, but...I had to keep going and ended up writing a sixteen-bar bridge." The bridge gave "Manteca" a typical jazz harmonic structure, setting the piece apart from Bauza's modal "Tanga" of a few years earlier.

Gillespie's collaboration with Pozo brought specific African-based rhythms into bebop. While pushing the boundaries of harmonic improvisation, cu-bop also drew from African rhythm. Jazz arrangements with a Latin A section and a swung B section, with all choruses swung during solos, became common practice with many Latin tunes of the jazz standard repertoire. This approach can be heard on pre-1980 recordings of "Manteca", "A Night in Tunisia", "Tin Tin Deo", and "On Green Dolphin Street".

Cuban percussionist Mongo Santamaria first recorded his composition "Afro Blue" in 1959. "Afro Blue" was the first jazz standard built upon a typical African three-against-two (3:2) cross-rhythm, or hemiola. The song begins with the bass repeatedly playing 6 cross-beats per measure of 12/8, or 6 cross-beats per 4 main beats—6:4 (two cells of 3:2). The following example shows the original ostinato "Afro Blue" bass line; the slashed noteheads indicate the main beats (not bass notes), where you would normally tap your foot to keep time. When John Coltrane covered "Afro Blue" in 1963, he inverted the metric hierarchy, interpreting the tune as a 3/4 jazz waltz with duple cross-beats superimposed (2:3). He also expanded the harmonic structure of "Afro Blue", which was originally a B♭ pentatonic blues.
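The 3-2 versus 2-3 orientation described above can be illustrated concretely. The following minimal sketch is not from the source; it is a hedged Python illustration that writes the son clave out as onsets on a 16-pulse grid (two bars of 4/4 counted in eighth-note pulses) and shows that the 2-3 orientation is simply the 3-2 pattern with its two 8-pulse cells swapped.

```python
# Illustrative sketch (not from the source): son clave as onsets on a
# 16-pulse grid, i.e. two bars of 4/4 counted in eighth-note pulses.
# Pulses 0-7 form the "three-side" cell, pulses 8-15 the "two-side" cell.

SON_CLAVE_3_2 = [0, 3, 6, 10, 12]  # forward (3-2) son clave

def swap_cells(onsets, cycle=16):
    """Swap the two half-cycle cells, turning 3-2 clave into 2-3 clave (and back)."""
    half = cycle // 2
    return sorted((pulse + half) % cycle for pulse in onsets)

def render(onsets, cycle=16):
    """Render the cycle as a string: 'x' on a clave hit, '.' on a silent pulse."""
    return "".join("x" if pulse in onsets else "." for pulse in range(cycle))

if __name__ == "__main__":
    print("3-2 son clave:", render(SON_CLAVE_3_2))              # x..x..x...x.x...
    print("2-3 son clave:", render(swap_cells(SON_CLAVE_3_2)))  # ..x.x...x..x..x.
```

A harmonic phrase that begins against the first rendering is "in 3-2"; one that begins against the second is "in 2-3", with the caveat that, as Sanabria notes above, an arrangement can weave from one side of the clave to the other.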
In the late 1940s, there was a revival of Dixieland, harking back to the contrapuntal New Orleans style. This was driven in large part by record company reissues of jazz classics by the Oliver, Morton, and Armstrong bands of the 1930s. There were two types of musicians involved in the revival: the first group was made up of those who had begun their careers playing in the traditional style and were returning to it (or continuing what they had been playing all along), such as Bob Crosby's Bobcats, Max Kaminsky, Eddie Condon, and Wild Bill Davison. Most of these players were originally Midwesterners, although there were a small number of New Orleans musicians involved. The second group of revivalists consisted of younger musicians, such as those in the Lu Watters band, Conrad Janis, and Ward Kimball and his Firehouse Five Plus Two Jazz Band. By the late 1940s, Louis Armstrong's Allstars band became a leading ensemble. Through the 1950s and 1960s, Dixieland was one of the most commercially popular jazz styles in the US, Europe, and Japan, although critics paid little attention to it.

Cool jazz and West Coast jazz

In 1944, jazz impresario Norman Granz organized the first Jazz at the Philharmonic concert in Los Angeles, which helped make stars of Nat "King" Cole and Les Paul. In 1946, he founded Clef Records, discovering Canadian jazz pianist Oscar Peterson in 1949 and merging Clef with his new label Verve Records in 1956, which advanced the careers of Ella Fitzgerald and others.

By the end of the 1940s, the nervous energy and tension of bebop was replaced with a tendency toward calm and smoothness with the sounds of cool jazz, which favored long, linear melodic lines. It emerged in New York City and dominated jazz in the first half of the 1950s. The starting point was a collection of 1949 and 1950 singles by a nonet led by Miles Davis, released as Birth of the Cool (1957). Later cool jazz recordings by musicians such as Chet Baker, Dave Brubeck, Bill Evans, Gil Evans, Stan Getz, the Modern Jazz Quartet, and Gerry Mulligan usually had a lighter sound that avoided the aggressive tempos and harmonic abstraction of bebop. Cool jazz later became strongly identified with the West Coast jazz scene, as typified by singers Chet Baker, Mel Tormé, and Anita O'Day, but it also had a particular resonance in Europe, especially Scandinavia, where figures such as baritone saxophonist Lars Gullin and pianist Bengt Hallberg emerged. The theoretical underpinnings of cool jazz were laid out by the Chicago pianist Lennie Tristano, and its influence stretches into such later developments as bossa nova, modal jazz, and even free jazz.

[Audio examples omitted: a 1941 Duke Ellington recording illustrating the swing style; a Charlie Parker saxophone solo illustrating bebop; a John Coltrane hard blues illustrating hard bop; and a 1973 Mahavishnu Orchestra piece illustrating jazz fusion.]

Hard bop is an extension of bebop (or "bop") music which incorporates influences from rhythm and blues, gospel music and blues, especially in the saxophone and piano playing.
Hard bop was developed in the mid-1950s, coalescing in 1953 and 1954; it developed partly in response to the vogue for cool jazz in the early 1950s and paralleled the rise of rhythm and blues. Miles Davis' 1954 performance of "Walkin'" at the first Newport Jazz Festival announced the style to the jazz world. The quintet Art Blakey and the Jazz Messengers, fronted by Blakey and featuring pianist Horace Silver and trumpeter Clifford Brown, were leaders in the hard bop movement along with Davis.

Modal jazz is a development, beginning in the later 1950s, which takes the mode, or musical scale, as the basis of musical structure and improvisation. Previously, a solo was meant to fit into a given chord progression, but with modal jazz, the soloist creates a melody using one (or a small number of) modes. The emphasis is thus shifted from harmony to melody: "Historically, this caused a seismic shift among jazz musicians, away from thinking vertically (the chord), and towards a more horizontal approach (the scale)," explained pianist Mark Levine.

The modal theory stems from a work by George Russell. Miles Davis introduced the concept to the greater jazz world with Kind of Blue (1959), an exploration of the possibilities of modal jazz which would become the best-selling jazz album of all time. In contrast to Davis' earlier work with hard bop and its complex chord progressions and improvisation, Kind of Blue was composed as a series of modal sketches in which the musicians were given scales that defined the parameters of their improvisation and style. "I didn't write out the music for Kind of Blue, but brought in sketches for what everybody was supposed to play because I wanted a lot of spontaneity," recalled Davis. The track "So What" has only two chords: D-7 and E♭-7.

By the 1950s, Afro-Cuban jazz had been using modes for at least a decade, as much of it borrowed from Cuban popular dance forms which are structured around multiple ostinatos with only a few chords. A case in point is Mario Bauza's "Tanga" (1943), the first Afro-Cuban jazz piece. Machito's Afro-Cubans recorded modal tunes in the 1940s, featuring jazz soloists such as Howard McGhee, Brew Moore, Charlie Parker, and Flip Phillips. However, there is no evidence that Davis or other mainstream jazz musicians were influenced by the use of modes in Afro-Cuban jazz, or other branches of Latin jazz.

Free jazz, and the related form of avant-garde jazz, broke through into an open space of "free tonality" in which meter, beat, and formal symmetry all disappeared, and a range of world music from India, Africa, and Arabia was melded into an intense, even religiously ecstatic or orgiastic style of playing. While loosely inspired by bebop, free jazz tunes gave players much more latitude; the loose harmony and tempo was deemed controversial when this approach was first developed. The bassist Charles Mingus is also frequently associated with the avant-garde in jazz, although his compositions draw from myriad styles and genres. The first major stirrings came in the 1950s with the early work of Ornette Coleman (whose 1960 album Free Jazz: A Collective Improvisation coined the term) and Cecil Taylor. In the 1960s, exponents included Albert Ayler, Gato Barbieri, Carla Bley, Don Cherry, Larry Coryell, John Coltrane, Bill Dixon, Jimmy Giuffre, Steve Lacy, Michael Mantler, Sun Ra, Roswell Rudd, Pharoah Sanders, and John Tchicai.
In developing his late style, Coltrane was especially influenced by the dissonance of Ayler's trio with bassist Gary Peacock and drummer Sunny Murray, a rhythm section honed with Cecil Taylor as leader. In November 1961, Coltrane played a gig at the Village Vanguard, which resulted in the classic Chasin' the 'Trane, a recording that Down Beat magazine panned as "anti-jazz". On his 1961 tour of France, he was booed, but he persevered. Having signed with the new Impulse! Records in 1960, he turned the label into "the house that Trane built", championing many younger free jazz musicians, notably Archie Shepp, who often played with trumpeter Bill Dixon, organizer of the four-day "October Revolution in Jazz" in Manhattan in 1964, the first free jazz festival.

A series of recordings with the Classic Quartet in the first half of 1965 shows Coltrane's playing becoming increasingly abstract, with greater incorporation of devices like multiphonics, utilization of overtones, and playing in the altissimo register, as well as a mutated return to Coltrane's sheets of sound. In the studio, he all but abandoned his soprano to concentrate on the tenor saxophone. In addition, the quartet responded to the leader by playing with increasing freedom. The group's evolution can be traced through the recordings The John Coltrane Quartet Plays, Living Space and Transition (both June 1965), New Thing at Newport (July 1965), Sun Ship (August 1965), and First Meditations (September 1965).

In June 1965, Coltrane and 10 other musicians recorded Ascension, a 40-minute-long piece without breaks that included adventurous solos by young avant-garde musicians as well as Coltrane, and was controversial primarily for the collective improvisation sections that separated the solos. Dave Liebman later called it "the torch that lit the free jazz thing." After recording with the quartet over the next few months, Coltrane invited Pharoah Sanders to join the band in September 1965. While Coltrane used over-blowing frequently as an emotional exclamation-point, Sanders would opt to overblow his entire solo, resulting in a constant screaming and screeching in the altissimo range of the instrument.

Free jazz in Europe

Free jazz quickly found a foothold in Europe, in part because musicians such as Ayler, Taylor, Steve Lacy and Eric Dolphy spent extended periods there, and European musicians Michael Mantler, John Tchicai et al. traveled to the U.S. to experience American approaches firsthand. A distinctive European contemporary jazz (sometimes incorporating elements of free jazz but not limited to it) flourished because of the emergence of highly distinctive European or European-based musicians such as Peter Brötzmann, John Surman, Krzysztof Komeda, Zbigniew Namysłowski, Tomasz Stanko, Lars Gullin, Joe Harriott, Albert Mangelsdorff, Kenny Wheeler, Graham Collier, Michael Garrick and Mike Westbrook, who were anxious to develop new approaches reflecting their national and regional musical cultures and contexts.

Since the 1960s, various creative centers of jazz have developed in Europe, such as the creative jazz scene in Amsterdam. Following the work of veteran drummer Han Bennink and pianist Misha Mengelberg, musicians started to explore free music by collectively improvising until a certain form (melody, rhythm, or even a famous song) was found by the band. Jazz critic Kevin Whitehead documented the free jazz scene in Amsterdam and some of its main exponents such as the ICP (Instant Composers Pool) orchestra in his book New Dutch Swing.
Since the 1990s, Keith Jarrett has been prominent in defending free jazz from criticism by traditionalists. British scholar Stuart Nicholson has argued that European contemporary jazz's identity is now substantially independent of American jazz and follows a different trajectory.

Latin jazz is the term used to describe jazz which employs Latin American rhythms, and is generally understood to have a more specific meaning than simply jazz from Latin America. A more precise term might be Afro-Latin jazz, as the jazz subgenre typically employs rhythms that either have a direct analog in Africa or exhibit an African rhythmic influence beyond what is ordinarily heard in other jazz. The two main categories of Latin jazz are Afro-Cuban jazz and Brazilian jazz.

In the 1960s and 1970s, many jazz musicians had only a basic understanding of Cuban and Brazilian music, and jazz compositions which used Cuban or Brazilian elements were often referred to as "Latin tunes", with no distinction between a Cuban son montuno and a Brazilian bossa nova. Even as late as 2000, in Mark Gridley's Jazz Styles: History and Analysis, a bossa nova bass line is referred to as a "Latin bass figure." It was not uncommon during the 1960s and 1970s to hear a conga playing a Cuban tumbao while the drumset and bass played a Brazilian bossa nova pattern. Many jazz standards such as "Manteca", "On Green Dolphin Street" and "Song for My Father" have a "Latin" A section and a swung B section. Typically, the band would only play an even-eighth "Latin" feel in the A section of the head and swing throughout all of the solos. Latin jazz specialists like Cal Tjader tended to be the exception. For example, on a 1959 live Tjader recording of "A Night in Tunisia", pianist Vince Guaraldi soloed through the entire form over an authentic mambo.

Afro-Cuban jazz often uses Afro-Cuban instruments such as congas, timbales, güiro, and claves, combined with piano, double bass, etc. Afro-Cuban jazz began with Machito's Afro-Cubans in the early 1940s, but took off and entered the mainstream in the late 1940s when bebop musicians such as Dizzy Gillespie and Billy Taylor began experimenting with Cuban rhythms. Mongo Santamaria and Cal Tjader further refined the genre in the late 1950s. Although a great deal of Cuban-based Latin jazz is modal, Latin jazz is not always modal: it can be as harmonically expansive as post-bop jazz. For example, Tito Puente recorded an arrangement of "Giant Steps" set to an Afro-Cuban guaguancó. A Latin jazz piece may momentarily contract harmonically, as in the case of a percussion solo over a one- or two-chord piano guajeo.

Guajeo is the name for the typical Afro-Cuban ostinato melodies which are commonly used motifs in Latin jazz compositions. They originated in the genre known as son. Guajeos provide a rhythmic and melodic framework that may be varied within certain parameters, whilst still maintaining a repetitive, and thus "danceable", structure. Most guajeos are rhythmically based on clave. Guajeos are one of the most important elements of the vocabulary of Afro-Cuban descarga (jazz-inspired instrumental jams), providing a means of tension and resolution and a sense of forward momentum, within a relatively simple harmonic structure. The use of multiple, contrapuntal guajeos in Latin jazz facilitates simultaneous collective improvisation based on theme variation. In a way, this polyphonic texture is reminiscent of the original New Orleans style of jazz.
Afro-Cuban jazz renaissance

For most of its history, Afro-Cuban jazz had been a matter of superimposing jazz phrasing over Cuban rhythms. But by the end of the 1970s, a new generation of New York City musicians had emerged who were fluent in both salsa dance music and jazz, leading to a new level of integration of jazz and Cuban rhythms. This era of creativity and vitality is best represented by the Gonzalez brothers Jerry (congas and trumpet) and Andy (bass). During 1974–1976, they were members of one of Eddie Palmieri's most experimental salsa groups: salsa was the medium, but Palmieri was stretching the form in new ways. He incorporated parallel fourths, with McCoy Tyner-type vamps. The innovations of Palmieri, the Gonzalez brothers and others led to an Afro-Cuban jazz renaissance in New York City. This occurred in parallel with developments in Cuba.

The first Cuban band of this new wave was Irakere. Their "Chékere-son" (1976) introduced a style of "Cubanized" bebop-flavored horn lines that departed from the more angular guajeo-based lines which were typical of Cuban popular music and Latin jazz up until that time. It was based on Charlie Parker's composition "Billie's Bounce", jumbled together in a way that fused clave and bebop horn lines. In spite of the ambivalence of some band members towards Irakere's Afro-Cuban folkloric / jazz fusion, their experiments forever changed Cuban jazz: their innovations are still heard in the high level of harmonic and rhythmic complexity in Cuban jazz and in the jazzy and complex contemporary form of popular dance music known as timba.

Brazilian jazz such as bossa nova is derived from samba, with influences from jazz and other 20th-century classical and popular music styles. Bossa is generally moderately paced, with melodies sung in Portuguese or English, whilst the related term jazz-samba describes an adaptation of street samba into jazz. The bossa nova style was pioneered by Brazilians João Gilberto and Antônio Carlos Jobim and was made popular by Elizete Cardoso's recording of "Chega de Saudade" on the Canção do Amor Demais LP. Gilberto's initial releases, and the 1959 film Black Orpheus, achieved significant popularity in Latin America; this spread to North America via visiting American jazz musicians. The resulting recordings by Charlie Byrd and Stan Getz cemented bossa nova's popularity and led to a worldwide boom, with 1963's Getz/Gilberto, numerous recordings by famous jazz performers such as Ella Fitzgerald and Frank Sinatra, and the eventual entrenchment of the bossa nova style as a lasting influence in world music. Brazilian percussionists such as Airto Moreira and Naná Vasconcelos also influenced jazz internationally by introducing Afro-Brazilian folkloric instruments and rhythms into a wide variety of jazz styles, thus attracting a greater audience to them.

Post-bop jazz is a form of small-combo jazz derived from earlier bop styles. The genre's origins lie in seminal work by John Coltrane, Miles Davis, Bill Evans, Charles Mingus, Wayne Shorter, and Herbie Hancock. Generally, the term post-bop is taken to mean jazz from the mid-sixties onwards that assimilates influences from hard bop, modal jazz, the avant-garde and free jazz, without necessarily being immediately identifiable as any of the above. Much post-bop was recorded for Blue Note Records.
Key albums include Speak No Evil by Shorter; The Real McCoy by McCoy Tyner; Maiden Voyage by Hancock; Miles Smiles by Davis; and Search for the New Land by Lee Morgan (an artist who is not typically associated with the post-bop genre). Most post-bop artists worked in other genres as well, with a particularly strong overlap with the earlier hard bop. Soul jazz was a development of hard bop which incorporated strong influences from blues, gospel and rhythm and blues to create music for small groups, often the organ trio of Hammond organ, drummer and tenor saxophonist. Unlike hard bop, soul jazz generally emphasized repetitive grooves and melodic hooks, and improvisations were often less complex than in other jazz styles. It often had a steadier "funk" style groove, which was different from the swing rhythms typical of much hard bop. Horace Silver had a large influence on the soul jazz style, with songs that used funky and often gospel-based piano vamps. Important soul jazz organists included Jimmy McGriff, Jimmy Smith and Johnny Hammond Smith, and influential tenor saxophone players included Eddie "Lockjaw" Davis and Stanley Turrentine. There was a resurgence of interest in jazz and other forms of African-American cultural expression during the Black Arts Movement and Black nationalist period of the 1960s and 1970s. African themes became popular, and many new jazz compositions were given African-related titles: "Black Nile" (Wayne Shorter), "Blue Nile" (Alice Coltrane), "Obirin African" (Art Blakey), "Zambia" (Lee Morgan), "Appointment in Ghana" (Jackie McLean), "Marabi" (Cannonball Adderley), "Yoruba" (Hubert Laws), and many more. Pianist Randy Weston's music incorporated African elements, such as in the large-scale suite "Uhuru Africa" (with the participation of poet Langston Hughes) and "Highlife: Music From the New African Nations." Both Weston and saxophonist Stanley Turrentine covered the Nigerian Bobby Benson's piece "Niger Mambo", which features Afro-Caribbean and jazz elements within a West African Highlife style. Some musicians, including Pharoah Sanders, Hubert Laws, and Wayne Shorter, began using African instruments such as kalimbas, bells, beaded gourds and other instruments which were not traditional to jazz. During this period, there was an increased use of the typical African 12/8 cross-rhythmic structure in jazz. Herbie Hancock's "Succotash" on Inventions and Dimensions (1963) is an open-ended modal 12/8 improvised jam, in which Hancock's pattern of attack-points, rather than the pattern of pitches, is the primary focus of his improvisations, accompanied by Paul Chambers on bass, percussionist Osvaldo Martinez playing a traditional Afro-Cuban chekeré part and Willie Bobo playing an Abakuá bell pattern on a snare drum with brushes. The first jazz standard composed by a non-Latino to use an overt African 12/8 cross-rhythm was Wayne Shorter's "Footprints" (1967). On the version recorded on Miles Smiles by Miles Davis, the bass switches to a 4/4 tresillo figure at 2:20. "Footprints" is not, however, a Latin jazz tune: African rhythmic structures are accessed directly by Ron Carter (bass) and Tony Williams (drums) via the rhythmic sensibilities of swing. Throughout the piece, the four beats, whether sounded or not, are maintained as the temporal referent. In the example below, the main beats are indicated by slashed noteheads, which do not indicate bass notes. 
The minor pentatonic scale is often used in blues improvisation, and like a blues scale, a minor pentatonic scale can be played over all of the chords in a blues. The following pentatonic lick was played over blues changes by Joe Henderson on Horace Silver's "African Queen" (1965). Levine points out that the V pentatonic scale works for all three chords of the standard II-V-I jazz progression. This is a very common progression, used in pieces such as Miles Davis' "Tune Up." The following example shows the V pentatonic scale over a II-V-I progression. Accordingly, John Coltrane's "Giant Steps" (1960), with its 26 chords per 16 bars, can be played using only three pentatonic scales. Coltrane studied Nicolas Slonimsky's Thesaurus of Scales and Melodic Patterns, which contains material that is virtually identical to portions of "Giant Steps". The harmonic complexity of "Giant Steps" is on the level of the most advanced 20th-century art music. Superimposing the pentatonic scale over "Giant Steps" is not merely a matter of harmonic simplification, but also a sort of "Africanizing" of the piece, which provides an alternate approach for soloing. Mark Levine observes that when mixed in with more conventional "playing the changes", pentatonic scales provide "structure and a feeling of increased space."

In the late 1960s and early 1970s, the hybrid form of jazz-rock fusion was developed by combining jazz improvisation with rock rhythms, electric instruments and the highly amplified stage sound of rock musicians such as Jimi Hendrix and Frank Zappa. Jazz fusion often uses mixed meters, odd time signatures, syncopation, complex chords, and harmonies. According to AllMusic: ...until around 1967, the worlds of jazz and rock were nearly completely separate. [However, ...] as rock became more creative and its musicianship improved, and as some in the jazz world became bored with hard bop and did not want to play strictly avant-garde music, the two different idioms began to trade ideas and occasionally combine forces.

Miles Davis' new directions

In 1969, Davis fully embraced the electric instrument approach to jazz with In a Silent Way, which can be considered his first fusion album. Composed of two side-long suites edited heavily by producer Teo Macero, this quiet, static album would be equally influential to the development of ambient music. As Davis recalls: The music I was really listening to in 1968 was James Brown, the great guitar player Jimi Hendrix, and a new group who had just come out with a hit record, "Dance to the Music", Sly and the Family Stone... I wanted to make it more like rock. When we recorded In a Silent Way I just threw out all the chord sheets and told everyone to play off of that.

Davis' Bitches Brew (1970) album was his most successful of this era. Although inspired by rock and funk, Davis' fusion creations were original and brought about a new type of avant-garde, electronic, psychedelic-jazz, as far from pop music as any other Davis work. Pianist Herbie Hancock (a Davis alumnus) released a string of albums in the short-lived (1970–1973) psychedelic-jazz subgenre: Mwandishi (1971), Crossings (1972), and Sextant (1973). The rhythmic background was a mix of rock, funk, and African-type textures. Musicians who had previously worked with Davis formed the four most influential fusion groups: Weather Report and Mahavishnu Orchestra emerged in 1971 and were soon followed by Return to Forever and The Headhunters.
Weather Report's electronic and psychedelic self-titled debut album caused a sensation in the jazz world on its arrival in 1971, thanks to the pedigree of the group's members (including percussionist Airto Moreira) and their unorthodox approach to music. The album featured a softer sound than would be the case in later years (predominantly using acoustic bass, with Shorter exclusively playing soprano saxophone and with no synthesizers involved), but is still considered a classic of early fusion. It built on the avant-garde experiments which Joe Zawinul and Shorter had pioneered with Miles Davis on Bitches Brew, including an avoidance of head-and-chorus composition in favour of continuous rhythm and movement – but took the music further. To emphasise the group's rejection of standard methodology, the album opened with the inscrutable avant-garde atmospheric piece "Milky Way", which featured Shorter's extremely muted saxophone inducing vibrations in Zawinul's piano strings while the latter pedalled the instrument. Down Beat described the album as "music beyond category", and awarded it Album of the Year in the magazine's polls that year.

Although some jazz purists protested against the blend of jazz and rock, many jazz innovators crossed over from the contemporary hard bop scene into fusion. As well as the electric instruments of rock (such as electric guitar, electric bass, electric piano and synthesizer keyboards), fusion also used the powerful amplification, "fuzz" pedals, wah-wah pedals and other effects that were used by 1970s-era rock bands. Notable performers of jazz fusion included Miles Davis, Eddie Harris, keyboardists Joe Zawinul, Chick Corea, and Herbie Hancock, vibraphonist Gary Burton, drummer Tony Williams, violinist Jean-Luc Ponty, guitarists Larry Coryell, Al Di Meola, John McLaughlin, and Frank Zappa, saxophonist Wayne Shorter and bassists Jaco Pastorius and Stanley Clarke. Jazz fusion was also popular in Japan, where the band Casiopea released over thirty fusion albums.

According to jazz writer Stuart Nicholson, "just as free jazz appeared on the verge of creating a whole new musical language in the 1960s ... jazz-rock briefly suggested the promise of doing the same" with albums such as Williams' Emergency! (1970) and Davis' Agharta (1975), which Nicholson said "suggested the potential of evolving into something that might eventually define itself as a wholly independent genre quite apart from the sound and conventions of anything that had gone before." This development was stifled by commercialism, Nicholson said, as the genre "mutated into a peculiar species of jazz-inflected pop music that eventually took up residence on FM radio" at the end of the 1970s.

By the mid-1970s, the sound known as jazz-funk had developed, characterized by a strong back beat (groove), electrified sounds and, often, the presence of electronic analog synthesizers. Jazz-funk also draws influences from traditional African music, Afro-Cuban rhythms and Jamaican reggae, notably Kingston bandleader Sonny Bradshaw. Another feature is the shift of emphasis from improvisation to composition: arrangements, melody and overall writing became important. The integration of funk, soul, and R&B music into jazz resulted in the creation of a genre whose spectrum is wide and ranges from strong jazz improvisation to soul, funk or disco with jazz arrangements, jazz riffs and jazz solos, and sometimes soul vocals.
Early examples are Herbie Hancock's Headhunters band and Miles Davis' On the Corner album, which, in 1972, began Davis' foray into jazz-funk and was, he claimed, an attempt at reconnecting with the young black audience which had largely forsaken jazz for rock and funk. While there is a discernible rock and funk influence in the timbres of the instruments employed, other tonal and rhythmic textures, such as the Indian tambora and tablas and Cuban congas and bongos, create a multi-layered soundscape. The album was a culmination of sorts of the musique concrète approach that Davis and producer Teo Macero had begun to explore in the late 1960s.

Jazz continued to expand and change, influenced by other types of music such as world music, avant-garde classical music and rock and pop. Jazz musicians began to improvise on unusual instruments, such as the jazz harp (Alice Coltrane), the electrically amplified and wah-wah pedaled jazz violin (Jean-Luc Ponty) and the bagpipes (Rufus Harley). In 1966, jazz trumpeter Don Ellis and Indian sitar player Harihar Rao founded the Hindustani Jazz Sextet. In 1971, guitarist John McLaughlin's Mahavishnu Orchestra began playing a mix of rock and jazz infused with East Indian influences. In the 1970s, the ECM record label began in Germany with artists including Keith Jarrett, Paul Bley, the Pat Metheny Group, Jan Garbarek, Ralph Towner, Kenny Wheeler, John Taylor, John Surman, and Eberhard Weber, establishing a new chamber music aesthetic which featured mainly acoustic instruments, occasionally incorporating elements of world music and folk.

Traditionalism in the 1980s

The 1980s saw something of a reaction against the fusion and free jazz that had dominated the 1970s. Trumpeter Wynton Marsalis emerged early in the decade, and strove to create music within what he believed was the tradition, rejecting both fusion and free jazz and creating extensions of the small and large forms initially pioneered by artists such as Louis Armstrong and Duke Ellington, as well as the hard bop of the 1950s. It is debatable whether Marsalis' critical and commercial success was a cause or a symptom of the reaction against fusion and free jazz and the resurgence of interest in the kind of jazz pioneered in the 1960s (particularly modal jazz and post-bop); nonetheless, there were many other manifestations of a resurgence of traditionalism, even if fusion and free jazz were by no means abandoned and continued to develop and evolve.

For example, several musicians who had been prominent in the fusion genre during the 1970s began to record acoustic jazz once more, including Chick Corea and Herbie Hancock. Other musicians who had experimented with electronic instruments in the previous decade had abandoned them by the 1980s; for example, Bill Evans, Joe Henderson, and Stan Getz. Even the 1980s music of Miles Davis, although certainly still fusion, adopted a far more accessible and recognisably jazz-oriented approach than his abstract work of the mid-1970s, such as a return to a theme-and-solos approach.

The emergence of young jazz talent beginning to perform in older, established musicians' groups further contributed to the resurgence of traditionalism in the jazz community. In the 1970s, the groups of Betty Carter and Art Blakey and the Jazz Messengers had retained their conservative jazz approaches in the midst of fusion and jazz-rock, and in addition to difficulty booking their acts, struggled to find younger generations of personnel to authentically play traditional styles such as hard bop and bebop.
In the late 1970s, however, a new generation of young players began to appear in Blakey's band. This movement included musicians such as Valery Ponomarev and Bobby Watson, Dennis Irwin and James Williams. In the 1980s, in addition to Wynton and Branford Marsalis, a number of talented young musicians emerged from the Jazz Messengers: pianists such as Donald Brown, Mulgrew Miller and, later, Benny Green; bassists such as Charles Fambrough and Lonnie Plaxico (and, later, Peter Washington and Essiet Essiet); and horn players such as Bill Pierce, Donald Harrison and, later, Javon Jackson and Terence Blanchard, all of whom made significant contributions to jazz in the 1990s and 2000s.

The young Jazz Messengers' contemporaries, including Roy Hargrove, Marcus Roberts, Wallace Roney and Mark Whitfield, were also influenced by Wynton Marsalis's emphasis toward jazz tradition. These younger rising stars rejected avant-garde approaches and instead championed the acoustic jazz sound of Charlie Parker, Thelonious Monk and the early recordings of the first Miles Davis quintet. This group of "Young Lions" sought to reaffirm jazz as a high art tradition comparable to the discipline of classical music.

In addition, Betty Carter's rotation of young musicians in her group included many who would later become New York's preeminent traditional jazz players. Among these musicians were Jazz Messenger alumni Benny Green, Branford Marsalis and Ralph Peterson Jr., as well as Kenny Washington, Lewis Nash, Curtis Lundy, Cyrus Chestnut, Mark Shim, Craig Handy, Greg Hutchinson and Marc Cary, Taurus Mateen and Geri Allen. Blue Note Records' O.T.B. ensemble featured a rotation of young jazz musicians such as Kenny Garrett, Steve Wilson, Kenny Davis, Renee Rosnes, Ralph Peterson Jr., Billy Drummond, and Robert Hurst.

The very leaders of the avant-garde started to signal a retreat from the core principles of free jazz. Anthony Braxton began recording standards over familiar chord changes. Cecil Taylor played duets in concert with Mary Lou Williams, and let her set out structured harmonies and familiar jazz vocabulary under his blistering keyboard attack. And the next generation of progressive players would be even more accommodating, moving inside and outside the changes without thinking twice. Musicians such as David Murray or Don Pullen may have felt the call of free-form jazz, but they never forgot all the other ways one could play African-American music for fun and profit.

Pianist Keith Jarrett—whose bands of the 1970s had played only original compositions with prominent free jazz elements—established his so-called 'Standards Trio' in 1983, which, although also occasionally exploring collective improvisation, has primarily performed and recorded jazz standards. Chick Corea similarly began exploring jazz standards in the 1980s, having neglected them during the 1970s.

In 1987, the United States Congress passed a resolution stating: "... that jazz is hereby designated as a rare and valuable national American treasure to which we should devote our attention, support and resources to make certain it is preserved, understood and promulgated." The resolution passed in the House of Representatives on September 23, 1987, and in the Senate on November 4, 1987.

In the early 1980s, a commercial form of jazz fusion called "pop fusion" or "smooth jazz" became successful, garnering significant radio airplay in "quiet storm" time slots at radio stations in urban markets across the U.S.
This helped to establish or bolster the careers of vocalists including Al Jarreau, Anita Baker, Chaka Khan, and Sade, as well as saxophonists including Grover Washington Jr., Kenny G, Kirk Whalum, Boney James, and David Sanborn. In general, smooth jazz is downtempo (the most widely played tracks are 90–105 beats per minute), and has a lead melody-playing instrument (saxophone, especially soprano and tenor, and legato electric guitar are popular).

In his Newsweek article "The Problem With Jazz Criticism", Stanley Crouch considers Miles Davis' playing of fusion to be a turning point that led to smooth jazz. Critic Aaron J. West has countered the often negative perceptions of smooth jazz, stating: I challenge the prevalent marginalization and malignment of smooth jazz in the standard jazz narrative. Furthermore, I question the assumption that smooth jazz is an unfortunate and unwelcomed evolutionary outcome of the jazz-fusion era. Instead, I argue that smooth jazz is a long-lived musical style that merits multi-disciplinary analyses of its origins, critical dialogues, performance practice, and reception.

Acid jazz, nu jazz and jazz rap

Acid jazz developed in the UK in the 1980s and 1990s, influenced by jazz-funk and electronic dance music. Acid jazz often contains various types of electronic composition (sometimes including sampling or a live DJ cutting and scratching), but it is just as likely to be played live by musicians, who often showcase jazz interpretation as part of their performance. Richard S. Ginell of AllMusic considers Roy Ayers "one of the prophets of acid jazz."

Nu jazz is influenced by jazz harmony and melodies, and there are usually no improvisational aspects. It can be very experimental in nature and can vary widely in sound and concept. It ranges from the combination of live instrumentation with the beats of jazz house (as exemplified by St Germain, Jazzanova, and Fila Brazillia) to more band-based improvised jazz with electronic elements (for example, The Cinematic Orchestra, Kobol and the Norwegian "future jazz" style pioneered by Bugge Wesseltoft, Jaga Jazzist, and Nils Petter Molvær).

Jazz rap developed in the late 1980s and early 1990s and incorporates jazz influences into hip hop. In 1988, Gang Starr released the debut single "Words I Manifest", which sampled Dizzy Gillespie's 1962 "Night in Tunisia", and Stetsasonic released "Talkin' All That Jazz", which sampled Lonnie Liston Smith. Gang Starr's debut LP No More Mr. Nice Guy (1989) and their 1990 track "Jazz Thing" sampled Charlie Parker and Ramsey Lewis. The groups which made up the Native Tongues Posse tended toward jazzy releases: these include the Jungle Brothers' debut Straight Out the Jungle (1988), and A Tribe Called Quest's People's Instinctive Travels and the Paths of Rhythm (1990) and The Low End Theory (1991). Rap duo Pete Rock & CL Smooth incorporated jazz influences on their 1992 debut Mecca and the Soul Brother. Rapper Guru's Jazzmatazz series began in 1993, using jazz musicians during the studio recordings.

Although jazz rap had achieved little mainstream success, Miles Davis' final album Doo-Bop (released posthumously in 1992) was based on hip hop beats and collaborations with producer Easy Mo Bee. Davis' ex-bandmate Herbie Hancock also absorbed hip-hop influences in the mid-1990s, releasing the album Dis Is Da Drum in 1994.

Punk jazz and jazzcore

The relaxation of orthodoxy which was concurrent with post-punk in London and New York City led to a new appreciation of jazz.
In London, the Pop Group began to mix free jazz and dub reggae into their brand of punk rock. In New York, No Wave took direct inspiration from both free jazz and punk. Examples of this style include Lydia Lunch's Queen of Siam, Gray, the work of James Chance and the Contortions (who mixed soul with free jazz and punk) and the Lounge Lizards (the first group to call themselves "punk jazz").

John Zorn took note of the emphasis on speed and dissonance that was becoming prevalent in punk rock, and incorporated this into free jazz with the release of the Spy vs. Spy album in 1986, a collection of Ornette Coleman tunes done in the contemporary thrashcore style. In the same year, Sonny Sharrock, Peter Brötzmann, Bill Laswell, and Ronald Shannon Jackson recorded the first album under the name Last Exit, a similarly aggressive blend of thrash and free jazz. These developments are the origins of jazzcore, the fusion of free jazz with hardcore punk.

The M-Base movement started in the 1980s, when a loose collective of young African-American musicians in New York, which included Steve Coleman, Greg Osby, and Gary Thomas, developed a complex but grooving sound. In the 1990s, most M-Base participants turned to more conventional music, but Coleman, the most active participant, continued developing his music in accordance with the M-Base concept. M-Base changed from a movement of a loose collective of young musicians to a kind of informal Coleman "school", with a concept that had grown much more advanced but had been implied from the beginning. Steve Coleman's music and M-Base concept gained recognition as the "next logical step" after Charlie Parker, John Coltrane, and Ornette Coleman.

Since the 1990s, jazz has been characterized by a pluralism in which no one style dominates, but rather a wide range of styles and genres are popular. Individual performers often play in a variety of styles, sometimes in the same performance. Pianist Brad Mehldau and The Bad Plus have explored contemporary rock music within the context of the traditional jazz acoustic piano trio, recording instrumental jazz versions of songs by rock musicians. The Bad Plus have also incorporated elements of free jazz into their music. A firm avant-garde or free jazz stance has been maintained by some players, such as saxophonists Greg Osby and Charles Gayle, while others, such as James Carter, have incorporated free jazz elements into a more traditional framework.

Harry Connick Jr. began his career playing stride piano and the dixieland jazz of his home, New Orleans, making his first recording when he was ten years old. Some of his earliest lessons were at the home of pianist Ellis Marsalis. Connick had success on the pop charts after recording the soundtrack to the movie When Harry Met Sally, which sold over two million copies. Crossover success has also been achieved by Diana Krall, Norah Jones, Cassandra Wilson, Kurt Elling, and Jamie Cullum.

A number of players who usually perform in largely straight-ahead settings have emerged since the 1990s, including pianists Jason Moran and Vijay Iyer, guitarist Kurt Rosenwinkel, vibraphonist Stefon Harris, trumpeters Roy Hargrove and Terence Blanchard, saxophonists Chris Potter and Joshua Redman, clarinetist Ken Peplowski and bassist Christian McBride.

Although jazz-rock fusion reached the height of its popularity in the 1970s, the use of electronic instruments and rock-derived musical elements in jazz continued in the 1990s and 2000s.
Musicians using this approach include Pat Metheny, John Abercrombie, John Scofield and the Swedish group e.s.t. In 2001, Ken Burns's documentary Jazz premiered on PBS, featuring Wynton Marsalis and other experts reviewing the entire history of American jazz to that time. It received some criticism, however, for its failure to reflect the many distinctive non-American traditions and styles in jazz that had developed, and its limited representation of US developments in the last quarter of the 20th century.

The mid-2010s have seen an increasing influence of R&B, hip-hop, and pop music on jazz. In 2015, Kendrick Lamar released his third studio album, To Pimp a Butterfly. The album heavily featured prominent contemporary jazz artists such as Thundercat and redefined jazz rap with a larger focus on improvisation and live soloing rather than simply sampling. In that same year, saxophonist Kamasi Washington released his nearly three-hour long debut, The Epic. Its hip-hop-inspired beats and R&B vocal interludes were not only acclaimed by critics for keeping jazz relevant and innovative, but also sparked a small resurgence of interest in jazz on the internet.

Another internet-aided trend of 2010s jazz is extreme reharmonization, inspired both by virtuosic players known for their speed and rhythm, such as Art Tatum, and by players known for their ambitious voicings and chords, such as Bill Evans. Supergroup Snarky Puppy has adopted this trend and has allowed players like Cory Henry to shape the grooves and harmonies of modern jazz soloing. YouTube phenomenon Jacob Collier also gained recognition for his ability to play a large number of instruments, his use of microtones and advanced polyrhythms, and his blending of a spectrum of genres in his largely homemade production process.

References

- "Jazz Origins in New Orleans - New Orleans Jazz National Historical Park". National Park Service. Retrieved 2017-03-19. - Germuska, Joe. ""The Jazz Book": A Map of Jazz Styles". WNUR-FM, Northwestern University. Retrieved 2017-03-19 – via University of Salzburg. - Roth, Russell (1952). "On the Instrumental Origins of Jazz". American Quarterly. 4 (4): 305–16. doi:10.2307/3031415. ISSN 0003-0678. JSTOR 3031415. - Hennessey, Thomas (1973). From Jazz to Swing: Black Jazz Musicians and Their Music, 1917–1935 (Ph.D. dissertation). Northwestern University. pp. 470–473. - Ferris, Jean (1993) America's Musical Landscape. Brown and Benchmark. ISBN 0697125165. pp. 228, 233 - Starr, Larry, and Christopher Waterman. "Popular Jazz and Swing: America's Original Art Form." IIP Digital. Oxford University Press, 26 July 2008. - Wilton, Dave (6 April 2015). "The baseball origin of 'jazz'". OxfordDictionaries.com. Oxford University Press. Retrieved 20 June 2016. - Seagrove, Gordon (July 11, 1915). "Blues is Jazz and Jazz Is Blues" (PDF). Chicago Daily Tribune. Archived from the original (PDF) on January 30, 2012. Retrieved November 4, 2011 – via Paris-Sorbonne University. Archived at Observatoire Musical Français, Paris-Sorbonne University. - Benjamin Zimmer (June 8, 2009). ""Jazz": A Tale of Three Cities". Word Routes. The Visual Thesaurus. Retrieved June 8, 2009. - "The Musical That Ushered In The Jazz Age Gets Its Own Musical", NPR Music, March 19, 2016 - "1999 Words of the Year, Word of the 1990s, Word of the 20th Century, Word of the Millennium". 13 January 2000. - Joachim E. Berendt. The Jazz Book: From Ragtime to Fusion and Beyond. Translated by H. and B. Bredigkeit with Dan Morgenstern. 1981.
https://en.wikipedia.org/wiki/Jazz
18
10
Students love playing games! As you are likely busy reviewing some basic skills with your students, this is an ideal time to use games and to practice those Math-Talk Guidelines (see previous post for these guidelines). As games engage students in learning and with each other, they are a great tool for setting up your math class culture (see previous blog post). In this post I will share several math games you can easily incorporate into your class.
Even students in middle school can benefit greatly from learning and using strategies such as 'Make 10', 'Near Doubles' and 'Add 9' for basic addition. I see many students in grades 6-8 (and beyond) still finger counting. They are more capable than that AND would benefit from the number sense and flexibility in thinking they will gain by learning and using these strategies. Using these strategies, students can also practice their math talk. For example, given the question 7 + 8, students could use a number of strategies that do not involve finger counting. They could:
- Use near doubles: 7 + 7 + 1 = 15, or
- Make 10: 7 + 3 + 5 = 15, or
- Add 10, take away 2 (instead of adding 8): 7 + 10 - 2 = 15, or
- Find 5's: 5 + 5 + 2 + 3 = 15
Can you see how each of these strategies builds number sense? Students are learning how they can deconstruct and reconstruct numbers. Ideally, students are fluent in a variety of strategies and choose the strategy that best fits the numbers. Students can share their individual strategies using the Math-Talk Guidelines to practice. This is also a great way to ensure that they are actually using strategies other than counting. Depending on the age of your students, you may also have blocks that they can use to work through these strategies, and this may help them develop a better understanding of what they are doing with the numbers and why. I give my grade 2 students base 10 blocks and they use a 10's 'stick' as their guide for how many to add to make 10. With a little practice, they start to learn the pairs of numbers that make 10, and then can start partitioning other numbers. At this age, they are using the blocks each time. With my grade 6 and older students, most are able to mentally (abstractly or visually) figure out how to break apart the numbers and create strategies that work for them.
Sharing to Increase Learning
Some students struggle with the Math-Talk because they don't know what they did to figure it out. This ability to reflect on their learning and thinking is a really important meta-cognitive skill, so this is a great opportunity to slow things down and focus on the thinking (rather than on the fact or the answer). Because we are working on relatively simple tasks (for middle school students) such as single-digit addition, it is a good time to practice the Math-Talk Guidelines in a low-stress environment. Students love learning about their peers' strategies, as they often find ways that they hadn't thought of themselves and that make a lot of sense to them. There are often a lot of 'Ah-ha' moments during this whole-class discussion. Again, because of the relatively simple nature of the arithmetic, students are able to 'Go Beyond' and 'Build On', which are two of the guidelines that require more sophisticated thinking. Once the strategies have been learned and understood (this is important because if they are just following a rote process without knowing why, they likely won't retain the strategies), then they need to practice to become fluent.
Using Games to Practice
I find games are a great way to do this practice.
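Before getting to the games, here is a minimal sketch, in Python, of how the 'Make 10' and 'near doubles' decompositions described above can be spelled out step by step. This is not from the original post; the function names are illustrative, and it is only meant to make the decompositions concrete.

```python
# A minimal sketch (not from the original post) showing how the "Make 10" and
# "near doubles" strategies decompose a single-digit addition such as 7 + 8.

def make_ten(a: int, b: int) -> str:
    """Split b so that a is topped up to 10 first: a + b = (a + need) + rest."""
    need = 10 - a            # how much a needs to reach 10
    rest = b - need          # what is left of b after topping up
    return f"{a} + {b} = ({a} + {need}) + {rest} = 10 + {rest} = {a + b}"

def near_doubles(a: int, b: int) -> str:
    """Use the nearer double: a + b = a + a + (b - a); works best when b is close to a."""
    diff = b - a
    return f"{a} + {b} = {a} + {a} + {diff} = {2 * a} + {diff} = {a + b}"

if __name__ == "__main__":
    print(make_ten(7, 8))      # 7 + 8 = (7 + 3) + 5 = 10 + 5 = 15
    print(near_doubles(7, 8))  # 7 + 8 = 7 + 7 + 1 = 14 + 1 = 15
```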
Students often leave a class after playing games saying, "That was awesome! We didn't do any math today." I always find this ironic because they often perform far more operations or 'practice questions' while playing a game than they would completing a worksheet. It is always good to mix it up, so doing some worksheets as well as some games and activities will help you meet more of the learners' needs in your class and will hopefully keep them engaged. This is critical to retaining learning; students need to find meaning and understanding, and sometimes what is meaningful is doing something fun, like playing a game! I really like the free worksheets found on www.gregtangmath.com (see materials and then downloads) because the worksheets are specific to the strategies. I also like Trevor Calkin's 'Power of Ten' program, www.poweroften.ca (many materials are free on this site). Please note that if these sheets are done without strategies, they will not be nearly as beneficial. Below I have included some games and links to games that I have used and that students have enjoyed, and there are thousands of free online games that students can use at home to continue practicing as well. Please email me any games you have used with success and I will share them in a future blog post.
For those who are still working on learning and understanding the numbers that make 10, you can play "Make 10 Go Fish" and "Missing Number".
How to play "Make 10 Go Fish": Students work in partners. Use a deck of cards with all of the 10's and face cards removed, then deal out 7 cards to each person; the rest sit face down in a pile in the middle. Instead of asking for pairs, students ask for the number that would "Make 10". For example, if they had a 3, they would ask for a 7. The game is played as normal "Go Fish" until someone is out of cards. The person with the most pairs wins. Unifix cubes are there to help the students figure out what number to ask for. For example, if they had a six in their hand, they could split their row of 10 into 6 and 4 and so figure out that 4 is what they need to ask for.
How to play "Missing Number": In partners, students are given one row of 10 blocks. One player breaks the row into two groups behind their back, shows their partner only one hand's worth of blocks, and asks, "How many are missing?" If the partner gives the correct answer, they get a point, and then they switch turns.
For those that have these important pairs mastered, you can play a number of games that give them the opportunity to practice the strategies for adding and subtracting.
Addition War: Using a deck of cards, you can either use the face cards as 11, 12, 13, remove them, or use them as 10's (depending on the age and level of your students; I like to give them the option and monitor that they choose correctly). Each player gets half the deck of cards, face down in a pile. Each player turns over 2 cards and adds them together. The person with the higher SUM gets all the cards. If there is a tie, flip again and the winner gets all 8 cards. The person at the end with the most cards wins. Ask students to talk their strategy aloud as they play. For example: If I had 6 + 7, I might say "13 because 6 + 6 + 1 = 12 + 1 = 13".
Subtraction War: Same as Addition War except they subtract the lower number from the higher number. Reflection: Which game did you find easier, addition or subtraction? Why?
Hi-Lo Dice Games: I love using various types of dice: 8-sided, 10-sided, 12-sided, 20-sided, and 30-sided.
Students can use two different dice, like an 8-sided and a 10-sided die, to start and then move up, or you could differentiate by giving certain dice to pairs of students based on where they need to start practicing (some can move straight into using two 30-sided dice and will break the numbers apart by place value to do their mental calculations). How to play: Both players roll the dice and add/subtract/multiply their numbers. Then roll a 6-sided die; if the number is EVEN, the person with the HIGH sum/difference wins 1 point, and if the number is ODD, the person with the LOW sum/difference wins 1 point. If they get the same number, they roll again and the winner of this round earns 2 points. The first player to 10 points wins. (A short simulation sketch of this game appears at the end of this post.) Math-Talk: Ask students to talk their strategy aloud as they play. For example: If I had 16 + 23, I might say "39 because 10 + 20 = 30 and 6 + 3 = 9 and 30 + 9 = 39". Reflection: What strategy or strategies do you find the most useful for you? Why? Which strategies do you find difficult to use? Why?
Here is a great article I highly recommend reading; at the end are descriptions of games that involve not just fact recall but also the use of strategies and visuals:
Bingo – always a favourite! Here are some free printable versions:
Here is a nice collection of games: I like this one because each student has to hold the other accountable (if they get the question wrong, they lose their turn).
Enjoy playing math with your students!
Educating Now was created due to teacher requests to have Nikki as their daily math coach. The site has lesson-by-lesson video tutorials for teachers to help them prep for their next math class and incorporate manipulatives, differentiated tasks, games and specific language into their class. Teachers who use the site can improve student engagement and understanding, in addition to saving prep time, by watching a 10-minute video tutorial and downloading a detailed lesson plan. My mission is for teachers to feel great about their impact on student learning. I make it easy for teachers to prepare and deliver lessons that will change lives.
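As promised above, here is a rough simulation of the Hi-Lo dice game, written in Python. It is not part of the original post; the die sizes, the operation, and the target score are parameters the post leaves to the teacher, so the defaults below are only illustrative.

```python
# A rough simulation of the Hi-Lo dice game described above, assuming Python.
# Die sizes, the combining operation, and the target score are illustrative defaults.
import random

def roll(sides: int) -> int:
    return random.randint(1, sides)

def play_round(sides_a=8, sides_b=10, op=lambda x, y: x + y):
    """Each player rolls two dice and combines them; returns both results."""
    p1 = op(roll(sides_a), roll(sides_b))
    p2 = op(roll(sides_a), roll(sides_b))
    return p1, p2

def hi_lo_game(target=10):
    scores = [0, 0]
    while max(scores) < target:
        stake = 1
        p1, p2 = play_round()
        while p1 == p2:               # tie: roll again, and this round is worth 2 points
            stake = 2
            p1, p2 = play_round()
        high_wins = roll(6) % 2 == 0  # even -> HIGH result wins, odd -> LOW result wins
        winner = 0 if (p1 > p2) == high_wins else 1
        scores[winner] += stake
    return scores

if __name__ == "__main__":
    print("Final scores:", hi_lo_game())
```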
http://educatingnow.com/blog/using-games-to-review-basic-facts-and-practice-math-talk-guidelines/
18
33
Ideal gas law
The ideal gas law, also called the general gas equation, is the equation of state of a hypothetical ideal gas. It is a good approximation of the behavior of many gases under many conditions, although it has several limitations. It was first stated by Émile Clapeyron in 1834 as a combination of the empirical Boyle's law, Charles's law, Avogadro's law, and Gay-Lussac's law. The ideal gas law is often written as
PV = nRT,
where P, V and T are the pressure, volume and absolute temperature; n is the number of moles of gas; and R is the ideal gas constant. It is the same for all gases. It can also be derived from the microscopic kinetic theory, as was achieved (apparently independently) by August Krönig in 1856 and Rudolf Clausius in 1857.
The state of an amount of gas is determined by its pressure, volume, and temperature. The modern form of the equation relates these simply in two main forms. The temperature used in the equation of state is an absolute temperature: the appropriate SI unit is the kelvin. The most frequently introduced form is
PV = nRT = N k_B T,
where:
- P is the pressure of the gas,
- V is the volume of the gas,
- n is the amount of substance of gas (also known as number of moles),
- N is the number of gas molecules (or the Avogadro constant times the amount of substance n),
- R is the ideal, or universal, gas constant, equal to the product of the Boltzmann constant and the Avogadro constant,
- k_B is the Boltzmann constant,
- T is the absolute temperature of the gas.
In SI units, P is measured in pascals, V is measured in cubic metres, n is measured in moles, and T in kelvins (the Kelvin scale is a shifted Celsius scale, where 0.00 K = −273.15 °C, the lowest possible temperature). R has the value 8.314 J/(K·mol) ≈ 2 cal/(K·mol), or 0.08206 L·atm/(mol·K).
How much gas is present could be specified by giving the mass instead of the chemical amount of gas. Therefore, an alternative form of the ideal gas law may be useful. The chemical amount (n) (in moles) is equal to the total mass of the gas (m) (in grams) divided by the molar mass (M) (in grams per mole):
n = m/M.
By replacing n with m/M and subsequently introducing the density ρ = m/V, we get:
PV = (m/M)RT, that is, P = ρ(R/M)T.
Defining the specific gas constant R_specific as the ratio R/M,
P = ρ R_specific T.
This form of the ideal gas law is very useful because it links pressure, density, and temperature in a unique formula independent of the quantity of the considered gas. Alternatively, the law may be written in terms of the specific volume v, the reciprocal of density, as
P v = R_specific T.
It is common, especially in engineering applications, to represent the specific gas constant by the symbol R. In such cases, the universal gas constant is usually given a different symbol, such as R̄, to distinguish it. In any case, the context and/or units of the gas constant should make it clear as to whether the universal or specific gas constant is being referred to.
In statistical mechanics, the following molecular equation is derived from first principles:
P = n k_B T,
where P is the absolute pressure of the gas, n is the number density of the molecules in the given volume V (the number density is given by the ratio n = N/V, in contrast to the previous formulation in which n is the number of moles), T is the absolute temperature, and k_B is the Boltzmann constant relating temperature and energy, given by
k_B = R/N_A,
where N_A is the Avogadro constant.
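A small worked example may help here. The following Python sketch is not part of the original article; it simply applies PV = nRT with the gas constant quoted above to estimate how much air fills one cubic metre at roughly room conditions (the chosen P, V and T are illustrative).

```python
# A small worked example (not part of the original article): using PV = nRT to
# estimate the amount of air in a 1 m^3 box at room conditions.
R = 8.314          # J/(K*mol), universal gas constant quoted in the text
P = 101_325.0      # Pa, about 1 atm
V = 1.0            # m^3
T = 293.15         # K, about 20 degrees C

n = P * V / (R * T)            # moles, from n = PV/(RT)
N = n * 6.022_140_76e23        # molecules, using the Avogadro constant

print(f"n = {n:.1f} mol, N = {N:.2e} molecules")
# Roughly 41.6 mol, i.e. about 2.5e25 molecules in one cubic metre of air.
```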
Since ρ = m/V = nμm_u, where μ is the average particle mass in units of the atomic mass constant m_u, we find that the ideal gas law can be rewritten as
P = k_B ρ T / (μ m_u).
Energy associated with a gas
According to the assumptions of the kinetic theory of gases, there are no intermolecular attractions between the molecules of an ideal gas. In other words, its potential energy is zero. Hence, all the energy possessed by the gas is kinetic energy. For one mole of a (monatomic) ideal gas, this kinetic energy is E = (3/2)RT:
|Energy of gas|Mathematical formula|
|energy associated with one mole of a gas|(3/2)RT|
|energy associated with one gram of a gas|(3/2)(R/M)T = (3/2)R_specific T|
|energy associated with one molecule of a gas|(3/2)k_B T|
Applications to thermodynamic processes
The table below essentially simplifies the ideal gas equation for particular processes, thus making this equation easier to solve using numerical methods. A thermodynamic process is defined as a system that moves from state 1 to state 2, where the state number is denoted by a subscript. As shown in the first column of the table, basic thermodynamic processes are defined such that one of the gas properties (P, V, T, S, or H) is constant throughout the process. For a given thermodynamic process, in order to specify the extent of a particular process, one of the property ratios (which are listed under the column labeled "known ratio or delta") must be specified (either directly or indirectly). Also, the property for which the ratio is known must be distinct from the property held constant in the previous column (otherwise the ratio would be unity, and not enough information would be available to simplify the gas law equation). In the final three columns, the properties (P, V, or T) at state 2 can be calculated from the properties at state 1 using the equations listed.
|Process|Constant|Known ratio or delta|P2|V2|T2|
|Isobaric process|Pressure|V2/V1|P2 = P1|V2 = V1(V2/V1)|T2 = T1(V2/V1)|
| | |T2/T1|P2 = P1|V2 = V1(T2/T1)|T2 = T1(T2/T1)|
|Isochoric (isovolumetric) process|Volume|P2/P1|P2 = P1(P2/P1)|V2 = V1|T2 = T1(P2/P1)|
| | |T2/T1|P2 = P1(T2/T1)|V2 = V1|T2 = T1(T2/T1)|
|Isothermal process|Temperature|P2/P1|P2 = P1(P2/P1)|V2 = V1/(P2/P1)|T2 = T1|
| | |V2/V1|P2 = P1/(V2/V1)|V2 = V1(V2/V1)|T2 = T1|
|Isentropic process (reversible adiabatic process)|Entropy [a]|P2/P1|P2 = P1(P2/P1)|V2 = V1(P2/P1)^(−1/γ)|T2 = T1(P2/P1)^((γ − 1)/γ)|
| | |V2/V1|P2 = P1(V2/V1)^(−γ)|V2 = V1(V2/V1)|T2 = T1(V2/V1)^(1 − γ)|
| | |T2/T1|P2 = P1(T2/T1)^(γ/(γ − 1))|V2 = V1(T2/T1)^(1/(1 − γ))|T2 = T1(T2/T1)|
|Polytropic process|P V^n|P2/P1|P2 = P1(P2/P1)|V2 = V1(P2/P1)^(−1/n)|T2 = T1(P2/P1)^((n − 1)/n)|
| | |V2/V1|P2 = P1(V2/V1)^(−n)|V2 = V1(V2/V1)|T2 = T1(V2/V1)^(1 − n)|
| | |T2/T1|P2 = P1(T2/T1)^(n/(n − 1))|V2 = V1(T2/T1)^(1/(1 − n))|T2 = T1(T2/T1)|
|Isenthalpic process (irreversible adiabatic process)|Enthalpy [b]|P2 − P1|P2 = P1 + (P2 − P1)| |T2 = T1 + μJT(P2 − P1)|
| | |T2 − T1|P2 = P1 + (T2 − T1)/μJT| |T2 = T1 + (T2 − T1)|
^ a. In an isentropic process, system entropy (S) is constant. Under these conditions, P1V1^γ = P2V2^γ, where γ is defined as the heat capacity ratio, which is constant for a calorifically perfect gas. The value used for γ is typically 1.4 for diatomic gases like nitrogen (N2) and oxygen (O2) (and air, which is 99% diatomic). Also γ is typically 1.6 for monatomic gases like the noble gases helium (He) and argon (Ar). In internal combustion engines γ varies between 1.35 and 1.15, depending on the constituent gases and temperature.
^ b. In an isenthalpic process, system enthalpy (H) is constant. In the case of free expansion for an ideal gas, there are no molecular interactions, and the temperature remains constant. For real gases, the molecules do interact via attraction or repulsion depending on temperature and pressure, and heating or cooling does occur. This is known as the Joule–Thomson effect.
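As an illustration of how the table is used, the following Python sketch (not from the article; the numeric inputs are made up) applies two of its rows: the isentropic T2 from a pressure ratio, with γ = 1.4 from note a, and the isenthalpic temperature change using the air value of μJT quoted in the next paragraph.

```python
# A minimal sketch applying two rows of the process table above. Values are illustrative.
def isentropic_T2(T1: float, pressure_ratio: float, gamma: float = 1.4) -> float:
    """T2 = T1 * (P2/P1)^((gamma - 1)/gamma), per the isentropic row."""
    return T1 * pressure_ratio ** ((gamma - 1.0) / gamma)

def isenthalpic_T2(T1: float, dP_bar: float, mu_jt: float = 0.22) -> float:
    """T2 = T1 + mu_JT * (P2 - P1); mu_JT in degC/bar (a temperature difference in
    degrees Celsius equals the same difference in kelvins), dP in bar."""
    return T1 + mu_jt * dP_bar

if __name__ == "__main__":
    # Compressing air reversibly and adiabatically from 1 bar to 10 bar, starting at 300 K:
    print(f"Isentropic: T2 = {isentropic_T2(300.0, 10.0):.1f} K")    # about 579 K
    # Throttling air so the pressure drops by 5 bar (Joule-Thomson cooling):
    print(f"Isenthalpic: T2 = {isenthalpic_T2(300.0, -5.0):.1f} K")  # about 298.9 K
```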
For reference, the Joule–Thomson coefficient μJT for air at room temperature and sea level is 0.22 °C/bar.
Deviations from ideal behavior of real gases
The equation of state given here (PV = nRT) applies only to an ideal gas, or as an approximation to a real gas that behaves sufficiently like an ideal gas. There are in fact many different forms of the equation of state. Since the ideal gas law neglects both molecular size and intermolecular attractions, it is most accurate for monatomic gases at high temperatures and low pressures. The neglect of molecular size becomes less important for lower densities, i.e. for larger volumes at lower pressures, because the average distance between adjacent molecules becomes much larger than the molecular size. The relative importance of intermolecular attractions diminishes with increasing thermal kinetic energy, i.e., with increasing temperatures. More detailed equations of state, such as the van der Waals equation, account for deviations from ideality caused by molecular size and intermolecular forces.
Empirically, combining Boyle's law, Charles's law and Avogadro's law (as noted in the introduction) leads to the same equation of state; hence the ideal gas law is
PV = nRT.
The ideal gas law can also be derived from first principles using the kinetic theory of gases, in which several simplifying assumptions are made, chief among which are that the molecules, or atoms, of the gas are point masses, possessing mass but no significant volume, and undergo only elastic collisions with each other and the sides of the container in which both linear momentum and kinetic energy are conserved. Let q = (qx, qy, qz) and p = (px, py, pz) denote the position vector and momentum vector of a particle of an ideal gas, respectively. Let F denote the net force on that particle. Then the time-averaged kinetic energy of the particle is
⟨K⟩ = −(1/2)⟨q · F⟩.
By Newton's third law and the ideal gas assumption, the net force of the system is the force applied by the walls of the container, and this force is given by the pressure P of the gas. Hence, summing over all N particles,
−⟨q · F⟩ = P ∮ q · dS,
where dS is the infinitesimal area element along the walls of the container. Since the divergence of the position vector q is
∇ · q = 3,
the divergence theorem implies that
P ∮ q · dS = P ∫ (∇ · q) dV = 3PV,
where dV is an infinitesimal volume within the container and V is the total volume of the container. Putting these equalities together yields
N⟨K⟩ = (3/2)PV,
which, since the average kinetic energy per particle is ⟨K⟩ = (3/2)k_B T, immediately implies the ideal gas law for N particles:
PV = N k_B T = nRT.
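A quick numerical cross-check, not from the article, ties the pieces together: the molar and molecular forms agree because R = N_A k_B, and the density form P = ρ(R/M)T reproduces the familiar density of air (the molar mass of dry air used below is an assumed approximate value).

```python
# Illustrative cross-check of the forms of the ideal gas law discussed above.
N_A = 6.02214076e23      # 1/mol, Avogadro constant
k_B = 1.380649e-23       # J/K, Boltzmann constant
R = N_A * k_B            # J/(K*mol)
print(f"R = {R:.3f} J/(K*mol)")     # ~8.314, matching the value quoted earlier

P, T = 101_325.0, 288.15            # Pa, K (sea-level-ish conditions, illustrative)
M_air = 0.02897                     # kg/mol, approximate molar mass of dry air (assumed)
rho = P * M_air / (R * T)           # from P = rho * (R/M) * T
print(f"air density ~ {rho:.3f} kg/m^3")   # roughly 1.23 kg/m^3
```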
https://en.m.wikipedia.org/wiki/Ideal_gas_law
18
41
Graphing mathematical functions is not too difficult if you're familiar with the function you're graphing. Each type of function, whether linear, polynomial, trigonometric or some other math operation, has its own particular features and quirks. The details of major classes of functions provide starting points, hints and general guidance for graphing them.
TL;DR (Too Long; Didn't Read)
To graph a function, calculate a set of y-axis values based on carefully chosen x-axis values, and then plot the results.
Graphing Linear Functions
Linear functions are among the easiest to graph; each is simply a straight line. To plot a linear function, calculate and mark two points on the graph, and then draw a straight line that passes through both of them. The point-slope and y-intercept forms give you one point right off the bat; a y-intercept linear equation has the point (0, y), and point-slope has some arbitrary point (x, y). To find one other point, you can, for example, set y = 0 and solve for x. For example, to graph the function y = 11x + 3, 3 is the y-intercept, so one point is (0, 3). Setting y to zero gives you the following equation:
0 = 11x + 3
Subtract 3 from both sides:
0 – 3 = 11x + 3 – 3
Simplify:
-3 = 11x
Divide both sides by 11:
-3 ÷ 11 = 11x ÷ 11
Simplify:
-3 ÷ 11 = x
So, your second point is (-0.273, 0).
When using the general form, you set y = 0 and solve for x, and then set x = 0 and solve for y to get two points. To graph the function x – y = 5, for example, setting x = 0 gives you a y of -5, and setting y = 0 gives you an x of 5. The two points are (0, -5) and (5, 0).
Graphing Trig Functions
Trigonometric functions such as sine, cosine and tangent are cyclical, and a graph made with trig functions has a regularly repeating wavelike pattern. The function y = sin(x), for example, starts at y = 0 when x = 0 degrees, then increases smoothly to a value of 1 when x = 90, decreases back to 0 when x = 180, decreases to -1 when x = 270 and returns to 0 when x = 360. The pattern repeats itself indefinitely. For simple sin(x) and cos(x) functions, y never exceeds the range of -1 to 1, and the functions always repeat every 360 degrees. The tangent, cosecant and secant functions are a little more complicated, though they too follow strictly repeating patterns. More generalized trig functions, such as y = A × sin(Bx + C), offer their own complications, though with study and practice, you can identify how these new terms affect the function. For example, the constant A alters the maximum and minimum values, so it becomes A and negative A instead of 1 and -1. The constant value B increases or decreases the rate of repetition, and the constant C shifts the starting point of the wave to the left or right.
Graphing With Software
In addition to graphing manually on paper, you can create function graphs automatically with computer software. For example, many spreadsheet programs have built-in graphing capabilities. To graph a function in a spreadsheet, you create one column of x values and the other, representing the y-axis, as a calculated function of the x-value column. When you've completed both columns, select them and pick the scatter plot feature of the software. The scatter plot graphs a series of discrete points based on your two columns. You can optionally choose to either keep the graph as discrete points or to connect each point, creating a continuous line.
Before printing the graph or saving the spreadsheet, label each axis with an appropriate description, and create a main heading that describes the purpose of the graph.
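The same column-of-x-values approach also works in a programming environment. The sketch below is not from the article; it assumes Python with numpy and matplotlib installed and uses the linear and trig examples discussed above, including axis labels and a title as just advised.

```python
# A minimal sketch mirroring the spreadsheet approach: build an x column,
# compute y, then plot - here for y = 11x + 3 and a scaled, shifted sine curve.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2, 2, 100)          # the "x column"
y_line = 11 * x + 3                  # y = 11x + 3
y_trig = 2 * np.sin(3 * x + 1)       # y = A*sin(Bx + C) with A=2, B=3, C=1

fig, ax = plt.subplots()
ax.plot(x, y_line, label="y = 11x + 3")
ax.plot(x, y_trig, label="y = 2 sin(3x + 1)")
ax.set_xlabel("x")                   # label each axis, as advised above
ax.set_ylabel("y")
ax.set_title("Graphing functions from computed (x, y) columns")
ax.legend()
plt.show()
```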
https://sciencing.com/how-to-graph-a-function-13712205.html
18
38
What is a balanced equation? Help balancing math equations: When you balance an equation, you make sure that both sides of your equation are equal to the same value. A balanced equation is an equation where both sides are equal to the same amount. Much of what you learn when you are balancing equations, you will need to draw on when you are doing algebra and solving equations. Balancing equations is a great way to start your algebra journey without having to worry about algebraic expressions or letters. In an unbalanced equation, either the left hand side of the equation has a greater value than the right hand side, or the right hand side has a greater value than the left side (like the examples below). How to balance a mathematical statement: Step 1 - Find the value of the side of the equation without any missing numbers. Step 2 - Make sure the value on the other side of the equation is equal to this value.
Balancing Math Equations Worksheets (3rd Grade): The first worksheet is the most basic, with only a single value on the right hand side. The 2nd worksheet involves addition and subtraction only. The 3rd and 4th worksheets are more challenging and include multiplication and larger numbers.
Equation of a line worksheets: Write the equation of a line in standard form, two-point form, slope-intercept form and point-slope form. Click here for worksheets on equation of a line. Download the complete set of worksheets on equation of a line, which comprises worksheets on parallel and perpendicular lines as well. Graphing linear equation worksheets: You are just a click away from a huge collection of worksheets on graphing linear equations. Use the x values to complete the function tables and graph the line. Plot the points and graph the line. Quadratic equation worksheets: Click on the link for an extensive set of worksheets on quadratic equations. Find the sum and product of the roots. Analyze the nature of the roots. Solve the quadratic equations by factoring, completing the square, the quadratic formula or square root methods. Absolute value equation worksheets: Use these worksheets to teach your students about the absolute value of integers. This module includes exercises like evaluating the absolute value expression at a particular value, input and output tables, graphing the absolute value function and solving the various types of absolute value equations.
Balancing Chemical Equations Worksheet 1 - ANSWERS: 1. 2H₂ + O₂ → 2H₂O 2. 2Na + Cl₂ → 2NaCl 3. N₂O₄ → 2NO₂ 4. 2Mg + O₂ → 2MgO 5. 2H₂O₂ → 2H₂O + O₂ 6. 3Ca + N₂ → Ca₃N₂ 7. 2Li + F₂ → 2LiF 8. 3Mg + N₂ → Mg₃N₂ 9. 2NH₃ → … Balancing Chemical Equations Gapfill exercise: Enter your answers in the gaps. Every space will require a coefficient. Unlike when we balance equations in class, you will have to include coefficients of "one" by typing in a value of "1." When you have entered all the answers, click on the "Check" button. KEY Review, Worksheet on Balancing Redox Equations: Two methods are often mentioned for balancing redox reactions: the half reaction method and the change in oxidation number method. Balancing Equations Practice Worksheet: Balance the following equations: 1) ___ NaNO₃ + ___ PbO → ___ Pb(NO₃)₂ + ___ Na₂O. Solutions for the Balancing Equations Practice Worksheet: 1) 2 NaNO₃ + PbO → Pb(NO₃)₂ + Na₂O.
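For anyone who wants to check answers like the ones listed above automatically, here is a small illustrative Python sketch (not part of the worksheet) that verifies a chemical equation is balanced by counting atoms on each side. The parsing is deliberately simple and only handles formulas without parentheses.

```python
# A small illustrative checker that verifies a chemical equation is balanced by
# counting atoms on each side, e.g. 2H2 + O2 -> 2H2O. Formula parsing is simple:
# element symbols with optional counts, no parentheses.
import re
from collections import Counter

def parse_side(side: str) -> Counter:
    atoms = Counter()
    for term in side.split("+"):
        term = term.strip()
        m = re.match(r"(\d*)\s*([A-Za-z0-9]+)", term)
        coeff = int(m.group(1)) if m.group(1) else 1
        for elem, count in re.findall(r"([A-Z][a-z]?)(\d*)", m.group(2)):
            atoms[elem] += coeff * (int(count) if count else 1)
    return atoms

def is_balanced(equation: str) -> bool:
    left, right = equation.split("->")
    return parse_side(left) == parse_side(right)

if __name__ == "__main__":
    print(is_balanced("2H2 + O2 -> 2H2O"))    # True
    print(is_balanced("H2 + O2 -> H2O"))      # False
```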
Word Equations Worksheet: Write the word equations for each of the following chemical reactions: 1) When dissolved beryllium chloride reacts with dissolved silver nitrate in water, aqueous beryllium nitrate and silver chloride powder are made.
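Returning to the two-step balancing procedure described earlier on this page (find the value of the completed side, then make the other side match), here is a tiny Python sketch, not part of the worksheets, applied to a missing-number fact. The function name and the example are illustrative only.

```python
# A tiny sketch of the two-step balancing procedure: Step 1, total the completed
# side; Step 2, choose the missing number so both sides are equal.
def missing_addend(known_addend: int, other_side_total: int) -> int:
    return other_side_total - known_addend

if __name__ == "__main__":
    # __ + 3 = 5 + 5  ->  the completed side is 10, so the blank must be 7
    print(missing_addend(3, 5 + 5))   # 7
```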
http://zusihenelipija.killarney10mile.com/balancing-equations-worksheet-67933nah9896.html
18
23
A common beginning geometry problem is calculating the area of standard shapes such as squares and circles. An intermediate step in this learning process is combining the two shapes. For instance, if you draw a square and then draw a circle inside the square so that the circle touches all four sides of the square, you can determine the total area outside the circle within the square.
Calculate the area of the square first by multiplying its side length, s, by itself: area = s². For example, suppose the side of your square is 10 cm. Multiply 10 cm × 10 cm to get 100 square centimeters.
Calculate the circle's radius, which is half the diameter: radius = 1/2 × diameter. Because the circle fits entirely inside the square, the diameter is 10 cm. The radius is half the diameter, which is 5 cm.
Calculate the area of the circle using the equation area = πr². The value of pi (π) is approximately 3.14, so the calculation becomes 3.14 × (5 cm)². That gives 3.14 × 25 square centimeters, equaling 78.5 square centimeters.
Subtract the area of the circle (78.5 cm²) from the area of the square (100 cm²) to determine the area outside the circle but still within the square. This becomes 100 cm² − 78.5 cm², equaling 21.5 cm².
A common mistake in this problem is to use the circle's diameter in the area equation and not the radius. Be careful to make sure you have all the correct information before you start working.
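As a quick check of the worked example above, here is a short Python sketch (not from the article; the function name is illustrative) that computes the square's area minus the area of the inscribed circle for a side length of 10 cm.

```python
# Area of a square minus the area of its inscribed circle.
import math

def area_outside_inscribed_circle(side: float) -> float:
    square_area = side ** 2
    circle_area = math.pi * (side / 2) ** 2   # radius is half the side (the diameter)
    return square_area - circle_area

if __name__ == "__main__":
    print(f"{area_outside_inscribed_circle(10):.1f} cm^2")  # ~21.5 cm^2
```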
https://sciencing.com/area-part-square-circle-middle-8166634.html
18
15
News Release 18-013
Astronomers detect ancient signal from first stars in universe
Within 180 million years of the Big Bang, stars were born
February 28, 2018
For additional information on this breakthrough, NSF has produced the video "The birth of the first stars."
For the first time, astronomers have detected a signal from stars emerging in the early universe. Using a radio antenna not much larger than a refrigerator, the researchers discovered that ancient suns were active within 180 million years of the Big Bang.
The astronomers, from Arizona State University (ASU), the Massachusetts Institute of Technology (MIT) and the University of Colorado at Boulder, made the discovery with their Experiment to Detect the Global EoR (Epoch of Reionization) Signature (EDGES) project, funded by the National Science Foundation (NSF). They reported their findings in the March 1 issue of Nature.
"Finding this miniscule signal has opened a new window on the early universe," says astronomer Judd Bowman of Arizona State University, the lead investigator on the project. "Telescopes cannot see far enough to directly image such ancient stars, but we've seen when they turned on in radio waves arriving from space."
Models of the early universe predict such stars were massive, blue and short-lived. Because telescopes cannot see them, though, astronomers have been hunting for indirect evidence, such as a tell-tale change in the background electromagnetic radiation that permeates the universe, called the cosmic microwave background (CMB). A small dip in intensity, for example, should be apparent in CMB radio signals, but Earth's crowded radio-wave environment has hampered astronomers' search. Such dips occur at radio frequencies between 65 megahertz (MHz) and 95 MHz, overlapping with some of the most widely used frequencies on the FM radio dial, as well as booming radio waves emanating naturally from the Milky Way galaxy.
"There is a great technical challenge to making this detection," says Peter Kurczynski, the NSF program director who oversaw funding for EDGES. "Sources of noise can be 10,000 times brighter than the signal -- it's like being in the middle of a hurricane and trying to hear the flap of a hummingbird's wing."
Despite the obstacles, astronomers were confident that finding such a signal would be possible, thanks to previous research indicating that the first stars released tremendous amounts of ultraviolet (UV) light. That light interacted with free-floating hydrogen atoms, which began absorbing surrounding CMB photons.
"You start seeing the hydrogen gas in silhouette at particular radio frequencies," says co-author Alan Rogers of MIT's Haystack Observatory. "This is the first real signal that stars are starting to form, and starting to affect the medium around them."
In their paper, the EDGES team reported seeing a clear signal in the radio wave data, detecting a fall in CMB intensity when that process began. As stellar fusion continued, its resulting UV light began to rip apart the free-floating hydrogen atoms, stripping away their electrons in a process called ionization. When the early stars died, black holes, supernovae and other objects they left behind continued the ionizing process and heated the remaining free hydrogen with X-rays, eventually extinguishing the signal. EDGES data reveal that milestone occurred roughly 250 million years after the Big Bang.
EDGES began more than a decade ago when Bowman and Rogers proposed building a unique antenna with a specialized receiver, a system that could detect clean signals across the target radio band. Through a series of NSF grants beginning in 2009, the researchers built the instrument, honed calibration methods, and developed statistical techniques for refining signal data. Bowman, co-author Raul Monsalve from the University of Colorado at Boulder and their collaborators added elements such as an automated system to measure antenna reflection to assess the system's performance, a control hut housing the electronics, and a component known as a ground plane.
With those tools in place, the researchers set up the EDGES antennas in the desert to eliminate as much radio noise as possible, selecting an isolated site at the Murchison Radio-astronomy Observatory in Australia, run by that nation's Commonwealth Scientific and Industrial Research Organization (CSIRO). Once the signal emerged in their data, the astronomers initiated a years-long process to check and recheck their findings against any known causes of instrumental errors and to rule out potential sources of radio interference. In all, EDGES applied dozens of verification tests to ensure that the signal was truly from space.
While confirming the signal, the EDGES data also raise new questions, as the signal was twice as intense as models had predicted. The researchers suggest this means either that the fog of hydrogen gas so soon after the Big Bang was colder than expected or that background radiation levels were significantly hotter than the photons of the CMB. The study authors suggest one possibility is that dark matter interactions may explain the effect.
"If that idea is confirmed," Bowman says, "then we've learned something new and fundamental about the mysterious dark matter that makes up 85 percent of the matter in the universe. This would provide the first glimpse of physics beyond the standard model."
Larger radio arrays are continuing the search and are expected to build beyond the initial EDGES findings to gain far greater insight into the earliest stars and galaxies.
"This discovery opens a new chapter in our understanding of how the world we see came into being," Kurczynski says. "With an antenna not much different than an FM radio's, and a great deal of care and ingenuity, the researchers saw something not yet detected by interferometers requiring hundreds of antennas, complex data processing and hundreds of observing hours. Indirectly, they have seen farther than even the Hubble Space Telescope to find evidence of the earliest stars."
More information on NSF's six-decade legacy in radio astronomy is available in the NSF-Discover gallery "Revealing the Invisible Universe."
[Video caption: NSF's Peter Kurczynski on how scientists detected a signal from the birth of the first stars.]
[Image caption: A timeline of the universe, updated to show when the first stars emerged.]
Credit and Larger Version Joshua Chamot, NSF, (703) 292-4489, email: email@example.com Karin Valentine, School of Earth & Space Exploration, Arizona State University, (480) 965-9345, email: KARIN.VALENTINE@asu.edu Nancy Wolfe Kotary, MIT Haystack Observatory, (617) 715-3490, email: firstname.lastname@example.org Trent Knoss, University of Colorado Boulder, (303) 735-0528, email: email@example.com Annabelle Young, CSIRO: Commonwealth Scientific and Industrial Research Organization, Australia, email: Annabelle.firstname.lastname@example.org The National Science Foundation (NSF) is an independent federal agency that supports fundamental research and education across all fields of science and engineering. In fiscal year (FY) 2018, its budget is $7.8 billion. NSF funds reach all 50 states through grants to nearly 2,000 colleges, universities and other institutions. Each year, NSF receives more than 50,000 competitive proposals for funding and makes about 12,000 new funding awards. Useful NSF Web Sites: NSF Home Page: https://www.nsf.gov NSF News: https://www.nsf.gov/news/ For the News Media: https://www.nsf.gov/news/newsroom.jsp Science and Engineering Statistics: https://www.nsf.gov/statistics/ Awards Searches: https://www.nsf.gov/awardsearch/
https://www.nsf.gov/news/news_summ.jsp?cntn_id=244599&WT.mc_id=USNSF_51&WT.mc_ev=click
Each night, the skies are observed from thousands of telescopes across the globe. Most of them are low-power and in the hands of hobbyists and amateur sky watchers. The ones with the ability to see into the depths of space are managed by governments, space agencies such as NASA, or universities and other research groups. These giant Earth-based instruments, along with the orbiting Hubble telescope, are where most of the knowledge about outer space comes from.

Researchers and scientists spend countless hours scanning the skies to understand how the many different objects in space move and interact with one another. Many of those investigators are looking for signs of life on distant objects, while others are trying to identify things which haven't already been mapped or recorded. It seems that our quest for discovery and the great unknown of space are a perfect match.

Some of these discoveries are considered game-changers, such as the discovery of extrasolar planets or dark matter. Others are routine and commonplace, unless they present a unique feature that captures the attention of the curious. One recent discovery falls into the latter category. It is being called the "Death Comet" or, as it's known scientifically, Asteroid 2015 TB145.

The comet was originally discovered on Oct. 10, 2015, by researchers at the University of Hawaii's Pan-STARRS-1 (Panoramic Survey Telescope and Rapid Response System). At the time, the discovery received a lot of attention, but not for the same reasons as today. It was the closest any object of that size had come to Earth in recorded history, passing by at a distance of about 310,000 miles. The moon is 239,000 miles away from Earth, so there was considerable concern about what might happen. Fortunately, nothing happened, and the comet quietly passed Earth on Halloween night that same year.

Name Change But No Game Change

Because it came so close, every piece of scientific equipment that was available was pointed directly at the comet. Based on observation and comparison with other known comets, this one was deemed to be dead, meaning it had shed all of its volatile elements. Usually comets leave a tail in their wake, made up of ammonia, water, methane, hydrogen, and other materials that are stripped off during flight. This stripping process creates the tail. There was consensus that TB145 was nothing more than an orbiting space rock; that is, until the Arecibo Observatory shared the chilling radar images of the strange tumbling rock. The shape of the comet resembled a human skull, complete with deep eye sockets and a protruding chin. Up until that moment, NASA scientists had been calling the asteroid "the Great Pumpkin," due to its passing on Halloween night. Once the pictures were seen, the old name was quickly replaced.

Now that the Death Comet is on the radar, so to speak, it's being regularly tracked. On November 11, 2018, the asteroid will pass Earth for the second time this decade, at a distance of approximately 25 million miles. The scientific community can breathe a sigh of relief, as the spacing between it and Earth is much wider. Strangely enough, there were theories floating around about what kind of damage something of this size would do if it hit Earth. Much of the rock would burn away while traveling through the atmosphere, but enough would remain to cause a serious catastrophic event. If it struck land, the damage could range from a lasting scar that would not soon be forgotten to total destruction of the planet.
Previously in our history, we've seen what even small space objects can do to the planet. In 2013, a space rock exploded over a city in Russia, causing damage to 7,000 buildings and injuring about 1,500 people. Another impact in 1908, near Tunguska, Russia, flattened over 80 million trees over an area the size of New York City.

Could A Comet Destroy Earth?

NASA and other space agencies around the world scan the skies for objects which could impact Earth. What most people don't realize is that Earth is in the direct path of approximately 500,000 asteroids and comets, and that NASA is unaware of the exact location of 498,000 of them. Most of them are considered too small to cause worry, but just knowing that so many are out there does make some people squeamish.

While this year's comet won't be appearing in time for trick or treating, it will make another Oct. 31 appearance in the future. The comet will appear again on Halloween day in the year 2088, zooming by Earth at about 20 lunar distances.

Comet or Asteroid – What is the Difference?

The main difference between asteroids and comets is what they are made of. Asteroids contain metals and rocky material, while comets are made up of ice, dust, chemicals, and rocky material. Although both asteroids and comets were formed billions of years ago, where they formed in the solar system is what matters. Asteroids formed much closer to the Sun, where it was too warm for ices to remain solid. Comets, on the other hand, formed farther from the Sun, where ice would not melt. When comets approach the warmth of the Sun, they lose material as their ices vaporize, forming the tail. Once all of the ice is gone, they are considered dead.
http://www.thegypsythread.org/death-comet-asteroid/
5E lesson plan model

Many of my science lessons are based upon and taught using the 5E lesson plan model: Engage, Explore, Explain, Elaborate, and Evaluate. This lesson plan model allows me to incorporate a variety of learning opportunities and strategies for students. With multiple learning experiences, students can gain new ideas, demonstrate thinking, draw conclusions, develop critical thinking skills, and interact with peers through discussions and hands-on activities. With each stage in this lesson model, I select strategies that will serve students best for the concepts and content being delivered to them. These strategies were selected for this lesson to facilitate peer discussions, participation in a group activity, reflective learning practices, and accountability for learning.

The Earth's Changing Surface unit focuses on some processes that change Earth's surface slowly, over a long period of time, or abruptly. In order for students to develop an understanding that the surface is constantly changing, they take part in a variety of guided inquiries geared towards scaffolding this understanding. In the first part of the unit, students explore the structure of the Earth and processes that cause changes to it. These lessons include earthquakes, volcanoes, landslides, physical and chemical weathering, erosion and deposition. They need to develop an understanding of these processes and how they change the Earth's surface for the second part of the unit, which focuses primarily on minerals, rocks, and the rock cycle. Students apply their understanding of these processes as they investigate the formation of rocks and the cycle of changes they go through in a lifetime.

The Structure of Earth's Layers lesson takes place over the course of two days. In the first part of this lesson, students take part in a quick write activity by answering the question: Is it possible to dig our way through the ground to the other side of the Earth? Then students begin exploring the four main layers of the Earth by participating in a guided gallery walk to read and learn about each layer. They use a matrix graphic organizer to write information after reading about the four main layers of the Earth. By the end of the lesson, they use the information as their evidence to construct a claims and evidence explanation as to why we cannot dig our way to the other side of the Earth.

Next Generation Science Standards

This lesson will address the following NGSS standard: 5-ESS2-1: Develop a model using an example to describe ways the geosphere, biosphere, hydrosphere, and/or atmosphere interact. I address this standard in fifth grade because my students have not had science prior to fifth grade. They have a limited science background and need a lot of scaffolding throughout these standards. By engaging students with guided inquiries and Common Core related activities to support this NGSS standard, I am providing them experiences to prepare them for later lessons involving minerals, rocks, soil, and plants.

Why do I teach with this lesson?

I teach the Structure of Earth's Layers lesson with a guided gallery walk to help students scaffold information on the layers of the Earth so they can construct a scientific explanation. Many of my students have a very limited background in science, as the elementary schools within my district do not formally teach science prior to my students entering the 5th grade (the middle school).
I find it important to provide guided inquiries that build their vocabulary and understanding of concepts in order to facilitate scientific thinking for future inquiry lessons related to Earth's Changing Surface. In this lesson, students read about the four layers of the Earth, practice organizing information on a table, and construct a scientific explanation using the information as evidence to support their response. By exposing and engaging students in obtaining and communicating information, I am providing them with a foundation that will support their experiences in later lessons involving processes that change the Earth slowly and rapidly.

Students are engaged in the following science and engineering practices:

4. Analyzing and Interpreting Data: Students analyze the information they collected and recorded in a matrix graphic organizer. Then they use the information to evaluate the question, "Why can't you dig your way through the ground to the other side of the Earth?"

8. Obtaining, Evaluating, and Communicating Information: Students read and comprehend information on the Earth's layers. They organize the information in a matrix chart, which is used to construct an explanation about not being able to dig your way to the other side of the world.

The Structure of Earth's Layers lesson correlates with other interdisciplinary areas. These Crosscutting Concepts include:

6. Structure and Function: Students learn the Earth's structure is composed of different materials, some that can be observed and others that cannot. These materials contribute to the function of each layer and help explain why it is impossible to dig your way to the other side of the world.

Disciplinary Core Ideas within this lesson include: ESS2.A Earth Materials and Systems

Importance of Modeling to Develop Student Responsibility, Accountability, and Independence

Depending upon the time of year this lesson is taught, teachers should consider modeling how groups should work together and establishing group norms for activities, class discussions, and partner talks. In addition, it is important to model think-aloud strategies. This will set up students to be more expressive and develop thinking skills during the activity. The first half of the year, I model what group work and/or talks "look like and sound like." I intervene the moment students are off task with reminders and redirecting. By the second and last half of the year, I am able to ask students, "Who can give us three reminders for group activities to be successful?" or "Who can tell us two reminders for partner talks?" Students take responsibility for becoming successful learners. Again, before teaching this lesson, consider the time of year; it may be necessary to do a lot of front-loading to get students to eventually become more independent and transition through the lessons in a timely manner.

I begin today's lesson by reviewing the key words from our vocabulary preview lesson. Then I bring students' attention to the question displayed on the board: "Do you think it is possible to dig your way through the ground to the other side of the Earth? Explain why or why not." I tell the students to take out their quick write notebook. While they are writing, I am walking around the room monitoring students as they write. Then, I direct students to their elbow partner for a turn and talk using turn and talk norms. During this time I am walking around listening to conversations about the responses. The class reconvenes as a whole for a discussion.
I use the quick pick bucket and call upon five or six students to share out loud. To keep others as active listeners, I remind students to give a thumbs up if they agree and/or have similarities to the students sharing.

I explain to the students that it is impossible to actually dig your way through the ground to reach the other side of the Earth because after about five miles down, you would experience such intense heat that you would burn up. Nobody has ever been able to dig this far. So, knowing we cannot actually dig our way to the other side of the world, we'll have to get there another way, such as by plane or boat.

I continue by posing the question: "So how do we know what's down there?" I explain, "Scientists who study the Earth, called geologists, have been able to learn about the different layers by using other methods such as studying rocks and minerals, volcanoes, and earthquakes. They believe that as the Earth cooled, heavier materials sank towards the center of the Earth and lighter materials surfaced to the top. They have determined there are four main layers, which are known as the crust, the mantle, the outer core, and the inner core."

At this point I direct students to the front board, where the image of Earth's structure is displayed through the projector. I ask the class to observe and share what they see. I want them to recognize that it is a model of the Earth's layers but to realize it is not the exact size. I continue with questions such as, "What do you notice about this image? Is it the exact size of the layers? Is the model flat or three-dimensional?" By asking these questions I am looking for their awareness of actual size vs. models.

I tell students they will be creating a model of the Earth's layers, but first they need to gather information about each layer's composition, depth/thickness, and temperature to develop a clearer understanding as to why people cannot make their way to the center of the Earth or even to the other side of the world. I tell them they are taking part in a guided gallery walk and are using a matrix data table to record details about each layer. Each gallery station has a posted information chart that students read through to locate details according to the data chart headings. I use a guided gallery walk to engage students with a purpose and focus for learning. They need to be active participants in order to have enough information to build their models.

While students participate in the gallery walk, I am moving throughout, stopping randomly to check in. My intent is to listen to students explain what information they have read about already. I come back to the quick write question from the start of class, "Do you think it is possible to dig your way through the ground to the other side of the Earth?" I am looking for students to accurately record details relevant to each layer of the Earth in their data table in order to construct a model of Earth's layers later in the lesson.

At the end of three and a half minutes at the first station, I ask for a student to model, while reminding us orally, how we move from one station to the next. The gallery walk continues until all 4 galleries have been completed. Once students go back to their seats, we discuss what we have discovered.

*Reflection Note/Suggestion: I originally had students gather and record information in a four square graphic organizer, then organize the information into the data chart. I made this change in the classes that followed.
As class comes to a close for today, I ask students to keep the data table they created during the guided gallery walk on their desks. Then, I hand out the exit ticket for students. I explain they are to reflect on the information they read, analyzed, and organized into a data table by answering the same question they did at the start of class: "Do you think it is possible to dig your way through the ground to the other side of the Earth? Explain why or why not." I ask them for a thumbs up if they understand the assignment or a hand up if they have any questions. My students are familiar with writing claim statements from previous lessons. I use this exit ticket as a formative assessment to identify areas where students are struggling with understanding and/or misconceptions. I collect their papers at the end of class.
https://betterlesson.com/lesson/633931/part-1-the-structure-of-the-earth
In C programming, a file is a place on your physical disk where information is stored. Files are needed because when a program terminates, all of its in-memory data is lost; storing data in a file preserves it even after the program ends.

Opening and closing files

Before you can read or write a file, you must open it with fopen, which returns a pointer of type FILE. You don't need to know the internals of a FILE; just think of it as an abstract data structure whose details are hidden from you. If the file does not open successfully, the pointer is assigned a NULL value, so your code should check for NULL and display an error message to the user. When a file is opened for writing or appending and does not exist, it will be created. There are several other opening modes (read, write, append, and their binary variants), which you can look up in a good C reference on stdio. When done with a file, it must be closed using fclose, passing the pointer to the file you want to close, for example fclose(fp);.

Standard input, output and redirection

Standard input is where data comes from when you use scanf, and standard output is where printf writes. When we need to take input from a file instead of having the user type data at the keyboard, we can use input redirection on the command line; the program still just calls scanf to read and printf to write. Output redirection works the same way, and the two types of redirection can be used at the same time. Output normally passes through a buffer that holds the text temporarily before it reaches the file or screen; there are other kinds of buffering besides the one described here.

Reading and writing text files

Once a file has been opened successfully, you can read from it using fscanf and write to it using fprintf. Both are similar to their friends scanf and printf, except that you must pass the FILE pointer as the first argument. The character-level functions fgetc and fputc read and write one character at a time, which is useful if you want to copy a file character by character. At the very end of the file, fgetc cannot return a character value; instead it returns EOF, a constant that indicates you have reached the end of the file.

fscanf, like scanf, returns the number of values it was able to read in. Suppose the input file consists of lines with a username and an integer test score. A common pitfall is looping until EOF: if the file is not in the right format, fscanf can fail to convert anything without ever reaching the end of the file, and errors like that will at least mess up how the rest of the file is read. Note also that EOF means "end of file", not end of line: a newline (which is what happens when you press Enter) is the end of a line, not the end of a file, so it does not terminate a read-until-EOF loop. Such a loop is not wrong, it just reads to the end of the input rather than stopping at the end of a line. It is therefore safer to test the return value of fscanf rather than testing against EOF.

Reading and writing binary files

To write into a binary file, use fwrite (and fread to read it back). These functions take four arguments: a pointer to the data (for example the address of a structure variable), the size of one element (such as the size of the structure), the number of elements, and the FILE pointer. To reposition within a file, use fseek, whose second parameter is the offset of the record to be found and whose third parameter (the "whence" value: SEEK_SET, SEEK_CUR or SEEK_END) specifies where the offset is measured from. After a write, the file position advances by the number of bytes written. Opening a file in append mode is the usual way to keep adding data at the end of a file, even inside a loop. To determine the size of a file, seek to the end and call ftell (or use a platform-specific function), because the C standard provides no direct file-size query; the long way around is to read all the bytes and count them, which wastes time and memory.
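To tie these pieces together, here is a small, self-contained sketch (not from the original tutorial) that opens a text file of username/score lines, checks the result of fopen, reads with fscanf by testing its return value rather than EOF, appends text output with fprintf, and then writes a struct to a binary file and uses fseek/ftell to find its size. The file names scores.txt, log.txt and scores.bin are made up for illustration.

```c
#include <stdio.h>

/* A record of the kind mentioned above: a username and a test score. */
struct score_rec {
    char name[32];
    int  score;
};

int main(void)
{
    /* --- Text input: read "username score" lines until fscanf stops matching --- */
    FILE *in = fopen("scores.txt", "r");      /* hypothetical input file */
    if (in == NULL) {
        perror("scores.txt");                 /* report why the open failed */
        return 1;
    }

    FILE *out = fopen("log.txt", "a");        /* "a" appends; the file is created if missing */
    if (out == NULL) {
        perror("log.txt");
        fclose(in);
        return 1;
    }

    char name[32];
    int score;
    /* Check fscanf's return value (items converted) instead of testing EOF:
       a malformed line stops the loop instead of spinning forever. */
    while (fscanf(in, "%31s %d", name, &score) == 2) {
        fprintf(out, "%s scored %d\n", name, score);   /* text output, FILE* first */
    }

    fclose(in);
    fclose(out);

    /* --- Binary output: write a struct, then use fseek/ftell for the file size --- */
    FILE *bin = fopen("scores.bin", "wb+");
    if (bin == NULL) {
        perror("scores.bin");
        return 1;
    }

    struct score_rec rec = { "ada", 97 };
    fwrite(&rec, sizeof rec, 1, bin);         /* pointer, element size, count, stream */

    fseek(bin, 0L, SEEK_END);                 /* offset 0 from the end ("whence" = SEEK_END) */
    long size = ftell(bin);                   /* position == number of bytes written so far */
    printf("scores.bin is %ld bytes\n", size);

    fclose(bin);
    return 0;
}
```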
http://qelarurymysifiw.mint-body.com/write-at-the-end-of-file-c-programming-2042420424.html
By Catherine Close, PhD, Psychometrician

In this post, we take a closer look at validity. In the past we've noted that test scores can be reliable (consistent) without being valid, which is why validity ultimately takes center stage. We will still define validity as the extent to which a test measures what it's intended to measure for the proposed interpretations and uses of test scores. Going beyond the definition, we begin to talk about evidence—a whole lot of evidence—needed to show that scores are valid for the planned uses.

What kind of evidence? Well, it depends. But before you run for the hills, let me tell you that the way we plan to use test scores is the important thing. So, our primary goal is to provide validity evidence in support of the planned score uses.

There are several types of validity evidence. Although they are presented separately, they all link back to the construct. A construct is the attribute that we intend to measure, such as reading achievement. If the items in a reading achievement test are properly assembled, students' responses to these items should reflect their reading achievement level. We look for evidence of just that, in various ways:

- Evidence related to construct. Evidence that shows the degree to which a test measures the construct it was intended to measure.
- Evidence related to content. Evidence that shows the extent to which items in a test are adequately matched to the area of interest, say reading.
- Evidence related to a criterion. Evidence that shows the extent to which our test scores are related to a criterion measure. A criterion measure is another measure or test that we desire to compare with our test. There are many types of criterion measures.

Again, these types of evidence all relate to the construct—the attribute the test is intended to measure—as we shall see in the example below.

Before we continue, recall that in our last blog on reliability we defined the correlation coefficient statistic. This is another key term to understand when evaluating test validity, because the correlation coefficient is also used to show validity evidence in some instances. When used this way, the correlation coefficient is referred to as a validity coefficient.

Suppose we have a test designed to measure reading achievement. The construct here is reading achievement. Can we use the scores from this test to show students' reading achievement? First, we might want to look at whether the test is truly measuring reading achievement—our construct. So, we look for construct-related evidence of validity. Evidence commonly takes two forms:

- Evidence of a strong relationship between scores from our test and other similar tests that measure reading achievement. If scores from our test and another reading achievement test rank order students in a similar manner, the scores will have a high correlation that we refer to as convergent evidence of validity.
- Evidence of a weak relationship between our test and other tests that don't measure reading achievement. We may find that scores from our test and another test of, say, science knowledge have a low correlation. This low correlation—believe it or not!—is a good thing, and we call it divergent evidence of validity.

Both convergent and divergent evidence are types of construct-related evidence of validity.

Second, a reading achievement test should contain items that specifically measure reading achievement only, as opposed to writing or even math. As a result, we look for content-related evidence of validity.
This evidence is contained in what we call a table of specifications or a test blueprint. The test blueprint shows all of the items in a test and the specific knowledge and skill areas that the items assess. Together, all of the items in a test should measure the construct we want to measure. I’ll tell you that although the test blueprint is enough to demonstrate validity evidence related to content, it is only a summary of a much lengthier item development process used to show this type of validity evidence. Third, being able to compare scores from our test with scores from another similar test that we hold in high esteem is often desirable. This reputable test is an example of a criterion measure. If students take both the reading test and this criterion measure at approximately the same time, we look for a high correlation between the two sets of scores. We refer to this correlation coefficient as concurrent evidence of validity. What if I told you that you can also predict—without a crystal ball—how your students will likely perform on an end-of-year reading achievement test based on their current scores on our reading test? You may not believe me, but you sure can! You simply take scores on the reading test taken early in the year and compare them with the end-of-year reading test scores for the same students. A high correlation between the two sets of scores tells you that students who score highly on the reading test are also likely to score highly on the end-of-year reading test. This correlation coefficient shows predictive evidence of validity. Both concurrent and predictive evidence are types of criterion-related evidence of validity. Finally, there’s a fourth type of validity evidence related to consequences. Validity evidence for consequences of testing refers to both the intended and the unintended consequences of score use. For example, our reading test is designed to measure reading achievement. This is the intended use if we only use it to show how students are performing in reading. However, this same test may also be used for teacher evaluation. This is an unintended score use in this particular instance, because whether the test accurately measures reading achievement—the purpose for which we validated the scores—has no direct relationship with teacher evaluation. If we desire to use the scores for teacher evaluation, we must seek new validity evidence for that specific use. Still, there are other unintended consequences, usually negative, that don’t call for supporting validity evidence. An example might be an instance where the educator strays from the prescribed curriculum to focus on areas that might give his or her students a chance to score highly on the said reading test and hence deny the students an opportunity to learn important materials. The burden of proof of validity evidence lies primarily with the test publisher, but a complete list of all unintended uses that may arise from test scores is beyond the realm of possibility. Who then is responsible for validity evidence of unintended score uses not documented by the test publisher? You guessed right—there’s still no agreement on that one. Test score validity is a deep and complex topic. The above summary is by no means complete, but it gives you a snapshot of the most common types of validity evidence. Again, the specific interpretations we wish to make about test score uses will guide our validation process. 
Hence, the specific types of validity evidence we look for may be unique to our specific use for the test scores in question.

With validity evidence in hand, how then do you determine whether the evidence is good enough? Although validity coefficients generally tend to be smaller than reliability coefficients, validity—much like reliability—is a matter of degree. Just how good is good enough is largely tied to the stakes in decision making. If the stakes are high, stronger evidence might be preferred than if the stakes were lower. In general, some arbitrary guidelines are cited in the literature to help test users interpret validity coefficients: coefficients equal to .70 or greater are considered strong; coefficients ranging from .50 to .70 are considered moderate; and coefficients less than .50 are considered weak. Usually, there is additional evidence that these coefficients are not simply due to chance.

At Renaissance, we dedicate a whole chapter in the Star technical manuals to documenting validity as a body of evidence. Part of that evidence shows the validity coefficients, which for the Renaissance Star Assessments® range from moderate to strong.

To summarize, when judging the validity of test scores, one should consider the available body of evidence, not just the individual coefficients. For the best outcome, the validation of a test for specific uses is best achieved through collaboration between educators and the test designers. This joint effort ensures that the educator is aware of the intended uses for which the test is designed and seeks new evidence if there's a need to use scores for purposes not yet validated.

Well, this concludes our series on reliability and validity. I hope this overview of the basics will help you make sense of test scores and better evaluate the assessments available. I hope you'll also check out my next post on measurement error!
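As a small postscript to this overview: because validity coefficients are ordinary correlation coefficients, it may help to see one computed. The sketch below calculates a Pearson correlation between two sets of scores, such as scores on a reading test and on a criterion measure taken by the same students. It is only an illustration, not part of any Renaissance tooling, and the score values are invented.

```c
#include <math.h>
#include <stdio.h>

/* Pearson correlation between two equally sized score arrays. */
double pearson(const double *x, const double *y, int n)
{
    double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        sx  += x[i];         sy  += y[i];
        sxx += x[i] * x[i];  syy += y[i] * y[i];
        sxy += x[i] * y[i];
    }
    double cov = sxy - sx * sy / n;   /* n times the covariance */
    double vx  = sxx - sx * sx / n;   /* n times the variance of x */
    double vy  = syy - sy * sy / n;   /* n times the variance of y */
    return cov / sqrt(vx * vy);
}

int main(void)
{
    /* Hypothetical scores: our reading test vs. an end-of-year criterion test. */
    double reading[]   = { 61, 74, 55, 80, 68, 90, 47, 72 };
    double criterion[] = { 58, 79, 60, 84, 65, 88, 50, 70 };
    int n = 8;

    double r = pearson(reading, criterion, n);
    /* A high r here would count as concurrent (criterion-related) evidence. */
    printf("validity coefficient r = %.2f\n", r);
    return 0;
}
```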
https://www.renaissance.com/2015/04/02/what-educators-need-to-know-about-validity/
Proportional representation (PR) characterizes electoral systems by which divisions into an electorate are reflected proportionately into the elected body. If n% of the electorate support a particular political party, then roughly n% of seats will be won by that party. The essence of such systems is that all votes contribute to the result: not just a plurality, or a bare majority, of them. The most prevalent forms of proportional representation all require the use of multiple-member voting districts (also called super-districts), as it is not possible to fill a single seat in a proportional manner. In fact, the implementations of PR that achieve the highest levels of proportionality tend to include districts with large numbers of seats. With party list PR, political parties define candidate lists and voters vote for a list. The relative vote for each list determines how many candidates from each list are actually elected. Lists can be "closed" or "open"; open lists allow voters to indicate individual candidate preferences and vote for independent candidates. Voting districts can be small (as few as three seats in some districts in Chile or Ireland) or as large as a province or an entire nation. The single transferable vote uses small multiple-member districts, with voters ranking individual candidates in order of preference. During the count, as candidates are elected or eliminated, surplus or discarded votes that would otherwise be wasted are transferred to other candidates according to the preferences. STV enables voters to vote across party lines and to elect independent candidates. Mixed member proportional representation (MMP), also called the additional member system (AMS), is a two-tier mixed electoral system combining a non-proportional plurality/majoritarian election and a compensatory regional or national party list PR election. Voters typically have two votes, one for their single-member district and one for the party list, the party list vote determining the balance of the parties in the elected body. According to the ACE Electoral Knowledge Network, some form of proportional representation is used for national lower house elections in 94 countries. Party list PR, being used in 85 countries, is the most widely used. MMP is used in seven lower houses. STV, despite long being advocated by political scientists,:71 is used in only two: Ireland, since independence in 1922, and Malta, since 1921. - 1 Advantages and disadvantages - 2 Attributes of PR systems - 3 Measuring proportionality - 4 PR electoral systems - 4.1 Party list PR - 4.2 Single transferable vote - 4.3 Mixed compensatory systems - 4.4 Biproportional apportionment - 4.5 Other proportional systems - 5 History - 6 List of countries using proportional representation - 7 See also - 8 References - 9 Further reading - 10 External links Advantages and disadvantages In a representative body actually deliberating, the minority must of course be overruled; and in an equal democracy, the majority of the people, through their representatives, will outvote and prevail over the minority and their representatives. But does it follow that the minority should have no representatives at all? ... Is it necessary that the minority should not even be heard? Nothing but habit and old association can reconcile any reasonable being to the needless injustice. In a really equal democracy, every or any section would be represented, not disproportionately, but proportionately. 
A majority of the electors would always have a majority of the representatives, but a minority of the electors would always have a minority of the representatives. Man for man, they would be as fully represented as the majority. Unless they are, there is not equal government ... there is a part whose fair and equal share of influence in the representation is withheld from them, contrary to all just government, but, above all, contrary to the principle of democracy, which professes equality as its very root and foundation. PR tries to resolve the unfairness of majoritarian and plurality voting systems where the largest parties receive an "unfair" "seat bonus" and smaller parties are disadvantaged and have difficulty winning any representation at all (Duverger's law).:6–7 The established parties in UK elections can win formal control of the parliament with as little as 35% of votes (2005 UK general election). In certain Canadian elections, majority governments have been formed by parties with the support of under 40% of votes cast (2011 Canadian election, 2015 Canadian election). If turnout levels in the electorate are less than 60%, such outcomes allow a party to form a majority government by convincing as few as one quarter of the electorate to vote for it. In the 2005 UK election, for example, the Labour Party under Tony Blair won a comfortable parliamentary majority with the votes of only 21.6% of the total electorate.:3 Such misrepresentation has been criticized as "no longer a question of 'fairness' but of elementary rights of citizens".:22 However, PR systems with a high electoral threshold, or other features that reduce proportionality, are not necessarily much fairer: in the Turkish general election, 2002, using an open list system with a 10% threshold, 46% of votes were wasted.:83 Plurality/majoritarian systems can also disproportionately benefit regional parties that can win districts where they have a strong following, while other parties with national support but no strongholds, like the Greens, win few or no seats. An example is the Bloc Québécois in Canada that won 52 seats in the 1993 federal election, all in Quebec, on 13.5% of the national vote, while the Progressive Conservatives collapsed to two seats on 16% spread nationally. Similarly, in the 2015 UK General Election, the Scottish National Party gained 56 seats, all in Scotland, with a 4.7% share of the national vote while the UK Independence Party, with 12.6%, gained only a single seat. Election of minor parties The use of multiple-member districts enables a greater variety of candidates to be elected. The more representatives per district and the lower the minimum threshold of votes required for election, the more minor parties can gain representation. It has been argued that in emerging democracies, inclusion of minorities in the legislature can be essential for social stability and to consolidate the democratic process.:58 Critics, on the other hand, claim this can give extreme parties a foothold in parliament, sometimes cited as a cause for the collapse of the Weimar government. With very low thresholds, very small parties can act as "king-makers", holding larger parties to ransom during coalition discussions. The example of Israel is often quoted,:59 but these problems can be limited, as in the modern German Bundestag, by the introduction of higher threshold limits for a party to gain parliamentary representation. 
Another criticism is that the dominant parties in plurality/majoritarian systems, often looked on as "coalitions" or as "broad churches", can fragment under PR as the election of candidates from smaller groups becomes possible. Israel, again, and Brazil and Italy are examples.:59,89 However, research shows, in general, there is only a marginal increase in the number of parties in parliament. Open list systems and STV, the only prominent PR system which does not require political parties, enable independent candidates to be elected. In Ireland, on average, about six independent candidates have been elected each parliament. Supporters of PR see coalitions as an advantage, forcing compromise between parties to form a coalition at the centre of the political spectrum, and so leading to continuity and stability. Opponents counter that with many policies compromise is not possible (for example funding a new stealth bomber, or leaving the EU). Neither can many policies be easily positioned on the left-right spectrum (for example, the environment). So policies are horse-traded during coalition formation, with the consequence that voters have no way of knowing which policies will be pursued by the government they elect; voters have less influence on governments. Also, coalitions do not necessarily form at the centre, and small parties can have excessive influence, supplying a coalition with a majority only on condition that a policy or policies favoured by few voters is/are adopted. Most importantly, the ability of voters to vote a party in disfavour out of power is curtailed. All these disadvantages, the PR opponents contend, are avoided by two-party plurality systems. Coalitions are rare; the two dominant parties necessarily compete at the centre for votes, so that governments are more reliably moderate; the strong opposition necessary for proper scrutiny of government is assured; and governments remain sensitive to public sentiment because they can be, and are, regularly voted out of power. However, the US experience shows that this is not necessarily so, and that a two-party system can result in a "drift to extremes", hollowing out the centre, or, at least, in one party drifting to an extreme. The opponents of PR also contend that coalition governments created under PR are less stable, and elections are more frequent. Italy is an often-cited example with many governments composed of many different coalition partners. However, Italy has had an unusual and complicated mix of FPTP and PR since 1993, so it is not an appropriate candidate for measuring the stability of PR. Nevertheless, some studies have found that on average, compared to countries using plurality systems, governments elected with PR accord more closely with the median voter and the citizens are more content with democracy. Plurality systems usually result in single-party government because relatively few votes in the most finely balanced districts, the "swing seats", can transfer sufficient seats to the opposition to swing the election. More partisan districts remain invulnerable to swings of political mood. In the UK, for example, about half the constituencies have always elected the same party since 1945; in the 2012 US House elections 45 districts (10% of all districts) were uncontested by one of the two dominant parties. 
Voters who know their preferred candidate cannot win have little incentive to vote, and if they do their vote has no effect, it is "wasted".:10 With PR, there are no "swing seats", most votes contribute to the election of a candidate so parties need to campaign in all districts, not just those where their support is strongest or where they perceive most advantage. This fact in turn encourages parties to be more responsive to voters, producing a more "balanced" ticket by nominating more women and minority candidates. On average about 8% more women are elected. Since most votes count, there are fewer "wasted votes", so voters, aware that their vote can make a difference, are more likely to make the effort to vote, and less likely to vote tactically. Compared to countries with plurality electoral systems, voter turnout improves and the population is more involved in the political process. However some experts argue that transitioning from plurality to PR only increases voter turnout in geographical areas associated with safe seats under the plurality system; turnout may decrease in areas formerly associated with swing seats. To ensure approximately equal representation, plurality systems are dependent on the drawing of boundaries of their single-member districts, a process vulnerable to political interference (gerrymandering). To compound the problem, boundaries have to be periodically re-drawn to accommodate population changes. Even apolitically drawn boundaries can unintentionally produce the effect of gerrymandering, reflecting naturally occurring concentrations.:65 PR systems with their multiple-member districts are less prone to this – research suggests five-seat districts are immune to gerrymandering.:66 The district boundaries are less critical and so can be aligned with historical boundaries such as cities, counties, states, or provinces; population changes can be accommodated by simply adjusting the number of representatives elected. For example, Professor Mollison in his 2010 plan for STV for the UK set an upper limit of 100,000 electors per MP so that a constituency of 500,000 electors would have five seats (1:100,000) but one of 500,001 six seats (1:83,000). His district boundaries follow historical county and local authority boundaries, yet he achieves more uniform representation than does the Boundary Commission, the body responsible for balancing the UK's first-past-the-post constituency sizes. Mixed member systems are susceptible to gerrymandering for the local seats that remain a part of such systems. Under parallel voting, a semi-proportional system, there is no compensation for the effects that such gerrymandering might have. Under MMP, the use of compensatory list seats makes gerrymandering less of an issue. However, its effectiveness in this regard depends upon the features of the system, including the size of the regional districts, the relative share of list seats in the total, and opportunities for collusion that might exist. A striking example of how the compensatory mechanism can be undermined can be seen in the 2014 Hungarian parliamentary election, where the leading party, Fidesz, combined gerrymandering and decoy lists, which resulted in a two-thirds parliamentary majority from a 45% vote. This illustrates how certain implementations of MMP can produce moderately proportional outcomes, similar to parallel voting. 
Link between constituent and representative It is generally accepted that a particular advantage of plurality electoral systems such as first past the post, or majoritarian electoral systems such as the alternative vote, is the geographic link between representatives and their constituents.:36:65:21 A notable disadvantage of PR is that, as its multiple-member districts are made larger, this link is weakened.:82 In party list PR systems without delineated districts, such as the Netherlands and Israel, the geographic link between representatives and their constituents is considered extremely weak. Yet with relatively small multiple-member districts, in particular with STV, there are counter-arguments: about 90% of voters can consult a representative they voted for, someone whom they might think more sympathetic to their problem. In such cases it is sometimes argued that constituents and representatives have a closer link;:212 constituents have a choice of representative so they can consult one with particular expertise in the topic at issue.:212 With multiple-member districts, prominent candidates have more opportunity to be elected in their home constituencies, which they know and can represent authentically. There is less likely to be a strong incentive to parachute them into constituencies in which they are strangers and thus less than ideal representatives.:248–250 Mixed-member PR systems incorporate single-member districts to preserve the link between constituents and representatives.:95 However because up to half the parliamentary seats are list rather than district seats, the districts are necessarily up to twice as large as with a plurality/majoritarian system where all representatives serve single-member districts.:32 Wider benefits to society Wider benefits from PR have been identified in societies using it as compared to those using FPTP, including higher scores on the UN Human Development Index, a measure of health, education, and personal security, higher economic growth, less inequality, and better environmental protection. Attributes of PR systems Academics agree that the most important influence on proportionality is an electoral district's magnitude, the number of representatives elected from the district. Proportionality improves as the magnitude increases. Some scholars recommend voting districts of roughly four to eight seats, which are considered small relative to PR systems in general. At one extreme, the binomial electoral system used in Chile between 1989 and 2013, a nominally proportional open-list system, features two-member districts. As this system can be expected to result in the election of one candidate from each of the two dominant political blocks in most districts, it is not generally considered proportional.:79 At the other extreme, where the district encompasses the entire country (and with a low minimum threshold, highly proportionate representation of political parties can result), parties gain by broadening their appeal by nominating more minority and women candidates.:83 After the introduction of STV in Ireland in 1921 district magnitudes slowly diminished as more and more three-member constituencies were defined, benefiting the dominant Fianna Fáil, until 1979 when an independent boundary commission was established reversing the trend. In 2010, a parliamentary constitutional committee recommended a minimum magnitude of four. 
Nonetheless, despite relatively low magnitudes Ireland has generally experienced highly proportional results.:73 In the FairVote plan for STV (which FairVote calls choice voting) for the US House of Representatives, three- to five-member super-districts are proposed. In Professor Mollison's plan for STV in the UK, four- and five-member districts are used, with three and six as necessary to fit existing boundaries. The minimum threshold is the minimum vote required to win a seat. The lower the threshold, the higher the proportion of votes contributing to the election of representatives and the lower the proportion of votes wasted. All electoral systems have thresholds, either formally defined or as a mathematical consequence of the parameters of the election.:83 A formal threshold usually requires parties to win a certain percentage of the vote in order to be awarded seats from the party lists. In Germany and New Zealand (both MMP), the threshold is 5% of the national vote but the threshold is not applied to parties that win a minimum number of constituency seats (three in Germany, one in New Zealand). Turkey defines a threshold of 10%, the Netherlands 0.67%. Israel has raised its threshold from 1% (before 1992) to 1.5% (up to 2004), 2% (in 2006) and 3.25% in 2014. In STV elections, winning the quota (ballots/(seats+1)) of first preference votes assures election. However, well regarded candidates who attract good second (and third, etc.) preference support can hope to win election with only half the quota of first preference votes. Thus, in a six-seat district the effective threshold would be 7.14% of first preference votes (100/(6+1)/2). The need to attract second preferences tends to promote consensus and disadvantage extremes. Party magnitude is the number of candidates elected from one party in one district. As party magnitude increases a more balanced ticket will be more successful encouraging parties to nominate women and minority candidates for election. But under STV, nominating too many candidates can be counter-productive, splitting the first-preference votes and allowing the candidates to be eliminated before receiving transferred votes from other parties. An example of this was identified in a ward in the 2007 Scottish local elections where Labour, putting up three candidates, won only one seat while they might have won two had one of their voters' preferred candidates not stood. The same effect may have contributed to the collapse of Fianna Fáil in the 2011 Irish general election. Other aspects of PR can influence proportionality such as the size of the elected body, the choice of open or closed lists, ballot design, and vote counting methods. A number of ways of measuring proportionality have been proposed, including the Loosemore–Hanby index, the Gallagher Index, and the Sainte-Laguë Index. These metrics actually quantify the disproportionality of an election, the degree to which the number of seats won by each party differs from that of a perfectly proportional outcome. For example, the Canadian Parliament's 2016 Special Committee on Electoral Reform recommended that a system be designed to achieve "a Gallagher score of 5 or less". This indicated a much higher degree of proportionality than observed in the 2015 Canadian election under first-past-the-post voting, where the Gallagher index was 12. 
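For readers who want to see how these disproportionality measures behave, here is a minimal sketch that computes the Loosemore–Hanby and Gallagher indices from vote and seat shares; their prose definitions follow in the next paragraph. The party shares below are invented, and the Gallagher index is computed in its standard least-squares form, the square root of half the sum of the squared vote/seat differences.

```c
#include <math.h>
#include <stdio.h>

#define NPARTIES 4

int main(void)
{
    /* Hypothetical election result: vote shares vs. seat shares, in percent. */
    double votes[NPARTIES] = { 42.0, 31.0, 18.0, 9.0 };
    double seats[NPARTIES] = { 55.0, 30.0, 12.0, 3.0 };

    double sum_abs = 0.0, sum_sq = 0.0;
    for (int i = 0; i < NPARTIES; i++) {
        double d = votes[i] - seats[i];
        sum_abs += fabs(d);   /* for Loosemore-Hanby */
        sum_sq  += d * d;     /* for Gallagher (least squares) */
    }

    double loosemore_hanby = sum_abs / 2.0;       /* half the total absolute deviation */
    double gallagher       = sqrt(sum_sq / 2.0);  /* sqrt of half the squared deviation */

    printf("Loosemore-Hanby index: %.1f\n", loosemore_hanby);
    printf("Gallagher index:       %.1f\n", gallagher);
    return 0;
}
```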
The Loosemore-Hanby index is calculated by subtracting each party's vote share from its seat share, adding up the absolute values (ignoring any negative signs), and dividing by two.:4–6 The Gallagher index is similar, but involves squaring the difference between each party’s vote share and seat share, and taking the square root of the sum. With the Sainte-Laguë index, the discrepancy between a party’s vote share and seat share is measured relative to its vote share. PR electoral systems Party list PR Party list proportional representation is an electoral system in which seats are first allocated to parties based on vote share, and then assigned to party-affiliated candidates on the parties' electoral lists. This system is used in many countries, including Finland (open list), Latvia (open list), Sweden (open list), Israel (national closed list), Brazil (open list), Nepal (Closed list) adopted in 2008 in first CA election, the Netherlands (open list), Russia (closed list), South Africa (closed list), Democratic Republic of the Congo (open list), and Ukraine (open list). For elections to the European Parliament, most member states use open lists; but most large EU countries use closed lists, so that the majority of EP seats are distributed by those. Local lists were used to elect the Italian Senate during the second half of the 20th century. Closed list PR In closed list systems, each party lists its candidates according to the party's candidate selection process. This sets the order of candidates on the list and thus, in effect, their probability of being elected. The first candidate on a list, for example, will get the first seat that party wins. Each voter casts a vote for a list of candidates. Voters, therefore, do not have the option to express their preferences at the ballot as to which of a party's candidates are elected into office. A party is allocated seats in proportion to the number of votes it receives. There is an intermediate system in countries like Uruguay, where each party presents several closed lists, each representing a faction. Seats are distributed between parties according to the number of votes, and then between the factions within each party. Open list PR In an open list, voters may vote, depending on the model, for one person, or for two, or indicate their order of preference within the list. These votes sometimes rearrange the order of names on the party's list and thus which of its candidates are elected. Nevertheless, the number of candidates elected from the list is determined by the number of votes the list receives. Local list PR In a local list system, parties divide their candidates in single member-like constituencies, which are ranked inside each general party list depending by their percentages. This method allows electors to judge every single candidate as in a FPTP system. Two-tier party list systems Some party list proportional systems with open lists use a two-tier compensatory system, as in Denmark, Norway, and Sweden. In Denmark, for example, the country is divided into ten multiple-member voting districts arranged in three regions, electing 135 representatives. In addition, 40 compensatory seats are elected. Voters have one vote which can be cast for an individual candidate or for a party list on the district ballot. To determine district winners, candidates are apportioned their share of their party's district list vote plus their individual votes. 
The compensatory seats are apportioned to the regions according to the party votes aggregated nationally, and then to the districts where the compensatory representatives are determined. In the 2007 general election, the district magnitudes, including compensatory representatives, varied between 14 and 28. The basic design of the system has remained unchanged since its introduction in 1920. Single transferable vote The single transferable vote (STV), also called choice voting, is a ranked system: voters rank candidates in order of preference. Voting districts usually elect three to seven representatives. The count is cyclic, electing or eliminating candidates and transferring votes until all seats are filled. A candidate is elected whose tally reaches a quota, the minimum vote that guarantees election. The candidate's surplus votes (those in excess of the quota) are transferred to other candidates at a fraction of their value proportionate to the surplus, according to the votes' preferences. If no candidates reach the quota, the candidate with the fewest votes is eliminated, those votes being transferred to their next preference at full value, and the count continues. There are many methods for transferring votes. Some early, manual, methods transferred surplus votes according to a randomly selected sample, or transferred only a "batch" of the surplus, other more recent methods transfer all votes at a fraction of their value (the surplus divided by the candidate's tally) but may need the use of a computer. Some methods may not produce exactly the same result when the count is repeated. There are also different ways of treating transfers to already elected or eliminated candidates, and these, too, can require a computer. In effect, the method produces groups of voters of equal size that reflect the diversity of the electorate, each group having a representative the group voted for. Some 90% of voters have a representative to whom they gave their first preference. Voters can choose candidates using any criteria they wish, the proportionality is implicit. Political parties are not necessary; all other prominent PR electoral systems presume that parties reflect voters wishes, which many believe gives power to parties. STV satisfies the electoral system criterion proportionality for solid coalitions – a solid coalition for a set of candidates is the group of voters that rank all those candidates above all others – and is therefore considered a system of proportional representation. However, the small district magnitude used in STV elections has been criticized as impairing proportionality, especially when more parties compete than there are seats available,:50 and STV has, for this reason, sometimes been labelled "quasi proportional".:83 While this may be true when considering districts in isolation, results overall are proportional. In Ireland, with particularly small magnitudes, results are "highly proportional".:73 In 1997, the average magnitude was 4.0 but eight parties gained representation, four of them with less than 3% of first preference votes nationally. Six independent candidates also won election. STV has also been described as the most proportional system.:83 The system tends to handicap extreme candidates because, to gain preferences and so improve their chance of election, candidates need to canvass voters beyond their own circle of supporters, and so need to moderate their views. 
Conversely, widely respected candidates can win election with relatively few first preferences by benefitting from strong subordinate preference support. Australian Senate STV The term STV in Australia refers to the Senate electoral system, a variant of Hare-Clark characterized by the "above the line" group voting ticket, a party list option. It is used in the Australian upper house, the Senate, and some state upper houses. Due to the number of preferences that are compulsory if a vote for candidates (below-the-line) is to be valid – for the Senate a minimum of 90% of candidates must be scored, in 2013 in New South Wales that meant writing 99 preferences on the ballot – 95% and more of voters use the above-the-line option, making the system, in all but name, a party list system. Parties determine the order in which candidates are elected and also control transfers to other lists and this has led to anomalies: preference deals between parties, and "micro parties" which rely entirely on these deals. Additionally, independent candidates are unelectable unless they form, or join, a group above-the-line. Concerning the development of STV in Australia researchers have observed: "... we see real evidence of the extent to which Australian politicians, particularly at national levels, are prone to fiddle with the electoral system".:86 As a result of a parliamentary commission investigating the 2013 election, from 2016 the system has been considerably reformed (see Australian federal election, 2016), with group voting tickets (GVTs) abolished and voters no longer required to fill all boxes. Mixed compensatory systems A mixed compensatory system is an electoral system that is mixed, meaning that it combines a plurality/majority formula with a proportional formula, and that uses the proportional component to compensate for disproportionality caused by the plurality/majority component. For example, suppose that a party wins 10 seats based on plurality, but requires 15 seats in total to obtain its proportional share of an elected body. A fully proportional mixed compensatory system would award this party 5 compensatory (PR) seats, raising the party's seat count from 10 to 15. The most prominent mixed compensatory system is mixed member proportional representation (MMP), used in Germany since 1949. In MMP, the seats won by plurality are associated with single-member districts. Mixed member proportional representation Mixed member proportional representation (MMP) is a two-tier system that combines a single-district vote, usually first-past-the-post, with a compensatory regional or nationwide party list proportional vote. The system aims to combine the local district representation of FPTP and the proportionality of a national party list system. MMP has the potential to produce proportional or moderately proportional election outcomes, depending on a number of factors such as the ratio of FPTP seats to PR seats, the existence or nonexistence of extra compensatory seats to make up for overhang seats, and election thresholds. It was invented for the German Bundestag after the Second World War and has spread to Lesotho, Bolivia and New Zealand. The system is also used for the Welsh and Scottish assemblies where it is called the additional member system. Voters typically have two votes, one for their district representative and one for the party list. The list vote usually determines how many seats are allocated to each party in parliament. 
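To make the compensation arithmetic concrete, here is a minimal sketch built around the 10-seat/15-seat example above; rounding the proportional entitlement directly from the vote share is a simplification of the formal apportionment formulas real systems use.

```python
def compensatory_seats(vote_share, house_size, district_seats_won):
    # Proportional entitlement (simplified: rounded share of the house).
    entitled = round(vote_share * house_size)
    # Top up with list seats; district seats are never taken away.
    return max(0, entitled - district_seats_won)

# A party with 15% of the vote in a 100-seat chamber that won 10 district seats
print(compensatory_seats(0.15, 100, 10))   # 5 compensatory (list) seats
```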
After the district winners have been determined, sufficient candidates from each party list are elected to "top-up" each party to the overall number of parliamentary seats due to it according to the party's overall list vote. Before apportioning list seats, all list votes for parties which failed to reach the minimum threshold are discarded. If eliminated parties lose seats in this manner, then the seat counts for parties that achieved the minimum threshold improve. Also, any direct seats won by independent candidates are subtracted from the parliamentary total used to apportion list seats. The system has the potential to produce proportional results, but proportionality can be compromised if the ratio of list to district seats is too low, it may then not be possible to completely compensate district seat disproportionality. Another factor can be how overhang seats are handled, district seats that a party wins in excess of the number due to it under the list vote. To achieve proportionality, other parties require "balance seats", increasing the size of parliament by twice the number of overhang seats, but this is not always done. Until recently, Germany increased the size of parliament by the number of overhang seats but did not use the increased size for apportioning list seats. This was changed for the 2013 national election after the constitutional court rejected the previous law, not compensating for overhang seats had resulted in a negative vote weight effect. Lesotho, Scotland and Wales do not increase the size of parliament at all, and, in 2012, a New Zealand parliamentary commission also proposed abandoning compensation for overhang seats, and so fixing the size of parliament. At the same time, it would abolish the single-seat threshold – any such seats would then be overhang seats and would otherwise have increased the size of parliament further – and reduce the vote threshold from 5% to 4%. Proportionality would not suffer. Dual member proportional representation Dual member proportional representation (DMP) is a single-vote system that elects two representatives in every district. The first seat in each district is awarded to the candidate who wins a plurality of the votes, similar to first-past-the-post voting. The remaining seats are awarded in a compensatory manner to achieve proportionality across a larger region. DMP employs a formula similar to the "best near-winner" variant of MMP used in the German state of Baden-Württemberg. In Baden-Württemberg, compensatory seats are awarded to candidates who receive high levels of support at the district level compared with other candidates of the same party. DMP differs in that at most one candidate per district is permitted to obtain a compensatory seat. If multiple candidates contesting the same district are slated to receive one of their parties' compensatory seats, the candidate with the highest vote share is elected and the others are eliminated. DMP is similar to STV in that all elected representatives, including those who receive compensatory seats, serve their local districts. Invented in 2013 in the Canadian province of Alberta, DMP received attention on Prince Edward Island where it appeared on a 2016 plebiscite as a potential replacement for FPTP, but was eliminated on the third round. Biproportional apportionment applies a mathematical method (iterative proportional fitting) for the modification of an election result to achieve proportionality. 
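A bare-bones sketch of iterative proportional fitting, the method just mentioned, is shown below: it alternately rescales the rows and columns of a district-by-party matrix so that the margins approach fixed district magnitudes and citywide party entitlements. Real biproportional apportionment adds an integer rounding step; this continuous version, with made-up numbers, is only meant to show the idea.

```python
def ipf(matrix, row_targets, col_targets, iterations=100, tol=1e-9):
    """Alternately scale rows then columns so the matrix margins
    approach the given row and column targets."""
    m = [row[:] for row in matrix]  # work on a copy
    for _ in range(iterations):
        for i, target in enumerate(row_targets):          # scale rows
            s = sum(m[i])
            if s > 0:
                m[i] = [x * target / s for x in m[i]]
        for j, target in enumerate(col_targets):           # scale columns
            s = sum(row[j] for row in m)
            if s > 0:
                for row in m:
                    row[j] *= target / s
        if all(abs(sum(row) - t) < tol for row, t in zip(m, row_targets)):
            break
    return m

# Hypothetical votes: rows = districts, columns = parties.
votes = [[60, 30, 10],
         [20, 50, 30]]
district_seats = [5, 4]        # district magnitudes are fixed
party_seats    = [4, 3, 2]     # citywide proportional entitlements
for row in ipf(votes, district_seats, party_seats):
    print([round(x, 2) for x in row])
```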
It was proposed for elections by the mathematician Michel Balinski in 1989, and first used by the city of Zurich for its council elections in February 2006, in a modified form called "new Zurich apportionment" (Neue Zürcher Zuteilungsverfahren). Zurich had had to modify its party list PR system after the Swiss Federal Court ruled that its smallest wards, as a result of population changes over many years, unconstitutionally disadvantaged smaller political parties. With biproportional apportionment, the use of open party lists hasn't changed, but the way winning candidates are determined has. The proportion of seats due to each party is calculated according to their overall citywide vote, and then the district winners are adjusted to conform to these proportions. This means that some candidates, who would otherwise have been successful, can be denied seats in favor of initially unsuccessful candidates, in order to improve the relative proportions of their respective parties overall. This peculiarity is accepted by the Zurich electorate because the resulting city council is proportional and all votes, regardless of district magnitude, now have equal weight. The system has since been adopted by other Swiss cities and cantons. Fair majority voting Balinski has proposed another variant called fair majority voting (FMV) to replace single-winner plurality/majoritarian electoral systems, in particular the system used for the US House of Representatives. FMV introduces proportionality without changing the method of voting, the number of seats, or the – possibly gerrymandered – district boundaries. Seats would be apportioned to parties in a proportional manner at the state level. In a related proposal for the UK parliament, whose elections are contested by many more parties, the authors note that parameters can be tuned to adopt any degree of proportionality deemed acceptable to the electorate. In order to elect smaller parties, a number of constituencies would be awarded to candidates placed fourth or even fifth in the constituency – unlikely to be acceptable to the electorate, the authors concede – but this effect could be substantially reduced by incorporating a third, regional, apportionment tier, or by specifying minimum thresholds. Other proportional systems Reweighted range voting Reweighted range voting (RRV) is a multi-winner voting system similar to STV in that voters can express support for multiple candidates, but different in that candidates are graded instead of ranked. That is, a voter assigns a score to each candidate. The higher a candidate’s scores, the greater the chance they will be among the winners. Similar to STV, the vote counting procedure occurs in rounds. The first round of RRV is identical to range voting. All ballots are added with equal weight, and the candidate with the highest overall score is elected. In all subsequent rounds, ballots that support candidates who have already been elected are added with a reduced weight. Thus voters who support none of the winners in the early rounds are increasingly likely to elect one of their preferred candidates in a later round. The procedure has been shown to yield proportional outcomes if voters are loyal to distinct groups of candidates (e.g. political parties). Reweighted approval voting Reweighted approval voting (RAV) is similar to reweighted range voting in that several winners are elected using a multi-round counting procedure in which ballots supporting already elected candidates are given reduced weights. 
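The reweighting idea behind these methods can be sketched with approval-style ballots as follows; the harmonic 1/(1 + number of approved winners) weight used here is one common choice, not the only one found in the literature, and the candidate names are made up.

```python
def reweighted_approval(ballots, num_winners):
    """Elect `num_winners` from approval ballots, re-weighting each ballot by
    1 / (1 + number of its approved candidates already elected)."""
    candidates = {c for b in ballots for c in b}
    winners = []
    for _ in range(num_winners):
        scores = {c: 0.0 for c in candidates if c not in winners}
        for ballot in ballots:
            weight = 1.0 / (1 + sum(1 for c in ballot if c in winners))
            for c in ballot:
                if c in scores:
                    scores[c] += weight
        winners.append(max(scores, key=scores.get))
    return winners

# 7 voters approve the A slate, 3 approve the B slate:
# expect two A candidates and one B candidate to win.
ballots = [{"A1", "A2", "A3"}] * 7 + [{"B1", "B2"}] * 3
print(reweighted_approval(ballots, 3))
```

With score ballots, as in RRV proper, roughly the same loop applies, except that each ballot contributes its score for a candidate multiplied by its current weight.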
Under RAV, however, a voter can only choose to approve or disapprove of each candidate, as in approval voting. RAV was used briefly in Sweden during the early 1900s. In asset voting, the voters vote for candidates and then the candidates negotiate amongst each other and reallocate votes amongst themselves. Asset voting was independently rediscovered by each of Lewis Carroll, Warren D. Smith, and Forest Simmons. The random ballot, or lottery voting, is a single-vote, single-winner voting method in which one of the marked ballots is selected at random, and the candidate supported by that ballot is declared the winner. Although it has been described as "a thought experiment", the system is statistically likely to produce proportional election outcomes if applied over a large number of single-member districts. In a related method called sortition, one dispenses with voting altogether and simply appoints randomly selected individuals from a population to serve in a representative decision-making body. History Arguments for proportional representation long predate the systems themselves. John Adams, in his 1776 essay Thoughts on Government, described the ideal representative assembly this way: It should be in miniature, an exact portrait of the people at large. It should think, feel, reason, and act like them. That it may be the interest of this Assembly to do strict justice at all times, it should be an equal representation, or in other words equal interest among the people should have equal interest in it. The comte de Mirabeau, addressing the Estates of Provence in 1789, made the same point: A representative body is to the nation what a chart is for the physical configuration of its soil: in all its parts, and as a whole, the representative body should at all times present a reduced picture of the people, their opinions, aspirations, and wishes, and that presentation should bear the relative proportion to the original precisely. In February 1793, the Marquis de Condorcet led the drafting of the Girondist constitution which proposed a limited voting scheme with proportional aspects. Before that could be voted on, the Montagnards took over the National Convention and produced their own constitution. On June 24, Saint-Just proposed the single non-transferable vote, which can be proportional, for national elections but the constitution was passed on the same day specifying first-past-the-post voting. Already in 1787, James Wilson, like Adams a US Founding Father, understood the importance of multiple-member districts: "Bad elections proceed from the smallness of the districts which give an opportunity to bad men to intrigue themselves into office", and again, in 1791, in his Lectures on Law: "It may, I believe, be assumed as a general maxim, of no small importance in democratical governments, that the more extensive the district of election is, the choice will be the more wise and enlightened". The 1790 Constitution of Pennsylvania specified multiple-member districts for the state Senate and required their boundaries to follow county lines. STV, or, more precisely, an election method where voters have one transferable vote, was first invented in 1819 by an English schoolmaster, Thomas Wright Hill, who devised a "plan of election" for the committee of the Society for Literary and Scientific Improvement in Birmingham that used not only transfers of surplus votes from winners but also from losers, a refinement that later both Andræ and Hare initially omitted. But the procedure was unsuitable for a public election and wasn't publicised. In 1839, Hill's son, Rowland Hill, recommended the concept for public elections in Adelaide, and a simple process was used in which voters formed as many groups as there were representatives to be elected, each group electing one representative.
The first practical PR election method, a list method, was conceived by Thomas Gilpin, a retired paper-mill owner, in a paper he read to the American Philosophical Society in Philadelphia in 1844: "On the representation of minorities of electors to act with the majority in elected assemblies". But the paper appears not to have excited any interest. A practical election using a single transferable vote was devised in Denmark by Carl Andræ, a mathematician, and first used there in 1855, making it the oldest PR system, but the system never really spread. It was re-invented (apparently independently) in the UK in 1857 by Thomas Hare, a London barrister, in his pamphlet The Machinery of Representation and expanded on in his 1859 Treatise on the Election of Representatives. The scheme was enthusiastically taken up by John Stuart Mill, ensuring international interest. The 1865 edition of the book included the transfer of preferences from dropped candidates and the STV method was essentially complete. Mill proposed it to the House of Commons in 1867, but the British parliament rejected it. The name evolved from "Mr. Hare's scheme" to "proportional representation", then "proportional representation with the single transferable vote", and finally, by the end of the 19th century, to "the single transferable vote". A party list proportional representation system was devised and described in 1878 by Victor D'Hondt in Belgium. D'Hondt's method of seat allocation, the D'Hondt method, is still widely used. Victor Considerant, a utopian socialist, devised a similar system in an 1892 book. Some Swiss cantons (beginning with Ticino in 1890) used the system before Belgium, which was first to adopt list PR in 1900 for its national parliament. Many European countries adopted similar systems during or after World War I. List PR was favoured on the Continent because the use of lists in elections, the scrutin de liste, was already widespread. STV was preferred in the English-speaking world because its tradition was the election of individuals. In the UK, the 1917 Speaker's Conference recommended STV for all multi-seat Westminster constituencies, but it was only applied to university constituencies, lasting from 1918 until 1950 when those constituencies were abolished. In Ireland, STV was used in 1918 in the University of Dublin constituency, and was introduced for devolved elections in 1921. STV is currently used for two national lower houses of parliament, Ireland, since independence (as the Irish Free State) in 1922, and Malta, since 1921, long before independence in 1966. In Ireland, two attempts have been made by Fianna Fáil governments to abolish STV and replace it with the 'First Past the Post' plurality system. Both attempts were rejected by voters in referendums held in 1959 and again in 1968. STV is also used for all other elections in Ireland except for that of the presidency, for the Northern Irish assembly and European and local authorities, Scottish local authorities, some New Zealand and Australian local authorities, the Tasmanian (since 1907) and Australian Capital Territory assemblies, where the method is known as Hare-Clark, and the city council in Cambridge, Massachusetts, (since 1941). PR is used by a majority of the world's 33 most robust democracies with populations of at least two million people; only six use plurality or a majoritarian system (runoff or instant runoff) for elections to the legislative assembly, four use parallel systems, and 23 use PR.
PR dominates Europe, including Germany and most of northern and eastern Europe; it is also used for European Parliament elections. France adopted PR at the end of World War II, but discarded it in 1958; it was used for parliament elections in 1986. Switzerland has the most widespread use of proportional representation, which is the system used to elect not only national legislatures and local councils, but also all local executives. PR is less common in the English-speaking world; New Zealand adopted MMP in 1993, but the UK, Canada, India and Australia all use plurality/majoritarian systems for legislative elections. In Canada, STV was used by the cities of Edmonton and Calgary in Alberta from 1926 to 1955, and by Winnipeg in Manitoba from 1920 to 1953. In both provinces the alternative vote (AV) was used in rural areas. First-past-the-post was re-adopted in Alberta by the dominant party for reasons of political advantage, in Manitoba a principal reason was the underrepresentation of Winnipeg in the provincial legislature.:223–234 STV has some history in the United States. Between 1915 and 1962, twenty-four cities used the system for at least one election. In many cities, minority parties and other groups used STV to break up single-party monopolies on elective office. One of the most famous cases is New York City, where a coalition of Republicans and others imposed STV in 1936 as part of an attack on the Tammany Hall machine. Another famous case is Cincinnati, Ohio, where, in 1924, Democrats and Progressive-wing Republicans imposed a council-manager charter with STV elections to dislodge the Republican machine of Rudolph K. Hynicka. Although Cincinnati's council-manager system survives, Republicans and other disaffected groups replaced STV with plurality-at-large voting in 1957. From 1870 to 1980, Illinois used a semi-proportional cumulative voting system to elect its House of Representatives. Each district across the state elected both Republicans and Democrats year-after-year. Cambridge, Massachusetts, (STV) and Peoria, Illinois, (cumulative voting) continue to use PR. San Francisco had citywide elections in which people would cast votes for five or six candidates simultaneously, delivering some of the benefits of proportional representation. List of countries using proportional representation The table below lists the countries that use a PR electoral system to fill a nationwide elected body. Detailed information on electoral systems applying to the first chamber of the legislature is maintained by the ACE Electoral Knowledge Network. (See also the complete list of electoral systems by country.) |1||Albania||Party list, 4% national threshold or 2.5% in a district| |4||Argentina||Party list in the Chamber of Deputies| |5||Armenia||Two-tier party list Nationwide closed lists and open lists in each of 13 election districts. If needed to ensure a stable majority with at least 54% of the seats, the two best-placed parties participate in a run-off vote to receive a majority bonus. Threshold of 5% for parties and 7% for election blocs. 
|7||Australia||For Senate only, Single transferable vote| |8||Austria||Party list, 4% threshold| |9||Belgium||Party list, 5% threshold| |11||Bolivia||Mixed-member proportional representation, 3% threshold| |12||Bosnia and Herzegovina||Party list| |14||Bulgaria||Party list, 4% threshold| |15||Burkina Faso||Party list| |16||Burundi||Party list, 2% threshold| |18||Cape Verde||Party list| |21||Costa Rica||Party list| |22||Croatia||Party list, 5% threshold| |24||Czech Republic||Party list, 5% threshold| |25||Denmark||Two-tier party list, 2% threshold| |26||Dominican Republic||Party list| |27||East Timor||Party list| |28||El Salvador||Party list| |29||Equatorial Guinea||Party list| |30||Estonia||Party list, 5% threshold| |31||European Union||Each member state chooses its own PR system| |32||Faroe Islands||Party list| |33||Fiji||Party list, 5% threshold| |35||Germany||Mixed-member proportional representation, 5% (or 3 district winners) threshold| |36||Greece||Two-tier party list| |43||Indonesia||Party list, 3.5% threshold| |45||Ireland||Single transferable vote (For Dáil only)| |46||Israel||Party list, 3.25% threshold| |47||Kazakhstan||Party list, 7% threshold| |49||Kyrgyzstan||Party list, 5% threshold| |50||Latvia||Party list, 5% threshold| |52||Lesotho||Mixed-member proportional representation| |53||Liechtenstein||Party list, 8% threshold| |56||Malta||Single transferable vote| |57||Moldova||Party list, 6% threshold| |62||New Zealand||Mixed-member proportional representation, 5% (or 1 district winner) threshold| |64||Northern Ireland||Single transferable vote| |65||Norway||Two-tier party list, 4% national threshold| |68||Poland||Party list, 5% threshold or more| |72||San Marino||Party list If needed to ensure a stable majority, the two best-placed parties participate in a run-off vote to receive a majority bonus. Threshold of 3.5%. |73||São Tomé and Príncipe||Party list| |74||Serbia||Party list, 5% threshold or less| |75||Sint Maarten||Party list| |76||Slovakia||Party list, 5% threshold| |77||Slovenia||Party list, 4% threshold| |78||South Africa||Party list| |79||Spain||Party list, 3% threshold in small constituencies| |80||Sri Lanka||Party list| |82||Sweden||Two-tier party list, 4% national threshold or 12% in a district| |86||Turkey||Party list, 10% threshold| - Semi-proportional representation - Mixed electoral system - Apportionment (politics) - Hare quota - D'Hondt method - Sainte-Laguë method - Interactive representation - Direct representation - One man, one vote - Index of politics articles - Mill, John Stuart (1861). "Chapter VII, Of True and False Democracy; Representation of All, and Representation of the Majority only". Considerations on Representative Government. London: Parker, Son, & Bourn. - "Proportional Representation (PR)". ACE Electoral Knowledge Network. Retrieved 9 April 2014. - "Electoral System Design: the New International IDEA Handbook". International Institute for Democracy and Electoral Assistance. 2005. Retrieved 9 April 2014. - Amy, Douglas J. "How Proportional Representation Elections Work". FairVote. Retrieved 26 October 2017. - "Additional Member System". London: Electoral Reform Society. Retrieved 16 October 2015. - ACE Project: The Electoral Knowledge Network. "Electoral Systems Comparative Data, Table by Question". Retrieved 20 November 2014. - Gallagher, Michael. "Ireland: The Archetypal Single Transferable Vote System" (PDF). Retrieved 26 October 2014. 
- Hirczy de Miño, Wolfgang, University of Houston; Lane, John, State University of New York at Buffalo (1999). "Malta: STV in a two-party system" (PDF). Retrieved 24 July 2014. - ACE Project Electoral Knowledge Network. "The Systems and Their Consequences". Retrieved 26 September 2014. - House of Commons of Canada Special Committee on Electoral Reform (December 2016). "Strengthening Democracy in Canada: Principles, Process and Public Engagement for Electoral Reform". - Forder, James (2011). The case against voting reform. Oxford: Oneworld Publications. ISBN 978-1-85168-825-8. - Amy, Douglas. "Proportional Representation Voting Systems". Fairvote.org. Takoma Park. Retrieved 25 August 2017. - Norris, Pippa (1997). "Choosing Electoral Systems: Proportional, Majoritarian and Mixed Systems" (PDF). Harvard University. Retrieved 9 April 2014. - Colin Rallings; Michael Thrasher. "The 2005 general election: analysis of the results" (PDF). Electoral Commission, Research, Electoral data. London: Electoral Commission. Retrieved 29 March 2015. - "Report of the Hansard Society Commission on Electoral Reform". Hansard Society. London. 1976. - "1993 Canadian Federal Election Results". University of British Columbia. Retrieved 25 January 2016. - "Election 2015 - BBC News". BBC. Retrieved 11 May 2015. - Ana Nicolaci da Costa; Charlotte Greenfield (September 23, 2017). "New Zealand's ruling party ahead after poll but kingmaker in no rush to decide". Reuters. - Roberts, Iain (29 June 2010). "People in broad church parties should think twice before attacking coalitions". Liberal Democrat Voice. Retrieved 29 July 2014. - "Why Proportional Representation? A look at the evidence" (PDF). Fair Vote Canada. Retrieved 8 April 2016. - Amy, Douglas J. "Single Transferable Vote Or Choice Voting". FairVote. Retrieved 9 April 2014. - "Electoral Reform Society's evidence to the Joint Committee on the Draft Bill for House of Lords Reform". Electoral Reform Society. 21 October 2011. Retrieved 10 May 2015. - Harris, Paul (20 November 2011). "'America is better than this': paralysis at the top leaves voters desperate for change". The Guardian. Retrieved 17 November 2014. - Krugman, Paul (19 May 2012). "Going To Extreme". The Conscience of a Liberal, Paul Krugman Blog. The New York Times Co. Retrieved 24 Nov 2014. - Mollison, Denis. "Fair votes in practice STV for Westminster" (PDF). Heriot Watt University. Retrieved 3 June 2014. - "Democrats' Edge in House Popular Vote Would Have Increased if All Seats Had Been Contested". FairVote. Retrieved 7 July 2014. - Cox, Gary W.; Fiva, Jon H.; Smith, Daniel M. (2016). "The Contraction Effect: How Proportional Representation Affects Mobilization and Turnout" (PDF). The Journal of Politics. 78 (4). - Amy, Douglas J (2002). Real Choices / New Voices, How Proportional Representation Elections Could Revitalize American Democracy. Columbia University Press. ISBN 9780231125499. - Mollison, Denis (2010). "Fair votes in practice: STV for Westminster". Heriot-Watt University. Retrieved 3 June 2014. - Scheppele, Kim Lane (April 13, 2014). "Legal But Not Fair (Hungary)". The Conscience of a Liberal, Paul Krugman Blog. The New York Times Co. Retrieved 12 July 2014. - Office for Democratic Institutions and Human Rights (11 July 2014). "Hungary, Parliamentary Elections, 6 April 2014: Final Report". OSCE. - "Voting Counts: Electoral Reform for Canada" (PDF). Law Commission of Canada. 2004. p. 22. - "Single Transferable Vote". London: Electoral Reform Society. Retrieved 28 July 2014. 
- Humphreys, John H (1911). Proportional Representation, A Study in Methods of Election. London: Methuen & Co.Ltd. - Carey, John M.; Hix, Simon (2011). "The Electoral Sweet Spot: Low-Magnitude Proportional Electoral Systems". American Journal of Political Science. 55 (2): 383–397. - "Electoral reform in Chile: Tie breaker". The Economist. 14 February 2015. Retrieved 11 April 2018. - Laver, Michael (1998). "A new electoral system for Ireland?" (PDF). The Policy Institute, Trinity College, Dublin. - "Joint Committee on the Constitution" (PDF). Dublin: Houses of the Oireachtas. July 2010. - "National projections" (PDF). Monopoly Politics 2014 and the Fair Voting Solution. FairVote. Retrieved 9 July 2014. - Lubell, Maayan (March 11, 2014). "Israel ups threshold for Knesset seats despite opposition boycott". Thomson Reuters. Retrieved 10 July 2014. - "Party Magnitude and Candidate Selection". ACE Electoral Knowledge Network. - O'Kelly, Michael. "The fall of Fianna Fáil in the 2011 Irish general election". Significance. Royal Statistical Society, American Statistical Association. Archived from the original on 2014-08-06. - Dunleavy, Patrick; Margetts, Helen (2004). "How proportional are the 'British AMS' systems?". Representation. Taylor & Francis. 40 (4): 317–329. Retrieved 25 November 2014. - Kestelman, Philip (March 1999). "Quantifying Representativity". Voting matters. London: The McDougall Trust (10). Retrieved 10 August 2013. - Hill, I D (May 1997). "Measuring proportionality". Voting matters. London: The McDougall Trust (8). - As counted from the table in http://www.wahlrecht.de/ausland/europa.htm [in German]; "Vorzugsstimme(n)" means "open list". - "Party List PR". Electoral Reform Society. Retrieved 23 May 2016. - Gordon Gibson (2003). Fixing Canadian Democracy. The Fraser Institute. p. 76. - Gallagher, Michael; Mitchell, Paul (2005). The Politics of Electoral Systems. Oxford, New York: Oxford University Press. p. 11. ISBN 0-19-925756-6. - "The Parliamentary Electoral System in Denmark". Copenhagen: Ministry of the Interior and Health. 2011. Retrieved 1 Sep 2014. - "The main features of the Norwegian electoral system". Oslo: Ministry of Local Government and Modernisation. Retrieved 1 Sep 2014. - "The Swedish electoral system". Stockholm: Election Authority. 2011. Retrieved 1 Sep 2014. - "Fair Voting/Proportional Representation". FairVote. Retrieved 9 April 2014. - Amy, Douglas J. "A Brief History of Proportional Representation in the United States". FairVote. Retrieved 16 October 2015. - Tideman, Nicolaus (1995). "The Single Transferable Vote". Journal of Economic Perspectives. American Economic Association. 9 (1): 27–38. doi:10.1257/jep.9.1.27. - O’Neill, Jeffrey C. (July 2006). "Comments on the STV Rules Proposed by British Columbia". Voting matters. London: The McDougall Trust (22). Retrieved 10 August 2013. - David M. Farrell; Ian McAllister (2006). The Australian Electoral System: Origins, Variations, and Consequences. Sydney: UNSW Press. ISBN 978-0868408583. - "Referendum 2011: A look at the STV system". Auckland: The New Zealand Herald. 1 Nov 2011. Retrieved 21 Nov 2014. - "Change the Way We Elect? Round Two of the Debate". Vancouver: The Tyee. 30 Apr 2009. Retrieved 21 Nov 2014. - "The Hare-Clark System of Proportional Representation". Melbourne: Proportional Representation Society of Australia. Retrieved 21 Nov 2014. - "Above the line voting". Perth: University of Western Australia. Retrieved 21 Nov 2014. - "Glossary of Election Terms". 
Sydney: Australian Broadcasting Corporation. Retrieved 21 Nov 2014. - Hill, I.D. (November 2000). "How to ruin STV". Voting matters. London: The McDougall Trust (12). Retrieved 10 August 2013. - Green, Anthony (20 April 2005). "Above or below the line? Managing preference votes". Australia: On Line Opinion. Retrieved 21 Nov 2014. - Terry, Chris (5 April 2012). "Serving up a dog's breakfast". London: Electoral Reform Society. Retrieved 21 Nov 2014. - ACE Project Electoral Knowledge Network. "Mixed Systems". Retrieved 29 June 2016. - Massicotte, Louis (2004). In Search of Compensatory Mixed Electoral System for Québec (PDF) (Report). - Bochsler, Daniel (May 13, 2010). "Chapter 5, How Party Systems Develop in Mixed Electoral Systems". Territory and Electoral Rules in Post-Communist Democracies. Palgrave Macmillan. - "Electoral Systems and the Delimitation of Constituencies". International Foundation for Electoral Systems. 2 Jul 2009. - Moser, Robert G. (December 2004). "Mixed electoral systems and electoral system effects: controlled comparison and cross-national analysis". Electoral Studies. 23 (4): 575–599. doi:10.1016/S0261-3794(03)00056-8. - Massicotte, Louis (September 1999). "Mixed electoral systems: a conceptual and empirical survey". Electoral Studies. 18 (3): 341–366. doi:10.1016/S0261-3794(98)00063-8. - "MMP Voting System". Wellington: Electoral Commission New Zealand. 2011. Retrieved 10 Aug 2014. - "Deutschland hat ein neues Wahlrecht" (in German). Zeit Online. 22 February 2013. - "Report of the Electoral Commission on the Review of the MMP Voting System". Wellington: Electoral Commission New Zealand. 2011. Retrieved 10 Aug 2014. - Sean Graham (April 4, 2016). "Dual-Member Mixed Proportional: A New Electoral System for Canada" (PDF). - Antony Hodgson (Jan 21, 2016). "Why a referendum on electoral reform would be undemocratic". The Tyee. - PEI Special Committee on Democratic Renewal (April 15, 2016). "Recommendations in Response to the White Paper on Democratic Renewal - A Plebiscite Question" (PDF). - Kerry Campbell (April 15, 2016). "P.E.I. electoral reform committee proposes ranked ballot". CBC News. - Elections PEI (November 7, 2016). "Plebiscite Results". - Susan Bradley (November 7, 2016). "P.E.I. plebiscite favours mixed member proportional representation". CBC News. - Pukelsheim, Friedrich (September 2009). "Zurich's New Apportionment" (PDF). German Research. Wiley Online Library. 31 (2). Retrieved 10 August 2014. - Balinski, Michel (February 2008). "Fair Majority Voting (or How to Eliminate Gerrymandering)". The American Mathematical Monthly. Washington D.C.: Mathematical Association of America. 115 (2). Retrieved 10 August 2014. - Akartunali, Kerem; Knight, Philip A. (January 2014). "Network Models and Biproportional Apportionment for Fair Seat Allocations in the UK Elections" (PDF). University of Strathclyde. Retrieved 10 August 2014. - Smith, Warren (18 June 2006). "Comparative survey of multiwinner election methods" (PDF). - Kok, Jan; Smith, Warren. "Reweighted Range Voting – a Proportional Representation voting method that feels like range voting". Retrieved 4 April 2016. - Ryan, Ivan. "Reweighted Range Voting – a Proportional Representation voting method that feels like range voting". Retrieved 4 April 2016. - Smith, Warren (6 August 2005). "Reweighted range voting – new multiwinner voting method" (PDF). - "89th Annual Academy Awards of Merit for Achievements during 2017" (PDF). Retrieved 4 April 2016. - "Rule Twenty-Two: Special Rules for the Visual Effects Award". 
Archived from the original on 14 September 2012. Retrieved 4 April 2016. - Aziz, Haris; Serge Gaspers, Joachim Gudmundsson, Simon Mackenzie, Nicholas Mattei, Toby Walsh. "Computational Aspects of Multi-Winner Approval Voting" (PDF). Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems. pp. 107–115. ISBN 978-1-4503-3413-6. - Smith, Warren (8 March 2005). ""Asset voting" scheme for multiwinner elections" (PDF). - Smith, Warren. "Asset voting – an interesting and very simple multiwinner voting system". Retrieved 4 April 2016. - Akhil Reed Amar (1 January 1995). ""Lottery Voting: A Thought Experiment"". - Adams, John (1776). "Thoughts on Government". The Adams Papers Digital Edition. Massachusetts Historical Society. Retrieved 26 July 2014. - Hoag, Clarence; Hallett, George (1926). Proportional Representation. New York: The Macmillan Company. - Madison, James. "Notes of Debates in the Federal Convention of 1787, Wednesday, June 6". TeachingAmericanHistory.org. Retrieved 5 August 2014. - Wilson, James (1804). "Vol 2, Part II, Ch.1 Of the constitutions of the United States and of Pennsylvania – Of the legislative department, I, of the election of its members". The Works of the Honourable James Wilson. Constitution Society. Retrieved 5 August 2014. - "Constitution of the Commonwealth of Pennsylvania – 1790, art.I,§ VII, Of districts for electing Senators". Duquesne University. Retrieved 9 December 2014. - Hirczy de Miño, Wolfgang (1997). "Malta: Single-Transferable Vote with Some Twists". ACE Electoral Knowledge Network. Retrieved 5 Dec 2014. - "Adoption of Plan E". Welcome to the City of Cambridge. City of Cambridge, MA. Archived from the original on 13 December 2014. Retrieved 25 November 2014. - "Proportional Representation in Most Robust Democracies". Fair Vote: The Center for Voting And Democracy. Retrieved 9 October 2017. - Jansen, Harold John (1998). "The Single Transferable Vote in Alberta and Manitoba" (PDF). Library and Archives Canada. University of Alberta. Retrieved 23 March 2015. - Santucci, Jack (2016-11-10). "Party Splits, Not Progressives". American Politics Research. 45 (3): 494–526. doi:10.1177/1532673x16674774. ISSN 1532-673X. - Barber, Kathleen (1995). Proportional Representation and Election Reform in Ohio. Columbus, OH: Ohio State University Press. ISBN 978-0814206607. - ACE Project: The Electoral Knowledge Network. "Electoral Systems Comparative Data, World Map". Retrieved 24 October 2017. - ACE Project: The Electoral Knowledge Network. "Electoral Systems Comparative Data, Table by Country". Retrieved 24 October 2017. - Office for Democratic Institutions and Human Rights. "Republic of Armenia, Parliamentary Elections, 2 April 2017". OSCE. - Amy, Douglas J. (1993). Real Choices/New Voices: The Case for Proportional Representation Elections in the United States. Columbia University Press. - Batto, Nathan F.; Huang, Chi; Tan, Alexander C.; Cox, Gary (2016). Mixed-Member Electoral Systems in Constitutional Context: Taiwan, Japan, and Beyond. Ann Arbor: University of Michigan Press. - Pilon, Dennis (2007). The Politics of Voting. Edmond Montgomery Publications. - Colomer, Josep M. (2003). Political Institutions. Oxford University Press. - Colomer, Josep M., ed. (2004). Handbook of Electoral System Choice. Palgrave Macmillan. - Pukelsheim, Friedrich (2014). Proportional Representation. Springer. - Linton, Martin; Southcott, Mary (1998). Making Votes Count: The Case for Electoral Reform. London: Profile Books. - Forder, James (2011). 
The case against voting reform. Oxford: Oneworld Publications. ISBN 978-1-85168-825-8. - Hickman, John; Little, Chris (November 2000). "Seat/vote proportionality in Romanian and Spanish parliamentary elections". Journal of Southern Europe and the Balkans Online. Taylor and Francis. 2 (2): 197–212. doi:10.1080/713683348. - Galasso, Vincenzo; Nannicini, Tommaso (December 2015). "So closed: political selection in proportional systems". European Journal of Political Economy. Elsevier. 40 (B): 260–273. doi:10.1016/j.ejpoleco.2015.04.008. - Golder, Sona N.; Stephenson, Laura B.; Van der Straeten, Karine; Blais, André (March 2017). "Votes for women: electoral systems and support for female candidates". Politics & Gender. Cambridge Journals. 13 (1): 107&ndash, 131. doi:10.1017/S1743923X16000684. - Proportional Representation Library - Handbook of Electoral System Choice - Quantifying Representativity Article by Philip Kestelman - The De Borda Institute A Northern Ireland-based organisation promoting inclusive voting procedures - Election Districts Voting improves PR with overlapping districts elections for first past the post, alternative vote and single transferable vote voters - Electoral Reform Society founded in England in 1884, the longest running PR organization. Contains good information about single transferable vote – the Society's preferred form of PR - Electoral Reform Australia - Proportional Representation Society of Australia - Fair Vote Canada - FairVote, USA - Why Not Proportional Representation? - Vote Dilution means Voters have Less Voice Law is Cool site - Proportional Representation and British Democracy Debate on British electoral system reform - Felsenthal, Dan S. (2010). "Review of paradoxes afflicting various voting procedures where one out of m candidates (m ≥ 2) must be elected" (PDF). Assessing Alternative Voting Procedures. London, UK: London School of Economics and Political Science. Retrieved October 9, 2011. - RangeVoting.org. page on PR
http://wiki-offline.jakearchibald.com/wiki/Proportional_representation
“Only when one comes to listen, only when one is aware and still, can things be seen and heard. Everyone has a listening-point somewhere...some place of quiet where the universe can be contemplated in awe.” - Listening Point by Sigurd Olson What are natural soundscapes - and why are they important? The audio equivalent of a landscape, a natural soundscape is the combination of an area’s natural sounds. Natural soundscapes can include everything from animal and bird sounds to rushing water, wind through vegetation, glacial crevassing, and thermal or volcanic activity. The amplitude of these sounds is measured in a unit known as decibels (dB), a logarithmic scale similar to the Richter scale for earthquakes: an increase of 10 dB equals a 10-fold increase in intensity. Sound pressure levels measured in national parks span this whole scale, from the threshold of human hearing, crickets at 5 meters, and conversational speech at 5 meters up to a cruiser motorcycle at 15 meters, a military jet at 100 m AGL, and cannon fire at 150 meters. (Sound levels are often adjusted, or "weighted", to match the hearing abilities of a given animal; sound levels adjusted for human hearing are expressed as dB(A).) These sounds are vital to the natural functioning of ecosystems. For example, they play important roles in the ability of wildlife to communicate, establish territory, find prey or avoid predation, experience courtship rituals and mate, and protect young. Natural soundscapes are also integral to the experiences visitors have in wild places like the Mount Rainier Wilderness. The sounds of elk bugling and glaciers cracking - or mountain music - can create a sense of awe that connects us to the splendor of a natural environment. In fact, a system-wide survey of park visitors revealed that nearly as many visitors come to national parks to enjoy the natural soundscape (91 percent) as come to view the scenery (93 percent) (Report on the Effects of Aircraft Overflights on the National Park System, Table 6.1. 1995). Human-caused noise, however, can threaten natural soundscapes - and the value afforded by them. Transportation noise (military, commercial, and private overflights as well as vehicles) and park operations contribute the largest amounts of noise. Given the impact soundscapes have on ecosystems and visitors’ experiences, the National Park Service regards natural sounds as a resource that must be protected. Specifically, the agency policy on soundscape management states: “The National Park Service will preserve, to the greatest extent possible, the natural soundscapes of parks.” (National Park Service 2006 Management Policy, Soundscapes Section 4.9, pg 56). What does Mount Rainier sound like? Mount Rainier’s soundscape program aims to answer this question. Acoustical monitoring stations are located throughout the park to capture soundscapes of the diverse vegetation and elevation zones, and a sampling of the recordings collected can be explored in the map above. Overall, the thousands of hours of audio recorded reveal numerous species such as woodpeckers at Trail of the Shadows, barred owls and red-tailed hawks at Green Lake, elk at Crystal Mountain, bears at Lakes Trail, and pikas at Sunrise Ridge. There were also recordings collected of a rock slide at Green Lake and shale sliding down the slope on Crystal Mountain. Soundscape equipment also captured audio from a 2015 debris flow along Tahoma Creek.
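Because the decibel scale described above is logarithmic, a difference in dB corresponds to a ratio of sound intensities; the short sketch below shows the conversion.

```python
def intensity_ratio(delta_db):
    """How many times more intense a sound is, given a level difference in dB."""
    return 10 ** (delta_db / 10)

print(intensity_ratio(10))   # 10.0  (a 10 dB increase is 10x the intensity)
print(intensity_ratio(20))   # 100.0
print(intensity_ratio(3))    # ~2.0  (a 3 dB increase roughly doubles the intensity)
```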
Along with wind and flowing water, these are just a few of the many natural sounds which join to create the acoustical environment in Mount Rainier National Park. In addition to natural sounds, studies found that Mount Rainier National Park is moderately impacted by human-caused noise. Over all hours of the day, human-caused sounds were audible between 12.5 and 26 percent of the time in the backcountry and 40 percent of the time in the developed areas of the park. At the backcountry sites, aircraft were the most pervasive non-natural sound source, audible between 11 and 24 percent of the time. Figure 1. Mount Rainier Spectrograms Sounds occur in different frequencies and are represented visually using a spectrogram, an image that incorporates frequency (y axis), time (x axis), and amplitude (brightness of color). This spectrogram displays 60 minutes of sounds from Ptarmigan Ridge, Mount Rainier National Park in 2013. Each row displays 20 minutes of acoustical data. Brighter colors indicate louder sounds, which stand out from the quiet blue background. Each type of sound has its own unique signature, allowing us to read the spectrogram and see what sounds occurred during the hour. The National Park Service examines several metrics to learn more about the condition of the acoustic environment at each site. Examples of these measures include daily or hourly sound pressure levels, and percent time that certain sounds can be heard. High frequency sounds (such as a cricket chirping) and low frequency sounds (such as thunder rumbling) often occur simultaneously, so the frequency spectrum is split into 33 smaller ranges, each encompassing one-third of an octave. The sound levels for 33 one-third octave band frequencies over the day (yellow) and night (purple) periods at Longmire during the summer of 2006 are shown in Figure 2 below. The grayed area of the graph represents sound levels outside of the typical range of human hearing. For example, in order for us to hear very low frequency sounds (far left), we need them to be really loud (higher sound pressure level). Similarly, we need high pitched sounds (far right) to be really loud for us to hear them. The exceedance levels (Lx) represent the sound level exceeded x percent of the time. For example, L90 is the sound level that has been exceeded 90% of the time, and only the quietest 10% of the samples can be found below this point. On the other hand, the L10 is the sound level that has been exceeded 10% of the time, and 90% of the measurements are quieter than the L10. The size of the bold portion of the column between the L50 (median sound level) and Lnat (natural ambient) is directly related to the percent time that human-caused sounds are audible. When bold portions of the column do not appear, the natural and existing ambient levels were either very close to each other, or were equal. In the example figure below, the night hours were quieter, and human-caused sounds at night were audible less frequently than during the day. Use of Soundscape Data in Other Disciplines Data from the soundscape array can have unexpected benefits to other groups studying the features of Mount Rainier’s dynamic landscape. In 2015, a soundscape monitor was placed near Tahoma Creek and was recording during a sequence of outburst floods and debris flows that occurred on August 13th.
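The exceedance levels described above are simply percentiles of the measured sound levels. Here is a small sketch; the one-second dBA samples are made up for illustration.

```python
import numpy as np

def exceedance_level(levels_db, x):
    """L_x: the sound level exceeded x percent of the time."""
    return np.percentile(levels_db, 100 - x)

# Hypothetical one-second dBA samples over an hour
rng = np.random.default_rng(0)
levels = rng.normal(loc=35, scale=6, size=3600)

for x in (10, 50, 90):
    print(f"L{x} = {exceedance_level(levels, x):.1f} dBA")
# L10 (loud events) > L50 (median) > L90 (quiet background)
```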
The soundscape data proved to be invaluable to geologists as it not only recorded the passage of each of the four debris flow surges, but it also recorded an unusual, anomalous, and never-before-seen decrease in “river noise” beginning about an hour prior to the first debris flow surge’s arrival (Figure 3). This critical finding suggests that there was a physical blockage either in or just downstream of the South Tahoma Glacier which decreased the flow of water to Tahoma Creek. Following the third debris flow surge, the “river noise” was elevated, which suggests that the physical blockage was eliminated and the impounded water was released to the stream. Higher than normal flows were seen for several days after the debris flow event, which was corroborated by the soundscape data. Without the data from the soundscape array, a critical finding from the August 13th 2015 debris flow event would have been missed. Figure 3. NPS Soundscape data as recorded near Tahoma Creek on August 13, 2015. Data is clipped to the period between 8:00 AM – 8:00 PM. The green line indicates the 42-day average background level, while the blue line is the sound trace from August 13th. Individual seismically-indicated debris flows (DF) are correlated to peaks in the sound trace from 10:00 AM to 1:00 PM. Additional Resources on Mount Rainier's Soundscape
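As a very rough illustration of how a sustained drop in river noise relative to a long-term baseline (like the 42-day average in Figure 3) might be flagged automatically, here is a sketch; the 6 dB threshold and the minimum duration are arbitrary choices for the example, not the criteria the geologists actually used.

```python
def flag_quiet_periods(levels_db, baseline_db, drop_db=6.0, min_samples=60):
    """Return (start, end) index ranges where the measured level stays at least
    `drop_db` below the baseline for `min_samples` consecutive samples."""
    flagged, start = [], None
    for i, (level, base) in enumerate(zip(levels_db, baseline_db)):
        if base - level >= drop_db:
            start = i if start is None else start
        else:
            if start is not None and i - start >= min_samples:
                flagged.append((start, i))
            start = None
    if start is not None and len(levels_db) - start >= min_samples:
        flagged.append((start, len(levels_db)))
    return flagged
```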
https://www.nps.gov/mora/learn/nature/soundscapes.htm
- What is an F Test? - General Steps for an F Test - F Test to Compare Two Variances See also: F Statistic in ANOVA/Regression An “F Test” is a catch-all term for any test that uses the F-distribution. In most cases, when people talk about the F-Test, what they are actually talking about is The F-Test to Compare Two Variances. However, the f-statistic is used in a variety of tests including regression analysis, the Chow test and the Scheffe Test (a post-hoc ANOVA test). If you’re running an F Test, you should use Excel, SPSS, Minitab or some other kind of technology to run the test. Why? Calculating the F test by hand, including variances, is tedious and time-consuming. Therefore you’ll probably make some errors along the way. If you’re running an F Test using technology (for example, an F Test two sample for variances in Excel), the only steps you really need to do are Step 1 and 4 (dealing with the null hypothesis). Technology will calculate Steps 2 and 3 for you. - State the null hypothesis and the alternate hypothesis. - Calculate the F value. In a regression setting, the F value is calculated using the formula F = [(SSE1 - SSE2) / m] / [SSE2 / (n - k)], where SSE = residual sum of squares, m = number of restrictions and k = number of independent variables. - Find the critical value for the test in the F-Table. (Note that the F statistic itself is not the critical value: in an ANOVA setting, for example, the F statistic is the variance of the group means divided by the mean of the within-group variances, and it is compared against the critical value from the F-Table.) - Support or Reject the Null Hypothesis. A Statistical F Test uses an F Statistic to compare two variances, s1 and s2, by dividing them. The result is always a positive number (because variances are always positive). The equation for comparing two variances with the f-test is: F = s1² / s2² If the variances are equal, the ratio of the variances will equal 1. For example, if you had two data sets with a sample 1 (variance of 10) and a sample 2 (variance of 10), the ratio would be 10/10 = 1. You always test that the population variances are equal when running an F Test. In other words, you always assume that the ratio of the variances is equal to 1. Therefore, your null hypothesis will always be that the variances are equal. Several assumptions are made for the test. Your population must be approximately normally distributed (i.e. fit the shape of a bell curve) in order to use the test. Plus, the samples must be independent events. In addition, you’ll want to bear in mind a few important points: - The larger variance should always go in the numerator (the top number) to force the test into a right-tailed test. Right-tailed tests are easier to calculate. - For two-tailed tests, divide alpha by 2 before finding the right critical value. - If you are given standard deviations, they must be squared to get the variances. - If your degrees of freedom aren’t listed in the F Table, use the larger critical value. This helps to avoid the possibility of Type I errors. Warning: F tests can get really tedious to calculate by hand, especially if you have to calculate the variances. You’re much better off using technology (like Excel — see below). These are the general steps to follow. Scroll down for a specific example. Step 2: Square both standard deviations to get the variances. For example, if σ1 = 9.6 and σ2 = 10.9, then the variances (s1 and s2) would be 9.6² = 92.16 and 10.9² = 118.81. Step 3: Take the largest variance, and divide it by the smallest variance to get the f-value.
For example, if your two variances were s1 = 2.5 and s2 = 9.4, divide 9.4 / 2.5 = 3.76. Why? Placing the largest variance on top will force the F-test into a right tailed test, which is much easier to calculate than a left-tailed test. Step 4: Find your degrees of freedom. Degrees of freedom is your sample size minus 1. As you have two samples (variance 1 and variance 2), you’ll have two degrees of freedom: one for the numerator and one for the denominator. Step 5: Look at the f-value you calculated in Step 3 in the f-table. Note that there are several tables, so you’ll need to locate the right table for your alpha level. Unsure how to read an f-table? Read What is an f-table?. Step 6: Compare your calculated value (Step 3) with the table f-value in Step 5. If the f-table value is smaller than the calculated value, you can reject the null hypothesis. The difference between running a one or two tailed F test is that the alpha level needs to be halved for two tailed F tests. For example, instead of working at α = 0.05, you use α = 0.025; instead of working at α = 0.01, you use α = 0.005. With a two tailed F test, you just want to know if the variances are not equal to each other. In notation: Ha: σ1² ≠ σ2² Sample problem: Conduct a two tailed F Test on the following samples: Sample 1: Variance = 109.63, sample size = 41. Sample 2: Variance = 65.99, sample size = 21. Step 1: Write your hypothesis statements: Ho: No difference in variances. Ha: Difference in variances. Step 2: Calculate your F value. Put the highest variance as the numerator and the lowest variance as the denominator: F Statistic = variance 1 / variance 2 = 109.63 / 65.99 = 1.66 Step 3: Find the critical F Value using the F Table. There are several tables, so make sure you look in the alpha = .025 table. Critical F (40,20) at alpha (0.025) = 2.287. Step 4: Compare your calculated value (Step 2) to your table value (Step 3). If your calculated value is higher than the table value, you can reject the null hypothesis: F calculated value: 1.66 F value from table: 2.287. 1.66 < 2.287. So we cannot reject the null hypothesis. F-test two sample for variances, Excel 2013: Steps Step 1: Click the “Data” tab and then click “Data Analysis.” Step 2: Click “F test two sample for variances” and then click “OK.” Step 3: Click the Variable 1 Range box and then type the location for your first set of data. For example, if you typed your data into cells A1 to A10, type “A1:A10” into that box. Step 4: Click the Variable 2 box and then type the location for your second set of data. For example, if you typed your data into cells B1 to B10, type “B1:B10” into that box. Step 5: Click the “Labels” box if your data has column headers. Step 6: Choose an alpha level. In most cases, an alpha level of 0.05 is usually fine. Step 7: Select a location for your output. For example, click the “New Worksheet” radio button. Step 8: Click “OK.” Step 9: Read the results. If your f-value is higher than your F critical value, reject the null hypothesis as your two populations have unequal variances. Warning: Excel has a small “quirk.” Make sure that variance 1 is higher than variance 2. If it isn’t, switch your input data around (i.e. make input 1 “B” and input 2 “A”). Otherwise, Excel will calculate an incorrect f-value. This is because the variance is a ratio of variance 1/variance 2, and Excel can’t work out which set of data is set 1 and set 2 without you explicitly telling it.
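If you have Python rather than Excel at hand, the two-tailed sample problem above (variances 109.63 and 65.99, sample sizes 41 and 21, alpha = 0.05 split into 0.025 per tail) can be checked with scipy, which looks up the same critical value the F-table gives.

```python
from scipy.stats import f

var1, n1 = 109.63, 41   # larger variance goes in the numerator
var2, n2 = 65.99, 21

F = var1 / var2                      # 1.66
df1, df2 = n1 - 1, n2 - 1            # 40 and 20
crit = f.ppf(1 - 0.025, df1, df2)    # ~2.287, the upper 2.5% point
p_two_tailed = 2 * f.sf(F, df1, df2)

print(round(F, 2), round(crit, 3), round(p_two_tailed, 3))
# 1.66 < 2.287, so the null hypothesis of equal variances is not rejected
```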
https://www.statisticshowto.datasciencecentral.com/probability-and-statistics/hypothesis-testing/f-test/
Equation 2 is the correct one. Set Up Two Equations Set up two separate and unrelated equations for x in terms of y, being careful not to treat them as two equations in two variables: To solve this, you have to set up two equalities and solve each separately. Writes the solutions of the first equation using absolute value symbols. If you already know the solution, you can tell immediately whether the number inside the absolute value brackets is positive or negative, and you can drop the absolute value brackets. If needed, clarify the difference between an absolute value equation and the statement of its solutions. Then explain why the equation the student originally wrote does not model the relationship described in the problem. What is the difference? Ask the student to consider these two solutions in the context of the problem to see if each fits the condition given in the problem i. Do you think you found all of the solutions of the first equation? This is the solution for equation 2. Instructional Implications Model using absolute value to represent differences between two numbers. Plug these values into both equations. Instructional Implications Provide feedback to the student concerning any errors made. What are these two values? Do you know whether or not the temperature on the first day of the month is greater or less than 74 degrees? Questions Eliciting Thinking How many solutions can an absolute value equation have? What are the solutions of the first equation? This means that any equation that has an absolute value in it has two possible solutions. Should you use absolute value symbols to show the solutions? Guide the student to write an equation to represent the relationship described in the second problem. If you plot the above two equations on a graph, they will both be straight lines that intersect the origin. Evaluate the expression x — 12 for a sample of values some of which are less than 12 and some of which are greater than 12 to demonstrate how the expression represents the difference between a particular value and You can now drop the absolute value brackets from the original equation and write instead: This is solution for equation 1. Emphasize that each expression simply means the difference between x and Examples of Student Work at this Level The student correctly writes and solves the first equation: Finds only one of the solutions of the first equation. Questions Eliciting Thinking Can you reread the first sentence of the second problem? For a random number x, both the following equations are true: A difference is described between two values. For example, represent the difference between x and 12 as x — 12 or 12 — x. Examples of Student Work at this Level The student: When you take the absolute value of a number, the result is always positive, even if the number itself is negative. Plug in known values to determine which solution is correct, then rewrite the equation without absolute value brackets. Ask the student to solve the equation and provide feedback. Provide additional opportunities for the student to write and solve absolute value equations. Sciencing Video Vault 1. Why is it necessary to use absolute value symbols to represent the difference that is described in the second problem? Writing an Equation with a Known Solution If you have values for x and y for the above example, you can determine which of the two possible relationships between x and y is true, and this tells you whether the expression in the absolute value brackets is positive or negative. 
Got It: the student provides complete and correct responses to all components of the task. A worked example of writing an absolute value equation from its graph: the vertex of the graph is (0, -3), so the equation has the form y = a|x - 0| + (-3), or y = a|x| - 3. To find the value of a, substitute the coordinates of the point (2, 1) into the equation and solve:

y = a|x| - 3 (write the equation)
1 = a|2| - 3 (substitute 1 for y and 2 for x)
1 = 2a - 3 (simplify)
4 = 2a (add 3 to each side)
2 = a (divide each side by 2)

An equation of the graph is therefore y = 2|x| - 3. Why was it necessary to use absolute value to write this equation? How many solutions do you think this equation has? Why are there two solutions? What would they mean in this context? (Writing Absolute Value Equations worksheet; source and access information contributed by MFAS FCRSTEM.)

A related spreadsheet task is summing absolute values in Excel. Supposing you have a list of data which contains both positive numbers and negatives, and you want to sum their absolute values, all the negatives must be calculated as positives; a sketch of one common way to do this follows.
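A minimal sketch of the sum-of-absolute-values calculation, under the assumption that the data sit in a worksheet range such as A1:A10: in Excel this is commonly done with an array-capable formula like =SUMPRODUCT(ABS(A1:A10)), which combines the standard SUMPRODUCT and ABS functions. The same arithmetic in Python, with made-up sample values:

```python
# Sum of absolute values: every negative is counted as a positive.
values = [12.5, -7, 3, -0.5]          # sample data (illustrative)
total = sum(abs(v) for v in values)   # 12.5 + 7 + 3 + 0.5
print(total)                          # 23.0
```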
http://konupagexiqyk.killarney10mile.com/how-to-write-absolute-value-equation-from-graph-in-excel-44525wy4844.html
18
45
Before you start managing a Solaris network, you need to know the definitions of some terms used in networking. There are numerous acronyms related to networking, and many of them are explained in the following sections. You'll first learn about the networking model that is deployed by Solaris 10, and then you'll learn about the types of networks that are available, including the various network protocols. Finally, you'll learn about the physical components of the network hardware, including the network interfaces and cables. The term network topology refers to the overall picture of the network and the arrangement in which the nodes on a network are connected to each other. The topology describes small and large networks, including local area networks (LANs) and wide area networks (WANs). A LAN is a set of hosts, usually in the same building and on the same floor, connected by a high-speed medium such as Ethernet. A LAN might be a single Internet Protocol (IP) network or a collection of networks or subnets that are connected through high-speed switches and/or routers. The network interface, and cable or wire, used for computer networks is referred to as network media. Normally a type of twisted-pair wire or fiber-optic cable connects nodes on a LAN. Twisted-pair cable has less bandwidth than optical fiber, but it is less costly and easier to install. With twisted-pair cable, the two individual wires are twisted around each other to minimize interference from the other twisted pairs in the cable. Twisted pair cable is available in two different categories: In addition, twisted-pair cable is available in stranded or solid wire (22 to 26 gauge). Stranded wire is used most commonly because it is very flexible and can be bent around corners. Solid wire cable suffers less attenuation (that is, signal loss) and can span longer distances, but it is less flexible than stranded wire and can break if it is repeatedly bent. Furthermore, cable is grouped into seven categories, according to the Electronic Industries Alliance/Telecommunications Industry Association (EIA/TIA) standard EIA/TIA-568, based on its transmission capacity. The categories are listed in Table 8.1. You can see from Table 8.1 that there are several variants of twisted-pair cable, each with different capacities. For example, Category 5 (Cat 5) UTP cable can support sustained data throughput of 100Mbps. A wide area network (WAN) is a network that covers a potentially vast geographic area. An example of a WAN is the Internet. Another example is an enterprise network that links the separate offices of a single corporation into one network spanning an entire country or perhaps an entire continent. A WAN, unlike a LAN, usually makes use of third-party service providers for interconnection. It is a common misconception among newcomers to the world of networking that a WAN is simply a LAN but on a larger scale. This is not true because different technologies, equipment, and protocols are used in LANs and WANs. For example, Ethernet is a LAN technology that is not usually used in WANs (but this is changing with wider availability and lower cost of high-speed long-distance fiber connections). Network Protocols and Network Models A network protocol is the part of the network that you configure but cannot see. It's the "language" of the network, which controls data transmission between systems across the network. To understand protocols, you need to first understand network models. 
A network model is an abstract common structure used to describe communication between systems. The two network models that provide the framework for network communication and that are the standards used in Solaris network environments are the International Standards Organization (ISO)/Open Systems Interconnection (OSI) reference model and the Transmission Control Protocol/Internet Protocol (TCP/IP) model. These models are discussed in the following sections. The ISO/OSI Model The seven-layered ISO/OSI model was devised in the early 1980s. Although this model represents an ideal world and is somewhat meaningless in today's networking environment, it's quite helpful in identifying the distinct functions that are necessary for network communication to occur. In the ISO/OSI model, individual services that are required for communication are arranged in seven layers that build on one another. Each layer describes a specific network function, as shown in Figure 8.1. Figure 8.1. The seven-layer ISO/OSI model. Table 8.2 describes the function of each individual layer. The TCP/IP Model In order for a network to function properly, information must be delivered to the intended destination in an intelligible form. Because different types of networking software and hardware need to interact to perform the network function, designers developed the TCP/IP communications protocol suite (a collection of protocols), which is now recognized as a standard and is used throughout the world. Because it is a set of standards, TCP/IP runs on many different types of computers, making it easy for you to set up a heterogeneous network running any operating system that supports TCP/IP. The Solaris operating system includes the networking software to implement the TCP/IP communications protocol suite. The TCP/IP model is a network communications protocol suite that consists of a set of formal rules that describe how software and hardware should interact within a network. The TCP/IP model has five layers: Four or Five LayersBe careful on the exam because Sun has used both a four-layer and five-layer description of this model since Solaris 8. If a question describes a four-layer model then the hardware layer should be thought of as being integrated with the network interface layer. Each of these is discussed in the following sections. The Hardware Layer The TCP/IP model hardware layer corresponds to the ISO/OSI model physical layer and describes the network hardware, including electrical and mechanical connections to the network. This layer regulates the transmission of unstructured bit streams over a transmission medium, which might be one of the following: Support for Token Ring has been removed in Solaris 10, as it is now considered an obsolete technology. For each medium, the IEEE has created an associated standard under project 802, which was named for the month (February) and year (1980) of its inception. Each medium has its own standard, which is named based on the 802 project. For example, Ethernet has its own standard: 802.3. The Network Interface Layer The TCP/IP model network interface layer corresponds to the ISO/OSI data link layer; it manages the delivery of data across the physical network. This layer provides error detection and packet framing. Framing is a process of assembling bits into manageable units of data. A frame is a series of bits with a well-defined beginning and end. 
The network interface layer protocols include the following: The Internet Layer The TCP/IP model Internet layer corresponds to the ISO/OSI network layer and manages data addressing and delivery between networks, as well as fragmenting data for the data link layer. The Internet layer uses the following protocols: The Transport Layer The TCP/IP model transport layer corresponds to the ISO/OSI model transport layer and ensures that messages reach the correct application process by using Transmission Control Program (TCP) and User Datagram Protocol (UDP). TCP uses a reliable, connection-oriented circuit for connecting to application processes. A connection-oriented virtual circuit allows a host to send data in a continuous stream to another host. It guarantees that all data is delivered to the other end in the same order as it was sent and without duplication. Communication proceeds through three well-defined phases: connection establishment, data transfer, and connection release. UDP is a connectionless protocol. It has traditionally been faster than TCP because it does not have to establish a connection or handle acknowledgements. As a result, UDP does not guarantee delivery. UDP is lightweight and efficient, but the application program must take care of all error processing and retransmission. Considerable improvements in network technology, however, have virtually eliminated the performance gap between TCP and UDP, making TCP the protocol of choice. The Application Layer The TCP/IP model application layer corresponds to the session layer, presentation layer, and application layer of the ISO/OSI model. The TCP/IP model application layer manages user-accessed application programs and network services. This layer is responsible for defining the way in which cooperating networks represent data. The application layer protocols include the following: Know Layers and FunctionsFor the exam, ensure that you are familiar with the layers of both the OSI seven-layer model and the TCP/IP model. You should be able to identify functions/protocols that operate at each layer and the order in which the layers are processed. Encapsulation and Decapsulation When you think of systems communicating via a network, you can imagine the data progressing through each layer down from the application layer to the hardware layer, across the network, and then flowing back up from the hardware layer to the application layer. A header is added to each segment that is received on the way down the layers (encapsulation), and a header is removed from each segment on the way up through the layers (decapsulation). Each header contains specific address information so that the layers on the remote system know how to forward the communication. For example, in TCP/IP, a packet would contain a header from the physical layer, followed by a header from the network layer (IP), followed by a header from the transport layer (TCP), followed by the application protocol data. A packet is the basic unit of information to be transferred over the network. A packet is organized much like a conventional letter. Each packet has a header that corresponds to an envelope. The header contains the addresses of the recipient and the sender, plus information on how to handle the packet as it travels through each layer of the protocol suite. The message part of the packet corresponds to the contents of the letter itself. A packet can contain only a finite number of bytes of data, depending on the network medium in use. 
Therefore, typical communications such as email messages are split into packets. Ethernet is a standard that defines the physical components a machine uses to access the network and the speed at which the network runs. It includes specifications for cable, connectors, and computer interface components. Ethernet is a LAN technology that originally facilitated transmission of information between computers at speeds of up to 10Mbps. A later version of Ethernet, called 100BASE-T, or Fast Ethernet, pushed the speed up to 100Mbps, and Gigabit Ethernet supports data transfer rates of 1Gbps (1,000Mbps). Table 8.3 lists some common media names and their associated cable types. 10BASE2 and 10BASE5 media are now very rarely used; even 10BASE-T networks are becoming increasingly rare. The 100BASE-T type of Ethernet is the most popular medium, but it is gradually being replaced by newer systems that support 1000BASE-T (gigabit) and a growing number of fiber-optic connected devices. Ethernet uses a protocol called CSMA/CD, which stands for Carrier Sense Multiple Access with Collision Detection. Multiple Access means that every station can access the single cable to transmit data. Carrier Sense means that before transmitting data, a station checks the cable to determine whether any other station is already sending something. If the LAN appears to be idle, the station can begin to send data. When several computers connected to the same network need to send data, two computers might try to send at the same time, causing a collision of data. The Ethernet protocol senses this collision and notifies the computer to send the data again. How can two computers send data at the same time? Isn't Ethernet supposed to check the network for other systems that might be transmitting before sending data across the network? Here's what happens in a 10Mbps network: An Ethernet station sends data at a rate of 10Mbps. It allows 100 nanoseconds per bit of information that is transmitted. The signal travels about 0.3 meters (1 foot) in 1 nanosecond. After the electrical signal for the first bit has traveled about 30 meters (100 feet) down the wire, the station begins sending the second bit. An Ethernet cable can run for hundreds of feet. If two stations are located about 75 meters (250 feet) apart on the same cable and both begin transmitting at the same time, they will be in the middle of the third bit before the signal from each reaches the other station. This explains the need for the Collision Detection part of CSMA/CD. If two stations begin sending data at the same time, their signals collide nanoseconds later. When such a collision occurs, the two stations stop transmitting and try again later, after a randomly chosen delay period. This also explains why distances are an important consideration in planning Ethernet networks. Although an Ethernet network can be built by using one common signal wire, such an arrangement is not flexible enough to wire most buildings. Unlike an ordinary telephone circuit, Ethernet wire cannot be spliced to connect one copper wire to another. Instead, Ethernet requires a repeater, a simple station that is connected to two wires. When the repeater receives data on one wire, it repeats the data bit-for-bit on the other wire. When collisions occur, the repeater repeats the collision as well. In buildings that have two or more types of Ethernet cable, a common practice is to use media converters, switches, or repeaters to convert the Ethernet signal from one type of wire to another. 
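The collision argument above can be checked with simple arithmetic, using only the numbers quoted in the text (10Mbps, roughly 0.3 meters per nanosecond, two stations about 75 meters apart); the snippet below is just that calculation, not a simulation of Ethernet:

```python
# Rough CSMA/CD timing check using the figures quoted above.
bit_rate = 10e6                                # 10 Mbps
bit_time_ns = 1e9 / bit_rate                   # 100 ns per bit
speed_m_per_ns = 0.3                           # approximate signal speed in the cable
distance_m = 75                                # separation between the two stations
propagation_ns = distance_m / speed_m_per_ns   # 250 ns one way
bits_in_flight = propagation_ns / bit_time_ns  # 2.5 bit times
print(bit_time_ns, propagation_ns, bits_in_flight)
```

About two and a half bit times pass before each signal reaches the other station, which matches the statement that both senders are into their third bit when the collision occurs.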
Network hardware is discussed in more detail later in the chapter. As of Solaris 10, the FDDI interface is no longer supported. The network hardware is the physical part of the network that you can actually see. The physical components connect the systems and include the network interface cards (NICs), hosts, cable, connectors, hubs, and routers, some of which are discussed in the following sections. The computer hardware that allows you to connect a computer to a network is known as a network interface card (NIC), or network adapter. The network interface can support one or more communication protocols that specify how computers use the physical mediumthe network cable or the radio spectrumto exchange data. Most computer systems come with a preinstalled network interface. Each LAN media type has its own associated network interface. For example, if you want to use Ethernet as your network medium, you must have an Ethernet interface installed in each host that is to be part of the network. The connectors on the board to which you attach the Ethernet cable are referred to as Ethernet ports. If you are an experienced Unix/Solaris user, you are no doubt familiar with the term host, which is often used as a synonym for computer or machine. From a TCP/IP perspective, only two types of entities exist on a network: routers and hosts. When a host initiates communication, it is called a sending host, or sender. For example, a host initiates communications when the user uses ping or sends an email message to another user. The host that is the target of the communication is called the receiving host, or recipient. Each host has an Internet address and a hardware address that identify it to its peers on the network, and usually a hostname. These are described in Table 8.4. Hubs and Switches Ethernet cable is run to each system from a hub or switch. A hub does nothing more than connect all the Ethernet cables so that the computers can connect to one another. It does not boost the signal or route packets from one network to another. When a packet arrives at one port, it is copied to the other ports so that all the computers on the LAN can see all the packets. Hubs can support from two to several hundred systems. A passive hub serves as a conduit for the data, allowing it to go from one device, or segment, to another. Intelligent hubs include additional features that let you monitor the traffic passing through the hub and configure each port in the hub. Intelligent hubs are also called manageable hubs. A third type of hub, called a packet-switching hub (or switch), is a special type of hub that forwards packets to the appropriate port based on the packet's destination address. A network that utilizes conventional hubs is a shared network because every node on the network competes for a fraction of the total bandwidth. In a shared network, data packets are broadcast to all stations until they discover their intended destinations; this wastes both time and network bandwidth. A switch remedies this problem by looking at the address for each data packet and delivering the packet directly to the correct destination, and this provides much better performance than the hub system. Most switches also support load balancing so that ports are dynamically reassigned to different LAN segments based on traffic patterns. Most switches are autosensing, which means they support both Fast Ethernet (100Mbps) and Gigabit Ethernet (1000Mbps) ports. 
This lets the administrator establish a dedicated Ethernet channel for high-traffic devices such as servers. In addition, some switches include a feature called full-duplex data transfer. With this feature, all computers on the switch can "talk" to the switch at the same time. Full-duplex data transfer also allows switches to send and receive data simultaneously to all connections, whereas a hub cannot. A hub simply works with one computer at a time and only sends or only receives data because it cannot handle simultaneous two-way communication. A router is a machine that forwards packets from one network to another. In other words, whereas a hub connects computers, a router connects networks. To do this, a router must have at least two network interfaces. A machine with only one network interface cannot forward packets; it is considered a host. Most of the machines you set up on a network are likely to be hosts. Routers use packet headers and a forwarding table, called a routing table, to determine where packets go. Routes can be either static (in which case they are preset by the network/system administrator) or dynamic (in which case a route to a destination host is learned or calculated at the time that it is requested). In IPv4, each host on a TCP/IP network has a 32-bit network address, referred to as the IP address, that must be unique for each host on the network. If the host will participate on the Internet, this address must also be unique to the Internet. For this reason, IP addresses are assigned by special organizations known as regional Internet registries (RIRs). The IPv4 address space is the responsibility of the Internet Corporation for Assigned Names and Numbers (ICANN; see www.icann.org). The overall responsibility for IP addresses, including the responsibility for allocation of IP ranges, belongs to the Internet Assigned Numbers Authority (IANA; see www.iana.org). An IP address is a sequence of 4 bytes and is written in the form of four decimal integers separated by periods (for example, 10.11.12.13). Each integer is 8 bits long and ranges from 0 to 255. An IP address consists of two parts: a network ID, which is assigned by an RIR, and a host ID, which is assigned by the local administrator. The first integer of the address (10 in the example above) determines the address type and is referred to as its class. Five classes of IP addresses exist: A, B, C, D, and E. The following sections briefly describe each class. IPv6: Due to limited address space and other considerations of the IPv4 scheme, a revised IP protocol is gradually being made available. The protocol, named IPv6, has been designed to overcome the major limitations of the current approach. IPv6 is compatible with IPv4, but IPv6 makes it possible to assign many more unique Internet addresses and offers support for improved security and performance. A brief section on IPv6 appears later in this chapter for background information, even though it is not a specific objective in the Solaris 10 Part II exam. Class A Addresses: Class A addresses are used for very large networks with millions of hosts, such as the Internet. A Class A network number uses the first 8 bits of the IP address as its network ID. The remaining 24 bits make up the host part of the IP address. The value assigned to the first byte of a Class A network number falls within the range 0 to 127. For example, consider the IP address 75.4.10.4. The value 75 in the first byte indicates that the host is on a Class A network.
The remaining bytes, 4.10.4, establish the host address. An RIR assigns only the first byte of a Class A number. Use of the remaining 3 bytes is left to the discretion of the owner of the network number. Only 126 Class A networks can exist because 0 is reserved for the network, and 127 is reserved for the loopback device, leaving 1 to 126 as usable addresses. Each Class A network can accommodate up to 16,777,214 hosts. The 10.x.x.x network is reserved for use by private networks for hosts that are not connected to the Internet. If you want to assign a Class A network and you are not visible on the Internet, you can use one of these network addresses. Class B Addresses: Class B addresses are used for medium-size networks, such as universities and large businesses with many hosts. A Class B address uses 16 bits for the network number and 16 bits for the host number. The first byte of a Class B network number is in the range 128 to 191. In the number 129.144.50.56, the first 2 bytes, 129.144, are assigned by an RIR and make up the network address. The last 2 bytes, 50.56, make up the host address and are assigned at the discretion of the network's owner. The first and last host addresses on the network are reserved: the all-zeros host address is reserved for the network, and the all-ones host address is reserved as the IP broadcast address. Therefore, the actual number of hosts that can be assigned on a Class B network is 65,534, not 65,536. The network address ranges 172.16.x.x through 172.31.x.x are reserved for use by private networks that are not connected to the Internet. If you want to assign a Class B network and you are not visible on the Internet, you can use one of these network addresses. Class C Addresses: Class C addresses are used for small networks with no more than 254 hosts. A Class C address uses 24 bits for the network number and 8 bits for the host number. A Class C network number occupies the first 3 bytes of an IP address; only the fourth byte is assigned at the discretion of the network's owner. The first byte of a Class C network number covers the range 192 to 223. The second and third bytes each cover the range 0 to 255. A typical Class C address might be 192.5.2.5, with the first 3 bytes, 192.5.2, forming the network number. The final byte in this example, 5, is the host number. A Class C network can accommodate a maximum of 254 hosts out of 256 addresses; again, this is because the first (all-zeros) and last (all-ones) values are reserved. The 192.168.x.x network ranges are specially reserved for private networks that are not connected to the Internet. If you want to assign a Class C network and you are not visible on the Internet, you can use one of these network addresses. Class D and E Addresses: Class D addresses (first byte 224 to 239) are reserved for multicast groups, and Class E addresses (first byte 240 to 255) are set aside for experimental use; neither class is assigned to individual hosts in the normal way. Planning for IP Addressing: The first step in planning for IP addressing on a network is to determine how many IP addresses you need and whether the network is going to be connected to the Internet. If the network is not going to be connected to the Internet, you could choose addresses in the 10.x.x.x, 172.16.x.x through 172.31.x.x, or 192.168.x.x ranges. For networks that are going to be connected to the Internet, and hence visible to the rest of the world, you need to obtain legal IP addresses; this is necessary because each host on a network must have a unique IP address. IP addresses can be obtained either through an Internet service provider (ISP) or an RIR, as mentioned earlier in this section.
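The classful ranges just described can be summarized in a few lines of code; this is purely an illustration of the first-byte rules above (modern networks generally use classless CIDR addressing instead, and this helper is not part of Solaris):

```python
def ipv4_class(address):
    """Return the classful category of a dotted-decimal IPv4 address."""
    first = int(address.split(".")[0])
    if first <= 127:
        return "A"        # 0 and 127 are reserved (network / loopback)
    if first <= 191:
        return "B"
    if first <= 223:
        return "C"
    if first <= 239:
        return "D"        # multicast
    return "E"            # experimental

print(ipv4_class("75.4.10.4"))      # A
print(ipv4_class("129.144.50.56"))  # B
print(ipv4_class("192.5.2.5"))      # C
```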
When you receive your network number, you can plan how you will assign the host parts of the IP address. Your nearest RIR depends on where, geographically, your network is located. The current list of RIRs is as follows: After you contact the correct RIR, you have to justify why you should be given global IP addresses. Normally, unless yours is a large organization, you would be expected to obtain IP addresses from your ISP. Being Careful with IP Addresses You should not arbitrarily assign network numbers to a network, even if you do not plan to attach your network to other existing TCP/IP networks. As your network grows, you might decide to connect it to other networks. Changing IP addresses at that time can be a great deal of work and can cause downtime. Instead, you might want to use the specially reserved IP networks 192.168.x.x, or 172.16.x.x172.31.x.x, or 10.x.x.x for networks that are not connected to the Internet. IPv6 No questions on the exam relate to IPv6. This section is included purely for background information. As the Internet community continues to grow and use more IPv4 addresses, we have been running out of available IPv4 addresses. IPv6, also called IP Next Generation (IPng), improves Internet capability by using a simplified header format, longer addresses (128 instead of 32 bits), support for authentication and privacy, autoconfiguration of address assignments, and new Quality of Service (QoS) capabilities. Specifically, IPv6 provides these enhancements: IPv6 increases the IP address size from 32 bits to 128 bits, to support more levels of addressing hierarchy. Thus, the number of potential addresses is 4 billion x 4 billion x 4 billion times the size of the IPv4 address space. Here's an example of an IPv6 address: The first 48 bits of the address represent the public topology. The next 16 bits represent the site topology.
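The size comparison quoted above is easy to verify: 128-bit addresses allow 2^128 combinations versus 2^32 for IPv4, a factor of 2^96, which is exactly "4 billion x 4 billion x 4 billion". The address printed below is taken from the 2001:db8::/32 range reserved for documentation examples and stands in for the example address referred to in the text:

```python
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128
ratio = ipv6_total // ipv4_total       # 2 ** 96
print(ipv4_total)                      # 4294967296, about 4 billion
print(ratio == (2 ** 32) ** 3)         # True: roughly 4 billion cubed times larger
print("2001:db8::1")                   # a documentation-range IPv6 address
```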
http://books.gigatux.nl/mirror/solaris10examprep/0789734613/ch08lev1sec4.html
18
16
A restriction enzyme is a protein that recognizes a specific, short nucleotide sequence and cuts the DNA only at that specific site, which is known as the restriction site or target sequence. When a restriction endonuclease recognizes its sequence, it snips through the DNA molecule by catalyzing the hydrolysis (splitting of a chemical bond by addition of a water molecule) of the bond between adjacent nucleotides, and it cuts both strands. Because they cut within the molecule, these enzymes are often called restriction endonucleases. Different enzymes that recognize and cleave in the same location are known as isoschizomers. A restriction enzyme and its corresponding methylase constitute the restriction-modification system of a bacterial species.

Type II restriction enzymes differ from types I and III in that they cleave DNA at specific sites within the recognition site; the others cleave DNA randomly, sometimes hundreds of bases from the recognition sequence. Restriction enzymes recognize a few hundred distinct sequences, generally four to eight bases in length, and the rarer the site an enzyme recognizes, the smaller the number of pieces produced by a given restriction endonuclease. Thus treatment of the DNA molecule in question with the enzyme produces 11 fragments, each with a precise length and nucleotide sequence. The ends of the DNA the enzyme has cut may have an overhanging piece of single-stranded DNA; these are called "sticky ends" because they are able to form base pairs with any DNA molecule that contains the complementary sticky end. Other cuts leave blunt ends, with no overhang.

In order to be able to sequence DNA, it is first necessary to cut it into smaller fragments, and what is needed is a way to cleave the DNA molecule at a few precisely located sites so that a small set of homogeneous fragments is produced. The ability of the enzymes to cut DNA at precise locations enabled researchers to isolate gene-containing fragments and recombine them with other molecules of DNA. By cutting open vector DNA with the same restriction enzymes used to cleave the target DNA, complementary "sticky ends" are created, and ligation enzymes can then be used to paste in new genomic sequences. Restriction enzymes can be isolated from bacterial cells and used in the laboratory to manipulate fragments of DNA, such as those that contain genes; for this reason they are indispensable tools of recombinant DNA technology (genetic engineering). There is a great deal of variation in restriction sites even within a species, which allows a scientist to choose from a number of places to cut a plasmid with a restriction enzyme.

Daniel Nathans and Kathleen Danna later showed that cleavage of simian virus 40 (SV40) DNA by restriction enzymes yields specific fragments that can be separated using polyacrylamide gel electrophoresis, thus showing that restriction enzymes can also be used for mapping DNA; the discovery and characterization of restriction enzymes earned Werner Arber, Hamilton O. Smith, and Daniel Nathans the 1978 Nobel Prize in Physiology or Medicine. Practical notes for laboratory work: contamination will ruin your experiment, know what buffers to use, and always read carefully the information sheet that comes with your enzymes as well as the catalogue information.
The three types differ in their recognition sequence, subunit composition, cleavage position, and cofactor requirements. The fragments produced by a digest can be separated from one another and the sequence of each determined. In live bacteria, restriction enzymes function to defend the cell against invading viral bacteriophages: in the bacterial cell they cleave foreign DNA, thus eliminating infecting organisms. The names of restriction enzymes are derived from the genus, species, and strain designations of the bacteria that produce them; for example, the enzyme EcoRI is produced by Escherichia coli strain RY13. A great many restriction enzymes have been isolated from the bacteria that manufacture them, and the discovery of enzymes that could cut and paste DNA made genetic engineering possible: restriction enzymes, found naturally in bacteria, cut DNA at specific sequences, while another enzyme, DNA ligase, can attach or rejoin DNA fragments with complementary ends.
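A small computational illustration of cutting only at a specific recognition site (GAATTC is the well-known EcoRI site; the sample sequence and the simplified single-strand cut position are assumptions made for the sketch):

```python
def digest(seq, site="GAATTC", cut_offset=1):
    """Cut a DNA string wherever the recognition site occurs.

    cut_offset=1 mimics EcoRI, which cuts between G and AATTC,
    leaving single-stranded "sticky" overhangs.
    """
    fragments, start = [], 0
    pos = seq.find(site)
    while pos != -1:
        fragments.append(seq[start:pos + cut_offset])  # up to the cut point
        start = pos + cut_offset
        pos = seq.find(site, pos + 1)
    fragments.append(seq[start:])                      # remaining tail
    return fragments

print(digest("TTGAATTCCGATGGAATTCAA"))
# ['TTG', 'AATTCCGATGG', 'AATTCAA']
```

The number and sizes of the fragments are completely determined by where the recognition site happens to occur, which is the property that makes restriction digests reproducible.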
http://dobiwawacukefu.mi-centre.com/restriction-enzymes-and-the-dna-7610676106.html
18
49
Time Complexity: The time complexity of an algorithm quantifies the amount of time taken by the algorithm to run as a function of the length of the string representing the input. It is commonly expressed using big O notation, which excludes coefficients and lower order terms. When expressed this way, the time complexity is said to be described asymptotically, i.e., as the input size goes to infinity. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, where an elementary operation takes a fixed amount of time to perform; thus the amount of time taken and the number of elementary operations performed by the algorithm differ by at most a constant factor.

Space Complexity: SPACE is the computational resource describing the memory available to a deterministic Turing machine. It represents the total amount of memory space that a "normal" physical computer would need to solve a given problem with a given algorithm. It is one of the most well-studied complexity measures, because it corresponds so closely to an important real-world resource: the amount of physical computer memory needed to run a given program.

To analyze a given algorithm we need to know on which inputs the algorithm takes less time (performs well) and on which inputs it takes a long time. We have already seen that an algorithm can be represented in the form of an expression, and we can describe it with several expressions: one each for the worst case, the average case, and the best case.

1. Worst case:
• Defines the input for which the algorithm takes the most time.
• The input is the one for which the algorithm runs the slowest.

2. Average case:
• Provides a prediction about the running time of the algorithm.
• Assumes that the input is random.
Lower bound <= average bound <= upper bound

3. Best case:
• Defines the input for which the algorithm takes the least time.
• The input is the one for which the algorithm runs the fastest.

For a given algorithm, we can represent the best, worst, and average cases in the form of expressions. As an example, let f(n) be the function which represents the given algorithm: f(n) = n² + 600 for the worst case and f(n) = n + 100n + 600 for the best case; similarly, an expression can be written for the average case. Each expression defines the inputs with which the algorithm takes the corresponding running time (or memory).

How to compare algorithms:
1. Execution time? How fast a program runs depends on the particular computer it is executed on, so this is not a machine-independent measure.
2. Number of statements? The count depends directly on the number of lines in the program.
3. Ideal solution: express the running time of a given algorithm as a function of the input size n and compare these functions. This kind of comparison is independent of machine time, programming style, and so on.

What is rate of growth? The rate at which the running time increases as a function of the input is called the rate of growth. Suppose you went to a shop to buy a car and a cycle. If a friend sees you there and asks what you are buying, in general you say a car, because the cost of the car is large compared to the cost of the cycle. Similarly, if we have an expression like n⁴ + n³ + 2n² + 100, we ignore all the lower-order terms and keep only n⁴, the highest power of n; this is the idea of rate of growth. Common rates of growth, in decreasing order of cost, are n!, 2ⁿ, n³, n², n log n, n, log n, and 1. A short sketch contrasting best-case and worst-case inputs follows.
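A minimal sketch of best-case versus worst-case inputs, using linear search as the example algorithm (the function and the sample data are illustrative, not taken from the text above): searching for an element that sits at the front of the list is the best case, while searching for one that is absent forces the loop to examine every element, which is the worst case.

```python
def linear_search(items, target):
    """Return (index, comparisons); index is -1 if target is not present."""
    comparisons = 0
    for i, value in enumerate(items):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

data = list(range(1000))
print(linear_search(data, 0))    # best case:  (0, 1)     -- one comparison
print(linear_search(data, -5))   # worst case: (-1, 1000) -- n comparisons
```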
The efficiency of an algorithm can be analyzed at two different stages, before implementation and after implementation:
• A priori analysis: this is the theoretical analysis of an algorithm. Efficiency is measured by assuming that all other factors, e.g. processor speed, are constant and have no effect on the implementation.
• A posteriori analysis: this is the empirical analysis of an algorithm. The selected algorithm is implemented in a programming language and executed on a target computer. In this analysis, actual statistics such as running time and space required are collected.
Here we shall concentrate on a priori algorithm analysis. Algorithm analysis deals with the execution or running time of the various operations involved; the running time of an operation can be defined as the number of computer instructions executed per operation. Suppose C is an algorithm and n is the size of the input data. The time and space used by the algorithm C are the two main factors which decide its efficiency:
• Time factor: the time is measured by counting the number of key operations, such as comparisons in a sorting algorithm.
• Space factor: the space is measured by counting the maximum memory space required by the algorithm.
The complexity of an algorithm, f(n), gives the running time and/or storage space required by the algorithm in terms of n, the size of the input data.
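A tiny a posteriori (empirical) sketch, assuming nothing beyond the Python standard library: implement an operation, run it on the target machine, and collect the actual running time.

```python
import time

def measure(n):
    """Time a simple O(n) operation: summing n numbers."""
    start = time.perf_counter()
    total = sum(range(n))
    return time.perf_counter() - start

for n in (100_000, 1_000_000):
    print(n, f"{measure(n):.6f} seconds")
# The measured times depend on the machine, which is exactly why
# a priori (machine-independent) analysis is also needed.
```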
http://sourabhgupta.com/index.aspx/onlinelearning_daacontentextended.php?aid=7
18
120
Equations of motion |Part of a series of articles about| In physics, equations of motion are equations that describe the behavior of a physical system in terms of its motion as a function of time. More specifically, the equations of motion describe the behaviour of a physical system as a set of mathematical functions in terms of dynamic variables: normally spatial coordinates and time are used, but others are also possible, such as momentum components and time. The most general choice are generalized coordinates which can be any convenient variables characteristic of the physical system. The functions are defined in a Euclidean space in classical mechanics, but are replaced by curved spaces in relativity. If the dynamics of a system is known, the equations are the solutions to the differential equations describing the motion of the dynamics. There are two main descriptions of motion: dynamics and kinematics. Dynamics is general, since momenta, forces and energy of the particles are taken into account. In this instance, sometimes the term refers to the differential equations that the system satisfies (e.g., Newton's second law or Euler–Lagrange equations), and sometimes to the solutions to those equations. However, kinematics is simpler as it concerns only variables derived from the positions of objects, and time. In circumstances of constant acceleration, these simpler equations of motion are usually referred to as the SUVAT equations, arising from the definitions of kinematic quantities: displacement (s), initial velocity (u), final velocity (v), acceleration (a), and time (t). A differential equation of motion, usually identified as some physical law and applying definitions of physical quantities, is used to set up an equation for the problem. Solving the differential equation will lead to a general solution with arbitrary constants, the arbitrariness corresponding to a family of solutions. A particular solution can be obtained by setting the initial values, which fixes the values of the constants. To state this formally, in general an equation of motion M is a function of the position r of the object, its velocity (the first time derivative of r, v = dr/), and its acceleration (the second derivative of r, a = d2r/), and time t. Euclidean vectors in 3D are denoted throughout in bold. This is equivalent to saying an equation of motion in r is a second order ordinary differential equation (ODE) in r, The solution r(t) to the equation of motion, with specified initial values, describes the system for all times t after t = 0. Other dynamical variables like the momentum p of the object, or quantities derived from r and p like angular momentum, can be used in place of r as the quantity to solve for from some equation of motion, although the position of the object at time t is by far the most sought-after quantity. Sometimes, the equation will be linear and is more likely to be exactly solvable. In general, the equation will be non-linear, and cannot be solved exactly so a variety of approximations must be used. The solutions to nonlinear equations may show chaotic behavior depending on how sensitive the system is to the initial conditions. 
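Restating the definition just given in conventional notation, an equation of motion is a relation of the form

```latex
M\!\left[\mathbf{r}(t),\, \dot{\mathbf{r}}(t),\, \ddot{\mathbf{r}}(t),\, t\right] = 0,
\qquad
\mathbf{v} = \dot{\mathbf{r}} = \frac{d\mathbf{r}}{dt},
\qquad
\mathbf{a} = \ddot{\mathbf{r}} = \frac{d^{2}\mathbf{r}}{dt^{2}},
```

and its solution r(t) is pinned down by initial values such as the initial position r(0) and initial velocity v(0).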
- 1 History - 2 Kinematic equations for one particle - 3 Dynamic equations of motion - 4 Analytical mechanics - 5 Electrodynamics - 6 General relativity - 7 Analogues for waves and fields - 8 See also - 9 References Historically, equations of motion first appeared in classical mechanics to describe the motion of massive objects, a notable application was to celestial mechanics to predict the motion of the planets as if they orbit like clockwork (this was how Neptune was predicted before its discovery), and also investigate the stability of the solar system. It is important to observe that the huge body of work involving kinematics, dynamics and the mathematical models of the universe developed in baby steps – faltering, getting up and correcting itself – over three millennia and included contributions of both known names and others who have since faded from the annals of history. In antiquity, notwithstanding the success of priests, astrologers and astronomers in predicting solar and lunar eclipses, the solstices and the equinoxes of the Sun and the period of the Moon, there was nothing other than a set of algorithms to help them. Despite the great strides made in the development of geometry made by Ancient Greeks and surveys in Rome, we were to wait for another thousand years before the first equations of motion arrive. The exposure of Europe to the collected works by the Muslims of the Greeks, the Indians and the Islamic scholars, such as Euclid’s Elements, the works of Archimedes, and Al-Khwārizmī's treatises began in Spain, and scholars from all over Europe went to Spain, read, copied and translated the learning into Latin. The exposure of Europe to Arabic numerals and their ease in computations encouraged first the scholars to learn them and then the merchants and invigorated the spread of knowledge throughout Europe. By the 13th century the universities of Oxford and Paris had come up, and the scholars were now studying mathematics and philosophy with lesser worries about mundane chores of life—the fields were not as clearly demarcated as they are in the modern times. Of these, compendia and redactions, such as those of Johannes Campanus, of Euclid and Aristotle, confronted scholars with ideas about infinity and the ratio theory of elements as a means of expressing relations between various quantities involved with moving bodies. These studies led to a new body of knowledge that is now known as physics. Of these institutes Merton College sheltered a group of scholars devoted to natural science, mainly physics, astronomy and mathematics, of similar in stature to the intellectuals at the University of Paris. Thomas Bradwardine, one of those scholars, extended Aristotelian quantities such as distance and velocity, and assigned intensity and extension to them. Bradwardine suggested an exponential law involving force, resistance, distance, velocity and time. Nicholas Oresme further extended Bradwardine's arguments. The Merton school proved that the quantity of motion of a body undergoing a uniformly accelerated motion is equal to the quantity of a uniform motion at the speed achieved halfway through the accelerated motion. For writers on kinematics before Galileo, since small time intervals could not be measured, the affinity between time and motion was obscure. They used time as a function of distance, and in free fall, greater velocity as a result of greater elevation. 
Only Domingo de Soto, a Spanish theologian, in his commentary on Aristotle's Physics published in 1545, after defining "uniform difform" motion (which is uniformly accelerated motion) – the word velocity wasn't used – as proportional to time, declared correctly that this kind of motion was identifiable with freely falling bodies and projectiles, without his proving these propositions or suggesting a formula relating time, velocity and distance. De Soto's comments are shockingly correct regarding the definitions of acceleration (acceleration was a rate of change of motion (velocity) in time) and the observation that during the violent motion of ascent acceleration would be negative. Discourses such as these spread throughout Europe and definitely influenced Galileo and others, and helped in laying the foundation of kinematics. Galileo deduced the equation s = 1/gt2 in his work geometrically, using the Merton rule, now known as a special case of one of the equations of kinematics. He couldn't use the now-familiar mathematical reasoning. The relationships between speed, distance, time and acceleration was not known at the time. Galileo was the first to show that the path of a projectile is a parabola. Galileo had an understanding of centrifugal force and gave a correct definition of momentum. This emphasis of momentum as a fundamental quantity in dynamics is of prime importance. He measured momentum by the product of velocity and weight; mass is a later concept, developed by Huygens and Newton. In the swinging of a simple pendulum, Galileo says in Discourses that "every momentum acquired in the descent along an arc is equal to that which causes the same moving body to ascend through the same arc." His analysis on projectiles indicates that Galileo had grasped the first law and the second law of motion. He did not generalize and make them applicable to bodies not subject to the earth's gravitation. That step was Newton's contribution. The term "inertia" was used by Kepler who applied it to bodies at rest. (The first law of motion is now often called the law of inertia.) Galileo did not fully grasp the third law of motion, the law of the equality of action and reaction, though he corrected some errors of Aristotle. With Stevin and others Galileo also wrote on statics. He formulated the principle of the parallelogram of forces, but he did not fully recognize its scope. Galileo also was interested by the laws of the pendulum, his first observations of which were as a young man. In 1583, while he was praying in the cathedral at Pisa, his attention was arrested by the motion of the great lamp lighted and left swinging, referencing his own pulse for time keeping. To him the period appeared the same, even after the motion had greatly diminished, discovering the isochronism of the pendulum. More careful experiments carried out by him later, and described in his Discourses, revealed the period of oscillation varies with the square root of length but is independent of the mass the pendulum. Later the equations of motion also appeared in electrodynamics, when describing the motion of charged particles in electric and magnetic fields, the Lorentz force is the general equation which serves as the definition of what is meant by an electric field and magnetic field. With the advent of special relativity and general relativity, the theoretical modifications to spacetime meant the classical equations of motion were also modified to account for the finite speed of light, and curvature of spacetime. 
In all these cases the differential equations were in terms of a function describing the particle's trajectory in terms of space and time coordinates, as influenced by forces or energy transformations. However, the equations of quantum mechanics can also be considered "equations of motion", since they are differential equations of the wavefunction, which describes how a quantum state behaves analogously using the space and time coordinates of the particles. There are analogs of equations of motion in other areas of physics, for collections of physical phenomena that can be considered waves, fluids, or fields. Kinematic equations for one particle From the instantaneous position r = r(t), instantaneous meaning at an instant value of time t, the instantaneous velocity v = v(t) and acceleration a = a(t) have the general, coordinate-independent definitions; Notice that velocity always points in the direction of motion, in other words for a curved path it is the tangent vector. Loosely speaking, first order derivatives are related to tangents of curves. Still for curved paths, the acceleration is directed towards the center of curvature of the path. Again, loosely speaking, second order derivatives are related to curvature. The rotational analogues are the "angular vector" (angle the particle rotates about some axis) θ = θ(t), angular velocity ω = ω(t), and angular acceleration α = α(t): where n̂ is a unit vector in the direction of the axis of rotation, and θ is the angle the object turns through about the axis. The following relation holds for a point-like particle, orbiting about some axis with angular velocity ω: where r is the position vector of the particle (radial from the rotation axis) and v the tangential velocity of the particle. For a rotating continuum rigid body, these relations hold for each point in the rigid body. The differential equation of motion for a particle of constant or uniform acceleration in a straight line is simple: the acceleration is constant, so the second derivative of the position of the object is constant. The results of this case are summarized below. Constant translational acceleration in a straight line These equations apply to a particle moving linearly, in three dimensions in a straight line with constant acceleration. Since the position, velocity, and acceleration are collinear (parallel, and lie on the same line) – only the magnitudes of these vectors are necessary, and because the motion is along a straight line, the problem effectively reduces from three dimensions to one. - r0 is the particle's initial position - r is the particle's final position - v0 is the particle's initial velocity - v is the particle's final velocity - a is the particle's acceleration - t is the time interval Here a is constant acceleration, or in the case of bodies moving under the influence of gravity, the standard gravity g is used. Note that each of the equations contains four of the five variables, so in this situation it is sufficient to know three out of the five variables to calculate the remaining two. In elementary physics the same formulae are frequently written in different notation as: where u has replaced v0, s replaces r, and s0 = 0. They are often referred to as the SUVAT equations, where "SUVAT" is an acronym from the variables: s = displacement (s0 = initial displacement), u = initial velocity, v = final velocity, a = acceleration, t = time. 
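For reference, in the s, u, v, a, t notation just defined, the constant-acceleration (SUVAT) equations take their standard textbook forms:

```latex
v = u + at, \qquad
s = ut + \tfrac{1}{2}at^{2}, \qquad
s = \tfrac{1}{2}(u + v)\,t, \qquad
v^{2} = u^{2} + 2as, \qquad
s = vt - \tfrac{1}{2}at^{2}.
```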
Constant linear acceleration in any direction The initial position, initial velocity, and acceleration vectors need not be collinear, and take an almost identical form. The only difference is that the square magnitudes of the velocities require the dot product. The derivations are essentially the same as in the collinear case, Elementary and frequent examples in kinematics involve projectiles, for example a ball thrown upwards into the air. Given initial speed u, one can calculate how high the ball will travel before it begins to fall. The acceleration is local acceleration of gravity g. At this point one must remember that while these quantities appear to be scalars, the direction of displacement, speed and acceleration is important. They could in fact be considered as unidirectional vectors. Choosing s to measure up from the ground, the acceleration a must be in fact −g, since the force of gravity acts downwards and therefore also the acceleration on the ball due to it. At the highest point, the ball will be at rest: therefore v = 0. Using equation in the set above, we have: Substituting and cancelling minus signs gives: Constant circular acceleration The analogues of the above equations can be written for rotation. Again these axial vectors must all be parallel to the axis of rotation, so only the magnitudes of the vectors are necessary, where α is the constant angular acceleration, ω is the angular velocity, ω0 is the initial angular velocity, θ is the angle turned through (angular displacement), θ0 is the initial angle, and t is the time taken to rotate from the initial state to the final state. General planar motion These are the kinematic equations for a particle traversing a path in a plane, described by position r = r(t). They are simply the time derivatives of the position vector in plane polar coordinates using the definitions of physical quantities above for angular velocity ω and angular acceleration α. These are instantaneous quantities which change with time. The position of the particle is where êr and êθ are the polar unit vectors. Differentiating with respect to time gives the velocity with radial component dr/ and an additional component rω due to the rotation. Differentiating with respect to time again obtains the acceleration Special cases of motion described be these equations are summarized qualitatively in the table below. Two have already been discussed above, in the cases that either the radial components or the angular components are zero, and the non-zero component of motion describes uniform acceleration. 
|State of motion||Constant r||r linear in t||r quadratic in t||r non-linear in t| |Constant θ||Stationary||Uniform translation (constant translational velocity)||Uniform translational acceleration||Non-uniform translation| |θ linear in t||Uniform angular motion in a circle (constant angular velocity)||Uniform angular motion in a spiral, constant radial velocity||Angular motion in a spiral, constant radial acceleration||Angular motion in a spiral, varying radial acceleration| |θ quadratic in t||Uniform angular acceleration in a circle||Uniform angular acceleration in a spiral, constant radial velocity||Uniform angular acceleration in a spiral, constant radial acceleration||Uniform angular acceleration in a spiral, varying radial acceleration| |θ non-linear in t||Non-uniform angular acceleration in a circle||Non-uniform angular acceleration in a spiral, constant radial velocity||Non-uniform angular acceleration in a spiral, constant radial acceleration||Non-uniform angular acceleration in a spiral, varying radial acceleration| General 3D motion In 3D space, the equations in spherical coordinates (r, θ, φ) with corresponding unit vectors êr, êθ and êφ, the position, velocity, and acceleration generalize respectively to In the case of a constant φ this reduces to the planar equations above. Dynamic equations of motion The first general equation of motion developed was Newton's second law of motion, in its most general form states the rate of change of momentum p = p(t) = mv(t) of an object equals the force F = F(x(t), v(t), t) acting on it, The force in the equation is not the force the object exerts. Replacing momentum by mass times velocity, the law is also written more famously as since m is a constant in Newtonian mechanics. Newton's second law applies to point-like particles, and to all points in a rigid body. They also apply to each point in a mass continua, like deformable solids or fluids, but the motion of the system must be accounted for, see material derivative. In the case the mass is not constant, it is not sufficient to use the product rule for the time derivative on the mass and velocity, and Newton's second law requires some modification consistent with conservation of momentum, see variable-mass system. It may be simple to write down the equations of motion in vector form using Newton's laws of motion, but the components may vary in complicated ways with spatial coordinates and time, and solving them is not easy. Often there is an excess of variables to solve for the problem completely, so Newton's laws are not always the most efficient way to determine the motion of a system. In simple cases of rectangular geometry, Newton's laws work fine in Cartesian coordinates, but in other coordinate systems can become dramatically complex. The momentum form is preferable since this is readily generalized to more complex systems, generalizes to special and general relativity (see four-momentum). It can also be used with the momentum conservation. However, Newton's laws are not more fundamental than momentum conservation, because Newton's laws are merely consistent with the fact that zero resultant force acting on an object implies constant momentum, while a resultant force implies the momentum is not constant. Momentum conservation is always true for an isolated system not subject to resultant forces. where pi is the momentum of particle i, Fij is the force on particle i by particle j, and FE is the resultant external force due to any agent not part of system. 
Particle i does not exert a force on itself.

Euler's laws of motion are similar to Newton's laws, but they are applied specifically to the motion of rigid bodies. The Newton–Euler equations combine the forces and torques acting on a rigid body into a single equation. Newton's second law for rotation takes a similar form to the translational case, by equating the torque acting on the body to the rate of change of its angular momentum L: τ = dL/dt. Analogous to mass times acceleration, the moment of inertia tensor I depends on the distribution of mass about the axis of rotation, and the angular acceleration is the rate of change of angular velocity: τ = Iα. Again, these equations apply to point-like particles, or at each point of a rigid body.

Likewise, for a number of particles, the equation of motion for one particle i is dLi/dt = τE + Σj≠i τij, where Li is the angular momentum of particle i, τij is the torque on particle i from particle j, and τE is the resultant external torque (due to any agent not part of the system). Particle i does not exert a torque on itself.

For describing the motion of masses due to gravity, Newton's law of gravity can be combined with Newton's second law. For example, for a ball of mass m thrown in the air, in air currents (such as wind) described by a vector field of resistive forces R = R(r, t), the equation of motion is d²r/dt² = −(GM/|r|³) r + A, where G is the gravitational constant, M the mass of the Earth, and A = R/m is the acceleration of the projectile due to the air currents at position r and time t.

The classical N-body problem for N particles each interacting with each other due to gravity is a set of N nonlinear coupled second order ODEs, mi d²ri/dt² = Σj≠i G mi mj (rj − ri)/|rj − ri|³, where i = 1, 2, …, N labels the quantities (mass, position, etc.) associated with each particle.

Using all three coordinates of 3D space is unnecessary if there are constraints on the system. If the system has N degrees of freedom, then one can use a set of N generalized coordinates q(t) = [q1(t), q2(t), …, qN(t)] to define the configuration of the system. They can be in the form of arc lengths or angles. They are a considerable simplification in describing motion, since they take advantage of the intrinsic constraints that limit the system's motion, and the number of coordinates is reduced to a minimum. The time derivatives of the generalized coordinates are the generalized velocities dq/dt.

The Euler–Lagrange equations are d/dt(∂L/∂(dq/dt)) − ∂L/∂q = 0, where the Lagrangian L is a function of the configuration q and its time rate of change dq/dt (and possibly time t). Setting up the Lagrangian of the system, then substituting into the equations and evaluating the partial derivatives and simplifying, a set of N coupled second order ODEs in the coordinates is obtained.

Hamilton's equations are dq/dt = ∂H/∂p and dp/dt = −∂H/∂q, where the Hamiltonian H is a function of the configuration q and the conjugate "generalized" momenta p = ∂L/∂(dq/dt), in which ∂/∂q = (∂/∂q1, ∂/∂q2, …, ∂/∂qN) is a shorthand notation for a vector of partial derivatives with respect to the indicated variables (see for example matrix calculus for this denominator notation), and possibly time t. Setting up the Hamiltonian of the system, then substituting into the equations and evaluating the partial derivatives and simplifying, a set of 2N coupled first order ODEs in the coordinates qi and momenta pi is obtained.

The Hamilton–Jacobi equation is −∂S/∂t = H(q, ∂S/∂q, t), where S is Hamilton's principal function. Although the equation has a simple general form, for a given Hamiltonian it is actually a single first order non-linear PDE in N + 1 variables.
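To make the Hamiltonian formulation above concrete, here is a minimal numerical sketch (an illustrative addition, not part of the original article) that integrates Hamilton's 2N first-order equations for the simplest possible case, a one-dimensional harmonic oscillator with H = p²/2m + ½kq². The semi-implicit (symplectic) Euler step used here is one common choice for Hamiltonian systems because it keeps the energy nearly constant over long runs; the mass, spring constant, and step size are arbitrary example values.

```python
# Sketch: integrate Hamilton's equations dq/dt = dH/dp, dp/dt = -dH/dq
# for a 1-D harmonic oscillator H(q, p) = p**2/(2*m) + 0.5*k*q**2.
m, k = 1.0, 4.0            # example mass and spring constant (arbitrary units)
q, p = 1.0, 0.0            # initial coordinate and conjugate momentum
dt, steps = 0.001, 10_000

for _ in range(steps):
    # Semi-implicit (symplectic) Euler: update p using the old q, then q using the new p.
    p -= k * q * dt        # dp/dt = -dH/dq = -k*q
    q += (p / m) * dt      # dq/dt =  dH/dp =  p/m

energy = p**2 / (2 * m) + 0.5 * k * q**2
print(f"q = {q:.4f}, p = {p:.4f}, H = {energy:.4f}")   # H stays close to its initial value 2.0
```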
The action S allows identification of conserved quantities for mechanical systems, even when the mechanical problem itself cannot be solved fully, because any differentiable symmetry of the action of a physical system has a corresponding conservation law, a theorem due to Emmy Noether. The equations of motion themselves follow from the principle of least action, stating that the path the system takes through the configuration space is the one with the least action S.

Electrodynamics

In electrodynamics, the force on a charged particle of charge q moving with velocity v is the Lorentz force F = q(E + v × B). Combining this with Newton's second law gives a first order differential equation of motion, in terms of the position of the particle, m d²r/dt² = q(E + dr/dt × B), or in terms of its momentum, dp/dt = q(E + (p/m) × B). In the Lagrangian and Hamiltonian formulations the canonical momentum of the particle is p = mv + qA, where A is the magnetic vector potential, instead of just mv, implying the motion of a charged particle is fundamentally determined by the mass and charge of the particle. The Lagrangian expression was first used to derive the force equation. Alternatively, the Hamiltonian (substituted into Hamilton's equations) can be used to derive the Lorentz force equation.

Geodesic equation of motion

The above equations are valid in flat spacetime. In curved spacetime, things become mathematically more complicated since there is no straight line; this is generalized and replaced by a geodesic of the curved spacetime (the shortest length of curve between two points). For curved manifolds with a metric tensor g, the metric provides the notion of arc length (see line element for details); the differential arc length is given by ds² = gαβ dxα dxβ, and the geodesic equation is a second-order differential equation in the coordinates whose general solution is a family of geodesics: d²xμ/ds² + Γμαβ (dxα/ds)(dxβ/ds) = 0, where Γμαβ is a Christoffel symbol of the second kind, which contains the metric (with respect to the coordinate system).

Given the mass–energy distribution provided by the stress–energy tensor Tαβ, the Einstein field equations are a set of non-linear second-order partial differential equations in the metric, and imply the curvature of spacetime is equivalent to a gravitational field (see equivalence principle). Mass falling in curved spacetime is equivalent to a mass falling in a gravitational field, because gravity is a fictitious force. The relative acceleration of one geodesic to another in curved spacetime is given by the geodesic deviation equation: D²ξα/ds² = −Rαβγδ (dxβ/ds) ξγ (dxδ/ds), where ξα = x2α − x1α is the separation vector between two geodesics, D/ds (not just d/ds) is the covariant derivative, and Rαβγδ is the Riemann curvature tensor, containing the Christoffel symbols. In other words, the geodesic deviation equation is the equation of motion for masses in curved spacetime, analogous to the Lorentz force equation for charges in an electromagnetic field.

For flat spacetime, the metric is a constant tensor so the Christoffel symbols vanish, and the geodesic equation has straight lines as its solutions. This is also the limiting case when masses move according to Newton's law of gravity.

In general relativity, rotational motion is described by the relativistic angular momentum tensor, including the spin tensor, which enters the equations of motion under covariant derivatives with respect to proper time. The Mathisson–Papapetrou–Dixon equations describe the motion of spinning objects moving in a gravitational field.

Analogues for waves and fields

Unlike the equations of motion for describing particle mechanics, which are systems of coupled ordinary differential equations, the analogous equations governing the dynamics of waves and fields are always partial differential equations, since the waves or fields are functions of space and time. For a particular solution, boundary conditions along with initial conditions need to be specified.
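Before turning to the field equations listed below, the Lorentz-force equation of motion above can also be illustrated numerically. The following sketch (an added example, not from the source article) pushes a charged particle through a uniform magnetic field with the widely used Boris rotation step; the charge, mass, and field values are arbitrary, chosen so the expected gyroradius m·|v|/(q·|B|) equals 1.

```python
import numpy as np

# Sketch: m dv/dt = q (v x B) for a uniform field B = B_z ez and E = 0,
# integrated with the Boris rotation step (it preserves |v| exactly).
q, m = 1.0, 1.0
B = np.array([0.0, 0.0, 1.0])
v = np.array([1.0, 0.0, 0.0])          # speed 1  ->  gyroradius m*|v|/(q*|B|) = 1
r = np.zeros(3)
dt = 0.01
steps = round(2 * np.pi * m / (q * np.linalg.norm(B)) / dt)   # about one gyration period

for _ in range(steps):
    t = (q * dt / (2 * m)) * B         # half-step rotation vector
    s = 2 * t / (1 + np.dot(t, t))
    v_prime = v + np.cross(v, t)
    v = v + np.cross(v_prime, s)       # rotated velocity
    r = r + v * dt

print(np.linalg.norm(v), r)            # speed is still 1; r ends up back near the origin
```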
Sometimes in the following contexts, the wave or field equations are also called "equations of motion":
- Maxwell's equations for the electromagnetic field,
- Poisson's equation for Newtonian gravitational or electrostatic field potentials,
- the Einstein field equation for gravitation (Newton's law of gravity is a special case for weak gravitational fields and low velocities of particles).

This terminology is not universal: for example, although the Navier–Stokes equations govern the velocity field of a fluid, they are not usually called "field equations", since in this context they represent the momentum of the fluid and are called the "momentum equations" instead.

Equations of wave motion are called wave equations. The solutions to a wave equation give the time-evolution and spatial dependence of the amplitude. Boundary conditions determine if the solutions describe traveling waves or standing waves. The prototypical linear wave equation is ∂²X/∂t² = v²∇²X, where X = X(r, t) is any mechanical or electromagnetic field amplitude, say:
- the transverse or longitudinal displacement of a vibrating rod, wire, cable, membrane etc.,
- the fluctuating pressure of a medium, sound pressure,
- the electric fields E or D, or the magnetic fields B or H,
- the voltage V or current I in an alternating current circuit,
and v is the phase velocity. Nonlinear equations model the dependence of phase velocity on amplitude, replacing v by v(X). There are other linear and nonlinear wave equations for very specific applications; see for example the Korteweg–de Vries equation.

In quantum theory, the wave and field concepts both appear. In quantum mechanics, in which particles also have wave-like properties according to wave–particle duality, the analogue of the classical equations of motion (Newton's law, Euler–Lagrange equation, Hamilton–Jacobi equation, etc.) is the Schrödinger equation in its most general form: iħ ∂Ψ/∂t = ĤΨ, where Ψ is the wavefunction of the system, Ĥ is the quantum Hamiltonian operator, rather than a function as in classical mechanics, and ħ is the Planck constant divided by 2π. Setting up the Hamiltonian and inserting it into the equation results in a wave equation; the solution is the wavefunction as a function of space and time. The Schrödinger equation itself reduces to the Hamilton–Jacobi equation when one considers the correspondence principle, in the limit that ħ becomes zero.

Throughout all aspects of quantum theory, relativistic or non-relativistic, there are various formulations alternative to the Schrödinger equation that govern the time evolution and behavior of a quantum system, for instance:
- the Heisenberg equation of motion resembles the time evolution of classical observables as functions of position, momentum, and time, if one replaces dynamical observables by their quantum operators and the classical Poisson bracket by the commutator,
- the phase space formulation closely follows classical Hamiltonian mechanics, placing position and momentum on equal footing,
- the Feynman path integral formulation extends the principle of least action to quantum mechanics and field theory, placing emphasis on the use of Lagrangians rather than Hamiltonians.
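As a minimal illustration of the linear wave equation quoted above (∂²X/∂t² = v²∇²X, here in one spatial dimension), the following sketch advances an initial displacement bump on a string with fixed ends using an explicit finite-difference scheme. It is an added example rather than part of the article; the grid size, wave speed, and initial profile are arbitrary choices.

```python
import numpy as np

# Sketch: explicit finite differences for d2X/dt2 = v**2 * d2X/dx2 on a string
# with fixed ends (X = 0 at both boundaries).
nx, L, v = 201, 1.0, 1.0
dx = L / (nx - 1)
dt = 0.9 * dx / v                        # Courant condition v*dt/dx <= 1 for stability
c2 = (v * dt / dx) ** 2

x = np.linspace(0.0, L, nx)
X = np.exp(-200.0 * (x - 0.3) ** 2)      # initial displacement: a localized bump
X[0] = X[-1] = 0.0
X_old = X.copy()                         # zero initial velocity: previous step equals current

for _ in range(500):
    X_new = np.empty_like(X)
    X_new[1:-1] = (2 * X[1:-1] - X_old[1:-1]
                   + c2 * (X[2:] - 2 * X[1:-1] + X[:-2]))
    X_new[0] = X_new[-1] = 0.0           # fixed boundaries
    X_old, X = X, X_new

print(float(X.max()), float(X.min()))    # the bump splits into two pulses and reflects off the ends
```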
https://en.wikipedia.org/wiki/Equation_of_motion
18
17
As scientists refine their methods, exoplanets are becoming easier and easier to detect. The current count is 163 planets orbiting 97 main-sequence stars, of which only one is even remotely Earth-like. All the others are massive bodies, ranging from “tiny” Uranus-like worlds (at about 15 Earth masses) to super-Jupiters (at thousands of Earth masses). The direct detection of terrestrial planets around stars like our Sun may have to wait until the launch of dedicated satellites such as COROT and Kepler (scheduled for 2006 and 2008, respectively). In the meantime, some researchers have begun to wonder whether these extrasolar gas giants could harbor habitable moons.

Our own solar system has four gas giants, and each has been blessed with an abundance of satellites. All these moons are far smaller than the Earth, but six could qualify as planets in their own right if they orbited the Sun: Jupiter's four Galilean satellites Europa, Callisto, Ganymede, and Io; Saturn's Titan; and Neptune's Triton. Europa is known to have large amounts of water ice, and Titan has a thick atmosphere. If our solar system is not atypical, many of the known exoplanets probably have rich moon systems as well.

Caleb Scharf, Columbia University's Director of Astrobiology, has been exploring the conditions necessary for such moons to be habitable. His recent work investigates the conditions necessary for a moon to contain enough water to sustain life, at temperatures capable of supporting biological activity.

Under zero-pressure conditions, water ice will sublimate (transform from solid to vapor directly) at temperatures higher than about 170 K (-103 °C). This means that water-rich protoplanets must form relatively far from the star -- well outside the traditional “habitable zone” where stellar radiation raises temperatures high enough to support liquid water. Gas giants are also likely to form in these icy reaches -- so the known exoplanets are highly likely to have acquired one or more icy moons early on.

Icy moons may be carried into warmer regions later, as the host planet migrates inward. What happens to the water then? The answer depends mainly on the size of the moon. A moon or planet with about 10% of the Earth's mass has enough gravity to retain water vapor and other gases in a temperate atmosphere. (As a counterexample, Venus has enough gravity but is much too hot to retain water -- the speed of water molecules in the atmosphere exceeds the escape velocity of the planet.) Mars-sized or larger moons may therefore be able to sustain both an atmosphere and liquid water, if their host planet is not too far from the star.

More Heat Kneaded!

Scharf is able to show that such moons may be habitable at greater distances than a similar planet would be, thanks to the process of tidal heating. Since gravity weakens with distance, the pull from the host planet will be slightly different on the near and far sides of a moon. If the moon's orbit is circular, this gravity differential will be constant, and the moon can adjust to it by changing its shape slightly. When a moon travels in an eccentric orbit around its planet, however, it approaches and recedes at regular intervals. The gravity differential therefore changes slightly as it orbits, resulting in a rhythmic compression of the moon's core. In other words, the host planet will slowly knead the moon like a lump of dough. This activity can generate a lot of heat, even if the moon's core is not molten.
“You're basically draining the spin energy of the parent planet,” Scharf explains. In the case of Jupiter, that spin energy is enormous -- more than enough to sustain moderate levels of tidal heating indefinitely. To sustain the eccentricity of its moons' orbits, the ideal exoplanet will have multiple moons in proximity (such as Jupiter's Galilean satellites).

To illustrate the potency of this process, Scharf offers the following example. “If you took Mars and put it where Europa is now, Mars would get heated by several tens of degrees [from tidal heating] at its surface. This would also probably start up its volcanic activity again.”

Tidal heating can give an extra boost of energy to moons which receive too little light from the system's star to thaw. Scharf finds that an Earth-sized moon could reach habitable temperatures about twice as far from the Sun as the Earth itself, under favorable assumptions.

Most moons in the solar system, however, are not large enough to hold an atmosphere; Ganymede is the largest, at about 0.025 (1/40) Earth masses. Scharf therefore postulates that habitable moons such as Europa are much more likely. The surface temperature of such moons must be cold enough to preserve ice even in the absence of an atmosphere, but the process of tidal heating could potentially warm the moon enough to create a warm, liquid ocean under the ice layer. Evidence of liquid water has not only been found on Europa, but also recently on Saturn's moon Enceladus by the Cassini mission (http://www.solarviews.com/eng/enceladuswater.htm). Again, tidal heating is thought to be the culprit.

In his most recent paper Scharf analyzes the properties of 74 exoplanets, those far enough from their star for satellite orbits to be stable over several billion years. He finds that between 28 and 51% of the planets in this sample are capable of harboring Europa-like moons with icy mantles and liquid water, depending on the size of the satellites and the eccentricity of their orbits. When one considers the total population of known exoplanets, the fraction falls to 15 to 27%, which is still quite favorable. Even if the planetary systems discovered so far lack Earth-like worlds, Scharf's work makes a strong case that the moon systems of gas giants could also sustain life.

Reference: Caleb A. Scharf, “The potential for tidally heated icy and temperate moons around exoplanets” 2006, to appear in Astrophysical Journal. http://xxx.lanl.gov/astro-ph/0604413

By Ben Mathiesen, Copyright 2006 PhysOrg.com

Ben Mathiesen is an astrophysicist at the Service d'Astrophysique in Saclay, France, and owner of the agency Physical Science Editing, which helps researchers around the world meet native English writing standards in their academic publications.
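The 10%-of-an-Earth-mass retention threshold quoted earlier in this article can be sanity-checked with a rough calculation comparing a body's escape velocity with the mean thermal speed of water molecules. The sketch below is a back-of-the-envelope illustration added here, not part of the article or of Scharf's analysis; the factor-of-six retention rule of thumb, the 270 K temperature, and the constant-density radius scaling are all simplifying assumptions.

```python
import math

# Rough retention check: compare escape velocity with the mean thermal speed
# of water molecules, using the ~6x rule of thumb for long-term retention.
G  = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
kB = 1.381e-23          # Boltzmann constant, J/K
M_EARTH, R_EARTH = 5.972e24, 6.371e6    # kg, m
m_h2o = 18 * 1.66e-27   # mass of one H2O molecule, kg

def retains_water(mass_ratio, radius_ratio, T=270.0):
    """Very rough test for a rocky body scaled from Earth, at temperature T (K)."""
    M, R = mass_ratio * M_EARTH, radius_ratio * R_EARTH
    v_esc = math.sqrt(2 * G * M / R)
    v_thermal = math.sqrt(8 * kB * T / (math.pi * m_h2o))   # mean speed of H2O
    return v_esc, v_thermal, v_esc > 6 * v_thermal

for mass_ratio in (1.0, 0.1, 0.025):          # Earth, ~a tenth of Earth, ~Ganymede-like mass
    radius_ratio = mass_ratio ** (1 / 3)      # crude constant-density scaling
    v_esc, v_th, ok = retains_water(mass_ratio, radius_ratio)
    print(f"{mass_ratio:>5.3f} M_earth: v_esc = {v_esc/1000:.1f} km/s, "
          f"6*v_thermal = {6*v_th/1000:.1f} km/s, retains water vapour: {ok}")
```

Run as-is, the Earth-mass and tenth-of-an-Earth-mass cases pass the crude test while the Ganymede-like mass does not, which is consistent with the thresholds discussed in the article.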
https://phys.org/news/2006-04-case-habitable-exoplanet-moons.html
18
12
NUCLEAR MAGNETIC RESONANCE SPECTROSCOPY (NMR)

Nuclear Magnetic Resonance (NMR) spectroscopy is an analytical chemistry technique employed to determine the composition and purity of a sample as well as its structure at the molecular level. For specimens containing known compounds, this analytical process is used for quantitative analysis. For mixtures of unknown makeup, NMR can reveal details of the basic molecular structure and use the data to identify the compound by cross-referencing libraries of spectral data. A variety of NMR techniques exists, useful for determining specific molecular properties of a sample such as solubility, changes of phase, and molecular conformation.

NMR is a powerful tool mainly because it is non-destructive and able to provide a complete analysis across an entire spectrum. This technology allows researchers to identify compounds and their molecular structure. This enables scientists to offer a variety of services including:
- Product Failure Analysis
- Material Testing & Identification
- R&D Product Development
- Method Development & Validation
- Litigation Support

The NMR spectrometer functions primarily on the technology of magnetic fields and radio frequencies. It is composed of the following major components:
- A stable magnetic field
- A probe which holds the sample close to the magnet and the radio-frequency coils
- A high-power RF transmitter
- A receiver to correctly amplify the NMR signals
- A converter to digitize the NMR signals
- A computer for data analysis and operational control

NMR testing uses nuclear energy-transfer principles to identify molecular structures. This is accomplished by applying a magnetic field to a sample and measuring the radio-frequency signal emitted as the nuclei transfer energy. The process relies on the spin of the nucleus: a nucleus has net spin if it contains an odd number of protons, an odd number of neutrons, or both. The spin of the nucleus, together with the strength of the applied field, determines the resonance frequency.

The NMR spectrometer uses superconducting magnets which are very stable and whose field strengths correspond to proton frequencies of roughly 500–900 megahertz. These magnets are cooled with liquid helium, which is in turn surrounded by a liquid nitrogen jacket in order to keep the helium from evaporating too quickly. The probe is simply a tube which passes through the magnet and provides a controlled room-temperature region so that the samples are not affected by the extreme temperature of the cryogens. This probe includes a small coil used to excite the molecules and detect the signal output.

When a sample is placed in the NMR's magnetic field, the nuclei experience a torque which affects their spin and causes them to precess, or wobble. This precession, which induces a signal in the probe coil, can be driven by an applied radio frequency. If the applied frequency matches the precession frequency of the nuclei, they will flip their spin state in what is referred to as magnetic resonance. The RF transmitter is used to apply pulses at this frequency, which perturb the nuclei's precession. During these pulses, the nuclei change energy states; afterwards they emit energy to return to their previous state. The emitted energy corresponds to a particular resonance frequency, which relates directly to the strength of the magnetic field being applied to the compound. This resonance is different for every nucleus. The chemical environment of atoms within a molecule also shifts the resonance of otherwise identical nuclei, which makes it possible to identify similar atoms within differing functional groups.
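Because the resonance frequency scales directly with the applied field, a one-line calculation using the Larmor relation (frequency = gyromagnetic ratio × field strength) makes the 500–900 megahertz figures above concrete. The sketch below is an illustrative addition, not part of this white paper; the gyromagnetic ratios are standard literature values and the field strengths are example numbers.

```python
# Larmor relation: the resonance frequency is proportional to the static field B0.
#   nu (MHz) = (gamma / 2*pi) (MHz/T) * B0 (T)
GAMMA_MHZ_PER_T = {"1H": 42.577, "13C": 10.708, "31P": 17.235}

def resonance_mhz(nucleus: str, b0_tesla: float) -> float:
    """Resonance frequency in MHz for a bare nucleus in a field of b0_tesla."""
    return GAMMA_MHZ_PER_T[nucleus] * b0_tesla

# A "500 MHz" spectrometer is named for its 1H frequency, which corresponds to a
# field of roughly 500 / 42.577 = 11.7 T; a "900 MHz" instrument is about 21.1 T.
for b0 in (11.7, 21.1):                 # illustrative field strengths in tesla
    for nuc in GAMMA_MHZ_PER_T:
        print(f"B0 = {b0:5.1f} T  {nuc:>3}: {resonance_mhz(nuc, b0):7.1f} MHz")
```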
The NMR signal sent from the probe is quite weak and requires amplification before it can be digitized. The amplifier is placed as close to the probe as possible and boosts the signal as it travels down a cable to the console. The probe, as mentioned earlier, also acts as the receiver. In order to prevent the transmitted and received signals from interfering with each other, there is an electronic component called the diplexer which switches between the two operations, regulating the timing and preventing interference.

The frequency signals picked up by the receiver are converted into digital form by an analog-to-digital converter (ADC) and sent to the computer. The ADC samples the signal at regular time intervals, so the data are received as a series of data points. These data points, when reviewed by a knowledgeable analyst, can reveal the identity of the sample compound.
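To show how the regularly spaced data points from the ADC become a spectrum, here is a small simulation (an added illustration, not vendor software): a free-induction decay is modeled as a decaying cosine sampled at fixed intervals, and a Fourier transform recovers the resonance frequency as a peak. The sampling rate, frequency offset, and decay constant are made-up example values.

```python
import numpy as np

# Sketch: simulate a digitized free-induction decay (FID) and Fourier-transform it.
sample_rate = 10_000.0                 # samples per second (the ADC's regular time steps)
n_points = 8192
t = np.arange(n_points) / sample_rate

freq_hz, t2 = 1_200.0, 0.05            # resonance offset (Hz) and decay constant (s), illustrative
fid = np.cos(2 * np.pi * freq_hz * t) * np.exp(-t / t2)
fid += 0.05 * np.random.default_rng(0).standard_normal(n_points)   # a little noise

spectrum = np.abs(np.fft.rfft(fid))
freqs = np.fft.rfftfreq(n_points, d=1.0 / sample_rate)
print(f"peak at about {freqs[spectrum.argmax()]:.0f} Hz")   # ~1200 Hz, as simulated
```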
https://innovate.avomeen.com/knowledge-center/methods/nmr-white-paper/
18
11
Newton's laws of motion explained some of the world's greatest mysteries and are still used today as we explore the depths of space the author traces newton's life from his work as a young man to his fame introduction the price of genius analysis by infinite apple applied newton's laws astronomer astronomia. During this unit, newton's three laws of motion, the knowledge of forces, and the of matter provide powerful tools that will be used by students for analysis. Introduction 1a the laws of motion this module provides a detailed intro- duction to newton's three laws of motion newton's laws are rules that tell how an . Newton's laws of motion are three physical laws that, together, laid the foundation for classical second law impulse is a concept frequently used in the analysis of collisions and impacts an introduction to mechanics mcgraw-hill pp. Newton's first law of motion concepts more on newton's first newton's third law of motion more on newton's introduction to tension (part 2) tension in an. In his writing, buridan offered an amazingly accurate analysis of impetus that prefigured all three laws of motion it was buridan's position that one object imparts. Dynamics: force and newton's laws of motion newton's second law of motion: concept of a system newton's third law of motion: symmetry in forces . Physclips provides multimedia education in introductory physics (mechanics) at first law: every body perseveres in its state of rest, or of uniform motion in a right line, newton's third law: to every action there is always opposed an equal and like to check our analysis, here is the film clip analysed in the newton's laws. Many years ago, sir isaac newton came up with some most excellent descriptions about motion his first law of motion is as follows: “an object at rest stays at. Newton's laws of motion, relations between the forces acting on a body and in his principia , newton reduced the basic principles of mechanics to three laws. This nasa video segment explores how newton's laws of motion apply to the development and operation of airplanes viewers watch an instructor at nasa's. Newton's second and third laws of motion c: optimizing the design solution (see analysis & conclusion) newton's third law of motion states that for. And simple applied forces the mechanics has grown to the analysis of roboatics, sir issac newton, the principal architect of mechanics, consolidated the and put forth them in the form of three laws of motion as well as the law of gravitation. This free synopsis covers all the crucial plot points of newton's three laws the basis for the study of dynamics, and describe the fundamental laws of motion.
http://dbassignmentoawl.strompreisevergleichen.info/an-introduction-to-the-analysis-of-newtons-three-laws-of-motion.html
18
39
Those activities which can be carried out through simpler processes such as memorizing, substituting, etc are not appropriate activities for enhancing critical thinking in language learners better activities for the purpose of promoting critical thinking skills are those which require the learners to think, cooperate, ask questions from. Critical thinking is a process of how each one of us reactions to given situations or set of instructions and how judgments are made as a result critical thinking in learning and teaching. For this reason, the development of critical thinking skills and dispositions is a life-long endeavor a well cultivated critical thinker. 81 fresh & fun critical-thinking activities engaging activities and reproducibles to develop kids' higher-level thinking skills by laurie rozakis. Strengthen your students' critical thinking skills by playing art detective slow looking and 5 other simple activities to enhance your students' ability to. Let's get to the critical thinking skills that really matter from wwwfacinghistoryorg , here are some amazing critical thinking activities that you can do with your students 10 great critical thinking activities. Iv critical thinking skills activities to the teacher critical thinking skills activitiesare higher level thinking activities they pro-vide teachers with exercises that help students develop their abilities to interpret. Strategies for developing ell critical thinking skills aug 24, 2016 | blog posts expectations are set high for native speakers, so critical thinking skills are fundamental for setting esl students up for future success, says chris baarstad, an international junior high teacher at fairmont private schools. Jumpstart has a fun collection of free, printable critical thinking worksheets and free critical thinking activities for kids homeschooling parents as well as teachers can encourage better logical thinking, and deductive reasoning skills in kids by introducing them to these exercises. Erature related to critical thinking, the disposition to think criti-cally, questioning, and various critical-thinking pedagogic possess the thinking skills to. Critical thinking is a skill that is used in daily activities teens can learn these skills by role playing routine worldly encounters write down a scenario on a piece of paper such as a construction worker who is afraid of heights. Content critical thinking 1 v activities, those that sharpen the mind thinking skills are blended into content-based instruction by providing an abundance of. Integrate critical thinking skills & executive functioning help kids develop problem-solving skills related to critical thinking and executive functioning. How to teach critical thinking if you want to teach your students critical thinking, give them opportunities to brainstorm and analyze things classroom discussions are a great way to encourage open-mindedness and creativity. Games and activities for developing critical thinking skills thinking the workbook critical work on some skills using metaphor and choosing words carefully with. Students need to develop and effectively apply critical thinking skills to their academic studies, to the complex problems that they will face, and to the critical. Additionally, critical thinking can be divided into the following three core skills: curiosity is the desire to learn more information and seek evidence as well as being open to new ideas. 
Critical thinking skills are essential for good decision-making and long-term academic and professional success this course examines critical thinking skills through the lens of bloom's taxonomy, which. Through emphasis on evidence, teachers can facilitate an environment where deep, critical thinking and meta cognition are the norm below are some activities to help teachers incorporate curiosity, evidence, and critical thinking into their classrooms. Online instructors can use technology tools to create activities that help students develop both lower-level and higher-level critical thinking skills reflection activities reflection activities provide students with opportunities to track their learning and demonstrate their progress throughout the semester. Critical thinking skills are something that we develop over time through practice and commitment in this video, we'll explore some exercises, activities and strategies to improve your critical. The skills we need for critical thinking the skills that we need in order to be able to think critically are varied and include observation, analysis, interpretation, reflection, evaluation, inference, explanation, problem solving, and decision making. These questions are open-ended, encourage collaboration and foster the development of critical thinking skills questioning we push students to dig deeper in their learning by asking guiding questions and providing a variety of resources for students to independently find answers. While teaching problem-solving skills is important to the process of learning how to use critical thinking skills, in the absence of other learning activities it may not be enough.
http://hyhomeworkeips.orlandoparks.info/critical-thinking-skills-activities.html
18
22
An arch dam is a concrete dam that is curved upstream in plan. The arch dam is designed so that the force of the water against it, known as hydrostatic pressure, presses against the arch, compressing and strengthening the structure as it pushes into its foundation or abutments. An arch dam is most suitable for narrow canyons or gorges with steep walls of stable rock to support the structure and stresses. Since they are thinner than any other dam type, they require much less construction material, making them economical and practical in remote areas. In general, arch dams are classified based on the ratio of the base thickness to the structural height (b/h) as: - Thin, for b/h less than 0.2, - Medium-thick, for b/h between 0.2 and 0.3, and - Thick, for b/h ratio over 0.3. Arch dams classified with respect to their structural height are: - Low dams up to 100 feet (30 m), - Medium high dams between 100–300 ft (30–91 m), - High dams over 300 ft (91 m). The development of arch dams throughout history began with the Romans in the 1st century BC and after several designs and techniques were developed, relative uniformity was achieved in the 20th century. The first known arch dam, the Glanum Dam, also known as the Vallon de Baume Dam, was built by the Romans in France and it dates back to the 1st century BC. The dam was about 12 metres (39 ft) high and 18 metres (59 ft) in length. Its radius was about 14 m (46 ft), and it consisted of two masonry walls. The Romans built it to supply nearby Glanum with water. The Monte Novo Dam in Portugal was another early arch dam built by the Romans in 300 AD. It was 5.7 metres (19 ft) high and 52 m long (171 ft), with a radius of 19 m (62 ft). The curved ends of the dam met with two winged walls that were later supported by two buttresses. The dam also contained two water outlets to drive mills downstream. The Dara Dam was another arch dam built by the Romans in which the historian Procopius would write of its design: "This barrier was not built in a straight line, but was bent into the shape of a crescent, so that the curve, by lying against the current of the river, might be able to offer still more resistance to the force of the stream." The Mongols also built arch dams in modern-day Iran. Their earliest was the Kebar Dam built around 1300, which was 26 m (85 ft) high and 55 m (180 ft) long, and had a radius of 35 m (115 ft). Their second dam was built around 1350 and is called the Kurit Dam. After 4 m (13 ft) was added to the dam in 1850, it became 64 m (210 ft) tall and remained the tallest dam in the world until the early 20th century. The Kurit Dam was of masonry design and built in a very narrow canyon. The canyon was so narrow that its crest length is only 44% of its height. The dam is still erect, even though part of its lower downstream face fell off. The Elche Dam in Elche, Spain was a post-medieval arch dam built in the 1630s by Joanes del Temple and the first in Europe since the Romans. The dam was 26 metres (85 ft) high and 75 metres (246 ft) long, and had a radius of 62 metres (203 ft). This arch dam also rests on winged walls that served as abutments. In the 20th century, the world's first variable-radius arch dam was built on the Salmon Creek near Juneau, Alaska. The Salmon Creek Dam's upstream face bulged upstream, which relieved pressure on the stronger, curved lower arches near the abutments. The dam also had a larger toe, which off-set pressure on the upstream heel of the dam, which now curved more downstream. 
The technology and economical benefits of the Salmon Creek Dam allowed for larger and taller dam designs. The dam was, therefore, revolutionary, and similar designs were soon adopted around the world, in particular by the U.S. Bureau of Reclamation. Pensacola Dam, completed in the state of Oklahoma in 1940, was considered the longest multiple-arch dam in the United States. Designed by W. R. Holway, it has 51 arches and a maximum height of 150 ft (46 m) above the river bed. The total length of the dam and its sections is 6,565 ft (2,001 m), while the multiple-arch section is 4,284 ft (1,306 m) long and its combination with the spillway sections measures 5,145 ft (1,568 m). Each arch in the dam has a clear span of 60 ft (18 m) and each buttress is 24 ft (7.3 m) wide.

Arch dam designs would continue to test new limits with designs such as the double-curved and multiple-curved arch. The Swiss engineer Alfred Stucky and the U.S. Bureau of Reclamation would develop a method of weight and stress distribution in the 1960s, and arch dam construction in the United States would see its last surge then with dams like the 143-meter double-curved Morrow Point Dam in Colorado, completed in 1968. By the late 20th century, arch dam design reached a relative uniformity around the world.

Currently, the tallest arch dam in the world is the 305-metre (1,001 ft) Jinping-I Dam in China, which was completed in 2013. The longest multiple-arch buttress dam in the world is the Daniel-Johnson Dam in Quebec, Canada. It is 214 meters (702 ft) high and 1,314 meters (4,311 ft) long across its crest. It was completed in 1968 and put in service in 1970. Pensacola Dam was one of the last multiple-arch types built in the United States. Its NRHP application states that this was because three dams of this type failed: Gem Lake Dam, St. Francis Dam (California), and Lake Hodges Dam (California). None of these failures were inherently caused by the multiple arch design.

Design

The principal loads an arch dam must be designed to resist are:
- Dead load
- Hydrostatic load generated by the reservoir and the tailwater
- Temperature load
- Earthquake load

Most often, the arch dam is made of concrete and placed in a "V"-shaped valley. The foundation or abutments for an arch dam must be very stable and proportionate to the concrete. There are two basic designs for an arch dam: constant-radius dams, which have a constant radius of curvature, and variable-radius dams, which have both upstream and downstream curves that systematically decrease in radius below the crest. A dam that is double-curved in both its horizontal and vertical planes may be called a dome dam. Arch dams with more than one contiguous arch or plane are described as multiple-arch dams. Early examples include the Roman Esparragalejo Dam, with later examples such as the Daniel-Johnson Dam (1968) and Itaipu Dam (1982). However, as a result of the failure of the Gleno Dam shortly after it was constructed in 1923, the construction of new multiple-arch dams has become less popular.

Types of Arch Dam

On the basis of the shape of the faces of the arch, arch dams can be divided into three basic types:
- Constant Radii Arch Dam
- Variable Radii Arch Dam
- Constant Angle Arch Dam

In a constant-radius arch dam, the upstream face of the dam has a constant radius, so that face is vertical over the full height of the dam, while the radius of the inner (downstream) curves decreases from the top elevation to the bottom; in cross-section the dam therefore takes the shape of a triangle.
In a variable-radius arch dam, the radius of both the inner and outer faces of the arch varies with elevation: it is greatest at the top and smallest at lower elevations. The central angle of the arch also widens toward the top.

The third type, the constant-angle arch dam, is the most economical. However, it requires a stronger foundation, as it involves overhangs at the abutment sections. A constant-angle arch dam is one in which the central angles of the horizontal arch rings are of the same magnitude at all elevations.

Examples of arch dams
- Buchanan Dam (example of multiple-arch type)
- Contra Dam
- Daniel-Johnson Dam
- Deriner Dam
- El Atazar Dam
- Enguri Dam
- Flaming Gorge Dam
- Glen Canyon Dam
- Hartbeespoort Dam
- Idukki Dam
- Kariba Dam
- Karun-3 Dam
- Luzzone Dam
- Mauvoisin Dam
- Mratinje Dam
- Pensacola Dam
- St. Francis Dam
- Victoria Dam
- Xiluodu Dam

References
- Design of Arch Dams - Design Manual for Concrete Arch Dams, Denver, Colorado: Bureau of Reclamation, 1977
- "Arch Dam Forces". Archived from the original on 5 February 2007. Retrieved 2007-02-05.
- Smith, Norman (1971), A History of Dams, London: Peter Davies, ISBN 0-432-15090-0
- "Key Developments in the History of Arch Dams". Cracking Dams. SimScience. Archived from the original on July 28, 2012. Retrieved 20 September 2018, from archive.org
- James, D. Patrick; Chanson, Hubert. "Historical Development of Arch Dams. From Cut-Stone Arches to Modern Concrete Designs". Barrages.org. Retrieved 18 July 2010.
- Chanson, Hubert. "Extreme Reservoir Sedimentation in Australia: A Review" (PDF). Resources Journal. p. 101. Retrieved 18 July 2010.
- "National Register of Historic Places. Pensacola Dam". Accessed January 3, 2016.
- "Arch Dam Design Concepts and Criteria". Durham University. Retrieved 18 July 2010.
- "The world's highest arch dam Jinping first production unit" (in Chinese). Economic Times Network. 2 September 2013. Archived from the original on 9 September 2013. Retrieved 9 September 2013.
- Guimont, Andréanne (3 August 2010). "Manic 5 : colossal témoin du génie québécois en hydroélectricité". suite101.fr. Archived from the original on 17 August 2010. Retrieved 30 September 2010.
- Arch Dam Design - Engineering Manual EM 1110-2-2201, Washington DC: U.S. Army Corps of Engineers, 1994
- Herzog, Max A. M. (1999). Practical Dam Analysis. London: Thomas Telford Publishing. pp. 115, 119–126. ISBN 3-8041-2070-9.
- "Contraction Joints". Arch Dams. Durham University. Retrieved 18 July 2010.
- "What is an Arch Dam? Types, Advantages and Theory - Iamcivilengineer". Iamcivilengineer. 2017-03-05. Retrieved 2018-07-29.
https://en.wikipedia.org/wiki/Arch_dam
18
12
When it comes to the question of what places in the solar system would be the best to search for alien life, Europa immediately comes to mind. This small moon of Jupiter seems to have everything necessary – a global subsurface ocean and likely sources of heat and chemical nutrients on the ocean floor. But looking for evidence isn’t easy; the ocean lies beneath a fairly thick crust of ice, making it difficult to access. That would require drilling through many meters or even several kilometers of ice, depending on the location. But there may be ways around that problem. It is almost certain now that plumes of water vapor can erupt from the surface, originating from the ocean below, where they could be sampled and analyzed by a flyby or orbiting probe. And now there is another potential solution – a new study, described in Space.com on July 23, 2018, shows that a lander on Europa (now in preliminary concept studies) might only have to dig a few inches/centimeters into the ice to search for evidence of active or past biology, such as amino acids. It all depends on radiation, which Europa receives a lot of, from Jupiter. The study, led by NASA scientist Tom Nordheim, modeled the radiation environment on Europa in detail, showing how it varies from location to location. That data was then combined with other data from laboratory experiments documenting how quickly various radiation doses destroy amino acids. The results, published in a new paper in Nature Astronomy, showed that equatorial regions receive about 10 times more radiation dosage than middle or high latitudes. The harshest radiation zones appear as oval-shaped regions, connected at the narrow ends, that cover more than half of Europa. According to Chris Paranicas, a paper co-author from the Johns Hopkins Applied Physics Laboratory in Laurel, Maryland: This is the first prediction of radiation levels at each point on Europa’s surface and is important information for future Europa missions. The good news from this is that a lander in the least-radiated locations would only have to dig about 0.4 inches (1 centimeter) into the ice to find viable amino acids. In more radiated areas, the lander would need to dig about 4 to 8 inches (10 to 20 cm). Even if any organisms were dead, the amino acids would still be recognizable. As Nordheim told Space.com: Even in the harshest radiation zones on Europa, you really don’t have to do more than scratch beneath the surface to find material that isn’t heavily modified or damaged by radiation. As Nordheim also noted: If we want to understand what’s going on at the surface of Europa and how that links to the ocean underneath, we need to understand the radiation. When we examine materials that have come up from the subsurface, what are we looking at? Does this tell us what is in the ocean, or is this what happened to the materials after they have been radiated? Kevin Hand, another co-author of the new research and project scientist for the potential Europa lander mission, elaborated a bit more: The radiation that bombards Europa’s surface leaves a fingerprint. If we know what that fingerprint looks like, we can better understand the nature of any organics and possible biosignatures that might be detected with future missions, be they spacecraft that fly by or land on Europa. Europa Clipper’s mission team is examining possible orbit paths, and proposed routes pass over many regions of Europa that experience lower levels of radiation. 
That’s good news for looking at potentially fresh ocean material that has not been heavily modified by the fingerprint of radiation. Since material from the subsurface ocean is thought to be able to come up to the surface through cracks or weaker areas of ice, it should be possible to sample it right on the surface without needing to drill. That would be a huge advantage, and it would be possible to send a lander to a location where there is a relatively fresh deposit not yet completely degraded by radiation. Right now, the images of Europa’s surface are not high enough resolution, but the ones from the upcoming Europa Clipper mission will be. As noted by Nordheim: When we get the Clipper reconnaissance, the high-resolution images – it’s just going to be a completely different picture. That Clipper reconnaissance is really key. Europa Clipper is tentatively scheduled to launch sometime in the early 2020s, and will be the first mission back to Europa since Galileo. It will perform dozens of close flybys of the moon, studying both the surface and the ocean below. Mission concepts for the lander to follow Europa Clipper are also being devised, using data from Clipper to select a landing spot. Both missions should be able to bring us closer to knowing if any kind of life exists in Europa’s dark ocean. Bottom line: Europa’s subsurface ocean offers the tantalizing possibility of alien life elsewhere in our solar system. Drilling through the thick ice crust on top of it for a sample would be difficult though. But now new research shows that a future lander might only have to “scratch the surface” to access any organic molecules deposited from the ocean below, in areas where there is less radiation exposure. Looking for life on Europa may actually be easier than we thought. Paul Scott Anderson has had a passion for space exploration that began when he was a child when he watched Carl Sagan’s Cosmos. While in school he was known for his passion for space exploration and astronomy. He started his blog The Meridiani Journal in 2005, which was a chronicle of planetary exploration. In 2015, the blog was renamed as Planetaria. While interested in all aspects of space exploration, his primary passion is planetary science. In 2011, he started writing about space on a freelance basis, and now currently writes for AmericaSpace and Futurism (part of Vocal). He has also written for Universe Today and SpaceFlight Insider, and has also been published in The Mars Quarterly and has done supplementary writing for the well-known iOS app Exoplanet for iPhone and iPad.
https://earthsky.org/space/an-easier-way-to-search-for-life-on-europa
18
40
How to exercise your brain for better thinking skills realize that it takes both time to evaluate, change, and develop critical thinking skills. Developing critical skills: a workbook for pre-service teachers focuses on the identification and development of the essential skills and dispositions necessary to. Critical thinking activities for kids importance of critical thinking skills to use and develop their problem-solving skills our critical thinking exercises. Keep all critical thinking exercises on a developmentally appropriate level for example activities for helping kids develop abstract thinking skills. Fun critical thinking activities staar requires us to provide more opportunities for collaboration and reflection in order to promote critical thinking skills. Critical thinking activities to improve writing skills encourages students to think, choose their words carefully, and produce concise, accurate, detailed, and. Critical thinking skills are check out this udemy course, develop your critical thinking skills affective skills critical thinking exercises also. Developing critical thinking through science presents standards-based, hands-on, minds-on activities that help students learn basic physical science principles and. Critical thinking worksheets for teachers good creative thinking exercises name places that study skills worksheets - great for test preparation. 5 tools to develop critical thinking skills before college board games and logic puzzles are two ways high school students can boost their analytical skills. Ways to develop critical thinking skills find and save ideas about critical thinking activities on pinterest | see more ideas about thinking skills. Critical reading strategies working to develop your critical reading skills recognize that those shoes fit a certain way of thinking. Exercises to improve your child's critical thinking skills there is no one strategy to support and teach your child how to think critically as a parent, your role. Check out these 10 great ideas for critical thinking activities and see thinking activities that engage your students critical thinking skills at a men. It also involves the ability to know fact from opinion when exploring a topic these exercises are designed to help you develop critical thinking skills. Exercise 1 read through the students who develop critical thinking skills are more able to developing critical thinking skills learning centre 8. Critical thinking for managers: a manifesto exhibit and teach critical thinking skills the development of positive thinking dispositionsa is key. Critical-thinking skills exercises help a person to understand the reasons for one's beliefs and actions according to opencourseware in critical thinking. Do you know that developing critical thinking skills can help you deal with your problems in a better way explore this write-up to know all about critical skills and. Assesses student responses to and evaluates effectiveness of exercises designed to develop critical and analytical reasoning skills in the biological sciences. In this lesson we'll explore critical thinking skills, examine how they develop, and provide a few sample exercises that can be used to work on. Developing your critical thinking skills is an essential part of strengthening your ability to perform as an effective manager or leader learn more here. Following directions: activities to develop creative & critical thinking skills [ellie weiler] on amazoncom free shipping on qualifying offers improve critical. 
What is critical thinking what can help in the development of creative thinking skills in this article, we give you ways to develop this faculty of thinking and. Developing critical thinking skills, 7th edition, chapter 7 exercises exercise: p 233 now complete the following items: point of view. Developing first year students’ critical thinking skills five exercises that can be used to develop critical we develop critical thinking skills in first. To address the challenge of developing critical thinking skills in college students, this empirical study examines the effectiveness of cognitive exercises in. 81 fresh & fun critical-thinking activities engaging activities and reproducibles to develop kids’ higher-level thinking skills by laurie rozakis. Critical readind activities to develop critical thinking in science classes begoña oliveras1 conxita márquez2 and neus sanmartí3 1, 2, 3 department of science and. Developing critical thinking skills extension class you'll hone your new-found skills through a series of fun exercises come and develop those critical thinking.
http://adpapernkfy.komedo.info/developing-critical-thinking-skills-exercises.html
18
18
Ever since scientists first discovered the existence of black holes in our universe, we have all wondered: what could possibly exist beyond the veil of that terrible void? In addition, ever since the theory of General Relativity was first proposed, scientists have been forced to wonder what could have existed before the birth of the Universe – i.e. before the Big Bang?

Interestingly enough, these two questions have come to be resolved (after a fashion) with the theoretical existence of something known as a Gravitational Singularity – a point in space-time where the laws of physics as we know them break down. And while there remain challenges and unresolved issues about this theory, many scientists believe that beneath the veil of an event horizon, and at the beginning of the Universe, this was what existed.

In scientific terms, a gravitational singularity (or space-time singularity) is a location where the quantities that are used to measure the gravitational field become infinite in a way that does not depend on the coordinate system. In other words, it is a point at which all physical laws are indistinguishable from one another, where space and time are no longer interrelated realities, but merge indistinguishably and cease to have any independent meaning.

Origin of Theory:

Singularities were first predicted as a result of Einstein's Theory of General Relativity, which resulted in the theoretical existence of black holes. In essence, the theory predicted that any star compressed within a certain critical radius (known as the Schwarzschild Radius) would collapse under its own gravity. At that point, nothing would be capable of escaping its surface, including light, because the escape velocity at its surface would exceed the speed of light in vacuum – 299,792,458 meters per second (1,079,252,848.8 km/h; 670,616,629 mph).

The associated mass threshold is known as the Chandrasekhar Limit, named after the Indian astrophysicist Subrahmanyan Chandrasekhar, who proposed it in 1930. At present, the accepted value of this limit is believed to be about 1.39 Solar Masses (i.e. 1.39 times the mass of our Sun), which works out to a whopping 2.765 x 10^30 kg (or 2,765 trillion trillion metric tons).

Another aspect of modern General Relativity is that the Big Bang (i.e. the initial state of the Universe) was a singularity. Roger Penrose and Stephen Hawking both developed theories that attempted to answer how gravitation could produce singularities, which eventually merged together to be known as the Penrose–Hawking Singularity Theorems.

According to the Penrose Singularity Theorem, which he proposed in 1965, a time-like singularity will occur within a black hole whenever matter satisfies certain energy conditions. At this point, the curvature of space-time within the black hole becomes infinite, thus turning it into a trapped surface where time ceases to function. The Hawking Singularity Theorem added to this by stating that a space-like singularity can occur when matter is forcibly compressed to a point, causing the rules that govern matter to break down. Hawking traced this back in time to the Big Bang, which he claimed was a point of infinite density. However, Hawking later revised this to claim that general relativity breaks down at times prior to the Big Bang, and hence no singularity could be predicted by it. Some more recent proposals also suggest that the Universe did not begin as a singularity.
These include theories like Loop Quantum Gravity, which attempts to unify the laws of quantum physics with gravity. This theory states that, due to quantum gravity effects, there is a minimum distance beyond which gravity no longer continues to increase, or that interpenetrating particle waves mask gravitational effects that would be felt at a distance.

Types of Singularities:

The two most important types of space-time singularities are known as Curvature Singularities and Conical Singularities. Singularities can also be divided according to whether they are covered by an event horizon or not. In the case of the former, you have the Curvature and Conical; whereas in the latter, you have what are known as Naked Singularities.

A Curvature Singularity is best exemplified by a black hole. At the center of a black hole, space-time becomes a one-dimensional point which contains a huge mass. As a result, gravity becomes infinite and space-time curves infinitely, and the laws of physics as we know them cease to function.

Conical singularities occur when there is a point where the limit of every generally covariant quantity is finite. In this case, space-time looks like a cone around this point, with the singularity located at the tip of the cone. An example of such a conical singularity is a cosmic string, a hypothetical one-dimensional defect that is believed to have formed during the early Universe.

And, as mentioned, there is the Naked Singularity, a type of singularity which is not hidden behind an event horizon. These were first suggested in 1991 by Shapiro and Teukolsky, whose computer simulations of a rotating plane of dust indicated that General Relativity might allow for “naked” singularities. In this case, what actually transpires within a black hole (i.e. its singularity) would be visible. Such a singularity would theoretically be what existed prior to the Big Bang. The key word here is theoretical, as it remains a mystery what these objects would look like.

For the moment, singularities and what actually lies beneath the veil of a black hole remain a mystery. As time goes on, it is hoped that astronomers will be able to study black holes in greater detail. It is also hoped that in the coming decades, scientists will find a way to merge the principles of quantum mechanics with gravity, and that this will shed further light on how this mysterious force operates.

We have many interesting articles about gravitational singularities here at Universe Today. Here are 10 Interesting Facts About Black Holes, What Would A Black Hole Look Like?, Was the Big Bang Just a Black Hole?, Goodbye Big Bang, Hello Black Hole?, Who is Stephen Hawking?, and What's on the Other Side of a Black Hole?
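As a small numerical companion to the Schwarzschild radius mentioned above, the sketch below evaluates r_s = 2GM/c² for a few masses. The 1.39-solar-mass figure echoes the value quoted in the article; the supermassive-black-hole mass is an illustrative round number, not something taken from the text.

```python
# Schwarzschild radius: r_s = 2*G*M / c**2 -- the radius within which a mass M
# must be compressed for its escape velocity to reach the speed of light.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_SUN = 1.989e30     # kg

def schwarzschild_radius_m(mass_kg: float) -> float:
    return 2 * G * mass_kg / c**2

examples = {
    "Sun (1 solar mass)": M_SUN,
    "1.39 solar masses (limit quoted above)": 1.39 * M_SUN,
    "4 million solar masses (illustrative supermassive black hole)": 4.0e6 * M_SUN,
}
for name, mass in examples.items():
    print(f"{name}: r_s = {schwarzschild_radius_m(mass) / 1000:.1f} km")
```

For the Sun this gives roughly 3 km, which is why only extreme compression, not any property of ordinary stars at their normal size, produces an event horizon.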
https://www.universetoday.com/84147/singularity/
18
16
Balloons show up in science fair projects at every grade level because they make invisible forces such as air pressure, gas expansion, and thrust easy to see, and because the materials cost almost nothing. Popular balloon-based projects include:
- Self-inflating balloons. Stretch a balloon over the neck of a bottle containing vinegar, drop in baking soda (or use Pop Rocks candy and soda pop), and the carbon dioxide released by the reaction inflates the balloon with no blowing required. It is an easy way to show a chemical reaction producing a gas (a rough estimate of how much gas follows after this list).
- Balloon rockets and balloon-powered cars. A balloon released along a string, or taped to a lightweight car, demonstrates Newton's laws of motion: the air rushing out one way pushes the balloon the other way. Longer "airship" balloons usually work better than round ones, and students can test whether the balloon's shape affects the thrust.
- Balloon hovercraft. An inflated balloon fixed over the hole of an old CD creates a cushion of air that lets the disc glide across a table, a simple model of how a hovercraft reduces friction.
- Hot and cold water balloons. Moving a balloon-capped bottle between hot and cold water shows how air expands and contracts with temperature, and opens up discussions of density, air pressure, and surface tension.
- Balloon barometer. A balloon stretched over a jar with a straw pointer attached makes a simple barometer that responds to changes in atmospheric pressure, a good starting point for weather projects.
- Hot air and helium balloons. A homemade hot air balloon, or a simple helium balloon demonstration, introduces buoyancy and why lighter-than-air gases rise.
These activities suit everything from one-day elementary and middle school demonstrations to more structured investigations in which students identify the independent, dependent, and controlled variables.
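As a rough, purely illustrative estimate of why the baking-soda-and-vinegar balloon works, the sketch below uses the 1:1 mole ratio of the reaction and the ideal gas law to guess how much carbon dioxide one tablespoon of baking soda can release. The quantities (14 g of baking soda, room temperature, vinegar in excess) are assumptions made for the sake of the example, not values from the text above.

```python
# Rough estimate: CO2 released when baking soda reacts with excess vinegar.
# NaHCO3 + CH3COOH -> NaCH3COO + H2O + CO2  (1 mole of CO2 per mole of NaHCO3)

MOLAR_MASS_NAHCO3 = 84.0  # g/mol, sodium bicarbonate
R = 0.08206               # ideal gas constant, L*atm/(mol*K)
T = 293.0                 # K, roughly room temperature (assumed)
P = 1.0                   # atm

baking_soda_grams = 14.0  # about one tablespoon (assumed)
moles_co2 = baking_soda_grams / MOLAR_MASS_NAHCO3

volume_litres = moles_co2 * R * T / P
print(f"Approximate CO2 released: {volume_litres:.1f} L")  # about 4 L, plenty to inflate a party balloon
```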
http://bcpaperfwwd.archiandstyle.info/balloon-science-projects.html
18
73
The Pythagorean theorem states that in a right triangle, the square of the hypotenuse (the side across from the right angle) equals the sum of the squares of the two legs: a² + b² = c². It is usually introduced around 8th grade, it has been proven in a remarkable number of different ways, and it is used constantly in construction and design, which makes it a natural subject for hands-on projects. Ideas include:
- Hands-on proofs. Students cut out squares built on the three sides of a right triangle and rearrange the two smaller squares to cover the larger one, or use manipulatives to show how, why, and when the relationship holds.
- Crime scene investigation. A capstone-style unit asks students to solve a series of fictional crimes using the theorem together with measurement, critical thinking, and a bit of science.
- Construction applications. Because the theorem lets you calculate the diagonal connecting two perpendicular lengths, projects can involve designing roofs, checking that a frame is square, or planning a woodworking cut.
- Three-dimensional extensions. Applying the theorem twice shows that the diagonal of a box with sides a, b, and c has length √(a² + b² + c²) (see the short sketch after this list).
- Collaborative and research projects. Groups can work through past exam questions (for example, Regents problems) that require the theorem, trace its history, or present one of its many proofs; rubrics typically reward correctly identifying the legs and hypotenuse, setting up the equation, and interpreting the result.
Lesson plans aligned to the Common Core standards are available for middle school classrooms, including versions with student sheets in French.
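The two calculations mentioned above, the hypotenuse of a right triangle and the space diagonal of a box, are easy to check numerically. The short sketch below is illustrative only; the function names are my own.

```python
import math

def hypotenuse(a: float, b: float) -> float:
    """Length of the side opposite the right angle: c = sqrt(a^2 + b^2)."""
    return math.sqrt(a**2 + b**2)

def box_diagonal(a: float, b: float, c: float) -> float:
    """Space diagonal of a rectangular box, from applying the theorem twice."""
    return math.sqrt(a**2 + b**2 + c**2)

print(hypotenuse(3, 4))        # 5.0 -- the classic 3-4-5 right triangle
print(box_diagonal(3, 4, 12))  # 13.0
```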
http://cyassignmentdubr.afterschoolprofessional.info/pythagorean-theorem-projects.html
18
25
Word problem worksheets force students to read carefully, digest the situation, and then work logically and creatively toward the correct answer. Free printable collections, most with answer keys, are available for every level from 1st grade through high school, and they come in several flavours:
- Grade-by-grade sets: money, shapes, patterns, days of the week, and elapsed time for 2nd grade; multiplication and times-table word problems for 3rd grade; multi-step problems involving distance, speed, time, and measurement for 4th and 5th grade; prime numbers, factorization, and algebraic reasoning for 6th grade; and exponents, radicals, scientific notation, functions, the Pythagorean theorem, and volume for 8th grade.
- Topic-based practice: addition, subtraction, multiplication, division, fractions, decimals, and mixed operations, including self-marking online exercises and daily word-problem sets.
- Singapore math materials, with videos, worked solutions, and step-by-step model-drawing examples from grade 1 to grade 6.
- Worksheet generators that let teachers choose the difficulty level and print fresh problem sets from a browser (a minimal example follows after this list), plus adaptive online programs and step-by-step solvers covering algebra, geometry, trigonometry, calculus, and statistics.
Regular practice with word problems builds both computation skills and the habit of translating a real-world situation into the correct equation.
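As a minimal sketch of what the worksheet generators described above do, the snippet below prints a few randomized addition word problems with answers. The problem wording, function names, and number ranges are invented for illustration; real generators offer many more problem types and difficulty settings.

```python
import random

def make_addition_problem(max_value: int = 100) -> tuple[str, int]:
    """Return one randomized addition word problem and its answer."""
    a, b = random.randint(1, max_value), random.randint(1, max_value)
    question = f"Maya has {a} stickers and buys {b} more. How many stickers does she have now?"
    return question, a + b

def print_worksheet(n_problems: int = 5) -> None:
    """Print a numbered worksheet with the answer key inline."""
    for i in range(1, n_problems + 1):
        question, answer = make_addition_problem()
        print(f"{i}. {question}  (answer: {answer})")

print_worksheet()
```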
http://apassignmenteanf.orlandoparks.info/math-problems-worksheets.html
18
12
Seventh grade math bridges arithmetic and algebra. Under the Common Core standards, students in grade 7 analyze proportional relationships and use them to solve real-world problems (computing unit rates, for example); add, subtract, multiply, and divide rational numbers, including integers, and use all four operations to solve problems; work with expressions and equations (standards 7.EE.1-2); tackle multi-step percent problems such as percent of increase, along with consumer math topics like unit prices (both illustrated in the short sketch below); solve unknown angle problems; and find area, surface area, and volume. Solving basic equations, such as one-step "solve for x" problems, prepares students for the heavier algebra and geometry of 8th grade.
Free resources for practicing these skills include online lesson libraries and video courses covering 6th through 8th grade math, printable worksheets and full-year review tests for every 7th grade and pre-algebra topic, and games in which students match integer addition and subtraction problems with the correct answers in as few attempts as possible. Word problems deserve particular attention: research suggests that much of the difficulty students later have with algebra stems from weak arithmetic and weak problem-solving skills, and that affective factors also influence seventh graders' problem-solving ability.
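Two of the 7th grade skills mentioned above, unit rates and percent of increase, reduce to one-line formulas. The sketch below is illustrative only; the function names and example numbers are mine.

```python
def unit_rate(total: float, units: float) -> float:
    """Rate per single unit, e.g. dollars per pound: total cost / total quantity."""
    return total / units

def percent_increase(old: float, new: float) -> float:
    """Percent of increase from an old value to a new value."""
    return (new - old) / old * 100

print(unit_rate(9.00, 6))        # 1.5  -> 6 pounds of apples for $9 is $1.50 per pound
print(percent_increase(40, 50))  # 25.0 -> a price rise from $40 to $50 is a 25% increase
```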
http://zhhomeworkslul.isomerhalder.us/seventh-grade-math-problems.html
18
18
A fraction is a common way of expressing rational numbers that aren't whole numbers (integers), and it can also be used to find a fractional part of a whole number. The concept of fractions is generally taught at the grade school level and must be mastered before advancing in mathematics.
1. Identify the components of a fraction. A fraction is defined as the expression a/b, where a and b are integers. In the fraction a/b, a is the numerator and b is the denominator.
2. Find the fraction of an integer. You can calculate the fraction of a whole number by multiplying the number by the numerator and dividing that product by the denominator. Thus, the fraction a/b of a whole number x is given by ax/b.
3. Calculate the fraction of an integer for specific cases. For example, ¾ of 21 is (3 × 21)/4, or 63/4. This result is known as an improper fraction because the numerator is greater than the denominator.
4. Convert an improper fraction to a mixed number. A mixed number contains an integer and a proper fraction. The integer portion of an improper fraction is the largest integer less than or equal to the improper fraction, and the difference between the mixed number and that integer is a proper fraction. For example, 63/4 is equal to 15.75, so the integer portion is 15 and the fractional portion is 0.75, or 3/4. Therefore, 63/4 = 15 3/4.
5. Reduce a fraction by dividing the numerator and denominator by their greatest common factor (GCF). The GCF of two integers a and b is the largest integer c such that a/c and b/c are both integers. For example, the GCF of 20 and 24 is 4, so the fraction 20/24 is equal to (20/4)/(24/4), or 5/6. (A short code sketch after these steps shows the same calculations.)
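The three calculations walked through above (taking a fraction of a number, converting an improper fraction to a mixed number, and reducing by the GCF) translate directly into code. This is a small illustrative sketch; the function names are my own, and Python's math.gcd handles the greatest-common-factor step.

```python
from math import gcd

def fraction_of(numerator: int, denominator: int, x: int) -> float:
    """The fraction a/b of a whole number x is (a * x) / b."""
    return numerator * x / denominator

def to_mixed(numerator: int, denominator: int) -> tuple[int, int, int]:
    """Convert an improper fraction to (whole part, remaining numerator, denominator)."""
    whole, remainder = divmod(numerator, denominator)
    return whole, remainder, denominator

def reduce(numerator: int, denominator: int) -> tuple[int, int]:
    """Divide top and bottom by their greatest common factor."""
    g = gcd(numerator, denominator)
    return numerator // g, denominator // g

print(fraction_of(3, 4, 21))  # 15.75, i.e. 3/4 of 21 = 63/4
print(to_mixed(63, 4))        # (15, 3, 4), i.e. 15 3/4
print(reduce(20, 24))         # (5, 6)
```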
https://sciencing.com/how-to-find-a-fraction-of-a-number-12751782.html
18
30
Bilingual education is a broad term that refers to the presence of two languages in instructional settings. The term is, however, "a simple label for a complex phenomenon" (Cazden and Snow, p. 9) that depends upon many variables, including the native language of the students, the language of instruction, and the linguistic goal of the program, to determine which type of bilingual education is used. Students may be native speakers of the majority language or a minority language. The students' native language may or may not be used to teach content material. Bilingual education programs can be considered either additive or subtractive in terms of their linguistic goals, depending on whether students are encouraged to add to their linguistic repertoire or to replace their native language with the majority language (see Table 1 for a typology of bilingual education). Bilingual education is used here to refer to the use of two languages as media of instruction. Need for Bilingual Education At the beginning of the twenty-first century, proficiency in only one language is not enough for economic, societal, and educational success. Global interdependence and mass communication often require the ability to function in more than one language. According to the 2000 U.S. Census, more than 9.7 million children ages five to seventeen–one of every six school-age children–spoke a language other than English at home. These language-minority children are the fastest-growing segment of the U.S. school-age population. Between 1990 and 2000, the population of language-minority children increased by 55 percent, while the population of children living in homes where only English is spoken grew by only 11 percent. Language-minority students in U.S. schools speak virtually all of the world's languages, including more than a hundred that are indigenous to the United States. Language-minority students may be monolingual in their native language, bilingual in their native language and English, or monolingual in English but from a home where a language other than English is spoken. Those who have not yet developed sufficient proficiency in English to learn content material in all-English-medium classrooms are known as limited English proficient (LEP) or English language learners (ELLs). Reliable estimates place the number of LEP students in American schools at close to four million. Benefits of Bilingualism and Theoretical Foundations of Bilingual Education Bilingual education is grounded in common sense, experience, and research. Common sense says that children will not learn academic subject material if they can't understand the language of instruction. Experience documents that students from minority-language backgrounds historically have higher dropout rates and lower achievement scores. Finally, there is a basis for bilingual education that draws upon research in language acquisition and education. Research done by Jim Cummins, of the Ontario Institute for Studies in Education at the University of Toronto, supports a basic tenet of bilingual education: children's first language skills must become well developed to ensure that their academic and linguistic performance in the second language is maximized. 
Cummins's developmental interdependencetheory suggests that growth in a second language is dependent upon a well-developed first language, and his thresholds theory suggests that a child must attain a certain level of proficiency in both the native and second language in order for the beneficial aspects of bilingualism to accrue. Cummins also introduced the concept of the common underlying proficiency model of bilingualism, which explains how concepts learned in one language can be transferred to another. Cummins is best known for his distinction between basic interpersonal communication skills (BICS) and cognitive academic language proficiency (CALP). BICS, or everyday conversational skills, are quickly acquired, whereas CALP, the highly decontextualized, abstract language skills used in classrooms, may take seven years or more to acquire. Stephen Krashen, of the School of Education at the University of Southern California, developed an overall theory of second language acquisition known as the monitor model. The core of this theory is the distinction between acquisition and learning–acquisition being a subconscious process occurring in authentic communicative situations and learning being the conscious process of knowing about a language. The monitor model also includes the natural order hypothesis, the input hypothesis, the monitor hypothesis, and the affective filter hypothesis. Together, these five hypotheses provide a structure for, and an understanding of how to best design and implement, educational programs for language-minority students. Krashen put his theory into practice with the creation of the natural approach and the gradual exit model, which are based on a second tenet of bilingual education–the concept of comprehensible input. In other words, language teaching must be designed so that language can be acquired easily, and this is done by using delivery methods and levels of language that can be understood by the student. Bilingual Education around the World It is estimated that between 60 and 75 percent of the world is bilingual, and bilingual education is a common educational approach used throughout the world. It may be implemented in different ways for majority and/or minority language populations, and there may be different educational and linguistic goals in different countries. In Canada, immersion education programs are designed for native speakers of the majority language (English) to become proficient in a minority language (French), whereas heritage-language programs are implemented to assist native speakers of indigenous and immigrant languages become proficient in English. In Israel, bilingual education programs not only help both the Arabic-and Hebrew-speaking populations become bilingual, they also teach Hebrew to immigrants from around the world. In Ireland, bilingual education is being implemented to restore the native language. In many South American countries, such as Peru and Ecuador, there are large populations of indigenous peoples who speak languages other than Spanish. Bilingual education programs there have the goal of bilingualism. Throughout Europe, bilingual education programs are serving immigrant children as well as promoting bilingualism for speakers of majority languages. Bilingual Education in the United States Since the first colonists arrived on American shores, education has been provided through languages other than English. As early as 1694, German-speaking Americans were operating schools in their mother tongue. 
As the country expanded, wherever language-minority groups had power, bilingual education was common. By the mid-1800s, there were schools throughout the country using German, Dutch, Czech, Spanish, Norwegian, French, and other languages, and many states had laws officially authorizing bilingual education. In the late 1800s, however, there was a rise in nativism, accompanied by a large wave of new immigrants at the turn of the century. As World War I began, the language restrictionist movement gained momentum, and schools were given the responsibility of replacing immigrant languages and cultures with those of the United States. Despite myths to the contrary, non-native English speakers neither learned English very quickly nor succeeded in all-English schools. A comparison of the high-school entry rates based on a 1908 survey of public schools shows, for example, that in Boston, while 70 percent of the children of native whites entered high school, only 32 percent of the children of non-native English-speaking immigrants did so. However, at the beginning of the twentieth century one could easily find a good job that did not require proficiency in English. By 1923, thirty-four states had passed laws mandating English as the language of instruction in public schools. For the next two decades, with significantly reduced immigration levels, bilingual education was virtually nonexistent in the public schools, although parochial and private schools continued to teach in languages other than English. In the post–World War II period, however, a series of events–including increased immigration, the Brown vs. Board of Education Supreme Court decision, the civil rights movement, the Soviet launch of the Sputnik satellite, the National Defense Education Act, the War on Poverty, and the Elementary and Secondary Education Act of 1965–led to a rebirth of bilingual education in the United States. In 1963, in response to the educational needs of the large influx of Cuban refugees in Miami, Coral Way Elementary School began a two-way bilingual education program for English-speaking and Spanish-speaking students. In 1967, U.S. Senator Ralph Yarborough introduced a bill, the Bilingual Education Act, as Title VII of the Elementary and Secondary Education Act, noting that children who enter schools not speaking English cannot understand instruction that is conducted in English. By the mid-1970s, states were funding bilingual education programs, and many passed laws mandating or permitting instruction though languages other than English. In 1974, the Supreme Court heard the case of Lau v. Nichols, a class-action suit brought on behalf of Chinese students in the San Francisco schools, most of whom were receiving no special instruction despite the fact that they did not speak English. The Court decided that these students were not receiving equal educational opportunity because they did not understand the language of instruction and the schools were not doing anything to assist them. The Court noted that "imposition of a requirement that, before a child can effectively participate in the educational program, he must already have acquired those basic [English] skills is to make a mockery of public education." 
While there has never been a federal mandate requiring bilingual education, the courts and federal legislation–including Title VI of the Civil Rights Act of 1964, which prohibits discrimination on the basis of race, color, or national origin in federally assisted programs and activities, and the Equal Educational Opportunities Act of 1974, which defines a denial of educational opportunity as the failure of an educational agency to take appropriate action to overcome language barriers that impede equal participation by its students in its instructional programs–have attempted to guarantee that LEP students are provided with comprehensible instruction. The population of the United States became more and more diverse as immigration levels reached record levels between the 1970s and the turn of the century, and bilingual education programs were implemented throughout the country. The Bilingual Education Act was reauthorized in 1974, 1978, 1984, 1988, 1994, and 2001, each time improving and expanding upon the opportunities for school districts and institutions of higher education to receive assistance from this discretionary, competitive grant program. The 2001 reauthorization significantly changed the program, replacing all references to bilingual education with the phrase "language instruction educational program" and turning it into a state-administered formula-grant program. Characteristics of Good Bilingual Education Programs Good bilingual education programs recognize and build upon the knowledge and skills children bring to school. They are designed to be linguistically, culturally, and developmentally appropriate for the students and have the following characteristics: - High expectations for students and clear programmatic goals. - A curriculum that is comparable to the material covered in the English-only classroom. - Instruction through the native language for subject matter. - An English-language development component. - Multicultural instruction that recognizes and incorporates students' home cultures. - Administrative and instructional staff, and community support for the program. - Appropriately trained personnel. - Adequate resources and linguistically, culturally, and developmentally appropriate materials. - Frequent and appropriate monitoring of student performance. - Parental and family involvement. Debate over Bilingual Education The debate over bilingual education has two sources. Part of it is a reflection of societal attitudes towards immigrants. Since language is one of the most obvious identifiers of an immigrant, restrictions on the use of languages other than English have been imposed throughout the history of the United States, particularly in times of war and economic uncertainty. Despite claims that the English language is in danger, figures from the 2000 Census show that 96 percent of those over the age of five speak English well or very well. Rolf Kjolseth concluded that language is also closely associated with national identity, and Americans often display a double standard with regard to bilingualism. On the one hand, they applaud a native English-speaking student studying a foreign language and becoming bilingual, while on the other hand they insist that non-native English speakers give up their native languages and become monolingual in English. Much of the debate over bilingual education stems from an unrealistic expectation of immediate results. 
Many people expect LEP students to accomplish a task that they themselves have been unable to do–become fully proficient in a new language. Furthermore, they expect these students to do so while also learning academic subjects like mathematics, science, and social studies at the same rate as their English-speaking peers in a language they do not yet fully command. While students in bilingual education programs maintain their academic progress by receiving content-matter instruction in their native language, they may initially lag behind students in all-English programs on measures of English language proficiency. But longitudinal studies show that not only do these students catch up, but they also often surpass their peers both academically and linguistically. Proposition 227, a ballot initiative mandating instruction only in English for students who did not speak English, and passed by 63 percent of the 30 percent of the people in California who voted in 1998, is both a reflection of the public debate over bilingual education and an example of the impact of public opinion on education policy. Although only 30 percent of the LEP students in California were enrolled in bilingual education programs at the time (the other 70 percent were in all-English programs), bilingual education was identified as the cause of academic failure on the part of Hispanic students (many of whom were monolingual in English), and the public voted to prohibit bilingual education. Instead, LEP students were to be educated through sheltered English immersion during a temporary transition period not normally to exceed one year. Three years after the implementation of Proposition 227, the scores of LEP students on state tests were beginning to decline rather than increase. Research Evidence on the Effectiveness of Bilingual Education There are numerous studies that document the effectiveness of bilingual education. One of the most notable was the eight-year (1984-1991) Longitudinal Study of Structured English Immersion Strategy, Early-Exit and Late-Exit Programs for Language-Minority Children. The findings of this study were later validated by the National Academy of Sciences. The study compared three different approaches to educating LEP students where the language of instruction was radically different in grades one and two. One approach was structured immersion, where almost all instruction was provided in English. A second approach was early-exit transitional bilingualeducation, in which there is some initial instruction in the child's primary language (thirty to sixty minutes per day), and all other instruction in English, with the child's primary language used only as a support, for clarification. However, instruction in the primary language is phased out so that by grade two, virtually all instruction is in English. The third approach was late-exit transitional bilingual education, where students received 40 percent of their instruction in the primary language and would continue to do so through sixth grade, regardless of whether they were reclassified as fluent-English-proficient. Although the outcomes were not significantly different for the three groups at the end of grade three, by the sixth grade late-exit transitional bilingual education students were performing higher on mathematics, English language, and English reading than students in the other two programs. 
The study concluded that those students who received more native language instruction for a longer period not only performed better academically, but also acquired English language skills at the same rate as those students who were taught only in English. Furthermore, by sixth grade, the late-exit transitional bilingual education students were the only group catching up academically, in all content areas, to their English-speaking peers; the other two groups were falling further behind. Virginia Collier and Wayne Thomas, professors in the Graduate School of Education at George Mason University, have conducted one of the largest longitudinal studies ever, with more than 700,000 student records. Their findings document that when students who have had no schooling in their native language are taught exclusively in English, it takes from seven to ten years to reach the age and grade-level norms of their native English-speaking peers. Students who have been taught through both their native language and English, however, reach and surpass the performance of native English-speakers across all subject areas after only four to seven years when tested in English. Furthermore, when tested in their native language, these bilingual education students typically score at or above grade level in all subject areas. Ninety-eight percent of the children entering kindergarten in California's Calexico School District are LEP. In the early 1990s, the school district shifted the focus of its instructional program from student limitations to student strengths–from remedial programs emphasizing English language development to enriched programs emphasizing total academic development; from narrow English-as-a-second-language programs to comprehensive developmental bilingual education programs that provide dual-language instruction. In Calexico schools, LEP students receive as much as 80 percent of their early elementary instruction in their native language. After students achieve full English proficiency, they continue to have opportunities to study in, and further develop, their Spanish language skills. By the late 1990s, Calexico's dropout rate was half the state average for Hispanic students, and more than 90 percent of their graduates were continuing on to junior or four-year colleges and universities. The evidence on the effectiveness of dual immersion (or two-way) bilingual education programs is even more compelling. In dual immersion programs, half of the students are native speakers of English and half are native speakers of another language. Instruction is provided through both languages and the goal of these programs is for all students to become proficient in both languages. In her research, Kathryn Lindholm-Leary, a professor of child development in the College of Education at San Jose State University, found that in developing proficiency in the English language, both English and Spanish speakers benefit equally from dual-language programs. Whether they spend 10 to 20 percent or 50 percent of their instructional day in English, students in such programs are equally proficient in English. Mathematics achievement was also found to be highly related across the two languages, demonstrating that content learned in one language is available in the other language. Despite limited English instruction and little or no mathematics instruction in English, students receiving 90 percent of their instruction in Spanish score at or close to grade level on mathematics achievement tests in English. 
Bilingual education offers great opportunities to both language-majority and language-minority populations. It is an educational approach that not only allows students to master academic content material, but also become proficient in two languages–an increasingly valuable skill in the early twenty-first century. See also: Bilingualism, Second Language Learning, AND English as a Second Language; Foreign Language Education. Baker, Colin. 1995. A Parents' and Teachers' Guide to Bilingualism. Clevedon, Eng.: Multilingual Matters. Baker, Colin. 1996. Foundations of Bilingual Education and Bilingualism, 2nd edition. Clevedon, Eng.: Multilingual Matters. Baker, Colin. 2000. The Care and Education of Young Bilinguals: An Introduction for Professionals. Clevedon, Eng.: Multilingual Matters. Baker, Colin, and Hornberger, Nancy H., eds. 2001. An Introductory Reader to the Writings of Jim Cummins. Clevedon, Eng.: Multilingual Matters. Cazden, Courtney B., and Snow, Catherine E., eds. 1990. "English Plus: Issues in Bilingual Education." Annals of the American Academy of Political and Social Science, Volume 508. London:Sage. Collier, Virginia P. 1992. "A Synthesis of Studies Examining Long-Term Language Minority Student Data on Academic Achievement." Bilingual Research Journal 16 (1&2) 187–212. Collier, Virginia P., and Thomas, Wayne P. 1997. School Effectiveness for Language Minority Students, NCBE Resource Collection Number 9. Washington, DC: National Clearinghouse for Bilingual Education. Collier, Virginia P., and Thomas, Wayne P. 2002. A National Survey of School Effectiveness for Language Minority Students' Long-Term Academic Achievement: Executive Summary. Santa Cruz, CA: Crede. Crawford, James. 1991. Bilingual Education: History, Politics, Theory and Practice, 2nd edition. Los Angeles: Bilingual Educational Services. Crawford, James. 1997. Best Evidence: Research Foundations of the Bilingual Education Act. Washington, DC: National Clearinghouse for Bilingual Education. Cummins, James. 1979. "Linguistic Interdependence and the Educational Development of Bilingual Children." Review of Educational Research 49:222–251. Cummins, James. 1980. "The Entry and Exit Fallacy in Bilingual Education." NABE Journal 4:25–60. Cummins, James. 2000. Language, Power, and Pedagogy: Bilingual Children in the Crossfire. Clevedon, Eng., and Buffalo, NY: Multilingual Matters. Kjolseth, Rolf. 1983. "Cultural Politics of Bilingualism." Society 20 (May/June):40–48. Krashen, Stephen D. 1999. Condemned Without a Trial: Bogus Arguments Against Bilingual Education. Portsmouth, NH: Heinemann. Lindholm-Leary, Kathryn. 2000. Biliteracy for a Global Society: An Idea Book on Dual Language Education. Washington, DC: National Clearing-house for Bilingual Education. Lindholm-Leary, Kathryn. 2001. Dual Language Education. Clevedon, Eng., and Buffalo, NY: Multilingual Matters. National Clearinghouse for Bilingual Education. 2000. The Growing Number of Limited English Proficient Students. Washington, DC: National Clearinghouse for Bilingual Education. RamÍrez, J. David; Yuen, Sandra D.; and Ramey, Dena R. 1991. Final Report: Longitudinal Study of Structured English Immersion Strategy, Early-Exit and Late-Exit Programs for Language-Minority Children. Report Submitted to the U.S. Department of Education. San Mateo, CA: Aguirre International. Skutnabb-Kangas, Tove. 1981. Bilingualism or Not: The Education of Minorities. Clevedon, Eng.: Multilingual Matters. United States Government Accounting Office. 2001. 
Meeting the Needs of Students with Limited English Proficiency. Washington, DC: Government Accounting Office. Zelasko, Nancy Faber. 1991. The Bilingual Double Standard: Mainstream Americans' Attitudes Towards Bilingualism. Ph.D. diss., Georgetown University. Nancy F. Zelasko, "Bilingual Education," Encyclopedia of Education (Encyclopedia.com).

Bilingualism is the ability to communicate in two different languages. Bilingual education is the use of two different languages in classroom instruction. Languages are learned most readily during the toddler and preschool years and, to a lesser extent, during elementary school. Therefore, children growing up in bilingual homes and/or receiving bilingual education easily acquire both languages. Throughout much of the world, bilingualism is the norm for both children and adults. In the past, immigrants to the United States often began learning and using English in their homes as soon as possible. In the early 2000s, however, many immigrants choose to maintain their native language at home. Bilingual children are at an advantage in this increasingly multilingual nation.

Bilingual language development
Language acquisition is very similar for monolingual and bilingual children, although some experts view bilingualism as a specialized case of language development. Children growing up in homes where two different languages are spoken usually acquire both languages simultaneously. Although their acquisition of each language may be somewhat slower than that of children who are acquiring a single language, their development in the two languages combined is equivalent to that of monolingual children. Bilingual language learners proceed through the same patterns of language and speech development as children acquiring a single language. Their first words usually are spoken at about one year of age, and they begin stringing two words together at about age two. Even if the two languages do not share similarities in pronunciation, children eventually master them both. There are two major patterns of bilingual language development, both occurring before the age of three. Simultaneous bilingualism occurs when a child learns both languages at the same time. In the early stages of simultaneous bilingual language development, a child may mix words, parts of words, and inflections from both languages in a single sentence. Sometimes this occurs because a child knows a word in one language but not in the other. Some bilingual children initially resist learning words for the same thing in two languages. Children also may experiment with their two languages for effect. During the second stage of bilingual language development, at age four or older, children gradually begin to distinguish between the two languages and use them separately, sometimes depending on where they are. One language may be used less formally to talk about home and family, whereas the other language may be used more formally, perhaps for relating events that took place outside the home. Often children find it easier to express a specific idea in one language rather than the other.
Bilingual children also go through periods when one language is used more than the other. Some children may begin to prefer one language over the other, particularly if that language is spoken more frequently in their home or school. Bilingual children usually are not equally skilled in both languages. Often they understand more in one language but speak more in the other. Sequential bilingualism occurs when children use their knowledge of and experience with a first language to rapidly acquire a second language. The first language may influence the way in which they learn and use their second language. Learning the second language is easier for children if the sounds, words, and vocabulary of the languages are similar. Bilingual language development usually proceeds more smoothly when both languages are introduced early and simultaneously. When the parents each use a different language with their child, the child is less likely to experience language confusion. Research indicates that there are numerous advantages to bilingualism. Bilingualism has been reported to improve the following skills: - verbal and linguistic abilities - general reasoning - concept formation - divergent thinking - metalinguistic skills, the ability to analyze and talk about language and control language processing These abilities are important for reading development in young children and may be a prerequisite for later learning to read and write in a new language. Types of bilingual education Bilingual education is common throughout the world and involves hundreds of languages. In the United States bilingualism is assumed to mean English and another language, often Spanish. More than 300 languages are spoken in the United States. In New York City schools, classroom instruction is given in 115 different languages. Bilingual education includes all teaching methods that are designed to meet the needs of English-language learners (ELLs), also referred to as "limited English proficient" (LEP) students. There are numerous approaches to bilingual education, although all include English as a second language (ESL). ESL is English language instruction that includes little or no use of a child's native language. ESL classes often include students with many different primary languages. Some school districts use a variety of approaches to bilingual education, designing individual programs based on the needs of each child. A common approach is transitional bilingual education (TBE). TBE programs include ESL; however, some or all academic classes are conducted in children's primary languages until they are well-prepared for English-only classes. Even children who converse well in English may not be ready to learn academic subjects in English. Often these children spend part of the school day in an intensive ESL program and the remainder of the day receiving instruction in their primary language. Bilingual teachers may help students improve their primary language skills. Bilingual/bicultural programs include instruction in the history and culture of a student's ethnic heritage. Studies have shown that children who receive several years of instruction in their native language learn English faster and have higher overall academic achievement levels that those who do not. Two-way bilingual or dual-language programs use both English and a second language in classrooms made up of both ELLs and native English speakers. The goal is for both groups to become bilingual. 
Children in twoway bilingual education programs have been found to outperform their peers academically. Many educators—and a segment of the public—believe in the English immersion approach, even if ELLs do not understand very much in the classroom. In this approach nearly all instruction is in English, and there is little or no use of other languages. If the teacher is bilingual, students may be allowed to ask questions in their native language, but the teacher answers them in English. Some schools employ structured English immersion or sheltered English, in which teachers use pictures, simple reading words, and other techniques to teach ELLs both English and academic subjects. History of bilingual education Although bilingual education has been used in the United States for more than 200 years, the 1968 Title VII amendment to the 1965 Elementary and Secondary Education Act (ESEA) instituted federal grants for bilingual education programs. This legislation led to the development of appropriate teaching and learning materials and training for teachers of bilingual students. In 1974 the U.S. Supreme Court ruled that the San Francisco school system had violated the Civil Rights Act of 1964 by not providing English-language instruction for Chinese-speaking students. All school districts were directed to serve ELLs adequately, and bilingual education quickly spread throughout the United States. In the 1980s a group called Asian Americans United filed a class-action lawsuit charging that Asian Americans were not being provided with an equitable education because they were not offered bilingual classes. The result of this suit was the creation of sheltered ESL, in which ESL students take all of their classes together. The No Child Left Behind (NCLB) Act of 2001—President George W. Bush's major education initiative—reauthorized the ESEA. It also imposed penalties on schools that did not raise the achievement levels of ELLs for at least two consecutive years. Although most research indicates that it often takes seven years for ELLs to attain full English fluency, the new federal law allows these children only three years before they must take standardized tests in English. Schools with large numbers of children speaking many different languages are particularly disadvantaged under the law. A 2003 survey by the National Education Association found that 22,000 schools in 44 states failed to make the required yearly progress on standardized tests, primarily because of low test scores by ELLs and disabled students. The National Association for Bilingual Education claims that NCLB sets arbitrary goals for achievement and uses "invalid and unreliable assessments." Furthermore, although the NCLB requires teachers to be qualified, as of 2004 there is a severe shortage of qualified teachers for ELLs. Some communities have developed early-intervention programs for Spanish-speaking parents and preschoolers to help children develop their Spanish language skills in preparation for entering English-only schools. In May of 2004, the U.S. Department of Education and faith-based community leaders launched an initiative to inform Hispanic, Asian, and other parents of ELLs about the NCLB. It featured the "Declaration of Rights for Parents of English Language Learners under No Child Left Behind." As of 2004 American public schools include about 11 million children of immigrants. Approximately 5.5 million students—10 percent of the public school enrollment—speak little or no English. 
Spanish speakers account for 80 percent of these children. About one-third of children enrolled in urban schools speak a primary language other than English in their homes. Between 2001 and 2004, 19 states reported increases of 50 to 200 percent in Spanish-speaking students. ELLs are the fastest-growing public school population in kindergarten through twelfth grade. Between 2000 and 2002, nationwide ELL enrollment increased 27 percent. About 25 percent of California public school children are ELLs. However, there is a profound shortage of bilingual and ESL teachers throughout the United States. Although 41 percent of U.S. teachers have ELLs in their classrooms, only about 2.5 percent of them have degrees in ESL or bilingual education. The majority of these teachers report that they are not well-prepared for teaching ELLs. About 75 percent of ELLs are in poverty schools, where student turnover is high and many teachers have only emergency credentials. Opposition to bilingual education In 1980 voters in Dade County, Florida, made English their official language. In 1981 California Senator S. I. Hayakawa introduced a constitutional amendment to make English the country's official language. In 1983 Hayakawa founded U.S. English, Inc., which grew to include 1.8 million members by 2004. U.S. English argues the following premises: - The unifying effect of the English language must be preserved in the United States. - Bilingual education fails to adequately teach English. - Learning English quickly in English-only classrooms is best for ELLs, both academically and socially. - Any special language instruction should be short-term and transitional. In 1986 California voters passed Proposition 63 that made English the state's official language. Other states did the same. In 1998 Californians passed Proposition 227, a referendum that attempted to eliminate bilingual education by allowing only one year of structured English immersion, followed by mainstreaming. Similar initiatives have appeared on other state ballots. However, only 9 percent of the California children attained English proficiency in one year, and most remained in the immersion programs for a second year. Prior to the new law only 29 percent of California ELLs were in bilingual programs, in part because of a shortage of qualified teachers. Since the law allowed parents to apply for waivers, 12 percent of the ELLs were allowed to remain in bilingual classes. In January of 2004, as part of a lawsuit settlement, the California State Board of Education was forced to radically revise the implementation of their "Reading First" program. Previously California had withheld all of the $133 million provided by NCLB from ELLs enrolled in alternative bilingual programs. Language and learning difficulties occur with the same frequency in monolingual and bilingual children. However, as the number of bilingual children in the United States increases, it becomes increasingly important for parents and pediatricians to understand the normal patterns of bilingual language development in order to recognize abnormal language development in a bilingual child. If a bilingual child has a speech or language problem, it should be apparent in both languages. However detecting language delays or abnormalities in bilingual children can be difficult. 
Signs of possible language delay in bilingual children include the following:
- not making sounds between two and six months of age
- fewer than one new word per week in children aged six to 15 months
- fewer than 20 words in the two languages combined by 20 months of age
- limited vocabulary without word combinations in children aged two to three years
- prolonged periods without using speech
- difficulty remembering words
- missing normal milestones of language development in the first language of a sequentially bilingual child

Language development in bilingual children can be assessed by a bilingual speech/language pathologist or by a professional who has knowledge of the rules and structure of both languages, perhaps with the assistance of a translator or interpreter. ELLs in English-only programs often fall behind academically. Many ELLs who are assessed using traditional methods are referred for special education. Such children often become school drop-outs.

Parents in bilingual households can help their children by taking the following steps:
- speaking the language in which they are most comfortable
- being consistent regarding how and with whom they use each language
- using each language's grammar in a manner that is appropriate for the child's developmental stage
- keeping children interested and motivated in language acquisition

Elementary and Secondary Education Act (ESEA) —The 1965 federal law that is reauthorized and amended every five years.
English as a second language (ESL) —English language instruction for English language learners (ELLs) that includes little or no use of a child's native language; a component of all bilingual education programs.
English language learner (ELL) —A student who is learning English as a second language; also called limited English proficient (LEP).
Immersion —A language education approach in which English is the only language used.
Limited English proficient (LEP) —Used to identify children who have insufficient English to succeed in English-only classrooms; also called English language learner (ELL).
Metalinguistic skills —The ability to analyze language and control internal language processing; important for reading development in children.
No Child Left Behind (NCLB) Act —The 2001 reauthorization of the ESEA, President George W. Bush's major education initiative.
Sequential bilingualism —Acquiring a first language and then a second language after the age of three.
Sheltered English —Structured English immersion; English instruction for ELLs that focuses on content and skills rather than the language itself; uses simplified language, visual aids, physical activity, and the physical environment to teach academic subjects.
Sheltered ESL —Bilingual education in which ESL students attend all of their classes together.
Simultaneous bilingualism —Acquiring two languages simultaneously before the age of three.
Structured English immersion —Sheltered English; English-only instruction for ELLs that uses simplified language, visual aids, physical activity, and the physical environment to teach academic subjects.
Transitional bilingual education (TBE) —Bilingual education that includes ESL and academic classes conducted in a child's primary language.
Two-way bilingual education —Dual language programs in which English and a second language are both used in classes consisting of ELLs and native-English speakers.

See also Language development.

Bhatia, Tej K., and William C. Ritchie, eds. The Handbook of Bilingualism.
Malden, MA: Blackwell, 2004. Cadiero-Kaplan, Karen. The Literacy Curriculum and Bilingual Education: A Critical Examination. New York: P. Lang, 2004. Calderon, Margarita, and Liliana Minaya-Rowe. Designing and Implementing Two-Way Bilingual Programs: A Step-by-Step Guide for Administrators, Teachers, and Parents. Thousand Oaks, CA: Corwin Press, 2003. Crawford, James. Educating English Learners: Language Diversity in the Classroom. Los Angeles, CA: Bilingual Educational Services, 2004. Santa Ana, Otto, ed. Tongue-Tied: The Lives of Multilingual Children in Public Education. Lanham, MD: Rowman & Littlefield, 2004. San Miguel Jr., Guadalupe. Contested Policy: The Rise and Fall of Federal Bilingual Education in the United States, 1960–2001. Denton, TX: University of North Texas Press, 2004. Dillon, Sam. "School Districts Struggle with English Fluency Mandate." New York Times November 5, 2003. Gutiérrez-Clellen, Vera F., et al. "Verbal Working Memory in Bilingual Children." Journal of Speech, Language, and Hearing Research 47, no. 4 (August 2004): 863–76. Hamers, Josiane F. "A Sociocognitive Model of Bilingual Development." Journal of Language and Social Psychology 23, no. 1 (March 2004): 70. Hammer, Carol Scheffner, et al. "Home Literacy Experiences and Their Relationship to Bilingual Preschoolers' Developing English Literacy Abilities: An Initial Investigation." Language, Speech, and Hearing Services in Schools 34 (January 2003): 20–30. American Speech-Language-Hearing Association. 10801 Rockville Pike, Rockville, MD 20852. Web site: <http://asha.org>. National Association for Bilingual Education. 1030 15th St., NW, Suite 470, Washington, DC 20005. Web site: <www.nabe.org>. National Association for Multicultural Education. 733 15th St., NW, Suite 430, Washington, DC 20005. Web site: <http://nameorg.org>. National Clearinghouse for English Language Acquisition. Office of English Language Acquisition, Language Enhancement & Academic Achievement for Limited English Proficient Students, U.S. Department of Education, George Washington University Graduate School of Education and Human Development, 2121 K St., NW, Suite 260, Washington, DC 20037. Web site: <www.ncela.gwu.edu>. U.S. English Inc. 1747 Pennsylvania Ave., NW, Suite 1050, Washington, DC 20006. Web site: <www.usenglish.org>. "Children and Bilingualism." American Speech-Language-Hearing Association. Available online at <www.asha.org/public/speech/development/Bilingual-Children.htm> (accessed December 6, 2004). "Immigrant Children Enrolled in Some of the State's Poorest School Districts Will Now Have Access to Millions of Dollars to Help Them Learn to Read." hispanicvista, January 29, 2004. Available online at <www.latinobeat.net/html4/013104be.htm> (accessed December 6, 2004). Jehlen, Alain. "English Lessons." National Education Association, May 2002. Available online at <www.nea.org/neatoday/0205/cover.html> (accessed December 6, 2004). "Language Development in Bilingual Children." KidsGrowth.com. Available online at <www.kidsgrowth.com/resources/articledetail.cfm?id=1229> (accessed December 6, 2004). "What is Bilingual Education?" National Association for Bilingual Education, 2001. Available online at <www.nabe.org/faq_detail.asp?ID=20> (accessed December 6, 2004). "What's the Score on English-Only?" National Education Association, May 2002. Available online at <www.nea.org/neatoday/0205/cover.html> (accessed December 6, 2004). Margaret Alic, PhD "Bilingualism/Bilingual Education." 
Gale Encyclopedia of Children's Health: Infancy through Adolescence. Encyclopedia.com. http://www.encyclopedia.com/medicine/encyclopedias-almanacs-transcripts-and-maps/bilingualismbilingual-education

On June 2, 1998, California voters approved Proposition 227, a measure designed to eliminate bilingual education, the use of another language along with English, in their public schools. In the preceding two decades there had been many other efforts to do away with bilingual education. The subject is a lightning rod of controversy and one of the most recognizable issues in what commentators have termed the nation's "culture wars." Its history is as controversial as its practice. One central question is whether or not the United States had a true "bilingual tradition." Another is to what degree bilingual education represented movements toward either assimilation or ethnic maintenance. Bilingual education goes back as far as the colonial period in the United States. Franciscan missionaries from California to Texas systematically used indigenous languages in translating and teaching the Catholic catechism to Native Americans. In the English-speaking colonies before the American Revolution and up through the early Republic, a myriad of ethnic groups, especially Germans, patronized bilingual schools, although influential thinkers and nationalists such as Noah Webster and Benjamin Franklin opposed them because they feared linguistic heterogeneity. The nineteenth century witnessed the rise of significant pro-bilingualism legislation, particularly for German speakers. In the 1830s, for example, the state of Ohio constitutionally guaranteed German-English bilingual education to local communities that wanted it. States such as Indiana, Illinois, Michigan, and Wisconsin also protected German bilingual education through statutory or constitutional means. Cities such as Baltimore, Cincinnati, Cleveland, Indianapolis, Milwaukee, and St. Louis operated large, public bilingual programs for German Americans. Historian Heinz Kloss links the bilingual education of the past with that of today and sees it as evidence of a national bilingual tradition. Several historians defend this interpretation through focused, regional studies. Other scholars criticize this contention, holding instead that it was a disorganized phenomenon and not representative of a true bilingual tradition. While German Americans were certainly the most influential nineteenth-century practitioners of bilingual education, many other groups were also involved. Though often relegated to private bilingual schools due to their lack of political influence, Czechs, Italians, Poles, Mexicans, and others established bilingual programs when they deemed them necessary, usually because of a belief that the public schools were culturally intolerant. Most immigrants wanted their children to speak English, and preferred bilingual to completely non-English schools. In the middle of the nineteenth century, Mexican Americans utilized both public and private bilingual schools, particularly in New Mexico and Texas, which had recently been acquired from Mexico.
At this time the state of Louisiana constitutionally protected bilingual instruction for its native French speakers. Chicago's Catholic schools implemented bilingual education for Poles at the turn of the century, with Americanization as the goal. Indeed, one of bilingual education's most important rationales among non-ethnic educators was the belief that it furthered Americanization by making public schools more desirable to ethnic parents and by ensuring some level of English instruction. This varied, hard-to-define bilingual tradition in nineteenth-century America was the product of a Jeffersonian society committed to principles of local, limited government and undertaken at the behest of the ethnic communities themselves. These ethnic epicenters, sometimes called island communities, were targeted by both the Progressive and Americanization movements in the twentieth century. Progressives advocated centralized control of educational decision making and wanted to standardize the teaching of non-English-speaking children using an English-Only pedagogy. Traditional bilingual methods depended upon literacy in a foreign language for the ultimate acquisition of English. However, English-Only entailed all-English instruction for non-English speakers; not one foreign language word could be used in the lessons. Violation of these rules meant physical punishment and possibly expulsion for students. For teachers it entailed incarceration, fines, and loss of certification.

The Americanization movement during the hysteria of World War I resulted in the criminalization of bilingual education. Though the Wilson administration discussed outlawing all German in the nation's public schools, it settled for federal directives to the states urging replacement of bilingual education with English-Only. States pursued this to extremes. But in Meyer v. Nebraska in 1923 the U.S. Supreme Court grudgingly overturned a Nebraska law banning all foreign languages in private institutions. English-Only pedagogy and IQ testing became key legal and educational justifications for segregated schools. Despite a brief flirtation with foreign languages during World War II, English-Only remained the nation's official pedagogical approach for non-English speakers well into the 1960s. By then scholars had begun to question English-Only's pedagogical assumptions. Also, ethnic activists, especially Mexican Americans, brought increasing legal and political pressure to bear against English-Only's segregating effect on their children. These unrelated forces culminated in the modern bilingual education movement.

The Bilingual Education Act–passed in late 1967 and signed early in 1968–represented bilingual education's rebirth in the United States. It was signed by Lyndon Johnson, the only American president with experience in teaching non-English-speaking children: during the 1928-1929 academic year, the young Johnson taught impoverished Mexican Americans in Cotulla, Texas (ironically, he taught these children using English-Only). Bilingual education's growth during the 1970s was aided by its utility as an affirmative curricular tool in desegregation cases. In 1970 the Office for Civil Rights in the Nixon administration's Department of Health, Education, and Welfare ruled that grouping children in so-called "special" or "educationally retarded" classes on the basis of language was a violation of their civil rights. Spurred by Chinese-American parents, the Supreme Court ruled four years later in Lau v.
Nichols that schools were obligated to offer non-English-speaking children equal educational opportunity, in this case bilingual education. However, bilingual education was never uniformly accepted. By the late 1970s a significant backlash against it developed among serious intellectuals and nativist groups. The Reagan administration actively sought to discredit bilingual education by promoting English as a Second Language (ESL) as a better option. This politicization escalated in the 1990s, culminating in California's Proposition 227. In the early twenty-first century, bilingual education remains a hot-button political issue with an indisputably rich and meaningful history in the United States. See also: Education, United States; Literacy. Crawford, James. 1999. Bilingual Education: History, Politics, Theory, and Practice, 4th ed. Los Angeles: Bilingual Educational Services. Davies, Gareth. 2002. "The Great Society after Johnson: The Case of Bilingual Education." Journal of American History 88:1405-1429. Finkelman, Paul. 1996. "German American Victims and American Oppressors: The Cultural Background and Legacy of Meyer v. Nebraska." In Law and the Great Plains: Essays on the Legal History of the Heartland, ed. John R. Wunder. Westport, CT: Greenwood Press. Kloss, Heinz. 1977. The American Bilingual Tradition. Rowley, MA: Newbury House. Leibowitz, Arnold H. 1971. Educational Policy and Political Acceptance: The Imposition of English as the Language of Instruction in American Schools. Washington, DC: Center for Applied Linguistics. San Miguel, Guadalupe, Jr. 1984. "Conflict and Controversy in the Evolution of Bilingual Education in the United States–An Interpretation." Social Science Quarterly 65: 508-518. Schlossman, Steven L. 1983. "Is There an American Tradition of Bilingual Education? German in the Public Elementary Schools, 1840-1919." American Journal of Education 91:139-186. Tamura, Eileen H. 1994. Americanization, Acculturation, and Ethnic Identity: The Nisei Generation in Hawaii. Urbana: University of Illinois Press. Wiebe, Robert H. 1967. The Search for Order, 1877-1920. New York: Hill and Wang. Carlos Kevin Blanton

"Bilingual Education." Encyclopedia of Children and Childhood in History and Society. Encyclopedia.com. http://www.encyclopedia.com/children/encyclopedias-almanacs-transcripts-and-maps/bilingual-education

Use of a language other than English in public school classrooms. The language rights of ethnic minorities in the United States have been a source of public controversy for close to two decades. The 1970s saw record levels of immigration, bringing an estimated 4 million legal and 8 million illegal immigrants into the country. To accommodate this dramatic surge in the nation's population of foreign language speakers, language assistance has been mandated on the federal, state, and local levels in areas ranging from voting and tax collection to education, social services, disaster assistance, and consumer rights.
Today Massachusetts offers driver's license tests in 24 languages; residents of California can choose one of six different languages when they vote; street signs in some parts of Miami are printed in both English and Spanish; and classroom instruction is taught in 115 different languages in New York City schools. Altogether, over 300 languages are spoken in the United States. As of 1990, 31.8 million Americans spoke a language other than English at home, and the country's population included 6.7 million non-English speakers. Nationwide, one-third of the children enrolled in urban schools speak a language other than English at home as their first language. Around 2.6 million schoolchildren throughout the country do not speak English at all.

Organized opposition to bilingualism, which collectively became known as the English-Only movement, began in the 1980s. In 1980 voters in Dade County, Florida, designated English as their official language. The following year, U.S. Senator S.I. Hayakawa of California introduced a constitutional amendment to make English the country's official language. Two influential English-Only lobbying groups were formed: U.S. English, in 1983, and English First, in 1986. In 1986, with the passage of Proposition 63, English became the official language of California. By the mid-1990s, 22 states had passed similar measures. In August 1996, the U.S. House of Representatives, by a margin of 259-169, passed a bill to make English the official language of the federal government. (However, President Bill Clinton vowed to veto the bill if it passed the Senate.) Observers attribute the English-Only movement to backlash against immigration and affirmative action, spurred by fear of competition for jobs and resentment of government spending on bilingual programs. The government program that has drawn the most fire is bilingual education, which costs taxpayers an estimated $200 million a year in federal funds and billions of dollars in state and local expenditures.

Bilingual education programs, which allow students to pursue part of their study in their first language and part in English, were first mandated by Congress in 1968. The constitutionality of bilingual education was upheld in a 1974 Supreme Court ruling affirming that the city of San Francisco had discriminated against 18,000 Chinese-American students by failing to make special provisions to help them overcome the linguistic barriers they faced in school. However, the court did not specify what these provisions should be, and educators have evolved several different methods of instruction for students with first languages other than English.

With the immersion (or "sink or swim") approach, nearly all instruction is in English, and the students are expected to pick up the language through intensive exposure. If the teacher is bilingual, the students may be allowed to ask questions in their native language, but the teacher is supposed to answer them in English. The English as a Second Language (ESL) approach, often used in a class where students speak more than one foreign language, takes a more gradual approach to mastering English, using it in conjunction with the student's first language. English-only instruction may be offered, but only in some, rather than all, classes. The remaining methods rely more heavily on the student's first language.
Even though, technically, all teaching methods aimed at meeting the needs of foreign language speakers are considered bilingual education, participants in debates about bilingual education often single out the following methods as targets of praise or criticism. In Transitional Bilingual Education (TBE), students study English but are taught all other academic subjects in their native languages until they are considered ready to switch to English. In some cases, bilingual teachers also help the students improve their skills in their native language. Bilingual/bicultural programs use the students' native languages not only to teach them the standard curriculum but also for special classes about their ethnic heritage and its history and culture. Two-way or dual language programs enroll students from different backgrounds with the goal of having all of them become bilingual, including those who speak only English. For example, Spanish-speaking children may learn English while their English-speaking classmates learn Spanish. Critics of bilingual education (or of those methods that rely heavily on the students' native languages) claim that it fails to provide children with an adequate knowledge of English, thus disadvantaging them academically, and they cite high dropout rates for Hispanic teenagers, the group most likely to have received instruction in their native language. They accuse school systems of continuing to promote bilingual programs to protect the jobs of bilingual educators and receive federal funding allocated for such programs. As evidence of this charge, they cite barriers placed in the way of parents who try to remove their children from bilingual programs. Hispanic parents in New York City have claimed that their children are being railroaded into bilingual programs by a system that requires all children with Spanish surnames, as well as children of any nationality who have non-English-speaking family members, to take a language proficiency exam. Children scoring in the bottom 40% are then required to enroll in bilingual classes even if English is the primary language spoken at home. Critics of bilingual instruction also cite a 1994 New York City study that reported better results for ESL instruction than for methods that taught children primarily in their native languages. In spite of the criticism it has aroused, bilingual education is strongly advocated by many educators. Defenders cite a 1991 study endorsed by the National Academy of Sciences stating that children who speak a foreign language learn English more rapidly and make better overall academic progress when they receive several years of instruction in their native language. A later study, conducted at George Mason University, tracked 42,000 children who had received bilingual instruction and reported that the highest scores on standardized tests in the eleventh grade were earned by those students who had had six years of bilingual education. Programs with two way bilingual education have had particularly impressive results. Oyster Bilingual Elementary School in Washington, D.C., (whose student body is 58% Hispanic, 26% white, 12% black, and 4% Asian) is admiringly cited as a model for bilingual education. Its sixth graders read at a ninth-grade level and have tenth-grade-level math skills. 
Experts on both sides of the controversy agree that for any teaching method to be successful, the teaching must be done by qualified instructors equipped with adequate teaching materials in appropriately assigned classes with a reasonable ratio of students to teachers. Chavez, Linda. Out of the Barrio: Toward a New Politics of Hispanic Assimilation. New York: Basic Books, 1991. Crawford, James. Hold Your Tongue: Bilingualism and the Politics of "English-Only." Reading, MA: Addison-Wesley Publishing Co., 1992. Harlan, Judith. Bilingualism in the United States: Conflict and Controversy. New York: Franklin Watts, 1991. Lang, Paul. The English Language Debate: One Nation, One Language! Springfield, NJ: Enslow Publishers, Inc., 1995. Porter, Rosalie Pedalino. Forked Tongue: The Politics of Bilingual Education. New York: Basic Books, 1990. Rodriguez, Richard. Hunger of Memory: The Education of Richard Rodriguez. New York: Bantam Books, 1983. Simon, Paul. The Tongue-Tied American: Confronting the Foreign Language Crisis. New York: Continuum, 1980. Multicultural Education, Training, and Advocacy, Inc. (META). 240A Elm Street, Suite 22, Somerville, MA 02144. National Association for Bilingual Education (NABE). Union Center Plaza, 1220 L Street NW, Suite 605, Washington, DC 20005. U.S. English. 818 Connecticut Ave. NW, Suite 200, Washington, DC 20006.

"Bilingualism/Bilingual Education." Gale Encyclopedia of Psychology. Encyclopedia.com. http://www.encyclopedia.com/medicine/encyclopedias-almanacs-transcripts-and-maps/bilingualismbilingual-education-0

EDUCATION, BILINGUAL. Bilingual education refers to an educational program in which both a native language and a second language are taught as subject matter and used as media of instruction for academic subjects. In the United States the tradition of public bilingual education began during the 1840s as a response to the many children who spoke German, Dutch, French, Spanish, Swedish, and other languages. As a result of the nativism of World War I and the adverse popular reaction to the large number of non-English-speaking immigrants entering the country in the late nineteenth and early twentieth centuries, restrictive laws prohibiting instruction in languages other than English brought this educational practice to a halt. Renewed interest developed, however, with the civil rights movement of the 1960s. In 1968 Congress provided funding for bilingual programs in Title VII of the Elementary and Secondary Education Act, also known as the Bilingual Education Act. In Lau v. Nichols (1974) the U.S. Supreme Court ruled that eighteen hundred Chinese students in the San Francisco School District were being denied a "meaningful education" when they received English-only instruction and that public schools had to provide special programs for students who spoke little or no English. The number of students fitting this description increased dramatically toward the end of the twentieth century. Between 1989 and the end of the century, for example, their numbers grew from 2,030,451 to 4,148,997, representing an increase from 5 percent to almost 9 percent of the school-age population.
These children come from more than one hundred language groups. Of those served by special language programs, almost half are enrolled in bilingual education programs; the others are served by English-as-a-second-language or regular education programs. In the 1990s an increasing number of English-speaking children sought to learn a second language by enrolling in enrichment bilingual education programs. Title VII appropriation for special language programs for both minority language and mainstream groups rose from $7.5 million in 1969 to $117 million in 1995. The effectiveness of such programs has been much debated. Opponents have claimed that promoting languages other than English would result in national disunity, inhibit children's social mobility, and work against the rise of English as a world language. Advocates propose that language is central to the intellectual and emotional growth of children. Rather than permitting children to languish in classrooms while developing their English, proponents claim that a more powerful long-term strategy consists of parallel development of intellectual and academic skills in the native language and the learning of English as a second language. Proponents also argue that immigrants and other non-English-speaking students have valuable resources to offer this multicultural nation and the polyglot world. While in 1999 forty-three states and the District of Columbia had legislative provisions for bilingual and English-as-a-second-language programs, the citizens of California and Arizona voted to restrict the use of languages other than English for instruction. The growing anti-bilingual-education movement had similar proposals on the ballot in other states at the beginning of the twenty-first century. August, Diane, and Kenji Hakuta. Education of Language-minority Children. Washington, D.C.: National Academy Press, 1998. Baker, Colin, and Sylvia Prys Jones. Encyclopedia of Bilingualism and Bilingual Education. Clevedon, U.K.: Multilingual Matters, 1998. Brisk, Maria Estela. Bilingual Education: From Compensatory to Quality Schooling. 2d ed. Mahwah, N.J.: Erlbaum, 2001. See also Spanish Language.

"Education, Bilingual." Dictionary of American History. Encyclopedia.com. http://www.encyclopedia.com/history/dictionaries-thesauruses-pictures-and-press-releases/education-bilingual

bilingualism, ability to use two languages. Fluency in a second language requires skills in listening comprehension, speaking, reading, and writing, although in practice some of those skills are often considerably less developed than others. Few bilinguals are equally proficient in both languages. However, even when one language is dominant (see language acquisition), performance in the other language may be superior in certain situations—e.g., someone generally stronger in Russian than in English may find it easier to talk about baseball in English. Native speakers of two languages are sometimes called equilingual, or ambilingual, if their mastery of both languages is equal. Some bilinguals are persons who were reared by parents who each spoke a different language or who spoke a language different from the one used in school.
In some countries, especially those with two or more official languages, schools encourage bilingualism by requiring intensive study of a second language. Bilinguals sometimes exhibit code-switching, or switching from one language to the other in the middle of a conversation or even the same sentence; it may be triggered by the use of a word that is similar in both languages. See G. Saunders, Bilingual Children (1988); K. Hyltenstam and L. K. Obler, ed., Bilingualism Across the Lifespan (1989).

"bilingualism." The Columbia Encyclopedia, 6th ed. Encyclopedia.com. http://www.encyclopedia.com/reference/encyclopedias-almanacs-transcripts-and-maps/bilingualism

bilingual education, the sanctioned use of more than one language in U.S. education. The Bilingual Education Act (1968), combined with a Supreme Court decision (1974) mandating help for students with limited English proficiency, requires instruction in the native languages of students. The National Association for Bilingual Education (founded 1975) is the main U.S. professional and advocacy organization for bilingual education. Critics (including the national group English First), who maintain that some students never join mainstream classes, have attempted to make English the official language in several states and cities; state ballot initiatives approved in California (1998) and Arizona (2000) mostly eliminated bilingual education programs there. Bilingualism proponents note the importance of ethnic heritage and the preservation of language and culture, as well as the need to educate non-English-speaking students in all subjects, not just English. See K. Hakuta, Mirror of Language: The Debate on Bilingualism (1986); J. Cummins and D. Corson, Bilingual Education (1997); and H. Kloss, The American Bilingual Tradition (1998).

"bilingual education." The Columbia Encyclopedia, 6th ed. Encyclopedia.com. http://www.encyclopedia.com/reference/encyclopedias-almanacs-transcripts-and-maps/bilingual-education
The United States has always been home to significant numbers of non-English speakers. Sometimes the language differences have been tolerated by English-speaking Americans, but not always. In the first half of the nineteenth century, for example, the most prevalent language next to English was German. In the 1850s, bilingual schools (schools in which two languages were taught) teaching in German and English were operating in Baltimore, Maryland; Cincinnati and Cleveland, Ohio; Indianapolis, Indiana; Milwaukee, Wisconsin; and St. Louis, Missouri. Similarly, Louisiana, with its large French-speaking population, allowed bilingual instruction in its schools. (See New France.) Several states in the Southwest had Spanish as well as English instruction. Hundreds of thousands of children in the United States were educated in a language other than English.

Around 1900, anti-immigrant sentiments in the country increased. Several states passed laws against teaching in other languages. Immigrant children who did not speak English began to have a hard time in the public schools. In 1908, only 13 percent of the immigrant children enrolled in New York City schools at age twelve were likely to go on to high school, as opposed to 32 percent of native-born students. This trend was mirrored across the country as non-English-speaking immigrant children, not understanding the language spoken in their classrooms, fell further and further behind.

During World War I (1914–18), an intense wave of nationalism (pride and loyalty to one's own country, sometimes in an excessive way) swept the country. It reinforced the negative reaction of many Americans to the large number of immigrants entering the country. By 1925, thirty-seven states had passed laws requiring instruction in English regardless of the dominant language of the region. This opposition to bilingual education continued into the 1950s. Many children whose native language was not English received a very poor education in the public school system.

Federal government support

After the Cuban revolution of 1959, waves of Cubans fled to South Florida. Florida's Coral Way school district established the first state-supported program in decades to instruct students in Spanish, their native language, thereby easing their transition to English. The bilingual program provided all students, Anglo and Cuban, instruction in both Spanish and English with excellent results. With the success of the Coral Way project, state and local government involvement in language education became accepted. The federal government soon took up the cause, starting with the Civil Rights Act of 1964, which prohibited discrimination in education, and the Elementary and Secondary Education Act of 1965, which funded schools and provided help for disadvantaged students. In 1968, after considerable debate, Congress passed a bill that amended (modified) the Elementary and Secondary Education Act. Under the amendment, the federal government would provide funding for bilingual education to school districts with a large proportion of non-English-speaking students who lived in poor neighborhoods.
To receive funding, districts would be required to provide instruction in a student's native language until the child could demonstrate competence in English. The federal government put hundreds of millions of dollars into bilingual education programs nationwide by the mid-1970s.

Supreme Court support

In 1974, the Supreme Court gave its support to bilingual education in Lau v. Nichols. The ruling states that school districts with a substantial number of non-English-speaking students must take steps to overcome the students' language differences. After that ruling, the federal government was able to force school districts to initiate bilingual education plans. These Lau plans greatly expanded the number of bilingual programs across the country. They set standards to determine which students qualified for inclusion in a program and when they could be allowed (or forced) to exit. During this period, test scores repeatedly showed that non-English-speaking students who participated in well-designed bilingual programs consistently performed at the same level as their English-speaking classmates.

None of the new acts or policies clearly addressed the goals of bilingual programs. Should the programs aim to send the student quickly back to regular English-language classes, or should they take a slower approach, allowing the student to maintain good grades and stay up to standard with his or her age level in school? Different programs addressed these questions in their own ways, and the lack of clarity contributed to a conflict that lasted into the 2000s.

By the 1980s, a growing number of opponents of bilingual education believed that, rather than speeding immigrants into the English-speaking mainstream, bilingual education was causing them to hold onto their native languages and cultures. The critics considered this undesirable. Studies showed that some bilingual programs were allowing students to remain in bilingual classes longer than three years and were not teaching them sufficient English to function in mainstream classrooms. In the early 1980s, the federal government quietly withdrew its support for native-language instruction programs. In 1984, the government began providing funding for English immersion programs—programs that placed non-English-speaking students in all-English classes, forcing them to learn English in a hurry or be left behind. Several studies in the mid-1980s showed that the performance of the limited-English students in the English immersion programs declined. Meanwhile, public attitudes in California, with its rapidly growing foreign-born population, became increasingly hostile to bilingual programs. In 1998, California adopted an English-only requirement for instruction in all its schools. Arizona and several other states followed.

Bilingual education remained controversial in the 2000s. Advocates contended that non-English-speaking children will receive little or no education unless they are taught in their own language during the years when they are first learning English. With a poor start due to language difference, students are much more likely to drop out of school and consequently face low-paying jobs and poverty in the future. Opponents argue that students in bilingual programs may not be motivated to learn English as well as they should and will therefore not be able to secure good jobs later in life. They argue that the government should not use its funds to help non-native people preserve their cultures in the United States.
"Bilingual Education." U*X*L Encyclopedia of U.S. History. Encyclopedia.com. http://www.encyclopedia.com/history/encyclopedias-almanacs-transcripts-and-maps/bilingual-education

Bilingual education programs in schools aim to teach students to listen, comprehend, speak, read, and write in a language other than their native tongue. This is done most effectively when use of their primary language is encouraged as well. Students in bilingual classes acquire greater skills and acquire them more quickly when they continue to practice both languages. This also increases their effectiveness in the other core classroom subjects and helps them to develop social competencies. A language may be acquired by being in the environment where the language is spoken and participating in that cultural setting, or it may be learned in a classroom with field techniques that allow practice in the new language. Therefore, one goal of bilingual education is to create an environment where students and their cultures are fully supported. Baker, Colin. A Parents' and Teachers' Guide to Bilingualism. Clevedon, Eng.: Multilingual Matters, 2000. Baker, Colin. Foundations of Bilingual Education and Bilingualism. Clevedon, Eng.: Multilingual Matters, 1997. Baker, Colin, and Sylvia Prys Jones. Encyclopedia of Bilingualism and Bilingual Education. Clevedon, Eng.: Multilingual Matters, 1998. Valdes, Guadalupe, and Richard Figueroa. Bilingualism and Testing: A Special Case of Bias. Norwood, NJ: Ablex Publishing Corporation, 1994.

"Bilingual Education." Child Development. Encyclopedia.com. http://www.encyclopedia.com/children/applied-and-social-sciences-magazines/bilingual-education

Bilingual education developed into a particularly contentious topic for defining American identity in the twentieth century. While federal legislation since the 1960s has recognized the United States as a multilingual nation, the professed long-range goal of institutionalized bilingual education was not that students should achieve bilingualism but proficiency in English. The vast majority of bilingual education programs were considered "transitional," functioning to introduce younger students with limited English-speaking ability into the general education curriculum where English served historically as the language of instruction. Many bilingual programs were taught principally either in English or in the primary language of the student. However, by the end of the twentieth century, federally funded programs had begun to favor instruction in both English and the primary language, an apparent departure from the goal of achieving proficiency in a single language. The country's continued difficulty through the late twentieth century in educating immigrant children, mostly from Spanish-speaking countries, forced the federal legislature to institutionalize bilingual education.
Following the passage of the Civil Rights Act (1964), Congress passed the Bilingual Education Act (1968), providing the first federal funds for bilingual education. The federal government elaborated its guidelines in the amended Bilingual Education Act of 1974, the same year the Supreme Court rendered its landmark Lau v. Nichols decision, ruling that instructing students in a language they do not understand denies them a meaningful education and violates Title VI of the Civil Rights Act of 1964. "Bilingual" was often interpreted as "bicultural," suggesting that the question of bilingual education belonged to a broader debate over the efficacy of a polyglot society. The discussion in the United States focused on the progress of social mobility and the development of a unique American culture. For many, proficiency in English appeared to facilitate social advancement and incorporation into a mainstream culture despite that culture's multifaceted character. The letters of J. Hector St. John de Crèvecoeur in the late eighteenth century and Alexis de Tocqueville's published travels Democracy in America in 1835 contributed to an understanding of American culture as a "melting pot" of ethnicity. This identity became increasingly complex with the country's continued expansion through the nineteenth century and increasingly vexed with the rise of nationalism in the postbellum era. The nationalist urgency to homogenize the nation after the Civil War, accompanied by notions of Anglo-Saxon supremacy and the advent of eugenics, fueled further eruptions of nationalist sentiment, including loud, jingoist cries for a single national language after the First World War. However, by the middle of the twentieth century, efforts to empower underrepresented communities contributed to an increased public interest in multiculturalism and ethnocentric agendas.

A dramatic increase in immigration from Spanish-speaking countries during the second half of the twentieth century finally motivated the United States to institutionalize bilingual education. But the strong opposition to the bilingual education legislation of the early 1970s, expressed in the influential editorial pages of the Washington Post and the New York Times between 1975 and 1976, suggested that bilingual programs never enjoyed overwhelming public support. The articulate arguments of Richard Rodriguez, an editor at the Pacific News Service and author of Hunger of Memory (1982), contributed to this opposition by distinguishing between private (primary language) and public (English) language while influential figures like Pulitzer Prize-winning historian Arthur M. Schlesinger, Jr., author of The Disuniting of America (1992), documented an increased national disenchantment with multiculturalism and bilingual education. Discussions of bilingual education more often centered on Latino communities in metropolitan areas such as Miami, Los Angeles, and New York. But the debate was not exclusively Latino. The Lau v. Nichols verdict, which involved a Chinese-speaking student, along with the advent of post-Vietnam War Asian immigration, suggested that the debate was relevant to other communities in the country. Similar interests were present in localized but nationally observed efforts to incorporate the language of a surrounding community into a school's curriculum. A particularly contentious and widely publicized debate arose over "Ebonics" in Oakland, California, in the late 1990s.
Due in part to increasing black nationalism among African American intellectuals, prominent national political figures such as Reverend Jesse Jackson endorsed the incorporation of the local dialect and vernacular variations of language into the curriculum, while figures such as Harvard sociologist Cornel West and Harvard literary and social critic Henry Louis Gates, Jr., suggested that such programs lead to black ghettoization. California showcased a national concern about bilingual education at the end of the twentieth century. Bilingual education became increasingly contentious in the state in the late 1990s with the passage of a proposition eliminating bilingual instruction. Approval of the initiative occurred in the shadow of two earlier state propositions and a vote by the regents of the University of California to effectively terminate Affirmative Action, acts widely perceived in some underrepresented communities as attacks directed at Latino and immigrant communities. Bilingual programs enjoyed public support in cities with wide and long-established minority political bases, such as Miami, where they were viewed as beneficial to developing international economies, but California continued to focus the debate primarily on social and cultural concerns. Lau v. Nichols. United States Supreme Court, 1974. Porter, Rosalie Pedalino. Forked Tongue: The Politics of Bilingual Education. New York, Basic, 1990. Rodriguez, Richard. Hunger of Memory: The Education of Richard Rodriguez. Boston, David R. Godine, 1982. Schlesinger, Arthur, Jr. The Disuniting of America. New York, Norton, 1992. Tocqueville, Alexis de. Democracy in America. 1835.

"Bilingual Education." St. James Encyclopedia of Popular Culture. Encyclopedia.com. http://www.encyclopedia.com/media/encyclopedias-almanacs-transcripts-and-maps/bilingual-education

Sections within this essay: Background; Types of Bilingual Education; Setting the Stage; The Civil Rights Act (1964); Bilingual Education Act (1968); Lau v. Nichols; State and Local Initiatives; Grants and Programs; Center for Applied Linguistics; National Association for Bilingual Education (NABE); National Clearinghouse for Bilingual Education (NCBE); National Education Association (NEA); National Multicultural Institute (NMCI); Teachers of English to Speakers of Other Languages (TESOL); U.S. Department of Education Office of Bilingual Education and Minority Language Affairs (OBEMLA)

At the beginning of the twenty-first century there were some three million children in the United States who were classified as Limited English Proficient (LEP). For much of the twentieth century these students would have been placed in so-called "immersion programs," in which they would be taught solely in English until they understood it as well as or better than their native tongue. Beginning in the 1960s there was a gradual shift toward bilingual education, in which students can master English while retaining their native-language skills. There is a difference between bilingual programs and English as a Second Language (ESL) programs, although bilingual programs include an ESL component.
Bilingual programs are designed to introduce students to English gradually by working with them in both English and their native tongue. The students are able to master English without losing proficiency in the native language. In bilingual or dual language immersion, the class typically includes English speaking students and LEP students who share the same native language. Instruction is given in both English and the native language. In developmental or late-exit programs, all students share the same language; instruction begins in that language but gradually shifts to English as the students become more proficient. Transitional or early-exit programs are similar to developmental programs, except that the goal is mastery of English rather than bilingualism. Students who become proficient in English are transferred to English-only classes. Bilingualism is not generally a goal in ESL programs. In sheltered English or structured immersion programs, LEP students are taught in English (supplemented by gestures and other visual aids). The goal is acquisition of English. Pull-out ESL programs include English-only instruction, but LEP participants are "pulled out" of the classroom for part of the day for lessons in their native tongue. Bilingual education in the United States is a complex cultural issue because of two conflicting philosophies. On the one hand is the idea that the United States welcomes people from all societies, from all walks of life. Immigrants have long seen the States as the "Land of Opportunity," in which individuals can rise to the top through hard work and determination. They can build new identities for themselves, but they can also hold on to their past culture without fear of reprisal. At the same time, the United States is also the great "melting pot" in which immigrants are expected to assimilate if they wish to avail themselves of the many opportunities for freedom and success. Everyone who comes to the States, so they are told, should want to become American. Thus there are people who believe strongly that erasing an immigrant's native tongue is erasing a key cultural element. People are entitled to speak and use their native languages as they please; anything less goes against the freedom for which the United States stands. Besides, having proficiency or fluency in more than one language is a decided advantage in a world that has become more interdependent. There are other people who believe, equally strongly, that everyone who lives and works in the United States should speak, read, and write in English. Those who oppose bilingual programs for LEP students believe that allowing children to learn in their native tongue puts them at a disadvantage in a country in which English is the common language. A student whose instruction is in another language, they say, may never master English. This closes doors to opportunities including higher education and choice of career. There is no uniform opinion even among immigrant parents of LEP children. Some parents want their children to be taught in their native tongue as a means of preserving their culture. Others, wishing their children to have the same opportunities as native speakers of English, want their children to be taught in English from the outset. The one point on which everyone seems to agree is that LEP children deserve the best educational opportunities available, and any language program must be structured enough to give them a good foundation, while it remains flexible enough to meet their varied needs. 
Although we tend to think of bilingualism in the United States as a modern issue, in fact it has always been a part of our history. In the early days of exploration and colonization, French, Spanish, Dutch, and German were as common as English. By 1664, the year that the British took control of New York from the Dutch, there were some 18 languages (not including the native American tongues) spoken in lower Manhattan alone. No doubt many of the inhabitants of the colony were conversant in more than two languages. German and French remained common in colonial North America. Many Germans educated their children in German-language schools. Although many colonial leaders (among them Benjamin Franklin) complained about bilingualism, it was generally accepted. In fact, during and after the American Revolution, such documents as the Articles of Confederation were published in both English and German. During the nineteenth century millions of immigrants came to the United States and brought their languages with them. German remained popular, as did other European tongues. Spanish was introduced when the United States took possession of Texas, Florida, and California from Spain. The enormous wave of immigration that began in the 1880s and lasted until the early 1920s brought a change in sentiment toward bilingual education. The goals of voluntary assimilation were gradually replaced by strident calls for "Americanization." In Puerto Rico, Hawaii, and the Philippines (which the United States had acquired after the Spanish-American War in 1898), English was to be the language of instruction even though most of these new Americans spoke no English at all. In 1906, Congress passed a law, the first language law ever passed, requiring naturalized citizens to be able to speak English. Anti-bilingual sentiment got stronger as more immigrants poured into the United States. Anti-German sentiment, which reached its peak when the United States entered World War I in 1917, caused some communities to ban the use of German in public. By the end of the war, bilingualism had fallen out of favor even in areas where it had thrived. In 1924 strict immigration quotas sharply reduced the number of new foreigners coming into the United States. For almost the next 40 years, bilingual education in U.S. schools was almost exclusively based on variations of immersion; students were taught in English no matter what their native tongue was, and those who did not master English were required to stay back in the same grade until they became proficient. Bilingual education in the United States was pushed back into the spotlight as a direct result of the 1959 revolution in Cuba. After Fidel Castro overthrew the dictatorship and established a Communist government, many middle- and upper-class Cubans fled to the United States. A large number of these refugees settled in Florida. Well-educated but with little in the way of resources, they were assisted quite generously by the federal and state governments. Among this assistance was ESL instruction, provided by the Dade County (Florida) Public Schools. In addition, the school district launched a "Spanish for Spanish Speakers" program. In 1963, a bilingual education program was introduced at the Coral Way Elementary School in Miami. Directed by both U.S. and Cuban educators, the program began in the first through third grades. U.S. 
and Cuban students received a half day of English and a half day of Spanish instruction; at lunch time and recess and during music and art classes the groups were mixed together. Within three years the district was able to report benefits for both groups of students, who were now not only bilingual but also bicultural. This was no accident: the goal of the Coral Way initiative was to promote exactly this level of fluency. The Civil Rights Act of 1964 did not address bilingual education directly, but it opened an important door. Title VI of the Act specifically prohibits discrimination on the basis of race, color, or national origin in any programs or activities that receive federal financial assistance. What this means, among other aspects, is that school districts that receive federal aid are required to ensure that minority students are getting the same access to programs as non-minorities. This minority group includes language minority (LM) students, defined as students who live in a home in which a language other than English is spoken. (Although some LM students are fluent in English, many are classified as LEP.) Title VI's critical role in bilingualism would be made clear a decade later in the Lau v. Nichols case. The Elementary and Secondary Education Act of 1968 was another important step for bilingual education. In particular, Title VII of that act, known as the Bilingual Education Act, established federal policy for bilingual education. Citing its recognition of "the special educational needs of the large numbers children of limited English-speaking ability in the United States," the Act stipulated that the federal government would provide financial assistance for innovative bilingual programs. Funding would be provided for the development of such programs and for implementation, staffing and staff training, and long-term program maintenance. Title VII has been amended several times since its establishment, and it was reauthorized in 1994 as part of the Improving America's Schools Act. The basic goal has remained the same: access to bilingual programs for children of limited means. Probably the most important legal event for bilingual education was the Lau v. Nichols case, which was brought against the San Francisco Unified School District by the parents of nearly 1,800 Chinese students. It began as a discrimination case in 1970 when a poverty lawyer decided to represent a Chinese student who was failing in school because he could not understand the lessons and was given no special assistance. The school district countered that its policies were not discriminatory because it offered the same instruction to all students regardless of national origin. The lack of English proficiency was not the district's fault. Lower courts ruled in favor of the San Francisco schools, but in 1974 the U.S. Supreme Court ruled unanimously in favor of the plaintiffs. In his opinion, Justice William O. Douglas stated simply that "there is no equality of treatment merely by providing students with the same facilities, textbooks, teachers, and curriculum; for students who do not understand English are effectively foreclosed from any meaningful education." The Court cited Title VI of the Civil Rights Act, noting that the students in question fall into the protected category established therein. What Lau v. Nichols did not do was establish a specific bilingual policy. 
Individual school districts were responsible for taking "affirmative steps" toward reaching the goal of providing equal educational opportunities for all students. In the 1960s there were no state bilingual programs; many states actually had English-only instruction laws on their books. After the Civil Rights Act and the Bilingual Education Act, states began to take more initiative. In 1971, Massachusetts became the first state to establish a bilingual mandate. Under this mandate, any school that had 20 or more students of the same language background was required to implement some sort of bilingual program. A decade later, 11 more states had passed bilingual education laws, and an additional 19 offered some sort of legislative efforts in that direction. Today, bilingual or ESL education is offered in some form by every state. Not surprisingly, those states with the highest concentration of immigrants (New York, California, Texas, Florida) tend to have the most comprehensive programs. In fact, according to the most recent data from the National Clearinghouse for Bilingual Education (NCBE), 18 of the 20 urban school districts with the highest LEP enrollment are in one of these four states. Some states fund all bilingual education programs; others fund only bilingual or only ESL programs. It should be noted that bilingual needs can differ widely from state to state or district to district. According to the U.S. Department of Education, Spanish-speaking students make up nearly three-quarters of all LEP students in the United States. But in a district in which the predominant foreign language is Chinese, Vietnamese, or Hindi, the needs would of course be geared toward those languages. Local schools can create effective bilingual programs based on their specific needs. At the William Barton Rogers School in Boston, for example, a transitional program for middle-school LEP students who speak Vietnamese has met with success; likewise, a program for elementary school students in the Madawaska School District in Maine has been successful with French-speaking students. Because each state's needs are different, and because those needs are subject to change, the best way to get comprehensive and up-to-date information on each state's initiatives is to contact individual state education departments (see below). Obtaining information about bilingual grants, programs, and other initiatives is much easier today than it was in the past thanks to the Internet. Federal, state, and local government agencies offer a surprising variety of information on their web sites. Those who do not own a computer can access these sites at any local public library. Following is a sampling of what is available. The U.S. Department of Education's Office of Bilingual Education and Minority Language Affairs (OBEMLA) is in charge of awarding Title VII grants to both state and local education agencies. There are 12 types of discretionary grants, which cover training, development, implementation, school reform programs, and foreign language instruction. These grants are awarded only to "education-related organizations." Individuals are not eligible for Title VII grants. 
Those interested in applying for a Title VII grant can obtain the necessary information by visiting OBEMLA's web site (http://www.ed.gov/offices/OBEMLA). A good beginning resource for anyone who wishes to find out about programs, grants, and other information on bilingual education and bilingual initiatives is the National Clearinghouse for Bilingual Education (NCBE). Funded by OBEMLA, this organization collects and analyzes information and also provides links to other organizations. The NCBE web site (http://www.ncbe.gwu.edu) is a comprehensive starting point. Each state's Department of Education provides information on its statewide and local bilingual initiatives; the easiest way to find this information is to visit individual state education department web sites. Also, large cities such as New York, Miami, Houston, Los Angeles, and San Francisco provide information on their web sites about their comprehensive bilingual programs.

Further reading:
Bilingual Education: A Sourcebook. Alba M. Ambert and Sarah E. Melendez, Garland Publishing, 1985.
Bilingual Education: History, Politics, Theory, and Practice. Third Edition. James Crawford, Bilingual Educational Services, Inc., 1995.
Bilingual Education: Issues and Strategies. Amado M. Padilla, Halford M. Fairchild, and Concepción M. Valadez, editors, Sage Publications, 1990.
Learning in Two Languages: From Conflict to Consensus in the Reorganization of Schools. Gary Imhoff, editor, Transaction Publishers, 1990.

Organization contacts:
4646 40th Street, NW, Washington, DC 20016 USA. Phone: (202) 362-0700. Fax: (202) 362-3740. Primary Contact: Donna Christian, President
1220 L Street, NW, Suite 605, Washington, DC 20005 USA. Phone: (202) 898-1829. Fax: (202) 789-2866. Primary Contact: Delia Pompa, Executive Director
The George Washington University, Center for the Study of Language and Education, 2121 K Street, Suite 260, Washington, DC 20037 USA. Phone: (202) 467-0867. Fax: (800) 531-9347. Primary Contact: Minerva Gorena, Director
1201 16th Street, NW, Washington, DC 20036 USA. Phone: (202) 833-4000. Fax: (202) 822-7170. Primary Contact: Robert F. Chase, President
3000 Connecticut Avenue, NW, Suite 438, Washington, DC 20008 USA. Phone: (202) 483-0700. Fax: (202) 483-5233. Primary Contact: Elizabeth Pathy Salett, President
700 South Washington Street, Suite 200, Alexandria, VA 22314 USA. Phone: (703) 836-0774. Fax: (703) 836-7864. Primary Contact: Charles Amorosino, Executive Director
400 Maryland Avenue, SW, Washington, DC 20202 USA. Phone: (202) 205-5463. Fax: (202) 205-8737. Primary Contact: Art Love, Acting Director
https://www.encyclopedia.com/social-sciences-and-law/education/education-terms-and-concepts/bilingualism
When people travel to foreign countries, they must change their money into foreign currencies. The same is true when goods are imported. For example, when Americans import Toyotas, Volkswagens, champagne, or coffee, the dollars paid for these goods must be exchanged for yen, Deutsche marks, francs, or pesos. In April 1983 a United States dollar was valued in the foreign exchange markets at 238 Japanese yen, 2.4 West German Deutsche marks, 7.3 French francs, 151 Mexican pesos, and 0.65 British pound. The other way around, a British pound could be exchanged for $1.58, the peso for less than a cent, the franc for 13 cents, the Deutsche mark for 41 cents, and the yen for less than half a cent. A foreign exchange rate is a kind of price—the price of one country's currency in terms of another's. Like all prices, exchange rates rise and fall. If Americans buy more from Japan than the Japanese buy from the United States, the value of the yen tends to rise in terms of the dollar. If over the years one country's economy grows faster than another's so that its citizens become relatively more productive, its currency will rise in terms of the other. A United States citizen had to exchange five dollars for a British pound in 1936 and only $1.58 in April 1983. The difference reflected the more rapid economic growth of the United States. Balance of Payments Just as a business keeps a balance sheet of its income and expenditures, so most countries keep track of their currency flows through a balance of payments account. The United States account shows the payments made to foreigners for goods and services purchased by Americans. It also shows the payments made by foreigners for American items. The balance of payments has two main parts: the current account and the capital account. The current account covers all payments made for goods and services purchased during the period covered plus transfers of money. The bulk of these are for imports and exports of merchandise. The difference between the values of imports and exports of merchandise is called the balance of trade. A negative balance of trade means that the country is importing more goods than it exports. But often a negative balance of trade will be more than offset by net receipts from investments abroad, sales of services, and other “invisibles.” This makes it possible to have a positive balance of payments despite a negative balance of trade. For example, if an American company sells insurance to foreigners, or transports foreign goods, the payments will be counted as exports of services. In the first quarter of 1982, the United States balance of trade was a negative 5.9 billion dollars, but net receipts from services plus income from investments amounted to more than 9 billion dollars. Even after subtracting gifts and transfers of money to foreigners, there was a positive balance on the current account of about 1.1 billion dollars. The other main part of the balance of payments is the capital account. This is a record of investments and loans that flow from one country to another. If a United States company buys a factory in France, dollars must be converted into francs. This is called an export of capital, but it is treated in the balance of payments as a negative item. On the other hand, if a French investor buys United States bonds, francs will be exchanged for dollars—a positive item in the balance of payments. In the first quarter of 1982, the United States balance on the capital account was a negative 6.1 billion dollars. 
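The arithmetic behind these figures can be checked in a few lines. The sketch below, in Python, uses only the April 1983 rates and first-quarter 1982 figures quoted above; the variable names are mine, and the gifts-and-transfers figure is an assumed value chosen to reproduce the stated result, since the text gives only the final balance.

```python
# Rough check of the exchange-rate and balance-of-payments arithmetic quoted above.
# April 1983 rates: units of foreign currency per one U.S. dollar.
rates_per_dollar = {
    "Japanese yen": 238,
    "Deutsche mark": 2.4,
    "French franc": 7.3,
    "Mexican peso": 151,
    "British pound": 0.65,
}

def convert(amount_usd, currency):
    """Convert a dollar amount into the given currency at the April 1983 rate."""
    return amount_usd * rates_per_dollar[currency]

def dollar_price(currency):
    """Price of one unit of the foreign currency in dollars (the inverse rate)."""
    return 1 / rates_per_dollar[currency]

print(convert(100, "Japanese yen"))              # 23800 yen for 100 dollars
print(round(dollar_price("British pound"), 2))   # about 1.54 dollars per pound at the rounded 0.65 rate

# First-quarter 1982 current account, in billions of dollars, per the text:
trade_balance = -5.9            # merchandise exports minus imports (the balance of trade)
services_and_investment = 9.0   # net receipts from services plus investment income ("more than 9 billion")
gifts_and_transfers = -2.0      # assumed value; the text states only that the final balance is about +1.1

current_account = trade_balance + services_and_investment + gifts_and_transfers
print(round(current_account, 1))                 # roughly 1.1 billion dollars, matching the text
```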
The Gold Standard

Before World War I the currencies of most countries were based on gold. That is, even though they used paper money and silver coins, the governments of these countries stood ready to exchange their currency for gold at specified rates. When a country's imports exceeded its exports, it paid for the extra imports with shipments of gold. When its imports were less than its exports, it received gold from other countries. Gold flowing into a country increased the money supply and caused prices to rise, while gold flowing out of a country had the opposite effect. These changes in prices tended to restore the balance of trade, since a country with rising prices found it more difficult to sell goods abroad while a country with falling prices had an advantage in international competition.

The Bretton Woods System

The international gold standard was abandoned during World War I. Efforts were made to revive it after the war but without much success. Toward the end of World War II, a new international monetary system was established at the Bretton Woods, N.H., conference in 1944. The system was based on fixed exchange rates as under the gold standard but differed in that countries faced with a trade deficit could borrow from the International Monetary Fund (IMF) instead of relying on their gold reserves. The IMF held reserves of gold and currencies and lent them to these countries. The Bretton Woods system worked fairly well in the late 1940s and the 1950s. During those years the United States dollar was strong. The United States Treasury had most of the world's gold and was prepared to pay foreigners 35 dollars per ounce for additional gold. Dollars became a sort of international currency because they were readily accepted in payment for goods and services throughout the world. The era of the dollar ended, however, in the 1970s. The economies of other countries had grown stronger, while inflation had made the dollar less desirable abroad. In August 1971, because of balance of payments difficulties, the United States stopped exchanging dollars for gold. This was the end of the fixed exchange rate system. After 1973 a flexible system developed. Countries allowed their exchange rates to fluctuate in response to changing conditions. The Swiss franc rose against the dollar—from 21 cents in 1972 to 60 cents in 1979—and declined to 49 cents in 1982. Over the same period the British pound fell against the dollar—from $2.50 to $1.75. In the 1970s the dollar fell against strong currencies such as the Swiss franc, the Japanese yen, and the West German mark. Exchange rates, however, are not completely free to float; governments try to prevent them from fluctuating widely because wide fluctuations discourage international trade. For example, if the dollar depreciates by more than 10 percent in a short period, the United States government is likely to draw upon its reserves and buy dollars in the foreign exchange market to raise their price. Governments can increase their reserves by drawing upon the International Monetary Fund and by borrowing from other governments.

The gold standard was a commitment by participating countries to fix the prices of their domestic currencies in terms of a specified amount of gold. National money and other forms of money (bank deposits and notes) were freely converted into gold at the fixed price. England adopted a de facto gold standard in 1717 after the master of the mint, Sir Isaac Newton, overvalued the guinea in terms of silver, and formally adopted the gold standard in 1819.
The United States, though formally on a bimetallic (gold and silver) standard, switched to gold de facto in 1834 and de jure in 1900 when Congress passed the Gold Standard Act. In 1834, the United States fixed the price of gold at $20.67 per ounce, where it remained until 1933. Other major countries joined the gold standard in the 1870s. The period from 1880 to 1914 is known as the classical gold standard. During that time, the majority of countries adhered (in varying degrees) to gold. It was also a period of unprecedented economic growth with relatively free trade in goods, labor, and capital. The gold standard broke down during World War I, as major belligerents resorted to inflationary finance, and was briefly reinstated from 1925 to 1931 as the Gold Exchange Standard. Under this standard, countries could hold gold or dollars or pounds as reserves, except for the United States and the United Kingdom, which held reserves only in gold. This version broke down in 1931 following Britain's departure from gold in the face of massive gold and capital outflows. In 1933, President Franklin D. Roosevelt nationalized gold owned by private citizens and abrogated contracts in which payment was specified in gold. Between 1946 and 1971, countries operated under the Bretton Woods system. Under this further modification of the gold standard, most countries settled their international balances in U.S. dollars, but the U.S. government promised to redeem other central banks' holdings of dollars for gold at a fixed rate of thirty-five dollars per ounce. Persistent U.S. balance-of-payments deficits steadily reduced U.S. gold reserves, however, reducing confidence in the ability of the United States to redeem its currency in gold. Finally, on August 15, 1971, President Richard M. Nixon announced that the United States would no longer redeem currency for gold. This was the final step in abandoning the gold standard. Widespread dissatisfaction with high inflation in the late 1970s and early 1980s brought renewed interest in the gold standard. Although that interest is not strong today, it seems to strengthen every time inflation moves much above 5 percent. This makes sense: whatever other problems there were with the gold standard, persistent inflation was not one of them. Between 1880 and 1914, the period when the United States was on the "classical gold standard," inflation averaged only 0.1 percent per year.

How the Gold Standard Worked

The gold standard was a domestic standard regulating the quantity and growth rate of a country's money supply. Because new production of gold would add only a small fraction to the accumulated stock, and because the authorities guaranteed free convertibility of gold into nongold money, the gold standard ensured that the money supply, and hence the price level, would not vary much. But periodic surges in the world's gold stock, such as the gold discoveries in Australia and California around 1850, caused price levels to be very unstable in the short run. The gold standard was also an international standard determining the value of a country's currency in terms of other countries' currencies. Because adherents to the standard maintained a fixed price for gold, rates of exchange between currencies tied to gold were necessarily fixed. For example, the United States fixed the price of gold at $20.67 per ounce, and Britain fixed the price at £3 17s. 10½ per ounce. Therefore, the exchange rate between dollars and pounds—the "par exchange rate"—necessarily equaled $4.867 per pound.
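The par exchange rate quoted here follows directly from the two mint prices. One detail not stated in the text, and assumed in the sketch below, is that the British price of £3 17s. 10½d. applied to standard gold of 11/12 fineness, so it must be converted to a pure-gold price before dividing.

```python
# Par exchange rate implied by the two countries' fixed gold prices.
# U.S. price: $20.67 per fine (pure) troy ounce.  British mint price:
# 3 pounds 17 shillings 10.5 pence per ounce of standard gold, assumed here
# to be 11/12 fine (a detail not stated in the text above).

usd_per_fine_ounce = 20.67

# Pre-decimal British money: 20 shillings to the pound, 12 pence to the shilling.
gbp_per_standard_ounce = 3 + 17 / 20 + 10.5 / 240      # 3.89375 pounds
gbp_per_fine_ounce = gbp_per_standard_ounce * 12 / 11  # adjust 11/12-fine gold to pure gold

par_rate = usd_per_fine_ounce / gbp_per_fine_ounce
print(round(par_rate, 3))   # about 4.866 dollars per pound, the "par exchange rate" quoted above
```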
Because exchange rates were fixed, the gold standard caused price levels around the world to move together. This comovement occurred mainly through an automatic balance-of-payments adjustment process called the price-specie-flow mechanism. Here is how the mechanism worked. Suppose that a technological innovation brought about faster real economic growth in the United States. Because the supply of money (gold) essentially was fixed in the short run, U.S. prices fell. Prices of U.S. exports then fell relative to the prices of imports. This caused the British to demand more U.S. exports and Americans to demand fewer imports. A U.S. balance-of-payments surplus was created, causing gold (specie) to flow from the United Kingdom to the United States. The gold inflow increased the U.S. money supply, reversing the initial fall in prices. In the United Kingdom, the gold outflow reduced the money supply and, hence, lowered the price level. The net result was balanced prices among countries. The fixed exchange rate also caused both monetary and nonmonetary (real) shocks to be transmitted via flows of gold and capital between countries. Therefore, a shock in one country affected the domestic money supply, expenditure, price level, and real income in another country. The California gold discovery in 1848 is an example of a monetary shock. The newly produced gold increased the U.S. money supply, which then raised domestic expenditures, nominal income, and, ultimately, the price level. The rise in the domestic price level made U.S. exports more expensive, causing a deficit in the U.S. balance of payments. For America's trading partners, the same forces necessarily produced a balance-of-trade surplus. The U.S. trade deficit was financed by a gold (specie) outflow to its trading partners, reducing the monetary gold stock in the United States. In the trading partners, the money supply increased, raising domestic expenditures, nominal incomes, and, ultimately, the price level. Depending on the relative share of the U.S. monetary gold stock in the world total, world prices and income rose. Although the initial effect of the gold discovery was to increase real output (because wages and prices did not immediately increase), eventually the full effect was on the price level. For the gold standard to work fully, central banks, where they existed, were supposed to play by the "rules of the game." In other words, they were supposed to raise their discount rates—the interest rate at which the central bank lends money to member banks—to speed a gold inflow, and to lower their discount rates to facilitate a gold outflow. Thus, if a country was running a balance-of-payments deficit, the rules of the game required it to allow a gold outflow until the ratio of its price level to that of its principal trading partners was restored to the par exchange rate. The exemplar of central bank behavior was the Bank of England, which played by the rules over much of the period between 1870 and 1914. Whenever Great Britain faced a balance-of-payments deficit and the Bank of England saw its gold reserves declining, it raised its "bank rate" (discount rate). By causing other interest rates in the United Kingdom to rise as well, the rise in the bank rate was supposed to cause the holdings of inventories and other investment expenditures to decrease. These reductions would then cause a reduction in overall domestic spending and a fall in the price level.
At the same time, the rise in the bank rate would stem any short-term capital outflow and attract short-term funds from abroad. Most other countries on the gold standard—notably France and Belgium—did not follow the rules of the game. They never allowed interest rates to rise enough to decrease the domestic price level. Also, many countries frequently broke the rules by "sterilization"—shielding the domestic money supply from external disequilibrium by buying or selling domestic securities. If, for example, France's central bank wished to prevent an inflow of gold from increasing the nation's money supply, it would sell securities for gold, thus reducing the amount of gold in circulation. Yet the central bankers' breaches of the rules must be put into perspective. Although exchange rates in principal countries frequently deviated from par, governments rarely debased their currencies or otherwise manipulated the gold standard to support domestic economic activity. Suspension of convertibility in England (1797-1821, 1914-1925) and the United States (1862-1879) did occur in wartime emergencies. But, as promised, convertibility at the original parity was resumed after the emergency passed. These resumptions fortified the credibility of the gold standard rule.

Performance of the Gold Standard

As mentioned, the great virtue of the gold standard was that it assured long-term price stability. Compare the aforementioned average annual inflation rate of 0.1 percent between 1880 and 1914 with the average of 4.1 percent between 1946 and 2003. (The reason for excluding the period from 1914 to 1946 is that it was neither a period of the classical gold standard nor a period during which governments understood how to manage monetary policy.) But because economies under the gold standard were so vulnerable to real and monetary shocks, prices were highly unstable in the short run. A measure of short-term price instability is the coefficient of variation—the ratio of the standard deviation of annual percentage changes in the price level to the average annual percentage change. The higher the coefficient of variation, the greater the short-term instability. For the United States between 1879 and 1913, the coefficient was 17.0, which is quite high. Between 1946 and 1990 it was only 0.88. In the most volatile decade of the gold standard, 1894-1904, the mean inflation rate was 0.36 and the standard deviation was 2.1, which gives a coefficient of variation of 5.8; in the most volatile decade of the more recent period, 1946-1956, the mean inflation rate was 4.0, the standard deviation was 5.7, and the coefficient of variation was 1.42. Moreover, because the gold standard gives government very little discretion to use monetary policy, economies on the gold standard are less able to avoid or offset either monetary or real shocks. Real output, therefore, is more variable under the gold standard. The coefficient of variation for real output was 3.5 between 1879 and 1913, and only 0.4 between 1946 and 2003. Not coincidentally, since the government could not have discretion over monetary policy, unemployment was higher during the gold standard years. It averaged 6.8 percent in the United States between 1879 and 1913, and 5.9 percent between 1946 and 2003. Finally, any consideration of the pros and cons of the gold standard must include a large negative: the resource cost of producing gold. Milton Friedman estimated the cost of maintaining a full gold coin standard for the United States in 1960 to be more than 2.5 percent of GNP.
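The coefficient of variation used in these comparisons is straightforward to reproduce. A minimal sketch, with a made-up series of annual inflation rates for illustration and a check of the decade figures quoted above:

```python
import statistics

def coefficient_of_variation(annual_pct_changes):
    """Standard deviation of annual percentage changes divided by their mean."""
    return statistics.pstdev(annual_pct_changes) / statistics.mean(annual_pct_changes)

# Checking the decade figures quoted above from their stated means and standard deviations:
print(round(2.1 / 0.36, 1))   # 5.8 for 1894-1904, as stated
print(round(5.7 / 4.0, 3))    # 1.425, i.e. the 1.42 quoted for 1946-1956

# Illustrative (made-up) series of annual inflation rates, in percent:
sample = [0.5, -1.2, 2.0, 0.3, -0.8, 1.1, 0.4]
print(round(coefficient_of_variation(sample), 2))   # coefficient for the made-up series (about 3)
```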
In 2005, this cost would have been about $300 billion. Although the last vestiges of the gold standard disappeared in 1971, its appeal is still strong. Those who oppose giving discretionary powers to the central bank are attracted by the simplicity of its basic rule. Others view it as an effective anchor for the world price level. Still others look back longingly to the fixity of exchange rates. Despite its appeal, however, many of the conditions that made the gold standard so successful vanished in 1914. In particular, the importance that governments attach to full employment means that they are unlikely to make maintaining the gold standard link and its corollary, long-run price stability, the primary goal of economic policy. The Bretton Woods system It was clear during the Second World War that a new international system would be needed to replace the Gold Standard after the war ended. The design for it was drawn up at the Bretton Woods Conference in the US in 1944. US political and economic dominance necessitated the dollar being at the centre of the system. After the chaos of the inter-war period there was a desire for stability, with fixed exchange rates seen as essential for trade, but also for more flexibility than the traditional Gold Standard had provided. The system drawn up fixed the dollar to gold at the existing parity of US$35 per ounce, while all other currencies had fixed, but adjustable, exchange rates to the dollar. Unlike the classical Gold Standard, capital controls were permitted to enable governments to stimulate their economies without suffering from financial market penalties. During the Bretton Woods era the world economy grew rapidly. Keynesian economic policies enabled governments to dampen economic fluctuations, and recessions were generally minor. However strains started to show in the 1960s. Persistent, albeit low-level, global inflation made the price of gold too low in real terms. A chronic US trade deficit drained US gold reserves, but there was considerable resistance to the idea of devaluing the dollar against gold; in any event this would have required agreement among surplus countries to raise their exchange rates against the dollar to bring about the needed adjustment. Meanwhile, the pace of economic growth meant that the level of international reserves generally became inadequate; the invention of the ‘Special Drawing Right’ (SDR) failed to solve this problem. While capital controls still remained, they were considerably weaker by the end of the 1960s than in the early 1950s, raising prospects of capital flight from, or speculation against, currencies that were perceived as weak. In 1961 the London Gold Pool was formed. Eight nations pooled their gold reserves to defend the US$35 per ounce peg and prevent the price of gold moving upwards. This worked for a while, but strains started to emerge. In March 1968, a two-tier gold market was introduced with a freely floating private market, and official transactions at the fixed parity. The two-tier system was inherently fragile. The problem of the US deficit remained and intensified. With speculation against the dollar intensifying, other central banks became increasingly reluctant to accept dollars in settlement; the situation became untenable. Finally in August 1971, President Nixon announced that the US would end on-demand convertibility of the dollar into gold for the central banks of other nations. The Bretton Woods system collapsed and gold traded freely on the world’s markets. 
The Bretton Woods System

Nations attempted to revive the gold standard following World War I, but it collapsed entirely during the Great Depression of the 1930s. Some economists said adherence to the gold standard had prevented monetary authorities from expanding the money supply rapidly enough to revive economic activity. In any event, representatives of most of the world's leading nations met at Bretton Woods, New Hampshire, in 1944 to create a new international monetary system. Because the United States at the time accounted for over half of the world's manufacturing capacity and held most of the world's gold, the leaders decided to tie world currencies to the dollar, which, in turn, they agreed should be convertible into gold at $35 per ounce. Under the Bretton Woods system, central banks of countries other than the United States were given the task of maintaining fixed exchange rates between their currencies and the dollar. They did this by intervening in foreign exchange markets. If a country's currency was too high relative to the dollar, its central bank would sell its currency in exchange for dollars, driving down the value of its currency. Conversely, if the value of a country's money was too low, the country would buy its own currency, thereby driving up the price. The Bretton Woods system lasted until 1971. By that time, inflation in the United States and a growing American trade deficit were undermining the value of the dollar. Americans urged Germany and Japan, both of which had favorable payments balances, to appreciate their currencies. But those nations were reluctant to take that step, since raising the value of their currencies would increase prices for their goods and hurt their exports. Finally, the United States abandoned the fixed value of the dollar and allowed it to "float" -- that is, to fluctuate against other currencies. The dollar promptly fell. World leaders sought to revive the Bretton Woods system with the so-called Smithsonian Agreement in 1971, but the effort failed. By 1973, the United States and other nations agreed to allow exchange rates to float. Economists call the resulting system a "managed float regime," meaning that even though exchange rates for most currencies float, central banks still intervene to prevent sharp changes. As in 1971, countries with large trade surpluses often sell their own currencies in an effort to prevent them from appreciating (and thereby hurting exports). By the same token, countries with large deficits often buy their own currencies in order to prevent depreciation, which raises domestic prices. But there are limits to what can be accomplished through intervention, especially for countries with large trade deficits. Eventually, a country that intervenes to support its currency may deplete its international reserves, making it unable to continue buttressing the currency and potentially leaving it unable to meet its international obligations.
http://docshare.tips/foreign-exchange_574b5e3fb6d87ffa448b4f0d.html?utm_source=docshare&utm_medium=sidebar&utm_campaign=5bbe364308bbc5bf627d2505
PS1.A: Structure and Properties of Matter Students diagram and label an object's materials and its properties and how these support the purpose of the product. 2-PS1-3. Make observations to construct an evidence-based account of how an object made of a small set of pieces can be disassembled and made into a new object Students gather data to discover how a few materials can be used to create a variety of objects. - Developing and Using Models (SP 2) Students create diagrams to identify the objects' materials and the properties of the materials that support the object's purpose. - Using Mathematics and Computational Thinking (SP 5) Students collaborate to create a class tally chart of materials used in their products to discover and develop an evidenced based conclusion about which material are used in most objects. Students use tally marks to create a bar graph to analyze and interpret their data. - Constructing Explanations and Designing Solutions (Sp 6) Students review the materials' properties to construct an explanation why certain materials can be used in a variety of objects. Cross-cutting Concepts - Appendix G - Structure and Function (XC 6) Students learn how the materials' properties supports the function of the object. Identify 12 or more objects for pairs of students to diagram. Possible objects to include: markers, chair, ball, book, shoe, clothing, scissors, musical instrument, desk, stapler, tape dispenser, notebook, pencil box, crayon, water bottle, pen, backpack, pencil sharpener I have selected objects that do not have internal pieces that students could not easily access in one lesson. Decide how many students will work on a poster. My students will work in pairs. Construction paper for the poster; one poster/team Gather the objects students will use Have extra pens in the designated colors available for teams to use I post a question at the beginning of each science lesson. This provides an opportunity for students to consider today's topic before the lesson has officially begun. I have established this routine with the kiddos to keep transition time short and effective and redirect student's attention back to content while allowing time for focused peer interaction. Question for the Day: Look around our classroom, what materials do you think are used the most to make the products that you see? I show the definition for product and review the term before students turn and talk. Product - something that is made or grown to be used and/or sold. Students read the question and discuss their ideas with their should partner. After partners have shared, they turn to face me. I call volunteers to share their ideas, and write their answers on the board. "This is your hypothesis to this question. What do scientists do to find out if their hypothesis is correct? Right! We test and make observations!" "Please return to your seats and I will explain how we will gather data to help us prove or disprove our hypothesis." "I have made a list of products that we use everyday. You will work with a partner to 'decompose a product' to identify its materials. After your product diagrams are complete, we will use your diagram data to create a bar graph to learn which material is used most often." Providing my students with an overview of the lesson, helps them make connections with their learning experience and goals of the lesson. I show my students the pencil poster we made in a previous lesson on materials. I explain that they will be making a similar poster for a product they choose. 
I teach 2 classes. One of my classes did the 'Ball Point Pen' Lesson which would also help scaffold the experience for this lesson. "Let's review the parts of the poster so you will know what to include on your poster: - There is a drawing of the product - Materials used for the product are labeled - Properties of the materials are labeled in another color - There is an explanation how the materials' properties help the product do its job - Properties and materials are written in different colors As the parts of the poster are noted, I project a checklist for students to reference as they work on their product diagram. With the students, the material and property colors that they will use on their poster are selected. I make a key to remind the students of their color choices for material and property labels. Students write with a pencil to explain how the property helps the the material support the purpose of the product. Students select the product that they want to 'decompose'. They will work in pairs to create their product diagram. I direct students to sit with their partner at the desks. Then one student from each pair picks up the product that they will diagram. I have selected items that are familiar to the students and do not have lots of internal parts which students cannot observe or identify. After students have had a couple of minutes to discuss and handle their product, I signal their attention. With student input we develop a word bank for materials and properties. The word bank is posted on the board as a reference as students begin to work on their posters. I will use the 'material word bank column' later in the lesson when students tally the materials used in their product. I pass out the poster paper and check that each group has the designated colored pens to label materials and properties. I check in with the groups and help them identify the different materials and properties and how the parts help the system of the product. When teams finish their posters, I direct them to tally mark the materials used in their product on the material column of the word bank. I signal students to meet me on the rug to review the tally marks. "Let's total the tally marks to discover which materials are used in most of the products we looked at." After the tally marks are counted, the class sorts the materials from greatest to least occurring materials. I use the observation results to start a discussion on the application of the top used material. "What can we say about our products?" Right, most products have --------- material. What does this tell us about this material? Yes, it can be used for lots of different products. Why do you think it used in most products? Think about the properties of this material that may make it useful for the other products." I list their ideas on the board, with the students we discuss the merits of each idea. I circle the ones that can be referenced back to the material properties. I save the data and student ideas. Later this week students will use this information to create a bar graph and conclusion. If time I lead a discussion on the resources used to produce this material and the impact it has on the Earth. I congratulate the students on their participation, remind them to write their name on their poster and place them on the back table. I will hang their work in the classroom.
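For teachers who later chart the class tally electronically, the step from tally counts to a bar graph can be sketched in a few lines; the material counts below are invented for illustration and are not data from this class.

```python
import matplotlib.pyplot as plt

# Hypothetical class tally of materials found in the products (made-up counts).
material_tally = {"plastic": 14, "metal": 9, "wood": 6, "fabric": 4, "rubber": 3}

# Sort from greatest to least, as the class does after counting the tally marks.
materials = sorted(material_tally, key=material_tally.get, reverse=True)
counts = [material_tally[m] for m in materials]

plt.bar(materials, counts)
plt.xlabel("Material")
plt.ylabel("Number of products using the material")
plt.title("Materials used in our classroom products")
plt.show()
```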
https://betterlesson.com/lesson/639712/products-and-properties
Most people have never seen an accurate model of the solar system. One reason for this is it cannot be represented on a single piece of paper or on a screen. The planets are extremely small when compared to the distance between them and for that reason is it not easily modeled. The purpose of this lesson is to change the mental model that most students have about the size of the planets and our place in the solar system. This is done using actual diameters and distances from the sun and students will apply mathematical calculations to create an accurate model of the sun and planets in our solar system. This activity involves CCSS Math Practice 4: Model with mathematics and NGSS Science Practice 2: Developing and using models, Science Practice 4: Analyzing and interpreting data and Science Practice 5: Using mathematics and computational thinking. This lesson builds off the previous lesson, Going Full Circle, only the objects in orbit are the planets, not human-made satellites. This involves application of NGSS performance standard HS-ESS1-4: Use mathematical or computational representations to predict the motion of orbiting objects in the solar system. To do this activity, I have a large orange beach ball (38 cm diameter) that represents the sun. I also have a collection of smaller spheres such as a golf ball, a fooze ball, marbles of various sizes, pins with a ball on the end and a small stopper that has a grain of sand sized dot on it. These represent the planets of our solar system relative to the beach ball. Before the lesson, I measure each sphere to make sure they accurately represent all the planets relative to the sun model. For my own reference, I have the Solar System Calculator MS-Excel document which has the scaled sizes of the planet models relative to the sun model. The cell in yellow is where I put the diameter of the ball which represents the sun. The diameters and distances (in the blue cells) for each planet are then calculated automatically. The other astronomical data in the table I leave, should I want to someday expand this lesson. Before class begins, I have the various spheres at the front of the class, ordered from small to large, and the first slide of the Scale Power Point displayed on the whiteboard. I tell students that today they will make an accurate scale model of our solar system using the various spheres in the front of the room, with the biggest one representing our sun. If this is an honors class, I know that most of the students are able to determine how to make a scale and perform the conversions without much guidance. However, if this is a college prep class, the students need more structure and guidance. For that class, I go through the scaling exercise on the power point. The students take notes and perform the calculations in their notes as I go through the presentation. I ask the students if they have ever seen a model of our solar system. I ask what it looked like and have them describe it. Usually it is like the solar system pictures on the first slide on the Scale Power Point. Then, I hand out the Solar System to Scale worksheet. Students complete their own sheet, but they are allowed collaborate. Then I call up groups of 5 to the front of the room where I have the model spheres in size order, each with a letter label (e.g. the smallest sphere is labeled A, the second smallest B and so on). I have the students predict which sphere represents each of the 8 planets by putting the letter on the planet name on the worksheet. 
I tell the students that not all spheres have to be used and some spheres can be used more than once. In general, students guess that the planets are much larger than they really are. I have a soccer ball which most students believe is Jupiter. Students work through the worksheet as I circulate around the class. The first thing students must do is determine the scale. The diameter of the actual sun is 1,390,000 km. The size of our scaled-down sun (beach ball) is 38 cm. So the scale is 38 cm/1,390,000 km. If students multiply the rest of the distances by this number, they have the scaled-down diameters and distances for each of the planets. The orbital distances are scaled down using the same scale (38 cm/1,390,000 km). While I circulate, I use Solar System to Scale-Solutions to make sure students have the correct scale and that their calculations are what is expected. If not, I help them correct their mistakes. If a student finishes early, they are to work on answering the rest of the questions on the back of the sheet. It takes the students no more than 15 minutes to complete the scaled distance calculations. At that point, we move on to create our scale model. After all students have completed the scaled diameter and distance calculations, it is time to build our model. I ask for volunteers for each of the bodies in our solar system, starting with our sun. As students volunteer, I hand them the correctly sized, scaled-down sphere and they stand in the front of the room. For the planets, when a student volunteers, I ask them what their calculated value was for the planet. They shout it out and I ask the rest of the class if they agree. Once our solar system volunteers are lined up in the front of the room, in the same order as the planets in our solar system, I instruct each planet to memorize the scaled distance of their planet from the sun. At that point we move to a larger space. Ideally, I take my class outside to the front of the school so that we can go all the way to Jupiter (which is 200 meters from the sun!). If the weather is bad, then we stay inside in the hallway, which can accommodate the scaled-down distance from the sun to Mars (59 meters). All students start at one end and Pace Out the Planets. Then each of the planets takes 1-meter steps out to their orbital distances, starting with Mercury, then Venus, Earth, Mars, and Jupiter. I don't let Saturn go, as that student would have to leave the high school grounds, and Uranus would take a student to the next town! After the students have seen our accurate scale model of the solar system, we return to reflect on what they learned. For students who want to share, I have them tell the class how their view of the solar system has changed. Usually, students comment about how amazing it is how far away and small the planets are relative to the sun and that the vast majority of our solar system is empty space. They put their reflection on the Solar System to Scale worksheet, which I collect as students exit.
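The scaling step the students carry out can also be written out as a short calculation. In the sketch below the 38 cm / 1,390,000 km scale from the lesson is applied to a few planets; the planet diameters and mean distances are approximate reference values I have supplied, not the numbers on the class worksheet.

```python
# Scale-model arithmetic for the solar system activity above.
# The scale comes from the 38 cm beach-ball sun standing in for the real sun
# (diameter 1,390,000 km).  Planet figures below are approximate reference
# values supplied for this sketch, not the numbers on the class worksheet.

SUN_DIAMETER_KM = 1_390_000
MODEL_SUN_CM = 38
SCALE = MODEL_SUN_CM / SUN_DIAMETER_KM   # centimetres of model per kilometre of reality

planets = {
    # name: (diameter in km, mean distance from the sun in km)
    "Mercury": (4_879, 57_900_000),
    "Earth":   (12_756, 149_600_000),
    "Mars":    (6_792, 227_900_000),
    "Jupiter": (142_984, 778_600_000),
}

for name, (diameter_km, distance_km) in planets.items():
    model_diameter_cm = diameter_km * SCALE
    model_distance_m = distance_km * SCALE / 100   # convert centimetres to metres
    print(f"{name}: {model_diameter_cm:.2f} cm across, {model_distance_m:.0f} m from the ball")

# Jupiter comes out at roughly 3.9 cm across and a bit over 200 m away,
# in line with the walk described above.
```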
https://betterlesson.com/lesson/637804/accurate-model-of-the-solar-system?from=consumer_breadcrumb_dropdown_lesson
In this lesson students will observe patterns when attempting to balance two objects of different masses on either side of the teeter-totter. There are an increasing number of middle school students who have not had the opportunity to experience physics in action on the playground. I drove around my hometown to look for a teetor-totter at the schools and parks. I found none. In many ways I think our physics / physical science teachers had an easier time explaining to us how things worked by activating our prior knowledge via experiences on the playground. Students without this prior knowledge will need to have experiences to help them understand abstract concepts. This simulation - Balancing Act - will help students see how passive objects exert forces and by observing patterns understand how proportional reasoning will help them predict how two objects of different masses can balance using mathematics. In this lesson students will observe how applied forces can change an objects motion. We are looking at rotational motion as the students make observations to find a pattern that can be used to predict how to balance two objects of different mass. (MS-PS2-2 Plan an investigation to provide evidence that the change in an object’s motion depends on the sum of the forces on the object and the mass of the object.) Disciplinary Core Ideas of Forces and Motion include: PS2.A The motion of an object is determined by the sum of the forces acting on it; if the total force on the object is not zero, its motion will change. The greater the mass of the object, the greater the force needed to achieve the same change in motion. For any given object, a larger force causes a larger change in motion. And All positions of objects and the directions of forces and motions must be described in an arbitrarily chosen reference frame and arbitrarily chosen units of size. In order to share information with other people, these choices must also be shared. The Cross Cutting Concept is Stability and Change. The simulation include options to view visual displays that will help students strengthen their claims supporting the CCSS ELA standard integrating visuals to clarify understanding and provide evidence to support student claims (SL.8.5 - Integrate multimedia and visual displays into presentations to clarify information, strengthen claims and evidence, and add interest.) Student engagement is supported as students use the Balancing Act simulation as a model to explain proportional reasoning as a tool to predict balance. Students are also developing perseverance as they extract evidence through inquiry to support their understanding of force and motion. Perseverance is being challenged as students use their data collection to inspire them to use the appropriate mathematical concepts to answer scientific questions. By changing variables to deepen their understanding, students are developing mastery as they use the simulation as an iterative process. (7.RP.A.3 - Use proportional relationships to solve multiple step ratio and percent problems). (SP2 - Using Models) (SP4 - Using Data) (SP5 - Using Mathematical Thinking) Students will discover that balance can be predicted mathematically by multiplying mass * distance on each side of the fulcrum. If the products are equal, the lever will balance. Students employ the use of two standards of mathematical practice in this lesson. With a model, students will be able to collect the data to support their mathematical findings. 
(MP4 - Model with mathematics) and (MP2 - Reason abstractly and quantitatively). Conducting investigations is inherent in all the PhET simulations, as they allow for the change of variables, letting students make changes in their investigation that lead to the discovery of answers to specific questions. Students in this simulation are changing variables in the simulation to understand the relationship between force and motion of objects. (SP3 - Planning and Conducting Investigations) Throughout their investigations, students are asked to collect observations and use the information collected to make conclusions, building upon their experiences to develop habits and skills leading towards independent explorations. The lesson asks students to collect observations in a table and use that information to state conclusions about their investigation. (SP8 - Collecting and Communicating Information) At the end of the lesson, students will replicate their virtual experience with a homemade first-class lever, pennies, and nickels. A complete materials list is available in the resource section.

Students in Action

Students take a few minutes to answer questions 1, 2 & 3 on today's lesson handout using Turn/Talk/Record. Working with their elbow partner, students will discuss the questions and record their answers. Recording answers provides a level of accountability for the discussion time. It is also building a habit of practice. Since I will soon call on groups to share out their answers, students are vigilant about recording their answers. When we work as a group, students can easily explain and demonstrate the answers to questions 1 & 2. It seems most all of the students have carried a heavy backpack or other load in one hand and felt themselves leaning in the direction of the load. When the load is balanced - two heavy backpacks, one in each hand - students demonstrate that they are able to walk upright, if somewhat slouched. Students are hesitant to proclaim that they can predict where the objects of different mass should be in order to balance. The lesson requires students to find objects of different masses that balance and record the distance from the fulcrum and the mass for each object. The data collected should be sufficient for students to begin to see a pattern. If students multiply the distance and force on the left, the product should match the product of the distance and force on the right. In this video, I share how I model using the simulation for students and the expectations for the type of data they are collecting. I also share an example of student data collection. As students work, I circulate around the classroom. I want to make sure that the data they are recording is correct so they can make an unencumbered observation to find the mathematical pattern that allows them to predict balance. When a student says they cannot find a pattern, I read their data out loud in such a way as to help them hear the pattern as I say it: "So, you placed 20 kg at 1 meter and 10 kg at 2 meters and the result was a balance. Look at the numbers on the left, 20 kg and 1 meter; they are equal to the 10 kg and 2 meters. What operation can be performed on these two sets of numbers to show they are equal?" I hand out the rulers, fulcrums, and pennies. Providing tools for exploration allows students an opportunity to test their virtual experiences with real-world objects.
It allows students to apply what they learned to a new situation, answering the question of whether the concept applies only in certain situations or can be applied to other situations as well. We use building toys to make our fulcrum - quick, easy, and fun! Students should balance the ruler on the fulcrum (first-class lever) before adding objects. I remind students this is our experimental control: does the ruler balance without additional mass? This time I ask students to apply the formula they used to balance the objects in the virtual lab to the pennies and the ruler. I ask students to balance various combinations with multiple pennies stacked one on top of another. Does the formula apply to this situation as it did in the virtual lab? Yes, indeed, we did find the formula applied to the student-made lever in the same way it worked in the virtual lab. Example: left side, 1 penny * 6 inches (distance from the fulcrum) = right side, 2 pennies * 3 inches (distance from the fulcrum). I hand out nickels and ask students to balance the teeter-totter using one nickel and one penny. We did not attempt to calculate the mass * distance for the right and left side of the balance to prove what we learned virtually. I am definitely adding that step the next time I use this lesson in class!
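The rule the students verify, that the lever balances when mass times distance is equal on both sides, can be captured in a short sketch. The coin masses below use the standard U.S. weights of 2.5 g for a penny and 5.0 g for a nickel, which the lesson itself does not state, and the helper names are mine.

```python
def is_balanced(left, right, tolerance=1e-9):
    """Each side is a list of (mass, distance-from-fulcrum) pairs.
    The lever balances when the sums of mass * distance (the torques) match."""
    torque = lambda side: sum(mass * distance for mass, distance in side)
    return abs(torque(left) - torque(right)) <= tolerance

# The virtual-lab observation: 20 kg at 1 m balances 10 kg at 2 m.
print(is_balanced([(20, 1)], [(10, 2)]))   # True

# The penny example: 1 penny at 6 inches balances 2 pennies at 3 inches.
print(is_balanced([(1, 6)], [(2, 3)]))     # True

# Predicting a placement: where must one nickel (5.0 g) sit to balance
# one penny (2.5 g) placed 6 inches from the fulcrum?
penny_torque = 2.5 * 6
nickel_distance = penny_torque / 5.0
print(nickel_distance)                     # 3.0 inches
```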
https://betterlesson.com/lesson/640042/balancing-act
Stripped of repetition, the advice collected here on ending an essay comes down to a handful of points:

- A conclusion restates the thesis, wraps the essay up in a tidy package and leaves the reader thinking, sometimes with a plea or call to action. It should not dwell on a minor point or be a single sentence tacked onto your final body paragraph.
- End with a clincher: a final sentence that reinforces the overall argument or leaves the reader with an intriguing thought, question or quotation. An unexpected closing hook, or, in a narrative essay, a closing image, can make the ending memorable; a final sentence that is compound or parallel in structure, or one authoritative sentence that summarizes the main point, also works well.
- The length of the conclusion depends on the essay: the number of sentences it needs roughly follows the number of paragraphs (main points) in the body. A simple outline is a topic sentence restating the thesis, a brief wrap-up of the main points, and a strong final sentence.
- Body paragraphs need endings too: a linking or transitional sentence at the end of one paragraph (or the start of the next) carries the thought forward so the writing flows and the reader never gets lost.
- Different essay types shift the emphasis. A persuasive essay should tell readers what to do and end on an enthusiastic note; a scholarship essay's conclusion should do more than summarize; a general essay simply follows the introduction-body-conclusion pattern, with the introduction preparing the reader for the thesis statement, which expresses the overall idea of the paper.
- A formatting note on quotations at the end of a sentence: when a quotation is assembled from two or more original sentences, or omits material, mark the omission with ellipses; if whole sentences are omitted, end the preceding sentence with a period and insert the ellipsis with a space on both sides.
http://xvessayqffw.njdata.info/an-ending-sentence-for-an-essay.html
The first thing to remember when you want to calculate a gradient on a topographic map is that the two terms “gradient” and “slope” are interchangeable. The gradient change occurring within a specific area on the map reveals the lay of the land. In turn, this helps geologists and environmentalists determine any effect the gradient of the specified area has on areas around it. Erosion is a good example of why knowing the gradient of specific areas is important. A project such as this is easier with a scientific calculator, because you may need to calculate arctangents.

Place the map on a smooth surface and choose the area where the gradient needs to be calculated. Do not choose an area that goes over a hill, or down and then up a valley.

Draw a line perpendicular to the lines depicting the contours of the slope with a ruler. Begin your line on one of the contour lines and end on another one. Measure the line and translate that figure into feet, using the map legend.

Calculate the gradient by subtracting the elevation of the lower contour line on the line you drew from the elevation of the contour line at the other end of the line you drew. Divide the answer by the distance in feet represented by the line you drew. Multiply that number by 100 to give you the percent slope of the hill. For example, if the number you arrived at was 45, this means that for every 100 feet traveled in the area marked on the map, the elevation changes 45 feet, whether going up or down the hill.

Determine the angle of the slope by dividing the change in elevation by the length represented by the line you drew. This gives you the tangent value of the slope. Use the arctangent function on your scientific calculator to get the angle of the slope.

Tips: place weights on the corners of the map if it curls up while using it. Picture the percent slope as rise or fall over run. Always start and end your line directly on a contour line to avoid imprecise calculations.
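The arithmetic in these steps is easy to script. Below is a minimal Python sketch of the same calculation; the contour elevations and line length in the example call are hypothetical numbers, not taken from any particular map.

```python
import math

def slope_from_contours(elev_high_ft, elev_low_ft, run_ft):
    """Percent slope and slope angle for a line drawn between two contour lines.

    elev_high_ft, elev_low_ft -- elevations of the two contour lines, in feet
    run_ft -- ground distance represented by the drawn line, in feet (from the map scale)
    """
    rise = elev_high_ft - elev_low_ft              # change in elevation
    tangent = rise / run_ft                        # rise over run = tangent of the slope angle
    percent_slope = tangent * 100                  # e.g. 45 means 45 ft of elevation change per 100 ft traveled
    angle_deg = math.degrees(math.atan(tangent))   # arctangent converts the tangent to an angle
    return percent_slope, angle_deg

# Hypothetical example: contours at 1,240 ft and 1,150 ft, with the drawn line representing 200 ft
print(slope_from_contours(1240, 1150, 200))        # a 45 percent slope, an angle of about 24 degrees
```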
https://sciencing.com/calculate-gradients-topographic-map-7597807.html
CCSS: Mathematics, Grade 5

Operations & Algebraic Thinking

5.OA.A. Write and interpret numerical expressions.

5.OA.A.1. Use parentheses, brackets, or braces in numerical expressions, and evaluate expressions with these symbols.

5.OA.A.2. Write simple expressions that record calculations with numbers, and interpret numerical expressions without evaluating them. For example, express the calculation "add 8 and 7, then multiply by 2" as 2 × (8 + 7). Recognize that 3 × (18932 + 921) is three times as large as 18932 + 921, without having to calculate the indicated sum or product.

5.OA.B. Analyze patterns and relationships.

5.OA.B.3. Generate two numerical patterns using two given rules. Identify apparent relationships between corresponding terms. Form ordered pairs consisting of corresponding terms from the two patterns, and graph the ordered pairs on a coordinate plane. For example, given the rule "Add 3" and the starting number 0, and given the rule "Add 6" and the starting number 0, generate terms in the resulting sequences, and observe that the terms in one sequence are twice the corresponding terms in the other sequence. Explain informally why this is so.

Number & Operations in Base Ten

5.NBT.A. Understand the place value system.

5.NBT.A.1. Recognize that in a multi-digit number, a digit in one place represents 10 times as much as it represents in the place to its right and 1/10 of what it represents in the place to its left.

5.NBT.A.2. Explain patterns in the number of zeros of the product when multiplying a number by powers of 10, and explain patterns in the placement of the decimal point when a decimal is multiplied or divided by a power of 10. Use whole-number exponents to denote powers of 10.

5.NBT.A.3. Read, write, and compare decimals to thousandths.

5.NBT.A.3a. Read and write decimals to thousandths using base-ten numerals, number names, and expanded form, e.g., 347.392 = 3 × 100 + 4 × 10 + 7 × 1 + 3 × (1/10) + 9 × (1/100) + 2 × (1/1000).

5.NBT.A.3b. Compare two decimals to thousandths based on meanings of the digits in each place, using >, =, and < symbols to record the results of comparisons.

5.NBT.A.4. Use place value understanding to round decimals to any place.

5.NBT.B. Perform operations with multi-digit whole numbers and with decimals to hundredths.

5.NBT.B.5. Fluently multiply multi-digit whole numbers using the standard algorithm.

5.NBT.B.6. Find whole-number quotients of whole numbers with up to four-digit dividends and two-digit divisors, using strategies based on place value, the properties of operations, and/or the relationship between multiplication and division. Illustrate and explain the calculation by using equations, rectangular arrays, and/or area models.

5.NBT.B.7. Add, subtract, multiply, and divide decimals to hundredths, using concrete models or drawings and strategies based on place value, properties of operations, and/or the relationship between addition and subtraction; relate the strategy to a written method and explain the reasoning used.

Number & Operations - Fractions

5.NF.A. Use equivalent fractions as a strategy to add and subtract fractions.

5.NF.A.1. Add and subtract fractions with unlike denominators (including mixed numbers) by replacing given fractions with equivalent fractions in such a way as to produce an equivalent sum or difference of fractions with like denominators. For example, 2/3 + 5/4 = 8/12 + 15/12 = 23/12. (In general, a/b + c/d = (ad + bc)/bd.)

5.NF.A.2. Solve word problems involving addition and subtraction of fractions referring to the same whole, including cases of unlike denominators, e.g., by using visual fraction models or equations to represent the problem. Use benchmark fractions and number sense of fractions to estimate mentally and assess the reasonableness of answers. For example, recognize an incorrect result 2/5 + 1/2 = 3/7, by observing that 3/7 < 1/2.

5.NF.B. Apply and extend previous understandings of multiplication and division to multiply and divide fractions.

5.NF.B.3. Interpret a fraction as division of the numerator by the denominator (a/b = a ÷ b). Solve word problems involving division of whole numbers leading to answers in the form of fractions or mixed numbers, e.g., by using visual fraction models or equations to represent the problem. For example, interpret 3/4 as the result of dividing 3 by 4, noting that 3/4 multiplied by 4 equals 3, and that when 3 wholes are shared equally among 4 people each person has a share of size 3/4. If 9 people want to share a 50-pound sack of rice equally by weight, how many pounds of rice should each person get? Between what two whole numbers does your answer lie?

5.NF.B.4. Apply and extend previous understandings of multiplication to multiply a fraction or whole number by a fraction.

5.NF.B.4a. Interpret the product (a/b) × q as a parts of a partition of q into b equal parts; equivalently, as the result of a sequence of operations a × q ÷ b. For example, use a visual fraction model to show (2/3) × 4 = 8/3, and create a story context for this equation. Do the same with (2/3) × (4/5) = 8/15. (In general, (a/b) × (c/d) = ac/bd.)

5.NF.B.4b. Find the area of a rectangle with fractional side lengths by tiling it with unit squares of the appropriate unit fraction side lengths, and show that the area is the same as would be found by multiplying the side lengths. Multiply fractional side lengths to find areas of rectangles, and represent fraction products as rectangular areas.

5.NF.B.5. Interpret multiplication as scaling (resizing), by:

5.NF.B.5a. Comparing the size of a product to the size of one factor on the basis of the size of the other factor, without performing the indicated multiplication.

5.NF.B.5b. Explaining why multiplying a given number by a fraction greater than 1 results in a product greater than the given number (recognizing multiplication by whole numbers greater than 1 as a familiar case); explaining why multiplying a given number by a fraction less than 1 results in a product smaller than the given number; and relating the principle of fraction equivalence a/b = (n × a)/(n × b) to the effect of multiplying a/b by 1.

5.NF.B.6. Solve real world problems involving multiplication of fractions and mixed numbers, e.g., by using visual fraction models or equations to represent the problem.

5.NF.B.7. Apply and extend previous understandings of division to divide unit fractions by whole numbers and whole numbers by unit fractions.

5.NF.B.7a. Interpret division of a unit fraction by a non-zero whole number, and compute such quotients. (Students able to multiply fractions in general can develop strategies to divide fractions in general, by reasoning about the relationship between multiplication and division. But division of a fraction by a fraction is not a requirement at this grade.) For example, create a story context for (1/3) ÷ 4, and use a visual fraction model to show the quotient. Use the relationship between multiplication and division to explain that (1/3) ÷ 4 = 1/12 because (1/12) × 4 = 1/3.

5.NF.B.7b. Interpret division of a whole number by a unit fraction, and compute such quotients. For example, create a story context for 4 ÷ (1/5), and use a visual fraction model to show the quotient. Use the relationship between multiplication and division to explain that 4 ÷ (1/5) = 20 because 20 × (1/5) = 4.

5.NF.B.7c. Solve real world problems involving division of unit fractions by non-zero whole numbers and division of whole numbers by unit fractions, e.g., by using visual fraction models and equations to represent the problem.

Measurement & Data

5.MD.A. Convert like measurement units within a given measurement system.

5.MD.A.1. Convert among different-sized standard measurement units within a given measurement system (e.g., convert 5 cm to 0.05 m), and use these conversions in solving multi-step, real world problems.

5.MD.B. Represent and interpret data.

5.MD.B.2. Make a line plot to display a data set of measurements in fractions of a unit (1/2, 1/4, 1/8). Use operations on fractions for this grade to solve problems involving information presented in line plots. For example, given different measurements of liquid in identical beakers, find the amount of liquid each beaker would contain if the total amount in all the beakers were redistributed equally.

5.MD.C. Geometric measurement: understand concepts of volume and relate volume to multiplication and to addition.

5.MD.C.3. Recognize volume as an attribute of solid figures and understand concepts of volume measurement.

5.MD.C.3a. A cube with side length 1 unit, called a "unit cube", is said to have "one cubic unit" of volume, and can be used to measure volume.

5.MD.C.3b. A solid figure which can be packed without gaps or overlaps using n unit cubes is said to have a volume of n cubic units.

5.MD.C.4. Measure volumes by counting unit cubes, using cubic cm, cubic in, cubic ft, and improvised units.

5.MD.C.5. Relate volume to the operations of multiplication and addition and solve real world and mathematical problems involving volume.

5.MD.C.5a. Find the volume of a right rectangular prism with whole-number side lengths by packing it with unit cubes, and show that the volume is the same as would be found by multiplying the edge lengths, equivalently by multiplying the height by the area of the base. Represent threefold whole-number products as volumes, e.g., to represent the associative property of multiplication.

5.MD.C.5b. Apply the formulas V = l × w × h and V = b × h for rectangular prisms to find volumes of right rectangular prisms with whole-number edge lengths in the context of solving real world and mathematical problems.

5.MD.C.5c. Recognize volume as additive. Find volumes of solid figures composed of two non-overlapping right rectangular prisms by adding the volumes of the non-overlapping parts, applying this technique to solve real world problems.

Geometry

5.G.A. Graph points on the coordinate plane to solve real-world and mathematical problems.

5.G.A.1. Use a pair of perpendicular number lines, called axes, to define a coordinate system, with the intersection of the lines (the origin) arranged to coincide with the 0 on each line and a given point in the plane located by using an ordered pair of numbers, called its coordinates. Understand that the first number indicates how far to travel from the origin in the direction of one axis, and the second number indicates how far to travel in the direction of the second axis, with the convention that the names of the two axes and the coordinates correspond (e.g., x-axis and x-coordinate, y-axis and y-coordinate).

5.G.A.2. Represent real world and mathematical problems by graphing points in the first quadrant of the coordinate plane, and interpret coordinate values of points in the context of the situation.

5.G.B. Classify two-dimensional figures into categories based on their properties.

5.G.B.3. Understand that attributes belonging to a category of two-dimensional figures also belong to all subcategories of that category. For example, all rectangles have four right angles and squares are rectangles, so all squares have four right angles.

5.G.B.4. Classify two-dimensional figures in a hierarchy based on properties.

Mathematical Practice

MP. The Standards for Mathematical Practice describe varieties of expertise that mathematics educators at all levels should seek to develop in their students.

MP.1. Make sense of problems and persevere in solving them. Mathematically proficient students start by explaining to themselves the meaning of a problem and looking for entry points to its solution. They analyze givens, constraints, relationships, and goals. They make conjectures about the form and meaning of the solution and plan a solution pathway rather than simply jumping into a solution attempt. They consider analogous problems, and try special cases and simpler forms of the original problem in order to gain insight into its solution. They monitor and evaluate their progress and change course if necessary. Older students might, depending on the context of the problem, transform algebraic expressions or change the viewing window on their graphing calculator to get the information they need. Mathematically proficient students can explain correspondences between equations, verbal descriptions, tables, and graphs or draw diagrams of important features and relationships, graph data, and search for regularity or trends. Younger students might rely on using concrete objects or pictures to help conceptualize and solve a problem. Mathematically proficient students check their answers to problems using a different method, and they continually ask themselves, "Does this make sense?" They can understand the approaches of others to solving complex problems and identify correspondences between different approaches.

MP.2. Reason abstractly and quantitatively. Mathematically proficient students make sense of quantities and their relationships in problem situations. They bring two complementary abilities to bear on problems involving quantitative relationships: the ability to decontextualize, to abstract a given situation, represent it symbolically and manipulate the representing symbols as if they have a life of their own, without necessarily attending to their referents, and the ability to contextualize, to pause as needed during the manipulation process in order to probe into the referents for the symbols involved. Quantitative reasoning entails habits of creating a coherent representation of the problem at hand; considering the units involved; attending to the meaning of quantities, not just how to compute them; and knowing and flexibly using different properties of operations and objects.

MP.3. Construct viable arguments and critique the reasoning of others. Mathematically proficient students understand and use stated assumptions, definitions, and previously established results in constructing arguments. They make conjectures and build a logical progression of statements to explore the truth of their conjectures. They are able to analyze situations by breaking them into cases, and can recognize and use counterexamples. They justify their conclusions, communicate them to others, and respond to the arguments of others. They reason inductively about data, making plausible arguments that take into account the context from which the data arose. Mathematically proficient students are also able to compare the effectiveness of two plausible arguments, distinguish correct logic or reasoning from that which is flawed, and, if there is a flaw in an argument, explain what it is. Elementary students can construct arguments using concrete referents such as objects, drawings, diagrams, and actions. Such arguments can make sense and be correct, even though they are not generalized or made formal until later grades. Later, students learn to determine domains to which an argument applies. Students at all grades can listen or read the arguments of others, decide whether they make sense, and ask useful questions to clarify or improve the arguments.

MP.4. Model with mathematics. Mathematically proficient students can apply the mathematics they know to solve problems arising in everyday life, society, and the workplace. In early grades, this might be as simple as writing an addition equation to describe a situation. In middle grades, a student might apply proportional reasoning to plan a school event or analyze a problem in the community. By high school, a student might use geometry to solve a design problem or use a function to describe how one quantity of interest depends on another. Mathematically proficient students who can apply what they know are comfortable making assumptions and approximations to simplify a complicated situation, realizing that these may need revision later. They are able to identify important quantities in a practical situation and map their relationships using such tools as diagrams, two-way tables, graphs, flowcharts and formulas. They can analyze those relationships mathematically to draw conclusions. They routinely interpret their mathematical results in the context of the situation and reflect on whether the results make sense, possibly improving the model if it has not served its purpose.

MP.5. Use appropriate tools strategically. Mathematically proficient students consider the available tools when solving a mathematical problem. These tools might include pencil and paper, concrete models, a ruler, a protractor, a calculator, a spreadsheet, a computer algebra system, a statistical package, or dynamic geometry software. Proficient students are sufficiently familiar with tools appropriate for their grade or course to make sound decisions about when each of these tools might be helpful, recognizing both the insight to be gained and their limitations. For example, mathematically proficient high school students analyze graphs of functions and solutions generated using a graphing calculator. They detect possible errors by strategically using estimation and other mathematical knowledge. When making mathematical models, they know that technology can enable them to visualize the results of varying assumptions, explore consequences, and compare predictions with data. Mathematically proficient students at various grade levels are able to identify relevant external mathematical resources, such as digital content located on a website, and use them to pose or solve problems. They are able to use technological tools to explore and deepen their understanding of concepts.

MP.6. Attend to precision. Mathematically proficient students try to communicate precisely to others. They try to use clear definitions in discussion with others and in their own reasoning. They state the meaning of the symbols they choose, including using the equal sign consistently and appropriately. They are careful about specifying units of measure, and labeling axes to clarify the correspondence with quantities in a problem. They calculate accurately and efficiently, and express numerical answers with a degree of precision appropriate for the problem context. In the elementary grades, students give carefully formulated explanations to each other. By the time they reach high school they have learned to examine claims and make explicit use of definitions.

MP.7. Look for and make use of structure. Mathematically proficient students look closely to discern a pattern or structure. Young students, for example, might notice that three and seven more is the same amount as seven and three more, or they may sort a collection of shapes according to how many sides the shapes have. Later, students will see that 7 × 8 equals the well remembered 7 × 5 + 7 × 3, in preparation for learning about the distributive property. In the expression x² + 9x + 14, older students can see the 14 as 2 × 7 and the 9 as 2 + 7. They recognize the significance of an existing line in a geometric figure and can use the strategy of drawing an auxiliary line for solving problems. They also can step back for an overview and shift perspective. They can see complicated things, such as some algebraic expressions, as single objects or as being composed of several objects. For example, they can see 5 - 3(x - y)² as 5 minus a positive number times a square and use that to realize that its value cannot be more than 5 for any real numbers x and y.

MP.8. Look for and express regularity in repeated reasoning. Mathematically proficient students notice if calculations are repeated, and look both for general methods and for shortcuts. Upper elementary students might notice when dividing 25 by 11 that they are repeating the same calculations over and over again, and conclude they have a repeating decimal. By paying attention to the calculation of slope as they repeatedly check whether points are on the line through (1, 2) with slope 3, middle school students might abstract the equation (y - 2)/(x - 1) = 3. Noticing the regularity in the way terms cancel when expanding (x - 1)(x + 1), (x - 1)(x² + x + 1), and (x - 1)(x³ + x² + x + 1) might lead them to the general formula for the sum of a geometric series. As they work to solve a problem, mathematically proficient students maintain oversight of the process, while attending to the details. They continually evaluate the reasonableness of their intermediate results.
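The 25 ÷ 11 example in MP.8 can be made concrete with a short routine that performs decimal long division and watches for a repeated remainder. This sketch is only an illustration of the practice standard, not part of the CCSS text.

```python
def long_division_digits(numerator, divisor, max_digits=12):
    """Return the whole part, the decimal digits, and where they start repeating (None if they don't)."""
    digits, seen = [], {}
    remainder = numerator % divisor
    while remainder and remainder not in seen and len(digits) < max_digits:
        seen[remainder] = len(digits)        # remember where this remainder first appeared
        remainder *= 10
        digits.append(remainder // divisor)  # next decimal digit
        remainder %= divisor
    return numerator // divisor, digits, seen.get(remainder)

print(long_division_digits(25, 11))   # (2, [2, 7], 0): 25/11 = 2.272727..., the block "27" repeats
```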
http://docplayer.net/26421209-Ccss-mathematics-operations-algebraic-thinking-ccss-grade-5-5-oa-a-write-and-interpret-numerical-expressions.html
If you put a liquid into a closed space, molecules from the surface of that liquid will evaporate until the entire space is filled with vapor. The pressure created by the evaporating liquid is called the vapor pressure. Knowing the vapor pressure at a specific temperature is important because vapor pressure determines a liquid's boiling point and is related to when a flammable gas will burn. If the vapor of a liquid in your location is hazardous to your health, the vapor pressure helps you determine how much of that liquid will become gas in a given amount of time, and therefore whether the air will be dangerous to breathe. The two equations used to estimate the vapor pressure of a pure liquid are the Clausius-Clapeyron equation and the Antoine equation.

The Clausius-Clapeyron Equation

Measure the temperature of your liquid using a thermometer or thermocouple. In this example we'll look at benzene, a common chemical used to make several plastics. We'll use benzene at a temperature of 40 degrees Celsius, or 313.15 Kelvin.

Find the latent heat of vaporization for your liquid in a data table. This is the amount of energy it takes to go from a liquid to a gas at a specific temperature. The latent heat of vaporization of benzene at this temperature is 35,030 Joules per mole.

Find the Clausius-Clapeyron constant for your liquid in a data table or from separate experiments that measure vapor pressure at different temperatures. This is just an integration constant that comes from doing the calculus used to derive the equation, and it is unique to each liquid. Vapor pressure constants are often referenced to pressure measured in millimeters of mercury, or mm of Hg. The constant for the vapor pressure of benzene in mm of Hg is 18.69.

Use the Clausius-Clapeyron equation to calculate the natural log of the vapor pressure. The Clausius-Clapeyron equation says that the natural log of the vapor pressure is equal to -1 multiplied by the heat of vaporization, divided by the ideal gas constant, divided by the temperature of the liquid, plus a constant unique to the liquid (in symbols, ln P = -Hvap/(R × T) + C). For this example with benzene at 313.15 Kelvin, the natural log of the vapor pressure is -1 multiplied by 35,030, divided by 8.314, divided by 313.15, plus 18.69, which equals 5.235.

Calculate the vapor pressure of benzene at 40 degrees Celsius by evaluating the exponential function at 5.235, which gives 187.8 mm of Hg, or 25.03 kilopascals.

The Antoine Equation

Find the Antoine constants for benzene at 40 degrees Celsius in a data table. These constants are unique to each liquid, and they are calculated by using non-linear regression techniques on the results of many different experiments that measure the vapor pressure at different temperatures. These constants, referenced to mm of Hg for benzene, are 6.90565, 1211.033 and 220.790.

Use the Antoine equation to calculate the base-10 log of the vapor pressure. The Antoine equation, using three constants unique to the liquid, says that the base-10 log of the vapor pressure equals the first constant minus the quantity of the second constant divided by the sum of the temperature and the third constant (in symbols, log10 P = A - B/(C + T)). For benzene, this is 6.90565 minus 1211.033 divided by the sum of 40 and 220.790, which equals 2.262.

Calculate the vapor pressure by raising 10 to the power of 2.262, which equals 182.8 mm of Hg, or 24.37 kilopascals.
Neither total volume nor other gases in the same space, such as air, have an effect on the amount of evaporation and resulting vapor pressure, so they do not affect the vapor pressure calculation. Vapor pressure of a mixture is calculated with Raoult's Law, which adds the vapor pressures of the individual components multiplied by their mole fraction. The Clausius-Clapeyron and Antoine equations only provide estimates of the vapor pressure at a specific temperature. If knowing the exact vapor pressure is required for your application, you must measure it.
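Both estimates are easy to reproduce in code. The following Python sketch uses the benzene constants quoted above (pressures referenced to mm of Hg); the Raoult helper at the end simply illustrates the mixture rule mentioned in the note and is not part of the original walkthrough.

```python
import math

R = 8.314  # ideal gas constant, J/(mol*K)

def vapor_pressure_clausius_clapeyron(heat_of_vaporization, temperature_k, constant):
    """ln(P) = -Hvap / (R * T) + C, so P = exp(...); units of P follow the fitted constant."""
    return math.exp(-heat_of_vaporization / (R * temperature_k) + constant)

def vapor_pressure_antoine(a, b, c, temperature_c):
    """log10(P) = A - B / (C + T), with T in degrees Celsius here."""
    return 10 ** (a - b / (c + temperature_c))

def raoult_total_pressure(pure_pressures, mole_fractions):
    """Raoult's law for an ideal mixture: sum of each pure vapor pressure times its mole fraction."""
    return sum(p * x for p, x in zip(pure_pressures, mole_fractions))

# Benzene at 40 deg C (313.15 K), constants as quoted in the text
print(vapor_pressure_clausius_clapeyron(35030, 313.15, 18.69))   # about 188 mm Hg
print(vapor_pressure_antoine(6.90565, 1211.033, 220.790, 40))    # about 183 mm Hg
```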
https://sciencing.com/calculate-vapor-pressure-4479034.html
The ionosphere (/aɪˈɒnəˌsfɪər/) is the ionized part of Earth's upper atmosphere, from about 60 km (37 mi) to 1,000 km (620 mi) altitude, a region that includes the thermosphere and parts of the mesosphere and exosphere. The ionosphere is ionized by solar radiation. It plays an important role in atmospheric electricity and forms the inner edge of the magnetosphere. It has practical importance because, among other functions, it influences radio propagation to distant places on the Earth. As early as 1839, the German mathematician and physicist Carl Friedrich Gauss postulated that an electrically conducting region of the atmosphere could account for observed variations of Earth's magnetic field. Sixty years later, Guglielmo Marconi received the first trans-Atlantic radio signal on December 12, 1901, in St. John's, Newfoundland (now in Canada) using a 152.4 m (500 ft) kite-supported antenna for reception. The transmitting station in Poldhu, Cornwall, used a spark-gap transmitter to produce a signal with a frequency of approximately 500 kHz and a power of 100 times more than any radio signal previously produced. The message received was three dits, the Morse code for the letter S. To reach Newfoundland the signal would have to bounce off the ionosphere twice. Dr. Jack Belrose has contested this, however, based on theoretical and experimental work. However, Marconi did achieve transatlantic wireless communications in Glace Bay, Nova Scotia, one year later. In 1902, Oliver Heaviside proposed the existence of the Kennelly–Heaviside layer of the ionosphere which bears his name. Heaviside's proposal included means by which radio signals are transmitted around the Earth's curvature. Heaviside's proposal, coupled with Planck's law of black body radiation, may have hampered the growth of radio astronomy for the detection of electromagnetic waves from celestial bodies until 1932 (and the development of high-frequency radio transceivers). Also in 1902, Arthur Edwin Kennelly discovered some of the ionosphere's radio-electrical properties. In 1912, the U.S. Congress imposed the Radio Act of 1912 on amateur radio operators, limiting their operations to frequencies above 1.5 MHz (wavelength 200 meters or smaller). The government thought those frequencies were useless. This led to the discovery of HF radio propagation via the ionosphere in 1923. We have in quite recent years seen the universal adoption of the term 'stratosphere'..and..the companion term 'troposphere'... The term 'ionosphere', for the region in which the main characteristic is large scale ionisation with considerable mean free paths, appears appropriate as an addition to this series. In the early 1930s, test transmissions of Radio Luxembourg inadvertently provided evidence of the first radio modification of the ionosphere; HAARP ran a series of experiments in 2017 using the eponymous Luxembourg Effect. Edward V. Appleton was awarded a Nobel Prize in 1947 for his confirmation in 1927 of the existence of the ionosphere. Lloyd Berkner first measured the height and density of the ionosphere. This permitted the first complete theory of short-wave radio propagation. Maurice V. Wilkes and J. A. Ratcliffe researched the topic of radio propagation of very long radio waves in the ionosphere. Vitaly Ginzburg has developed a theory of electromagnetic wave propagation in plasmas such as the ionosphere. In 1962, the Canadian satellite Alouette 1 was launched to study the ionosphere. 
Following its success were Alouette 2 in 1965 and the two ISIS satellites in 1969 and 1971, further AEROS-A and -B in 1972 and 1975, all for measuring the ionosphere. On July 26, 1963 the first operational geosynchronous satellite Syncom 2 was launched. The board radio beacons on this satellite (and its successors) enabled – for the first time – the measurement of total electron content (TEC) variation along a radio beam from geostationary orbit to an earth receiver. (The rotation of the plane of polarization directly measures TEC along the path.) Australian geophysicist Elizabeth Essex-Cohen from 1969 onwards was using this technique to monitor the atmosphere above Australia and Antarctica. The ionosphere is a shell of electrons and electrically charged atoms and molecules that surrounds the Earth, stretching from a height of about 50 km (31 mi) to more than 1,000 km (620 mi). It exists primarily due to ultraviolet radiation from the Sun. The lowest part of the Earth's atmosphere, the troposphere extends from the surface to about 10 km (6.2 mi). Above that is the stratosphere, followed by the mesosphere. In the stratosphere incoming solar radiation creates the ozone layer. At heights of above 80 km (50 mi), in the thermosphere, the atmosphere is so thin that free electrons can exist for short periods of time before they are captured by a nearby positive ion. The number of these free electrons is sufficient to affect radio propagation. This portion of the atmosphere is partially ionized and contains a plasma which is referred to as the ionosphere. Ultraviolet (UV), X-ray and shorter wavelengths of solar radiation are ionizing, since photons at these frequencies contain sufficient energy to dislodge an electron from a neutral gas atom or molecule upon absorption. In this process the light electron obtains a high velocity so that the temperature of the created electronic gas is much higher (of the order of thousand K) than the one of ions and neutrals. The reverse process to ionization is recombination, in which a free electron is "captured" by a positive ion. Recombination occurs spontaneously, and causes the emission of a photon carrying away the energy produced upon recombination. As gas density increases at lower altitudes, the recombination process prevails, since the gas molecules and ions are closer together. The balance between these two processes determines the quantity of ionization present. Ionization depends primarily on the Sun and its activity. The amount of ionization in the ionosphere varies greatly with the amount of radiation received from the Sun. Thus there is a diurnal (time of day) effect and a seasonal effect. The local winter hemisphere is tipped away from the Sun, thus there is less received solar radiation. The activity of the Sun is associated with the sunspot cycle, with more radiation occurring with more sunspots. Radiation received also varies with geographical location (polar, auroral zones, mid-latitudes, and equatorial regions). There are also mechanisms that disturb the ionosphere and decrease the ionization. There are disturbances such as solar flares and the associated release of charged particles into the solar wind which reaches the Earth and interacts with its geomagnetic field. At night the F layer is the only layer of significant ionization present, while the ionization in the E and D layers is extremely low. 
During the day, the D and E layers become much more heavily ionized, as does the F layer, which develops an additional, weaker region of ionisation known as the F1 layer. The F2 layer persists by day and night and is the main region responsible for the refraction and reflection of radio waves. The D layer is the innermost layer, 60 km (37 mi) to 90 km (56 mi) above the surface of the Earth. Ionization here is due to Lyman series-alpha hydrogen radiation at a wavelength of 121.6 nanometre (nm) ionizing nitric oxide (NO). In addition, high solar activity can generate hard X-rays (wavelength < 1 nm) that ionize N2 and O2. Recombination rates are high in the D layer, so there are many more neutral air molecules than ions. Medium frequency (MF) and lower high frequency (HF) radio waves are significantly attenuated within the D layer, as the passing radio waves cause electrons to move, which then collide with the neutral molecules, giving up their energy. Lower frequencies experience greater absorption because they move the electrons farther, leading to greater chance of collisions. This is the main reason for absorption of HF radio waves, particularly at 10 MHz and below, with progressively less absorption at higher frequencies. This effect peaks around noon and is reduced at night due to a decrease in the D layer's thickness; only a small part remains due to cosmic rays. A common example of the D layer in action is the disappearance of distant AM broadcast band stations in the daytime. During solar proton events, ionization can reach unusually high levels in the D-region over high and polar latitudes. Such very rare events are known as Polar Cap Absorption (or PCA) events, because the increased ionization significantly enhances the absorption of radio signals passing through the region. In fact, absorption levels can increase by many tens of dB during intense events, which is enough to absorb most (if not all) transpolar HF radio signal transmissions. Such events typically last less than 24 to 48 hours. The E layer is the middle layer, 90 km (56 mi) to 150 km (93 mi) above the surface of the Earth. Ionization is due to soft X-ray (1–10 nm) and far ultraviolet (UV) solar radiation ionization of molecular oxygen (O2). Normally, at oblique incidence, this layer can only reflect radio waves having frequencies lower than about 10 MHz and may contribute a bit to absorption on frequencies above. However, during intense sporadic E events, the Es layer can reflect frequencies up to 50 MHz and higher. The vertical structure of the E layer is primarily determined by the competing effects of ionization and recombination. At night the E layer weakens because the primary source of ionization is no longer present. After sunset an increase in the height of the E layer maximum increases the range to which radio waves can travel by reflection from the layer. This region is also known as the Kennelly–Heaviside layer or simply the Heaviside layer. Its existence was predicted in 1902 independently and almost simultaneously by the American electrical engineer Arthur Edwin Kennelly (1861–1939) and the British physicist Oliver Heaviside (1850–1925). However, it was not until 1924 that its existence was detected by Edward V. Appleton and Miles Barnett. The Es layer (sporadic E-layer) is characterized by small, thin clouds of intense ionization, which can support reflection of radio waves, rarely up to 225 MHz. Sporadic-E events may last for just a few minutes to several hours. 
Sporadic E propagation makes VHF-operating radio amateurs very excited, as propagation paths that are generally unreachable can open up. There are multiple causes of sporadic-E that are still being pursued by researchers. This propagation occurs most frequently during the summer months when high signal levels may be reached. The skip distances are generally around 1,640 km (1,020 mi). Distances for one hop propagation can be anywhere from 900 km (560 mi) to 2,500 km (1,600 mi). Double-hop reception over 3,500 km (2,200 mi) is possible. The F layer or region, also known as the Appleton–Barnett layer, extends from about 150 km (93 mi) to more than 500 km (310 mi) above the surface of Earth. It is the layer with the highest electron density, which implies signals penetrating this layer will escape into space. Electron production is dominated by extreme ultraviolet (UV, 10–100 nm) radiation ionizing atomic oxygen. The F layer consists of one layer (F2) at night, but during the day, a secondary peak (labelled F1) often forms in the electron density profile. Because the F2 layer remains by day and night, it is responsible for most skywave propagation of radio waves and long distances high frequency (HF, or shortwave) radio communications. Above the F layer, the number of oxygen ions decreases and lighter ions such as hydrogen and helium become dominant. This region above the F layer peak and below the plasmasphere is called the topside ionosphere. An ionospheric model is a mathematical description of the ionosphere as a function of location, altitude, day of year, phase of the sunspot cycle and geomagnetic activity. Geophysically, the state of the ionospheric plasma may be described by four parameters: electron density, electron and ion temperature and, since several species of ions are present, ionic composition. Radio propagation depends uniquely on electron density. Models are usually expressed as computer programs. The model may be based on basic physics of the interactions of the ions and electrons with the neutral atmosphere and sunlight, or it may be a statistical description based on a large number of observations or a combination of physics and observations. One of the most widely used models is the International Reference Ionosphere (IRI), which is based on data and specifies the four parameters just mentioned. The IRI is an international project sponsored by the Committee on Space Research (COSPAR) and the International Union of Radio Science (URSI). The major data sources are the worldwide network of ionosondes, the powerful incoherent scatter radars (Jicamarca, Arecibo, Millstone Hill, Malvern, St Santin), the ISIS and Alouette topside sounders, and in situ instruments on several satellites and rockets. IRI is updated yearly. IRI is more accurate in describing the variation of the electron density from bottom of the ionosphere to the altitude of maximum density than in describing the total electron content (TEC). Since 1999 this model is "International Standard" for the terrestrial ionosphere (standard TS16457). Ionograms allow deducing, via computation, the true shape of the different layers. Nonhomogeneous structure of the electron/ion-plasma produces rough echo traces, seen predominantly at night and at higher latitudes, and during disturbed conditions. At mid-latitudes, the F2 layer daytime ion production is higher in the summer, as expected, since the Sun shines more directly on the Earth. 
However, there are seasonal changes in the molecular-to-atomic ratio of the neutral atmosphere that cause the summer ion loss rate to be even higher. The result is that the increase in the summertime loss overwhelms the increase in summertime production, and total F2 ionization is actually lower in the local summer months. This effect is known as the winter anomaly. The anomaly is always present in the northern hemisphere, but is usually absent in the southern hemisphere during periods of low solar activity. Within approximately ± 20 degrees of the magnetic equator, is the equatorial anomaly. It is the occurrence of a trough in the ionization in the F2 layer at the equator and crests at about 17 degrees in magnetic latitude. The Earth's magnetic field lines are horizontal at the magnetic equator. Solar heating and tidal oscillations in the lower ionosphere move plasma up and across the magnetic field lines. This sets up a sheet of electric current in the E region which, with the horizontal magnetic field, forces ionization up into the F layer, concentrating at ± 20 degrees from the magnetic equator. This phenomenon is known as the equatorial fountain. The worldwide solar-driven wind results in the so-called Sq (solar quiet) current system in the E region of the Earth's ionosphere (ionospheric dynamo region) (100–130 km (62–81 mi) altitude). Resulting from this current is an electrostatic field directed west–east (dawn–dusk) in the equatorial day side of the ionosphere. At the magnetic dip equator, where the geomagnetic field is horizontal, this electric field results in an enhanced eastward current flow within ± 3 degrees of the magnetic equator, known as the equatorial electrojet. When the Sun is active, strong solar flares can occur that will hit the sunlit side of Earth with hard X-rays. The X-rays will penetrate to the D-region, releasing electrons that will rapidly increase absorption, causing a high frequency (3–30 MHz) radio blackout. During this time very low frequency (3–30 kHz) signals will be reflected by the D layer instead of the E layer, where the increased atmospheric density will usually increase the absorption of the wave and thus dampen it. As soon as the X-rays end, the sudden ionospheric disturbance (SID) or radio black-out ends as the electrons in the D-region recombine rapidly and signal strengths return to normal. Associated with solar flares is a release of high-energy protons. These particles can hit the Earth within 15 minutes to 2 hours of the solar flare. The protons spiral around and down the magnetic field lines of the Earth and penetrate into the atmosphere near the magnetic poles increasing the ionization of the D and E layers. PCA's typically last anywhere from about an hour to several days, with an average of around 24 to 36 hours. Coronal mass ejections can also release energetic protons that enhance D-region absorption in the polar regions. Lightning can cause ionospheric perturbations in the D-region in one of two ways. The first is through VLF (very low frequency) radio waves launched into the magnetosphere. These so-called "whistler" mode waves can interact with radiation belt particles and cause them to precipitate onto the ionosphere, adding ionization to the D-region. These disturbances are called "lightning-induced electron precipitation" (LEP) events. Additional ionization can also occur from direct heating/ionization as a result of huge motions of charge in lightning strikes. These events are called early/fast. In 1925, C. T. R. 
Wilson proposed a mechanism by which electrical discharge from lightning storms could propagate upwards from clouds to the ionosphere. Around the same time, Robert Watson-Watt, working at the Radio Research Station in Slough, UK, suggested that the ionospheric sporadic E layer (Es) appeared to be enhanced as a result of lightning but that more work was needed. In 2005, C. Davis and C. Johnson, working at the Rutherford Appleton Laboratory in Oxfordshire, UK, demonstrated that the Es layer was indeed enhanced as a result of lightning activity. Their subsequent research has focused on the mechanism by which this process can occur.

Due to the ability of ionized atmospheric gases to refract high frequency (HF, or shortwave) radio waves, the ionosphere can reflect radio waves directed into the sky back toward the Earth. Radio waves directed at an angle into the sky can return to Earth beyond the horizon. This technique, called "skip" or "skywave" propagation, has been used since the 1920s to communicate at international or intercontinental distances. The returning radio waves can reflect off the Earth's surface into the sky again, allowing greater ranges to be achieved with multiple hops. This communication method is variable and unreliable, with reception over a given path depending on time of day or night, the seasons, weather, and the 11-year sunspot cycle. During the first half of the 20th century it was widely used for transoceanic telephone and telegraph service, and business and diplomatic communication. Due to its relative unreliability, shortwave radio communication has been mostly abandoned by the telecommunications industry, though it remains important for high-latitude communication where satellite-based radio communication is not possible. Some broadcasting stations and automated services still use shortwave radio frequencies, as do radio amateur hobbyists for private recreational contacts.

When a radio wave reaches the ionosphere, the electric field in the wave forces the electrons in the ionosphere into oscillation at the same frequency as the radio wave. Some of the radio-frequency energy is given up to this resonant oscillation. The oscillating electrons will then either be lost to recombination or will re-radiate the original wave energy. Total refraction can occur when the collision frequency of the ionosphere is less than the radio frequency, and if the electron density in the ionosphere is great enough. A qualitative understanding of how an electromagnetic wave propagates through the ionosphere can be obtained by recalling geometric optics. Since the ionosphere is a plasma, it can be shown that the refractive index is less than unity. Hence, the electromagnetic "ray" is bent away from the normal rather than toward the normal, as would be the case when the refractive index is greater than unity. It can also be shown that the refractive index of a plasma, and hence the ionosphere, is frequency-dependent; see Dispersion (optics).

The critical frequency is the limiting frequency at or below which a radio wave is reflected by an ionospheric layer at vertical incidence. If the transmitted frequency is higher than the plasma frequency of the ionosphere, then the electrons cannot respond fast enough, and they are not able to re-radiate the signal. It is calculated as f_critical = 9 × √N, where N = electron density per m³ and f_critical is in Hz.
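The critical-frequency relation is a one-liner to check numerically. In the sketch below the electron density is a hypothetical round number, not a measured value.

```python
import math

def critical_frequency_hz(electron_density_per_m3):
    """Vertical-incidence critical frequency: f_critical = 9 * sqrt(N), N in electrons per cubic metre."""
    return 9.0 * math.sqrt(electron_density_per_m3)

# A daytime F2-layer peak density on the order of 1e12 electrons per m^3 (hypothetical value)
print(critical_frequency_hz(1e12) / 1e6, "MHz")   # 9.0 MHz
```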
The Maximum Usable Frequency (MUF) is defined as the upper frequency limit that can be used for transmission between two points at a specified time. The cutoff frequency is the frequency below which a radio wave fails to penetrate a layer of the ionosphere at the incidence angle required for transmission between two specified points by refraction from the layer. The open system electrodynamic tether, which uses the ionosphere, is being researched. The space tether uses plasma contactors and the ionosphere as parts of a circuit to extract energy from the Earth's magnetic field by electromagnetic induction. Scientists explore the structure of the ionosphere by a wide variety of methods. They include: A variety of experiments, such as HAARP (High Frequency Active Auroral Research Program), involve high power radio transmitters to modify the properties of the ionosphere. These investigations focus on studying the properties and behavior of ionospheric plasma, with particular emphasis on being able to understand and use it to enhance communications and surveillance systems for both civilian and military purposes. HAARP was started in 1993 as a proposed twenty-year experiment, and is currently active near Gakona, Alaska. The SuperDARN radar project researches the high- and mid-latitudes using coherent backscatter of radio waves in the 8 to 20 MHz range. Coherent backscatter is similar to Bragg scattering in crystals and involves the constructive interference of scattering from ionospheric density irregularities. The project involves more than 11 different countries and multiple radars in both hemispheres. Scientists are also examining the ionosphere by the changes to radio waves, from satellites and stars, passing through it. The Arecibo radio telescope located in Puerto Rico, was originally intended to study Earth's ionosphere. Ionograms show the virtual heights and critical frequencies of the ionospheric layers and which are measured by an ionosonde. An ionosonde sweeps a range of frequencies, usually from 0.1 to 30 MHz, transmitting at vertical incidence to the ionosphere. As the frequency increases, each wave is refracted less by the ionization in the layer, and so each penetrates further before it is reflected. Eventually, a frequency is reached that enables the wave to penetrate the layer without being reflected. For ordinary mode waves, this occurs when the transmitted frequency just exceeds the peak plasma, or critical, frequency of the layer. Tracings of the reflected high frequency radio pulses are known as ionograms. Reduction rules are given in: "URSI Handbook of Ionogram Interpretation and Reduction", edited by William Roy Piggott and Karl Rawer, Elsevier Amsterdam, 1961 (translations into Chinese, French, Japanese and Russian are available). Incoherent scatter radars operate above the critical frequencies. Therefore, the technique allows probing the ionosphere, unlike ionosondes, also above the electron density peaks. The thermal fluctuations of the electron density scattering the transmitted signals lack coherence, which gave the technique its name. Their power spectrum contains information not only on the density, but also on the ion and electron temperatures, ion masses and drift velocities. Radio occultation is a remote sensing technique where a GNSS signal tangentially scrapes the Earth, passing through the atmosphere, and is received by a Low Earth Orbit (LEO) satellite. As the signal passes through the atmosphere, it is refracted, curved and delayed. 
An LEO satellite samples the total electron content and bending angle of many such signal paths as it watches the GNSS satellite rise or set behind the Earth. Using an Inverse Abel's transform, a radial profile of refractivity at that tangent point on earth can be reconstructed. In empirical models of the ionosphere such as Nequick, the following indices are used as indirect indicators of the state of the ionosphere. F10.7 and R12 are two indices commonly used in ionospheric modelling. Both are valuable for their long historical records covering multiple solar cycles. F10.7 is a measurement of the intensity of solar radio emissions at a frequency of 2800 MHz made using a ground radio telescope. R12 is a 12 months average of daily sunspot numbers. Both indices have been shown to be correlated to each other. However, both indices are only indirect indicators of solar ultraviolet and X-ray emissions, which are primarily responsible for causing ionization in the Earth's upper atmosphere. We now have data from the GOES spacecraft that measures the background X-ray flux from the Sun, a parameter more closely related to the ionization levels in the ionosphere. There are a number of models used to understand the effects of the ionosphere global navigation satellite systems. The Klobuchar model is currently used to compensate for ionospheric effects in GPS. This model was developed at the US Air Force Geophysical Research Laboratory circa 1974 by John (Jack) Klobuchar. The Galileo navigation system uses the NeQuick model. Objects in the Solar System that have appreciable atmospheres (i.e., all of the major planets and many of the larger natural satellites) generally produce ionospheres. Planets known to have ionospheres include Venus, Uranus, Mars and Jupiter.
https://howlingpixel.com/i-en/Ionosphere
I start class today by reminding students that we have been working on arithmetic and geometric sequences for the last few days. In the last lesson, they learned the defining features of an arithmetic sequence and how to identify such a sequence from a graph, table, recursive, and explicit equation. I let them know that today we will be looking at another problem situation and using these representations to help us make sense of the problem.

We start by just reading through the problem in Don't Break the Chain together. I ask for a volunteer to go to the board and draw a diagram that would show the first two days of Bill's sent e-mails. I want to make sure students are understanding the email chain and are clear about how the growth of the chain occurs. Next, I let students get to work in small groups on Questions 1 and 2. If some groups finish early, they can move on to Question 3. I emphasize with students that for Question 2, I would like them to represent their work with a table, graph, recursive function and explicit function. There are a number of things I watch for as students work.

At the start of today's discussion I have students share out their table, graph, recursive, and explicit equations. I follow the Breaking the Chain PowerPoint to make sure students understand the sequence and that I hit all of the main points. A lot of my focus in today's discussion is helping students to recognize how they can determine if a sequence is geometric by looking at its graph, table, recursive and explicit function. We will spend a fair amount of time during the discussion working through the explicit formula. I anticipate that most of my students will have trouble writing it at this stage in the unit. We work with a table with two additional columns (included in the PowerPoint) so students can see that 80 can be written as 8 x 10 and then 800 can be written as 8 x 10 x 10, and so on. Next, I try to elicit from students that the number of 10s they need to multiply by is one less than the number of days that have gone by. I also refer students to a previous geometric sequence to see how we wrote that one, to see if they can find some clues to writing the current explicit equation.

Today's closing activity asks students to reflect on the similarities and differences between arithmetic and geometric sequences. The reflection questions are included in the Break the Chain PowerPoint, and students respond to them in writing.

Don't Break the Chain is licensed by © 2012 Mathematics Vision Project | MVP, in partnership with the Utah State Office of Education, under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported license.
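For readers who want to check the pattern themselves, here is a small Python sketch of the two representations discussed above. It assumes, as the table in the discussion suggests, that the chain produces 8 e-mails on day 1 and ten times as many on each following day; the function names are mine, not part of the MVP materials.

```python
def emails_recursive(n):
    """f(1) = 8, f(n) = 10 * f(n - 1): each day's total is 10 times the previous day's."""
    return 8 if n == 1 else 10 * emails_recursive(n - 1)

def emails_explicit(n):
    """f(n) = 8 * 10**(n - 1): the number of 10s multiplied is one less than the number of days."""
    return 8 * 10 ** (n - 1)

for day in range(1, 6):
    print(day, emails_recursive(day), emails_explicit(day))   # both give 8, 80, 800, 8000, 80000
```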
https://betterlesson.com/lesson/639293/chain-emails-practice-with-a-geometric-sequence?from=consumer_breadcrumb_dropdown_lesson
The resources collected here all deal with developing critical thinking skills in elementary students. Their main points are:

- Critical thinking involves suspending your own beliefs so you can explore and question a topic from a blank-slate point of view. When two people have radically different background beliefs or worldviews, they often have difficulty finding any sort of common ground, which is exactly where these skills matter.
- Critical thinking skills develop over time through practice and commitment; they are not mastered in a single lesson.
- Schools such as KIPP King Collegiate High School focus on honing critical thinking across all subjects rather than treating it as a stand-alone topic, and the arts can also be used to promote critical thinking in elementary classrooms.
- Although it may be easier to grade tests that require a standard memorized answer, teaching critical thinking helps students think creatively and generate new ideas. The most important gift educators can give students is the ability to think.
- Practical classroom materials include worksheets, brain teasers, logic games, sorting and graphing tasks, novel studies, video series for younger students, and problems involving time, measurement and money, all of which encourage students to think outside the box and sharpen logical reasoning and problem-solving skills.
- Teachers also need strategies for promoting and assessing critical thinking, for example classroom activities built around evidence-based reasoning (see "Classroom Activities for Encouraging Evidence-Based Critical Thinking", The Journal of Effective Teaching, Vol. 13, No. 2, 2013, pp. 83-93) and ready-made lesson plans and resources for creativity and problem solving.
http://vvpapervplr.afvallenbuik.info/critical-thinking-elementary-activities.html
Boost from Level 4 to Level 5. To give you a better idea of what is on the digital download, below is a list of the main categories, with further information about what is expected.

The questions on the National Tests often test the children's understanding of the number system as well as their ability to perform calculations using the four basic operations. It is therefore worth spending some time discussing the structure of the number system and the fact that it is one of the greatest inventions of all time. Place value is an extremely important idea embedded in the number system, and children should develop a good understanding of how each digit takes its value and how this value changes as the digits move to the left or right when multiplying or dividing by powers of 10. They should also understand that concepts may be combined. For example, to multiply by 500, multiply by 5 and then by 100; to test whether a number is divisible by 15, test whether it is divisible by 3 and by 5.

After analysing all the past NCT papers, we have found that questions in this section fall mainly into the following types, which are practised on the pages in this module:

- Simple calculations where the only skill required is proficiency in the application of basic operations on the calculator.
- Simple calculations involving diagrams or stories, or where the results of calculations must be put onto a diagram or into a table.
- Inverse operations (these come up frequently), in which a missing number must be found to make a calculation true.
- What could the missing number(s) be? In these problems there are several possible answers, any one (or set) of which is acceptable.
- Rounding numbers to the nearest 100 etc., or saying which of a group of numbers is nearest to a given number.
- Problems involving algebraic concepts, but without the letters (e.g. the sum and difference of two numbers are given; what are the two numbers?).
- Negative numbers, which so far have mainly been about differences between two negative numbers or between a positive and a negative number.
- Explaining why certain answers are possible or impossible for certain problems. We expect to see more of this type of problem on the NCT papers in the future.

Plenty of practice is therefore given in these areas. Some of the questions are simple enough to be done without a calculator, but calculators should be used to practise the operations and reading the displays, so children feel confident when the more difficult problems are tackled.

Here we are looking at odd, even, square and triangle numbers, prime numbers, multiples and factors.

Technique for finding factors: there is a quick and logical method of finding the factors of a number, rather than just guessing, and children should be taught it. Say we want to find the factors of 60.

1 is obviously a factor, and it divides into 60 sixty times, so write 1 on the far left and 60 on the far right:

1, 60

Try the next number that seems likely to be a factor, in this case 2. This goes into 60 thirty times, so write 2 after the 1 and 30 before the 60:

1, 2, ... 30, 60

Try the next numbers in the same way (3 pairs with 20, 4 with 15, 5 with 12 and 6 with 10). Next we try 7, but this is not a factor, and you can stop when you get to the square root of 60 (about 7.7). You now have all the factors:

1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60
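The "stop at the square root" technique translates directly into a short program. The sketch below is purely illustrative (the function name and output format are my own, not part of the worksheets); it collects each trial divisor below the square root together with its partner:

```python
import math

def factor_pairs(n):
    """Return the factors of n by trial division up to sqrt(n).

    Each divisor d found at or below sqrt(n) gives a pair (d, n // d),
    which is the "two for the price of one" idea described in the text.
    """
    small, large = [], []
    for d in range(1, math.isqrt(n) + 1):
        if n % d == 0:
            small.append(d)            # the factor we tried
            if d != n // d:            # avoid listing a square-root factor twice
                large.append(n // d)   # its partner
    return small + large[::-1]

print(factor_pairs(60))  # [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]
```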
It is easy to see that there are no factors between 30 and 60, for example, because if there were, they would divide into 60 between 1 and 2 times, which is clearly impossible. This method gives you "two for the price of one", because each time you find a factor you divide it into 60 and immediately obtain another. We also have work on how to find a remainder with a calculator.

This is quite a straightforward module looking at sequences in different ways, except that the arithmetic is more difficult than in the Level 3/4 module and the required understanding of number is greater. There is always a starting number and a rule for continuing the sequence. Sometimes children are given the sequence and need to work out the rule; sometimes they are given the rule and need to complete the sequence. The rules usually used in NCT papers take the following forms:

- Adding a constant number to the previous term.
- Subtracting a constant number from the previous term.
- Multiplying the previous term by a fixed number, especially doubling and trebling.
- Multiplying by a fixed number and adding or subtracting a constant (e.g. ×2 and +5).

Other types occasionally appear, and we have included some in this module, e.g. add 5, then add 3, then add 5, then add 3, and so on. (Calculators may be used for the more difficult questions in this module.)

A good understanding of equivalent fractions is absolutely necessary for competent work in fractions. Make sure your children are able to change a fraction into any number of equivalent fractions, and that they are able to cancel and "lecnac" (the opposite of cancelling, i.e. scaling a fraction up) at will. For example, in questions where children have to spot two fractions with the same value, one method is to cancel all the fractions to their lowest terms, after which fractions of the same value can be easily spotted. Children should also be able to convert fractions to decimals by dividing the numerator by the denominator with a calculator (fractions are, after all, division sums). This makes it easy to find fractions that are less than or more than another fraction.

Children should be able to quickly make all possible proper** fractions from a group of numbers. E.g. the numbers 2, 5, 6 and 7 give the proper fractions 2/5, 2/6, 2/7, 5/6, 5/7 and 6/7.

Children should be able to find simple fractions of quantities and shapes and should know the percentage equivalents of simple fractions. Lastly, they should be able to calculate percentages of quantities, either by recognising the equivalent fraction of the percentage or by dividing by 100 and multiplying by the percentage.

** A fraction with a value less than 1.
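A few of these fraction skills can be checked mechanically. This is a minimal sketch using Python's standard fractions and itertools modules (the specific quantities are my own examples, not taken from any test paper): it lists the proper fractions that can be made from 2, 5, 6 and 7, cancels a fraction to lowest terms, and finds a percentage of a quantity by dividing by 100 and multiplying:

```python
from fractions import Fraction
from itertools import combinations

# Proper fractions (value less than 1) from the digits 2, 5, 6, 7:
# 2/5, 2/6, 2/7, 5/6, 5/7, 6/7, exactly as listed in the text.
digits = [2, 5, 6, 7]
proper = [f"{a}/{b}" for a, b in combinations(digits, 2)]
print(proper)

# Cancelling to lowest terms and converting to a decimal
f = Fraction(2, 8)
print(f, float(f))       # 1/4 0.25  (Fraction cancels automatically)

# 15% of 60: divide by 100, then multiply by the percentage
print(60 / 100 * 15)     # 9.0
```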
Moving from Level 4 to Level 5 on probability demands much more clarity in answering. Usually the questions ask for the answer in the form of a fraction. Make sure that children understand that when an answer is asked for as a fraction, 1/8 is correct, but "1 in 8" will not gain a mark. Decimal fractions, e.g. 0.125, are usually accepted as correct. When explaining comparisons between spinners, it is important to state the probability, i.e. 2 in 5 (or 2/5) compared with 3 in 6, and so on. Vague answers such as "because it is bigger" or "it is the same space for both" will not gain a mark at this higher level. Again, the terms more likely, less likely and equally likely keep recurring in the tests.

To work out the probability of an event, first count how many possibilities there are altogether; this gives you the bottom number (denominator) of your fraction. Then work out how many possibilities there are for the event you are considering; this gives you the top number (numerator). An understanding of simple equivalent fractions is also needed at this stage: children need to know that 1/4 is equivalent to 2/8, and so on.

Teacher/Parent health warning: these worksheets are difficult for children in Year 6. They should only be used with children who are well advanced at Level 4 or are at Level 5 already, and certainly not with children who are working at Level 3. They are intended as practice pages for children who already have a good understanding of the basic concepts of algebra. Virtually no questions on algebra appear at Level 3 or 4; they are all either upper Level 4 or Level 5. Children should be able to spot patterns, describe them in words and algebraic terms, and give the next terms in sequences once the pattern has been spotted.

Spelling note: the correct plural of "formula", i.e. "formulae", seems to have been dropped in all but the most academic of circles in favour of "formulas". The latter spelling is the one that children come across in mathematics textbooks and in their ICT work, so we have used it in our modules.

Tables of prices or distances are popular questions and can prove quite tricky to answer, even with a calculator. At Level 4/5 the questions usually involve multiple operations to reach the answer (e.g. two of one thing plus one of another). Encourage very careful reading of the table to ensure that the correct numbers are chosen. Pupils also need to be acquainted with Venn diagrams and be able to interpret what they show.

Not only are pupils expected to know the terms mean and mode, but they are also expected to be able to explain their use on graphs. Line graphs are introduced in the 4/5 questions. It is important for children to realise that a line graph shows continuous data, i.e. every point on the line has a value. Block graphs or bar charts cannot be converted into line graphs, as the positions between the bars have no value. Scales also differ at this level: encourage children to look carefully to see exactly how the numbers go up or down on an axis and what each square or interval represents. Reading information from pie charts is also introduced. Again, these questions can be harder than they first appear, as similar-looking pie charts can have different total values.

Nearly all the work on ratio and proportion occurs at Levels 4 and 5, so many of the concepts covered are quite difficult. Some of the calculations are also tricky, so a calculator may be needed for the harder examples. The work falls into the following categories:

- Number sequences. This is allied to the work covered in the algebra module, but is a good introduction to ratio and proportion.
- Proportion. Examples are normally of the recipe-ingredients type: children are given the quantities of materials needed to make a cake, concrete path, etc., and are asked how these quantities vary as the number of cakes or the total amount changes.
- Conversion. This involves conversion of currencies, litres to gallons, miles to kilometres, and so on. Children may be required to perform a simple calculation or to use a chart, table or graph.
- Reading scales. Questions in this section normally involve an object placed against a ruler drawn on the page. The idea is to use the scale to give the length of the object, which may involve a direct reading or the difference between two readings.
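As a rough illustration of the proportion and conversion question types, here is a sketch with invented numbers (the recipe quantities and the common "5 miles is about 8 km" approximation are my own examples, not taken from any test paper):

```python
# Recipe-style proportion: a recipe for 4 cakes uses 300 g of flour.
# How much flour is needed for 10 cakes?  Scale by 10/4.
flour_for_4 = 300
flour_for_10 = flour_for_4 / 4 * 10
print(flour_for_10, "g")        # 750.0 g

# Conversion: using the rough approximation 5 miles = 8 km
def miles_to_km(miles):
    return miles * 8 / 5

print(miles_to_km(30), "km")    # 48.0 km
```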
There have only been a few questions on scale drawing in the KS2 test papers since their inception, and these seem to have been aimed at children working at Levels 4 and 5. Nevertheless, every mark counts, and children should be proficient with scale drawing. They should be able to draw and measure lines to the nearest millimetre and draw and measure angles to the nearest degree. A sharp pencil always helps! Any form of angle measurer is permitted (there are several on the market), but many children will still use the traditional 180° or 360° protractor.

All the questions set in this topic so far concern only co-ordinates in the first quadrant (i.e. with positive x and y co-ordinates). However, problems involving negative co-ordinates are covered in the Year 6 syllabus of the Primary Framework Document and may therefore appear at any time. With this in mind, most of the problems in this module are set with co-ordinates in the first quadrant, but there is a section involving negative co-ordinates at the end. The whole module requires a complete understanding of how position is described using co-ordinates. Children should not only be able to describe the position of a point using co-ordinates, but should also be able to say how far one point is from another in both the x and y directions. They should then be able to find the co-ordinates of intermediate points and notice relationships such as "to get from one dot to another on this line you go along two squares and up one". (No algebra is needed to describe these relationships at this stage, apart from knowing that the number on the horizontal axis is the x co-ordinate and the one on the vertical axis is the y co-ordinate.) When a shape is reflected on a grid (whether the grid is shown or not), children should be taught to find the distance from the mirror line to the corners of the shape and use this information to find the co-ordinates of the reflected shape.

This module gives practice in sorting shapes according to their properties. Children should be familiar with the material in the Level 3/4 module, including a good knowledge of triangles, the different types of quadrilaterals, pentagons, hexagons and octagons, and should be able to find the perimeter and area of simple shapes. In addition, they should know that the sum of the angles around a point is 360°, the sum of the angles of a triangle is 180°, the angles of an equilateral triangle are each 60°, and the sum of the angles in a quadrilateral is 360°. They should be able to calculate angles using any of the above, e.g. one of the equal angles in an isosceles triangle is 35°; what are the other two angles? They should also know and recognise acute, right and obtuse angles.

A good knowledge of the metric system for measuring length is needed to succeed with this section, including converting millimetres to centimetres and centimetres to metres. Children may well be asked to compare measurements written in mm, cm or m. One of the best techniques for these comparisons is to convert all the measurements to centimetres and then compare them, but don't forget to convert back to the originals when writing in the answers! Perimeter questions assume knowledge of the properties of shapes, e.g. an isosceles triangle has two sides of equal length, an equilateral triangle has three sides of equal length, and a regular polygon has all its sides of equal length.
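The "convert everything to centimetres first" comparison technique is easy to demonstrate. A minimal sketch, with measurement values invented purely for illustration:

```python
# Compare measurements given in mixed units by converting them all to centimetres.
def to_cm(value, unit):
    factors = {"mm": 0.1, "cm": 1, "m": 100}
    return value * factors[unit]

measurements = [(45, "mm"), (2.3, "cm"), (0.05, "m")]
in_cm = [(to_cm(v, u), f"{v} {u}") for v, u in measurements]

# Sort by the common unit, then report each length in its original units.
for cm, original in sorted(in_cm):
    print(f"{original} = {cm} cm")
# 2.3 cm = 2.3 cm, 45 mm = 4.5 cm, 0.05 m = 5.0 cm
```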
Popular questions also involve working out perimeters, but first having to work out the lengths of the sides from the given information. Many children find these two- and three-part operations very tricky and need plenty of experience of how to attack them. A further complication is that it is often stated that the drawings are not to scale: a clear warning not to try to work the answer out with a ruler! Area questions are seldom straightforward. A popular type has two rectangles together forming a third shape; again, the lengths of the sides have to be worked out before the area can be calculated.
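The "two rectangles forming a third shape" area questions follow a fixed pattern: work out the missing side lengths first, then combine the rectangle areas. A sketch with made-up dimensions (an L-shape whose bounding rectangle is 10 cm by 8 cm, with a 6 cm by 3 cm rectangle removed from one corner):

```python
# An L-shape can be treated as a large rectangle with a smaller rectangle removed,
# or split into two rectangles.  Both views give the same area.
outer_w, outer_h = 10, 8          # bounding rectangle, in cm
notch_w, notch_h = 6, 3           # rectangle cut out of one corner

# View 1: subtract the notch from the bounding rectangle
area_subtract = outer_w * outer_h - notch_w * notch_h

# View 2: split the L into two rectangles; the missing side lengths
# have to be worked out first (this is the "two-part operation" step)
left_w = outer_w - notch_w        # 4 cm
area_split = left_w * outer_h + notch_w * (outer_h - notch_h)

print(area_subtract, area_split)  # 62 62 (square centimetres)
```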
http://ks2-maths-sats.co.uk/ks2-maths-sats-boost-level-4-to-5.php