score | text | url
---|---|---
4.09375 | Physics Demo -- Jumping Ring
A solid metal ring is placed on an iron core whose base is wrapped in wire. When DC current is first passed through the wire, a magnetic field forms in the iron core. This sudden change in field induces a current in the metal ring, which in turn creates a magnetic field that opposes the original one. This causes the ring to briefly jump upwards.
If the ring has a cut in it, no current can circulate in it, and so it will not jump.
When the ring is cooled in liquid nitrogen, the resistance of the metal is lowered, allowing more current to flow and letting the ring jump higher. However, the magnetic field curves away at the top of the iron core, so with DC power the ring will never fly off the top.
When AC current is passed through the wire, the ring flies off the top of the iron core. This is because the ring behaves as an inductor, so its induced current lags the induced emf by up to 90 degrees. Averaged over each cycle, this phase lag produces a net upward force on the ring, even as the current oscillates.
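As a rough illustration of that phase-lag argument (this is my own toy model, not part of the demo writeup), the time-averaged vertical force can be estimated numerically by treating the coil current and the induced ring current as sinusoids with a relative phase set by the ring's inductance; the drive frequency, sampling, and sign convention below are assumptions:

```python
import numpy as np

# Toy model: coil current (and hence the radial field at the ring) ~ cos(wt).
# The induced EMF ~ -d(flux)/dt ~ sin(wt) = cos(wt - 90 deg), and the ring
# current lags that EMF by a further angle theta = arctan(wL/R) set by the
# ring's inductance. The instantaneous vertical force is taken proportional
# to -(coil current)*(ring current); only its time average matters.
w = 2 * np.pi * 60                 # assumed 60 Hz drive
t = np.linspace(0, 1, 200_000)     # one second of samples

def mean_upward_force(theta):
    coil = np.cos(w * t)
    ring = np.cos(w * t - (np.pi / 2 + theta))   # lags the EMF by theta
    return np.mean(-coil * ring)                  # analytically sin(theta)/2

for theta_deg in (0, 30, 60, 85):
    print(f"lag {theta_deg:2d} deg -> mean force {mean_upward_force(np.radians(theta_deg)):.3f}")
# lag 0 deg (purely resistive ring): zero average force
# larger lag (more inductive ring): the average force grows toward 0.5 and is never negative
```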
| http://techtv.mit.edu/collections/physicsdemos/videos/514-physics-demo-jumping-ring |
4.15625 | Precession is a change in the orientation of the rotational axis of a rotating body. In an appropriate reference frame it can be defined as a change in the first Euler angle, whereas the third Euler angle defines the rotation itself. In other words, if the axis of rotation of a body is itself rotating about a second axis, that body is said to be precessing about the second axis. A motion in which the second Euler angle changes is called nutation. In physics, there are two types of precession: torque-free and torque-induced.
In astronomy, "precession" refers to any of several slow changes in an astronomical body's rotational or orbital parameters, and especially to Earth's precession of the equinoxes. (See section Astronomy below.)
Torque-free precession implies that no external moment (torque) is applied to the body. In torque-free precession, the angular momentum is a constant, but the angular velocity vector changes orientation with time. What makes this possible is a time-varying moment of inertia, or more precisely, a time-varying inertia matrix. The inertia matrix is composed of the moments of inertia of a body calculated with respect to separate coordinate axes (e.g. x, y, z). If an object is asymmetric about its principal axis of rotation, the moment of inertia with respect to each coordinate direction will change with time, while preserving angular momentum. The result is that the component of the angular velocities of the body about each axis will vary inversely with each axis' moment of inertia.
The torque-free precession rate of an object with an axis of symmetry, such as a disk, spinning about an axis not aligned with that axis of symmetry can be calculated as follows:
ω_p = (I_s ω_s) / (I_p cos α)
where ω_p is the precession rate, ω_s is the spin rate about the axis of symmetry, I_s is the moment of inertia about the axis of symmetry, I_p is the moment of inertia about either of the other two equal perpendicular principal axes, and α is the angle between the moment of inertia direction and the symmetry axis.
For a generic solid object without any axis of symmetry, the evolution of the object's orientation, represented (for example) by a rotation matrix R that transforms internal to external coordinates, may be numerically simulated. Given the object's fixed internal moment of inertia tensor I₀ and fixed external angular momentum L, the instantaneous angular velocity is ω = R I₀⁻¹ Rᵀ L. Precession occurs by repeatedly recalculating ω and applying a small rotation vector ω dt for the short time dt, i.e. multiplying R by the rotation matrix exp([ω dt]×), where [v]× denotes the skew-symmetric matrix of a vector v. The errors induced by finite time steps tend to increase the rotational kinetic energy E = ω · L / 2; this unphysical tendency can be counteracted by repeatedly applying a small rotation vector perpendicular to both ω and L, which changes the orientation (and hence E) without changing L, and so can be used to nudge the energy back toward its conserved value.
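A minimal numerical sketch of this scheme (my own illustration; the inertia tensor, angular momentum, time step, and the simple trial-and-accept energy correction are all arbitrary choices, not taken from the text):

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]_x, so that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rot(v):
    """Rotation matrix exp([v]_x) via Rodrigues' formula."""
    theta = np.linalg.norm(v)
    if theta < 1e-12:
        return np.eye(3)
    K = skew(v / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

I0 = np.diag([1.0, 2.0, 3.0])   # assumed body-frame principal moments (no symmetry axis)
L = np.array([0.0, 0.5, 2.0])   # fixed angular momentum in the external frame
R = np.eye(3)                    # orientation: internal -> external coordinates
dt = 1e-3
I0_inv = np.linalg.inv(I0)

def energy(R):
    omega = R @ I0_inv @ R.T @ L
    return 0.5 * omega @ L

E0 = energy(R)
for _ in range(20_000):
    omega = R @ I0_inv @ R.T @ L        # instantaneous angular velocity
    R = rot(omega * dt) @ R             # advance the orientation by omega*dt
    # Energy correction: a small rotation perpendicular to both omega and L
    # changes E while leaving L untouched; keep it only if it helps.
    perp = np.cross(omega, L)
    n = np.linalg.norm(perp)
    if n > 1e-12:
        for sign in (1.0, -1.0):
            R_try = rot(sign * 1e-4 * perp / n) @ R
            if abs(energy(R_try) - E0) < abs(energy(R) - E0):
                R = R_try
                break

print("relative energy drift:", abs(energy(R) - E0) / E0)
```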
Another type of torque-free precession can occur when there are multiple reference frames at work. For example, Earth is subject to local torque induced precession due to the gravity of the sun and moon acting on Earth's axis, but at the same time the solar system is moving around the galactic center. As a consequence, an accurate measurement of Earth's axial reorientation relative to objects outside the frame of the moving galaxy (such as distant quasars commonly used as precession measurement reference points) must account for a minor amount of non-local torque-free precession, due to the solar system’s motion.
Torque-induced precession (gyroscopic precession) is the phenomenon in which the axis of a spinning object (e.g., a gyroscope) describes a cone in space when an external torque is applied to it. The phenomenon is commonly seen in a spinning toy top, but all rotating objects can undergo precession. If the speed of the rotation and the magnitude of the external torque are constant, the spin axis will move at right angles to the direction that would intuitively result from the external torque. In the case of a toy top, its weight is acting downwards from its center of mass and the normal force (reaction) of the ground is pushing up on it at the point of contact with the support. These two opposite forces produce a torque which causes the top to precess.
The device depicted on the right is gimbal mounted. From inside to outside there are three axes of rotation: the hub of the wheel, the gimbal axis, and the vertical pivot.
To distinguish between the two horizontal axes, rotation around the wheel hub will be called spinning, and rotation around the gimbal axis will be called pitching. Rotation around the vertical pivot axis is called rotation.
First, imagine that the entire device is rotating around the (vertical) pivot axis. Then, spinning of the wheel (around the wheel hub) is added. Imagine the gimbal axis to be locked, so that the wheel cannot pitch. The gimbal axis has sensors that measure whether there is a torque around the gimbal axis.
In the picture, a section of the wheel has been named dm1. At the depicted moment in time, section dm1 is at the perimeter of the rotating motion around the (vertical) pivot axis. Section dm1 therefore has a large angular velocity with respect to the rotation around the pivot axis. As dm1 is carried closer to the pivot axis by the spinning of the wheel, the Coriolis effect (with respect to the vertical pivot axis) makes dm1 tend to move in the direction of the top-left arrow in the diagram (shown at 45°), in the direction of rotation around the pivot axis. Section dm2 of the wheel is moving away from the pivot axis, so a force (again, a Coriolis force) acts on it in the same direction as in the case of dm1. Note that both arrows point in the same direction.
The same reasoning applies for the bottom half of the wheel, but there the arrows point in the opposite direction to that of the top arrows. Combined over the entire wheel, there is a torque around the gimbal axis when some spinning is added to rotation around a vertical axis.
It is important to note that the torque around the gimbal axis arises without any delay; the response is instantaneous.
In the discussion above, the setup was kept unchanging by preventing pitching around the gimbal axis. In the case of a spinning toy top, when the spinning top starts tilting, gravity exerts a torque. However, instead of rolling over, the spinning top just pitches a little. This pitching motion reorients the spinning top with respect to the torque that is being exerted. The result is that the torque exerted by gravity – via the pitching motion – elicits gyroscopic precession (which in turn yields a counter torque against the gravity torque) rather than causing the spinning top to fall to its side.
Precession is the result of the angular velocity of rotation and the angular velocity produced by the torque. It is an angular velocity about a line that makes an angle with the permanent rotation axis, and this angle lies in a plane at right angles to the plane of the couple producing the torque. The permanent axis must turn towards this line, because the body cannot continue to rotate about any line that is not a principal axis of maximum moment of inertia; that is, the permanent axis turns in a direction at right angles to that in which the torque might be expected to turn it. If the rotating body is symmetrical and its motion unconstrained, and, if the torque on the spin axis is at right angles to that axis, the axis of precession will be perpendicular to both the spin axis and torque axis.
Under these circumstances the angular velocity of precession is given by:
ω_p = m g r / (I_s ω_s)
where I_s is the moment of inertia, ω_s is the angular velocity of spin about the spin axis, m is the mass, g is the acceleration due to gravity, and r is the distance between the pivot and the center of mass. The torque vector originates at the center of mass. Using ω = 2π/T, we find that the period of precession is given by:
T_p = 4π² I_s / (m g r T_s)
where T_s is the period of spin about the spin axis and T_p is the period of precession.
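As a worked example of these formulas (the toy-top numbers below are hypothetical, chosen only to illustrate the arithmetic):

```python
import math

m = 0.10                      # mass, kg (assumed)
g = 9.81                      # gravitational acceleration, m/s^2
r = 0.03                      # pivot-to-centre-of-mass distance, m (assumed)
I_s = 2.0e-5                  # moment of inertia about the spin axis, kg*m^2 (assumed)
omega_s = 2 * math.pi * 20    # spin rate: 20 revolutions per second

omega_p = m * g * r / (I_s * omega_s)            # precession rate, rad/s
T_s = 2 * math.pi / omega_s                       # spin period, s
T_p = 4 * math.pi**2 * I_s / (m * g * r * T_s)    # precession period, s

print(f"precession rate   = {omega_p:.2f} rad/s")
print(f"precession period = {T_p:.2f} s  (equals 2*pi/omega_p = {2 * math.pi / omega_p:.2f} s)")
```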
There is a non-mathematical way of understanding the cause of gyroscopic precession. The behavior of spinning objects simply obeys the law of inertia by resisting any change in direction. If a force is applied to a spinning object to change the direction of the spin axis, the object behaves as if that force was applied at a location exactly 90 degrees ahead, in the direction of rotation. Here is why: A solid object can be thought of as an assembly of individual molecules. If the object is spinning, each molecule's direction of travel constantly changes as that molecule revolves around the object's spin axis. When a force is applied that is parallel to the axis, molecules are being forced to move in new directions at certain places during their path around the axis. These new changes in direction are resisted by inertia.
Imagine the object to be a spinning bicycle wheel, held at both ends of its axle in the hands of a subject. The wheel is spinning clock-wise as seen from a viewer to the subject’s right. Clock positions on the wheel are given relative to this viewer. As the wheel spins, the molecules comprising it are travelling vertically downward the instant they pass the 3-o'clock position, horizontally to the left the instant they pass 6 o'clock, vertically upward at 9 o'clock, and horizontally to the right at 12 o'clock. Between these positions, each molecule travels components of these directions, which should be kept in mind as you read ahead. The viewer then applies a force to the wheel at the 3-o'clock position in a direction away from himself. The molecules at the 3-o'clock position are not being forced to change their direction when this happens; they still travel vertically downward. Actually, the force attempts to displace them some amount horizontally at that moment, but the ostensible component of that motion, attributed to the horizontal force, never occurs, as it would if the wheel was not spinning. Therefore, neither the horizontal nor downward components of travel are affected by the horizontally-applied force. The horizontal component started at zero and remains at zero, and the downward component is at its maximum and remains at maximum. The same holds true for the molecules located at 9 o'clock; they still travel vertically upward and not at all horizontally, thus are unaffected by the force that was applied. However, molecules at 6 and 12 o'clock are being forced to change direction. At 6 o'clock, molecules are forced to veer toward the viewer. At the same time, molecules that are passing 12 o'clock are being forced to veer away from the viewer. The inertia of those molecules resists this change in direction. The result is that they apply an equal and opposite reactive force in response. At 6 o'clock, molecules exert a push directly away from the viewer, while molecules at 12 o'clock push directly toward the viewer. This all happens instantaneously as the force is applied at 3 o'clock. Since no physical force was actually applied at 6 or 12 o’clock, there is nothing to oppose these reactive forces; therefore, the reaction is free to take place. This makes the wheel as a whole tilt toward the viewer. Thus, when the force was applied at 3 o'clock, the wheel behaved as if that force was applied at 6 o'clock, which is 90 degrees ahead in the direction of rotation. This principle is demonstrated in helicopters. Helicopter controls are rigged so that inputs to them are transmitted to the rotor blades at points 90 degrees prior to the point where the change in aircraft attitude is desired.
Precession causes another phenomenon for spinning objects such as the bicycle wheel in this scenario. If the subject holding the wheel removes one hand from the end of the axle, the wheel will not topple over, but will remain upright, supported at just the other end of its axle. However, it will immediately take on an additional motion; it will begin to rotate about a vertical axis, pivoting at the point of support as it continues spinning. If the wheel was not spinning, it would topple over and fall when one hand is removed. The ostensible action of the wheel beginning to topple over is equivalent to applying a force to it at 12 o'clock in the direction of the unsupported side (or a force at 6 o’clock toward the supported side). When the wheel is spinning, the sudden lack of support at one end of its axle is equivalent to this same force. So, instead of toppling over, the wheel behaves as if a continuous force is being applied to it at 3 or 9 o’clock, depending on the direction of spin and which hand was removed. This causes the wheel to begin pivoting at the point of support while remaining upright. It should be noted that although it pivots at the point of support, it does so only because of the fact that it is supported there; the actual axis of precessional rotation is located vertically through the wheel, passing through its center of mass. Also, this explanation does not account for the effect of variation in the speed of the spinning object; it only describes how the spin axis behaves due to precession. More correctly, the object behaves according to the balance of all forces based on the magnitude of the applied force, mass and rotational speed of the object.
The special and general theories of relativity give three types of corrections to the Newtonian precession of a gyroscope near a large mass such as Earth, as described above. They are:
- Thomas precession a special relativistic correction accounting for the observer's being in a rotating non-inertial frame.
- de Sitter precession a general relativistic correction accounting for the Schwarzschild metric of curved space near a large non-rotating mass.
- Lense–Thirring precession a general relativistic correction accounting for the frame dragging by the Kerr metric of curved space near a large rotating mass.
In astronomy, precession refers to any of several gravity-induced, slow and continuous changes in an astronomical body's rotational axis or orbital path. Precession of the equinoxes, perihelion precession, changes in the tilt of Earth's axis to its orbit, and the eccentricity of its orbit over tens of thousands of years are all important parts of the astronomical theory of ice ages. (See Milankovitch cycles.)
Axial precession (precession of the equinoxes)
Axial precession is the movement of the rotational axis of an astronomical body, whereby the axis slowly traces out a cone. In the case of Earth, this type of precession is also known as the precession of the equinoxes, lunisolar precession, or precession of the equator. Earth goes through one such complete precessional cycle in a period of approximately 26,000 years or 1° every 72 years, during which the positions of stars will slowly change in both equatorial coordinates and ecliptic longitude. Over this cycle, Earth's north axial pole moves from where it is now, within 1° of Polaris, in a circle around the ecliptic pole, with an angular radius of about 23.5 degrees.
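A quick arithmetic check of the quoted figures (using the rounded 26,000-year cycle from the text):

```python
cycle_years = 26_000
print(360 / cycle_years)   # ~0.0138 degrees of precession per year
print(cycle_years / 360)   # ~72 years per degree, matching the figure above
```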
Hipparchus is the earliest known astronomer to recognize and assess the precession of the equinoxes at about 1° per century (which is not far from the actual value for antiquity, 1.38°). The precession of Earth's axis was later explained by Newtonian physics. Being an oblate spheroid, Earth has a non-spherical shape, bulging outward at the equator. The gravitational tidal forces of the Moon and Sun apply torque to the equator, attempting to pull the equatorial bulge into the plane of the ecliptic, but instead causing it to precess. The torque exerted by the planets, particularly Jupiter, also plays a role.
A planet's orbit around the Sun does not really trace the same ellipse each time, but actually traces out a flower-petal shape because the major axis of each planet's elliptical orbit also precesses within its orbital plane, partly in response to perturbations in the form of the changing gravitational forces exerted by other planets. This is called perihelion precession or apsidal precession.
In the adjunct image, the Earth apsidal precession is illustrated. As the Earth travels around the Sun, its elliptical orbit rotates gradually over time. The eccentricity of its ellipse and the precession rate of its orbit are exaggerated for visualization. Most orbits in the Solar System have a much smaller eccentricity and precess at a much slower rate, making them nearly circular and stationary.
Discrepancies between the observed perihelion precession rate of the planet Mercury and that predicted by classical mechanics were prominent among the forms of experimental evidence leading to the acceptance of Einstein's Theory of Relativity (in particular, his General Theory of Relativity), which accurately predicted the anomalies. Deviating from Newton's law, Einstein's theory of gravitation predicts an extra term of A/r⁴, which accurately gives the observed excess turning rate of 43 arcseconds every 100 years.
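The 43-arcsecond figure can be reproduced from the standard first-order general-relativistic expression for the perihelion advance per orbit, Δφ = 6πGM / (c²a(1 − e²)); this sketch is my own check, not part of the text, and the orbital constants below are approximate published values for Mercury:

```python
import math

GM_sun = 1.327e20      # gravitational parameter of the Sun, m^3/s^2
c = 2.998e8            # speed of light, m/s
a = 5.79e10            # Mercury's semi-major axis, m
e = 0.2056             # Mercury's orbital eccentricity
period_days = 87.97    # Mercury's orbital period

dphi = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))   # radians per orbit
orbits_per_century = 100 * 365.25 / period_days
arcsec_per_century = dphi * orbits_per_century * (180 / math.pi) * 3600
print(f"{arcsec_per_century:.1f} arcseconds per century")   # ~43
```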
The gravitational force between the Sun and Moon induces the precession in Earth's orbit, which is the major cause of the widely known climate oscillation of Earth that has a period of 19,000 to 23,000 years. It follows that changes in Earth's orbital parameters (e.g., orbital inclination, the angle between Earth's rotation axis and its plane of orbit) are important to the study of Earth's climate, in particular to the study of past ice ages. (See also nodal precession. For precession of the lunar orbit see lunar precession.)
- Explanation and derivation of formula for precession of a top
- Precession and the Milankovich theory from educational web site From Stargazers to Starships | https://en.wikipedia.org/wiki/Precession_of_the_equinox |
4.25 | This tutorial explains how to read, construct, and interpret basic kinematic graphs. Animated examples are accompanied by detailed discussion of how to understand the patterns produced by Position vs. Time, Velocity vs. Time, and Acceleration vs. Time graphs. The resource includes supplementary practice exercises, worksheets, and related problems for student exploration.
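The kind of graphs the tutorial covers are easy to sketch for the simplest case, motion with constant acceleration; the snippet below (my own illustration, not part of the resource) plots position, velocity, and acceleration vs. time for assumed initial conditions:

```python
import numpy as np
import matplotlib.pyplot as plt

x0, v0, a = 0.0, 2.0, 1.5                # assumed initial position, velocity, acceleration
t = np.linspace(0, 5, 200)
x = x0 + v0 * t + 0.5 * a * t**2          # position vs. time (parabola)
v = v0 + a * t                            # velocity vs. time (straight line)
acc = np.full_like(t, a)                  # acceleration vs. time (horizontal line)

fig, axes = plt.subplots(3, 1, sharex=True, figsize=(5, 8))
for ax, y, label in zip(axes, (x, v, acc),
                        ("position (m)", "velocity (m/s)", "acceleration (m/s²)")):
    ax.plot(t, y)
    ax.set_ylabel(label)
axes[0].set_title("Kinematic graphs for constant acceleration")
axes[-1].set_xlabel("time (s)")
plt.tight_layout()
plt.show()
```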
Please note that this resource requires Microsoft Internet Explorer.
acceleration, acceleration vs. time graph, average acceleration, displacement, graphing motion, instantaneous acceleration, kinematics, motion graphs, position vs. time graph, velocity, velocity vs. time graph
Metadata instance created October 23, 2006 by Caroline Hall
6-8: 9B/M3. Graphs can show a variety of possible relationships between two variables. As one variable increases uniformly, the other may do one of the following: increase or decrease steadily, increase or decrease faster and faster, get closer and closer to some limiting value, reach some intermediate maximum or minimum, alternately increase and decrease, increase or decrease in steps, or do something different from any of these.
9-12: 9B/H4. Tables, graphs, and symbols are alternative ways of representing data and relationships that can be translated from one to another.
Next Generation Science Standards
Crosscutting Concepts (K-12)
Graphs and charts can be used to identify patterns in data. (6-8)
NGSS Science and Engineering Practices (K-12)
Using Mathematics and Computational Thinking (5-12)
Mathematical and computational thinking at the 9–12 level builds on K–8 and progresses to using algebraic thinking and analysis, a range of linear and nonlinear functions including trigonometric functions, exponentials and logarithms, and computational tools for statistical analysis to analyze, represent, and model data. Simple computational simulations are created and used based on mathematical models of basic assumptions. (9-12)
Use mathematical representations of phenomena to describe explanations. (9-12)
Common Core State Standards for Mathematics Alignments
High School — Functions (9-12)
Interpreting Functions (9-12)
F-IF.5 Relate the domain of a function to its graph and, where applicable, to the quantitative relationship it describes.
F-IF.6 Calculate and interpret the average rate of change of a function (presented symbolically or as a table) over a specified interval. Estimate the rate of change from a graph.
F-IF.7.a Graph linear and quadratic functions and show intercepts, maxima, and minima.
F-IF.7.e Graph exponential and logarithmic functions, showing intercepts and end behavior, and trigonometric functions, showing period, midline, and amplitude.
Linear, Quadratic, and Exponential Models (9-12)
F-LE.1.a Prove that linear functions grow by equal differences over equal intervals, and that exponential functions grow by equal factors over equal intervals.
F-LE.3 Observe using graphs and tables that a quantity increasing exponentially eventually exceeds a quantity increasing linearly, quadratically, or (more generally) as a polynomial function.
Common Core State Reading Standards for Literacy in Science and Technical Subjects 6—12
Range of Reading and Level of Text Complexity (6-12)
RST.11-12.10 By the end of grade 12, read and comprehend science/technical texts in the grades 11—CCR text complexity band independently and proficiently.
This resource is part of a Physics Front Topical Unit.
Topic: Kinematics: The Physics of Motion Unit Title: Graphing
A very well-organized tutorial on how to construct and interpret three basic kinematic graphs: P/T, V/T and A/T. It includes animated examples, links to five worksheets, and related problems for student exploration.
Elert, Glenn. The Physics Hypertextbook: Graphs of Motion (June 27, 2007). Retrieved 8 February 2016 from http://physics.info/motion-graphs/
| http://www.thephysicsfront.org/items/detail.cfm?ID=4547 |
4.21875 | Updated at 6:05 p.m. ET
Valleys on Mars were carved over long periods by recurring floods at a time when Mars might have had wet and dry seasons much like some of Earth's deserts, a new study suggests.
The research contradicts other suggestions that the large valley networks on the red planet were the result of short-lived catastrophic flooding, lasting just hundreds to a few thousand years and perhaps triggered by asteroid impacts.
The new modeling suggests wet periods lasted at least 10,000 years.
"Precipitation on Mars lasted a long time – it wasn't a brief interval of massive deluges," said study leader Charles Barnhart, a graduate student in Earth and planetary sciences at the University of California, Santa Cruz. "Our results argue for liquid water being stable at the surface of Mars for prolonged periods in the past."
NASA planetary scientist Jeffrey Moore and Alan Howard of the University of Virginia contributed to the research, which will be detailed in the Journal of Geophysical Research – Planets.
In recent years, pictures of Mars have revealed a landscape clearly shaped by runoff. Most researchers now see water as a key player, though carbon dioxide has also been put forth as a possible culprit. A recent study of Box Canyon in Idaho concluded that similar features on Mars could have been sculpted by ancient megafloods.
Whatever the case, Mars is bone-dry today, and it's not clear just how wet it was in the past.
The new work, based on computer models, paints a picture of ancient Mars, more than 3.5 billion years ago, as looking somewhat like the deserts of the U.S. Southwest, sans cacti of course.
The new thinking is based on the idea that asteroid impacts, as fuel for floods, would have created features that aren't found on Mars.
"Our research finds that these catastrophic anomalies would be so humid and wet there would be breaching of the craters, which we don't see on Mars," Barnhart said. "The precipitation needs to be seasonal or periodic, so that there are periods of evaporation and infiltration. Otherwise the craters overflow."
The researchers ran 70 different computer simulations, one of which yielded the best match to the observed topography of martian valleys.
The sculpting was done in a semiarid to arid climate that persisted for tens of thousands to hundreds of thousands of years, the researchers say. Episodic flooding alternated with long dry periods when water could evaporate or soak into the ground. Rainfall may have been seasonal, or wet intervals may have occurred over longer cycles. But conditions that allowed for the presence of liquid water on the surface of Mars must have lasted for at least 10,000 years, Barnhart said.
The study does not suggest what a typical day on Mars might have been like back then. Nor does the work pin down how long seasons might have lasted. Barnhart said the changes from dry to wet periods might have had to do with periods of greenhouse-gas outgassing associated with volcanic eruptions, large impacts, or a change in the tilt of Mars' rotation, though all that remains to be studied further.
"Our results do suggest that river discharges were similar to flood stages in Earth-like desert environments like the Mojave desert or the Colorado Plateau," he told SPACE.com. | http://www.space.com/5816-long-wet-periods-sculpted-ancient-mars.html |
4.03125 | Portuguese orthography is based on the Latin alphabet and makes use of the acute accent, the circumflex accent, the grave accent, the tilde, and the cedilla, to denote stress, vowel height, nasalization, and other sound changes. Accented letters and digraphs are not counted as separate characters for collation purposes.
The spelling of Portuguese is largely phonemic, but some phonemes can be spelled in more than one way. In ambiguous cases, the correct spelling is determined through a combination of etymology, morphology, and tradition, so there is not a perfect one-to-one correspondence between sounds and letters or digraphs. Knowing the main inflectional paradigms of Portuguese and being acquainted with the orthography of other Western European languages can be helpful.
A full list of sounds, diphthongs, and their main spellings, is given at Portuguese phonology. This article addresses the less trivial details of the spelling of Portuguese as well as other issues of orthography, such as accentuation.
- 1 Letter names and pronunciations
- 2 Digraphs
- 3 Diacritics
- 4 Consonants with more than one spelling
- 5 Vowels
- 6 Morphological considerations
- 7 Etymological considerations
- 8 Syllabification and collation
- 9 Other symbols
- 10 Brazilian vs. European spelling
- 11 See also
- 12 References
- 13 External links
Letter names and pronunciations
Only the most frequent sounds appear below since a listing of all cases and exceptions would become cumbersome. Portuguese is a pluricentric language, and pronunciation of some of the letters differs in European Portuguese (EP) and in Brazilian Portuguese (BP). Apart from those variations, the pronunciation of most consonants is fairly straightforward. Only the consonants r, s, x, z, the digraphs ch, lh, nh, rr, and the vowels may require special attention from English speakers.
Although many letters have more than one pronunciation, their phonetic value is often predictable from their position within a word; that is normally the case for the consonants (except x). Since only five letters are available to write the fourteen vowel sounds of Portuguese, vowels have a more complex orthography, but even then, pronunciation is somewhat predictable. Knowing the main inflectional paradigms of Portuguese can help.
In the following table and in the remainder of this article, the phrase "at the end of a syllable" can be understood as "before a consonant, or at the end of a word". For the letter r, "at the start of a syllable" means "at the beginning of a word, or after l, n, s". For letters with more than one common pronunciation, their most common phonetic values are given on the left side of the semicolon; sounds after it occur only in a limited number of positions within a word. Sounds separated by "~" are allophones or dialectal variants.
The names of the letters are masculine.
|Letter||European name (IPA)||Brazilian name (IPA)||Phonemic values|
|Aa||á /a/||á /a/||/a/, /ɐ/|
|Bb||bê /be/||bê /be/||/b/|
|Cc||cê /se/||cê /se/||/k/; /s/ nb 1|
|Dd||dê /de/||dê /de/||/d/ ~ [dʒ] nb 2|
|Ee||é /ɛ/||é or ê /ɛ/, /e/||/e/, /ɛ/, /i/ nb 3, /ɨ/, /ɐ/, /ɐi/|
|Ff||efe /ˈɛfɨ/||efe /ˈɛfi/||/f/|
|Gg||gê or guê /ʒe/, /ɡe/||gê /ʒe/||/ɡ/; /ʒ/ nb 1|
|Hh||agá /ɐˈɡa/||agá /aˈɡa/||natively silent, /ʁ/ in loanwords nb 4|
|Ii||i /i/||i /i/||/i/ nb 3|
|Jj||jota /ˈʒɔtɐ/||jota /ˈʒɔta/||/ʒ/|
|Kk||capa /ˈkapɐ/||cá /ka/||nb 5|
|Ll||ele /ˈɛlɨ/||ele /ˈɛli/||/l/ ~ [ɫ ~ w] nb 6|
|Mm||eme /ˈɛmɨ/||eme /ˈemi/||/m/ nb 7|
|Nn||ene /ˈɛnɨ/||ene /ˈeni/||/n/ nb 7|
|Oo||ó /ɔ/||ó or ô /ɔ/, /o/||/o/, /ɔ/, /u/ nb 3|
|Pp||pê /pe/||pê /pe/||/p/|
|Qq||quê /ke/||quê /ke/||/k/|
|Rr||erre or rê /ˈɛʁɨ/, /ˈʁe/||erre /ˈɛʁi/||/ɾ/, /ʁ/ nb 8|
|Ss||esse /ˈɛsɨ/||esse /ˈɛsi/||/s/, /z/ nb 9, /ʃ/ nb 10|
|Tt||tê /te/||tê /te/||/t/ ~ [tʃ] nb 2|
|Uu||u /u/||u /u/||/u/ nb 3|
|Vv||vê /ve/||vê /ve/||/v/|
|Ww||dâblio or duplo vê /ˈdɐbliu/||dáblio or duplo vê /ˈdabliu/||nb 5|
|Xx||xis /ʃiʃ/||xis /ʃis/||/ʃ/, /ks/, /z/, /s/ nb 10 nb 11|
|Yy||ípsilon or i grego /ˈipsɨlɔn/||ípsilon /ˈipsilõ/||nb 5|
|Zz||zê /ze/||zê /ze/||/z/, /s/, /ʃ/ nb 10|
Listen to the alphabet recited by a native speaker from Brazil. The alphabet is spoken in a Brazilian dialect in which the 'E' is pronounced as 'É'
- nb 1: Before the letters e, i, y, or with the cedilla.
- nb 2: Allophonically affricated before the sound /i/ (spelled i, or sometimes e), in BP.
- nb 3: May become an approximant as a form of vowel reduction when unstressed before or after another vowel. Words such as bóia and proa are pronounced [ˈbɔj.jɐ] and [ˈpɾow.wɐ].
- nb 4: Silent at the start or at the end of a word. Also part of the digraphs ch, lh, nh. See below.
- nb 5: Not part of the official alphabet before 2009. Used only in foreign words, personal names, and hybrid words derived from them. The letters K, W and Y were included in the alphabet used in Brazil, East Timor, Macau, Portugal and five countries in Africa when the 1990 Portuguese Language Orthographic Agreement went into legal effect, since January 1, 2009. However, they were used before 1911 (see the article on spelling reform in Portugal).
- nb 6: Velarized to [ɫ] in EP and conservative registers of southern BP. Vocalized to [u̯], [ʊ̯], or seldom [o̯] (as influence from Spanish or Japanese), at the end of syllables in most of Brazil.
- nb 7: Usually silent or voiceless at the end of syllables (word-final n is fully pronounced by some speakers in a few loaned words). See the Nasalization section, below.
- nb 8: At the start of syllables (in all dialects) or at the end of syllables (in some dialects of BP), a single r is pronounced /ʁ/ (see Portuguese phonology for variants of this sound). Elsewhere, it is pronounced /ɾ/. Word-final rhotics may also be silent when the last syllable is stressed, in casual and vernacular speech, especially in Brazil (pervasive nationwide, though not in educated and some colloquial registers) and in some African and Asian countries.
- nb 9: A single s is pronounced voiced /z/ between vowels.
- nb 10: The opposition between the four sibilants /s/, /z/, /ʃ/, /ʒ/ is neutralized at the end of syllables; see below for more information.
- nb 11: The letter x may represent /ʃ/, /ks/, /z/, and /s/ (peixe, fixar, exemplo, próximo). It is always pronounced /ʃ/ at the beginning of words.
Portuguese uses digraphs, pairs of letters that represent a single sound different from the sum of their components. Digraphs are not included in the alphabet.
The digraphs qu and gu, before e and i, may represent either plain or labialised sounds (quebra /ˈkebɾɐ/, cinquenta /sĩˈkʷẽtɐ/, guerra /ˈɡɛʁɐ/, sagui /saˈɡʷi/), but they are always labialised before a and o (quase, quociente, guaraná). Pronunciation divergences mean that some of these words are spelled in different ways (quatorze / catorze and quotidiano / cotidiano). The digraph ch is pronounced as an English sh by the overwhelming majority of speakers. The digraphs lh and nh, of Occitan origin, denote palatal consonants that do not exist in English. The digraphs rr and ss are used only between vowels. The pronunciation of the digraph rr varies with dialect (see the note on the phoneme /ʁ/, above).
The acute accent and the circumflex accent indicate that a vowel is stressed and the quality of the accented vowel and, more precisely, its height: á, é, and ó are low vowels (except in nasal vowels); â, ê, and ô are high vowels. They also distinguish a few homographs: por "by" with pôr "to put", pode "[he/she/it] can" with pôde "[he/she/it] could".
The tilde marks nasal vowels before glides such as in cãibra and nação, at the end of words, before final -s, and in some compounds: romãzeira "pomegranate tree", from romã "pomegranate", and vãmente "vainly", from vã "vain". It usually coincides with the stressed vowel unless there is an acute or circumflex accent elsewhere in the word or if the word is compound: órgão "organ", irmã + zinha ("sister" + diminutive suffix) = irmãzinha "little sister". The form õ is used only in the plurals of nouns ending in -ão (nação → nações) and in the second and third person singular forms of the verb pôr (pões, põe).
The grave accent marks the contraction of two consecutive vowels in adjacent words (crasis), normally the preposition a and an article or a demonstrative pronoun: a + aquela = àquela "at that", a + a = à "at the". It does not indicate stress.
The graphemes â, ê and ô typically represent oral vowels, but before m or n followed by another consonant, the vowels represented are nasal. Elsewhere, nasal vowels are indicated with a tilde (ã, õ).
Below are the general rules for the use of the acute accent and the circumflex in Portuguese. Primary stress may fall on any of the three final syllables of a word but occurs usually on the last two. A word is called oxytone if it is stressed on its last syllable, paroxytone if stress falls on the syllable before the last (the penult), and proparoxytone if stress falls on the third syllable from the end (the antepenult). Most multisyllabic words are stressed on the penult.
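Since the rules below refer to these three stress classes repeatedly, here is a trivial illustration of the terminology (the function and example words are my own, not from the article):

```python
# Classify a word by which syllable, counted from the end, carries the stress.
def classify_stress(syllables, stressed_index):
    """syllables: list of syllables; stressed_index: 0-based index of the stressed one."""
    from_end = len(syllables) - stressed_index
    return {1: "oxytone", 2: "paroxytone", 3: "proparoxytone"}.get(from_end, "other")

print(classify_stress(["ca", "fé"], 1))          # oxytone (café)
print(classify_stress(["ca", "sa"], 0))          # paroxytone (casa)
print(classify_stress(["lâm", "pa", "da"], 0))   # proparoxytone (lâmpada)
```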
All words stressed on the antepenult take an accent mark. Words of two or more syllables stressed on their last syllable are not accented if they end in a consonant other than -m or -s, or in -i, -is, -im, -u, -us, -um, except in hiatuses as in açaí; paroxytonic words with those endings, on the other hand, may be accented to differentiate them from oxytonic words, as in lápis.
Monosyllables are typically not accented, but those whose last vowel is a, e, or o, possibly followed by final -s or final -m, may require an accent mark.
- The verb pôr is accented to distinguish it from the preposition por.
- Third-person plural forms of the verbs ter and vir, têm and vêm are accented to be distinguished from third-person singulars of the same verbs, tem, vem. Other monosyllables ending in -em are not accented.
- Monosyllables ending in -o or -os with the vowel pronounced /u/ (as in English "do") or in -e or -es with the vowel pronounced /i/ (as in English "be") or /ɨ/ (approximately as in English "roses") are not accented. Otherwise, they are accented.
- Monosyllables containing only the vowel a take an acute accent except for the contractions of the preposition a with the articles a, as, which take the grave accent, à, às, and for the following clitic articles, pronouns, prepositions, or contractions, which are not accented: a, da, la, lha, ma, na, ta; as, das, las, lhas, mas, nas, tas. Most of those words have a masculine equivalent ending in -o(s), also not accented: o(s), do(s), lo(s), lho(s), mo(s), no(s), to(s).
- The endings -a, -e, -o, -as, -es, -os, -am, -em, -ens are unstressed. The stressed vowel of words with such endings is assumed to be the first one before the ending itself: bonita, bonitas, gente, viveram, seria, serias (verbs), seriam. If the word happens to be stressed elsewhere, it requires an accent mark: será, serás, até, séria, sérias (adjectives), Inácio, Amazônia/Amazónia. The endings -em and -ens take the acute accent when stressed (contém, convéns), except in third-person plural forms of verbs derived from ter and vir, which take the circumflex (contêm, convêm). Words with other endings are regarded as oxytone by default: viver, jardim, vivi, bambu, pensais, pensei, pensou. They require an accent when they are stressed on a syllable other than their last: táxi, fácil, amáveis.
- Rising diphthongs (which may also be pronounced as hiatuses) containing stressed i or stressed u are accented so they will not be pronounced as falling diphthongs. Exceptions are those whose stressed vowel forms a syllable with a letter other than s. Thus, raízes (syllabified as ra-í-zes), incluído (u-í), and saíste (a-ís) are accented, but raiz (ra-iz), sairmos (a-ir) and saiu (a-iu) are not. (There are a few more exceptions, not discussed here.)
- The stressed diphthongs ei, eu, oi take an acute accent on the first vowel whenever it is low.
Aside from those cases, there are a few more words that take an accent, usually to disambiguate frequent homographs such as pode (present tense of the verb poder) and pôde (past tense of the same verb). Also, accentuation rules of Portuguese are somewhat different from those of Spanish (English "continuous" is Portuguese contínuo, Spanish continuo, and English "I continue" is Portuguese continuo, Spanish continúo, in both cases with the same syllable accented in Portuguese and Spanish).
The use of diacritics in personal names is generally restricted to the combinations above, often also by the applicable Portuguese spelling rules.
Portugal is more restrictive than Brazil in regard to given names. They must be Portuguese or adapted to the Portuguese orthography and sound and should also be easily discerned as either a masculine or feminine name by a Portuguese speaker. There are lists of previously accepted names, and names not included therein must be subject to consultation of the national director of registries. Brazilian birth registrars, on the other hand, are likely to accept names containing any (Latin) letters or diacritics and are limited only to the availability of such characters in their typesetting facility.
Consonants with more than one spelling
Most consonants have the same values as in the International Phonetic Alphabet, except for the palatals /ʎ/ and /ɲ/, which are spelled lh and nh, respectively, and the following velars, rhotics, and sibilants:
|Phoneme||Default||Before e or i|
|/k/||c (casa)||qu (quebra)|
|/ɡ/||g (gato)||gu (guerra)|
|/kʷ/||qu (quase)||qu (cinquenta)|
|/ɡʷ/||gu (guaraná)||gu (sagui)|
The alveolar tap /ɾ/ is always spelled as a single r. The other rhotic phoneme of Portuguese, which may be pronounced as a trill [r] or as one of the fricatives [x], [ʁ], or [h], according to the idiolect of the speaker, is either written rr or r, as described below.
|Phoneme||Start of syllable[rhotic note 1]||Between vowels||End of syllable[rhotic note 2]|
|/ʁ/||r||rosa, tenro||rr||carro||r||sorte, mar|
- rhotic note 1: only when it is the first sound in the syllable (in which case it is always followed by a vowel). For instance, a word like prato is pronounced with a tap, /ɾ/
- rhotic note 2: in some dialects; in the others, the r is usually a tap or approximant at the end of syllables
For the following phonemes, the phrase "at the start of a syllable" can be understood as "at the start of a word, or between a consonant and a vowel, in that order".
|Phoneme||Start of syllable¹||Between vowels||End of syllable|
|/s/||s, c³ (sapo, psique, …)||ss, ç², c³, x⁴ (assado, passe, …)||s, x⁵, z⁶ (isto, …)|
|/z/||z||z, s, x⁷ (prazo, azeite, …)||s, x⁸, z⁸ (turismo, …)|
|/ʃ/||ch, x (chuva, cherne, …)||ch, x (fecho, duche, …)||s, x⁵, z⁶ (isto, …)|
|/ʒ/||j, g³ (jogo, jipe, …)||j, g³ (ajuda, pajem, …)||s, x⁸, z⁸ (turismo, …)|
- 1 including consonant clusters that belong to a single syllable, like psique
- 2 before a, o, u. Ç never starts or ends a word.
- 3 before e, i
- 4 only in a very small number of words derived from Latin, such as trouxe and próximo
- 5 only in words derived from Latin or Greek, preceded by e and followed by one of the voiceless consonants c, p, s, t
- 6 only at the end of words and in rare compounds
- 7 only in a few words derived from Latin or Greek that begin with ex- or hex- followed by a vowel, and in compounds made from such words
- 8 only in a few compound words
Note that there are two main groups of accents in Portuguese, one in which the sibilants are alveolar at the end of syllables (/s/ or /z/), and another in which they are postalveolar (/ʃ/ or /ʒ/). In this position, the sibilants occur in complementary distribution, voiced before voiced consonants, and voiceless before voiceless consonants or at the end of utterances.
The vowels in the pairs /a, ɐ/, /e, ɛ/, /o, ɔ/ only contrast in stressed syllables. In unstressed syllables, each element of the pair occurs in complementary distribution with the other. Stressed /ɐ/ appears mostly before the nasal consonants m, n, nh, followed by a vowel, and stressed /a/ appears mostly elsewhere although they have a limited number of minimal pairs in EP.
In Brazilian Portuguese, both nasal and unstressed vowel phonemes that only contrast when stressed tend to a mid height though [a] may be often heard in unstressed position (especially when singing or speaking emphatically). In pre-20th-century European Portuguese, they tended to be raised to [ə], [i] (now [ɯ̽] except when close to another vowel) and [u]. It still is the case of most Brazilian dialects in which the word elogio may be variously pronounced as [iluˈʒiu], [e̞lo̞ˈʒiu], [e̞luˈʒiu], etc. Some dialects, such as those of Northeastern and Southern Brazil, tend to do less pre-vocalic vowel reduction and in general the unstressed vowel sounds adhere to that of one of the stressed vowel pair, namely [ɛ, ɔ] and [e, o] respectively.
In the educated speech, vowel reduction is used less often than in colloquial and vernacular speech though still more than the more distant dialects, and in general, mid vowels are dominant over close-mid ones and especially open-mid ones in unstressed environments when those are in free variation (that is, sozinho is always [sɔˈzĩɲu], even in Portugal, while elogio is almost certainly [e̞lo̞ˈʒi.u]). Mid vowels are also used as choice for stressed nasal vowels in both Portugal and Rio de Janeiro though not in São Paulo and southern Brazil, but in Bahia, Sergipe and neighboring areas, mid nasal vowels supposedly are close-mid like those of French. Veneno can thus vary as EP [vɯ̽ˈne̞nu], RJ [vẽ̞ˈnẽ̞nu], SP [veˈnenʊ] and BA [vɛˈnɛ̃nu] according to the dialect. /ɐ̃/ also got significant dialectal variation, respectively in the same of the last sentence, banana [baˈnə̃nə], [bə̃ˈnə̃nə], [bəˈnənə] etc.
Vowel reduction of unstressed nasal vowels is extremely pervasive nationwide in Brazil, in vernacular, colloquial and even most educated speech registers: então [ĩˈtɐ̃w], camundongo [kɐmũˈdõɡu]. It is slightly more resisted, but still present, in Portugal.
The pronunciation of the accented vowels is fairly stable except that they become nasal in certain conditions. See the section on nasalization for further information about this regular phenomenon. In other cases, nasal vowels are marked with a tilde. The grave accent ` is used only on the letter a and is merely grammatical, marking a crasis (contraction) of two a's, such as the preposition a "to" and the feminine article a "the" (vou a a cidade → vou à cidade "I'm going to the city"); it does not affect pronunciation at all. The trema was official prior to the last orthographic reform and can still be found in older texts. It meant that the usually silent u between q or g and i or e is in fact pronounced: líqüido "liquid" and sangüíneo "related to blood". Some words have two acceptable pronunciations, varying largely by regional accent.
The pronunciation of each diphthong is also fairly predictable, but one must know how to distinguish true diphthongs from adjacent vowels in hiatus, which belong to separate syllables. For example, in the word saio /ˈsaiu/ ([ˈsaj.ju]), the i forms a clearer diphthong with the previous vowel (but a slight yod also in the next syllable is generally present), but in saiu /sɐˈiu/ ([sɐˈiw]), it forms a diphthong with the next vowel. As in Spanish, a hiatus may be indicated with an acute accent, distinguishing homographs such as saia /ˈsaiɐ/ ([ˈsaj.jɐ]) and saía /sɐˈiɐ/.
Oral diphthongs:
|Grapheme||Pronunciation||Grapheme||Pronunciation|
|ai, ái||[ai ~ ɐi]||au, áu||[au ~ ɐu]|
|ei, êi||[ei ~ eː], [əi]¹||eu, êu||[eu]|
|éi||[ɛi], [əi]¹||éu||[ɛu]|
|oi||[oi]||ou||[ou ~ oː]|
|ói||[ɔi]||óu||[ɔu]|
|ui||[ui]||iu||[iu]|
Nasal diphthongs:
|Grapheme||Pronunciation||Grapheme||Pronunciation|
|ãe, ãi||[ɐ̃ĩ]||ão||[ɐ̃ũ]|
|õe||[õĩ]||-|| |
1 In central Portugal.
When a syllable ends with m or n, the consonant is not fully pronounced but merely indicates the nasalization of the vowel which precedes it. At the end of words, it sometimes produces a nasal diphthong.
|Monophthong grapheme||Pronunciation||Diphthong grapheme||Pronunciation|
|-an, -am, -ân, -âm¹||/ɐ̃/||-am²||/ɐ̃ũ/|
|-en, -em, -ên, -êm¹||/ẽ/||-em, -ém²||/ẽĩ/ ([ɐ̃ĩ])|
|-in, -im, -ín, -ím³||/ĩ/||-en-, -én-⁴||/ẽĩ/ ([ɐ̃ĩ])|
|-on, -om, -ôn, -ôm³||/õ/||-êm²||/ẽĩ/ ([ɐ̃ĩ])|
|-un, -um, -ún, -úm³||/ũ/|| || |
1 at the end of a syllable
2 at the end of a word
3 at the end of a syllable or word
4 before final s, for example in the words bens and parabéns
The letter m is conventionally written before b or p or at the end of words (also in a few compound words such as comummente - comumente in Brazil), and n is written before other consonants. In the plural, the ending -m changes into -ns; for example bem, rim, bom, um → bens, rins, bons, uns. Some loaned words end with -n (which is usually pronounced in European Portuguese).
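A trivial illustration of the -m → -ns plural rule just mentioned (the helper function is my own, and it deliberately ignores every other Portuguese plural rule):

```python
def pluralize_final_m(word: str) -> str:
    # bem -> bens, rim -> rins, bom -> bons, um -> uns
    if word.endswith("m"):
        return word[:-1] + "ns"
    return word + "s"   # naive fallback; real plural formation has many more rules

for w in ["bem", "rim", "bom", "um"]:
    print(w, "->", pluralize_final_m(w))
```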
Nasalization of u is left unmarked in the six words muito, muita, muitos, muitas, mui, ruim (the latter one only in Brazilian Portuguese).
The word endings -am, -em, -en(+s), with or without an accent mark on the vowel, represent nasal diphthongs derived from various Latin endings, often -ant, -unt or -en(t)-. Final -am, which appears in polysyllabic verbs, is always unstressed. The grapheme -en- is also pronounced as a nasal diphthong in a few compound words, such as bendito (bem + dito), homenzinho (homem + zinho), and Benfica.
Verbs whose infinitive ends in -jar have j in the whole conjugation: viagem "voyage" (noun) but viajem (third person plural of the present subjunctive of the verb viajar "to travel").
Verbs whose thematic vowel becomes a stressed i in one of their inflections are spelled with an i in the whole conjugation, as are other words of the same family: crio (I create) implies criar (to create) and criatura (creature).
Verbs whose thematic vowel becomes a stressed ei in one of their inflections are spelled with an e in the whole conjugation, as are other words of the same family: nomeio (I nominate) implies nomear (to nominate) and nomeação (nomination).
The majority of the Portuguese lexicon is derived from Latin, Greek, and some Arabic. In principle, that would require some knowledge of those languages. However, Greek words are Latinized before being incorporated into the language, and many words of Latin or Greek origin have easily recognizable cognates in English and other western European languages and are spelled according to similar principles. For instance, glória, "glory", glorioso, "glorious", herança "inheritance", real "real/royal". Some general guidelines for spelling are given below:
- CU vs. QU: if u is pronounced syllabically, it is written with c, as in cueca [kuˈɛkɐ] (underwear), and if it represents a labialized velar plosive, it is written with q, as in quando [ˈkwɐ̃du] (when).
- G vs. J: etymological g changes into j before a, o, u.
- H: this letter is silent; it appears for etymology at the start of a word, in a few interjections, and as part of the digraphs ch, lh, nh. Latin or Greek ch, ph, rh, th, and y are usually converted into c/qu, f, r, t, and i, respectively.
- O vs. OU: in many words, the variant oi normally corresponds to Latin and Arabic au or al, more rarely to Latin ap, oc.
- S/SS vs. C/Ç: the letter s and the digraph ss correspond to Latin s, ss, or ns, and to Spanish s. The graphemes c (before e or i) and ç (before a, o, u) are usually derived from Latin c or t(i), or from s in non-European languages, such as Arabic and Amerindian languages. They correspond to Spanish z or c. At the beginning of words, however, s is written instead of etymological ç, by convention.
- Z vs. S between vowels: the letter z corresponds to Latin c (+e, i) or t(i), to Greek or Arabic z. Intervocalic s corresponds to Latin s.
- X vs. CH: the letter x derives from Latin x or s, or from Arabic sh and usually corresponds to Spanish j. The digraph ch (before vowels) derives from Latin cl, fl, pl or from French ch and corresponds to Spanish ll or ch.
- S vs. X vs. Z at the end of syllables: s is the most common spelling for all sibilants. The letter x appears, preceded by e and followed by one of the voiceless consonants c, p, s, t, in some words derived from Latin or Greek. The letter z occurs only at the end of oxytone words and in compounds derived from them, corresponding to Latin x, c (+e, i) or to Arabic z.
Loanwords with a /ʃ/ in their original languages receive the letter x to represent it when they are nativised: xampu (shampoo). While the pronunciations of ch and x merged long ago, some Galician-Portuguese dialects like the Galician language, the portunhol da pampa and the speech registers of northeastern Portugal still preserve the difference as ch /tʃ/ vs. x /ʃ/, as do other Iberian languages and Medieval Portuguese. When one wants to stress the sound difference in dialects in which it merged the convention is to use tch: tchau (ciao) and Brazilian Portuguese República Tcheca (Czech Republic). In most loanwords, it merges with /ʃ/ (or /t/ :moti for mochi), just as [dʒ] most often merges with /ʒ/. Alveolar affricates [ts] and [dz], though, are more likely to be preserved (pizza, Zeitgeist, tsunami, kudzu, adzuki, etc.)
Syllabification and collation
Portuguese syllabification rules require a syllable break between double letters: cc, cç, mm, nn, rr, ss, or other combinations of letters that may be pronounced as a single sound: fric-ci-o-nar, pro-ces-so, car-ro, ex-ce(p)-to, ex-su-dar. Only the digraphs ch, lh, nh, gu, qu, and ou are indivisible. All digraphs are however broken down into their constituent letters for the purposes of collation, spelling aloud, and in crossword puzzles.
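A small sketch of the double-letter part of this rule (my own helper, not a full syllabifier; it only handles the listed double-letter pairs and leaves everything else, including the indivisible digraphs, untouched):

```python
# Insert a syllable break between the letter pairs cc, cç, mm, nn, rr, ss.
BREAK_BETWEEN = {"cc", "cç", "mm", "nn", "rr", "ss"}

def mark_double_letter_breaks(word: str) -> str:
    out = []
    for i, letter in enumerate(word):
        out.append(letter)
        if word[i:i + 2] in BREAK_BETWEEN:
            out.append("-")
    return "".join(out)

for w in ["processo", "carro", "friccionar", "connosco"]:
    print(w, "->", mark_double_letter_breaks(w))
# processo -> proces-so, carro -> car-ro, friccionar -> fric-cionar, connosco -> con-nosco
```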
The apostrophe (') appears as part of certain phrases, usually to indicate the elision of a vowel in the contraction of a preposition with the word that follows it: de + água = d'água. It is used almost exclusively in poetry.
The hyphen (-) is used to make compound words, especially animal names like papagaio-de-rabo-vermelho "red-tailed parrot". It is also extensively used to append clitic pronouns to the verb, as in quero-o "I want it" (enclisis), or even to embed them within the verb, as in levaria + vos + os = levar-vos-ia "I would take to you", "levar-vo-los-ia" = "I would take them to you" (mesoclisis). Proclitic pronouns are not connected graphically to the verb: não o quero "I do not want it". Each element in such compounds is treated as an individual word for accentuation purposes.
In European Portuguese, as in many other European languages, angular quotation marks are used for general quotations in literature:
- «Isto é um exemplo de como fazer uma citação em português europeu.»
- “This is an example of how to make a quotation in European Portuguese.”
Although American-style (“…”) or British-style (‘…’) quotation marks are sometimes used as well, especially in less formal types of writing (they are more easily produced in keyboards) or inside nested quotations, they are less common in careful writing. In Brazilian Portuguese, only American and British-style quote marks are used.
- “Isto é um exemplo de como fazer uma citação em português brasileiro.”
- “This is an example of how to make a quotation in Brazilian Portuguese.”
In both varieties of the language, dashes are normally used for direct speech rather than quotation marks:
- ― Aborreço-me tanto ― disse ela.
- ― Não tenho culpa disso ― retorquiu ele.
- “I’m so bored,” she said.
- “That’s not my fault,” he shot back.
Brazilian vs. European spelling
|Portuguese-speaking countries except Brazil before the 1990 agreement||Brazil before the 1990 agreement||All countries after the 1990 agreement||translation|
|anónimo||anônimo||Both forms remain||anonymous|
|Vénus||Vênus||Both forms remain||Venus|
|facto||fato||Both forms remain||fact|
|Non-personal and non-geographical names|
As of 2005, Portuguese has two orthographic standards:
- The Brazilian orthography, official in Brazil.
- The European orthography, official in Portugal, Macau, East Timor and the five African Lusophone countries.
In East Timor, both orthographies are currently being taught in schools.
The table to the right illustrates typical differences between the two orthographies. Some are due to different pronunciations, but others are merely graphic. The main ones are:
- Presence or absence of certain consonants: The letters c and p appear in some words before c, ç or t in one orthography, but are absent from the other. Normally, the letter is written down in the European spelling, but not in the Brazilian spelling.
- Different use of diacritics: the Brazilian spelling has a, ê or ô followed by m or n before a vowel, in several words where the European orthography has á, é or ó, due to different pronunciation.
- Different usage of double letters: also due to different pronunciation, Brazilian spelling has only cc, rr and ss as double letters. So, Portuguese connosco becomes Brazilian conosco, and words ending in m to which the suffix -mente is added (like ruimmente and comummente) become ruimente and comumente in Brazilian spelling.
- Academia Brasileira de Letras
- Differences between Spanish and Portuguese
- Portuguese names
- Portuguese phonology
- Spelling reforms of Portuguese
- The Vietnamese orthography, partly based on the orthography of Portuguese, through the work of 16th-century Catholic missionaries.
| https://en.wikipedia.org/wiki/Portuguese_orthography |
4.125 | Google has been working with its quantum computer for several years now, and finally has results that prove the D-Wave 2 can perform certain calculations up to a hundred million times faster than existing conventional computers. That kind of performance gain is only possible if the system is actually performing quantum computing.
German researchers have devised a technique of creating self-assembled nanodiamond quantum bits (qubits) that could form the basis of quantum computers and storage devices that, unlike every other quantum tech that we’ve seen on ET, could operate at room temperature.
It used to be that all you needed to do to talk about the frontiers of quantum research was regurgitate a few thought experiments about cats in boxes, but in the past few years the pure science has started to pay practical dividends. PHD comics is the latest to help the public try to wrap their head around some crazy but vital concepts.
The new Quantum Artificial Intelligence Lab (QAIL), housed at NASA’s Ames Research Center in Silicon Valley and staffed by Google and NASA scientists, has become the second lab in the world to own a quantum computer. As the name suggests, the Google and NASA scientists will use the quantum computer to advance machine learning — a field of AI that deals with computers that autonomously optimize their behavior as they garner more experience.
A computer scientist at Amherst College has performed the first ever head-to-head speed test between a conventional and quantum computer — and, you’ll be glad to hear that the quantum computer won. But only just.
Researchers at the University of New South Wales in Australia have created the first quantum bit (qubit) based on the nuclear spin of an atom, within a silicon transistor. This breakthrough is significant for two reasons: The qubit produced by the researchers is highly stable — and it’s in silicon, meaning it can be wired up and controlled electronically, just like a conventional computer chip.
A team of quantum engineers in Germany have created the first air-to-surface quantum network, between a base station and an airplane flying 20 kilometers (12.4 miles) above. This is a very tantalizing step towards a global quantum communications network.
A team of international researchers have successfully teleported a quantum bit (qubit) over a record distance of 143 kilometers (89 miles), between the Canary Islands of La Palma and Tenerife. This distance is significant, as it is roughly the same distance to low Earth orbit (LEO) satellites — meaning it is now theoretically possible to build a satellite-based quantum communication network. | http://www.extremetech.com/tag/qubit |
4.25 | Educators often want to know how they can use PBL in their individual classroom. Project-based learning can be applied in any content area or any grade, but it may look very different across subjects. In the series of examples below, you will find descriptions of actual projects and exercises teachers have implemented in schools around the country, as well as links to the research reports on their outcomes. It should be noted that there are also many wonderful examples of cross-curricular projects, where teachers from two or more core subjects work together on a project. For an example, check out our Schools That Work package on an interdisciplinary project at Manor High School in Texas.
Schools That Work:
English teacher Mary Mobley (left) shared the PBL resources and tools she and her teaching partner created for their sophomore world studies project (right). Learn more about this project.
Credit: Zachary Fink
Urban students in grades 3-5 received inquiry-science instruction. Matched pre- and post-tests found substantial learning gains and a cumulative effect that lasted over several years (Lee, Buxton, Lewis, & LeRoy, 2006).
Fourth graders learned science through PBL or through traditional methods with the same teacher. The PBL curriculum involved figuring out a way to create electricity during a blackout, as blackouts had commonly affected the school’s region. PBL students had fewer stereotypical images of scientists on a “draw-a-scientist” test and were able to generate more problem-solving strategies than students in the traditional group. Content knowledge learned was equivalent in both groups (Drake & Long, 2009).
Urban middle school students engaged in a standards-based, inquiry-based science curriculum in ten middle schools showed higher levels of achievement on a curriculum-aligned test than students who received traditional instruction in a district-comparison group (Lynch, Kuipers, Pyke, & Szesze, 2005).
Urban middle school students engaged in PBL showed increased academic performance in science and improved behavior ratings over a two-year period (Gordon, Rogers, Comfort, Gavula, & McGee, 2001).
Urban students in grades seven and eight who were engaged in the LeTUS inquiry-based science curriculum demonstrated higher standardized test scores than students engaged in traditional instruction in a sample of 5,000 students. The LeTUs inquiry-science curriculum involves eight- to ten-week units addressing questions such as What Is the Quality of Air in My Community? or What Is the Water Like in My River? and is aligned with professional development, learning technology, and administrative support (Geier, Blumenfeld, Marx, Krajcik, Fishman, Soloway, & Clay-Chambers, 2008).
Middle school students engaged in Learning by Design (LBD) consistently outperformed students engaged in traditional instruction on tests of collaboration and metacognitive skills, such as checking work, designing fair tests, and explaining evidence. LBD students also learned science content as well as or better than students engaged in traditional learning methods, with the largest gains among economically disadvantaged students (Kolodner, Camp, Crismond, Fasse, Gray, Holbrook, Puntambekar, & Ryan, 2003).
Middle school students who received a computer-enhanced PBL unit had a better understanding of science concepts and felt more confident about being successful learners (Liu, Hsieh, Cho, & Schallert, 2006).
Tenth-grade earth science students who engaged in PBL earned higher scores on an achievement test as compared to students who received traditional instruction (Chang, 2001).
High school students engaged in PBL in biology, chemistry, and earth science classes outscored their peers on 44 percent of the items on the National Assessment of Educational Progress science test during their twelfth-grade year (Schneider, Krajcik, Marx, & Soloway, 2002).
Students in grades five and up in 11 school districts learned math problems through videotaped problems over a three-week period (The Adventures of Jasper Woodbury series). The PBL students showed improved competence in solving basic math word problems and planning skills and more positive attitudes toward math (Cognition and Technology Group at Vanderbilt, 1992).
PBL increased learning of macroeconomics at the high school level, as compared with traditional classes, in a sample of 252 students at 11 high schools (Maxwell, Mergendoller & Bellisimo, 2005).
A randomized, controlled trial in Arizona and California in 2007-08 examined the effects of a project-based economics curriculum developed by the Buck Institute for Education on student learning and problem-solving skills in a sample of 7,000 twelfth graders in 66 high schools. Seventy-six teachers received 40 hours of professional development in teaching economics with PBL instead of their normal professional development activities. Students who received PBL scored significantly higher on problem-solving skills and in their ability to apply knowledge to real-world economic challenges than students taught economics using traditional methods. Economics teachers who used the PBL approach reported greater satisfaction with the materials and methods, and no significant differences were detected between intervention and control-group teachers (Finkelstein, Hanson, Huang, Hirschman, & Huang, 2010).
Four veteran teachers taught macroeconomics using PBL in one or two courses and traditional instruction in another course. 246 twelfth-grade students in 11 classes completed pre- and post-tests in macroeconomics. Results showed that PBL was more effective than traditional instruction for teaching macroeconomics concepts (Mergendoller, Maxwell & Bellisimo, 2006).
History and U.S. Government
Second graders from low-income backgrounds participated in two project-based units which integrated literacy and social studies. The outcomes on standards-based social studies and content literacy assessments indicated that the project-based learning curriculum virtually erased the achievement gap between second graders of high and low-socioeconomic backgrounds (Halvorsen, Duke, Burgar, Block, Strachan, Berka, & Brown, 2012).
Students in grades four and five collaboratively researched primary and secondary sources to discover themes and reasons for human migration in the local region. PBL students showed improved reasoning and collaboration skills and increased knowledge of local history and communities (Wieseman & Cadwell, 2005).
Eighth-grade groups created mini-documentaries about their interpretation of a time period in the 1800s, using state standards as the content guide and presenting their completed work in a public event. PBL increased students’ content knowledge and historical-research skills (Hernandez-Ramos & De La Paz, 2009).
High school students using PBL in American studies performed as well on multiple-choice tests as students who received a traditional model of instruction, and they showed a deeper understanding of content (Gallagher & Stepien, 1996).
In the Knowledge in Action Research Project, high school students learned Advanced Placement U.S. Government with PBL or traditional instruction. The PBL course consisted of a public-policy action proposal and four role-playing projects: designing democracy, simulating legislation, a Supreme Court case, and an election. The PBL students showed improved performance on a complex scenario test, measuring strategies for monitoring and influencing public policy, and performed as well as or better than traditionally-taught students on the AP U.S. Government test (Boss et al., 2011; Parker et al., 2013; Parker et al., 2011). Learn more about the study's results to date and its course design.
Continue to the next section of the PBL research review, Avoiding Pitfalls. | http://www.edutopia.org/pbl-research-practices-disciplines |
4.03125 | A black community in the southern Brazilian state of Rio de Janeiro is trying to maintain its cultural heritage on 287 hectares granted to it by the government in 1999 as part of reparations to the descendants of slaves.
“I would define ‘quilombos’ as resistance by black people, as their essence,” Vagner do Nascimento, president of the Association of Residents of Campinho da Independencia, tells IPS.
Situated to the southwest of the city of Rio de Janeiro, 20 km from the town of Paraty in the middle of lush jungle that forms part of the Mata Atlântica (Atlantic forest), Campinho – as it is referred to by the people who live here – is one of the 3,524 quilombos distributed throughout Brazil, according to the Culture Ministry’s Palmares Cultural Foundation.
However, independent sources say there are 1,500 more quilombos – which were originally remote villages or collections of villages founded by runaway slaves, often hidden in the jungle.
In the quilombos, escaped slaves kept alive the cultures and lifestyles brought over from Africa. They also became bastions in the struggle for freedom.
After slavery was abolished on May 13, 1888, many quilombos became villages, where people depended on subsistence agriculture and small-scale trade.
Zumbi dos Palmares was a famous colonial period quilombo located in the Serra da Barriga mountains in what is today the northern state of Alagoas.
Palmares, which defended its freedom for over a century and at one point was home to as many as 50,000 escaped slaves, became a symbol in the fight against slavery in Brazil.
“People there lived and worked together, and consolidated their own values. That’s why Palmares is a really strong reference point for us, because here in Campinho the land is collectively owned, and we have collective forms of producing, of generating culture, of working,” says Nascimento.
Campinho has a unique history. Its 80 or so families descend from just three slave women: Antonia, Marcelina and Luiza. And according to the history that was orally transmitted from generation to generation, they weren’t just “ordinary” slaves, but came from the “Big House” and had culture and education.
As the story goes, shortly after the abolition of slavery, the landowners of the three haciendas in the region distributed the property to their former slaves and left.
Antonia, Marcelina and Luiza “gathered all the dispersed slaves together and brought them here with them,” says a member of the fifth generation of descendants, Laura María dos Santos, Campinho’s head of educational and cultural projects.
Dos Santos, two young women named Daniele and Silvia, and the elderly Albertina do Nascimento head IPS’s welcoming committee – three generations of women who represent the strength of those who founded their community.
“This heritage is transmitted to our girls, who become women who know what their role is,” says dos Santos. She goes on to tell an anecdote: “When a man made a sexist remark, his niece, a girl from our community, said ‘uncle, in a woman’s land, the woman never dies’.”
Nor do the women in this community want their cultural heritage to die. The local residents association led by Nascimento is carrying out projects for the recuperation of the historical memory, craftsmaking, ethnic tourism, and the revival and sustainable production of traditional crops like cassava, rice, beans and corn.
Campinho was the first quilombo in the state of Rio de Janeiro to obtain a formal collective land ownership title, on Mar. 21, 1999, after a struggle that began in the 1960s.
On one hand, the creation of the Bocaína National Park kept them from hunting and collecting fruits in the forest – activities they depended on for a living.
In addition, the construction of the Rio de Janeiro-to-Santos stretch of the BR-101 highway, between 1970 and 1973, drove up land prices and led to property speculation in the area.
The entire Paraty region became the target of interest on the part of large tourism ventures, and many people began to sell and leave their lands.
Those who stayed, like the community of Campinho, eventually won their battle. But other battles emerged, like the struggle for coexistence with a new touristy world of “rich people,” and the effort to preserve the local culture, says dos Santos.
“It’s a question of continuing to fight for our ethnic and traditional identity, while at the same time incorporating the technology to which young people have a right,” she adds.
On the other hand, as “Auntie Albertina” adds, there are other problems, like the fact that many of the local families have ended up working in nearby tourist condominiums.
“Soon, no one is going to want to work the land any more,” laments the old woman, who says she would never give up her land, where she grows beans and rice, makes cachaça – a Brazilian liquor made from distilled sugar cane juice – and is her own boss.
The haciendas in the region produced homemade cachaça since the time of the three original slave women.
Referring to the women working outside the community, Silvia says “they see the rich women with their styles and want to imitate them. They have a nice house, but when they see their boss’s home, they start to suffer, because they want one just like it.”
Another challenge is adopting new technologies, to which the children and young people in Campinho have access, generally in internet cafés, without forgetting their culture and traditional values.
According to Silvia, “there’s no formula for how to do that,” other than raising awareness in the community about the importance of keeping the culture alive.
The women see access to technologies like the internet, community radio stations and video cameras as important, to be able to record their history and culture.
But the challenge is “to dominate technology, and not let technology dominate us.”
“We teach our youngsters that the worst sin is to let themselves be enslaved by anything,” she says.
Silvia moved away from the community when she was a little girl, and later became a community leader in a favela in Rio de Janeiro, where she went to live, like so many other quilombolas.
Her fear is that the quilombos will go down the road followed by the favelas and become mere shantytowns without proper infrastructure or sanitation, due to a lack of space to expand. As the local families grow, and the land is increasingly subdivided, the houses are more and more crowded together.
She recalls that many of the crowded favelas surrounding Rio and other cities today used to be areas similar to the quilombos, with gardens, open areas and natural water sources.
That is why she wants the government to grant more land to the descendants of the original escaped slaves, who argue that they have a right to the property.
“The houses have to be spaced out, so that if a woman fights with her husband, no one can hear,” Silvia jokes.
Campinho is also involved in ecological projects. “Although some environmentalists say the opposite, the best-preserved areas are the ones where the quilombolas live,” she argues.
The projects include the sustainable production of palm hearts and agroforestry. “First we feed the community, and later, we sell what’s left over, outside,” says Daniele.
“Auntie Albertina”, who the younger people in the community listen to with respect, speaks up again.
“On my plot of land I don’t let anyone kill even a little bird,” she says, recalling that in the past, her ancestors even ate toucans to survive, but pointing out that the birds have made a comeback in the surrounding jungle, which she calls “a great achievement.”
The people of Campinho have other collective subsistence activities, such as the sale of traditional crafts in tourist areas of Paraty, and a restaurant that specialises in typical Afro-Brazilian dishes, like “feijoada”, a stew of black beans, pork and beef, which was traditionally made with what was left over after the master’s family was served.
The community also has an ethnic tourism project, giving tours of the community that include visits to the old “senzala” or slave compound, the cassava flour production areas, and the community garden, as well as hikes in the jungle and cultural activities like traditional dance recitals.
Nov. 20 is National Black Awareness Day in Brazil, in commemoration of the Nov. 20, 1695 death of the leader of black resistance in Brazil, Zumbi dos Palmares.
In some Brazilian states, including Rio de Janeiro and Sao Paulo, Nov. 20 is a holiday aimed at “reflection on the insertion of blacks in Brazilian society.”
With blacks officially making up half of the population of Brazil, the country has the second largest black population in the world, after Nigeria.
For Vagner do Nascimento, the leader of the community, there is no doubt: the question is to “survive with dignity. That is the essence of black people in Brazil today, whose ancestors were brought over from Africa.” | http://www.ipsnews.net/2009/11/brazil-quilombos-keep-black-cultural-identity-alive/ |
4.25 | How the Virus Is Spread
Coxsackievirus is spread from person to person. The virus is present in the secretions and bodily fluids of infected people. The virus may be spread by coming into contact with respiratory secretions from infected patients. If infected people rub their runny noses and then touch a surface, that surface can harbor the virus and become a source of infection. The infection is spread when another person touches the contaminated surface and then touches his or her mouth or nose.
People who have infected eyes (conjunctivitis) can spread the virus by touching their eyes and touching other people or touching a surface. Conjunctivitis may spread rapidly and appear within one day of exposure to the virus. Coxsackieviruses are also shed in stool, which may be a source of transmission among children. The virus can be spread if unwashed hands get contaminated with fecal matter and then touch the face. This is particularly important for spread within day-care centers or nurseries where diapers are handled.
| http://www.emedicinehealth.com/coxsackievirus/page2_em.htm |
4.03125 | Carbonate rocks are a class of sedimentary rocks composed primarily of carbonate minerals. The two major types are limestone, which is composed of calcite or aragonite (different crystal forms of CaCO3) and dolostone, which is composed of the mineral dolomite (CaMg(CO3)2).
Calcite can be either dissolved by groundwater or precipitated by groundwater, depending on several factors including the water temperature, pH, and dissolved ion concentrations. Calcite exhibits an unusual characteristic called retrograde solubility in which it becomes less soluble in water as the temperature increases.
When conditions are right for precipitation, calcite forms mineral coatings that cement the existing rock grains together or it can fill fractures.
Karst topography and caves develop in carbonate rocks because of their solubility in dilute acidic groundwater. Cooling groundwater or mixing of different groundwaters will also create conditions suitable for cave formation.
| https://en.wikipedia.org/wiki/Carbonate_rock |
4.46875 | Chapter 14 - Chemical Equilibrium
In this chapter we will see how the kinetics of a reaction can lead to a state of chemical equilibrium where there is no observable change in the reaction. This means that the concentrations of the reactants and products do not change and therefore remain constant. We will learn how to show this state by writing equilibrium expressions and calculating the equilibrium constant. Related to the equilibrium expression is the reaction quotient, which can tell us whether a reaction is moving towards forming more reactants or more products at any given time. Finally, we will focus on what is called an ICE table as the primary problem-solving method for finding concentration values at equilibrium. We will immediately look at some applications of equilibrium in Chapter 16, and then go back to Chapter 15.
The Concept of Equilibrium and the Equilibrium Constant
Writing Equilibrium Constant Expressions
The Relationship Between Chemical Kinetics and Chemical Equilibrium
What does the Equilibrium Constant Tell Us?
Factors that Affect Chemical Equilibrium
Along with the embedded videos, I have included some links to some tutorials, simulations and animations. Pay attention to my worked out Practice Problems and follow along with what we are doing in class! The listed problems from the book will be due the day of the Chapter 14/16 Test. This material will be covered very quickly so keep up.
Practice Problems:14.1, 14.3, 14.6, 14.8, 14.9, 14.10, 14.11, 14.15, 14.16, 14.17, 14.18, 14.19, 14.22, 14.29, 14.30, 14.31, 14.32, 14.77, 14.33, 14.36, 14.37, 14.38, 14.40, 14.41, 14.43, 14.44, 14.48, 14.82, 14.51, 14.52, 14.53, 14.54, 14.55, 14.56, 14.58, 14.59, 14.98
14.1 - The Concept of Equilibrium and the Equilibrium Constant
We have come across the term equilibrium several times already this year. Up to this point you probably have the basic thought that equilibrium means that two processes are occurring at the same time, and while this is a very generalized definition, we need something a lot more specific in order to clear up some common misconceptions. The first thing to forever remember about chemical equilibrium is that it means that the forward and reverse rates of a chemical reaction are exactly the same. Exactly. The same. The second is that the concentrations of the reactants and products do not change at equilibrium and instead remain constant. Constant. This does NOT mean, however, that the concentrations of the reactants and products are equal to each other, absolutely not. Nope. Constant, not equal. Sear these two things into your brain forever.
You can see what equilibrium looks like in the following graph of the formation of ammonia from hydrogen and nitrogen gases. It shows the disappearance of reactants (H2 & N2) and appearance of products (NH3) over time. The fact that the lines level off is what is meant by constant concentration. When this occurs, that is when equilibrium is achieved. So in the zone from A to B, the reaction is occurring and products are being made, but once you reach B and continue on through C and D, the concentrations are stable and a state of equilibrium exists.
The equilibrium constant and expression are what govern a chemical reaction at equilibrium. Unlike writing rate laws, the stoichiometric coefficients in equilibrium reactions do play a part. They become the exponents on the reactants and products in the equilibrium expression. The equilibrium expression is always listed as the products over the reactants as you can see below. This is simply the mathematical expression of the law of mass action.
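Since the expression itself did not survive the page extraction, here is the standard general form it refers to (the usual textbook convention, nothing unique to this course page). For the reaction

$$aA + bB \rightleftharpoons cC + dD$$

the concentration equilibrium expression is

$$K_c = \frac{[C]^c\,[D]^d}{[A]^a\,[B]^b}$$

with products in the numerator, reactants in the denominator, and the balanced-equation coefficients as exponents.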
Finally, the size of the equilibrium constant tells you a lot about the chemical reaction. If you have the exact same amount of reactants and products then mathematically the equilibrium expression must equal 1. If you have more products than reactants then the numerator will be larger and the equilibrium constant will be more than 1. If you have more reactants than products then the denominator will be larger and the equilibrium constant will be less than 1.
Related Problems - 14.1, 14.3
14.2 - Writing Equilibrium Constant Expressions
We will focus a lot of time now on writing equilibrium expressions as they are both fairly easy, and frequently asked for. The first thing to note is that there are reactions where everything is in the same phase (homogeneous) and those where the reactants and products are in different phases (heterogeneous). These cannot exactly be treated the same way, as they won’t have the same effect on the equilibrium. What we find is that gases and aqueous solutions greatly affect the equilibrium expression, but solid substances and pure liquids do not affect the equilibrium expression really at all. So when you look at a chemical reaction and need to write the equilibrium expression, you can ignore anything that is listed as a solid or a liquid.
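A standard illustration (a generic textbook case, not a problem worked on this page) is the decomposition of solid calcium carbonate:

$$\mathrm{CaCO_3(s)} \rightleftharpoons \mathrm{CaO(s)} + \mathrm{CO_2(g)} \qquad K_c = [\mathrm{CO_2}], \quad K_p = P_{\mathrm{CO_2}}$$

Both solids are simply left out of the expression, so the equilibrium depends only on the CO2.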
Writing Equilibrium Expressions
So that leaves us with aqueous solutions and gases. Both of these can be written into an equilibrium expression based on concentration called Kc. These must have units in molarity for the concentration equilibrium expression. Unless specified otherwise, an equilibrium expression is always considered to be a concentration equilibrium expression, Kc.
For reactions that are all expressed as gases, you can also write an equilibrium expression in units of pressure. So when you have units of atm or mm Hg then you have a pressure equilibrium expression called Kp. Now an equation that is all gases can be expressed as either a Kc or a Kp depending on whether you use molarity or atm. Surprisingly, for the exact same reaction, the Kc and Kp will generally not be the same number, so you must be certain which one they are asking for. The two are related however and can be converted using equation 14.5 on pg 606.
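The conversion referred to is the standard relation between the two constants (presumably what equation 14.5 states; it is given here from general chemistry rather than copied from the book):

$$K_p = K_c (RT)^{\Delta n}$$

where $\Delta n$ is the moles of gaseous products minus the moles of gaseous reactants, $R = 0.0821\ \mathrm{L\,atm\,mol^{-1}\,K^{-1}}$, and $T$ is in kelvin. Note that $K_p = K_c$ whenever $\Delta n = 0$.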
[Images in the original page: worked Practice Exercises for Examples 14.1 through 14.6.]
Converting between Kc and Kp
Finally, we have seen over several chapters how adding chemical reactions together can lead to new ways to show information. Adding chemical reactions at equilibrium allows you to also combine the equilibrium expressions. To get the new equilibrium expression all you have to do is multiply the original two equilibrium expressions. Subtracting reactions means dividing equilibrium expressions.
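In symbols (a generic sketch, not tied to any particular problem in the chapter): if reaction 3 is the sum of reactions 1 and 2, then

$$K_3 = K_1 \times K_2$$

and reversing a reaction replaces its constant with the reciprocal, $K_{\text{reverse}} = 1/K_{\text{forward}}$.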
[Image in the original page: worked Practice Exercise for Example 14.7.]
Related Problems - 14.6, 14.8, 14.9, 14.10, 14.11, 14.15, 14.16, 14.17, 14.18, 14.19, 14.22, 14.29, 14.30, 14.31, 14.32, 14.77
14.3 - The Relationship Between Chemical Kinetics and Chemical Equilibrium
Remembering what we started off saying, the rate of the forward and reverse reactions at equilibrium are the same. So in theory you could set the two rate laws equal to each other. By rearranging the constants on to the same side, you can see that the ratio of the rate constants looks an awful lot like the equilibrium expressions we just finished writing. In fact, the equilibrium constant is simply the ratio of the two rate constants. Since the rate constant is dependent on temperature as we found in chapter 13, it makes sense that the equilibrium constant is also temperature dependent. So in order to write an equilibrium expression and calculate the equilibrium constant, the correctly balanced chemical equation and the temperature must be known.
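A minimal sketch of that algebra, assuming a single elementary step $A + B \rightleftharpoons C$ so that the rate laws follow the stoichiometry directly: at equilibrium the forward and reverse rates are equal, so

$$k_f[A][B] = k_r[C] \quad\Rightarrow\quad \frac{[C]}{[A][B]} = \frac{k_f}{k_r} = K_c$$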
Related Problems - 14.33, 14.36
14.4 - What does the Equilibrium Constant Tell Us?
We can use the equilibrium expression in several different ways. The most obvious one is to plug in concentration or pressure values and calculate the equilibrium constant. Make sure you take any exponents into account in this calculation. The second easiest thing to do is to calculate an unknown reactant or product concentration at equilibrium when the equilibrium constant and other concentrations are known. This should also be a simple calculation.
As stated earlier, the value of the equilibrium constant can tell you if reactants or products are favored in the equilibrium. But how do you know if you are at equilibrium? As we saw in the graphs in the first section, it can take a while for a reaction to reach equilibrium. If you plug in the initial concentrations of reactants and products into the equilibrium expression you can calculate something called the reaction quotient, Q. The size of the reaction quotient in comparison to the equilibrium constant will tell you if you are at equilibrium, creating products, or creating reactants. If Q = K then you are at equilibrium. If Q < K then you haven’t reached equilibrium yet and still need to create more products, moving the reaction to the right. If Q > K then you have passed equilibrium and need to go back to reactants, moving the reaction to the left.
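A quick numeric sketch with made-up values: suppose K = 10 for some reaction and the starting concentrations give Q = 2. Since Q < K, the mixture is not yet at equilibrium and the reaction proceeds to the right, converting reactants into products until Q climbs to 10. If instead the starting mixture gave Q = 50, then Q > K and the reaction would run to the left until Q dropped back to 10.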
[Image in the original page: worked Practice Exercise for Example 14.8.]
The most common thing you will do with an equilibrium expression is to calculate the concentration values of reactants and products given the equilibrium constant and some initial concentration values. All problems of this nature are solved the same way using something typically called an “ICE” table. ICE stands for initial, change, and equilibrium and allows you to account for all of the possible changes in an equilibrium reaction.
ICE Table example
The common scenario is to know the initial concentration of reactants and assume that there are no products in existence. This means that any change in this condition will be to reduce the concentration of reactants and add to the concentration of the products. We always make this change equal x. We use the stoichiometric coefficients from the balanced equation as coefficients on the x. Then you can just use basic algebra to plug values into the equilibrium expression and solve for x. This is the calculation you will see most often on the free response portion of the exam and the one we will practice the most often.
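To see the same algebra numerically, here is a minimal Python sketch of an ICE-table calculation. The reaction, the Kc value, and the starting concentrations are invented for illustration and are not problems from this chapter; the point is only the Initial/Change/Equilibrium bookkeeping and the solve for x.

```python
# A minimal ICE-table sketch. The reaction, the Kc value (50.0), and the starting
# concentrations (1.00 M each) are made up for illustration only; they are not
# taken from the chapter's assigned problems.
#
#   H2(g) + I2(g) <=> 2 HI(g)
#
#               H2          I2          HI
#   Initial     1.00        1.00        0
#   Change      -x          -x          +2x
#   Equilib.    1.00 - x    1.00 - x    2x
#
#   Kc = [HI]^2 / ([H2][I2]) = (2x)^2 / (1.00 - x)^2

import math

Kc = 50.0      # assumed equilibrium constant
init = 1.00    # assumed initial concentration of each reactant, in M

# Because the right-hand side happens to be a perfect square, take the square root:
#   sqrt(Kc) = 2x / (init - x)   ->   x = sqrt(Kc) * init / (2 + sqrt(Kc))
root = math.sqrt(Kc)
x = root * init / (2 + root)

H2 = init - x
I2 = init - x
HI = 2 * x

print(f"x = {x:.3f} M")
print(f"[H2] = [I2] = {H2:.3f} M, [HI] = {HI:.3f} M")

# Sanity check: plugging the equilibrium concentrations back in should reproduce Kc.
print(f"recomputed Kc = {HI**2 / (H2 * I2):.1f}")
```

When the algebra does not reduce to a perfect square, the same setup simply leads to a quadratic in x, which you solve with the quadratic formula (or a numerical solver) instead.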
[Image in the original page: worked Practice Exercise for Example 14.9.]
Related Problems - 14.37, 14.38, 14.40, 14.41, 14.43, 14.44, 14.48, 14.82
14.5 - Factors that Affect Chemical Equilibrium
Work through this quick tutorial about a chicken (I'm not kidding) and stresses to its breathing to help you understand how "stress" affects the direction of equilibrium:
A reaction at equilibrium is something dynamic, in constant movement. It is not static. As such, it can change. But something must be done to it to make it change. We term these things "stresses" and a chemical reaction at equilibrium will react to stresses according to Le Chatelier's principle. Le Chatelier's principle basically states that a chemical reaction at equilibrium will respond to a stress in such a way as to reduce that stress and establish a new equilibrium state. It is important to realize that the applied stress prevents a reaction from returning to the same equilibrium state; it will always be a new equilibrium state. Our most common types of stress include changes in concentration, pressure, volume, and temperature.
[Image in the original page: worked Practice Exercise for Example 14.11.]
For changes in concentration, the first thing to remember is that changes in concentration only matter if the thing that is being changed is represented in the equilibrium expression. So changes in concentration of solids and liquids DO NOT affect the equilibrium. For everything else, you need to see if the changed item is a reactant or product. Increasing a product will shift a reaction to the reactants so the extra product gets used up. The opposite is true if more reactant is used. The addition of something shifts the reaction in such a way to use it up. You can also take something out of a reaction in which case the reaction shifts to replace it. So you can keep a reaction continually producing products if you create a way to keep removing one of the products.
Work through this LC Virtual Lab #1 to see these types of changes.
Changing volume and pressure only affects gaseous reactants and products. In general we think of it as changes in pressure, but remember from chapter 5 that changing the volume will change the pressure. Increases in pressure will cause a reaction to shift towards the side with fewer moles of gas. If there are equal numbers of moles of gas on both sides, then changes in pressure cause no shift. Decreases in pressure will shift a reaction to the side with more moles of gas. Remember that a decrease in volume is an increase in pressure and vice versa. Please note that the addition of an inert gas such as helium is a favorite thing to ask about. Adding this inert gas at constant volume may increase the overall pressure but not the partial pressures of the individual gases, so no change occurs.
[Image in the original page: worked Practice Exercise for Example 14.12.]
Work through this LC Virtual Lab #2 to see these types of changes.
Finally, changes in temperature depend on whether a reaction is endo or exothermic. If a reaction is endothermic that means that energy is added to the reaction and we can think of heat/energy as a reactant. If a reaction is exothermic that means that energy is released from the reaction and we can think of heat/energy as a product. Then you can treat changes in temperature as simply changes in the concentration of heat/energy and it works the same way as originally discussed. If you don’t know if a reaction is endo or exothermic, then you cannot determine how changes in temperature will affect the equilibrium. But ANY change in temperature will change the value of the equilibrium constant.
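As a compact recap, using the familiar (exothermic) ammonia synthesis, $\mathrm{N_2(g) + 3H_2(g) \rightleftharpoons 2NH_3(g)} + \text{heat}$: adding N2 or continuously removing NH3 shifts the equilibrium to the right; increasing the pressure (decreasing the volume) also shifts it right, because the product side has fewer moles of gas (2 versus 4); and raising the temperature shifts it to the left and lowers the value of K.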
[Image in the original page: worked Practice Exercise for Example 14.13.]
Work through this LC Virtual Lab #3 to see these types of changes.
Finally, we need to explore how a catalyst affects equilibrium. As we found in chapter 13, the addition of a catalyst lowers the activation energy for a reaction, making the rates of both the forward and reverse reactions faster. Since both the forward and reverse reactions speed up, there is actually no change in the equilibrium constant or the position of equilibrium. The only thing that does happen is that the state of equilibrium is achieved faster than would normally happen without the catalyst.
Le Chatelier's Principle
Related Problems - 14.51, 14.52, 14.53, 14.54, 14.55, 14.56, 14.58, 14.59, 14.98
| http://baskinapchem.wikispaces.com/Ch_14%20-%20Chemical%20Equilibrium?responseToken=69e13420700c5925069bccb83dd9c8a4 |
4.3125 | Oxygen saturation (medicine)
Oxygen saturation is a term referring to the fraction of oxygen-saturated hemoglobin relative to total hemoglobin (unsaturated + saturated) in the blood. The human body requires and regulates a very precise and specific balance of oxygen in the blood. Normal blood oxygen levels in humans are considered 95-100 percent. If the level is below 90 percent, it is considered low resulting in hypoxemia. Blood oxygen levels below 80 percent may compromise organ function, such as the brain and heart, and should be promptly addressed. Continued low oxygen levels may lead to respiratory or cardiac arrest. Oxygen therapy may be used to assist in raising blood oxygen levels. Oxygenation occurs when oxygen molecules (O2) enter the tissues of the body. For example, blood is oxygenated in the lungs, where oxygen molecules travel from the air and into the blood. Oxygenation is commonly used to refer to medical oxygen saturation.
In medicine, oxygen saturation (SO2), commonly referred to as "sats," measures the percentage of hemoglobin binding sites in the bloodstream occupied by oxygen. At low partial pressures of oxygen, most hemoglobin is deoxygenated. At around 90% (the value varies according to the clinical context) oxygen saturation increases according to an oxygen-hemoglobin dissociation curve and approaches 100% at partial oxygen pressures of >10 kPa. A pulse oximeter relies on the light absorption characteristics of saturated hemoglobin to give an indication of oxygen saturation.
The body maintains a stable level of oxygen saturation for the most part by chemical processes of aerobic metabolism associated with breathing. Using the respiratory system, red blood cells, specifically the hemoglobin, gather oxygen in the lungs and distribute it to the rest of the body. The needs of the body's blood oxygen may fluctuate such as during exercise when more oxygen is required or when living at higher altitudes. A blood cell is said to be "saturated" when carrying a normal amount of oxygen. Both too high and too low levels can have adverse effects on the body.
An SaO2 (arterial oxygen saturation) value below 90% causes hypoxemia (which can also be caused by anemia). Hypoxemia due to low SaO2 is indicated by cyanosis. Oxygen saturation can be measured in different tissues:
- Venous oxygen saturation (SvO2) is measured to see how much oxygen the body consumes. Under clinical treatment, an SvO2 below 60% indicates that the body is not getting enough oxygen, and ischemic disease can occur. This measurement is often used under treatment with a heart-lung machine (extracorporeal circulation), and can give the perfusionist an idea of how much flow the patient needs to stay healthy.
- Tissue oxygen saturation (StO2) can be measured by near infrared spectroscopy. Although the measurements are still widely discussed, they give an idea of tissue oxygenation in various conditions.
- Peripheral capillary oxygen saturation (SpO2) is an estimation of the oxygen saturation level usually measured with a pulse oximeter device. It can be calculated with the pulse oximetry according to the following formula:
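The formula itself did not survive the extraction; the standard definition it refers to is the fraction of oxygenated hemoglobin relative to total hemoglobin:

$$\mathrm{SpO_2} = \frac{\mathrm{HbO_2}}{\mathrm{HbO_2} + \mathrm{Hb}} \times 100\%$$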
Pulse oximetry is a method used to measure the concentration of oxygen in the blood. A small device that clips to the body (typically a finger but may be other areas), called a pulse oximeter, uses infrared light to estimate the amount of oxygen in the blood. The clip attaches to a reading meter by a wire to collect the data. Oxygen levels may also be checked through an arterial blood gas test (ABG), where blood taken from an artery is analysed for oxygen level, carbon dioxide level and acidity. Oxygen saturation taken with a pulse oximeter is often designated SpO2.
| SaO2 | Effect |
| 85% and above | No evidence of impairment |
| 65% and less | Impaired mental function on average |
| 55% and less | Loss of consciousness on average |
Healthy individuals at sea level usually exhibit oxygen saturation values between 96% and 99%, and should be above 94%. At 1600 meters altitude (about one mile high) oxygen saturation should be above 92%.
An SaO2 (arterial oxygen saturation) value below 90% causes hypoxemia (which can also be caused by anemia). Hypoxemia due to low SaO2 is indicated by cyanosis, but oxygen saturation does not directly reflect tissue oxygenation. The affinity of hemoglobin to oxygen may impair or enhance oxygen release at the tissue level. Oxygen is more readily released to the tissues (i.e., hemoglobin has a lower affinity for oxygen) when pH is decreased, body temperature is increased, arterial partial pressure of carbon dioxide (PaCO2) is increased, and 2,3-DPG levels (a byproduct of glucose metabolism also found in stored blood products) are increased. When the hemoglobin has greater affinity for oxygen, less is available to the tissues. Conditions such as increased pH, decreased temperature, decreased PaCO2, and decreased 2,3-DPG will increase oxygen binding to the hemoglobin and limit its release to the tissue.
- "Hypoxemia (low blood oxygen)". Mayo Clinic. mayoclinic.com. Retrieved 6 June 2013.
- Kenneth D. McClatchey (2002). Clinical Laboratory Medicine. Philadelphia: Lippincott Williams & Wilkins. p. 370.
- "Understanding Blood Oxygen Levels at Rest". fitday.com. fitday.com. Retrieved 6 June 2013.
- Elllison, Bronwyn. "NORMAL RANGE OF BLOOD OXYGEN LEVEL". Livestrong.com. Livestrong.com. Retrieved 6 June 2013.
- "Your Oxygen Level" (PDF). The Ohio State University Wexner Medical Center. Retrieved 6 June 2013.
- "SPO2". TheFreeDictionary.com. 1998–2008. Retrieved 2014-01-28.
- Oxymoron: Our Love-Hate Relationship with Oxygen, By Mike McEvoy at Albany Medical College, New York. 11/14/2012
- "Normal oxygen level". National Jewish Health. MedHelp. Feb 23, 2009. Retrieved 2014-01-28.
- Schutz, Oxygen Saturation Monitoring by Pulse Oximetry, 2001 | https://en.wikipedia.org/wiki/Oxygenation_(medical) |
4.09375 | February 10, 2016,
Hardening of the Arteries
Coronary artery disease (CAD), also called heart disease or ischemic heart disease, results from a complex process known as atherosclerosis (commonly called "hardening of the arteries"). In atherosclerosis, fatty deposits (plaques) of cholesterol and other cellular waste products build up in the inner linings of the heart’s arteries. This blocks the arteries and deprives the heart of oxygen-rich blood, a condition called ischemia. There are many steps in the process leading to atherosclerosis, some not fully understood.
Cholesterol and Lipoproteins. The atherosclerosis process begins with cholesterol and sphere-shaped bodies called lipoproteins that transport cholesterol.
- Cholesterol is a substance found in all animal cells and animal-based foods. It is critical for many functions, but under certain conditions cholesterol can be harmful.
- The lipoproteins that transport cholesterol are referred to by their size. The most commonly known are low-density lipoproteins (LDL) and high density lipoproteins (HDL). LDL is often referred to as "bad" cholesterol; HDL is often called "good" cholesterol.
Oxidation. The damaging process called oxidation is an important trigger in the atherosclerosis story.
- Oxidation is a chemical process in the body caused by the release of unstable particles known as oxygen-free radicals. It is one of the normal processes in the body, but under certain conditions (such as exposure to cigarette smoke or other environment stresses) these free radicals are overproduced.
- In excess amounts, they can be very dangerous, causing damaging inflammation and even affecting genetic material in cells.
- In heart disease, free radicals are released in artery linings and oxidize low-density lipoproteins (LDL). The oxidized LDL is the basis for cholesterol build-up on the artery walls and damage leading to heart disease.
Inflammatory Response. For the arteries to harden there must be a persistent reaction in the body that causes ongoing harm. Researchers now believe that this reaction is an immune process known as the inflammatory response.
There is growing evidence that the inflammatory response may be present not only in local plaques in single arteries but also throughout the arteries leading to the heart.
Blockage in the Arteries. Eventually these calcified (hardened) arteries become narrower (a condition known as stenosis).
- As this narrowing and hardening process continues, blood flow slows, preventing sufficient oxygen-rich blood from reaching the heart muscles.
- Such oxygen deprivation in vital cells is called ischemia. When it affects the coronary arteries, it causes injury to the tissues of the heart.
- These narrow and inelastic arteries not only slow down blood flow but also become vulnerable to injury and tears.
The End Result: Heart Attack. A heart attack can occur as a result of one or two effects of atherosclerosis:
- The artery becomes completely blocked and ischemia becomes so extensive that oxygen-bearing tissues around the heart die.
- The plaque itself develops fissures or tears. Blood platelets stick to the site to seal off the plaque, and a blood clot (thrombus) forms. A heart attack can then occur if the blood clot completely blocks the passage of oxygen-rich blood to the heart.
| http://www.nytimes.com/health/guides/disease/atherosclerosis/background.html |
4.1875 | First the bad news. Humans are driving species to extinction at around 1000 times the natural rate, at the top of the range of an earlier estimate. We also don’t know how many species we can afford to lose.
Now the good news. Armed with your smartphone, you can help conservationists save them.
Interactive map: “Where the threatened wild things are”
The new estimate of the global rate of extinction comes from Stuart Pimm of Duke University in Durham, North Carolina, and colleagues. It updates a calculation Pimm’s team released in 1995, that human activities were driving species out of existence at 100 to 1000 times the background rate (Science, doi.org/fq2sfs).
It turns out that Pimm’s earlier calculations both underestimated the rate at which species are now disappearing, and overestimated the background rate over the past 10 to 20 million years.
Gone gone gone
The Red List assessments of endangered species, conducted by the International Union for Conservation of Nature (IUCN), are key to Pimm’s analysis. They have evolved from patchy lists of threatened species into comprehensive surveys of animal groups and regions.
“Twenty years ago we simply didn’t have the breadth of underlying data with 70,000 species assessments in hand,” says team member Thomas Brooks of the IUCN in Gland, Switzerland.
By studying animals’ DNA, biologists have also created family trees for many groups of animals, allowing them to calculate when new species emerged. On average, it seems each vertebrate species gives rise to a new species once every 10 million years.
It’s hard to measure the natural rate of extinction, but there is a workaround. Before we started destroying habitats, new species seem to have been appearing faster than old ones disappeared. That means the natural extinction rate cannot be higher than the rate at which they were forming, says Pimm.
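The arithmetic behind that statement, using the article's own figures as illustration: one new species per species per 10 million years is about 0.1 speciation events per million species-years, so the background extinction rate can be at most roughly 0.1 extinctions per million species-years; a modern rate 1000 times higher then works out to on the order of 100 extinctions per million species-years.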
For the most part, the higher estimate of the modern extinction rate is not caused by any acceleration in extinctions since 1995. One exception is an increase in threats to amphibians, partly due to the global spread of the killer chytrid fungus.
The big unknown is what the high current extinction rate means for the health of entire ecosystems. Some researchers have suggested “sustainable” targets for species’ loss, but there’s still no scientific way to predict at what point cumulative extinctions cause an ecosystem to collapse. “People who say that are pulling numbers out of the air,” says Pimm.
Still, it seems unlikely that extinctions running at 1000 times the background rate can be sustained for long. “You can be sure that there will be a price to be paid,” says Brooks.
Pimm’s team has also compiled detailed global maps of biodiversity, showing the numbers of threatened species and total species richness in a global grid consisting of squares 10 kilometres across.
Such maps can help conservationists decide what to do.
For instance, Pimm and his colleague Clinton Jenkins of the Institute for Ecological Research in Nazaré Paulista, Brazil, noticed high numbers of threatened species on Brazil’s Atlantic coast. Local forests were being cleared for cattle ranching. So they are working with a Brazilian group, the Golden Lion Tamarin Association, to buy land and reconnect isolated forest fragments.
But conservationists need more data, and you can help, through projects like iNaturalist. Users share photos of the creatures they see via iPhone and Android apps, and experts identify them. “Right now, someone is posting an observation about every 30 seconds,” says co-director Scott Loarie of the California Academy of Sciences in San Francisco.
Journal reference: Science, DOI: 10.1126/science.1246752
| https://www.newscientist.com/article/dn25645-we-are-killing-species-at-1000-times-the-natural-rate |
4.09375 | September 7, 2005
Why Are Birds’ Eggs Speckled?
Birds' eggs are unique in their diverse pigmentation. This diversity is greatest amongst perching birds (order Passeriformes: 60% of all bird species), which include many familiar species including tits and warblers. Despite intense interest, the purpose, in most species, of these patterns was unknown.
Most passerines lay eggs speckled with reddish protoporphyrin spots forming a ring around the egg's blunt end, on an otherwise unpigmented shell. Evidence in a paper by Gosler, Higham & Reynolds soon to appear in Ecology Letters now suggests that rather than giving a visual signal, protoporphyrins strengthen the eggshell by compensating for reduced eggshell thickness caused by calcium deficiency. Pigment spots on great tit eggs specifically marked thinner areas of shell, with darker spots marking yet thinner shell than paler spots, and females nesting on low-calcium soils laid thinner-shelled, more-spotted eggs than those on high-calcium soils nearby. Pigmentation may offer a way to assess eggshell quality.
| http://www.redorbit.com/news/science/233301/why_are_birds_eggs_speckled/ |
4.4375 | ELA: KINDERGARTEN - GRADE 12
LITERACY: GRADES 6 - 12
College and Career Readiness Anchor Standards for Language
Vocabulary Acquisition and Use
4. Determine or clarify the meaning of unknown and multiple-meaning words and phrases by using context clues, analyzing meaningful word parts, and consulting general and specialized reference materials, as appropriate.
Toddlers and young children learn vocabulary without thinking about it—their brains simply absorb new words and new meanings for familiar words with little to no effort on their part. Would that we all were so lucky!
As humans age, we lose the sponge-like ability to soak up new words and meanings—even though, in a globalized world, we need those skills more than ever. Luckily, we never completely lose the ability to learn vocabulary.
Students preparing for college and/or a career should practice the skills they’ll need to decipher the forest of new words they’ll inevitably encounter. Specifically, the Anchor Standards recommend focusing on the following skills:
Understanding words in context. The sentence or paragraph a new word lives in can provide plenty of information about what it means. Often, the sentence alone tells the reader enough to give him a good shot at guessing what the word means. Context is especially important when a single word has multiple meanings.
For instance, the word “glass” may mean a container for a beverage, a flat pane of material used in a window or mirror, or the substance from which both of these are made. A sentence like “I raised my glass to toast their happy marriage,” however, immediately brings to mind the drink-holder type of “glass,” not the used-in-windows type of “glass.” (Similarly, “toast” in this context brings to mind a type of speech, not a piece of cooked bread.)
Examining word parts. English may read like it picks the pockets of other languages for spare vocabulary, but many English words, especially those used in technical or professional fields, use recognizable word parts that give a clear view of what the word means. Perhaps the best-known example is the suffix “-ology,” which means “the study of.”
Using reference materials. When all else fails, look it up! Most students preparing for college or a career are familiar with the basic reference trio of dictionary/thesaurus/encyclopedia, but most specialties have their own specific reference materials along with the tried-and-true favorites.
P.S. If your students need to brush up on their spelling and grammar, send 'em over to our Grammar Learning Guides so they can hone their skills before conquering the Common Core.
Sample Activities for Use in Class
Reading Outside Your Sphere: This activity can be done in an hour, or it can serve as an ongoing semester project. Students will need access to magazines, books, and other reading materials. For a one-day assignment, the school library may suffice; for an entire semester, you may want to have students subscribe to a magazine, or use resources at a local public or university library, if available.
Assign, or have each student choose, a topic or area of study about which the student knows little to nothing. More than one student may be assigned to each topic, if necessary. For the duration of the assignment, have the students read a magazine, newsletter, book, or other publication in their unknown topic. As they encounter words they don’t know, students should write down:
1. The word;
2. The sentence the word appears in;
3. Their best guess as to what the word means and what they base that guess on - the context, a definition or example given in the text, the parts of the word, etc. (one or two sentences will suffice); and
4. What the word actually means and where they learned that information (dictionary, website, asking a professional in the field, etc.)
Reading topics they know nothing about will not only expose students to vocabulary they’ve never seen before, but also challenge them to decide what resources are best for finding out.
For this activity, you’ll need a stack of general and specialized reference materials and a stack of cards containing vocabulary words, phrases and concepts that might be found in these reference works. For instance, if your pile of reference materials contains a medical dictionary and a copy of Gray’s Anatomy, your cards should include a few medical terms, such as the technical name for certain organs, diseases, or medical procedures.
Divide the class into two or more teams of four to seven students each. Put the stack of reference materials on a table at the front of the room, and divide the cards into one stack per team and set them at the front of the room either with the reference materials or on their own table. (For scoring purposes, it may be easier to color-code the cards for each team.) Students should line up in single-file lines, one per team, facing the reference materials and stacks of cards.
On “go!” (or some similar signal to begin), the first student in each line will race to the front of the room and grab the top card off her team’s stack. The student reads the card, then has to decide as quickly as possible which reference materials are most likely to have the definition of the word, phrase or concept on the card. The student should go through the reference material(s) until she finds the definition, then mark that page in the book with her card and race to the back of her team’s line, at which point the second student on the team runs forward and does the same thing. Once everyone on the team has stuck a card in a book, the entire team should raise its hands.
A team has “won” if all its cards bookmark a page that defines the word on the card. Read the words and the definitions out loud, or have each student read his or hers out loud. | http://www.shmoop.com/common-core-standards/ccss-ela-literacy-ccra-l-4.html |
4.0625 | By Steve Whitmoyer
Learners identify the parts of speech by following a certain order until each word in a sentence is labeled. In a variety of exercises, learners practice finding verbs, prepositional phrases, subjects, nouns, pronouns, adjectives, adverbs, and conjunctions.
In this animated object, learners examine neutral fats, phospholipids, and cholesterol. The molecular formula and general function for each are shown.
Students examine standard pressure in this interactive object.
You'll practice converting between units of measure for volume in the English Measurement System.
Spin to Win with conduction, convection, and radiation terminology! | https://www.wisc-online.com/learn/career-clusters/stem/eng3502/weight--volume-relationships-saturated-densit |
4.375 | This text was copied from Wikipedia on 29 November 2015 at 3:22PM.
The slide rule, also known colloquially in the United States as a slipstick, is a mechanical analog computer. The slide rule is used primarily for multiplication and division, and also for functions such as roots, logarithms and trigonometry, but is not normally used for addition or subtraction. Though similar in name and appearance to a standard ruler, the slide rule is not ordinarily used for measuring length or drawing straight lines.
Slide rules exist in a diverse range of styles and generally appear in a linear or circular form with a standardized set of markings (scales) essential to performing mathematical computations. Slide rules manufactured for specialized fields such as aviation or finance typically feature additional scales that aid in calculations common to those fields.
The Reverend William Oughtred and others developed the slide rule in the 17th century based on the emerging work on logarithms by John Napier. Before the advent of the pocket calculator, it was the most commonly used calculation tool in science and engineering. The use of slide rules continued to grow through the 1950s and 1960s even as digital computing devices were being gradually introduced; but around 1974 the electronic scientific calculator made it largely obsolete and most suppliers left the business.
- 1 Basic concepts
- 2 Operation
- 3 Physical design
- 4 History
- 5 Compared to electronic digital calculators
- 6 The slide rule today
- 7 See also
- 8 Notes
- 9 External links
In its most basic form, the slide rule uses two logarithmic scales to allow rapid multiplication and division of numbers. These common operations can be time-consuming and error-prone when done on paper. More elaborate slide rules allow other calculations, such as square roots, exponentials, logarithms, and trigonometric functions.
Scales may be grouped in decades, which are numbers ranging from 1 to 10 (i.e. 10^n to 10^(n+1)). Thus single decade scales C and D range from 1 to 10 across the entire width of the slide rule while double decade scales A and B range from 1 to 100 over the width of the slide rule.
In general, mathematical calculations are performed by aligning a mark on the sliding central strip with a mark on one of the fixed strips, and then observing the relative positions of other marks on the strips. Numbers aligned with the marks give the approximate value of the product, quotient, or other calculated result.
The user determines the location of the decimal point in the result, based on mental estimation. Scientific notation is used to track the decimal point in more formal calculations. Addition and subtraction steps in a calculation are generally done mentally or on paper, not on the slide rule.
Most slide rules consist of three linear strips of the same length, aligned in parallel and interlocked so that the central strip can be moved lengthwise relative to the other two. The outer two strips are fixed so that their relative positions do not change.
Some slide rules ("duplex" models) have scales on both sides of the rule and slide strip, others on one side of the outer strips and both sides of the slide strip (which can usually be pulled out, flipped over and reinserted for convenience), still others on one side only ("simplex" rules). A sliding cursor with a vertical alignment line is used to find corresponding points on scales that are not adjacent to each other or, in duplex models, are on the other side of the rule. The cursor can also record an intermediate result on any of the scales.
A logarithm transforms the operations of multiplication and division to addition and subtraction according to the rules log(xy) = log(x) + log(y) and log(x/y) = log(x) - log(y). Moving the top scale to the right by a distance of log(x), by matching the beginning of the top scale with the label x on the bottom, aligns each number y, at position log(y) on the top scale, with the number at position log(x) + log(y) on the bottom scale. Because log(x) + log(y) = log(xy), this position on the bottom scale gives xy, the product of x and y. For example, to calculate 3×2, the 1 on the top scale is moved to the 2 on the bottom scale. The answer, 6, is read off the bottom scale where 3 is on the top scale. In general, the 1 on the top is moved to a factor on the bottom, and the answer is read off the bottom where the other factor is on the top. This works because the distances from the "1" are proportional to the logarithms of the marked values.
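As a rough illustration (not part of the original article), the correspondence between sliding distances and multiplication can be sketched in a few lines of Python; the function name and structure are assumptions made for clarity:

    import math

    def multiply_on_scales(x, y):
        """Simulate C/D-scale multiplication: sliding the top scale by log10(x)
        and reading under y adds the two logarithmic distances."""
        position = math.log10(x) + math.log10(y)  # combined distance along the D scale
        return 10 ** position                     # the value marked at that distance

    print(multiply_on_scales(3, 2))  # ~6.0, matching the 3×2 example above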
Operations may go "off the scale;" for example, the diagram above shows that the slide rule has not positioned the 7 on the upper scale above any number on the lower scale, so it does not give any answer for 2×7. In such cases, the user may slide the upper scale to the left until its right index aligns with the 2, effectively dividing by 10 (by subtracting the full length of the C-scale) and then multiplying by 7, as in the illustration below:
Here the user of the slide rule must remember to adjust the decimal point appropriately to correct the final answer. We wanted to find 2×7, but instead we calculated (2/10)×7=0.2×7=1.4. So the true answer is not 1.4 but 14. Resetting the slide is not the only way to handle multiplications that would result in off-scale results, such as 2×7; some other methods are:
- Use the double-decade scales A and B.
- Use the folded scales. In this example, set the left 1 of C opposite the 2 of D. Move the cursor to 7 on CF, and read the result from DF.
- Use the CI inverted scale. Position the 7 on the CI scale above the 2 on the D scale, and then read the result off of the D scale below the 1 on the CI scale. Since 1 occurs in two places on the CI scale, one of them will always be on-scale.
- Use both the CI inverted scale and the C scale. Line up the 2 of CI with the 1 of D, and read the result from D, below the 7 on the C scale.
- Using a circular slide rule.
Method 1 is easy to understand, but entails a loss of precision. Method 3 has the advantage that it only involves two scales.
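The wrap-around handling of off-scale products (the 2×7 case above) can likewise be sketched in Python; this is an illustrative model of the procedure, not a description of any particular rule:

    import math

    def multiply_with_wrap(x, y):
        """Single-decade multiplication that falls back to the right-hand index
        (effectively dividing by 10) whenever the product runs off the scale."""
        pos = math.log10(x) + math.log10(y)
        decades = 0
        while pos >= 1.0:        # off the right end of the D scale
            pos -= 1.0           # re-index: divide by 10
            decades += 1
        reading = 10 ** pos      # what the user reads on the scale
        return reading * 10 ** decades  # decimal point restored mentally

    print(multiply_with_wrap(2, 7))  # 14.0 (read as 1.4, then shifted one decade)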
The illustration below demonstrates the computation of 5.5/2. The 2 on the top scale is placed over the 5.5 on the bottom scale. The 1 on the top scale lies above the quotient, 2.75. There is more than one method for doing division, but the method presented here has the advantage that the final result cannot be off-scale, because one has a choice of using the 1 at either end.
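Division is the same correspondence run in reverse, subtracting logarithmic distances; a minimal sketch, assuming the same simplified model as above:

    import math

    def divide_on_scales(x, y):
        """Simulate C/D-scale division by subtracting logarithmic distances,
        reading against the other index when the result would go off-scale."""
        pos = math.log10(x) - math.log10(y)
        if pos < 0:                      # use the other index; shift one decade
            return (10 ** (pos + 1.0)) / 10
        return 10 ** pos

    print(divide_on_scales(5.5, 2))  # 2.75, matching the example above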
In addition to the logarithmic scales, some slide rules have other mathematical functions encoded on other auxiliary scales. The most popular were trigonometric, usually sine and tangent, common logarithm (log10) (for taking the log of a value on a multiplier scale), natural logarithm (ln) and exponential (ex) scales. Some rules include a Pythagorean scale, to figure sides of triangles, and a scale to figure circles. Others feature scales for calculating hyperbolic functions. On linear rules, the scales and their labeling are highly standardized, with variation usually occurring only in terms of which scales are included and in what order:
- A, B: two-decade logarithmic scales, used for finding square roots and squares of numbers
- C, D: single-decade logarithmic scales
- K: three-decade logarithmic scale, used for finding cube roots and cubes of numbers
- CF, DF: "folded" versions of the C and D scales that start from π rather than from unity; these are convenient in two cases. First, when the user guesses a product will be close to 10 but is not sure whether it will be slightly less or slightly more than 10, the folded scales avoid the possibility of going off the scale. Second, by making the start π rather than the square root of 10, multiplying or dividing by π (as is common in science and engineering formulas) is simplified.
- CI, DI, CIF, DIF: "inverted" scales, running from right to left, used to simplify 1/x steps
- S: used for finding sines and cosines on the C (or D) scale
- T, T1, T2: used for finding tangents and cotangents on the C and CI (or D and DI) scales
- ST, SRT: used for sines and tangents of small angles and degree–radian conversion
- L: a linear scale, used along with the C and D scales for finding base-10 logarithms and powers of 10
- LLn: a set of log-log scales, used for finding logarithms and exponentials of numbers
- Ln: a linear scale, used along with the C and D scales for finding natural (base e) logarithms and powers of e
The scales on the front and back of a Keuffel and Esser (K&E) 4081-3 slide rule.
Roots and powers
There are single-decade (C and D), double-decade (A and B), and triple-decade (K) scales. To compute x^2, for example, locate x on the D scale and read its square on the A scale. Inverting this process allows square roots to be found, and similarly for the powers 3, 1/3, 2/3, and 3/2. Care must be taken when the base, x, is found in more than one place on its scale. For instance, there are two nines on the A scale; to find the square root of nine, use the first one; the second one gives the square root of 90.
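The A and K scales work because they compress two and three decades into the same physical length, doubling or tripling the logarithmic distance; a short sketch of that relationship (illustrative only):

    import math

    def read_on_A_scale(x):
        """A value at distance log10(x) on D lines up with 10**(2*log10(x)) on A,
        i.e. its square."""
        return 10 ** (2 * math.log10(x))

    def read_on_K_scale(x):
        """The three-decade K scale gives cubes the same way."""
        return 10 ** (3 * math.log10(x))

    print(read_on_A_scale(3))  # ~9.0
    print(read_on_K_scale(2))  # ~8.0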
For x^y problems, use the LL scales. When several LL scales are present, use the one with x on it. First, align the leftmost 1 on the C scale with x on the LL scale. Then, find y on the C scale and go down to the LL scale with x on it. That scale will indicate the answer. If y is "off the scale," locate x^(y/2) and square it using the A and B scales as described above. Alternatively, use the rightmost 1 on the C scale, and read the answer off the next higher LL scale. For example, aligning the rightmost 1 on the C scale with 2 on the LL2 scale, 3 on the C scale lines up with 8 on the LL3 scale.
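The LL scales amount to spacing values by log(log(x)), so an ordinary C-scale multiplication of log(x) by y yields x to the power y; a one-line model of that idea, as an assumption-free arithmetic check rather than a description of any specific rule:

    import math

    def power_via_loglog(x, y):
        """x**y computed the way LL scales do it: multiply log(x) by y, then exponentiate."""
        return math.exp(y * math.log(x))

    print(power_via_loglog(2, 3))  # ~8.0, matching the LL2/LL3 example above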
The S, T, and ST scales are used for trig functions and multiples of trig functions, for angles in degrees.
For angles from around 5.7 up to 90 degrees, sines are found by comparing the S scale with C (or D) scale; though on many closed-body rules the S scale relates to the A scale instead, and what follows must be adjusted appropriately. The S scale has a second set of angles (sometimes in a different color), which run in the opposite direction, and are used for cosines. Tangents are found by comparing the T scale with the C (or D) scale for angles less than 45 degrees. For angles greater than 45 degrees the CI scale is used. Common forms such as k sin x can be read directly from x on the S scale to the result on the D scale, when the C-scale index is set at k. For angles below 5.7 degrees, sines, tangents, and radians are approximately equal, and are found on the ST or SRT (sines, radians, and tangents) scale, or simply divided by 57.3 degrees/radian. Inverse trigonometric functions are found by reversing the process.
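The reason one ST scale can stand in for sines, tangents, and radians at small angles is the small-angle approximation; a quick numerical check (not from the source):

    import math

    def small_angle_values(degrees):
        """Below roughly 5.7 degrees, the radian measure, sine, and tangent
        agree to about three significant figures."""
        rad = math.radians(degrees)
        return rad, math.sin(rad), math.tan(rad)

    print(small_angle_values(3))  # all three values are ~0.0524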
Many slide rules have S, T, and ST scales marked with degrees and minutes (e.g. some Keuffel and Esser models, late-model Teledyne-Post Mannheim-type rules). So-called decitrig models use decimal fractions of degrees instead.
Logarithms and exponentials
Base-10 logarithms and exponentials are found using the L scale, which is linear. Some slide rules have a Ln scale, which is for base e. Logarithms to any other base can be calculated by reversing the procedure for calculating powers of a number. For example, log2 values can be determined by lining up either leftmost or rightmost 1 on the C scale with 2 on the LL2 scale, finding the number whose logarithm is to be calculated on the corresponding LL scale, and reading the log2 value on the C scale.
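The base-2 procedure described above is just the change-of-base rule; as a minimal sketch:

    import math

    def log_in_base(value, base):
        """Change of base: take two logarithms and divide, which is what the
        LL-scale procedure above effectively does."""
        return math.log(value) / math.log(base)

    print(log_in_base(8, 2))  # 3.0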
Addition and subtraction
Slide rules are not typically used for addition and subtraction, but it is nevertheless possible to do so using two different techniques.
The first method to perform addition and subtraction on the C and D (or any comparable scales) requires converting the problem into one of division. For addition, the quotient of the two variables plus one times the divisor equals their sum: x + y = (x/y + 1) × y.
For subtraction, the quotient of the two variables minus one times the divisor equals their difference: x - y = (x/y - 1) × y.
This method is similar to the addition/subtraction technique used for high-speed electronic circuits with the logarithmic number system in specialized computer applications like the Gravity Pipe (GRAPE) supercomputer and hidden Markov models.
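A quick check of the two identities behind this first method (a sketch, not part of the original text):

    def add_via_division(x, y):
        """Addition recast as the division-friendly identity (x/y + 1) * y."""
        return (x / y + 1) * y

    def subtract_via_division(x, y):
        """Subtraction recast as (x/y - 1) * y."""
        return (x / y - 1) * y

    print(add_via_division(7, 3))       # 10.0
    print(subtract_via_division(7, 3))  # 4.0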
The second method utilizes a sliding linear L scale available on some models. Addition and subtraction are performed by sliding the cursor left (for subtraction) or right (for addition) then returning the slide to 0 to read the result.
Standard linear rules
The width of the slide rule is quoted in terms of the nominal width of the scales. Scales on the most common "10-inch" models are actually 25 cm, as they were made to metric standards, though some rules offer slightly extended scales to simplify manipulation when a result overflows. Pocket rules are typically 5 inches. Models a couple of metres wide were sold to be hung in classrooms for teaching purposes.
Typically the divisions mark a scale to a precision of two significant figures, and the user estimates the third figure. Some high-end slide rules have magnifier cursors that make the markings easier to see. Such cursors can effectively double the accuracy of readings, permitting a 10-inch slide rule to serve as well as a 20-inch.
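The quoted precision (about three significant figures, with the last digit estimated by eye) can be modeled as rounding to significant figures; an illustrative helper, assuming nothing beyond the text:

    from math import floor, log10

    def to_sig_figs(value, digits=3):
        """Round a result to the roughly three significant figures a standard
        10-inch rule resolves."""
        exponent = floor(log10(abs(value)))
        step = 10 ** (exponent - digits + 1)
        return round(value / step) * step

    print(to_sig_figs(2.7534))  # 2.75
    print(to_sig_figs(45012))   # 45000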
Various other conveniences have been developed. Trigonometric scales are sometimes dual-labeled, in black and red, with complementary angles, the so-called "Darmstadt" style. Duplex slide rules often duplicate some of the scales on the back. Scales are often "split" to get higher accuracy.
Circular slide rules
Circular slide rules come in two basic types, one with two cursors (left), and another with a free dish and one cursor (right). The dual cursor versions perform multiplication and division by holding a fast angle between the cursors as they are rotated around the dial. The onefold cursor version operates more like the standard slide rule through the appropriate alignment of the scales.
The basic advantage of a circular slide rule is that the widest dimension of the tool was reduced by a factor of about 3 (i.e. by π). For example, a 10 cm circular would have a maximum precision approximately equal to a 31.4 cm ordinary slide rule. Circular slide rules also eliminate "off-scale" calculations, because the scales were designed to "wrap around"; they never have to be reoriented when results are near 1.0—the rule is always on scale. However, for non-cyclical non-spiral scales such as S, T, and LL's, the scale width is narrowed to make room for end margins.
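The factor-of-three claim follows from the circumference of the outer scale; a one-line check (illustrative only):

    import math

    def equivalent_straight_length(diameter_cm):
        """Outer scale length of a circular rule is pi times its diameter,
        so a 10 cm dial carries about as much scale as a 31.4 cm straight rule."""
        return math.pi * diameter_cm

    print(equivalent_straight_length(10))  # ~31.4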
Circular slide rules are mechanically more rugged and smoother-moving, but their scale alignment precision is sensitive to the centering of a central pivot; a minute 0.1 mm off-centre of the pivot can result in a 0.2mm worst case alignment error. The pivot, however, does prevent scratching of the face and cursors. The highest accuracy scales are placed on the outer rings. Rather than "split" scales, high-end circular rules use spiral scales for more complex operations like log-of-log scales. One eight-inch premium circular rule had a 50-inch spiral log-log scale.
The main disadvantages of circular slide rules are the difficulty in locating figures along a dish, and limited number of scales. Another drawback of circular slide rules is that less-important scales are closer to the center, and have lower precisions. Most students learned slide rule use on the linear slide rules, and did not find reason to switch.
One slide rule remaining in daily use around the world is the E6B. This is a circular slide rule first created in the 1930s for aircraft pilots to help with dead reckoning. With the aid of scales printed on the frame it also helps with such miscellaneous tasks as converting time, distance, speed, and temperature values, compass errors, and calculating fuel use. The so-called "prayer wheel" is still available in flight shops, and remains widely used. While GPS has reduced the use of dead reckoning for aerial navigation, and handheld calculators have taken over many of its functions, the E6B remains widely used as a primary or backup device and the majority of flight schools demand that their students have some degree of proficiency in its use.
Proportion wheels are simple circular slide rules used in graphic design to enlarge or reduce images and photographs. Lining up the desired values on the outer and inner wheels (which correspond to the original and desired sizes) will display the proportion as a percentage in a small window. They are not as common since the advent of computerized layout, but are still made and used.
In 1952, Swiss watch company Breitling introduced a pilot's wristwatch with an integrated circular slide rule specialized for flight calculations: the Breitling Navitimer. The Navitimer circular rule, referred to by Breitling as a "navigation computer", featured airspeed, rate/time of climb/descent, flight time, distance, and fuel consumption functions, as well as kilometer—nautical mile and gallon—liter fuel amount conversion functions.
A Russian circular slide rule built like a pocket watch that works as single cursor slide rule since the two needles are ganged together.
Breitling Navitimer wristwatch with circular slide rule
Cylindrical slide rules
There are two main types of cylindrical slide rules: those with helical scales such as the Fuller, the Otis King and the Bygrave slide rule, and those with bars, such as the Thacher and some Loga models. In either case, the advantage is a much longer scale, and hence potentially greater precision, than afforded by a straight or circular rule.
In 1895, a Japanese firm, Hemmi, started to make slide rules from bamboo, which had the advantages of being dimensionally stable, strong and naturally self-lubricating. These bamboo slide rules were introduced in Sweden in September, 1933, and probably only a little earlier in Germany. Scales were made of celluloid, plastic, or painted aluminium. Later cursors were acrylics or polycarbonates sliding on Teflon bearings.
All premium slide rules had numbers and scales engraved, and then filled with paint or other resin. Painted or imprinted slide rules were viewed as inferior because the markings could wear off. Nevertheless, Pickett, probably America's most successful slide rule company, made all printed scales. Premium slide rules included clever catches so the rule would not fall apart by accident, and bumpers to protect the scales and cursor from rubbing on tabletops. The recommended cleaning method for engraved markings is to scrub lightly with steel-wool. For painted slide rules use diluted commercial window-cleaning fluid and a soft cloth.
The slide rule was invented around 1620–1630, shortly after John Napier's publication of the concept of the logarithm. Edmund Gunter of Oxford developed a calculating device with a single logarithmic scale; with additional measuring tools it could be used to multiply and divide. The first description of this scale was published in Paris in 1624 by Edmund Wingate (c.1593–1656), an English mathematician, in a book entitled L'usage de la reigle de proportion en l'arithmetique & geometrie. The book contains a double scale, logarithmic on one side, tabular on the other. In 1630, William Oughtred of Cambridge invented a circular slide rule, and in 1632 combined two handheld Gunter rules to make a device that is recognizably the modern slide rule. Like his contemporary at Cambridge, Isaac Newton, Oughtred taught his ideas privately to his students. Also like Newton, he became involved in a vitriolic controversy over priority, with his one-time student Richard Delamain and the prior claims of Wingate. Oughtred's ideas were only made public in publications of his student William Forster in 1632 and 1653.
In 1722, Warner introduced the two- and three-decade scales, and in 1755 Everard included an inverted scale; a slide rule containing all of these scales is usually known as a "polyphase" rule.
In 1815, Peter Mark Roget invented the log log slide rule, which included a scale displaying the logarithm of the logarithm. This allowed the user to directly perform calculations involving roots and exponents. This was especially useful for fractional powers.
In 1821, Nathaniel Bowditch described in the American Practical Navigator a "sliding rule" that contained scales of trigonometric functions on the fixed part and a line of log-sines and log-tans on the slider, used to solve navigation problems.
A more modern form of slide rule was created in 1859 by French artillery lieutenant Amédée Mannheim, "who was fortunate in having his rule made by a firm of national reputation and in having it adopted by the French Artillery." It was around this time that engineering became a recognized profession, resulting in widespread slide rule use in Europe–but not in the United States. There Edwin Thacher's cylindrical rule took hold after 1881. The duplex rule was invented by William Cox in 1891, and was produced by Keuffel and Esser Co. of New York.
Astronomical work also required fine computations, and in 19th-century Germany a steel slide rule about 2 meters long was used at one observatory. It had a microscope attached, giving it accuracy to six decimal places.
Throughout the 1950s and 1960s the slide rule was the symbol of the engineer's profession in the same way the stethoscope is of the medical profession's. German rocket scientist Wernher von Braun brought two 1930s vintage Nestler slide rules with him when he moved to the U.S. after World War 2 to work on the American space effort. Throughout his life he never used any other pocket calculating device, even while heading the NASA program that landed a man on the moon in 1969.
Aluminium Pickett-brand slide rules were carried on Project Apollo space missions. The model N600-ES owned by Buzz Aldrin that flew with him to the moon on Apollo 11 was sold at auction in 2007. The model N600-ES taken along on Apollo 13 in 1970 is owned by the National Air and Space Museum.
Some engineering students and engineers carried ten-inch slide rules in belt holsters, a common sight on campuses even into the mid-1970s. Until the advent of the pocket digital calculator students also might keep a ten- or twenty-inch rule for precision work at home or the office while carrying a five-inch pocket slide rule around with them.
In 2004, education researchers David B. Sher and Dean C. Nataro conceived a new type of slide rule based on prosthaphaeresis, an algorithm for rapidly computing products that predates logarithms. However, there has been little practical interest in constructing one beyond the initial prototype.
Slide rules have often been specialized to varying degrees for their field of use, such as excise, proof calculation, engineering, navigation, etc., but some slide rules are extremely specialized for very narrow applications. For example, the John Rabone & Sons 1892 catalog lists a "Measuring Tape and Cattle Gauge", a device to estimate the weight of a cow from its measurements.
There were many specialized slide rules for photographic applications; for example, the actinograph of Hurter and Driffield was a two-slide boxwood, brass, and cardboard device for estimating exposure from time of day, time of year, and latitude.
Specialized slide rules were invented for various forms of engineering, business and banking. These often had common calculations directly expressed as special scales, for example loan calculations, optimal purchase quantities, or particular engineering equations. For example, the Fisher Controls company distributed a customized slide rule adapted to solving the equations used for selecting the proper size of industrial flow control valves.
In World War II, bombardiers and navigators who required quick calculations often used specialized slide rules. One office of the U.S. Navy actually designed a generic slide rule "chassis" with an aluminium body and plastic cursor into which celluloid cards (printed on both sides) could be placed for special calculations. The process was invented to calculate range, fuel use and altitude for aircraft, and then adapted to many other purposes.
The E6-B is a circular slide rule used by pilots & navigators.
The importance of the slide rule began to diminish as electronic computers, a new but rare resource in the 1950s, became more widely available to technical workers during the 1960s. (See History of computing hardware (1960s–present).)
Computers also changed the nature of calculation. With slide rules a great emphasis was put on working the algebra to get expressions into the most computable form. Users would simply approximate or drop small terms to simplify a calculation. FORTRAN allowed complicated formulas to be typed in from textbooks without the effort of reformulation. Numerical integration was often easier than trying to find closed-form solutions for difficult problems. The young engineer asking for computer time to solve a problem that could have been done by a few swipes on the slide rule became a humorous cliché.
The availability of mainframe computing did not, however, significantly affect the ubiquitous use of the slide rule until cheap handheld electronic calculators for scientific and engineering purposes became available in the mid-1970s, at which point it rapidly declined. The first of these included the Wang Laboratories LOCI-2, introduced in 1965, which used logarithms for multiplication and division, and the Hewlett-Packard HP-9100, introduced in 1968. The HP-9100 had trigonometric functions (sin, cos, tan) in addition to exponentials and logarithms. It used the CORDIC (coordinate rotation digital computer) algorithm, which allows for calculation of trigonometric functions using only shift and add operations. This method facilitated the development of ever smaller scientific calculators.
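CORDIC reduces trigonometric evaluation to a fixed sequence of add, subtract, and scale-by-a-power-of-two steps; the following is a minimal illustrative sketch of the rotation-mode algorithm, not HP's implementation:

    import math

    def cordic_sin_cos(angle_rad, iterations=24):
        """Compute (sin, cos) for an angle in [-pi/2, pi/2] by successive
        rotations through atan(2**-i), using only add/subtract/halving-style steps."""
        angles = [math.atan(2.0 ** -i) for i in range(iterations)]
        gain = 1.0
        for i in range(iterations):
            gain /= math.sqrt(1.0 + 2.0 ** (-2 * i))   # overall scaling constant K
        x, y, z = 1.0, 0.0, angle_rad
        for i, a in enumerate(angles):
            d = 1.0 if z >= 0 else -1.0                # rotate toward the target angle
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * a
        return y * gain, x * gain                      # (sin, cos)

    print(cordic_sin_cos(math.radians(30)))  # ~(0.5, 0.866)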
As calculator price declined geometrically and functionality increased exponentially, the slide rule's fate was sealed. The pocket-sized Hewlett-Packard HP-35 scientific calculator cost US$395 in 1972, too expensive for most students. By 1975 basic four-function electronic calculators could be purchased for less than $50, and by 1976 the TI-30 scientific calculator could be purchased for less than $25.
Compared to electronic digital calculators
Most people find slide rules difficult to learn and use. Even during their heyday, they never caught on with the general public. Addition and subtraction are not well-supported operations on slide rules and doing a calculation on a slide rule tends to be slower than on a calculator. This led engineers to take mathematical shortcuts favoring operations that were easy on a slide rule, creating inaccuracies and mistakes. On the other hand, the spatial, manual operation of slide rules cultivates in the user an intuition for numerical relationships and scale that people who have used only digital calculators often lack. A slide rule will also display all the terms of a calculation along with the result, thus eliminating uncertainty about what calculation was actually performed.
A slide rule requires the user to separately compute the order of magnitude of the answer in order to position the decimal point in the results. For example, 1.5 × 30 (which equals 45) will show the same result as 1,500,000 × 0.03 (which equals 45,000). This separate calculation is less likely to lead to extreme calculation errors, but forces the user to keep track of magnitude in short-term memory (which is error-prone), keep notes (which is cumbersome) or reason about it in every step (which distracts from the other calculation requirements).
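Tracking the order of magnitude separately from the scale reading amounts to splitting a number into a mantissa and an exponent; a small sketch of the 1.5 × 30 example (illustrative only):

    import math

    def scale_reading_and_decade(value):
        """Split a result the way a slide-rule user does: the digits read off
        the scale, plus a separately tracked power of ten."""
        decade = math.floor(math.log10(abs(value)))
        return value / 10 ** decade, decade

    print(scale_reading_and_decade(1.5 * 30))        # (4.5, 1)  -> 45
    print(scale_reading_and_decade(1500000 * 0.03))  # (4.5, 4)  -> 45,000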
The typical precision of a slide rule is about three significant digits, compared to many digits on digital calculators. As order of magnitude gets the greatest prominence when using a slide rule, users are less likely to make errors of false precision.
When performing a sequence of multiplications or divisions by the same number, the answer can often be determined by merely glancing at the slide rule without any manipulation. This can be especially useful when calculating percentages (e.g. for test scores) or when comparing prices (e.g. in dollars per kilogram). Multiple speed-time-distance calculations can be performed hands-free at a glance with a slide rule. Other useful linear conversions such as pounds to kilograms can be easily marked on the rule and used directly in calculations.
Being entirely mechanical, a slide rule does not depend on electricity or batteries. However, mechanical imprecision in slide rules that were poorly constructed or warped by heat or use will lead to errors.
Many sailors keep slide rules as backups for navigation in case of electric failure or battery depletion on long route segments. Slide rules are still commonly used in aviation, particularly for smaller planes. They are being replaced only by integrated, special purpose and expensive flight computers, and not general-purpose calculators. The E6B circular slide rule used by pilots has been in continuous production and remains available in a variety of models. Some wrist watches designed for aviation use still feature slide rule scales to permit quick calculations. The Citizen Skyhawk AT is a notable example.
The slide rule today
Even today some people prefer a slide rule over an electronic calculator as a practical computing device. Others keep their old slide rules out of a sense of nostalgia, or collect them as a hobby.
A popular collectible model is the Keuffel & Esser Deci-Lon, a premium scientific and engineering slide rule available both in a ten-inch "regular" (Deci-Lon 10) and a five-inch "pocket" (Deci-Lon 5) variant. Another prized American model is the eight-inch Scientific Instruments circular rule. Of European rules, Faber-Castell's high-end models are the most popular among collectors.
Although there is a large supply of slide rules circulating on the market, specimens in good condition tend to be expensive. Many rules found for sale on online auction sites are damaged or have missing parts, and the seller may not know enough to supply the relevant information. Replacement parts are scarce, expensive, and generally available only for separate purchase on individual collectors' web sites. The Keuffel and Esser rules from the period up to about 1950 are particularly problematic, because the end-pieces on the cursors, made of celluloid, tend to chemically break down over time.
There are still a handful of sources for brand new slide rules. The Concise Company of Tokyo, which began as a manufacturer of circular slide rules in July 1954, continues to make and sell them today. In September 2009, on-line retailer ThinkGeek introduced its own brand of straight slide rules, described as "faithful replica[s]" that are "individually hand tooled". As of 2012, these are no longer available. In addition, Faber-Castell has a number of slide rules still in inventory, available for international purchase through their web store. Proportion wheels are still used in graphic design.
Various slide rule simulator apps are available for Android and iOS-based smart phones and tablets.
- Lester V. Berrey and Melvin van den Bark (1953). American Thesaurus of Slang: A Complete Reference Book of Colloquial Speech. Crowell.
- Roger R. Flynn (June 2002). Computer sciences 1. Macmillan. p. 175. ISBN 978-0-02-865567-3. Retrieved 30 March 2013.
The slide rule is an example of a mechanical analog computer...
- Swedin, Eric G.; Ferro, David L. (24 October 2007). Computers: The Life Story of a Technology. JHU Press. p. 26. ISBN 978-0-8018-8774-1. Retrieved 30 March 2013.
Other analog mechanical computers included slide rules, the differential analyzer built by Vannevar E. Bush (1890–1974) at the ...
- Peter Grego (2009). Astronomical cybersketching. Springer. p. 12. ISBN 978-0-387-85351-2. Retrieved 30 March 2013.
It is astonishing to think that much of the routine mathematical work that put people into orbit around Earth and landed astronauts on the Moon in the 1960s was performed using an unassuming little mechanical analog computer – the 'humble' slide rule.
- Ernst Bleuler; Robert Ozias Haxby (21 September 2011). Electronic Methods. Academic Press. p. 638. ISBN 978-0-08-085975-0. Retrieved 30 March 2013.
For example, slide rules are mechanical analog computers,
- Harry Henderson (1 January 2009). Encyclopedia of Computer Science and Technology, Revised Edition. Infobase Publishing. p. 13. ISBN 978-1-4381-1003-5. Retrieved 30 March 2013.
Another analog computer, the slide rule, became the constant companion of scientists, engineers, and students until it was replaced ... logarithmic proportions, allowing for quick multiplication, division, the extraction of square roots, and sometimes the calculation of trigonometric functions.
- Behrens, Lawrence; Rosen, Leonard J. (1982). Writing and reading across the curriculum. Little, Brown. p. 273.
Then, just a decade ago, the invention of the pocket calculator made the slide rule obsolete almost overnight...
- Maor, Eli (2009). e: The Story of a Number. Princeton University Press. p. 16. ISBN 978-0-691-14134-3.
Then in the early 1970s the first electronic hand-held calculators appeared on the market, and within ten years the slide rule was obsolete.
- Castleden, Rodney (2007). Inventions that Changed the World. Futura. p. 157. ISBN 978-0-7088-0786-6.
With the invention of the calculator the slide rule became instantly obsolete.
- Denning, Peter J.; Metcalfe, Robert M. (1998). Beyond calculation: the next fifty years of computing. Springer. p. xiv. ISBN 978-0-387-98588-6.
The first hand calculator appeared in 1972 and made the slide rule obsolete overnight.
- "instruction manual". sphere.bc.ca. pp. 7–8. Retrieved March 14, 2007.
- "AntiQuark: Slide Rule Tricks". antiquark.com.
- "Slide Rules". Tbullock.com. 2009-12-08. Retrieved 2010-02-20.
- At least one circular rule, a 1931 Gilson model, sacrificed some of the scales usually found in slide rules in order to obtain additional resolution in multiplication and division. It functioned through the use of a spiral C scale, which was claimed to be 50 feet and readable to five significant figures. See http://www.sphere.bc.ca/test/gilson/gilson-manual2.jpg. A photo can be seen at http://www.hpmuseum.org/srcirc.htm. An instruction manual for the unit marketed by Dietzgen can be found at http://www.sliderulemuseum.com/SR_Library_General.htm. All retrieved March 14, 2007.
- "336 (Teknisk Tidskrift / 1933. Allmänna avdelningen)". Runeberg.org. Retrieved 2010-02-20.
- "Cameron's Nautical Slide Rule", The Practical Mechanic and Engineer's Magazine, April 1845, p187 and Plate XX-B
- Kells, Lyman M.; Kern, Willis F.; Bland, James R. (1943). The Log-Log Duplex Decitrig Slide Rule No. 4081: A Manual. Keuffel & Esser. p. 92. Archived from the original on 14 February 2009.
- The Polyphase Duplex Slide Rule, A Self-Teaching Manual, Breckenridge, 1922, p. 20.
- "Lot 25368 Buzz Aldrin's Apollo 11 Slide Rule - Flown to the Moon. ... 2007 September Grand Format Air & Space Auction #669". Heritage Auctions. Retrieved 3 September 2013.
- "Slide Rule, 5-inch, Pickett N600-ES, Apollo 13". Smithsonian National Air and Space Museum. Retrieved 3 September 2013.
- Charles Overton Harris, Slide rule simplified, American Technical Society, 1961, p. 5.
- "Prosthaphaeretic Slide Rule: A Mechanical Multiplication Device Based On Trigonometric Identities, The | Mathematics And Computer Education | Find Articles At Bnet". Findarticles.com. 2009-06-02. Retrieved 2010-02-20.
- "Fisher sizing rules". natgasedu.com. Archived from the original on 6 January 2010. Retrieved 2009-10-06.
- "The Wang LOCI-2". oldcalculatormuseum.com.
- Wang Laboratories (December 1966). "Now you can determine Copolymer Composition in a few minutes at your desk". American Chemical Society 38 (13): 62A–63A. doi:10.1021/ac50155a005. Retrieved 2010-10-29.
- "The HP 9100 Project". hp9825.com.
- Volder, J. E. (2000). "The Birth of CORDIC". dx.doi.org (J. VLSI Signal Processing) 25 (2): 101. doi:10.1023/A:1008110704586.
- Stoll, Cliff. "When Slide Rules Ruled," Scientific American, May 2006, pp. 80–87. "The difficulty of learning to use slide rules discouraged their use among the hoi polloi. Yes, the occasional grocery store manager figured discounts on a slipstick, and this author once caught his high school English teacher calculating stats for trifecta horse-race winners on a slide rule during study hall. But slide rules never made it into daily life because you could not do simple addition and subtraction with them, not to mention the difficulty of keeping track of the decimal point. Slide rules remained tools for techies."
- Watson, George H. "Problem-based learning and the three C's of technology," The Power of Problem-Based Learning, Barbara Duch, Susan Groh, Deborah Allen, eds., Stylus Publishing, LLC, 2001. "Numerical computations in freshman physics and chemistry were excruciating; however, this did not seem to be the case for those students fortunate enough to already own a calculator. I vividly recall that at the end of 1974, the students who were still using slide rules were given an additional 15 minutes on the final examination to compensate for the computational advantage afforded by the calculator, hardly adequate compensation in the opinions of the remaining slide rule practitioners."
- Stoll, Cliff. "When Slide Rules Ruled," Scientific American, May 2006, pp. 80–87. "With computation moving literally at a hand's pace and the lack of precision a given, mathematicians worked to simplify complex problems. Because linear equations were friendlier to slide rules than more complex functions were, scientists struggled to linearize mathematical relations, often sweeping high-order or less significant terms under the computational carpet. So a car designer might calculate gas consumption by looking mainly at an engine's power, while ignoring how air friction varies with speed. Engineers developed shortcuts and rules of thumb. At their best, these measures led to time savings, insight and understanding. On the downside, these approximations could hide mistakes and lead to gross errors."
- Stoll, Cliff. "When Slide Rules Ruled", Scientific American, May 2006, pp. 80–87. "One effect was that users felt close to the numbers, aware of rounding-off errors and systematic inaccuracies, unlike users of today's computer-design programs. Chat with an engineer from the 1950s, and you will most likely hear a lament for the days when calculation went hand-in-hand with deeper comprehension. Instead of plugging numbers into a computer program, an engineer would understand the fine points of loads and stresses, voltages and currents, angles and distances. Numeric answers, crafted by hand, meant problem solving through knowledge and analysis rather than sheer number crunching."
- "Citizen Watch Company – Citizen Eco-Drive / US, Canada, UK, IrelandCitizen Watch". citizenwatch.com.
- "Greg's Slide Rules - Links to Slide Rule Collectors". Sliderule.ozmanor.com. 2004-07-29. Retrieved 2010-02-20.
- "About CONCISE". Concise.co.jp. Archived from the original on 2012-03-12. Retrieved 2010-02-20.
- "Slide Rule". ThinkGeek. Archived from the original on 2010-03-27. Retrieved 2015-04-08.
- "Slide Rule". ThinkGeek. Archived from the original on 2012-10-12. Retrieved 2015-04-08.
- "Rechenschieber". Faber-Castell. Retrieved 2012-01-17.
- General information, history
- International Slide Rule Museum
- The history, theory and use of the engineering slide rule — By Dr James B. Calvert, University of Denver
- United Kingdom Slide Rule Circle Home Page
- Oughtred Society Slide Rule Home Page — Dedicated to the preservation and history of slide rules
- Rod Lovett's Slide Rules - Comprehensive Aristo site with many search facilities
- "Slide rule". New International Encyclopedia. 1905.
- "Slide-rule". Encyclopedia Americana. 1920.
- Reglas de Cálculo — A very big Faber Castell collection
- Collection of slide rules — French Slide Rules (Graphoplex, Tavernier-Gravet and others)
- Eric's Slide Rule Site — History and use | http://www.pepysdiary.com/encyclopedia/6104/ |
4.15625 | Scientists studying the oceans depend on data from rivers to estimate how much fresh water and natural elements the continents are dumping into the oceans. But a new study in the Aug. 24 issue of Science finds that water quietly trickling along underground may double the amount of debris making its way into the seas. This study changes the equation for everything from global climate to understanding the ocean's basic chemistry.
Since the late 1990s, Asish Basu, professor of earth and environmental sciences at the University of Rochester, has been sampling water and sediments from two of the world's largest rivers, the Ganges and the Brahmaputra of the Indian subcontinent, to understand a period in Earth's history called the Great Cool-Down.
Forty million years ago, the global climate changed from the steamy world of the dinosaurs to the cooler world of today, largely because the amount of carbon dioxide, a greenhouse gas in the atmosphere, dropped significantly. Scientists have speculated that the cause of this cooling and the decline in atmospheric carbon dioxide was the result of the rise of the Himalayan mountains as the Indian and Asian continental plates pushed into one another.
They believe the erosion of the new mountains increased the rate of removal of carbon dioxide from the atmosphere since the process of weathering silicate rocks such as those in the Himalayas absorbs carbon dioxide. This erosion may have depleted the atmosphere of a potent greenhouse gas and triggered the Great Cool-Down.
Coinciding with the cooling period and Himalayan uplift 40 million years ago was a consistent change in the ratio of two isotopes of the element strontium in the oceans' water -- a change that continues to this day.
Since strontium often comes from eroding silicates, it seemed obvious to scientists that the Ganges and Brahmaputra rivers were simply eroding the Himalayas into the ocean, but when they measured the amount of strontium in those rivers, they found it was far too low to account for the mysterious ratio change in the oceans, and thus too low to account for triggering the cool-down.
To determine if enough silicate had eroded to spark the climate change, Basu and his colleagues analyzed both ground water and river water samples from the Bengal delta where the Ganges and Brahmaputra rivers empty. They found the missing strontium and confirmed the culprit that nudged down the thermostat.
"Deep underground in the Bengal Basin, strontium concentration levels in the ground water are approximately 10 times higher than in the Ganges and Brahmaputra river waters," Basu explains.
Knowing the speed the water is moving underground, Basu and his team calculated how much strontium could be leached out of the Bengal Basin and into the Indian Ocean. They calculated that about 1.4 times more strontium flows into the ocean through the groundwater than through the rivers above, easily enough to account for the 40-million-year rise.
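The comparison reported here comes down to multiplying a concentration ratio by a water-flow ratio; a rough sketch of that arithmetic, in which only the ~10x concentration figure and the ~1.4x result are from the article and the flow value is an illustrative assumption:

    def groundwater_to_river_flux_ratio(concentration_ratio, flow_ratio):
        """Relative strontium flux = (concentration ratio) x (water flow ratio)."""
        return concentration_ratio * flow_ratio

    # With ~10x higher concentration, a flow ratio near 0.14 would reproduce the
    # reported ~1.4x groundwater-to-river flux (the flow value is assumed, not sourced).
    print(groundwater_to_river_flux_ratio(10, 0.14))  # ~1.4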
This study has other impacts in understanding ocean chemistry. "This means that we have to re-evaluate the residence times, the time a particular element remains in the ocean water before settling out, of various chemical elements and species," says Basu.
"Most current studies on the ocean's chemistry are based on the supposition that the global rivers are the only carriers responsible for bringing in dissolved materials to the oceans. Our study changes that perception permanently."
In addition, since the oceans are the biggest factor driving global weather, doubling the influx of fresh water means that global climate models must be restructured as well. Fresh water is lighter than salt water and so tends to float to the surface in the sea. This difference in density could move volumes of warm and cold water in ways that scientists gauging only the water's temperature would not normally predict.
Working with Basu on the project were Stein Jacobsen of Harvard University, Robert Poreda and Carolyn Dowling of the University of Rochester, and Pradeep Agarwal of the International Atomic Energy Agency in Vienna, Austria. The research was partially supported by grants from the National Science Foundation.
University of Rochester
| http://www.spacedaily.com/news/iceage-01d.html |
4.40625 | Since the observations made by English naturalist Charles Darwin on the Galapagos Islands, researchers have been interested in how physical barriers, such as isolation on a particular island, can lead to the formation of new species through the process of natural selection. Natural selection is a process whereby heritable traits that enhance survival become more common in successive generations, while unfavorable heritable traits become less common. Over time, animals and plants that have morphologies or other attributes that enhance their suitability to a particular environment become more common and more adapted to that specific environment.
Researchers today are intimately familiar with how physical barriers and reproduction isolation can lead to the formation of new species on land, especially among plants and animals with short generation times such as insects and annual plants. Michael E. Hellberg, associate professor in the Department of Biological Sciences at LSU, however, is interested in a more obscure form of speciation: the speciation of animals in the ocean.
"Marine plants and animals can drift around in the ocean extremely long distances," Hellberg said. "So how do they specialize?"
In a recent publication in the Proceedings of the National Academy of Sciences, or PNAS, Hellberg and his graduate student Carlos Prada investigate how corals specialize to particular environments in the ocean. Corals, animals that form coral reefs and some of the most diverse ecosystems in the world, start their lifecycle with a free floating larval stage. Coral larvae can disperse vast distances in open water. Different coral species share similar geographical locations, with different species often existing only yards apart. As Prada and Hellberg propose in their recent publication, the large dispersal potential of coral larvae in open water and the proximity of different species on the ocean floor creates a mystery for researchers who study speciation. Hellberg and Prada ask, "How can new marine species emerge without obvious geographic isolation?"
When it comes to corals within the relatively small confines of the Caribbean, which spans approximately 3 million square kilometers, the key to the puzzle appears to be habitat depth in the ocean. In others words, natural selection has led to the formation of different coral species according to how deep in the ocean these different corals grow.
Prada and Hellberg study candelabrum corals of the genus Eunicea, generally known as "sea fans," for which sister species have been shown to be segregated by ocean depth. One sister species survives better in shallow waters, while the other is better adapted to deep waters. These corals, like other corals, are very slow-growing animals. In fact, sea fan corals don't reach reproduction age until they are 15-30 years old, and can continue reproducing until they are 60 or more years old. So while candelabrum coral larvae can disperse large distances from their parents, landing and beginning to grow in either shallow or deep water habitats, small differences in survival rates at different depths between the two species and long generation times can combine to produce segregation.
"When these coral larvae first settle out after dispersal, they are all mixed up," Hellberg said. "But long larvae-to-reproduction times can compound small differences in survival at different depths. By the time these corals get to reproduction age, a lot has changed."
The shallow water sea fan coral even has a different morphology than its deep water sister. The shallow water coral fans out into a wide network of branches, while the deep water coral grows tall and spindly. According to Hellberg, these differences in morphology might well be genetic, with the different corals having different protein structures and levels of expression that are better adapted to their specific water depth environment. Hellberg hopes in future research to investigate the genetic basis of these different morphologies.
In other interesting results, Prada explained how transplanting the shallow coral species to deep water environments, and vice versa, can cause the coral to take on a morphology more like that of its sister species.
"Their morphologies are not super fixed," Prada said. "But they can't change all the way to a different morphology."
Prada observed that while shallow water sea fans can become taller and more spindly when transplanted in deep water environments, they don't seem to be able to make a complete transition to the morphology of the deep water sea fan. This suggests that the two corals, while they likely had a common ancestor, have adapted genetically and biochemically to their respective water depths.
Prada did ocean dives in the Bahamas, Panama, Puerto Rico and Curaçao to sample candelabrum coral colonies. Back in the lab, he performed tests on the coral samples' genes to determine how shallow and deep corals become genetically different.
"Normally, organisms are differentiated by geography," Prada said. "But these corals are differentiated by depth."
Prada and Hellberg's research provides new insights into how new species form in the ocean, a topic of relatively limited research as opposed to speciation of terrestrial organisms.
|Contact: Ashley Berthelot|
Louisiana State University | http://www.bio-medicine.org/biology-news-1/LSU-professor-discovers-how-new-corals-species-form-in-the-ocean-28752-1/ |
4.09375 | Birthplace: Saukenuk, Ill.
In the late 18th century, the Indians of the upper Mississippi Valley witnessed the replacement of the relatively sympathetic French, Spanish, and British with the aggressive Americans pushing westward. The Sauk warrior, Black Hawk, resisted the pioneers and fought to have Indians retain their lands and traditions. Black Hawk was especially incensed by an 1804 treaty between the Sauk and Fox tribes and the United States that ceded all tribal lands east of the Mississippi. The treaty had never been ratified by the tribe, and Black Hawk repeatedly condemned it as spurious.
Black Hawk and his band of warriors fought on the side of the British during the War of 1812, hoping to halt the American westward expansion. While Black Hawk was fighting the United States, the young Keokuk, who was friendlier towards the Americans, became leader of the Sauk and Fox. By 1814, Black Hawk and his forces defeated the Americans, who were under the command of Gen. Zachary Taylor. Treaties were signed in 1816 and an uneasy peace reigned until 1832 when the Black Hawk War broke out. The Sauk, Fox, and other tribes refused to move westward to accommodate the increasingly large population of American pioneers. President Andrew Jackson sent troops and a massacre at the Bad Axe River ended the war and resulted in Black Hawk's imprisonment for several months. After Black Hawk's defeat, Keokuk, who had maintained good relations with the U.S. government, was granted a tract of land in Iowa for his people. Black Hawk joined what remained of his tribe in Iowa and died there in 1838. Died: 1838 | http://www.infoplease.com/ipa/A0909619.html |
4.1875 | X-ray crystallography, the study of crystal structures through X-ray diffraction techniques. When an X-ray beam bombards a crystalline lattice in a given orientation, the beam is scattered in a definite manner characterized by the atomic structure of the lattice. This phenomenon, known as X-ray diffraction, occurs when the wavelength of the X-rays and the interatomic distances in the lattice have the same order of magnitude. In 1912, the German scientist Max von Laue predicted that crystals exhibit diffraction qualities. Concurrently, W. Friedrich and P. Knipping created the first photographic diffraction patterns. A year later Lawrence Bragg successfully analyzed the crystalline structures of potassium chloride and sodium chloride using X-ray crystallography, and developed a rudimentary treatment for X-ray/crystal interaction (Bragg's Law). Bragg's research provided a method to determine a number of simple crystal structures for the next 50 years. In the 1960s, the capabilities of X-ray crystallography were greatly improved by the incorporation of computer technology. Modern X-ray crystallography provides the most powerful and accurate method for determining single-crystal structures. Structures containing 100–200 atoms can now be analyzed on the order of 1–2 days, whereas before the 1960s a 20-atom structure required 1–2 years for analysis. Through X-ray crystallography the chemical structures of thousands of organic, inorganic, organometallic, and biological compounds are determined every year.
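Bragg's Law itself is not stated in the entry above; for clarity, a standard formulation (added here for reference, not quoted from the source) can be written in LaTeX as

$n\lambda = 2d\sin\theta$

where $n$ is the diffraction order (a positive integer), $\lambda$ is the X-ray wavelength, $d$ is the spacing between adjacent lattice planes, and $\theta$ is the angle between the incident beam and those planes. Constructive interference, and hence a diffraction peak, occurs only when this condition is satisfied, which is why diffraction requires the wavelength and the interatomic distances to be of the same order of magnitude.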
See M. Buerger, X-Ray Crystallography (1980).
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved. | http://www.factmonster.com/encyclopedia/science/x-ray-crystallography.html |
4.125 | 1789 to Present
Article I, section 7 of the Constitution grants the President the authority to veto legislation passed by Congress. This authority is one of the most significant tools the President can employ to prevent the passage of legislation. Even the threat of a veto can bring about changes in the content of legislation long before the bill is ever presented to the President. The Constitution provides the President 10 days (excluding Sundays) to act on legislation or the legislation automatically becomes law. There are two types of vetoes: the “regular veto” and the “pocket veto.”
The regular veto is a qualified negative veto. The President returns the unsigned legislation to the originating house of Congress within a 10-day period, usually with a memorandum of disapproval or a "veto message." Congress can override the President's decision if it musters the necessary two-thirds vote of each house. President George Washington issued the first regular veto on April 5, 1792. The first successful congressional override occurred on March 3, 1845, when Congress overrode President John Tyler's veto of S. 66.
The pocket veto is an absolute veto that cannot be overridden. The veto becomes effective when the President fails to sign a bill after Congress has adjourned, leaving Congress unable to override the veto. The authority of the pocket veto is derived from the Constitution's Article I, section 7: "the Congress by their adjournment prevent its return, in which case, it shall not be a law." Over time, Congress and the President have clashed over the use of the pocket veto, debating the term "adjournment." The President has attempted to use the pocket veto during intra- and inter-session adjournments and Congress has denied this use of the veto. The Legislative Branch, backed by modern court rulings, asserts that the Executive Branch may only pocket veto legislation when Congress has adjourned sine die from a session. President James Madison was the first President to use the pocket veto in 1812.
|Congresses||President||Regular Vetoes||Pocket Vetoes||Total Vetoes||Vetoes Overridden|
|19th–20th||John Quincy Adams||.....||.....||.....||.....|
|25th–26th||Martin Van Buren||.....||1||1||.....|
|27th||William Henry Harrison||.....||.....||.....||.....|
|29th–30th||James K. Polk||2||1||3||.....|
|41st–44th||Ulysses S. Grant||45||48||93||4|
|45th–46th||Rutherford B. Hayes||12||1||13||1|
|47th||James A. Garfield||.....||.....||.....||.....|
|47th–48th||Chester A. Arthur||4||8||12||1|
|61st–62nd||William H. Taft||30||9||39||1|
|67th||Warren G. Harding||5||1||6||.....|
|71st–72nd||Herbert C. Hoover||21||16||37||3|
|73rd–79th||Franklin D. Roosevelt||372||263||635||9|
|79th–82nd||Harry S. Truman||180||70||250||12|
|83rd–86th||Dwight D. Eisenhower||73||108||181||2|
|87th–88th||John F. Kennedy||12||9||21||.....|
|88th–90th||Lyndon B. Johnson||16||14||30||.....|
|91st–93rd||Richard M. Nixon||26||17||43||7|
|93rd–94th||Gerald R. Ford||48||18||66||12|
|95th–96th||James Earl Carter||13||18||31||2|
|101st–102nd||George H. W. Bush1||29||15||44||1|
|103rd–106th||William J. Clinton2||36||1||37||2|
|107th–110th||George W. Bush3||12||.....||12||4|
|111th–114th||Barack H. Obama4||8||.....||8||.....|
1President George H. W. Bush withheld his signature from two measures during intrasession recess periods (H.J. Res. 390, 101st Congress, 1st sess. and S. 1176, 102nd Congress, 1st sess.). See, “Permission to Insert in the Record Correspondence of the Speaker and the Minority Leader to the President Regarding Veto of House Joint Resolution 390, Authorizing Hand Enrollment of H.R. 1278, Financial Institutions Reform, Recovery and Enforcement Act of 1989, Along With Response From the Attorney General (House of Representatives - January 23, 1990),” Congressional Record, 101st Cong., 2nd sess., (January 23, 1990): H3. See, “Morris K. Udall Scholarship and Excellence in National Environmental and Native American Public Policy Act of 1992 (House of Representatives - March 03, 1992),” Congressional Record, 102nd Cong., 2nd sess., (March 3, 1992): H885-H889. The President withheld his signature from another measure during an intrasession recess period (H.R. 2699, 102nd Congress, 1st sess.) and from a measure during an intersession recess period (H.R. 2712, 101st Congress, 1st sess.) but returned both measures to the House, which proceeded to reconsider them. The measures are not included as pocket vetoes in this table.
2President William J. Clinton withheld his signature from two measures during intrasession recess periods (H.R. 4810, 106th Congress, 2nd sess., and H.R. 8, 106th Congress, 2nd sess.) but returned the bills to the House, which proceeded to reconsider them. See, “Pocket-Veto Power -- Hon. J. Dennis Hastert – (Extensions of Remarks - September 19, 2000),” Congressional Record, 106th Cong., 2nd sess., (September 19, 2000): E1523. The bills are not included as pocket vetoes in this table.
3President George W. Bush withheld his signature from a measure during an intersession recess period (H.R. 1585, 110th Congress, 1st Sess.) but returned the bill to the House, which proceeded to reconsider it. See, “Pocket-Veto Power – (Extensions of Remarks – October 2, 2008),” Congressional Record, 110th Cong., 1st Sess., (October 2, 2008): E2197. The bill is not included as a pocket veto in this table.
4President Barack H. Obama withheld his signature from a measure during an intersession recess period (H.J. Res 64, 111th Congress, 1st sess.) and from a measure during an intrasession recess period (H.R. 3808, 111th Congress, 2nd sess.) but returned both measures to the House, which proceeded to reconsider them. “Pocket-Veto Power – (Extensions of Remarks – May 26, 2010),” Congressional Record, 111th Cong., 1st sess., (May 26, 2010): E941. The measures are not included as pocket vetoes in this table. | http://history.house.gov/Institution/Presidential-Vetoes/Presidential-Vetoes/ |
4.09375 | The Elves and the Shoemaker Teacher Resources
Find The Elves and the Shoemaker educational ideas and activities
Folk and Fairy Tale Readers: The Elves and the Shoemaker
Engage young readers in a unit on fairy tales with their very own copy of the classic story "The Elves and the Shoemaker." Including dialogue, three-syllable words, and up to five lines of text on a page, this printable book is best...
Pre-K - 3rd English Language Arts CCSS: Adaptable
Guided Reading Lesson: The Elves and the Shoemaker
Students participate in a guided reading lesson for the Level H book The Elves and the Shoemaker. In this guided reading lesson, 2nd graders learn vocabulary associated with the book and focus on inferences, connections, and other...
2nd English Language Arts
Library Skills and Literature
The library is such a valuable resource for kids of all ages. Help elementary readers learn all about parts of the library, text features for both fiction and nonfiction text, and different ways to find books that they want to read.
K - 5th English Language Arts CCSS: Adaptable
Discovering Shoes, Step by Step
Students discuss and write about their interpretations of the art. They compare and contrast the numerous types of shoes and how they are used in a certain time and place. Students memorize Shel Silverstein's poem, "Ickle Me, Pickle...
4th - 6th English Language Arts | http://www.lessonplanet.com/lesson-plans/the-elves-and-the-shoemaker |
4.4375 | An Introduction to Vegetative Reproduction
Vegetative reproduction is a form of asexual reproduction in plants. It does not involve flowers, pollination and seed production. Instead, a new plant grows from a vegetative part, usually a stem, of the parent plant. However, plants which reproduce asexually almost always reproduce sexually as well, bearing flowers, fruits and seeds. Vegetative reproduction from a stem usually involves the buds. Instead of producing a branch, the bud grows into a complete plant which eventually becomes self-supporting. Since no gametes are involved, the plants produced asexually have identical genomes and the offspring form what is known as a clone. In some cases of vegetative reproduction, the structures involved also become storage organs and swell with stored food, e.g. potatoes.
The principal types of vegetative reproduction structures are bulbs, corms, rhizomes and runners.
Bulbs consist of very short stems with closely packed leaves arranged in concentric circles round the stem. These leaves are swollen with stored food e.g. onion. A terminal bud will produce next year’s flowering shoot and the lateral (axillary) buds will produce new plants.
Corms also have a short stem but in this case it is the stem itself which swells and stores food. The circular leaves form only papery scales. As with bulbs, the terminal bud grows into a flowering shoot and the lateral buds produce new plants.
Rhizomes are stems which grow horizontally under the ground. In some cases the underground stems are swollen with food reserves e.g. iris. The terminal bud turns upwards to produce the flowering shoot and the lateral buds may grow out to form new rhizomes.
Runners are also horizontal stems growing from the parent plant, but they grow above ground. When their terminal buds touch the ground they take root and produce new plants.
Advantages of vegetative reproduction
Since food stores are available throughout the year and the parent plant with its root system can absorb water from quite a wide area, two of the hazards of seed germination are reduced. Buds are produced in an environment where the parent is able to flourish, but many seeds dispersed from plants never reach a suitable situation for effective germination.
Vegetative reproduction does not usually result in rapid and widespread distribution of offspring in the same way as seed dispersal, but tends to produce a dense clump of plants with little room for competitors between them. Such groups of plants are very persistent and, because of their buds and underground food stores, can still grow after their foliage has been destroyed by insects, fire, or cultivation. Those of them regarded as weeds are difficult to eradicate, since even a small piece of rhizome bearing a bud can give rise to a new colony (clone).
Bulbs - Snowdrop
In the snowdrop and daffodil, the bulb is formed by the leaf bases which completely encircle the short, conical stem. The part of the leaf above ground makes food by photosynthesis and sends it to the leaf bases which swell as they store the food. In the following year the stored food is used for the early growth of the bulb.
Life cycle of Daffodil
In the spring, adventitious roots grow out of the stem, and the leaves begin to grow above ground, making use of the stored food in the fleshy leaf bases, which consequently shrivel. During late spring some of the food made in the leaves of the daffodil is sent to the leaf bases, which swell and form a new bulb inside the old one.
Life cycle of Tulip and Onion
In these bulbs, the food is not sent to the leaf bases but to the lateral buds. As these buds enlarge they form two or more ‘daughter’ bulbs inside the old bulb. The leaves of the old bulb shrivel and dry out forming the dry scales which surround the daughter bulbs. In both cases, when the daughter bulbs grow, they form a clump, together with the parent bulb.
Corms - Crocus
Plants with bulbs store food in special leaves or leaf bases. Plants with corms store food in the stem, which is very short and swollen. When the foliage has died off, the leaf bases, where they encircle the short stem, form protective scaly coverings. A familiar corm is that of the crocus. Since the corm is a stem, it has lateral buds which can grow into new plants. The stem remains below ground all its life, only the leaves and flower stalk coming above ground.
Life Cycle of Corm
In Spring, the food stored in the corm enables the terminal bud to grow rapidly and produce leaves and flowers above ground. Later in the year, food made by the leaves is sent back, not to the old corm, but to the base of the stem immediately above it. This region swells and forms a new corm on top of the old, now shrivelled, corm. Some of the lateral buds on the old corm have also grown and produced new plants with corms.
The formation of one corm on top of another tends to bring the successive corms nearer and nearer to the soil surface. Adventitious roots develop from the base of the new corm. Once these have grown firmly into the soil, a region near their junction with the stem contracts and pulls the new corm down, keeping it at a constant level in the soil. Wrinkles can be seen on these contractile roots where shrinkage has taken place. Bulbs also have contractile roots which counteract the tendency in successive generations to grow out of the soil.
Rhizomes - Iris
In plants with rhizomes, the stem remains below ground but continues to grow horizontally. The old part of the stem does not die away as in bulbs and corms, but lasts for several years. In the iris, the terminal bud turns up and produces leaves and flowers above ground. The old leaf bases form circular scales round the rhizome, which is swollen with food reserves. Lateral buds grow into new rhizomes.
Life Cycle of Rhizome
The annual cycle of a rhizome is similar to that of a corm. In late spring/early summer, food from the leaves passes back to the rhizome, and a lateral bud uses it, grows horizontally underground, and so continues the rhizome. Other lateral buds produce new rhizomes which branch from the parent stem. The terminal buds of these branches curve upwards and produce new leafy shoots and flowers. Contractile, adventitious roots grow from the nodes of the underground stem and keep it at a constant depth.
Runners - Strawberry
Plants such as the strawberry have a very short stem, called a rootstock, with thin scale leaves. Foliage leaves and flowers grow from the buds in the axils of the scale leaves. Some of the lower buds produce shoots which grow horizontally over the surface of the ground and bear scale leaves and buds. The terminal buds of these runners turn up and produce daughter plants some distance away from the parent, the new plants developing adventitious roots. Later, the runner shrivels away. The runner does not store food but conducts it from the parent plant to the daughters, until they are well developed.
Stem tubers - potato
In the potato plant, lateral buds at the base of the stem produce shoots which grow laterally at first and then down into the ground. These are comparable to rhizomes, as they are underground stems with tiny scale leaves and lateral buds. They do not swell evenly along their length with stored food.
Annual cycle - potato
Food made in the leaves passes to the ends of these rhizomes, which swell and form the tubers we call potatoes. Since the potato tuber is a stem, it has leaves and lateral buds; these are the familiar 'eyes'. Each one of these can produce a new shoot in the following year, using the food stored in the tuber. The old tubers shrivel and rot away at the end of the season.
Blackberry stems form a rather different type of runner in which the main shoot forms the new individual. When the growing end of a shoot arches over and touches the ground, the terminal bud curves up, producing a new shoot which soon develops adventitious roots.
A bud or shoot from one plant is inserted into a cleft or under the bark on the stem of a closely related variety. The rooted portion is called the stock; the bud or shoot being grafted is the scion. The stock is obtained by growing a plant from seed then cutting away the shoot. The scion is a branch or a bud cut from a cultivated variety with the required characteristics of flower colour, fruit quality, etc.
Rose plants grown from seed would produce a wide variety of plants, only a few of which would retain all the desirable features of the parent plant. Most of them would be like wild roses. Similarly, most of the apple trees grown from seed would bear only small, sour ‘crab-apples'. By taking cuttings and making grafts, the inbred characteristics of the plant are preserved and you can guarantee that all the new individuals produced by this kind of artificial propagation will be the same.
It is possible to produce new individuals from certain plants by putting the cut end of a shoot into water or moist earth. Roots grow from the base of the stem into the soil while the shoot continues to grow and produce leaves.
In some cases the cut end of the stem may be treated with a rooting 'hormone' to promote root growth. Evaporation from the shoot is reduced by covering it with polythene or a glass jar. Carnations, geraniums and chrysanthemums are commonly propagated from cuttings.
Once a cell has become part of a tissue it usually loses the ability to reproduce. However, the nucleus of any plant cell still holds all the 'instructions' (genes) for making a complete plant and in certain circumstances they can be brought back into action. In laboratory conditions single plant cells can be induced to divide and grow into complete plants. One technique is to take a small piece of plant tissue from a root or stem and treat it with enzymes to separate it into individual cells. The cells are then provided with particular plant 'hormones' which induce cell division and, eventually, the formation of roots, stems and leaves.
An alternative method is to start with a small piece of tissue and place it on a nutrient jelly (agar). Cells in the tissue start to divide and produce many cells forming a shapeless mass called a callus. If the callus is then provided with the appropriate ‘hormones’ it develops into a complete plant.
© Copyright 2004 - 2016 D G Mackean & Ian Mackean. All rights reserved. | http://www.biology-resources.com/plants-vegetative-reproduction-01.html |
A Recycling for Kids theme need not be complicated or abstract. Instead, take advantage of daily classroom routines when planning recycling-type activities. Each day at snack or lunch time, set up a classification area on one table in your classroom. Make it one of your special helper's jobs. Students sort their garbage into three categories when they finish eating: recycling, compost, and landfill garbage.
Kids are very enthusiastic to do this and quickly learn what can be recycled, what goes in the compost and what ends up in the landfill.
Teach young children about recycling
- a plastic ice cream bucket with a lid for the compost
- a photocopy paper box lid for items that go in the garbage
- a photocopy paper box lid for items that can be recycled.
The size of the photocopy paper box lids allows the items to be spread out for easy counting.
Keep it Simple
To keep recycling for kids simple, have the special helper weigh the compost (with help) and then point to each piece of recycling and garbage in the lids as the whole class counts along.
The same student records the observations on a chart similar to the sample above. The teacher records the date as the students suggest letters and assists the students when necessary.
The children benefit from the extra practice counting the recycling and garbage items, and recording, or watching, the numbers being recorded under the correct pictures on the chart.
Each day use the chart as a teaching tool, reinforcing the children's knowledge of numbers, letters and letter sounds. In many schools, older students pick up the compost after lunch for the school garden. The recycling items go into the class blue box and the landfill garbage goes into the wastebasket. Next, have the students record their classroom activities. | http://www.kindergarten-lessons.com/recycling_for_kids/ |
4.59375 | Science Fair
1. Present Scientific Method slides containing information on a display board. Exhibit the experiment with the display board.
2. Parents are encouraged to assist children but are asked to maintain the role of assistant only.
3. If parents type the display board information, they should have children dictate the information as it is being typed.
4. Construction paper or colored paper should be used behind all display board items to give a neat, organized, and creative effect.
5. Students are expected to use these Scientific Method slides in the given order for continuity on the display boards as it is helpful to judges. Students may change the theme of the slides to make their presentations more unique or creative.
Remember that the Science Fair should be a fun method of learning and is intended to help in preparing students for the STAAR test.
Science Fair Project
Type/write your project title here. Put your assigned number without your name for the display board.
QUESTION
Type/write your question here. (This is the question that your experiment answers.)
Materials
• Type/write a detailed list of the items you need to complete your experiments.
• Be specific about the amounts used.
Procedure
• List all of the steps used in completing your experiment.
• Remember to number your steps.
• If possible, add photos of your experiments.
Research
Summarize your research here in five or more bullet points; do not give more than ten points, however.
• 1st bullet point
• 2nd bullet point
• 3rd bullet point
• 4th bullet point
• 5th bullet point
Hypothesis
What do you think the result of your experiment will show? It is okay if you are incorrect.
Variables
• Controlled variables: These are the things that are kept the same throughout your experiments.
• Independent variable: The ONE variable that you purposely change and test.
• For example, a plant experiment would use the type of plant as the control variable while the type of soil may be the independent variable that changes.
Data/Observations
• It is easier to understand the data (information) if it is put into a table or graph.
• Draw a graph or make one in a software package such as PowerPoint or Microsoft Excel.
• Make sure all data (information) is clearly labeled.
Conclusion
Type/write a brief summary here of your discovery based on the results of the experiment. Indicate whether or not the data (information) supports the hypothesis and explain why or why not.
Works Cited
• Be sure to list books, magazines, encyclopedias, and Internet sources that you used.
• Put them in alphabetical order.
Presentation
• Students will be expected to stand beside their projects on Parent Night and during the Science class period as visitors tour the exhibits.
• The students' behavior while presenting their information will be included in the grade for the Science Fair.
• Each topic will be judged separately.
• Topics include: Earth Science, Life Science, and Physical Science.
• The students' grades will be determined by Mrs. Hagood and will not be influenced by judges' decisions.
• Scoring will be based on how well students meet the criteria.
Unacceptable Projects
• Remember that the experiment must prove a hypothesis and cannot be a model of how something works or a report on another person's research. For example, students cannot actually go into outer space to research how plants grow; therefore, a project of this type would be research. Also, a volcano could be created to show chemical reactions, with research on that area, but should not be presented as an experiment on how a volcano works, as vinegar and baking soda are not materials found in lava and cannot be utilized as an experiment on Earth processes where a variable is used to prove a hypothesis.
Tips on Placing in the Fair
• Select an experiment that few others have chosen.
• The more technical the experiment, the better.
• Colorful and creative displays are a plus.
• Multiple charts and graphs are beneficial.
• Research that goes beyond the basic requirement of five points but does not exceed ten points.
Remember to Follow Safety Procedures
• Students should wear safety glasses.
• Do not taste anything unless it is approved by a responsible adult.
• Wear protective clothing if needed.
• Avoid using dangerous materials or fire/heat unless supervised by a responsible adult.
• If animals are used, take precautions that prevent students from being bitten or harmed in any way.
• Use good judgment and common sense at all times.
Judging Sheet
Judges should score projects within a range of 1 to 5. Consider a 1 to be very low, a 2 is close to meeting the expectation, and a 3 should reflect that the student did what was expected. A 4 will be above average, with a 5 being a superior rating. If possible, it will be appreciated if scores are totaled.
Project Number ___________________
Project Title ______________________
Judge Number ____________________
1. Title Page includes Number & Tchr. ___________________
2. Question _________________
3. Materials _________________________________________
4. Procedure - detailed steps of experiment __________________
5. Research (Must have 5 facts) __________________________
6. Variables _________________________________________
   1. Controlled variable - is the step/material that stays the same
   2. Independent variable - is the step/material that changes.
7. Hypothesis - What student thinks will happen ______________
8. Data/Observations - In graph or table form ________________
9. Conclusion - What they discovered _____________________
10. Works Cited - Where information was found ________________
11. Neat - Board has slides displayed with construction paper backing and is neatly arranged __________________________________
12. Technical (Bonus Points) _______________________________
13. Creative (Bonus Points) _________________________________
Total ____________
4.28125 | Part of a series on the Spanish conquest of the Maya
The Spanish conquest of Yucatán was the campaign undertaken by the Spanish conquistadores against the Late Postclassic Maya states and polities in the Yucatán Peninsula, a vast limestone plain covering south-eastern Mexico, northern Guatemala, and all of Belize. The Spanish conquest of the Yucatán Peninsula was hindered by its politically fragmented state. The Spanish engaged in a strategy of concentrating native populations in newly founded colonial towns. Native resistance to the new nucleated settlements took the form of the flight into inaccessible regions such as the forest or joining neighbouring Maya groups that had not yet submitted to the Spanish. Among the Maya, ambush was a favoured tactic. Spanish weaponry included broadswords, rapiers, lances, pikes, halberds, crossbows, matchlocks and light artillery. Maya warriors fought with flint-tipped spears, bows and arrows and stones, and wore padded cotton armour to protect themselves. The Spanish introduced a number of Old World diseases previously unknown in the Americas, initiating devastating plagues that swept through the native populations.
The first encounter with the Yucatán Maya occurred in 1502, when the fourth voyage of Christopher Columbus came across a large Maya trading canoe off Honduras. In 1517, Francisco Hernández de Córdoba made landfall on the tip of the peninsula. His expedition continued along the coast and suffered heavy losses in a pitched battle at Champotón, forcing a retreat to Cuba. Juan de Grijalva explored the coast in 1518, and heard tales of the wealthy Aztec Empire further west. As a result of these rumours, Hernán Cortés set sail with another fleet. From Cozumel he continued around the peninsula to Tabasco where he fought a battle at Potonchán; from there Cortés continued onward to conquer the Aztec Empire. In 1524, Cortés led a sizeable expedition to Honduras, cutting across southern Campeche, and through Petén in what is now northern Guatemala. In 1527 Francisco de Montejo set sail from Spain with a small fleet. He left garrisons on the east coast, and subjugated the northeast of the peninsula. Montejo then returned to the east to find his garrisons had almost been eliminated; he used a supply ship to explore southwards before looping back around the entire peninsula to central Mexico. Montejo pacified Tabasco with the aid of his son, also named Francisco de Montejo.
In 1531 the Spanish moved their base of operations to Campeche, where they repulsed a significant Maya attack. After this battle, the Spanish founded a town at Chichen Itza in the north. Montejo carved up the province amongst his soldiers. In mid-1533 the local Maya rebelled and laid siege to the small Spanish garrison, which was forced to flee. Towards the end of 1534, or the beginning of 1535, the Spanish retreated from Campeche to Veracruz. In 1535, peaceful attempts by the Franciscan Order to incorporate Yucatán into the Spanish Empire failed after a renewed Spanish military presence at Champotón forced the friars out. Champotón was by now the last Spanish outpost in Yucatán, isolated among a hostile population. In 1541–42 the first permanent Spanish town councils in the entire peninsula were founded at Campeche and Mérida. When the powerful lord of Mani converted to the Roman Catholic religion, his submission to Spain and conversion to Christianity encouraged the lords of the western provinces to accept Spanish rule. In late 1546 an alliance of eastern provinces launched an unsuccessful uprising against the Spanish. The eastern Maya were defeated in a single battle, which marked the final conquest of the northern portion of the Yucatán Peninsula.
The polities of Petén in the south remained independent and received many refugees fleeing from Spanish jurisdiction. In 1618 and in 1619 two unsuccessful Franciscan missions attempted the peaceful conversion of the still pagan Itza. In 1622 the Itza slaughtered two Spanish parties trying to reach their capital Nojpetén. These events ended all Spanish attempts to contact the Itza until 1695. Over the course of 1695 and 1696 a number of Spanish expeditions attempted to reach Nojpetén from the mutually independent Spanish colonies in Yucatán and Guatemala. In early 1695 the Spanish began to build a road from Campeche south towards Petén and activity intensified, sometimes with significant losses on the part of the Spanish. Martín de Urzúa y Arizmendi, governor of Yucatán, launched an assault upon Nojpetén in March 1697; the city fell after a brief battle. With the defeat of the Itza, the last independent and unconquered native kingdom in the Americas fell to the Spanish.
- Yucatán before the conquest
- Impact of Old World diseases
- Weaponry, strategies and tactics
- First encounters: 1502 and 1511
- Francisco Hernández de Córdoba, 1517
- Juan de Grijalva, 1518
- Hernán Cortés, 1519
- Hernán Cortés in the Maya lowlands, 1524–25
- Francisco de Montejo, 1527–28
- Francisco de Montejo and Alonso d'Avila, 1531–35
- Conquest and settlement in northern Yucatán, 1540–46
- Petén Basin, 1618–97
- Further reading
The Yucatán Peninsula is bordered by the Caribbean Sea to the east and by the Gulf of Mexico to the north and west. It can be delimited by a line running from the Laguna de Términos on the Gulf coast through to the Gulf of Honduras on the Caribbean coast. It incorporates the modern Mexican states of Yucatán, Quintana Roo and Campeche, the eastern portion of the state of Tabasco, most of the Guatemalan department of Petén, and all of Belize. Most of the peninsula is formed by a vast plain with few hills or mountains and a generally low coastline. A 15-kilometre (9.3 mi) stretch of high, rocky coast runs south from the city of Campeche on the Gulf Coast. A number of bays are situated along the east coast of the peninsula, from north to south they are Ascensión Bay, Espíritu Santo Bay, Chetumal Bay and Amatique Bay. The north coast features a wide, sandy littoral zone. The extreme north of the peninsula, roughly corresponding to Yucatán State, has underlying bedrock consisting of flat Cenozoic limestone. To the south of this the limestone rises to form the low chain of Puuc Hills, with a steep initial scarp running 160 kilometres (99 mi) east from the Gulf coast near Champotón, terminating some 50 kilometres (31 mi) from the Caribbean coast near the border of Quintana Roo. The hills reach a maximum altitude of 170 metres (560 ft).
The northwestern and northern portions of the Yucatán Peninsula experience lower rainfall than the rest of the peninsula; these regions feature highly porous limestone bedrock resulting in less surface water. This limestone geology results in most rainwater filtering directly through the bedrock to the phreatic zone, from whence it slowly flows to the coasts to form large submarine springs. Various freshwater springs rise along the coast to form watering holes. The filtering of rainwater through the limestone has caused the formation of extensive cave systems. These cave roofs are subject to collapse, forming deep sinkholes; if the bottom of the cave is deeper than the groundwater level then a cenote is formed.
In contrast, the northeastern portion of the peninsula is characterised by forested swamplands. The northern portion of the peninsula lacks rivers, except for the Champotón River – all other rivers are located in the south. The Sibun River flows from west to east from south central Quintana Roo to Lake Bacalar on the Caribbean Coast; the Río Hondo flows northwards from Belize to empty into the same lake. Bacalar Lake empties into Chetumal Bay. The Río Nuevo flows from Lamanai Lake in Belize northwards to Chetumal Bay. The Mopan River and the Macal River flow through Belize and join to form the Belize River, which empties into the Caribbean Sea. In the southwest of the peninsula, the San Pedro River, the Candelaría River and the Mamantel River all form part of the Gulf of Mexico drainage.
The Petén region consists of a densely forested low-lying limestone plain featuring karstic topography. The area is crossed by low east–west oriented ridges of Cenozoic limestone and is characterised by a variety of forest and soil types; water sources include generally small rivers and low-lying seasonal swamps known as bajos. A chain of fourteen lakes runs across the central drainage basin of Petén; during the rainy season some of these lakes become interconnected. This drainage area measures approximately 100 kilometres (62 mi) east–west by 30 kilometres (19 mi) north–south. The largest lake is Lake Petén Itza, near the centre of the drainage basin; it measures 32 by 5 kilometres (19.9 by 3.1 mi). A broad savannah extends south of the central lakes. To the north of the lakes region bajos become more frequent, interspersed with forest. In the far north of Petén the Mirador Basin forms another interior drainage region. To the south the plain gradually rises towards the Guatemalan Highlands. The canopy height of the forest gradually decreases from Petén northwards, averaging from 25 to 35 metres (82 to 115 ft). This dense forest covers northern Petén and Belize, most of Quintana Roo, southern Campeche and a portion of the south of Yucatán State. Further north, the vegetation turns to lower forest consisting of dense scrub.
The climate becomes progressively drier towards the north of the peninsula. In the north, the annual mean temperature is 27 °C (81 °F) in Mérida. Average temperature in the peninsula varies from 24 °C (75 °F) in January to 29 °C (84 °F) in July. The lowest temperature on record is 6 °C (43 °F). For the peninsula as a whole, the mean annual precipitation is 1,100 millimetres (43 in). The rainy season lasts from June to September, while the dry season runs from October to May. During the dry season, rainfall averages 300 millimetres (12 in); in the wet season this increases to an average 800 to 900 millimetres (31 to 35 in). The prevailing winds are easterly and have created an east-west precipitation gradient with average rainfall in the east exceeding 1,400 millimetres (55 in) and the north and northwestern portions of the peninsula receiving a maximum of 800 millimetres (31 in). The southeastern portion of the peninsula has a tropical rainy climate with a short dry season in winter.
Petén has a hot climate and receives the highest rainfall in all Mesoamerica. The climate is divided into wet and dry seasons, with the rainy season lasting from June to December, although these seasons are not clearly defined in the south; with rain occurring through most of the year. The climate of Petén varies from tropical in the south to semitropical in the north; temperature varies between 12 and 40 °C (54 and 104 °F), although it does not usually drop beneath 18 °C (64 °F). Mean temperature varies from 24.3 °C (75.7 °F) in the southeast to 26.9 °C (80.4 °F) in the northeast. Highest temperatures are reached from April to June, while January is the coldest month; all Petén experiences a hot dry period in late August. Annual precipitation is high, varying from a mean of 1,198 millimetres (47.2 in) in the northeast to 2,007 millimetres (79.0 in) in central Petén.
Yucatán before the conquest
The first large Maya cities developed in the Petén Basin in the far south of the Yucatán Peninsula as far back as the Middle Preclassic (c. 600–350 BC), and Petén formed the heartland of the ancient Maya civilization during the Classic period (c. AD 250–900). The 16th century Maya provinces of northern Yucatán are likely to have evolved out of polities of the Maya Classic period. From the mid-13th century AD through to the mid-15th century, the League of Mayapán united several of the northern provinces; for a time they shared a joint form of government. The great cities that dominated Petén had fallen into ruin by the beginning of the 10th century AD with the onset of the Classic Maya collapse. A significant Maya presence remained in Petén into the Postclassic period after the abandonment of the major Classic period cities; the population was particularly concentrated near permanent water sources.
In the early 16th century, when the Spanish discovered the Yucatán Peninsula, the region was still dominated by the Maya civilization. It was divided into a number of independent provinces referred to as kuchkabal (plural kuchkabaloob) in the Yucatec Maya language. The various provinces shared a common culture but the internal sociopolitical organisation varied from one province to the next, as did access to important resources. These differences in political and economic makeup often led to hostilities between the provinces. The politically fragmented state of the Yucatán Peninsula at the time of conquest hindered the Spanish invasion, since there was no central political authority to be overthrown. However, the Spanish were also able to exploit this fragmentation by taking advantage of pre-existing rivalries between polities. Estimates of the number of kuchkabal in the northern Yucatán vary from sixteen to twenty-four. The boundaries between polities were not stable, being subject to the effects of alliances and wars; those kuchkabaloob with more centralised forms of government were likely to have had more stable boundaries than those of loose confederations of provinces. When the Spanish discovered Yucatán, the provinces of Mani and Sotuta were two of the most important polities in the region. They were mutually hostile; the Xiu Maya of Mani allied themselves with the Spanish, while the Cocom Maya of Sotuta became the implacable enemies of the European colonisers.
At the time of conquest, polities in the north included Mani, Cehpech and Chakan. Chakan was largely landlocked with a small stretch of coast on the north of the peninsula. Cehpech was a coastal province to its east; further east along the north coast were Ah Kin Chel, Cupul, and Chikinchel. The modern city of Valladolid is situated upon the site of the former capital of Cupul. Cupul and Chikinchel are known to have been mutually hostile, and to have engaged in wars to control the salt beds of the north coast. Tases was a small landlocked province south of Chikinchel. Ecab was a large province in the east. Uaymil was in the southeast, and Chetumal was to the south of it; all three bordered on the Caribbean Sea. Cochuah was also in the eastern half of the peninsula; it was southwest of Ecab and northwest of Uaymil. Its borders are poorly understood and it may have been landlocked, or have extended to occupy a portion of the Caribbean coast between the latter two kuchkabaloob. The capital of Cochuah was Tihosuco. Hocaba and Sotuta were landlocked provinces north of Mani and southwest of Ah Kin Chel and Cupul. Ah Canul was the northernmost province on the Gulf coast of the peninsula. Canpech (modern Campeche) was to the south of it, followed by Chanputun (modern Champotón). South of Chanputun, and extending west along the Gulf coast was Acalan. This Chontal Maya-speaking province extended east of the Usumacinta River in Tabasco, as far as what is now the southern portion of Campeche state, where their capital was located. In the southern portion of the peninsula, a number of polities occupied the Petén Basin. The Kejache occupied a territory to the north of the Itza and east of Acalan, between the Petén lakes and what is now Campeche, and to the west of Chetumal. The Cholan Maya-speaking Lakandon (not to be confused with the modern inhabitants of Chiapas by that name) controlled territory along the tributaries of the Usumacinta River spanning southwestern Petén in Guatemala and eastern Chiapas. The Lakandon had a fierce reputation amongst the Spanish.
Although there is insufficient data to accurately estimate population sizes at the time of contact with the Spanish, early Spanish reports suggest that sizeable Maya populations existed in Petén, particularly around the central lakes and along the rivers. Before their defeat in 1697 the Itza controlled or influenced much of Petén and parts of Belize. The Itza were warlike, and their martial prowess impressed both neighbouring Maya kingdoms and their Spanish enemies. Their capital was Nojpetén, an island city upon Lake Petén Itzá; it has developed into the modern town of Flores, which is the capital of the Petén department of Guatemala. The Itza spoke a variety of Yucatecan Maya. The Kowoj were the second in importance; they were hostile towards their Itza neighbours. The Kowoj were located to the east of the Itza, around the eastern Petén lakes: Lake Salpetén, Lake Macanché, Lake Yaxhá and Lake Sacnab. The Yalain appear to have been one of the three dominant polities in Postclassic central Petén, alongside the Itza and the Kowoj. The Yalain territory had its maximum extension from the east shore of Lake Petén Itzá eastwards to Tipuj in Belize. In the 17th century the Yalain capital was located at the site of that name on the north shore of Lake Macanché. At the time of Spanish contact the Yalain were allied with the Itza, an alliance cemented by intermarriage between the elites of both groups. In the late 17th century Spanish colonial records document hostilities between Maya groups in the lakes region, with the incursion of the Kowoj into former Yalain sites including Zacpeten on Lake Macanché and Ixlu on Lake Salpetén. Other groups in Petén are less well known, and their precise territorial extent and political makeup remains obscure; among them were the Chinamita, the Icaiche, the Kejache, the Lakandon Ch'ol, the Manche Ch'ol, and the Mopan.
Impact of Old World diseases
A single soldier arriving in Mexico in 1520 was carrying smallpox and thus initiated the devastating plagues that swept through the native populations of the Americas. The European diseases that ravaged the indigenous inhabitants of the Americas also severely affected the various Maya groups of the entire Yucatán Peninsula. Modern estimates of native population decline vary from 75% to 90% mortality. The terrible plagues that swept the peninsula were recorded in Yucatec Maya written histories, which combined with those of neighbouring Maya peoples in the Guatemalan Highlands, suggest that smallpox was rapidly transmitted throughout the Maya area the same year that it arrived in central Mexico with the forces under the command of Pánfilo Narváez. Old World diseases are often mentioned only briefly in indigenous accounts, making it difficult to identify the exact culprit. Among the most deadly were the aforementioned smallpox, influenza, measles and a number of pulmonary diseases, including tuberculosis; the latter disease was attributed to the arrival of the Spanish by the Maya inhabitants of Yucatán.
These diseases swept through Yucatán in the 1520s and 1530s, with periodic recurrences throughout the 16th century. By the late 16th century, the reports of high fevers suggest the arrival of malaria in the region, and yellow fever was first reported in the mid-17th century, with a terse mention in the Chilam Balam of Chumayel for 1648. That particular outbreak was traced back to the island of Guadeloupe in the Caribbean, from whence it was introduced to the port city of Campeche, and from there was transmitted to Mérida. Mortality was high, with approximately 50% of the population of some Yucatec Maya settlements being wiped out. Sixteen Franciscan friars are reported to have died in Mérida, probably the majority of the Franciscans based there at the time, who had probably numbered not much more than twenty before the outbreak. Those areas of the peninsula that experienced damper conditions, particularly those possessing swamplands, became rapidly depopulated after the conquest with the introduction of malaria and other waterborne parasites. An example was the one-time well-populated province of Ecab occupying the northeastern portion of the peninsula. In 1528, when Francisco de Montejo occupied the town of Conil for two months, the Spanish recorded approximately 5,000 houses in the town; the adult male population at the time has been conservatively estimated as 3,000. By 1549, Spanish records show that only 80 tributaries were registered to be taxed, indicating a population drop in Conil of more than 90% in 21 years. The native population of the northeastern portion of the peninsula was almost completely eliminated within fifty years of the conquest.
In the south, conditions conducive to the spread of malaria existed throughout Petén and Belize. At the time of the fall of Nojpetén in 1697, there are estimated to have been 60,000 Maya living around Lake Petén Itzá, including a large number of refugees from other areas. It is estimated that 88% of them died during the first ten years of colonial rule owing to a combination of disease and war. Likewise, in Tabasco the population of approximately 30,000 was reduced by an estimated 90%, with measles, smallpox, catarrhs, dysentery and fevers being the main culprits.
Weaponry, strategies and tactics
The Spanish engaged in a strategy of concentrating native populations in newly founded colonial towns, or reducciones (also known as congregaciones). Native resistance to the new nucleated settlements took the form of the flight of the indigenous inhabitants into inaccessible regions such as the forest or joining neighbouring Maya groups that had not yet submitted to the Spanish. Those that remained behind in the reducciones often fell victim to contagious diseases. An example of the effect on populations of this strategy is the province of Acalan, which occupied an area spanning southern Campeche and eastern Tabasco. When Hernán Cortés passed through Acalan in 1525 he estimated the population size as at least 10,000. In 1553 the population was recorded at around 4,000. In 1557 the population was forcibly moved to Tixchel on the Gulf coast, so as to be more easily accessible to the Spanish authorities. In 1561 the Spanish recorded only 250 tribute-paying inhabitants of Tixchel, which probably had a total population of about 1,100. This indicates a 90% drop in population over a 36-year span. Some of the inhabitants had fled Tixchel for the forest, while others had succumbed to disease, malnutrition and inadequate housing in the Spanish reducción. Coastal reducciones, while convenient for Spanish administration, were vulnerable to pirate attacks; in the case of Tixchel, pirate attacks and contagious European diseases led to the eradication of the reducción town and the extinction of the Chontal Maya of Campeche. Among the Maya, ambush was a favoured tactic.
Spanish weaponry and armour
The 16th-century Spanish conquistadors were armed with broadswords, rapiers, crossbows, matchlocks and light artillery. Mounted conquistadors were armed with a 3.7-metre (12 ft) lance, that also served as a pike for infantrymen. A variety of halberds and bills were also employed. As well as the one-handed broadsword, a 1.7-metre (5.5 ft) long two-handed version was also used. Crossbows had 0.61-metre (2 ft) arms stiffened with hardwoods, horn, bone and cane, and supplied with a stirrup to facilitate drawing the string with a crank and pulley. Crossbows were easier to maintain than matchlocks, especially in the humid tropical climate of the Caribbean region that included much of the Yucatán Peninsula.
Native weaponry and armour
Maya warriors entered battle against the Spanish with flint-tipped spears, bows and arrows and stones. They wore padded cotton armour to protect themselves. Members of the Maya aristocracy wore quilted cotton armour, and some warriors of lesser rank wore twisted rolls of cotton wrapped around their bodies. Warriors bore wooden or animal hide shields decorated with feathers and animal skins.
First encounters: 1502 and 1511
On 30 July 1502, during his fourth voyage, Christopher Columbus arrived at Guanaja, one of the Bay Islands off the coast of Honduras. He sent his brother Bartholomew to scout the island. As Bartholomew explored the island with two boats, a large canoe approached from the west, apparently en route to the island. The canoe was carved from one large tree trunk and was powered by twenty-five naked rowers. Curious as to the visitors, Bartholomew Columbus seized and boarded it. He found it was a Maya trading canoe from Yucatán, carrying well-dressed Maya and a rich cargo that included ceramics, cotton textiles, yellow stone axes, flint-studded war clubs, copper axes and bells, and cacao. Also among the cargo were a small number of women and children, probably destined to be sold as slaves, as were a number of the rowers. The Europeans looted whatever took their interest from amongst the cargo and seized the elderly Maya captain to serve as an interpreter; the canoe was then allowed to continue on its way. This was the first recorded contact between Europeans and the Maya. It is likely that news of the piratical strangers in the Caribbean passed along the Maya trade routes – the first prophecies of bearded invaders sent by Kukulkan, the northern Maya feathered serpent god, were probably recorded around this time, and in due course passed into the books of Chilam Balam.
In 1511 the Spanish caravel Santa María de la Barca set sail along the Central American coast under the command of Pedro de Valdivia. The ship was sailing to Santo Domingo from Darién to inform the colonial authorities there of ongoing conflict between conquistadors Diego de Nicuesa and Vasco Nuñez de Balboa in Darién. The ship foundered upon a reef known as Las Víboras ("The Vipers") or, alternatively, Los Alacranes ("The Scorpions"), somewhere off Jamaica. There were just twenty survivors from the wreck, including Captain Valdivia, Gerónimo de Aguilar and Gonzalo Guerrero. They set themselves adrift in one of the ship's boats, with bad oars and no sail; after thirteen days during which half of the survivors died, they made landfall upon the coast of Yucatán. There they were seized by Halach Uinik, a Maya lord. Captain Valdivia was sacrificed with four of his companions, and their flesh was served at a feast. Aguilar and Guerrero were held prisoner and fattened for killing, together with five or six of their shipmates. Aguilar and Guerrero managed to escape their captors and fled to a neighbouring lord who was an enemy of Halach Uinik; he took them prisoner and kept them as slaves. After a time, Gonzalo Guerrero was passed as a slave to the lord Nachan Can of Chetumal. Guerrero became completely Mayanised and served his new lord with such loyalty that he was married to one of Nachan Can's daughters, Zazil Ha, by whom he had three children. By 1514, Guerrero had achieved the rank of nacom, a war leader who served against Nachan Can's enemies.
Francisco Hernández de Córdoba, 1517
In 1517, Francisco Hernández de Córdoba set sail from Cuba with a small fleet, consisting of two caravels and a brigantine, with the dual intention of exploration and of rounding up slaves. The experienced Antón de Alaminos served as pilot; he had previously served as pilot under Christopher Columbus on his final voyage. Also among the approximately 100-strong expedition members was Bernal Díaz del Castillo. The expedition sailed west from Cuba for three weeks, and weathered a two-day storm a week before sighting the coast of the northeastern tip of the Yucatán Peninsula. The ships could not put in close to the shore due to the shallowness of the coastal waters. However, they could see a Maya city some two leagues inland, upon a low hill. The Spanish called it Gran Cairo (literally "Great Cairo") due to its size and its pyramids. Although the location is not now known with certainty, it is believed that this first sighting of Yucatán was at Isla Mujeres.
The following morning, the Spanish sent the two ships with a shallower draught to find a safe approach through the shallows. The caravels anchored about one league from the shore. Ten large canoes powered by both sails and oars rowed out to meet the Spanish ships. Over thirty Maya boarded the vessels and mixed freely with the Spaniards. The Maya visitors accepted gifts of beads, and the leader indicated with signs that they would return to take the Spanish ashore the following day.
The Maya leader returned the following day with twelve canoes, as promised. The Spanish could see from afar that the shore was packed with natives. The conquistadors put ashore in the brigantine and the ships' boats; a few of the more daring Spaniards boarded the native canoes. The Spanish named the headland Cape Catoche, after some words spoken by the Maya leader, which sounded to the Spanish like cones catoche. Once ashore, the Spaniards clustered loosely together and advanced towards the city along a path among low, scrub-covered hillocks. At this point the Maya leader gave a shout and the Spanish party was ambushed by Maya warriors armed with spears, bows and arrows, and stones. Thirteen Spaniards were injured by arrows in the first assault, but the conquistadors regrouped and repulsed the Maya attack. They advanced to a small plaza bordered by temples upon the outskirts of the city. When the Spaniards ransacked the temples they found a number of low-grade gold items, which filled them with enthusiasm. The expedition captured two Mayas to be used as interpreters and retreated to the ships. Over the following days the Spanish discovered that although the Maya arrows had struck with little force, the flint arrowheads tended to shatter on impact, causing infected wounds and a slow death; two of the wounded Spaniards died from the arrow-wounds inflicted in the ambush.
Over the next fifteen days the fleet slowly followed the coastline west, and then south. The casks brought from Cuba were leaking and the expedition was now running dangerously low on fresh water; the hunt for more became an overriding priority as the expedition advanced, and shore parties searching for water were left dangerously exposed because the ships could not pull close to the shore due to the shallows. On 23 February 1517, the day of Saint Lazarus, another city was spotted and named San Lázaro by the Spanish – it is now known by its original Maya name, Campeche. A large contingent put ashore in the brigantine and the ships' boats to fill their water casks in a freshwater pool. They were approached by about fifty finely dressed and unarmed Indians while the water was being loaded into the boats; they questioned the Spaniards as to their purpose by means of signs. The Spanish party then accepted an invitation to enter the city. They were led amongst large buildings until they stood before a blood-caked altar, where many of the city's inhabitants crowded around. The Indians piled reeds before the visitors; this act was followed by a procession of armed Maya warriors in full war paint, followed by ten Maya priests. The Maya set fire to the reeds and indicated that the Spanish would be killed if they were not gone by the time the reeds had been consumed. The Spanish party withdrew in defensive formation to the shore and rapidly boarded their boats to retreat to the safety of the ships.
The small fleet continued for six more days in fine weather, followed by four stormy days. By this time water was once again dangerously short. The ships spotted an inlet close to another city, Champotón, and a landing party discovered fresh water. Armed Maya warriors approached from the city while the water casks were being filled. Communication was once again attempted with signs. Night fell by the time the water casks had been filled and the attempts at communication concluded. In the darkness the Spaniards could hear the movements of large numbers of Maya warriors. They decided that a night-time retreat would be too risky; instead, they posted guards and waited for dawn. At sunrise, the Spanish saw that they had been surrounded by a sizeable army. The massed Maya warriors launched an assault with missiles, including arrows, darts and stones; they then charged into hand-to-hand combat with spears and clubs. Eighty of the Spaniards were wounded in the initial barrage of missiles, and two were captured in the frantic melee that followed. All of the Spanish party received wounds, including Hernández de Córdoba. The Spanish regrouped in a defensive formation and forced passage to the shore, where their discipline collapsed and a frantic scramble for the boats ensued, leaving the Spanish vulnerable to the pursuing Maya warriors who waded into the sea behind them. Most of the precious water casks were abandoned on the beach. When the surviving Spanish reached the safety of the ships, they realised that they had lost over fifty men, more than half their number. Five men died from their wounds in the following days. The battle had lasted only an hour, and the Spanish named the locale the Coast of the Disastrous Battle. They were now far from help and low on supplies; too many men had been lost and injured to sail all three ships back to Cuba. They decided to abandon their smallest ship, the brigantine, even though it had been purchased on credit from Governor Velázquez of Cuba.
The few men who had not been wounded, because they had been manning the ships during the battle, were reinforced with three men who had suffered relatively minor wounds; they put ashore at a remote beach to dig for water. They found some and brought it back to the ships, although it sickened those who drank it. The two ships sailed through a storm for two days and nights; Alaminos, the pilot, then steered a course for Florida, where they found good drinking water, although they lost one man to the local Indians, and another drank so much water that he died. The ships finally made port in Cuba, where Hernández de Córdoba wrote a report to Governor Velázquez describing the voyage, the cities, the plantations, and, most importantly, the discovery of gold. Hernández died soon after from his wounds. The two captured Maya survived the voyage to Cuba and were interrogated; they swore that there was abundant gold in Yucatán.
Juan de Grijalva, 1518
Diego Velázquez, the governor of Cuba, was enthused by Hernández de Córdoba's report of gold in Yucatán. He organised a new expedition consisting of four ships and 260 men. He placed his nephew Juan de Grijalva in command. Francisco de Montejo, who would eventually conquer much of the peninsula, was captain of one of the ships; Pedro de Alvarado and Alonso d'Avila captained the other ships. Bernal Díaz del Castillo served on the crew; he was able to secure a place on the expedition as a favour from the governor, who was his kinsman. Antón de Alaminos once again served as pilot. Governor Velázquez provided all four ships in an attempt to protect his claim over the peninsula. The small fleet was stocked with crossbows, muskets, barter goods, salted pork and cassava bread. Grijalva also took one of the captured Indians from the Hernández expedition.
The fleet left Cuba in April 1518, and made its first landfall upon the island of Cozumel, off the east coast of Yucatán. The Maya inhabitants of Cozumel fled the Spanish and would not respond to Grijalva's friendly overtures. The fleet sailed south from Cozumel, along the east coast of the peninsula. The Spanish spotted three large Maya cities along the coast, one of which was probably Tulum. On Ascension Thursday the fleet discovered a large bay, which the Spanish named Bahía de la Ascensión. Grijalva did not land at any of these cities and turned back north from Ascensión Bay. He looped around the north of the Yucatán Peninsula to sail down the west coast. At Campeche the Spanish tried to barter for water but the Maya refused, so Grijalva opened fire against the city with small cannon; the inhabitants fled, allowing the Spanish to take the abandoned city. Messages were sent with a few Maya who had been too slow to escape but the Maya remained hidden in the forest. The Spanish boarded their ships and continued along the coast.
At Champotón, where the inhabitants had routed Hernández and his men, the fleet was approached by a small number of large war canoes, but the ships' cannon soon put them to flight. At the mouth of the Tabasco River the Spanish sighted massed warriors and canoes but the natives did not approach. By means of interpreters, Grijalva indicated that he wished to trade and bartered wine and beads in exchange for food and other supplies. From the natives they received a few gold trinkets and news of the riches of the Aztec Empire to the west. The expedition continued far enough to confirm the reality of the gold-rich empire, sailing as far north as Pánuco River. As the fleet returned to Cuba, the Spanish attacked Champotón to avenge the previous year's defeat of the Spanish expedition led by Hernández. One Spaniard was killed and fifty were wounded in the ensuing battle, including Grijalva. Grijalva put into the port of Havana five months after he had left.
Hernán Cortés, 1519
Grijalva's return aroused great interest in Cuba, and Yucatán was believed to be a land of riches waiting to be plundered. A new expedition was organised, with a fleet of eleven ships carrying 500 men and some horses. Hernán Cortés was placed in command, and his crew included officers who would become famous conquistadors, including Pedro de Alvarado, Cristóbal de Olid, Gonzalo de Sandoval and Diego de Ordaz. Also aboard were Francisco de Montejo and Bernal Díaz del Castillo, veterans of the Grijalva expedition.
The fleet made its first landfall at Cozumel, and Cortés remained there for several days. Maya temples were cast down and a Christian cross was put up on one of them. At Cozumel Cortés heard rumours of bearded men on the Yucatán mainland, who he presumed were Europeans. Cortés sent out messengers to them and was able to rescue the shipwrecked Gerónimo de Aguilar, who had been enslaved by a Maya lord. Aguilar had learnt the Yucatec Maya language and became Cortés' interpreter.
From Cozumel, the fleet looped around the north of the Yucatán Peninsula and followed the coast to the Tabasco River, which Cortés renamed the Grijalva River in honour of the Spanish captain who had discovered it. In Tabasco, Cortés anchored his ships at Potonchán, a Chontal Maya town. The Maya prepared for battle but the Spanish horses and firearms quickly decided the outcome. The defeated Chontal Maya lords offered gold, food, clothing and a group of young women in tribute to the victors. Among these women was a young Maya noblewoman called Malintzin, who was given the Spanish name Marina. She spoke Maya and Nahuatl and became the means by which Cortés was able to communicate with the Aztecs. Marina became Cortés' consort and eventually bore him a son. From Tabasco, Cortés continued to Cempoala in Veracruz, a subject city of the Aztec Empire, and from there went on to conquer the Aztecs.
In 1519 Cortés sent the veteran Francisco de Montejo back to Spain with treasure for the king. While he was in Spain he pleaded Cortés' cause against the supporters of Diego de Velasquez. Montejo remained in Spain for seven years, and eventually succeeded in acquiring the hereditary military title of adelantado.
Hernán Cortés in the Maya lowlands, 1524–25
In 1524, after the Spanish conquest of the Aztec Empire, Hernán Cortés led an expedition to Honduras over land, cutting across Acalan in southern Campeche and the Itza kingdom in what is now the northern Petén Department of Guatemala. His aim was to subdue the rebellious Cristóbal de Olid, whom he had sent to conquer Honduras; Olid had, however, set himself up independently on his arrival in that territory. Cortés left Tenochtitlan on 12 October 1524 with 140 Spanish soldiers, 93 of them mounted, 3,000 Mexican warriors, 150 horses, a herd of pigs, artillery, munitions and other supplies. He also had with him the captured Aztec emperor Cuauhtemoc, and Cohuanacox and Tetlepanquetzal, the captive Aztec lords of Texcoco and Tlacopan. Cortés marched into Maya territory in Tabasco; the army crossed the Usumacinta River near Tenosique and crossed into the Chontal Maya province of Acalan, where he recruited 600 Chontal Maya carriers. In Acalan, Cortés believed that the captive Aztec lords were plotting against him and he ordered Cuauhtemoc and Tetlepanquetzal to be hanged. Cortés and his army left Acalan on 5 March 1525.
The expedition passed onwards through Kejache territory and reported that the Kejache towns were situated in easily defensible locations and were often fortified. One of these was built on a rocky outcrop near a lake and a river that fed into it. The town was fortified with a wooden palisade and was surrounded by a moat. Cortés reported that the town of Tiac was even larger and was fortified with walls, watchtowers and earthworks; the town itself was divided into three individually fortified districts. Tiac was said to have been at war with the unnamed smaller town. The Kejache claimed that their towns were fortified against the attacks of their aggressive Itza neighbours.
They arrived at the north shore of Lake Petén Itzá on 13 March 1525. The Roman Catholic priests accompanying the expedition celebrated mass in the presence of Aj Kan Ek', the king of the Itza, who was said to be so impressed that he pledged to worship the cross and to destroy his idols. Cortés accepted an invitation from Kan Ek' to visit Nojpetén (also known as Tayasal), and crossed to the Maya city with 20 Spanish soldiers while the rest of his army continued around the lake to meet him on the south shore. On his departure from Nojpetén, Cortés left behind a cross and a lame horse that the Itza treated as a deity, attempting to feed it poultry, meat and flowers, but the animal soon died. The Spanish did not officially contact the Itza again until the arrival of Franciscan priests in 1618, when Cortés' cross was said to still be standing at Nojpetén.
From the lake, Cortés continued south along the western slopes of the Maya Mountains, a particularly arduous journey that took 12 days to cover 32 kilometres (20 mi), during which he lost more than two-thirds of his horses. When he came to a river swollen with the constant torrential rains that had been falling during the expedition, Cortés turned upstream to the Gracias a Dios rapids, which took two days to cross and cost him more horses.
On 15 April 1525 the expedition arrived at the Maya village of Tenciz. With local guides they headed into the hills north of Lake Izabal, where their guides abandoned them to their fate. The expedition became lost in the hills and came close to starvation before they captured a Maya boy who led them to safety. Cortés found a village on the shore of Lake Izabal, perhaps Xocolo. He crossed the Dulce River to the settlement of Nito, somewhere on the Amatique Bay, with about a dozen companions, and waited there for the rest of his army to regroup over the next week. By this time the remnants of the expedition had been reduced to a few hundred; Cortés succeeded in contacting the Spaniards he was searching for, only to find that Cristóbal de Olid's own officers had already put down his rebellion. Cortés then returned to Mexico by sea.
Francisco de Montejo, 1527–28
The richer lands of Mexico engaged the main attention of the conquistadors for some years; then, in 1526, Francisco de Montejo (a veteran of the Grijalva and Cortés expeditions) successfully petitioned the King of Spain for the right to conquer Yucatán. On 8 December of that year he was granted the hereditary military title of adelantado and permission to colonise the Yucatán Peninsula. In 1527 he left Spain with 400 men in four ships, with horses, small arms, cannon and provisions. He set sail for Santo Domingo, where more supplies and horses were collected, allowing Montejo to increase his cavalry to fifty. One of the ships was left at Santo Domingo as a supply ship to provide later support; the other ships set sail and reached Cozumel, an island off the east coast of Yucatán, in the second half of September 1527. Montejo was received in peace by the lord of Cozumel, Aj Naum Pat, but the ships stopped only briefly before making for the Yucatán coast. The expedition made landfall somewhere near Xelha in the Maya province of Ekab, in what is now Mexico's Quintana Roo state.
Montejo garrisoned Xelha with 40 soldiers under his second-in-command, Alonso d'Avila, and posted 20 more at nearby Pole. Xelha was renamed Salamanca de Xelha and became the first Spanish settlement in the peninsula. The provisions were soon exhausted and additional food was seized from the local Maya villagers; this too was soon consumed. Many local Maya fled into the forest and Spanish raiding parties scoured the surrounding area for food, finding little. With discontent growing among his men, Montejo took the drastic step of burning his ships; this strengthened the resolve of his troops, who gradually acclimatised to the harsh conditions of Yucatán. Montejo was able to get more food from the still-friendly Aj Naum Pat when the latter made a visit to the mainland. Montejo took 125 men and set out on an expedition to explore the north-eastern portion of the Yucatán Peninsula. His expedition passed through the towns of Xamanha, Mochis and Belma, none of which survives today.[nb 1] At Belma, Montejo gathered the leaders of the nearby Maya towns and ordered them to swear loyalty to the Spanish Crown. After this, Montejo led his men to Conil, a town in Ekab that was described as having 5,000 houses, where the Spanish party halted for two months.
In the spring of 1528, Montejo left Conil for the city of Chauaca, which was abandoned by its Maya inhabitants under cover of darkness. The following morning the inhabitants attacked the Spanish party but were defeated. The Spanish then continued to Ake, some 16 kilometres (9.9 mi) north of Tizimín, where they fought a major battle against the Maya in which more than 1,200 Maya were killed. After this Spanish victory, the neighbouring Maya leaders all surrendered. Montejo's party then continued to Sisia and Loche before heading back to Xelha. Montejo arrived at Xelha with only 60 of his party, and found that only 12 of his 40-man garrison had survived, while the garrison at Pole had been entirely wiped out.
The support ship eventually arrived from Santo Domingo, and Montejo used it to sail south along the coast, while he sent D'Avila overland. Montejo discovered the thriving port city of Chaktumal (modern Chetumal). At Chaktumal, Montejo learnt that the shipwrecked Spanish sailor Gonzalo Guerrero was in the region, and sent messages to him, inviting him to rejoin his compatriots, but Guerrero declined.
The Maya at Chaktumal fed false information to the Spanish, and Montejo was unable to find d'Avila and link up with him. D'Avila returned overland to Xelha, and transferred the fledgling Spanish colony to nearby Xamanha, modern Playa del Carmen, which Montejo considered to be a better port. After waiting for d'Avila without result, Montejo sailed south as far as the Ulúa River in Honduras before turning around and heading back up the coast to finally meet up with his lieutenant at Xamanha. Late in 1528, Montejo left d'Avila to oversee Xamanha and sailed north to loop around the Yucatán Peninsula and head for the Spanish colony of New Spain in central Mexico.
Francisco de Montejo and Alonso d'Avila, 1531–35
Montejo was appointed alcalde mayor (a local colonial governor) of Tabasco in 1529, and pacified that province with the aid of his son, also named Francisco de Montejo. D'Avila was sent from eastern Yucatán to conquer Acalan, which extended southeast of the Laguna de Términos. Montejo the Younger founded Salamanca de Xicalango as a base of operations. In 1530 D'Avila established Salamanca de Acalán as a base from which to launch new attempts to conquer Yucatán. Salamanca de Acalán proved a disappointment, with no gold for the taking and a smaller population than had been hoped. D'Avila soon abandoned the new settlement and set off across the lands of the Kejache to Champotón, arriving there towards the end of 1530. During a colonial power struggle in Tabasco, the elder Montejo was imprisoned for a time. Upon his release he met up with his son in Xicalango, Tabasco, and they then both rejoined d'Avila at Champotón.
In 1531 Montejo moved his base of operations to Campeche. Alonso d'Avila was sent overland to Chauaca in the east of the peninsula, passing through Maní where he was well received by the Xiu Maya. D'Avila continued southeast to Chetumal where he founded the Spanish town of Villa Real ("Royal Town"). The local Maya fiercely resisted the placement of the new Spanish colony and d'Avila and his men were forced to abandon Villa Real and make for Honduras in canoes.
At Campeche, the Maya amassed a strong force and attacked the city; the Spanish were able to fight them off, although the elder Montejo was almost killed. Aj Canul, the lord of the attacking Maya, surrendered to the Spanish. After this battle, the younger Francisco de Montejo was despatched to the northern Cupul province, where the lord Naabon Cupul reluctantly allowed him to found the Spanish town of Ciudad Real at Chichen Itza. Montejo carved up the province amongst his soldiers and gave each of his men two to three thousand Maya in encomienda. After six months of Spanish rule, Cupul dissatisfaction could no longer be contained and Naabon Cupul was killed during a failed attempt to kill Montejo the Younger. The death of their lord only served to inflame Cupul anger and, in mid-1533, they laid siege to the small Spanish garrison at Chichen Itza. Montejo the Younger abandoned Ciudad Real by night after arranging a distraction for the besiegers, and he and his men fled west, where the Chel, Pech and Xiu provinces remained obedient to Spanish rule. Montejo the Younger was received in friendship by Namux Chel, the lord of the Chel province, at Dzilam. In the spring of 1534 he rejoined his father in the Chakan province at Dzikabal, near T'ho (the modern city of Mérida).
While his son had been attempting to consolidate the Spanish control of Cupul, Francisco de Montejo the Elder had met the Xiu ruler at Maní. The Xiu Maya maintained their friendship with the Spanish throughout the conquest and Spanish authority was eventually established over Yucatán in large part due to Xiu support. The Montejos, after reuniting at Dzikabal, founded a new Spanish town at Dzilam, although the Spanish suffered hardships there. Montejo the Elder returned to Campeche, where he was received with friendship by the local Maya. He was accompanied by the friendly Chel lord Namux Chel, who travelled on horseback, and two of the lord's cousins, who were taken in chains. Montejo the Younger remained behind in Dzilam to continue his attempts at conquest of the region but, finding the situation too difficult, he soon retreated to Campeche to rejoin his father and Alonso d'Avila, who had returned to Campeche shortly before Montejo the Younger. Around this time the news began to arrive of Francisco Pizarro's conquests in Peru and the rich plunder that his soldiers were taking there, undermining the morale of Montejo's already disenchanted band of followers. Montejo's soldiers began to abandon him to seek their fortune elsewhere; in seven years of attempted conquest in the northern provinces of the Yucatán Peninsula, very little gold had been found. Towards the end of 1534 or the beginning of the next year, Montejo the Elder and his son retreated from Campeche to Veracruz, taking their remaining soldiers with them.
Montejo the Elder became embroiled in colonial infighting over the right to rule Honduras, a claim that put him in conflict with Pedro de Alvarado, captain general of Guatemala, who also claimed Honduras as part of his jurisdiction. Alvarado was ultimately to prove successful. In Montejo the Elder's absence, first in central Mexico, and then in Honduras, Montejo the Younger acted as lieutenant governor and captain general in Tabasco.
Conflict at Champotón
The Franciscan friar Jacobo de Testera arrived in Champotón in 1535 to attempt the peaceful incorporation of Yucatán into the Spanish Empire. Testera had been assured by the Spanish authorities that no military activity would be undertaken in Yucatán while he was attempting its conversion to the Roman Catholic faith, and that no soldiers would be permitted to enter the peninsula. His initial efforts were proving successful when Captain Lorenzo de Godoy arrived in Champotón in command of soldiers despatched there by Montejo the Younger. Godoy and Testera were soon in conflict and the friar was forced to abandon Champotón and return to central Mexico.
Godoy's attempt to subdue the Maya around Champotón was unsuccessful and the local Kowoj Maya resisted his attempts to assert Spanish dominance of the region. This resistance was sufficiently tenacious that Montejo the Younger sent his cousin from Tabasco to Champotón to take command. His diplomatic overtures to the Champotón Kowoj were successful and they submitted to Spanish rule. Champotón was the last Spanish outpost in the Yucatán Peninsula; it was increasingly isolated and the situation there became difficult.
Conquest and settlement in northern Yucatán, 1540–46
In 1540 Montejo the Elder, who was now in his late 60s, turned his royal rights to colonise Yucatán over to his son, Francisco Montejo the Younger. In early 1541 Montejo the Younger joined his cousin in Champotón; he did not remain there long, and quickly moved his forces to Campeche. Once there Montejo the Younger, commanding between three and four hundred Spanish soldiers, established the first permanent Spanish town council in the Yucatán Peninsula. Shortly after establishing the Spanish presence in Campeche, Montejo the Younger summoned the local Maya lords and commanded them to submit to the Spanish Crown. A number of lords submitted peacefully, including the ruler of the Xiu Maya. The lord of the Canul Maya refused to submit and Montejo the Younger sent his cousin against them; Montejo himself remained in Campeche awaiting reinforcements.
Montejo the Younger's cousin met the Canul Maya at Chakan, not far from T'ho. On 6 January 1542 he founded the second permanent town council, calling the new colonial town Mérida. On 23 January, Tutul Xiu, the lord of Maní, approached the Spanish encampment at Mérida in peace, bearing sorely needed food supplies. He expressed interest in the Spanish religion and witnessed a Roman Catholic mass celebrated for his benefit. Tutul Xiu was greatly impressed and converted to the new religion; he was baptised as Melchor and stayed with the Spanish at Mérida for two months, receiving instruction in the Catholic faith. Tutul Xiu was the ruler of the most powerful province of northern Yucatán, and his submission to Spain and conversion to Christianity had repercussions throughout the peninsula, encouraging the lords of the western provinces to accept Spanish rule. The eastern provinces continued to resist Spanish overtures.
Montejo the Younger then sent his cousin to Chauaca where most of the eastern lords greeted him in peace. The Cochua Maya resisted fiercely but were soon defeated by the Spanish. The Cupul Maya also rose up against the newly imposed Spanish domination, but their opposition was quickly put down. Montejo continued to the eastern Ekab province, reaching the east coast at Pole. Stormy weather prevented the Spanish from crossing to Cozumel, and nine Spaniards were drowned in the attempted crossing. A further Spaniard was killed by hostile Maya. Rumours of this setback grew in the telling and both the Cupul and Cochua provinces once again rose up against their would-be European overlords. The Spanish hold on the eastern portion of the peninsula remained tenuous and a number of Maya polities remained independent, including Chetumal, Cochua, Cupul, Sotuta and the Tazes.
On 8 November 1546, an alliance of eastern provinces launched a coordinated uprising against the Spanish. The provinces of Cupul, Cochua, Sotuta, Tazes, Uaymil, Chetumal and Chikinchel united in a concerted effort to drive the invaders from the peninsula; the uprising lasted four months. Eighteen Spaniards were surprised in the eastern towns, and were sacrificed. A contemporary account described the slaughter of over 400 allied Maya, as well as livestock. Mérida and Campeche were forewarned of the impending attack; Montejo the Younger and his cousin were in Campeche. Montejo the Elder arrived in Mérida from Chiapas in December 1546, with reinforcements gathered from Champotón and Campeche. The rebellious eastern Maya were finally defeated in a single battle, in which twenty Spaniards and several hundred allied Maya were killed. This battle marked the final conquest of the northern portion of the Yucatán Peninsula. As a result of the uprising and the Spanish response, many of the Maya inhabitants of the eastern and southern territories fled to the still unconquered Petén Basin, in the extreme south. The Spanish achieved dominance only in the north, and the polities of Petén remained independent and continued to receive many refugees from the north.
Petén Basin, 1618–97
The Petén Basin covers an area that is now part of Guatemala; in colonial times it originally fell under the jurisdiction of the Governor of Yucatán, before being transferred to the jurisdiction of the Audiencia Real of Guatemala in 1703. The Itza kingdom centred upon Lake Petén Itzá had been visited by Hernán Cortés on his march to Honduras in 1525.
Early 17th century
Following Cortés' visit, no Spaniards attempted to visit the warlike Itza inhabitants of Nojpetén for almost a hundred years. In 1618 two Franciscan friars set out from Mérida on a mission to attempt the peaceful conversion of the still-pagan Itza in central Petén. Bartolomé de Fuensalida and Juan de Orbita were accompanied by some Christianised Maya. After an arduous six-month journey the travellers were well received at Nojpetén by the current Kan Ek'. They stayed for some days in an attempt to evangelise the Itza, but the Aj Kan Ek' refused to renounce his Maya religion, although he showed interest in the masses held by the Catholic missionaries. Attempts to convert the Itza failed, and the friars left Nojpetén on friendly terms with Kan Ek'. The friars returned in October 1619, and again Kan Ek' welcomed them in a friendly manner, but this time the Maya priesthood were hostile and the missionaries were expelled without food or water; nevertheless, they survived the journey back to Mérida.
In March 1622, the governor of Yucatán, Diego de Cárdenas, ordered Captain Francisco de Mirones Lezcano to launch an assault upon the Itza; he set out with 20 Spanish soldiers and 80 Mayas from Yucatán. His expedition was later joined by Franciscan friar Diego Delgado. In May the expedition advanced to Sakalum, southwest of Bacalar, where there was a lengthy delay while they waited for reinforcements. En route to Nojpetén, Delgado came to believe that the soldiers' treatment of the Maya was excessively cruel, and he left the expedition to make his own way to Nojpetén with eighty Christianised Maya from Tipuj in Belize. In the meantime the Itza had learnt of the approaching military expedition and had become hardened against further Spanish missionary attempts. When Mirones learnt of Delgado's departure, he sent 13 soldiers to persuade him to return, or to continue as his escort should he refuse. The soldiers caught up with him just before Tipuj, but he was determined to reach Nojpetén. From Tipuj, Delgado sent a messenger to Kan Ek', asking permission to travel to Nojpetén; the Itza king replied with a promise of safe passage for the missionary and his companions. The party was initially received in peace at the Itza capital, but as soon as the Spanish soldiers let their guard down, the Itza seized and bound the new arrivals. The soldiers were sacrificed to the Maya gods. After their sacrifice, the Itza took Delgado, cut his heart out and dismembered him; they displayed his head on a stake with the others. The leader of Delgado's Maya companions fared no better. With no word from Delgado's escort, Mirones sent two Spanish soldiers with a Maya scout to learn their fate. When they arrived upon the shore of Lake Petén Itzá, the Itza took them across to their island capital and imprisoned them. Bernardino Ek, the scout, escaped and returned to Mirones with the news. Soon afterwards, on 27 January 1624, an Itza war party led by AjK'in P'ol caught Mirones and his soldiers off guard and unarmed in the church at Sakalum, and killed them all. Spanish reinforcements arrived too late. A number of local Maya men and women were killed by Spanish attackers, who also burned the town.
Following these killings, Spanish garrisons were stationed in several towns in southern Yucatán, and rewards were offered for the whereabouts of AjK'in P'ol. The Maya governor of Oxkutzcab, Fernando Kamal, set out with 150 Maya archers to track the warleader down; they succeeded in capturing the Itza captain and his followers, together with silverware from the looted Sakalum church and items belonging to Mirones. The prisoners were taken back to the Spanish Captain Antonio Méndez de Canzo, interrogated under torture, tried, and condemned to be hanged, drawn and quartered. They were decapitated, and the heads were displayed in the plazas of towns throughout the colonial Partido de la Sierra in what is now Mexico's Yucatán state. These events ended all Spanish attempts to contact the Itza until 1695. In the 1640s internal strife in Spain distracted the government from attempts to conquer unknown lands; the Spanish Crown lacked the time, money or interest in such colonial adventures for the next four decades.
Late 17th century
In 1692 Basque nobleman Martín de Ursúa y Arizmendi proposed to the Spanish king the construction of a road from Mérida southwards to link with the Guatemalan colony, in the process "reducing" any independent native populations into colonial congregaciones; this was part of a greater plan to subjugate the Lakandon and Manche Ch'ol of southern Petén and the upper reaches of the Usumacinta River. The original plan was for the province of Yucatán to build the northern section and for Guatemala to build the southern portion, with both meeting somewhere in Ch'ol territory; the plan was later modified to pass further east, through the kingdom of the Itza.
The governor of Yucatán, Martín de Ursúa y Arizmendi, began to build the road from Campeche south towards Petén. At the beginning of March 1695, Captain Alonso García de Paredes led a group of 50 Spanish soldiers, accompanied by native guides, muleteers and labourers. The expedition advanced south into Kejache territory, which began at Chunpich, about 5 kilometres (3.1 mi) north of the modern border between Mexico and Guatemala. He rounded up some natives to be moved into colonial settlements, but met with armed Kejache resistance. García decided to retreat around the middle of April.
In March 1695, Captain Juan Díaz de Velasco set out from Cahabón in Alta Verapaz, Guatemala, with 70 Spanish soldiers, accompanied by a large number of Maya archers from Verapaz, native muleteers, and four Dominican friars. The Spanish pressed ahead to Lake Petén Itzá and engaged in a series of fierce skirmishes with Itza hunting parties. At the lakeshore, within sight of Nojpetén, the Spanish encountered such a large force of Itzas that they retreated south, back to their main camp. Interrogation of an Itza prisoner revealed that the Itza kingdom was in a state of high alert to repel the Spanish; the expedition almost immediately withdrew back to Cahabón.
In mid-May 1695 García again marched southwards from Campeche, with 115 Spanish soldiers and 150 Maya musketeers, plus Maya labourers and muleteers; the final tally was more than 400 people, which was regarded as a considerable army in the impoverished Yucatán province. Ursúa also ordered two companies of Maya musketeers from Tek'ax and Oxk'utzkab' to join the expedition at B'olonch'en Kawich, some 60 kilometres (37 mi) southeast of the city of Campeche. At the end of May three friars were assigned to join the Spanish force, accompanied by a lay brother. A second group of Franciscans would continue onwards independently to Nojpetén to make contact with the Itzas; it was led by friar Andrés de Avendaño, who was accompanied by another friar and a lay brother. García ordered the construction of a fort at Chuntuki, some 25 leagues (approximately 65 miles or 105 km) north of Lake Petén Itzá, which would serve as the main military base for the Camino Real ("Royal Road") project.
The Sajkab'chen company of native musketeers pushed ahead with the road builders from Tzuktok' to the first Kejache town at Chunpich, which the Kejache had abandoned. The company's officers sent for reinforcements from García at Tzuktok', but before any could arrive some 25 Kejache returned to Chunpich with baskets to collect their abandoned food. The nervous Sajkab'chen sentries feared that the residents were returning en masse and discharged their muskets at them, with both groups then retreating. The musketeer company then arrived to reinforce their sentries and charged into battle against approaching Kejache archers. Several musketeers were injured in the ensuing skirmish, and the Kejache retreated along a forest path without injury. The Sajkab'chen company followed the path and found two more deserted settlements with large amounts of abandoned food. They seized the food and retreated back along the path.
Around 3 August García moved his entire army forward to Chunpich, and by October Spanish soldiers had established themselves near the source of the San Pedro River. By November Tzuktok' was garrisoned with 86 soldiers and more at Chuntuki. In December 1695 the main force was reinforced with 250 soldiers, of which 150 were Spanish and pardo and 100 were Maya, together with labourers and muleteers.
Avendaño's expedition, June 1695
In May 1695 the Franciscan provincial superior Antonio de Silva had appointed two groups of Franciscans to head for Petén; the first group was to join up with García's military expedition. The second group was to head for Lake Petén Itzá independently. This second group was headed by friar Andrés de Avendaño. Avendaño was accompanied by another friar, a lay brother, and six Christian Maya. This latter group left Mérida on 2 June 1695. Avendaño continued south along the course of the new road, finding increasing evidence of Spanish military activity. The Franciscans overtook García at B'uk'te, about 12 kilometres (7.5 mi) before Tzuktok'. On 3 August García advanced to Chunpich but tried to persuade Avendaño to stay behind to minister to the prisoners from B'uk'te. Avendaño instead split his group and left in secret with just four Christian Maya companions, seeking the Chunpich Kejache who had attacked one of García's advance companies and had now retreated into the forest. He was unable to find the Kejache but did manage to get information regarding a path that led southwards to the Itza kingdom. Avendaño returned to Tzuktok' and reconsidered his plans; the Franciscans were short of supplies, and the forcibly congregated Maya whom they were charged with converting were disappearing back into the forest daily. Antonio de Silva ordered Avendaño to return to Mérida, and he arrived there on 17 September 1695. Meanwhile, the other group of Franciscans, led by Juan de San Buenaventura Chávez, continued following the roadbuilders into Kejache territory, through IxB'am, B'atkab' and Chuntuki (modern Chuntunqui near Carmelita, Petén).
Juan de San Buenaventura's small group of Franciscans arrived in Chuntuki on 30 August 1695, and found that the army had opened the road southwards for another seventeen leagues (approximately 44.2 miles or 71.1 km), almost halfway to Lake Petén Itzá, but that the roadbuilders had returned to Chuntuki due to the seasonal rains. San Buenaventura was accompanied by two friars and a lay brother. With Avendaño's return to Mérida, provincial superior Antonio de Silva despatched two additional friars to join San Buenaventura's group. One of these was to convert the Kejache in Tzuktok', and the other was to do the same at Chuntuki. On 24 October San Buenaventura wrote to the provincial superior reporting that the warlike Kejache were now pacified and that they had told him that the Itza were ready to receive the Spanish in friendship. On that day 62 Kejache men had voluntarily come to Chuntuki from Pak'ek'em, where another 300 Kejache resided. In early November 1695, friar Tomás de Alcoser and brother Lucas de San Francisco were sent to establish a mission at Pak'ek'em, where they were well received by the cacique (native chief) and his pagan priest. Pak'ek'em was sufficiently far from the new Spanish road that it was free from military interference, and the friars oversaw the building of a church in what was the largest mission town in Kejache territory. A second church was built at B'atkab' to attend to over 100 Kejache refugees who had been gathered there under the stewardship of a Spanish friar; a further church was established at Tzuktok', overseen by another friar.
Avendaño's expedition, December 1695 – January 1696
Franciscan Andrés de Avendaño left Mérida on 13 December 1695, and arrived in Nojpetén around 14 January 1696, accompanied by four companions. From Chuntuki they followed an Indian trail that led them past the source of the San Pedro River and across steep karst hills to a watering hole by some ruins. From there they followed the small Acté River to a Chak'an Itza town called Saklemakal. They arrived at the western end of Lake Petén Itzá to an enthusiastic welcome by the local Itza. The following day, the current Aj Kan Ek' travelled across the lake with eighty canoes to greet the visitors at the Chak'an Itza port town of Nich, on the west shore of Lake Petén Itzá. The Franciscans accompanied Kan Ek' back to Nojpetén and baptised over 300 Itza children over the following four days. Avendaño tried to convince Kan Ek' to convert to Christianity and surrender to the Spanish Crown, without success. The king of the Itza cited Itza prophecy and said that the time was not yet right.
On 19 January AjKowoj, the king of the Kowoj, arrived at Nojpetén and spoke with Avendaño, arguing against the acceptance of Christianity and Spanish rule. The discussions between Avendaño, Kan Ek' and AjKowoj exposed deep divisions among the Itza. Kan Ek' learnt of a plot by the Kowoj and their allies to ambush and kill the Franciscans, and the Itza king advised them to return to Mérida via Tipuj. The Spanish friars became lost and suffered great hardships, including the death of one of Avendaño's companions, but after a month of wandering in the forest they found their way back to Chuntuki, and from there returned to Mérida.
Battle at Ch'ich', 2 February 1696
By mid-January Captain García de Paredes had arrived at the advance portion of the Camino Real at Chuntuki. By now he had only 90 soldiers plus labourers and porters. Captain Pedro de Zubiaur, García's senior officer, arrived at Lake Petén Itzá with 60 musketeers, two Franciscans, and allied Yucatec Maya warriors. They were also accompanied by about 40 Maya porters. They were approached by about 300 canoes carrying approximately 2,000 Itza warriors. The warriors began to mingle freely with the Spanish party and a scuffle then broke out; a dozen of the Spanish party were forced into canoes, and three of them were killed. At this point the Spanish soldiers opened fire with their muskets, and the Itza retreated across the lake with their prisoners, who included the two Franciscans. The Spanish party retreated from the lakeshore and regrouped on open ground, where they were surrounded by thousands of Itza warriors. Zubiaur ordered his men to fire a volley that killed between 30 and 40 Itzas. Realising that they were hopelessly outnumbered, the Spanish retreated towards Chuntuki, abandoning their captured companions to their fate.
Martín de Ursúa was now convinced that Kan Ek' would not surrender peacefully, and he began to organise an all-out assault on Nojpetén. Work on the road was redoubled and about a month after the battle at Ch'ich' the Spanish arrived at the lakeshore, now supported by artillery. Again a large number of canoes gathered, and the nervous Spanish soldiers opened fire with cannons and muskets; no casualties were reported among the Itza, who retreated and raised a white flag from a safe distance.
Expedition from Verapaz, February – March 1696
Oidor Bartolomé de Amésqueta led the next Guatemalan expedition against the Itza. He marched his men from Cahabón to Mopán, arriving on 25 February 1696. On 7 March, Captain Díaz de Velasco led a party ahead to the lake; he was accompanied by two Dominican friars and by AjK'ixaw, an Itza nobleman who had been taken prisoner on Díaz's previous expedition. When they drew close to the shore of Lake Petén Itzá, AjK'ixaw was sent ahead as an emissary to Nojpetén. Díaz's party was lured into an Itza trap: the soldiers were killed to a man, while the two friars were captured and later sacrificed. In all, the Itza killed 87 members of the expedition, including 50 soldiers, the two Dominicans and about 35 Maya helpers.
Amésqueta left Mopán three days after Díaz and followed Díaz's trail to the lakeshore. He arrived at the lake over a week later with 36 men. As they scouted along the south shore near Nojpetén they were shadowed by about 30 Itza canoes, and more Itzas approached by land but kept a safe distance. Amésqueta was extremely suspicious of the small canoes being offered by the Itza to transport his party across to Nojpetén; as nightfall approached, Amésqueta retreated from the lakeshore and his men took up positions on a small hill nearby. In the early hours of the morning he ordered a retreat by moonlight. At San Pedro Mártir he received news of an Itza embassy to Mérida in December 1695, and an apparent formal surrender of the Itza to Spanish authority. Unable to reconcile the news with the loss of his men, and with appalling conditions in San Pedro Mártir, Amésqueta abandoned his unfinished fort and retreated to Guatemala.
Assault on Nojpetén
The Itzas' continued resistance had become a major embarrassment for the Spanish colonial authorities, and soldiers were despatched from Campeche to take Nojpetén once and for all. Martín de Ursúa y Arizmendi arrived on the western shore of Lake Petén Itzá with his soldiers on 26 February 1697, and once there built the heavily armed galeota attack boat. The galeota carried 114 men and at least five artillery pieces. The piragua longboat used to cross the San Pedro River was also transported to the lake to be used in the attack on the Itza capital.
On 10 March a number of Itza and Yalain emissaries arrived at Ch'ich' to negotiate with Ursúa. Kan Ek' then sent a canoe flying a white flag and bearing emissaries, who offered a peaceful surrender. Ursúa received the embassy in peace and invited Kan Ek' to visit his encampment three days later. On the appointed day Kan Ek' failed to arrive; instead, Maya warriors massed both along the shore and in canoes upon the lake.
A waterborne assault was launched upon Kan Ek's capital on the morning of 13 March. Ursúa boarded the galeota with 108 soldiers, two secular priests, five personal servants, the baptised Itza emissary AjChan, his brother-in-law, and an Itza prisoner from Nojpetén. The attack boat was rowed east towards the Itza capital; halfway across the lake it encountered a large fleet of canoes spread in an arc across the approach to Nojpetén – Ursúa simply gave the order to row through them. A large number of defenders had gathered along the shore of Nojpetén and on the roofs of the city. Itza archers began to shoot at the invaders from the canoes. Ursúa ordered his men not to return fire, but arrows wounded a number of his soldiers; one of the wounded soldiers discharged his musket and at that point the officers lost control of their men. The defending Itza soon fled from the withering Spanish gunfire.
The city fell after a brief but bloody battle in which many Itza warriors died; the Spanish suffered only minor casualties. The Spanish bombardment caused heavy loss of life on the island; the surviving Itza abandoned their capital and swam across to the mainland, many dying in the water. After the battle the surviving defenders melted away into the forests, leaving the Spanish to occupy an abandoned Maya town. Martín de Ursúa planted his standard upon the highest point of the island and renamed Nojpetén as Nuestra Señora de los Remedios y San Pablo, Laguna del Itza ("Our Lady of Remedy and Saint Paul, Lake of the Itza"). The Itza nobility fled, dispersing to Maya settlements throughout Petén; in response the Spanish scoured the region with search parties. Kan Ek' was soon captured with help from the Yalain Maya ruler Chamach Xulu; the Kowoj king (Aj Kowoj) was also soon captured, together with other Maya nobles and their families. With the defeat of the Itza, the last independent and unconquered native kingdom in the Americas fell to the European colonisers.
- Belma has been tentatively identified with the modern settlement and Maya archaeological site of El Meco.
- Quezada 2011, p. 13.
- Quezada 2011, p. 14.
- White and Hood 2004, p. 152.
Quezada 2011, p. 14.
- Thompson 1966, p. 25.
- Quezada 2011, p. 15.
- Quezada 2011, pp. 14–15.
- Lovell 2005, p. 17.
- Sharer and Traxler 2006, pp. 46–47.
- Rice and Rice 2009, p. 5.
- Quezada 2011, p. 16.
- Quezada 2011, p. 17.
- White and Hood 2004, p. 152.
- Schwartz 1990, p. 17.
- Schwartz 1990, p. 18.
- Estrada-Belli 2011, p. 52.
- Coe 1999, p. 31.
Webster 2002, p. 45.
- Andrews 1984, p. 589.
- Sharer and Traxler 2006, pp. 499–500.
- Sharer and Traxler 2006, pp. 613, 616.
- Andrews 1984, p. 590.
- Caso Barrera 2002, p. 17.
- Andrews 1984, p. 591.
- Andrews 1984, p. 593.
- Andrews 1984, p. 592.
- Jones 2000, p. 353.
- Houwald 1984, p. 257.
- Jones 2000, p. 351.
- Jones 2000, p. 352.
- Rice and Rice 2009, p. 10.
Rice 2009, p. 17.
- Cecil et al 1999, p. 788.
- Rice and Rice 2005, p. 149.
- Rice 2009, p. 17.
Feldman 2000, p. xxi.
- Smith 2003, p. 279.
- Thompson 1966, p. 24.
- Thompson 1966, p. 26.
- Jones 2000, p. 364.
- Rice 2009, p. 83.
- Pugh 2009, p. 191.
Houwald 1984, p. 256.
- Houwald 1984, p. 256.
- Clendinnen 2003, p. 7.
- Pohl and Hook 2008, p. 26.
- Pohl and Hook 2008, pp. 26–27.
- Pohl and Hook 2008, p. 27.
- Wise and McBride 2008, pp. 33–34.
- Wise and McBride 2008, p. 34.
- Clendinnen 2003, p. 3.
- Perramon 1986, p. 242.
Clendinnen 2003, p. 3.
- Clendinnen 2003, pp. 3–4.
- Sharer and Traxler 2006, p. 758.
- Clendinnen 2003, p. 4.
- de Díos González 2008, p. 25.
Gómez Martín June 2013, p. 56.
- Gómez Martín June 2013, p. 56.
- de Díos González 2008, pp. 25–26.
- de Díos González 2008, p. 26.
- Clendinnen 2003, pp. 4–5.
- Clendinnen 2003, pp. 6.
- Clendinnen 2003, pp. 5.
- Clendinnen 2003, p. 8.
- Clendinnen 2003, pp. 8–9.
- Clendinnen 2003, p. 9.
- Clendinnen 2003, pp. 9–10.
- Clendinnen 2003, p. 10.
- Clendinnen 2003, pp. 10–11.
- Clendinnen 2003, p. 11.
- Clendinnen 2003, p. 12.
- Clendinnen 2003, pp. 11–12.
- Clendinnen 2003, pp. 12–13.
- Clendinnen 2003, p. 13.
- Clendinnen 2003, p. 14.
- Sharer and Traxler 2006, p. 759. Clendinnen 2003, p. 14.
- Sharer and Traxler 2006, p. 759. Recinos 1986, p. 18.
- Recinos 1986, p. 18.
- Clendinnen 2003, p. 15.
- Clendinnen 2003, pp. 14–15.
- Clendinnen 2003, pp. 15–16.
- Clendinnen 2003, p. 16.
- Sharer and Traxler 2006, pp. 760–761.
- Sharer and Traxler 2006, pp. 758–759, 760–761.
- Townsend 1995, p. 16.
- Hernández et al 2010, p. 26.
- Townsend 1995, pp. 16ff.
- Jones 2000, p. 358.
- Rice and Rice 2009, p. 12.
- Rice et al 2009, p. 127.
- Rice and Rice 2005, p. 152.
- Sharer and Traxler 2006, p. 762.
Jones 2000, p. 358.
- Sharer and Traxler 2006, p. 773.
Jones 2000, p. 358.
- Feldman 1998, p. 6.
- Webster 2002, p. 83.
- Sharer and Traxler 2006, pp. 766–767.
- Sharer and Traxler 2006, p. 767. Clendinnen 2003, p. 20.
- Clendinnen 2003, p. 20.
- ITMB 2000.
- Clendinnen 2003, p. 21.
- Sharer and Traxler 2006, p. 767. Clendinnen 1989, 2003, p. 21.
- Sharer and Traxler 2006, p. 767.
- Sharer and Traxler 2006, p. 767-768.
- Sharer and Traxler 2006, p. 768. Clendinnen 2003, p. 21.
- Quezada 2011, p. 37.
- Quezada 2011, pp. 37–38.
- Clendinnen 2003, p. 23.
Sharer and Traxler 2006, p. 768.
- Sharer and Traxler 2006, pp. 768–769.
- Sharer and Traxler 2006, pp. 769–770.
- Sharer and Traxler 2006, pp. 770–771.
- Caso Barrera 2002, pp. 17, 19.
- Caso Barrera 2002, p. 19.
- Fialko Coxemans 2003, pp. 72–73.
- Sharer and Traxler 2006, p. 774.
Jones 1998, p. 46.
Chuchiak IV 2005, p. 131.
- Jones 1998, pp. 42, 47.
- Chuchiak IV 2005, p. 132.
- Means 1917, p. 79.
- Means 1917, p. 80.
- Means 1917, p. 81.
- Sharer and Traxler 2006, p. 774.
Means 1917, p. 81.
- Means 1917, p. 81.
Jones 1998, pp. 47–48.
- Sharer and Traxler 2006, p. 774.
Jones 1998, p. 48.
- Jones 1998, p. 48.
- Jones 1998, pp. 48–49.
- Feldman 2000, p. 151.
- Jones 1998, pp. 111, 132–133, 145.
- Jones 1998, pp. 129–130.
- Jones 1998, pp. 130–131.
- Jones 1998, p. 131.
- Jones 1998, pp. 132, 134.
Means 1917, p. 97.
- Jones 1998, pp. 135–136, 139–140.
- Jones 1998, p. 141.
- Jones 1998, p. 140.
- Jones 1998, p. 142.
- Jones 1998, p. 143.
- Jones 1998, pp. 130, 144.
- Jones 1998, pp. 148–149.
- Jones 1998, p. 147.
- Jones 1998, p. 154.
Means 1917, pp. 117–118.
- Jones 1998, p. 154.
- Jones 1998, p. 163.
- Jones 1998, p. 162.
- Jones 1998, pp. 148, 150.
- Jones 1998, pp. 130, 151–152.
- Jones 1998, p. 152.
- Jones 1998, pp. 150, 154.
- Jones 1998, pp. 154–155.
- Jones 1998, p. 155.
- Jones 1998, p. 156.
- Jones 1998, pp. 148, 157.
Quezada 2011, p. 23.
- Jones 1998, p. 157.
- Jones 1998, p. 148.
- Jones 1998, p. 158.
- Jones 1998, pp. 158–159.
- Jones 1998, pp. 159–160.
- Jones 1998, p. 160.
- Jones 1998, pp. 160–161.
- Jones 1998, pp. 187, 189.
- Jones 1998, pp. 189–190.
Means 1917, p. 128.
- Jones 1998, p. 190.
- Sharer and Traxler 2006, p. 775.
Jones 1998, p. 192.
- Jones 1998, p. 205.
- Jones 1998, p. 207.
- Jones 1998, pp. 209–210.
- Sharer and Traxler 2006, p. 775.
Jones 1998, pp. 214–215.
- Vayhinger-Scheer 2011, p. 383.
- Sharer and Traxler 2006, pp. 775–776.
Jones 1998, pp. 218–219.
- Jones 1998, pp. 189, 226.
- Jones 1998, p. 226.
- Jones 1998, p. 227.
Sharer and Traxler 2006, p. 776.
- Jones 1998, p. 227.
- Jones 1998, p. 228.
Sharer and Traxler 2006, p. 776.
- Jones 1998, p. 228.
- Jones 1998, p. 229.
- Jones 1998, pp. 232–233.
- Jones 1998, p. 233.
- Jones 1998, pp. 233–234.
- Jones 1998, p. 479n59.
- Jones 1998, p. 234-235.
- Jones 1998, pp. 237–238.
- Jones 1998, pp. 238–239.
- Jones 1998, p. 240.
- Jones 1998, pp. 241–242.
- Jones 2000, p. 362.
- Jones 2009, p. 59.
Jones 1998, pp. 253, 265–266.
- Jones 1998, pp. 268–269.
- Jones 1998, pp. 252, 268.
- Jones 1998, pp. 269–270.
- Sharer and Traxler 2006, p. 777.
Jones 1998, p. 295.
- Jones 1998, p. 297.
- Jones 1998, pp. 298–299.
- Jones 2009, p. 59.
- Sharer and Traxler 2006, pp. 777–778.
- Sharer and Traxler 2006, p. 778.
Jones 2009, p. 59.
- Jones 1998, p. 295.
- Jones 1998, p. 306.
- Jones 1998, p. xix.
- Andrews, Anthony P. (Winter 1984). "The Political Geography of the Sixteenth Century Yucatan Maya: Comments and Revisions". Journal of Anthropological Research (Albuquerque, New Mexico, US: University of New Mexico) 40 (4): 589–596. JSTOR 3629799. Retrieved 2013-12-19. (subscription required)
- Athena Review (1999a). "The Spanish Conquest of Yucatán (1526–46)". Athena Review 2 (1). Retrieved 2006-07-25.
- Athena Review (1999b). "The Valdivia Shipwreck (1511)". Athena Review 2 (1). Retrieved 2006-07-25.
- Caso Barrera, Laura (2002). Caminos en la selva: migración, comercio y resistencia: Mayas yucatecos e itzaes, siglos XVII–XIX [Roads in the Forest: Migration, Commerce and Resistance: Yucatec and Itza Maya, 17th–19th Centuries] (in Spanish). Mexico City, Mexico: El Colegio de México, Fondo de Cultura Económica. ISBN 978-968-16-6714-6. OCLC 835645038.
- Cervantes de Salazar, Francisco (n.d.) [ca. 1560]. Crónica de la Nueva España (in Spanish). readme.it. Retrieved 2006-07-26.
- Cecil, Leslie; Prudence M. Rice; Don S. Rice (1999). J.P. Laporte and H.L. Escobedo, ed. "Los estilos tecnológicos de la cerámica Postclásica con engobe de la región de los lagos de Petén" [The Technological Styles of Postclassic Slipped Ceramics in the Petén Lakes Region] (PDF). Simposio de Investigaciones Arqueológicas en Guatemala (in Spanish) (Guatemala City, Guatemala: Museo Nacional de Arqueología y Etnología). XII (1998): 788–795. OCLC 42674202. Retrieved 2012-11-26.
- Chamberlain, Robert Stoner (1948). The Conquest and Colonization of Yucatan, 1517–1550. Washington, D.C.: Carnegie Institution. OCLC 459181680.
- Chuchiak IV, John F. (2005). ""Fide, Non Armis": Franciscan Reducciónes and the Maya Mission Experience on the Colonial Frontier of Yucatán, 1602–1640". In John F. Schwaller. Francis in the Americas: Essays on the Franciscan Family in North and South America (PDF). Berkeley, California, US: Academy of American Franciscan History. pp. 119–142. ISBN 0-88382-306-3. OCLC 61229653.
- Clendinnen, Inga (2003) . Ambivalent Conquests: Maya and Spaniard in Yucatan, 1517–1570 (2nd ed.). Cambridge, UK: Cambridge University Press. ISBN 0-521-52731-7. OCLC 50868309.
- Coe, Michael D. (1987). The Maya (4th edition (revised) ed.). London; New York: Thames & Hudson. ISBN 0-500-27455-X. OCLC 15895415.
- Coe, Michael D. (1999). The Maya. Ancient Peoples and Places (6th edition, fully revised and expanded ed.). London, UK and New York, US: Thames & Hudson. ISBN 0-500-28066-5. OCLC 59432778.
- de Dios González, Juan (2008). "Gonzalo Guerrero, primer mexicano por voluntad propia" [Gonzalo Guerrero, First Mexican by his Own Free Will] (PDF). Inventio: la génesis de la cultura universitaria en Morelos (in Spanish) (Cuernavaca, Morelos, Mexico: Universidad Autónoma del Estado de Morelos) (4): 23–26. OCLC 613144193. Retrieved 2013-12-17.
- Díaz del Castillo, Bernal (1963) . The Conquest of New Spain. Penguin Classics. J. M. Cohen (trans.) (6th printing (1973) ed.). Harmondsworth, England: Penguin Books. ISBN 0-14-044123-9. OCLC 162351797.
- Estrada-Belli, Francisco (2011). The First Maya Civilization: Ritual and Power Before the Classic Period. Abingdon, Oxfordshire, UK and New York, US: Routledge. ISBN 978-0-415-42994-8.
- Feldman, Lawrence H. (1998). Motagua Colonial. Raleigh, North Carolina, US: Boson Books. ISBN 1-886420-51-3. OCLC 82561350.
- Feldman, Lawrence H. (2000). Lost Shores, Forgotten Peoples: Spanish Explorations of the South East Maya Lowlands. Durham, North Carolina, US: Duke University Press. ISBN 0-8223-2624-8. OCLC 254438823.
- Fialko Coxemans, Vilma (2003). "Domingo Fajardo: vicario y defensor de indios en Petén. 1795–1828." [Domingo Fajardo: Vicar and Defender of Indians in Petén] (PDF). Mayab (in Spanish) (Madrid, Spain: Sociedad Española de Estudios Mayas) (16): 72–78. ISSN 1130-6157. OCLC 14209890. Retrieved 2012-12-06.
- Gómez Martín, Jorge Angel (June 2013). "El Descubrimiento del Yucatán" (PDF). Revista de Estudios Colombinos (in Spanish) (Tordesillas, Valladolid, Spain: Seminario Iberoamericano de Descubrimientos y Cartografía) (9): 53–60. ISSN 1699-3926. OCLC 436472699. Retrieved 2013-12-17.
- Hernández, Christine; Anthony P. Andrews; Gabrielle Vail (2010). "Introduction". In Gabrielle Vail and Christine L. Hernández. Astronomers, Scribes, and Priests: Intellectual Interchange Between the Northern Maya Lowlands and Highland Mexico in the Late Postclassic Period. Dumbarton Oaks Pre-Columbian symposia and colloquia. Washington, D.C, US: Harvard University Press. pp. 17–36. ISBN 9780884023463. OCLC 845573515.
- Houwald, Götz von (1984). "Mapa y Descripción de la Montaña del Petén e Ytzá. Interpretación de un documento de los años un poco después de la conquista de Tayasal" [Map and Description of the Jungle of Petén and Itza. Interpretation of a Document from the Years Soon After the Conquest of Tayasal] (PDF). Indiana (in Spanish) (Berlin, Germany: Ibero-Amerikanisches Institut) (9). ISSN 0341-8642. OCLC 2452883. Retrieved 2012-12-03.
- INAH (2010). "Zona Arqueológica El Meco" (in Spanish). Mexico City, Mexico: Instituto Nacional de Antropología e Historia (INAH) and Consejo Nacional para la Cultura y las Artes (CONACULTA). Archived from the original on 2013-08-23. Retrieved 2013-12-07.
- Guatemala (Map) (3rd ed.). 1:500000. International Travel Maps. Richmond, British Columbia, Canada: ITMB Publishing. 1998. ISBN 0-921463-64-2. OCLC 421536238.
- México South East (Map) (2nd ed.). 1:1000000. International Travel Maps. Richmond, British Columbia, Canada: ITMB Publishing. 2000. ISBN 0-921463-22-7. OCLC 46660694.
- Jones, Grant D. (1998). The Conquest of the Last Maya Kingdom. Stanford, California, US: Stanford University Press. ISBN 978-0-8047-3522-3. OCLC 9780804735223.
- Jones, Grant D. (2000). "The Lowland Maya, from the Conquest to the Present". In Richard E.W. Adams and Murdo J. Macleod (eds.). The Cambridge History of the Native Peoples of the Americas, Vol. II: Mesoamerica, part 2. Cambridge, UK: Cambridge University Press. pp. 346–391. ISBN 0-521-65204-9. OCLC 33359444.
- Jones, Grant D. (2009). "The Kowoj in Ethnohistorical Perspective". In Prudence M. Rice and Don S. Rice (eds.). The Kowoj: Identity, Migration, and Geopolitics in Late Postclassic Petén, Guatemala. Boulder, Colorado, US: University Press of Colorado. pp. 55–69. ISBN 978-0-87081-930-8. OCLC 225875268.
- Lovell, W. George (2005). Conquest and Survival in Colonial Guatemala: A Historical Geography of the Cuchumatán Highlands, 1500–1821 (3rd ed.). Montreal, Canada: McGill-Queen's University Press. ISBN 0-7735-2741-9. OCLC 58051691.
- Means, Philip Ainsworth (1917). History of the Spanish Conquest of Yucatan and of the Itzas. Papers of the Peabody Museum of American Archaeology and Ethnology, Harvard University VII. Cambridge, Massachusetts, US: Peabody Museum of Archaeology and Ethnology. OCLC 681599.
- Perramon, Francesc Ligorred (1986). "Los primeros contactos lingüísticos de los españoles en Yucatán". In Miguel Rivera and Andrés Ciudad. Los mayas de los tiempos tardíos (PDF) (in Spanish). Madrid, Spain: Sociedad Española de Estudios Mayas. pp. 241–252. ISBN 9788439871200. OCLC 16268597.
- Pohl, John; Hook, Adam (2008) . The Conquistador 1492–1550. Warrior 40. Oxford, UK and New York, US: Osprey Publishing. ISBN 978-1-84176-175-6. OCLC 47726663.
- Pugh, Timothy W. (2009). "Residential and Domestic Contexts at Zacpetén". In Prudence M. Rice and Don S. Rice (eds.). The Kowoj: Identity, Migration, and Geopolitics in Late Postclassic Petén, Guatemala. Boulder, Colorado, US: University Press of Colorado. pp. 141–191. ISBN 978-0-87081-930-8. OCLC 225875268.
- Quezada, Sergio (2011). La colonización de los mayas peninsulares [The Colonisation of the Peninsula Maya] (PDF). Biblioteca Básica de Yucatán (in Spanish) 18. Merida, Yucatan, Mexico: Secretaría de Educación del Gobierno del Estado de Yucatán. ISBN 978-607-7824-27-5. OCLC 796677890. Retrieved 2013-01-20.
- Rice, Prudence M.; Don S. Rice (2005). "Sixteenth- and Seventeenth-Century Maya Political Geography". In Susan Kepecs and Rani T. Alexander. The Postclassic to Spanish-Era Transition in Mesoamerica: Archaeological Perspectives. Albuquerque, New Mexico, US: University of New Mexico Press. ISBN 9780826337399. OCLC 60550555.
- Rice, Prudence M. (2009). "The Archaeology of the Kowoj: Settlement and Architecture at Zacpetén". In Prudence M. Rice and Don S. Rice (eds.). The Kowoj: Identity, Migration, and Geopolitics in Late Postclassic Petén, Guatemala. Boulder, Colorado, US: University Press of Colorado. pp. 81–83. ISBN 978-0-87081-930-8. OCLC 225875268.
- Rice, Prudence M.; Don S. Rice (2009). "Introduction to the Kowoj and their Petén Neighbors". In Prudence M. Rice and Don S. Rice (eds.). The Kowoj: Identity, Migration, and Geopolitics in Late Postclassic Petén, Guatemala. Boulder, Colorado, US: University Press of Colorado. pp. 3–15. ISBN 978-0-87081-930-8. OCLC 225875268.
- Rice, Prudence M. (2009). "Who were the Kowoj?". In Prudence M. Rice and Don S. Rice (eds.). The Kowoj: Identity, Migration, and Geopolitics in Late Postclassic Petén, Guatemala. Boulder, Colorado, US: University Press of Colorado. pp. 17–19. ISBN 978-0-87081-930-8. OCLC 225875268.
- Rice, Prudence M.; Don S. Rice; Timothy W. Pugh; Rómulo Sánchez Polo (2009). "Defensive architecture and the context of warfare at Zacpetén". In Prudence M. Rice and Don S. Rice (eds.). The Kowoj: identity, migration, and geopolitics in late postclassic Petén, Guatemala. Boulder, Colorado, US: University Press of Colorado. pp. 123–140. ISBN 978-0-87081-930-8. OCLC 225875268.
- Romero, Rolando J. (1992). "Texts, Pre-Texts, Con-Texts: Gonzalo Guerrero in the Chronicles of Indies" (pdf). Retrieved 2006-07-26.
- Rugeley, Terry L. (1996). Yucatan's Maya Peasantry and the Origins of the Caste War. Austin: University of Texas Press. ISBN 0-292-77078-2.
- Sharer, Robert J.; Loa P. Traxler (2006). The Ancient Maya (6th (fully revised) ed.). Stanford, California, US: Stanford University Press. ISBN 0-8047-4817-9. OCLC 57577446.
- Smith, Michael E. (2003) . The Aztecs (2nd ed.). Malden, Massachusetts, US and Oxford, UK: Blackwell Publishing. ISBN 978-0-631-23016-8. OCLC 59452395.
- Thompson, J. Eric S. (1966). "The Maya Central Area at the Spanish Conquest and Later: A Problem in Demography". Proceedings of the Royal Anthropological Institute of Great Britain and Ireland (Royal Anthropological Institute of Great Britain and Ireland) (1966): 23–37. JSTOR 3031712. Retrieved 2013-12-04. (subscription required)
- Townsend, Richard F. (1995) . The Aztecs. London, UK: Thames and Hudson. ISBN 0-500-27720-6. OCLC 27825022.
- Vayhinger-Scheer, Temis (2011) . "Kanek': El Último Rey Maya Itzaj" [Kanek': The Last Itza Maya King]. In Nikolai Grube. Los Mayas: Una Civilización Milenaria [The Maya: An Ancient Civilization] (hardback) (in Spanish). Potsdam, Germany: Tandem Verlag. pp. 382–383. ISBN 978-3-8331-6293-0. OCLC 828120761.
- Webster, David L. (2002). The Fall of the Ancient Maya: Solving the Mystery of the Maya Collapse. London, UK: Thames & Hudson. ISBN 0-500-05113-5. OCLC 48753878.
- White, D. A.; C. S. Hood (April 2004). "Vegetation Patterns and Environmental Gradients in Tropical Dry Forests of the Northern Yucatan Peninsula". Journal of Vegetation Science (Uppsala, Sweden: Opulus Press) 15 (2): 151–160. doi:10.1111/j.1654-1103.2004.tb02250.x. ISSN 1654-1103. JSTOR 3236749. OCLC 50781866. Retrieved 2014-01-13. (subscription required)
- Wise, Terence; McBride, Angus (2008) . The Conquistadores. Men-at-Arms 101. Oxford, UK and New York, US: Osprey Publishing. ISBN 978-0-85045-357-7. OCLC 12782941.
- Graham, Elizabeth; David M. Pendergast; Grant D. Jones (8 December 1989). "On the Fringes of Conquest: Maya-Spanish Contact in Colonial Belize". Science. New Series (American Association for the Advancement of Science) 246 (4935): 1254–1259. doi:10.1126/science.246.4935.1254. JSTOR 1704619. (subscription required)
- Roukema, E. (1956). "A Discovery of Yucatan Prior to 1503". Imago Mundi (Imago Mundi, Ltd.) 13: 30–38. doi:10.1080/03085695608592123. ISSN 0308-5694. JSTOR 1150238. OCLC 4651172881. (subscription required) | https://readtiger.com/wkp/en/Spanish_conquest_of_Yucat%C3%A1n |
4.03125 |
Input, Output and Comprehension:
Input: What is input in the ESL classroom?
Input is the information that students receive from each other and the teacher. It is very important in second language acquisition and must be “comprehensible, developmentally appropriate, redundant, and accurate” (Kagan, 1995). So, students must be able to understand the basic message of the information they are receiving. “It is especially critical for them to receive comprehensible input from their teachers and classmates” (Haynes, 2005). The input must be received from many different sources for the information to move from “short-term comprehension to long-term acquisition” (Kagan, 1995). Also, the messages must be “slightly above their current English level” (Haynes, 2005). Krashen, working off of Vygotsky’s theory of the zone of proximal development, “devised a similar notion for the kind of input that an ESL student needs in order to make progress in acquiring English. He called this gap i + 1, where i is the current level of proficiency” (ESL Workshop: Scaffolding Theory).
Output: What does "output mean and how is it important to the language acquisition process?
“Output takes an active role in the acquisition process in that output serves as an avenue for experimenting with language and for testing language hypotheses” (Module 6 Lecture Notes). It can be used as “feedback for intake because it assists the second language learner to check and correct language use,” or it “forces the second language learner to examine a syntactic rather than semantic analysis and examination of language” (Module 2 Lecture Notes). “Language acquisition is fostered by output that is functional and communicative,” as well as comprehensible (Kagan, 1995). For the output to be comprehensible, it must be ordered, structured, coherent and understandable. So, the learner must “draw on L2 syntax” (Module 6 Lecture Notes). It is important to note that “students, to a large extent, learn to speak by speaking” (Kagan, 1995). To become fluent they must be given multiple opportunities to speak about and discuss the same topic, express their own ideas and practice speaking on their own level and terms (Kagan, 1995; Excerpt from Teaching English Language Learners with Learning Difficulties; Haynes, 2005). “There are four ways in which output provides learners with opportunities for authentic language learning: testing hypotheses, receiving feedback, developing automaticity and shifting from meaning-based processing” (Module 6 Lecture Notes).
The Role of Interaction: What is the role of interaction and how does it relate to language use and acquisition?
Interaction provides ample opportunity for plenty of input and output. “Teachers must constantly involve students, ask many questions, encourage students to express their ideas and thoughts in the new language” (Excerpt from Teaching English Language Learners with Learning Difficulties, 2010). Small groups also help facilitate interaction and adaptation to different levels of proficiency. “You cannot learn a language without interaction” (Foppoli, 2008). Of course, there is the fear of miscommunication. However, miscommunication can encourage “conversational negotiation as well as negotiation of meaning – both beneficial to the process of language acquisition” (Module 6 Lecture Notes). Interaction enables the learner in developing an understanding of grammar and grammatical structures and developing knowledge of L2 syntax (Module 6 Lecture Notes).
Comprehension: How do input, output and interaction encourage and promote comprehension?
Comprehension is the result of language acquisition. The best way for a second language learner to acquire a new language is through receiving lots of input and having the opportunity to produce a lot of output. The best way for a student to have ample opportunities for input and output is through interaction.
To help make input comprehensible to ELL students, make sure to give visual clues along with auditory and involve them in hands-on activities.
To encourage output it is important that the students talk more than the teacher. Discussions and small group activities are great ways to make sure the students have ample opportunity for output.
To foster interaction the teacher must make sure to have a lot of activities for the students to participate in and to limit the amount of individual and silent work they must participate in.
To aid in comprehension it is important that the teacher make sure instructions and information are given clearly and multiple times. Repetition can be very helpful in comprehension.
These tips and many more can be accessed by clicking the following link:
Quick Tips to Help English Language Learner (ELL) Students in the Classroom
Foppoli, J. (2008). The Role of Interaction to Acquire a Second Language. Retrieved from Free Language Lessons Board.
Haynes, J. (2005). Comprehensible Input and Output. Retrieved from everythingESL.net.
Kagan, S. (1995). We Can Talk: Cooperative Learning in the Elementary ESL Classroom. Retrieved from CAL Digests.
What is Comprehensible Input? (Excerpt from Teaching English-Language Learners with Learning Difficulties). (2010).
ESL Workshops: Scaffolding Theory. (n.d.).
Ms. Cristina Hudgins
Turn off "Getting Started" | http://esl-professional-development.wikispaces.com/Input%2C+Output+and+Comprehension?responseToken=a56078a14d3bbe9c36032cee113e9b3d |
4.03125 |
The femur, or thigh bone, is the most proximal (closest to the center of the body) bone of the leg in tetrapod vertebrates capable of walking or jumping, such as most land mammals, birds, many reptiles such as lizards, and amphibians such as frogs . In vertebrates with four legs such as dogs and horses, the femur is found only in the rear legs. The head of the femur articulates with the acetabulum in the pelvic bone to form the hip joint, while the distal part of the femur articulates with the tibia and patella to form the knee joint. By most measures, the femur is the strongest bone in the body.
The femur is the longest bone of the human skeleton and is located between the hip bone and the knee. It is the only bone in the thigh. This bone is also one of the strongest bones in the human skeleton. It functions in supporting the weight of the body and allowing motion of the lower extremity.
The head (at the proximal extremity) of the femur articulates with the acetabulum of the pelvis to form the hip joint . The lower extremity of the femur (or distal extremity), which is larger, is somewhat cuboid in form and consists of two oblong eminences known as the condyles. The articular surface of the lower end of the femur occupies the anterior, inferior, and posterior surfaces of the condyles. The front or anterior portion is the patellar surface and articulates with the patella. The lower and posterior parts articulate with the corresponding condyles of the tibia to form the knee joint.
the femur head articulates with the acetabulum to form the hip joint, the femur is the sole bone in the leg, the femur is the longest bone in the body, or the distal femur articulates with the proximal tibia to form the knee joint | https://www.boundless.com/physiology/textbooks/boundless-anatomy-and-physiology-textbook/the-skeletal-system-7/the-lower-limb-88/femur-the-thigh-495-4822/ |
4.09375 | |Location||Colony of Vancouver Island|
|Parties||First Nations of Vancouver Island and the Colony of Vancouver Island|
The Douglas Treaties, also known as the Vancouver Island Treaties or the Fort Victoria Treaties, were a series of treaties signed between certain indigenous groups on Vancouver Island and the Colony of Vancouver Island.
With the signing of the Oregon Treaty in 1846, the Hudson's Bay Company (HBC) determined that its trapping rights in the Oregon Territory were tenuous. Thus in 1849, it moved its western headquarters from Fort Vancouver on the Columbia River (present day Vancouver, Washington) to Fort Victoria. Fort Vancouver's Chief Factor, James Douglas, was relocated to the young trading post to oversee the Company's operations west of the Rockies.
This development prompted the British colonial office to designate the territory a crown colony on January 13, 1849. The new colony, Colony of Vancouver Island, was immediately leased to the HBC for a ten-year period, and Douglas was charged with encouraging British settlement. Richard Blanshard was named the colony's governor. Blanshard discovered that the hold of the HBC over the affairs of the new colony was all but absolute, and that it was Douglas who held all practical authority in the territory. There was no civil service, no police, no militia, and virtually every British colonist was an employee of the HBC.
As the colony expanded the HBC started buying up lands for colonial settlement and industry from aboriginal peoples on Vancouver Island. For four years the governor, James Douglas, made a series of fourteen land purchases from aboriginal peoples.
To negotiate the terms, Douglas met first in April 1850 with leaders of the Songhees nation, and made verbal agreements. Each leader made an X at the bottom of a blank ledger. The actual terms of the treaty were only incorporated in August, and modelled on the New Zealand Company's deeds of purchase for Maori land, used after the signing of Treaty of Waitangi.
The Douglas Treaties cover approximately 930 square kilometres (360 sq mi) of land around Victoria, Saanich, Sooke, Nanaimo and Port Hardy, all on Vancouver Island that were exchanged for cash, clothing and blankets. They were able to retain existing village lands and fields for their use, and also were allowed to hunt and fish on the surrendered lands.
These fourteen land purchases became the fourteen Treaties that make up the Douglas Treaties. Douglas didn't continue buying land due to lack of money and the slow growth of the Vancouver Island colony.
|Treaty Group Name||Modern First Nation (band government)||Land covered by Treaty||Money exchanged for land||Ref|
|Teechamitsa||Esquimalt First Nation||Country lying between Esquimalt and Point Albert||£27 10 shillings (UK £2,626 in 2016)|||
|Kosampson||Esquimalt First Nation||Esquimalt Peninsula and Colquitz Valley||£52 10 shillings (UK £5,014 in 2016)|||
|Whyomilth||Esquimalt First Nation||Northwest of Esquimalt Harbour||£30 (UK £2,865 in 2016)|||
|Chewhaytsum||Becher Bay Band||Sooke||£45 ten shillings (UK £4,345 in 2016)|||
|Chilcowitch||Songhees First Nation||Point Gonzales||£45 (UK £4,298 in 2016)|||
|Che-ko-nein||Songhees First Nation||Point Gonzales to Cedar Hill||£79 10 shillings (UK £7,593 in 2016)|||
|Sooke||T'sou-ke Nation||North-west of Sooke Inlet||£48 6 shillings 8 pence (UK £4,622 in 2016)|||
|Ka-ky-aakan||Becher Bay Band||Metchosin||£43 6 shillings 8 pence (UK £4,145 in 2016)|||
|Saanich Tribe (South)||Tsawout and Tsartlip First Nations||South Saanich||£41 13 shillings 4 pence (UK £3,973 in 2016)|||
|Saanich Tribe (North)||Pauquachin First Nation and Tseycum First Nations||North Saanich||[amount not stated]|||
|Saalequun||Snuneymuxw First Nation (Former Nanaimo Band)||[area not stated]||[amount not stated]|||
|Swengwhung||Songhees First Nation||[area not stated]||[amount not stated]|||
|Queackar||Kwakiutl (Kwawkelth) Band||Fort Rupert.||£64 (UK £6,112 in 2016)|||
|Quakiolth||Kwakiutl (Kwawkelth) Band||Fort Rupert.||£86 (UK £8,213 in 2016)|||
- "Douglas Treaties: 1850-1854". Executive Council of British Columbia. 2009. Retrieved July 28, 2009.
- B.C. Archives seeks world heritage status for Douglas treaties, Victoria News, August 08, 2013 8:21 AM
- Robin Fisher, 'With or Without Treaty: Indian Land Claims in Western Canada', in Renwick, ed., Sovereignty & Indigenous Rights, p. 53
- "1811 - 1867: Pre-Confederation Treaties II". canadiana.org. 2009. Retrieved July 28, 2009.
- "Douglas Treaty Payments" (PDF). Executive Council of British Columbia. llbc.leg.bc.ca. 2009. Retrieved July 28, 2009.
- British Columbia Indian Treaties In Historical Perspective, Dennis F. K. Madill, Research Branch, Corporate Policy, Department of Indian and Northern Affairs Canada, 1981 | https://en.wikipedia.org/wiki/Douglas_Treaties |
4.34375 | How the values of A and B affect the shape of the graph y = A sin(Bx).
How to graph y = tan(theta) for 0 <= theta < pi/2.
How to graph y = tan(q) for one or more periods.
How to find the x-intercepts and vertical asymptotes of the graph of y = tan(q).
How to recognize when y = 0 is the horizontal asymptote of a rational function.
Concept of slope and graphing lines using the slope and y intercept
How to graph a quadratic equation by hand.
How to graph inequalities in the xy plane.
How to write the equation of a graphed line.
The description of sex chromosomes.
How the value of h affects the shape of the graph y = A sin(B(x-h)).
How to recognize the graph of an even or odd function.
All about ellipses. | https://www.brightstorm.com/tag/y/page/2 |
4.03125 | Bleeding most often occurs due to injury, and depending upon the circumstances, the amount of force required to cause bleeding can be quite variable.
Most people understand that falling from a height or being involved in a car accident can inflict great force and trauma upon the body. If blunt force is involved, the outside of the body may not necessarily be damaged, but enough compression may occur to internal organs to cause injury and bleeding.
- Imagine a football player being speared by a helmet to the abdomen. The spleen or liver may be compressed by the force and cause bleeding inside the organ. If the hit is hard enough, the capsule or lining of the organ can be torn, and the bleeding can spill into the peritoneum (the space in the abdominal cavity that contains abdominal organs such as the intestines, liver, and spleen).
- If the injury occurs in the area of the back or flank, where the kidney is located, retroperitoneal bleeding (retro=behind; behind the abdominal cavity) may occur.
- The same mechanism causes bleeding due to crush injuries. For example, when a weight falls on a foot, the weight doesn't give, nor does the ground. The force needs to be absorbed by either the bone or the muscles of the foot. This can cause the bone to break and/or the muscle fibers to tear and bleed.
- Other structures are compressible and may cause internal bleeding. For example, the eye can be compressed in the orbit when it is hit by a fist or a ball. The globe deforms and springs back to its original shape. Intraorbital hemorrhage may occur.
Deceleration may cause organs in the body to be shifted inside the body. This may shear blood vessels away from the organ and cause bleeding to occur. This is often the mechanism for intracranial bleeding such as epidural or subdural hematomas. Force applied to the head causes an acceleration/deceleration injury to the brain, causing the brain to "bounce around" inside the skull. This can tear some of the small veins on the surface of the brain and cause bleeding. Since the brain is encased in the skull, which is a solid structure, even a small amount of blood can increase pressure inside the skull and decrease brain function.
Bleeding may occur with broken bones. Bones contain the bone marrow in which blood production occurs. They have rich blood supplies, and significant amounts of blood can be lost with fractures. The break of a long bone such as the femur (thigh bone) can result in the loss of one unit (350-500cc) of blood. Flat bones such as the pelvis require much more force to cause a fracture, and many blood vessels that surround the structure can be torn by the trauma and cause massive bleeding.
Bleeding in pregnancy is never normal, though not uncommon in the first trimester, and is a sign of a potential miscarriage. Early on, the concern is a potential ectopic or tubal pregnancy, in which the placenta and the fetus implant in the Fallopian tube or another location outside of the uterine cavity. As the placenta grows, it erodes through the tube or other involved organs and may cause fatal bleeding.
Bleeding after 20 weeks of pregnancy may be due to placenta previa or placental abruption, and urgent medical care should be accessed. Placenta previa describes the situation in which the placenta attaches to the uterus close to the opening of the cervix and may cause painless vaginal bleeding. Abruption occurs when the placenta partially separates from the uterine wall and causes significant pain with or without bleeding from the vagina.
Internal bleeding may occur spontaneously, especially in those people who take anticoagulation medications or who have inherited bleeding disorders. Routine bumps that occur in daily life may cause significant bleeding issues.
Internal bleeding may be caused as a side effect of medications (most often from nonsteroidal anti-inflammatory drugs such as ibuprofen and aspirin) and alcohol. These substances can cause inflammation and bleeding of the esophagus, stomach, and duodenum, the first part of the small intestine as it leaves the stomach.
Long-term alcohol abuse can also cause liver damage, which can cause bleeding problems through a variety of mechanisms.
This answer should not be considered medical advice and should not take the place of a doctor’s visit.
Read the Original Article: Internal Bleeding | http://answers.webmd.com/answers/1178338/what-causes-internal-bleeding |
4.03125 | 3 Answers | Add Yours
For historical background, Old English is one of the many precursors to the Modern English language, and was spoken and written between the 5th and 12th centuries C.E. (Wikipedia). It originated with the entrance of Germanic Anglo-Saxons. Latin influence left from the Roman Britain period is not clearly discernible (OED). Old English was a non-standardized collection of regional dialects, so there is no single dictionary for translation as there was no single language.
The Old English literary Period started sometime in the 5th century, but there are no surviving documents from that time to serve as examples (runic texts and carvings allow the generalization of the time-frame). The fluctuating dialect emphases continued throughout the centuries until the 11th century, when it began to change into Middle English based on the London dialect. Middle English held dominance until the standardization of Modern English in the 16th and 17th centuries (the works of Shakespeare and his contemporaries like Spenser and Philips are considered the first properly documented works of Modern English). Therefore, the Old English Period would start sometime in the 5th century and last until the end of the 11th century, when Old English became obsolete.
The most famous work written in Old English is the epic poem Beowulf, of unknown author, which is still translated and performed today. The oldest surviving Old English document is Cædmon's Hymn, from the 7th century, which was originally a verbal poem and was never written down by the author. The last surviving document in Old English is a historical record, the Anglo-Saxon Chronicle dated 1154, and shows the beginning influence of Middle English. Middle English was Chaucer's period.
The glee-wood rang, a song uprose
When Hrothgar’s scop gave the hall good cheer.
We’ve answered 300,944 questions. We can answer yours, too.Ask a question | http://www.enotes.com/homework-help/what-old-english-period-364363 |
4.25 | An oxo-acid is an acid that contains oxygen. To be more specific, it is a compound that contains hydrogen, oxygen, and at least one other element, with at least one hydrogen atom bound to oxygen that can dissociate to produce the H+ cation and the anion of the acid.
Under Lavoisier's original theory, all acids contained oxygen, which was named from the Greek ὀξύς (oxys) (acid, sharp) and the root –γενής (–genes) (engender). It was later discovered that some acids, notably hydrochloric acid, did not contain oxygen and so acids were divided into oxoacids and these new hydracids.
All oxy-acids have the acidic hydrogen bound to an oxygen atom, so bond strength (length) is not a factor, as it is with binary nonmetal hydrides. Rather, the electronegativity of the central atom (E) and the number of O atoms determine oxy-acid acidity. With the same "central atom" E to which the O is attached, acid strength increases as the number of oxygen attached to E increases. With the same number of oxygens around E, acid strength increases with the electronegativity of E.
An oxy-acid molecule contains the structure M-O-H, where other atoms or atom groups can be connected to the central atom M. In a solution, such a molecule can be dissociated to ions in two distinct ways:
- M-O-H <=> (M-O)− + H+
- M-O-H <=> (M)+ + OH−
If the central atom M is strongly electronegative, then it attracts strongly the electrons of the oxygen atom. In that case, the bond between the oxygen and hydrogen atom is weak, and the compound ionizes easily in the way of the former of the two chemical equations above. In this case, the compound MOH is thus an acid, because it releases a proton, that is, a hydrogen ion. For example, nitrogen, sulfur and chlorine are strongly electronegative elements, and therefore nitric acid, sulfuric acid, and perchloric acid, are strong acids.
If, however, the electronegativity of M is weak, then the compound is dissociated to ions according to the latter chemical equation, and MOH is an alkaline hydroxide. Examples of such compounds are sodium hydroxide NaOH and calcium hydroxide Ca(OH)2. If the electronegativity of M is somewhere in between, the compound can even be amphoteric, and in that case, it can dissociate to ions in both ways, in the former case when reacting with bases, and in the latter case when reacting with acids.
Inorganic oxy-acids typically have a chemical formula of type HmXOn, where X is some atom functioning as a central atom, whereas parameters m and n depend on the oxidation state of the element X. In most cases, the element X is a nonmetal, but even some metals, for example chromium and manganese, can form oxy-acids when occurring at their highest oxidation state.
When oxy-acids are heated, many of them dissociate to water and the anhydride of the acid. In most cases, such anhydrides are oxides of nonmetals. For example, carbon dioxide, CO2, is the anhydride of carbonic acid, H2CO3, and sulfur trioxide, SO3, is the anhydride of sulfuric acid, H2SO4. These anhydrides react quickly with water and form those oxy-acids again.
Most acids are oxoacids. Indeed, in the 18th century, Lavoisier assumed that all acids contain oxygen and that oxygen causes their acidity. Because of this, he gave to this element its name, oxygenium, derived from Greek and meaning sharp-maker, which is still, in a more or less modified form, used in most languages. Later, however, Humphry Davy showed that the so-called muriatic acid did not contain oxygen, despite its being a strong acid; instead, it is a solution of hydrogen chloride, HCl. Such acids which do not contain oxygen are nowadays known as hydracids.
Names of inorganic oxoacids
Many inorganic oxoacids are traditionally called with names ending with the word acid and which also contain, in a somewhat modified form, the name of the element they contain in addition to hydrogen and oxygen. Well-known examples of such acids are sulfuric acid, nitric acid and phosphoric acid.
This practice is fully well-established, and even IUPAC has accepted such names. In light of the current chemical nomenclature, this practice is, however, very exceptional, because systematic names of all other compounds are formed only according to what elements they contain and what is their molecular structure, not according to what other properties (for example, acidity) they have.
IUPAC, however, does not recommend calling future compounds not yet discovered by names ending with the word acid. Indeed, acids can even be called by names formed by adding the word hydrogen in front of the corresponding anion; for example, sulfuric acid could just as well be called hydrogen sulfate (or dihydrogen sulfate). In fact, the fully systematic name of sulfuric acid, according to IUPAC's rules, would be dihydroxidodioxidosulfur and that of the sulfate ion, tetraoxidosulfate(2-). Such names, however, are almost never used.
However, the same element can form more than one acid when compounded with hydrogen and oxygen. In such cases, the English practice to distinguish such acids is to use the suffix -ic in the name of the acid containing more oxygen atoms, and the suffix -ous in the name of the acid containing fewer oxygen atoms. Thus, for example, sulfuric acid is H2SO4, and sulfurous acid, H2SO3. Analogously, nitric acid is HNO3, and nitrous acid, HNO2. If there are more than two oxoacids having the same element as the central atom, then, in some cases, acids are distinguished by adding the prefix per- or hypo- to their names. The prefix per-, however, is used only when the central atom is a halogen or a group 7 element. For example, chlorine has the four following oxoacids:
- hypochlorous acid, HClO
- chlorous acid, HClO2
- chloric acid, HClO3
- perchloric acid, HClO4
The suffix -ite occurs in names of anions and salts derived from acids whose names end to the suffix -ous. On the other hand, the suffix -ate occurs in names of anions and salts derived from acids whose names end to the suffix -ic. Prefixes hypo- and per- occur even in name of anions and salts; for example the ion ClO4− is called perchlorate.
In a few cases, even prefixes ortho- and meta- occur in names of some oxoacids and their derivative anions. In such cases, the meta acid is what can be thought of as remaining of the ortho acid if a water molecule is separated from the ortho acid molecule. For example, phosphoric acid, H3PO4, has sometimes even been called orthophosphoric acid, in order to distinguish it from metaphosphoric acid, HPO3. However, according to IUPAC's current rules, the prefix ortho- should only be used in names of orthotelluric acid and orthoperiodic acid, and their corresponding anions and salts.
In the following table, the formula and the name of the anion refer to what remains of the acid when it cedes all hydrogen atoms as protons. Many of these acids, however, are polyprotic, and in such cases, there exists also one or more intermediate anions. In name of such anions, the prefix hydro-, is added if needed, with some numeral prefixes. For example, SO42− is the sulfate anion, and HSO4−, the hydrosulfate anion. In a similar way, PO43− is the phosphate, H2PO42−, the dihydrophosphate, and HPO4−, the hydrophosphate ion.
- Kivinen, Antti; Mäkitie, Osmo (1988). Kemia (in Finnish). Helsinki, Finland: Otava. ISBN 951-1-10136-6.
- Nomenclature of Inorganic Compounds, IUPAC Recommendations 2005 (Red Book 2005). International Union of Pure and Applied Chemistry. 2005. ISBN 0-85404-438-8.
- Otavan suuri ensyklopedia, volume 2 (Cid-Harvey) (in Finnish). Helsinki, Finland: Otava. 1977. ISBN 951-1-04170-3.
- Kivinen, Mäkitie: Kemia, pp. 202–203, chapter Happihapot
- "Hapot". Otavan iso Fokus, Part 2 (El-Io). Otava. 1973. p. 990. ISBN 951-1-00272-4.
- Otavan suuri Ensyklopedia, s. 1606, art. Happi
- Otavan suuri Ensyklopedia, s. 1605, art. Hapot ja emäxet
- Red Book 2005, s. 124, chapter IR-8: Inorganic Acids and Derivatives
- Kivinen, Mäkitie: Kemia, p. 459-461, chapter Kemian nimistö: Hapot
- Red Book 2005, p. 129-132, table IR-8-1
- Red Book 2005, p. 132, note a | https://en.wikipedia.org/wiki/Oxoacid |
4.125 | Electrons and Electric Fields
Welcome to the lesson. If this is your first exposure to electrical theory, you need some foundational concepts in order to proceed to more difficult lessons. The nuts and bolts of electricity are the electrons, proton-packed atomic nuclei, atoms, molecules, void spaces and the forces that extend across the voids between the solid particles. First, consider those particles.
Know subatomic nuclear physics?
Electrons have a negative charge. Call it negative 1, in units of the elementary charge. Neutrons are neutral. Protons have a plus 1 charge. So, what is a charge? Charge is the potential to attract or repel a charged particle. Positively charged particles are mutually attracted to negatively charged particles. Positively charged particles repel positively charged particles. Negatively charged particles repel negatively charged particles. Unlike charges attract. Like charges repel.
It is a quirk of the universe that electrons and protons carry charges of identical magnitude. Use simple addition and subtraction to add or take away the charges of electrons or protons. There are other physical effects that bias this behavior but the description above will suffice in almost every circumstance you will encounter. We have to cover one immediately
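To make that bookkeeping concrete, here is a minimal sketch in Python (my own illustration, not part of the original lesson; the function name and the sodium example are assumptions). It simply counts +1 per proton and -1 per electron:

```python
def net_charge(protons, electrons):
    """Net charge in units of the elementary charge: +1 per proton, -1 per electron."""
    return protons - electrons

# A neutral sodium atom: 11 protons, 11 electrons.
print(net_charge(11, 11))  # 0
# Strip one electron away and the atom becomes a +1 ion.
print(net_charge(11, 10))  # 1
```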
The nuclear force is short ranged. Something on the order of the diameter of four or five protons is where it begins to drop off. However, it is quite strong enough over its short range to keep the nucleus of an atom together. That is, until you get past lead on the periodic table of the elements. The reason we have to touch on this topic is so that we can move past the problem of positively charged particles sticking together in the nuclei of our atoms. Take it as given: until the nucleus gets pretty sizeable, it does not want to split.
Attraction and repulsion
Atoms are formed with a central, positively charged nucleus. This nucleus contains one or many protons and some neutrons. The protons each carry a charge that attracts nearby electrons. An atomic nucleus typically holds electrons in orbit by this positive charge. However, an atom is defined by its nucleus, not by its electrons.
So an atom has a cloud of electrons spinning at some distance from the nucleus. In a molecule or compound, the amount of attractive force in one atom may be different from its neighbor. An electron will migrate toward the more attractive atom. The one with the most apparent positive charge.
Electrons can change their orbit. With the application of energy, an electron can be induced to speed up, rise to a higher orbit, then stay there until induced to change orbit again. As the electron's orbit gets higher and higher, the pull of the nucleus's positive charge on the orbiting electron is diminished. Taken to its conclusion, the electron can be forced to leave its atom.
In conductors, which include most metals, electrons are frequently shared between adjacent atoms. Should one electron be forced off its neighbor, it may in turn displace the neighbor's shared electron, which displaces the next and so on.
Long distance relationship
Force felt is related to charge by the inverse square of the distance. At a distance of 1 unit, electron A feels 1 unit of repulsion from electron B. At twice that distance, or 2 units, the repulsion would be the inverse of 2 squared or 1/2^2 or 1/4 the force at 1 unit of distance.
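A short sketch of that arithmetic (my own illustration, not from the lesson) shows how quickly the repulsion drops off with distance:

```python
def relative_repulsion(distance):
    """Repulsion relative to the force felt at a distance of 1 unit (inverse-square law)."""
    return 1.0 / distance ** 2

for d in (1, 2, 3, 4):
    print(f"distance {d}: {relative_repulsion(d):.4f} of the force at 1 unit")
# distance 2 prints 0.2500, i.e. 1/4 of the force at 1 unit, as described above.
```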
Electrons orbit atoms, are shared by atoms, and atoms that fly off on their own. Happily, their behavior is predictable. Before an electron can accelerate to a higher orbit, it must be supplied with the force to speed it to the new orbit. In another galactic convenience, it has been demonstrated that electrons that move to a higher shell do so only in fixed units. Heard of quantum mechanics? Quantum physics? Well, a specific, exact, measurable quantum (quanta?) of energy is required to raise the electron exactly one orbital level. No more and no less will suffice. What happens when that orbit is allowed to decay and the electron loses energy? Can you guess? Put down your hand, brainiac. Each orbital unit of decay releases a quantum of energy precisely equal to the one that raised the electron in the first place. Where this becomes important is in things like light bulbs. Remember the loose treatment of electrons by nuclei in metals? Well, when the tungsten filament in a light bulb gets way, way hot, the electrons actually leave and bounce around inside the bulb! When they happen to fall back in toward the filament, they give off those quanta of energy in the form of light and heat.
Since we understand the forces involved between charged particles, we now have the mental tools to understand a bit about magnetism. Magnets posses mechanically and chemically fixed electron biases. That is, electrons would appear to be trapped toward one side of the atoms and molecules that make up fixed magnets. This fixing means that one end of the magnet has a relative surplus of negatively charged electrons. The converse holds true, where positively charged atomic nuclei are lain relatively bare. As this is at the molecular level, one can split a fixed magnet and the polarization remains for both fractions of the original magnet.
With every electron to the back of its attendant atom, the total charge across the magnet remains neutral. However, the near surface is absolutely full of protons (for example) while the rear surface is absolutely full of electrons. By our distance squared rule, we can easily see how the near positive charge outweighs the relatively distant negative charge.
Splitting a fixed magnet should produce diminishing magnet strength consistent with the decreasing inverse distance squared. [I have not seen this theory presented in any textbook so I must bear the responsibility if this is just plain wrong]
There is a way to create a magnetic field artificially. By coiling a conductor as you might coil thread on a spool, then applying an electric current, a magnetic field is induced. There is a left-hand rule for electromagnets. Look at the electromagnet. Put your left thumb in parallel with the core of the coil. If you can close your hand in the direction the electrons flow, then your thumb is pointing toward the North pole of the magnetic field. Otherwise, reverse your hand..
North and South
The north pole of a magnet attracts the south pole of another while like poles repel. A magnetic field forces electrons to rotate around magnetic field lines. There are units of measure for magnetic field strength. They are beyond the scope of this lesson.
I may have the polar electron attraction/repulsion backwards. Same with the left-hand rule. Experiment among yourselves. Document your results.
For extra credit, visit Earth's magnetic north and south poles. Report your observations. | https://en.wikiversity.org/wiki/Electrons_and_Electric_Fields |
4.0625 | On the basis of observations of many equilibrium reactions, two Norwegian chemists Goldberg and Waage suggested (1864) a quantitative relationship between the rates of reactions and the concentration of the reacting substances. This relationship is known as law of mass action. It states that
“The rate of a chemical reaction is directly proportional to the product of the molar concentrations of the reactants at a constant temperature at any given time.”
The molar concentration, i.e. the number of moles per litre, is also called the active mass. It is expressed by enclosing the symbol or formula of the substance in square brackets. For example, the molar concentration of A is expressed as [A].
Consider a simple reversible reaction
aA + bB ⇌ cC + dD (at a certain temperature)
According to law of mass action
Rate of forward reaction ∝ [A]^a[B]^b, i.e. Rate of forward reaction = kf[A]^a[B]^b
Rate of backward reaction ∝ [C]^c[D]^d, i.e. Rate of backward reaction = kb[C]^c[D]^d
Rate of forward reaction = Rate of backward reaction
kf[A]^a[B]^b = kb[C]^c[D]^d
kf/kb = Kc = [C]^c[D]^d / [A]^a[B]^b
Where, Kc is called equilibrium constant.
In terms of partial pressures, the equilibrium constant is denoted by Kp and Kp = (pC)^c(pD)^d / (pA)^a(pB)^b
In terms of mole fraction, the equilibrium constant is denoted by Kx and Kx = (xC)^c(xD)^d / (xA)^a(xB)^b
Relation between Kp, Kc and Kx
Kp = Kc(RT)^Δn
Kp = Kx(P)^Δn, where P is the total pressure
Δn = number of moles of gaseous products – number of moles of gaseous reactants in chemical equation.
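As a quick illustration of the Kp and Kc relation above (this snippet is my own sketch, not part of the source; it assumes R = 0.0821 L·atm/(mol·K), temperature in kelvin and partial pressures in atm):

```python
def kp_from_kc(kc, temperature_k, delta_n, r=0.0821):
    """Kp = Kc * (R*T)**delta_n, with R in L*atm/(mol*K) so Kp comes out in atm-based units."""
    return kc * (r * temperature_k) ** delta_n

# Example: N2 + 3 H2 <=> 2 NH3, so delta_n = 2 - (1 + 3) = -2.
print(kp_from_kc(kc=0.5, temperature_k=500.0, delta_n=-2))  # about 3.0e-4
```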
As a general rule, the concentration of pure solids and pure liquids are not included when writing an equilibrium equation.
Characteristics of equilibrium constant
(1) The value of equilibrium constant is independent of the original concentration of reactants.
(2) The equilibrium constant has a definite value for every reaction at a particular temperature.However, it varies with change in temperature.
(3) For a reversible reaction, the equilibrium constant for the forward reaction is inverse of the equilibrium constant for the backward reaction.
In general, K(forward reaction) = 1/K'(backward reaction)
(4) The value of an equilibrium constant tells the extent to which a reaction proceeds in the forward or reverse direction.
(5) The equilibrium constant is independent of the presence of catalyst.
(6) The value of the equilibrium constant changes with the change of temperature. Thermodynamically, it can be shown that if K1 and K2 are the equilibrium constants of a reaction at absolute temperatures T1 and T2, and ΔH is the heat of reaction at constant volume, then
log K2 – log K1 = –ΔH/(2.303 R) [1/T2 – 1/T1] (Van’t Hoff equation)
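The Van’t Hoff equation can also be applied numerically. The sketch below is my own illustration (the reaction values are invented; ΔH is taken in J/mol with R = 8.314 J/(mol·K)); it solves the equation above for K2 given K1, the two temperatures and ΔH:

```python
import math

R = 8.314  # J/(mol*K)

def k2_from_k1(k1, t1, t2, delta_h):
    """Solve log K2 - log K1 = -dH/(2.303*R) * (1/T2 - 1/T1) for K2."""
    log_k2 = math.log10(k1) - delta_h / (2.303 * R) * (1.0 / t2 - 1.0 / t1)
    return 10 ** log_k2

# Exothermic example (delta_h = -92,000 J/mol): raising T from 400 K to 500 K lowers K.
print(k2_from_k1(k1=100.0, t1=400.0, t2=500.0, delta_h=-92000.0))  # roughly 0.4
```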
When you can become a premium member and find your question in our 600,000 solved Q&A Bank | http://www.transtutors.com/chemistry-homework-help/chemical-equilibrium/law-of-mass-action.aspx |
4.28125 | Finding the surface area of cones is not that hard. But it can require some patience and ingenuity, depending on what information is available at the beginning of the problem. Below are some suggested steps to keep track of everything.
1. Identify the radius of the cone's base circle. If you have the diameter, cut it in half to get the radius. If you have the slant height and perpendicular height, use the Pythagorean theorem (see "Tips" below).
2. Write the radius somewhere off to the side, where it's labelled and easy to find, because you will need it several times in several different calculations. It also helps to just find the radius instead of having to look at all your notes and look for the radius.
3. Find the area of the base circle by squaring the radius and multiplying by pi.
- If the instructions say anything like "exact value", it means that you write the Greek letter for pi and leave it. So a radius of 3 gives an area of 9pi.
- Otherwise, use 3.14 or your calculator's pi button to finish the multiplication and get a decimal version for the area.
- You can round, but keep at least 3 digits after the decimal point for now.
4. Write that answer off to one side, somewhere where it is labelled "base area" and easy to find.
5. Identify the slant height of the cone. This refers to the height along the slanted side of the cone, not the height from the tip of the cone to the center of the circle.
- The radius, the perpendicular height (from tip to center), and the slant height are related by the Pythagorean theorem. See the "tips" section below.
6. Multiply the slant height times the radius times pi. Again, "exact value" means write pi as pi; otherwise, use 3.14 to get the decimal approximation.
7. Write that answer off to one side, somewhere where it is labelled "lateral area" and easy to find.
8. Add the "base area" from step 4 with the "lateral area" from step 7.
9. Round, as needed. This is your final answer.
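If you want to check your arithmetic, here is a small sketch of the same procedure in Python (my own illustration, not part of the original article; the function name is made up). It also derives the slant height from the perpendicular height using the Pythagorean theorem mentioned in the tips:

```python
import math

def cone_surface_area(radius, slant_height=None, perpendicular_height=None):
    """Total surface area = base area (pi*r^2) + lateral area (pi*r*slant)."""
    if slant_height is None:
        # Tips: (radius)^2 + (perpendicular height)^2 = (slant height)^2
        slant_height = math.sqrt(radius ** 2 + perpendicular_height ** 2)
    base_area = math.pi * radius ** 2               # step 3
    lateral_area = math.pi * radius * slant_height  # step 6
    return base_area + lateral_area                 # step 8

# Radius 3 and slant height 5: 9*pi + 15*pi = 24*pi, about 75.4
print(round(cone_surface_area(3, slant_height=5), 1))  # 75.4
```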
Tips
- The Pythagorean theorem applies to the radius, perpendicular height, and slant height, with the slant height acting as the hypotenuse: (radius)2 + (perpendicular height)2 = (slant height)2
- General rounding rules: any answer under 20 needs at least 2 decimal places. Any answer between 20 and 100 needs only 1 decimal place. Any answer over 100 can be rounded to the nearest whole number.
- If either your radius or your slant height has a square root, you will not be able to finish the addition on step 8.
Thanks to all authors for creating a page that has been read 107,843 times. | http://www.wikihow.com/Find-the-Surface-Area-of-Cones |
4 | Motion sickness is a sensation of wooziness. It usually occurs when someone is traveling by car, boat, plane, or train. The body's sensory organs send mixed messages to the brain, causing dizziness, lightheadedness, and/or nausea. Some people learn early in their lives that they are prone to the condition.
Motion sickness frequently causes vomiting.
There are many prevention and treatment measures that can prevent or treat motion sickness.
A person maintains balance with the help of signals sent by many parts of the body—for instance, the eyes and inner ears. Other sensory receptors in the legs and feet let the nervous system know what parts of the body are touching the ground. Conflicting signals can cause motion sickness. For example, an airplane traveler cannot see turbulence, but his or her body can feel it. The resulting confusion can cause nausea or even vomiting.
Any form of travel, on land, in the air, or on the water, can bring on the uneasy feeling of motion sickness. Sometimes, amusement rides and children’s playground equipment can induce motion sickness.
Children between the ages of 3 and 12 are most likely to suffer from motion sickness (Medscape, 2013).
Motion sickness usually causes an upset stomach. Other symptoms include a cold sweat and dizziness. A person with motion sickness may become pale or complain of a headache.
Motion sickness resolves itself quickly and does not usually require a professional diagnosis. Most people know the feeling when it's coming on because the illness only occurs during travel or other specific activities.
Sometimes, pregnant women or people suffering from migraines are misdiagnosed as having motion sickness (Medscape, 2013).
Several medications exist for people afflicted with motion sickness. Most prevent only the onset of symptoms. Also, many induce sleepiness, so someone who is operating a vehicle cannot take them.
Most people who are susceptible to motion sickness are aware of the fact. If you are prone to motion sickness, the following preventive measure may help:
Plan ahead when booking a trip. If traveling by air, ask for a window or wing seat. On trains, sit toward the front. On a ship, ask for a cabin close to the front or the middle of the vessel at water level (Mayo Clinic, 2011).
Sitting at the front of a car or bus, or doing the driving yourself, often helps. Many people who experience motion sickness in a vehicle find that they don't have the symptoms when they're driving.
It is important to get plenty of rest the night before traveling and avoid drinking alcohol. Dehydration, headache, and anxiety all lead to poorer outcomes for people prone to motion sickness.
Eat well so that your stomach is settled. Stay away from greasy or acidic foods during and prior to travel.
Have a home remedy on hand or try alternative therapies. Many experts say peppermint can help, as well as ginger and black horehound. Although not very well proved by science, homeopathic remedies do exist. For pilots, astronauts, or others who experience motion sickness regularly or as part of their profession, cognitive therapy and biofeedback have been solutions. Breathing exercises have also been found to help. These treatments also work for people who feel unwell when they merely think about traveling.
Written by: David Heitz
Published on Dec 18, 2013
Medically reviewed on Dec 18, 2013 | http://healthtools.aarp.org/learning-center/nausea-and-vomiting |
4.03125 | Native Americans full Moon names were created to help different tribes track the seasons. Think of it as a “nickname” for the Moon! See our list of other full Moon names for each month of the year and their meanings.
Why Native Americans Named the Moons
The early Native Americans did not record time by using the months of the Julian or Gregorian calendar. Many tribes kept track of time by observing the seasons and lunar months, although there was much variability. For some tribes, the year contained 4 seasons and started at a certain season, such as spring or fall. Others counted 5 seasons to a year. Some tribes defined a year as 12 Moons, while others assigned it 13. Certain tribes that used the lunar calendar added an extra Moon every few years, to keep it in sync with the seasons.
Each tribe that did name the full Moons (and/or lunar months) had its own naming preferences. Some would use 12 names for the year while others might use 5, 6, or 7; also, certain names might change the next year. A full Moon name used by one tribe might differ from one used by another tribe for the same time period, or be the same name but represent a different time period. The name itself was often a description relating to a particular activity/event that usually occurred during that time in their location.
Colonial Americans adopted some of the Native American full Moon names and applied them to their own calendar system (primarily Julian, and later, Gregorian). Since the Gregorian calendar is the system that many in North America use today, that is how we have presented the list of Moon names, as a frame of reference. The Native American names have been listed by the month in the Gregorian calendar to which they are most closely associated.
Native American Full Moon Names and Their Meanings
The Full Moon Names we use in the Almanac come from the Algonquin tribes who lived in regions from New England to Lake Superior. They are the names the Colonial Americans adapted most. Note that each full Moon name was applied to the entire lunar month in which it occurred.
Click on the names below for your monthly Full Moon Guide!
| Month | Full Moon Name | Meaning |
| --- | --- | --- |
| January | Full Wolf Moon | This full Moon appeared when wolves howled in hunger outside the villages. It is also known as the Old Moon. To some Native American tribes, this was the Snow Moon, but most applied that name to the next full Moon, in February. |
| February | Full Snow Moon | Usually the heaviest snows fall in February. Hunting becomes very difficult, and hence to some Native American tribes this was the Hunger Moon. |
| March | Full Worm Moon | At the time of this spring Moon, the ground begins to soften and earthworm casts reappear, inviting the return of robins. This is also known as the Sap Moon, as it marks the time when maple sap begins to flow and the annual tapping of maple trees begins. |
| April | Full Pink Moon | This full Moon heralded the appearance of the moss pink, or wild ground phlox—one of the first spring flowers. It is also known as the Sprouting Grass Moon, the Egg Moon, and the Fish Moon. |
| May | Full Flower Moon | Flowers spring forth in abundance this month. Some Algonquin tribes knew this full Moon as the Corn Planting Moon or the Milk Moon. |
| June | Full Strawberry Moon | The Algonquin tribes knew this Moon as a time to gather ripening strawberries. It is also known as the Rose Moon and the Hot Moon. |
| July | Full Buck Moon | Bucks begin to grow new antlers at this time. This full Moon was also known as the Thunder Moon, because thunderstorms are so frequent during this month. |
| August | Full Sturgeon Moon | Some Native American tribes knew that the sturgeon of the Great Lakes and Lake Champlain were most readily caught during this full Moon. Others called it the Green Corn Moon. |
| September | Full Corn Moon | This full Moon corresponds with the time of harvesting corn. It is also called the Barley Moon, because it is the time to harvest and thresh the ripened barley. The Harvest Moon is the full Moon nearest the autumnal equinox, which can occur in September or October and is bright enough to allow finishing all the harvest chores. |
| October | Full Hunter’s Moon | This is the month when the leaves are falling and the game is fattened. Now is the time for hunting and laying in a store of provisions for the long winter ahead. October’s Moon is also known as the Travel Moon and the Dying Moon. |
| November | Full Beaver Moon | For both the colonists and the Algonquin tribes, this was the time to set beaver traps before the swamps froze, to ensure a supply of warm winter furs. This full Moon was also called the Frost Moon. |
| December | Full Cold Moon | This is the month when the winter cold fastens its grip and the nights become long and dark. This full Moon is also called the Long Nights Moon by some Native American tribes. |
Note: The Harvest Moon is the full Moon that occurs closest to the autumnal equinox. It can occur in either September or October. At this time, crops such as corn, pumpkins, squash, and wild rice are ready for gathering. | http://www.almanac.com/content/full-moon-names |
4.5625 | Universal Declaration of Human Rights
Ratified: 16 December 1948
Location: Palais de Chaillot, Paris
The Universal Declaration of Human Rights (UDHR) is a declaration adopted by the United Nations General Assembly on 10 December 1948 at the Palais de Chaillot, Paris. The Declaration arose directly from the experience of the Second World War and represents the first global expression of rights to which all human beings are inherently entitled. The full text is published by the United Nations on its website.
The Declaration consists of thirty articles which have been elaborated in subsequent international treaties, regional human rights instruments, national constitutions, and other laws. The International Bill of Human Rights consists of the Universal Declaration of Human Rights, the International Covenant on Economic, Social and Cultural Rights, and the International Covenant on Civil and Political Rights and its two Optional Protocols. In 1966, the General Assembly adopted the two detailed Covenants, which complete the International Bill of Human Rights. In 1976, after the Covenants had been ratified by a sufficient number of individual nations, the Bill took on the force of international law.
- 1 History
- 2 Structure
- 3 International Human Rights Day
- 4 Significance and legal effect
- 5 Reaction
- 6 Organizations promoting the UDHR
- 7 See also
- 8 Notes
- 9 References
- 10 Further reading
- 11 External links
During World War II, the Allies adopted the Four Freedoms—freedom of speech, freedom of religion, freedom from fear, and freedom from want—as their basic war aims. The United Nations Charter "reaffirmed faith in fundamental human rights, and dignity and worth of the human person" and committed all member states to promote "universal respect for, and observance of, human rights and fundamental freedoms for all without distinction as to race, sex, language, or religion".
When the atrocities committed by Nazi Germany became apparent after the war, the consensus within the world community was that the United Nations Charter did not sufficiently define the rights to which it referred. A universal declaration that specified the rights of individuals was necessary to give effect to the Charter's provisions on human rights.
Creation and drafting
The Declaration was commissioned in 1946 and was drafted over two years by the Commission on Human Rights. The Commission consisted of 18 members from various nationalities and political backgrounds. The Universal Declaration of Human Rights Drafting Committee was chaired by Eleanor Roosevelt, who was known for her human rights advocacy.
Canadian John Peters Humphrey was called upon by the United Nations Secretary-General to work on the project and became the Declaration's principal drafter. At the time, Humphrey was newly appointed as Director of the Division of Human Rights within the United Nations Secretariat. The Commission on Human Rights, a standing body of the United Nations, was constituted to undertake the work of preparing what was initially conceived as an International Bill of Rights.
British representatives were extremely frustrated that the proposal carried moral but not legal obligation. (It was not until 1976 that the International Covenant on Civil and Political Rights came into force, giving a legal status to most of the Declaration.)
The membership of the Commission was designed to be broadly representative of the global community, served by representatives from the following countries: Australia, Belgium, Byelorussian Soviet Socialist Republic, Chile, Republic of China, Egypt, France, India, Iran, Lebanon, Panama, Philippines, United Kingdom, United States, Union of Soviet Socialist Republics, Uruguay, and Yugoslavia. Well-known members of the Commission included Eleanor Roosevelt of the United States (who was the Chairperson), René Cassin of France, Charles Malik of Lebanon, P. C. Chang of the Republic of China, and Hansa Mehta of India. Humphrey provided the initial draft which became the working text of the Commission.
The draft was further discussed by the Commission on Human Rights, the Economic and Social Council, and the Third Committee of the General Assembly before being put to a vote. During these discussions, many amendments and propositions were made by UN member states.
On 10 December 1948, the Universal Declaration was adopted by the General Assembly by a vote of 48 in favor, none against, and eight abstentions (the Soviet Union, Ukrainian SSR, Byelorussian SSR, People's Federal Republic of Yugoslavia, People's Republic of Poland, Union of South Africa, Czechoslovakia, and the Kingdom of Saudi Arabia). Honduras and Yemen—both members of the UN at the time—failed to vote or abstain. South Africa's position can be seen as an attempt to protect its system of apartheid, which clearly violated any number of articles in the Declaration. The Saudi Arabian delegation's abstention was prompted primarily by two of the Declaration's articles: Article 18, which states that everyone has the right "to change his religion or belief"; and Article 16, on equal marriage rights. The six communist nations' abstentions centered on the view that the Declaration did not go far enough in condemning fascism and Nazism. Eleanor Roosevelt attributed the abstention of the Soviet bloc nations to Article 13, which provided the right of citizens to leave their countries.
The following countries voted in favor of the Declaration:
- Costa Rica
- Dominican Republic
- El Salvador
- New Zealand
- United Kingdom
- United States
Despite the central role played by the Canadian John Peters Humphrey, the Canadian Government at first abstained from voting on the Declaration's draft, but later voted in favor of the final draft in the General Assembly.
The underlying structure of the Universal Declaration was introduced in its second draft, which was prepared by René Cassin. Cassin worked from a first draft, which was prepared by John Peters Humphrey. The structure was influenced by the Code Napoléon, including a preamble and introductory general principles.
Cassin compared the Declaration to the portico of a Greek temple, with a foundation, steps, four columns, and a pediment. Articles 1 and 2 are the foundation blocks, with their principles of dignity, liberty, equality, and brotherhood. The seven paragraphs of the preamble—setting out the reasons for the Declaration—represent the steps. The main body of the Declaration forms the four columns. The first column (articles 3–11) constitutes rights of the individual such as the right to life and the prohibition of slavery. Articles 6 through 11 refer to the fundamental legality of human rights with specific remedies cited for their defense when violated. The second column (articles 12–17) constitutes the rights of the individual in civil and political society (including such things as freedom of movement). The third column (articles 18–21) is concerned with spiritual, public, and political freedoms such as freedom of association, thought, conscience, and religion. The fourth column (articles 22–27) sets out social, economic, and cultural rights. In Cassin's model, the last three articles of the Declaration provide the pediment which binds the structure together. These articles are concerned with the duty of the individual to society and the prohibition of use of rights in contravention of the purposes of the United Nations Organisation.
International Human Rights Day
The adoption of the Universal Declaration is a significant international commemoration marked each year on 10 December, and is known as Human Rights Day or International Human Rights Day. The commemoration is observed by individuals, community and religious groups, human rights organizations, parliaments, governments, and the United Nations. Decadal commemorations are often accompanied by campaigns to promote awareness of the Declaration and human rights. 2008 marked the 60th anniversary of the Declaration, and was accompanied by year-long activities around the theme "Dignity and justice for all of us".
Significance and legal effect
The Guinness Book of Records describes the Declaration as the world's "Most Translated Document" (464 different translations). In its preamble, governments commit themselves and their people to progressive measures which secure the universal and effective recognition and observance of the human rights set out in the Declaration. Eleanor Roosevelt supported the adoption of the Declaration as a declaration rather than as a treaty because she believed that it would have the same kind of influence on global society as the United States Declaration of Independence had within the United States. In this, she proved to be correct. Even though it is not legally binding, the Declaration has been adopted in or has influenced most national constitutions since 1948. It has also served as the foundation for a growing number of national laws, international laws, and treaties, as well as for a growing number of regional, sub national, and national institutions protecting and promoting human rights.
While not a treaty itself, the Declaration was explicitly adopted for the purpose of defining the meaning of the words "fundamental freedoms" and "human rights" appearing in the United Nations Charter, which is binding on all member states. For this reason, the Universal Declaration is a fundamental constitutive document of the United Nations. In addition, many international lawyers believe that the Declaration forms part of customary international law and is a powerful tool in applying diplomatic and moral pressure to governments that violate any of its articles. The 1968 United Nations International Conference on Human Rights advised that the Declaration "constitutes an obligation for the members of the international community" to all persons. The Declaration has served as the foundation for two binding UN human rights covenants: the International Covenant on Civil and Political Rights and the International Covenant on Economic, Social and Cultural Rights. The principles of the Declaration are elaborated in international treaties such as the International Convention on the Elimination of All Forms of Racial Discrimination, the International Convention on the Elimination of Discrimination Against Women, the United Nations Convention on the Rights of the Child, the United Nations Convention Against Torture, and many more. The Declaration continues to be widely cited by governments, academics, advocates, and constitutional courts, and by individuals who appeal to its principles for the protection of their recognised human rights.
The Universal Declaration has received praise from a number of notable people. The Lebanese philosopher and diplomat Charles Malik called it "an international document of the first order of importance", while Eleanor Roosevelt—first chairwoman of the Commission on Human Rights (CHR) that drafted the Declaration—stated that it "may well become the international Magna Carta of all men everywhere." In a speech on 5 October 1995, Pope John Paul II called the Declaration "one of the highest expressions of the human conscience of our time". In a statement on 10 December 2003 on behalf of the European Union, Marcello Spatafora said that the Declaration "placed human rights at the centre of the framework of principles and obligations shaping relations within the international community."
However, in 1948, Saudi Arabia abstained from the ratification vote on the Declaration, claiming that it violated Sharia law. Pakistan—which had signed the declaration—disagreed and critiqued the Saudi position. In 1982, the Iranian representative to the United Nations, Said Rajaie-Khorassani, said that the Declaration was "a secular understanding of the Judeo-Christian tradition" which could not be implemented by Muslims without conflict with Sharia. On 30 June 2000, members of the Organisation of the Islamic Conference (now the Organisation of Islamic Cooperation) officially resolved to support the Cairo Declaration on Human Rights in Islam, an alternative document that says people have "freedom and right to a dignified life in accordance with the Islamic Shari'ah", without any discrimination on grounds of "race, colour, language, sex, religious belief, political affiliation, social status or other considerations". Turkey—a secular state with an overwhelmingly Muslim population—signed the Declaration in 1948.
A number of scholars in different fields have expressed concerns with the Declaration's alleged Western bias. These include Irene Oh, Abdulaziz Sachedina, Riffat Hassan, and Faisal Kutty. Hassan has argued:
What needs to be pointed out to those who uphold the Universal Declaration of Human Rights to be the highest, or sole, model, of a charter of equality and liberty for all human beings, is that given the Western origin and orientation of this Declaration, the "universality" of the assumptions on which it is based is – at the very least – problematic and subject to questioning. Furthermore, the alleged incompatibility between the concept of human rights and religion in general, or particular religions such as Islam, needs to be examined in an unbiased way.
Kutty writes: "A strong argument can be made that the current formulation of international human rights constitutes a cultural structure in which western society finds itself easily at home ... It is important to acknowledge and appreciate that other societies may have equally valid alternative conceptions of human rights." On the other hand, others[who?] have written that some of these "cultural arguments" can go so far as to undermine the very nature of human freedom and choice, the protection of which is the purpose of the UN declaration. For example, typical versions of Sharia law forbid Muslims from leaving Islam under the penalty of capital punishment. Islamic legal scholar Faisal Kutty argues that existing blasphemy laws in Muslim countries are actually un-Islamic and are a legacy of colonial rule. Mohsen Haredy, an Islamic scholar, states that Muslim countries have their own views of Sharia and blasphemies are the internal issues of those countries.
Ironically, a number of Islamic countries that as of 2014 are among the most resistant to UN intervention in domestic affairs played an invaluable role in the creation of the Declaration, with countries such as Syria and Egypt having been strong proponents of the universality of human rights and the right of countries to self-determination.
"The Right to Refuse to Kill"
Groups such as Amnesty International and War Resisters International have advocated for "The Right to Refuse to Kill" to be added to the Universal Declaration. War Resisters International has stated that the right to conscientious objection to military service is primarily derived from—but not yet explicit in—Article 18 of the UDHR: the right to freedom of thought, conscience, and religion.
Steps have been taken within the United Nations to make this right more explicit, but, to date (2015), those steps have been limited to less significant United Nations documents. Sean MacBride—Assistant Secretary-General of the United Nations and Nobel Peace Prize laureate—has said: "To the rights enshrined in the Universal Declaration of Human Rights one more might, with relevance, be added. It is 'The Right to Refuse to Kill'."
American Anthropological Association
The American Anthropological Association criticized the UDHR while it was in its drafting process. The AAA warned that the document would be defining universal rights from a Western paradigm which would be unfair to countries outside of that scope. They further argued that the West's history of colonialism and Evangelicalism made them a problematic moral representative for the rest of the world. They proposed three notes for consideration with underlying themes of cultural relativism: "1. The individual realizes his personality through his culture, hence respect for individual differences entails a respect for cultural differences", "2. Respect for differences between cultures is validated by the scientific fact that no technique of qualitatively evaluating cultures has been discovered.", and "3. Standards and values are relative to the culture from which they derive so that any attempt to formulate postulates that grow out of the beliefs or moral codes of one culture must to that extent detract from the applicability of any Declaration of Human Rights to mankind as a whole."
During the lead up to the World Conference on Human Rights held in 1993, ministers from Asian states adopted the Bangkok Declaration, reaffirming their governments' commitment to the principles of the United Nations Charter and the Universal Declaration of Human Rights. They stated their view of the interdependence and indivisibility of human rights and stressed the need for universality, objectivity, and non-selectivity of human rights. However, at the same time, they emphasized the principles of sovereignty and non-interference, calling for greater emphasis on economic, social, and cultural rights—in particular, the right to economic development over civil and political rights. The Bangkok Declaration is considered to be a landmark expression of the Asian values perspective, which offers an extended critique of human rights universalism.
Organizations promoting the UDHR
International Federation for Human Rights
The International Federation for Human Rights (FIDH) is nonpartisan, nonsectarian, and independent of any government, and its core mandate is to promote respect for all the rights set out in the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, and the International Covenant on Economic, Social and Cultural Rights.
In 1988, director Stephen R. Johnson and 41 international animators, musicians, and producers created a 20-minute video for Amnesty International to celebrate the 40th Anniversary of the Universal Declaration. The video was to bring to life the Declaration's 30 articles.
Amnesty International celebrated Human Rights Day and the 60th anniversary of the Universal Declaration all over the world by organizing the "Fire Up!" event.
Unitarian Universalist Service Committee
The Unitarian Universalist Service Committee (UUSC) is a non-profit, nonsectarian organization whose work around the world is guided by the values of Unitarian Universalism and the Universal Declaration of Human Rights. It works to provide disaster relief and promote human rights and social justice around the world.
Quaker United Nations Office and American Friends Service Committee
The Quaker United Nations Office and the American Friends Service Committee work on many human rights issues, including improving education on the Universal Declaration of Human Rights. They have developed a Curriculum to help introduce High School students to the Universal Declaration of Human Rights.
American Library Association
In 1997, the council of the American Library Association (ALA) endorsed Article 19 of the Universal Declaration of Human Rights. Along with Article 19, Articles 18 and 20 are also fundamentally tied to the ALA Universal Right to Free Expression and the Library Bill of Rights. Censorship, the invasion of privacy, and interference with opinions are human rights violations according to the ALA.
- Human rights
- Non-binding agreements
- Cairo Declaration on Human Rights in Islam (1990)
- Vienna Declaration and Programme of Action (1993)
- United Nations Millennium Declaration (2000)
- International human rights law
- Fourth Geneva Convention (1949)
- European Convention on Human Rights (1952)
- Convention Relating to the Status of Refugees (1954)
- Convention on the Elimination of All Forms of Racial Discrimination (1969)
- International Covenant on Civil and Political Rights (1976)
- International Covenant on Economic, Social and Cultural Rights (1976)
- Convention on the Elimination of All Forms of Discrimination Against Women (1981)
- Convention on the Rights of the Child (1990)
- Charter of Fundamental Rights of the European Union (2000)
- Convention on the Rights of Persons with Disabilities (2007)
- Thinkers influencing the Declaration
- Charles Malik
- Jacques Maritain
- John Peters Humphrey
- Tommy Douglas
- John Sankey, 1st Viscount Sankey
- Wu Teh Yao
- Peng Chun Chang
- Slavery in international law
- Slave Trade Acts
- Human rights in China (PRC)
- Command responsibility
- Declaration on Great Apes, an as-yet unsuccessful effort to extend some human rights to great apes.
- "Consent of the governed"
- Racial equality proposal (1919)
- The Farewell Sermon (632 CE)
- Youth for Human Rights International
- "The Universal Declaration of Human Rights". un.org.
- Williams 1981. This is the first book edition of the Universal Declaration of Human Rights, with a foreword by Jimmy Carter.
- "United Nations Charter, preamble and article 55". United Nations. Retrieved 2013-04-20.
- Cataclysm and World Response in Drafting and Adoption : The Universal Declaration of Human Rights, udhr.org.
- "UDHR50: Didn't Nazi tyranny end all hope for protecting human rights in the modern world?". Udhr.org. 1998-08-28. Retrieved 2012-07-07.
- "UDHR – History of human rights". Universalrights.net. Retrieved 2012-07-07.
- Morsink 1999, p. 5
- Morsink 1999, p. 133
- Morsink 1999, p. 4
- Universal Declaration of Human Rights. Final authorized text. The British Library. September 1952. Retrieved 16 August 2015.
- The Declaration was drafted during the Chinese Civil War. P.C. Chang was appointed as a representative by the Republic of China, then the recognised government of China, which was later driven from mainland China and now administers only Taiwan and nearby islands.
- "Drafting of the Universal Declaration of Human Rights". Research Guides. United Nations. Dag Hammarskjöld Library. Retrieved 2015-04-17.
- Carlson, Allan: Globalizing Family Values, 12 January 2004.
- CCNMTL. "default". Center for New Media Teaching and Learning (CCNMTL). Columbia University. Retrieved 2013-07-12.
- UNAC. "Questions and answers about the Universal Declaration of Human Rights". United Nations Association in Canada (UNAC). p. "Who are the signatories of the Declaration?". Archived from the original on 2012-09-12. External link in
- Jost Müller-Neuhof (2008-12-10). "Menschenrechte: Die mächtigste Idee der Welt". Der Tagesspiegel (in German). Retrieved 2013-07-12.
- Peter Danchin. "The Universal Declaration of Human Rights: Drafting History - 10. Plenary Session of the Third General Assembly Session". Retrieved 2015-02-25.
- Glendon 2002, pp. 169–70
- "Yearbook of the United Nations 1948–1949 p 535" (PDF). Retrieved 24 July 2014.
- Schabas, William (1998). "Canada and the Adoption of Universal Declaration of Human Rights" (PDF). McGill Law Journal 43: 403.
- Glendon 2002, pp. 62–64.
- Glendon 2002, Chapter 10.
- "The Universal Declaration of Human Rights: 1948–2008". United Nations. Retrieved 15 February 2011.
- "Universal Declaration of Human Rights". United Nations Office of the High Commissioner for Human rights.
- Humphrey JP, "The Universal Declaration of Human Rights: Its History, Impact and Juridical Character", in Ramcharan BG (ed), Human Rights: Thirty Years After the Universal Declaration (1979) pp. 2l, 37; Sohn 1, "The Human Rights Law of the Charter" (1977) 12 Texas Int LJ 129, 133; McDougal MS, Lasswell H and Chen I, Human Rights and World Public Order (1980) pp. 273–274, 325–327; D'Amato A, International Law: Process and Prospect( 1986) pp. 123–147.
- Office of the High Commissioner for Human Rights. "Digital record of the UDHR". United Nations.
- "Statement by Charles Malik as Representative of Lebanon to the Third Committee of the UN General Assembly on the Universal Declaration". 6 November 1948. Archived from the original on 28 September 2008.
- Michael E. Eidenmuller (1948-12-09). "Eleanor Roosevelt: Address to the United Nations General Assembly". Americanrhetoric.com. Retrieved 2012-07-07.
- "John Paul II, Address to the U.N., October 2, 1979 and October 5, 1995". Vatican.va. Retrieved 2012-07-07.
- Nisrine Abiad (2008). Sharia, Muslim states and international human rights treaty obligations: a comparative study. BIICL. pp. 60–65. ISBN 978-1-905221-41-7.
- Price 1999, p. 163
- Littman, D (February–March 1999). "Universal Human Rights and Human Rights in Islam". Midstream. Archived from the original on 2006-05-12.
- "Resolution No 60/27-P". Organisation of the Islamic Conference. 2000-06-27. Retrieved 2011-06-02.
- "Universal Declaration of Human Rights". Retrieved 2015-10-30.
- "Are Human Rights Compatible with Islam?". religiousconsultation.org. Retrieved 2012-11-12.
- "The Rights of God". Georgetown University Press, 2007.
- "Non-Western Societies Have Influenced Human Rights". in Jacqueline Langwith (ed.), Opposing Viewpoints: Human Rights, Gale/Greenhaven Press: Chicago, 2007.
- "Why Blasphemy Laws Are Actually Anti-Islamic". The Huffington Post.
- "Why Should Blasphemy Be Punishable at All?". onislam.net.
- Professor Susan Waltz: Universal Rights Group, Syria calls for greater UN intervention in domestic human rights situations….
- Out of the margins: the right to conscientious objection to military service in Europe: An announcement of Amnesty International's forthcoming campaign and briefing for the UN Commission on Human Rights, 31 March 1997. Amnesty International.
- A Conscientious Objector's Guide to the UN Human Rights System, Parts 1, 2 & 3, Background Information on International Law for COs, Standards which recognise the right to conscientious objection, War Resisters' International.
- Sean MacBride, The Imperatives of Survival, Nobel Lecture, 12 December 1974, The Nobel Foundation – Official website of the Nobel Foundation. (English index page; hyperlink to Swedish site.) From Nobel Lectures in Peace 1971–1980.
- "Statement on Human Rights" (PDF). Retrieved 2015-10-30.
- "Final Declaration Of The Regional Meeting For Asia Of The World Conference On Human Rights". Law.hku.hk. Retrieved 2012-07-07.
- Contribution to the EU Multi-stakeholder Forum on CSR (Corporate Social Responsibility), 10 February 2009; accessed on 9 November 2009
- Information Partners, web site of the UNHCR, last updated 25 February 2010, 16:08 GMT (web retrieval 25 February 2010, 18:11 GMT)
- "UDHR film". Amnesty International. Retrieved 2013-07-19.
- "Fire Up!". Amnesty International. Retrieved 2013-07-19.
- "UNHCR Partners". UNHCR. Retrieved 11 November 2014.
- "AFSC Universal Declaration of Human Rights web page". American Friends Service Committee. Retrieved 11 November 2014.
- "Resolution on IFLA, Human Rights and Freedom of Expression". ala.org.
- "The Universal Right to Free Expression:". ala.org.
- Glendon, Mary Ann (2002). A world made new: Eleanor Roosevelt and the Universal Declaration of Human Rights. Random House. ISBN 978-0-375-76046-4.
- Hashmi, Sohail H. (2002). Islamic political ethics: civil society, pluralism, and conflict. Princeton University Press. ISBN 978-0-691-11310-4.
- Morsink, Johannes (1999). The Universal Declaration of Human Rights: origins, drafting, and intent. University of Pennsylvania Press. ISBN 978-0-8122-1747-6.
- Price, Daniel E. (1999). Islamic political culture, democracy, and human rights: a comparative study. Greenwood Publishing Group. ISBN 978-0-275-96187-9.
- Williams, Paul (1981). The International bill of human rights. United Nations General Assembly. Entwhistle Books. ISBN 978-0-934558-07-5.
- Feldman, Jean-Philippe. "Hayek's Critique Of The Universal Declaration Of Human Rights". Journal des Economistes et des Etudes Humaines, Volume 9, Issue 4 (December 1999): 1145-6396.
- Nurser, John. "For All Peoples and All Nations. Christian Churches and Human Rights.". (Geneva: WCC Publications, 2005).
- Universal Declaration of Human Rights pages at Columbia University (Centre for the Study of Human Rights), including article by article commentary, video interviews, discussion of meaning, drafting and history.
- Introductory note by Antônio Augusto Cançado Trindade and procedural history on the Universal Declaration of Human Rights in the Historic Archives of the United Nations Audiovisual Library of International Law
- Text of the UDHR
- Official translations of the UDHR
- Resource Guide on the Universal Declaration of Human Rights at the UN Library, Geneva.
- Drafting of the Universal Declaration of Human Rights - documents and meetings records - United Nations Dag Hammarskjöld Library
- Questions and answers about the Universal Declaration
- Text, Audio, and Video excerpt of Eleanor Roosevelt's Address to the United Nations on the Universal Declaration of Human Rights
- UDHR – Education
- Revista Envío – A Declaration of Human Rights For the 21st Century
- DHpedia: Universal Declaration of Human Rights
- The Laws of Burgos: 500 Years of Human Rights from the Law Library of Congress blog.
- Librivox: Human-read audio recordings in several languages
- Animated presentation of the Universal Declaration of Human Rights by Amnesty International on YouTube (in English duration 20 minutes and 23 seconds).
- Audio: Statement by Charles Malik as Representative of Lebanon to the Third Committee of the UN General Assembly on the Universal Declaration, 6 November 1948
- UN Department of Public Information introduction to the drafters of the Declaration
- Audiovisual material on the Universal Declaration of Human Rights in the Historic Archives of the United Nations Audiovisual Library of International Law | https://en.wikipedia.org/wiki/Universal_Declaration_of_Human_Rights |
4.5625 | Nowadays many pupils, when given a research task, might immediately think to themselves, "I'll just Google that." Internet search engines (of which Google is only one of many) are powerful tools, but many pupils use only a fraction of their power and can have difficulty finding the information specific to the task. There are many resources now available to help develop pupil skills in searching more effectively using online search engines. And, of course, when pupils do find information, how do they know it is appropriate for the task? How do they evaluate what is suitable, and how do they present it and show where the information was found?
Tools to Help Teach Research Skills
The Big 6
One method of teaching information skills for investigating sources of information from databases, encyclopedias and the Internet is that known as “the Big Six.” This process sets out the steps as follows:
1. Define the task – what needs to be done?
2. Information Seeking Strategies – what resources can I use?
3. Location and Access – where can I find these resources?
4. Use of information – what can I use from these resources?
5. Synthesis – what can I make to finish the job?
6. Evaluation – how will I know I did my job well?
The Kentucky Virtual Library How to Do Research
The Kentucky Virtual Library has an online poster-style How to Do Research site for guiding younger pupils through the steps to finding the information they need on any topic, whether in print form, multimedia or online. Presented in a visual comic/game style, it explains in child-friendly language the process of finding the information being sought. Each page of advice is presented as a set of easy-to-digest, straightforward steps, breaking down each task (whether finding the information, recording it, evaluating it, or presenting it) in a cartoon-style, visual, interactive way that makes it attractive to primary users.
Finding Dulcinea – How to Search the Internet – aimed at older pupils, this provides a host of helpful tips and links to a variety of resources about searching and using information from the Internet. It includes sections on What Is the Internet, Web Site Credibility, How Search Engines Work, Choosing a Search Engine, Online Databases, Social Bookmarking Tools, How to Cite a Source.
Ergo – Teaching Research Skills
Ergo – Teaching Research Skills from the State Library of Victoria, Australia, is a guide for pupils to finding the information they need for a school assignment. The guide provides helpful explanations, hints, tips and further resources for each of the steps: Define the task, Locate information, Select resources, Organise notes, Present the ideas, Evaluate your work.
Common Sense Media Digital Curriculum
Common Sense Media Digital Curriculum has a section on teaching online research for various age groups. Each section has lesson plans with ideas and resources for teaching different aspects of research online with pupils. Teachers can select resources by age group or stage (all stages in primary school are included, and resources are age-appropriate) to support the research topics which best suit the needs of their pupils.
All About Explorers
All About Explorers has been designed as an interactive Internet search task to guide pupils through making more discerning use of information presented online. The task includes a range of spoof material to help show primary pupils how to evaluate what they read online, and how to be selective about the information they find. The tasks are presented to pupils as an interactive Webquest. There is a section for teachers which includes a series of lessons and explanations of what the pupils are learning about better online searching as they complete each webquest.
Save the Tree Octopus – an example of a spoof website which could be used to show pupils that, even though the site looks very well put together and with a host of features to make it look authoritative, websites can provide completely fictitious information.
Ten Tips for Teaching How to Research and Filter Information
Ten Tips for Teaching Students How to Research and Filter Information – a post by Kathleen Morris which details advice for ten steps for showing primary school pupils how to find and use information: Search, Delve, Source, Validity, Purpose, Background, Teach, Justify, Path, Cite.
Google Tools for Better Searching
Google has produced a series of posters for educators to help pupils use the Internet search engine more effectively.
Google A Day is a daily-changing search challenge which could be used by a class to make better use of a search engine. Each day a new challenge is presented (and you can go back to previous challenges if you wish). Each challenge is presented as a question which pupils are challenged to answer by using the search engine. If they are not sure how to get started, pupils can click on the hint to get a bit of help towards a better search to find the answer. The answer itself is also provided. In addition, there are links to tips and techniques for better Internet searching.
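As a rough illustration of the kind of search technique these resources teach, the sketch below combines a few widely used Google search operators (an exact phrase in quotes, a site: restriction, a filetype: filter and an excluded word) and shows how such a query can be assembled into a search URL. The example query, the helper function name and the choice of Python are my own assumptions for demonstration purposes; they are not taken from Google A Day or any of the other resources listed here.

```python
from urllib.parse import quote_plus

def build_search_url(query):
    """Turn a plain-text query into a Google search URL."""
    return "https://www.google.com/search?q=" + quote_plus(query)

# A few widely used search operators, combined into one query:
#   "exact phrase"  - pages must contain these words together, in this order
#   site:edu        - restrict results to a domain (here, .edu sites)
#   filetype:pdf    - only return PDF documents
#   -word           - exclude pages containing that word
query = '"tree octopus" site:edu filetype:pdf -hoax'
print(build_search_url(query))
```

Pupils would normally just type the query itself into the search box; the point of the sketch is simply to show how the operators combine into a single, more precise search.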
Google for Educators is a collection of resources collated by David Andrade on his Educational Technology Guy blog. This brings together a series of resources providing tips, ideas and guides to how the vast array of Google tools can be used in schools, including how to find what you’re wanting using the search engine.
Interesting Ways to Use Google Search in the Classroom is a collection of ideas collected by Tom Barrett shared by many teachers – like others in the “Interesting ideas” series it grows as more teachers contribute ideas. So if you have a way you have used Google Search in your classroom then you too can add yours there too.
Google Guide is an online interactive tutorial and reference for experienced users, novices, and everyone in between. Nancy Blachman developed Google Guide to provide more information about Google’s capabilities, features, and services. There are hints and ideas, a printable sheet of tips, and interactive exercises teachers can use with pupils to guide them them through making use of different techniques for more effective searching for information.
Entire Guide to Google Search Features for teachers and Students by Mohamed Kharbach details steps, tips and tools to make better use of the Google search engine, from the basics to advanced searching to using a variety of features of the search engine in many different contexts.
Google Search Education Evangelism is a site with lessons to download for free, including PowerPoint presentations and printable guides about making the best of Google search tools. These are arranged in categories and for different audiences, whether for teacher self-study or for use with pupils.
Google Search Education has lesson plans and video tutorials in categories covering various search skills in using Google. Within each category there are tutorials presented to suit different skill levels.
10 Google Search Tips by Catlin Tucker provides 10 Questions & 10 Answers to Help You or your pupils Search Smarter!
12 Ways to use Google Search by Degree of Difficulty is a series of lessons by Jeff Dunn providing graded techniques for being better at using the search facility with Google – for each step there are three levels of difficulty so you just choose which best suits your need for your class.
PhD in Googling! is an animated presentation with tips on using the Google search engine. Thanks to David Andrade for sharing this. PhD in Googling presents a series of graphically interesting screens with a nugget-sized tip on each page, with animated text appearing to explain the tip.
Update Your Search Methods – a blog post by Chris Betcher explaining how in 2013 Google changed the way the search engine works to better interpret plain-English questions, the way someone would ask a question if speaking, rather than relying on keywords.
Get More Out of Google is a poster with advice and practical tips for making more efficient use of the Google search engine. | https://blogs.glowscotland.org.uk/fa/ICTFalkirkPrimaries/tag/google/ |
4 | Threatened Species of Shark Bay
Shark Bay World Heritage Area is a refuge for some of the world’s most endangered animals and plants. Its isolated islands and peninsulas have been largely spared the feral predators and habitat destruction that wreaked havoc on mainland Australia. The importance of these habitats in protecting vulnerable wildlife, and providing scientific information on the impact of habitat change, was a major factor in Shark Bay being declared a World Heritage site.
Two of Shark Bay’s islands, Bernier and Dorre Islands, are the last stronghold for five critically endangered land mammals – four of which occur in the wild nowhere else on Earth.
An ecological restoration project is underway on the largest of Shark Bay's islands, Dirk Hartog Island. This project aims to restore habitats, remove feral cats and reintroduce ten species lost from the island during its pastoral era. It will also introduce two native species not previously known to occur on the island.
In other parts of Shark Bay, cats, foxes and grazing stock have been removed in order to allow the ecosystem to rejuvenate. Captive-bred animals have been introduced to places such as Francois Peron National Park as part of Project Eden, a local conservation initiative.
The Australian Wildlife Conservancy has established a wildlife sanctuary on Faure Island. After removing cats and goats they successfully introduced a number of species. Find out more here.
Shark Bay mouse (Pseudomys fieldi)
Spiny-tailed skink (Egernia stokesii)
Small dragon orchid (Caladenia barbarella)
Shark Bay also features many endemic plants, including two threatened species.
For more information about Western Australian wildlife, check out the WA Museum Fauna Base website. Learn more about Western Australia’s plants at the West Australian Herbarium’s FloraBase website. | http://www.sharkbay.org.au/nature-of-shark-bay-threatened-species.aspx |
4.0625 | May 31, 2002
Antarctica is home to more than 70 lakes that lie thousands of meters under the surface of the continental ice sheet, including one under the South Pole itself. Lake Vostok, beneath Russia's Vostok Station, is one of the largest of these subglacial lakes, comparable in size and depth to Lake Ontario, one of the North American Great Lakes. There is some evidence that Vostok's waters may contain microbial life. Exploration of the lake to confirm that life exists will be an international effort and will require the development of ultra-clean technologies to prevent contaminating the waters.
The National Science Foundation, as manager of the U.S. Antarctic Program, coordinates nearly all U.S. research in Antarctica and would lead U.S. participation in any international effort to explore the lake. NSF's Office of Polar Programs has established a steering committee to study the possible scientific exploration of Antarctic subglacial lakes. See: http://www.nsf.gov/od/opp/antarct/subglclk.jsp
Vostok Station is located in one of the world's most inaccessible places, near the South Geomagnetic Pole, at the center of the East Antarctic Ice Sheet. The station is 3.5 kilometers (11,484 feet) above sea level. The coldest temperature ever recorded on Earth, -89.2 degrees Celsius (-128.6 degrees Fahrenheit), was measured at Vostok Station on July 21, 1983.
Lake Vostok's physical characteristics have led scientists to argue that it might serve as an earthbound analog for Europa, a moon of Jupiter. Confirming that life can survive in Lake Vostok might strengthen the argument for the presence of life on Europa.
Russian and British scientists confirmed the lake's existence in 1996 by integrating a variety of data, including airborne ice-penetrating radar observations and spaceborne radar altimetry.
Researchers working at Vostok Station have already contributed greatly to climatology by producing one of the world's longest ice cores in 1998. A joint Russian, French and U.S. team drilled and analyzed the core, which is 3,623 meters (11,886 feet) long.
The core contains layers of ice deposited over millennia, representing a record of Earth's climate stretching back more than 420,000 years. Drilling of the core was deliberately halted roughly 150 meters (492 feet) above the suspected boundary where the ice sheet and the liquid waters of the lake are thought to meet, in order to prevent contamination of the lake.
It is from samples of this ice core, specifically from ice that is thought to have formed from lake water freezing onto the base of the ice sheet, that NSF-funded scientists believe they have found evidence that the lake water supports life. Their research was published in Science in 1999.
For more information, see: http://www.nsf.gov/od/lpa/news/press/99/pr9972.htm
More recently, NSF-funded researchers from the Lamont-Doherty Earth Observatory at Columbia University, using data gathered by the University of Texas Institute for Geophysics, published a paper in Nature suggesting that the hydrodynamics of Lake Vostok may make it possible to search for evidence of life in the layers of ice that accumulate on the lake's eastern shore. Scientists say such a possibility would provide another avenue for exploring the lake's potential as a harbor for microscopic life, in addition to exploring the lake itself. For more information, see: http://www.nsf.gov/od/lpa/news/02/pr0219.htm
International consensus building
Discussions at an NSF workshop for U.S. researchers held in 1998, a subsequent international meeting held in Cambridge, England in 1999, as well as other international meetings about subglacial lakes, have formed the basis for a developing scientific consensus on whether, and how, to proceed with exploring the lake's waters.
To read a report from the 1998 NSF workshop "Lake Vostok: A Curiosity or a Focus for Interdisciplinary Study?" see: http://www.ldeo.columbia.edu/vostok/
The Scientific Committee on Antarctic Research (SCAR) hosts a Web site on subglacial lake exploration with links to several reports from various international workshops. See: http://salegos-scar.montana.edu/
Peter West, NSF, (703) 292-8070, email@example.com
The National Science Foundation (NSF) is an independent federal agency that supports fundamental research and education across all fields of science and engineering. In fiscal year (FY) 2016, its budget is $7.5 billion. NSF funds reach all 50 states through grants to nearly 2,000 colleges, universities and other institutions. Each year, NSF receives more than 48,000 competitive proposals for funding and makes about 12,000 new funding awards. NSF also awards about $626 million in professional and service contracts yearly.
Useful NSF Web Sites:
NSF Home Page: http://www.nsf.gov
NSF News: http://www.nsf.gov/news/
For the News Media: http://www.nsf.gov/news/newsroom.jsp
Science and Engineering Statistics: http://www.nsf.gov/statistics/
Awards Searches: http://www.nsf.gov/awardsearch/ | http://nsf.gov/news/news_summ.jsp?cntn_id=103062 |
4.03125 | Food preservation involves preventing the growth of bacteria, fungi (such as yeasts), or other micro-organisms (although some methods work by introducing benign bacteria or fungi to the food), as well as retarding the oxidation of fats that cause rancidity. Food preservation may also include processes that inhibit visual deterioration, such as the enzymatic browning reaction in apples after they are cut during food preparation.
Many processes designed to preserve food will involve a number of food preservation methods. Preserving fruit by turning it into jam, for example, involves boiling (to reduce the fruit’s moisture content and to kill bacteria, etc.), sugaring (to prevent their re-growth) and sealing within an airtight jar (to prevent recontamination). Some traditional methods of preserving food have been shown to have a lower energy input and carbon footprint, when compared to modern methods. However, some methods of food preservation are known to create carcinogens, and in 2015, the International Agency for Research on Cancer of the World Health Organization classified processed meat, i.e. meat that has undergone salting, curing, fermenting, and smoking, as "carcinogenic to humans".
Maintaining or creating nutritional value, texture and flavor is an important aspect of food preservation, although, historically, some methods drastically altered the character of the food being preserved. In many cases these changes have come to be seen as desirable qualities – cheese, yogurt and pickled onions being common examples.
- 1 Traditional techniques
- 2 Curing
- 3 Industrial/modern techniques
- 4 See also
- 5 Notes
- 6 References
- 7 External links
Drying is one of the oldest techniques used to hamper the decomposition of food products. As early as 12,000 B.C., Middle Eastern and Oriental cultures were drying foods using the power of the sun. Vegetables and fruit are naturally dried by the sun and wind, but "still houses" were built in areas that did not have enough sunlight to dry things. A fire would be built inside the building to provide the heat to dry the various fruits, vegetables, and herbs.
Cooling preserves foods by slowing down the growth and reproduction of micro-organisms and the action of enzymes that cause food to rot. The introduction of commercial and domestic refrigerators drastically improved the diets of many in the Western world by allowing foods such as fresh fruit, salads and dairy products to be stored safely for longer periods, particularly during warm weather.
Freezing is also one of the most commonly used processes, both commercially and domestically, for preserving a very wide range of foods, including prepared foods that would not have required freezing in their unprepared state. For example, potato waffles are stored in the freezer, but potatoes themselves require only a cool dark place to ensure many months' storage. Cold stores provide large-volume, long-term storage for strategic food stocks held in case of national emergency in many countries.
Heating to temperatures which are sufficient to kill microorganisms inside the food is a method used with perpetual stews. Milk is also boiled before storing to kill many microorganisms.
Salting or curing draws moisture from the meat through a process of osmosis. Meat is cured with salt or sugar, or a combination of the two. Nitrates and nitrites are also often used to cure meat and contribute the characteristic pink color, as well as inhibition of Clostridium botulinum. It was a main method of preservation in medieval times and around the 1700s.
The earliest cultures used sugar as a preservative, and it was commonplace to store fruit in honey. Similar to pickled foods, sugar cane was brought to Europe through the trade routes. In northern climates without sufficient sun to dry foods, preserves are made by heating the fruit with sugar. "Sugar tends to draw water from the microbes (plasmolysis). This process leaves the microbial cells dehydrated, thus killing them. In this way, the food will remain safe from microbial spoilage." Sugar is used to preserve fruits, either in an anti-microbial syrup with fruit such as apples, pears, peaches, apricots and plums, or in crystallized form where the preserved material is cooked in sugar to the point of crystallization and the resultant product is then stored dry. This method is used for the skins of citrus fruit (candied peel), angelica and ginger. Sugaring can also be used in making jams and jellies.
Smoking is used to lengthen the shelf life of perishable food items. This effect is achieved by exposing the food to smoke from burning plant materials such as wood. Smoke deposits a number of pyrolysis products onto the food, including the phenols syringol, guaiacol and catechol. These compounds aid in the drying and preservation of meats and other foods. Most commonly subjected to this method of food preservation are meats and fish that have undergone curing. Fruits and vegetables like paprika, cheeses, spices, and ingredients for making drinks such as malt and tea leaves are also smoked, but mainly for cooking or flavoring them. It is one of the oldest food preservation methods, which probably arose after the development of cooking with fire.
Pickling is a method of preserving food in an edible anti-microbial liquid. Pickling can be broadly classified into two categories: chemical pickling and fermentation pickling.
In chemical pickling, the food is placed in an edible liquid that inhibits or kills bacteria and other micro-organisms. Typical pickling agents include brine (high in salt), vinegar, alcohol, and vegetable oil, especially olive oil but also many other oils. Many chemical pickling processes also involve heating or boiling so that the food being preserved becomes saturated with the pickling agent. Common chemically pickled foods include cucumbers, peppers, corned beef, herring, and eggs, as well as mixed vegetables such as piccalilli.
In fermentation pickling, the food itself produces the preservation agent, typically by a process that produces lactic acid. Fermented pickles include sauerkraut, nukazuke, kimchi, surströmming, and curtido. Some pickled cucumbers are also fermented.
Sodium hydroxide (lye) makes food too alkaline for bacterial growth. Lye will saponify fats in the food, which will change its flavor and texture. Lutefisk uses lye in its preparation, as do some olive recipes. Modern recipes for century eggs also call for lye.
Canning involves cooking food, sealing it in sterile cans or jars, and boiling the containers to kill or weaken any remaining bacteria as a form of sterilization. It was invented by the French confectioner Nicolas Appert. By 1806, this process was used by the French Navy to preserve meat, fruit, vegetables, and even milk. Although Appert had discovered a new way of preservation, it wasn't understood until 1864 when Louis Pasteur found the relationship between microorganisms, food spoilage, and illness.
Foods have varying degrees of natural protection against spoilage and may require that the final step occur in a pressure cooker. High-acid fruits like strawberries require no preservatives to can and only a short boiling cycle, whereas marginal vegetables such as carrots require longer boiling and addition of other acidic elements. Low-acid foods, such as vegetables and meats, require pressure canning. Food preserved by canning or bottling is at immediate risk of spoilage once the can or bottle has been opened.
Lack of quality control in the canning process may allow ingress of water or micro-organisms. Most such failures are rapidly detected as decomposition within the can causes gas production and the can will swell or burst. However, there have been examples of poor manufacture (underprocessing) and poor hygiene allowing contamination of canned food by the obligate anaerobe Clostridium botulinum, which produces an acute toxin within the food, leading to severe illness or death. This organism produces no gas or obvious taste and remains undetected by taste or smell. Its toxin is denatured by cooking, however. Cooked mushrooms, handled poorly and then canned, can support the growth of Staphylococcus aureus, which produces a toxin that is not destroyed by canning or subsequent reheating.
Food may be preserved by cooking in a material that solidifies to form a gel. Such materials include gelatin, agar, maize flour, and arrowroot flour. Some foods naturally form a protein gel when cooked, such as eels and elvers, and sipunculid worms, which are a delicacy in Xiamen, in the Fujian province of the People's Republic of China. Jellied eels are a delicacy in the East End of London, where they are eaten with mashed potatoes. Potted meats in aspic (a gel made from gelatine and clarified meat broth) were a common way of serving meat off-cuts in the UK until the 1950s. Many jugged meats are also jellied.
Meat can be preserved by jugging. Jugging is the process of stewing the meat (commonly game or fish) in a covered earthenware jug or casserole. The animal to be jugged is usually cut into pieces, placed into a tightly-sealed jug with brine or gravy, and stewed. Red wine and/or the animal's own blood is sometimes added to the cooking liquid. Jugging was a popular method of preserving meat up until the middle of the 20th century.
Burial of food can preserve it due to a variety of factors: lack of light, lack of oxygen, cool temperatures, pH level, or desiccants in the soil. Burial may be combined with other methods such as salting or fermentation. Most foods can be preserved in soil that is very dry and salty (thus a desiccant) such as sand, or soil that is frozen.
Many root vegetables are very resistant to spoilage and require no other preservation than storage in cool dark conditions, for example by burial in the ground, such as in a storage clamp. Century eggs are created by placing eggs in alkaline mud (or other alkaline substance), resulting in their "inorganic" fermentation through raised pH instead of spoiling. The fermentation preserves them and breaks down some of the complex, less flavorful proteins and fats into simpler, more flavorful ones. Cabbage was traditionally buried in the fall in northern farms in the U.S. for preservation. Some methods keep it crispy while other methods produce sauerkraut. A similar process is used in the traditional production of kimchi. Sometimes meat is buried under conditions that cause preservation. If buried on hot coals or ashes, the heat can kill pathogens, the dry ash can desiccate, and the earth can block oxygen and further contamination. If buried where the earth is very cold, the earth acts like a refrigerator.
In Orissa, India, rice is commonly stored by burying it underground. This method keeps it usable for three to six months during the dry season.
The earliest form of curing was dehydration. To accelerate this process, salt is usually added. In the culinary world, it was common to choose raw salts from various sources (rock salt, sea salt, etc.). More modern "examples of salts that are used as preservatives include sodium chloride (NaCl), sodium nitrate (NaNO3) and sodium nitrite (NaNO2). Even at mild concentrations (up to 2%), sodium chloride, found in many food products, is capable of neutralizing the antimicrobial character of natural compounds."
Some foods, such as many cheeses, wines, and beers, use specific micro-organisms that combat spoilage from other less-benign organisms. These micro-organisms keep pathogens in check by creating an environment toxic for themselves and other micro-organisms by producing acid or alcohol. Methods of fermentation include, but are not limited to, starter micro-organisms, salt, hops, controlled (usually cool) temperatures and controlled (usually low) levels of oxygen. These methods are used to create the specific controlled conditions that will support the desirable organisms that produce food fit for human consumption.
Fermentation is the microbial conversion of starch and sugars into alcohol. Not only can fermentation produce alcohol, but it can also be a valuable preservation technique. Fermentation can also make foods more nutritious and palatable. For example, drinking water in the Middle Ages was dangerous because it often contained pathogens that could spread disease. When the water is made into beer, the resulting alcohol kills any bacteria in the water that could make people sick. Additionally, the water now has the nutrients from the barley and other ingredients, and the microorganisms can also produce vitamins as they ferment.
Techniques of food preservation were developed in research laboratories for commercial applications.
Pasteurization is a process for the preservation of liquid food. It was originally applied to combat the souring of young local wines. Today, the process is mainly applied to dairy products. In this method, milk is heated to about 70 °C for 15 to 30 seconds to kill the bacteria present in it and then quickly cooled to 10 °C to prevent the remaining bacteria from growing. The milk is then stored in sterilized bottles or pouches in cold places. This method was invented by Louis Pasteur, a French chemist, in 1862.
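As a rough, back-of-the-envelope sketch of why a short hold at about 70 °C can be effective, the snippet below assumes an illustrative decimal reduction time (D-value); the number is invented for the example and is not a measured property of any particular organism.

```python
# Toy illustration of thermal death during pasteurization.
# Assumption (illustrative only): the target organism has a decimal reduction
# time (D-value) of 3 seconds at 70 C, i.e. every 3 s of holding kills 90% of it.
d_value_seconds = 3.0      # assumed D-value at 70 C (made up for the example)
hold_time_seconds = 15.0   # the shorter hold time mentioned above

log_reductions = hold_time_seconds / d_value_seconds
surviving_fraction = 10 ** (-log_reductions)

print(f"{log_reductions:.0f} log reductions -> "
      f"{surviving_fraction:.0e} of the original bacteria survive")
# Output: 5 log reductions -> 1e-05 of the original bacteria survive
```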
Vacuum-packing stores food in a vacuum environment, usually in an air-tight bag or bottle. The vacuum environment deprives bacteria of the oxygen needed for survival. Vacuum-packing is commonly used for storing nuts to reduce the loss of flavor from oxidation. A major drawback of vacuum packaging, at the consumer level, is that vacuum sealing can deform contents and rob certain foods, such as cheese, of their flavor.
Artificial food additives
Preservative food additives can be antimicrobial, which inhibit the growth of bacteria or fungi, including mold, or antioxidant, such as oxygen absorbers, which inhibit the oxidation of food constituents. Common antimicrobial preservatives include calcium propionate, sodium nitrate, sodium nitrite, sulfites (sulfur dioxide, sodium bisulfite, potassium hydrogen sulfite, etc.) and disodium EDTA. Antioxidants include BHA and BHT. Other preservatives include formaldehyde (usually in solution), glutaraldehyde (kills insects), ethanol, and methylchloroisothiazolinone.
Irradiation of food is the exposure of food to ionizing radiation. The two types of ionizing radiation used are beta particles (high-energy electrons) and gamma rays (emitted from radioactive sources as cobalt-60 or cesium-137). Treatment effects include killing bacteria, molds, and insect pests, reducing the ripening and spoiling of fruits, and at higher doses inducing sterility. The technology may be compared to pasteurization; it is sometimes called "cold pasteurization", as the product is not heated.
The irradiation process is not directly related to nuclear energy, but it does use radioactive isotopes produced in nuclear reactors. Cobalt-60, for example, does not occur naturally and can only be produced through neutron bombardment of cobalt-59. Ionizing radiation at high energy levels is hazardous to life (hence its usefulness in sterilisation); for this reason, irradiation facilities have a heavily shielded irradiation room where the process takes place. Radiation safety procedures are used to ensure that neither the workers in such facilities nor the environment receives any radiation dose above administrative limits. Irradiated food does not and cannot become radioactive. National and international expert bodies have declared food irradiation "wholesome", and organizations of the United Nations, such as the World Health Organization and the Food and Agriculture Organization, endorse it; however, the wholesomeness of consuming such food is disputed by opponents and consumer organizations. International legislation on whether food may be irradiated or not varies worldwide from no regulation to full banning. Irradiation may allow lower-quality or contaminated foods to be rendered marketable.
Approximately 500,000 tons of food items are irradiated per year worldwide in over 40 countries. These are mainly spices and condiments with an increasing segment of fresh fruit irradiated for fruit fly quarantine.
Pulsed electric field electroporation
Pulsed electric field (PEF) electroporation is a method for processing cells by means of brief pulses of a strong electric field. PEF holds potential as a low-temperature alternative to pasteurization for sterilizing food products. In PEF processing, a substance is placed between two electrodes, then the pulsed electric field is applied. The electric field enlarges the pores of the cell membranes, which kills the cells and releases their contents. PEF for food processing is a developing technology still being researched. There have been limited industrial applications of PEF processing for the pasteurization of fruit juices: several PEF-treated juices are available on the market in Europe, and a juice pasteurization application in the US has used PEF for several years. For cell disintegration, potato processors in particular show great interest in PEF technology as an efficient alternative to their preheaters. Potato applications are already operational in the US and Canada. There are also commercial PEF potato applications in various countries in Europe, as well as in Australia, India and China.
Modifying the atmosphere is a way to preserve food by controlling the atmosphere around it. Salad crops, which are notoriously difficult to preserve, are now being packaged in sealed bags with an atmosphere modified to reduce the oxygen (O2) concentration and increase the carbon dioxide (CO2) concentration. There is concern that, although salad vegetables retain their appearance and texture in such conditions, this method of preservation may not retain nutrients, especially vitamins. There are two methods for preserving grains with carbon dioxide. One method is placing a block of dry ice in the bottom and filling the can with the grain. Another method is purging the container from the bottom with gaseous carbon dioxide from a cylinder or bulk supply vessel.
Nitrogen gas (N2) at concentrations of 98% or higher is also used effectively to kill insects in the grain through hypoxia. However, carbon dioxide has an advantage in this respect, as it kills organisms through hypercarbia and hypoxia (depending on concentration), but it requires concentrations of above 35%, or so. This makes carbon dioxide preferable for fumigation in situations where a hermetic seal cannot be maintained.
Controlled Atmospheric Storage (CA): "CA storage is a non-chemical process. Oxygen levels in the sealed rooms are reduced, usually by the infusion of nitrogen gas, from the approximate 21 percent in the air we breathe to 1 percent or 2 percent. Temperatures are kept at a constant 0 to 2 °C (32 to 36 °F). Humidity is maintained at 95 percent and carbon dioxide levels are also controlled. Exact conditions in the rooms are set according to the apple variety. Researchers develop specific regimens for each variety to achieve the best quality. Computers help keep conditions constant." "Eastern Washington, where most of Washington’s apples are grown, has enough warehouse storage for 181 million boxes of fruit, according to a report done in 1997 by managers for the Washington State Department of Agriculture Plant Services Division. The storage capacity study shows that 67 percent of that space —enough for 121,008,000 boxes of apples — is CA storage."
Air-tight storage of grains (sometimes called hermetic storage) relies on the respiration of grain, insects, and fungi that can modify the enclosed atmosphere sufficiently to control insect pests. This is a method of great antiquity, as well as having modern equivalents. The success of the method relies on having the correct mix of sealing, grain moisture, and temperature.
This process subjects the surface of food to a "flame" of ionized gas molecules, such as helium or nitrogen. This causes micro-organisms to die off on the surface.
High-pressure food preservation
High-pressure food preservation or pascalization refers to the use of a food preservation technique that makes use of high pressure. "Pressed inside a vessel exerting 70,000 pounds per square inch (480 MPa) or more, food can be processed so that it retains its fresh appearance, flavor, texture and nutrients while disabling harmful microorganisms and slowing spoilage." By 2005, the process was being used for products ranging from orange juice to guacamole to deli meats and widely sold.
Biopreservation is the use of natural or controlled microbiota or antimicrobials as a way of preserving food and extending its shelf life. Beneficial bacteria or the fermentation products produced by these bacteria are used in biopreservation to control spoilage and render pathogens inactive in food. It is a benign ecological approach which is gaining increasing attention.
Of special interest are lactic acid bacteria (LAB). Lactic acid bacteria have antagonistic properties that make them particularly useful as biopreservatives. When LABs compete for nutrients, their metabolites often include active antimicrobials such as lactic acid, acetic acid, hydrogen peroxide, and peptide bacteriocins. Some LABs produce the antimicrobial nisin, which is a particularly effective preservative.
These days, LAB bacteriocins are used as an integral part of hurdle technology. Using them in combination with other preservative techniques can effectively control spoilage bacteria and other pathogens, and can inhibit the activities of a wide spectrum of organisms, including inherently resistant Gram-negative bacteria.
Hurdle technology is a method of ensuring that pathogens in food products can be eliminated or controlled by combining more than one approach. These approaches can be thought of as "hurdles" the pathogen has to overcome if it is to remain active in the food. The right combination of hurdles can ensure all pathogens are eliminated or rendered harmless in the final product.
Hurdle technology has been defined by Leistner (2000) as an intelligent combination of hurdles that secures the microbial safety and stability as well as the organoleptic and nutritional quality and the economic viability of food products. The organoleptic quality of the food refers to its sensory properties, that is its look, taste, smell, and texture.
Examples of hurdles in a food system are high temperature during processing, low temperature during storage, increasing the acidity, lowering the water activity or redox potential, and the presence of preservatives or biopreservatives. According to the type of pathogens and how risky they are, the intensity of the hurdles can be adjusted individually to meet consumer preferences in an economical way, without sacrificing the safety of the product.
Principal hurdles used for food preservation (after Leistner, 1995):

| Parameter | Symbol | Application |
| --- | --- | --- |
| Low temperature | T | Chilling, freezing |
| Reduced water activity | aw | Drying, curing, conserving |
| Increased acidity | pH | Acid addition or formation |
| Reduced redox potential | Eh | Removal of oxygen or addition of ascorbate |
| Biopreservatives | | Competitive flora such as microbial fermentation |
| Other preservatives | | Sorbates, sulfites, nitrites |
- "Preserving Food without Freezing or Canning, Chelsea Green Publishing, 1999"
- Stacy Simon (October 26, 2015). "World Health Organization Says Processed Meat Causes Cancer". Cancer.org.
- James Gallagher (26 October 2015). "Processed meats do cause cancer - WHO". BBC.
- "IARC Monographs evaluate consumption of red meat and processed meat" (PDF). International Agency for Research on Cancer. 26 October 2015.
- Nummer, B. (2002). "Historical Origins of Food Preservation" http://nchfp.uga.edu/publications/nchfp/factsheets/food_pres_hist.html. (Accessed on May 5, 2014)
- Msagati, T. (2012). "The Chemistry of Food Additives and Preservatives"
- Nicolas Appert inventeur et humaniste by Jean-Paul Barbier, Paris, 1994 and http://www.appert-aina.com
- anon., Food Irradiation – A technique for preserving and improving the safety of food, WHO, Geneva, 1991
- World Health Organization. Wholesomeness of irradiated food. Geneva, Technical Report Series No. 659, 1981
- World Health Organization. High-Dose Irradiation: Wholesomeness of Food Irradiated With Doses Above 10 kGy. Report of a Joint FAO/IAEA/WHO Study Group. Geneva, Switzerland: World Health Organization; 1999. WHO Technical Report Series No. 890
- Hauther,W. & Worth, M., Zapped! Irradiation and the Death of Food, Food & Water Watch Press, Washington, DC, 2008
- Consumers International – Home
- NUCLEUS – Food Irradiation Clearances
- Food irradiation – Position of ADA J Am Diet Assoc. 2000;100:246-253
- C.M. Deeley, M. Gao, R. Hunter, D.A.E. Ehlermann, The development of food irradiation in the Asia Pacific, the Americas and Europe; tutorial presented to the International Meeting on Radiation Processing, Kuala Lumpur, 2006. http://www.doubleia.org/index.php?sectionid=43&parentid=13&contentid=494
- Annis, P.C. and Dowsett, H.A. 1993. Low oxygen disinfestation of grain: exposure periods needed for high mortality. Proc. International Conference on Controlled Atmosphere and Fumigation. Winnipeg, June 1992, Caspit Press, Jerusalem, pp 71-83.
- Annis, P.C. and Morton, R. 1997. The acute mortality effects of carbon dioxide on various life stages of Sitophilus oryzae. J. Stored Prod.Res. 33. 115-124
- Controlled Atmospheric Storage (CA) :: Washington State Apple Commission
- Various authors, Session 1: Natural Air-Tight Storage In: Shejbal, J., ed., Controlled Atmosphere Storage of Grains, Elsevier: Amsterdam, 1-33
- Annis P.C. and Banks H.J. 1993. Is hermetic storage of grains feasible in modern agricultural systems? In "Pest control and sustainable agriculture" Eds S.A. Corey, D.J. Dall and W.M. Milne. CSIRO, Australia. 479-482
- Laine Welch (May 18, 2013). "Laine Welch: Fuel cell technology boosts long-distance fish shipping". Anchorage Daily News. Retrieved May 19, 2013.
- NWT magazine, December 2012
- "High-Pressure Processing Keeps Food Safe". Military.com. Archived from the original on 2008-02-02. Retrieved 2008-12-16.
Pressed inside a vessel exerting 70,000 pounds per square inch or more, food can be processed so that it retains its fresh appearance, flavor, texture and nutrients while disabling harmful microorganisms and slowing spoilage.
- Ananou S, Maqueda M, Martínez-Bueno M and Valdivia E (2007) "Biopreservation, an ecological approach to improve the safety and shelf-life of foods" In: A. Méndez-Vilas (Ed.) Communicating Current Research and Educational Topics and Trends in Applied Microbiology, Formatex. ISBN 978-84-611-9423-0.
- Yousef AE and Carolyn Carlstrom C (2003) Food microbiology: a laboratory manual Wiley, Page 226. ISBN 978-0-471-39105-0.
- FAO: Preservation techniques Fisheries and aquaculture department, Rome. Updated 27 May 2005. Retrieved 14 March 2011.
- Alzamora SM, Tapia MS and López-Malo A (2000) Minimally processed fruits and vegetables: fundamental aspects and applications Springer, Page 266. ISBN 978-0-8342-1672-3.
- Alasalvar C (2010) Seafood Quality, Safety and Health Applications John Wiley and Sons, Page 203. ISBN 978-1-4051-8070-2.
- Leistner I (2000) "Basic aspects of food preservation by hurdle technology" International Journal of Food Microbiology, 55:181–186.
- Leistner L (1995) "Principles and applications of hurdle technology" In Gould GW (Ed.) New Methods of Food Preservation, Springer, pp. 1-21. ISBN 978-0-8342-1341-8.
- Lee S (2004) "Microbial Safety of Pickled Fruits and Vegetables and Hurdle Technology" Internet Journal of Food Safety, 4: 21–32.
- Riddervold, Astri. Food Conservation. ISBN 978-0-907325-40-6.
- Abakarov, Nunes. "Thermal food processing optimization: algorithms and software" (PDF). Food Engineering.
- Abakarov, Sushkov, Mascheroni. "Multi-criteria optimization and decision-making approach for improving of food engineering processes" (PDF). International Journal of Food Studies.
Wikimedia Commons has media related to Food preservation.
- A ca. 1894 Gustav Hammer & Co. commercial cooking machinery catalogue.
- Dehydrating Food
- Preserving foods ~ from the Clemson Extension Home and Garden Information Center
- National Center for Home Food Preservation
- BBC News Online – US army food... just add urine
- Home Economics Archive: Tradition, Research, History (HEARTH)
An e-book collection of over 1,000 classic books on home economics spanning 1850 to 1950, created by Cornell University's Mann Library.
- Survival guide – Refrigerate food without electrical power
- Pulsed electric field processing for the food and beverage industry and scientific sectors | https://en.wikipedia.org/wiki/Food_preservation |
4.15625 | Georgia General Assembly
A form of representative government has existed in Georgia since January 1751. Its modern embodiment, known as the Georgia General Assembly, is one of the largest state legislatures in the nation. The General Assembly consists of two chambers, the House of Representatives and the Senate.
The General Assembly has operated continuously since 1777, when Georgia became one of the thirteen original states and revoked its status as a colony of Great Britain. Since the General Assembly is the legislative body for the state, the location of its meetings has moved along with each move of the state capital. In its earliest days the legislature met first in Savannah, and subsequently in Augusta, Louisville, and Milledgeville. In 1868 the capital—and the assembly—settled permanently in Atlanta. Today the General Assembly meets in the state capitol, an impressive limestone and marble building with a distinctive gold dome and granite foundation. Each chamber is housed in a separate wing.
Every two years, Georgia voters elect members of the legislature. These elections occur in even-numbered years (e.g., 2002, 2004, 2006). The qualifications for holding office in both houses, as well as the size of both chambers, are established in the Georgia state constitution.
The house also has administration floor leaders, who represent the governor's interests in the chamber.
Much of the work of the house is done in thirty-six standing committees. At the start of each two-year session, each member is assigned to two or three committees, which are organized by such topics as agriculture, education, or taxes. Each political party's leadership selects members to serve on the committees, which ensures that the parties are effectively represented in the process. Thus the party composition of committees is proportional to the party composition of the house. The Speaker of the House selects the chairs of each committee; since the Speaker belongs to the majority party in the chamber, all the committees are chaired by members of the majority party. Legislation passes through the committees, where it can be amended, changed, or killed. Members, therefore, actively seek to be placed on committees that deal with issues important to them personally and to their constituents.
To serve in the House of Representatives, an individual must be at least twenty-one years old. Other requirements include residency for at least a year in the district that he or she represents and residency in Georgia for at least two years.
The senate is presided over by the lieutenant governor. Unlike the Speaker, who is elected by the members of the house, the lieutenant governor is elected by all the voters of the state. Thus, the lieutenant governor may belong to a different political party than the majority of the senators, as was the case in the 2003-4 and 2005-6 sessions when Lieutenant Governor Mark Taylor, a Democrat, presided over a majority-Republican senate. This scenario requires careful political balancing and the investment of significant authority in the president pro tempore of the senate, who is the leader of the majority party.
There are twenty-six committees in the senate, and senators are required to serve on at least three committees during their two-year terms in the General Assembly. As in the house, the party affiliations of senate committees are proportional to the party affiliations of the senate as a whole. The lieutenant governor appoints the chairs of the committees, which resulted in an unusual situation in the 2003-4 session. The Republican Party was the majority party in the senate, but the lieutenant governor appointed Democrats to chair some committees.
To serve in the senate, an individual must be at least twenty-five years old. Other requirements include residency for at least a year in the district that he or she represents and residency in Georgia for at least two years.
Each January representatives congregate at the state capitol for the start of the legislative session, which lasts for forty days, to deliberate matters of importance to the citizens of the state. The forty days are not always continuous, and during the time when the chambers are not in session, members generally work in committees or return home to meet with constituents. The General Assembly uses a committee system to accomplish its legislative tasks. Because committees meet year round, even when the legislature is not in session, they can consider legislation continuously, which allows the legislative process in Georgia to move more efficiently. Typically, the legislature adjourns in late March, after the major legislative business has been completed. From time to time the governor may call the General Assembly into a special session for a set number of days.
The most important function of the General Assembly is to pass the state's operating budget each year. In fact, approximately half of the hours spent in session are related to the budget. This includes establishing spending priorities and setting tax rates. Additionally, lawmakers must enact other laws on a broad array of topics from education to roads and transportation.
Another task of the General Assembly is to consider all proposed amendments to the Georgia constitution. A two-thirds vote in both houses is the primary means for approving resolutions to place proposed constitutional changes on the ballot. Voters will then decide if the constitution is to be amended.
A special task that the General Assembly must undertake every ten years is the drawing of legislative district lines to create the maps used for the state house and state senate district boundaries. The General Assembly also establishes the district lines for Georgia's delegation to the U.S. House of Representatives.
A number of famous Georgians have served in the General Assembly. Jimmy Carter, the only Georgian ever to be elected president of the United States, served in the state senate during the 1960s. Several civil rights leaders, including Julian Bond and Hosea Williams, have served in the General Assembly. Most governors and U.S. senators from the state served in one of the two chambers before running for higher office.
Several members have served for decades, among them Hugh Gillis of Soperton, with more than fifty years of combined service in both houses of the legislature. No discussion of longevity in the General Assembly would be complete without mention of Tom Murphy of Bremen, who was Speaker of the House between 1974 and 2002. Murphy was the longest-serving Speaker in the nation when he was defeated in his 2002 reelection bid.
The average General Assembly member is white and male; in 2015, 23 percent of the members were women and 25 percent were African American.
Media Gallery: Georgia General Assembly | http://www.georgiaencyclopedia.org/articles/government-politics/georgia-general-assembly |
4.0625 | 5 Written questions
5 Matching questions
- homozygous recessive
- DNA repair mechanism
- cytosine (C)
- a a pair of recessive alleles at a locus on homologous chromosomes; e.g., aa.
- b One of several processes by which enzymes repair broken or mismatched DNA strands
- c A nitrogen-containing base in nucleotides; also, base-pairs with guanine in DNA and RNA.
- d allele effects that are masked by a dominant allele on the chromosome.
- e The stage between mitotic divisions when a cell grows in mass, doubles its cytoplasmic components, and replicates its DNA.
5 Multiple choice questions
- Process by which a cell duplicates its DNA before it divides.
- an allele that masks the effects of a recessive allele paired with it
- removes phosphate group and inhibits mitosis
- Stage of mitosis and meiosis in which chromosomes condense a become attached to spindles.
- The location of a gene on a chromosome
5 True/False questions
linkage group → All genes tend to stay together during meiosis but may be separated by cross-overs.
genotype → specific alleles carried by an individual
homologous chromosome → except for the nonidentical sex chromosomes, members of a pair have the same length, shape, and genes.
continuous variation → a range of small differences in a trait
zygote → Mature, haploid reproductive cell | https://quizlet.com/3276647/test |
4.125 | The following sets of guidelines or 'ground rules' are examples that can be given to a class for use, or can provide a basis for a discussion about developing an atmosphere of mutual respect and collective inquiry. Many teachers also find it productive to have a discussion with their students in which they collaboratively generate a list of discussion guidelines or community agreements to set expectations for their interactions.
(from the CRLT GSI Guidebook.)
Guidelines for Class Participation
1. Respect others’ rights to hold opinions and beliefs that differ from your own. Challenge or criticize the idea, not the person.
2. Listen carefully to what others are saying even when you disagree with what is being said. Comments that you make (asking for clarification, sharing critiques, expanding on a point, etc.) should reflect that you have paid attention to the speaker’s comments.
3. Be courteous. Don’t interrupt or engage in private conversations while others are speaking.
4. Support your statements. Use evidence and provide a rationale for your points.
5. Allow everyone the chance to talk. If you have much to say, try to hold back a bit; if you are hesitant to speak, look for opportunities to contribute to the discussion. Read more » | http://www.crlt.umich.edu/category/tags/inclusive-teaching?page=1 |
4.3125 |
The Born-Oppenheimer approximation is a way to simplify the complicated Schrödinger equation for a molecule. The nucleus and the electrons attract each other with forces of equal magnitude, so they undergo equal and opposite changes in momentum. Because the nucleus has a much larger mass than an electron, however, its velocity is very small and almost negligible. The Born-Oppenheimer approximation takes advantage of this and assumes that, since the nucleus is far heavier than the electrons, its motion can be ignored while solving the electronic Schrödinger equation; that is, the nuclei are treated as stationary while the electrons move around them. The motion of the nuclei and the electrons can then be separated, and the electronic and nuclear problems can be solved with independent wavefunctions.
The wavefunction for the molecule thus becomes:
Ψmolecule = Ψelectron × Ψnuclei
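As a compact sketch of how this separation is used (standard textbook notation, not taken from the page itself): the total wavefunction is written as a product of an electronic part and a nuclear part, and the electronic equation is solved with the nuclear coordinates R held fixed.

```latex
\Psi_{\mathrm{molecule}}(\mathbf{r},\mathbf{R})
  \;\approx\; \psi_{\mathrm{el}}(\mathbf{r};\mathbf{R})\,\chi_{\mathrm{nuc}}(\mathbf{R}),
\qquad
\hat{H}_{\mathrm{el}}\,\psi_{\mathrm{el}}(\mathbf{r};\mathbf{R})
  \;=\; E_{\mathrm{el}}(\mathbf{R})\,\psi_{\mathrm{el}}(\mathbf{r};\mathbf{R})
```

Repeating the electronic calculation at a series of fixed internuclear separations traces out E_el(R), the potential energy curve whose minimum corresponds to the equilibrium bond length discussed below.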
The Born-Oppenheimer principle can be applied to calculate how a molecule's energy depends on its bond length. By fixing the separation between the nuclei and solving the electronic wavefunction at each separation, the energy at that geometry can be calculated. Thus, the relationship between a molecule's energy and its bond length can be examined. | http://chemwiki.ucdavis.edu/Theoretical_Chemistry/Chemical_Bonding/General_Principles_of_Chemical_Bonding/Born_Oppenheimer_Approximation |
4.3125 | The Choctaw freedmen were enslaved African Americans who were emancipated after the American Civil War and were granted citizenship in the Choctaw Nation. Their freedom and citizenship were requirements of the 1866 treaty the US made with the Choctaw; it required a new treaty because the Choctaw had sided with the Confederate States of America during the war. The Confederacy had promised the Choctaw and other tribes of Indian Territory a Native American state if it won the war.
"Freedmen" is one of the terms given to the newly emancipated people after slavery was abolished in the United States. The Choctaw freedmen were officially adopted as full members into the Choctaw Nation in 1885.
Like other Native American tribes, the Choctaw had customarily held slaves as captives from warfare. As they adopted elements of European culture, such as larger farms and plantations, they began to adapt their system to that of purchasing and holding chattel slave workers of African-American descent. Moshulatubbee had slaves, as did many of the European men, generally fur traders, who married into the Choctaw nation. The Folsom and LeFlore families were some of the Choctaw planters who held the most slaves at the time of Indian Removal and afterward.
Slavery lasted in the Choctaw Nation until 1866. The former slaves of the Choctaw Nation were called the Choctaw freedmen; both then and later, a number of them had Choctaw as well as African and sometimes European ancestry. At the time of Indian Removal, the Beams family was a part of the Choctaw Nation. They were known to have been of African descent and also free.
Wikimedia Commons has media related to Choctaw.
- African Americans with native heritage
- Cherokee freedmen
- Creek Freedmen
- Black Seminoles
- Oak Hill Industrial Academy
- "1885 Choctaw & Chickasaw Freedmen Admitted To Citizenship". Retrieved 2008-09-04.
- "The Choctaw Freedmen of Oklahoma". Retrieved 2008-02-14. | https://en.wikipedia.org/wiki/Choctaw_Freedmen |
4.25 | Civil rights movement
The civil rights movement was a series of worldwide political movements for equal rights. In many events, people who were denied equal rights refused to cooperate, in a nonviolent way, to show others that they deserved equal treatment. Other events were more violent, with people rebelling against those in power. The process was long, and in many countries it changed very little. Many of these movements did not fully achieve their goals. However, their efforts helped improve the legal rights of groups of people who had been treated unequally.
The main goals of the civil rights movement include making sure that the rights of all people are equally protected by the law, including the rights of minorities. Civil rights movements are different in each country. The LGBT rights movement, the women's rights movement, and many racial minority rights movements are still continuing to fight for equal rights.
Africa
Angola
The Angolan War of Independence was from 1961 to 1975. Angola fought against Portugal. Portugal was making people in Angola farm cotton. Three different groups in Angola were against Portugal. Millions of people died during the war.
Guinea
The Guinea-Bissau War of Independence was an armed conflict and national liberation struggle that took place in Portuguese Guinea (modern Guinea-Bissau) between 1963 and 1974.
Mozambican
The Mozambican War of Independence was a conflict from 1964 until 1975. It was between the Mozambique Liberation Front, or Frelimo (Portuguese: Frente de Libertação de Moçambique), and Portugal. The Portuguese were successful during the conflict with the guerrilla forces. However, because of a coup d'état in Portugal, Mozambique gained independence on 25 June 1975.
Ireland
Northern Ireland saw the formation of the Campaign for Social Justice in Belfast in 1964. This was followed by the Northern Ireland Civil Rights Association (NICRA) on 29 January 1967. They wanted the repeal of the Special Powers Acts of 1922, 1933, and 1943, the end of the B Specials police force, an end to the gerrymandering of local elections, and an end to discrimination in housing and government jobs.
These demands for reform caused a backlash by some in the unionist majority. This helped start The Troubles, a violent conflict that lasted for more than 30 years. The NICRA used the same methods as the American Civil Rights Movement: marches, pickets, sit-ins, and protests. The first civil rights march in Northern Ireland was held on 24 August 1968 between Coalisland and Dungannon.
United States
Segregation
Segregation was an attempt by white Southerners to separate the races. They did this to strengthen white pride and to keep power over African Americans. Segregation was often called the Jim Crow system. Segregation became common in Southern states following the end of Reconstruction in 1877. During Reconstruction, which followed the Civil War (1861-1865), Republican governments in the Southern states included black officeholders. The Reconstruction governments had passed laws opening up economic and political opportunities for blacks. By 1877 the Democratic Party had gained control of government in the Southern states, and these Southern Democrats wanted to reverse black advances made during Reconstruction. To that end, they began to pass local and state laws that specified certain places "For Whites Only" and others for "Colored". Blacks had separate schools, transportation, restaurants, and parks. Over the next 75 years, Jim Crow signs went up to separate the races in every possible place.
The system of segregation also included the denial of voting rights, known as disfranchisement. Between 1890 and 1910 all Southern states passed laws that kept most blacks from voting. These requirements included the ability to read and write and the ownership of property; many blacks had no access to education or property. Because blacks could not vote, they were powerless to prevent whites from segregating all aspects of Southern life. They could do little to stop discrimination in public places, education, economic opportunities, or housing.
Conditions for blacks in Northern states were somewhat better. Blacks were usually free to vote in the North, but there were so few blacks that their voices were barely heard. Segregated facilities were not as common in the North. Blacks were usually denied entrance to the best hotels and restaurants. Schools in New England were usually integrated. However, those in the Midwest generally were not.
Montgomery Bus Boycott
On December 1, 1955, Rosa Parks, a member of the Montgomery, Alabama, branch of the NAACP (National Association for the Advancement of Colored People), was told to give up her seat on a city bus to a white person. When Parks refused to move, she was arrested. The local NAACP, led by Edgar D. Nixon, recognized that the arrest of Parks might rally local blacks to protest segregated buses. Montgomery's black community had long been angry about their mistreatment on city buses, where white drivers were often rude and abusive. The community had previously considered a boycott of the buses. The Montgomery Bus Boycott was a success, with support from the 50,000 blacks in Montgomery. It lasted for more than a year. This event showed the American public that blacks in the South would not stop protesting until segregation ended. A federal court ordered Montgomery's buses desegregated in November 1956. The boycott ended with blacks winning the right to sit wherever they wanted.
A young Baptist minister named Martin Luther King, Jr., was president of the Montgomery Improvement Association, the organization that directed the boycott. The protest made King a national figure. King became the president of the Southern Christian Leadership Conference (SCLC) when it was founded in 1957. SCLC wanted to complement the NAACP's legal strategy by encouraging the use of nonviolence. These activities included marches, demonstrations, and boycotts. The violent white response to black direct action eventually forced the federal government to confront the issues of injustice and racism in the South.
In addition to his large following among blacks, King had a powerful appeal to liberal Northerners that helped him influence national public opinion. His advocacy of nonviolence attracted supporters among peace activists. He formed alliances in the American Jewish community. He also developed supporters from the ministers of wealthy, influential Protestant congregations in Northern cities. King often preached to those congregations, where he raised funds for SCLC.
Chicano Movement
The Chicano Movement is a political, social, and cultural movement by Mexican Americans. The Chicano Movement addresses negative ethnic stereotypes of Mexican people in the media and among Americans. People such as Tiburcio Vasquez and Joaquin Murietta became folk heroes to Mexican Americans because they refused to obey White Americans.
American Indian Movement
The American Indian Movement (AIM) is a Native American activist organization in the United States. It was founded in 1968 in Minneapolis, Minnesota. The organization was formed to address issues concerning the Native American urban community in Minneapolis, including poverty, housing, treaty issues, and police harassment.
Gender equality
The first wave of feminism sought suffrage rights, which gave women the right to vote. The second wave took on the issue of economic equality. Lesbian rights are also part of women's rights; lesbian feminist groups, such as the Lavender Menace, are lesbian activism groups.
LGBT rights and gay liberation
The events of the Hawaii Supreme Court prompted the United States Congress to create the Defense of Marriage Act in 1996. This act forbade the federal government from recognizing same-sex marriages. Currently 30 states have passed state constitutional amendments that ban same-sex marriage. However, Connecticut, Massachusetts, New Mexico, New Jersey, New York, Rhode Island, and Vermont have legalized gay marriage.
Before 1993, lesbian and gay people were not allowed to serve in the US military. Under the "Don't ask, don't tell" (DADT) policy, they were only allowed to serve in the military if they did not tell anyone of their sexual orientation. The Don't Ask, Don't Tell Repeal Act of 2010 allowed homosexual men and women to serve openly in the armed forces. Since September 20, 2011, gays, lesbians, and bisexuals have been able to serve openly. Transsexual and intersex service-members however are still banned from serving openly, due to Department of Defense medical policies which consider gender identity disorder to be a medically disqualifying condition.
People who oppose gay rights in the United States have been political and religious conservatives. These people cite a number of Bible passages from the Old and New Testaments as their reason. The most opposition of gay rights are in the South and other states with a large rural population. Many organizations have opposed the gay rights movement. These include, American Family Association, the Christian Coalition, Family Research Council, Focus on the Family, Save Our Children, NARTH, the national Republican Party, the Roman Catholic Church, The Church of Jesus Christ of Latter-day Saints (LDS Church), the Southern Baptist Convention, Alliance for Marriage, Alliance Defense Fund, Liberty Counsel, and the National Organization for Marriage. A number of these groups have been named as anti-gay hate groups by the Southern Poverty Law Center.
Germany
The Civil Rights Movement in Germany was a left-wing backlash against the post-Nazi Party era of the country. The movement took place mostly among disillusioned students and was largely a protest movement to others around the globe during the late 1960s.
France
A general strike broke out across France in May 1968. It grew into a revolutionary situation, but it was discouraged by the French Communist Party and finally suppressed by the government, which accused the communists of plotting against the Republic. Some philosophers and historians have argued that the rebellion was the single most important revolutionary event of the 20th century because it was not carried by a single group, such as workers or racial minorities, but was rather a purely popular uprising, superseding ethnic, cultural, age and class boundaries.
Books
- Manfred Berg and Martin H. Geyer; Two Cultures of Rights: The Quest for Inclusion and Participation in Modern America and Germany Cambridge University Press, 2002
- Jack Donnelly and Rhoda E. Howard; International Handbook of Human Rights Greenwood Press, 1987
- David P. Forsythe; Human Rights in the New Europe: Problems and Progress University of Nebraska Press, 1994
- Joe Foweraker and Todd Landman; Citizenship Rights and Social Movements: A Comparative and Statistical Analysis Oxford University Press, 1997
- Mervyn Frost; Constituting Human Rights: Global Civil Society and the Society of Democratic States Routledge, 2002
- Marc Galanter; Competing Equalities: Law and the Backward Classes in India University of California Press, 1984
- Raymond D. Gastil and Leonard R. Sussman, eds.; Freedom in the World: Political Rights and Civil Liberties, 1986-1987 Greenwood Press, 1987
- David Harris and Sarah Joseph; The International Covenant on Civil and Political Rights and United Kingdom Law Clarendon Press, 1995
- Steven Kasher; The Civil Rights Movement: A Photographic History (1954–1968) Abbeville Publishing Group (Abbeville Press, Inc.), 2000
- Francesca Klug, Keir Starmer, Stuart Weir; The Three Pillars of Liberty: Political Rights and Freedoms in the United Kingdom Routledge, 1996
- Fernando Santos-Granero and Frederica Barclay; Tamed Frontiers: Economy, Society, and Civil Rights in Upper Amazonia Westview Press, 2000
- Paul N. Smith; Feminism and the Third Republic: Women's Political and Civil Rights in France, 1918-1940 Clarendon Press, 1996
- Jorge M. Valadez; Deliberative Democracy: Political Legitimacy and Self-Determination in Multicultural Societies Westview Press, 2000
References
- The Decolonization of Portuguese Africa: Metropolitan Revolution and the Dissolution of Empire by Norrie MacQueen; Mozambique since Independence: Confronting Leviathan by Margaret Hall and Tom Young; review by Stuart A. Notholt, African Affairs, Vol. 97, No. 387 (Apr. 1998), pp. 276-278, JSTOR
- Miner, Marlyce. "The American Indian Movement"
- "Republican Party 2004 Platform" (PDF). http://www.gop.com/images/2004platform.pdf.
- "LDS Newsroom – Same-Gender Attraction". April 8, 2008. http://newsroom.lds.org/ldsnewsroom/eng/public-issues/same-gender-attraction. Retrieved April 8, 2008.
- "SBC Officially Opposes "Homosexual Marriage". The Southern Baptist Convention. July 26, 2003. http://www.reclaimamerica.org/PAGES/NEWS/news.aspx?story=1264. Retrieved July 5, 2006.
- Schlatter, Evelyn, "18 Anti-Gay Groups and Their Propaganda", Intelligence Report Winter 2010 (140), http://www.splcenter.org/get-informed/intelligence-report/browse-all-issues/2010/winter/the-hard-liners, retrieved January 31, 2011
Other websites
- We Shall Overcome: Historic Places of the Civil Rights Movement, a National Park Service Discover Our Shared Heritage at Travel Itinerary
- A Columbia University Resource for Teaching African American History
- Martin Luther King, Jr. and the Global Freedom Struggle, an encyclopedia presented by the Martin Luther King, Jr. Research and Education Institute at Stanford University
- Civil Rights entry by Andrew Altman in the Stanford Encyclopedia of Philosophy
- Martin Luther King, Jr. and the Global Freedom Struggle ~ an online multimedia encyclopedia presented by the King Institute at Stanford University, includes information on over 1000 civil rights movement figures, events and organizations
- "CivilRightsTravel.com" ~ a visitors guide to key sites from the civil rights movement
- The History Channel: Civil Rights Movement
- Civil Rights in America: Connections to a Movement | https://simple.wikipedia.org/wiki/Civil_rights_movement |
4.125 | Although you might have heard people talk about a gene for red hair, green eyes or other characteristics, it's important to remember that genes code for proteins, not traits. While your genetic makeup does indeed determine physical traits like eye color, hair color and so forth, your genes affect these traits indirectly by way of the proteins created via DNA.
Your DNA carries information in the sequence of base pairs of its nucleotides. These biological molecules, the building blocks of DNA, are often abbreviated with the first letter of their names: adenine (A), thymine (T), guanine (G) and cytosine (C). The types and sequence of nucleotides in DNA determine the types and sequence of nucleotides in RNA. This in turn determines the types and order of amino acids included in proteins. Specific three-letter groups of RNA nucleotides code for specific amino acids. The DNA combination TTT (copied into RNA as UUU), for example, codes for the amino acid phenylalanine. Regulatory regions of the gene also contribute to protein synthesis by determining when the gene will be switched on or off.
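As an illustration of how a codon table maps triplets of nucleotides to amino acids, here is a minimal sketch in Python. The codon table is deliberately tiny and the sequence is made up; a real translator would use the full 64-codon standard genetic code.

```python
# Minimal sketch of transcription and translation.
# The codon table below is deliberately tiny and illustrative; a real
# translator would use the full 64-codon standard genetic code.
CODON_TABLE = {
    "UUU": "Phe", "UUC": "Phe",                 # phenylalanine
    "AUG": "Met",                               # methionine (start)
    "GGA": "Gly", "GCU": "Ala",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def transcribe(dna_coding_strand):
    """DNA coding strand -> mRNA: thymine (T) is replaced by uracil (U)."""
    return dna_coding_strand.upper().replace("T", "U")

def translate(mrna):
    """Read the mRNA three letters at a time and look up each codon."""
    amino_acids = []
    for i in range(0, len(mrna) - 2, 3):
        codon = mrna[i:i + 3]
        amino_acid = CODON_TABLE.get(codon, "?")
        if amino_acid == "STOP":
            break
        amino_acids.append(amino_acid)
    return amino_acids

print(translate(transcribe("ATGTTTGGAGCTTAA")))
# ['Met', 'Phe', 'Gly', 'Ala']
```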
In active genes, genetic information determines which proteins are synthesized and when synthesis is turned on or off. These proteins fold into complicated three-dimensional structures, somewhat like molecular origami. Because each amino acid has specific chemical characteristics, the sequence of amino acids determines the structure and shape of a protein. For example, some amino acids attract water, and others are repelled by it. Some amino acids can form weak bonds to each other, but others cannot. Different combinations and sequences of these chemical characteristics determine the unique three-dimensional folded shape of each protein.
Structure & Function
The structure of a protein determines its function. Proteins that catalyze (accelerate) chemical reactions, for example, have "pockets," which can bind specific chemicals and make it easier for a particular reaction to occur. Variations in the DNA code of a gene can change either the structure of a protein or when and where it is produced. If these variations change the protein structure, they could also change its function.
Variations in a gene can affect traits in several ways. Variations in proteins involved in growth and development, for example, can give rise to differences in physical features like height. Pigments of skin and hair color are produced by enzymes, proteins that catalyze chemical reactions. Variations in both the structure and quantity of the proteins produced give rise to different amounts of skin and hair pigment and therefore different colors of hair and skin.
- Kimball's Biology Pages: The Genetic Code
- Molecular Cell Biology; Harvey Lodish, Arnold Berk, Chris Kaiser, Monty Krieger, Matthew P. Scott, Anthony Bretscher, Hidde Ploegh and Paul Matsudaira
- Kimball's Biology Pages: Proteins
- Kimball's Biology Pages: Enzymes
- Sandwalk: Human MC1R Gene Controls Hair Color and Skin Color
- Jupiterimages/liquidlibrary/Getty Images | http://science.opposingviews.com/relationship-between-dna-bases-genes-proteins-traits-2074.html |
4.0625 |
Electrical resonance occurs in an electric circuit at a particular resonance frequency when the imaginary parts of impedances or admittances of circuit elements cancel each other. In some circuits this happens when the impedance between the input and output of the circuit is almost zero and the transfer function is close to one.
Resonance of a circuit involving capacitors and inductors occurs because the collapsing magnetic field of the inductor generates an electric current in its windings that charges the capacitor, and then the discharging capacitor provides an electric current that builds the magnetic field in the inductor. This process is repeated continually. An analogy is a mechanical pendulum.
At resonance, the series impedance of the two elements is at a minimum and the parallel impedance is at maximum. Resonance is used for tuning and filtering, because it occurs at a particular frequency for given values of inductance and capacitance. It can be detrimental to the operation of communications circuits by causing unwanted sustained and transient oscillations that may cause noise, signal distortion, and damage to circuit elements.
Parallel resonance or near-to-resonance circuits can be used to prevent the waste of electrical energy, which would otherwise occur while the inductor built its field or the capacitor charged and discharged. As an example, asynchronous motors waste inductive current while synchronous ones waste capacitive current. The use of the two types in parallel makes the inductor feed the capacitor, and vice versa, maintaining the same resonant current in the circuit, and converting all the current into useful work.
Since the inductive reactance and the capacitive reactance are of equal magnitude at resonance, ωL = 1/ωC, so ω = 1/√(LC), where ω = 2πf is the resonant angular frequency in radians per second, f is the resonant frequency in hertz, L is the inductance in henries, and C is the capacitance in farads.
The quality of the resonance (how long it will ring when excited) is determined by its Q factor, which is a function of resistance. A true LC circuit would have infinite Q, but all real circuits have some resistance and smaller Q and are usually approximated more accurately by an RLC circuit.
An RLC circuit (or LCR circuit) is an electrical circuit consisting of a resistor, an inductor, and a capacitor, connected in series or in parallel. The RLC part of the name is due to those letters being the usual electrical symbols for resistance, inductance and capacitance respectively. The circuit forms a harmonic oscillator for current and resonates similarly to an LC circuit. The main difference stemming from the presence of the resistor is that any oscillation induced in the circuit decays over time if it is not kept going by a source. This effect of the resistor is called damping. The presence of the resistance also reduces the peak resonant frequency. Some resistance is unavoidable in real circuits, even if a resistor is not specifically included as a component. A pure LC circuit is an ideal that exists only in theory.
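As a small sketch of these relationships, the snippet below computes the resonant frequency and quality factor of a series RLC circuit from the standard formulas; the component values are made up for illustration.

```python
import math

def series_rlc(resistance_ohm, inductance_h, capacitance_f):
    """Resonant frequency and quality factor of a series RLC circuit."""
    omega0 = 1.0 / math.sqrt(inductance_h * capacitance_f)   # rad/s, from wL = 1/(wC)
    f0 = omega0 / (2.0 * math.pi)                             # Hz
    q = (1.0 / resistance_ohm) * math.sqrt(inductance_h / capacitance_f)
    return f0, q

# Example values (illustrative only): 10 ohm, 1 mH, 100 nF
f0, q = series_rlc(10.0, 1e-3, 100e-9)
print(f"f0 = {f0:.1f} Hz, Q = {q:.1f}")   # about 15.9 kHz, Q = 10
```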
There are many applications for this circuit. It is used in many different types of oscillator circuits. An important application is for tuning, such as in radio receivers or television sets, where they are used to select a narrow range of frequencies from the ambient radio waves. In this role the circuit is often referred to as a tuned circuit. An RLC circuit can be used as a band-pass filter, band-stop filter, low-pass filter or high-pass filter. The tuning application, for instance, is an example of band-pass filtering. The RLC filter is described as a second-order circuit, meaning that any voltage or current in the circuit can be described by a second-order differential equation in circuit analysis.
The three circuit elements can be combined in a number of different topologies. All three elements in series or all three elements in parallel are the simplest in concept and the most straightforward to analyse. There are, however, other arrangements, some with practical importance in real circuits. One issue often encountered is the need to take into account inductor resistance. Inductors are typically constructed from coils of wire, the resistance of which is not usually desirable, but it often has a significant effect on the circuit.
- Antenna theory
- Cavity resonator
- Electronic oscillator
- Electronic filter
- Resonant energy transfer - wireless energy transmission between two resonant coils | https://en.wikipedia.org/wiki/Electrical_resonance |
4.34375 | X-ray crystallography is a tool used for identifying the atomic and molecular structure of a crystal, in which the crystalline atoms cause a beam of incident X-rays to diffract into many specific directions. By measuring the angles and intensities of these diffracted beams, a crystallographer can produce a three-dimensional picture of the density of electrons within the crystal. From this electron density, the mean positions of the atoms in the crystal can be determined, as well as their chemical bonds, their disorder and various other information.
Since many materials can form crystals—such as salts, metals, minerals, semiconductors, as well as various inorganic, organic and biological molecules—X-ray crystallography has been fundamental in the development of many scientific fields. In its first decades of use, this method determined the size of atoms, the lengths and types of chemical bonds, and the atomic-scale differences among various materials, especially minerals and alloys. The method also revealed the structure and function of many biological molecules, including vitamins, drugs, proteins and nucleic acids such as DNA. X-ray crystallography is still the chief method for characterizing the atomic structure of new materials and in discerning materials that appear similar by other experiments. X-ray crystal structures can also account for unusual electronic or elastic properties of a material, shed light on chemical interactions and processes, or serve as the basis for designing pharmaceuticals against diseases.
In a single-crystal X-ray diffraction measurement, a crystal is mounted on a goniometer. The goniometer is used to position the crystal at selected orientations. The crystal is illuminated with a finely focused monochromatic beam of X-rays, producing a diffraction pattern of regularly spaced spots known as reflections. The two-dimensional images taken at different orientations are converted into a three-dimensional model of the density of electrons within the crystal using the mathematical method of Fourier transforms, combined with chemical data known for the sample. Poor resolution (fuzziness) or even errors may result if the crystals are too small, or not uniform enough in their internal makeup.
X-ray crystallography is related to several other methods for determining atomic structures. Similar diffraction patterns can be produced by scattering electrons or neutrons, which are likewise interpreted by Fourier transformation. If single crystals of sufficient size cannot be obtained, various other X-ray methods can be applied to obtain less detailed information; such methods include fiber diffraction, powder diffraction and (if the sample is not crystallized) small-angle X-ray scattering (SAXS). If the material under investigation is only available in the form of nanocrystalline powders or suffers from poor crystallinity, the methods of electron crystallography can be applied for determining the atomic structure.
For all of the above-mentioned X-ray diffraction methods, the scattering is elastic; the scattered X-rays have the same wavelength as the incoming X-rays. By contrast, inelastic X-ray scattering methods are useful in studying excitations of the sample, rather than the distribution of its atoms.
Early scientific history of crystals and X-rays
Crystals have long been admired for their regularity and symmetry, but they were not investigated scientifically until the 17th century. Johannes Kepler hypothesized in his work Strena seu de Nive Sexangula (A New Year's Gift of Hexagonal Snow) (1611) that the hexagonal symmetry of snowflake crystals was due to a regular packing of spherical water particles.
Crystal symmetry was first investigated experimentally by Danish scientist Nicolas Steno (1669), who showed that the angles between the faces are the same in every exemplar of a particular type of crystal, and by René Just Haüy (1784), who discovered that every face of a crystal can be described by simple stacking patterns of blocks of the same shape and size. Hence, William Hallowes Miller in 1839 was able to give each face a unique label of three small integers, the Miller indices which are still used today for identifying crystal faces. Haüy's study led to the correct idea that crystals are a regular three-dimensional array (a Bravais lattice) of atoms and molecules; a single unit cell is repeated indefinitely along three principal directions that are not necessarily perpendicular. In the 19th century, a complete catalog of the possible symmetries of a crystal was worked out by Johan Hessel, Auguste Bravais, Evgraf Fedorov, Arthur Schönflies and (belatedly) William Barlow. From the available data and physical reasoning, Barlow proposed several crystal structures in the 1880s that were validated later by X-ray crystallography; however, the available data were too scarce in the 1880s to accept his models as conclusive.
X-rays were discovered by Wilhelm Röntgen in 1895, just as the studies of crystal symmetry were being concluded. Physicists were initially uncertain of the nature of X-rays, although it was soon suspected (correctly) that they were waves of electromagnetic radiation, in other words, another form of light. At that time, the wave model of light—specifically, the Maxwell theory of electromagnetic radiation—was well accepted among scientists, and experiments by Charles Glover Barkla showed that X-rays exhibited phenomena associated with electromagnetic waves, including transverse polarization and spectral lines akin to those observed in the visible wavelengths. Single-slit experiments in the laboratory of Arnold Sommerfeld suggested the wavelength of X-rays was about 1 angstrom. However, X-rays are composed of photons, and thus are not only waves of electromagnetic radiation but also exhibit particle-like properties. The photon concept was introduced by Albert Einstein in 1905, but it was not broadly accepted until 1922, when Arthur Compton confirmed it by the scattering of X-rays from electrons. These particle-like properties of X-rays, such as their ionization of gases, had earlier caused William Henry Bragg to argue in 1907 that X-rays were not electromagnetic radiation. Nevertheless, Bragg's view was not broadly accepted, and the observation of X-ray diffraction by Max von Laue in 1912 confirmed for most scientists that X-rays were a form of electromagnetic radiation.
Crystals are regular arrays of atoms, and X-rays can be considered waves of electromagnetic radiation. Atoms scatter X-ray waves, primarily through the atoms' electrons. Just as an ocean wave striking a lighthouse produces secondary circular waves emanating from the lighthouse, so an X-ray striking an electron produces secondary spherical waves emanating from the electron. This phenomenon is known as elastic scattering, and the electron (or lighthouse) is known as the scatterer. A regular array of scatterers produces a regular array of spherical waves. Although these waves cancel one another out in most directions through destructive interference, they add constructively in a few specific directions, determined by Bragg's law:

2d sin θ = nλ

Here d is the spacing between diffracting planes, θ is the incident angle, n is any integer, and λ is the wavelength of the beam. These specific directions appear as spots on the diffraction pattern called reflections. Thus, X-ray diffraction results from an electromagnetic wave (the X-ray) impinging on a regular array of scatterers (the repeating arrangement of atoms within the crystal).
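As a small numerical illustration of Bragg's law, the sketch below solves nλ = 2d sin θ for the diffraction angles of a given plane spacing; the wavelength and spacing used are illustrative assumptions.

```python
import math

def bragg_angles(d_spacing, wavelength, max_order=5):
    """Incident angles theta (in degrees) satisfying n*lambda = 2*d*sin(theta)."""
    angles = []
    for n in range(1, max_order + 1):
        s = n * wavelength / (2.0 * d_spacing)
        if s > 1.0:          # no real solution once sin(theta) would exceed 1
            break
        angles.append((n, math.degrees(math.asin(s))))
    return angles

# Illustrative values: Cu K-alpha radiation (1.5406 angstroms), planes 3.0 angstroms apart
for n, theta in bragg_angles(d_spacing=3.0, wavelength=1.5406):
    print(f"n = {n}: theta = {theta:.2f} degrees")
```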
X-rays are used to produce the diffraction pattern because their wavelength λ is typically the same order of magnitude (1–100 angstroms) as the spacing d between planes in the crystal. In principle, any wave impinging on a regular array of scatterers produces diffraction, as predicted first by Francesco Maria Grimaldi in 1665. To produce significant diffraction, the spacing between the scatterers and the wavelength of the impinging wave should be similar in size. For illustration, the diffraction of sunlight through a bird's feather was first reported by James Gregory in the later 17th century. The first artificial diffraction gratings for visible light were constructed by David Rittenhouse in 1787, and Joseph von Fraunhofer in 1821. However, visible light has too long a wavelength (typically, 5500 angstroms) to observe diffraction from crystals. Prior to the first X-ray diffraction experiments, the spacings between lattice planes in a crystal were not known with certainty.
The idea that crystals could be used as a diffraction grating for X-rays arose in 1912 in a conversation between Paul Peter Ewald and Max von Laue in the English Garden in Munich. Ewald had proposed a resonator model of crystals for his thesis, but this model could not be validated using visible light, since the wavelength was much larger than the spacing between the resonators. Von Laue realized that electromagnetic radiation of a shorter wavelength was needed to observe such small spacings, and suggested that X-rays might have a wavelength comparable to the unit-cell spacing in crystals. Von Laue worked with two technicians, Walter Friedrich and his assistant Paul Knipping, to shine a beam of X-rays through a copper sulfate crystal and record its diffraction on a photographic plate. After being developed, the plate showed a large number of well-defined spots arranged in a pattern of intersecting circles around the spot produced by the central beam. Von Laue developed a law that connects the scattering angles and the size and orientation of the unit-cell spacings in the crystal, for which he was awarded the Nobel Prize in Physics in 1914.
As described in the mathematical derivation below, the X-ray scattering is determined by the density of electrons within the crystal. Since the energy of an X-ray is much greater than that of a valence electron, the scattering may be modeled as Thomson scattering, the interaction of an electromagnetic ray with a free electron. This model is generally adopted to describe the polarization of the scattered radiation.
The intensity of Thomson scattering for one particle with mass m and charge q is:

I ∝ (q^2 / (m c^2))^2 · (1 + cos^2 2θ) / 2

where 2θ is the scattering angle; the constant of proportionality involves the incident intensity and the distance to the point of observation.
Hence the atomic nuclei, which are much heavier than an electron, contribute negligibly to the scattered X-rays.
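The m^2 in the denominator is what makes the nuclei effectively invisible; the short arithmetic sketch below (using standard particle masses) compares an electron with a proton under this scaling.

```python
# Thomson scattering intensity scales as (q^2 / m)^2 for a particle of charge q
# and mass m. Comparing an electron with a proton (same magnitude of charge):
m_e = 9.109e-31   # electron mass, kg
m_p = 1.673e-27   # proton mass, kg

ratio = (m_p / m_e) ** 2
print(f"An electron scatters about {ratio:.1e} times more strongly than a proton.")
# ~3.4e6, which is why X-ray diffraction probes the electron density, not the nuclei.
```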
Development from 1912 to 1920
After Von Laue's pioneering research, the field developed rapidly, most notably by physicists William Lawrence Bragg and his father William Henry Bragg. In 1912–1913, the younger Bragg developed Bragg's law, which connects the observed scattering with reflections from evenly spaced planes within the crystal. The Braggs, father and son, shared the 1915 Nobel Prize in Physics for their work in crystallography. The earliest structures were generally simple and marked by one-dimensional symmetry. However, as computational and experimental methods improved over the next decades, it became feasible to deduce reliable atomic positions for more complicated two- and three-dimensional arrangements of atoms in the unit-cell.
The potential of X-ray crystallography for determining the structure of molecules and minerals—then only known vaguely from chemical and hydrodynamic experiments—was realized immediately. The earliest structures were simple inorganic crystals and minerals, but even these revealed fundamental laws of physics and chemistry. The first atomic-resolution structure to be "solved" (i.e., determined) in 1914 was that of table salt. The distribution of electrons in the table-salt structure showed that crystals are not necessarily composed of covalently bonded molecules, and proved the existence of ionic compounds. The structure of diamond was solved in the same year, proving the tetrahedral arrangement of its chemical bonds and showing that the length of the C–C single bond was 1.52 angstroms. Other early structures included copper, calcium fluoride (CaF2, also known as fluorite), calcite (CaCO3) and pyrite (FeS2) in 1914; spinel (MgAl2O4) in 1915; the rutile and anatase forms of titanium dioxide (TiO2) in 1916; and pyrochroite Mn(OH)2 and, by extension, brucite Mg(OH)2 in 1919. Also in 1919, sodium nitrate (NaNO3) and caesium dichloroiodide (CsICl2) were determined by Ralph Walter Graystone Wyckoff, and the wurtzite (hexagonal ZnS) structure became known in 1920.
The structure of graphite was solved in 1916 by the related method of powder diffraction, which was developed by Peter Debye and Paul Scherrer and, independently, by Albert Hull in 1917. The structure of graphite was determined from single-crystal diffraction in 1924 by two groups independently. Hull also used the powder method to determine the structures of various metals, such as iron and magnesium.
Cultural and aesthetic importance of X-ray crystallography
In what has been called his scientific autobiography, The Development of X-ray Analysis, Sir William Lawrence Bragg mentioned that he believed the field of crystallography was particularly welcoming to women because the techno-aesthetics of the molecular structures resembled textiles and household objects. Bragg was known to compare crystal formation to "curtains, wallpapers, mosaics, and roses."
In 1951, the Festival Pattern Group at the Festival of Britain hosted a collaborative group of textile manufacturers and experienced crystallographers to design lace and prints based on the X-ray crystallography of insulin, china clay, and hemoglobin. One of the leading scientists of the project was Dr. Helen Megaw (1907–2002), the Assistant Director of Research at the Cavendish Laboratory in Cambridge at the time. Megaw is credited as one of the central figures who took inspiration from crystal diagrams and saw their potential in design. In 2008, the Wellcome Collection in London curated an exhibition on the Festival Pattern Group called "From Atom to Patterns."
Contributions to chemistry and material science
X-ray crystallography has led to a better understanding of chemical bonds and non-covalent interactions. The initial studies revealed the typical radii of atoms, and confirmed many theoretical models of chemical bonding, such as the tetrahedral bonding of carbon in the diamond structure, the octahedral bonding of metals observed in ammonium hexachloroplatinate (IV), and the resonance observed in the planar carbonate group and in aromatic molecules. Kathleen Lonsdale's 1928 structure of hexamethylbenzene established the hexagonal symmetry of benzene and showed a clear difference in bond length between the aliphatic C–C bonds and aromatic C–C bonds; this finding led to the idea of resonance between chemical bonds, which had profound consequences for the development of chemistry. Her conclusions were anticipated by William Henry Bragg, who published models of naphthalene and anthracene in 1921 based on other molecules, an early form of molecular replacement.
Also in the 1920s, Victor Moritz Goldschmidt and later Linus Pauling developed rules for eliminating chemically unlikely structures and for determining the relative sizes of atoms. These rules led to the structure of brookite (1928) and an understanding of the relative stability of the rutile, brookite and anatase forms of titanium dioxide.
The distance between two bonded atoms is a sensitive measure of the bond strength and its bond order; thus, X-ray crystallographic studies have led to the discovery of even more exotic types of bonding in inorganic chemistry, such as metal-metal double bonds, metal-metal quadruple bonds, and three-center, two-electron bonds. X-ray crystallography—or, strictly speaking, an inelastic Compton scattering experiment—has also provided evidence for the partly covalent character of hydrogen bonds. In the field of organometallic chemistry, the X-ray structure of ferrocene initiated scientific studies of sandwich compounds, while that of Zeise's salt stimulated research into "back bonding" and metal-pi complexes. Finally, X-ray crystallography had a pioneering role in the development of supramolecular chemistry, particularly in clarifying the structures of the crown ethers and the principles of host-guest chemistry.
In material sciences, many complicated inorganic and organometallic systems have been analyzed using single-crystal methods, such as fullerenes, metalloporphyrins, and other complicated compounds. Single-crystal diffraction is also used in the pharmaceutical industry, due to recent problems with polymorphs. The major factors affecting the quality of single-crystal structures are the crystal's size and regularity; recrystallization is a commonly used technique to improve these factors in small-molecule crystals. The Cambridge Structural Database contains over 500,000 structures; over 99% of these structures were determined by X-ray diffraction.
Mineralogy and metallurgy
Since the 1920s, X-ray diffraction has been the principal method for determining the arrangement of atoms in minerals and metals. The application of X-ray crystallography to mineralogy began with the structure of garnet, which was determined in 1924 by Menzer. A systematic X-ray crystallographic study of the silicates was undertaken in the 1920s. This study showed that, as the Si/O ratio is altered, the silicate crystals exhibit significant changes in their atomic arrangements. Machatschki extended these insights to minerals in which aluminium substitutes for the silicon atoms of the silicates. The first application of X-ray crystallography to metallurgy likewise occurred in the mid-1920s. Most notably, Linus Pauling's structure of the alloy Mg2Sn led to his theory of the stability and structure of complex ionic crystals.
On October 17, 2012, the Curiosity rover on the planet Mars at "Rocknest" performed the first X-ray diffraction analysis of Martian soil. The results from the rover's CheMin analyzer revealed the presence of several minerals, including feldspar, pyroxenes and olivine, and suggested that the Martian soil in the sample was similar to the "weathered basaltic soils" of Hawaiian volcanoes.
Early organic and small biological molecules
The first structure of an organic compound, hexamethylenetetramine, was solved in 1923. This was followed by several studies of long-chain fatty acids, which are an important component of biological membranes. In the 1930s, the structures of much larger molecules with two-dimensional complexity began to be solved. A significant advance was the structure of phthalocyanine, a large planar molecule that is closely related to porphyrin molecules important in biology, such as heme, corrin and chlorophyll.
X-ray crystallography of biological molecules took off with Dorothy Crowfoot Hodgkin, who solved the structures of cholesterol (1937), penicillin (1946) and vitamin B12 (1956), for which she was awarded the Nobel Prize in Chemistry in 1964. In 1969, she succeeded in solving the structure of insulin, on which she worked for over thirty years.
Biological macromolecular crystallography
Crystal structures of proteins (which are irregular and hundreds of times larger than cholesterol) began to be solved in the late 1950s, beginning with the structure of sperm whale myoglobin by Sir John Cowdery Kendrew, for which he shared the Nobel Prize in Chemistry with Max Perutz in 1962. Since that success, over 86817 X-ray crystal structures of proteins, nucleic acids and other biological molecules have been determined. For comparison, the nearest competing method in terms of structures analyzed is nuclear magnetic resonance (NMR) spectroscopy, which has resolved 9561 chemical structures. Moreover, crystallography can solve structures of arbitrarily large molecules, whereas solution-state NMR is restricted to relatively small ones (less than 70 kDa). X-ray crystallography is now used routinely by scientists to determine how a pharmaceutical drug interacts with its protein target and what changes might improve it. However, intrinsic membrane proteins remain challenging to crystallize because they require detergents or other means to solubilize them in isolation, and such detergents often interfere with crystallization. Such membrane proteins are a large component of the genome and include many proteins of great physiological importance, such as ion channels and receptors. Helium cryogenics are used to prevent radiation damage in protein crystals.
On the other end of the size scale, even relatively small molecules can strain the resolving power of X-ray crystallography. The structure assigned in 1991 to diazonamide A, an antibiotic isolated from a marine organism (C40H34Cl2N6O6, M = 765.65), proved to be incorrect by the classical proof of structure: a synthetic sample was not identical to the natural product. The mistake was possible because of the inability of X-ray crystallography to distinguish between the correct -OH / >NH groups and the interchanged -NH2 / -O- groups in the incorrect structure.
Relationship to other scattering techniques
Elastic vs. inelastic scattering
X-ray crystallography is a form of elastic scattering; the outgoing X-rays have the same energy, and thus same wavelength, as the incoming X-rays, only with altered direction. By contrast, inelastic scattering occurs when energy is transferred from the incoming X-ray to the crystal, e.g., by exciting an inner-shell electron to a higher energy level. Such inelastic scattering reduces the energy (or increases the wavelength) of the outgoing beam. Inelastic scattering is useful for probing such excitations of matter, but not in determining the distribution of scatterers within the matter, which is the goal of X-ray crystallography.
X-rays range in wavelength from 10 to 0.01 nanometers; a typical wavelength used for crystallography is 1 Å (0.1 nm), which is on the scale of covalent chemical bonds and the radius of a single atom. Longer-wavelength photons (such as ultraviolet radiation) would not have sufficient resolution to determine the atomic positions. At the other extreme, shorter-wavelength photons such as gamma rays are difficult to produce in large numbers, difficult to focus, and interact too strongly with matter, producing particle-antiparticle pairs. Therefore, X-rays are the "sweet spot" for wavelength when determining atomic-resolution structures from the scattering of electromagnetic radiation.
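To relate these wavelengths to photon energies (E = hc/λ), the small sketch below converts a wavelength in angstroms to kilo-electron-volts; the constants are standard physical constants.

```python
# Photon energy for a given X-ray wavelength, E = h*c / lambda.
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
eV = 1.602e-19       # joules per electron-volt

def photon_energy_keV(wavelength_angstrom):
    lam = wavelength_angstrom * 1e-10        # convert angstroms to metres
    return h * c / lam / eV / 1e3            # energy in keV

print(f"1.00 Å -> {photon_energy_keV(1.0):.1f} keV")    # ~12.4 keV
print(f"1.54 Å -> {photon_energy_keV(1.54):.1f} keV")   # Cu K-alpha, ~8.0 keV
```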
Other X-ray techniques
Other forms of elastic X-ray scattering include powder diffraction, Small-Angle X-ray Scattering (SAXS) and several types of X-ray fiber diffraction, which was used by Rosalind Franklin in determining the double-helix structure of DNA. In general, single-crystal X-ray diffraction offers more structural information than these other techniques; however, it requires a sufficiently large and regular crystal, which is not always available.
These scattering methods generally use monochromatic X-rays, which are restricted to a single wavelength with minor deviations. A broad spectrum of X-rays (that is, a blend of X-rays with different wavelengths) can also be used to carry out X-ray diffraction, a technique known as the Laue method. This is the method used in the original discovery of X-ray diffraction. Laue scattering provides much structural information with only a short exposure to the X-ray beam, and is therefore used in structural studies of very rapid events (Time resolved crystallography). However, it is not as well-suited as monochromatic scattering for determining the full atomic structure of a crystal and therefore works better with crystals with relatively simple atomic arrangements.
The Laue back reflection mode records X-rays scattered backwards from a broad spectrum source. This is useful if the sample is too thick for X-rays to transmit through it. The diffracting planes in the crystal are determined by knowing that the normal to the diffracting plane bisects the angle between the incident beam and the diffracted beam. A Greninger chart can be used to interpret the back reflection Laue photograph.
Electron and neutron diffraction
Other particles, such as electrons and neutrons, may be used to produce a diffraction pattern. Although electron, neutron, and X-ray scattering are based on different physical processes, the resulting diffraction patterns are analyzed using the same coherent diffraction imaging techniques.
As derived below, the electron density within the crystal and the diffraction patterns are related by a simple mathematical method, the Fourier transform, which allows the density to be calculated relatively easily from the patterns. However, this works only if the scattering is weak, i.e., if the scattered beams are much less intense than the incoming beam. Weakly scattered beams pass through the remainder of the crystal without undergoing a second scattering event. Such re-scattered waves are called "secondary scattering" and hinder the analysis. Any sufficiently thick crystal will produce secondary scattering, but since X-rays interact relatively weakly with the electrons, this is generally not a significant concern. By contrast, electron beams may produce strong secondary scattering even for relatively thin crystals (>100 nm). Since this thickness corresponds to the diameter of many viruses, a promising direction is the electron diffraction of isolated macromolecular assemblies, such as viral capsids and molecular machines, which may be carried out with a cryo-electron microscope. Moreover, the strong interaction of electrons with matter (about 1000 times stronger than for X-rays) allows determination of the atomic structure of extremely small volumes. The field of applications for electron crystallography ranges from biomolecules such as membrane proteins, through organic thin films, to the complex structures of (nanocrystalline) intermetallic compounds and zeolites.
Neutron diffraction is an excellent method for structure determination, although it has been difficult to obtain intense, monochromatic beams of neutrons in sufficient quantities. Traditionally, nuclear reactors have been used, although the new Spallation Neutron Source holds much promise in the near future. Being uncharged, neutrons scatter much more readily from the atomic nuclei than from the electrons. Therefore, neutron scattering is very useful for observing the positions of light atoms with few electrons, especially hydrogen, which is essentially invisible in X-ray diffraction. Neutron scattering also has the remarkable property that the solvent can be made invisible by adjusting the ratio of normal water, H2O, and heavy water, D2O.
Overview of single-crystal X-ray diffraction
The oldest and most precise method of X-ray crystallography is single-crystal X-ray diffraction, in which a beam of X-rays strikes a single crystal, producing scattered beams. When they land on a piece of film or other detector, these beams make a diffraction pattern of spots; the strengths and angles of these beams are recorded as the crystal is gradually rotated. Each spot is called a reflection, since it corresponds to the reflection of the X-rays from one set of evenly spaced planes within the crystal. For single crystals of sufficient purity and regularity, X-ray diffraction data can determine the mean chemical bond lengths and angles to within a few thousandths of an angstrom and to within a few tenths of a degree, respectively. The atoms in a crystal are not static, but oscillate about their mean positions, usually by less than a few tenths of an angstrom. X-ray crystallography allows measuring the size of these oscillations.
The technique of single-crystal X-ray crystallography has three basic steps. The first—and often most difficult—step is to obtain an adequate crystal of the material under study. The crystal should be sufficiently large (typically larger than 0.1 mm in all dimensions), pure in composition and regular in structure, with no significant internal imperfections such as cracks or twinning.
In the second step, the crystal is placed in an intense beam of X-rays, usually of a single wavelength (monochromatic X-rays), producing the regular pattern of reflections. As the crystal is gradually rotated, previous reflections disappear and new ones appear; the intensity of every spot is recorded at every orientation of the crystal. Multiple data sets may have to be collected, with each set covering slightly more than half a full rotation of the crystal and typically containing tens of thousands of reflections.
In the third step, these data are combined computationally with complementary chemical information to produce and refine a model of the arrangement of atoms within the crystal. The final, refined model of the atomic arrangement—now called a crystal structure—is usually stored in a public database.
As the crystal's repeating unit, its unit cell, becomes larger and more complex, the atomic-level picture provided by X-ray crystallography becomes less well-resolved (more "fuzzy") for a given number of observed reflections. Two limiting cases of X-ray crystallography—"small-molecule" and "macromolecular" crystallography—are often discerned. Small-molecule crystallography typically involves crystals with fewer than 100 atoms in their asymmetric unit; such crystal structures are usually so well resolved that the atoms can be discerned as isolated "blobs" of electron density. By contrast, macromolecular crystallography often involves tens of thousands of atoms in the unit cell. Such crystal structures are generally less well-resolved (more "smeared out"); the atoms and chemical bonds appear as tubes of electron density, rather than as isolated atoms. In general, small molecules are also easier to crystallize than macromolecules; however, X-ray crystallography has proven possible even for viruses with hundreds of thousands of atoms. Though normally x-ray crystallography can only be performed if the sample is in crystal form, new research has been done into sampling non-crystalline forms of samples.
Although crystallography can be used to characterize the disorder in an impure or irregular crystal, crystallography generally requires a pure crystal of high regularity to solve the structure of a complicated arrangement of atoms. Pure, regular crystals can sometimes be obtained from natural or synthetic materials, such as samples of metals, minerals or other macroscopic materials. The regularity of such crystals can sometimes be improved with macromolecular crystal annealing and other methods. However, in many cases, obtaining a diffraction-quality crystal is the chief barrier to solving its atomic-resolution structure.
Small-molecule and macromolecular crystallography differ in the range of possible techniques used to produce diffraction-quality crystals. Small molecules generally have few degrees of conformational freedom, and may be crystallized by a wide range of methods, such as chemical vapor deposition and recrystallization. By contrast, macromolecules generally have many degrees of freedom and their crystallization must be carried out to maintain a stable structure. For example, proteins and larger RNA molecules cannot be crystallized if their tertiary structure has been unfolded; therefore, the range of crystallization conditions is restricted to solution conditions in which such molecules remain folded.
Protein crystals are almost always grown in solution. The most common approach is to lower the solubility of the component molecules very gradually; if this is done too quickly, the molecules will precipitate from solution, forming a useless dust or amorphous gel on the bottom of the container. Crystal growth in solution is characterized by two steps: nucleation of a microscopic crystallite (possibly having only 100 molecules), followed by growth of that crystallite, ideally to a diffraction-quality crystal. The solution conditions that favor the first step (nucleation) are not always the same conditions that favor the second step (subsequent growth). The crystallographer's goal is to identify solution conditions that favor the development of a single, large crystal, since larger crystals offer improved resolution of the molecule. Consequently, the solution conditions should disfavor the first step (nucleation) but favor the second (growth), so that only one large crystal forms per droplet. If nucleation is favored too much, a shower of small crystallites will form in the droplet, rather than one large crystal; if favored too little, no crystal will form whatsoever. Another approach is to crystallize proteins under oil, where aqueous protein solutions are dispensed under liquid oil and water evaporates through the oil layer. Different oils have different evaporation permeabilities, and therefore yield different concentration rates for a given precipitant/protein mixture. The technique relies on bringing the protein directly into the nucleation zone by mixing the protein with the appropriate amount of precipitant to prevent the diffusion of water out of the drop.
It is extremely difficult to predict good conditions for nucleation or growth of well-ordered crystals. In practice, favorable conditions are identified by screening; a very large batch of the molecules is prepared, and a wide variety of crystallization solutions are tested. Hundreds, even thousands, of solution conditions are generally tried before finding the successful one. The various conditions can use one or more physical mechanisms to lower the solubility of the molecule; for example, some may change the pH, some contain salts of the Hofmeister series or chemicals that lower the dielectric constant of the solution, and still others contain large polymers such as polyethylene glycol that drive the molecule out of solution by entropic effects. It is also common to try several temperatures for encouraging crystallization, or to gradually lower the temperature so that the solution becomes supersaturated. These methods require large amounts of the target molecule, as they use high concentration of the molecule(s) to be crystallized. Due to the difficulty in obtaining such large quantities (milligrams) of crystallization-grade protein, robots have been developed that are capable of accurately dispensing crystallization trial drops that are in the order of 100 nanoliters in volume. This means that 10-fold less protein is used per experiment when compared to crystallization trials set up by hand (in the order of 1 microliter).
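Because screening is combinatorial, it is often set up programmatically; the sketch below enumerates a coarse factorial screen. The specific reagents, concentrations, and ranges are illustrative assumptions only, not a recommended screen.

```python
from itertools import product

# Candidate variables for a coarse crystallization screen (illustrative values only).
ph_values = [4.5, 6.0, 7.5, 9.0]
peg_percent = [10, 20, 30]        # % (w/v) polyethylene glycol 3350
nacl_molar = [0.1, 0.2, 0.5]      # mol/L sodium chloride

screen = list(product(ph_values, peg_percent, nacl_molar))
print(f"{len(screen)} conditions to dispense")          # 4 * 3 * 3 = 36
for ph, peg, nacl in screen[:3]:
    print(f"pH {ph}, {peg}% PEG 3350, {nacl} M NaCl")
```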
Several factors are known to inhibit or mar crystallization. The growing crystals are generally held at a constant temperature and protected from shocks or vibrations that might disturb their crystallization. Impurities in the molecules or in the crystallization solutions are often inimical to crystallization. Conformational flexibility in the molecule also tends to make crystallization less likely, due to entropy. Ironically, molecules that tend to self-assemble into regular helices are often unwilling to assemble into crystals. Crystals can be marred by twinning, which can occur when a unit cell can pack equally favorably in multiple orientations; although recent advances in computational methods may allow solving the structure of some twinned crystals. Having failed to crystallize a target molecule, a crystallographer may try again with a slightly modified version of the molecule; even small changes in molecular properties can lead to large differences in crystallization behavior.
Mounting the crystal
The crystal is mounted for measurements so that it may be held in the X-ray beam and rotated. There are several methods of mounting. In the past, crystals were loaded into glass capillaries with the crystallization solution (the mother liquor). Nowadays, crystals of small molecules are typically attached with oil or glue to a glass fiber or a loop, which is made of nylon or plastic and attached to a solid rod. Protein crystals are scooped up by a loop, then flash-frozen with liquid nitrogen. This freezing reduces the radiation damage of the X-rays, as well as the noise in the Bragg peaks due to thermal motion (the Debye-Waller effect). However, untreated protein crystals often crack if flash-frozen; therefore, they are generally pre-soaked in a cryoprotectant solution before freezing. Unfortunately, this pre-soak may itself cause the crystal to crack, ruining it for crystallography. Generally, successful cryo-conditions are identified by trial and error.
The capillary or loop is mounted on a goniometer, which allows it to be positioned accurately within the X-ray beam and rotated. Since both the crystal and the beam are often very small, the crystal must be centered within the beam to within ~25 micrometers accuracy, which is aided by a camera focused on the crystal. The most common type of goniometer is the "kappa goniometer", which offers three angles of rotation: the ω angle, which rotates about an axis perpendicular to the beam; the κ angle, about an axis at ~50° to the ω axis; and, finally, the φ angle about the loop/capillary axis. When the κ angle is zero, the ω and φ axes are aligned. The κ rotation allows for convenient mounting of the crystal, since the arm in which the crystal is mounted may be swung out towards the crystallographer. The oscillations carried out during data collection (mentioned below) involve the ω axis only. An older type of goniometer is the four-circle goniometer, and its relatives such as the six-circle goniometer.
Small-scale data collection can be done on a local X-ray tube source, typically coupled with an image plate detector. These have the advantage of being relatively inexpensive and easy to maintain, and allow for quick screening and collection of samples. However, the wavelength of the light produced is limited by the anode material, typically copper. Further, the intensity is limited by the power applied and the cooling capacity available to avoid melting the anode. In such systems, electrons are boiled off of a cathode and accelerated through a strong electric potential of ~50 kV; having reached a high speed, the electrons collide with a metal plate, emitting bremsstrahlung and some strong spectral lines corresponding to the excitation of inner-shell electrons of the metal. The most common metal used is copper, which can be kept cool easily, due to its high thermal conductivity, and which produces strong Kα and Kβ lines. The Kβ line is sometimes suppressed with a thin (~10 µm) nickel foil. The simplest and cheapest variety of sealed X-ray tube has a stationary anode (the Crookes tube) and runs with ~2 kW of electron beam power. The more expensive variety has a rotating-anode type source that runs with ~14 kW of e-beam power.
X-rays are generally filtered (by use of X-ray filters) to a single wavelength (made monochromatic) and collimated to a single direction before they are allowed to strike the crystal. The filtering not only simplifies the data analysis, but also removes radiation that degrades the crystal without contributing useful information. Collimation is done either with a collimator (basically, a long tube) or with a clever arrangement of gently curved mirrors. Mirror systems are preferred for small crystals (under 0.3 mm) or for samples with large unit cells (over 150 Å).
Synchrotron radiation sources are some of the brightest light sources on Earth and are among the most powerful tools available to X-ray crystallographers. The radiation consists of X-ray beams generated in large machines called synchrotrons. These machines accelerate electrically charged particles, often electrons, to nearly the speed of light and confine them in a (roughly) circular loop using magnetic fields.
Synchrotrons are generally national facilities, each with several dedicated beamlines where data is collected without interruption. Synchrotrons were originally designed for use by high-energy physicists studying subatomic particles and cosmic phenomena. The largest component of each synchrotron is its electron storage ring. This ring is actually not a perfect circle, but a many-sided polygon. At each corner of the polygon, or sector, precisely aligned magnets bend the electron stream. As the electrons’ path is bent, they emit bursts of energy in the form of X-rays.
Using synchrotron radiation imposes specific requirements on X-ray crystallography. The intense ionizing radiation can cause radiation damage to samples, particularly macromolecular crystals. Cryo-crystallography protects the sample from radiation damage by freezing the crystal at liquid-nitrogen temperatures (~100 K). On the other hand, synchrotron radiation has the advantage of user-selectable wavelengths, allowing for anomalous scattering experiments that maximize the anomalous signal. This is critical in experiments such as SAD and MAD phasing.
Free Electron Laser
Recently, free electron lasers have been developed for use in X-ray crystallography. These are the brightest X-ray sources currently available, with the X-rays coming in femtosecond bursts. The intensity of the source is such that atomic-resolution diffraction patterns can be resolved for crystals otherwise too small for collection. However, the intense light source also destroys the sample, requiring multiple crystals to be shot. As each crystal is randomly oriented in the beam, hundreds of thousands of individual diffraction images must be collected in order to get a complete data set. This method, serial femtosecond crystallography, has been used to solve a number of protein crystal structures, sometimes noting differences with equivalent structures collected from synchrotron sources.
Recording the reflections
When a crystal is mounted and exposed to an intense beam of X-rays, it scatters the X-rays into a pattern of spots or reflections that can be observed on a screen behind the crystal. A similar pattern may be seen by shining a laser pointer at a compact disc. The relative intensities of these spots provide the information to determine the arrangement of molecules within the crystal in atomic detail. The intensities of these reflections may be recorded with photographic film, an area detector or with a charge-coupled device (CCD) image sensor. The peaks at small angles correspond to low-resolution data, whereas those at high angles represent high-resolution data; thus, an upper limit on the eventual resolution of the structure can be determined from the first few images. Some measures of diffraction quality can be determined at this point, such as the mosaicity of the crystal and its overall disorder, as observed in the peak widths. Some pathologies of the crystal that would render it unfit for solving the structure can also be diagnosed quickly at this point.
One image of spots is insufficient to reconstruct the whole crystal; it represents only a small slice of the full Fourier transform. To collect all the necessary information, the crystal must be rotated step-by-step through 180°, with an image recorded at every step; actually, slightly more than 180° is required to cover reciprocal space, due to the curvature of the Ewald sphere. However, if the crystal has a higher symmetry, a smaller angular range such as 90° or 45° may be recorded. The rotation axis should be changed at least once, to avoid developing a "blind spot" in reciprocal space close to the rotation axis. It is customary to rock the crystal slightly (by 0.5–2°) to catch a broader region of reciprocal space.
Multiple data sets may be necessary for certain phasing methods. For example, MAD phasing requires that the scattering be recorded at least three (and usually four, for redundancy) wavelengths of the incoming X-ray radiation. A single crystal may degrade too much during the collection of one data set, owing to radiation damage; in such cases, data sets on multiple crystals must be taken.
Crystal symmetry, unit cell, and image scaling
The recorded series of two-dimensional diffraction patterns, each corresponding to a different crystal orientation, is converted into a three-dimensional model of the electron density; the conversion uses the mathematical technique of Fourier transforms, which is explained below. Each spot corresponds to a different type of variation in the electron density; the crystallographer must determine which variation corresponds to which spot (indexing), the relative strengths of the spots in different images (merging and scaling) and how the variations should be combined to yield the total electron density (phasing).
Data processing begins with indexing the reflections. This means identifying the dimensions of the unit cell and which image peak corresponds to which position in reciprocal space. A byproduct of indexing is to determine the symmetry of the crystal, i.e., its space group. Some space groups can be eliminated from the beginning. For example, reflection symmetries cannot be observed in chiral molecules; thus, only 65 space groups of 230 possible are allowed for protein molecules, which are almost always chiral. Indexing is generally accomplished using an autoindexing routine. Having assigned symmetry, the data is then integrated. This converts the hundreds of images containing the thousands of reflections into a single file, consisting of (at the very least) records of the Miller index of each reflection and an intensity for each reflection (at this stage the file often also includes error estimates and measures of partiality, i.e., what part of a given reflection was recorded on that image).
A full data set may consist of hundreds of separate images taken at different orientations of the crystal. The first step is to merge and scale these various images, that is, to identify which peaks appear in two or more images (merging) and to scale the relative images so that they have a consistent intensity scale. Optimizing the intensity scale is critical because the relative intensity of the peaks is the key information from which the structure is determined. The repetitive technique of crystallographic data collection and the often high symmetry of crystalline materials cause the diffractometer to record many symmetry-equivalent reflections multiple times. This allows calculating a symmetry-related R-factor, a reliability index based upon how similar the measured intensities of symmetry-equivalent reflections are, and thus assessing the quality of the data.
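One common form of this reliability index is R_merge: the summed absolute deviation of each measurement from the mean of its symmetry-equivalent group, divided by the summed intensities. A minimal sketch, using made-up intensities, is shown below.

```python
def r_merge(observations):
    """R_merge = sum_hkl sum_i |I_i - <I>_hkl| / sum_hkl sum_i I_i.
    `observations` maps a Miller index (h, k, l) to the list of measured
    intensities of that reflection and its symmetry equivalents."""
    numerator = denominator = 0.0
    for intensities in observations.values():
        mean_I = sum(intensities) / len(intensities)
        numerator += sum(abs(I - mean_I) for I in intensities)
        denominator += sum(intensities)
    return numerator / denominator

# Illustrative (made-up) multiply-measured reflections
obs = {(1, 0, 0): [1050.0, 980.0, 1010.0],
       (1, 1, 0): [420.0, 455.0]}
print(f"R_merge = {r_merge(obs):.3f}")
```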
The data collected from a diffraction experiment is a reciprocal space representation of the crystal lattice. The position of each diffraction 'spot' is governed by the size and shape of the unit cell, and the inherent symmetry within the crystal. The intensity of each diffraction 'spot' is recorded, and this intensity is proportional to the square of the structure factor amplitude. The structure factor is a complex number containing information relating to both the amplitude and phase of a wave. In order to obtain an interpretable electron density map, both amplitude and phase must be known (an electron density map allows a crystallographer to build a starting model of the molecule). The phase cannot be directly recorded during a diffraction experiment: this is known as the phase problem. Initial phase estimates can be obtained in a variety of ways:
- Ab initio phasing or direct methods – This is usually the method of choice for small molecules (<1000 non-hydrogen atoms), and has been used successfully to solve the phase problems for small proteins. If the resolution of the data is better than 1.4 Å (140 pm), direct methods can be used to obtain phase information, by exploiting known phase relationships between certain groups of reflections.
- Molecular replacement – if a related structure is known, it can be used as a search model in molecular replacement to determine the orientation and position of the molecules within the unit cell. The phases obtained this way can be used to generate electron density maps.
- Anomalous X-ray scattering (MAD or SAD phasing) – the X-ray wavelength may be scanned past an absorption edge of an atom, which changes the scattering in a known way. By recording full sets of reflections at three different wavelengths (far below, far above and in the middle of the absorption edge) one can solve for the substructure of the anomalously diffracting atoms and hence the structure of the whole molecule. The most popular method of incorporating anomalous scattering atoms into proteins is to express the protein in a methionine auxotroph (a host incapable of synthesizing methionine) in a medium rich in selenomethionine, which contains selenium atoms. A MAD experiment can then be conducted around the absorption edge, which should then yield the position of any methionine residues within the protein, providing initial phases.
- Heavy atom methods (multiple isomorphous replacement) – If electron-dense metal atoms can be introduced into the crystal, direct methods or Patterson-space methods can be used to determine their location and to obtain initial phases. Such heavy atoms can be introduced either by soaking the crystal in a heavy atom-containing solution, or by co-crystallization (growing the crystals in the presence of a heavy atom). As in MAD phasing, the changes in the scattering amplitudes can be interpreted to yield the phases. Although this is the original method by which protein crystal structures were solved, it has largely been superseded by MAD phasing with selenomethionine.
Model building and phase refinement
Having obtained initial phases, an initial model can be built. This model can be used to refine the phases, leading to an improved model, and so on. Given a model of some atomic positions, these positions and their respective Debye-Waller factors (or B-factors, accounting for the thermal motion of the atom) can be refined to fit the observed diffraction data, ideally yielding a better set of phases. A new model can then be fit to the new electron density map and a further round of refinement is carried out. This continues until the correlation between the diffraction data and the model is maximized. The agreement is measured by an R-factor defined as

R = Σ ||F_obs| − |F_calc|| / Σ |F_obs|
where F is the structure factor. A similar quality criterion is Rfree, which is calculated from a subset (~10%) of reflections that were not included in the structure refinement. Both R factors depend on the resolution of the data. As a rule of thumb, Rfree should be approximately the resolution in angstroms divided by 10; thus, a data-set with 2 Å resolution should yield a final Rfree ~ 0.2. Chemical bonding features such as stereochemistry, hydrogen bonding and distribution of bond lengths and angles are complementary measures of the model quality. Phase bias is a serious problem in such iterative model building. Omit maps are a common technique used to check for this.
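A minimal sketch of the R-factor computation from lists of observed and calculated structure-factor amplitudes (the amplitudes shown are made up for illustration):

```python
def r_factor(F_obs, F_calc):
    """Crystallographic R-factor: sum(||Fobs| - |Fcalc||) / sum(|Fobs|)."""
    numerator = sum(abs(fo - fc) for fo, fc in zip(F_obs, F_calc))
    denominator = sum(F_obs)
    return numerator / denominator

# Illustrative (made-up) amplitudes for a handful of reflections
F_obs = [120.0, 86.0, 45.0, 230.0, 15.0]
F_calc = [115.0, 90.0, 40.0, 222.0, 19.0]
print(f"R = {r_factor(F_obs, F_calc):.3f}")   # ~0.05 for this toy data
```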
It may not be possible to observe every atom of the crystallized molecule – it must be remembered that the resulting electron density is an average of all the molecules within the crystal. In some cases, there is too much residual disorder in those atoms, and the resulting electron density for atoms existing in many conformations is smeared to such an extent that it is no longer detectable in the electron density map. Weakly scattering atoms such as hydrogen are routinely invisible. It is also possible for a single atom to appear multiple times in an electron density map, e.g., if a protein sidechain has multiple (<4) allowed conformations. In still other cases, the crystallographer may detect that the covalent structure deduced for the molecule was incorrect, or changed. For example, proteins may be cleaved or undergo post-translational modifications that were not detected prior to the crystallization.
Deposition of the structure
Once the model of a molecule's structure has been finalized, it is often deposited in a crystallographic database such as the Cambridge Structural Database (for small molecules), the Inorganic Crystal Structure Database (ICSD) (for inorganic compounds) or the Protein Data Bank (for protein structures). Many structures obtained in private commercial ventures to crystallize medicinally relevant proteins are not deposited in public crystallographic databases.
The main goal of X-ray crystallography is to determine the density of electrons f(r) throughout the crystal, where r represents the three-dimensional position vector within the crystal. To do this, X-ray scattering is used to collect data about its Fourier transform F(q), which is inverted mathematically to obtain the density defined in real space, using the formula

f(r) = (1 / (2π)^3) ∫ F(q) e^(i q·r) dq
where the integral is taken over all values of q. The three-dimensional real vector q represents a point in reciprocal space, that is, a particular oscillation in the electron density as one moves in the direction in which q points. The length of q corresponds to 2π divided by the wavelength of the oscillation. The corresponding formula for a Fourier transform will be used below

F(q) = ∫ f(r) e^(−i q·r) dr
where the integral is summed over all possible values of the position vector r within the crystal.
The intensities of the reflections observed in X-ray diffraction give us the magnitudes |F(q)| but not the phases φ(q). To obtain the phases, full sets of reflections are collected with known alterations to the scattering, either by modulating the wavelength past a certain absorption edge or by adding strongly scattering (i.e., electron-dense) metal atoms such as mercury. Combining the magnitudes and phases yields the full Fourier transform F(q), which may be inverted to obtain the electron density f(r).
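For a crystal described as a set of atoms in a unit cell, F is commonly evaluated as a discrete sum over the atoms. The sketch below does this for a toy two-atom cell; the constant scattering factors (roughly the atomic numbers) and the atom positions are illustrative assumptions, and the fall-off of the form factors with angle is ignored.

```python
import numpy as np

def structure_factor(hkl, atoms):
    """F(hkl) = sum_j f_j * exp(2*pi*i*(h*x_j + k*y_j + l*z_j)),
    with atoms given as (scattering_factor, x, y, z) in fractional coordinates."""
    h, k, l = hkl
    F = 0.0 + 0.0j
    for f_j, x, y, z in atoms:
        F += f_j * np.exp(2j * np.pi * (h * x + k * y + l * z))
    return F

# Toy rock-salt-like cell: "Na" at the origin, "Cl" at the body centre.
atoms = [(11.0, 0.0, 0.0, 0.0), (17.0, 0.5, 0.5, 0.5)]
for hkl in [(1, 0, 0), (1, 1, 1), (2, 0, 0)]:
    F = structure_factor(hkl, atoms)
    print(hkl, f"|F| = {abs(F):5.1f}", f"phase = {np.degrees(np.angle(F)):6.1f} deg")
```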
Crystals are often idealized as being perfectly periodic. In that ideal case, the atoms are positioned on a perfect lattice, the electron density is perfectly periodic, and the Fourier transform F(q) is zero except when q belongs to the reciprocal lattice (the so-called Bragg peaks). In reality, however, crystals are not perfectly periodic; atoms vibrate about their mean position, and there may be disorder of various types, such as mosaicity, dislocations, various point defects, and heterogeneity in the conformation of crystallized molecules. Therefore, the Bragg peaks have a finite width and there may be significant diffuse scattering, a continuum of scattered X-rays that fall between the Bragg peaks.
Intuitive understanding by Bragg's law
An intuitive understanding of X-ray diffraction can be obtained from the Bragg model of diffraction. In this model, a given reflection is associated with a set of evenly spaced sheets running through the crystal, usually passing through the centers of the atoms of the crystal lattice. The orientation of a particular set of sheets is identified by its three Miller indices (h, k, l), and let their spacing be noted by d. William Lawrence Bragg proposed a model in which the incoming X-rays are scattered specularly (mirror-like) from each plane; from that assumption, X-rays scattered from adjacent planes will combine constructively (constructive interference) when the angle θ between the plane and the X-ray results in a path-length difference that is an integer multiple n of the X-ray wavelength λ.
A reflection is said to be indexed when its Miller indices (or, more correctly, its reciprocal lattice vector components) have been identified from the known wavelength and the scattering angle 2θ. Such indexing gives the unit-cell parameters, the lengths and angles of the unit-cell, as well as its space group. Since Bragg's law does not interpret the relative intensities of the reflections, however, it is generally inadequate to solve for the arrangement of atoms within the unit-cell; for that, a Fourier transform method must be carried out.
Scattering as a Fourier transform
The incoming X-ray beam has a polarization and should be represented as a vector wave; however, for simplicity, let it be represented here as a scalar wave. We also ignore the complication of the time dependence of the wave and just concentrate on the wave's spatial dependence. Plane waves can be represented by a wave vector kin, and so the strength of the incoming wave at time t=0 is given by
At position r within the sample, let there be a density of scatterers f(r); these scatterers should produce a scattered spherical wave of amplitude proportional to the local amplitude of the incoming wave times the number of scatterers in a small volume dV about r
where S is the proportionality constant.
Let's consider the fraction of scattered waves that leave with an outgoing wave-vector of kout and strike the screen at rscreen. Since no energy is lost (elastic, not inelastic scattering), the wavelengths are the same as are the magnitudes of the wave-vectors |kin|=|kout|. From the time that the photon is scattered at r until it is absorbed at rscreen, the photon undergoes a change in phase
The net radiation arriving at rscreen is the sum of all the scattered waves throughout the crystal
which may be written as a Fourier transform
where q = kout – kin. The measured intensity of the reflection will be the square of this amplitude
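Collecting the steps above into explicit formulas gives a standard kinematic-scattering sketch (A denotes the incoming amplitude, a symbol introduced here for convenience; S, kin, kout and q are as already defined):

ψin(r) = A e^(i kin·r)
dψscattered ∝ S f(r) ψin(r) dV
phase picked up travelling from r to the screen: kout·(rscreen − r)
ψ(rscreen) ∝ A S e^(i kout·rscreen) ∫ f(r) e^(−iq·r) dr = A S e^(i kout·rscreen) F(q)
I ∝ |F(q)|²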
Friedel and Bijvoet mates
For every reflection corresponding to a point q in the reciprocal space, there is another reflection of the same intensity at the opposite point -q. This opposite reflection is known as the Friedel mate of the original reflection. This symmetry results from the mathematical fact that the density of electrons f(r) at a position r is always a real number. As noted above, f(r) is the inverse transform of its Fourier transform F(q); however, such an inverse transform is a complex number in general. To ensure that f(r) is real, the Fourier transform F(q) must be such that the Friedel mates F(−q) and F(q) are complex conjugates of one another. Thus, F(−q) has the same magnitude as F(q) but they have the opposite phase, i.e., φ(−q) = −φ(q).
The equality of their magnitudes ensures that the Friedel mates have the same intensity |F|². This symmetry allows one to measure the full Fourier transform from only half the reciprocal space, e.g., by rotating the crystal slightly more than 180° instead of a full 360° revolution. In crystals with significant symmetry, even more reflections may have the same intensity (Bijvoet mates); in such cases, even less of the reciprocal space may need to be measured. In favorable cases of high symmetry, sometimes only 90° or even only 45° of data are required to completely explore the reciprocal space.
The Friedel-mate constraint can be derived from the definition of the inverse Fourier transform
Since Euler's formula states that e^(ix) = cos(x) + i sin(x), the inverse Fourier transform can be separated into a sum of a purely real part and a purely imaginary part
The function f(r) is real if and only if the second integral Isin is zero for all values of r. In turn, this is true if and only if the above constraint is satisfied
since Isin = −Isin implies that Isin=0.
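Spelled out with the transforms defined above, the separation reads

f(r) = (1/(2π)³) ∫ |F(q)| [cos(q·r + φ(q)) + i sin(q·r + φ(q))] dq

and the sine (imaginary) part vanishes for every r exactly when the contributions from q and −q cancel pairwise, i.e. when |F(−q)| = |F(q)| and φ(−q) = −φ(q), which is the Friedel-mate condition F(−q) = F(q)*.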
Each X-ray diffraction image represents only a slice, a spherical slice of reciprocal space, as may be seen by the Ewald sphere construction. Both kout and kin have the same length, due to the elastic scattering, since the wavelength has not changed. Therefore, they may be represented as two radial vectors in a sphere in reciprocal space, which shows the values of q that are sampled in a given diffraction image. Since there is a slight spread in the wavelengths of the incoming X-ray beam, the values of |F(q)| can be measured only for q vectors located between the two spheres corresponding to those radii. Therefore, to obtain a full set of Fourier transform data, it is necessary to rotate the crystal through slightly more than 180°, or sometimes less if sufficient symmetry is present. A full 360° rotation is not needed because of a symmetry intrinsic to the Fourier transforms of real functions (such as the electron density), but "slightly more" than 180° is needed to cover all of reciprocal space within a given resolution because of the curvature of the Ewald sphere. In practice, the crystal is rocked by a small amount (0.25-1°) to incorporate reflections near the boundaries of the spherical Ewald's shells.
A well-known result of Fourier transforms is the autocorrelation theorem, which states that the autocorrelation c(r) of a function f(r)
has a Fourier transform C(q) that is the squared magnitude of F(q)
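In symbols:

c(r) = ∫ f(r′) f(r′ + r) dr′   and   C(q) = |F(q)|²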
Therefore, the autocorrelation function c(r) of the electron density (also known as the Patterson function) can be computed directly from the reflection intensities, without computing the phases. In principle, this could be used to determine the crystal structure directly; however, it is difficult to realize in practice. The autocorrelation function corresponds to the distribution of vectors between atoms in the crystal; thus, a crystal of N atoms in its unit cell may have N(N-1) peaks in its Patterson function. Given the inevitable errors in measuring the intensities, and the mathematical difficulties of reconstructing atomic positions from the interatomic vectors, this technique is rarely used to solve structures, except for the simplest crystals.
Advantages of a crystal
In principle, an atomic structure could be determined from applying X-ray scattering to non-crystalline samples, even to a single molecule. However, crystals offer a much stronger signal due to their periodicity. A crystalline sample is by definition periodic; a crystal is composed of many unit cells repeated indefinitely in three independent directions. Such periodic systems have a Fourier transform that is concentrated at periodically repeating points in reciprocal space known as Bragg peaks; the Bragg peaks correspond to the reflection spots observed in the diffraction image. Since the amplitude at these reflections grows linearly with the number N of scatterers, the observed intensity of these spots should grow quadratically, like N². In other words, using a crystal concentrates the weak scattering of the individual unit cells into a much more powerful, coherent reflection that can be observed above the noise. This is an example of constructive interference.
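A quick way to see that scaling (with R_j the position of the j-th unit cell and F_cell its transform, symbols introduced here only for illustration): at a Bragg peak every one of the N cells scatters in phase, so

F_crystal(q) = Σj F_cell(q) e^(−iq·R_j) = N F_cell(q), hence I ∝ N²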
In a liquid, powder or amorphous sample, molecules within that sample are in random orientations. Such samples have a continuous Fourier spectrum that uniformly spreads its amplitude thereby reducing the measured signal intensity, as is observed in SAXS. More importantly, the orientational information is lost. Although theoretically possible, it is experimentally difficult to obtain atomic-resolution structures of complicated, asymmetric molecules from such rotationally averaged data. An intermediate case is fiber diffraction in which the subunits are arranged periodically in at least one dimension.
Nobel Prizes for X-ray Crystallography
|1914||Max von Laue||Physics||"For his discovery of the diffraction of X-rays by crystals", an important step in the development of X-ray spectroscopy.|
|1915||William Henry Bragg||Physics||"For their services in the analysis of crystal structure by means of X-rays",|
|1915||William Lawrence Bragg||Physics||"For their services in the analysis of crystal structure by means of X-rays",|
|1962||Max F. Perutz||Chemistry||"for their studies of the structures of globular proteins"|
|1962||John C. Kendrew||Chemistry||"for their studies of the structures of globular proteins"|
|1962||James Dewey Watson||Medicine||"For their discoveries concerning the molecular structure of nucleic acids and its significance for information transfer in living material"|
|1962||Francis Harry Compton Crick||Medicine||"For their discoveries concerning the molecular structure of nucleic acids and its significance for information transfer in living material"|
|1962||Maurice Hugh Frederick Wilkins||Medicine||"For their discoveries concerning the molecular structure of nucleic acids and its significance for information transfer in living material"|
|1964||Dorothy Hodgkin||Chemistry||"For her determinations by X-ray techniques of the structures of important biochemical substances"|
|1972||Stanford Moore||Chemistry||"For their contribution to the understanding of the connection between chemical structure and catalytic activity of the active centre of the ribonuclease molecule"|
|1972||William H. Stein||Chemistry||"For their contribution to the understanding of the connection between chemical structure and catalytic activity of the active centre of the ribonuclease molecule"|
|1976||William N. Lipscomb||Chemistry||"For his studies on the structure of boranes illuminating problems of chemical bonding"|
|1985||Jerome Karle||Chemistry||"For their outstanding achievements in developing direct methods for the determination of crystal structures"|
|1985||Herbert A. Hauptman||Chemistry||"For their outstanding achievements in developing direct methods for the determination of crystal structures"|
|1988||Johann Deisenhofer||Chemistry||"For their determination of the three-dimensional structure of a photosynthetic reaction centre"|
|1988||Hartmut Michel||Chemistry||"For their determination of the three-dimensional structure of a photosynthetic reaction centre"|
|1988||Robert Huber||Chemistry||"For their determination of the three-dimensional structure of a photosynthetic reaction centre"|
|1997||John E. Walker||Chemistry||"For their elucidation of the enzymatic mechanism underlying the synthesis of adenosine triphosphate (ATP)"|
|2003||Roderick MacKinnon||Chemistry||"For discoveries concerning channels in cell membranes [...] for structural and mechanistic studies of ion channels"|
|2003||Peter Agre||Chemistry||"For discoveries concerning channels in cell membranes [...] for the discovery of water channels"|
|2006||Roger D. Kornberg||Chemistry||"For his studies of the molecular basis of eukaryotic transcription"|
|2009||Ada E. Yonath||Chemistry||"For studies of the structure and function of the ribosome"|
|2009||Thomas A. Steitz||Chemistry||"For studies of the structure and function of the ribosome"|
|2009||Venkatraman Ramakrishnan||Chemistry||"For studies of the structure and function of the ribosome"|
|2012||Brian Kobilka||Chemistry||"For studies of G-protein-coupled receptors"|
- Beevers–Lipson strip
- Bragg diffraction
- Bravais lattice
- Crystallographic database
- Crystallographic point groups
- Difference density map
- Dorothy Hodgkin
- Electron crystallography
- Electron diffraction
- Energy Dispersive X-Ray Diffraction
- Henderson limit
- International Year of Crystallography
- John Desmond Bernal
- John Kendrew
- Max Perutz
- Max von Laue
- Neutron diffraction
- Powder diffraction
- Rosalind Franklin
- Scherrer equation
- Small angle X-ray scattering (SAXS)
- Structure determination
- Ultrafast x-rays
- Wide angle X-ray scattering (WAXS)
- William Henry Bragg
- William Lawrence Bragg
- Kepler J (1611). Strena seu de Nive Sexangula. Frankfurt: G. Tampach. ISBN 3-321-00021-0.
- Steno N (1669). De solido intra solidum naturaliter contento dissertationis prodromus. Florentiae.
- Hessel JFC (1831). Kristallometrie oder Kristallonomie und Kristallographie. Leipzig.
- Bravais A (1850). "Mémoire sur les systèmes formés par des points distribués regulièrement sur un plan ou dans l'espace". Journal de l'Ecole Polytechnique 19: 1.
- Shafranovskii I I & Belov N V (1962). Paul Ewald, ed. "E. S. Fedorov" (PDF). 50 Years of X-Ray Diffraction (Springer): 351. ISBN 90-277-9029-9.
- Schönflies A (1891). Kristallsysteme und Kristallstruktur. Leipzig.
- Barlow W (1883). "Probable nature of the internal symmetry of crystals". Nature 29 (738): 186. Bibcode:1883Natur..29..186B. doi:10.1038/029186a0. See also Barlow, William (1883). "Probable Nature of the Internal Symmetry of Crystals". Nature 29 (739): 205. Bibcode:1883Natur..29..205B. doi:10.1038/029205a0. Sohncke, L. (1884). "Probable Nature of the Internal Symmetry of Crystals". Nature 29 (747): 383. Bibcode:1884Natur..29..383S. doi:10.1038/029383a0. Barlow, WM. (1884). "Probable Nature of the Internal Symmetry of Crystals". Nature 29 (748): 404. Bibcode:1884Natur..29..404B. doi:10.1038/029404b0.
- Einstein A (1905). "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt" [A Heuristic Model of the Creation and Transformation of Light]. Annalen der Physik (in German) 17 (6): 132. Bibcode:1905AnP...322..132E. doi:10.1002/andp.19053220607.. An English translation is available from Wikisource.
- Einstein A (1909). "Über die Entwicklung unserer Anschauungen über das Wesen und die Konstitution der Strahlung" [The Development of Our Views on the Composition and Essence of Radiation)]. Physikalische Zeitschrift (in German) 10: 817.. An English translation is available from Wikisource.
- Pais A (1982). Subtle is the Lord: The Science and the Life of Albert Einstein. Oxford University Press. ISBN 0-19-853907-X.
- Compton A (1923). "A Quantum Theory of the Scattering of X-rays by Light Elements". Phys. Rev. 21 (5): 483. Bibcode:1923PhRv...21..483C. doi:10.1103/PhysRev.21.483.
- Bragg WH (1907). "The nature of Röntgen rays". Transactions of the Royal Society of Science of Australia 31: 94.
- Bragg WH (1908). "The nature of γ- and X-rays". Nature 77 (1995): 270. Bibcode:1908Natur..77..270B. doi:10.1038/077270a0. See also Bragg, W. H. (1908). "The Nature of the γ and X-Rays". Nature 78 (2021): 271. Bibcode:1908Natur..78..271B. doi:10.1038/078271a0. Bragg, W. H. (1908). "The Nature of the γ and X-Rays". Nature 78 (2022): 293. Bibcode:1908Natur..78..293B. doi:10.1038/078293d0. Bragg, W. H. (1908). "The Nature of X-Rays". Nature 78 (2035): 665. Bibcode:1908Natur..78R.665B. doi:10.1038/078665b0.
- Bragg WH (1910). "The consequences of the corpuscular hypothesis of the γ- and X-rays, and the range of β-rays". Phil. Mag. 20 (117): 385. doi:10.1080/14786441008636917.
- Bragg WH (1912). "On the direct or indirect nature of the ionization by X-rays". Phil. Mag. 23 (136): 647. doi:10.1080/14786440408637253.
- Friedrich W; Knipping P; von Laue M (1912). "Interferenz-Erscheinungen bei Röntgenstrahlen". Sitzungsberichte der Mathematisch-Physikalischen Classe der Königlich-Bayerischen Akademie der Wissenschaften zu München 1912: 303.
- von Laue M (1914). "Concerning the detection of x-ray interferences" (PDF). Nobel Lectures, Physics. 1901–1921. Retrieved 2009-02-18.
- Dana ES; Ford WE (1932). A Textbook of Mineralogy (fourth ed.). New York: John Wiley & Sons. p. 28.
- Andre Guinier (1952). X-ray Crystallographic Technology. London: Hilger and Watts LTD. p. 271.
- Bragg WL (1912). "The Specular Reflexion of X-rays". Nature 90 (2250): 410. Bibcode:1912Natur..90..410B. doi:10.1038/090410b0.
- Bragg WL (1913). "The Diffraction of Short Electromagnetic Waves by a Crystal". Proceedings of the Cambridge Philosophical Society 17: 43.
- Bragg (1914). "Die Reflexion der Röntgenstrahlen". Jahrbuch der Radioaktivität und Elektronik 11: 350.
- Bragg (1913). "The Structure of Some Crystals as Indicated by their Diffraction of X-rays". Proc. R. Soc. Lond. A89 (610): 248–277. Bibcode:1913RSPSA..89..248B. doi:10.1098/rspa.1913.0083. JSTOR 93488.
- Bragg WL; James RW; Bosanquet CH (1921). "The Intensity of Reflexion of X-rays by Rock-Salt". Phil. Mag. 41 (243): 309. doi:10.1080/14786442108636225.
- Bragg WL; James RW; Bosanquet CH (1921). "The Intensity of Reflexion of X-rays by Rock-Salt. Part II". Phil. Mag. 42 (247): 1. doi:10.1080/14786442108633730.
- Bragg WL; James RW; Bosanquet CH (1922). "The Distribution of Electrons around the Nucleus in the Sodium and Chlorine Atoms". Phil. Mag. 44 (261): 433. doi:10.1080/14786440908565188.
- Bragg WH; Bragg WL (1913). "The structure of the diamond". Nature 91 (2283): 557. Bibcode:1913Natur..91..557B. doi:10.1038/091557a0.
- Bragg WH; Bragg WL (1913). "The structure of the diamond". Proc. R. Soc. Lond. A89 (610): 277. Bibcode:1913RSPSA..89..277B. doi:10.1098/rspa.1913.0084.
- Bragg WL (1914). "The Crystalline Structure of Copper". Phil. Mag. 28 (165): 355. doi:10.1080/14786440908635219.
- Bragg WL (1914). "The analysis of crystals by the X-ray spectrometer". Proc. R. Soc. Lond. A89 (613): 468. Bibcode:1914RSPSA..89..468B. doi:10.1098/rspa.1914.0015.
- Bragg WH (1915). "The structure of the spinel group of crystals". Phil. Mag. 30 (176): 305. doi:10.1080/14786440808635400.
- Nishikawa S (1915). "Structure of some crystals of spinel group". Proc. Tokyo Math. Phys. Soc. 8: 199.
- Vegard L (1916). "Results of Crystal Analysis". Phil. Mag. 32 (187): 65. doi:10.1080/14786441608635544.
- Aminoff G (1919). "Crystal Structure of Pyrochroite". Stockholm Geol. Fören. Förh. 41: 407.
- Aminoff G (1921). "Über die Struktur des Magnesiumhydroxids". Z. Kristallogr. 56: 505.
- Bragg WL (1920). "The crystalline structure of zinc oxide". Phil. Mag. 39 (234): 647. doi:10.1080/14786440608636079.
- Debije P, Scherrer P (1916). "Interferenz an regellos orientierten Teilchen im Röntgenlicht I". Physikalische Zeitschrift 17: 277.
- Friedrich W (1913). "Eine neue Interferenzerscheinung bei Röntgenstrahlen". Physikalische Zeitschrift 14: 317.
- Hull AW (1917). "A New Method of X-ray Crystal Analysis". Phys. Rev. 10 (6): 661. Bibcode:1917PhRv...10..661H. doi:10.1103/PhysRev.10.661.
- Bernal JD (1924). "The Structure of Graphite". Proc. R. Soc. Lond. A106 (740): 749–773. JSTOR 94336.
- Hassel O; Mack H (1924). "Über die Kristallstruktur des Graphits". Zeitschrift für Physik 25: 317. Bibcode:1924ZPhy...25..317H. doi:10.1007/BF01327534.
- Hull AW (1917). "The Crystal Structure of Iron". Phys. Rev. 9: 84. doi:10.1103/PhysRev.9.83.
- Hull AW (1917). "The Crystal Structure of Magnesium". PNAS 3 (7): 470. Bibcode:1917PNAS....3..470H. doi:10.1073/pnas.3.7.470.
- Black, Susan AW (2005). "Domesticating the Crystal: Sir Lawrence Bragg and the Aesthetics of "X-ray Analysis"". Configurations 13 (2): 257. doi:10.1353/con.2007.0014.
- "From Atoms To Patterns". Wellcome Collection. Archived from the original on September 7, 2013. Retrieved 17 October 2013.
- Wyckoff RWG; Posnjak E (1921). "The Crystal Structure of Ammonium Chloroplatinate". J. Amer. Chem. Soc. 43 (11): 2292. doi:10.1021/ja01444a002.
- Bragg WH (1921). "The structure of organic crystals". Proc. R. Soc. Lond. 34: 33. Bibcode:1921PPSL...34...33B. doi:10.1088/1478-7814/34/1/306.
- Lonsdale K (1928). "The structure of the benzene ring". Nature 122 (3082): 810. Bibcode:1928Natur.122..810L. doi:10.1038/122810c0.
- Pauling L. The Nature of the Chemical Bond (3rd ed.). Ithaca, NY: Cornell University Press. ISBN 0-8014-0333-2.
- Bragg WH (1922). "The crystalline structure of anthracene". Proc. R. Soc. Lond. 35: 167. Bibcode:1922PPSL...35..167B. doi:10.1088/1478-7814/35/1/320.
- Powell HM; Ewens RVG (1939). "The crystal structure of iron enneacarbonyl". J. Chem. Soc.: 286. doi:10.1039/jr9390000286.
- Bertrand JA, Cotton, Dollase (1963). "The Metal-Metal Bonded, Polynuclear Complex Anion in CsReCl4". J. Amer. Chem. Soc. 85 (9): 1349. doi:10.1021/ja00892a029.
- Robinson WT; Fergusson JE; Penfold BR (1963). "Configuration of Anion in CsReCl4". Proceedings of the Chemical Society of London: 116.
- Cotton FA, Curtis, Harris, Johnson, Lippard, Mague, Robinson, Wood (1964). "Mononuclear and Polynuclear Chemistry of Rhenium (III): Its Pronounced Homophilicity". Science 145 (3638): 1305–7. Bibcode:1964Sci...145.1305C. doi:10.1126/science.145.3638.1305. PMID 17802015.
- Cotton FA, Harris (1965). "The Crystal and Molecular Structure of Dipotassium Octachlorodirhenate(III) Dihydrate". Inorganic Chemistry 4 (3): 330. doi:10.1021/ic50025a015.
- Cotton FA (1965). "Metal-Metal Bonding in [Re2X8]2− Ions and Other Metal Atom Clusters". Inorganic Chemistry 4 (3): 334. doi:10.1021/ic50025a016.
- Eberhardt WH; Crawford W, Jr.; Lipscomb WN (1954). "The valence structure of the boron hydrides". J. Chem. Phys. 22 (6): 989. Bibcode:1954JChPh..22..989E. doi:10.1063/1.1740320.
- Martin TW; Derewenda ZS (1999). "The name is Bond—H bond". Nature Structural Biology 6 (5): 403–6. doi:10.1038/8195. PMID 10331860.
- Dunitz JD; Orgel LE; Rich A (1956). "The crystal structure of ferrocene". Acta Crystallographica 9 (4): 373. doi:10.1107/S0365110X56001091.
- Seiler P; Dunitz JD (1979). "A new interpretation of the disordered crystal structure of ferrocene". Acta Crystallographica B 35 (5): 1068. doi:10.1107/S0567740879005598.
- Wunderlich JA; Mellor DP (1954). "A note on the crystal structure of Zeise's salt". Acta Crystallographica 7: 130. doi:10.1107/S0365110X5400028X.
- Jarvis JAJ; Kilbourn BT; Owston PG (1970). "A re-determination of the crystal and molecular structure of Zeise's salt, KPtCl3.C2H4.H2O. A correction". Acta Crystallographica B 26 (6): 876. doi:10.1107/S056774087000328X.
- Jarvis JAJ; Kilbourn BT; Owston PG (1971). "A re-determination of the crystal and molecular structure of Zeise's salt, KPtCl3.C2H4.H2O". Acta Crystallographica B 27 (2): 366. doi:10.1107/S0567740871002231.
- Love RA; Koetzle TF; Williams GJB; Andrews LC; Bau R (1975). "Neutron diffraction study of the structure of Zeise's salt, KPtCl3(C2H4).H2O". Inorganic Chemistry 14 (11): 2653. doi:10.1021/ic50153a012.
- Brown, Dwayne (October 30, 2012). "NASA Rover's First Soil Studies Help Fingerprint Martian Minerals". NASA. Retrieved October 31, 2012.
- Westgren A; Phragmén G (1925). "X-ray Analysis of the Cu-Zn, Ag-Zn and Au-Zn Alloys". Phil. Mag. 50: 311.
- Bradley AJ; Thewlis J (1926). "The structure of γ-Brass". Proc. R. Soc. Lond. 112 (762): 678. Bibcode:1926RSPSA.112..678B. doi:10.1098/rspa.1926.0134.
- Hume-Rothery W (1926). "Researches on the Nature, Properties and Conditions of Formation of Intermetallic Compounds (with special Reference to certain Compounds of Tin)". Journal of the Institute of Metals 35: 295.
- Bradley AJ; Gregory CH (1927). "The Structure of certain Ternary Alloys". Nature 120 (3027): 678. Bibcode:1927Natur.120..678.. doi:10.1038/120678a0.
- Westgren A (1932). "Zur Chemie der Legierungen". Angewandte Chemie 45 (2): 33. doi:10.1002/ange.19320450202.
- Bernal JD (1935). "The Electron Theory of Metals". Annual Reports on the Progress of Chemistry 32: 181.
- Pauling L (1923). "The Crystal Structure of Magnesium Stannide". J. Amer. Chem. Soc. 45 (12): 2777. doi:10.1021/ja01665a001.
- Pauling L (1929). "The Principles Determining the Structure of Complex Ionic Crystals". J. Amer. Chem. Soc. 51 (4): 1010. doi:10.1021/ja01379a006.
- Dickinson RG; Raymond AL (1923). "The Crystal Structure of Hexamethylene-Tetramine". J. Amer. Chem. Soc. 45: 22. doi:10.1021/ja01654a003.
- Müller A (1923). "The X-ray Investigation of Fatty Acids". Journal of the Chemical Society (London) 123: 2043. doi:10.1039/ct9232302043.
- Saville WB; Shearer G (1925). "An X-ray Investigation of Saturated Aliphatic Ketones". Journal of the Chemical Society (London) 127: 591. doi:10.1039/ct9252700591.
- Bragg WH (1925). "The Investigation of thin Films by Means of X-rays". Nature 115 (2886): 266. Bibcode:1925Natur.115..266B. doi:10.1038/115266a0.
- de Broglie M, Trillat JJ (1925). "Sur l'interprétation physique des spectres X d'acides gras". Comptes rendus hebdomadaires des séances de l'Académie des sciences 180: 1485.
- Trillat JJ (1926). "Rayons X et Composeés organiques à longe chaine. Recherches spectrographiques sue leurs structures et leurs orientations". Annales de physique 6: 5.
- Caspari WA (1928). "Crystallography of the Aliphatic Dicarboxylic Acids". Journal of the Chemical Society (London) ?: 3235. doi:10.1039/jr9280003235.
- Müller A (1928). "X-ray Investigation of Long Chain Compounds (n. Hydrocarbons)". Proc. R. Soc. Lond. 120 (785): 437. Bibcode:1928RSPSA.120..437M. doi:10.1098/rspa.1928.0158.
- Piper SH (1929). "Some Examples of Information Obtainable from the long Spacings of Fatty Acids". Transactions of the Faraday Society 25: 348. doi:10.1039/tf9292500348.
- Müller A (1929). "The Connection between the Zig-Zag Structure of the Hydrocarbon Chain and the Alternation in the Properties of Odd and Even Numbered Chain Compounds". Proc. R. Soc. Lond. 124 (794): 317. Bibcode:1929RSPSA.124..317M. doi:10.1098/rspa.1929.0117.
- Robertson JM (1936). "An X-ray Study of the Phthalocyanines, Part II". Journal of the Chemical Society: 1195.
- Crowfoot Hodgkin D (1935). "X-ray Single Crystal Photographs of Insulin". Nature 135 (3415): 591. Bibcode:1935Natur.135..591C. doi:10.1038/135591a0.
- Kendrew J. C., et al. (1958-03-08). "A Three-Dimensional Model of the Myoglobin Molecule Obtained by X-Ray Analysis". Nature 181 (4610): 662–6. Bibcode:1958Natur.181..662K. doi:10.1038/181662a0. PMID 13517261.
- "Table of entries in the PDB, arranged by experimental method".
- "PDB Statistics". RCSB Protein Data Bank. Retrieved 2010-02-09.
- Scapin G (2006). "Structural biology and drug discovery". Curr. Pharm. Des. 12 (17): 2087–97. doi:10.2174/138161206777585201. PMID 16796557.
- Lundstrom K (2006). "Structural genomics for membrane proteins". Cell. Mol. Life Sci. 63 (22): 2597–607. doi:10.1007/s00018-006-6252-y. PMID 17013556.
- Lundstrom K (2004). "Structural genomics on membrane proteins: mini review". Comb. Chem. High Throughput Screen. 7 (5): 431–9. doi:10.2174/1386207043328634. PMID 15320710.
- "Cryogenic (<20 K) helium cooling mitigates radiation damage to protein crystals". Acta Crystallographica Section D (2007) 63 (4): 486–492.
- J. Claydon, N. Greeves, S. Warren: Organic Chemistry 2nd edition page 45; Oxford University Press 2012
- Greninger AB (1935). "A back-reflection Laue method for determining crystal orientation". Zeitschrift für Kristallographie 91: 424.
- An analogous diffraction pattern may be observed by shining a laser pointer on a compact disc or DVD; the periodic spacing of the CD tracks corresponds to the periodic arrangement of atoms in a crystal.
- Miao, J., Charalambous, P., Kirz, J., & Sayre, D. (1999). "Extending the methodology of X-ray crystallography to allow imaging of micrometre-sized non-crystalline specimens." Nature, 400(6742), 342.
- Harp, JM; Timm, DE; Bunick, GJ (1998). "Macromolecular crystal annealing: overcoming increased mosaicity associated with cryocrystallography". Acta crystallographica D 54 (Pt 4): 622–8. doi:10.1107/S0907444997019008. PMID 9761858.
- Harp, JM; Hanson, BL; Timm, DE; Bunick, GJ (1999). "Macromolecular crystal annealing: evaluation of techniques and variables". Acta Crystallographica D 55 (Pt 7): 1329–34. doi:10.1107/S0907444999005442. PMID 10393299.
- Hanson, BL; Harp, JM; Bunick, GJ (2003). "The well-tempered protein crystal: annealing macromolecular crystals". Methods in enzymology. Methods in Enzymology 368: 217–35. doi:10.1016/S0076-6879(03)68012-2. ISBN 978-0-12-182271-2. PMID 14674276.
- Geerlof A, et al. (2006). "The impact of protein characterization in structural proteomics". Acta Crystallographica D 62 (Pt 10): 1125–36. doi:10.1107/S0907444906030307. PMID 17001090.
- Chernov AA (2003). "Protein crystals and their growth". J. Struct. Biol. 142 (1): 3–21. doi:10.1016/S1047-8477(03)00034-0. PMID 12718915.
- Rupp B; Wang J (2004). "Predictive models for protein crystallization". Methods 34 (3): 390–407. doi:10.1016/j.ymeth.2004.03.031. PMID 15325656.
- Chayen NE (2005). "Methods for separating nucleation and growth in protein crystallization". Prog. Biophys. Mol. Biol. 88 (3): 329–37. doi:10.1016/j.pbiomolbio.2004.07.007. PMID 15652248.
- Stock D; Perisic O; Lowe J (2005). "Robotic nanolitre protein crystallisation at the MRC Laboratory of Molecular Biology". Prog Biophys Mol Biol 88 (3): 311–27. doi:10.1016/j.pbiomolbio.2004.07.009. PMID 15652247.
- Jeruzalmi D (2006). "First analysis of macromolecular crystals: biochemistry and x-ray diffraction". Methods Mol. Biol. 364: 43–62. doi:10.1385/1-59745-266-1:43. ISBN 1-59745-266-1. PMID 17172760.
- Helliwell JR (2005). "Protein crystal perfection and its application". Acta Crystallographica D 61 (Pt 6): 793–8. doi:10.1107/S0907444905001368. PMID 15930642.
- Garman, E. F.; Schneider, T. R. (1997). "Macromolecular Cryocrystallography". Journal of Applied Crystallography 30 (3): 211. doi:10.1107/S0021889897002677.
- Schlichting, I; Miao, J (2012). "Emerging opportunities in structural biology with X-ray free-electron lasers". Current Opinion in Structural Biology 22 (5): 613–26. doi:10.1016/j.sbi.2012.07.015. PMC 3495068. PMID 22922042.
- Neutze, R; Wouts, R; Van Der Spoel, D; Weckert, E; Hajdu, J (2000). "Potential for biomolecular imaging with femtosecond X-ray pulses". Nature 406 (6797): 752–7. Bibcode:2000Natur.406..752N. doi:10.1038/35021099. PMID 10963603.
- Liu, W; Wacker, D; Gati, C; Han, G. W.; James, D; Wang, D; Nelson, G; Weierstall, U; Katritch, V; Barty, A; Zatsepin, N. A.; Li, D; Messerschmidt, M; Boutet, S; Williams, G. J.; Koglin, J. E.; Seibert, M. M.; Wang, C; Shah, S. T.; Basu, S; Fromme, R; Kupitz, C; Rendek, K. N.; Grotjohann, I; Fromme, P; Kirian, R. A.; Beyerlein, K. R.; White, T. A.; Chapman, H. N.; et al. (2013). "Serial femtosecond crystallography of G protein-coupled receptors". Science 342 (6165): 1521–4. Bibcode:2013Sci...342.1521L. doi:10.1126/science.1244142. PMC 3902108. PMID 24357322.
- Ravelli RB; Garman EF (2006). "Radiation damage in macromolecular cryocrystallography". Curr. Opin. Struct. Biol. 16 (5): 624–9. doi:10.1016/j.sbi.2006.08.001. PMID 16938450.
- Powell HR (1999). "The Rossmann Fourier autoindexing algorithm in MOSFLM". Acta Crystallographica D 55 (Pt 10): 1690–5. doi:10.1107/S0907444999009506. PMID 10531518.
- Hauptman H (1997). "Phasing methods for protein crystallography". Curr. Opin. Struct. Biol. 7 (5): 672–80. doi:10.1016/S0959-440X(97)80077-2. PMID 9345626.
- Usón I; Sheldrick GM (1999). "Advances in direct methods for protein crystallography". Curr. Opin. Struct. Biol. 9 (5): 643–8. doi:10.1016/S0959-440X(99)00020-2. PMID 10508770.
- Taylor G (2003). "The phase problem". Acta Crystallographica D 59 (11): 1881. doi:10.1107/S0907444903017815.
- Ealick SE (2000). "Advances in multiple wavelength anomalous diffraction crystallography". Current Opinion in Chemical Biology 4 (5): 495–9. doi:10.1016/S1367-5931(00)00122-8. PMID 11006535.
- Patterson AL (1935). "A Direct Method for the Determination of the Components of Interatomic Distances in Crystals". Zeitschrift für Kristallographie 90: 517. doi:10.1524/zkri.19126.96.36.1997.
- "The Nobel Prize in Physics 1914". Nobel Foundation. Retrieved 2008-10-09.
- "The Nobel Prize in Physics 1915". Nobel Foundation. Retrieved 2008-10-09.
- "The Nobel Prize in Chemistry 1962". Nobelprize.org. Retrieved 2008-10-06.
- "The Nobel Prize in Physiology or Medicine 1962". Nobel Foundation. Retrieved 2007-07-28.
- "The Nobel Prize in Chemistry 1964". Nobelprize.org. Retrieved 2008-10-06.
- "The Nobel Prize in Chemistry 1972". Nobelprize.org. Retrieved 2008-10-06.
- "The Nobel Prize in Chemistry 1976". Nobelprize.org. Retrieved 2008-10-06.
- "The Nobel Prize in Chemistry 1985". Nobelprize.org. Retrieved 2008-10-06.
- "The Nobel Prize in Chemistry 1988". Nobelprize.org. Retrieved 2008-10-06.
- "The Nobel Prize in Chemistry 1997". Nobelprize.org. Retrieved 2008-10-06.
- "The Nobel Prize in Chemistry 2003". Nobelprize.org. Retrieved 2008-10-06.
- "The Nobel Prize in Chemistry 2006". Nobelprize.org. Retrieved 2008-10-06.
- "The Nobel Prize in Chemistry 2009". Nobelprize.org. Retrieved 2009-10-07.
- "The Nobel Prize in Chemistry 2012". Nobelprize.org. Retrieved 2012-10-13.
International Tables for Crystallography
- Theo Hahn, ed. (2002). International Tables for Crystallography. Volume A, Space-group Symmetry (5th ed.). Dordrecht: Kluwer Academic Publishers, for the International Union of Crystallography. ISBN 0-7923-6590-9.
- Michael G. Rossmann; Eddy Arnold, eds. (2001). International Tables for Crystallography. Volume F, Crystallography of biological molecules. Dordrecht: Kluwer Academic Publishers, for the International Union of Crystallography. ISBN 0-7923-6857-6.
- Theo Hahn, ed. (1996). International Tables for Crystallography. Brief Teaching Edition of Volume A, Space-group Symmetry (4th ed.). Dordrecht: Kluwer Academic Publishers, for the International Union of Crystallography. ISBN 0-7923-4252-6.
Bound collections of articles
- Charles W. Carter; Robert M. Sweet., eds. (1997). Macromolecular Crystallography, Part A (Methods in Enzymology, v. 276). San Diego: Academic Press. ISBN 0-12-182177-3.
- Charles W. Carter Jr.; Robert M. Sweet., eds. (1997). Macromolecular Crystallography, Part B (Methods in Enzymology, v. 277). San Diego: Academic Press. ISBN 0-12-182178-1.
- A. Ducruix; R. Giegé, eds. (1999). Crystallization of Nucleic Acids and Proteins: A Practical Approach (2nd ed.). Oxford: Oxford University Press. ISBN 0-19-963678-8.
- B.E. Warren (1969). X-ray Diffraction. New York. ISBN 0-486-66317-5.
- Blow D (2002). Outline of Crystallography for Biologists. Oxford: Oxford University Press. ISBN 0-19-851051-9.
- Burns G.; Glazer A M (1990). Space Groups for Scientists and Engineers (2nd ed.). Boston: Academic Press, Inc. ISBN 0-12-145761-3.
- Clegg W (1998). Crystal Structure Determination (Oxford Chemistry Primer). Oxford: Oxford University Press. ISBN 0-19-855901-1.
- Cullity B.D. (1978). Elements of X-Ray Diffraction (2nd ed.). Reading, Massachusetts: Addison-Wesley Publishing Company. ISBN 0-534-55396-6.
- Drenth J (1999). Principles of Protein X-Ray Crystallography. New York: Springer-Verlag. ISBN 0-387-98587-5.
- Giacovazzo C (1992). Fundamentals of Crystallography. Oxford: Oxford University Press. ISBN 0-19-855578-4.
- Glusker JP; Lewis M; Rossi M (1994). Crystal Structure Analysis for Chemists and Biologists. New York: VCH Publishers. ISBN 0-471-18543-4.
- Massa W (2004). Crystal Structure Determination. Berlin: Springer. ISBN 3-540-20644-2.
- McPherson A (1999). Crystallization of Biological Macromolecules. Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press. ISBN 0-87969-617-6.
- McPherson A (2003). Introduction to Macromolecular Crystallography. John Wiley & Sons. ISBN 0-471-25122-4.
- McRee DE (1993). Practical Protein Crystallography. San Diego: Academic Press. ISBN 0-12-486050-8.
- O'Keeffe M; Hyde B G (1996). Crystal Structures; I. Patterns and Symmetry. Washington, DC: Mineralogical Society of America, Monograph Series. ISBN 0-939950-40-5.
- Rhodes G (2000). Crystallography Made Crystal Clear. San Diego: Academic Press. ISBN 0-12-587072-8., PDF copy of select chapters
- Rupp B (2009). Biomolecular Crystallography: Principles, Practice and Application to Structural Biology. New York: Garland Science. ISBN 0-8153-4081-8.
- Zachariasen WH (1945). Theory of X-ray Diffraction in Crystals. New York: Dover Publications. LCCN 67026967.
Applied computational data analysis
- Young, R.A., ed. (1993). The Rietveld Method. Oxford: Oxford University Press & International Union of Crystallography. ISBN 0-19-855577-6.
- Bijvoet JM, Burgers WG, Hägg G, eds. (1969). Early Papers on Diffraction of X-rays by Crystals I. Utrecht: published for the International Union of Crystallography by A. Oosthoek's Uitgeversmaatschappij N.V.
- Bijvoet JM; Burgers WG; Hägg G, eds. (1972). Early Papers on Diffraction of X-rays by Crystals II. Utrecht: published for the International Union of Crystallography by A. Oosthoek's Uitgeversmaatschappij N.V.
- Bragg W L; Phillips D C & Lipson H (1992). The Development of X-ray Analysis. New York: Dover. ISBN 0-486-67316-2.
- Ewald, PP, and numerous crystallographers, eds. (1962). Fifty Years of X-ray Diffraction. Utrecht: published for the International Union of Crystallography by A. Oosthoek's Uitgeversmaatschappij N.V. doi:10.1007/978-1-4615-9961-6. ISBN 978-1-4615-9963-0.
- Ewald, P. P., editor 50 Years of X-Ray Diffraction (Reprinted in pdf format for the IUCr XVIII Congress, Glasgow, Scotland, International Union of Crystallography).
- Friedrich W (1922). "Die Geschichte der Auffindung der Röntgenstrahlinterferenzen". Die Naturwissenschaften 10 (16): 363. Bibcode:1922NW.....10..363F. doi:10.1007/BF01565289.
- Lonsdale, K (1949). Crystals and X-rays. New York: D. van Nostrand.
- "The Structures of Life". U.S. Department of Health and Human Services. 2007.
- Wikibooks has a book on the topic of: X-ray Crystallography
- Learning Crystallography
- Simple, non technical introduction[dead link]
- The Crystallography Collection, video series from the Royal Institution
- "Small Molecule Crystalization" (PDF) at Illinois Institute of Technology website
- International Union of Crystallography
- Crystallography 101
- Interactive structure factor tutorial, demonstrating properties of the diffraction pattern of a 2D crystal.
- Picturebook of Fourier Transforms, illustrating the relationship between crystal and diffraction pattern in 2D.
- Lecture notes on X-ray crystallography and structure determination
- Online lecture on Modern X-ray Scattering Methods for Nanoscale Materials Analysis by Richard J. Matyi
- Interactive Crystallography Timeline from the Royal Institution
- Crystallography Open Database (COD)
- Protein Data Bank[dead link] (PDB)
- Nucleic Acid Databank (NDB)
- Cambridge Structural Database (CSD)
- Inorganic Crystal Structure Database (ICSD)
- Biological Macromolecule Crystallization Database[dead link] (BMCD)
- Proteopedia – the collaborative, 3D encyclopedia of proteins and other molecules
- RNABase[dead link]
- HIC-Up database of PDB ligands
- Structural Classification of Proteins database
- CATH Protein Structure Classification
- List of transmembrane proteins with known 3D structure
- Orientations of Proteins in Membranes database
- MolProbity structural validation suite
- NQ-Flipper (check for unfavorable rotamers of Asn and Gln residues)
- DALI server (identifies proteins similar to a given protein) | https://en.wikipedia.org/wiki/X-ray_diffraction |
4.09375 | The Chain Rule in Leibniz Notation
We stated the chain rule first in Lagrange notation. Since Leibniz notation lets us be a little more precise about what we're differentiating and what we're differentiating with respect to, we need to also be comfortable with the chain rule in Leibniz notation.
Suppose y is a function of x:
y = g(x)
and z is a function of y:
z = f(y)
Then z is a function of x:
z = f(y) = f(g(x))
Once again, we have an outside function and an inside function. The chain rule in Lagrange notation states that
(f(g(x)))' = f ' (g(x)) · g ' (x).
In Leibniz notation: since
- dz/dx and (f(g(x)))' both mean "the derivative of z with respect to x,"
- dz/dy and f ' (g(x)) = f ' (y) both mean "the derivative of z with respect to y," and
- dy/dx and g ' (x) both mean "the derivative of y with respect to x,"
the two statements of the chain rule do mean the same thing.
We can remember the chain rule in Leibniz notation because it looks like a nice fraction equation where the dy terms cancel:
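In symbols, that fraction-like form is

dz/dx = (dz/dy) · (dy/dx),

where the dy in the numerator of one factor appears to cancel the dy in the denominator of the other.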
This may or may not be what's actually going on, but it works for our purposes and it's a great memory aid.
There are three steps to apply the chain rule in this form (worked through in the example after the list):
- determine what y is (this is the same step as determining the inside and outside functions)
- apply the chain rule formula
- put everything in terms of the correct variable (for example, writing y in terms of x)
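For instance (an example of ours, not from the passage above), take z = sin(x²). Then y = x² is the inside function and z = sin y is the outside function, so

dz/dx = (dz/dy) · (dy/dx) = cos(y) · 2x = 2x cos(x²),

where the last step writes y back in terms of x.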
The chain rule will be especially useful when we discuss related rates, where there will be problems with three different variables that all depend on each other in funny ways.
We can also use the chain rule with different letters, as long as we put the letters in the correct places. If we have
y = g(x) and z = f(y) = f(g(x)), the chain rule says
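that is, in Leibniz form,

dz/dx = (dz/dy) · (dy/dx).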
The inside function, y, is the one that "cancels out": it appears once in a numerator and once in a denominator. The innermost variable, x, goes only in the denominators, and the outermost variable, z, goes only in the numerators.
If we switch the letters, we need to make sure they go in the appropriate places.
The important thing is that the inside function needs to be the term that cancels out:
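For example, with u = h(t) as the inside function and w = k(u) as the outside function (letters chosen here purely for illustration),

dw/dt = (dw/du) · (du/dt),

and it is the inside variable u that shows up once in a numerator and once in a denominator.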
Now that we know how to write the chain rule with different letters, we can use it to find derivatives. | http://www.shmoop.com/computing-derivatives/chain-rule-leibniz.html |
4.125 | ToUpper converts all characters to uppercase characters. It causes a copy to be made of the VB.NET String, which is returned. We look at ToUpper and its behavior on non-lowercase characters. Example. This simple console program shows the result of ToUpper on the input String "abc123". Notice how "abc" are the only characters that were changed. The non-lowercase letters are not changed.
Also: Characters that are already uppercase are not changed by the ToUpper function.
Based on: .NET 4.6
VB.NET program that calls ToUpper on String
Dim value1 As String = "abc123"
' ToUpper returns a new, uppercased copy; value1 itself is unchanged.
Dim upper1 As String = value1.ToUpper()
Console.WriteLine(upper1) ' Output: ABC123
Uppercased. How can you determine if a String is already uppercase? This is possible by using ToUpper and then comparing the results of that against the original String. If they are equal, the string was already uppercased. However, this is not the most efficient way. A faster way uses a For-loop and then the Char.IsLower function. If a Char is lowercase, the String is not already uppercase. It would return False early at that point.
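A minimal sketch of that loop-based check (the Function name IsAllUpper is an illustrative choice of ours, not part of .NET):
VB.NET Function that returns whether a String contains no lowercase characters
Function IsAllUpper(ByVal value As String) As Boolean
    ' Return False as soon as any lowercase character is found.
    For Each c As Char In value
        If Char.IsLower(c) Then
            Return False
        End If
    Next
    Return True
End Function
For example, IsAllUpper("ABC123") returns True, while IsAllUpper("abc123") returns False as soon as it reaches "a".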
Summary. We explored some aspects of the ToUpper function. This function changes no characters except lowercase characters. Digits and uppercase letters (as well as punctuation and spaces) are left the same.
Finally: We noted how to test Strings with a For-loop to see if they are uppercase. | http://www.dotnetperls.com/toupper-vbnet |
4.09375 | Large Electron–Positron Collider
|Intersecting Storage Rings||CERN, 1971–1984|
|Super Proton Synchrotron||CERN, 1981–1984|
|ISABELLE||BNL, cancelled in 1983|
|Relativistic Heavy Ion Collider||BNL, 2000–present|
|Superconducting Super Collider||Cancelled in 1993|
|Large Hadron Collider||CERN, 2009–present|
|Very Large Hadron Collider||Theoretical|
The Large Electron–Positron Collider (LEP) was one of the largest particle accelerators ever constructed.
It was built at CERN, a multi-national centre for research in nuclear and particle physics near Geneva, Switzerland. LEP collided electrons with positrons at energies that reached 209 GeV. It was a circular collider with a circumference of 27 kilometres built in a tunnel roughly 100 m (300 ft) underground and passing through Switzerland and France. LEP was used from 1989 until 2000. Around 2001 it was dismantled to make way for the LHC, which re-used the LEP tunnel. To date, LEP is the most powerful accelerator of leptons ever built.
LEP was a circular lepton collider – the most powerful such ever built. For context, modern colliders can be generally categorized based on their shape (circular or linear) and on what types of particles they accelerate and collide (leptons or hadrons). Leptons are point particles and are relatively light. Because they are point particles, their collisions are clean and amenable to precise measurements; however, because they are light, the collisions cannot reach the same energy that can be achieved with heavier particles. Hadrons are composite particles (composed of quarks) and are relatively heavy; protons, for example, have a mass 2000 times greater than electrons. Because of their higher mass, they can be accelerated to much higher energies, which is the key to directly observing new particles or interactions that are not predicted by currently accepted theories. However, hadron collisions are very messy (there are often lots of unrelated tracks, for example, and it is not straightforward to determine the energy of the collisions), and therefore more challenging to analyze and less amenable to precision measurements.
The shape of the collider is also important. High energy physics colliders collect particles into bunches, and then collide the bunches together. However, only a very tiny fraction of particles in each bunch actually collide. In circular colliders, these bunches travel around a roughly circular shape in opposite directions and therefore can be collided over and over. This enables a high rate of collisions and facilitates collection of a large amount of data, which is important for precision measurements or for observing very rare decays. However, the energy of the bunches is limited due to losses from synchrotron radiation. In linear colliders, particles move in a straight line and therefore do not suffer from synchrotron radiation, but bunches cannot be re-used and it is therefore more challenging to collect large amounts of data.
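The scale of those losses is set by the energy radiated per turn in a ring of bending radius ρ (a standard synchrotron-radiation result, quoted here only as a scaling law):

ΔE per turn ∝ γ⁴/ρ ∝ E⁴/(m⁴ρ)

so at a given beam energy an electron, roughly 1836 times lighter than a proton, radiates on the order of 1836⁴ ≈ 10¹³ times more energy per turn — which is why circular lepton colliders run out of headroom long before comparable hadron machines.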
As a circular lepton collider, LEP was well suited for precision measurements of the electroweak interaction at energies that were not previously achievable.
When the LEP collider started operation in August 1989 it accelerated the electrons and positrons to a total energy of 45 GeV each to enable production of the Z boson, which has a mass of 91 GeV. The accelerator was upgraded later to enable production of a pair of W bosons, each having a mass of 80 GeV. LEP collider energy eventually topped at 209 GeV at the end of its run in 2000. At a Lorentz factor (γ = particle energy/rest mass = 104.5 GeV/0.511 MeV) of over 200,000, LEP still holds the particle accelerator speed record, extremely close to the limiting speed of light. At the end of 2000, LEP was shut down and then dismantled in order to make room in the tunnel for the construction of the Large Hadron Collider (LHC).
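That figure follows directly from the quoted numbers: γ = E/(mec²) = 104.5 GeV / 0.511 MeV ≈ 2.0 × 10⁵, corresponding to 1 − v/c ≈ 1/(2γ²) ≈ 1.2 × 10⁻¹¹.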
The Super Proton Synchrotron (an older ring collider) was used to accelerate electrons and positrons to nearly the speed of light. These are then injected into the ring. As in all ring colliders, the LEP's ring consists of many magnets which force the charged particles into a circular trajectory (so that they stay inside the ring), RF accelerators which accelerate the particles with radio frequency waves, and quadrupoles that focus the particle beam (i.e. keep the particles together). The function of the accelerators is to increase the particles' energies so that heavy particles can be created when the particles collide. When the particles are accelerated to maximum energy (and focused to so-called bunches), an electron and a positron bunch is made to collide with each other at one of the collision points of the detector. When an electron and a positron collide, they annihilate to a virtual particle, either a photon or a Z boson. The virtual particle almost immediately decays into other elementary particles, which are then detected by huge particle detectors.
The Large Electron–Positron Collider had four detectors, built around the four collision points within underground halls. Each was the size of a small house and was capable of registering the particles by their energy, momentum and charge, thus allowing physicists to infer the particle reaction that had happened and the elementary particles involved. By performing statistical analysis of this data, knowledge about elementary particle physics is gained. The four detectors of LEP were called Aleph, Delphi, Opal, and L3. They were built differently to allow for complementary experiments.
ALEPH stands for Apparatus for LEP PHysics at CERN. The detector determined the mass of the W-boson and Z-boson to within one part in a thousand. The number of families of particles with light neutrinos was determined to be 2.982 ± 0.013, which is consistent with the standard model value of 3. The running of the quantum chromodynamics (QCD) coupling constant was measured at various energies and found to run in accordance with perturbative calculations in QCD.
DELPHI stands for DEtector with Lepton, Photon and Hadron Identification.
OPAL stands for Omni-Purpose Apparatus for LEP. The name of the experiment was a play on words, as some of the founding members of the scientific collaboration which first proposed the design had previously worked on the JADE detector at DESY in Hamburg. OPAL was a general-purpose detector designed to collect a broad range of data. Its data were used to make high precision measurements of the Z boson lineshape, perform detailed tests of the Standard Model, and place limits on new physics. The detector was dismantled in 2000 to make way for LHC equipment. The lead glass blocks from the OPAL barrel electromagnetic calorimeter are currently being re-used in the large-angle photon veto detectors at the NA62 experiment at CERN.
The results of the LEP experiments allowed precise values of many quantities of the Standard Model—most importantly the mass of the Z boson and the W boson (which were discovered in 1983 at an earlier CERN collider [the Intersecting Storage Rings project]) to be obtained—and so confirm the Model and put it on a solid basis of empirical data.
A not quite discovery of the Higgs boson
Near the end of the scheduled run time, data suggested tantalizing but inconclusive hints that the Higgs particle of a mass around 115 GeV might have been observed, a sort of Holy Grail of current high-energy physics. The run-time was extended for a few months, to no avail. The strength of the signal remained at 1.7 standard deviations which translates to the 91% confidence level, much less than the confidence expected by particle physicists to claim a discovery, and was at the extreme upper edge of the detection range of the experiments with the collected LEP data. There was a proposal to extend the LEP operation by another year in order to seek confirmation, which would have delayed the start of the LHC. However, the decision was made to shut down LEP and progress with the LHC as planned.
For years, this observation was the only hint of a Higgs Boson; subsequent experiments until 2010 at the Tevatron had not been sensitive enough to confirm or refute these hints. Beginning in July 2012, however, the ATLAS and CMS experiments at LHC presented evidence of a Higgs particle around 125 GeV, and strongly excluded the 115 GeV region.
- http://sl-div.web.cern.ch/sl-div/history/lep_doc.html CERN 1990 historical reference with much information on the design issues and details of LEP.
- "Welcome to ALEPH". Retrieved 2011-09-14.
- "The OPAL Experiment at LEP 1989–2000". Retrieved 2011-09-14.
- "L3 Homepage". Retrieved 2011-09-14.
- CDF Collaboration, D0 Collaboration, Tevatron New Physics, Higgs Working Group (2010-06-26). "Combined CDF and D0 Upper Limits on Standard Model Higgs-Boson Production with up to 6.7 fb−1 of Data". arXiv:1007.4587 [hep-ex].
- LEP Working Groups
- The LEP Collider from Design to Approval and Commissioning excerpts from the John Adams memorial lecture delivered at CERN on 26 November 1990
- A short but good (though slightly outdated) overview (with nice photographs) about LEP and related subjects can be found in this online booklet of the British Particle Physics and Astronomy Research Council. | https://en.wikipedia.org/wiki/L3_(CERN) |
4.09375 | Animals could be used to predict earthquakes because certain species are able to sense chemical changes in groundwater immediately before seismic activity, a study suggests.
Experts began investigating the theory after a colony of toads was observed abandoning a pond in L’Aquila, Italy, in 2009, days before the devastating earthquake.
They believe that stressed rocks in the Earth’s crust release charged particles before an earthquake, which react with groundwater.
Animals living in or near groundwater, such as toads, are highly sensitive to such changes and may therefore notice signs of an impending quake.
The researchers, led by Friedemann Freund from Nasa and Rachel Grant from the UK’s Open University, hope their findings will inspire biologists and geologists to work together in improving earthquake prediction.
Although not the first example of abnormal animal activity observed prior to earthquakes, the case of the L'Aquila toads was different in that they were being studied in detail at the time.
Miss Grant, a biologist, was monitoring the toad colony as part of her PhD project in the days before the Italian earthquake disaster.
"It was very dramatic. It went from 96 toads to almost zero over three days. After that, I was contacted by Nasa," she told the BBC.
Scientists at the US space agency had been studying the chemical changes that occur when rocks are put under extreme stress and questioned whether they were linked to the toads’ departure.
Lab tests have since suggested that changes in the Earth’s crust could have directly affected the chemistry of the pond that the toads were living and breeding in at the time.
Dr Freund, a Nasa geophysicist, said that the charged particles, released from stressed rocks, react with the air when they reach the Earth’s surface, converting air molecules into charged particles known as ions.
"Positive airborne ions are known in the medical community to cause headaches and nausea in humans and to increase the level of serotonin, a stress hormone, in the blood of animals," said Dr Freund.
They can also react with water, turning it into hydrogen peroxide, the scientist added.
This chemical chain of events could affect the organic material dissolved in the pond water, turning harmless organic material into substances that are toxic to aquatic animals.
| http://www.redicecreations.com/article.php?id=17810 |
4.09375 | Wildlife conservation is the practice of protecting wild plant and animal species and their habitats. The goal of wildlife conservation is to ensure that nature will be around for future generations to enjoy and also to recognize the importance of wildlife and wilderness for humans and other species alike. Many nations have government agencies and NGO's dedicated to wildlife conservation, which help to implement policies designed to protect wildlife. Numerous independent non-profit organizations also promote various wildlife conservation causes.
According to the National Wildlife Federation, wildlife in the United States gets a majority of its funding through appropriations from the federal budget, annual federal and state grants, and financial efforts from programs such as the Conservation Reserve Program, Wetlands Reserve Program and Wildlife Habitat Incentive Program. Furthermore, a substantial amount of funding comes from the state through the sale of hunting/fishing licenses, game tags, stamps, and excise taxes on the purchase of hunting equipment and ammunition, which together collect around $200 million annually.
Wildlife conservation has become an increasingly important practice due to the negative effects of human activity on wildlife. An endangered species is defined as a population of a living species that is in danger of becoming extinct, either because it has very few individuals left or because it is threatened by changing environmental or predation parameters.
Major dangers to wildlife
Fewer natural wildlife habitat areas remain each year. Moreover, the habitat that remains has often been degraded to bear little resemblance to the wild areas which existed in the past. Habitat loss—due to destruction, fragmentation and degradation of habitat—is the primary threat to the survival of wildlife in the United States. When an ecosystem has been dramatically changed by human activities, the habitat that remains may be so degraded that it no longer supports native wildlife.
- Climate change: Global warming is making hot days hotter, rainfall and flooding heavier, hurricanes stronger and droughts more severe. This intensification of weather and climate extremes will be the most visible impact of global warming in our everyday lives. It is also causing dangerous changes to the landscape of our world, adding stress to wildlife species and their habitat. Since many types of plants and animals have specific habitat requirements, climate change could cause disastrous loss of wildlife species. A slight drop or rise in average rainfall will translate into large seasonal changes. Hibernating mammals, reptiles, amphibians and insects are harmed and disturbed. Plants and wildlife are sensitive to moisture change, so they will be harmed by any change in moisture level. Natural phenomena such as floods, earthquakes, volcanoes, lightning and forest fires add further stress.
- Unregulated hunting and poaching: Unregulated hunting and poaching pose a major threat to wildlife. Mismanagement by forest departments and forest guards aggravates this problem.
- Pollution: Pollutants released into the environment are ingested by a wide variety of organisms. Pesticides and toxic chemicals are widely used, making the environment toxic to certain plants, insects, and rodents.
- Perhaps the largest threat is the growing indifference of the public to wildlife, conservation and environmental issues in general. Over-exploitation of resources, i.e., exploitation of wild populations for food, has resulted in population crashes (over-fishing and over-grazing, for example).
- Over exploitation is the over use of wildlife and plant species by people for food, clothing, pets, medicine, sport and many other purposes. People have always depended on wildlife and plants for food, clothing, medicine, shelter and many other needs. But today we are taking more than the natural world can supply. The danger is that if we take too many individuals of a species from their natural environment, the species may no longer be able to survive. The loss of one species can affect many other species in an ecosystem. The hunting, trapping, collecting and fishing of wildlife at unsustainable levels is not something new. The passenger pigeon was hunted to extinction, early in the last century, and over-hunting nearly caused the extinction of the American bison and several species of whales.
Population: The increasing human population is the most significant underlying threat to wildlife. More people on the globe means more consumption of food, water and fuel, and therefore more waste is generated. Every major threat to wildlife seen above is directly related to the growing human population: the smaller the population, the less the disturbance to wildlife.
Today, the Endangered Species Act protects some U.S. species that were in danger from over-exploitation, and the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) works to prevent the global trade of wildlife. But there are many species that are not protected from being illegally traded or over-harvested.
Wildlife conservation as a government involvement
In 1972, the Government of India enacted a law called the Wildlife Conservation Act. Soon after enactment, a trend emerged whereby policymakers enacted regulations on conservation. State and non-state actors began to follow a detailed "framework" to work toward successful conservation. The World Conservation Strategy was developed in 1980 by the "International Union for Conservation of Nature and Natural Resources" (IUCN) with advice, cooperation and financial assistance of the United Nations Environment Programme (UNEP) and the World Wildlife Fund and in collaboration with the Food and Agriculture Organization of the United Nations (FAO) and the United Nations Educational, Scientific and Cultural Organization (Unesco)" The strategy aims to "provide an intellectual framework and practical guidance for conservation actions." This thorough guidebook covers everything from the intended "users" of the strategy to its very priorities. It even includes a map section containing areas that have large seafood consumption and are therefore endangered by over fishing. The main sections are as follows:
- The objectives of conservation and requirements for their achievement:
- Maintenance of essential ecological processes and life-support systems.
- Preservation of genetic diversity that is flora and fauna.
- Sustainable utilization of species and ecosystems.
- Priorities for national action:
- A framework for national and sub-national conservation strategies.
- Policy making and the integration of conservation and development.
- Environmental planning and rational use allocation.
- Priorities for international action:
- International action: law and assistance.
- Tropical forests and dry lands.
- A global programme for the protection of genetic resource areas.
- Tropical forests
- Deserts and areas subject to desertification.
As major development agencies became discouraged with the public sector of environmental conservation in the late 1980s, these agencies began to lean their support towards the “private sector” or non-government organizations (NGOs). In a World Bank Discussion Paper it is made apparent that “the explosive emergence of nongovernmental organizations” was widely known to government policy makers. Seeing this rise in NGO support, the U.S. Congress made amendments to the Foreign Assistance Act in 1979 and 1986 “earmarking U.S. Agency for International Development (USAID) funds for biodiversity”. From 1990 moving through recent years environmental conservation in the NGO sector has become increasingly more focused on the political and economic impact of USAID given towards the “Environment and Natural Resources”. After the terror attacks on the World Trade Centers on September 11, 2001 and the start of former President Bush’s War on Terror, maintaining and improving the quality of the environment and natural resources became a “priority” to “prevent international tensions” according to the Legislation on Foreign Relations Through 2002 and section 117 of the 1961 Foreign Assistance Act. Furthermore, in 2002 U.S. Congress modified the section on endangered species of the previously amended Foreign Assistance Act.
Active non-government organizations
Many NGOs exist to actively promote, or be involved with wildlife conservation:
- The Nature Conservancy is a US charitable environmental organization that works to preserve the plants, animals, and natural communities that represent the diversity of life on Earth by protecting the lands and waters they need to survive.
- World Wide Fund for Nature (WWF) is an international non-governmental organization working on issues regarding the conservation, research and restoration of the environment, formerly named the World Wildlife Fund, which remains its official name in Canada and the United States. It is the world's largest independent conservation organization with over 5 million supporters worldwide, working in more than 90 countries, supporting around 1300 conservation and environmental projects around the world. It is a charity, with approximately 60% of its funding coming from voluntary donations by private individuals. 45% of the fund's income comes from the Netherlands, the United Kingdom and the United States.
- Wild-life Conservation Society
- Audubon Society
- Traffic (conservation programme)
- Born Free Foundation
- WildEarth Guardians
- Wildlife farming
- Conservation biology
- Conservation movement
- Wildlife management
- Conservation of plants and animals
- "Cooperative Alliance for Refuge Enhancement". CARE. Retrieved 1 June 2012.
- "Wildlife Conservation". Conservation and Wildlife. Retrieved 1 June 2012.
- "Conservation Funding - National Wildlife Federation". www.nwf.org. Retrieved 2016-01-21.
- "Wildlife and the Farm Bill - National Wildlife Federation". www.nwf.org. Retrieved 2016-01-21.
- Service, U.S. Fish and Wildlife. "Fish and Wildlife Service". www.fws.gov. Retrieved 2016-01-21.
- McCallum, M.L. 2010. Future climate change spells catastrophe for Blanchard's Cricket Frog (Acris blanchardi). Acta Herpetologica 5:119 - 130.
- McCallum, M.L., J.L. McCallum, and S.E. Trauth. 2009. Predicted climate change may spark box turtle declines. Amphibia-Reptilia 30:259 - 264.
- McCallum, M.L. and G.W. Bury. 2013. Google search patterns suggest declining interest in the environment. Biodiversity and Conservation DOI: 10.1007/s10531-013-0476-6
- "World Conservation Strategy" (PDF). Retrieved 2011-05-01.
- Meyer, Carrie A. (1993). "Environmental NGOs in Ecuador: An Economic Analysis of Institutional Change". The Journal of Developing Areas 27 (2): 191–210.
- "The Foreign Assistance Act of 1961, as amended" (PDF). Retrieved 2011-05-01.
- "About Us - Learn More About The Nature Conservancy". Nature.org. 2011-02-23. Retrieved 2011-05-01.
- "WWF in Brief". World Wildlife Fund. Retrieved 2011-05-01. | https://en.wikipedia.org/wiki/Wildlife_conservation |
4.25 | Quantitative Methods - Confidence Intervals
While a normally-distributed random variable can have many potential outcomes, the shape of its distribution gives us confidence that the vast majority of these outcomes will fall relatively close to its mean. In fact, we can quantify just how confident we are. By using confidence intervals - ranges that are a function of the properties of a normal bell-shaped curve - we can define ranges of probabilities.
The diagram below has a number of percentages - these numbers (which are approximations and rounded off) indicate the probability that a random outcome will fall into that particular section below the curve.
In other words, by assuming normal distribution, we are 68% confident that a variable will fall within one standard deviation of the mean. Within two standard deviations, our confidence grows to 95%. Within three standard deviations, 99%. Take an example of a distribution of returns of a security with a mean of 10% and a standard deviation of 5% (a quick numerical check follows the list):
- 68% of the returns will be between 5% and 15% (within 1 standard deviation, 10 ± 5).
- 95% of the returns will be between 0% and 20% (within 2 std. devs., 10 ± 2*5).
- 99% of the returns will be between -5% and 25% (within 3 std. devs., 10 ± 3*5).
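These interval probabilities can be checked directly from the normal cumulative distribution function rather than memorized. The short Python sketch below (the use of SciPy and the variable names are illustrative assumptions, not part of the original lesson) recomputes the three intervals for the 10%/5% example:

```python
from scipy.stats import norm

# Return distribution from the example: mean 10%, standard deviation 5%
mu, sigma = 10.0, 5.0

for k in (1, 2, 3):
    lo, hi = mu - k * sigma, mu + k * sigma
    p = norm.cdf(hi, loc=mu, scale=sigma) - norm.cdf(lo, loc=mu, scale=sigma)
    print(f"P({lo:.0f}% <= R <= {hi:.0f}%) = {p:.4f}")
```

The printed values (0.6827, 0.9545 and 0.9973) are the unrounded versions of the 68%, 95% and 99% figures quoted above.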
Standard Normal Distribution
Standard normal distribution is defined as a normal distribution where mean = 0 and standard deviation = 1. Probability numbers derived from the standard normal distribution are used to help standardize a random variable - i.e. express that number in terms of how many standard deviations it is away from its mean.
Standardizing a random variable X is done by subtracting the mean value (μ) from X, and then dividing the result by the standard deviation (σ). The result is a standard normal random variable, denoted by the letter Z.
Z = (X - μ)/σ
If a distribution has a mean of 10 and standard deviation of 5, and a random observation X is -2, we would standardize our random variable with the equation for Z.
Z = (X - μ)/ σ = (-2 - 10)/5 = -12/5 = -2.4
The standard normal random variable Z tells us how many standard deviations the observation is from the mean. In this case, the observation of -2 lies 2.4 standard deviations below the mean of 10.
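As a sketch, the standardization step can be captured in a one-line helper; the function name is hypothetical and the numbers reuse the example above:

```python
def standardize(x, mu, sigma):
    """Return the z-score of x: its distance from the mean, measured in standard deviations."""
    return (x - mu) / sigma

print(standardize(-2, 10, 5))  # -2.4, i.e. 2.4 standard deviations below the mean of 10
```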
You are considering an investment portfolio with an expected return of 10% and a standard deviation of 8%. The portfolio's returns are normally distributed. What is the probability of earning a return less than 2%?
Z = (X - μ)/ σ = (2 - 10)/8 = -8/8 = -1.0
Next, one would often consult a Z-table of cumulative probabilities for a standard normal distribution in order to determine the probability. In this case, for Z = -1.0, P(Z ≤ -1.0) = 0.158655, or about 16%.
Therefore, there is a 16% probability of earning a return of less than 2%.
Keep in mind that your upcoming exam will not provide Z-tables, so, how would you solve this problem on test day?
The answer is that you need to remember that 68% of observations fall within ±1 standard deviation on a normal curve, which means that 32% of observations fall outside that range. This question essentially asked for the probability of falling more than one standard deviation below the mean, or 32%/2 = 16%. Study the earlier diagram that shows specific percentages for certain standard deviation intervals on a normal curve - in particular, remember 68% for ±1 standard deviation away, and remember 95% for ±2 away.
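Outside the exam room, the same answer can be read straight off the normal CDF. A minimal SciPy sketch (the library choice and rounding are assumptions of this illustration, not part of the curriculum):

```python
from scipy.stats import norm

# Portfolio from the example: expected return 10%, standard deviation 8%
p_below_2 = norm.cdf(2, loc=10, scale=8)  # equivalent to norm.cdf(-1.0) for the z-score
print(round(p_below_2, 6))                # 0.158655 -> about a 16% chance of a return below 2%

# Exam-day shortcut: ~68% of outcomes lie within 1 standard deviation of the mean,
# so roughly half of the remaining 32% lies more than 1 standard deviation below it.
print(round((1 - 0.68) / 2, 2))           # 0.16
```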
Shortfall risk is essentially a refinement of the modern-day development of mean-variance analysis, that is, the idea that one must focus on both risk and return as opposed to simply the return. Risk is typically measured by standard deviation, which measures all deviations - i.e. both positive and negative. In other words, positive deviations are treated as if they were equal to negative deviations. In the real world, of course, negative surprises are far more important to quantify and predict with clarity if one is to accurately define risk. Two mutual funds could have the same risk if measured by standard deviation, but if one of those funds tends to have more extreme negative outcomes, while the other had a high standard deviation due to a preponderance of extreme positive surprises, then the actual risk profiles of those funds would be quite different. Shortfall risk defines a minimum acceptable level, and then focuses on whether a portfolio will fall below that level over a given time period.
Roy's Safety-First Ratio
An optimal portfolio is one that minimizes the probability that the portfolio's return will fall below a threshold level. In probability notation, if RP is the return on the portfolio and RL is the threshold (the minimum acceptable return), then the portfolio for which P(RP < RL) is minimized will be the optimal portfolio according to Roy's safety-first criterion. The ratio used to make that comparison is:
SFRatio = (E(RP) - RL)/σP
Let's say our minimum threshold is -2%, and we have the following expectations for portfolios A and B:
|                               | Portfolio A | Portfolio B |
| Expected Annual Return        | 8%          | 12%         |
| Standard Deviation of Returns | 10%         | 16%         |
The SFRatio for portfolio A is (8 - (-2))/10 = 1.0
The SFRatio for portfolio B is (12 - (-2))/16 = 0.875
In other words, the minimum threshold lies one full standard deviation below the expected return in Portfolio A, but only 0.875 standard deviations below it in Portfolio B, so by safety-first rules we opt for Portfolio A.
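The same ranking can be scripted. Below is a small sketch of Roy's safety-first comparison; the function and variable names are illustrative, and the figures are taken from the table above:

```python
def sf_ratio(expected_return, threshold, std_dev):
    """Roy's safety-first ratio: how many standard deviations the expected return sits above the threshold."""
    return (expected_return - threshold) / std_dev

threshold = -2.0                                     # minimum acceptable return, in percent
portfolios = {"A": (8.0, 10.0), "B": (12.0, 16.0)}   # name: (expected return %, standard deviation %)

ratios = {name: sf_ratio(er, threshold, sd) for name, (er, sd) in portfolios.items()}
print(ratios)                         # {'A': 1.0, 'B': 0.875}
print(max(ratios, key=ratios.get))    # 'A' -- the portfolio with the higher ratio is preferred
```

Choosing the portfolio with the highest SFRatio is the same as choosing the one whose threshold sits furthest below its expected return in standard-deviation terms, which is why Portfolio A wins here.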
A lognormal distribution has two distinct properties: it is always positive (bounded on the left by zero), and it is skewed to the right. Prices for stocks and many other financial assets (anything which by definition can never be negative) are often found to be lognormally distributed. Also, the lognormal and normal distributions are related: if a random variable X is lognormally distributed, then its natural log, ln(X) is normally distributed. (Thus the term "lognormal" - the log is normal.) Figure 2.11 below demonstrates a typical lognormal distribution.
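The lognormal-to-normal relationship is easy to verify by simulation. The NumPy sketch below uses arbitrary illustrative parameters (mean 0 and sigma 0.5 for the underlying normal); none of these values come from the lesson itself:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
prices = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)  # lognormally distributed sample

print(prices.min() > 0)              # True -- lognormal values are always positive
log_prices = np.log(prices)          # taking the natural log of a lognormal variable...
print(round(log_prices.mean(), 2))   # ...recovers roughly the underlying normal mean (about 0.0)
print(round(log_prices.std(), 2))    # ...and its standard deviation (about 0.5)
```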
| http://www.investopedia.com/exam-guide/cfa-level-1/quantitative-methods/confidence-intervals.asp |
4.0625 | As part of the global carbon cycle, underwater volcanoes emit between 66 and 97 million tonnes of CO2 per year. However, this is balanced by the carbon sink provided by newly formed ocean floor lava. (NOAA)
The carbon cycle describes the exchange of carbon among Earth’s biosphere (life), atmosphere (air), hydrosphere (water), pedosphere (soil) and lithosphere (rocks, crust, and mantle). It is one of several biogeochemical cycles on Earth that play a key role in making life possible and in regulating many planetary systems.
Exchanges between these spheres take many forms. Atmospheric carbon dioxide can readily dissolve into surface waters, and both atmospheric carbon dioxide and carbon dioxide dissolved in the ocean are easily and frequently taken up by living organisms. Transfer of carbon into the lithosphere takes much longer. Carbon in the lithosphere is also less mobile, often remaining stored there for millions of years, but large amounts can be released in an instant during a volcanic eruption. Human use of fossil fuels and other activities are also releasing an increasing amount of carbon stored in hydrocarbons back to the atmosphere as carbon dioxide.
Some organisms—such as photosynthetic plants and microbes and chemosynthetic bacteria—are able to take inorganic carbon, primarily in the form of carbon dioxide, and combine it with water to form simple carbohydrates (sugars). These carbohydrates formed by photosynthesis or chemosynthesis serve as the basic building blocks of all organic (carbon-containing) molecules that are necessary for life. Carbon dioxide dissolved in water is likewise readily incorporated into the marine food chain and into the carbonate minerals that make up the shells or skeletons of many marine organisms. | https://www.whoi.edu/page.do?pid=83340 |
4.21875 | Zachary Taylor, Wright Center, Teachers' Domain
Middle School: 3 Cross Cutting Concepts
High School: 2 Performance Expectations, 1 Disciplinary Core Idea, 4 Cross Cutting Concepts, 4 Science and Engineering Practices
About Teaching Climate Literacy
Other materials addressing 4e
Notes From Our Reviewers
The CLEAN collection is hand-picked and rigorously reviewed for scientific accuracy and classroom effectiveness.
Read what our review team had to say about this resource below, or learn more about how CLEAN reviews teaching materials.
Teaching Tips
- The graphs in the analysis section can be used in other activities. The comparison of the methane, calcium, and insolation graphs to the temperature graph is particularly useful.
- Educator may want to explain oxygen isotope ratios before doing this activity.
About the Science
- Comment from expert scientist: This exercise presents a nice summary of how and why ice cores are drilled and presents some of the results. It provides insights into how scientists understand past climate and shows data that puts current climate change in perspective.
About the Pedagogy
- A background essay and discussion questions are provided with the resource.
- Resource has a nice set of graphs that can be overlaid with a temperature curve for analysis and comparison.
- Students can use the data and overlays to draw and defend their own conclusions.
- This resource engages students in using scientific data.
Next Generation Science Standards supported by this Static Visualization:
Performance Expectations: 2
HS-ESS2-2: Analyze geoscience data to make the claim that one change to Earth's surface can create feedbacks that cause changes to other Earth systems.
HS-ESS2-4: Use a model to describe how variations in the flow of energy into and out of Earth’s systems result in changes in climate.
Disciplinary Core Ideas: 1
HS-ESS2.A3:The geological record shows that changes to global and regional climate can be caused by interactions among changes in the sun’s energy output or Earth’s orbit, tectonic events, ocean circulation, volcanic activity, glaciers, vegetation, and human activities. These changes can occur on a variety of time scales from sudden (e.g., volcanic ash clouds) to intermediate (ice ages) to very long-term tectonic cycles.
Cross Cutting Concepts: 4
HS-C1.5:Empirical evidence is needed to identify patterns.
HS-C2.1:Empirical evidence is required to differentiate between cause and correlation and make claims about specific causes and effects.
HS-C2.2:Cause and effect relationships can be suggested and predicted for complex natural and human designed systems by examining what is known about smaller scale mechanisms within the system.
HS-C2.4:Changes in systems may have various causes that may not have equal effects.
Science and Engineering Practices: 4
HS-P3.4:Select appropriate tools to collect, record, analyze, and evaluate data.
HS-P4.2:Apply concepts of statistics and probability (including determining function fits to data, slope, intercept, and correlation coefficient for linear fits) to scientific and engineering questions and problems, using digital tools when feasible.
HS-P4.3:Consider limitations of data analysis (e.g., measurement error, sample selection) when analyzing and interpreting data
HS-P4.4:Compare and contrast various types of data sets (e.g., self-generated, archival) to examine consistency of measurements and observations. | http://cleanet.org/resources/43452.html |
4.0625 | The Cretaceous–Paleogene (K–Pg) boundary,[a] formerly known as the Cretaceous–Tertiary (K–T) boundary,[b] is a geological signature, usually a thin band. It defines the end of the Mesozoic Era, and is usually estimated at around 66 Ma (million years ago), with more specific radioisotope dating yielding an age of 66.043 ± 0.011 Ma. K is the traditional abbreviation for the Cretaceous Period, and Pg is the abbreviation for the Paleogene Period. The boundary marks the end of the Cretaceous Period, the last period of the Mesozoic Era, and marks the beginning of the Paleogene Period of the Cenozoic Era. The boundary is associated with the Cretaceous–Paleogene extinction event, a mass extinction which is considered to be the demise of the non-avian dinosaurs in addition to a majority of the world's Mesozoic species.
Alvarez impact hypothesis
In 1980, a team of researchers consisting of Nobel prize-winning physicist Luis Alvarez, his son, geologist Walter Alvarez, and chemists Frank Asaro and Helen Michel discovered that sedimentary layers found all over the world at the K–T boundary contain a concentration of iridium many times greater than normal (30 times the average crustal content in Italy and 160 times at Stevns on the Danish island of Zealand). Iridium is extremely rare in the earth's crust because it is a siderophile element, and therefore most of it sank with iron into the earth's core during planetary differentiation. As iridium remains abundant in most asteroids and comets, the Alvarez team suggested that an asteroid struck the earth at the time of the K–T boundary. There were other earlier speculations on the possibility of an impact event, but no evidence had been uncovered at that time.
The Alvarez impact hypothesis is supported by the fact that chondritic meteorites and asteroids have an iridium concentration of ~455 parts per billion, much higher than the ~0.3 parts per billion typical of the Earth's crust. Chromium isotopic anomalies found in Cretaceous–Paleogene boundary sediments are similar to those of an asteroid or a comet composed of carbonaceous chondrites. Shocked quartz granules and tektite glass spherules, indicative of an impact event, are also common in the K–T boundary, especially in deposits from around the Caribbean. All of these constituents are embedded in a layer of clay, which the Alvarez team interpreted as the debris spread all over the world by the impact.
Using estimates of the total amount of iridium in the K–T layer, and assuming that the asteroid contained the normal percentage of iridium found in chondrites, the Alvarez team went on to calculate the size of the asteroid. The answer was about 10 km (6.2 mi) in diameter, about the size of Manhattan. Such a large impact would have had approximately the energy of 100 trillion tons of TNT, or about 2 million times greater than the most powerful thermonuclear bomb ever tested.
One of the consequences of such an impact is a dust cloud which would block sunlight and inhibit photosynthesis for a few years. This would account for the extinction of plants and phytoplankton and of organisms dependent on them (including predatory animals as well as herbivores). However, small creatures whose food chains were based on detritus might have still had a reasonable chance of survival. Vast amounts of sulfuric acid aerosols were ejected into the stratosphere as a result of the impact, leading to a 10–20% reduction in sunlight reaching the Earth's surface. It would have taken at least ten years for those aerosols to dissipate.
Global firestorms may have resulted as incendiary fragments from the blast fell back to Earth. Analyses of fluid inclusions in ancient amber suggest that the oxygen content of the atmosphere was very high (30–35%) during the late Cretaceous. This high O2 level would have supported intense combustion. The level of atmospheric O2 plummeted in the early Paleogene Period. If widespread fires occurred, they would have increased the CO2 content of the atmosphere and caused a temporary greenhouse effect once the dust cloud settled, and this would have exterminated the most vulnerable survivors of the "long winter".
The impact may also have produced acid rain, depending on what type of rock the asteroid struck. However, recent research suggests this effect was relatively minor. Chemical buffers would have limited the changes, and the survival of animals vulnerable to acid rain effects (such as frogs) indicates that this was not a major contributor to extinction. Impact theories can only explain very rapid extinctions, since the dust clouds and possible sulphuric aerosols would wash out of the atmosphere in a fairly short time—possibly under ten years.
When it was originally proposed, one issue with the "Alvarez hypothesis" (as it came to be known) had been that no documented crater matched the event. This was not a lethal blow to the theory; while the crater resulting from the impact would have been larger than 250 km (160 mi) in diameter, Earth's geological processes hide or destroy craters over time.
Subsequent research, however, identified the Chicxulub Crater buried under Chicxulub on the coast of Yucatan, Mexico as the impact crater which matched the Alvarez hypothesis dating. Identified in 1990 based on the work of Glen Penfield done in 1978, this crater is oval, with an average diameter of about 180 km (110 mi), about the size calculated by the Alvarez team.
Gerta Keller, however, suggests that the Chicxulub impact occurred approximately 300,000 years before the K–T boundary. This dating is based on evidence collected in Northeast Mexico, including stratigraphic layers bearing impact spherules, the earliest of which is approximately 10 m (33 ft) below the K–T boundary. According to Keller's interpretation, the interval between the oldest spherule layer and the K–T boundary represents about 300,000 years of long-term sedimentation. However, Schulte and 40 co-authors rejected this interpretation, arguing that the lower spherule material was slumped and reworked from the layer that lies on the K–T boundary. Keller's conclusion is also unsupported by radioisotope dating and deep-sea cores.
The shape and location of the crater indicate further causes of devastation in addition to the dust cloud. The asteroid landed right on the coast and would have caused gigantic tsunamis, for which evidence has been found all around the coast of the Caribbean and eastern United States—marine sand in locations which were then inland, and vegetation debris and terrestrial rocks in marine sediments dated to the time of the impact.
The asteroid landed in a bed of anhydrite (CaSO4) or gypsum (CaSO4·2(H2O)), which would have ejected large quantities of sulfur trioxide (SO3) that combined with water to produce a sulfuric acid aerosol. This would have further reduced the sunlight reaching the Earth's surface and then, over several days, precipitated planet-wide as acid rain, killing vegetation, plankton and organisms which build shells from calcium carbonate (coccolithophorids and molluscs).
Before 2000, arguments that the Deccan Traps flood basalts caused the extinction were usually linked to the view that the extinction was gradual, as the flood basalt events were thought to have started around 68 Ma and lasted for over 2 million years. However, there is evidence that two-thirds of the Deccan Traps were created within 1 million years about 65.5 Ma, so these eruptions would have caused a fairly rapid extinction, possibly a period of thousands of years, but still a longer period than what would be expected from a single impact event.
The Deccan Traps could have caused extinction through several mechanisms, including the release of dust and sulphuric aerosols into the air which might have blocked sunlight and thereby reduced photosynthesis in plants. In addition, Deccan Trap volcanism might have resulted in carbon dioxide emissions which would have increased the greenhouse effect when the dust and aerosols cleared from the atmosphere.
In the years when the Deccan Traps theory was linked to a slower extinction, Luis Alvarez (who died in 1988) replied that paleontologists were being misled by sparse data. While his assertion was not initially well-received, later intensive field studies of fossil beds lent weight to his claim. Eventually, most paleontologists began to accept the idea that the mass extinctions at the end of the Cretaceous were largely or at least partly due to a massive Earth impact. However, even Walter Alvarez has acknowledged that there were other major changes on Earth even before the impact, such as a drop in sea level and massive volcanic eruptions that produced the Indian Deccan Traps, and these may have contributed to the extinctions.
Multiple impact event
Several other craters also appear to have been formed about the time of the K–T boundary. This suggests the possibility of nearly simultaneous multiple impacts, perhaps from a fragmented asteroidal object, similar to the Shoemaker-Levy 9 cometary impact with Jupiter. Among these are the Boltysh crater, a 24-km (15-mi) diameter impact crater in Ukraine (65.17 ± 0.64 Ma); and the Silverpit crater, a 20-km (12-mi) diameter impact crater in the North Sea (60–65 Ma). Any other craters that might have formed in the Tethys Ocean would have been obscured by erosion and tectonic events such as the relentless northward drift of Africa and India.
A very large structure in the sea floor off the west coast of India has recently been interpreted as a crater by some researchers. The potential Shiva crater, 450–600 km (280–370 mi) in diameter, would substantially exceed Chicxulub in size and has also been dated at about 66 mya, an age consistent with the K–T boundary. An impact at this site could have been the triggering event for the nearby Deccan Traps. However, this feature has not yet been accepted by the geologic community as an impact crater and may just be a sinkhole depression caused by salt withdrawal.
Maastrichtian marine regression
Clear evidence exists that sea levels fell in the final stage of the Cretaceous by more than at any other time in the Mesozoic era. In some Maastrichtian stage rock layers from various parts of the world, the later ones are terrestrial; earlier ones represent shorelines and the earliest represent seabeds. These layers do not show the tilting and distortion associated with mountain building; therefore, the likeliest explanation is a regression, that is, a buildout of sediment, but not necessarily a drop in sea level. No direct evidence exists for the cause of the regression, but the explanation which is currently accepted as the most likely is that the mid-ocean ridges became less active and therefore sank under their own weight as sediment from uplifted orogenic belts filled in structural basins.
A severe regression would have greatly reduced the continental shelf area, which is the most species-rich part of the sea, and therefore could have been enough to cause a marine mass extinction. However, research concludes that this change would have been insufficient to cause the observed level of ammonite extinction. The regression would also have caused climate changes, partly by disrupting winds and ocean currents and partly by reducing the Earth's albedo and therefore increasing global temperatures.
Marine regression also resulted in the reduction in area of epeiric seas, such as the Western Interior Seaway of North America. The reduction of these seas greatly altered habitats, removing coastal plains that ten million years before had been host to diverse communities such as are found in rocks of the Dinosaur Park Formation. Another consequence was an expansion of freshwater environments, since continental runoff now had longer distances to travel before reaching oceans. While this change was favorable to freshwater vertebrates, those that prefer marine environments, such as sharks, suffered.
Another discredited cause for the K–T extinction event is cosmic radiation from a nearby supernova explosion. An iridium anomaly at the boundary could support this hypothesis. The fallout from a supernova explosion should contain 244Pu, the longest-lived plutonium isotope with a half-life of 81 million years. If the supernova hypothesis were correct, traces of 244Pu should be detected in rocks deposited at the time. However, analysis of the boundary layer sediments failed to find 244Pu.
It is possible that more than one of these hypotheses may be a partial solution to the mystery, and that more than one of these events may have occurred. The location of the Deccan Traps, for example, would have been close to the antipodal point of Chicxulub in the late Cretaceous; a sufficiently large asteroid impact might have sent shock waves around the planet sufficient to trigger an effect on weakened crust on the other side of the globe.
References and notes
- The abbreviation is derived from the juxtaposition of K, the common abbreviation for the Cretaceous, which in turn originates from the correspondent German term Kreide, and Pg, which is the abbreviation for the Paleogene.
- This former designation has as a part of it a term, 'Tertiary' (abbreviated as T), that is now discouraged as a formal geochronological unit by the International Commission on Stratigraphy.
- Ogg, James G.; Gradstein, F. M; Gradstein, Felix M. (2004). A geologic time scale 2004. Cambridge, UK: Cambridge University Press. ISBN 0-521-78142-6.
- "International Chronostratigraphic Chart" (pdf). International Commission on Stratigraphy. 2012. Retrieved 2013-12-18.
- Renne et al., (2013). "Time Scales of Critical Events Around the Cretaceous-Paleogene Boundary". Science. doi:10.1126/science.1230492.
- Fortey, R (1999). Life: A Natural History of the First Four Billion Years of Life on Earth. Vintage. pp. 238–260. ISBN 978-0-375-70261-7.
- Alvarez, LW, Alvarez, W, Asaro, F, and Michel, HV (1980). "Extraterrestrial cause for the Cretaceous–Tertiary extinction". Science 208 (4448): 1095–1108. Bibcode:1980Sci...208.1095A. doi:10.1126/science.208.4448.1095. PMID 17783054.
- De Laubenfels, MW (1956). "Dinosaur Extinctions: One More Hypothesis". Journal of Paleontology 30 (1): 207–218. Retrieved 2007-05-22.
- W. F. McDonough and S.-s. Sun (1995). "The composition of the Earth". Chemical Geology 120 (3–4): 223–253. doi:10.1016/0009-2541(94)00140-4.
- Pope, KO, Baines, KH, Ocampo, AC, & Ivanov, BA (1997). "Energy, volatile production, and climatic effects of the Chicxulub Cretaceous/Tertiary impact". Journal of Geophysical Research 102 (E9): 21645–21664. Bibcode:1997JGR...10221645P. doi:10.1029/97JE01743. PMID 11541145. Retrieved 2007-07-18.
- Ocampo, A, Vajda, V & Buffetaut, E (2006). Unravelling the Cretaceous–Paleogene (KT) Turnover, Evidence from Flora, Fauna and Geology in Biological Processes Associated with Impact Events (Cockell, C, Gilmour, I & Koeberl, C, editors). SpringerLink. pp. 197–219. ISBN 978-3-540-25735-6. Retrieved 2007-06-17.
- Kring, DA (2003). "Environmental consequences of impact cratering events as a function of ambient conditions on Earth". Astrobiology 3 (1): 133–152. Bibcode:2003AsBio...3..133K. doi:10.1089/153110703321632471. PMID 12809133.
- Keller, G, Adatte, T, Stinnesbeck, W, Rebolledo-Vieyra, Fucugauchi, JU, Kramar,U, & Stüben, D (2004). "Chicxulub impact predates the K-T boundary mass extinction". PNAS 101 (11): 3753–3758. Bibcode:2004PNAS..101.3753K. doi:10.1073/pnas.0400396101. PMC 374316. PMID 15004276.
- Pope KO, Ocampo AC, Kinsland GL, Smith R (1996). "Surface expression of the Chicxulub crater". Geology 24 (6): 527–30. Bibcode:1996Geo....24..527P. doi:10.1130/0091-7613(1996)024<0527:SEOTCC>2.3.CO;2. PMID 11539331.
- Keller, Gerta; Adatte, Thierry; Stinnesbeck, Wolfgang (2002). "Multiple spherule layers in the late Maastrichtian of northeastern Mexico". Geological Society of America Special Paper 356.
- Schulte P, Alegret L, Arenillas I, Arz J A, Barton P J, et al. (2010). "The Chicxulub Asteroid Impact and Mass Extinction at the Cretaceous–Paleogene Boundary". Science 327 (5970), 1214–1218.
- Dinosaur-Killing Asteroid Triggered Lethal Acid Rain, Livescience, March 09, 2014
- Hofman, C, Féraud, G & Courtillot, V (2000). "40Ar/39Ar dating of mineral separates and whole rocks from the Western Ghats lava pile: further constraints on duration and age of the Deccan traps". Earth and Planetary Science Letters 180: 13–27. Bibcode:2000E&PSL.180...13H. doi:10.1016/S0012-821X(00)00159-X.
- Duncan, RA & Pyle, DG (1988). "Rapid eruption of the Deccan flood basalts at the Cretaceous/Tertiary boundary". Nature 333 (6176): 841–843. Bibcode:1988Natur.333..841D. doi:10.1038/333841a0.
- Alvarez, W (1997). T. rex and the Crater of Doom. Princeton University Press. pp. 130–146. ISBN 978-0-691-01630-6.
- Mullen, L (October 13, 2004). "Debating the Dinosaur Extinction". Astrobiology Magazine. Retrieved 2007-07-11.
- Mullen, L (October 20, 2004). "Multiple impacts". Astrobiology Magazine. Retrieved 2007-07-11.
- Mullen, L (November 3, 2004). "Shiva: Another K–T impact?". Astrobiology Magazine. Retrieved 2007-07-11.
- Chatterjee, S, Guven, N, Yoshinobu, A, & Donofrio, R (2006). "Shiva structure: a possible KT boundary impact crater on the western shelf of India" (PDF). Special Publications of the Museum of Texas Tech University (50). Retrieved 2007-06-15.
- Chatterjee, S, Guven, N, Yoshinobu, A, & Donofrio, R (2003). "The Shiva Crater: Implications for Deccan Volcanism, India-Seychelles rifting, dinosaur extinction, and petroleum entrapment at the KT Boundary". Geological Society of America Abstracts with Programs 35 (6): 168. Retrieved 2007-08-02.
- MacLeod, N, Rawson, PF, Forey, PL, Banner, FT, Boudagher-Fadel, MK, Bown, PR, Burnett, JA, Chambers, P, Culver, S, Evans, SE, Jeffery, C, Kaminski, MA, Lord, AR, Milner, AC, Milner, AR, Morris, N, Owen, E, Rosen, BR, Smith, AB, Taylor, PD, Urquhart, E & Young, JR (1997). "The Cretaceous–Tertiary biotic transition". Journal of the Geological Society 154 (2): 265–292. doi:10.1144/gsjgs.154.2.0265.
- Liangquan, L; Keller, G (1998). "Abrupt deep-sea warming at the end of the Cretaceous". Geology 26 (11): 995–998. Bibcode:1998Geo....26..995L. doi:10.1130/0091-7613(1998)026<0995:ADSWAT>2.3.CO;2. Retrieved 2007-08-01.
- Marshall, C. R. & Ward, PD (1996). "Sudden and Gradual Molluscan Extinctions in the Latest Cretaceous of Western European Tethys". Science 274 (5291): 1360–1363. Bibcode:1996Sci...274.1360M. doi:10.1126/science.274.5291.1360. PMID 8910273.
- Archibald, J. David; Fastovsky, David E. (2004). "Dinosaur Extinction". In Weishampel, David B.; Dodson, Peter; and Osmólska, Halszka (eds.). The Dinosauria (2nd ed.). Berkeley: University of California Press. pp. 672–684. ISBN 0-520-24209-2.
- Ellis, J & Schramm, DN (1995). "Could a Nearby Supernova Explosion have Caused a Mass Extinction?". Proceedings of the National Academy of Sciences 92 (1): 235–238. Bibcode:1995PNAS...92..235E. doi:10.1073/pnas.92.1.235. PMC 42852. PMID 11607506. | https://en.wikipedia.org/wiki/Cretaceous%E2%80%93Paleogene_boundary |
4.03125 | Ellyn Satter's Division of Responsibility in Feeding
Children have natural ability with eating. They eat as much as they need, they grow in the way that is right for them, and they learn to eat the food their parents eat. Step-by-step, throughout their growing-up years, they build on their natural ability and become eating competent. Parents let them learn and grow with eating when they follow the Division of Responsibility in Feeding.
The Division of Responsibility for infants:
- The parent is responsible for what.
- The child is responsible for how much (and everything else).
Parents choose breast- or formula-feeding, and help the infant be calm and organized. Then they feed smoothly, paying attention to information coming from the baby about timing, tempo, frequency, and amounts.
The Division of Responsibility for babies making the transition to family food:
- The parent is still responsible for what,and is becoming responsible for when and where the child is fed.
- The child is still and always responsible for how much and whether to eat the foods offered by the parent.
Based on what the child can do, not on how old s/he is, parents guide the child’s transition from nipple feeding through semi-solids, then thick-and-lumpy food, to finger food at family meals.
The Division of Responsibility for toddlers through adolescents
- The parent is responsible for what, when, where.
- The child is responsible for how much and whether.
Fundamental to parents’ jobs is trusting children to determine how much and whether to eat from what parents provide. When parents do their jobs with feeding, children do their jobs with eating:
Parents’ feeding jobs:
- Choose and prepare the food.
- Provide regular meals and snacks.
- Make eating times pleasant.
- Step-by-step, show children by example how to behave at family mealtime.
- Be considerate of children’s lack of food experience without catering to likes and dislikes.
- Not let children have food or beverages (except for water) between meal and snack times.
- Let children grow up to get bodies that are right for them.
Children’s eating jobs:
- Children will eat.
- They will eat the amount they need.
- They will learn to eat the food their parents eat.
- They will grow predictably.
- They will learn to behave well at mealtime.
For more about raising healthy children who are a joy to feed, read Part two, "How to raise good eaters," in Ellyn Satter’s Secrets of Feeding a Healthy Family. For the evidence, read The Satter Feeding Dynamics Model.
©2016 by Ellyn Satter published at www.EllynSatterInstitute.org. You may reproduce this article if you don't charge for it or change it in any way and if you do include the for more about and copyright statements. | http://ellynsatterinstitute.org/dor/divisionofresponsibilityinfeeding.php |
4.125 | (Linnaeus, 1758)
The wheat weevil (Sitophilus granarius), also known as the grain weevil or granary weevil, occurs all over the world and is a common pest in many places. It can cause significant damage to harvested stored grains and may drastically decrease yields. The females lay many eggs and the larvae eat the inside of the grain kernels.
Adult wheat weevils are about 3–5 mm (0.12–0.20 in) long with elongated snouts and chewing mouthparts. Depending on the grain kernels, the sizes vary. In small grains, such as millet or grain sorghum, they are small in size, but are larger in maize (corn). The adults are a reddish-brown colour and lack distinguishing marks. Adult wheat weevils are not capable of flight. Larvae are legless, humpbacked, and white with a tan head. Weevils in the pupal stage have snouts like the adults.
Female wheat weevils lay between 36 and 254 eggs and usually one egg is deposited in each grain kernel. All larval stages and the pupal stage occur within the grain. The larvae feed inside the grain until pupation, after which they bore a hole out of the grain and emerge. They are rarely seen outside of the grain kernel. The lifecycle takes about 5 weeks in the summer, but may take up to 20 weeks in cooler temperatures. Adults can live up to 8 months after emerging.
Adult wheat weevils when threatened or disturbed will pull their legs close to their bodies and feign death. Female weevils can tell if a grain kernel has had an egg laid in it by another weevil. They avoid laying another egg in this grain. Females chew a hole, deposit an egg, and seal the hole with a gelatinous secretion. This may be how other females know the grain has an egg in it already. This ensures the young will survive and produce another generation. One pair of weevils may produce up to 6,000 offspring per year.
Wheat weevils are a pest of wheat, oats, rye, barley, rice and corn. Wheat weevils cause an unknown amount of damage worldwide because keeping track of so much information is difficult, especially in places where the grain harvests are not measured. They are hard to detect and usually all of the grain in an infested storage facility must be destroyed. Many methods attempt to get rid of the wheat weevil, such as pesticides, different methods of masking the odour of the grain with unpleasant scents, and introducing other organisms that are predators of the weevils.
Prevention and control
Sanitation and inspection are the keys to preventing infestation. Grains should be stored preferably in metal containers with tight lids (cardboard, even fortified, is easily drilled through), kept in a refrigerator or a freezer, and purchased in small quantities. If any suspicion has arisen, carefully examine the grains for adult insects or holes in the grain kernels. Another method is to immerse the grains in water; if they float to the surface, it is a good indication of infestation. Even if identified early, disposal may be the only effective solution.
Deltamethrin powder (WP) is another solution to weevil infestation in grains.
- Rice weevil (Sitophilus oryzae)
- Maize weevil (Sitophilus zeamais)
- Lixus concavus, the rhubarb curculio weevil
- "Sitophilus granarius (Linnaeus, 1758)". Integrated Taxonomic Information System. Retrieved September 6, 2012.
- "Granary and Rice Weevils" (PDF). Retrieved 2009-01-21.
- "Store Products Pests: Granary Weevil" (PDF). Retrieved 2009-01-21.
- Woodbury, N. 2008. Infanticide Avoidance by the Granary Weevil, Sitophilus granarius (L.) (Coleoptera: Curculionidae): The Role of Harbourage Markers, Oviposition Markers, and Egg-Plugs. Journal of Insect Behavior, 21: 55-62.
- Giacinto, G. S., Antonio, D. C., & Giuseppe, R. 2008. Behavioral responses of adult Sitophilus granarius to individual cereal volatiles. Journal of Chemical Ecology, 34: 523-529.
| https://en.wikipedia.org/wiki/Wheat_weevil |
4.15625 | The Northern Renaissance was the Renaissance that occurred in Europe north of the Alps. Before 1497, Italian Renaissance humanism had little influence outside Italy. From the late 15th century, its ideas spread around Europe. This influenced the German Renaissance, French Renaissance, English Renaissance, Renaissance in the Low Countries, Polish Renaissance and other national and localized movements, each with different characteristics and strengths.
In France, King Francis I imported Italian art, commissioned Italian artists (including Leonardo da Vinci), and built grand palaces at great expense, starting the French Renaissance. Trade and commerce in cities like Bruges in the 15th century and Antwerp in the 16th increased cultural exchange between Italy and the Low Countries; however, in art, and especially architecture, late Gothic influences remained present until the arrival of Baroque even as painters increasingly drew on Italian models.
Universities and the printed book helped spread the spirit of the age through France, the Low Countries and the Holy Roman Empire, and then to Scandinavia and finally Britain by the late 16th century. Writers and humanists such as Rabelais, Pierre de Ronsard and Desiderius Erasmus were greatly influenced by the Italian Renaissance model and were part of the same intellectual movement. During the English Renaissance (which overlapped with the Elizabethan era) writers such as William Shakespeare and Christopher Marlowe composed works of lasting influence. The Renaissance was brought to Poland directly from Italy by artists from Florence and the Low Countries, starting the Polish Renaissance.
In some areas the Northern Renaissance was distinct from the Italian Renaissance in its centralization of political power. While Italy and Germany were dominated by independent city-states, most of Europe began emerging as nation-states or even unions of countries. The Northern Renaissance was also closely linked to the Protestant Reformation with the resulting long series of internal and external conflicts between various Protestant groups and the Roman Catholic Church having lasting effects.
Feudalism had dominated Europe for a thousand years, but was on the decline at the beginning of the Renaissance. The reasons for this decline include the post-plague environment, the increasing use of money rather than land as a medium of exchange, the growing number of serfs living as freemen, the formation of nation-states with monarchies interested in reducing the power of feudal lords, the increasing uselessness of feudal armies in the face of new military technology (such as gunpowder), and a general increase in agricultural productivity due to improving farming technology and methods. As in Italy, the decline of feudalism opened the way for the cultural, social, and economic changes associated with the Renaissance in Europe.
Finally, the Renaissance in Europe would also be kindled by a weakening of the Roman Catholic Church. The slow demise of feudalism also weakened a long-established policy in which church officials helped keep the population of the manor under control in return for tribute. Consequently, the early 15th century saw the rise of many secular institutions and beliefs. Among the most significant of these, humanism, would lay the philosophical grounds for much of Renaissance art, music, and science. Desiderius Erasmus, for example, was important in spreading humanist ideas in the north, and was a central figure at the intersection of classical humanism and mounting religious questions. Forms of artistic expression which a century ago would have been banned by the church were now tolerated or even encouraged in certain circles.
The velocity of transmission of the Renaissance throughout Europe can also be ascribed to the invention of the printing press. Its power to disseminate knowledge enhanced scientific research, spread political ideas and generally impacted the course of the Renaissance in northern Europe. As in Italy, the printing press increased the availability of books written in both vernacular languages and the publication of new and ancient classical texts in Greek and Latin. Furthermore, the Bible became widely available in translation, a factor often attributed to the spread of the Protestant Reformation.
Age of Discovery
One of the most important technological developments of the Renaissance was the invention of the caravel. This combination of European and African ship building technologies for the first time made extensive trade and travel over the Atlantic feasible. While first introduced by the Italian states, and with early captains such as Giovanni Caboto being Italian, the development would end Northern Italy's role as the trade crossroads of Europe, shifting wealth and power westwards to Spain, Portugal, France, England, and the Netherlands. These states all began to conduct extensive trade with Africa and Asia, and in the Americas began extensive colonisation activities. This period of exploration and expansion has become known as the Age of Discovery. Eventually European power spread around the globe.
The detailed realism of Early Netherlandish painting was greatly respected in Italy, but there was little reciprocal influence on the North until nearly the end of the 15th century. Despite frequent cultural and artistic exchange, the Antwerp Mannerists (1500–1530)—chronologically overlapping with but unrelated to Italian Mannerism—were among the first artists in the Low Countries to clearly reflect Italian formal developments.
Around the same time, Albrecht Dürer made his two trips to Italy, where he was greatly admired for his prints. Dürer, in turn, was influenced by the art he saw there. Other notable painters, such as Hans Holbein the Elder and Jean Fouquet, retained a Gothic influence that was still popular in the north, while highly individualistic artists such as Hieronymus Bosch and Pieter Bruegel the Elder developed styles that were imitated by many subsequent generations. Northern painters in the 16th century increasingly looked and travelled to Rome, becoming known as the Romanists. The High Renaissance art of Michelangelo and Raphael and the late Renaissance stylistic tendencies of Mannerism that were in vogue had a great impact on their work.
Renaissance humanism and the large number of surviving classical artworks and monuments encouraged many Italian painters to explore Greco-Roman themes more prominently than northern artists, and likewise the famous 15th-century German and Dutch paintings tend to be religious. In the 16th century, mythological and other themes from history became more uniform amongst northern and Italian artists. Northern Renaissance painters, however, had new subject matter, such as landscape and genre painting.
As Renaissance art styles moved through northern Europe, they changed and were adapted to local customs. In England and the northern Netherlands the Reformation brought religious painting almost completely to an end. Despite several very talented Artists of the Tudor Court in England, portrait painting was slow to spread from the elite. In France the School of Fontainebleau was begun by Italians such as Rosso Fiorentino in the latest Mannerist style, but succeeded in establishing a durable national style. By the end of the 16th century, artists such as Karel van Mander and Hendrik Goltzius collected in Haarlem in a brief but intense phase of Northern Mannerism that also spread to Flanders.
The Renaissance is one of the most interesting and disputed periods of European history. Many scholars see it as a unique time with characteristics all its own. A second group views the Renaissance as the first two to three centuries of a larger era in European history usually called early modern Europe, which began in the late fifteenth century and ended on the eve of the French Revolution (1789) or with the close of the Napoleonic era (1815). Some social historians reject the concept of the Renaissance altogether. Historians also argue over how much the Renaissance differed from the Middle Ages and whether it was the beginning of the modern world, however defined.
- Janson, H.W.; Anthony F. Janson (1997). History of Art (5th, rev. ed.). New York: Harry N. Abrams, Inc. ISBN 0-8109-3442-6.
- Although the notion of a north to south-only direction of influence arose in the scholarship of Max Jakob Friedländer and was continued by Erwin Panofsky, art historians are increasingly questioning its validity: Lisa Deam, "Flemish versus Netherlandish: A Discourse of Nationalism," in Renaissance Quarterly, vol. 51, no. 1 (Spring, 1998), pp. 28–29.
- Chipps Smith, Jeffrey (2004). The Northern Renaissance. Phaidon Press. ISBN 978-0-7148-3867-0.
- Campbell, Gordon, ed. (2009). The Grove Encyclopedia of Northern Renaissance Art. Oxford University Press. ISBN 978-0-19-533466-1.
- O'Neill, J, ed. (1987). The Renaissance in the North. New York: The Metropolitan Museum of Art. | https://en.wikipedia.org/wiki/Northern_Renaissance |
4.125 | Global warming, the phenomenon of increasing average air temperatures near the surface of Earth over the past one to two centuries. Climate scientists have since the mid-20th century gathered detailed observations of various weather phenomena (such as temperatures, precipitation, and storms) and of related influences on climate (such as ocean currents and the atmosphere’s chemical composition). These data indicate that Earth’s climate has changed over almost every conceivable timescale since the beginning of geologic time and that the influence of human activities since at least the beginning of the Industrial Revolution has been deeply woven into the very fabric of climate change.
Giving voice to a growing conviction of most of the scientific community, the Intergovernmental Panel on Climate Change (IPCC) was formed in 1988 by the World Meteorological Organization (WMO) and the United Nations Environment Program (UNEP). In 2013 the IPCC reported that the interval between 1880 and 2012 saw an increase in global average surface temperature of approximately 0.9 °C (1.5 °F). The increase is closer to 1.1 °C (2.0 °F) when measured relative to the preindustrial (i.e., 1750–1800) mean temperature. The IPCC stated that most of the warming observed over the second half of the 20th century could be attributed to human activities. It predicted that by the end of the 21st century the global mean surface temperature would increase by 0.3 to 4.8 °C (0.5 to 8.6 °F) relative to the 1986–2005 average. The predicted rise in temperature was based on a range of possible scenarios that accounted for future greenhouse gas emissions and mitigation (severity reduction) measures and on uncertainties in the model projections. Some of the main uncertainties include the precise role of feedback processes and the impacts of industrial pollutants known as aerosols which may offset some warming.
Many climate scientists agree that significant societal, economic, and ecological damage would result if global average temperatures rose by more than 2 °C (3.6 °F) in such a short time. Such damage would include increased extinction of many plant and animal species, shifts in patterns of agriculture, and rising sea levels. The IPCC reported that the global average sea level rose by some 19–21 cm (7.5–8.3 inches) between 1901 and 2010 and that sea levels rose faster in the second half of the 20th century than in the first half. It also predicted, again depending on a wide range of scenarios, that by the end of the 21st century the global average sea level could rise by another 26–82 cm (10.2–32.3 inches) relative to the 1986–2005 average and that a rise of well over 1 metre (3 feet) could not be ruled out.
The scenarios referred to above depend mainly on future concentrations of certain trace gases, called greenhouse gases, that have been injected into the lower atmosphere in increasing amounts through the burning of fossil fuels for industry, transportation, and residential uses. Modern global warming is the result of an increase in magnitude of the so-called greenhouse effect, a warming of Earth’s surface and lower atmosphere caused by the presence of water vapour, carbon dioxide, methane, nitrous oxides, and other greenhouse gases. In 2014 the IPCC reported that concentrations of carbon dioxide, methane, and nitrous oxides in the atmosphere surpassed those found in ice cores dating back 800,000 years. Of all these gases, carbon dioxide is the most important, both for its role in the greenhouse effect and for its role in the human economy. It has been estimated that, at the beginning of the industrial age in the mid-18th century, carbon dioxide concentrations in the atmosphere were roughly 280 parts per million (ppm). By the middle of 2014, carbon dioxide concentrations had briefly reached 400 ppm, and, if fossil fuels continue to be burned at current rates, they are projected to reach 560 ppm by the mid-21st century—essentially, a doubling of carbon dioxide concentrations in 300 years.
A vigorous debate is in progress over the extent and seriousness of rising surface temperatures, the effects of past and future warming on human life, and the need for action to reduce future warming and deal with its consequences. This article provides an overview of the scientific background and public policy debate related to the subject of global warming. It considers the causes of rising near-surface air temperatures, the influencing factors, the process of climate research and forecasting, the possible ecological and social impacts of rising temperatures, and the public policy developments since the mid-20th century. For a detailed description of Earth’s climate, its processes, and the responses of living things to its changing nature, see climate. For additional background on how Earth’s climate has changed throughout geologic time, see climatic variation and change. For a full description of Earth’s gaseous envelope, within which climate change and global warming occur, see atmosphere.
Climatic variation since the last glaciation
Global warming is related to the more general phenomenon of climate change, which refers to changes in the totality of attributes that define climate. In addition to changes in air temperature, climate change involves changes to precipitation patterns, winds, ocean currents, and other measures of Earth’s climate. Normally, climate change can be viewed as the combination of various natural forces occurring over diverse timescales. Since the advent of human civilization, climate change has involved an “anthropogenic,” or exclusively human-caused, element, and this anthropogenic element has become more important in the industrial period of the past two centuries. The term global warming is used specifically to refer to any warming of near-surface air during the past two centuries that can be traced to anthropogenic causes.
To define the concepts of global warming and climate change properly, it is first necessary to recognize that the climate of Earth has varied across many timescales, ranging from an individual human life span to billions of years. This variable climate history is typically classified in terms of “regimes” or “epochs.” For instance, the Pleistocene glacial epoch (about 2,600,000 to 11,700 years ago) was marked by substantial variations in the global extent of glaciers and ice sheets. These variations took place on timescales of tens to hundreds of millennia and were driven by changes in the distribution of solar radiation across Earth’s surface. The distribution of solar radiation is known as the insolation pattern, and it is strongly affected by the geometry of Earth’s orbit around the Sun and by the orientation, or tilt, of Earth’s axis relative to the direct rays of the Sun.
Worldwide, the most recent glacial period, or ice age, culminated about 21,000 years ago in what is often called the Last Glacial Maximum. During this time, continental ice sheets extended well into the middle latitude regions of Europe and North America, reaching as far south as present-day London and New York City. Global annual mean temperature appears to have been about 4–5 °C (7–9 °F) colder than in the mid-20th century. It is important to remember that these figures are a global average. In fact, during the height of this last ice age, Earth’s climate was characterized by greater cooling at higher latitudes (that is, toward the poles) and relatively little cooling over large parts of the tropical oceans (near the Equator). This glacial interval terminated abruptly about 11,700 years ago and was followed by the subsequent relatively ice-free period known as the Holocene Epoch. The modern period of Earth’s history is conventionally defined as residing within the Holocene. However, some scientists have argued that the Holocene Epoch terminated in the relatively recent past and that Earth currently resides in a climatic interval that could justly be called the Anthropocene Epoch—that is, a period during which humans have exerted a dominant influence over climate.
Though less dramatic than the climate changes that occurred during the Pleistocene Epoch, significant variations in global climate have nonetheless taken place over the course of the Holocene. During the early Holocene, roughly 9,000 years ago, atmospheric circulation and precipitation patterns appear to have been substantially different from those of today. For example, there is evidence for relatively wet conditions in what is now the Sahara Desert. The change from one climatic regime to another was caused by only modest changes in the pattern of insolation within the Holocene interval as well as the interaction of these patterns with large-scale climate phenomena such as monsoons and El Niño/Southern Oscillation (ENSO).
During the middle Holocene, some 5,000–7,000 years ago, conditions appear to have been relatively warm—indeed, perhaps warmer than today in some parts of the world and during certain seasons. For this reason, this interval is sometimes referred to as the Mid-Holocene Climatic Optimum. The relative warmth of average near-surface air temperatures at this time, however, is somewhat unclear. Changes in the pattern of insolation favoured warmer summers at higher latitudes in the Northern Hemisphere, but these changes also produced cooler winters in the Northern Hemisphere and relatively cool conditions year-round in the tropics. Any overall hemispheric or global mean temperature changes thus reflected a balance between competing seasonal and regional changes. In fact, recent theoretical climate model studies suggest that global mean temperatures during the middle Holocene were probably 0.2–0.3 °C (0.4–0.5 °F) colder than average late 20th-century conditions.
Over subsequent millennia, conditions appear to have cooled relative to middle Holocene levels. This period has sometimes been referred to as the “Neoglacial.” In the middle latitudes this cooling trend was associated with intermittent periods of advancing and retreating mountain glaciers reminiscent of (though far more modest than) the more substantial advance and retreat of the major continental ice sheets of the Pleistocene climate epoch. | http://www.britannica.com/science/global-warming |
4.125 | Ecliptic coordinate system
The ecliptic coordinate system is a celestial coordinate system commonly used for representing the positions and orbits of Solar System objects. Because most planets (except Mercury) and many small Solar System bodies have orbits with small inclinations to the ecliptic, it is convenient to use it as the fundamental plane. The system's origin can be either the center of the Sun or the center of the Earth, its primary direction is towards the vernal (northbound) equinox, and it has a right-handed convention. It may be implemented in spherical coordinates or rectangular coordinates.
- 1 Primary direction
- 2 Spherical coordinates
- 3 Rectangular coordinates
- 4 Conversion between celestial coordinate systems
- 5 See also
- 6 External links
- 7 Notes and references
The celestial equator and the ecliptic are slowly moving due to perturbing forces on the Earth, therefore the orientation of the primary direction, their intersection at the Northern Hemisphere vernal equinox, is not quite fixed. A slow motion of Earth's axis, precession, causes a slow, continuous turning of the coordinate system westward about the poles of the ecliptic, completing one circuit in about 26,000 years. Superimposed on this is a smaller motion of the ecliptic, and a small oscillation of the Earth's axis, nutation.
In order to reference a coordinate system which can be considered as fixed in space, these motions require specification of the equinox of a particular date, known as an epoch, when giving a position in ecliptic coordinates. The three most commonly used are:
- Mean equinox of a standard epoch (usually J2000.0, but may include B1950.0, B1900.0, etc.)
- is a fixed standard direction, allowing positions established at various dates to be compared directly.
- Mean equinox of date
- is the intersection of the ecliptic of "date" (that is, the ecliptic in its position at "date") with the mean equator (that is, the equator rotated by precession to its position at "date", but free from the small periodic oscillations of nutation). Commonly used in planetary orbit calculation.
- True equinox of date
- is the intersection of the ecliptic of "date" with the true equator (that is, the mean equator plus nutation). This is the actual intersection of the two planes at any particular moment, with all motions accounted for.
A position in the ecliptic coordinate system is thus typically specified true equinox and ecliptic of date, mean equinox and ecliptic of J2000.0, or similar. Note that there is no "mean ecliptic", as the ecliptic is not subject to small periodic oscillations.
| ||Longitude||Latitude||Distance||Rectangular|
|Heliocentric||l||b||r||x, y, z[note 1]|
|Geocentric||λ||β||Δ|| |
Ecliptic longitude or celestial longitude (symbols: heliocentric l, geocentric λ) measures the angular distance of an object along the ecliptic from the primary direction. Like right ascension in the equatorial coordinate system, the primary direction (0° ecliptic longitude) points from the Earth towards the Sun at the vernal equinox of the Northern Hemisphere. Because it is a right-handed system, ecliptic longitude is measured positive eastwards in the fundamental plane (the ecliptic) from 0° to 360°.
Ecliptic latitude or celestial latitude (symbols: heliocentric b, geocentric β) measures the angular distance of an object from the ecliptic towards the north (positive) or south (negative) ecliptic pole. For example, the north ecliptic pole has a celestial latitude of +90°.
Distance is also necessary for a complete spherical position (symbols: heliocentric r, geocentric Δ). Different distance units are used for different objects. Within the Solar System, astronomical units are used, and for objects near the Earth, Earth radii or kilometers are used.
From antiquity through the 18th century, ecliptic longitude was commonly measured using twelve zodiacal signs, each of 30° longitude, a usage that continues in modern astrology. The signs approximately corresponded to the constellations crossed by the ecliptic. Longitudes were specified in signs, degrees, minutes, and seconds. For example, a longitude of 19° 55′ 58″ is 19.933° east of the start of the sign Leo. Since Leo begins 120° from the vernal equinox, the longitude in modern form is 139° 55′ 58″.
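Converting from the old sign-based notation to a modern longitude is simple arithmetic, as the small sketch below shows; the sign list is the conventional order starting from the vernal equinox, and the function name is ours.

```python
# Convert a zodiacal-sign longitude (sign, degrees, minutes, seconds)
# into a modern ecliptic longitude in decimal degrees.
SIGNS = ["Aries", "Taurus", "Gemini", "Cancer", "Leo", "Virgo",
         "Libra", "Scorpio", "Sagittarius", "Capricorn", "Aquarius", "Pisces"]

def modern_longitude(sign, deg, minute, second):
    start = SIGNS.index(sign) * 30            # each sign spans 30 degrees
    return start + deg + minute / 60 + second / 3600

print(modern_longitude("Leo", 19, 55, 58))    # about 139.93 degrees
```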
In China, ecliptic longitude is measured using 24 Solar terms, each of 15° longitude, and are used by Chinese lunisolar calendars to stay synchronized with the seasons, which is crucial for agrarian societies.
A rectangular variant of ecliptic coordinates is often used in orbital calculations and simulations. It has its origin at the center of the Sun (or at the barycenter of the solar system), its fundamental plane in the plane of the ecliptic, and the x axis toward the vernal equinox. The coordinates have a right-handed convention, that is, if one extends their right thumb upward, it simulates the z-axis, their extended index finger the x-axis, and the curl of the other fingers points generally in the direction of the y-axis.
These rectangular coordinates are related to the corresponding spherical coordinates (using the heliocentric symbols l, b, and r from above) by
x = r cos b cos l
y = r cos b sin l
z = r sin b
Conversion between celestial coordinate systems
Converting Cartesian vectors
Conversion from ecliptic coordinates to equatorial coordinates
A rectangular position in ecliptic coordinates is rotated into the equatorial frame about the common x axis (the direction of the vernal equinox):
x_eq = x_ecl
y_eq = y_ecl cos ε - z_ecl sin ε
z_eq = y_ecl sin ε + z_ecl cos ε
Conversion from equatorial coordinates to ecliptic coordinates
The inverse rotation recovers the ecliptic position:
x_ecl = x_eq
y_ecl = y_eq cos ε + z_eq sin ε
z_ecl = -y_eq sin ε + z_eq cos ε
where ε is the obliquity of the ecliptic.
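A short Python sketch of these conversions follows; the obliquity value is the approximate J2000.0 mean obliquity and is used here only for illustration.

```python
import math

OBLIQUITY = math.radians(23.439)  # approximate J2000.0 mean obliquity (illustrative)

def spherical_to_rectangular(lon_deg, lat_deg, r):
    """Ecliptic longitude/latitude/distance -> rectangular x, y, z."""
    l, b = math.radians(lon_deg), math.radians(lat_deg)
    return (r * math.cos(b) * math.cos(l),
            r * math.cos(b) * math.sin(l),
            r * math.sin(b))

def ecliptic_to_equatorial(x, y, z, eps=OBLIQUITY):
    """Rotate rectangular ecliptic coordinates into the equatorial frame."""
    return (x,
            y * math.cos(eps) - z * math.sin(eps),
            y * math.sin(eps) + z * math.cos(eps))

def equatorial_to_ecliptic(x, y, z, eps=OBLIQUITY):
    """Inverse rotation: equatorial frame back to the ecliptic frame."""
    return (x,
            y * math.cos(eps) + z * math.sin(eps),
            -y * math.sin(eps) + z * math.cos(eps))
```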
- The Ecliptic: the Sun's Annual Path on the Celestial Sphere Durham University Department of Physics
- MEASURING THE SKY A Quick Guide to the Celestial Sphere James B. Kaler, University of Illinois
Notes and references
- Nautical Almanac Office, U.S. Naval Observatory; H.M. Nautical Almanac Office, Royal Greenwich Observatory (1961). Explanatory Supplement to the Astronomical Ephemeris and the American Ephemeris and Nautical Almanac. H.M. Stationery Office, London. pp. 24–27.
- Explanatory Supplement (1961), pp. 20, 28
- U.S. Naval Observatory, Nautical Almanac Office (1992). P. Kenneth Seidelmann, ed. Explanatory Supplement to the Astronomical Almanac. University Science Books, Mill Valley, CA. pp. 11–13. ISBN 0-935702-68-7.
- Meeus, Jean (1991). Astronomical Algorithms. Willmann-Bell, Inc., Richmond, VA. p. 137. ISBN 0-943396-35-2.
- Explanatory Supplement (1961), sec. 1G
- Leadbetter, Charles (1742). A Compleat System of Astronomy. J. Wilcox, London. p. 94., at Google books; numerous examples of this notation appear throughout the book.
- Explanatory Supplement (1961), pp. 20, 27
- Explanatory Supplement (1992), pp. 555-558 | https://en.wikipedia.org/wiki/Celestial_latitude |
4.28125 | Demo Case Study FOR SECONDARY STUDENTS
How have Indigenous people's citizenship rights changed over time?
Case Study Overview
Students explore the evidence to critically discuss the issue of Australians’ attitudes to Indigenous rights and racial equality. They explore how the apparent racism revealed by the 1965 Freedom Ride in places such as Walgett and Moree can be reconciled with the overwhelmingly positive example of the 1967 referendum. Or how the apparent hostility of many towards the Aboriginal Tent Embassy in 1972 can be reconciled with the awarding of equal pay to Aboriginal pastoral workers in 1966 and the adoption of the Racial Discrimination Act in 1975. The case study also compares the Yirrkala people’s claim to legal ownership of their land in 1971 and the Mabo case in 1992.
An interactive entitled The 1967 Referendum is also available for this case study.
Case Study unit of work inquiry structure (pdf)
- Teacher’s Guide
- Activity 1: Focusing on rights
Understanding the main concept(s) raised in the case study
- Activity 2: Video visit
Looking at the video segment of this case study and answering questions about it
- Activity 3: Decision makers
How have Indigenous people’s rights developed over time? Decision maker
- Activity 4: Case Study 1: 1967 Referendum
An historical case study for analysis and discussion by the whole class
- Activity 4: Case Study 2: Mutual obligation
Two contrasting case studies
About the Interactive
The 1967 Referendum — What do they tell us about Australian attitudes?
Students decide whether the 1967 Referendum should be included in a ‘human rights hall of fame’ by looking at a range of evidence and a range of different views. Does it deserve a place along with such events as women obtaining the right to vote in Australia in 1902 and the Mabo High Court decision of 1992?
HISTORY YEAR 6
• Australia as a nation
HISTORY YEAR 10
• Rights and freedoms
(1945 - present) | https://www.australianhistorymysteries.info/demo/secondary.php |
4.21875 | Tool use is so rare in the animal kingdom that it was once believed to be a uniquely human trait. While it is now known that some non-human animal species can use tools for foraging, the rarity of this behaviour remains a puzzle. It is generally assumed that tool use played a key role in human evolution, so understanding this behaviour's ecological context, and its evolutionary roots, is of major scientific interest. A project led by researchers from the Universities of Oxford and Exeter examined the ecological significance of tool use in New Caledonian crows, a species renowned for its sophisticated tool-use behaviour. The scientists found that a substantial amount of the crows' energy intake comes from tool-derived food, highlighting the nutritional significance of their remarkable tool-use skills. A report of the research appears in this week's Science.
To trace the evolutionary origins of specific behaviours, scientists usually compare the ecologies and life histories of those species that exhibit the trait of interest, searching for common patterns and themes. "Unfortunately, this powerful technique cannot be used for studying the evolution of tool use, because there are simply too few species that are known to show this behaviour in the wild," says Dr Christian Rutz from Oxford University's Department of Zoology, who led the project. But, as he explains further, some light can still be shed on this intriguing question. "Examining the ecological context, and adaptive significance, of a species' tool-use behaviour under contemporary conditions can uncover the selection pressures that currently maintain the behaviour, and may even point to those that fostered its evolution in the past. This was the rationale of our study on New Caledonian crows."
Observing New Caledonian crows in the wild, on their home island in the South Pacific, is extremely difficult, because they are easily disturbed and live in densely forested, mountainous terrain. To gather quantitative data on the foraging behaviour and diet composition of individual crows, the scientists came up with an unconventional study approach. New Caledonian crows consume a range of foods, but require tools to extract wood-boring longhorn beetle larvae from their burrows. These larvae, with their unusual diet, have a distinct chemical fingerprint--their stable isotope profile--that can be traced in the crows' feathers and blood, enabling efficient sample collection with little or no harm to the birds. "By comparing the stable isotope profiles of the crows' tissues with those of their putative food sources, we could estimate the proportion of larvae in crow diet, providing a powerful proxy for individual tool-use dependence," explains Dr Rutz.
The analysis of the samples presented further challenges. Dr Stuart Bearhop from Exeter University's School of Biosciences, who led the stable-isotope analyses, points out: "These crows are opportunistic foragers, and eat a range of different foods. The approach we used is very similar to that employed by forensic scientists trying to solve crimes, and has even appeared on CSI. We have developed very powerful statistical models that enabled us to use the unique fingerprints, or stable isotope profiles, of each food type to estimate the amount of beetle larvae consumed by individual New Caledonian crows."
The scientists found that beetle larvae are so energy rich, and full of fat, that just a few specimens can satisfy a crow's daily energy requirements, demonstrating that competent tool users can enjoy substantial rewards. "Our results show that tool use provides New Caledonian crows with access to an extremely profitable food source that is not easily exploited by beak alone," says Dr Rutz. And, Dr Bearhop adds: "This suggests that unusual foraging opportunities on the remote, tropical island of New Caledonia selected for, and currently maintain, these crows' sophisticated tool technology. Other factors have probably played a role, too, but at least we now have a much better understanding of the dietary significance of this remarkable behaviour."
The scientists believe that their novel methodological approach could prove key to investigating in the future whether particularly proficient tool users, with their privileged access to larvae, produce offspring of superior body condition, and whether a larva-rich diet has lasting effects on future survival and reproduction. "The fact that we can estimate the importance of tool use from a small tissue sample opens up exciting possibilities. This approach may even be suitable for studying other animal tool users, like chimpanzees," speculates Dr Rutz.
For more information contact Dr Christian Rutz (phone: +44 (0)1865 271179 or +44 (0)7792851538; e-mail: email@example.com) or Dr Stuart Bearhop (phone: +44 (0)1326 371835 or +44 (0)7881818150; e-mail: firstname.lastname@example.org). Alternatively, contact the press offices of the University of Oxford (phone: +44 (0)1865 283877; e-mail: email@example.com) or the University of Exeter (phone: +44 (0)1392 722062; e-mail: D.D.Williams@exeter.ac.uk).
NOTES TO EDITORS
A report of the research, entitled 'The ecological significance of tool use in New Caledonian crows' is to be published in Science on Friday, 17 September 2010 (authors: Christian Rutz, Lucas A. Bluff, Nicola Reed, Jolyon Troscianko, Jason Newton, Richard Inger, Alex Kacelnik, Stuart Bearhop).
The researchers studied the New Caledonian crow (Corvus moneduloides), a species that has attracted attention with its unusually sophisticated use of tools for extracting invertebrates from holes and crevices. The species is endemic to the tropical island of New Caledonia in the South Pacific, where fieldwork was conducted.
New Caledonian crows use stick tools to probe for longhorn beetle larvae (Agrianome fairmairei) in decaying trunks of candlenut trees (Aleurites moluccana). The larva-extraction technique of crows relies on exploiting defensive responses of their prey, similar to the well-known 'termite fishing' of chimpanzees. Crows insert a twig or leaf stem into a burrow, 'teasing' the larva by repeatedly poking it with the tool until it bites the tip of the tool with its powerful mandibles, and can be levered out.
The use of stable isotopes to examine the diets of wild animals is a well-established research technique. It relies on the premise "you are what you eat". Thus, the unique stable isotope profile of a food source can often be traced in the tissues of a consumer. Using relatively simple conversion factors (and some assumptions), it is possible to use this information to calculate the amount of any given food type in the diet of an animal. The Exeter-based research group has recently been involved in developing powerful Bayesian analysis techniques that are suitable for estimating animal diets in more complex situations, for example when consumers are known to eat many different food types. This advance was key to their collaboration with the Oxford-based scientists, who study the ecology and behaviour of the New Caledonian crow - a species that, like many other crows and ravens, is an opportunistic, generalist forager.
Previous studies on New Caledonian crows have shown that: wild crows manufacture and use at least three distinct tool types (including the most sophisticated animal tool yet discovered); the species has a strong genetic predisposition for basic stick-tool use (tool-related behaviour emerges in juvenile crows that had no opportunity to learn from others); crows have a preferred way of holding their tools (comparable to the way that humans are either left- or right-handed); adult crows can make or select tools of the appropriate length or diameter for experimental tasks; at least some birds can 'creatively' solve novel problems; and wild crows may socially transmit certain aspects of their tool-use behaviour (but claims for 'crow tool cultures' are still contentious).
An earlier paper in Science by Dr Christian Rutz's team (published in 2007) described the use of miniaturized, animal-borne video cameras to study the undisturbed foraging behaviour of wild, free-ranging New Caledonian crows.
This work was funded by the UK's Biotechnology and Biological Sciences Research Council (BBSRC) and Natural Environment Research Council (NERC). Dr Christian Rutz is a BBSRC David Phillips Fellow at the Department of Zoology, University of Oxford, and Dr Stuart Bearhop is a Senior Lecturer in the School of Biosciences, University of Exeter.
Stable isotope measurements were carried out by Dr Jason Newton, Senior Research Fellow and Manager of the NERC Life Science Mass Spectrometry Facility in East Kilbride. The Facility exists to provide access for UK scientists in the biological, environmental and other sciences to training and research facilities, offering an integrated and comprehensive suite of stable isotope techniques and expertise. | http://www.eurekalert.org/pub_releases/2010-09/uoe-fff091410.php |
4.0625 | How to Think Like a Computer Scientist: Learning with Python 2nd Edition/Case Study: Catch
- 1 Case Study: Catch
- 1.1 Getting started
- 1.2 Using while to move a ball
- 1.3 Varying the pitches
- 1.4 Making the ball bounce
- 1.5 The break statement
- 1.6 Responding to the keyboard
- 1.7 Checking for collisions
- 1.8 Putting the pieces together
- 1.9 Displaying text
- 1.10 Abstraction
- 1.11 Glossary
- 1.12 Exercises
- 1.13 Project: pong.py
Case Study: Catch
In our first case study we will build a small video game using the facilities in the GASP package. The game will shoot a ball across a window from left to right and you will manipulate a mitt at the right side of the window to catch it.
Using while to move a ball
while statements can be used with gasp to add motion to a program. The following program moves a black ball across an 800 x 600 pixel graphics canvas. Add this to a file named pitch.py:
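The original listing is not reproduced here; based on the GASP calls described in the list below, pitch.py might look roughly like this. The exact variable names, frame rate, title, color constant, and the closing end_graphics() call are assumptions.

```python
from gasp import *

begin_graphics(800, 600, title="Catch", background=color.YELLOW)  # size and background per the text
set_speed(120)                          # frames per second (assumed value)

ball_x = 10                             # ball position
ball_y = 300
dy = 1                                  # vertical change applied each frame
ball = Circle((ball_x, ball_y), 10, filled=True)

while ball_x < 810:                     # stop once the ball has left the right edge
    ball_x += 1
    ball_y += dy
    move_to(ball, (ball_x, ball_y))
    update_when('next_tick')            # wait for the next frame

end_graphics()                          # assumed closing call
```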
As the ball moves across the screen, you will see a graphics window that looks like this:
(screenshot: a black GASP ball on a yellow background)
Trace the first few iterations of this program to be sure you see what is happening to the variables x and y.
Some new things to learn about GASP from this example:
- begin_graphics can take arguments for width, height, title, and background color of the graphics canvas.
- set_speed can take a frame rate in frames per second.
- Adding filled=True to Circle(...) makes the resulting circle solid.
- ball = Circle stores the circle (we will talk later about what a circle actually is) in a variable named ball so that it can be referenced later.
- The move_to function in GASP allows a programmer to pass in a shape (the ball in this case) and a location, and moves the shape to that location.
- The update_when function is used to delay the action in a gasp program until a specified event occurs. The event 'next_tick' waits until the next frame, for an amount of time determined by the frame rate set with set_speed. Other valid arguments for update_when are 'key_pressed' and 'mouse_clicked'.
Varying the pitches
To make our game more interesting, we want to be able to vary the speed and direction of the ball. GASP has a function, random_between(low, high), that returns a random integer between low and high. To see how this works, run the following program:
Each time the function is called a more or less random integer is chosen between -5 and 5. When we ran this program we got:
-2 -1 -4 1 -2 3 -5 -3 4 -5
You will probably get a different sequence of numbers.
Let's use random_between to vary the direction of the ball. Replace the line in pitch.py that assigns 1 to y:
with an assignment to a random number between -4 and 4:
Making the ball bounce
Running this new version of the program, you will notice that the ball frequently goes off either the top or bottom edge of the screen before it completes its journey. To prevent this, let's make the ball bounce off the edges by changing the sign of dy and sending the ball back in the opposite vertical direction.
Add the following as the first line of the body of the while loop in pitch.py:
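The test might look something like this; the 590 and 10 margins assume the 10-pixel ball radius used in the sketch above.

```python
    if ball_y >= 590 or ball_y <= 10:   # about to leave through the top or bottom edge
        dy = -dy                        # reverse the vertical direction
```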
Run the program several times to see how it behaves.
The break statement
The break statement is used to immediately leave the body of a loop. The following program implements a simple guessing game:
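A sketch of the duplicated-input version, in the book's Python 2 idiom (the secret number and the prompt text are placeholders; the guess counter matches the exercise below that asks about it):

```python
number = 47                   # placeholder for the secret number
guesses = 0

guess = int(raw_input("Guess the number: "))
guesses += 1

while guess != number:
    if guess > number:
        print "Too high. Try again."
    else:
        print "Too low. Try again."
    guess = int(raw_input("Guess the number: "))   # the duplicated input statement
    guesses += 1

print "You got it in", guesses, "guesses!"
```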
Using a break statement, we can rewrite this program to eliminate the duplication of the input statement:
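Rewritten with break, a single input statement sits at the top of an infinite loop (again a sketch, with the same placeholder secret number):

```python
number = 47                   # placeholder for the secret number
guesses = 0

while True:
    guess = int(raw_input("Guess the number: "))
    guesses += 1
    if guess > number:
        print "Too high. Try again."
    elif guess < number:
        print "Too low. Try again."
    else:                     # by trichotomy, guess must equal number
        print "You got it in", guesses, "guesses!"
        break
```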
This program makes use of the mathematical law of trichotomy (given real numbers a and b, a > b, a < b, or a = b). While both versions of the program are 15 lines long, it could be argued that the logic in the second version is clearer.
Put this program in a file named guess.py.
Responding to the keyboard
The following program creates a circle (or mitt) which responds to keyboard input. Pressing the j or k keys moves the mitt up and down, respectively. Add this to a file named mitt.py:
Run mitt.py, pressing j and k to move up and down the screen.
Checking for collisions
The following program moves two balls toward each other from opposite sides of the screen. When they collide, both balls disappear and the program ends:
Put this program in a file named collide.py and run it.
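The collision test needs the distance between the centers of the two balls. The distance function that the exercises and the pong project below refer to can be as simple as this sketch:

```python
from math import sqrt

def distance(x1, y1, x2, y2):
    """Straight-line distance between the points (x1, y1) and (x2, y2)."""
    return sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)
```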
Putting the pieces together
In order to combine the moving ball, moving mitt, and collision detection, we need a single while loop that does each of these things in turn:
Put this program in a file named catch.py and run it several times. Be sure to catch the ball on some runs and miss it on others.
This program displays scores for both a player and the computer on the graphics screen. It generates a random number of 0 or 1 (like flipping a coin) and adds a point to the player if the value is 1 and to the computer if it is not. It then updates the display on the screen.
Put this program in a file named scores.py and run it.
We can now modify catch.py to display the winner. Immediately after the if ball_x > 810: conditional, add the following:
It is left as an exercise to display when the player wins.
Our program is getting a bit complex. To make matters worse, we are about to increase its complexity. The next stage of development requires a nested loop. The outer loop will handle repeating rounds of play until either the player or the computer reaches a winning score. The inner loop will be the one we already have, which plays a single round, moving the ball and mitt, and determining if a catch or a miss has occurred.
Research suggests there are clear limits to our ability to process cognitive tasks (see George A. Miller's The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information). The more complex a program becomes, the more difficult it is for even an experienced programmer to develop and maintain.
To handle increasing complexity, we can wrap groups of related statements in functions, using abstraction to hide program details. This allows us to mentally treat a group of programming statements as a single concept, freeing up mental bandwidth for further tasks. The ability to use abstraction is one of the most powerful ideas in computer programming.
Here is a completed version of catch.py:
Some new things to learn from this example:
Following good organizational practices makes programs easier to read. Use the following organization in your programs:
- global constants
- function definitions
- main body of the program
- Symbolic constants like COMPUTER_WINS, PLAYER_WINS, and QUIT can be used to enhance the readability of the program. It is customary to name constants with all capital letters. In Python it is up to the programmer to never assign a new value to a constant, since the language does not provide an easy way to enforce this (many other programming languages do).
- We took the version of the program developed in section 8.8 and wrapped it in a function named play_round(). play_round makes use of the constants defined at the top of the program. It is much easier to remember COMPUTER_WINS than it is the arbitrary numeric value assigned to it.
- A new function, play_game(), creates variables for player_score and comp_score. Using a while loop, it repeatedly calls play_round, checking the result of each call and updating the score appropriately. Finally, when either the player or computer reach 5 points, play_game returns the winner to the main body of the program, which then displays the winner and then quits.
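Based on that description, the outline of play_game is roughly the following; play_round is the single-round loop developed earlier, only its return value matters here, and the winning score of 5 comes from the text.

```python
def play_game():
    player_score = 0
    comp_score = 0

    while True:
        result = play_round()            # PLAYER_WINS, COMPUTER_WINS, or QUIT
        if result == PLAYER_WINS:
            player_score += 1
        elif result == COMPUTER_WINS:
            comp_score += 1
        else:
            return QUIT                  # the player chose to stop early
        if player_score == 5:
            return PLAYER_WINS
        if comp_score == 5:
            return COMPUTER_WINS
```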
There are two variables named result---one in the play_game function and one in the main body of the program. While they have the same name, they are in different namespaces, and bear no relation to each other. Each function creates its own namespace, and names defined within the body of the function are not visible to code outside the function body. Namespaces will be discussed in greater detail in the next chapter.
- What happens when you press the key while running mitt.py? List the two lines from the program that produce this behavior and explain how they work.
- What is the name of the counter variable in guess.py? With a proper strategy, the maximum number of guesses required to arrive at the correct number should be 11. What is this strategy?
- What happens when the mitt in mitt.py gets to the top or bottom of the graphics window? List the lines from the program that control this behavior and explain in detail how they work.
- Change the value of ball1_dx in collide.py to 2. How does the program behave differently? Now change ball1_dx back to 4 and set ball2_dx to -2. Explain in detail how these changes affect the behavior of the program.
- Comment out (put a # in front of the statement) the break statement in collide.py. Do you notice any change in the behavior of the program? Now also comment out the remove_from_screen(ball1) statement. What happens now? Experiment with commenting and uncommenting the two remove_from_screen statements and the break statement until you can describe specifically how these statement work together to produce the desired behavior in the program.
Where can you add the lines to the version of catch.py in section 8.8 so that the program displays this message when the ball is caught?
- Trace the flow of execution in the final version of catch.py when you press the escape key during the execution of play_round. What happens when you press this key? Why?
- List the main body of the final version of catch.py. Describe in detail what each line of code does. Which statement calls the function that starts the game?
- Identify the function responsible for displaying the ball and the mitt. What other operations are provided by this function?
Which function keeps track of the score? Is this also the function that displays the score? Justify your answer by discussing specific parts of the code which implement these operations.
Pong was one of the first commercial video games. With a capital P it is a registered trademark, but pong is used to refer to any of the table-tennis-like paddle and ball video games.
catch.py already contains all the programming tools we need to develop our own version of pong. Incrementally changing catch.py into pong.py is the goal of this project, which you will accomplish by completing the following series of exercises:
- Copy catch.py to pong1.py and change the ball into a paddle by using Box instead of the Circle. You can look at Appendix A for more information on Box. Make the adjustments needed to keep the paddle on the screen.
Copy pong1.py to pong2.py. Replace the distance function with a boolean function hit(bx, by, r, px, py, h) that returns True when the vertical coordinate of the ball (by) is between the bottom and top of the paddle, and the horizontal location of the ball (bx) is less than or equal to the radius (r) away from the front of the paddle. Use hit to determine when the ball hits the paddle, and make the ball bounce back in the opposite horizontal direction when hit returns True. Your completed function should pass these doctests:
Finally, change the scoring logic to give the player a point when the ball goes off the screen on the left.
Copy pong2.py to pong3.py. Add a new paddle on the left side of the screen which moves up when 'a' is pressed and down when 's' is pressed. Change the starting point for the ball to the center of the screen, (400, 300), and make it randomly move to the left or right at the start of each round. | https://en.m.wikibooks.org/wiki/How_to_Think_Like_a_Computer_Scientist:_Learning_with_Python_2nd_Edition/Case_Study:_Catch |
4.03125 | Respect - a way of life
respect; regard; caring; working; together; honesty; rights; tolerance; discrimination; honesty ;
What is respect?
At the start of the school year, did you spend some time in your class talking about how the class would work?
If you did, I guess that you talked about some really great values like honesty, sharing and helping, responsibility, collaborating (or working together), organisation, and respect.
When you think about it, respect is probably the most important.
Respect has several meanings.
- Having regard for others. That means accepting that other people are different but just as important as you feel you are. Some people may call this tolerance (say tol-er-ans)
- Having a proper respect for yourself. That means that you stand up for yourself and don't let yourself be talked into doing stuff that you know is wrong or makes you feel uncomfortable.
- Not interfering with others (or their property.)
- To consider something worthy of high regard. That really means taking all those other values and living them.
Home is the place where you first learn about respect.
- You learn about using good manners, like saying 'please' and 'thank you'.
- You learn to share things like toys, games and food with other people in your family.
- You learn to look after your own things and take care of other things in the house (eg. not jumping on furniture, and wiping your feet etc, so that the house is a good place for everyone to be).
- You learn to wait your turn in talking.
- You learn to listen.
- You learn to understand that you will not always get what you want.
- You learn to respect others by helping with chores and not letting the family down.
- You learn to respect others in the community where you live.
- You learn how to talk to different adults in a way they expect to be spoken to eg grandma and her friends may not like to be called by their first name.
When you go to school you will have to learn some different ways to respect others and yourself.
- You will learn how to be a member of a class.
- You will learn how to behave with teachers and other 'school adults'.
- You learn to respect and keep school rules, which help to make your school a safe and caring place for everyone.
- You will learn to respect the property of classmates and the school.
- You will meet with people from different backgrounds, maybe different countries, cultures and religions.
- Some people will look very different to you and your family.
- Some people will behave very differently to you and your family.
- You can respect their differences and expect that they will respect yours.
If people are behaving badly towards you and hurting you or your feelings, then you cannot, and must not, respect their unkind behaviour.
Bullying and harassment should never be tolerated.
And of course you will not behave in an unkind way towards others, including spreading nasty rumours or gossip.
See our topic Dealing with bullies for some ideas on how to deal with this behaviour.
respect for yourself
Earning respect from yourself is probably harder than earning respect from others.
Remember those values again?
- If you aim to be an honest, caring person who accepts that everyone is different, always tries hard and is willing to share and help others, then living up to your aims can be very difficult.
- Don't give yourself too hard a time if you sometimes make mistakes. Mistakes are what we learn from.
- Earning respect from others is easy if you live by the values we talked about at the beginning of this topic. People will soon know that you are the kind of person who can be trusted to do the right thing, behave in a caring way and respect others' rights to be themselves.
Equity for everyone
Say sorry, please and thank you
People deserve respect
Ensure that everyone's rights are respected
Carry respect into all of your life
Take time to respect yourself
Kim and Kate say
"Make respect part of your life.
As you grow older and move out more into the world you will meet lots of different people. We live in a very diverse society and if you have learned to respect others then you will be able to fit in well with that society."
Check out the Related topics list under the Feedback button to find out more about why Respect should always be a way of life.
Outside everyone is different
Inside we're just the same.
Everyone has feelings.
How many can you name?
The way that you treat others
Is the way that they'll treat you.
So respect each other's differences
And they'll respect yours too.
We've provided this information to help you to understand important things about staying healthy and happy. However, if you feel sick or unhappy, it is important to tell your mum or dad, a teacher or another grown-up. | http://www.cyh.com/HealthTopics/HealthTopicDetailsKids.aspx?p=335&np=287&id=2356 |
4.15625 | Definition - What does Transport Layer mean?
The transport layer is the layer in the Open Systems Interconnection (OSI) model responsible for end-to-end communication over a network. It provides logical communication between application processes running on different hosts within a layered architecture of protocols and other network components.
The transport layer is also responsible for the management of error correction, providing quality and reliability to the end user. This layer enables the host to send and receive error corrected data, packets or messages over a network and is the network component that allows multiplexing.
In the OSI model, the transport layer is the fourth layer of this network structure.
Techopedia explains Transport Layer
The transport layer works transparently with the layers above it to deliver and receive data without errors. The sending side breaks application messages into segments and passes them on to the network layer. The receiving side then reassembles segments into messages and passes them to the application layer.
The transport layer can provide some or all of the following services:
- Connection-Oriented Communication: Devices at the endpoints of a network communication establish a handshake protocol to ensure a connection is robust before data is exchanged (a minimal socket sketch illustrating this appears after this list). The weakness of this method is that each delivered message requires an acknowledgment, adding considerable network load compared with self-error-correcting packets. The repeated requests cause a significant slowdown of network speed when defective byte streams or datagrams are sent.
- Same Order Delivery: Ensures that packets are always delivered in strict sequence. Although the network layer is responsible, the transport layer can fix any discrepancies in sequence caused by packet drops or device interruption.
- Data Integrity: Using checksums, the data integrity across all the delivery layers can be ensured. These checksums guarantee that the data transmitted is the same as the data received through repeated attempts made by other layers to have missing data resent.
- Flow Control: Devices at each end of a network connection often have no way of knowing each other's capabilities in terms of data throughput and can therefore send data faster than the receiving device is able to buffer or process it. In these cases, buffer overruns can cause complete communication breakdowns. Conversely, if the receiving device is not receiving data fast enough, this causes a buffer underrun, which may well cause an unnecessary reduction in network performance.
- Traffic Control: Digital communications networks are subject to bandwidth and processing speed restrictions, which can mean a huge amount of potential for data congestion on the network. This network congestion can affect almost every part of a network. The transport layer can identify the symptoms of overloaded nodes and reduced flow rates.
- Multiplexing: The transmission of multiple packet streams from unrelated applications or other sources (multiplexing) across a network requires dedicated control mechanisms, which are found in the transport layer. This multiplexing allows the use of simultaneous applications over a network, such as when different internet browsers are opened on the same computer. In the OSI model, this multiplexing is itself a service of the transport layer, typically implemented with port numbers.
- Byte orientation: Some applications prefer to receive byte streams instead of packets; the transport layer allows for the transmission of byte-oriented data streams if required.
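The sketch below illustrates connection-oriented, ordered byte-stream transport using TCP, the best-known transport-layer protocol; the host name and port are placeholders only.

```python
import socket

# Open a connection-oriented (TCP) byte stream; the handshake happens here.
# "example.com" and port 80 are placeholders, not a recommendation.
with socket.create_connection(("example.com", 80), timeout=5) as conn:
    conn.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    reply = conn.recv(4096)          # bytes arrive error-checked and in order
    print(reply.decode("ascii", errors="replace"))
```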
| https://www.techopedia.com/definition/9760/transport-layer |
4.03125 | Daily in March and April
11:15 am and 2:15 pm
Children are natural mathematicians. They eagerly embrace math as they sort, count, measure, compare, match and problem solve in their everyday play. And that free-form fun translates into an early grasp of math concepts that builds lifelong math success!
Throughout March and April, we're doing up math BIG—and small. Join us every day for different math activities calculated to inspire a love of math in every child. Here are some highlights:
Size, shape, direction and position are basic concepts of geometry. In this activity, kids use geometric thinking to solve a puzzle or form a cat, sailboat, square or other "picture" using the same seven shapes.
Skill Sets: Geometry, spatial thinking
Spatial thinking is essential not only for school success, particularly in the STEM areas, but also for everyday life—assembling a model airplane, navigating a new town, remembering where the car is parked. Kids can transform a 2D greeting card into a 3D, lidded box, and see math magic at work!
Skill Sets: 2D, 3D, diagonal, center, size, shape
Make Your Own Play dough
Recipes are ideal for introducing children to early math concepts, such as measurement, counting and sequence. Measure, measure, mix, mix—voila, you've just made play dough!
Skill Sets: Counting, measuring, sequence
Children can sort by color, create patterns and become familiar with basic shapes as they engage in a simple hammering activity. They’ll build spatial awareness and muscles at the same time!
Skill Sets: Sets, attributes, shapes, patterns
Early math concepts lay the groundwork for scientific inquiry. Children can estimate, test, measure and compare which shapes fly the farthest using our powerful wind machine.
Skill Sets: Shape, size, measurement, farther, closer, comparison, estimation
Numbers gain meaning when they're represented by real objects. Kids can measure the circumference of their heads, and then see how many quarters it takes to make it all the way around!
Skill Sets: Number sense, measurement, comparing, adding
Make a Pattern
Patterns are sequences with an underlying rule. Children can use stamps to make a pattern. See if you or a sibling can crack the code to figure out what comes next. Then switch places!
Skill Sets: Making patterns, identifying and creating rules
Silly Pets in a Pen
Children incorporate geometric, sequencing, and counting principles to connect all four sides of a square to create a pen for a pretend pet.
Skill Sets: Shapes, counting, comparing, sets
Fraction Action Pizzeria
An early understanding of fractions paves the way for more complex math functions. The best way to communicate fraction sense to young children is visually, using concrete, familiar objects. Enter the pizza! Kids will learn the basics of fractions with the aid of our giant pizza pie.
Skill Sets: Fractions, matching, whole, part
Family Math Day
Thursday, March 31, 1-7 pm
Join us for an afternoon of fun, hands-on math activities designed for the whole family! Visitors will have the opportunity to play some tantalizingly tricky math games, centered on ancient mathematical games from around the world. This all-ages program is free with museum admission.
Family Math Day is made possible by MIND Research Institute's MathMINDS Initiative.
Enhancing Math Talk in Classrooms and Homes
Exploring the Math in Play
Early Math Guide for Parents of Preschoolers
Math at Play Videos | http://www.chicagochildrensmuseum.org/index.php/experience/cardboard-adventures-in-cardboard |
4.375 | Relationships can be any association between sets of numbers while functions have only one output for a given input. This tutorial works through a bunch of examples of testing whether something is a valid function. As always, we really encourage you to pause the videos and try the problems before Sal does!
Common Core Standard: 8.F.A.1
Testing if a relationship is a function
Relations and Functions
Sal checks whether a given set of points can represent a function. For the set to represent a function, each domain element must have one corresponding range element at most.
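The same test is easy to state in code: a set of points represents a function exactly when no input value is paired with two different outputs. A small sketch:

```python
def is_function(pairs):
    """Return True if the list of (x, y) pairs assigns at most one y to each x."""
    seen = {}
    for x, y in pairs:
        if x in seen and seen[x] != y:
            return False          # the same input maps to two different outputs
        seen[x] = y
    return True

print(is_function([(1, 2), (3, 4), (5, 2)]))   # True
print(is_function([(1, 2), (1, 3)]))           # False: input 1 has two outputs
```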
Sal checks whether a table of people and their heights can represent a function that assigns a height to a name.
Sal checks whether y can be described as a function of x if y is always three more than twice x.
Sal explains why a vertical line *doesn't* represent a function.
Determine whether a table of values of a relationship represents a function.
Sal checks whether a description of the price of an order can be represented as a function of the shipping cost.
Determine whether a given graph represents a function. | https://www.khanacademy.org/math/cc-eighth-grade-math/cc-8th-linear-equations-functions/cc-8th-function-intro |
4.34375 | The three-dimensional coordinate system has three mutually perpendicular axes, x, y, and z. Here, a point needs three coordinates to be defined completely. The coordinates of the point P are expressed as P(x, y, z). Plotting P in 3D is somewhat similar to plotting P in two dimensions, only an extra axis and its coordinate have to be kept in mind.
Consider the point P’(2,-3,4). Draw the three coordinate axes (refer to attached image), and note the positive and negative directions of all the axes. To plot the point (2,-3,4) notice that x=2, y=-3 and z=4. To help visualize the point, first locate the point (2,-3) in the xy-plane. It is represented by a cross in the attached image. The point P’(2,-3,4) will be 4 units above the cross, along the z-axis (represented by the bold circle in the image).
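If you want to reproduce the plot yourself, a short matplotlib sketch (our choice of library, not part of the original answer) marks the cross at (2, -3) in the xy-plane and the point 4 units above it:

```python
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(projection="3d")

ax.scatter(2, -3, 0, marker="x", color="gray")          # the cross in the xy-plane
ax.scatter(2, -3, 4, marker="o", color="black")         # the point P'(2, -3, 4)
ax.plot([2, 2], [-3, -3], [0, 4], linestyle="dotted")   # the 4-unit rise along z

ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
plt.show()
```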
| http://www.enotes.com/homework-help/how-do-you-pot-point-3d-plane-441851 |
4.03125 | Eye Level Math helps improve problem-solving skills by enabling students to master concepts through a small-step approach.
Development of Mathematical Thinking
Basic Thinking Math enables students to complete the foundation of mathematics and covers the following study areas: Numbers, Arithmetic, Measurement, and Equations.
Critical Thinking Math enables students to develop depth perception, problem solving, reasoning skills, and covers the following study areas: Patterns and Relationships, Geometry, Measurement, Problem Solving, and Reasoning.
- Mastery learning with BTM repetition
- Maximization of motivation by online program
- Maximization of learning effect by using auto scoring and instant feedback system
- Arithmetic game activity
- Easy access on accumulated records
- Simultaneous learning of BTM & CTM
- Learning of new concepts
- Improvement of problem solving skill with
various supplementary materials
- Well systemized assessment
What are the benefits of Eye Level Math?
1. Systematic study materials for all levels
Eye Level Math uses a systematic curriculum which is divided into various levels according to student abilities. This allows students to fully understand and master the required mathematical concepts in a progressive manner.
2. Study materials that develop the ability to solve problems independently
The Eye Level curriculum is progressive. Subtle increases in difficulty in each level make it easy for all students to learn. This allows students to become comfortable with all necessary concepts before proceeding to the next level. Students will be able to solve questions that are presented as variations of similar concepts.
3. An interactive teaching methodology that incorporates proactive feedback
Eye Level is a proactive learning process. Students receive continual, ongoing feedback from our instructors to enhance the student learning process. Instructors also work with parents to maximize feedback.
Communication is an integral part of education. A positive environment makes learning optimal for all students. In this case, students are able to learn from both their parents and instructors.
4. Eye Level helps students develop their critical and analytical thinking skills.
The active use of learning materials creates a learning environment where students will develop critical and analytical thinking skills. This is accomplished through developing depth perception, location and spatial relationship skills, by utilizing our learning materials such as Numerical Figures, Blocks and Shapes, Clear Paper, Colored Blocks, Mirror, and Wooden Blocks. Difficulty and question variations are introduced systematically throughout all levels.
5. Eye Level allows students to utilize their skills in all areas of study.
Performing well in Eye Level not only helps students in mathematics, but is also helpful for applying their knowledge to other areas of academic studies. Skills that students will develop in Eye Level are broad. In most cases, students will be ahead of their class and their peers. Ideally, students will advance faster in all areas of academic studies and thus become more confident in all areas of study. | http://www.myeyelevel.com/America/programs/Math.aspx |
4.1875 | Appendix:Latin cardinal numerals
When someone counts items, that person uses cardinal values. In grammatical terms, a cardinal numeral is a word used to represent such a countable quantity. The English words one, two, three, four, etc. are all examples of cardinal numerals.
In Latin, most cardinal numerals behave as indeclinable adjectives. They are usually associated with a noun that is counted, but do not change their endings to agree grammatically with that noun. The exceptions are ūnus (“one”), duo (“two”), trēs (“three”), and multiples of centum (“hundred”), all of which decline. Additionally, although mīlle (“thousand”) is an indeclinable adjective in the singular, it becomes a declinable noun in the plural. These exceptions are further explained in later sections.
| Value | Roman | Latin | Value | Roman | Latin | Value | Roman | Latin | Value | Roman | Latin |
| 1 | I | ūnus, ūna, ūnum | 11 | XI | ūndecim | 10 | X | decem | 100 | C | centum |
| 2 | II | duo, duae, duo | 12 | XII | duodecim | 20 | XX | vīgintī | 200 | CC | ducentī, -ae, -a |
| 3 | III | trēs, tria | 13 | XIII | tredecim | 30 | XXX | trīgintā | 300 | CCC | trecentī, -ae, -a |
| 4 | IV | quattuor | 14 | XIV | quattuordecim | 40 | XL | quadrāgintā | 400 | CD | quadringentī, -ae, -a |
| 5 | V | quīnque | 15 | XV | quīndecim | 50 | L | quīnquāgintā | 500 | D | quīngentī, -ae, -a |
| 6 | VI | sex | 16 | XVI | sēdecim | 60 | LX | sexāgintā | 600 | DC | sescentī, -ae, -a |
| 7 | VII | septem | 17 | XVII | septendecim | 70 | LXX | septuāgintā | 700 | DCC | septingentī, -ae, -a |
| 8 | VIII | octō | 18 | XVIII | duodēvīgintī | 80 | LXXX | octōgintā | 800 | DCCC | octingentī, -ae, -a |
| 9 | IX | novem | 19 | XIX | ūndēvīgintī | 90 | XC | nōnāgintā | 900 | CM | nōngentī, -ae, -a |
The smaller cardinal numerals, from ūnus (“one”) to vīgintī (“twenty”), have spellings and forms that are not easily predictable and therefore must be learned by students of Latin. Larger cardinal numerals follow more regular patterns of assembly.
Inflection : The Latin ūnus (“one”) inflects like an irregular first and second declension adjective. The irregularities occur in the singular genitive, which ends in -īus instead of the usual -ī or -ae, and in the singular dative, which ends in -ī instead of the usual -ō or -ae.
The choice of ending will agree with the gender of the associated noun: ūnus equus ("one horse"), ūna clāvis ("one key"), ūnum saxum ("one stone"). The ending will also agree with the grammatical case of the associated noun: ūnīus equī (genitive), ūnam clāvem (accusative), ūnī saxō (dative).
Plural : Although it may seem strange at first sight, ūnus does have a set of plural forms. These forms are used when the associated noun has a plural form, but an inherently singular meaning. For example, the Latin noun castra (“camp”) occurs only as a plural neuter form and takes plural endings, even though it identifies one object, hence: ūnōrum castrōrum ("of one camp").
Compounds : When ūnus is used to form compound numerals, such as ūnus et vīgintī ("twenty-one"), the case and gender agree with the associated noun, although the singular is used: vīgintī et ūnam fēminās vīdī (“I saw twenty-one women”). Unlike duo and trēs, the word ūnus is almost never used with mīlle (“thousand”) to indicate how many thousand.
| case | masculine | feminine | neuter |
| nom | duo | duae | duo |
| gen | duōrum (duûm) | duārum | duōrum (duûm) |
| dat | duōbus | duābus | duōbus |
| acc | duōs / duo | duās | duo |
| abl | duōbus | duābus | duōbus |
Inflection : The Latin duo (“two”) has a highly irregular inflection, derived in part from the old Indo-European dual number. While some of the endings resemble those of a first and second declension adjective, others resemble those of a third declension adjective.
The choice of ending will agree with the gender of the associated noun, which will necessarily be plural: duo equī ("two horses"), duae clāvēs ("two keys"), duo saxa ("two stones"). The ending will also agree with the grammatical case of the associated noun: duōs equōs (accusative), duārum clāvum (genitive), duōbus saxīs (dative).
Compounds : When duo is used to form compound numerals, such as duo et vīgintī or vīgintī duo ("twenty-two"), the case and gender agree with the associated noun. This is also the case when used with the plural of mīlle (“thousand”) to indicate how many thousands: duo mīlia ("two thousands"), duōrum mīlium ("of two thousands").
The choice of ending will agree with the gender of the associated noun, which will necessarily be plural: trēs equī ("three horses"), trēs clāvēs ("three keys"), tria saxa ("three stones"). The ending will also agree with the grammatical case of the associated noun: trēs equōs (accusative), trium clāvum (genitive), tribus saxīs (dative).
Compounds : When trēs is used to form compound numerals, such as trēs et vīgintī or vīgintī trēs ("twenty-three"), the case and gender agree with the associated noun. This is also the case when used with the plural of mīlle (“thousand”) to indicate how many thousands: tria mīlia ("three thousands"), trium mīlium ("of three thousands").
IV to XX
Many of these numerals are mirrored in English words (such as quadrangle, quintuplet, sextuple, octopus). The numerals for 7 through 10 appear in the English names of months (September, October, November, and December). These months were the seventh through tenth of the Roman calendar, since the Roman year began with mārtius (“March”).
Teens : Latin cardinals larger than decem (“ten”) but less than vīgintī (“twenty”) are constructed by addition. The ending -decim (a form of decem) is attached to the numerals ūnūs through novem. The resultant compound carries the same value as the mathematical sum of the components. For example quattuordecim (“fourteen”) is quattuor (“four”) + decem (“ten”). English does much the same by attaching -teen (a form of ten) to smaller numerals, such as the numeral fourteen which is four + ten.
In some of these compounds, a spelling and pronunciation change occurs during the attachment, so that sex + decem drops the -x and lengthens the e to yield sēdecim. This kind of change also occurs in English, as in five + ten which softens the sound of the v and drops the e to yield fifteen.
Exceptions : There are two exceptions to the general pattern for forming the teens. In Classical Latin, the numerals for 18 and 19 are more frequently written as subtractive compounds. So, although 18 may be written as octōdecim, it is more often written as duodēvīgintī (literally "two from twenty"). Likewise, the numeral for 19 may be written as novemdecim, but is more often encountered as ūndēvīgintī (“one from twenty”).
For more information about the subtractive pattern of construction, see the section on "counting backwards".
Multiples of ten
| 10 | X | decem | 60 | LX | sexāgintā |
| 20 | XX | vīgintī | 70 | LXX | septuāgintā |
| 30 | XXX | trīgintā | 80 | LXXX | octōgintā |
| 40 | XL | quadrāgintā | 90 | XC | nōnāgintā |
| 50 | L | quīnquāgintā | | | |

Multiples of one hundred
| 100 | C | centum 1 | 600 | DC | sescentī, -ae, -a |
| 200 | CC | ducentī, -ae, -a | 700 | DCC | septingentī, -ae, -a |
| 300 | CCC | trecentī, -ae, -a | 800 | DCCC | octingentī, -ae, -a |
| 400 | CD | quadringentī, -ae, -a | 900 | CM | nōngentī, -ae, -a |
| 500 | D | quīngentī, -ae, -a | 1000 | M | mīlle, mīlia (mīllia) 2 |

1 centum does not inflect.
2 see the following section on mīlle.
| case | singular (indeclinable adjective) | plural (noun) |
| nom | mīlle | mīlia |
| gen | mīlle | mīlium |
| dat | mīlle | mīlibus |
| acc | mīlle | mīlia |
| abl | mīlle | mīlibus |
The Latin mīlle (“thousand”) is irregular in that it has two forms. In the singular, it is an indeclinable adjective, but in the plural it is a noun that declines like a third declension neuter i-stem. Notice that the genitive plural ending is -ium.
Singular : In the singular, mīlle (“thousand”) functions as an adjective. This singular form is indeclinable, so its ending will remain the same rather than agree with the case or gender of the associated noun. However, the associated noun will necessarily be plural: mīlle equī ("thousand horses"), mīlle clāvēs ("thousand keys"), mīlle saxa ("thousand stones"). This is true regardless of the case or gender of the associated noun.
Plural : In the plural, mīlia functions as a noun, and will inflect according to how it is used in the sentence (subject, direct object, etc.). The associated noun being counted will necessarily be in the genitive plural, and so will not agree with the grammatical case of mīlia. Note that, if the numeral before mīlia is duo or trēs, then it will take a neuter form in the same grammatical case as mīlia : octō mīlia equōrum (nominative, "eight thousand of horses"), cum tribus mīlibus clāvum (ablative, "with three thousand of keys"), duōrum mīlium saxōrum (genitive, "of two thousand of stones").
Latin cardinal numerals larger than vīgintī (“twenty”), that are not multiples of ten, are assembled as compound words. The components of these compounds are the numerals ūnus (“one”) through novem (“nine”) and the multiples of decem (“10”), the multiples of centum (“100”), and mīlle (“1000”).
Compound numerals in Latin are assembled by one of two basic methods: additive or subtractive. Most compound numerals are additive, meaning that the value of the compound numeral is calculated by adding the values of the component words. However, a few Latin compound numerals are subtractive, meaning that the value of the compound numeral is calculated by subtracting the values of the component words. A large-valued compound numeral may incorporate both additive and subtractive components.
Of the Latin compound numerals less than centum (“100”), seventeen are normally subtractive. All of these special cases represent values that are one or two less than a multiple of ten, and have names that subtract from a starting value rather than adding to that value. These seventeen exceptions fall into two patterns, described below: values that are eight more than a multiple of ten (two less than the next multiple) and values that are nine more than a multiple of ten (one less than the next multiple). Note that the compound numeral for 98 is not among the special cases, but instead is formed in the usual additive way. Subtractive compounds normally are written as single words (with no spaces) and are indeclinable.
Numerals representing cardinal values that are eight more (two less) than a multiple of ten are constructed literally as duodē- (“two from”) followed by the next multiple of ten.
Thus, the numeral for 38 is normally written as duodēquadrāgintā (“two from forty”), rather than as the expected trīgintā octō (“thirty-eight”) or octō et trīgintā (“eight and thirty”). The latter two additive forms are possible, but are not found in Classical Latin as frequently as the subtractive form.
Numerals representing cardinal values that are nine more (one less) than a multiple of ten are constructed literally as ūndē- (“one from”) followed by the next multiple of ten.
Thus, the numeral for 39 is normally written as ūndēquadrāgintā (“one from forty”), rather than as the expected trīgintā novem (“thirty-nine”) or novem et trīgintā (“nine and thirty”). The latter two additive forms are possible, but are not found in Classical Latin as frequently as the subtractive form.
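As an added illustration (not part of the original appendix), the additive and subtractive assembly rules for 1–99 can be written out in a few lines of Python; macrons are omitted and the gender and case variation of ūnus, duo, and trēs is ignored, so every numeral is treated as a plain lowercase string.

```python
UNITS = ["", "unus", "duo", "tres", "quattuor", "quinque",
         "sex", "septem", "octo", "novem"]
TEENS = ["decem", "undecim", "duodecim", "tredecim", "quattuordecim",
         "quindecim", "sedecim", "septendecim", "duodeviginti", "undeviginti"]
TENS = ["", "decem", "viginti", "triginta", "quadraginta", "quinquaginta",
        "sexaginta", "septuaginta", "octoginta", "nonaginta", "centum"]

def latin_cardinal(n):
    """Return a Classical Latin cardinal numeral for 1 <= n <= 99 (no macrons)."""
    tens, units = divmod(n, 10)
    if tens == 0:
        return UNITS[units]
    if tens == 1:
        return TEENS[units]            # 10-19, including duodeviginti / undeviginti
    if units == 0:
        return TENS[tens]
    if units == 9:                     # one less than the next multiple of ten
        return "unde" + TENS[tens + 1]
    if units == 8 and n != 98:         # two less than the next multiple; 98 stays additive
        return "duode" + TENS[tens + 1]
    return TENS[tens] + " " + UNITS[units]   # plain additive form, e.g. "triginta octo"

print(latin_cardinal(18))   # duodeviginti
print(latin_cardinal(38))   # duodequadraginta
print(latin_cardinal(98))   # nonaginta octo
print(latin_cardinal(99))   # undecentum
```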
Numbers are almost always treated as adjectives, and often come before the noun. They may be used alone as substantive nouns, but as most are indeclinable, this tends to be ambiguous. Mille behaves differently; in the plural, as milia, the noun being counted must be in the genitive plural. For example, "two thousand soldiers" would be "duo milia militum" (literally, "two thousands of soldiers"). Thus a mile is mille passūs (literally, "a thousand paces"), but two miles is duo milia passuum (literally, "two thousands of paces").
To denote one's age, which in English is expressed in the construction I am ... years old, in Latin one would most commonly say Habeo ... annos (literally, "I have ... years"). The numeral is in the accusative plural, if it declines.
See also: Roman numerals on Wikipedia. | https://en.wiktionary.org/wiki/Appendix:Latin_cardinal_numerals
4.15625 | Energy How Solar Arrays Are Made A new lab is inventing alternative ways to package and install solar cells. by Kevin Bullis June 21, 2011 Sponsored by Once the cells are sorted by power output, another researcher, Adam Stokes, strings them together with a tool that solders flat strips of metal called busbars to electrical contacts on their front and back. The lab can test different ways to connect the cells, varying factors such as the number and type of busbars and then measuring the resulting performance to determine whether any extra costs are worthwhile. Researchers sandwich a short string of solar cells between glass and a protective film, a process designed to keep the cells dry. This panel will be small enough to fit in one of the specialized chambers the lab uses to test new materials being considered for adoption by the solar industry. A large laminating machine operated by Dan Doble, group leader for the PV Modules Group at Fraunhofer, seals solar cells inside a protective package. To earn back their cost, solar panels must perform well for decades, often under extreme conditions. If even a small amount of water vapor enters the panel, it can corrode contacts and degrade its performance. This chamber can subject solar panels to a wide range of temperatures and humidity levels. It includes a device invented at the Fraunhofer lab that presses on the surface of a panel by inflating a rubber bladder, simulating pressure from a load of snow. Solar power may be associated with warm, sunny climates, but some of the biggest markets are in snowy places such as Germany. Researchers Dan Doble and Carola Völker lower a solar panel into a tank of water to test how well the circuitry within it is sealed. A current of at least 500 volts are applied to the circuits, and an electrical lead in the water detects any current leakage. The test can help determine whether the panels are likely to survive exposure to extreme temperatures and mechanical pressure. The researchers also study micrographs to detect damage. In most solar panels, a hole is cut in a protective envelope surrounding the solar cells to allow a connection to an outside circuit. To speed manufacturing and avoid allowing water to leak in, the lab is developing a device (right) that can be installed before the cells are encapsulated. The yellow tabs can be inserted between a sheet of encapsulant and the cells and sealed in place during a standard lamination step. The cables sticking out of the device are connected to similar cables in neighboring solar panels on a roof before the panels are connected to an inverter and the power grid. In the current design, this is done by hand, but in a future design, the devices will snap together, allowing the panels to be installed quickly and cheaply. | https://www.technologyreview.com/s/424417/how-solar-arrays-are-made/ |
4.0625 | NPS Photo/Sarah Falzarano
Acoustic Technician Laura Levy samples sounds from Colorado River rapids.
In February 1919, the first air tour over the Grand Canyon was recorded; that fall the area was officially designated as Grand Canyon National Park. Fifty-six years later, the 1975 Grand Canyon National Park Enlargement Act established that where impacts from aviation occur, natural quiet should be protected as both a resource and a value in the park. Following the National Parks Overflights Act of 1987, the Federal Aviation Administration established a special flight rules area for the park. In an effort to restore natural quiet at Grand Canyon and to improve aviation safety, flights were restricted below 14,500 feet, flight-free zones were established, and special routes for commercial sightseeing tours were created. After another 20 years of interim regulations, congressional interest, departmental reports, negotiations and consultation, and the establishment of a National Park Service-Federal Aviation Administration Grand Canyon Working Group, Grand Canyon National Park is finally on the verge of completing an environmental impact statement to achieve substantial restoration of natural quiet at the park.
The 1975 Grand Canyon National Park Enlargement Act established that … natural quiet should be protected as both a resource and a value in the park.
So where’s the science? In 2003, the park’s Science and Resources Management Program recognized the critical need to establish a soundscape program to collect and analyze local acoustic data. The Grand Canyon Soundscape Program has since played an active support role in park planning to better steward park soundscapes. In support of overflights planning, Grand Canyon staff recorded 12 months of continuous audio data and measured decibel levels under air tour corridors (see photo). These data allowed park managers to determine natural sound levels for winter and summer seasons in four vegetation zones. Because NPS Management Policies states that the natural ambient sound level is the baseline condition or standard for determining impacts to soundscapes, these data provide park managers with essential information needed for soundscape planning in the park. Data were used to compare noise models, assess developed and transitional area soundscapes, and create visual spectrograms for aircraft audibility analysis. In order to assess impacts to the threatened Mexican spotted owl, acoustic data were collected adjacent to breeding sites; data are currently being analyzed using sound analysis software such as Raven (http://www.birds.cornell.edu/brp/raven/RavenOverview.html) to look for correlations between aircraft noise and the disturbance of birds.
In addition to overflights monitoring and management, the park has been interested in a variety of other planning and stewardship activities relating to soundscapes. Activities included collection and analysis of acoustic data from river rapids (see photo), fire-fighting equipment (see photo), and popular visitor use areas. Recently, a sound system was deployed at Tusayan Ruins, located near Desert View, to quantify noise from air tours interfering with ranger programs (using the U.S. Environmental Protection Agency criterion for speech interference for interpretive programs). In 2008 and 2009, soundscape staff collaborated with the Grand Canyon Youth program to develop a soundscape-themed science project for visually impaired teenagers. Outdoor recreation planning staff also used acoustic data to determine if helicopters exchanging river trip passengers are complying with Colorado River Management Plan guidelines. Finally, in an effort to support our neighboring parks, Grand Canyon National Park staff established 2007 baseline sound levels at Walnut Canyon National Monument prior to runway expansion at Flagstaff’s Pulliam Airport.
While the current focus of the park’s soundscape work relates to overflights planning, park staff hopes to broaden the program across all cultural and natural soundscape issues. Future efforts will include the development of a parkwide soundscape management plan and implementation of the overflights environmental impact statement.
For more information and copies of all park reports and publications, please visit our Web site at http://www.nps.gov/grca/naturescience/soundscape.htm.
Rodgers, J. 2010. Case Study: Soundscape management at Grand Canyon National Park. Park Science 26(3):46–47.
Accessed 11 February 2016 from http://www.nature.nps.gov/ParkScience/index.cfm?ArticleID=351. | http://www.nature.nps.gov/ParkScience/index.cfm?ArticleID=351 |
4 | Epilepsy surgery is a procedure that either removes or isolates the area of your brain where seizures begin. It is a treatment option for people whose seizures are not well controlled with medication. About 30% of people with epilepsy have seizures that are "medically intractable," meaning the seizures continue to happen despite trying 3 or more antiepileptic drugs . People who are considered for surgery undergo extensive testing to locate the source of their seizures and to ensure that removing that region of the brain will not impact their speech, mobility or quality of life [2,3].
What is epilepsy surgery?
Epilepsy surgery is a procedure to 1) remove the seizure-producing area of the brain or 2) limit the spread of seizure activity. Surgical results can be considered curative (stopping the seizures) or palliative (restricting the spread of the seizure). The type of surgery performed depends on the type of seizures and where they begin in the brain (Fig 1). Curative procedures, such as lobectomy, cortical excision, or hemispherectomy aim to remove the area of the brain (seizure focus) causing seizures. The goal is to remove all of the seizure focus area without causing loss of brain function. Palliative procedures, such as corpus callosotomy or vagus nerve stimulation (VNS), aim to reduce seizure frequency or severity.
Types of epilepsy surgery
Curative procedures are performed when tests consistently point to a specific area of the brain where the seizures begin.
- Temporal lobectomy is the most common type of surgery for people with temporal lobe epilepsy. It removes a part of the anterior temporal lobe along with the amygdala and hippocampus. A temporal lobectomy leads to a significant reduction or complete seizure control about 70% to 80% of the time [4, 5]. However, memory and language can be affected if this procedure is performed on the dominant hemisphere.
- Cortical excision is the second most common type of epilepsy surgery. It removes the outer layer (cortex) of the brain at the seizure focus area. About 40% to 50% of patients have better seizure control.
- Hemispherectomy involves the removal of the brain's outer layer (cortex) and anterior temporal lobe on one half of the brain. It is usually performed in children who suffer intractable seizures, have a damaged hemisphere, and experience weakness on one side of the body. Surgery may control seizures for nearly 80% of these patients. Patients often improve in cognitive functioning, attention span, and behavior.
Palliative procedures are performed when a seizure focus cannot be determined or it overlaps brain areas critical for movement, speech, or vision.
- Corpus callosotomy prevents the spread of generalized seizures from one side of the brain to the other by disconnecting the nerve fibers across the corpus callosum. During surgery the anterior two thirds of the corpus callosum is sectioned. On occasion, a second surgery is performed to cut the posterior one third if the patient does not improve. This surgery is not curative. Rather, it prevents the spread and reduces seizure severity. Some patients experience disconnection syndrome after a complete callosotomy. They may have right-left confusion with motor problems, apathy, or mutism.
- Multiple subpial transections create small incisions in the brain to interfere with the spread of seizure impulses. This technique is used when the seizure focus is located in a vital area that cannot be removed. It may be used alone or in combination with a lobectomy.
- Vagus nerve stimulation (VNS) involves implantation of a device that produces electrical signals to prevent seizures. VNS is similar to a heart pacemaker. A wire (lead) is wrapped around the vagus nerve in the neck. The wire is connected to a generator-battery implanted under the skin near the collarbone. The generator is programmed to produce intermittent electrical signals that travel along the vagus nerve to the brain. In addition, some patients may turn on the device with a magnet when feeling a warning (aura) that a seizure is about to start. VNS is not a cure for epilepsy; it does not work for everyone, and it does not replace the need for anti-epileptic drugs. This procedure is reserved for those who are not candidates for potentially curative brain surgery. VNS reduces seizure frequency by about 30% (similar to the results of the newer AEDs). Common side effects are a tingling sensation in the neck and mild hoarseness in the voice, both of which occur only during stimulation.
Who is a candidate?
Epilepsy surgery may be an option if you have:
- seizures that are uncontrolled with medications (intractable) or you have severe side effects to the medications
- partial seizures that always start in one area of the brain (localized seizure focus)
- seizures that significantly affect your quality of life
- seizures caused by a lesion such as scar tissue, a brain tumor, arteriovenous malformation (AVM), or birth defect
- seizure discharge that spreads to the whole brain (secondary generalization)
Most experts recommend that a patient who continues to have seizures after trials of 2 or 3 different medications should have an evaluation at a comprehensive epilepsy treatment program. The likelihood of seizure freedom after failure of 3 different medications is less than 5% [1, 2]. The epilepsy team typically consists of epileptologists (neurologists with special expertise in epilepsy), neurosurgeons, neuropsychologists, epilepsy nurse clinicians, and EEG technicians.
Patients are initially evaluated by an epileptologist. A complete medical history and physical exam helps identify critical information, such as age of onset and type of seizures (including frequency, severity, and duration) (see Seizures). A patient’s physical exam is usually normal. However, some asymmetries may be seen related to early development when the structural brain lesions formed. For example, a difference in the size of one hand or foot compared to the other may correlate with atrophy of one of the brain’s hemispheres.
The following diagnostic studies may be used during an evaluation for epilepsy surgery. Not all tests are required. The epilepsy team will decide which tests are appropriate.
- Continuous video-EEG monitoring requires a hospital stay in an epilepsy monitoring unit. For the EEG, a technician glues electrodes onto your scalp to record the electrical activity of the brain. With safe and continuous monitoring, movement/behavior and EEG activity are captured during a seizure with simultaneous recordings by video camera and electroencephalogram (EEG). Careful analysis of activity and brain waves both during and between seizures can provide critical information about where the seizure starts and spreads. Certain behaviors during seizures, such as abnormal posturing of an arm, or specific speech problems during or after a seizure, help your physician to identify where in the brain the seizure begins.
- Magnetic Resonance Imaging (MRI) helps identify structural brain abnormalities that can cause epilepsy. These include hippocampal atrophy, cavernous angiomas, cortical dysplasias, and tumors.
- Positron Emission Tomography (PET) allows the doctor to study brain function by observing how glucose (sugar) is metabolized in the brain. A small amount of radioactive glucose is injected into your bloodstream. The PET scanner takes pictures of the brain that are interpreted by a computer to examine glucose metabolism. Glucose use can increase (hypermetabolism) during a seizure and decrease (hypometabolism) when not having a seizure. These results may help locate areas of dysfunctional brain or other abnormalities, which could correspond to EEG localization of epileptogenic activity.
- Single-Photon Emission Computed Tomography (SPECT) provides information about blood flow to brain tissue. Analyzing blood flow to the brain may help determine how specific areas are functioning. Blood flow to an area of the brain during a seizure increases, while blood flow to an area of the brain can decrease when a person is not having a seizure.
- Neuropsychological testing evaluates your current level of brain functioning, including memory and language. This test might correlate with diagnostic imaging and EEG.
- Wada Test (Intracarotid Amytal test) is used to determine which side of your brain is dominant for language and memory function. Identifying the dominant side, the surgeon plans the operation to avoid affecting these functions. The Wada test, which is performed as part of an angiogram, can show any vascular or blood flow problems (see Angiogram). Sodium amytal is a short-acting barbiturate that is injected into the carotid artery on the right or left side. For a short time, the drug puts one half of the brain (hemisphere) to sleep. You cannot move one side of your body and may be unable to speak. Next, you are asked to identify pictures, words, objects, or numbers. After 5 to 10 minutes when the drug wears off, you are asked if you remember what was shown. The Wada test is then repeated on the other side. Used with neuropsychological testing, results of the Wada test help identify memory and language deficits and predict surgical outcome.
- Functional MRI (fMRI) is used to determine the location of brain abnormalities in relation to areas of the brain responsible for speech, memory, and movement. FMRI also helps doctors predict the functional outcome of surgical treatment. FMRI is sometimes used instead of a Wada test.
Electrical brain mapping, or electrocorticography, is a diagnostic test that may be necessary if the seizure focus is believed to lie close to important functional areas or if the exact location of the seizure focus remains unclear despite standard EEG and other tests. For these diagnostic tests, subdural or depth electrodes are placed directly on or in the brain through an opening in the skull (craniotomy).
- Subdural electrodes aligned on a plastic grid, are placed directly on the brain’s surface (Fig 2). Subdural electrodes allow for a wide area of EEG recording as well as cortical mapping of functional areas.
- Intracerebral depth electrodes look like a banded stick. These are placed stereotactically deep into the brain tissue, usually the amygdala and hippocampus of the temporal lobe. Depth electrodes are indicated for patients with bitemporal, bifrontal, or frontal temporal seizures.
After the electrodes are placed, the wound is completely closed and bandaged. The patient is then moved to the epilepsy monitoring unit (EMU). The EEG technician will connect the electrodes (via wires that pass through small incisions in the skin) to an EEG machine that shows the brain waves and seizure activity. The patient remains in the hospital until sufficient information has been gathered to guide further treatment (typically 5-10 days). If the seizure focus is found and is not in an area of the brain involved in communication, a second surgery may be recommended to remove that brain area. If the seizure focus is not found, the electrodes are removed, and a follow-up consultation with the epileptologist and neurosurgeon is arranged. Risks associated with electrical brain mapping include infection and hemorrhage in about 2% to 5% of cases.
The surgical decision
The epilepsy team meets to review all testing performed to decide if surgery is the best treatment option. All tests should point to a single region in the brain as the source for seizures. If this is the case, and the region of seizure onset is a safe distance away from areas of the brain that control language, movement, and vision, then surgery can be recommend to reduce or eliminate seizures.
Who performs epilepsy surgery?
Epilepsy surgery is done by a neurosurgeon specifically trained in this field. A patient should have a presurgical evaluation at a comprehensive epilepsy treatment program by a multidisciplinary team of specialists (neurologists, neurosurgeons, neuropsychologists, and nurse clinicians).
What happens before surgery?
First, in consultation during the office visit, the neurosurgeon will explain the procedure, its risks and benefits, and answer any questions. Next, you will sign consent forms and complete paperwork to inform the surgeon about your medical history (i.e., allergies, medicines, vitamins, bleeding history, anesthesia reactions, previous surgeries). Discuss all medications (prescription, over-the-counter, and herbal supplements) you are taking with your health care provider. Some medications need to be continued or stopped the day of surgery. You may be scheduled for presurgical tests (e.g., blood test, electrocardiogram, chest X-ray) several days before surgery.
Stop taking all non-steroidal anti-inflammatory medicines (Naprosyn, Advil, Motrin, Nuprin, Aleve) and blood thinners (coumadin, Plavix, aspirin) 1 week before surgery. Additionally, stop smoking and chewing tobacco 1 week before and 2 weeks after surgery as these activities can cause bleeding problems. No food or drink is permitted past midnight the night before surgery.
Morning of surgery
- Shower using antibacterial soap. Dress in freshly washed, loose-fitting clothing.
- Wear flat-heeled shoes with closed backs.
- If you have instructions to take regular medication the morning of surgery, do so with small sips of water.
- Remove make-up, hairpins, contacts, body piercings, nail polish, etc.
- Leave all valuables and jewelry at home (including wedding bands).
- Bring a list of medications (prescriptions, over-the-counter, and herbal supplements) with dosages and the times of day usually taken.
- Bring a list of allergies to medication or foods.
- Take your AED medication as usual.
Arrive at the hospital 2 hours before your scheduled surgery time to complete the necessary paperwork and pre-procedure work-ups. You will meet with a nurse who will ask your name, date of birth, and what procedure you’re having. They will explain the pre-operative process and discuss any questions you may have. An intravenous (IV) line will be placed in your arm. An anesthesiologist will talk with you and explain the effects of anesthesia and its risks.
What happens during surgery?
There are five main steps to the anterior temporal lobectomy. The surgery generally takes 3 to 4 hours.
Step 1: prepare the patient
You will lie on your back on the operative table and be given anesthesia. Once asleep, your head is placed in a skull fixation device attached to the table that holds your head in position during the surgery. Depending on where the incision will be made, your hair may be shaved.
Step 2: perform a craniotomy
After your scalp is prepped, the surgeon will make a skin incision to expose the skull. A circular opening in the skull, called a craniotomy, is drilled (see Craniotomy) (Fig 3). This bony opening exposes the protective covering of the brain, called the dura mater, which is opened with scissors.
Step 3: perform brain mapping
Depending on your specific case, intraoperative EEG recording and stimulation with subdural electrodes may be performed to map brain areas (Fig. 4), or reconfirm the epileptic zone, particularly how much of the lateral temporal cortex is involved. Using a small electrical probe, the surgeon tests locations on the brain’s surface one after another to create a map of functions. During mapping, areas involved with movement can be identified electrically even if the patient is under anesthesia. However, to map areas such as language, sensation, or vision, the patient is awakened to be able to communicate with the surgeon. Local anesthesia and numbing agents are given so you won’t feel any pain.
Step 4: remove the seizure focus area
Looking through an operative microscope, the surgeon gently retracts the brain and opens a corridor to the seizure focus area. The surgeon then removes that area of brain where seizures occur.
Step 5: close the craniotomy
The retractors are removed and the dura is closed with sutures. The bone flap is replaced and secured to the skull with titanium plates and screws. The muscles and skin are sutured back together.
What happens after surgery?
After surgery you'll be taken to the recovery room, where vital signs are monitored as you awaken from anesthesia. You'll be transferred to the neuroscience intensive care unit (NSICU) for overnight observation and monitoring. Pain medication will be given as needed. If you experience nausea and headache after surgery, medication can be given to control these symptoms. Once your condition is stable, you will be moved to a room on the Neuroscience floor where you will stay for about 1 to 3 days. If you had a VNS implanted, you may go home after recovery from anesthesia. It is important to work with your neurologist to adjust your medications and refine the programming of the neurostimulator.
- After surgery, headache pain is managed with narcotic medication. Because narcotic pain pills are addictive, they are used for a limited period (2 to 4 weeks). Their regular use may also cause constipation, so drink lots of water and eat high fiber foods. Laxatives (e.g., Dulcolax, Senokot, Milk of Magnesia) may be bought without a prescription. Thereafter, pain is managed with acetaminophen (e.g., Tylenol) and nonsteroidal anti-inflammatory drugs (NSAIDs) (e.g., aspirin; ibuprofen, Advil, Motrin, Nuprin; naproxen sodium, Aleve).
- A medicine (anticonvulsant) may be prescribed temporarily to prevent seizures. Common anticonvulsants include Dilantin (phenytoin), Tegretol (carbamazepine), and Neurontin (gabapentin). Some patients develop side effects (e.g., drowsiness, balance problems, rashes) caused by these anticonvulsants; in these cases, blood samples are taken to monitor the drug levels and manage the side effects.
- Do not drive after surgery until discussed with your surgeon and avoid sitting for long periods of time.
- Do not lift anything heavier than 5 pounds (e.g., 2-liter bottle of soda), including children.
- Housework and yardwork are not permitted until the first follow-up office visit. This includes gardening, mowing, vacuuming, ironing, and loading/unloading the dishwasher, washer, or dryer.
- Do not drink alcoholic beverages.
- Gradually return to your normal activities. Fatigue is common.
- An early exercise program to gently stretch the neck and back may be advised.
- Walking is encouraged; start with short walks and gradually increase the distance. Wait to participate in other forms of exercise until discussed with your surgeon.
- You may shower and shampoo 3 to 4 days after surgery unless otherwise directed by your surgeon.
- Sutures or staples, which remain in place when you go home, will need to be removed 7 to 14 days after surgery. Ask your surgeon or call the office to find out when.
When to Call Your Doctor
If you experience any of the following:
- A temperature that exceeds 101º F
- An incision that shows signs of infection, such as redness, swelling, pain, or drainage.
- If you are taking an anticonvulsant, and notice drowsiness, balance problems, or rashes.
- Decreased alertness, increased drowsiness, weakness of arms or legs, increased headaches, vomiting, or severe neck pain that prevents lowering your chin toward the chest.
Patients usually can resume their normal activities after 3 to 4 weeks. However, you cannot drive an automobile until you have approval from your neurologist. Doctors usually recommend that surgical patients stay on AEDs for up to two years after the operation. Some people may have to continue with medication indefinitely for seizure control. If language or memory problems continue past the recovery period, your doctor may recommend speech or physical therapy.
What are the risks?
No surgery is without risks. General complications of any surgery include bleeding, infection, blood clots, and reactions to anesthesia. Specific complications related to a craniotomy may include:
- swelling of the brain, which may require a second craniotomy
- nerve damage, which may cause muscle paralysis or weakness
- CSF leak, which may require repair
- loss of mental functions
- permanent brain damage with associated disabilities
Specific complications may include:
- Memory and language problems after temporal lobectomy.
- Temporary double vision after temporal lobectomy.
- Increased number of seizures after corpus callosotomy, but the seizures should be less severe.
- Reduced visual field after a hemispherectomy.
- Partial, one-sided paralysis after a hemispherectomy. Intense rehabilitation often brings back nearly normal abilities.
Sources & links
If you have more questions, please contact the Mayfield Clinic at 800-325-7787 or 513-221-1100. For information about the University of Cincinnati Neuroscience Institute’s Epilepsy Center, call 866-941-8264.
- Wiebe S, Blume WT, Girvin JP, Eliasziw M: A randomized, controlled trial of surgery for temporal-lobe epilepsy. N Engl J Med 345:311-8, 2001
- Yasuda CL, Tedeschi H, Oliveira EL, et al: Comparison of short-term outcome between surgical and clinical treatment in temporal lobe epilepsy: a prospective study. Seizure 15(1):35-40, 2006.
- Engel J Jr, Wiebe S, French J, Sperling M, et al: Practice parameter: temporal lobe and localized neocortical resections for epilepsy. Epilepsia 44(6):741-751, 2003.
- Sperling MR, O'Connor MJ, Saykin: Temporal lobectomy for refractory epilepsy. JAMA 276(6):470-475, 1996.
- Dupont S, Tanguy ML, Clemenceau S, et al: Long-term prognosis and psychosocial outcomes after surgery for MTLE. Epilepsia 47(12):2115-24, 2006.
- Schachter SC: Vagus nerve stimulation therapy summary. Neurology 59:S15-20, 2002.
antiepileptic drug (AED): a medication used to control epileptic seizures.
cortical mapping: direct brain recording or stimulation to identify language, motor, and sensory areas of the cortex.
cortex: the outer layer of the brain containing nerve cell bodies.
disconnection syndrome: the interruption of information transferred from one brain region to another.
generalized seizure: a seizure involving the entire brain.
hippocampal atrophy: a wasting or decrease in size of the hippocampus.
hypermetabolism: faster than normal metabolism.
hypometabolism: slower than normal metabolism.
ictal: that which happens during a seizure.
interictal: that which happens between seizures.
intractable: difficult to control.
localization: finding the location in the brain where epileptic seizures start.
lobectomy: surgical removal of a lobe of the brain.
seizure focus: a specific area of the brain where seizures begin.
palliative: to alleviate without curing.
partial seizure: a seizure involving only a portion of the brain.
video EEG monitoring: simultaneous monitoring of a patient's behavior with a video camera and the patient's brain activity by EEG.
reviewed by: Ellen Air, MD, PhD, David Ficker, MD
Mayfield Certified Health Info materials are written and developed by the Mayfield Clinic & Spine Institute in association with the University of Cincinnati Neuroscience Institute. This information is not intended to replace the medical advice of your health care provider. | http://www.mayfieldclinic.com/PE-EpilepsySurg.htm |
4.03125 | History of Baden-Württemberg
In the 1st century AD, Württemberg was occupied by the Romans, who defended their control of the territory by constructing a limes (fortified boundary zone). Early in the 3rd century, the Alemanni drove the Romans beyond the Rhine and the Danube, but they in turn succumbed to the Franks under Clovis I, the decisive battle taking place in 496. The area later became part of the Holy Roman Empire.
The history of Baden as a state began in the 12th century, as a fief of the Holy Roman Empire. As a fairly inconsequential margraviate that was divided between various branches of the ruling family for much of its history, it gained both status and territory during the Napoleonic era, when it was also raised to the status of grand duchy. In 1871, it became one of the founder states of the German Empire. The monarchy came to an end with the end of the First World War, but Baden itself continued in existence as a state of Germany until the end of the Second World War.
Württemberg, often spelled "Wirtemberg" or "Wurtemberg" in English, developed as a political entity in southwest Germany, with the core established around Stuttgart by Count Conrad (died 1110). His descendants expanded Württemberg while surviving Germany's religious wars, changes in imperial policy, and invasions from France. The state had a basic parliamentary system that changed to absolutism in the 18th century. It was recognised as a kingdom from 1806 to 1918; its territory now forms part of the modern German state of Baden-Württemberg, one of the 16 states of Germany, a relatively young federal state that has only existed since 1952. The coat of arms represents the state's several historical component parts, of which Baden and Württemberg are the most important.
- 1 Celts, Romans and Alemanni
- 2 Duchy of Swabia
- 3 Hohenstaufen, Welf and Zähringen
- 4 Further Austria and the Palatinate
- 5 Baden and Württemberg before the Reformation
- 6 Reformation period
- 7 Peasants' War
- 8 Thirty Years' War
- 9 Swabian Circle until the French Revolution
- 10 Southwest Germany up to 1918
- 11 German southwest up to World War II
- 12 Southwest Germany after the war
- 13 State of Baden-Württemberg from 1952 to the present
- 14 See also
- 15 References
Celts, Romans and Alemanni
The origin of the name "Württemberg" remains obscure. Scholars have universally rejected the once-popular derivation from "Wirth am Berg". Some authorities derive it from a proper name: "Wiruto" or "Wirtino," others from a Celtic place-name, "Virolunum" or "Verdunum". In any event, from serving as the name of a castle near the Stuttgart city district of Rotenberg, the name extended over the surrounding country and, as the lords of this district increased their possessions, so the name covered an ever-widening area, until it reached its present extent. Early forms included Wirtenberg, Wirtembenc and Wirtenberc. Wirtemberg was long accepted, and in the latter part of the 16th century Würtemberg and Wurttemberg appeared. In 1806, Württemberg became the official spelling, though Wurtemberg also appears frequently and occurs sometimes in official documents, and even on coins issued after that date.
Württemberg's first known inhabitants, the Celts, preceded the arrival of the Suebi. In the first century AD, the Romans conquered the land and defended their position there by constructing a rampart (limes). Early in the third century, the Alemanni drove the Romans beyond the Rhine and the Danube, but they in turn succumbed to the Franks under Clovis, the decisive battle taking place in 496. For about 400 years, the district was part of the Frankish empire and was administered by counts until it was subsumed in the ninth century by the German Duchy of Swabia.
Duchy of Swabia
The Duchy of Swabia is to a large degree comparable to the territory of the Alemanni. The Suevi (Sueben or Swabians) belonged to the tribe of the Alemanni, reshaped in the 3rd century. The name of Swabia is also derived from them. From the 9th century on, in place of the area designation "Alemania," came the name "Schwaben" (Swabia). Swabia was one of the five stem duchies of the medieval Kingdom of the East Franks, and its dukes were thus among the most powerful magnates of Germany. The most notable family to hold Swabia were the Hohenstaufen, who held it, with a brief interruption, from 1079 until 1268. For much of this period, the Hohenstaufen were also Holy Roman Emperors. With the death of Conradin, the last Hohenstaufen duke, the duchy itself disintegrated although King Rudolf I attempted to revive it for his Habsburg family in the late 13th century.
With the decline of East Frankish power, the House of Zähringen appeared poised to succeed to local power in southwestern Germany and, in the northwest, in the Kingdom of Arles. The founding of the city of Bern in 1191 by Berthold V, Duke of Zähringen, marks one of the House of Zähringen's centers of power. The area east of the Jura Mountains and west of the Reuss was described as Upper Burgundy, and Bern was part of the Landgraviate of Burgundy, which was situated on both sides of the Aar, between Thun and Solothurn. However, Berthold died without an heir. Bern was declared a free imperial city by Frederick II, Holy Roman Emperor, in 1218. Berthold's death without heirs meant the complete disintegration of southwest Germany and led to the development of the Old Swiss Confederacy and the Duchy of Burgundy. Bern joined Switzerland in the year 1353.
Swabia takes its name from the tribe of the Suebi, and the name was often used interchangeably with Alemannia during the existence of the stem-duchy in the High Middle Ages. Even Alsace belonged to it. Swabia was otherwise of great importance in securing the pass route to Italy. After the fall of the Staufers there was never again a Duchy of Swabia. The Habsburgs and the Württembergers endeavored in vain to resurrect it.
Hohenstaufen, Welf and Zähringen
Three of the noble families of the southwest attained a special importance: the Hohenstaufen, the Welf and the Zähringen. From the perspective of the time, the most successful appeared to be the Hohenstaufen, who, as dukes of Swabia from 1079 and as Frankish kings and emperors from 1138 to 1268, attained the greatest influence in Swabia. During the Middle Ages, various counts ruled the territory that now forms Baden, among whom the counts and duchy of Zähringen figure prominently. In 1112, Hermann, son of Hermann, Margrave of Verona (died 1074) and grandson of Berthold II, Duke of Carinthia and the Count of Zähringen, having inherited some of the German estates of his family, called himself Margrave of Baden. The separate history of Baden dates from this time. Hermann appears to have called himself "margrave" rather than "count," because of the family connection to the margrave of Verona.
His son and grandson, both called Hermann, added to their territories, which were then divided, and the lines of Baden-Baden and Baden-Hochberg were founded, the latter of which divided about a century later into Baden-Hochberg and Baden-Sausenberg. The family of Baden-Baden was very successful in increasing the area of its holdings.
The Hohenstaufen family controlled the duchy of Swabia until the death of Conradin in 1268, when a considerable part of its lands fell to the representative of a family first mentioned in about 1080, the count of Württemberg, Conrad von Beutelsbach, who took the name from his ancestral castle of Württemberg.
The earliest historical details of a Count of Württemberg relate to one Ulrich I, Count of Württemberg, who ruled from 1241 to 1265. He served as marshal of Swabia and advocate of the town of Ulm, had large possessions in the valleys of the Neckar and the Rems, and acquired Urach in 1260. According to legend, a charcoal-burner gave the Zähringen ancestor a share of his treasure, and he was elevated to Duke of Zähringen. To the Zähringer sphere of influence originally belonged Freiburg and Offenburg, Rottweil and Villingen, and, in modern Switzerland, Zürich and Bern. The three prominent noble families were in vigorous competition with one another, even though they were linked by kinship. The mother of the Staufer king Friedrich Barbarossa ("Redbeard") was Judith Welfen. The Staufers, as well as the Zähringers, based their claims of rule on ties with the family of the Frankish kings from the House of Salier.
Further Austria and the Palatinate
Other than the Margraviate of Baden and the Duchy of Württemberg, Further Austria and the Palatinate lay on the edge of the southwestern area. Further Austria (in German: Vorderösterreich or die Vorlande) was the collective name for the old possessions of the Habsburgs in south-western Germany (Swabia), the Alsace, and in Vorarlberg after the focus of the Habsburgs had moved to Austria.
Further Austria comprised the Sundgau (southern Alsace) and the Breisgau east of the Rhine (including Freiburg im Breisgau after 1386) and included some scattered territories throughout Swabia, the largest being the margravate Burgau in the area of Augsburg and Ulm. Some territories in Vorarlberg that belonged to the Habsburgs were also considered part of Further Austria. The original homelands of the Habsburgs, the Aargau and much of the other original Habsburg possessions south of the Rhine and Lake Constance were lost in the 14th century to the expanding Old Swiss Confederacy after the battles of Morgarten (1315) and Sempach (1386) and were never considered part of Further Austria, except the Fricktal, which remained a Habsburg property until 1805.
The Palatinate arose as the County Palatine of the Rhine, a large feudal state lying on both banks of the Rhine, which seems to have come into existence in the 10th century. The territory fell to the Wittelsbach Dukes of Bavaria in the early 13th century, and during a later division of territory among one of the heirs of Duke Louis II of Upper Bavaria in 1294, the elder branch of the Wittelsbachs came into possession not only of the Rhenish Palatinate, but also of that part of Upper Bavaria itself which was north of the Danube, and which came to be called the Upper Palatinate (Oberpfalz), in contrast to the Lower Palatinate along the Rhine. In the Golden Bull of 1356, the Palatinate was made one of the secular electorates, and given the hereditary offices of Archsteward of the Empire and Imperial Vicar of the western half of Germany. From this time forth, the Count Palatine of the Rhine was usually known as the Elector Palatine.
Due to the practice of division of territories among different branches of the family, by the early 16th century junior lines of the Palatine Wittelsbachs came to rule in Simmern, Kaiserslautern, and Zweibrücken in the Lower Palatinate, and in Neuburg and Sulzbach in the Upper Palatinate. The Elector Palatine, now based in Heidelberg, converted to Lutheranism in the 1530s.
When the senior branch of the family died out in 1559, the Electorate passed to Frederick III of Simmern, a staunch Calvinist, and the Palatinate became one of the major centers of Calvinism in Europe, supporting Calvinist rebellions in both the Netherlands and France. Frederick III's grandson, Frederick IV, and his adviser, Christian of Anhalt, founded the Evangelical Union of Protestant states in 1608, and in 1619 Elector Frederick V (the son-in-law of King James I of England) accepted the throne of Bohemia from rebellious Protestant noblemen. He was soon defeated by the forces of Emperor Ferdinand II at the Battle of White Mountain in 1620, and Spanish and Bavarian troops soon occupied the Palatinate itself. In 1623, Frederick was put under the ban of the Empire, and his territories and Electoral dignity granted to the Duke (now Elector) of Bavaria, Maximilian I.
At the Treaty of Westphalia in 1648, the Sundgau became part of France, and in the 18th century, the Habsburgs acquired a few minor new territories in southern Germany such as Tettnang. In the Peace of Pressburg of 1805, Further Austria was dissolved and the formerly Habsburg territories were assigned to Bavaria, Baden, and Württemberg, and the Fricktal to Switzerland.
By the Peace of Westphalia in 1648, Frederick V's son, Charles Louis, was restored to the Lower Palatinate, and given a new electoral title, but the Upper Palatinate and the senior electoral title remained with the Bavarian line. In 1685, the Simmern line died out, and the Palatinate was inherited by the Count Palatine of Neuburg (who was also Duke of Jülich and Berg), a Catholic. The Neuburg line, which moved the capital to Mannheim, lasted until 1742, when it, too, became extinct, and the Palatinate was inherited by the Duke Karl Theodor of Sulzbach. The childless Karl Theodor also inherited Bavaria when its electoral line became extinct in 1777, and all the Wittelsbach lands save Zweibrücken on the French border (whose Duke was, in fact, Karl Theodor's presumptive heir) were now under a single ruler. The Palatinate was destroyed in the Wars of the French Revolution – first its left bank territories were occupied, and then annexed, by France starting in 1795, and then, in 1803, its right bank territories were taken by the Margrave of Baden. The provincial government in Alsace was alternately administered by the Palatinate (1408–1504, 1530–1558) and by the Habsburgs (13th and 14th centuries, 1504–1530). Only the margraves of Baden and the counts and dukes of Württemberg included both homelands within their territories. With the political reordering of the southwest after 1800, Further Austria and the Electorate Palatine disappeared from history.
Baden and Württemberg before the Reformation
The lords of Württemberg were first named in 1092. Supposedly a Lord of Virdeberg near Luxembourg had married an heiress of the lords of Beutelsbach. The new Wirtemberg Castle (castle chapel dedicated in 1083) was the central point of a rule that extended from the Neckar and Rems valleys in all directions over the centuries. The family of Baden-Baden was very successful in increasing the area of its holdings, which after several divisions were united by the margrave Bernard I in 1391. Bernard, a soldier of some renown, continued the work of his predecessors and obtained other districts, including Baden-Hochberg, the ruling family of which died out in 1418.
During the 15th century, a war with the Count Palatine of the Rhine deprived the Margrave Charles I (died 1475) of a part of his territories, but these losses were more than recovered by his son and successor, Christoph I of Baden. In 1503, the family Baden-Sausenberg became extinct, and the whole of Baden was united by Christoph.
Under his sons, Ulrich II and Eberhard I, and their successors, the power of the family grew steadily. Eberhard I (died 1325) opposed, sometimes successfully, three Holy Roman emperors. He doubled the area of his county and transferred his residence from Württemberg Castle to the "Old Castle" in today's city centre of Stuttgart.
His successors were not as prominent, but all added something to the land area of Württemberg. In 1381, the Duchy of Teck was bought, and marriage to an heiress added Montbéliard in 1397. The family divided its lands amongst collateral branches several times but, in 1482, the Treaty of Münsingen reunited the territory, declared it indivisible, and united it under Count Eberhard V, called im Bart (The Bearded). This arrangement received the sanction of the Holy Roman Emperor, Maximilian I, and of the Imperial Diet, in 1495.
Eberhard V proved one of the most energetic rulers that Württemberg ever had, and, in 1495, his county became a duchy. Eberhard was now Duke Eberhard I, Duke of Württemberg. Württemberg, after the partition from 1442 to 1482, had no further land partitions to endure and remained a relatively closed country. In Baden, however, a partitioning occurred that lasted from 1515 to 1771. Moreover, the various parts of Baden were always physically separated one from the other.
Martin Luther's theses and his writings left no one in Germany untouched after 1517. Christoph, who had united the whole of Baden, divided it among his three sons before his death in 1527. Religious differences increased the family's rivalry. During the period of the Reformation some of the rulers of Baden remained Catholic and some became Protestants. One of Christoph's sons died childless in 1533. In 1535, his remaining sons Bernard and Ernest, having shared their brother's territories, made a fresh division and founded the lines of Baden-Baden and Baden-Pforzheim, called Baden-Durlach after 1565. Further divisions followed, and the weakness caused by these partitions was accentuated by a rivalry between the two main branches of the family, culminating in open warfare.
The long reign (1498–1550) of Duke Ulrich, who succeeded to the duchy while still a child, proved a most eventful period for the country, and many traditions cluster round the name of this gifted, unscrupulous and ambitious man. Duke Ulrich of Württemberg had been living in his County of Mömpelgard since 1519. He had been exiled from his duchy through his own fault and his controversial encroachments on non-Württemberg possessions. In Basel, Duke Ulrich came into contact with the Reformation.
Aided by Philip, landgrave of Hesse, and other Protestant princes, he fought a victorious battle against Ferdinand's troops at Lauffen in May 1534. Then, by the treaty of Cadan, he again became duke, but perforce duke of the duchy as an Austrian fief. He subsequently introduced the reformed religious doctrines, endowed Protestant churches and schools throughout his land, and founded the Tübinger Stift seminary in 1536. Ulrich's connection with the Schmalkaldic League led to another expulsion but, in 1547, Charles V reinstated him, albeit on somewhat onerous terms.
The total population during the 16th century was between 300,000 and 400,000. Ulrich's son and successor, Christoph (1515–1568), completed the work of converting his subjects to the reformed faith. He introduced a system of church government, the Grosse Kirchenordnung, which endured in part into the 20th century. In this reign, a standing commission started to superintend the finances, and the members of this body, all of whom belonged to the upper classes, gained considerable power in the state, mainly at the expense of the towns.
Christoph's son Louis, the founder of the Collegium illustre in Tübingen, died childless in 1593. A kinsman, Frederick I (1557–1608), succeeded to the duchy. This energetic prince disregarded the limits placed on his authority by the rudimentary constitution. By paying a large sum of money, he induced the emperor Rudolph II in 1599 to free the duchy from the suzerainty of Austria. Austria still controlled large areas around the duchy, known as "Further Austria". Thus, once again, Württemberg became a direct fief of the empire, securing its independence. Even the Margraviate of Baden-Baden went over to Lutheranism that same year, but indeed only for a short time. Likewise, after the Peace of Augsburg the Reformation was carried out in the County of Hohenlohe. At the same time, however, the Counter-Reformation began. It was persistently supported by the Emperor and the clerical princes.
The living conditions of the peasants in the German southwest at the beginning of the 16th century were quite modest, but an increase in taxes and several bad harvests, with no improvement in sight, led to crisis. Under the sign of the sandal (Bundschuh), that is, the farmer's shoe that tied up with laces, rebellions broke out on the Upper Rhine, in the bishopric of Speyer, in the Black Forest and in the upper Neckar valley at the end of the 15th century. The extortions by which Duke Ulrich sought to raise money for his extravagant pleasures excited an uprising known as the arme Konrad (Poor Conrad), not unlike the rebellion in England led by Wat Tyler. The authorities soon restored order, and, in 1514, by the Treaty of Tübingen, the people undertook to pay the duke's debts in return for various political privileges, which in effect laid the foundation of the constitutional liberties of the country. A few years later, Ulrich quarrelled with the Swabian League, and its forces (helped by William IV, Duke of Bavaria, angered by the treatment meted out by Ulrich to his wife Sabina, a Bavarian princess), invaded Württemberg, expelled the duke and sold his duchy to Charles V, Holy Roman Emperor, for 220,000 gulden.
Charles handed Württemberg over to his brother, the Holy Roman Emperor Ferdinand I, who served as nominal ruler for a few years. Soon, however, the discontent caused by the oppressive Austrian rule, the disturbances in Germany leading to the German Peasants' War and the commotions aroused by the Reformation gave Ulrich an opportunity to recover his duchy. Thus Marx Sittich of Hohenems went against the Hegau and Klettgau rebels. On 4 November 1525 he struck down a last attempt by the peasants in that same countryside where the peasants' unrest had begun a year before. Emperor Charles V and even Pope Clement VII thanked the Swabian League for its restraint in the Peasants' War.
Thirty Years' War
The longest war in German history became, with the intervention of major powers, a global war. The cause was mainly the conflict of religious denominations as a result of the Reformation. Thus, in the southwest of the empire, Catholic and Protestant princes faced one another as enemies – the Catholics (Emperor, Bavaria) united in the League, and the Protestants (Electorate Palatine, Baden-Durlach, Württemberg) in the Union. Unlike his predecessor, the next duke, Johann Frederick (1582–1628), failed to become an absolute ruler, and perforce recognised the checks on his power. During his reign, which ended in July 1628, Württemberg suffered severely from the Thirty Years' War although the duke himself took no part in it. His son and successor Eberhard III (1628–1674), however, plunged into it as an ally of France and Sweden as soon as he came of age in 1633, but after the battle of Nordlingen in 1634, Imperial troops occupied the duchy and the duke himself went into exile for some years. The Peace of Westphalia restored him, but to a depopulated and impoverished country, and he spent his remaining years in efforts to repair the disasters of the lengthy war. Württemberg was a central battlefield of the war. Its population fell by 57% between 1634 and 1655, primarily because of death and disease, declining birthrates, and the mass migration of terrified peasants.
From 1584 to 1622, Baden-Baden was in the possession of one of the princes of Baden-Durlach. The house was similarly divided during the Thirty Years' War. Baden suffered severely during this struggle, and both branches of the family were exiled in turn. The Peace of Westphalia in 1648 restored the status quo, and the family rivalry gradually died out. For one part of the southwest, a peace of 150 years began. On the Middle Neckar, in the whole Upper Rhine area and especially in the Electorate Palatine, the wars waged by the French King Louis XIV from 1674 to 1714 caused further terrible destruction. The Kingdom of France penetrated through acquired possessions in Alsace to the Rhine border. Switzerland separated from the Holy Roman Empire.
Swabian Circle until the French Revolution
The dukedom survived mainly because it was larger than its immediate neighbours. However, it was often under pressure during the Reformation from the Catholic Holy Roman Empire, and from repeated French invasions in the 17th and 18th centuries. Württemberg happened to be in the path of French and Austrian armies engaged in the long rivalry between the Bourbon and Habsburg dynasties.
During the wars of the reign of Louis XIV of France, the margravate was ravaged by French troops and the towns of Pforzheim, Durlach, and Baden were destroyed. Louis William, Margrave of Baden-Baden (died 1707), figured prominently among the soldiers who resisted the aggressions of France.
It was the life's work of Charles Frederick of Baden-Durlach to give territorial unity to his country. Beginning his reign in 1738, and coming of age in 1746, this prince is the most notable of the rulers of Baden. He was interested in the development of agriculture and commerce, sought to improve education and the administration of justice, and proved in general to be a wise and liberal ruler in the Age of Enlightenment.
In 1771, Augustus George of Baden-Baden died without sons, and his territories passed to Charles Frederick, who thus finally became ruler of the whole of Baden. Although Baden was united under a single ruler, the territory was not united in its customs and tolls, tax structure, laws or government. Baden did not form a compact territory. Rather, a number of separate districts lay on both banks of the upper Rhine. His opportunity for territorial aggrandisement came during the Napoleonic wars.
During the reign of Eberhard Louis (1676–1733), who succeeded as a one-year-old when his father Duke William Louis died in 1677, Württemberg had to face another destructive enemy, Louis XIV of France. In 1688, 1703 and 1707, the French entered the duchy and inflicted brutalities and suffering upon the inhabitants. The sparsely populated country afforded a welcome to fugitive Waldenses, who did something to restore it to prosperity, but the extravagance of the duke, anxious to provide for the expensive tastes of his mistress, Christiana Wilhelmina von Grävenitz, undermined this benefit.
Charles Alexander, who became duke in 1733, had become a Roman Catholic while an officer in the Austrian service. His favourite adviser was the Jew Joseph Süß Oppenheimer, and suspicions arose that master and servant were aiming at the suppression of the diet (the local parliament) and the introduction of Roman Catholicism. However, the sudden death of Charles Alexander in March 1737 put an abrupt end to any such plans, and the regent, Carl Rudolf, Duke of Württemberg-Neuenstadt, had Oppenheimer hanged.
Charles Eugene (1728–1793), who came of age in 1744, appeared gifted, but proved to be vicious and extravagant, and he soon fell into the hands of unworthy favourites. He spent a great deal of money in building the "New Castle" in Stuttgart and elsewhere, and sided against Prussia during the Seven Years' War of 1756–1763, which was unpopular with his Protestant subjects. His whole reign featured dissension between ruler and ruled, the duke's irregular and arbitrary methods of raising money arousing great discontent. The intervention of the emperor and even of foreign powers ensued and, in 1770, a formal arrangement removed some of the grievances of the people. Charles Eugene did not keep his promises, but later, in his old age, he made a few further concessions.
Charles Eugene left no legitimate heirs, and was succeeded by his brother, Louis Eugene (died 1795), who was childless, and then by another brother, Frederick Eugene (died 1797). This latter prince, who had served in the army of Frederick the Great, to whom he was related by marriage, and then managed his family's estates around Montbéliard, educated his children in the Protestant faith as francophones. All of the subsequent Württemberg royal family were descended from him. Thus, when his son Frederick II became duke in 1797, Protestantism returned to the ducal household, and the royal house adhered to this faith thereafter. Nevertheless, the district legislatures as well as the imperial diets offered a possibility of regulating matters in dispute. Much was left over from the trials before the imperial courts, which often lasted decades.
Southwest Germany up to 1918
In the wars after the French Revolution in 1789, Napoleon, the emperor of the French, rose to be the ruler of the European continent. An enduring result of his policy was a new order of the southwestern German political world. When the French Revolution threatened to be exported throughout Europe in 1792, Baden joined forces against France, and its countryside was devastated once more. In 1796, the margrave was compelled to pay an indemnity and to cede his territories on the left bank of the Rhine to France. Fortune, however, soon returned to his side. In 1803, largely owing to the good offices of Alexander I, emperor of Russia, he received the Bishopric of Konstanz, part of the Rhenish Palatinate, and other smaller districts, together with the dignity of a prince-elector. Changing sides in 1805, he fought for Napoleon, with the result that, by the peace of Pressburg in that year, he obtained the Breisgau and other territories at the expense of the Habsburgs (see Further Austria). In 1806, he joined the Confederation of the Rhine, declared himself a sovereign prince, became a grand duke, and received additional territory.
On January 1, 1806, Duke Frederick II assumed the title of King Frederick I, abrogated the constitution, and united old and new Württemberg. Subsequently, he placed church lands under the control of the state and received some formerly self-governing areas under the "mediatisation" process. In 1806, he joined the Confederation of the Rhine and received further additions of territory containing 160,000 inhabitants. A little later, by the peace of Vienna in October 1809, about 110,000 more persons came under his rule.
In return for these favours, Frederick joined Napoleon Bonaparte in his campaigns against Prussia, Austria and Russia, and of 16,000 of his subjects who marched to Moscow only a few hundred returned. Then, after the Battle of Leipzig in October 1813, King Frederick deserted the waning fortunes of the French emperor and, by a treaty made with Metternich at Fulda in November 1813, he secured the confirmation of his royal title and of his recent acquisitions of territory, while his troops marched with those of the allies into France.
In 1815, the king joined the German Confederation, but the Congress of Vienna made no change in the extent of his lands. In the same year, he laid before the representatives of his people the outline of a new constitution, but they rejected it, and in the midst of the commotion Frederick died on October 30, 1816.
The new king, William I (reigned 1816–1864), at once took up the constitutional question and, after much discussion, granted a new constitution in September 1819. This constitution, with subsequent modifications, remained in force until 1918 (see Württemberg). A period of quiet now set in, and the condition of the kingdom, its education, agriculture, trade and manufactures, began to receive earnest attention, while by frugality, both in public and in private matters, King William I helped to repair the shattered finances of the country. But the desire for greater political freedom did not entirely fade away under the constitution of 1819 and, after 1830, a certain amount of unrest occurred. This, however, soon died down, while the inclusion of Württemberg in the German Zollverein and the construction of railways fostered trade.
The revolutionary movement of 1848 did not leave Württemberg untouched, although no actual violence took place within the kingdom. King William had to dismiss Johannes Schlayer (1792–1860) and his other ministers, calling to power men with more liberal ideas and the exponents of the idea of a united Germany. King William did proclaim a democratic constitution but, as soon as the movement had spent its force, he dismissed the liberal ministers. In October 1849, Schlayer and his associates returned to power. In Baden, by contrast, there was a serious uprising that had to be put down by force.
By interfering with popular electoral rights, the king and his ministers succeeded in assembling a servile diet in 1851, surrendering all the privileges gained since 1848. In this way, the authorities restored the constitution of 1819, and power passed into the hands of a bureaucracy. A concordat with the Papacy proved almost the last act of William's long reign, but the diet repudiated the agreement, preferring to regulate relations between church and state in its own way.
In July 1864, Charles (1823–1891, reigned 1864–91) succeeded his father William I as king. Almost at once, he was faced with considerable difficulties. In the duel between Austria and Prussia for supremacy in Germany, William I had consistently taken the Austrian side, and this policy was equally acceptable to the new king and his advisers.
In 1866, Württemberg took up arms on behalf of Austria in the Austro-Prussian War, but three weeks after the Battle of Königgrätz on July 3, 1866, her troops suffered a comprehensive defeat at Tauberbischofsheim, and the country lay at the mercy of Prussia. The Prussians occupied the northern part of Württemberg and negotiated a peace in August 1866. By this, Württemberg paid an indemnity of 8,000,000 gulden, but she at once concluded a secret offensive and defensive treaty with her conqueror. Württemberg was a party to the Saint Petersburg Declaration of 1868.
The end of the struggle against Prussia allowed a renewal of democratic agitation in Württemberg, but this achieved no tangible results when the great war between France and Prussia broke out in 1870. Although the policy of Württemberg had continued to be antagonistic to Prussia, the kingdom shared in the national enthusiasm which swept over Germany, and its troops took a creditable part in the Battle of Wörth and in other operations of the war.
In 1871, Württemberg became a member of the new German Empire, but retained control of her own post office, telegraphs and railways. She had also certain special privileges with regard to taxation and the army and, for the next 10 years, Württemberg's policy enthusiastically supported the new order. Many important reforms, especially in the area of finance, ensued, but a proposal for a union of the railway system with that of the rest of Germany failed. After reductions in taxation in 1889, the reform of the constitution became the question of the hour. King Charles and his ministers wished to strengthen the conservative element in the chambers, but the laws of 1874, 1876 and 1879 only effected slight reforms pending a more thorough settlement. On 6 October 1891, King Charles died suddenly. His cousin William II (1848–1921, reigned 1891–1918) succeeded and continued the policy of his predecessor.
Discussions on the reform of the constitution continued, and the election of 1895 memorably returned a powerful party of democrats. King William had no sons, nor had his only Protestant kinsman, Duke Nicholas (1833–1903). Consequently, the succession would ultimately pass to a Roman Catholic branch of the family, and this prospect raised certain difficulties about the relations between church and state. The heir to the throne in 1910 was the Roman Catholic Duke Albert (b. 1865).
Between 1900 and 1910, the political history of Württemberg centred round the settlement of the constitutional and the educational questions. The constitution underwent revision in 1906, and a settlement of the education difficulty occurred in 1909. In 1904, the railway system integrated with that of the rest of Germany.
The population in 1905 was 2,302,179, of whom 69% were Protestant, 30% Catholic and 0.5% Jewish. Protestants largely preponderated in the Neckar district, and Roman Catholics in that of the Danube. In 1910, an estimated 506,061 people worked in the agricultural sector, 432,114 in industrial occupations and 100,109 in trade and commerce. (see Demographics of Württemberg)
In the confusion at the end of World War I, Grand Duke Frederick II of Baden abdicated on 22 November 1918. A republic had already been declared on 14 November.
In the course of the revolutionary activities at the close of World War I in November 1918, King William II abdicated on November 30, and a republican government ensued.
Württemberg became a state (Land) in the new Weimar Republic. Baden named itself a "democratic republic," Württemberg a "free popular state." Instead of monarchs, state presidents were in charge. They were elected by the state legislatures, in Baden by an annual change, in Württemberg after each legislative election.
German southwest up to World War II
Efforts between 1918 and 1919 towards a merger of Württemberg and Baden remained largely unsuccessful. After the excitements of the 1918–1919 revolution, the five state elections held between 1919 and 1932 showed a decreasing vote for left-wing parties. After the seizure of power by the National Socialist German Workers Party (NSDAP) in 1933, the state borders initially remained unchanged. The state of Baden, the state of Württemberg and the Hohenzollern states (the government district of Sigmaringen) continued to exist, albeit with much less autonomy with regard to the Reich. From 1934, the Province of Hohenzollern was administered together with Württemberg in the Gau of Württemberg-Hohenzollern.
By 30 April 1945, all of Baden, Württemberg and Hohenzollern were completely occupied.
Southwest Germany after the war
After World War II was over, the states of Baden and Württemberg were split between the American occupation zone in the north and the French occupation zone in the south, which also included Hohenzollern. The border between the occupation zones followed the district borders, but it was purposely drawn in such a way that the autobahn from Karlsruhe to Munich (today the Bundesautobahn 8) ended up inside the American occupation zone. In the American occupation zone, the state of Württemberg-Baden was founded; in the French occupation zone, the southern part of former Baden became the new state of Baden while the southern part of Württemberg and Hohenzollern were fused into Württemberg-Hohenzollern.
Article 29 of the Basic Law of Germany provided for a way to change the German states via a community vote; however, it could not enter into force due to a veto by the Allied forces. Instead, a separate article 118 mandated the fusion of the three states in the southwest via a trilateral agreement. If the three affected states failed to agree, federal law would have to regulate the future of the three states. This article was based on the results of a conference of the German states held in 1948, where the creation of a Southwest State was agreed upon. The alternative, generally favored in South Baden, was to recreate Baden and Württemberg (including Hohenzollern) in its old, pre-war borders.
The trilateral agreement failed because the states couldn't agree on the voting system. As such, federal law decided on May 4, 1951 that the area be split into four electoral districts: North Württemberg, South Württemberg, North Baden and South Baden. Because it was clear that both districts in Württemberg as well as North Baden would support the merger, the voting system favored the supporters of the new Southwest State. The state of Baden brought the law to the German Constitutional Court to have it declared as unconstitutional, but failed.
The plebiscite took place on December 9, 1951. In both parts of Württemberg, 93% were in favor of the merger, in North Baden 57% were in favor, but in South Baden only 38% were. Because three of four electoral districts voted in favor of the new Southwest State, the merger was decided upon. Had Baden as a whole formed a single electoral district, the vote would have failed.
State of Baden-Württemberg from 1952 to the present
The members of the constitutional convention were elected on March 9, 1952, and on April 25 the Prime Minister was elected. With this, the new state of Baden-Württemberg was founded. After the constitution of the new state entered force, the members of the constitutional convention formed the state parliament until the first election in 1956. The name Baden-Württemberg was only intended as a temporary name, but ended up the official name of the state because no other name could be agreed upon.
In May 1954, the Baden-Württemberg landtag (legislature) decided on adoption of the following coat of arms: three black lions on a golden shield, framed by a deer and a griffin. This coat of arms once belonged to the Staufen family, emperors of the Holy Roman Empire and Dukes of Swabia. The golden deer stands for Württemberg, the griffin for Baden. In a later reorganisation of governmental districts, the former Württemberg counties of Calw, Freudenstadt, Horb, Rottweil and Tuttlingen were incorporated into the Baden governmental districts of Karlsruhe and Freiburg, and the last traces of Hohenzollern disappeared. Between county and district, regional associations were formed that are responsible for overlapping planning.
The opponents of the merger did not give up. After the General Treaty gave Germany full sovereignty, the opponents applied for a community vote to restore Baden to its old borders by virtue of paragraph 2 of Article 29 of the Basic Law, which allowed a community vote in states which had been changed after the war without a community vote. The Federal Ministry of the Interior refused the application on the grounds that a community vote had already taken place. The opponents sued in front of the German Constitutional Court and won in 1956, with the court deciding that the plebiscite of 1951 had not been a community vote as defined by the law because the more populous state of Württemberg had had an unfair advantage over the less populous state of Baden. Because the court did not set a date for the community vote, the government simply did nothing. The opponents eventually sued again in 1969, which led to the decision that the vote had to take place before June 30, 1970. On June 7, the majority voted against the proposal to restore the state of Baden.
- Andrea Schulte-Peevers; Anthony Haywood, Sarah Johnstone, Jeremy Gray, Daniel (2007). Germany. Lonely Planet. ISBN 978-1-74059-988-7. Retrieved 1 February 2009.
- "History of BW - The Duchy of Swabia". Retrieved 28 February 2015.
- "History of BW - Staufer, Welfen, Zähringer". Retrieved 28 February 2015.
- "History of BW - Anterior Austria and the Electorate of Palatinate". Retrieved 28 February 2015.
- "History of BW - The Margraviate of Baden and the County of Württemberg at the beginning of the 15th century". Retrieved 28 February 2015.
- This type of sovereign royal duke was known in Germany as a Herzog.
- "History of BW - The time of the Reformation". Retrieved 28 February 2015.
- "History of BW - The Peasants' War". Retrieved 28 February 2015.
- "History of BW - The Peasants' War". Retrieved 28 February 2015.
- "History of BW - The Thirty Years War". Retrieved 28 February 2015.
- Peter Wilson, The Thirty Years' War: Europe's tragedy (2009) p 789
- "Historical Map of Baden-Wurttemberg 1789 - Southern Part". Retrieved 28 February 2015.
- "History of BW - The German southwest at the end of the 18th century". Retrieved 28 February 2015.
- "History of BW - Southwest Germany up to 1918". Retrieved 28 February 2015.
- "DFR - BVerfGE 1, 14 - Südweststaat". Retrieved 28 February 2015.
- "25. April 1952 - Die Entstehung des Landes Baden-Württemberg". Retrieved 28 February 2015.
- "DFR - BVerfGE 5, 34 - Baden-Abstimmung". Retrieved 28 February 2015.
- This article incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). "Württemberg". Encyclopædia Britannica (11th ed.). Cambridge University Press. | https://en.wikipedia.org/wiki/History_of_Baden |
4.125 | Until about 140 million years ago, dinosaurs had been munching their way through a uniformly green plant world. What happened then is one of evolution's greatest success stories, heralding a new kind of ecological relationship that would transform the planet: The first flowers appeared, competing for the attention of animals to visit them and distribute their pollen to other flowers to ensure the plant's propagation.
The myriad of ways in which flowers attract pollinators have been studied since the beginning of biology, and few ecological relationships between organisms are as well understood as those between plants and their pollinators.
Despite decades of research, a team led by Martin von Arx, a postdoctoral fellow in the lab of Goggy Davidowitz in the University of Arizona department of entomology, now has discovered a previously unknown sensory channel that is used in plant-animal interactions.
The white-lined sphinx (Hyles lineata), the most common species of hawkmoth in North America, can detect minuscule differences in humidity when hovering near a flower that tells it if there is enough nectar inside to warrant a visit.
The findings constitute the first documented case of a pollinator using humidity as a direct cue in its foraging behavior and are published in the journal Proceedings of the National Academy of Sciences.
The study, "Floral humidity as a reliable sensory cue for profitability assessment by nectar-foraging hawkmoths," is co-authored by Davidowitz and Joaqun Goyret and Robert Raguso at the department of neurobiology and behavior at Cornell University in Ithaca, New York, where the work was carried out.
"Traditionally, most research on plant-pollinator interactions has focused on static cues like floral scent, color or shape," von Arx said. "All this time, evaporation from nectar was right under our noses, but few people ever looked. We were able to show that the insects actually perceive this cue, and it allows them to directly assess the reward that they might get from the flower."
Unlike previously recognized cues used by pollinators such as flower size, shape or color, which don't necessarily reveal anything about the actual nectar levels waiting inside, the humidity evaporating from the flower's nectar provides an "honest" signal to a potential visitor. Scent, for example, is independent of nectar, which is odorless in most plants, whereas the fragrance usually is produced by the petals.
"We were always intrigued by this question," von Arx said. "Given that the known cues like flower shape and color are independent of the abundance of nectar, we were wondering if there is some other cue the insects might use. You would expect natural selection to favor an ability to sense a cue that is directly linked to the nectar reward."
To a hawkmoth setting out at dusk to search for nectar-bearing flowers of one of its favorite plants, the tufted evening primrose (Oenothera cespitosa), being able to quickly tell whether a flower is worth visiting can make the difference between life and death.
Hovering in front of a flower while probing it with its long proboscis (the moth's "tongue") is one of the most energetically costly modes of flight, von Arx explained. And once the insect plunges its head deep inside to reach all of the nectar, it is very vulnerable to predators such as bats.
"The metabolic cost of hovering in hawkmoths is more than 100 times that of a moth at rest," said Davidowitz. "This is the most costly mode of locomotion ever measured. An individual hawkmoth may spend 5-10 seconds evaluating whether a flower has nectar, multiply that by hundreds of flowers visited a night, and the moth is expending a huge amount of energy searching for nectar that may not be there. The energy saved by avoiding such behavior can go into making more eggs. For a moth that lives only about a week, that is a very big deal."
Add to that the "Black Friday" effect: fierce competition for limited supplies while they last.
"Imagine: As soon as the sun sets, all the hawkmoths fly around flower patches in the desert," von Arx said. "These flowers open within minutes of each other, and as soon as they do, the moths go there. A big flower patch or a plant with multiple flowers might attract many moths at the same time, so it's very important for an individual to pick the most profitable one very quickly."
The research group first measured humidity levels around a nectar-bearing flower by enclosing primrose plants in a sealed container and scanning the air inside with highly sensitive humidity measuring devices called hygrometers. They found that humidity just above the opening flower was slightly higher than ambient levels, caused partly by a plume of water vapor emanating from the flower's nectar tube.
To study whether and how moths respond to the humidity evaporating from nectar stores, the research team placed artificial flowers, designed to exclude any potential signal other than humidity, in a flight cage large enough for the moths to fly about freely.
Even though none of the artificial flowers had nectar, the moths would preferentially hover and extend their proboscis into those that had slightly elevated humidity compared with those that matched the humidity around them. The animals were able to sense if humidity near a flower was elevated as little as 4 percent above ambient humidity in the flight cage, despite the turbulence generated by many moths hovering about.
"It was really exciting to see their high sensitivity to humidity in that they can perceive such a minute amount of difference in such a dynamic environment," von Arx said.
The results help researchers better understand the ecological relationships between flowers and their pollinators, especially in arid environments such as the Southwestern U.S.
Even though most plant-pollinator relationships are mutually beneficial, with the plant rewarding the pollinator's help with food, their interests are conflicting.
"Speaking in evolutionary terms, the flower wants to be visited by a pollinator, but it doesn't want to invest too much because sacrificing resources and energy to make nectar is expensive," von Arx explained. "Often, plants are dishonest in their advertising, by presenting attractive flowers with no nectar."
But under certain circumstances, especially in desert environments, where water is scarce, it is beneficial for a flower to be honest, the researchers believe.
"If you're one of only a few flowers and there are lots of pollinators out there, you don't have to be honest about how much nectar you have because they'll visit anyway," von Arx said. "But if you want the attention of just a few, you really have to go all out. So by saying, 'Hey, come here, I have lots of nectar,' you're giving a faithful signal about an actual benefit that the pollinators can perceive and evaluate."
"I think in this case we showed that honesty makes sense in this system, because plants pollinated by hawkmoths are often pollinator-limited, and this signal, especially in the desert environment, is very potent."
According to von Arx, relative humidity plays an important role in the insect world and has been associated with choosing a suitable habitat but never was studied in the context of foraging for nectar. For example, neurobiological experiments revealed that cockroaches are able to detect humidity changes of a fraction of a percent.
"As creatures who use vision and olfaction, humans think in odors and shape, and color," von Arx said. "We are biased by what we can perceive. We know that moths have hygroreceptors on the tips of their antennae, but they remain a mystery for the most part. We know a lot about olfactory receptors, mechanoreceptors and vision. The insect eye has been studied in and out. But hygroreception? We still don't really know how that actually works."
Contact: Daniel Stolte
University of Arizona | http://www.bio-medicine.org/biology-news-1/Got-nectar-3F-To-hawkmoths--humidity-is-a-cue-25177-1/ |
4.0625 | The first duty of love is to listen. — Paul Tillich
To explore the experience of empathy is to understand more deeply the first Unitarian Universalist principle: the inherent worth and dignity of all people (and all beings). The Merriam-Webster online dictionary's definition of "empathy" includes "the action of understanding, being aware of, being sensitive to, and vicariously experiencing the feelings, thoughts, and experience of another." Empathy is the necessary action behind love, forgiveness, compassion and caring, and the driving force behind most good works in our world. Cures for disease, laws protecting the vulnerable, charitable contributions and even wars fought to end brutality are examples of the results of empathy.
This session introduces empathy as a tool for discerning good and just action. It also guides children to recognize and respect multiple perspectives, and to understand that any given scenario can have multiple truths.
In this session the children will hear a Scottish folk tale about a seal hunter who wounds a seal and then is given a chance to experience this wounding from the seal's perspective. Following the story the children will have further opportunities to look at situations from multiple perspectives. They will also participate in an exercise of empathetic listening with their peers to learn one of the basic skills of empathy that can be practiced on a daily basis. As Kevin Ryan and Karen Bohlin wrote in Building Character in Schools (San Francisco: Jossey-Bass Publishers, 1999), "Such experiences (as gaining empathy through hearing the stories of others) encourage students to resolve in the quiet of their hearts to stand up for the threatened and the vulnerable."
The Faith in Action component of this session offers an activity for practicing empathy, justice, and goodness by card- or letter-writing to protect seals that are being hunted now. A longer-term Faith in Action project brings an awareness and/or fundraising project to the larger congregational community. In this session, the children add "Empathy" to the Moral Compass poster.
This session will:
- Give participants an opportunity to share acts of goodness that they have done (or witnessed)
- Provide a story and active experiences that demonstrate the meaning of the word "Empathy" and how empathy feels
- Teach that an important part of acting out of goodness is to look at things from other perspectives besides one's own
- Help participants learn to identify, respect and value the perspectives and experiences of others which differ from their own
- Strengthen participants' connection to and sense of responsibility to their faith community
- Take pride in sharing acts of goodness and justice they have done (or witnessed) in the "Gems of Goodness" project
- Hear and act out a story about how someone learns to see things from another perspective.
- Learn to listen and speak empathetically
- Participate in clean-up together
- Optional: Practice using empathy as they write cards or letters to advocate protection of seals from hunting | http://www.uua.org/re/tapestry/children/tales/session4/123223.shtml |
4.03125 | Functions of Gerunds
Because gerunds are nouns, they can be used just as nouns are used. HOW ARE NOUNS USED?
- Gerunds are subjects. Example: Skiing is a popular winter sport.
- Gerunds are objects. Example: I love skiing.
- Gerunds are objects of prepositions. Example: I can't live without skiing! I'm interested in skiing. Let's talk about skiing. I'm afraid of skiing.
Functions of Gerunds
Gerunds are also object complements: He spends time reading. She found him working in the kitchen. Don't waste your time studying this stuff. I caught the students cheating on the exam. Fahad found Abdulrahman smoking outside. Natsume and Paulina saw Mariana chatting with Rena and Celia.
Modifying Gerunds
Gerunds can be modified with possessives and negatives. Gerunds can be modified with a noun or object pronoun, or with the possessive noun or the possessive pronoun.
- I appreciate YOUR participating in the survey.
- Hilal doesn't mind MY reading her letter.
- I thanked Nayef for HIS coming early to help me.
- Would you mind NOT BEING late tomorrow?
- I'm unhappy about the students' interrupting the lecture.
Modifying Gerunds
Use the possessive noun or possessive pronoun with formal English: I'm unhappy with Paulina's missing class. I'm unhappy with her missing class. I'm upset about your missing class. I'm disappointed with his missing class.
Use the noun object or object pronoun with informal English: I'm unhappy with Paulina missing class. I'm unhappy with her missing class. I'm upset about you missing class. I'm disappointed with him missing class.
Gerunds follow certain verbs as objects: enjoy, quit (give up), appreciate, mind, finish, stop (quit), avoid, postpone, delay, keep, keep on, consider, discuss, mention, suggest.
Examples: Yichen doesn't enjoy traveling. Fahad wants Abdulaziz to quit smoking. I appreciate the students' coming on time to class. Would you mind opening the door for me? Would you mind helping me with these bags? You have to finish working by 5 p.m. tonight. Don't avoid taking the test. Holly suggested my reviewing the material before taking the test. Keep moving! Don't stop! I kept on speaking even after my mother hung up the phone on me. Would you consider staying for a while?
Go + Gerund: Did you go shopping? We went fishing yesterday. Have you ever gone camping? Do you like to go hunting? Let's go skating this weekend. My kids and I love to go swimming.
Special expressions followed by gerunds
- Have fun: What do you have fun doing?
- Have a good time: I had a good time hanging out with my friends.
- Spend time: Many guys enjoy spending time playing video games.
- Waste time: Let's not waste time reviewing material that you already know.
- Have trouble: Some people have trouble adapting to new situations.
- Have difficulty: Are you having difficulty staying awake today?
- Have a hard time: Is anyone having a hard time understanding what I'm saying here?
- Have a difficult time: I hope that nobody has a difficult time passing level 5!
- Sit + expression of place + gerund: I was just sitting in my seat minding my own business when the teacher asked me to leave!
- Stand + expression of place + gerund: I'm sorry I'm late. I was standing on line waiting for my name to be called.
Active and Passive Gerunds
- Active Gerund: Inviting them to her wedding was a nice gesture on her part.
- Passive Gerund: Being invited to her wedding was a great surprise to them.
- Past Active Gerund: Having invited them to her wedding made her feel good. (She was glad she had invited them to the wedding.) I'm happy having been your teacher. (I'm happy now that I was your teacher before now.)
- Past Passive Gerund: They were so happy having been invited to her wedding. (They were so happy that they had been invited.) I hope you're happy having been taught by me. (I hope that you're happy now that I taught you in the past.)
You're probably wondering why you would ever need to use something as complicated as a past passive gerund... It helps us express something that happened one step back in the past (similar to the past perfect). EXAMPLE: I hate being ignored (in general). I'm so upset at having been ignored. (I was ignored yesterday at the meeting. It bothers me NOW that they ignored me THEN.) | http://www.slideshare.net/holly_cin/gerunds-15361651 |
4.09375 | A close-up view of swirling clouds of gas and dust around young stars has given astronomers a new glimpse into how baby solar systems and their planets are formed.
After stars are born they usually retain clouds of leftover material around them that condense into rings called protoplanetary disks. Over time, the gas and dust in these disks clump together under the pull of gravity to build planets.
In the new study, astronomers peered into a group of nascent solar systems with unprecedented detail by combining the light collected by the two Keck telescopes on Mauna Kea in Hawaii. This allowed them to achieve the extremely fine resolution necessary to observe processes that occur at the border between a star 500 light-years from Earth and its surrounding disk of gas and dust.
The view, scientists said, is comparable to standing on a rooftop in Tucson, Ariz., and trying to observe an ant nibbling on a grain of rice in New York's Central Park on the other side of the United States.
"We were able to get really, really close to the star and look right at the interface between the gas-rich protoplanetary disk and the star," said lead researcher Joshua Eisner, an astronomer at the University of Arizona.
The researchers looked at 15 young stars with protoplanetary disks in our Milky Way galaxy. All the stars weighed between half and 10 times the mass of the sun.
The astronomers were able to distinguish between gas (mostly made of hydrogen atoms) and dust in the disks to parse out what was happening in these budding solar systems, which were roughly a few million years old.
"These disks will be around for a few million years more," Eisner said. "By that time, the first planets, gas giants similar to Jupiter and Saturn, may form, using up a lot of the disk material."
Scientists think gas giant planets can form quickly in only a few million years. After these giants use up most of the gas in the disk, the leftover dust and rock will cluster together to form the rocky terrestrial planets, such as Earth, Mars and Venus.
The new observations could also help astronomers understand how stars grow in size by sucking up some of the gas from their surrounding disks.
Scientists think stars accrete this matter in two ways. In one method, the gas washes up directly to the surface of the star, and then is incorporated into the star's body.
In another mechanism, a star's powerful magnetic field can push away surrounding material, creating an empty envelope between the star and its disk. Atoms from the disk can then accelerate along the magnetic field lines into the star.
"Once trapped in the star?s magnetic field, the gas is being funneled along the field lines arching out high above and below the disk?s plane," Eisner explained. "The material then crashes into the star?s polar regions at high velocities."
The researchers detail their findings in an upcoming issue of the Astrophysical Journal. | http://www.space.com/8605-solar-system-baby-photos-reveal-planets-form.html |
4.03125 | It's very interesting to wonder what life would have been like in a normal Aztec society family. There are many things we do know, although the record is frustratingly sparse. Record keepers were more interested in other aspects of society, and family life was considered the sphere of women.
Still, there are many things we do know. Like other aspects of Aztec culture, life in an Aztec society family was permeated by religious beliefs, right from the start. Each decision was ruled by the laws of religion, and often tied to the sacred days in the Aztec calendar.
The life of a new family began at marriage, typically in the early 20s for a man and mid-teens for the woman. Marriages were arranged by the relatives (though the children may have had input). The parents would have to talk to the religious leaders, and discuss the signs under which both of the children had been born. The wedding day, of course, was chosen for similar religious reasons.
All this was full of ceremony and form. In Aztec society family a husband may have had more than one wife - but it would be his primary wife that would go through all the ceremony. The man may have many secondary wives, who would also be officially recognized. The children of the principal wife would be the inheritors - or, in the case of a ruler, only a child from the principal wife would be a successor. Still, the husband was supposed to treat all wives equally in daily life.
As you may imagine, one family could grow very large. As a result, most of the husbands with numerous wives and children were the wealthy ones, with the poor more likely to have one wife.
In one sense, society was dominated by the men. The man was considered the head of the home. However, women had a great deal of power as well. They may have had more power in earlier times, with men taking more power toward the end of the Aztec era.
Women often were able to run businesses out of their homes, and had a lot of influence in the family and the raising of children. The older widows were much respected, and people listened to their advice.
Adultery was a crime - death was the punishment. Divorce was allowed on certain grounds, presented by the man or woman, property was divided equally and both sides were free.
(more on Aztec crime and punishment here)
Marriage marked the entrance into adult, independent life in Aztec society. The family was given a piece of land, and they would have their own home. Depending on their situation, both the man and the woman might be involved in working the land. Of course, while a woman was involved in household tasks, a man would be more likely to become a warrior. Though there were many occupations (farmer, priest, doctor, etc.), being an Aztec warrior was particularly glorified.
War was even used as a symbol of childbirth. The baby was a "captive" in the womb, struggling to be victorious. The woman, too, was in a battle. In fact, in many ways a woman who died in childbirth was glorified in the same way as a warrior who died in battle, and honoured for her courage.
A child was welcomed into the world and into the religious system. A hymn for the new child to the goddess of child birth went like this:
Down there, where Ayopechcatl lives, the jewel is born, a child has come into the world.
It is down there, in her own place, that the children are born.
Come, come here, new-born child, come here.
Come, come here, jewel-child, come here.
(from the Codex Florentino)
Fathers taking their sons to school
I've written a little more about occupations and education here. Education, at least in the early years, was the responsibility of the parents. The father would teach the sons, and the mother the daughters. Work and education then would a big part of the Aztec society family. Work could also break up the family - the father might travel, or in the case of warriors may die on the battlefield.
As I mentioned, discipline was often harsh. Up until the age of 8, the preferred method of discipline was simply verbal. But harsh punishments would be in store for the older child, as he was prepared for the harsher realities of Aztec life.
As children grew older, parents would still be in charge of education, but they would more often send the children to school. There were various branches of education that children would be involved in.
If a family member escaped death on the battlefield or death from illness and so on... they would be among the ueuetque - the wise elders of society. They would offer advice, either informally or on a council. Of course, they were held in high regard in the family itself. The elderly were important in the Aztec society family, and their health care, aging and death was also a matter of ritual and religion.
An Aztec society family was ruled in many ways by religion, tradition, and structure. Life was ruled by fate - from beginning to end your family life, occupation, and success depended on the important dates in your life and the structure of the universe and the nature of the gods. At the same time, life was full of celebration, hard work, joy, sorrow, and love, much as it has been in societies around the world for all of history.
For more on Aztec society family, I recommend Daily Life of the Aztecs by Jacques Soustelle. | http://www.aztec-history.com/aztec-society-family.html |
4.1875 | Rosa Parks held no elected office. She was not born into wealth or power. Yet sixty years ago today, Rosa Parks changed America. Refusing to give up a seat on a segregated bus was the simplest of gestures, but her grace, dignity, and refusal to tolerate injustice helped spark a Civil Rights Movement that spread across America. Just a few days after Rosa Parks’ arrest in Montgomery, Alabama, a little-known, 26 year-old pastor named Martin Luther King Jr. stood by her side, along with thousands of her fellow citizens. Together, they began a boycott. Three-hundred and eighty-five days later, the Montgomery buses were desegregated, and the entire foundation of Jim Crow began to crumble.
montgomery bus boycott
More Than Equals, co-authored by Chris Rice and the late Spencer Perkins, is considered one of the pivotal books in the Christian racial reconciliation movement that found its greatest momentum in the early and mid-1990s.
On Feb. 1, 1960, four African-American students sat down at the "whites-only" lunch counter at the F.W. Woolworth store in Greensboro, North Carolina. As a child, I was told by my late father that he took his youth group to participate in these sit-ins. | https://sojo.net/tags/montgomery-bus-boycott |
4.15625 | July 9, 2008
Scientists Find Evidence Of Water On The Moon
Scientists have concluded that evidence collected from the surface of the Moon almost 40 years ago shows that water has existed there since its infancy.
Small green and orange pebble-like beads collected decades ago from the Moon's surface served as the lunar samples for the analysis; they are thought to be some 3 billion years old. The researchers believe these samples could support evidence that water persists in the shadowed craters of the Moon's surface and that it is native to the Moon as opposed to being carried there by comets.
Alberto Saal, assistant professor of geological sciences at Brown University believes that the water was contained in magmas erupted from fire fountains onto the surface of the Moon more than 3 billion years ago. About 95 percent of the water vapor from the magma was lost to space during this eruptive "degassing".
He said that if the Moon's volcanoes released 95 percent of their water, it was possible that traces of water vapor may have drifted toward the cold poles of the Moon, where they may remain as ice in permanently shadowed craters.
He noted that several lunar missions have found just such evidence.
A technique called secondary ion mass spectrometry, or SIMS, can detect minute amounts of elements in samples.
Erik Hauri of the Carnegie Institution for Science in Washington developed the technique along with his research team to find evidence of water in the Earth's molten mantle.
"Then one day I said, 'Look, why don't we go and try it on the Moon glass?'" Saal said.
It took them three years to convince NASA to fund a study of the samples brought back by astronauts during the Apollo missions in the 1970s.
After careful analysis of 40 of the tiny glass beads, which were broken apart, they discovered evidence that overturned decades of conventional wisdom that the moon is dry.
Saal, Hauri and colleagues did not find water directly, but they did measure hydrogen, and it resembled the measurements they had made to detect hydrogen, and eventually water, in samples from Earth's mantle.
They found that the hydrogen in the samples had been vaporized during volcanic activity similar to the lava spurts seen on Earth today.
"We looked at many factors over a wide range of cooling rates that would affect all the volatiles simultaneously and came up with the right mix," said James Van Orman, a former Carnegie researcher now at Case Western Reserve University.
Hauri said the findings suggest the possibility that the moon's interior might have had as much water as the Earth's upper mantle.
"It suggests that water was present within the Earth before the giant collision that formed the Moon," Saal said.
"That points to two possibilities: Water either was not completely vaporized in that collision or it was added a short time – less than 100 million years – afterward by volatiles introduced from the outside, such as with meteorites."
NASA plans to send its Lunar Reconnaissance Orbiter later this year to search for evidence of water ice at the Moon's south pole. If water is found, the researchers may have figured out the origin.
Saal and his research team's study was published in the July edition of the journal Nature.
Image Caption: Watery Glasses Researchers led by Brown geologist Alberto Saal analyzed lunar volcanic glasses, such as these gathered by the Apollo 15 mission, and used a new analytic technique to detect water. The discovery strongly suggests that water has been a part of the Moon since its early existence – and perhaps since it was first created. Credit: NASA
On the Net: | http://www.redorbit.com/news/space/1470218/scientists_find_evidence_of_water_on_the_moon/ |
4.0625 | The right hand rule is a way to predict the direction of a force in a magnetic field. To predict the behavior of positive charges, use your right hand. To predict the behavior of negative charges, use your left hand. If your thumb points in the direction of the velocity and your fingers point in the direction of the magnetic field, your palm points in the direction of the force.
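You can also check the rule numerically, since it is just the cross product in the Lorentz force law. A minimal Python sketch (not part of the original lesson), assuming the usual axes with x to the right, y up, and z out of the page:

```python
import numpy as np

def lorentz_force(q, v, B):
    """Magnetic part of the Lorentz force, F = q (v x B), in SI units."""
    return q * np.cross(v, B)

# Positive charge moving down (-y) in a field pointing into the page (-z):
F = lorentz_force(1.0, np.array([0.0, -1.0, 0.0]), np.array([0.0, 0.0, -1.0]))
print(F)  # [1. 0. 0.] -> force along +x, i.e. to the right, as the right hand rule predicts
```

The output matches the first example worked through below: a positive charge moving down in a field directed into the board feels a force to the right.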
So let's talk about the Right Hand Rule. This is one of the most major things that comes up when you're studying magnetic fields for the first time and really it first comes up when you do cross products maybe in pre-calculus but people kind of forget or maybe haven't taken pre-calc so let's talk about it because it's not difficult but its easy to kind of mess up if you're not used to how it works and I'll show you 3 different Right Hand Rules actually kind of 4 but really 3 all the same and then one is a little different.
Let's just go through it and just see how it works. Alright so we start off with the Lorentz force law f equals qv cross b. Alright cross products work like this, you take your right hand, you put your thumb in the direction of the first vector your fingers in the direction of the second vector and your palm points in the direction of the cross product so when we're doing this with the Lorentz force law, first vector velocity so that means my thumb always has to play the role of the velocity. Second vector magnetic field, so that means my fingers have to play the role of the magnetic field the cross product gives the force so that means my palm is always in the direction of the force.
Alright, so let's do a little bit of work with this but first of all I have to show you one major convention that you may or may not be aware of. Magnetic fields have to be in three dimensions but look I'm drawing everything on the board, that board only represents a two dimensional space so I can indicate over I can indicate up but how do I indicate out or in. The way that we do that is we have this convention we say look whenever you see a cross that means that you're talking about a vector that is pointing into the board okay? Basically you can think about it like you know when I put a vector like that it's an arrow what would it look like if the arrow was pointing into the board? Well you'd see the feathers and so that's what the cross is, the feathers. What if it's pointing out of the board? Well now I'm going to see the arrow tip so I just make a little dot now sometimes I'll circle that to indicate that it's not just an errant dot that I just put on there but sometimes I'm not really that worried about it for example if I've got lots of them, it's obvious that this represents the magnetic field so in this case I've got a positive charge moving downward in a magnetic field that's directed into the board okay? Alright here we go thumb is the velocity, fingers are the magnetic field and notice that my palm is now pointing to the right so that's the direction of the force on this charge to the right.
Alright, let's do this one. What about if the magnetic field is down but the positive charge is going into the board? Alright, thumb, fingers and now I've got a force that's directed to the left alright. What about here? This is weird because now I don't have the velocity what I've got instead is the force and the magnetic field but that's still fun I can still do exactly the same thing. I don't have a velocity so I don't know what I'm doing with my thumb yet, but I do have a magnetic field so that's coming out right? I've got a force so that means my palm has to point down and look at that! My thumb is now pointing in that direction so that must be the direction of a positive charge yet it feels a force down alright? One more a little twist, what if it's a negative charge? Now there's a really easy answer to that, you just pretend it's a positive charge and then just do whatever is the opposite of that but there's another way which is actually more useful in practice because the electrons have negative charge so a lot of times on these exams you'll be asked about electrons a lot and you don't want to have to always do it as if it was positive and then just not listen to it, so what you do instead is you use your left hand alright? So negative charges you use your left hand positive charges you use your right hand and as soon as I recognize that I'm going to use my left hand, everything goes exactly the same way and now the force into and that's the way that it goes.
Now you might wonder what happens to the charge after it goes into the magnetic field, well it turns out that because the force is always perpendicular to the velocity, charges that are moving in magnetic fields always move in circles that's called Larmor precession so we can actually see that in each of the examples so it's a really easy idea if I've got a charge that's coming down and a force that's going to the right boom that's the Larmor circle alright? What about here? Well I got a charge that's going in force to the left so here it is Larmor circle alright? What about here? Now I'm going this way force is down Larmor circle and how about this guy? Force is into so it's going to be a Larmor circle I can't write that one right? But you see that it will always circle around the magnetic field lines alright that's the first and probably most useful form of the right hand rule but let's look at a couple of the other ones over here.
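The radius of those circles follows from setting the magnetic force equal to the centripetal force, which gives r = mv/(|q|B). A small illustrative calculation (the numbers here are arbitrary and chosen only to show the formula, not taken from the lesson):

```python
def gyroradius(mass_kg, speed_m_s, charge_c, field_t):
    """Radius of the circle a charge traces around magnetic field lines: r = m v / (|q| B)."""
    return mass_kg * speed_m_s / (abs(charge_c) * field_t)

# Example values only: an electron at 1.0e6 m/s in a 0.01 T field
r = gyroradius(9.11e-31, 1.0e6, -1.602e-19, 0.01)
print(f"r = {r * 1e3:.2f} mm")  # about 0.57 mm
```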
Alright, the first one that I want to mention and this one's really the exactly the same really is what happens when I've got a current in a magnetic field. Well currents are moving charges so that means that just got a lot of charges moving in this magnetic field. Current is going to be in the direction of the velocity so I just say, okay instead of velocity my thumb is the current boom boom left done, very very very simple and basically the same it's just instead of velocity my thumb now represents current. Most of the time we take the convention that the arrow here associated with the current is the direction of positive charge so it's right hand all the time unless they tell you explicitly that negative charges are moving in this direction and then of course just left hand.
Alright now, there's two other right hand rules and these are associated with magnetic fields that come from currents so this is associated with something called the Biot-Savart law or something called Ampère's law so the idea is that whenever you've got a current like this, there will be magnetic field associated with it so if I got a current that goes like that, there's going to be a magnetic field that circulates around this current alright so this is it's a different physical situation we can't expect the right hand rule to be exactly the same but hopefully in this case it's almost the same. Thumb current, fingers again are the magnetic field but rather than keeping them out like that here's what we're going to do we're going to act like we're grabbing the wire alright? So we're going to grab the wire and our fingers are the magnetic field so that means that in this case the magnetic field will circulate around just like that in exactly the way that my fingers are circulating around it if I grab it so that means that above the current the magnetic field is coming out of the board and below it's going into so there it is, I've got magnetic field circulating around my wire in exactly that way.
Alright, here's the last one and this one is kind of the, the most different alright, but it's also very useful. What if I have a current loop? Alright, well I could play this game just like we just did and I could say "alright well let me grab the wire" okay? Well if I grab the wire like that with my thumb in the direction of the current then the magnetic field inside will be coming out of the board and outside will be going into the board so this is exactly the same as we just had no difference so why am I saying it's different? Well because we'll apply the right hand rule in a slightly different way here okay. You don't have to do this, you can always do it this way but sometimes it's more useful to instead put your fingers in the direction of the current and then your thumb will point in the direction of the magnetic field at the center of the current loop out, of course it gives us the same answer that we got the other way but this is associated with something called a magnetic moment and so you might be asked to think about magnetic moments and these current loops and it's easier when you're focusing on that to use the right hand rule when now your current is your fingers and the thumb is the magnetic field. | https://www.brightstorm.com/science/physics/magnetism/right-hand-rule/ |
4.28125 | Transuranium element, any of the chemical elements that lie beyond uranium in the periodic table—i.e., those with atomic numbers greater than 92. Twenty-six of these elements have been discovered and named or are awaiting confirmation of their discovery. Eleven of them, from neptunium through lawrencium, belong to the actinoid series. The others, which have atomic numbers higher than 103, are referred to as the transactinoids. All the transuranium elements are unstable, decaying radioactively, with half-lives that range from tens of millions of years to mere fractions of a second.
Since only two of the transuranium elements have been found in nature (neptunium and plutonium) and those only in trace amounts, the synthesis of these elements through nuclear reactions has been an important source of knowledge about them. That knowledge has expanded scientific understanding of the fundamental structure of matter and makes it possible to predict the existence and basic properties of elements much heavier than any currently known. Present theory suggests that the maximum atomic number could be found to lie somewhere between 170 and 210, if nuclear instability would not preclude the existence of such elements. All these still-unknown elements are included in the transuranium group.
Discovery of the first transuranium elements
The first attempt to prepare a transuranium element was made in 1934 in Rome, where a team of Italian physicists headed by Enrico Fermi and Emilio Segrè bombarded uranium nuclei with free neutrons. Although transuranium species may have been produced, the experiment resulted in the discovery of nuclear fission rather than new elements. (The German scientists Otto Hahn, Fritz Strassman, and Lise Meitner showed that the products Fermi found were lighter, known elements formed by the splitting, or fission, of uranium.) Not until 1940 was a transuranium element first positively produced and identified, when two American physicists, Edwin Mattison McMillan and Philip Hauge Abelson, working at the University of California at Berkeley, exposed uranium oxide to neutrons from a cyclotron target. One of the resulting products was an element found to have an atomic number of 93. It was named neptunium.
Transformations in atomic nuclei are represented by equations that balance all the particles of matter and the energy involved before and after the reaction. The above transformation of uranium into neptunium may be written as follows:
²³⁸₉₂U + ¹₀n → ²³⁹₉₂U + γ
²³⁹₉₂U → ²³⁹₉₃Np + β−
In the first equation the atomic symbol of the particular isotope reacted upon, in this case U for uranium, is given with its mass number at upper left and its atomic number at lower left: ²³⁸₉₂U. The uranium-238 isotope reacts with a neutron (symbolized n, with its mass number 1 at upper left and its neutral electrical charge shown as 0 at lower left) to produce uranium-239 (²³⁹₉₂U) and the quantum of energy called a gamma ray (γ). In the next equation the arrow represents a spontaneous loss of a negative beta particle (symbolized β−), an electron with very high velocity, from the nucleus of uranium-239. What has happened is that a neutron within the nucleus has been transformed into a proton, with the emission of a beta particle that carries off a single negative charge; the resulting nucleus now has one more positive charge than it had before the event and thus has an atomic number of 93. Because the beta particle has negligible mass, the mass number of the nucleus has not changed, however, and is still 239. The nucleus resulting from these events is an isotope of the element neptunium, atomic number 93 and mass number 239. The above process is called negative beta-particle decay. A nucleus may also emit a positron, or positive electron, thus changing a proton into a neutron and reducing the positive charge by one (but without changing the mass number); this process is called positive beta-particle decay. In another type of beta decay a nuclear proton is transformed into a neutron when the nucleus, instead of emitting a beta particle, "captures," or absorbs, one of the electrons orbiting the nucleus; this process of electron capture (EC decay) is preferred over positron emission in transuranium nuclei.
The discovery of the next element after neptunium followed rapidly. In 1941 three American chemists, Glenn T. Seaborg, Joseph W. Kennedy, and Arthur C. Wahl, produced and chemically identified element 94, named plutonium (Pu). In 1944, after further discoveries, Seaborg hypothesized that a new series of elements called the actinoid series, akin to the lanthanoid series (elements 58–71), was being produced, and that this new series began with thorium (Th), atomic number 90. Thereafter, discoveries were sought, and made, in accordance with this hypothesis.
Synthesis of transuranium elements
The most abundant isotope of neptunium is neptunium-237. Neptunium-237 has a half-life of 2.1 × 10⁶ years and decays by the emission of alpha particles. (Alpha particles are composed of two neutrons and two protons and are actually the very stable nucleus of helium.) Neptunium-237 is formed in kilogram quantities as a by-product of the large-scale production of plutonium in nuclear reactors. This isotope is synthesized from the reactor fuel uranium-235 by the reaction
and from uranium-238 by
Because of its ability to undergo fission with neutrons of all energies, plutonium-239 has considerable practical applications as an energy source in nuclear weapons and as fuel in nuclear power reactors.
The method of element production discussed thus far has been that of successive neutron capture resulting from the continuous intensive irradiation with slow (low-energy) neutrons of an actinoid target. The sequence of nuclides that can be synthesized in nuclear reactors by this process is shown in the figure, in which the light line indicates the principal path of neutron capture (horizontal arrows) and negative beta-particle decay (up arrows) that results in successively heavier elements and higher atomic numbers. (Down arrows represent electron-capture decay.) The heavier lines show subsidiary paths that augment the major path. The major path terminates at fermium-257, because the short half-life of the next fermium isotope (fermium-258)—for radioactive decay by spontaneous fission (370 microseconds)—precludes its production and the production of isotopes of elements beyond fermium by this means.
Heavy isotopes of some transuranium elements are also produced in nuclear explosions. Typically, in such events, a uranium target is bombarded by a high number of fast (high-energy) neutrons for a small fraction of a second, a process known as rapid-neutron capture, or the r-process (in contrast to the slow-neutron capture, or s-process, described above). Underground detonations of nuclear explosive devices during the late 1960s resulted in the production of significant quantities of einsteinium and fermium isotopes, which were separated from rock debris by mining techniques and chemical processing. Again, the heaviest isotope found was that of fermium-257.
An important method of synthesizing transuranium isotopes is by bombarding heavy element targets not with neutrons but with light charged particles (such as the helium nuclei mentioned above as alpha particles) from accelerators. For the synthesis of elements heavier than mendelevium, so-called heavy ions (with atomic number greater than 2 and mass number greater than 5) have been used for the projectile nuclei. Targets and projectiles relatively rich in neutrons are required so that the resulting nuclei will have sufficiently high neutron numbers; too low a neutron number renders the nucleus extremely unstable and unobservable because of its resultantly short half-life.
The elements from seaborgium to copernicium have been synthesized and identified (i.e., discovered) by the use of “cold,” or “soft,” fusion reactions. In this type of reaction, medium-weight projectiles are fused to target nuclei with protons numbering close to 82 and neutrons numbering about 126—i.e., near the doubly “magic” lead-208—resulting in a relatively “cold” compound system. The elements from 113 to 118 were made using “hot” fusion reactions, similar to those described above using alpha particles, in which a relatively light projectile collides with a heavier actinoid. Because the compound nuclei formed in cold fusion have lower excitation energies than those produced in hot fusion, they may emit only one or two neutrons and thus have a much higher probability of remaining intact instead of undergoing the competing prompt fission reaction. (Nuclei formed in hot fusion have higher excitation energy and emit three to five neutrons.) Cold fusion reactions were first recognized as a method for the synthesis of heavy elements by Yuri Oganessian of the Joint Institute for Nuclear Research at Dubna in the U.S.S.R. (now in Russia).
Isotopes of the transuranium elements are radioactive in the usual ways: they decay by emitting alpha particles, beta particles, and gamma rays; and they also fission spontaneously. The table lists significant nuclear properties of certain isotopes that are useful for chemical studies. Only the principal mode of decay is given, though in many cases other modes of decay also are exhibited by the isotope. In particular, with the isotope californium-252, alpha-particle decay is important because it determines the half-life, but the expected applications of the isotope exploit its spontaneous fission decay that produces an enormous neutron output. Other isotopes, such as plutonium-238, are useful because of their relatively large thermal power output during decay (given in the table in watts per gram). Research on the chemical and solid-state properties of these elements and their compounds obviously requires that isotopes with long half-lives be used. Isotopes of plutonium and curium, for example, are particularly desirable from this point of view. Beyond element 100 the isotopes must be produced by charged-particle reactions using particle accelerators, with the result that only relatively few atoms can be made at any one time.
| name and mass | principal decay mode | half-life | disintegrations per minute per microgram | watts per gram* |
| --- | --- | --- | --- | --- |
| plutonium-239 | alpha | 24,110 years | 138,000 | 1.91 |
| berkelium-249 | beta (minus) | 330 days | 3.6 × 10⁹ | 0.358 |
| mendelevium-256 | electron capture | 77 minutes | | |
| seaborgium-265 | spontaneous fission | 8 seconds | | |

*Thermal power output.
**Indicates an approximate value.
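The "disintegrations per minute per microgram" column follows directly from the half-life and the mass number, since the decay constant is ln 2 divided by the half-life. A rough Python sketch, not from the original article (the 365.25-day year and the pure-isotope assumption are simplifications), reproduces the plutonium-239 entry:

```python
import math

AVOGADRO = 6.022e23  # atoms per mole

def dpm_per_microgram(half_life_years, mass_number):
    """Disintegrations per minute per microgram of a pure isotope."""
    half_life_minutes = half_life_years * 365.25 * 24 * 60
    decay_constant = math.log(2) / half_life_minutes      # per minute
    atoms_per_microgram = 1e-6 / mass_number * AVOGADRO   # grams -> moles -> atoms
    return decay_constant * atoms_per_microgram

# Plutonium-239: half-life 24,110 years, mass number 239
print(f"{dpm_per_microgram(24110, 239):,.0f}")  # roughly 138,000, matching the table
```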
Nuclear structure and stability
Although the decay properties of the transuranium elements are important with regard to the potential application of the elements, these elements have been studied largely to develop a fundamental understanding of nuclear reactions and nuclear and atomic structure. Study of the known transuranium elements also helps in predicting the properties of yet-undiscovered isotopes and elements as a guide to the researcher who can then design experiments to prepare and identify them. As shown in the figure, the known isotopes can be represented graphically with the number of nuclear protons (Z) plotted along the left-hand axis and the number of neutrons (N) plotted on the top axis. The relative stabilities of the isotopes are indicated by their relative heights. In this metaphoric representation, the known isotopes resemble a peninsula rising above a sea of instability. The most stable isotopes, appearing as mountaintops, occur at specific values called magic numbers.
The magic numbers derive from calculations of the energy distribution based on the theoretical structure of the nucleus. According to theory, neutrons and protons (collectively, nucleons) are arranged within the nucleus in shells that are able to accommodate only fixed maximum numbers of them; when the shells are closed (i.e., unable to accept any more nucleons), the nucleus is much more stable than when the shells are only partially filled. The number of neutrons or protons in the closed shells yields the magic numbers. These are 2, 8, 20, 28, 50, 82, and 126. Doubly magic nuclei, such as helium-4, oxygen-16, calcium-40, calcium-48, and lead-208, which have both full proton shells and full neutron shells, are especially stable. As the proton and neutron numbers depart further and further from the magic numbers, the nuclei are relatively less stable.
As the highest atomic numbers are reached, decay by alpha-particle emission and spontaneous fission sets in (see below). At some point the peninsula of relatively stable isotopes (i.e., with an overall half-life of at least one second) is terminated. There has been, however, considerable speculation, based on a number of theoretical calculations, that an island of stability might exist in the neighbourhood of Z = 114 and N = 184, both of which are thought to be magic numbers. The longest-lasting isotope of flerovium, element 114, has N = 175 and a half-life of 2.7 seconds; this long half-life could be the “shore” of the island of stability. Isotopes in this region have significantly longer half-lives than neighbouring isotopes with fewer neutrons. There is also evidence for subshells (regions of somewhat increased stability) at Z = 108 and N = 162.
Processes of nuclear decay
The correlation and prediction of nuclear properties in the transuranium region are based on systematics (that is, extensions of observed relationships) and on the development of theoretical models of nuclear structure. The development of structural theories of the nucleus has proceeded rather rapidly, in part because valid parallels with atomic and molecular theory can be drawn.
A nucleus can decay to an alpha particle plus a daughter product if the mass of the nucleus is greater than the sum of the mass of the daughter product and the mass of the alpha particle—i.e., if some mass is lost during the transformation. The amount of matter defined by the difference between reacting mass and product mass is transformed into energy and is released mainly with the alpha particle. The relationship is given by Einstein’s equation E = mc², in which the product of the mass (m) and the square of the velocity of light (c) equals the energy (E) produced by the transformation of that mass into energy. It can be shown that, because of the inequality between the mass of a nucleus and the masses of the products, most nuclei beyond about the middle of the periodic table are likely to be unstable because of the emission of alpha particles. In practice, however, because of the reaction rate, decay by ejection of an alpha particle is important only with the heavier elements. Indeed, beyond bismuth (element 83) the predominant mode of decay is by alpha-particle emission, and all the transuranium elements have isotopes that are alpha-unstable.
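The energy released in a particular alpha decay can be estimated from Einstein's relation by comparing the atomic masses of the parent and the products. As an illustration outside the original text, using approximate literature masses (treat the exact figures as assumptions) for the alpha decay of uranium-238 into thorium-234:

```python
# Approximate atomic masses in unified atomic mass units (literature values, quoted from memory)
M_U238, M_TH234, M_HE4 = 238.050788, 234.043601, 4.002602
AMU_TO_MEV = 931.494  # energy equivalent of one mass unit

mass_lost = M_U238 - (M_TH234 + M_HE4)  # mass that disappears in the decay
q_value_mev = mass_lost * AMU_TO_MEV    # energy released, via E = mc^2
print(f"Q = {q_value_mev:.2f} MeV")     # about 4.27 MeV, consistent with measured uranium-238 alpha energies
```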
The regularities in the alpha-particle decay energies that have been noted from experimental data can be plotted on a graph and, since the alpha-particle decay half-life depends in a regular way on the alpha-particle decay energy, the graph can be used to obtain the estimated half-lives of undiscovered elements and isotopes. Such predicted half-lives are essential for experiments designed to discover new elements and new isotopes, because the experiments must take the expected half-life into account.
In elements lighter than lead, beta-particle decay—in which a neutron is transformed into a proton or vice versa by emission of either an electron or a positron or by electron capture—is the main type of decay observed. Beta-particle decay also occurs in the transuranium elements, but only by emission of electrons or by capture of orbital electrons; positron emission has not been observed in transuranium elements. When the beta-particle decay processes are absent in transuranium isotopes, the isotopes are said to be stable to beta decay.
Decay by spontaneous fission
The lighter actinoids such as uranium rarely decay by spontaneous fission, but at californium (element 98) spontaneous fission becomes more common (as a result of changes in energy balances) and begins to compete favourably with alpha-particle emission as a mode of decay. Regularities have been observed for this process in the very heavy element region. If the half-life of spontaneous fission is plotted against the ratio of the square of the number of protons (Z) in the nucleus divided by the mass of the nucleus (A)—i.e., the ratio Z²/A—then a regular pattern results for nuclei with even numbers of both neutrons and protons (even-even nuclei). Although this uniformity allows very rough predictions of half-lives for undiscovered isotopes, the methods actually employed are considerably more sophisticated.
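The parameter Z²/A in that plot is easy to tabulate. As an illustrative sketch (the nuclides are chosen arbitrarily, not taken from the article), heavier even-even nuclei have larger values of Z²/A, which correlates with steeply falling spontaneous-fission half-lives:

```python
# (Z, A) pairs for a few heavy nuclides, chosen only for illustration
nuclides = {
    "uranium-238": (92, 238),
    "plutonium-240": (94, 240),
    "californium-252": (98, 252),
    "fermium-256": (100, 256),
}

for name, (z, a) in nuclides.items():
    print(f"{name:16s} Z^2/A = {z * z / a:.1f}")
# Values rise from about 35.6 to 39.1 along this sequence.
```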
The results of study of half-life systematics for alpha-particle, negative beta-particle, and spontaneous-fission decay in the near region of undiscovered transuranium elements can be plotted in graphs for even-even nuclei, for nuclei with an odd number of protons or neutrons, and for odd-odd nuclei (those with odd numbers for both protons and neutrons). These predicted values are in the general range of experimentally determined half-lives and correctly indicate trends, but individual points may differ appreciably from known experimental data. Such graphs show that isotopes with odd numbers of neutrons or protons have longer half-lives for alpha-particle decay and for spontaneous fission than do neighbouring even-even isotopes.
Nuclear structure and shape
Several models have been used to describe nuclei and their properties. In the liquid-drop model the nucleus is treated as a uniform, charged drop of liquid. This structure does not account for certain irregularities, however, such as the increased stability found for nuclei with particular magic numbers of protons or neutrons (see above). The shell model recognized that these magic numbers resulted from the filling, or closing, of nuclear shells. Nuclei with the exact number (or close to the exact number) of neutrons and protons dictated by closed shells have spherical shapes, and their properties are successfully described by the shell theory. However, the lanthanoid and actinoid nuclei, which do not have magic numbers of nucleons, are deformed into a prolate spheroid, or football, shape, and the spherical-shell model does not adequately explain their properties. The shell model nevertheless established the fact that the neutrons and protons within a nucleus are more likely to be found inside rather than outside certain nuclear shell regions and thus showed that the interior of the nucleus is inhomogeneous. A model incorporating the shell effects to correct the ordinary homogeneous liquid-drop model was developed. This hybrid model is used, in particular, to explain spontaneous-fission half-lives.
Since many transuranium nuclei do not have magic numbers of neutrons and protons and thus are nonspherical, considerable theoretical work has been done to describe the motions of the nucleons in their orbitals outside the spherical closed shells. These orbitals are important in explaining and predicting some of the nuclear properties of the transuranium and heavy elements.
The mutual interaction of fission theory and experiment brought about the discovery and interpretation of fission isomers. At Dubna, Russia, U.S.S.R., in 1962, americium-242 was produced in a new form that decayed with a spontaneous-fission half-life of 14 milliseconds, or about 10¹⁴ times shorter than the half-life of the ordinary form of that isotope. Subsequently, more than 30 other examples of this type of behaviour were found in the transuranium region. The nature of these new forms of spontaneously fissioning nuclei was believed to be explainable, in general terms at least, by the idea that the nuclei possess greatly distorted but quasi-stable nuclear shapes. The greatly distorted shapes are called isomeric states, and these new forms of nuclear matter are consequently called shape isomers. As mentioned above, calculations relating to spontaneous fission involve treating the nucleus as though it were an inhomogeneous liquid drop, and in practice this is done by incorporating a shell correction to the homogeneous liquid-drop model. In this case an apparently reasonable way to amalgamate the shell and liquid-drop energies was proposed, and the remarkable result obtained through the use of this method reveals that nuclei in the region of thorium through curium possess two energetically stable states with two different nuclear shapes. This theoretical result furnished a most natural explanation for the new form of fission, first discovered in americium-242.
This interpretation of a new nuclear structure is of great importance, but it has significance far beyond itself because the theoretical method and other novel approaches to calculation of nuclear stability have been used to predict an island of stability beyond the point at which the peninsula in the figure disappears into the sea of instability. | http://www.britannica.com/science/transuranium-element |
4.21875 | A-level Biology/Cells
- 1 Cell Structure
- 2 Analysis of Cell Compounds
- 3 Plasma Membranes
- 4 Cholera
Cell Structure
Organelles are parts of cells. Each organelle has a specific function.
Nucleus
- Largest organelle
- Surrounded by a nuclear envelope, which contains pores (holes)
- Contains chromatin and the nucleolus
- Stores the genetic material
- Controls the cell's activities
- Pores allow substances to move between the nucleus and the cytoplasm
- The nucleolus makes ribosomes (see below)
Mitochondria
- Oval shaped
- They have a double membrane - the inner one is folded to form structures called cristae
- Inside is the matrix, containing enzymes
- They are the site of aerobic respiration
- Makes energy in the form of ATP (adenosine triphosphate) as a source of energy for the cell's activities
- Cristae give a bigger surface area so more enzymes can fit in
Endoplasmic Reticulum
- Smooth endoplasmic reticulum is a system of membranes which enclose a fluid-filled space
- Rough endoplasmic reticulum is similar, but covered in ribosomes
- Smooth endoplasmic reticulum synthesises and processes lipids and carbohydrates
- Rough endoplasmic reticulum folds and processes proteins that have been made at the ribosomes, transports proteins around the cell.
Golgi Apparatus
- A group of fluid-filled, flattened sacs
- Processes and packages new lipids and proteins
- Once finished, it makes vesicles which transport the molecules to the edge of the cell for ejection
- Makes lysosomes
Ribosomes
- Very small
- Either floats free in the cytoplasm or is attached to rough endoplasmic reticulum
- The site where protein synthesis takes place
Lysosomes
- No clear internal structure
- Contains digestive enzymes which can be used to digest invading cells or break down worn-out organelles (autolysis)
Microvilli
- These are folds in the plasma membrane
- Found in cells involved in absorption
- Stereotypically found on the villi in the small intestine
- Increase the surface area of the plasma membrane
Plasma Membrane
Found on the surface of animal cells, it's mainly made of lipids and proteins. It controls the movement of substances in and out of the cell; further explanation can be found later in this book.
Chloroplasts
- Found in plant cells only
- Inner membrane is folded to form stacks of grana
- Molecules of chlorophyll are on the grana
- Chlorophyll captures photons of light used for photosynthesis
Refer to the below table for the differences between plant and animal cells.
|Typical animal cell||Typical plant cell|
Prokaryotes and Eukaryotes
Eukaryotic cells are complex, and include all animal and plant cells. Prokaryotic cells are smaller and simpler, like bacteria.
The table below is a comparison of prokaryotic and eukaryotic cells:
| | Prokaryotic cells | Eukaryotic cells |
| --- | --- | --- |
| Typical organisms | bacteria | fungi, plants, animals |
| Typical size | ~ 1-10 µm | ~ 10-100 µm (sperm cells, apart from the tail, are smaller) |
| Type of nucleus | none | nucleus with double membrane |
| Genetic material | ring of DNA, plasmids | chromosomes |
| Ribosomes | smaller (18 nm) | larger (22 nm) |
| Cytoplasmic structure | very few structures | highly structured by endomembranes and a cytoskeleton |
| Mitochondria | none | one to several thousand (though some lack mitochondria) |
| Chloroplasts | none | in algae and plants |
| Organization | usually single cells | single cells, colonies, higher multicellular organisms with specialized cells |
| Cell division | binary fission (simple division) | mitosis |
Analysis of Cell Compounds
Units of Size in Microscopy
- The basic biological unit of measure is the micrometer (µm).
- 1000µm = 1mm.
Calculations in Microscopy
Size in real life = Size in image ÷ Magnification.
Magnification = Size in image ÷ Size in real life.
- Measure the size of the image in millimetres.
- Convert to micrometers by multiplying by 1000.
Example: A micrograph shows a mitochondrion measuring 210 mm, magnified 2500x. What is its size in real life?
210mm x 1000 = 210,000µm
210,000µm ÷ 2500 = 84µm
Example: An object is 130µm in real life and 52mm in an image. What is the magnification?
52mm x 1000 = 52,000µm
52,000µm ÷ 130µm = 400x
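Both worked examples can be checked with a couple of one-line helper functions. This is just an illustrative sketch, not part of the syllabus text:

```python
def actual_size_um(image_size_mm, magnification):
    """Actual size in micrometres from an image size measured in millimetres."""
    return image_size_mm * 1000 / magnification

def magnification_from_sizes(image_size_mm, actual_um):
    """Magnification from the image size (mm) and the actual size (µm)."""
    return image_size_mm * 1000 / actual_um

print(actual_size_um(210, 2500))          # 84.0 µm, as in the first worked example
print(magnification_from_sizes(52, 130))  # 400.0x, as in the second worked example
```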
Light Microscopes
- Light rays travel through the specimen and 2 lenses
- The objective lens provides the initial magnification of the image
- The eyepiece lens magnifies and focuses the image
Electron Microscopes
- Transmission electron microscopes (T.E.M.) pass a beam of electrons through the specimen to produce an image on a fluorescent screen.
- Scanning electron microscopes (S.E.M.) scan a beam of electrons over the specimen.
- Electromagnets focus the image.
- Electrons are produced from a tungsten filament at the top of a column.
- The column is a vacuum. As a result, living specimens cannot be used.
- The preparation of specimens for microscopes can be drastic, and can produce artefacts. Artefacts are things you see under a microscope, but aren't actually there in real life. This could be due to something like an air bubble.
- Magnification refers to how much bigger the image is than the actual specimen.
- Resolution refers to how well a microscope distinguishes two different points that are close together. If a microscope can't separate two objects, then increasing the magnification won't help.
Below is a table comparing the different types of microscope.
|Depth of focus||Low||High||Medium|
|Field of view||Good||Good||Limited|
|Ease of specimen preparation||Easy||Fairly skilled||Skilled|
|Speed of specimen preparation||Rapid||Quite rapid||Slow|
Cell Fractionation
- Cell fractionation breaks apart cells and separates their organelles.
Step 1: Homogenisation
- This breaks open the cells.
- Usually done by vibrating the cells, or grinding them up in a blender.
- It is done with a cold, isotonic buffer:
- cold to slow down and stop organelle activity, particularly the hydrolytic enzymes in lysosomes
- isotonic to prevent the movement of water in and out of organelles by osmosis
- a buffer to prevent changes in pH levels.
Step 2: Filtration
- Filter the solution through a gauze to remove debris e.g. large cell debris or tissue debris.
Step 3: Ultracentrifugation
- Spin the solution in a centrifuge at a low speed.
- The heaviest organelles (nuclei, chloroplasts) fall to the bottom.
- The rest of the organelles stay suspended in the fluid above this sediment. This is the supernatant.
- The supernatant is drained off, poured into another tube, and spun again at a higher speed.
- This time, organelles like mitochondria and lysosomes fall to the bottom.
- Again, the supernatant is drained off, poured into another tube and spun at a higher speed.
- Finally, the lightest organelles remain.
Plasma Membranes
Plasma membranes are located around the edge of animal cells and surround the cytoplasm and other organelles. They are made up of a phospholipid bilayer, which consists of two layers of phospholipids with the hydrophilic heads on the outer surfaces and the hydrophobic tails pointing inwards. Lipid-soluble molecules can diffuse straight through the bilayer, and water can move through by osmosis as well. The phospholipid bilayer contains intrinsic and extrinsic proteins. The intrinsic proteins go all the way through the bilayer, whereas extrinsic proteins only go through the outer phospholipid layer. Extrinsic proteins are used for recognition of the cell and usually have a glycoprotein attached for the recognition. Intrinsic proteins are designed to allow molecules through; examples include the protein pump, the protein channel, and the gated protein channel.
Cholera
Cholera is a disease commonly found in dirty water. Once ingested, the bacterium sits in the epithelial lining of the gut and sets up a cotransporter with Na+ ions. This lowers the water potential of the gut, which means that water moves into the gut down the water potential gradient, so the person becomes constantly dehydrated and has loose, watery faeces. These effects can be countered with oral rehydration therapy. | https://en.m.wikibooks.org/wiki/A-level_Biology/Cells |
4 | There was little formal description of the corbicula before Carl Linnaeus explained the biological function of pollen in the mid-18th century. In English the first edition of Encyclopædia Britannica described the structure in 1771 without giving it any special name. The second edition, 1777, refers to the corbicula simply as the "basket". By 1802 William Kirby had introduced the Latin term corbicula into English. He had borrowed it, with acknowledgement, from Réaumur. This New Latin term, like many other Latin anatomical terms, had the advantages of specificity, international acceptability, and culture neutrality. By 1820 the term pollen-basket seems to have gained acceptance in beekeeping vernacular, though a century later a compendium of entomological terminology recognised pollen-plate and corbicula without including "pollen-basket". Yet another century, and authorities as eminent as the current authors of "Imms" included only the terms scopa and corbicula in the index, though they did include "pollen basket" in the text.
The New Latin term corbicula is a diminutive of corbis, a basket or pannier. Corbula (not a term used in entomology) is given as the Late Latin diminutive, but at least one dictionary simply lists corbicula as a very small basket.
Corbicula is the singular; its plural is corbiculae, reflecting the fact that in Latin the gender is feminine, but a troublesome confusion has arisen ever since at least one author c. 1866 assumed corbicula to be the plural of (an actually non-existent neuter form) corbiculum. The error has propagated through successive textbooks and reference works and troublesomely, it still is to be found as a minority misconception in modern publications.
Structure and function
Bees in four tribes of the family Apidae, subfamily Apinae: the honey bees, bumblebees, stingless bees, and orchid bees have corbiculae. The corbicula is a polished cavity surrounded by a fringe of hairs, into which the bee collects the pollen; most other bees possess a structure called the scopa, which is similar in function, but is a dense mass of branched hairs into which pollen is pressed, with pollen grains held in place in the narrow spaces between the hairs. A honey bee moistens the forelegs with its protruding tongue and brushes the pollen that has collected on its head, body and forward appendages to the hind legs. The pollen is transferred to the pollen comb on the hind legs and then combed, pressed, compacted, and transferred to the corbicula on the outside surface of the tibia of the hind legs. In Apis species, a single hair functions as a pin that secures the middle of the pollen load. Either honey or nectar is used to moisten the dry pollen, producing the product known as bee pollen or bee bread. The mixing of the pollen with nectar or honey changes the color of the pollen. The color of the pollen can help identify the pollen source.
- Society of Gentlemen in Scotland (1771). Encyclopaedia Britannica: Or, A Dictionary of Arts and Sciences, Compiled Upon a New Plan in which the Different Sciences and Arts are Digested Into Distinct Treatises Or Systems; and the Various Technical Terms, Etc., are Explained as They Occur in the Order of the Alphabet. Encyclopaedia Britannica. p. 89.
- Bees. J. Balfour and Company W. Gordon. 1778. p. 440.
- William Kirby (1802). Monographia Apum Angliae; Or, An Attempt to Divide Into Their Natural Genera and Families, Such Species of the Linnean Genus Apis as Have Been Discovered in England: with Descriptions and Observations. To which are Prefixed Some Introductory Remarks Upon the Class Hymenoptera, and a Synoptical Table of the Nomenclature of the External Parts of These Insects. With Plates. Vol. 1. [-2.]. By William Kirby .. p. 200.
- Of bees (1820). Royal Microscopical Society. p. 396.
- Smith, John. B. Explanation of terms used in entomology. Pub: Brooklyn Entomological Society 1906. May be downloaded from:
- Richards, O. W.; Davies, R.G. (1977). Imms' General Textbook of Entomology: Volume 1: Structure, Physiology and Development Volume 2: Classification and Biology. Berlin: Springer. ISBN 0-412-61390-5.
- Ainsworth, Robert; Eds: Morell, Thomas; Carey, John | An Abridgment of Ainsworth's Latin Dictionary, 13th ed. | London 1834
- Jaeger, Edmund Carroll (1959). A source-book of biological names and terms. Springfield, Ill: Thomas. ISBN 0-398-06179-3.
- William Young (1810). A new Latin-English dictionary: To which is prefixed an English-Latin dictionary. A. Wilson. pp. 22–.
- Alpheus Spring Packard (1868). Guide to the Study of Insects: And a Treatise on Those Injurious and Beneficial to Crops: for the Use of Colleges, Farm-schools, and Agriculturists. Naturalist's agency. p. 116.
- Frederick Augustus Porter Barnard; Arnold Guyot; A.J. Johnson & Co (1890). Johnson's universal cyclopædia: a scientific and popular treasury of useful knowledge. A.J. Johnson.
- George McGavin (1992). Insects of the northern hemisphere. Dragon's World. ISBN 978-1-85028-151-1.
- James L. Castner (2000). Photographic atlas of entomology and guide to insect identification. Feline Press. ISBN 978-0-9625150-4-0.
- George Gordh, Gordon Gordh, David Headrick, A Dictionary of Entomology, Science, 2003; 1040 pages; pg.713
- Bees (Hymenoptera: Apoidea: Apiformes) Encyclopedia of Entomology 2008. Vol. 2, pages 419–434
- Cedric Gillott, Entomology, Springer, 1995; 798 pages; pg. 79
- Dorothy Hodges, The Pollen Loads of the Honeybee, published by Bee Research Association, 1952 | https://en.wikipedia.org/wiki/Pollen_basket |
4.09375 | Activity 8-1: What Are the Issues?
Summary This activity is a class discussion about abortion. Students use a combination of ready-made and self-made scenarios to examine the moral and ethical issues of this potentially emotional subject.
✓ clarify their personal views on abortion.
✓ listen respectfully to the views of others.
- Activity Report (one copy per group)
- None if discussion is open to whole class
- One copy of Activity Report per group if group work is selected
The abortion issue can elicit emotional responses. Decide if you will handle the discussion as a whole class or by dividing the class into groups. If you divide the class into smaller groups, consider the groupings carefully.
If needed, define the term “moral issues.”
Estimated Time 40-45 minutes
This activity can be extended depending on class interest.
This activity has Guidance/Language Arts/Social Science/Science connections. It can be extended to include:
Language Arts or Social Studies Allow 2-3 days for groups to prepare an oral, written, or visual presentation on “Abortion: Is it ever right?” Limit oral presentations to 5 minutes. Groups should use factual information when possible to avoid a purely emotional response.
Prerequisites and Background Information
Introduce Activity 8-1 by asking the class what they think makes an issue controversial. Can they list some controversial items that have been in the news the last few years? How should people discuss issues when they have strongly held differing views? Have they seen examples in the news of bad ways of disagreeing? Have they seen examples of positive ways? Review the ground rules for class discussions. Since these are sensitive issues it is recommended that the class be reminded that everyone is entitled to their views and beliefs. People should be understanding and respectful of other people's views. Address the issues-do not attack the person.
Steps 1-2 Ask the students to respond to the following question: “Is it ever okay to have an abortion?” Then tell them to put their responses away. At the end of the class discussion they can refer back to their papers to see if they have changed their minds and why.
Step 3 Read the directions to the Activity Report with your students. The introduction to this activity gives these scenarios:
a. A woman is raped and later discovers she is pregnant.
b. A pregnant woman feels she is not yet prepared to give birth and raise a child.
c. The pregnant daughter of an abusive parent was told that she would be beaten “within an inch of your life” if she ever got pregnant.
d. A doctor warns a pregnant mother that giving birth again could be fatal to her.
e. A young, pregnant woman is told by her boyfriend that she must get an abortion or he will leave her.
Students are then asked to write at least one realistic scenario involving a pregnant woman. These should be collected and screened for inclusion in the discussions.
Step 4 Using any or all of these scenarios, lead the class in a discussion of the moral dilemma of abortion. For each scenario, bring the following questions into the discussion:
a. What are the moral issues involved? Is abortion “right” or “wrong”?
b. Who decides if something is “right” or “wrong”?
c. Where does freedom of personal choice stop and responsibility to others start?
d. Whose interests are at stake? The mother's? That of the fetus? Society? How do these conflicts get solved and by whom?
e. When does the fetus become a “person”? Who decides? How do they decide?
f. Do developing fetuses have a right to be born?
g. Since we are all entitled to our religious and moral beliefs, how do we decide what law must apply to all people?
An alternative to a whole-class discussion is a group discussion. Give each group a scenario to discuss among themselves, each in turn, as the rest of the class listens. The class can then ask the group to respond to their questions at the end of the group discussion. This will extend the class time and focus the attention of the class on a variety of scenarios.
Conclude Activity 8-1 by allowing the students about five minutes to refer back to the original question they answered. Did they change their minds? Why? This can be their private, quiet time to reflect, and to calm down before their next class period.
Use student responses during class discussions and the responses on the Activity Report to assess if students can
✓ identify the complexity of the issues surrounding abortion.
✓ explain the wide range of beliefs.
✓ explain why these questions have no "absolute" answers.
✓ identify the factors that affect an individual's opinion about abortion.
Write a letter to someone who is contemplating an abortion. In your letter, discuss your personal views about abortion.
- Sample answers to these questions will be provided upon request. Please send an email to firstname.lastname@example.org to request sample answers.
- What are two common side effects of the IUD?
- At what point in pregnancy do abortions become illegal in this country?
- What does pro-life mean? What does pro-choice mean?
Activity 8-1 Report: What Are the Issues? (Student Reproducible)
Your teacher will set the “ground rules” for your group discussion on abortion. Then, your teacher will assign your group a scenario involving a pregnant woman who must make a decision regarding abortion.
Think about and discuss the following as they apply to the scenario you are given:
- What are the moral issues involved? By choosing or not choosing to have an abortion is she making the “right” or “wrong” choice?
- Who decides if this choice is “right” or “wrong”?
- Where does freedom of personal choice stop and responsibility to others begin?
- Whose interests are involved? The mother's? That of the fetus? Society? How do these conflicts get resolved?
- When does the fetus become a “person”? Who decides? How?
- Does this developing fetus have a right to be born?
- Since we are all entitled to our religious and moral beliefs, how do we decide what law must apply to all people?
- Is it ever OK to have an abortion?
There are no easy answers to these questions. Your generation, like the present generation of adults, has to face the challenging issues of abortion. So, give serious thought to these issues now. | http://www.ck12.org/tebook/Human-Biology-Reproduction-Teacher%2527s-Guide/section/9.3/ |
4.09375 | The Continental Army was formed, after the outbreak of the American Revolutionary War, by the Second Continental Congress of the colonies that became the United States of America. Established by a resolution of the Continental Congress on June 14, 1775, it was created to coordinate the military efforts of the Thirteen Colonies in their revolt against the rule of Great Britain. The Continental Army was supplemented by local militias and troops that remained under control of the individual states or were otherwise independent. General George Washington was the commander-in-chief of the army throughout the war.
Most of the Continental Army was disbanded in 1783 after the Treaty of Paris ended the war. The 1st and 2nd Regiments went on to form the nucleus of the Legion of the United States in 1792 under General Anthony Wayne. This became the foundation of the United States Army in 1796.
The Continental Army consisted of soldiers from all 13 colonies, and after 1776, from all 13 states. When the American Revolutionary War began at the Battles of Lexington and Concord on April 19th 1775, the colonial revolutionaries did not have an army. Previously, each colony had relied upon the militia, made up of part-time citizen-soldiers, for local defense, or the raising of temporary "provincial regiments" during specific crises such as the French and Indian War of 1754-1763. As tensions with Great Britain increased in the years leading to the war, colonists began to reform their militias in preparation for the perceived potential conflict. Training of militiamen increased after the passage of the Intolerable Acts in 1774. Colonists such as Richard Henry Lee proposed forming a national militia force, but the First Continental Congress rejected the idea.
The minimum enlistment age was 16, or 15 with parental consent.
On April 23, 1775, the Massachusetts Provincial Congress authorized the raising of a colonial army consisting of 26 company regiments. New Hampshire, Rhode Island, and Connecticut soon raised similar but smaller forces. On June 14, 1775, the Second Continental Congress decided to proceed with the establishment of a Continental Army for purposes of common defense, adopting the forces already in place outside Boston (22,000 troops) and New York (5,000). It also raised the first ten companies of Continental troops on a one-year enlistment, riflemen from Pennsylvania, Maryland, Delaware and Virginia to be used as light infantry, who became the 1st Continental Regiment in 1776. On June 15, 1775, the Congress elected by unanimous vote George Washington as Commander-in-Chief, who accepted and served throughout the war without any compensation except for reimbursement of expenses.
Four major-generals (Artemas Ward, Charles Lee, Philip Schuyler, and Israel Putnam) and eight brigadier-generals (Seth Pomeroy, Richard Montgomery, David Wooster, William Heath, Joseph Spencer, John Thomas, John Sullivan, and Nathanael Greene) were appointed by the Second Continental Congress in the course of a few days. After Pomeroy did not accept, John Thomas was appointed in his place.
As the Continental Congress increasingly adopted the responsibilities and posture of a legislature for a sovereign state, the role of the Continental Army became the subject of considerable debate. Some Americans had a general aversion to maintaining a standing army; but on the other hand the requirements of the war against the British required the discipline and organization of a modern military. As a result, the army went through several distinct phases, characterized by official dissolution and reorganization of units.
Soldiers in the Continental Army were citizens who had volunteered to serve in the army (but were paid), and at various times during the war, standard enlistment periods lasted from one to three years. Early in the war the enlistment periods were short, as the Continental Congress feared the possibility of the Continental Army evolving into a permanent army. The army never numbered more than 17,000 men. Turnover proved a constant problem, particularly in the winter of 1776-77, and longer enlistments were approved. Broadly speaking, Continental forces consisted of several successive armies, or establishments:
- The Continental Army of 1775, comprising the initial New England Army, organized by Washington into three divisions, six brigades, and 38 regiments. Major General Philip Schuyler's ten regiments in New York were sent to invade Canada.
- The Continental Army of 1776, reorganized after the initial enlistment period of the soldiers in the 1775 army had expired. Washington had submitted recommendations to the Continental Congress almost immediately after he had accepted the position of Commander-in-Chief, but the Congress took time to consider and implement these. Despite attempts to broaden the recruiting base beyond New England, the 1776 army remained skewed toward the Northeast both in terms of its composition and of its geographical focus. This army consisted of 36 regiments, most standardized to a single battalion of 768 men strong and formed into eight companies, with a rank-and-file strength of 640.
- The Continental Army of 1777-80 evolved out of several critical reforms and political decisions that came about when it became apparent that the British were sending massive forces to put an end to the American Revolution. The Continental Congress passed the "Eighty-eight Battalion Resolve", ordering each state to contribute one-battalion regiments in proportion to their population, and Washington subsequently received authority to raise an additional 16 battalions. Enlistment terms extended to three years or to "the length of the war" to avoid the year-end crises that depleted forces (including the notable near-collapse of the army at the end of 1776, which could have ended the war in a Continental, or American, loss by forfeit).
- The Continental Army of 1781-82 saw the greatest crisis on the American side in the war. Congress was bankrupt, making it very difficult to replenish the soldiers whose three-year terms had expired. Popular support for the war reached an all-time low, and Washington had to put down mutinies both in the Pennsylvania Line and in the New Jersey Line. Congress voted to cut funding for the Army, but Washington managed nevertheless to secure important strategic victories.
- The Continental Army of 1783-84 was succeeded by the United States Army, which persists to this day. As peace was restored with the British, most of the regiments were disbanded in an orderly fashion, though several had already been diminished.
In addition to the Continental Army regulars, local militia units, raised and funded by individual colonies/states, participated in battles throughout the war. Sometimes the militia units operated independently of the Continental Army, but often local militias were called out to support and augment the Continental Army regulars during campaigns. (The militia troops developed a reputation for being prone to premature retreats, a fact that Brigadier-General Daniel Morgan integrated into his strategy at the Battle of Cowpens in 1781.)
The financial responsibility for providing pay, food, shelter, clothing, arms, and other equipment to specific units was assigned to states as part of the establishment of these units. States differed in how well they lived up to these obligations. There were constant funding issues and morale problems as the war continued. This led to the army offering low pay, often rotten food, hard work, cold, heat, poor clothing and shelter, harsh discipline, and a high chance of becoming a casualty.
At the time of the Siege of Boston, the Continental Army at Cambridge, Massachusetts, in June 1775, is estimated to have numbered from 14,000 to 16,000 men from New England (though the actual number may have been as low as 11,000 because of desertions). Until Washington's arrival, it remained under the command of Artemas Ward, while John Thomas acted as executive officer and Richard Gridley commanded the artillery corps and served as chief engineer.
The British force in Boston was increasing by fresh arrivals. It numbered then about 10,000 men. Major Generals Howe, Clinton, and Burgoyne, had arrived late in May and joined General Gage in forming and executing plans for dispersing the rebels. Feeling strong with these veteran officers and soldiers around him—and the presence of several ships-of-war under Admiral Graves—the governor issued a proclamation, declaring martial law, branding the entire Continental Army and supporters as "rebels" and "parricides of the Constitution." Amnesty was offered to those who gave up their allegiance to the Continental Army and Congress in favor of the British authorities, though Samuel Adams and John Hancock were still wanted for high treason. This proclamation only served to strengthen the resolve of the Congress and Army.
After the British evacuation of Boston (prompted by the placement of Continental artillery overlooking the city in March 1776), the Continental Army relocated to New York. For the next five years, the main bodies of the Continental and British armies campaigned against one another in New York, New Jersey, and Pennsylvania. These campaigns included the notable battles of Trenton, Princeton, Brandywine, Germantown, and Morristown, among many others.
The Continental Army was racially integrated, a condition the United States Army would not see again until the Korean War. African American slaves were promised freedom in exchange for military service in New England, and made up one fifth of the Northern Continental Army.
Throughout its existence, the Army was troubled by poor logistics, inadequate training, short-term enlistments, interstate rivalries, and Congress's inability to compel the states to provide food, money or supplies. In the beginning, soldiers enlisted for a year, largely motivated by patriotism; but as the war dragged on, bounties and other incentives became more commonplace. Two major mutinies late in the war drastically diminished the reliability of two of the main units, and there were constant discipline problems.
The army increased its effectiveness and success rate through a series of trials and errors, often at great human cost. General Washington and other distinguished officers were instrumental leaders in preserving unity, learning and adapting, and ensuring discipline throughout the eight years of war. In the winter of 1777-1778, with the addition of Baron von Steuben, of Prussian origin, the training and discipline of the Continental Army began to vastly improve. (This was the infamous winter at Valley Forge.) Washington always viewed the Army as a temporary measure and strove to maintain civilian control of the military, as did the Continental Congress, though there were minor disagreements about how this was carried out.
Near the end of the war, the Continental Army was augmented by a French expeditionary force (under General Rochambeau) and a squadron of the French navy (under the Comte de Barras), and in the late summer of 1781 the main body of the army travelled south to Virginia to rendezvous with the French West Indies fleet under Admiral Comte de Grasse. This resulted in the Siege of Yorktown, the decisive Battle of the Chesapeake, and the surrender of the British southern army. This essentially marked the end of the land war in America, although the Continental Army returned to blockade the British northern army in New York until the peace treaty went into effect two years later, and battles took place elsewhere between British forces and those of France and its allies.
Planning for the transition to a peacetime force had begun in April 1783 at the request of a congressional committee chaired by Alexander Hamilton. The commander-in-chief discussed the problem with key officers before submitting the army's official views on 2 May. Significantly, there was a broad consensus of the basic framework among the officers. Washington's proposal called for four components: a small regular army, a uniformly trained and organized militia, a system of arsenals, and a military academy to train the army's artillery and engineer officers. He wanted four infantry regiments, each assigned to a specific sector of the frontier, plus an artillery regiment. His proposed regimental organizations followed Continental Army patterns but had a provision for increased strength in the event of war. Washington expected the militia primarily to provide security for the country at the start of a war until the regular army could expand—the same role it had carried out in 1775 and 1776. Steuben and Duportail submitted their own proposals to Congress for consideration.
Although Congress declined on 12 May to make a decision on the peace establishment, it did address the need for some troops to remain on duty until the British evacuated New York City and several frontier posts. The delegates told Washington to use men enlisted for fixed terms as temporary garrisons. A detachment of those men from West Point reoccupied New York without incident on November 25. When Steuben's effort in July to negotiate a transfer of frontier forts with Major General Frederick Haldimand collapsed, however, the British maintained control over them, as they would into the 1790s. That failure and the realization that most of the remaining infantrymen's enlistments were due to expire by June 1784 led Washington to order Knox, his choice as the commander of the peacetime army, to discharge all but 500 infantry and 100 artillerymen before winter set in. The former regrouped as Jackson's Continental Regiment under Colonel Henry Jackson of Massachusetts. The single artillery company, New Yorkers under John Doughty, came from remnants of the 2nd Continental Artillery Regiment.
Congress issued a proclamation on October 18, 1783 which approved Washington's reductions. On November 2 Washington then released his Farewell Order to the Philadelphia newspapers for nationwide distribution to the furloughed men. In the message he thanked the officers and men for their assistance and reminded them that "the singular interpositions of Providence in our feeble condition were such, as could scarcely escape the attention of the most unobserving; while the unparalleled perseverance of the Armies of the United States, through almost every possible suffering and discouragement for the space of eight long years, was little short of a standing miracle."
Washington believed that the blending of persons from every colony into "one patriotic band of Brothers" had been a major accomplishment, and he urged the veterans to continue this devotion in civilian life.
Washington said farewell to his remaining officers on December 4 at Fraunces Tavern in New York City. On December 23 he appeared in Congress, then sitting at Annapolis, and returned his commission as commander-in-chief: "Having now finished the work assigned me, I retire from the great theatre of Action; and bidding an Affectionate farewell to this August body under whose orders I have so long acted, I here offer my Commission, and take my leave of all the employments of public life." Congress ended the War of American Independence on January 14, 1784 by ratifying the definitive peace treaty that had been signed in Paris on September 3.
Congress had again rejected Washington's concept for a peacetime force in October 1783. When moderate delegates then offered an alternative in April 1784 which scaled the projected army down to 900 men in one artillery and three infantry battalions, Congress rejected it as well, in part because New York feared that men retained from Massachusetts might take sides in a land dispute between the two states. Another proposal to retain 350 men and raise 700 new recruits also failed. On June 2 Congress ordered the discharge of all remaining men except twenty-five caretakers at Fort Pitt and fifty-five at West Point. The next day it created a peace establishment acceptable to all interests.
The plan required four states to raise 700 men for one year's service. Congress instructed the Secretary at War to form the troops into eight infantry and two artillery companies. Pennsylvania, with a quota of 260 men, had the power to nominate a lieutenant colonel, who would be the senior officer. New York and Connecticut each were to raise 165 men and nominate a major; the remaining 110 men came from New Jersey. Economy was the watchword of this proposal, for each major served as a company commander, and line officers performed all staff duties except those of chaplain, surgeon, and surgeon's mate. Under Josiah Harmar, the First American Regiment slowly organized and achieved permanent status as an infantry regiment of the new Regular Army. The lineage of the First American Regiment is carried on by the 3rd United States Infantry Regiment (The Old Guard).
However the United States military realised it needed a well-trained standing army following St. Clair's Defeat on November 4, 1791, when a force led by General Arthur St. Clair was almost entirely wiped out by the Western Confederacy near Fort Recovery, Ohio. The plans, which were supported by U.S. President George Washington and Henry Knox, Secretary of War, led to the disbandment of the Continental Army and the creation of the Legion of the United States. The command would be based on the 18th-century military works of Henry Bouquet, a professional Swiss soldier who served as a colonel in the British army, and French Marshal Maurice de Saxe. In 1792 Anthony Wayne, a renowned hero of the American Revolutionary War, was encouraged to leave retirement and return to active service as Commander-in-Chief of the Legion with the rank of Major General.
The legion was recruited and raised in Pittsburgh, Pennsylvania. It was formed into four sub-legions: elements of the 1st and 2nd Regiments from the Continental Army became the First and Second Sub-Legions, while the Third and Fourth Sub-Legions were raised from further recruits. From June 1792 to November 1792, the Legion remained cantoned at Fort LaFayette in Pittsburgh. Throughout the winter of 1792-93, existing troops along with new recruits were drilled in military skills, tactics and discipline at Legionville on the banks of the Ohio River near present-day Baden, Pennsylvania. The following spring the newly named Legion of the United States left Legionville for the Northwest Indian War, a struggle between the United States and the American Indian tribes affiliated with the Western Confederacy in the area south of the Ohio River. The overwhelmingly successful campaign concluded with the decisive victory at the Battle of Fallen Timbers on August 20, 1794, where Maj. Gen. Anthony Wayne applied the techniques of wilderness operations perfected by Sullivan's 1779 expedition against the Iroquois. The training the troops received at Legionville was also seen as instrumental to this victory.
Nevertheless, Steuben's Blue Book remained the official manual for the legion, as well as for the militia of most states, until it was superseded by Winfield Scott's drill manual in 1835. In 1796, the United States Army was raised following the discontinuation of the Legion of the United States. This preceded the graduation of the first cadets from the United States Military Academy at West Point, New York, which was established in 1802.
"As the Continental Army has unfortunately no uniforms, and consequently many inconveniences must arise from not being able to distinguish the commissioned officers from the privates, it is desired that some badge of distinction be immediately provided; for instance that the field officers may have red or pink colored cockades in their hats, the captains yellow or buff, and the subalterns green."
Later on in the war, the Continental Army established its own uniform with a black cockade (as used in much of the British Army) among all ranks and the following insignia:
[Table: Ranks and insignia of the Continental Army. Ranks listed: Major general, Brigadier general, Colonel, Lieutenant colonel, Aide-de-camp, Major, Captain, Subaltern, Lieutenant, Ensign, Sergeant Major, Sergeant, Corporal, Private. The insignia columns (gold-trimmed jackets, silver or gold epaulets, hats with colored cockades, red or no epaulets) did not survive extraction intact and are not reconstructed here.]
- Siege of Boston
- Battle of Long Island
- Battle of Harlem Heights
- Battle of Trenton
- Battle of Princeton
- Battle of Brandywine
- Battle of Germantown
- Battle of Saratoga
- Battle of Monmouth
- Siege of Charleston
- Battle of Camden
- Battle of Cowpens
- Battle of Guilford Court House
- Siege of Yorktown
- Departments of the Continental Army
- Continental Navy
- Pluckemin Continental Artillery Cantonment Site
- History of the United States Army
- Peter Francisco, Revolutionary War soldier and hero
- Middlebrook encampment in Middlebrook, New Jersey, winter of 1776–77 and winter of 1778–79
- Valley Forge in Valley Forge, Pennsylvania, winter of 1777–78 | https://en.wikipedia.org/wiki/Continental_Army |
4.21875 | Remember Tania and Alex and the garden in the Frequency Tables to Organize and Display Data Concept? Tania had her hands full trying to figure out how many workers were in the garden on which days. Tania has a frequency table, but how can she make a visual display of the data?
[Frequency table: # of People Working vs. Frequency (the values, from the garden data in the earlier concept, are not preserved here)]
Using this frequency table, how can Tania make a line plot?
A line plot is another display method we can use to organize data.
Like a frequency table, it shows how many times each number appears in the data set. Instead of putting the information into a table, however, we graph it on a number line. Line plots are especially useful when the data falls over a large range. Take a look at the data and the line plot below.
This data represents the number of students in each class at a local community college.
30, 31, 31, 31, 33, 33, 33, 33, 37, 37, 38, 40, 40, 41, 41, 41
The first thing that we might do is to organize this data into a frequency table. That will let us know how often each number appears.
|# of students||Frequency|
|30||1|
|31||3|
|33||4|
|37||2|
|38||1|
|40||2|
|41||3|
Now if we look at this data, we can make a couple of conclusions.
- The range of students in each class is from 30 to 41.
- There aren’t any classes with 32, 34, 35, 36 or 39 students in them.
Now that we have a frequency table, we can build a line plot to show this same data.
Building the line plot involves counting the number of classes at each size and then plotting that information on a number line. We use an X above each value to represent every class with that many students.
Notice that even though we didn't have a class with 32 students in it, we still had to include that number on the number line. This is very important: each value in the range of numbers needs to be represented, even if its frequency is 0.
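Here is a minimal Python sketch of the same process, building the frequency counts and a simple text version of the line plot from the class-size data above:

```python
from collections import Counter

class_sizes = [30, 31, 31, 31, 33, 33, 33, 33, 37, 37, 38, 40, 40, 41, 41, 41]
counts = Counter(class_sizes)

# Every value in the range appears on the number line, even when its count is 0.
for size in range(min(class_sizes), max(class_sizes) + 1):
    print(f"{size:>2} | {'X' * counts[size]}")
```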
Now let's use this information to answer a few questions.
How many classes have 31 students in them?
How many classes have 38 students in them?
How many classes have 33 students in them?
Now Tania can take the frequency table and make a line plot for the farm.
[Frequency table: # of People Working vs. Frequency (the values, from the garden data in the earlier concept, are not preserved here)]
Now, let’s draw a line plot to show the data in another way.
Now that we have the visual representations of the data, it is time to draw some conclusions.
Remember that Tania and Alex know that there needs to be at least three people working on any given day. By analyzing the data, you can see that there are five days when there are only one or two people working. With the new data, Tania and Alex call a meeting of all of the workers. When they display the data, it is clear why everything isn’t getting done. Together, they are able to figure out which days need more people, and they solve the problem.
- Frequency: how often something occurs
- Data: information about something or someone, usually in number form
- Analyze: to look at data and draw conclusions based on patterns or numbers
- Frequency table: a table or chart that shows how often something occurs
- Line plot: data that shows frequency by graphing data over a number line
- Organized data: data that is listed in numerical order
Here is one for you to try on your own.
Jeff counted the number of ducks he saw swimming in the pond each morning on his way to school. Here are his results:
6, 8, 12, 14, 5, 6, 7, 8, 12, 11, 12, 5, 6, 6, 8, 11, 8, 7, 6, 13
Jeff’s data is unorganized. It is not written in numerical order. When we have unorganized data, the first thing that we need to do is to organize it in numerical order.
5, 5, 6, 6, 6, 6, 6, 7, 7, 8, 8, 8, 8, 11, 11, 12, 12, 12, 13, 14
Next, we can make a frequency table. There are two columns in the frequency table. The first is the number of ducks and the second is how many times each number of ducks was on the pond. The second column is the frequency of each number of ducks.
|Number of Ducks||Frequency|
|5||2|
|6||5|
|7||2|
|8||4|
|11||2|
|12||3|
|13||1|
|14||1|
Now that we have a frequency table, the next step is to make a line plot. Then we will have two ways of examining the same data. Here is a line plot that shows the duck information.
Here are some things that we can observe by looking at both methods of displaying data:
- In both, the range of numbers is shown. There were between 5 and 14 ducks seen, so each number from 5 to 14 is represented.
- There weren’t any days where 9 or 10 ducks were counted, yet both are represented because they fall in the range of ducks counted.
- Both methods help us to visually understand data and its meaning.
http://www.hstutorials.net/math/preAlg/php/php_12/php_12_01_x13.htm – Solving a problem using frequency tables and line plots.
Directions: Here is a line plot that shows how many seals came into the harbor in La Jolla California during an entire month. Use it to answer the following questions.
1. How many times did thirty seals appear on the beach?
2. Which two categories have the same frequency?
3. How many times were 50 or more seals counted on the beach?
4. True or False. This line plot shows us the number of seals that came on each day of the month.
5. True or False. There weren’t any days that less than 30 seals appeared on the beach.
6. How many times were 60 seals on the beach?
7. How many times were 70 seals on the beach?
8. What is the smallest number of seals that was counted on the beach?
9. What is the greatest number of seals that were counted on the beach?
10. Does the frequency table show any number of seals that weren't counted at all?
Directions: Organize each list of data. Then create a frequency table to show the results. There are two answers for each question.
11. 8, 8, 2, 2, 2, 2, 2, 5, 6, 3, 3, 4
12. 20, 18, 18, 19, 19, 19, 17, 17, 17, 17, 17
13. 100, 99, 98, 92, 92, 92, 92, 92, 92, 98, 98
14. 75, 75, 75, 70, 70, 70, 70, 71, 72, 72, 72, 74, 74, 74
15. 1, 1, 1, 1, 2, 2, 2, 3, 3, 5, 5, 5, 5, 5, 5, 5 | http://www.ck12.org/statistics/Line-Plots-from-Frequency-Tables/lesson/Line-Plots-from-Frequency-Tables/r17/ |
4.09375 | Slowing the spin of the Earth
The moon's gravity deforms the Earth, causing bulges in land and sea (tides). As the Earth turns, it carries those bulges slightly ahead of the line to the moon, and the moon's gravity pulls back on them. This effect, known as tidal friction, slows down the planet's rotation ever so slightly.
Days grow longer as the minuscule braking adds up over hundreds of millions of years:
How much does the Earth slow down every year?
AN ANALOGY TO PUT THINGS INTO PERSPECTIVE:
• Imagine that the 2,445-mile distance between Washington and San Francisco represents today's day length of 24 hours.
• 200 million years ago, an analogous distance for day length would stretch only from San Francisco to near the West Virginia/Virginia border.
• Every 100 years, the length of a day increases by 0.002 seconds — or 3.23 inches farther down the road to Washington. The annual gain is an infinitesimal 0.00002 seconds, which on our analogous cross-country trip would amount to 0.82 millimeters, a little more than the thickness of a thumbnail.
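The arithmetic behind the analogy is plain proportionality. The Python sketch below reproduces it from the figures quoted above; the exact inches-and-millimeters results depend on the assumed route distance and braking rate, so they land close to, but not exactly on, the rounded numbers in the graphic.

```python
MILES_DC_TO_SF = 2445            # analogy: this distance stands for one 24-hour day
SECONDS_PER_DAY = 24 * 60 * 60
INCHES_PER_MILE = 63360
MM_PER_INCH = 25.4

# How far along the "road" one second of extra day length corresponds to.
inches_per_second = MILES_DC_TO_SF * INCHES_PER_MILE / SECONDS_PER_DAY

gain_per_century = 0.002         # seconds of day length gained per 100 years
gain_per_year = gain_per_century / 100

print(f"per century: {gain_per_century * inches_per_second:.2f} inches")
print(f"per year:    {gain_per_year * inches_per_second * MM_PER_INCH:.2f} mm")
```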
[Diagram: the rotating Earth carries the tidal bulge ahead of the Earth-moon axis, and the moon's gravity pulls back on the displaced bulge. Labels: "Axis of bulge," "Axis of moon's gravity," "Tidal bulge (greatly exaggerated)," "Moon's gravity causes tides." Distances not to scale.] | http://apps.washingtonpost.com/g/page/national/slowing-the-spin-of-the-earth/471/ |
4.28125 | Insect Structure and Function
For illustrations to accompany this article see Insect Structure and Function
The arthropods are a large group of invertebrate animals which include insects, spiders, millipedes, centipedes and crustacea such as lobsters and crabs. All arthropods have a hard exoskeleton or cuticle, segmented bodies and jointed legs. The crustacea and insects also have antennae, compound eyes and, often, three distinct regions to their bodies: head, thorax and abdomen.
General Characteristics of Insects
The insects differ from the rest of the arthropods in having only three pairs of jointed legs on the thorax and, typically, two pairs of wings. There are a great many different species of insects and some, during evolution, have lost one pair of wings, as in the houseflies, crane flies and mosquitoes. Other parasitic species like the fleas have lost both pairs of wings. In beetles, grasshoppers and cockroaches, the first pair of wings has become modified to form a hard outer covering over the second pair.
Cuticle and ecdysis. The value of the external cuticle is thought to lie mainly in reducing the loss from the body of water vapour through evaporation, but it also protects the animal from damage and bacterial invasion, maintains its shape and allows rapid locomotion. The cuticle imposes certain limitations in size, however, for if arthropods were to exceed the size of some of the larger crabs, the cuticle would become too heavy for the muscles to move the limbs.
Between the segments of the body and at the joints of the limbs and other appendages, the cuticle is flexible and allows movement. For the most part, however, the cuticle is rigid and prevents any increase in the size of the insect except during certain periods of its development when the insect sheds its cuticle (ecdysis) and increases its volume before the new cuticle has time to harden. Only the outermost layer of the cuticle is shed, the inner layers are digested by enzymes secreted from the epidermis and the fluid so produced is absorbed back into the body. Muscular contractions force the blood into the thorax, causing it to swell and so split the old cuticle along a predetermined line of weakness. The swallowing of air often accompanies ecdysis; assisting the splitting of the cuticle and keeping the body expanded while the new cuticle hardens. In insects, this moulting, or ecdysis, takes place only in the larval and pupal form and not in adults. In other words, mature insects do not grow.
Breathing. Running through the bodies of all insects is a branching system of tubes, the tracheae, which contain air. They open to the outside by pores called spiracles and conduct air from the atmosphere to all living regions of the body. The tracheae are lined with cuticle which is thickened in spiral bands. This thickening keeps the tracheae open against the internal pressure of body fluids. The spiracles typically open on the flanks of each segment of the body, but in some insects there are only one or two openings. The entrance to the spiracle is usually supplied with muscles which control its opening or closure. Since the spiracles are one of the few areas of the body from which evaporation of water can occur, the closure of the spiracles when the insect is not active, and therefore needs less oxygen, helps to conserve moisture. The tracheae branch repeatedly until they terminate in very fine tracheoles which invest or penetrate the tissues and organs inside the body. The walls of tracheae and tracheoles are permeable to gases, and oxygen is able to diffuse through them to reach the living cells. As might be expected, the supply of tracheoles is most dense in the region of very active muscle, e.g. the flight muscles in the thorax.
The movement of oxygen from the atmosphere, through the spiracles, up the tracheae and tracheoles to the tissues, and the passage of carbon dioxide in the opposite direction, can be accounted for by simple diffusion but in active adult insects there is often a ventilation process which exchanges up to 60 per cent of the air in the tracheal system. In many beetles, locusts, grasshoppers and cockroaches, the abdomen is slightly compressed vertically (dorso-ventrally) by contraction of internal muscles. In bees and wasps the abdomen is compressed rhythmically along its length, slightly telescoping the segments. In both cases, the consequent rise of blood pressure in the body cavity compresses the tracheae along their length (like a concertina) and expels air from them. When the muscles relax, the abdomen springs back into shape, the tracheae expand and draw in air. Thus, unlike mammals, the positive muscular action in breathing is that which results in expiration.
This tracheal respiratory system is very different from the respiratory systems of the vertebrates, in which oxygen is absorbed by gills or lungs and conveyed in the blood stream to the tissues. In the insects, the oxygen diffuses through the tracheae and tracheoles directly to the organ concerned. The carbon dioxide escapes by the same path, although a proportion may diffuse from the body surface.
Blood system. The tracheal supply carrying oxygen to the organs gives the circulatory system a rather different role in insects from that in vertebrates. Except where the tracheoles terminate at some distance from a cell, the blood has little need to carry dissolved oxygen and, with a few exceptions, it contains no haemoglobin or cells corresponding to red blood cells. There is a single dorsal vessel which propels blood forward and releases it into the body cavity, thus maintaining a sluggish circulation. Apart from this vessel, the blood is not confined in blood vessels but occupies the free space between the cuticle and the organs in the body cavity. The blood therefore serves mainly to distribute digested food, collect excretory products and, in addition, has important hydraulic functions in expanding certain regions of the body to split the old cuticle and in pumping up the crumpled wings of the newly emerged adult insect.
Touch. From the body surface of the insect there arises a profusion of fine bristles most of which have a sensory function, responding principally to touch, vibration, or chemicals. The tactile (touch-sensitive) bristles are jointed at their bases and when a bristle is displaced to one side, it stimulates a sensory cell which fires impulses to the central nervous system.
The tactile bristles are numerous on the tarsal segments, the head, wing margins, or antennae according to the species and as well as informing the insect about contact stimuli, they probably respond to air currents and vibrations in the ground or in the air.
Proprioceptors. Small oval or circular areas of cuticle are differentially thickened and supplied with sensory fibres. They probably respond to distortions in the cuticle resulting from pressure, and so feed back information to the central nervous system about the position of the limbs. Organs of this kind respond to deflections of the antennae during flight and are thought to "measure" the air speed and help to adjust the wing movements accordingly. In some insects there are stretch receptors associated with muscle fibres, apparently similar to those in vertebrates.
Sound. The tactile bristles on the cuticle and on the antennae respond to low-frequency vibrations, but many insects have more specialized sound detectors in the form of a thin area of cuticle overlying a distended trachea or air sac and invested with sensory fibres. Such tympanal organs appear on the thorax or abdomen or tibia according to species and are sensitive to sounds of high frequency. They can be used to locate the source of sounds, as in the case of the female cricket "homing" on the sound of the male's "chirp", and in some cases can distinguish between sounds of different frequency.
Smell and taste. Experiments show that different insects can distinguish between chemicals which we describe as sweet, sour, salt and bitter, and in some cases more specific substances. The organs of taste are most abundant on the mouthparts, in the mouth, and on the tarsal segments but the nature of the sense organs concerned is not always clear.
Smell is principally the function of the antennae. Here there are bristles, pegs or plates with a very thin cuticle and fine perforations through which project nerve endings sensitive to chemicals. Sometimes these sense organs are grouped together and sunk into olfactory pits. In certain moths the sense of smell is very highly developed. The male Emperor moth will fly to an unmated female from a distance of a mile, attracted by the "scent" which she exudes. A male moth's antennae may carry many thousand chemo-receptors.
Sight. The compound eyes of insects consist of thousands of identical units called ommatidia packed closely together on each side of the head. Each ommatidium consists of a lens system formed partly from a thickening of the transparent cuticle and partly from a special crystalline cone. This lens system concentrates light from within a cone of 20°, on to a transparent rod, the rhabdom. The light, passing down this rhabdom, stimulates the eight or so retinal cells grouped round it to fire nervous impulses to the brain. Each ommatidium can therefore record the presence or absence of light, its intensity, in some cases its colour and, according to the position of the ommatidium in the compound eye, its direction. Although there may be from 2000 to 10,000 or more ommatidia in the compound eye of an actively flying insect, this number cannot reconstruct a very accurate picture of the outside world. Nevertheless, the "mosaic image" so formed, probably produces a crude impression of the form of well-defined objects enabling bees, for example, to seek out flowers and to use landmarks for finding their way to and from the hive. It is likely that the construction of compound eyes makes them particularly sensitive to moving objects, e.g. bees are more readily attracted to flowers which are being blown by the wind.
Flower-visiting insects, at least, can distinguish certain colours from shades of grey of equal brightness. Bees are particularly sensitive to blue, violet and ultra-violet but cannot distinguish red and green from black and grey unless the flower petals are reflecting ultra-violet light as well. Some butterflies can distinguish yellow, green and red. The simple eyes of, for example, caterpillars, consist of a cuticular lens with a group of light-sensitive cells beneath, rather like a single ommatidium. They show some colour sensitivity and, when grouped together, some ability to discriminate form. The ocelli which occur in the heads of many flying insects probably respond only to changes in light intensity.
Movement in insects depends, as it does in vertebrates, on muscles contracting and pulling on jointed limbs or other appendages. The muscles are within the body and limbs, however, and are attached to the inside of the cuticle. A pair of antagonistic muscles is attached across a joint in a way which can bend and straighten the limb. Many of the joints in the insect are of the "peg and socket" type. They permit movement in one plane only, like a hinge joint, but since there are several such joints in a limb, each operating in a different direction, the limb as a whole can describe fairly free directional movement.
Walking. The characteristic walking pattern of an insect involves moving three legs at a time. The body is supported by a "tripod" of three legs while the other three are swinging forward to a new position. On the last tarsal joint are claws and, depending on the species, adhesive pads which enable the insect to climb very smooth surfaces. The precise mechanism of adhesion is uncertain. Modification of the limbs and their musculature enables insects to leap, e.g. grasshopper, or swim, e.g. water beetles.
Flying. In insects with relatively light bodies and large wings such as butterflies and dragonflies, the wing muscles in the thorax pull directly on the wing where it is articulated to the thorax, levering it up and down. Insects such as bees, wasps and flies, with compact bodies and a smaller wing area have indirect flight muscles which elevate and depress the wings very rapidly by pulling on the walls of the thorax and changing its shape. In both cases there are direct flight muscles which, by acting on the wing insertion, can alter its angle in the air. During the downstroke the wing is held horizontally, so thrusting downwards on the air and producing a lifting force. During the upstroke the wing is rotated vertically and offers little resistance during its upward movement through the air.
It is not possible to make very useful generalizations about the feeding methods of insects because they are so varied. However, insects do have in common three pairs of appendages called mouth parts, hinged to the head below the mouth and these extract or manipulate food in one way or another. The basic pattern of these mouth parts is the same in most insects but in the course of evolution they have become modified and adapted to exploit different kinds of food source. The least modified are probably those of insects such as caterpillars, grasshoppers, locusts and cockroaches in which the first pair of appendages, mandibles, form sturdy jaws, working sideways across the mouth and cutting off pieces of vegetation which are manipulated into the mouth by the other mouth parts, the maxillae and labium.
Aphids are small insects (e.g. greenfly) which feed on plant juices that they suck from leaves and stems. Their mouthparts are greatly elongated to form a piercing and sucking proboscis. The maxillae fit together to form a tube which can be pushed into plant tissues to reach the food-conducting vessels of the phloem and so extract nutrients.
The mosquito has mandibles and maxillae in the form of slender, sharp stylets which can cut through the skin of a mammal as well as penetrating plant tissues. To obtain a blood meal the mosquito inserts its mouth parts through the skin to reach a capillary and then sucks blood through a tube formed from the labrum or "front lip" which precedes the mouth parts.
Another tubular structure, the hypopharynx, serves to inject into the wound a substance which prevents the blood from clotting and so blocking the tubular labrum. In both aphid and mosquito the labium is rolled round the other mouth parts, enclosing them in a sheath when they are not being used.
In the butterfly, only the maxillae contribute to the feeding apparatus. The maxillae are greatly elongated and in the form of half tubes, i.e. like a drinking straw split down its length. They can be fitted together to form a tube through which nectar is sucked from the flowers.
The housefly also sucks liquid but its mouthparts cannot penetrate tissue. Instead the labium is enlarged to form a proboscis which terminates in two pads whose surface is channelled by grooves called pseudotracheae. The fly applies its proboscis to the food and pumps saliva along the channels and over the food. The saliva dissolves soluble parts of the food and may contain enzymes which digest some of the insoluble matter. The nutrient liquid is then drawn back along the pseudotracheae and pumped into the alimentary canal.
For illustrations to accompany this article see Insect Structure and Function
© Copyright 2004 - 2016 D G Mackean & Ian Mackean. All rights reserved. | http://www.biology-resources.com/insect-structure.html |
4.5 | If you have decided that an experiment is the best approach to testing your hypothesis, then you need to design the experiment.
Experimental design refers to how participants are allocated to the different conditions (or IV groups) in an experiment.
Probably the commonest way to design an experiment in psychology is to divide the participants into two groups, the experimental group and the control group, and then introduce a change to the experimental group and not the control group.
The researcher must decide how to allocate the sample to these conditions. For example, if there are 10 participants, will all 10 take part in both conditions (repeated measures), or will the participants be split in half, with each half taking part in only one condition (independent measures)?
Three Types of Experimental Designs are Commonly Used:
1. Independent Measures:
Different participants are used in each condition of the independent variable. This means that each condition of the experiment includes a different group of participants. This should be done by random allocation, which ensures that each participant has an equal chance of being assigned to one group or the other.
Independent measures involves using two separate groups of participants, one in each condition. For example:
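A minimal sketch in Python, assuming a hypothetical pool of ten participants, of how random allocation to the two groups might be carried out:

```python
import random

participants = list(range(1, 11))   # hypothetical participant IDs 1-10
random.shuffle(participants)

# After shuffling, each participant has an equal chance of landing in either group.
experimental_group = sorted(participants[:5])
control_group = sorted(participants[5:])

print("Experimental group:", experimental_group)
print("Control group:     ", control_group)
```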
Pro: Avoids order effects (such as practice or fatigue) as people participate in one condition only. If a person is involved in several conditions, they may become bored, tired and fed up by the time they come to the second condition, or become wise to the requirements of the experiment!
Con: More people are needed than with the repeated measures design (i.e. more time consuming).
Con: Differences between participants in the groups may affect results, for example, variations in age, sex or social background. These differences are known as participant variables (i.e. a type of extraneous variable).
2. Repeated Measures:
The same participants take part in each condition of the independent variable. This means that each condition of the experiment includes the same group of participants.
Pro: Fewer people are needed as they take part in all conditions (i.e. saves time)
Con: There may be order effects. Order effects refer to the order of the conditions having an effect on the participants’ behavior. Performance in the second condition may be better because the participants know what to do (i.e. practice effect). Or their performance might be worse in the second condition because they are tired (i.e. fatigue effect).
Suppose we used a repeated measures design in which all of the participants first learned words in loud noise and then learned words in no noise. We would expect the participants to show better learning in no noise simply because of order effects.
To combat order effects, the researcher counterbalances the order of the conditions for the participants: alternating the order in which participants perform in the different conditions of an experiment.
The sample is split into two groups, and the two conditions, experimental (A) and control (B), are given in a different order to each group. For example, group 1 does 'A' then 'B', while group 2 does 'B' then 'A'; this is to eliminate order effects. Although order effects occur for each participant, because they occur equally in both groups, they balance each other out in the results.
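Counterbalancing can be sketched the same way (again with made-up participant IDs); half the sample completes condition A first and the other half completes B first:

```python
import random

participants = list(range(1, 11))   # hypothetical participant IDs 1-10
random.shuffle(participants)
half = len(participants) // 2

# Both groups do both conditions, but in opposite orders, so practice and
# fatigue effects are spread evenly across A and B.
orders = {p: ("A", "B") for p in participants[:half]}
orders.update({p: ("B", "A") for p in participants[half:]})

for p in sorted(orders):
    first, second = orders[p]
    print(f"Participant {p}: {first} then {second}")
```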
3. Matched Pairs:
Each condition uses different participants, but each participant in one condition is matched with a participant in the other condition on key characteristics (e.g. age, sex, intelligence). One member of each pair is then randomly assigned to the experimental group and the other to the control group.
Pro: Reduces participant (i.e. extraneous) variables because the researcher has tried to pair up the participants so that each condition has people with similar abilities and characteristics.
Pro: Avoids order effects, and so counterbalancing is not necessary.
Con: Very time-consuming trying to find closely matched pairs.
Con: Impossible to match people exactly, unless identical twins!
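One common pairing procedure, sketched below under assumed data, is to rank participants on the matching variable (here a hypothetical pre-test score), pair adjacent ranks, and randomly split each pair between the two conditions:

```python
import random

# Hypothetical pre-test scores used as the matching variable.
pretest = {"P1": 92, "P2": 67, "P3": 88, "P4": 70, "P5": 85,
           "P6": 64, "P7": 90, "P8": 73, "P9": 81, "P10": 79}

ranked = sorted(pretest, key=pretest.get, reverse=True)
pairs = [ranked[i:i + 2] for i in range(0, len(ranked), 2)]

experimental, control = [], []
for pair in pairs:
    random.shuffle(pair)            # one member of each pair goes to each group
    experimental.append(pair[0])
    control.append(pair[1])

print("Experimental group:", experimental)
print("Control group:     ", control)
```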
Experimental Design Summary
Experimental design refers to how participants are allocated to the different conditions (or IV groups) in an experiment. There are three types:
1. Independent measures / groups: Different participants are used in each condition of the independent variable.
2. Repeated measures: The same participants take part in each condition of the independent variable.
3. Matched pairs: Each condition uses different participants, but they are matched in terms of certain characteristics, e.g. sex, age, intelligence etc.
How to cite this article:
McLeod, S. A. (2007). Experimental Design. Retrieved from www.simplypsychology.org/experimental-designs.html | http://www.simplypsychology.org/experimental-designs.html |
4.125 | In addition to the seemingly infinite number of kanji, or Chinese characters, Japanese uses two sets of phonic characters called hiragana and katakana. During the Heian Period (794-1185), poetry written by aristocratic ladies used kanji (then referred to as Manyogana) to express the Japanese language. Over time, these ladies developed a simpler and more fluid style of writing which became known as onnade (woman's hand) and later as hiragana. This form of writing gained full acceptance in the early 10th century when it was used to write the Imperial anthology of waka (Japanese verse) known as the Kokin Wakashu. Katakana were developed as a way of phonetically writing Chinese Buddhist texts and were standardized in the 10th century. Anthologies of waka were written in katakana from this time. These days, romaji (roman letters) and English words can be seen quite often.
Hiragana are cursive characters usually used with kanji to add inflectional endings or other suffixes (such as to conjugate verbs and create adjectives); as a replacement or supplement for kanji which are difficult to read (particularly for children); for grammatical particles and function words; or simply for visual or graphic effect. (See examples below)
The non-cursive katakana are used to write loan words from other languages, especially English; to write onomatopoeic words (similar to the use of italics in English); or for visual or graphic effect. (See examples below).
The tables below show the hiragana and katakana alphabets and their romanized syllables. In each case, the upper left character is the hiragana and the upper right character is the katakana. There are five basic vowel sounds: a, i, u, e and o, which are pronounced pretty much the same as in Italian or Spanish. The other sounds are formed by combining the vowels with various consonants.
Table 1 shows the 46 basic kana forms in use today. Table 2 shows simple compounds formed by adding the kana for 'ya', 'yu' and 'yo' to other kana. Table 3 shows basic kana altered by the addition of two short strokes (to make a voiced consonant, such as 'ga') or a circular stroke (to make an unvoiced p-like bilabial stop, such as 'pa') to the upper right. Table 4 is a combination of Tables 2 and 3. Double consonants, such as in the word 'rokku' (rock music), are written with a small 'tsu' character between the 'ro' and 'ku' characters. In recent years, as more and more loanwords are introduced to Japanese, new kana symbols are being used to represent the pronunciation of such English letters as v (confused with b) and f (confused with h), although they are not official.
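A tiny Python sketch of how these pieces combine; only a handful of kana are included here for illustration, so this is nothing like a complete converter:

```python
# Illustrative subset of the hiragana table (far from the full syllabary).
HIRAGANA = {
    "ro": "ろ", "ku": "く",
    "ka": "か", "ga": "が",    # "ga" = "ka" plus the two short voicing strokes
    "ha": "は", "pa": "ぱ",    # "pa" = "ha" plus the small circle
    "ki": "き", "kyo": "きょ",  # compound formed with a small "yo"
}
SMALL_TSU = "っ"               # written before a kana to double its consonant

def to_hiragana(syllables):
    """Join romaji syllables into hiragana; "_" marks a doubled consonant."""
    return "".join(SMALL_TSU if s == "_" else HIRAGANA[s] for s in syllables)

# 'rokku' (rock music): ro + small tsu + ku
print(to_hiragana(["ro", "_", "ku"]))   # ろっく
```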
Some examples of words using hiragana and katakana | http://www.japan-zone.com/new/alphabet.shtml |