The Asteraceae (alternatively Compositae), with its approximately 1,620 genera and more than 23,600 species, is the largest family of flowering plants (Stevens, 2001). The family is distributed worldwide except for Antarctica but is especially diverse in the tropical and subtropical regions of North America, the Andes, eastern Brazil, southern Africa, the Mediterranean region, central Asia, and southwestern China. The majority of Asteraceae species are herbaceous, yet an important component of the family is constituted by shrubs or even trees occurring primarily in the tropical regions of North and South America, Africa and Madagascar and on isolated islands in the Atlantic and Pacific Oceans. Many species of sunflowers are ruderal and especially abundant in disturbed areas, but a significant number of them, especially in mountainous tropical regions, are narrow endemics. Because of the relentless habitat transformation precipitated by human expansion in montane tropical regions, a number of these species are in danger of extinction.
The family contains several species that are important sources of cooking oils, sweetening agents, and tea infusions. Members of several genera of the family are well-known for their horticultural value and popular in gardens across the world and include zinnias, marigolds, dahlias, and chrysanthemums. The commercial sunflower genus Helianthus has been used as a model in the study of hybridization and its role in speciation (Rieseberg et al., 2003). See list of economically important Asteraceae.
Temporal range: Late Cretaceous, 85.8–66 Ma
Mounted skeleton of Parasaurolophus cyrtocristatus, Field Museum of Natural History
Hadrosaurids (Greek: ἁδρός, hadrós, "stout, thick"), or duck-billed dinosaurs, are members of the ornithischian family Hadrosauridae. The family, which includes ornithopods such as Edmontosaurus and Parasaurolophus, was a common group of herbivores during the Upper Cretaceous Period of what is now Asia, Europe and North America. Hadrosaurids are descendants of the Upper Jurassic/Lower Cretaceous iguanodontian dinosaurs and had a similar body layout. Hadrosaurids are divided into two principal subfamilies: the lambeosaurines (Lambeosaurinae), which had hollow cranial crests or tubes, and the saurolophines, identified as hadrosaurines in most pre-2010 works (Saurolophinae or Hadrosaurinae), which lacked hollow cranial crests (solid crests were present in some forms). Saurolophines tended to be bulkier than lambeosaurines. Lambeosaurines are further divided into the tribes Aralosaurini, Lambeosaurini, Parasaurolophini and Tsintaosaurini, while saurolophines include Saurolophini, Brachylophosaurini and Kritosaurini.
- 1 Characteristics
- 2 Cranial differences between subfamilies
- 3 Discoveries
- 4 Classification
- 5 Paleobiology
- 6 Ichnology
- 7 See also
- 8 References
- 9 External links
The hadrosaurs are known as the duck-billed dinosaurs due to the similarity of their heads to those of modern ducks. In some genera, including Edmontosaurus, the whole front of the skull was flat and broadened out to form a beak, which was ideal for clipping leaves and twigs from the forests of Asia, Europe and North America. However, the back of the mouth contained thousands of teeth suitable for grinding food before it was swallowed. This has been hypothesized to have been a crucial factor in the success of this group in the Cretaceous compared to the sauropods.
In 2009, paleontologist Mark Purnell conducted a study into the chewing methods and diet of hadrosaurids from the Late Cretaceous period. By analyzing hundreds of microscopic scratches on the teeth of a fossilized Edmontosaurus jaw, the team determined hadrosaurs had a unique way of eating unlike any creature living today. In contrast to the flexible lower jaw joint prevalent in today's mammals, hadrosaurs had a unique hinge between the upper jaws and the rest of the skull. The team found the dinosaur's upper jaws pushed outwards and sideways while chewing, as the lower jaw slid against the upper teeth.
Cranial differences between subfamilies
The two major divisions of hadrosaurids are differentiated by their cranial ornamentation. While members of the Lambeosaurinae subfamily have hollow crests that differ depending on species, members of the Saurolophinae (Hadrosaurinae) subfamily have solid crests or none at all. Lambeosaurine crests had air chambers that may have produced a distinct sound and meant that their crests could have been used for both an audio and visual display.
Hadrosaurids were the first dinosaur family to be identified in North America: the first traces were found in 1855–1856 with the discovery of fossil teeth. Joseph Leidy examined the teeth and erected the genera Trachodon and Thespesius (others included Troodon, Deinodon and Palaeoscincus). One species was named Trachodon mirabilis. Ultimately, Trachodon came to include all sorts of cerapod dinosaurs, including ceratopsids, and is now considered an invalid genus. In 1858, the teeth were associated with Leidy's eponymous Hadrosaurus foulkii, which was named after the fossil hobbyist William Parker Foulke. More and more teeth were found, resulting in even more (now obsolete) genera.
When a second duck-bill skeleton was unearthed, Edward Drinker Cope incorrectly named it Diclonius mirabilis in 1883 instead of Trachodon mirabilis. But Trachodon, together with other poorly typed genera, was used more widely and, when Cope's famous "Diclonius mirabilis" skeleton was mounted at the American Museum of Natural History, it was labeled as a "Trachodont dinosaur". The duck-billed dinosaur family was then named Trachodontidae.
A very well preserved complete hadrosaurid specimen, AMNH 5060, (Edmontosaurus annectens) was recovered in 1908 by the fossil collector Charles Hazelius Sternberg and his three sons, in Converse County, Wyoming. Analyzed by Henry Osborn in 1912, it has come to be known as the "Trachodon mummy". This specimen's skin was almost completely preserved in the form of impressions.
Lawrence Lambe erected the genus Edmontosaurus ("lizard from Edmonton") in 1917 from a find in the lower Edmonton Formation (now Horseshoe Canyon Formation), Alberta. Hadrosaurid systematics were addressed in a 1942 monograph by Richard Swann Lull and Nelda Wright. They proposed the genus Anatosaurus for several species of dubious genera. Cope's famous mount at the AMNH became Anatosaurus copei. In 1990, Anatosaurus was moved to Edmontosaurus. One former Anatosaurus species was distinct enough from Edmontosaurus to be placed in a separate genus, named Anatotitan, so in 1990 the AMNH mount was re-labelled Anatotitan copei.
One of the most complete fossilized specimens was found in 1999 in the Hell Creek Formation of North Dakota and is now nicknamed "Dakota". The hadrosaur fossil is so well preserved that scientists have been able to calculate its muscle mass and learn that it was more muscular than thought, probably giving it the ability to outrun predators such as Tyrannosaurus rex. Dakota is more than fossilized bones; it is a fossilized mummy. It comes complete with skin (not merely skin impressions), ligaments, tendons and possibly some internal organs. It is being analyzed in the world's largest CT scanner, operated by the Boeing Co. The machine is usually used for detecting flaws in space shuttle engines and other large objects, but it had previously scanned nothing as large as this specimen. Researchers hope that the technology will help them learn more about the fossilized insides of the creature. They also found a gap of about a centimeter between each vertebra, indicating that there may have been a disk or other material between them, allowing more flexibility and meaning the animal was actually longer than shown in museum mounts. Skin impressions have been found from the following hadrosaurs: Edmontosaurus annectens, Corythosaurus casuarius, Brachylophosaurus canadensis, Gryposaurus notabilis, Parasaurolophus walkeri, Lambeosaurus magnicristatus, Lambeosaurus lambei, Saurolophus osborni, Magnapaulia laticaudus, and Saurolophus angustirostris.
Paleontologists from the Instituto Nacional de Antropología e Historia (INAH, Mexico's federal National Institute of Anthropology and History) identified a find in Mexico near the town of General Cepeda in the state of Coahuila as a hadrosaur. Fifty of its tail vertebrae were found intact, along with other fossilized bones, at the site.
In September 2015, researchers from the University of Alaska Fairbanks and Florida State University concluded that the remains of a duck-billed dinosaur found in the high Arctic of Alaska belong to a new species of hadrosaur, provisionally named Ugrunaaluk kuukpikensis. It apparently survived in conditions much harsher than those faced by other dinosaurs.
The family Hadrosauridae was first used by Edward Drinker Cope in 1869. Since its creation, a major division has been recognized in the group between the (generally crested) subfamily Lambeosaurinae and (generally crestless) subfamily Saurolophinae (or Hadrosaurinae). Phylogenetic analysis has increased the resolution of hadrosaurid relationships considerably (see Phylogeny below), leading to the widespread usage of tribes (a taxonomic unit below subfamily) to describe the finer relationships within each group of hadrosaurids. However, many hadrosaurid tribes commonly recognized in online sources have not yet been formally defined or seen wide use in the literature. Several were briefly mentioned under informal names, but not named as such, in the first edition of The Dinosauria. In this 1990 reference, "gryposaurs" included Aralosaurus, Gryposaurus, Hadrosaurus, and Kritosaurus; "brachylophosaurs" included Brachylophosaurus and Maiasaura; "saurolophs" included Lophorhothon, Prosaurolophus, and Saurolophus; and "edmontosaurs" included Edmontosaurus and Shantungosaurus.
Lambeosaurines have also been traditionally split into Parasaurolophini (Parasaurolophus) and Corythosaurini (Corythosaurus, Hypacrosaurus, and Lambeosaurus). Corythosaurini and Parasaurolophini as terms entered the formal literature in Evans and Reisz's 2007 redescription of Lambeosaurus magnicristatus. Corythosaurini is defined as all taxa more closely related to Corythosaurus casuarius than to Parasaurolophus walkeri, and Parasaurolophini as all those taxa closer to P. walkeri than to C. casuarius. In this study, Charonosaurus and Parasaurolophus are parasaurolophins, and Corythosaurus, Hypacrosaurus, Lambeosaurus, Nipponosaurus, and Olorotitan are corythosaurins. In recent years Tsintaosaurini (Tsintaosaurus + Pararhabdodon) and Aralosaurini (Aralosaurus + Canardia) have also emerged.
The use of the term Hadrosaurinae was questioned in a comprehensive study of hadrosaurid relationships by Albert Prieto-Márquez in 2010. Prieto-Márquez noted that, though the name Hadrosaurinae had been used for the clade of mostly crestless hadrosaurids by nearly all previous studies, its type species, Hadrosaurus foulkii, has almost always been excluded from the clade that bears its name, in violation of the rules for naming animals set out by the ICZN. Prieto-Márquez defined Hadrosaurinae as just the lineage containing H. foulkii, and used the name Saurolophinae instead for the traditional grouping.
The following taxonomy includes dinosaurs currently referred to the Hadrosauridae and its subfamilies. Hadrosaurids that were accepted as valid, but not placed in a cladogram at the time of Prieto-Márquez's 2010 study, are included at the highest level to which they were placed (either then, or in their description if they postdate the papers used here).
- Family Hadrosauridae
Hadrosauridae was first defined as a clade, by Forster, in a 1997 abstract, as simply "Lambeosaurinae plus Hadrosaurinae and their most recent common ancestor". In 1998, Paul Sereno defined the clade Hadrosauridae as the most inclusive possible group containing Saurolophus (a well-known saurolophine) and Parasaurolophus (a well-known lambeosaurine), later emending the definition to include Hadrosaurus, the type genus of the family, which ICZN rules state must be included, despite its status as a nomen dubium. According to Horner et al. (2004), Sereno's definition would place a few other well-known hadrosaurs (such as Telmatosaurus and Bactrosaurus) outside the family, which led them to define the family to include Telmatosaurus by default.
The following cladogram was recovered in a 2010 phylogenetic analysis by Prieto-Márquez.
Hadrosauridae has not been subjected to as many phylogenetic analyses as other dinosaur groups, so other workers may find quite different phylogenies. Gates and Sampson (2007) published the following alternate cladogram of Saurolophinae (identified as "Hadrosaurinae" in the study) in their description of Gryposaurus monumentensis:
The following cladogram is after the 2007 redescription of Lambeosaurus magnicristatus (Evans and Reisz, 2007):
While studying the chewing methods of hadrosaurids in 2009, the paleontologists Vincent Williams, Paul Barrett, and Mark Purnell found that hadrosaurs likely grazed on horsetails and vegetation close to the ground, rather than browsing higher-growing leaves and twigs. This conclusion was based on the evenness of scratches on hadrosaur teeth, which suggested the hadrosaur used the same series of jaw motions over and over again. As a result, the study determined that the hadrosaur diet probably consisted mostly of leaves and lacked the bulkier items, such as twigs or stems, that might have required a different chewing method and created different wear patterns. However, Purnell said these conclusions were less secure than the more conclusive evidence regarding the motion of teeth while chewing.
The hypothesis that hadrosaurs were likely grazers rather than browsers appears to contradict previous findings from preserved stomach contents found in the fossilized guts in previous hadrosaur studies. The most recent such finding before the publication of the Purnell study was conducted in 2008, when a team led by University of Colorado at Boulder graduate student Justin S. Tweet found a homogeneous accumulation of millimeter-scale leaf fragments in the gut region of a well-preserved partially grown Brachylophosaurus. As a result of that finding, Tweet concluded in September 2008 that the animal was likely a browser, not a grazer. In response to such findings, Purnell said that preserved stomach contents are questionable because they do not necessarily represent the usual diet of the animal. The issue remains a subject of debate.
Mallon et al. (2013) examined herbivore coexistence on the island continent of Laramidia during the Late Cretaceous. They concluded that hadrosaurids could reach low-growing trees and shrubs that were out of the reach of ceratopsids, ankylosaurs, and other small herbivores. Hadrosaurids were capable of feeding at heights of up to 2 m when standing quadrupedally, and up to 5 m bipedally.
Coprolites (fossilized droppings) of some Late Cretaceous hadrosaurs show that the animals sometimes deliberately ate rotting wood. Wood itself is not nutritious, but decomposing wood would have contained fungi, decomposed wood material and detritus-eating invertebrates, all of which would have been nutritious.
In the Dinosaur Park Formation
In a 2001 review of hadrosaur eggshell and hatchling material from the Dinosaur Park Formation, Darren H. Tanke and M. K. Brett-Surman concluded that hadrosaurs nested in both the ancient upland and lowlands of the formation's depositional environment. The upland nesting grounds may have been preferred by the less common hadrosaurs, like Brachylophosaurus and Parasaurolophus. However, the authors were unable to determine what specific factors shaped nesting ground choice in the formation's hadrosaurs. They suggested that behavior, diet, soil condition, and competition between dinosaur species all potentially influenced where hadrosaurs nested.
Sub-centimeter fragments of pebbly-textured hadrosaur eggshell have been reported from the Dinosaur Park Formation. This eggshell is similar to the hadrosaur eggshell of Devil's Coulee in southern Alberta as well as that of the Two Medicine and Judith River Formations in Montana, United States. While present, dinosaur eggshell is very rare in the Dinosaur Park Formation and is only found in two different microfossil sites. These sites are distinguished by large numbers of pisidiid clams and other less common shelled invertebrates, like unionid clams and snails. This association is not a coincidence, as the invertebrate shells would have slowly dissolved and released enough basic calcium carbonate to protect the eggshells from naturally occurring acids that otherwise would have dissolved them and prevented fossilization.
In contrast with eggshell fossils, the remains of very young hadrosaurs are actually somewhat common. Darren Tanke has observed that an experienced collector could discover multiple juvenile hadrosaur specimens in a single day. The most common remains of young hadrosaurs in the Dinosaur Park Formation are dentaries, bones from limbs and feet, and vertebral centra. The material showed little or none of the abrasion that would have resulted from transport, meaning the fossils were buried near their point of origin. Bonebeds 23, 28, 47, and 50 are productive sources of young hadrosaur remains in the formation, especially bonebed 50. The bones of juvenile hadrosaurs and fossil eggshell fragments are not known to have been preserved in association with each other, despite both being present in the formation.
The limbs of the juvenile hadrosaurs are anatomically and proportionally similar to those of adult animals. However, the joints often show "predepositional erosion or concave articular surfaces", which was probably due to the cartilaginous cap covering the ends of the bones. The pelvis of a young hadrosaur was similar to that of an older individual.
Daily activity patterns
Comparisons between the scleral rings of several hadrosaur genera (Corythosaurus, Prosaurolophus, and Saurolophus) and modern birds and reptiles suggest that they may have been cathemeral, active throughout the day at short intervals.
- Boyle, Alan (2009-06-29). "How dinosaurs chewed". MSNBC. Retrieved 2009-06-03.
- "Hadrosaur Forelimb Study". Palaeo-electronica.org. Retrieved 2013-07-23.
- Fassett, J., Zielinski, R.A., and Budahn, J.R. (2002). Dinosaurs that did not die; evidence for Paleocene dinosaurs in the Ojo Alamo Sandstone, San Juan Basin, New Mexico. In: Koeberl, C., and MacLeod, K. (eds.). Catastrophic events and mass extinctions: impacts and beyond. Special Paper – Geological Society of America 356:307-336.
- (Reuters News) "Mummified dinosaur reveals surprises: scientists" 3 December 2007.
- Schmid, Randolph (2007-12-03). "Mummified Dinosaur May Have Outrun T Rex". Associated Press. Retrieved 2010-11-10.
- Bell, P. R. (2012). Farke, Andrew A, ed. "Standardized Terminology and Potential Taxonomic Utility for Hadrosaurid Skin Impressions: A Case Study for Saurolophus from Canada and Mongolia". PLoS ONE 7 (2): e31295. doi:10.1371/journal.pone.0031295. PMC 3272031. PMID 22319623.
- Cohen, Luc, et al. "Paleontologists discover dinosaur tail in northern Mexico". Reuters, July 23, 2013.
- "New Duck-billed Dinosaur Species Discovered in Alaska". Sci-News.com. Retrieved 2015-09-23.
- Weishampel, David B.; Horner, Jack R. (1990). "Hadrosauridae". In Weishampel, David B.; Dodson, Peter; Osmólska, Halszka. The Dinosauria (1st ed.). Berkeley: University of California Press. pp. 534–561. ISBN 0-520-06727-4.
- Glut, Donald F. (1997). Dinosaurs: The Encyclopedia. Jefferson, North Carolina: McFarland & Co. p. 69. ISBN 0-89950-917-7.
- Evans, David C.; Reisz, Robert R. (2007). "Anatomy and relationships of Lambeosaurus magnicristatus, a crested hadrosaurid dinosaur (Ornithischia) from the Dinosaur Park Formation, Alberta". Journal of Vertebrate Paleontology 27 (2): 373–393. doi:10.1671/0272-4634(2007)27[373:AAROLM]2.0.CO;2. ISSN 0272-4634.
- "PLOS ONE: Diversity, Relationships, and Biogeography of the Lambeosaurine Dinosaurs from the European Archipelago, with Description of the New Aralosaurin Canardia garonnensis".
- Prieto-Márquez, A. (2010). "Global phylogeny of Hadrosauridae (Dinosauria: Ornithopoda) using parsimony and Bayesian methods." Zoological Journal of the Linnean Society, 159: 435–502.
- Gates, Terry A.; Sampson, Scott D. (2007). "A new species of Gryposaurus (Dinosauria: Hadrosauridae) from the late Campanian Kaiparowits Formation, southern Utah, USA". Zoological Journal of the Linnean Society 151 (2): 351–376. doi:10.1111/j.1096-3642.2007.00349.x.
- Williams, Vincent S.; Barrett, Paul M.; Purnell, Mark A. (2009). "Quantitative analysis of dental microwear in hadrosaurid dinosaurs, and the implications for hypotheses of jaw mechanics and feeding". Proceedings of the National Academy of Sciences 106 (27): 11194–11199. doi:10.1073/pnas.0812631106. PMC 2708679. PMID 19564603.
- Bryner, Jeanna (2009-06-29). "Study hints at what and how dinosaurs ate". LiveScience. Retrieved 2009-06-03.
- Tweet, Justin S.; Chin, Karen; Braman, Dennis R.; Murphy, Nate L. (2008). "Probable gut contents within a specimen of Brachylophosaurus canadensis (Dinosauria: Hadrosauridae) from the Upper Cretaceous Judith River Formation of Montana". PALAIOS 23 (9): 624–635. doi:10.2110/palo.2007.p07-044r.
- Lloyd, Robin (2008-09-25). "Plant-eating dinosaur spills his guts: Fossil suggests hadrosaur's last meal included lots of well-chewed leaves". MSNBC. Retrieved 2009-06-03.
- This information comes from the aforementioned Alan Boyle source from June 29, 2009. However, this specific information is not included in the body of the article, but rather a response by Boyle to comments in the article. Since the comments were written by Boyle himself, and since they cite information he received specifically from Purnell, they are as legitimate a source of information as the article itself.
- Mallon, Jordan C.; Evans, David C.; Ryan, Michael J.; Anderson, Jason S. (2013). "Feeding height stratification among the herbivorous dinosaurs from the Dinosaur Park Formation (upper Campanian) of Alberta, Canada". BMC Ecology 13: 14. doi:10.1186/1472-6785-13-14. PMC 3637170. PMID 23557203.
- Chin, K. (September 2007). "The Paleobiological Implications of Herbivorous Dinosaur Coprolites from the Upper Cretaceous Two Medicine Formation of Montana: Why Eat Wood?". PALAIOS 22 (5): 554. doi:10.2110/palo.2006.p06-087r. Retrieved 2008-09-10.
- Tanke, D.H. and Brett-Surman, M.K. 2001. Evidence of Hatchling and Nestling-Size Hadrosaurs (Reptilia:Ornithischia) from Dinosaur Provincial Park (Dinosaur Park Formation: Campanian), Alberta, Canada. pp. 206-218. In: Mesozoic Vertebrate Life—New Research Inspired by the Paleontology of Philip J. Currie. Edited by D.H. Tanke and K. Carpenter. Indiana University Press: Bloomington. xviii + 577 pp.
- Schmitz, L.; Motani, R. (2011). "Nocturnality in Dinosaurs Inferred from Scleral Ring and Orbit Morphology". Science 332 (6030): 705–708. doi:10.1126/science.1200043. PMID 21493820.
In the 19th century, Manifest Destiny was a widely held belief in the United States that American settlers were destined to expand throughout the continent. Historians have for the most part agreed that there are three basic themes to Manifest Destiny:
- The special virtues of the American people and their institutions
- America's mission to redeem and remake the west in the image of agrarian America
- An irresistible destiny to accomplish this essential duty
Historian Frederick Merk says this concept was born out of "a sense of mission to redeem the Old World by high example ... generated by the potentialities of a new earth for building a new heaven".
Historians have emphasized that "Manifest Destiny" was a contested concept—Democrats endorsed the idea but many prominent Americans (such as Abraham Lincoln, Ulysses S. Grant, and most Whigs) rejected it. Historian Daniel Walker Howe writes, "American imperialism did not represent an American consensus; it provoked bitter dissent within the national polity.... Whigs saw America's moral mission as one of democratic example rather than one of conquest."
Newspaper editor John O'Sullivan coined the term Manifest Destiny in 1845 to describe the essence of this mindset; the phrase was more a rhetorical tone than a precise doctrine. It was used by Democrats in the 1840s to justify the war with Mexico and the division of the Oregon Country with the United Kingdom. But Manifest Destiny always limped along because of its internal limitations and the issue of slavery, says Merk. It never became a national priority. By 1843 John Quincy Adams, originally a major supporter of the concept underlying manifest destiny, had changed his mind and repudiated expansionism because it meant the expansion of slavery in Texas.
- From the outset Manifest Destiny—vast in program, in its sense of continentalism—was slight in support. It lacked national, sectional, or party following commensurate with its magnitude. The reason was it did not reflect the national spirit. The thesis that it embodied nationalism, found in much historical writing, is backed by little real supporting evidence.
- 1 Context
- 2 Themes and influences
- 3 Alternative interpretations
- 4 Era of continental expansion
- 5 Beyond North America
- 6 Relationship with German Lebensraum ideology
- 7 See also
- 8 Notes
- 9 References
- 10 Further reading
- 11 External links
There was never a set of principles defining manifest destiny; it was always a general idea rather than a specific policy. Ill-defined but keenly felt, manifest destiny was an expression of conviction in the morality and value of expansionism that complemented other popular ideas of the era, including American exceptionalism and Romantic nationalism. Andrew Jackson, who spoke of "extending the area of freedom", typified the conflation of America's potential greatness, the nation's budding sense of Romantic self-identity, and its expansion.
Yet Jackson would not be the only president to elaborate on the principles underlying manifest destiny. Owing in part to the lack of a definitive narrative outlining its rationale, proponents offered divergent or seemingly conflicting viewpoints. While many writers focused primarily upon American expansionism, be it into Mexico or across the Pacific, others saw the term as a call to example. Without an agreed upon interpretation, much less an elaborated political philosophy, these conflicting views of America's destiny were never resolved. This variety of possible meanings was summed up by Ernest Lee Tuveson, who writes:
A vast complex of ideas, policies, and actions is comprehended under the phrase "Manifest Destiny". They are not, as we should expect, all compatible, nor do they come from any one source.
Journalist John L. O'Sullivan, an influential advocate for Jacksonian democracy and a complex character described by Julian Hawthorne as "always full of grand and world-embracing schemes", wrote an article in 1839, which, while not using the term "manifest destiny", did predict a "divine destiny" for the United States based upon values such as equality, rights of conscience, and personal enfranchisement "to establish on earth the moral dignity and salvation of man". This destiny was not explicitly territorial, but O'Sullivan predicted that the United States would be one of a "Union of many Republics" sharing those values.
Six years later, in 1845, O'Sullivan wrote another essay titled Annexation in the Democratic Review, in which he first used the phrase manifest destiny. In this article he urged the U.S. to annex the Republic of Texas, not only because Texas desired this, but because it was "our manifest destiny to overspread the continent allotted by Providence for the free development of our yearly multiplying millions". Overcoming Whig opposition, Democrats annexed Texas in 1845. O'Sullivan's first usage of the phrase "manifest destiny" attracted little attention.
O'Sullivan's second use of the phrase became extremely influential. On December 27, 1845, in his newspaper the New York Morning News, O'Sullivan addressed the ongoing boundary dispute with Britain. O'Sullivan argued that the United States had the right to claim "the whole of Oregon":
And that claim is by the right of our manifest destiny to overspread and to possess the whole of the continent which Providence has given us for the development of the great experiment of liberty and federated self-government entrusted to us.
That is, O'Sullivan believed that Providence had given the United States a mission to spread republican democracy ("the great experiment of liberty"). Because Britain would not spread democracy, thought O'Sullivan, British claims to the territory should be overruled. O'Sullivan believed that manifest destiny was a moral ideal (a "higher law") that superseded other considerations.
O'Sullivan's original conception of manifest destiny was not a call for territorial expansion by force. He believed that the expansion of the United States would happen without the direction of the U.S. government or the involvement of the military. After Americans emigrated to new regions, they would set up new democratic governments, and then seek admission to the United States, as Texas had done. In 1845, O'Sullivan predicted that California would follow this pattern next, and that Canada would eventually request annexation as well. He disapproved of the Mexican–American War in 1846, although he came to believe that the outcome would be beneficial to both countries.
Ironically, O'Sullivan's term became popular only after it was criticized by Whig opponents of the Polk administration. Whigs denounced manifest destiny, arguing, "that the designers and supporters of schemes of conquest, to be carried on by this government, are engaged in treason to our Constitution and Declaration of Rights, giving aid and comfort to the enemies of republicanism, in that they are advocating and preaching the doctrine of the right of conquest". On January 3, 1846, Representative Robert Winthrop ridiculed the concept in Congress, saying "I suppose the right of a manifest destiny to spread will not be admitted to exist in any nation except the universal Yankee nation". Winthrop was the first in a long line of critics who suggested that advocates of manifest destiny were citing "Divine Providence" for justification of actions that were motivated by chauvinism and self-interest. Despite this criticism, expansionists embraced the phrase, which caught on so quickly that its origin was soon forgotten.
Themes and influences
Historian William E. Weeks has noted that three key themes were usually touched upon by advocates of manifest destiny:
- the virtue of the American people and their institutions;
- the mission to spread these institutions, thereby redeeming and remaking the world in the image of the United States;
- the destiny under God to do this work.
The origin of the first theme, later known as American Exceptionalism, was often traced to America's Puritan heritage, particularly John Winthrop's famous "City upon a Hill" sermon of 1630, in which he called for the establishment of a virtuous community that would be a shining example to the Old World. In his influential 1776 pamphlet Common Sense, Thomas Paine echoed this notion, arguing that the American Revolution provided an opportunity to create a new, better society:
We have it in our power to begin the world over again. A situation, similar to the present, hath not happened since the days of Noah until now. The birthday of a new world is at hand...
Many Americans agreed with Paine, and came to believe that the United States' virtue was a result of its special experiment in freedom and democracy. Thomas Jefferson, in a letter to James Monroe, wrote, "it is impossible not to look forward to distant times when our rapid multiplication will expand itself beyond those limits, and cover the whole northern, if not the southern continent." To Americans in the decades that followed, their proclaimed freedom for mankind, embodied in the Declaration of Independence, could only be described as the inauguration of "a new time scale" because the world would look back and define history as events that took place before, and after, the Declaration of Independence. It followed that Americans owed to the world an obligation to expand and preserve these beliefs.
The second theme's origination is less precise. A popular expression of America's mission was elaborated by President Abraham Lincoln's description in his December 1, 1862, message to Congress. He described the United States as "the last, best hope of Earth". The "mission" of the United States was further elaborated during Lincoln's Gettysburg Address, in which he interpreted the Civil War as a struggle to determine if any nation with democratic ideals could survive; this has been called by historian Robert Johannsen "the most enduring statement of America's Manifest Destiny and mission".
The third theme can be viewed as a natural outgrowth of the belief that God had a direct influence in the foundation and further actions of the United States. Clinton Rossiter, a scholar, described this view as summing up "that God, at the proper stage in the march of history, called forth certain hardy souls from the old and privilege-ridden nations ... and that in bestowing his grace He also bestowed a peculiar responsibility". Americans presupposed that they were not only divinely elected to maintain the North American continent, but also to "spread abroad the fundamental principles stated in the Bill of Rights". In many cases this meant neighboring colonial holdings and countries were seen as obstacles rather than the destiny God had provided the United States.
- "Most Democrats were wholehearted supporters of expansion, whereas many Whigs (especially in the North) were opposed. Whigs welcomed most of the changes wrought by industrialization but advocated strong government policies that would guide growth and development within the country's existing boundaries; they feared (correctly) that expansion raised a contentious issue, the extension of slavery to the territories. On the other hand, many Democrats feared industrialization the Whigs welcomed... For many Democrats, the answer to the nation's social ills was to continue to follow Thomas Jefferson's vision of establishing agriculture in the new territories in order to counterbalance industrialization."
Another possible influence is racial predominance, namely the idea that the American Anglo-Saxon race was "separate, innately superior" and "destined to bring good government, commercial prosperity and Christianity to the American continents and the world". This view also held that "inferior races were doomed to subordinate status or extinction." This was used to justify "the enslavement of the blacks and the expulsion and possible extermination of the Indians".
With the Louisiana Purchase in 1803, which doubled the size of the United States, Thomas Jefferson set the stage for the continental expansion of the United States. Many began to see this as the beginning of a new providential mission: If the United States was successful as a "shining city upon a hill", people in other countries would seek to establish their own democratic republics.
However, not all Americans or their political leaders believed that the United States was a divinely favored nation, or thought that it ought to expand. For example, many Whigs opposed territorial expansion based on the Democratic claim that the United States was destined to serve as a virtuous example to the rest of the world, and also had a divine obligation to spread its superordinate political system and way of life throughout the North American continent. Many in the Whig party "were fearful of spreading out too widely", and they "adhered to the concentration of national authority in a limited area". In July 1848, Alexander Stephens denounced President Polk's expansionist interpretation of America's future as "mendacious".
In the mid‑19th century, expansionism, especially southward toward Cuba, also faced opposition from those Americans who were trying to abolish slavery. As more territory was added to the United States in the following decades, "extending the area of freedom" in the minds of southerners also meant extending the institution of slavery. That is why slavery became one of the central issues in the continental expansion of the United States before the Civil War.
Before and during the Civil War, both sides claimed that America's destiny was rightfully their own. Lincoln opposed anti-immigrant nativism and the imperialism of manifest destiny as both unjust and unreasonable. He objected to the Mexican War and believed each of these disordered forms of patriotism threatened the inseparable moral and fraternal bonds of liberty and Union that he sought to perpetuate through a patriotic love of country guided by wisdom and critical self-awareness. Lincoln's "Eulogy to Henry Clay", June 6, 1852, provides the most cogent expression of his reflective patriotism.
Era of continental expansion
The phrase "manifest destiny" is most often associated with the territorial expansion of the United States from 1812 to 1860. This era, from the end of the War of 1812 to the beginning of the American Civil War, has been called the "age of manifest destiny". During this time, the United States expanded to the Pacific Ocean—"from sea to shining sea"—largely defining the borders of the contiguous United States as they are today.
War of 1812
One of the causes of the War of 1812 may have been an American desire to annex or threaten to annex British Canada in order to stop the Indian raids into the Midwest, expel Britain from North America, and gain additional land. The American victories at the Battle of Lake Erie and the Battle of the Thames in 1813 ended the Indian raids and removed one of the reasons for annexation. The American failure to occupy any significant part of Canada prevented annexation for the second reason, the desire to expel Britain, which largely faded with the Era of Good Feelings that followed the war between Britain and the United States.
To end the War of 1812, John Quincy Adams, Henry Clay and Albert Gallatin (former Treasury Secretary and a leading expert on Indians) and the other American diplomats negotiated the Treaty of Ghent in 1814 with Britain. They rejected the British plan to set up an Indian state in U.S. territory south of the Great Lakes. They explained the American policy toward acquisition of Indian lands:
- The United States, while intending never to acquire lands from the Indians otherwise than peaceably, and with their free consent, are fully determined, in that manner, progressively, and in proportion as their growing population may require, to reclaim from the state of nature, and to bring into cultivation every portion of the territory contained within their acknowledged boundaries. In thus providing for the support of millions of civilized beings, they will not violate any dictate of justice or of humanity; for they will not only give to the few thousand savages scattered over that territory an ample equivalent for any right they may surrender, but will always leave them the possession of lands more than they can cultivate, and more than adequate to their subsistence, comfort, and enjoyment, by cultivation. If this be a spirit of aggrandizement, the undersigned are prepared to admit, in that sense, its existence; but they must deny that it affords the slightest proof of an intention not to respect the boundaries between them and European nations, or of a desire to encroach upon the territories of Great Britain. . . . They will not suppose that that Government will avow, as the basis of their policy towards the United States a system of arresting their natural growth within their own territories, for the sake of preserving a perpetual desert for savages.
The 19th-century belief that the United States would eventually encompass all of North America is known as "continentalism". An early proponent of this idea was John Quincy Adams, a leading figure in U.S. expansion between the Louisiana Purchase in 1803 and the Polk administration in the 1840s. In 1811, Adams wrote to his father:
The whole continent of North America appears to be destined by Divine Providence to be peopled by one nation, speaking one language, professing one general system of religious and political principles, and accustomed to one general tenor of social usages and customs. For the common happiness of them all, for their peace and prosperity, I believe it is indispensable that they should be associated in one federal Union.
Adams did much to further this idea. He orchestrated the Treaty of 1818, which established the United States–Canada border as far west as the Rocky Mountains, and provided for the joint occupation of the region known in American history as the Oregon Country and in British and Canadian history as the New Caledonia and Columbia Districts. He negotiated the Transcontinental Treaty in 1819, purchasing Florida from Spain and extending the U.S. border with Spanish Mexico all the way to the Pacific Ocean. And he formulated the Monroe Doctrine of 1823, which warned Europe that the Western Hemisphere was no longer open for European colonization.
The Monroe Doctrine and manifest destiny were closely related ideas: historian Walter McDougall calls manifest destiny a corollary of the Monroe Doctrine, because while the Monroe Doctrine did not specify expansion, expansion was necessary in order to enforce the Doctrine. Concerns in the United States that European powers (especially Great Britain) were seeking to acquire colonies or greater influence in North America led to calls for expansion in order to prevent this. In his influential 1935 study of manifest destiny, Albert Weinberg wrote, "the expansionism of the [1830s] arose as a defensive effort to forestall the encroachment of Europe in North America."
Manifest destiny played its most important role in, and was coined during the course of, the Oregon boundary dispute with Britain. The Anglo-American Convention of 1818 had provided for the joint occupation of the Oregon Country, and thousands of Americans migrated there in the 1840s over the Oregon Trail. The British rejected a proposal by President John Tyler to divide the region along the 49th parallel, and instead proposed a boundary line farther south along the Columbia River, which would have made most of what later became the state of Washington part of British North America. Advocates of manifest destiny protested and called for the annexation of the entire Oregon Country up to the Alaska line (54°40ʹ N). Presidential candidate James K. Polk used this popular outcry to his advantage, and the Democrats called for the annexation of "All Oregon" in the 1844 U.S. Presidential election.
As president, however, Polk sought compromise and renewed the earlier offer to divide the territory in half along the 49th parallel, to the dismay of the most ardent advocates of manifest destiny. When the British refused the offer, American expansionists responded with slogans such as "The Whole of Oregon or None!" and "Fifty-Four Forty or Fight!", referring to the northern border of the region. (The latter slogan is often mistakenly described as having been a part of the 1844 presidential campaign.) When Polk moved to terminate the joint occupation agreement, the British finally agreed to divide the region along the 49th parallel in early 1846, keeping the lower Columbia basin as part of the United States, and the dispute was settled by the Oregon Treaty of 1846, which the administration was able to sell to Congress because the United States was about to begin the Mexican–American war, and the president and others argued it would be foolish to also fight the British Empire.
Despite the earlier clamor for "All Oregon", the treaty was popular in the United States and was easily ratified by the Senate. The most fervent advocates of manifest destiny had not prevailed along the northern border because, according to Reginald Stuart, "the compass of manifest destiny pointed west and southwest, not north, despite the use of the term 'continentalism'."
Mexico and Texas
Manifest Destiny played an important role in the expansion of Texas and the American relationship with Mexico. In 1836, the Republic of Texas declared independence from Mexico and, after the Texas Revolution, sought to join the United States as a new state. This was an idealized process of expansion that had been advocated from Jefferson to O'Sullivan: newly democratic and independent states would request entry into the United States, rather than the United States extending its government over people who did not want it. The annexation of Texas was controversial as it would add another slave state to the Union. Presidents Andrew Jackson and Martin Van Buren declined Texas's offer to join the United States in part because the slavery issue threatened to divide the Democratic Party.
Before the election of 1844, Whig candidate Henry Clay and the presumed Democratic candidate, former President Van Buren, both declared themselves opposed to the annexation of Texas, each hoping to keep the troublesome topic from becoming a campaign issue. This unexpectedly led to Van Buren being dropped by the Democrats in favor of Polk, who favored annexation. Polk tied the Texas annexation question with the Oregon dispute, thus providing a sort of regional compromise on expansion. (Expansionists in the North were more inclined to promote the occupation of Oregon, while Southern expansionists focused primarily on the annexation of Texas.) Although elected by a very slim margin, Polk proceeded as if his victory had been a mandate for expansion.
After the election of Polk, but before he took office, Congress approved the annexation of Texas. Polk moved to occupy a portion of Texas that had declared independence from Mexico in 1836, but was still claimed by Mexico. This paved the way for the outbreak of the Mexican–American War on April 24, 1846. With American successes on the battlefield, by the summer of 1847 there were calls for the annexation of "All Mexico", particularly among Eastern Democrats, who argued that bringing Mexico into the Union was the best way to ensure future peace in the region.
This was a controversial proposition for two reasons. First, idealistic advocates of manifest destiny like John L. O'Sullivan had always maintained that the laws of the United States should not be imposed on people against their will. The annexation of "All Mexico" would be a violation of this principle. And secondly, the annexation of Mexico was controversial because it would mean extending U.S. citizenship to millions of Mexicans. Senator John C. Calhoun of South Carolina, who had approved of the annexation of Texas, was opposed to the annexation of Mexico, as well as the "mission" aspect of manifest destiny, for racial reasons. He made these views clear in a speech to Congress on January 4, 1848:
We have never dreamt of incorporating into our Union any but the Caucasian race—the free white race. To incorporate Mexico, would be the very first instance of the kind, of incorporating an Indian race; for more than half of the Mexicans are Indians, and the other is composed chiefly of mixed tribes. I protest against such a union as that! Ours, sir, is the Government of a white race.... We are anxious to force free government on all; and I see that it has been urged ... that it is the mission of this country to spread civil and religious liberty over all the world, and especially over this continent. It is a great mistake.
This debate brought to the forefront one of the contradictions of manifest destiny: on the one hand, while identitarian ideas inherent in manifest destiny suggested that Mexicans, as non-whites, would present a threat to white racial integrity and thus were not qualified to become Americans, the "mission" component of manifest destiny suggested that Mexicans would be improved (or "regenerated", as it was then described) by bringing them into American democracy. Identitarianism was used to promote manifest destiny, but, as in the case of Calhoun and the resistance to the "All Mexico" movement, identitarianism was also used to oppose manifest destiny. Conversely, proponents of annexation of "All Mexico" regarded it as an anti-slavery measure.
The controversy was eventually ended by the Mexican Cession, which added the territories of Alta California and Nuevo México to the United States, both more sparsely populated than the rest of Mexico. Like the All Oregon movement, the All Mexico movement quickly abated.
Historian Frederick Merk, in Manifest Destiny and Mission in American History: A Reinterpretation (1963), argued that the failure of the "All Oregon" and "All Mexico" movements indicates that manifest destiny had not been as popular as historians have traditionally portrayed it to have been. Merk wrote that, while belief in the beneficent mission of democracy was central to American history, aggressive "continentalism" was an aberration supported by only a minority of Americans, all of them Democrats. Some Democrats were also opposed; the Democrats of Louisiana opposed annexation of Mexico, while those in Mississippi supported it.
After the Mexican–American War ended in 1848, disagreements over the expansion of slavery made further annexation by conquest too divisive to be official government policy. Some, such as John Quitman, governor of Mississippi, offered what public support they could. In one memorable case, Quitman simply explained that the state of Mississippi had "lost" its state arsenal, which began showing up in the hands of filibusters. Yet these isolated cases only solidified opposition in the North, as many Northerners were increasingly opposed to what they believed to be efforts by Southern slave owners—and their friends in the North—to expand slavery through filibustering. On January 24, 1859, Sarah P. Remond delivered an impassioned speech at Warrington, England, arguing that the connection between filibustering and slave power was clear proof of "the mass of corruption that underlay the whole system of American government". The Wilmot Proviso and the continued "Slave Power" narratives thereafter indicated the degree to which manifest destiny had become part of the sectional controversy.
Without official government support, the most radical advocates of manifest destiny increasingly turned to military filibustering. Originally filibuster had come from the Dutch vrijbuiter and referred to buccaneers in the West Indies that preyed on Spanish commerce. While there had been some filibustering expeditions into Canada in the late 1830s, it was only by mid-century that filibuster became a definitive term. By then, declared the New-York Daily Times, "the fever of Fillibusterism is on our country. Her pulse beats like a hammer at the wrist, and there's a very high color on her face." Millard Fillmore's second annual message to Congress, submitted in December 1851, gave double the amount of space to filibustering activities as to the brewing sectional conflict. The eagerness of the filibusters, and of the public to support them, had an international hue. Clay's son, diplomat to Portugal, reported that Lisbon had been stirred into a "frenzy" of excitement and was waiting on every dispatch.
Although they were illegal, filibustering operations in the late 1840s and early 1850s were romanticized in the United States. The Democratic Party's national platform included a plank that specifically endorsed William Walker's filibustering in Nicaragua. Wealthy American expansionists financed dozens of expeditions, usually based out of New Orleans, New York, and San Francisco. The primary target of manifest destiny's filibusters was Latin America but there were isolated incidents elsewhere. Mexico was a favorite target of organizations devoted to filibustering, like the Knights of the Golden Circle. William Walker got his start as a filibuster in an ill-advised attempt to separate the Mexican states Sonora and Baja California. Narciso López, a near second in fame and success, spent his efforts trying to secure Cuba from the Spanish Empire.
The United States had long been interested in acquiring Cuba from the declining Spanish Empire. As with Texas, Oregon, and California, American policy makers were concerned that Cuba would fall into British hands, which, according to the thinking of the Monroe Doctrine, would constitute a threat to the interests of the United States. Prompted by John L. O'Sullivan, in 1848 President Polk offered to buy Cuba from Spain for $100 million. Polk feared that filibustering would hurt his effort to buy the island, and so he informed the Spanish of an attempt by the Cuban filibuster Narciso López to seize Cuba by force and annex it to the United States, foiling the plot. Nevertheless, Spain declined to sell the island, which ended Polk's efforts to acquire Cuba. O'Sullivan, on the other hand, eventually landed in legal trouble.
Filibustering continued to be a major concern for presidents after Polk. Whig presidents Zachary Taylor and Millard Fillmore tried to suppress the expeditions. When the Democrats recaptured the White House in 1852 with the election of Franklin Pierce, a filibustering effort by John A. Quitman to acquire Cuba received the tentative support of the president. Pierce backed off, however, and instead renewed the offer to buy the island, this time for $130 million. When the public learned of the Ostend Manifesto in 1854, which argued that the United States could seize Cuba by force if Spain refused to sell, this effectively killed the effort to acquire the island. The public now linked expansion with slavery; if manifest destiny had once enjoyed widespread popular approval, this was no longer true.
Filibusters like William Walker continued to garner headlines in the late 1850s, but to little effect. Expansionism was among the various issues that played a role in the coming of the war. With the divisive question of the expansion of slavery, Northerners and Southerners, in effect, were coming to define manifest destiny in different ways, undermining nationalism as a unifying force. According to Frederick Merk, "The doctrine of Manifest Destiny, which in the 1840s had seemed Heaven-sent, proved to have been a bomb wrapped up in idealism."
The Homestead Act of 1862 encouraged 600,000 families to settle the West by giving them land (usually 160 acres) almost free. They had to live on and improve the land for five years. Before the Civil War, Southern leaders opposed the Homestead Acts because they feared the legislation would lead to more free states and free territories. After the mass resignation of Southern senators and representatives at the beginning of the war, Congress was able to pass the Homestead Act.
Manifest destiny had serious consequences for Native Americans, since continental expansion implicitly meant the occupation and annexation of Native American land, sometimes to expand slavery. This ultimately led to the ethnic cleansing of several groups of native peoples via Indian removal. The United States continued the European practice of recognizing only limited land rights of indigenous peoples. In a policy formulated largely by Henry Knox, Secretary of War in the Washington Administration, the U.S. government sought to expand into the west through the purchase of Native American land in treaties. Only the Federal Government could purchase Indian lands and this was done through treaties with tribal leaders. Whether a tribe actually had a decision-making structure capable of making a treaty was a controversial issue. The national policy was for the Indians to join American society and become "civilized", which meant no more wars with neighboring tribes or raids on white settlers or travelers, and a shift from hunting to farming and ranching. Advocates of civilization programs believed that the process of settling native tribes would greatly reduce the amount of land needed by the Native Americans, making more land available for homesteading by white Americans. Thomas Jefferson believed that while American Indians were the intellectual equals of whites, they had to live like the whites or inevitably be pushed aside by them. Jefferson's belief, rooted in Enlightenment thinking, that whites and Native Americans would merge to create a single nation did not last his lifetime, and he began to believe that the natives should emigrate across the Mississippi River and maintain a separate society, an idea made possible by the Louisiana Purchase of 1803.
In the age of manifest destiny, this idea, which came to be known as "Indian removal", gained ground. Humanitarian advocates of removal believed that American Indians would be better off moving away from whites. As historian Reginald Horsman argued in his influential study Race and Manifest Destiny, racial rhetoric increased during the era of manifest destiny. Americans increasingly believed that Native American ways of life would fade away as the United States expanded. As an example, this idea was reflected in the work of one of America's first great historians, Francis Parkman, whose landmark book The Conspiracy of Pontiac was published in 1851. Parkman wrote that after the British conquest of Canada in 1760, Indians were "destined to melt and vanish before the advancing waves of Anglo-American power, which now rolled westward unchecked and unopposed". Parkman emphasized that the collapse of Indian power in the late 18th century had been swift and was a past event.
Beyond North America
As the Civil War faded into history, the term manifest destiny experienced a brief revival. Protestant missionary Josiah Strong, in his 1885 best seller Our Country, argued that the future had devolved upon America since it had perfected the ideals of civil liberty and "a pure spiritual Christianity", and concluded, "My plea is not, Save America for America's sake, but, Save America for the world's sake."
In the 1892 U.S. presidential election, the Republican Party platform proclaimed: "We reaffirm our approval of the Monroe doctrine and believe in the achievement of the manifest destiny of the Republic in its broadest sense." What was meant by "manifest destiny" in this context was not clearly defined, particularly since the Republicans lost the election.
In the 1896 election, however, the Republicans recaptured the White House and held on to it for the next 16 years. During that time, manifest destiny was cited to promote overseas expansion. Whether or not this version of manifest destiny was consistent with the continental expansionism of the 1840s was debated at the time, and long afterwards.
For example, when President William McKinley advocated annexation of the Republic of Hawaii in 1898, he said that "We need Hawaii as much and a good deal more than we did California. It is manifest destiny." On the other hand, former President Grover Cleveland, a Democrat who had blocked the annexation of Hawaii during his administration, wrote that McKinley's annexation of the territory was a "perversion of our national destiny". Historians continued that debate; some have interpreted American acquisition of other Pacific island groups in the 1890s as an extension of manifest destiny across the Pacific Ocean. Others have regarded it as the antithesis of manifest destiny and merely imperialism.
Spanish–American War and the Philippines
In 1898, the United States intervened in the Cuban insurrection and launched the Spanish–American War to force Spain out. According to the terms of the Treaty of Paris, Spain relinquished sovereignty over Cuba and ceded the Philippine Islands, Puerto Rico, and Guam to the United States. The terms of cession for the Philippines involved a payment of $20 million by the United States to Spain. The treaty was highly contentious and denounced by William Jennings Bryan, who tried to make it a central issue in the 1900 election. He was defeated in a landslide by McKinley.
The Teller Amendment, passed unanimously by the U.S. Senate before the war, which proclaimed Cuba "free and independent", forestalled annexation of the island. The Platt Amendment (1902), however, established Cuba as a virtual protectorate of the United States.
The acquisition of Guam, Puerto Rico, and the Philippines after the war with Spain marked a new chapter in U.S. history. Traditionally, territories were acquired by the United States for the purpose of becoming new states on equal footing with already existing states. These islands, however, were acquired as colonies rather than prospective states. The process was validated by the Insular Cases. The Supreme Court ruled that full constitutional rights did not automatically extend to all areas under American control. Nevertheless, in 1917, Puerto Ricans were all made full American citizens via the Jones Act. This also provided for a popularly elected legislature, a bill of rights and authorized the election of a Resident Commissioner who has a voice (but no vote) in Congress.
According to Frederick Merk, these colonial acquisitions marked a break from the original intention of manifest destiny. Previously, "Manifest Destiny had contained a principle so fundamental that a Calhoun and an O'Sullivan could agree on it—that a people not capable of rising to statehood should never be annexed. That was the principle thrown overboard by the imperialism of 1899." Albert J. Beveridge maintained the contrary in his September 25, 1900, speech at the Auditorium in Chicago. He declared that the current desire for Cuba and the other acquired territories was identical to the views expressed by Washington, Jefferson and Marshall. Moreover, "the sovereignty of the Stars and Stripes can be nothing but a blessing to any people and to any land." The Philippines was eventually given its independence in 1946; Guam and Puerto Rico have special status to this day, but all their people have United States citizenship.
Rudyard Kipling's poem "The White Man's Burden", which was subtitled "The United States and the Philippine Islands", was a famous expression of imperialist sentiments, which were common at the time. The nascent revolutionary government desirous of independence, however, resisted the United States in the Philippine–American War in 1899. After the war began, William Jennings Bryan, an opponent of overseas expansion, wrote, "'Destiny' is not as manifest as it was a few weeks ago."
The belief in an American mission to promote and defend democracy throughout the world, as expounded by Thomas Jefferson and his "Empire of Liberty" and Abraham Lincoln, was continued by Theodore Roosevelt and Woodrow Wilson. Under Harry Truman (and Douglas MacArthur) it was implemented in practice in the American rebuilding of Japan and Germany after World War II. George W. Bush in the 21st century applied it to the Middle East, in Afghanistan and Iraq. Tyner argues that in proclaiming a mission to combat terror, Bush was continuing a long tradition of prophetic presidential action to be the beacon of freedom in the spirit of Manifest Destiny.
After the turn of the twentieth century, the phrase manifest destiny declined in usage, as territorial expansion ceased to be promoted as part of America's "destiny". Under President Theodore Roosevelt the role of the United States in the New World was defined, in the 1904 Roosevelt Corollary to the Monroe Doctrine, as being an "international police power" to secure American interests in the Western Hemisphere. Roosevelt's corollary contained an explicit rejection of territorial expansion. In the past, manifest destiny had been seen as necessary to enforce the Monroe Doctrine in the Western Hemisphere, but now expansionism had been replaced by interventionism as a means of upholding the doctrine.
President Woodrow Wilson continued the policy of interventionism in the Americas, and attempted to redefine both manifest destiny and America's "mission" on a broader, worldwide scale. Wilson led the United States into World War I with the argument that "The world must be made safe for democracy." In his 1920 message to Congress after the war, Wilson stated:
... I think we all realize that the day has come when Democracy is being put upon its final test. The Old World is just now suffering from a wanton rejection of the principle of democracy and a substitution of the principle of autocracy as asserted in the name, but without the authority and sanction, of the multitude. This is the time of all others when Democracy should prove its purity and its spiritual power to prevail. It is surely the manifest destiny of the United States to lead in the attempt to make this spirit prevail.
This was the only time a president had used the phrase "manifest destiny" in his annual address. Wilson's version of manifest destiny was a rejection of expansionism and an endorsement (in principle) of self-determination, emphasizing that the United States had a mission to be a world leader for the cause of democracy. This U.S. vision of itself as the leader of the "Free World" would grow stronger in the 20th century after World War II, although rarely would it be described as "manifest destiny", as Wilson had done.
"Manifest Destiny" is sometimes used by critics of U.S. foreign policy to characterize interventions in the Middle East and elsewhere. In this usage, "manifest destiny" is interpreted as the underlying cause of what is denounced by some as "American imperialism". The positive phrasing is "nation building", and State Department official Karin Von Hippel notes that the U.S. has "been involved in nation-building and promoting democracy since the middle of the nineteenth century and 'Manifest Destiny'".
The legacy is a complex one. The belief in an American mission to promote and defend democracy throughout the world, as expounded by Thomas Jefferson and his "Empire of Liberty", and by Abraham Lincoln, Woodrow Wilson and George W. Bush, continues to have an influence on American political ideology. Bush looked at the American success after 1945 in imposing democracy in Japan as a model. Under Douglas MacArthur, the Americans "were imbued with a sense of manifest destiny" says historian John Dower.
Relationship with German Lebensraum ideology
German geographer Friedrich Ratzel visited North America beginning in 1873 and saw the effects of American manifest destiny. Ratzel sympathized with the results of "manifest destiny", but he never used the term. Instead he relied on the Frontier Thesis of Frederick Jackson Turner. Ratzel promoted overseas colonies for Germany in Asia and Africa, but not an expansion into Slavic lands. Later German publicists misinterpreted Ratzel to argue for the right of the German race to expand within Europe; that notion was later incorporated into Nazi ideology, as Lebensraum. Harriet Wanklyn (1961) argues that Ratzel's theory was designed to advance science, and that politicians distorted it for political goals.
Authors and literature
- Thomas Hart Benton—Missouri senator, proponent of western expansion
- Stephen A. Douglas—prominent spokesman of "Young America"
- Horace Greeley—popularized the phrase "Go West, young man."
- Duff Green—writer, politician, and prominent manifest destiny advocate
- Frances Fuller Victor—prominent western historian and fiction writer who captured the spirit of western expansion
- "The White Man's Burden"—an influential poem by Rudyard Kipling advocating colonization by the United States
- Young America movement—a political and literary movement with connections to manifest destiny
- Expansionism—for expansionist ideas in other countries
- "John Gast, American Progress, 1872". Picturing U.S. History. City University of New York. External link in
- Robert J. Miller (2006). Native America, Discovered And Conquered: Thomas Jefferson, Lewis & Clark, And Manifest Destiny. Greenwood. p. 120.
- Merk 1963, p. 3
- Daniel Walker Howe, What Hath God Wrought: The Transformation of America 1815–1848, (2007) pp. 705–6
- "29. Manifest Destiny". American History. USHistory.org. External link in
- Merk 1963, pp. 215–216
- Merk 1963, p. 215
- Ward 1962, pp. 136–137
- Hidalgo, Dennis R. (2003). "Manifest Destiny". encyclopedia.com taken from Dictionary of American History. Retrieved June 11, 2014.
- Tuveson 1980, p. 91.
- Merk 1963, p. 27
- O'Sullivan, John. "The Great Nation of Futurity". The United States Democratic Review Volume 0006 Issue 23 (November 1839).
- O'Sullivan, John L., A Divine Destiny for America, 1845.
- O'Sullivan, John L. (July–August 1845). "Annexation". United States Magazine and Democratic Review 17 (1): 5–11. Retrieved 2008-05-20.
- See Julius Pratt, "The Origin Of 'Manifest Destiny'", American Historical Review, (1927) 32#4, pp. 795–98 in JSTOR. Linda S. Hudson has argued that it was coined by writer Jane McManus Storm; Greenburg, p. 20; Hudson 2001; O'Sullivan biographer Robert D. Sampson disputes Hudson's claim for a variety of reasons (See note 7 at Sampson 2003, pp. 244–245).
- Adams 2008, p. 188.
- Quoted in Thomas R. Hietala, Manifest design: American exceptionalism and Empire (2003) p. 255
- Robert W. Johannsen, "The Meaning of Manifest Destiny", in Johannsen 1997.
- McCrisken, Trevor B., "Exceptionalism: Manifest Destiny" in Encyclopedia of American Foreign Policy (2002), Vol. 2, p. 68
- Weinberg 1935, p. 145; Johannsen 1997, p. 9.
- Johannsen 1997, p. 10
- "Prospectus of the New Series", The American Whig Review Volume 7 Issue 1 (Jan 1848) p. 2
- Weeks 1996, p. 61.
- Justin B. Litke, "Varieties of American Exceptionalism: Why John Winthrop Is No Imperialist", Journal of Church and State, 54 (Spring 2012), 197–213.
- Ford 2010, pp. 315–319
- Somkin 1967, pp. 68–69
- Johannsen 1997, pp. 18–19.
- Rossiter 1950, pp. 19–20
- John Mack Faragher et al. Out of Many: A History of the American People, (2nd ed. 1997) page 413
- Reginald Horsman. Race and Manifest Destiny. pp. 2, 6.
- Witham, Larry (2007). A City Upon a Hill: How Sermons Changed the Course of American History. New York: Harper.
- Merk 1963, p. 40
- Byrnes, Mark Eaton (2001). James K. Polk: A Biographical Companion. Santa Barbara, Calif: ABC-CLIO. p. 145.
- Morrison, Michael A. (1997). Slavery and the American West: The Eclipse of Manifest Destiny and the Coming of the Civil War. Chapel Hill: University of North Carolina Press.
- Mountjoy, Shane (2009). Manifest Destiny: Westward Expansion. New York: Chelsea House Publishers.
- Joseph R. Fornieri (April–June 2010). "Lincoln's Reflective Patriotism". Perspectives on Political Science 39 (2): 108–117. doi:10.1080/10457091003685019.
- Kurt Hanson; Robert L. Beisner. American Foreign Relations since 1600: A Guide to the Literature, Second Edition. ABC-CLIO. p. 313. ISBN 978-1-57607-080-2.
- Stuart and Weeks call this period the "era of manifest destiny" and the "age of manifest destiny", respectively.
- Nugent, pp. 74–79
- The acquisition of Canada this year, as far as the neighborhood of Quebec, will be a mere matter of marching, and will give us experience for the attack of Halifax the next, and the final expulsion of England from the American continent.—To William Duane. vi, 75. Ford ed., ix, 366. (M., August 1812.)
- Charles M. Gates (1940). "The West in American Diplomacy, 1812–1815". Mississippi Valley Historical Review 26 (4): 499–510. doi:10.2307/1896318. JSTOR 1896318. quote on p. 507.
- Continental and Continentalism, sociologyindex.com. Archived May 9, 2015 at the Wayback Machine
- Adams quoted in McDougall 1997, p. 78.
- McDougall 1997, p. 74; Weinberg 1935, p. 109.
- Treaty popular: Stuart 1988, p. 104; compass quote p. 84.
- Merk 1963, pp. 144–147; Fuller 1936; Hietala 2003.
- Calhoun, John C. (1848). "Conquest of Mexico". TeachingAmericanHistory.org. Retrieved 2007-10-19.
- McDougall 1997, pp. 87–95.
- Fuller 1936, pp. 119, 122, 162 and passim.
- Billy H. Gilley (1979). "'Polk's War' and the Louisiana Press". Louisiana History 20: 5–23. JSTOR 4231864.
- Robert A. Brent (1969). "Mississippi and the Mexican War". Journal of Mississippi History 31 (3): 202–14.
- Ripley 1985
- "A Critical Day". The New York Times. March 4, 1854.
- Crenshaw 1941
- Greene 2006, pp. 1–50
- Crocker 2006, p. 150.
- Weeks 1996, pp. 144–52.
- Merk 1963, p. 214.
- Lesli J. Favor (2005). "6. Settling the West". A Historical Atlas of America's Manifest Destiny. Rosen.
- "Teaching With Documents:The Homestead Act of 1862". The U.S. National Archives and Records Administration. Retrieved 2012-06-29.
- Robert E. Greenwood PhD (2007). Outsourcing Culture: How American Culture has Changed From "We the People" Into a One World Government. Outskirts Press. p. 97.
- Rajiv Molhotra (2009). "American Exceptionalism and the Myth of the American Frontiers". In Rajani Kannepalli Kanth. The Challenge of Eurocentrism. Palgrave MacMillan. pp. 180, 184, 189, 199.
- Paul Finkelman and Donald R. Kennon (2008). Congress and the Emergence of Sectionalism. Ohio University Press. pp. 15,141,254.
- Ben Kiernan (2007). Blood and Soil: A World History of Genocide and Extermination from Sparta to Darfur. Yale University Press. pp. 328, 330.
- Prucha 1995, p. 137, "I believe the Indian then to be in body and mind equal to the white man," (Jefferson letter to the Marquis de Chastellux, June 7, 1785).
- American Indians. Thomas Jefferson's Monticello. Retrieved April 26, 2015.
- Francis Parkman (1913) . The conspiracy of Pontiac and the Indian war after the conquest of Canada. p. 9.
- Strong 1885, pp. 107–108
- Official Manual of the State of Missouri. Office of the Secretary of State of Missouri. 1895. p. 245.
- Republican Party platform; context not clearly defined, Merk 1963, p. 241.
- McKinley quoted in McDougall 1997, pp. 112–13; Merk 1963, p. 257.
- Bailey, Thomas A. (1937). "Was the Presidential Election of 1900 a Mandate on Imperialism?". Mississippi Valley Historical Review 24 (1): 43–52. doi:10.2307/1891336. JSTOR 1891336.
- Merk 1963, p. 257.
- Beveridge 1908, p. 123
- Kipling, Rudyard. "The White Man's Burden".
- Bryan 1899.
- James A. Tyner (2005). Iraq, Terror, and the Philippines' Will to War. Rowman & Littlefield. p. 62.
- "Safe for democracy"; 1920 message; Wilson's version of manifest destiny: Weinberg 1935, p. 471.
- Karin Von Hippel (2000). Democracy by Force: U.S. Military Intervention in the Post-Cold War World. Cambridge University Press. p. 1.
- Charles Philippe David and David Grondin (2006). Hegemony Or Empire?: The Redefinition of Us Power Under George W. Bush. Ashgate. pp. 129–30.
- Stephanson 1996, pp. 112–29 examines the influence of manifest destiny in the 20th century, particularly as articulated by Woodrow Wilson.
- Scott, Donald. "The Religious Origins of Manifest Destiny". National Humanities Center. Retrieved 2011-10-26.
- John W. Dower (2000). Embracing Defeat: Japan in the Wake of World War II. W. W. Norton. p. 217.
- Mattelart 1996, pp. 212–216.
- Klinghoffer 2006, p. 86.
- "A German Appraisal of the United States". The Atlantic Monthly. January 1895. pp. 124–128. Retrieved 2009-10-17.
- Woodruff D. Smith (February 1980). "Friedrich Ratzel and the Origins of Lebensraum". German Studies Review 3 (1): 51–68. doi:10.2307/1429483. JSTOR 1429483.
- Wanklyn 1961, pp. 36–40.
- Adams, Sean Patrick (2008). The Early American Republic: A Documentary Reader. Wiley–Blackwell. ISBN 978-1-4051-6098-8.
- Bryan, William Jennings (1899). Republic or Empire?.
- Beveridge, Albert J. (1908). The Meaning of the Times and Other Speeches. Indianapolis: The Bobbs–Merrill Company.
- Crenshaw, Ollinger (1941). "The Knights of the Golden Circle: The Career of George Bickley". The American Historical Review 1 (42): 23–50.
- Crocker, H. W. (2006). Don't tread on me: a 400-year history of America at war, from Indian fighting to terrorist hunting. Crown Forum. ISBN 978-1-4000-5363-6.
- Cherry, Conrad (1998). God's New Israel. The University of North Carolina Press. p. 424. ISBN 978-0-8078-4754-1. Retrieved 2012-08-02.
- Greene, Laurence (2008). The Filibuster. New York: Kessinger Publishing, LLC. p. 384. ISBN 978-1-4366-9531-2. Retrieved 2012-08-02.
- Fisher, Philip (1985). Hard facts: setting and form in the American novel. Oxford University Press. ISBN 978-0-19-503528-5.
- Fuller, John Douglas Pitts (1936). The movement for the acquisition of all Mexico, 1846–1848. Johns Hopkins Press.
- Greenberg, Amy S. (2005). Manifest manhood and the antebellum American empire. Cambridge University Press. ISBN 978-0-521-84096-5.
- Hietala, Thomas R. (February 2003). Manifest Design: American Exceptionalism and Empire. Cornell University Press. ISBN 978-0-8014-8846-7. Previously published as Hietala, Thomas R. (1985). Manifest design: anxious aggrandizement in late Jacksonian America. Cornell University Press. ISBN 978-0-8014-1735-1.
- Hudson, Linda S. (2001). Mistress of Manifest Destiny: a biography of Jane McManus Storm Cazneau, 1807–1878. Texas State Historical Association. ISBN 978-0-87611-179-6.
- Johannsen, Robert Walter (1997). Manifest destiny and empire: American antebellum expansionism. Texas A&M University Press. ISBN 978-0-89096-756-0.
- Klinghoffer, Arthur Jay (2006). The power of projections: how maps reflect global politics and history. Greenwood Publishing Group. ISBN 978-0-275-99135-7.
- Ford, Paul L., ed. (2010). Works of Thomas Jefferson, IX. Cosmo Press Inc. ISBN 978-1-61640-210-5.
- May, Robert E. (2004). Manifest Destiny's Underworld. The University of North Carolina Press. p. 448. ISBN 978-0-8078-5581-2. Retrieved 2012-08-02.
- Mattelart, Armand (1996). The Invention of Communication. U of Minnesota Press. ISBN 978-0-8166-2697-7.
- McDougall, Walter A. (1997). Promised land, crusader state: the American encounter with the world since 1776. Houghton Mifflin. ISBN 978-0-395-83085-7.
- Merk, Frederick (1963). Manifest Destiny and Mission in American History. Harvard University Press. ISBN 978-0-674-54805-3.
- Prucha, Francis Paul (1995). The great father: the United States government and the American Indians. U of Nebraska Press. ISBN 978-0-8032-8734-1.
- Ripley, Peter C. (1985). The Black Abolitionist Papers. Chapel Hill, NC: University of North Carolina Press. p. 646.
- Rossiter, Clinton (1950). "The American Mission". The American Scholar (The American Scholar) (20): 19–20.
- Sampson, Robert (2003). John L. O'Sullivan and his times. Kent State University Press. ISBN 978-0-87338-745-3.
- Stephanson, Anders (1996). Manifest destiny: American expansionism and the empire of right. Hill and Wang. ISBN 978-0-8090-1584-9.
- Stuart, Reginald C. (1988). United States expansionism and British North America, 1775–1871. University of North Carolina Press. ISBN 978-0-8078-1767-4.
- Somkin, Fred (1967). Unquiet Eagle: Memory and Desire in the Idea of American Freedom, 1815–1860. Ithaca, N.Y.
- Strong, Josiah (1885). Our Country. Baker and Taylor Company.
- Tuveson, Ernest Lee (1980). Redeemer nation: the idea of America's millennial role. University of Chicago Press. ISBN 978-0-226-81921-1.
- Weeks, William Earl (1996). Building the continental empire: American expansion from the Revolution to the Civil War. Ivan R. Dee. ISBN 978-1-56663-135-8.
- Ward, John William (1962). Andrew Jackson: Symbol for an Age. Oxford University Press. ISBN 978-0-19-992320-5.
- Weinberg, Albert Katz; Walter Hines Page School of International Relations (1935). Manifest destiny: a study of nationalist expansionism in American history. The Johns Hopkins Press. ISBN 0-404-14706-2.
- Wanklyn, Harriet (1961). Friedrich Ratzel: A Biographical Memoir and Bibliography.
- Dunning, Mike (2001). "Manifest Destiny and the Trans-Mississippi South: Natural Laws and the Extension of Slavery into Mexico.". Journal of Popular Culture 35 (2): 111–127. doi:10.1111/j.0022-3840.2001.00111.x. ISSN 0022-3840. Fulltext: Ebsco.
- Pinheiro, John C (2003). "'Religion Without Restriction': Anti-catholicism, All Mexico, and the Treaty of Guadalupe Hidalgo". Journal of the Early Republic 23 (1): 69–96. doi:10.2307/3124986. ISSN 0275-1275.
- Sampson, Robert D (2002). "The Pacifist-reform Roots of John L. O'Sullivan's Manifest Destiny". Mid-America 84 (1–3): 129–144. ISSN 0026-2927.
- Brown, Charles Henry (January 1980). Agents of manifest destiny: the lives and times of the filibusters. University of North Carolina Press. ISBN 978-0-8078-1361-4.
- Burns, Edward McNall (1957). The American idea of mission: concepts of national purpose and destiny. Rutgers University Press.
- Fresonke, Kris (2003). West of Emerson: the design of manifest destiny. University of California Press. ISBN 978-0-520-23185-6.
- Gould, Lewis L. (1980). The Presidency of William McKinley. Regents Press of Kansas. ISBN 978-0-7006-0206-3.
- Graebner, Norman A. (1968). Manifest destiny. Bobbs–Merrill. ISBN 0-672-50986-5.
- Heidler, David Stephen; Heidler, Jeanne T. (2003). Manifest destiny. Greenwood Press. ISBN 978-0-313-32308-9.
- Hofstadter, Richard (1965). "Cuba, the Philippines, and Manifest Destiny". The paranoid style in American politics: and other essays. Knopf.
- Horsman, Reginald (1981). Race and manifest destiny: The origins of American racial Anglo-Saxonism. Harvard University Press. ISBN 978-0-674-94805-1.
- McDonough, Matthew Davitian. Manifestly Uncertain Destiny: The Debate over American Expansionism, 1803–1848. PhD dissertation, Kansas State University, 2011.
- Merk, Frederick, and Lois Bannister Merk. Manifest Destiny and Mission in American History: A Reinterpretation. New York: Knopf, 1963.
- May, Robert E. (2002). Manifest destiny's underworld: filibustering in antebellum America. University of North Carolina Press. ISBN 978-0-8078-2703-1.
- Morrison, Michael A. (August 18, 1999). Slavery and the American West: The Eclipse of Manifest Destiny and the Coming of the Civil War. UNC Press Books. ISBN 978-0-8078-4796-1.
- Sampson, Robert (2003). John L. O'Sullivan and his times. Kent State University Press. ISBN 978-0-87338-745-3.
- Smith, Gene A. (2000). Thomas Ap Catesby Jones: commodore of Manifest Destiny. Naval Institute Press. ISBN 978-1-55750-848-5.
- Wikiquote has quotations related to: Manifest destiny
- Manifest Destiny and the U.S.–Mexican War: Then and Now
- President Polk's Inaugural Address
- Gayle Olson-Raymer, "The Expansion of Empire", 15-page teaching guide for high school students, Zinn Education Project/Rethinking Schools | https://en.wikipedia.org/wiki/Manifest_Destiny |
4.21875 | Long landslides spotted on Saturn's moon, Iapetus, could help provide clues to similar movements of material on Earth. Scientists studying the icy satellite have determined that flash heating could cause falling ice to travel 10 to 15 times farther than previously expected on Iapetus.
Extended landslides can be found on Mars and Earth, but are more likely to be composed of rock than ice. Despite the differences in materials, scientists believe there could be a link between the long-tumbling debris on all three bodies.
"We think there's more likely a common mechanism for all of this, and we want to be able to explain all of the observations," lead scientist Kelsi Singer of Washington University told SPACE.com.
Giant landslides stretching as far as 50 miles (80 kilometers) litter the surface of Iapetus. Singer and her team identified 30 such displacements by studying images taken by NASA's Cassini spacecraft. [Photos: Latest Saturn Photos from NASA's Cassini Orbiter]
Iapetus already stands out from other moons because of its makeup. While most bodies in the solar system have rocky mantles and metallic cores with an icy layer on top, scientists think Iapetus is composed almost completely of frozen water. There are bits of rock and carbonaceous material that make half the moon appear darker than the other, but this seems to be only a surface feature.
Ice on Iapetus is different from ice found on Earth. Because the moon's temperature can drop as low as minus 300 degrees Fahrenheit (about minus 184 degrees Celsius), the moon's ice is very hard and very dry.
"It's more like what we experience on Earth as rock, just because it's so cold," Singer said.
Slow-moving ice creates a lot of friction, so when the ice falls from high places, scientists expected that it would behave much like rock on Earth does. Instead, they found that it traveled significantly farther than predicted.
How far a landslide runs is usually related to how far it falls, Singer explained. Most of the time, debris of any type loses energy before traveling twice the distance it fell from. But on Iapetus, the pieces of ice move 20 to 30 times as far as their falling height.
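One rough way to see why that ratio is striking is the standard height-to-run-out bookkeeping used for landslides. This is a back-of-the-envelope sketch, not a calculation from the study itself; the symbols H (drop height), L (run-out length) and the resulting friction values below are illustrative assumptions:

\mu_{\mathrm{eff}} \approx \frac{H}{L}
\frac{L}{H} \approx 2 \;\Rightarrow\; \mu_{\mathrm{eff}} \approx 0.5 \quad \text{(a typical long run-out)}
\frac{L}{H} \approx 20\text{--}30 \;\Rightarrow\; \mu_{\mathrm{eff}} \approx 0.03\text{--}0.05 \quad \text{(the Iapetus slides)}

An apparent friction coefficient that small is far lower than ordinary sliding of cold, rock-hard ice should allow, which is the sort of deficit flash heating is invoked to explain.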
Flash heating could be providing that extra push.
Faster and farther
Flash heating occurs when material falls so fast that the heat doesn't have time to dissipate. Instead, it stays concentrated in small areas, reducing the friction between the sliding objects and allowing them to travel faster and farther than they would under normal conditions.
"They're almost acting more like a fluid," Singer said.
On Iapetus, falling material has a good chance of reaching great speeds because there are a number of great heights to fall from. The moon hosts a ring of mountains around its bulging equator that can tower as high as 12 miles (20 km), and the longest run-outs discovered are associated with the ridge and with impact-basin walls.
Scientists think that the landslides are relatively recent, and could have been triggered by impacts in the last billion years or so.
"You don't see a lot of small craters on the landslide material itself," Singer said, although the surrounding terrain boasts evidence of bombardment. Over time, landscapes tend to be dotted by falling rocks, so the less cratered a surface is, the younger it is thought to be. [Photos of Saturn's Moons]
Resting on the ridges and walls, the material gradually becomes more unstable. Close impacts could set them off, but powerful, distant impacts reverberating through the ice could also send them tumbling.
The research was published in the July 29 issue of the journal Nature Geoscience.
Connecting ice and rock
Differences in gravity, atmosphere and water content make landslides seen on Iapetus difficult to duplicate in the laboratory. But the fact that they happen on different types of worlds makes it less likely that the mechanism triggering the extended slides depends on things unique to either environment.
"We have them on Iapetus, Earth and Mars," Singer said. "Theoretically, they should be very similar."
Singer pointed out the implications for friction within fault lines, which produces earthquakes. As plates on Earth move, the rocks within a fault snag on each other, until forces drag them apart. But sometimes, the faults slip farther than scientists can explain based on their understanding of friction. If flash heating occurs within the faults, it could explain why the two opposing faces slide the way they do, and provoke a better understanding of earthquakes.
In such cases, flash heating would cause minerals to melt and reform, producing an unexpected material around the faults. Some such materials have been identified at the base of long landslides on Earth.
"If something else is going on, like flash heating, or something making [the material] have a lower coefficient of friction, this would affect any models that use the coefficient of friction," Singer said. | http://www.foxnews.com/tech/2012/07/31/50-mile-landslides-spotted-on-saturn-icy-moon.html?intcmp=related |
4.125 | Commutative Property of Addition Teacher Resources
Find Commutative Property of Addition educational ideas and activities
Solve for Unknowns Using the Commutative Property of Addition
What is the commutative property and how do you use it? Find out how to solve for unknowns using this very special property of addition. Excellent visuals and real-world stories are used to define the commutative property in a way that...
4 mins 3rd - 4th Math CCSS: Designed
Arithmetic Commutative Property of Addition 1 Worksheet
Skills-and-drill practice may not thrill your class, but as they say, practice makes perfect. They solve 16 single-digit addition problems that require them to use the commutative property. Additional worksheets are available through this...
1st - 2nd Math
How Do You Add and Subtract a Bunch of Numbers with Different Signs?
So you have an expression of positive and negative numbers and you want to add and subtract them. Do you do the operations in order that they are written? Do you combine some? Do you move them around? Do you change their signs? The...
3 mins 6th - 12th Math
Common Core State Standards 1st Grade Math
Here is the complete set of the math practice standards and the Common Core standards for first grade. They cover operations & algebraic thinking, operations in base 10, measurement and data, and geometry. Illustrated nicely with fun...
1st Math CCSS: Designed | http://www.lessonplanet.com/lesson-plans/commutative-property-of-addition/2 |
4.09375 | Gothic architecture is a style of architecture that flourished during the high and late medieval period. It evolved from Romanesque architecture and was succeeded by Renaissance architecture. Originating in 12th-century France and lasting into the 16th century, Gothic architecture was known during the period as Opus Francigenum ("French work") with the term Gothic first appearing during the later part of the Renaissance. Its characteristics include the pointed arch, the ribbed vault and the flying buttress. Gothic architecture is most familiar as the architecture of many of the great cathedrals, abbeys and churches of Europe. It is also the architecture of many castles, palaces, town halls, guild halls, universities and to a less prominent extent, private dwellings, such as dorms and rooms.
It is in the great churches and cathedrals and in a number of civic buildings that the Gothic style was expressed most powerfully, its characteristics lending themselves to appeals to the emotions, whether springing from faith or from civic pride. A great number of ecclesiastical buildings remain from this period, of which even the smallest are often structures of architectural distinction while many of the larger churches are considered priceless works of art and are listed with UNESCO as World Heritage Sites. For this reason a study of Gothic architecture is largely a study of cathedrals and churches.
A series of Gothic revivals began in mid-18th-century England, spread through 19th-century Europe and continued, largely for ecclesiastical and university structures, into the 20th century.
- 1 The term "Gothic"
- 2 Definition and scope
- 3 Influences
- 4 Architectural background
- 5 Architectural development
- 6 Characteristics of Gothic cathedrals and great churches
- 7 Regional differences
- 8 Other Gothic buildings
- 9 Gothic survival and revival
- 10 See also
- 11 Notes
- 12 References
- 13 Further reading
- 14 External links
The term "Gothic"
The term "Gothic architecture" originated as a pejorative description. Giorgio Vasari used the term "barbarous German style" in his Lives of the Artists to describe what is now considered the Gothic style, and in the introduction to the Lives he attributes various architectural features to "the Goths" whom he holds responsible for destroying the ancient buildings after they conquered Rome, and erecting new ones in this style. At the time in which Vasari was writing, Italy had experienced a century of building in the Classical architectural vocabulary revived in the Renaissance and seen as evidence of a new Golden Age of learning and refinement.
The Renaissance had then overtaken Europe, overturning a system of culture that, prior to the advent of printing, was almost entirely focused on the Church and was perceived, in retrospect, as a period of ignorance and superstition. Hence, François Rabelais, also of the 16th century, imagines an inscription over the door of his utopian Abbey of Thélème, "Here enter no hypocrites, bigots..." slipping in a slighting reference to "Gotz" and "Ostrogotz."
In English 17th-century usage, "Goth" was an equivalent of "vandal", a savage despoiler with a Germanic heritage, and so came to be applied to the architectural styles of northern Europe from before the revival of classical types of architecture.
According to a 19th-century correspondent in the London Journal Notes and Queries:
There can be no doubt that the term 'Gothic' as applied to pointed styles of ecclesiastical architecture was used at first contemptuously, and in derision, by those who were ambitious to imitate and revive the Grecian orders of architecture, after the revival of classical literature. Authorities such as Christopher Wren lent their aid in deprecating the old medieval style, which they termed Gothic, as synonymous with everything that was barbarous and rude.
On 21 July 1710, the Académie d'Architecture met in Paris, and among the subjects they discussed, the assembled company noted the new fashions of bowed and cusped arches on chimneypieces being employed "to finish the top of their openings. The Company disapproved of several of these new manners, which are defective and which belong for the most part to the Gothic."
Definition and scope
Gothic architecture is the architecture of the late medieval period, characterised by use of the pointed arch. Other features common to Gothic architecture are the rib vault, buttresses, including flying buttresses; large windows which are often grouped, or have tracery; rose windows, towers, spires and pinnacles; and ornate façades.
As an architectural style, Gothic developed primarily in ecclesiastical architecture, and its principles and characteristic forms were applied to other types of buildings. Buildings of every type were constructed in the Gothic style, with evidence remaining of simple domestic buildings, elegant town houses, grand palaces, commercial premises, civic buildings, castles, city walls, bridges, village churches, abbey churches, abbey complexes and large cathedrals.
The greatest number of surviving Gothic buildings are churches. These range from tiny chapels to large cathedrals, and although many have been extended and altered in different styles, a large number remain either substantially intact or sympathetically restored, demonstrating the form, character and decoration of Gothic architecture. The Gothic style is most particularly associated with the great cathedrals of Northern France, the Low Countries, England and Spain, with other fine examples occurring across Europe.
At the end of the 12th century, Europe was divided into a multitude of city states and kingdoms. The area encompassing modern Germany, southern Denmark, the Netherlands, Belgium, Luxembourg, Switzerland, Austria, Slovakia, Czech Republic and much of northern Italy (excluding Venice and Papal State) was nominally part of the Holy Roman Empire, but local rulers exercised considerable autonomy. France, Denmark, Poland, Hungary, Portugal, Scotland, Castile, Aragon, Navarre, Sicily and Cyprus were independent kingdoms, as was the Angevin Empire, whose Plantagenet kings ruled England and large domains in what was to become modern France. Norway came under the influence of England, while the other Scandinavian countries and Poland were influenced by trading contacts with the Hanseatic League. Angevin kings brought the Gothic tradition from France to Southern Italy, while Lusignan kings introduced French Gothic architecture to Cyprus.
Throughout Europe at this time there was a rapid growth in trade and an associated growth in towns. Germany and the Lowlands had large flourishing towns that grew in comparative peace, in trade and competition with each other, or united for mutual weal, as in the Hanseatic League. Civic building was of great importance to these towns as a sign of wealth and pride. England and France remained largely feudal and produced grand domestic architecture for their kings, dukes and bishops, rather than grand town halls for their burghers.
The Catholic Church prevailed across Europe at this time, influencing not only faith but also wealth and power. Bishops were appointed by the feudal lords (kings, dukes and other landowners) and they often ruled as virtual princes over large estates. The early Medieval periods had seen a rapid growth in monasticism, with several different orders being prevalent and spreading their influence widely. Foremost were the Benedictines whose great abbey churches vastly outnumbered any others in France and England. A part of their influence was that towns developed around them and they became centers of culture, learning and commerce. The Cluniac and Cistercian Orders were prevalent in France, the great monastery at Cluny having established a formula for a well planned monastic site which was then to influence all subsequent monastic building for many centuries.
In the 13th century St. Francis of Assisi established the Franciscans, or so-called "Grey Friars", a mendicant order. The Dominicans, another mendicant order founded during the same period but by St. Dominic in Toulouse and Bologna, were particularly influential in the building of Italy's Gothic churches.
From the 10th to the 13th century, Romanesque architecture had become a pan-European style and manner of construction, affecting buildings in countries as far apart as Ireland, Croatia, Sweden and Sicily. The same wide geographic area was then affected by the development of Gothic architecture, but the acceptance of the Gothic style and methods of construction differed from place to place, as did the expressions of Gothic taste. The proximity of some regions meant that modern country borders do not define divisions of style. On the other hand, some regions such as England and Spain produced defining characteristics rarely seen elsewhere, except where they have been carried by itinerant craftsmen, or the transfer of bishops. Regional differences that are apparent in the great abbey churches and cathedrals of the Romanesque period often become even more apparent in the Gothic.
The local availability of materials affected both construction and style. In France, limestone was readily available in several grades, the very fine white limestone of Caen being favoured for sculptural decoration. England had coarse limestone and red sandstone as well as dark green Purbeck marble which was often used for architectural features.
In Northern Germany, Netherlands, northern Poland, Denmark, and the Baltic countries local building stone was unavailable but there was a strong tradition of building in brick. The resultant style, Brick Gothic, is called "Backsteingotik" in Germany and Scandinavia and is associated with the Hanseatic League. In Italy, stone was used for fortifications, but brick was preferred for other buildings. Because of the extensive and varied deposits of marble, many buildings were faced in marble, or were left with undecorated façade so that this might be achieved at a later date.
The availability of timber also influenced the style of architecture, with timber buildings prevailing in Scandinavia. Availability of timber affected methods of roof construction across Europe. It is thought that the magnificent hammer-beam roofs of England were devised as a direct response to the lack of long straight seasoned timber by the end of the Medieval period, when forests had been decimated not only for the construction of vast roofs but also for ship building.
Gothic architecture grew out of the previous architectural genre, Romanesque. For the most part, there was not a clean break, as there was to be later in Renaissance Florence with the revival of the Classical style by Filippo Brunelleschi in the early 15th century, and the sudden abandonment in Renaissance Italy of both the style and the structural characteristics of Gothic.
By the 12th century, Romanesque architecture (termed Norman architecture in England because of its association with the Norman invasion), was established throughout Europe and provided the basic architectural forms and units that were to remain in evolution throughout the Medieval period. The important categories of building: the cathedral church, the parish church, the monastery, the castle, the palace, the great hall, the gatehouse, the civic building, had been established in the Romanesque period.
Many architectural features that are associated with Gothic architecture had been developed and used by the architects of Romanesque buildings. These include ribbed vaults, buttresses, clustered columns, ambulatories, wheel windows, spires and richly carved door tympana. These were already features of ecclesiastical architecture before the development of the Gothic style, and all were to develop in increasingly elaborate ways.
It was principally the widespread introduction of a single feature, the pointed arch, which was to bring about the change that separates Gothic from Romanesque. The technological change permitted a stylistic change which broke the tradition of massive masonry and solid walls penetrated by small openings, replacing it with a style where light appears to triumph over substance. With its use came the development of many other architectural devices, previously put to the test in scattered buildings and then called into service to meet the structural, aesthetic and ideological needs of the new style. These include the flying buttresses, pinnacles and traceried windows which typify Gothic ecclesiastical architecture. But while pointed arch is so strongly associated with the Gothic style, it was first used in Western architecture in buildings that were in other ways clearly Romanesque, notably Durham Cathedral in the north of England, Monreale Cathedral and Cathedral of Cefalù in Sicily, Autun Cathedral in France.
Possible Oriental influence
The pointed arch, one of the defining attributes of Gothic, was earlier incorporated into Islamic architecture following the Islamic conquests of Roman Syria and the Sassanid Empire in the Seventh Century. The pointed arch and its precursors had been employed in Late Roman and Sassanian architecture; within the Roman context, evidenced in early church building in Syria and occasional secular structures, like the Roman Karamagara Bridge; in Sassanid architecture, in the parabolic and pointed arches employed in palace and sacred construction.
Increasing military and cultural contacts with the Muslim world, including the Norman conquest of Islamic Sicily in 1090, the Crusades, beginning 1096, and the Islamic presence in Spain, may have influenced Medieval Europe's adoption of the pointed arch, although this hypothesis remains controversial. Certainly, in those parts of the Western Mediterranean subject to Islamic control or influence, rich regional variants arose, fusing Romanesque and later Gothic traditions with Islamic decorative forms, as seen, for example, in Monreale and Cefalù Cathedrals, the Alcázar of Seville, and Teruel Cathedral.
Transition from Romanesque to Gothic architecture
The characteristic forms that were to define Gothic architecture grew out of Romanesque architecture and developed at several different geographic locations, as the result of different influences and structural requirements. While barrel vaults and groin vaults are typical of Romanesque architecture, ribbed vaults were used in the naves of two Romanesque churches in Caen, Abbey of Saint-Étienne and Abbaye aux Dames in 1120. Another early example is the nave and apse area of the Cathedral of Cefalù in 1131. The ribbed vault over the north transept at Durham Cathedral in England, built from 1128 to 1133, is probably earlier still and was the first time pointed arches were used in a high vault.
Other characteristics of early Gothic architecture, such as vertical shafts, clustered columns, compound piers, plate tracery and groups of narrow openings had evolved during the Romanesque period. The west front of Ely Cathedral exemplifies this development. Internally the three tiered arrangement of arcade, gallery and clerestory was established. Interiors had become lighter with the insertion of more and larger windows.
The Basilica of Saint Denis is generally cited as the first truly Gothic building, however the distinction is best reserved for the choir, of which the ambulatory remains intact. Noyon Cathedral, also in France, saw the earliest completion of a rebuilding of an entire cathedral in the new style from 1150 to 1231. While using all those features that came to be known as Gothic, including pointed arches, flying buttresses and ribbed vaulting, the builders continued to employ many of the features and much of the character of Romanesque architecture including round-headed arch throughout the building, varying the shape to pointed where it was functionally practical to do so.
At the Abbey Saint-Denis, Noyon Cathedral, Notre Dame de Paris and at the eastern end of Canterbury Cathedral in England, simple cylindrical columns predominate over the Gothic forms of clustered columns and shafted piers. Wells Cathedral in England, commenced at the eastern end in 1175, was the first building in which the designer broke free from Romanesque forms. The architect entirely dispensed with the round arch in favour of the pointed arch and with cylindrical columns in favour of piers composed of clusters of shafts which lead into the mouldings of the arches. The transepts and nave were continued by Adam Locke in the same style and completed in about 1230. The character of the building is entirely Gothic. Wells Cathedral is thus considered the first truly Gothic cathedral.
The eastern end of the Basilica Church of Saint-Denis, built by Abbot Suger and completed in 1144, is often cited as the first truly Gothic building, as it draws together many of architectural forms which had evolved from Romanesque and typify the Gothic style.
Suger, friend and confidant of the French Kings, Louis VI and Louis VII, decided in about 1137, to rebuild the great Church of Saint-Denis, attached to an abbey which was also a royal residence. He began with the West Front, reconstructing the original Carolingian façade with its single door. He designed the façade of Saint-Denis to be an echo of the Roman Arch of Constantine with its three-part division and three large portals to ease the problem of congestion. The rose window is the earliest-known example above the West portal in France. The façade combines both round arches and pointed arches of the Gothic style.
At the completion of the west front in 1140, Abbot Suger moved on to the reconstruction of the eastern end, leaving the Carolingian nave in use. He designed a choir that would be suffused with light. To achieve his aims, his masons drew on the several new features which evolved or had been introduced to Romanesque architecture, the pointed arch, the ribbed vault, the ambulatory with radiating chapels, the clustered columns supporting ribs springing in different directions and the flying buttresses which enabled the insertion of large clerestory windows.
The new structure was finished and dedicated on 11 June 1144, in the presence of the King. The choir and west front of the Abbey of Saint-Denis both became the prototypes for further building in the royal domain of northern France and in the Duchy of Normandy. Through the rule of the Angevin dynasty, the new style was introduced to England and spread throughout France, the Low Countries, Germany, Spain, northern Italy and Sicily.
Characteristics of Gothic cathedrals and great churches
While many secular buildings exist from the Late Middle Ages, it is in the buildings of cathedrals and great churches that Gothic architecture displays its pertinent structures and characteristics to the fullest advantage. A Gothic cathedral or abbey was, prior to the 20th century, generally the landmark building in its town, rising high above all the domestic structures and often surmounted by one or more towers and pinnacles and perhaps tall spires. These cathedrals were the skyscrapers of that day and would have been the largest buildings by far that Europeans would ever have seen. It is in the architecture of these Gothic churches that a unique combination of existing technologies established the emergence of a new building style. Those technologies were the ogival or pointed arch, the ribbed vault, and the buttress.
The Gothic style, when applied to an ecclesiastical building, emphasizes verticality and light. This appearance was achieved by the development of certain architectural features, which together provided an engineering solution. The structural parts of the building ceased to be its solid walls, and became a stone skeleton comprising clustered columns, pointed ribbed vaults and flying buttresses.
Most large Gothic churches and many smaller parish churches are of the Latin cross (or "cruciform") plan, with a long nave making the body of the church, a transverse arm called the transept and, beyond it, an extension which may be called the choir, chancel or presbytery. There are several regional variations on this plan.
The nave is generally flanked on either side by aisles, usually single, but sometimes double. The nave is generally considerably taller than the aisles, having clerestory windows which light the central space. Gothic churches of the Germanic tradition, like St. Stephen of Vienna, often have nave and aisles of similar height and are called Hallenkirche. In the South of France there is often a single wide nave and no aisles, as at Sainte-Marie in Saint-Bertrand-de-Comminges.
In some churches with double aisles, like Notre Dame, Paris, the transept does not project beyond the aisles. In English cathedrals transepts tend to project boldly and there may be two of them, as at Salisbury Cathedral, though this is not the case with lesser churches.
The eastern arm shows considerable diversity. In England it is generally long and may have two distinct sections, both choir and presbytery. It is often square ended or has a projecting Lady Chapel, dedicated to the Virgin Mary. In France the eastern end is often polygonal and surrounded by a walkway called an ambulatory and sometimes a ring of chapels called a "chevet". While German churches are often similar to those of France, in Italy, the eastern projection beyond the transept is usually just a shallow apsidal chapel containing the sanctuary, as at Florence Cathedral.
Structure: the pointed arch
One of the defining characteristics of Gothic architecture is the pointed or ogival arch. Arches of a similar type were used in the Near East in pre-Islamic as well as Islamic architecture before they were structurally employed in medieval architecture. It is thought by some architectural historians that this was the inspiration for the use of the pointed arch in France, in otherwise Romanesque buildings, as at Autun Cathedral.
Contrary to the diffusionist theory, it appears that there was simultaneously a structural evolution towards the pointed arch, for the purpose of vaulting spaces of irregular plan, or to bring transverse vaults to the same height as diagonal vaults. This latter occurs at Durham Cathedral in the nave aisles in 1093. Pointed arches also occur extensively in Romanesque decorative blind arcading, where semi-circular arches overlap each other in a simple decorative pattern, and the points are accidental to the design.
The Gothic vault, unlike the semi-circular vault of Roman and Romanesque buildings, can be used to roof rectangular and irregularly shaped plans such as trapezoids. The other structural advantage is that the pointed arch channels the weight onto the bearing piers or columns at a steep angle. This enabled architects to raise vaults much higher than was possible in Romanesque architecture. While, structurally, use of the pointed arch gave a greater flexibility to architectural form, it also gave Gothic architecture a very different and more vertical visual character than Romanesque.
In Gothic architecture the pointed arch is used in every location where a vaulted shape is called for, both structural and decorative. Gothic openings such as doorways, windows, arcades and galleries have pointed arches. Gothic vaulting above spaces both large and small is usually supported by richly moulded ribs.
Rows of pointed arches upon delicate shafts form a typical wall decoration known as blind arcading. Niches with pointed arches and containing statuary are a major external feature. The pointed arch lent itself to elaborate intersecting shapes which developed within window spaces into complex Gothic tracery forming the structural support of the large windows that are characteristic of the style.
A characteristic of Gothic church architecture is its height, both absolute and in proportion to its width, the verticality suggesting an aspiration to Heaven. A section of the main body of a Gothic church usually shows the nave as considerably taller than it is wide. In England the proportion is sometimes greater than 2:1, while the greatest proportional difference achieved is at Cologne Cathedral with a ratio of 3.6:1. The highest internal vault is at Beauvais Cathedral at 48 metres (157 ft).
Externally, towers and spires are characteristic of Gothic churches both great and small, the number and positioning being one of the greatest variables in Gothic architecture. In Italy, the tower, if present, is almost always detached from the building, as at Florence Cathedral, and is often from an earlier structure. In France and Spain, two towers on the front is the norm. In England, Germany and Scandinavia this is often the arrangement, but an English cathedral may also be surmounted by an enormous tower at the crossing. Smaller churches usually have just one tower, but this may also be the case at larger buildings, such as Salisbury Cathedral or Ulm Minster, which has the tallest spire in the world, slightly exceeding that of Lincoln Cathedral, which at 160 metres (520 ft) was the tallest spire actually completed during the medieval period.
The pointed arch lends itself to a suggestion of height. This appearance is characteristically further enhanced by both the architectural features and the decoration of the building.
On the exterior, the verticality is emphasised in a major way by the towers and spires and in a lesser way by strongly projecting vertical buttresses, by narrow half-columns called attached shafts which often pass through several storeys of the building, by long narrow windows, vertical mouldings around doors and figurative sculpture which emphasises the vertical and is often attenuated. The roofline, gable ends, buttresses and other parts of the building are often terminated by small pinnacles, Milan Cathedral being an extreme example in the use of this form of decoration.
On the interior of the building attached shafts often sweep unbroken from floor to ceiling and meet the ribs of the vault, like a tall tree spreading into branches. The verticals are generally repeated in the treatment of the windows and wall surfaces. In many Gothic churches, particularly in France, and in the Perpendicular period of English Gothic architecture, the treatment of vertical elements in gallery and window tracery creates a strongly unifying feature that counteracts the horizontal divisions of the interior structure.
Expansive interior light has been a feature of Gothic cathedrals since the first structure was opened. The metaphysics of light in the Middle Ages led to clerical belief in its divinity and the importance of its display in holy settings. Much of this belief was based on the writings of Pseudo-Dionysius, a sixth century mystic whose book, The Celestial Hierarchy, was popular among monks in France. Pseudo-Dionysius held that all light, even light reflected from metals or streamed through windows, was divine. To promote such faith, the abbot in charge of the Saint-Denis church on the north edge of Paris, the Abbot Suger, encouraged architects remodeling the building to make the interior as bright as possible.
Ever since the remodeled Basilica of Saint-Denis opened in 1144, Gothic architecture has featured expansive windows, such as at Sainte Chapelle, York Minster, Gloucester Cathedral. The increase in size between windows of the Romanesque and Gothic periods is related to the use of the ribbed vault, and in particular, the pointed ribbed vault which channeled the weight to a supporting shaft with less outward thrust than a semicircular vault. Walls did not need to be so weighty.
A further development was the flying buttress which arched externally from the springing of the vault across the roof of the aisle to a large buttress pier projecting well beyond the line of the external wall. These piers were often surmounted by a pinnacle or statue, further adding to the downward weight, and counteracting the outward thrust of the vault and buttress arch as well as stress from wind loading.
The internal columns of the arcade with their attached shafts, the ribs of the vault and the flying buttresses, with their associated vertical buttresses jutting at right-angles to the building, created a stone skeleton. Between these parts, the walls and the infill of the vaults could be of lighter construction. Between the narrow buttresses, the walls could be opened up into large windows.
Through the Gothic period, thanks to the versatility of the pointed arch, the structure of Gothic windows developed from simple openings to immensely rich and decorative sculptural designs. The windows were very often filled with stained glass which added a dimension of colour to the light within the building, as well as providing a medium for figurative and narrative art.
The façade of a large church or cathedral, often referred to as the West Front, is generally designed to create a powerful impression on the approaching worshipper, demonstrating both the might of God and the might of the institution that it represents. One of the best known and most typical of such façades is that of Notre Dame de Paris.
Central to the façade is the main portal, often flanked by additional doors. In the arch of the door, the tympanum, is often a significant piece of sculpture, most frequently Christ in Majesty and Judgment Day. If there is a central doorjamb or a trumeau, then it frequently bears a statue of the Madonna and Child. There may be much other carving, often of figures in niches set into the mouldings around the portals, or in sculptural screens extending across the façade.
Above the main portal there is generally a large window, like that at York Minster, or a group of windows such as those at Ripon Cathedral. In France there is generally a rose window like that at Reims Cathedral. Rose windows are also often found in the façades of churches of Spain and Italy, but are rarer elsewhere and are not found on the façades of any English Cathedrals. The gable is usually richly decorated with arcading or sculpture or, in the case of Italy, may be decorated, along with the rest of the façade, with polychrome marble and mosaic, as at Orvieto Cathedral.
The West Front of a French cathedral and many English, Spanish and German cathedrals generally have two towers, which, particularly in France, express an enormous diversity of form and decoration. However some German cathedrals have only one tower located in the middle of the façade (such as Freiburg Münster).
Basic shapes of Gothic arches and stylistic character
The way in which the pointed arch was drafted and utilised developed throughout the Gothic period. There were fairly clear stages of development, which did not, however, progress at the same rate, or in the same way in every country. Moreover, the names used to define various periods or styles within Gothic architecture differs from country to country.
The simplest shape is the long opening with a pointed arch known in England as the lancet. Lancet openings are often grouped, usually as a cluster of three or five. Lancet openings may be very narrow and steeply pointed. Lancet arches are typically defined as two-centered arches whose radii are larger than the arch's span.
Salisbury Cathedral is famous for the beauty and simplicity of its Lancet Gothic, known in England as the Early English Style. York Minster has a group of lancet windows each fifty feet high and still containing ancient glass. They are known as the Five Sisters. These simple undecorated grouped windows are found at Chartres and Laon Cathedrals and are used extensively in Italy.
Many Gothic openings are based upon the equilateral form. In other words, when the arch is drafted, the radius is exactly the width of the opening and the centre of each arch coincides with the point from which the opposite arch springs. This makes the arch higher in relation to its width than a semi-circular arch which is exactly half as high as it is wide.
The Equilateral Arch gives a wide opening of satisfying proportion useful for doorways, decorative arcades and large windows.
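A short piece of geometry, added here for clarity rather than taken from the source, shows why the equilateral arch is taller in proportion: the two centres are the springing points themselves, so the apex sits at the top vertex of an equilateral triangle drawn on the span s.

```latex
% Height of the apex above the springing line, for a span s
h_{\text{equilateral}} = \frac{\sqrt{3}}{2}\,s \approx 0.87\,s
\qquad\text{compared with}\qquad
h_{\text{semicircular}} = \frac{s}{2} = 0.5\,s
```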
The structural beauty of the Gothic arch means, however, that no set proportion had to be rigidly maintained. The Equilateral Arch was employed as a useful tool, not as a Principle of Design. This meant that narrower or wider arches were introduced into a building plan wherever necessity dictated. In the architecture of some Italian cities, notably Venice, semi-circular arches are interspersed with pointed ones.
The Equilateral Arch lends itself to filling with tracery of simple equilateral, circular and semi-circular forms. The type of tracery that evolved to fill these spaces is known in England as Geometric Decorated Gothic and can be seen to splendid effect at many English and French Cathedrals, notably Lincoln and Notre Dame in Paris. Windows of complex design and of three or more lights or vertical sections, are often designed by overlapping two or more equilateral arches.
The Flamboyant Arch is one that is drafted from four points, the upper part of each main arc turning upwards into a smaller arc and meeting at a sharp, flame-like point. These arches create a rich and lively effect when used for window tracery and surface decoration. The form is structurally weak and has very rarely been used for large openings except when contained within a larger and more stable arch. It is not employed at all for vaulting.
Some of the most beautiful and famous traceried windows of Europe employ this type of tracery. It can be seen at St Stephen's Vienna, Sainte Chapelle in Paris, at the Cathedrals of Limoges and Rouen in France. In England the most famous examples are the West Window of York Minster with its design based on the Sacred Heart, the extraordinarily rich nine-light East Window at Carlisle Cathedral and the exquisite East window of Selby Abbey.
Doorways surmounted by Flamboyant mouldings are very common in both ecclesiastical and domestic architecture in France. They are much rarer in England. A notable example is the doorway to the Chapter Room at Rochester Cathedral.
The style was much used in England for wall arcading and niches. Prime examples are in the Lady Chapel at Ely, the Screen at Lincoln and, externally, on the façade of Exeter Cathedral. In German and Spanish Gothic architecture it often appears as openwork screens on the exterior of buildings. The style was used to rich and sometimes extraordinary effect in both these countries, notably on the famous pulpit in Vienna Cathedral.
The depressed or four-centred arch is much wider than its height and gives the visual effect of having been flattened under pressure. Its structure is achieved by drafting two arcs which rise steeply from each springing point on a small radius and then turn into two arches with a wide radius and much lower springing point.
This type of arch, when employed as a window opening, lends itself to very wide spaces, provided it is adequately supported by many narrow vertical shafts. These are often further braced by horizontal transoms. The overall effect produces a grid-like appearance of regular, delicate, rectangular forms with an emphasis on the perpendicular. It is also employed as a wall decoration in which arcade and window openings form part of the whole decorative surface.
The style, known as Perpendicular, that evolved from this treatment is specific to England, although very similar to contemporary Spanish style in particular, and was employed to great effect through the 15th century and first half of the 16th as Renaissance styles were much slower to arrive in England than in Italy and France.
It can be seen notably at the East End of Gloucester Cathedral where the East Window is said to be as large as a tennis court. There are three very famous royal chapels and one chapel-like Abbey which show the style at its most elaborate: King's College Chapel, Cambridge; St George's Chapel, Windsor; Henry VII's Chapel at Westminster Abbey and Bath Abbey. However very many simpler buildings, especially churches built during the wool boom in East Anglia, are fine examples of the style.
Symbolism and ornamentation
The Gothic cathedral represented the universe in microcosm, and each architectural concept, including the loftiness and huge dimensions of the structure, was intended to convey a theological message: the great glory of God. The building becomes a microcosm in two ways. Firstly, the mathematical and geometrical nature of the construction is an image of the orderly universe, in which an underlying rationality and logic can be perceived.
Secondly, the statues, sculptural decoration, stained glass and murals incorporate the essence of creation in depictions of the Labours of the Months and the Zodiac and sacred history from the Old and New Testaments and Lives of the Saints, as well as reference to the eternal in the Last Judgment and Coronation of the Virgin.
Many churches were very richly decorated, both inside and out. Sculpture and architectural details were often bright with coloured paint of which traces remain at the Cathedral of Chartres. Wooden ceilings and panelling were usually brightly coloured. Sometimes the stone columns of the nave were painted, and the panels in decorative wall arcading contained narratives or figures of saints. These have rarely remained intact, but may be seen at the Chapterhouse of Westminster Abbey.
Some important Gothic churches could be severely simple such as the Basilica of Mary Magdalene in Saint-Maximin, Provence where the local traditions of the sober, massive, Romanesque architecture were still strong.
Wherever Gothic architecture is found, it is subject to local influences, and frequently the influence of itinerant stonemasons and artisans, carrying ideas between cities and sometimes between countries. Certain characteristics are typical of particular regions and often override the style itself, appearing in buildings hundreds of years apart.
The distinctive characteristic of French cathedrals, and those in Germany and Belgium that were strongly influenced by them, is their height and their impression of verticality. Each French cathedral tends to be stylistically unified in appearance when compared with an English cathedral where there is great diversity in almost every building. They are compact, with slight or no projection of the transepts and subsidiary chapels. The west fronts are highly consistent, having three portals surmounted by a rose window, and two large towers. Sometimes there are additional towers on the transept ends. The east end is polygonal with ambulatory and sometimes a chevet of radiating chapels. In the south of France, many of the major churches are without transepts and some are without aisles.
The distinctive characteristic of English cathedrals is their extreme length, and their internal emphasis upon the horizontal, which may be emphasised visually as much or more than the vertical lines. Each English cathedral (with the exception of Salisbury) has an extraordinary degree of stylistic diversity, when compared with most French, German and Italian cathedrals. It is not unusual for every part of the building to have been built in a different century and in a different style, with no attempt at creating a stylistic unity. Unlike French cathedrals, English cathedrals sprawl across their sites, with double transepts projecting strongly and Lady Chapels tacked on at a later date, such as at Westminster Abbey. In the west front, the doors are not as significant as in France, the usual congregational entrance being through a side porch. The West window is very large and never a rose, which are reserved for the transept gables. The west front may have two towers like a French Cathedral, or none. There is nearly always a tower at the crossing and it may be very large and surmounted by a spire. The distinctive English east end is square, but it may take a completely different form. Both internally and externally, the stonework is often richly decorated with carvings, particularly the capitals.
Germany and Central Europe
Romanesque architecture in Germany, Poland, the Czech Lands and Austria is characterised by its massive and modular nature. This is expressed in the Gothic architecture of Central Europe in the huge size of the towers and spires, often projected, but not always completed. The west front generally follows the French formula, but the towers are very much taller and, if complete, are surmounted by enormous openwork spires that are a regional feature. Because of the size of the towers, the section of the façade between them may appear narrow and compressed. The eastern end follows the French form. The distinctive character of the interior of German Gothic cathedrals is their breadth and openness. This is the case even when, as at Cologne, they have been modelled upon a French cathedral. German cathedrals, like the French, tend not to have strongly projecting transepts. There are also many hall churches (Hallenkirchen) without clerestory windows.
In Catalonia and the territories under its influence (Northern Catalonia in France, the Balearic Islands and the Valencian Country, among others, as well as territories in the Italian islands), the Gothic style allowed the creation of very wide spaces with few ornaments; this is known as the Catalan Gothic style (distinct from the Spanish or French styles).
The most important examples of Catalan Gothic style are the cathedrals of Girona, Barcelona, Perpignan and Palma (in Mallorca), the basilica of Santa Maria del Mar (in Barcelona), the basilica del Pi (in Barcelona), and the church of Santa Maria de l'Alba in Manresa.
Spain and Portugal
The distinctive characteristic of Gothic cathedrals of the Iberian Peninsula is their spatial complexity, with many areas of different shapes leading from each other. They are comparatively wide, and often have very tall arcades surmounted by low clerestories, giving a similar spacious appearance to the Hallenkirchen of Germany, as at the Church of the Batalha Monastery in Portugal. Many of the cathedrals are completely surrounded by chapels. Like English cathedrals, each is often stylistically diverse. This expresses itself both in the addition of chapels and in the application of decorative details drawn from different sources. Among the influences on both decoration and form are Islamic architecture and, towards the end of the period, Renaissance details combined with the Gothic in a distinctive manner. The West front, as at León Cathedral, typically resembles a French west front, but wider in proportion to height and often with greater diversity of detail and a combination of intricate ornament with broad plain surfaces. At Burgos Cathedral there are spires of German style. The roofline often has pierced parapets with comparatively few pinnacles. There are often towers and domes of a great variety of shapes and structural invention rising above the roof.
The distinctive characteristic of Italian Gothic is the use of polychrome decoration, both externally as marble veneer on the brick façade and also internally where the arches are often made of alternating black and white segments, and where the columns may be painted red, the walls decorated with frescoes and the apse with mosaic. The plan is usually regular and symmetrical, and Italian cathedrals have few and widely spaced columns. The proportions are generally mathematically equilibrated, based on the square and the concept of "armonia", and, except in Venice, where flamboyant arches were favoured, the arches are almost always equilateral. Colours and mouldings define the architectural units rather than blending them. Italian cathedral façades are often polychrome and may include mosaics in the lunettes over the doors. The façades have projecting open porches and ocular or wheel windows rather than roses, and do not usually have a tower. The crossing is usually surmounted by a dome. There is often a free-standing tower and baptistry. The eastern end usually has an apse of comparatively low projection. The windows are not as large as in northern Europe and, although stained glass windows are often found, the favourite narrative medium for the interior is the fresco.
Other Gothic buildings
- See also Castle
Synagogues were commonly built in the Gothic style in Europe during the Medieval period. A surviving example is the Old New Synagogue in Prague built in the 13th century.
The Palais des Papes in Avignon is the most complete surviving large medieval palace, alongside the Royal Palace of Olite, built during the 13th and 14th centuries for the kings of Navarre. Malbork Castle, built for the master of the Teutonic Order, is an example of Brick Gothic architecture. Partial survivals of former royal residences include the Doge's Palace of Venice, the Palau de la Generalitat in Barcelona, built in the 15th century, and the famous Conciergerie, former palace of the kings of France, in Paris.
Secular Gothic architecture can also be found in a number of public buildings such as town halls, universities, markets or hospitals. The Gdańsk, Wrocław and Stralsund town halls are remarkable examples of northern Brick Gothic built in the late 14th century. The Belfry of Bruges and Brussels Town Hall, built during the 15th century, are associated with the increasing wealth and power of the bourgeoisie in the late Middle Ages; by the 15th century, the traders of the trade cities of Burgundy had acquired such wealth and influence that they could afford to express their power by funding lavishly decorated buildings of vast proportions. Such expressions of secular and economic power are also found in other late mediaeval commercial cities, including the Llotja de la Seda of Valencia, Spain, a purpose-built silk exchange dating from the 15th century, Westminster Hall, the surviving medieval hall of the Houses of Parliament in London, and the Palazzo Pubblico in Siena, Italy, a 13th-century town hall built to host the offices of the then prosperous republic of Siena. Other Italian cities such as Florence (Palazzo Vecchio), Mantua or Venice also host remarkable examples of secular public architecture.
By the late Middle Ages university towns had grown in wealth and importance as well, and this was reflected in the buildings of some of Europe's ancient universities. Particularly remarkable examples still standing nowadays include the Collegio di Spagna in the University of Bologna, built during the 14th and 15th centuries; the Collegium Carolinum of the University of Prague in Bohemia; the Escuelas mayores of the University of Salamanca in Spain; the chapel of King's College, Cambridge; or the Collegium Maius of the Jagiellonian University in Kraków, Poland.
In addition to monumental secular architecture, examples of the Gothic style in private buildings can be seen in surviving medieval portions of cities across Europe, above all the distinctive Venetian Gothic such as the Ca' d'Oro. The house of the wealthy early 15th-century merchant Jacques Coeur in Bourges, is the classic Gothic bourgeois mansion, full of the asymmetry and complicated detail beloved of the Gothic Revival.
Other cities with a concentration of secular Gothic include Bruges and Siena. Most surviving small secular buildings are relatively plain and straightforward; most windows are flat-topped with mullions, with pointed arches and vaulted ceilings often only found at a few focal points. The country-houses of the nobility were slow to abandon the appearance of being a castle, even in parts of Europe, like England, where defence had ceased to be a real concern. The living and working parts of many monastic buildings survive, for example at Mont Saint-Michel.
Exceptional works of Gothic architecture can also be found on the islands of Sicily and Cyprus, in the walled cities of Nicosia and Famagusta. Also, the roofs of the Old Town Hall in Prague and Znojmo Town Hall Tower in the Czech Republic are an excellent example of late Gothic craftsmanship.
Gothic survival and revival
In 1663 at the Archbishop of Canterbury's residence, Lambeth Palace, a Gothic hammerbeam roof was built to replace that destroyed when the building was sacked during the English Civil War. Also in the late 17th century, some discrete Gothic details appeared on new construction at Oxford University and Cambridge University, notably on Tom Tower at Christ Church, Oxford, by Christopher Wren. It is not easy to decide whether these instances were Gothic survival or early appearances of Gothic revival.
Ireland was a focus for Gothic architecture in the 17th and 18th centuries. Derry Cathedral (completed 1633), Sligo Cathedral (c. 1730), and Down Cathedral (1790-1818) are notable examples. The term "Planter's Gothic" has been applied to the most typical of these.
In England in the mid-18th century, the Gothic style was more widely revived, first as a decorative, whimsical alternative to Rococo that is still conventionally termed 'Gothick', of which Horace Walpole's Twickenham villa "Strawberry Hill" is the familiar example.
19th- and 20th-century Gothic Revival
In England, partly in response to a philosophy propounded by the Oxford Movement and others associated with the emerging revival of 'high church' or Anglo-Catholic ideas during the second quarter of the 19th century, neo-Gothic began to be promoted by influential establishment figures as the preferred style for ecclesiastical, civic and institutional architecture. The appeal of this Gothic revival (which after 1837, in Britain, is sometimes termed Victorian Gothic) gradually widened to encompass "low church" as well as "high church" clients. This period of more universal appeal, spanning 1855–1885, is known in Britain as High Victorian Gothic.
The Houses of Parliament in London by Sir Charles Barry with interiors by a major exponent of the early Gothic Revival, Augustus Welby Pugin, is an example of the Gothic revival style from its earlier period in the second quarter of the 19th century. Examples from the High Victorian Gothic period include George Gilbert Scott's design for the Albert Memorial in London, and William Butterfield's chapel at Keble College, Oxford. From the second half of the 19th century onwards it became more common in Britain for neo-Gothic to be used in the design of non-ecclesiastical and non-governmental building types. Gothic details even began to appear in working-class housing schemes subsidised by philanthropy, though given the expense, less frequently than in the design of upper and middle-class housing.
In France, simultaneously, the towering figure of the Gothic Revival was Eugène Viollet-le-Duc, who outdid historical Gothic constructions to create a Gothic as it ought to have been, notably at the fortified city of Carcassonne in the south of France and in some richly fortified keeps for industrial magnates. Viollet-le-Duc compiled and coordinated an Encyclopédie médiévale that was a rich repertory his contemporaries mined for architectural details. He effected vigorous restoration of crumbling detail of French cathedrals, including the Abbey of Saint-Denis and, famously, Notre Dame de Paris, many of whose most "Gothic" gargoyles are Viollet-le-Duc's. He taught a generation of reform-Gothic designers and showed how to apply Gothic style to modern structural materials, especially cast iron.
In Germany, the great cathedral of Cologne and the Ulm Minster, left unfinished for 600 years, were brought to completion, while in Italy, Florence Cathedral finally received its polychrome Gothic façade. New churches in the Gothic style were created all over the world, including Mexico, Argentina, Japan, Thailand, India, Australia, New Zealand, Hawaii and South Africa.
As in Europe, the United States, Canada, Australia and New Zealand utilised Neo-Gothic for the building of universities, a fine example being the University of Sydney by Edmund Blacket. In Canada, the Canadian Parliament Buildings in Ottawa designed by Thomas Fuller and Chilion Jones with its huge centrally placed tower is influenced by Flemish Gothic buildings.
Although falling out of favour for domestic and civic use, Gothic for churches and universities continued into the 20th century with buildings such as Liverpool Cathedral, the Cathedral of Saint John the Divine, New York and São Paulo Cathedral, Brazil. The Gothic style was also applied to iron-framed city skyscrapers such as Cass Gilbert's Woolworth Building and Raymond Hood's Tribune Tower.
Post-Modernism in the late 20th and early 21st centuries has seen some revival of Gothic forms in individual buildings, such as the Gare do Oriente in Lisbon, Portugal, and the completion of the Cathedral of Our Lady of Guadalupe in Mexico.
About medieval Gothic in particular
- Czech Gothic architecture
- English Gothic architecture
- French Gothic architecture
- Italian Gothic architecture
- List of Gothic architecture
- Medieval architecture
- Middle Ages in history
- Polish Gothic architecture
- Portuguese Gothic architecture
- Renaissance of the 12th century
- Spanish Gothic architecture
- Gothic secular and domestic architecture
About Gothic architecture more generally or in other senses
- Architectural history
- Architectural style
- Architecture of cathedrals and great churches
- Gothic Revival architecture
- Carpenter Gothic
- Collegiate Gothic in North America
- Tented roof
- Vasari, G. The Lives of the Artists. Translated with an introduction and notes by J.C. and P. Bondanella. Oxford: Oxford University Press (Oxford World’s Classics), 1991, pp. 117 & 527. ISBN 9780199537198
- Vasari, Giorgio. (1907) Vasari on technique: being the introduction to the three arts of design, architecture, sculpture and painting, prefixed to the Lives of the most excellent painters, sculptors and architects. G. Baldwin Brown Ed. Louisa S. Maclehose Trans. London: Dent, pp. b & 83.
- "Gotz" is rendered as "Huns" in Thomas Urquhart's English translation.
- Notes and Queries, No. 9. 29 December 1849
- Christopher Wren, 17th-century architect of St. Paul's Cathedral.
- "pour terminer le haut de leurs ouvertures. La Compagnie a désapprové plusieurs de ces nouvelles manières, qui sont défectueuses et qui tiennent la plupart du gothique." Quoted in Fiske Kimball, The Creation of the Rococo, 1943, p 66.
- "L'art Gothique", section: "L'architecture Gothique en Angleterre" by Ute Engel: L'Angleterre fut l'une des premieres régions à adopter, dans la deuxième moitié du XIIeme siècle, la nouvelle architecture gothique née en France. Les relations historiques entre les deux pays jouèrent un rôle prépondérant: en 1154, Henri II (1154–1189), de la dynastie Française des Plantagenêt, accéda au thrône d'Angleterre." (England was one of the first regions to adopt, during the first half of the 12th century, the new Gothic architecture born in France. Historic relationships between the two countries played a determining role: in 1154, Henry II (1154–1189) became the first of the Anjou Plantagenet kings to ascend to the throne of England).
- Banister Fletcher, A History of Architecture on the Comparative Method.
- John Harvey, The Gothic World
- Alec Clifton-Taylor, The Cathedrals of England
- Nikolaus Pevsner, An Outline of European Architecture.
- Warren, John (1991). "Creswell's Use of the Theory of Dating by the Acuteness of the Pointed Arches in Early Muslim Architecture". Muqarnas (BRILL) 8: 59–65 (61–63). doi:10.2307/1523154. JSTOR 1523154.
- Petersen, Andrew (2002-03-11). Dictionary of Islamic Architecture at pp. 295-296. Routledge. ISBN 978-0-203-20387-3. Retrieved 2013-03-16.
- Scott, Robert A.: The Gothic enterprise: a guide to understanding the Medieval cathedral, Berkeley 2003, University of California Press, p. 113 ISBN 0-520-23177-5
- Cf. Bony (1983), especially p.17
- "Le génie architectural des Normands a su s'adapter aux lieux en prenant ce qu'il y a de meilleur dans le savoir-faire des bâtisseurs arabes et byzantins" (The architectural genius of the Normans was able to adapt to the place by taking the best of the know-how of the Arab and Byzantine builders), Les Normands en Sicile, pp. 14, 53–57.
- Harvey, L. P. (1992). "Islamic Spain, 1250 to 1500". Chicago : University of Chicago Press. ISBN 0-226-31960-1; Boswell, John (1978). Royal Treasure: Muslim Communities Under the Crown of Aragon in the Fourteenth Century. Yale University Press. ISBN 0-300-02090-2.
- Cannon, J. 2007. Cathedral: The Great English Cathedrals and the World that Made Them
- Erwin Panofsky argued that Suger was inspired to create a physical representation of the Heavenly Jerusalem, although the extent to which Suger had any aims higher than aesthetic pleasure has been called into doubt by more recent art historians on the basis of Suger's own writings.
- Wim Swaan, The Gothic Cathedral
- While the engineering and construction of the dome of Florence Cathedral by Brunelleschi is often cited as one of the first works of the Renaissance, the octagonal plan, ribs and pointed silhouette were already determined in the 14th century.
- Warren, John (1991). "Creswell's Use of the Theory of Dating by the Acuteness of the Pointed Arches in Early Muslim Architecture". Muqarnas (BRILL) 8: 59–65. doi:10.2307/1523154. JSTOR 1523154.
- "Architectural Importance". Durham World Heritage Site. Retrieved 2013-03-26.
- The open-work spire was completed in 1890 to the original design.
- Ching, Francis D.K. (2012). A Visual Dictionary of Architecture (2nd ed.). John Wiley & Sons, Inc. p. 6. ISBN 978-0-470-64885-8.
- This does not happen in French or English Gothic, and so appears, to the British or French eye, to be a strange disregard for style.
- The Zodiac comprises a sequence of twelve constellations which appear overhead in the Northern Hemisphere at fixed times of year. In a rural community with neither clock nor calendar, these signs in the heavens were crucial in knowing when crops were to be planted and certain rural activities performed.
- Freiburg, Regensburg, Strasbourg, Vienna, Ulm, Cologne, Antwerp, Gdansk, Wroclaw.
- Begun in 1443. "House of Jacques Cœur at Bourges (Begun 1443), aerial sketch". Liam’s Pictures from Old Books. Retrieved 29 September 2007.
- Bob Hunter, "Londonderry Cathedral". BBC.
- Bony, Jean (1983). French Gothic Architecture of the Twelfth and Thirteenth Centuries. Berkeley: University of California Press. ISBN 0-520-02831-7.
- Bumpus, T. Francis (1928). The Cathedrals and Churches of Belgium. T. Werner Laurie.
- Clifton-Taylor, Alec (1967). The Cathedrals of England. Thames and Hudson. ISBN 0-500-18070-9.
- Fletcher, Banister (2001). A History of Architecture on the Comparative method. Elsevier Science & Technology. ISBN 0-7506-2267-9.
- Gardner, Helen; Fred S. Kleiner; Christin J. Mamiya (2004). Gardner's Art through the Ages. Thomson Wadsworth. ISBN 0-15-505090-7.
- Harvey, John (1950). The Gothic World, 1100–1600. Batsford.
- Harvey, John (1961). English Cathedrals. Batsford.
- Huyghe, Rene (ed.) (1963). Larousse Encyclopedia of Byzantine and Medieval Art. Paul Hamlyn.
- Icher, Francois (1998). Building the Great Cathedrals. Harry N. Abrams. ISBN 0-8109-4017-5.
- Pevsner, Nikolaus (1964). An Outline of European Architecture. Pelican Books. ISBN 0-14-061613-6.
- Summerson, John (1983). Architecture in Britain, 1530–1830. Pelican History of Art. ISBN 0-14-056003-3.
- Swaan, Wim (1988). The Gothic Cathedral. Omega Books. ISBN 090785348X.
- Swaan, Wim. Art and Architecture of the Late Middle Ages. Omega Books. ISBN 0-907853-35-8.
- Tatton-Brown, Tim; John Crook (2002). The English Cathedral. New Holland Publishers. ISBN 1-84330-120-2.
- Fletcher, Banister; Cruickshank, Dan, Sir Banister Fletcher's a History of Architecture, Architectural Press, 20th edition, 1996 (first published 1896). ISBN 0-7506-2267-9. Cf. Part Two, Chapter 14.
- von Simson, Otto Georg (1988). The Gothic cathedral: origins of Gothic architecture and the medieval concept of order. ISBN 0-691-09959-6.
- Glaser, Stephanie, "The Gothic Cathedral and Medievalism," in: Falling into Medievalism, ed. Anne Lair and Richard Utz. Special Issue of UNIversitas: The University of Northern Iowa Journal of Research, Scholarship, and Creative Activity, 2.1 (2006). (on the Gothic revival of the 19th century and the depictions of Gothic cathedrals in the Arts)
- Moore, Charles (1890). Development & Character of Gothic Architecture. Macmillan and Co. ISBN 1-4102-0763-3.
- Tonazzi, Pascal (2007) Florilège de Notre-Dame de Paris (anthologie), Editions Arléa, Paris, ISBN 2-86959-795-9
- Wilson, Christopher (2005). The Gothic Cathedral - Architecture of the Great Church. Thames and Hudson. ISBN 978-0500276815.
- Mapping Gothic France, a project by Columbia University and Vassar College with a database of images, 360° panoramas, texts, charts and historical maps
- Gothic Architecture Encyclopædia Britannica
- Holbeche Bloxam, Matthew (1841). Gothic Ecclesiastical Architecture, Elucidated by Question and Answer. Gutenberg.org, from Project Gutenberg
- Brandon, Raphael; Brandon, Arthur (1849). An analysis of Gothick architecture: illustrated by a series of upwards of seven hundred examples of doorways, windows, etc., and accompanied with remarks on the several details of an ecclesiastical edifice., Archive.org, from Internet Archive | https://en.wikipedia.org/wiki/Gothic_style |
4.09375 | International Ladies Garment Workers Union
The International Ladies Garment Workers Union was founded in 1900. The eleven Jewish men who founded the union represented seven local unions from East Coast cities with heavy Jewish immigrant populations. This all-male convention was made up exclusively of cloak makers and one skirt maker, highly skilled Old World tailors who had been trying to organize in a well-established industry for a couple of decades. White goods workers, including skilled corset makers, were not invited to the first meeting. Nor were they or the largely young immigrant Jewish workers in the newly developing shirtwaist industry recruited for the union in the early years of its existence. But these women workers still tried to organize.
The shirtwaist was a woman’s garment with a mannish touch: a buttoned front. Charles Dana Gibson, an illustrious illustrator of the time, popularized this daring design by featuring his handsome Gibson girl wearing a shirtwaist.
The introduction of the shirtwaist lent itself to a system of inside contracting where the work done by women was moved into factories and workshops, still under the control of a contractor but not within the household. As a result, women workers faced different kinds of control, regulation, and, ultimately, sexual harassment. However, the new system also provided larger work sites where numbers of women could gather together to talk about their grievances, among other things. Thus the possibility for unionizing increased. A handful of these women workers, goaded on by the intolerable sweatshop conditions in which they toiled, joined the shirtwaist makers’ Local 25 of the International Ladies Garment Workers Union.
The struggling local had few members, fewer finances, and virtually no bargaining power until the historic Uprising of the 20,000 in 1909. This was partly due to the men’s insistence that only “skilled” workers could effectively organize, partly to the sex-segregated nature of the industry, which kept women in relatively less skilled jobs, and partly to the rapid turnover of women garment workers who moved from job to job in search of better wages. Nonetheless, small strikes and work protests by women pockmarked the first decade of the century. Most of them were quickly lost. Then came the 1909 uprising, itself preceded by a two-month strike at the soon-to-be-infamous Triangle Shirtwaist Company.
The uprising was more than a “strike.” It was the revolt of a community of “greenhorn” teenagers against a common oppression. The uprising set off shock waves in multiple directions: in the labor movement, which discovered women could be warriors; in American society, which found out that young “girls”—immigrants, no less—out of the disputatious Jewish community could organize; in the suffragist movement, which saw in the plight of these women a good reason why women should have the right to vote; and among feminists, who recognized this massive upheaval as a protest against sexual harassment. This strike and subsequent ones in the apparel industry stemmed from long days, low wages, manipulations of pay, and the denial of work in the absence of sexual favors, distinctive aspects of the garment trades.
The uprising had its Joan of Arc, a wisp of a “girl” arising out of nowhere, or so it seemed to the men who ran the union. Her name was Clara Lemlich [Shavelson]. She was not one of the scheduled speakers, although she had proved herself an outspoken activist and daring organizer in previous strikes. But she spoke the words that sparked the conflagration. The overflow meeting in the Great Hall at New York’s Cooper Union, the site of Abraham Lincoln’s historic speech on Union and Liberty, was to be addressed by Samuel Gompers, president of the American Federation of Labor; Benjamin Feigenbaum, later elected as a socialist to the New York State Assembly; Jacob Panken, later elected a judge; Bernard Weinstein, head of the United Hebrew Trades; Meyer London, a labor attorney and the first socialist to be elected to Congress from the Lower East Side; and Mary Dreier, a prominent progressive socialite who had been walking the picket line with the strikers and was head of the Women’s Trade Union League.
When Jacob Panken was introduced, he was interrupted by a high-pitched voice from the audience. “I want to say a few words,” she said. From the audience came a clamor of voices, “Get up on the platform.” Chairperson Feigenbaum sensed the mood of the moment. He ruled that since this girl was a striker and had been beaten up on the picket line, she should be heard. Panken acquiesced.
In what one press report called a “philippic in Yiddish,” Clara Lemlich concluded, “I offer that a general strike be declared now.” Although not everyone in the audience was conversant in Yiddish—there were many Italian immigrant workers in the garment industry—they all understood.
Feigenbaum reached into the Jewish past to endow the moment with a touch of tradition. He called upon all those present to raise their hand and to “take the old Jewish oath. If I turn traitor to the cause I now pledge, may this hand wither from the arm I now raise.”
The strike, directed against employer tyranny in the sweatshop, served many purposes, one of which was to draw the attention of suffragists to the plight of working women. Up to that time, those at the forefront of the fight for women’s right to vote came almost exclusively from the economic and educated elite in the United States. To these women, the conditions of the shirtwaist makers were evidence of what happens when women are denied a voice in the governance of their communities and country. The active resistance to economic exploitation of these young Jewish women indicated that they should, and could, add new legions to the ranks of the suffragists. Indeed, Jewish immigrants subsequently became outspoken supporters of suffrage, helping to pass the New York State law in 1917. As a consequence of the uprising, the crusades for the rights of working people as workers and of women as women and as citizens were coming together.
The lasting meaning of the uprising was summarized by Samuel Gompers at the American Federation of Labor convention after the shirtwaist strike. It “brought to the consciousness of the nation,” he declaimed, “a recognition of certain features looming up on its social development. These are the extent to which women are taking up with industrial life, their consequent tendency to stand together in the struggle to protect their common interests as wage-earners, the readiness of people in all classes to approve of trade-union methods on behalf of working women, and the capacity of women as strikers to suffer, to do, and to dare in support of their rights.”
Inspiring as the uprising was, its immediate consequence in terms of working conditions was limited. This was especially true in the case of the Triangle Shirtwaist Company, whose brutal mistreatment of its employees was the original cause célèbre that set off the uprising and which remained unorganized. The Jewish employers of Triangle had been cited several times for violation of the city’s fire safety code; the company paid the fine and then went about doing its business as usual.
On March 25, 1911, a fire broke out in the Triangle factory. It claimed 146 lives, mainly Jewish women. These victims became the martyred dead in a cause that, in time, revolutionized labor conditions and labor relations in America. It led to more effective fire and safety regulations in New York State, and it inspired women like Rose Schneiderman of the Women’s Trade Union League to argue forcefully that legislation was less important than organization.
The large numbers of women garment workers in the ILGWU shaped its labor philosophy, despite the conspicuous absence of women among the union’s top leadership. The ILGWU looked upon the union not only as a means to protect and promote the immediate interests of garment workers but also as part of a greater international movement to convert a dog-eat-dog economic system into a global cooperative commonwealth. Its leaders viewed the class struggle as a classroom where working men and women would learn about the whys and hows of improving their personal lives and remolding the social order. For members to pay dues was vital, but it was equally important for them to pay attention to their own development and to their role in the reshaping of society. Through education, working people would become their own messiah.
Women, especially activists in Local 25, championed this mission. In 1916, spurred by Local 25, the union convention voted to establish an education department to be headed by Juliet S. Poyntz, a former history teacher at Barnard College. To supplement the teaching skills of this outsider, the union chose Fannia Cohn, for many years the only woman on the union’s general executive board, to apply her organizing skills as an insider to enroll members en masse in this novel grass-roots educational program. When Poyntz resigned in 1918 under pressure from the general executive board, Cohn, named as executive secretary of the education department, carried on under a male education director, and she generated one of the most remarkable worker education programs in America.
Under Cohn’s guidance, the ILGWU instituted a Workers University in New York City’s Washington Irving High School where union members attended lectures by such distinguished college professors as Charles Beard, Harry Carmen, and Paul Brissenden. The U.S. Bureau of Labor Statistics noted in a 1920 report that “the first systematic scheme of education undertaken by organized [labor] in the United States was put in practice by the ILGWU.” It further reported that “up to the spring of 1919 eight hundred [members] had either completed one or more courses or were engaged in the study of various subjects.” Cohn greatly expanded the union’s educational offerings, setting up programs in Cleveland, Boston, and Philadelphia.
While the university was the jewel in the diadem of the union’s educational work, there were, in addition, eight Unity Centers that offered basic courses in literacy. Union leaders were trained in public speaking and parliamentary procedure. There were also classes in health—how to stay well. As the union grew, many of these activities proliferated. Members took classes on college campuses during the summer, and there was a formal Officers Training Institute, as well as intensive pretraining for citizenship and extensive education in health care.
Many of these, which became models for the entire American labor movement, derived from the early initiatives of Fannia Cohn. Women in Local 25, where the membership was more than 75 percent female in 1919, wanted the union to perform a social role that would create community and comradeship as well as loyalty to the union. So, for example, the ILGWU created vacation houses and developed a pioneer medical institution—the ILGWU Health Center. It was a unique and influential conception of unionization.
By 1919, drawing on the confidence gained from classes and discussion groups, women in Local 25 began to question why they had not a single woman officer. The demand for union democracy took hold. But soon women's issues were taken over by male insurgents, many of them Communist Party organizers. Trusted women leaders like Fannia Cohn and Pauline Newman were caught in the middle of a battle between the "lefts" and the "rights." The political infighting seriously weakened the union; women's share of the membership declined from 75 percent to a mere 39 percent by 1924, and male union leaders were reluctant to start new organizing drives to unionize women.
In the 1930s, rejuvenated by the New Deal’s support of labor organizing, women once again came to dominate the membership of the ILGWU. And once again the issue of women’s leadership arose. Rose Pesotta was the only woman on the union’s executive board. When Miriam Speishandler of Local 22 was nominated as a delegate to the national convention, she took the opportunity to ask ILGWU president David Dubinsky why there were not more women on the executive board of a union that was 85 percent female. Pesotta’s visibility in California led to her election in 1934 as a vice president of the ILGWU, serving on the general executive board. Pesotta was conflicted about her ten years of service in that position. Sexism and a loss of personal independence continually troubled her, until she finally resigned from the position in 1942.
The heady days of the 1930s also led to such unusual innovations as the musical revue Pins and Needles. Written by Harold Rome, the successful musical ran for an impressive 1,108 performances in 1937 to consistently enthusiastic audiences who appreciated its humor, its political message, and its sharp social commentary. The show’s cast were all union members who effectively propagandized the trade union movement through song and dance.
More recently, as the ILGWU’s membership has shifted from Jewish and Italian women to Latino, African-American, and Asian women, one Jewish woman has served as the union’s legislative voice in the halls of Congress for almost half a century. Unlike the other Jewish women of prominence in the union, Evelyn Dubrow was not an immigrant. She grew up in New Jersey and was educated at the New York University School of Journalism. When American labor, through the Congress of Industrial Organizations, began to reach into mass manufacture in the 1930s, Dubrow served as education director for the New Jersey Textile Workers of America. As a writer, she worked as secretary of the New Jersey Newspaper Guild from 1943 to 1946.
Subsequently she became national director of organization of the Americans for Democratic Action and a founder of the Consumer Federation of America. By her performance on the Hill, Dubrow has won recognition and admiration from those who know how the wheels of government run. In 1982 the Washington Business Review named her as one of D.C.’s top ten lobbyists, and in 1994 Washingtonian Magazine listed her as one of America’s top 100 women.
Although Evelyn Dubrow is distinguished for her political work, she was exceptional among the Jewish women in the union only in her official assignment to that mission. All of the women leaders mentioned were intensely political, as were many of the rank and file. For them it was never enough to have a union to ease and enrich the lives of those in the apparel industry. They dreamed of and worked for a movement that would someday transform the world into a place where the ideals of equality and justice for all would be a reality.
Glenn, Susan A. Daughters of the Shtetl: Life and Labor in the Immigrant Generation (1990); Howe, Irving. World of Our Fathers: The Journey of East European Jews to America and the Life They Found and Made (1976); ILGWU. Pauline Newman (1986); Kessler-Harris, Alice. “Organizing the Unorganizable: Three Jewish Women and Their Union.” Labor History 17 (Winter 1976): 5–23, and “Rose Schneiderman and the Limits of Women’s Trade Unionism.” In Labor Leaders in America, edited by Melvyn Dubofsky and Warren Van Tine (1987); Leeder, Elaine. The Gentle General: Rose Pesotta, Anarchist and Labor Organizer (1993); Levine, Louis. The Women’s Garment Workers (1924); Orleck, Annelise. Common Sense and a Little Fire: Women and Working-Class Politics in the Unites States, 1900–1965 (1995); Pesotta, Rose. Bread upon the Waters (1945); Seidman, Joel. The Needle Trades (1942); Stein, Leon. Out of the Sweat Shop (1977), and The Triangle Fire (1962); Stolberg, Benjamin. Tailor’s Progress (1944); Tyler, Gus. Look for the Union Label (1995).
How to cite this page
. The Editors. "International Ladies Garment Workers Union." Jewish Women: A Comprehensive Historical Encyclopedia. 1 March 2009. Jewish Women's Archive. (Viewed on February 6, 2016) <http://jwa.org/encyclopedia/article/international-ladies-garment-workers-union>. | http://jwa.org/encyclopedia/article/international-ladies-garment-workers-union |
4.40625 | What if you were given two points that a line passes through like (-1, 0) and (2, 2)? How could you find the slope of that line? After completing this Concept, you'll be able to find the slope of any line.
Wheelchair ramps at building entrances must have a slope between and . If the entrance to a new office building is 28 inches off the ground, how long does the wheelchair ramp need to be?
We come across many examples of slope in everyday life. For example, a slope is in the pitch of a roof, the grade or incline of a road, or the slant of a ladder leaning on a wall. In math, we use the word slope to define steepness in a particular way.
To make it easier to remember, we often word it like this: slope = rise / run, the distance moved vertically divided by the distance moved horizontally.
In the picture above, the slope would be the ratio of the height of the hill to the horizontal length of the hill. In other words, it would be 3/4, or 0.75.
If the car were driving to the right it would climb the hill - we say this is a positive slope. Any time you see the graph of a line that goes up as you move to the right, the slope is positive.
If the car kept driving after it reached the top of the hill, it might go down the other side. If the car is driving to the right and descending, then we would say that the slope is negative.
Here's where it gets tricky: If the car turned around instead and drove back down the left side of the hill, the slope of that side would still be positive. This is because the rise would be -3, but the run would be -4 (think of the x-axis - if you move from right to left you are moving in the negative direction). That means our slope ratio would be -3/-4, and the negatives cancel out to leave 0.75, the same slope as before. In other words, the slope of a line is the same no matter which direction you travel along it.
Find the Slope of a Line
A simple way to find a value for the slope of a line is to draw a right triangle whose hypotenuse runs along the line. Then we just need to measure the distances on the triangle that correspond to the rise (the vertical dimension) and the run (the horizontal dimension).
Find the slopes for the three graphs shown.
There are already right triangles drawn for each of the lines - in future problems you’ll do this part yourself. Note that it is easiest to make triangles whose vertices are lattice points (i.e. points whose coordinates are all integers).
a) The rise shown in this triangle is 4 units; the run is 2 units. The slope is 4/2 = 2.
b) The rise shown in this triangle is 4 units, and the run is also 4 units. The slope is 4/4 = 1.
c) The rise shown in this triangle is 2 units, and the run is 4 units. The slope is 2/4 = 1/2.
Find the slope of the line that passes through the points (1, 2) and (4, 7).
We already know how to graph a line if we’re given two points: we simply plot the points and connect them with a line. Here’s the graph:
Since we already have coordinates for the vertices of our right triangle, we can quickly work out that the rise is 7 - 2 = 5 and the run is 4 - 1 = 3 (see diagram). So the slope is 5/3.
If you look again at the calculations for the slope, you'll notice that the 7 and 2 are the y-coordinates of the two points and the 4 and 1 are the x-coordinates. This suggests a pattern we can follow to get a general formula for the slope between two points (x1, y1) and (x2, y2):
Slope between (x1, y1) and (x2, y2) = (y2 - y1) / (x2 - x1), or m = Δy / Δx
In the second equation the letter m denotes the slope (this is a mathematical convention you'll see often) and the Greek letter delta (Δ) means change. So another way to express slope is the change in y divided by the change in x. In the next section, you'll see that it doesn't matter which point you choose as point 1 and which you choose as point 2.
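To make the formula concrete, here is a minimal Python sketch; it is not part of the original lesson, and the function name and structure are illustrative choices.

```python
def slope(p1, p2):
    """Return the slope of the line through points p1 and p2, given as (x, y) tuples."""
    x1, y1 = p1
    x2, y2 = p2
    rise = y2 - y1  # change in y (delta y)
    run = x2 - x1   # change in x (delta x)
    return rise / run

# The example above: the line through (1, 2) and (4, 7)
print(slope((1, 2), (4, 7)))  # 1.666..., i.e. 5/3
```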
Find the Slopes of Horizontal and Vertical lines
Determine the slopes of the two lines on the graph below.
There are 2 lines on the graph: one horizontal and one vertical.
Let’s pick 2 points on the horizontal line and use our equation for slope. Since the line is horizontal, both points have the same y-coordinate, so the rise (y2 - y1) is 0, and the slope is 0 divided by the run, which is 0.
If you think about it, this makes sense - if y doesn’t change as x increases then there is no slope, or rather, the slope is zero. You can see that this must be true for all horizontal lines.
Horizontal lines (y = constant) all have a slope of 0.
Now let’s consider the vertical line. If we pick 2 points on it, they have the same x-coordinate, so the run (x2 - x1) is 0 and our slope equation asks us to divide by zero. But dividing by zero isn’t allowed!
In math we often say that a term which involves division by zero is undefined. (Technically, the answer can also be said to be infinitely large—or infinitely small, depending on the problem.)
Vertical lines (x = constant) all have an infinite (or undefined) slope.
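Continuing the illustrative Python sketch from above (again not part of the original lesson; the coordinates are arbitrary examples), the two special cases behave exactly as described:

```python
# Horizontal line: two points with the same y-coordinate -> slope 0
print(slope((1, 4), (6, 4)))   # 0.0

# Vertical line: two points with the same x-coordinate -> undefined slope
try:
    slope((5, 1), (5, 8))
except ValueError as err:
    print(err)                  # Vertical line: slope is undefined
```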
Find the slopes of the lines on the graph below.
Look at the lines - they both slant down (or decrease) as we move from left to right. Both these lines have negative slope.
The lines don’t pass through very many convenient lattice points, but by looking carefully you can see a few points that look to have integer coordinates. These points have been circled on the graph, and we’ll use them to determine the slope. We’ll also do our calculations twice, to show that we get the same slope whichever way we choose point 1 and point 2.
For each line, we compute the slope twice using the circled points, switching which point we call point 1 and which we call point 2.
You can see that whichever way round you pick the points, the answers are the same. Either way, one of the lines has slope -0.364 and the other has slope -1.375.
Use the slope formula to find the slope of the line that passes through each pair of points.
- (-5, 7) and (0, 0)
- (-3, -5) and (3, 11)
- (3, -5) and (-2, 9)
- (-5, 7) and (-5, 11)
- (9, 9) and (-9, -9)
- (3, 5) and (-2, 7)
- (2.5, 3) and (8, 3.5)
For each line in the graphs below, use the points indicated to determine the slope.
- For each line in the graphs above, imagine another line with the same slope that passes through the point (1, 1), and name one more point on that line.
Answers for Explore More Problems
To view the Explore More answers, open this PDF file and look for section 4.6. | http://www.ck12.org/algebra/Slope/lesson/Slope---Intermediate/ |
4 | Electrical System of the Heart
What controls the timing of your heartbeat?
Your heart's electrical system controls the timing of your heartbeat by regulating your:
- Heart rate, which is the number of times your heart beats per minute.
- Heart rhythm, which is the synchronized pumping action of your four heart chambers.
Your heart's electrical system should maintain:
- A steady heart rate of 60 to 100 beats per minute at rest. The heart's electrical system also increases this rate to meet your body's needs during physical activity and lowers it during sleep.
- An orderly contraction of your atria and ventricles (this is called a sinus rhythm).
How does the heart's electrical system work?
Your heart muscle is made of tiny cells. Your heart's electrical system controls the timing of your heartbeat by sending an electrical signal through these cells.
Two different types of cells in your heart enable the electrical signal to control your heartbeat:
- Conducting cells carry your heart's electrical signal.
- Muscle cells enable your heart's chambers to contract, an action triggered by your heart's electrical signal.
The electrical signal travels through the network of conducting cell "pathways," which stimulates your upper chambers (atria) and lower chambers (ventricles) to contract. The signal is able to travel along these pathways by means of a complex reaction that allows each cell to activate one next to it, stimulating it to "pass along" the electrical signal in an orderly manner. As cell after cell rapidly transmits the electrical charge, the entire heart contracts in one coordinated motion, creating a heartbeat.
The electrical signal starts in a group of cells at the top of your heart called the sinoatrial (SA) node. The signal then travels down through your heart, triggering first your two atria and then your two ventricles. In a healthy heart, the signal travels very quickly through the heart, allowing the chambers to contract in a smooth, orderly fashion.
The heartbeat happens as follows:
- The SA node (called the pacemaker of the heart) sends out an electrical impulse.
- The upper heart chambers (atria) contract.
- The AV node sends an impulse into the ventricles.
- The lower heart chambers (ventricles) contract or pump.
- The SA node sends another signal to the atria to contract, which starts the cycle over again.
This cycle of an electrical signal followed by a contraction is one heartbeat.
SA node and atria
When the SA node sends an electrical impulse, it triggers the following process:
- The electrical signal travels from your SA node through muscle cells in your right and left atria.
- The signal triggers the muscle cells that make your atria contract.
- The atria contract, pumping blood into your left and right ventricles.
AV node and ventricles
After the electrical signal has caused your atria to contract and pump blood into your ventricles, the electrical signal arrives at a group of cells at the bottom of the right atrium called the atrioventricular node, or AV node. The AV node briefly slows down the electrical signal, giving the ventricles time to receive the blood from the atria. The electrical signal then moves on to trigger your ventricles.
When the electrical signal leaves the AV node, it triggers the following process:
- The signal travels down a bundle of conduction cells called the bundle of His, which divides the signal into two branches: one branch goes to the left ventricle, another to the right ventricle.
- These two main branches divide further into a system of conducting fibers that spreads the signal through your left and right ventricles, causing the ventricles to contract.
- When the ventricles contract, your right ventricle pumps blood to your lungs and the left ventricle pumps blood to the rest of your body.
After your atria and ventricles contract, each part of the system electrically resets itself.
How does the heart's electrical system regulate your heart rate?
The cells of the SA node at the top of the heart are known as the pacemaker of the heart because the rate at which these cells send out electrical signals determines the rate at which the entire heart beats (heart rate).
The normal heart rate at rest ranges between 60 and 100 beats per minute. Your heart rate can adjust higher or lower to meet your body's needs.
What makes your heart rate speed up or slow down?
Your brain and other parts of your body send signals to stimulate your heart to beat either at a faster or a slower rate. Although the way all of the chemical signals interact to affect your heart rate is complex, the net result is that these signals tell the SA node to fire charges at either a faster or slower pace, resulting in a faster or a slower heart rate.
For example, during periods of exercise, when the body requires more oxygen to function, signals from your body cause your heart rate to increase significantly to deliver more blood (and therefore more oxygen) to the body. Your heart rate can increase beyond 100 beats per minute to meet your body's increased needs during physical exertion.
Similarly, during periods of rest or sleep, when the body needs less oxygen, the heart rate decreases. Some athletes actually may have normal heart rates well below 60 because their hearts are very efficient and don't need to beat as fast. Changes in your heart rate, therefore, are a normal part of your heart's effort to meet the needs of your body.
How does your body control your heart rate?
Your body controls your heart by:
- The sympathetic and parasympathetic nervous systems, which have nerve endings in the heart.
- Hormones, such as epinephrine and norepinephrine (catecholamines), which circulate in the bloodstream.
Sympathetic and parasympathetic nervous systems
The sympathetic and parasympathetic nervous systems are opposing forces that affect your heart rate. Both systems are made up of very tiny nerves that travel from the brain or spinal cord to your heart. The sympathetic nervous system is triggered during stress or a need for increased cardiac output and sends signals to your heart to increase its rate. The parasympathetic system is active during periods of rest and sends signals to your heart to decrease its rate.
During stress or a need for increased cardiac output, the adrenal glands release a hormone called norepinephrine into the bloodstream at the same time that the sympathetic nervous system is also triggered to increase your heart rate. This hormone causes the heart to beat faster, and unlike the sympathetic nervous system that sends an instantaneous and short-lived signal, norepinephrine released into the bloodstream increases the heart rate for several minutes or more. | http://www.cheshire-med.com/health_wellness/health_encyclopedia/te7147abc
4.09375 | Dark matter is a hypothetical substance that is believed by most astronomers to account for around five-sixths of the matter in the universe. Although it has not been directly observed, its existence and properties are inferred from its various gravitational effects: on the motions of visible matter; via gravitational lensing; its influence on the universe's large-scale structure, and its effects in the cosmic microwave background. Dark matter is transparent to electromagnetic radiation (light, cosmic rays, etc.) and/or is so dense and small that it fails to absorb or emit enough radiation to appear via imaging technology.
The standard model of cosmology indicates that the total mass–energy of the universe contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy. Thus, dark matter constitutes 84.5%[note 1] of total mass, while dark energy plus dark matter constitute 95.1% of total mass–energy content.
The dark matter hypothesis plays a central role in state-of-the-art modeling of cosmic structure formation and galaxy formation and evolution and on explanations of the anisotropies observed in the cosmic microwave background (CMB). All these lines of evidence suggest that galaxies, clusters of galaxies and the universe as a whole contain far more matter than that which is observable via electromagnetic signals.
Although the existence of dark matter is generally accepted by most of the astronomical community, a minority of astronomers argue for various modifications of the standard laws of general relativity, such as MOND and TeVeS, that attempt to account for the observations without invoking additional matter.
Many experiments to detect proposed dark matter particles through non-gravitational means are under way.
The first to suggest using stellar velocities to infer the presence of dark matter was Dutch astronomer Jacobus Kapteyn in 1922. Fellow Dutchman and radio astronomy pioneer Jan Oort hypothesized the existence of dark matter in 1932. Oort was studying stellar motions in the local galactic neighborhood and found that the mass in the galactic plane must be greater than what was observed, but this measurement was later determined to be erroneous.
In 1933, Swiss astrophysicist Fritz Zwicky, who studied galactic clusters while working at the California Institute of Technology, made a similar inference. Zwicky applied the virial theorem to the Coma cluster and obtained evidence of unseen mass that he called dunkle Materie 'dark matter'. Zwicky estimated its mass based on the motions of galaxies near its edge and compared that to an estimate based on its brightness and number of galaxies. He estimated that the cluster had about 400 times more mass than was visually observable. The gravity effect of the visible galaxies was far too small for such fast orbits, thus mass must be hidden from view. Based on these conclusions, Zwicky inferred that some unseen matter provided the mass and associated gravitation attraction to hold the cluster together. This was the first formal inference about the existence of dark matter. Zwicky's estimates were off by more than an order of magnitude, mainly due to an obsolete value of the Hubble constant; the same calculation today shows a smaller fraction, using greater values for luminous mass. However, Zwicky did correctly infer that the bulk of the matter was dark.
The first robust indications that the mass to light ratio was anything other than unity came from measurements of galaxy rotation curves. In 1939, Horace W. Babcock reported the rotation curve for the Andromeda nebula, which suggested that the mass-to-luminosity ratio increases radially. He attributed it to either light absorption within the galaxy or modified dynamics in the outer portions of the spiral and not to missing matter.
Vera Rubin and Kent Ford in the 1960s–1970s were the first to postulate "dark matter" based upon robust evidence, using galaxy rotation curves. Rubin worked with a new spectrograph to measure the velocity curve of edge-on spiral galaxies with greater accuracy. This result was independently confirmed in 1978. An influential paper presented Rubin's results in 1980. Rubin found that most galaxies must contain about six times as much dark as visible mass; thus, by around 1980 the apparent need for dark matter was widely recognized as a major unsolved problem in astronomy.
A stream of independent observations in the 1980s indicated its presence, including gravitational lensing of background objects by galaxy clusters, the temperature distribution of hot gas in galaxies and clusters, and the pattern of anisotropies in the cosmic microwave background. According to consensus among cosmologists, dark matter is composed primarily of a not yet characterized type of subatomic particle. The search for this particle, by a variety of means, is one of the major efforts in particle physics.
Cosmic microwave background radiation
In cosmology, the CMB is explained as relic radiation which has travelled freely since the era of recombination, around 375,000 years after the Big Bang. The CMB's anisotropies are explained as the result of small primordial density fluctuations, and subsequent acoustic oscillations in the photon-baryon plasma whose restoring force is gravity.
The NASA Cosmic Background Explorer (COBE) found the CMB spectrum to be a very precise blackbody spectrum with a temperature of 2.726 K. In 1992, COBE detected CMB fluctuations (anisotropies) at a level of about one part in 10^5.
In the following decade, CMB anisotropies were investigated by ground-based and balloon experiments. Their primary goal was to measure the angular scale of the first acoustic peak of the anisotropies' power spectrum, for which COBE had insufficient resolution. During the 1990s, the first peak was measured with increasing sensitivity, and in 2000 the BOOMERanG experiment reported that the highest power fluctuations occur at scales of approximately one degree, showing that the Universe is close to flat. These measurements were able to rule out cosmic strings as the leading theory of cosmic structure formation, and suggested cosmic inflation was the correct theory.
Ground-based interferometers provided fluctuation measurements with higher accuracy, including the Very Small Array, the Degree Angular Scale Interferometer (DASI) and the Cosmic Background Imager (CBI). DASI first detected the CMB polarization, and CBI provided the first E-mode polarization spectrum with compelling evidence that it is out of phase with the T-mode spectrum. COBE's successor, the Wilkinson Microwave Anisotropy Probe (WMAP) provided the most detailed measurements of (large-scale) anisotropies in the CMB in 2003 - 2010. ESA's Planck spacecraft returned more detailed results in 2013-2015.
WMAP's measurements played the key role in establishing the Standard Model of Cosmology, namely the Lambda-CDM model, which posits a dark energy-dominated flat universe, supplemented by dark matter and atoms with density fluctuations seeded by a Gaussian, adiabatic, nearly scale invariant process. Its basic properties are determined by six adjustable parameters: dark matter density, baryon (atom) density, the universe's age (or equivalently, the Hubble constant), the initial fluctuation amplitude, their scale dependence, and the optical depth to reionization.
Much of the evidence comes from the motions of galaxies. Many of these appear to be fairly uniform, so by the virial theorem, the total kinetic energy should be half the galaxies' total gravitational binding energy. Observationally, the total kinetic energy is much greater. In particular, assuming the gravitational mass is due to only visible matter, stars far from the center of galaxies have much higher velocities than predicted by the virial theorem. Galactic rotation curves, which illustrate the velocity of rotation versus the distance from the galactic center, show the "excess" velocity. Dark matter is the most straightforward way of accounting for this discrepancy.
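To make the virial argument concrete (an illustrative aside, not taken from the article text): for a gravitationally bound, relaxed system the virial theorem states that 2⟨T⟩ + ⟨U⟩ = 0. Taking ⟨T⟩ ≈ (1/2) M σ^2 and ⟨U⟩ ≈ -G M^2 / R, where σ is the observed velocity dispersion and R is the system's size, gives the order-of-magnitude dynamical mass estimate M ~ σ^2 R / G; it is this dynamical mass that turns out to exceed the luminous mass.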
The distribution of dark matter in galaxies required to explain the motion of the observed matter suggests the presence of a roughly spherically symmetric, centrally concentrated halo of dark matter with the visible matter concentrated in a central disc.
Low surface brightness dwarf galaxies are important sources of information for studying dark matter. They have an uncommonly low ratio of visible to dark matter, and have few bright stars at the center that would otherwise impair observations of the rotation curve of outlying stars.
Gravitational lensing observations of galaxy clusters allow direct estimates of the gravitational mass based on its effect on light coming from background galaxies, since large collections of matter (dark or otherwise) gravitationally deflect light. In clusters such as Abell 1689, lensing observations confirm the presence of considerably more mass than is indicated by the clusters' light. In the Bullet Cluster, lensing observations show that much of the lensing mass is separated from the X-ray-emitting baryonic mass. In July 2012, lensing observations were used to identify a "filament" of dark matter between two clusters of galaxies, as cosmological simulations predicted.
Galaxy rotation curves
A galaxy rotation curve is a plot of the orbital velocities (i.e., the speeds) of visible stars or gas in that galaxy versus their radial distance from that galaxy's center. The rotational/orbital speeds of galaxies/stars do not decline with distance, unlike other orbital systems such as stars/planets and planets/moons that also have most of their mass at the centre. In the latter cases, this reflects the mass distributions within those systems. The mass observations for galaxies based on the light that they emit are far too low to explain the velocity observations.
The dark matter hypothesis supplies the missing mass, resolving the anomaly.
A universal rotation curve can be expressed as the sum of an exponential distribution of visible matter that tapers to zero with distance from the center, and a spherical dark matter halo with a flat core of radius r0 and density ρ0 = 4.5 × 10^−2 (r0/kpc)^(−2/3) M☉ pc^−3.
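For concreteness, here is a minimal Python sketch (not from the article; the core radii are arbitrary illustrative values) that evaluates this core-density expression:

```python
def halo_core_density(r0_kpc):
    """Dark matter halo core density from the universal-rotation-curve fit
    quoted above: rho0 = 4.5e-2 * (r0 / kpc)**(-2/3), in solar masses per
    cubic parsec."""
    return 4.5e-2 * r0_kpc ** (-2.0 / 3.0)

for r0 in (1.0, 10.0, 100.0):  # assumed core radii in kpc
    print(f"r0 = {r0:5.0f} kpc -> rho0 = {halo_core_density(r0):.4f} Msun / pc^3")
```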
Low-surface-brightness (LSB) galaxies have a much larger visible mass deficit than others. This property simplifies the disentanglement of the dark and visible matter contributions to the rotation curves.
Rotation curves for some elliptical galaxies do display low velocities for outlying stars (tracked for example by the motion of embedded planetary nebulae). A dark-matter compliant hypothesis proposes that some stars may have been torn by tidal forces from disk-galaxy mergers from their original galaxies during the first close passage and put on outgoing trajectories, explaining the low velocities of the remaining stars even in the presence of a halo.
Velocity dispersions of galaxies
Diffuse interstellar gas measurements of galactic edges indicate missing ordinary matter beyond the visible boundary, but that galaxies are virialized (i.e., gravitationally bound and orbiting each other with velocities that correspond to predicted orbital velocities of general relativity) up to ten times their visible radii. This has the effect of pushing up the dark matter as a fraction of the total matter from 50% as measured by Rubin to the now accepted value of nearly 95%.
Dark matter seems to be a small component or absent in some places. Globular clusters show little evidence of dark matter, except that their orbital interactions with galaxies do support galactic dark matter. Star velocity profiles seemed to indicate a concentration of dark matter in the disk of the Milky Way. It now appears, however, that the high concentration of baryonic matter in the disk (especially in the interstellar medium) can account for this motion. Galaxy mass and light profiles appear to not match. The typical model for dark matter galaxies is a smooth, spherical distribution in virialized halos. This avoids small-scale (stellar) dynamical effects. A 2006 study explained the warp in the Milky Way's disk by the interaction of the Large and Small Magellanic Clouds and the 20-fold increase in predicted mass from dark matter.
In 2005, astronomers claimed to have discovered a galaxy made almost entirely of dark matter, 50 million light years away in the Virgo Cluster, which was named VIRGOHI21. Unusually, VIRGOHI21 does not appear to contain visible stars: it was discovered with radio frequency observations of hydrogen. Based on rotation profiles, the scientists estimate that this object contains approximately 1000 times more dark matter than hydrogen and has a mass of about 1/10 that of the Milky Way. The Milky Way is estimated to have roughly 10 times as much dark matter as ordinary matter. Models of the Big Bang and structure formation suggested that such dark galaxies should be very common, but VIRGOHI21 was the first to be detected.
Galaxy clusters and gravitational lensing
Galactic clusters also lack sufficient luminous matter to explain the measured orbital velocities of galaxies within them. Galaxy cluster masses have been estimated in three independent ways:
- Radial velocity scatter of the galaxies within clusters
- X-rays emitted by hot gas. Gas temperature and density can be estimated from the X-ray energy and flux; assuming pressure and gravity balance determines the cluster's mass profile (a sketch of this hydrostatic-equilibrium relation is given just after this list). Chandra X-ray Observatory experiments use this technique to independently determine cluster mass. These observations generally indicate that baryonic mass is approximately 12–15 percent, in reasonable agreement with the Planck spacecraft cosmic average of 15.5–16 percent.
- Gravitational lensing (usually on more distant galaxies) predicts masses without relying on observations of dynamics (e.g., velocity). Multiple Hubble projects used this method to measure cluster masses.
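To make the X-ray method in the list above concrete (an illustrative sketch, not taken from the article): under the stated assumption that gas pressure balances gravity (hydrostatic equilibrium), the mass enclosed within radius r follows from the measured gas temperature and density profiles, M(<r) = -(k_B T(r) r) / (G μ m_p) × (d ln n / d ln r + d ln T / d ln r), where n is the gas density, μ the mean molecular weight and m_p the proton mass. The mass obtained this way is what gets compared with the luminous gas and stars.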
Generally these methods indicate far more mass than the luminous matter alone can account for.
Gravity acts as a lens to bend the light from a more distant source (such as a quasar) around a massive object (such as a cluster of galaxies) lying between the source and the observer in accordance with general relativity.
Strong lensing is the observed distortion of background galaxies into arcs when their light passes through such a gravitational lens. It has been observed around a few distant clusters including Abell 1689. By measuring the distortion geometry, the mass of the intervening cluster can be obtained. In the dozens of cases where this has been done, the mass-to-light ratios obtained correspond to the dynamical dark matter measurements of clusters.
Weak gravitational lensing investigates minute distortions of galaxies, using statistical analyses from vast galaxy surveys. By examining the apparent shear deformation of the adjacent background galaxies, astrophysicists can characterize the mean distribution of dark matter. The mass-to-light ratios correspond to dark matter densities predicted by other large-scale structure measurements.
Galactic cluster Abell 2029 comprises thousands of galaxies enveloped in a cloud of hot gas and dark matter equivalent to more than 10^14 M☉. At the center of this cluster is an enormous elliptical galaxy likely formed from many smaller galaxies.
The most direct observational evidence comes from the Bullet Cluster. In most regions dark and visible matter are found together, due to their gravitational attraction. In the Bullet Cluster however, the two matter types split apart. This was apparently caused by a collision between two smaller clusters. Electromagnetic interactions among passing gas particles would then have caused the luminous matter to slow and settle near the point of impact. Because dark matter does not interact electromagnetically, it did not slow and continued past the center.
X-ray observations show that much of the luminous matter (in the form of 10^7–10^8 Kelvin gas or plasma) is concentrated in the cluster's center. Weak gravitational lensing observations show that much of the missing mass would reside outside the central region. Unlike galactic rotation curves, this evidence is independent of the details of Newtonian gravity, directly supporting dark matter.
Dark matter's observed behavior constrains whether and how much it scatters off other dark matter particles, quantified as its self-interaction cross section. If dark matter has no pressure, it can be described as a perfect fluid that has no damping. The distribution of mass in galaxy clusters has been used to argue both for and against the significance of self-interaction.
An ongoing survey using the Subaru telescope uses weak lensing to analyze background light, bent by dark matter, to determine the shape of the lens (how dark matter is distributed in the foreground). The survey studies galaxies more than a billion light-years distant, across an area greater than a thousand square degrees (about one fortieth of the entire sky).
Cosmic microwave background
Angular CMB fluctuations provide evidence for dark matter. The typical angular scales of CMB oscillations, measured as the power spectrum of the CMB anisotropies, reveal the different effects of baryonic and dark matter. Ordinary matter interacts strongly via radiation whereas dark matter particles (WIMPs) do not; both affect the oscillations by way of their gravity, so the two forms of matter have different effects.
The spectrum shows a large first peak and smaller successive peaks. The first peak mostly reflects the density of baryonic matter, while the third peak relates mostly to the density of dark matter; together the peaks constrain the total matter density and the baryon (atom) density.
Sky surveys and baryon acoustic oscillations
The early universe's acoustic oscillations affected visible matter by way of Baryon Acoustic Oscillation (BAO) clustering, in a way that can be measured with sky surveys such as the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey. These measurements are consistent with CMB measurements derived from the WMAP spacecraft and further constrain the Lambda CDM model and dark matter. Note that CMB and BAO data adopt different distance scales.
Type Ia supernova distance measurements
Type Ia supernovae can be used as "standard candles" to measure extragalactic distances. Extensive data sets of these supernovae can be used to constrain cosmological models. They constrain the dark energy density ΩΛ = ~0.713 for a flat, Lambda CDM universe and the equation-of-state parameter w for a quintessence model. The results are roughly consistent with those derived from the WMAP observations and further constrain the Lambda CDM model and (indirectly) dark matter.
In astronomical spectroscopy, the Lyman-alpha forest is the sum of the absorption lines arising from the Lyman-alpha transition of neutral hydrogen in the spectra of distant galaxies and quasars. Lyman-alpha forest observations can also constrain cosmological models. These constraints agree with those obtained from WMAP data.
Structure formation refers to the serial transformations of the universe following the Big Bang. Prior to structure formation, the Friedmann solutions to general relativity describe a homogeneous universe. Later, small anisotropies gradually grew and condensed the homogeneous universe into stars, galaxies and larger structures.
Observations suggest that structure formation proceeds hierarchically, with the smallest structures collapsing first, followed by galaxies and then galaxy clusters. As the structures collapse in the evolving universe, they begin to "light up" as baryonic matter heats up through gravitational contraction and approaches hydrostatic pressure balance.
CMB anisotropy measurements favor models in which most matter is dark, and dark matter also closes gaps in models of large-scale structure. The dark matter hypothesis matches statistical surveys of the visible structure and agrees precisely with CMB predictions.
Initially, baryonic matter's post-Big Bang temperature and pressure were too high to collapse and form smaller structures, such as stars, via the Jeans instability. The gravity from dark matter increases the compaction force, allowing the creation of these structures.
Computer simulations of billions of dark matter particles confirmed that the "cold" dark matter model of structure formation is consistent with the structures observed through galaxy surveys, such as the Sloan Digital Sky Survey and 2dF Galaxy Redshift Survey, as well as observations of the Lyman-alpha forest.
Tensions remain between observations and simulations. Observations have turned up 90–99% fewer small galaxies than dark matter-based predictions permit. In addition, simulations predict dark matter distributions with a dense cusp near galactic centers, but the observed halos are smoother than predicted.
The composition of dark matter remains uncertain. Possibilities include dense baryonic matter (which interacts via the electromagnetic force) and non-baryonic matter (which interacts with its surroundings only through gravity).
Baryonic vs nonbaryonic matter
Baryonic matter is made of baryons (protons and neutrons), which make up stars and planets. It also encompasses less common black holes, neutron stars, faint old white dwarfs and brown dwarfs, collectively known as massive compact halo objects or MACHOs.
Dark matter that is not in the form of MACHOs must be made of thus-far-unknown non-luminous elementary particles. Candidates include weakly interacting massive particles (WIMPs) such as neutralinos, as well as axions and sterile neutrinos.
Candidates for nonbaryonic dark matter are hypothetical particles such as axions or supersymmetric particles; neutrinos can only supply a small fraction of dark matter, due to limits derived from large-scale structure and high-redshift galaxies.
Unlike baryonic matter, nonbaryonic matter did not contribute to the formation of the elements in the early universe ("Big Bang nucleosynthesis") and so its presence is revealed only via its gravitational effects. In addition, if the particles of which it is composed are supersymmetric, they can undergo annihilation interactions with themselves, possibly resulting in observable by-products such as gamma rays and neutrinos ("indirect detection").
Multiple lines of evidence suggest the majority of dark matter is not made of baryons:
- Sufficient diffuse, baryonic gas or dust would be visible when backlit by stars.
- The theory of Big Bang nucleosynthesis predicts the observed abundance of the chemical elements and that baryonic matter accounts for around 4–5 percent of the universe's critical density, leaving 95–96 percent unaccounted for. In contrast, large-scale structure and other observations indicate that the total matter density is about 30% of the critical density.
- Large astronomical searches for gravitational microlensing in the Milky Way found only a small contingent of the missing matter in dark, compact, conventional objects (MACHOs, etc.); the examined range of object sizes is from half the Earth's mass up to 30 solar masses, which covers nearly all the plausible candidates.
- Detailed analysis of the small irregularities (anisotropies) in the cosmic microwave background observed by WMAP and Planck shows that around five-sixths of the total matter is in a form that interacts significantly with ordinary matter or photons only through gravitational effects.
- Data from galaxy rotation curves, gravitational lensing, structure formation, the fraction of baryons in clusters and cluster abundance combined with independent evidence for baryon density, indicate that 85–90% of dark matter is non-baryonic (does not interact with the electromagnetic force).
Dark matter can be divided into cold, warm and hot categories. These categories refer to velocity rather than temperature, indicating how far corresponding objects moved due to random motions in the early universe, before they slowed due to expansion – this is an important distance called the "free streaming length" (FSL). Primordial density fluctuations smaller than this length get washed out as particles spread from overdense to underdense regions, while larger fluctuations are unaffected; therefore this length sets a minimum scale for structure formation. The categories are set with respect to the size of a protogalaxy (an object that later evolves into a dwarf galaxy). Cold, warm and hot dark matter's FSLs are much smaller, similar and much larger, respectively.
Cold dark matter leads to a "bottom-up" formation of structure while hot dark matter would result in a "top-down" formation scenario; the latter is excluded by high-redshift galaxy observations.
These categories also correspond according to fluctuation spectrum effects and interval following the Big Bang at which each type became non-relativistic.
Davis et al. wrote in 1985:
Candidate particles can be grouped into three categories on the basis of their effect on the fluctuation spectrum (Bond et al. 1983). If the dark matter is composed of abundant light particles which remain relativistic until shortly before recombination, then it may be termed "hot". The best candidate for hot dark matter is a neutrino ... A second possibility is for the dark matter particles to interact more weakly than neutrinos, to be less abundant, and to have a mass of order 1 keV. Such particles are termed "warm dark matter", because they have lower thermal velocities than massive neutrinos ... there are at present few candidate particles which fit this description. Gravitinos and photinos have been suggested (Pagels and Primack 1982; Bond, Szalay and Turner 1982) ... Any particles which became nonrelativistic very early, and so were able to diffuse a negligible distance, are termed "cold" dark matter (CDM). There are many candidates for CDM including supersymmetric particles.
Another approximate dividing line is that warm dark matter became non-relativistic when the universe was approximately 1 year old and 1 millionth of its present size and in the radiation-dominated era (photons and neutrinos), with a photon temperature 2.7 million K. Standard physical cosmology gives the particle horizon size as 2ct in the radiation-dominated era, thus 2 light-years. A region of this size would ultimately expand to 2 million light years (absent structure formation). The actual FSL is roughly 5x the above length, since it continues to grow slowly as particle velocities decrease inversely with the scale factor after they become non-relativistic. In this example the FSL would correspond to 10 million light-years or 3 Mpc today, around the size containing an average large galaxy.
The 2.7 million K photon temperature gives a typical photon energy of 250 electron-volts, thereby setting a typical mass scale for "warm" dark matter: particles much more massive than this, such as GeV – TeV mass WIMPs, would become non-relativistic much earlier than 1 year after the Big Bang and thus have FSL's much smaller than a proto-galaxy, making them cold. Conversely, much lighter particles, such as neutrinos with masses of only a few eV, have FSL's much larger than a proto-galaxy, thus qualifying them as hot.
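As a back-of-envelope check of the numbers quoted in the two paragraphs above (a sketch added here, not part of the original article), the characteristic photon energy and the stretched free-streaming scale can be reproduced in a few lines of Python:

```python
K_B_EV_PER_K = 8.617e-5               # Boltzmann constant in eV/K

# Characteristic photon energy at the quoted temperature of 2.7 million K
T_photon_K = 2.7e6
print(f"k_B * T ~ {K_B_EV_PER_K * T_photon_K:.0f} eV")      # ~230 eV, the ~250 eV scale quoted

# Horizon scale 2ct at t ~ 1 year, stretched by the ~1e6 expansion factor,
# then multiplied by ~5 for the slow post-relativistic growth of the FSL
horizon_ly = 2.0                      # light-years
fsl_ly = horizon_ly * 1e6 * 5
print(f"FSL ~ {fsl_ly:.1e} light-years ~ {fsl_ly / 3.26e6:.1f} Mpc")  # ~1e7 ly, ~3 Mpc
```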
Cold dark matter
Cold dark matter offers the simplest explanation for most cosmological observations. It is dark matter composed of constituents with an FSL much smaller than a protogalaxy. This is the focus for dark matter research, as hot dark matter does not seem to be capable of supporting galaxy or galaxy cluster formation, and most particle candidates slowed early.
The constituents of cold dark matter are unknown. Possibilities range from large objects like MACHOs (such as black holes) or RAMBOs (such as clusters of brown dwarfs), to new particles such as WIMPs and axions.
Studies of Big Bang nucleosynthesis and gravitational lensing convinced most cosmologists that MACHOs cannot make up more than a small fraction of dark matter. According to A. Peter: "... the only really plausible dark-matter candidates are new particles."
The DAMA/NaI experiment and its successor DAMA/LIBRA claimed to directly detect dark matter particles passing through the Earth, but many researchers remain skeptical, as negative results from similar experiments seem incompatible with the DAMA results.
Many supersymmetric models offer dark matter candidates in the form of the WIMPy Lightest Supersymmetric Particle (LSP). Separately, heavy sterile neutrinos exist in non-supersymmetric extensions to the standard model that explain the small neutrino mass through the seesaw mechanism.
Warm dark matter
Warm dark matter refers to particles with an FSL comparable to the size of a protogalaxy. Predictions based on warm dark matter are similar to those for cold dark matter on large scales, but with less small-scale density perturbations. This reduces the predicted abundance of dwarf galaxies and may lead to lower density of dark matter in the central parts of large galaxies; some researchers consider this to be a better fit to observations. A challenge for this model is the lack of particle candidates with the required mass ~ 300 eV to 3000 eV.
No known particles can be categorized as warm dark matter. A postulated candidate is the sterile neutrino: a heavier, slower form of neutrino that does not interact through the weak force (unlike other neutrinos). Some modified gravity theories, such as scalar-tensor-vector gravity, require warm dark matter to make their equations work.
Hot dark matter
Hot dark matter consists of particles whose FSL is much larger than the size of a protogalaxy. The neutrino qualifies. They were discovered independently, long before the hunt for dark matter: they were postulated in 1930, and detected in 1956. Neutrinos' mass is less than 10^−6 that of an electron. Neutrinos interact with normal matter only via gravity and the weak force, making them difficult to detect (the weak force only works over a small distance, thus a neutrino triggers a weak force event only if it hits a nucleus head-on). This makes them 'weakly interacting light particles' (WILPs), as opposed to WIMPs.
The three known flavors of neutrinos are the electron, muon and tau. Their masses are slightly different. Neutrinos oscillate among the flavors as they move. It is hard to determine an exact upper bound on the collective average mass of the three neutrinos (or for any of the three individually). For example, if the average neutrino mass were over 50 eV/c^2 (less than 10^−5 of the mass of an electron), the universe would collapse. CMB data and other methods indicate that their average mass probably does not exceed 0.3 eV/c^2. Thus, observed neutrinos cannot explain dark matter.
Because galaxy-size density fluctuations get washed out by free-streaming, hot dark matter implies that the first objects that can form are huge supercluster-size pancakes, which then fragment into galaxies. Deep-field observations show instead that galaxies formed first, followed by clusters and superclusters as galaxies clump together.
If dark matter is made up of WIMPs, then millions, possibly billions, of WIMPs must pass through every square centimeter of the Earth each second. Many experiments aim to test this hypothesis. Although WIMPs are popular search candidates, the Axion Dark Matter eXperiment (ADMX) searches for axions. Another candidate is heavy hidden sector particles that only interact with ordinary matter via gravity.
These experiments can be divided into two classes: direct detection experiments, which search for the scattering of dark matter particles off atomic nuclei within a detector; and indirect detection, which look for the products of WIMP annihilations.
Direct detection experiments operate deep underground to reduce the interference from cosmic rays. Detectors include the Stawell mine, the Soudan mine, the SNOLAB underground laboratory at Sudbury, Ontario, the Gran Sasso National Laboratory, the Canfranc Underground Laboratory, the Boulby Underground Laboratory, the Deep Underground Science and Engineering Laboratory and the Particle and Astrophysical Xenon Detector.
These experiments mostly use either cryogenic or noble liquid detector technologies. Cryogenic detectors, operating at temperatures below 100 mK, detect the heat produced when a particle hits an atom in a crystal absorber such as germanium. Noble liquid detectors detect scintillation produced by a particle collision in liquid xenon or argon. Cryogenic detector experiments include: CDMS, CRESST, EDELWEISS, EURECA. Noble liquid experiments include ZEPLIN, XENON, DEAP, ArDM, WARP, DarkSide, PandaX, and LUX, the Large Underground Xenon experiment. Both of these techniques distinguish background particles (that scatter off electrons) from dark matter particles (that scatter off nuclei). Other experiments include SIMPLE and PICASSO.
The DAMA/NaI, DAMA/LIBRA experiments detected an annual modulation in the event rate that they claim is due to dark matter. (As the Earth orbits the Sun, the velocity of the detector relative to the dark matter halo will vary by a small amount). This claim is so far unconfirmed and unreconciled with negative results of other experiments.
A low pressure time projection chamber makes it possible to access information on recoiling tracks and constrain WIMP-nucleus kinematics. WIMPs coming from the direction in which the Sun is travelling (roughly towards Cygnus) may then be separated from background, which should be isotropic. Directional dark matter experiments include DMTPC, DRIFT, Newage and MIMAC.
In 2009, CDMS researchers reported two possible WIMP candidate events. They estimate that the probability that these events are due to background (neutrons or misidentified beta or gamma events) is 23%, and conclude "this analysis cannot be interpreted as significant evidence for WIMP interactions, but we cannot reject either event as signal."
In 2011, researchers using the CRESST detectors presented evidence of 67 collisions occurring in detector crystals from subatomic particles. They calculated the probability that all were caused by known sources of interference/contamination was 1 in 10^5.
Indirect detection experiments search for the products of WIMP annihilation/decay. If WIMPs are Majorana particles (their own antiparticle) then two WIMPs could annihilate to produce gamma rays or Standard Model particle-antiparticle pairs. If the WIMP is unstable, WIMPs could decay into standard model (or other) particles. These processes could be detected indirectly through an excess of gamma rays, antiprotons or positrons emanating from high density regions. The detection of such a signal is not conclusive evidence, as the sources of gamma ray production are not fully understood.
A few of the WIMPs passing through the Sun or Earth may scatter off atoms and lose energy. Thus WIMPs may accumulate at the center of these bodies, increasing the chance of collision/annihilation. This could produce a distinctive signal in the form of high-energy neutrinos. Such a signal would be strong indirect proof of WIMP dark matter. High-energy neutrino telescopes such as AMANDA, IceCube and ANTARES are searching for this signal.
WIMP annihilation from the Milky Way Galaxy as a whole may also be detected in the form of various annihilation products. The Galactic Center is a particularly good place to look because the density of dark matter may be higher there.
The EGRET gamma ray telescope observed more gamma rays than expected from the Milky Way, but scientists concluded that this was most likely due to incorrect estimation of the telescope's sensitivity.
The Fermi Gamma-ray Space Telescope is searching for similar gamma rays. In April 2012, an analysis of previously available data from its Large Area Telescope instrument produced statistical evidence of a 130 GeV signal in the gamma radiation coming from the center of the Milky Way. WIMP annihilation was seen as the most probable explanation.
In 2013 results from the Alpha Magnetic Spectrometer on the International Space Station indicated excess high-energy cosmic rays that could be due to dark matter annihilation.
An alternative approach to the detection of WIMPs in nature is to produce them in the laboratory. Experiments with the Large Hadron Collider (LHC) may be able to detect WIMPs produced in collisions of the LHC proton beams. Because a WIMP has negligible interaction with matter, it may be detected indirectly as (large amounts of) missing energy and momentum that escape the detectors, provided other (non-negligible) collision products are detected. These experiments could show that WIMPs can be created, but a direct detection experiment must still show that they exist in sufficient numbers to account for dark matter.
Mass in extra dimensions
In some multidimensional theories, the force of gravity is the only force with effect across all dimensions. This explains the relative weakness of gravity compared to the other forces of nature that cannot cross into extra dimensions. In that case, dark matter could exist in a “Hidden Valley” in other dimensions that only interact with the matter in our dimensions through gravity. That dark matter could potentially aggregate in the same way as ordinary matter, forming other-dimensional galaxies.
Dark matter could consist of primordial defects ("birth defects") in the topology of quantum fields, which would contain energy and therefore gravitate. This possibility may be investigated by the use of an orbital network of atomic clocks that would register the passage of topological defects by changes to clock synchronization. The Global Positioning System may be able to operate as such a network.
Some theories modify the laws of gravity.
The earliest was Mordehai Milgrom's Modified Newtonian Dynamics (MOND) in 1983, which adjusts Newton's laws to increase gravitational field strength where gravitational acceleration becomes tiny (such as near the rim of a galaxy). It had some success explaining rotational velocity curves of elliptical and dwarf elliptical galaxies, but not galaxy cluster gravitational lensing. MOND was not relativistic: it was an adjustment of the Newtonian account. Attempts were made to bring MOND into conformity with general relativity; this spawned competing MOND-based hypotheses—including TeVeS, MOG or STV gravity and the phenomenological covariant approach.
In 2007, Moffat proposed a modified gravity hypothesis based on nonsymmetric gravitational theory (NGT) that claims to account for the behavior of colliding galaxies. This model requires the presence of non-relativistic neutrinos or other cold dark matter, to work.
Another proposal uses a gravitational backreaction from a theory that explains gravitational force between objects as an action, a reaction and then a back-reaction. Thus, an object A affects an object B, and the object B then re-affects object A, and so on: creating a feedback loop that strengthens gravity.
In 2008, another group proposed "dark fluid", a modification of large-scale gravity. It hypothesized that attractive gravitational effects are instead a side-effect of dark energy. Dark fluid combines dark matter and dark energy in a single energy field that produces different effects at different scales. This treatment is a simplification of a previous fluid-like model called the generalized Chaplygin gas model in which the whole of spacetime is a compressible gas. Dark fluid can be compared to an atmospheric system. Atmospheric pressure causes air to expand and air regions can collapse to form clouds. In the same way, the dark fluid might generally disperse, while collecting around galaxies.
Applying relativity to fractal, non-differentiable spacetime, Nottale suggests that potential energy may arise due to the fractality of spacetime, which would account for the missing mass-energy observed at cosmological scales.
Mention of dark matter is made in some video games and other works of fiction. In such cases, it is usually attributed extraordinary physical or magical properties. Such descriptions are often inconsistent with the hypothesized properties of dark matter in physics and cosmology.
- Since dark energy, by convention, does not count as "matter", this is 26.8/(4.9 + 26.8)=0.845
- "Hubble Finds Dark Matter Ring in Galaxy Cluster".
- Trimble, V. (1987). "Existence and nature of dark matter in the universe". Annual Review of Astronomy and Astrophysics 25: 425–472. Bibcode:1987ARA&A..25..425T. doi:10.1146/annurev.aa.25.090187.002233.
- Ade, P. A. R.; Aghanim, N.; Armitage-Caplan, C.; (Planck Collaboration); et al. (22 March 2013). "Planck 2013 results. I. Overview of products and scientific results – Table 9". Astronomy and Astrophysics 1303: 5062. arXiv:1303.5062. Bibcode:2014A&A...571A...1P. doi:10.1051/0004-6361/201321529.
- Francis, Matthew (22 March 2013). "First Planck results: the Universe is still weird and interesting". Arstechnica.
- "Planck captures portrait of the young Universe, revealing earliest light". University of Cambridge. 21 March 2013. Retrieved 21 March 2013.
- Sean Carroll, Ph.D., Cal Tech, 2007, The Teaching Company, Dark Matter, Dark Energy: The Dark Side of the Universe, Guidebook Part 2 page 46, Accessed Oct. 7, 2013, "...dark matter: An invisible, essentially collisionless component of matter that makes up about 25 percent of the energy density of the universe... it's a different kind of particle... something not yet observed in the laboratory..."
- Ferris, Timothy. "Dark Matter". Retrieved 2015-06-10.
- Jarosik, N.; et al. (2011). "Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Sky Maps, Systematic Errors, and Basic Results". Astrophysical Journal Supplement 192 (2): 14. arXiv:1001.4744. Bibcode:2011ApJS..192...14J. doi:10.1088/0067-0049/192/2/14.
- Siegfried, T. (5 July 1999). "Hidden Space Dimensions May Permit Parallel Universes, Explain Cosmic Mysteries". The Dallas Morning News.
- Copi, C. J.; Schramm, D. N.; Turner, M. S. (1995). "Big-Bang Nucleosynthesis and the Baryon Density of the Universe". Science 267 (5195): 192–199. arXiv:astro-ph/9407006. Bibcode:1995Sci...267..192C. doi:10.1126/science.7809624. PMID 7809624.
- Kroupa, P.; et al. (2010). "Local-Group tests of dark-matter Concordance Cosmology: Towards a new paradigm for structure formation". Astronomy and Astrophysics 523: 32–54. arXiv:1006.1647. Bibcode:2010A&A...523A..32K. doi:10.1051/0004-6361/201014892.
- Angus, G. (2013). "Cosmological simulations in MOND: the cluster scale halo mass function with light sterile neutrinos". Monthly Notices of the Royal Astronomical Society 436: 202–211. arXiv:1309.6094. Bibcode:2013MNRAS.436..202A. doi:10.1093/mnras/stt1564.
- Bertone, G.; Hooper, D.; Silk, J. (2005). "Particle dark matter: Evidence, candidates and constraints". Physics Reports 405 (5–6): 279–390. arXiv:hep-ph/0404175. Bibcode:2005PhR...405..279B. doi:10.1016/j.physrep.2004.08.031.
- Kapteyn, Jacobus Cornelius (1922). "First attempt at a theory of the arrangement and motion of the sidereal system". Astrophysical Journal 55: 302–327. Bibcode:1922ApJ....55..302K. doi:10.1086/142670.
It is incidentally suggested that when the theory is perfected it may be possible to determine the amount of dark matter from its gravitational effect.(emphasis in original)
| https://en.wikipedia.org/wiki/Dark_matter |
4.0625 | The Roche limit (pronounced /ʁoʃ/ in IPA, similar to the sound of rosh), sometimes referred to as the Roche radius, is the distance within which a celestial body, held together only by its own gravity, will disintegrate due to a second celestial body's tidal forces exceeding the first body's gravitational self-attraction. Inside the Roche limit, orbiting material disperses and forms rings, whereas outside the limit material tends to coalesce. The term is named after Édouard Roche, the French astronomer who first calculated this theoretical limit in 1848.
- 1 Explanation
- 2 Roche limits for selected examples
- 3 Determining the Roche limit
- 4 See also
- 5 References
- 6 Sources
- 7 External links
Typically, the Roche limit applies to the disintegration of a satellite due to tidal forces induced by its primary, the body about which it orbits. Parts of the satellite that are closer to the primary are attracted more strongly by the primary's gravity than parts that are farther away; this disparity effectively pulls the near and far parts of the satellite apart from each other, and if the disparity (combined with any centrifugal effects due to the object's spin) is larger than the force of gravity holding the satellite together, it can pull the satellite apart. Some real satellites, both natural and artificial, can orbit within their Roche limits because they are held together by forces other than gravitation. Objects resting on the surface of such a satellite would be lifted away by tidal forces. A weaker satellite, such as a comet, could be broken up when it passes within its Roche limit.
Since, within the Roche limit, tidal forces overwhelm the gravitational forces that might otherwise hold the satellite together, no satellite can gravitationally coalesce out of smaller particles within that limit. Indeed, almost all known planetary rings are located within their Roche limit, Saturn's E-Ring and Phoebe ring being notable exceptions. They could either be remnants from the planet's proto-planetary accretion disc that failed to coalesce into moonlets, or conversely have formed when a moon passed within its Roche limit and broke apart.
Roche limits for selected examples
The table below shows the mean density and the equatorial radius for selected objects in the Solar System.
|Primary||Density (kg/m³)||Radius (m)|
The equations for the Roche limits relate the minimum sustainable orbital radius to the ratio of the two objects' densities and the radius of the primary body. Hence, using the data above, the Roche limits for these objects can be calculated. This has been done twice for each, assuming the extremes of the rigid and fluid body cases. The average density of comets is taken to be around 500 kg/m³.
The table below gives the Roche limits expressed in kilometres and in primary radii. The mean radius of the orbit can be compared with the Roche limits. For convenience, the table lists the mean radius of the orbit for each, excluding the comets, whose orbits are extremely variable and eccentric.
|Body||Satellite||Roche limit (rigid)||Roche limit (fluid)||Mean orbital radius (km)|
|Distance (km)||R||Distance (km)||R|
These bodies are clearly well outside their Roche limits by various factors, from 21 for the Moon (over its fluid-body Roche limit) as part of the Earth–Moon system, up to thousands for Earth and Jupiter.
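As a rough check of the figure for the Moon: taking Earth's mean density as about 5,510 kg/m³, the Moon's as about 3,340 kg/m³ and Earth's equatorial radius as about 6,378 km (approximate values), the fluid-body formula given later in the article, $d \approx 2.44\,R\left(\frac{\rho_M}{\rho_m}\right)^{1/3}$, yields a limit of roughly 18,400 km, and dividing the Moon's mean orbital radius of about 384,400 km by this distance indeed gives a factor of roughly 21.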
But how close are the Solar System's other moons to their Roche limits? The table below gives each satellite's closest approach in its orbit divided by its own Roche limit. Again, both rigid and fluid body calculations are given. Note that Pan, Cordelia and Naiad, in particular, may be quite close to their actual break-up points.
In practice, the densities of most of the inner satellites of giant planets are not known. In these cases, shown in italics, likely values have been assumed, but their actual Roche limit can vary from the value shown.
|Primary||Satellite||Orbital Radius / Roche limit|
Determining the Roche limit
The limiting distance to which a satellite can approach without breaking up depends on the rigidity of the satellite. At one extreme, a completely rigid satellite will maintain its shape until tidal forces break it apart. At the other extreme, a highly fluid satellite gradually deforms leading to increased tidal forces, causing the satellite to elongate, further compounding the tidal forces and causing it to break apart more readily. Most real satellites would lie somewhere between these two extremes, with tensile strength rendering the satellite neither perfectly rigid nor perfectly fluid. But note that, as defined above, the Roche limit refers to a body held together solely by the gravitational forces which cause otherwise unconnected particles to coalesce, thus forming the body in question. The Roche limit is also usually calculated for the case of a circular orbit, although it is straightforward to modify the calculation to apply to the case (for example) of a body passing the primary on a parabolic or hyperbolic trajectory.
The rigid-body Roche limit is a simplified calculation for a spherical satellite. Irregular shapes, such as those produced by tidal deformation of the body or of the primary it orbits, are neglected, and the satellite is assumed to be in hydrostatic equilibrium. These assumptions, although unrealistic, greatly simplify the calculation.
The Roche limit for a rigid spherical satellite is the distance, $d$, from the primary at which the gravitational force on a test mass at the surface of the object is exactly equal to the tidal force pulling the mass away from the object:
- $d = R\left(2\,\frac{\rho_M}{\rho_m}\right)^{1/3}$
- where $R$ is the radius of the primary, $\rho_M$ is the density of the primary, and $\rho_m$ is the density of the satellite.
This does not depend on the size of the objects, but on the ratio of densities. This is the orbital distance inside of which loose material (e.g. regolith) on the surface of the satellite closest to the primary would be pulled away, and likewise material on the side opposite the primary will also be pulled away from, rather than toward, the satellite.
Note that this is an approximate result as inertia force and rigid structure are ignored in its derivation.
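As a simple illustration, taking Earth's equatorial radius as roughly 6,378 km, Earth's mean density as roughly 5,510 kg/m³ and the Moon's as roughly 3,340 kg/m³ (approximate round values), the rigid-body formula gives $d \approx 6{,}378\ \mathrm{km} \times \left(2 \times 5{,}510/3{,}340\right)^{1/3} \approx 9{,}500$ km, so a rigid body with the Moon's density would have to come within about 1.5 Earth radii of Earth's centre before being pulled apart.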
Derivation of the formula
In order to determine the Roche limit, consider a small mass $u$ on the surface of the satellite closest to the primary. There are two forces on this mass: the gravitational pull towards the satellite and the gravitational pull towards the primary. Assume that the satellite is in free fall around the primary and that the tidal force is the only relevant term of the gravitational attraction of the primary. This assumption is a simplification, as free fall only truly applies to the planetary center, but it will suffice for this derivation.
The gravitational pull $F_G$ on the mass $u$ towards the satellite with mass $m$ and radius $r$ can be expressed according to Newton's law of gravitation:
- $F_G = \frac{G\,m\,u}{r^2}$
The tidal force $F_T$ on the mass $u$ towards the primary with radius $R$ and mass $M$, at a distance $d$ between the centers of the two bodies, can be expressed approximately as
- $F_T = \frac{2\,G\,M\,u\,r}{d^3}$
To obtain this approximation, find the difference in the primary's gravitational pull on the center of the satellite and on the edge of the satellite closest to the primary:
- $\Delta F = \frac{G\,M\,u}{(d-r)^2} - \frac{G\,M\,u}{d^2} = G\,M\,u\,\frac{d^2-(d-r)^2}{d^2\,(d-r)^2} = G\,M\,u\,\frac{2\,d\,r-r^2}{d^2\,(d-r)^2}$
In the approximation where $r \ll R$ and $R < d$, we can say that the $r^2$ in the numerator and every term with $r$ in the denominator go to zero, which gives us:
- $F_T \approx \frac{2\,G\,M\,u\,r}{d^3}$
The Roche limit is reached when the gravitational force and the tidal force balance each other out:
- $F_G = F_T$, that is, $\frac{G\,m\,u}{r^2} = \frac{2\,G\,M\,u\,r}{d^3}$,
which gives the Roche limit, $d$, as
- $d = r\left(2\,\frac{M}{m}\right)^{1/3}$
However, we don't really want the radius of the satellite to appear in the expression for the limit, so we re-write this in terms of densities.
For a sphere the mass $M$ can be written as
- $M = \frac{4}{3}\pi\rho_M R^3$
- where $R$ is the radius of the primary.
Likewise, for the satellite,
- $m = \frac{4}{3}\pi\rho_m r^3$
- where $r$ is the radius of the satellite.
Substituting for the masses in the equation for the Roche limit, and cancelling out $\frac{4}{3}\pi$, gives
- $d = r\left(\frac{2\,\rho_M R^3}{\rho_m r^3}\right)^{1/3}$
which can be simplified to the Roche limit:
- $d = R\left(2\,\frac{\rho_M}{\rho_m}\right)^{1/3} \approx 1.26\,R\left(\frac{\rho_M}{\rho_m}\right)^{1/3}$
A more accurate formula
Since a close satellite will usually be in a nearly circular orbit with synchronous rotation, a centrifugal force also acts on the test mass in the satellite's rotating frame; it is approximately $F_C = \omega^2 u\,r \approx \frac{G\,M\,u\,r}{d^3}$, and it gets added to $F_T$. Doing the force-balance calculation yields this result for the Roche limit:
- $d = r\left(3\,\frac{M}{m}\right)^{1/3}$ .......... (1)
or: $d = R\left(3\,\frac{\rho_M}{\rho_m}\right)^{1/3} \approx 1.442\,R\left(\frac{\rho_M}{\rho_m}\right)^{1/3}$ .......... (2)
Use $m = \frac{4}{3}\pi\rho_m r^3$ (where $r$ is the radius of the satellite) to replace $m$ in formula (1), and we have a third formula:
- $d = \left(\frac{9\,M}{4\pi\rho_m}\right)^{1/3} \approx 0.8947\left(\frac{M}{\rho_m}\right)^{1/3}$ .......... (3)
Thus, by observing the mass of the star (or planet) and estimating the density of the planet (or satellite), one can calculate the Roche limit of that planet (or satellite) within its stellar (or planetary) system.
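For example, for a comet with the average density of about 500 kg/m³ assumed above, passing close to the Sun (taking the Sun's mass as roughly $2\times10^{30}$ kg, a standard value not quoted in this article), formula (3) gives $d \approx 0.8947 \times \left(2\times10^{30}/500\right)^{1/3}\ \mathrm{m} \approx 1.4\times10^{9}$ m, i.e. about 1.4 million km, or roughly two solar radii.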
Roche limit, Hill sphere and radius of the planet
Consider a planet with a density of $\rho_m$, a mass of $m$ and a radius of $r$, orbiting a star with a mass of $M$ at a distance of $R$.
Let's place the planet at its Roche limit, so that, from formula (1), $R = r\left(3\,\frac{M}{m}\right)^{1/3}$.
The Hill sphere of the planet here extends to around L1 (or L2), with the Hill sphere radius given by
- $r_H = R\left(\frac{m}{3\,M}\right)^{1/3}$ .......... (4)
See Hill sphere, or Roche lobe.
Substituting formula (1) into formula (4) gives $r_H = r$: the surface of the planet coincides with its Roche lobe (that is, the planet exactly fills its Roche lobe)!
Such a celestial body cannot accrete even the smallest amount of additional material and, any closer in, it begins to lose its own material. This is the physical meaning of the Roche limit, the Roche lobe and the Hill sphere.
Formula (1) can be rewritten as $r = R\left(\frac{m}{3\,M}\right)^{1/3}$, exactly the same form as the Hill-sphere formula (4): a perfect mathematical symmetry.
This is the astronomical significance of the Roche limit and the Hill sphere.
A more accurate approach for calculating the Roche limit takes the deformation of the satellite into account. An extreme example would be a tidally locked liquid satellite orbiting a planet, where any force acting upon the satellite would deform it into a prolate spheroid.
The calculation is complex and its result cannot be represented in an exact algebraic formula. Roche himself derived the following approximate solution for the Roche limit:
- $d \approx 2.44\,R\left(\frac{\rho_M}{\rho_m}\right)^{1/3}$
However, a better approximation that takes into account the primary's oblateness and the satellite's mass is:
- $d \approx 2.423\,R\left(\frac{\rho_M}{\rho_m}\right)^{1/3}\left(\frac{\left(1+\frac{m}{3M}\right)+\frac{c}{3R}\left(1+\frac{m}{M}\right)}{1-\frac{c}{R}}\right)^{1/3}$
where $c/R$ is the oblateness of the primary. The numerical factor is calculated with the aid of a computer.
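As a consistency check, for a spherical primary ($c = 0$) and a satellite of negligible mass ($m \ll M$) the bracketed factor reduces to 1 and the expression falls back to $d \approx 2.423\,R\left(\frac{\rho_M}{\rho_m}\right)^{1/3}$, close to Roche's original value of 2.44.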
The fluid solution is appropriate for bodies that are only loosely held together, such as a comet. For instance, comet Shoemaker–Levy 9's decaying orbit around Jupiter passed within its Roche limit in July 1992, causing it to fragment into a number of smaller pieces. On its next approach in 1994 the fragments crashed into the planet. Shoemaker–Levy 9 was first observed in 1993, but its orbit indicated that it had been captured by Jupiter a few decades prior.
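To give a sense of the distances involved, taking Jupiter's mean density as roughly 1,330 kg/m³ and its equatorial radius as roughly 71,500 km (approximate values), a comet of density 500 kg/m³ has a fluid-body Roche limit of about $2.44 \times 71{,}500\ \mathrm{km} \times \left(1{,}330/500\right)^{1/3} \approx 240{,}000$ km from Jupiter's centre, more than three Jupiter radii out; Shoemaker–Levy 9's 1992 perijove lay well inside this distance.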
Derivation of the formula
As the fluid satellite case is more delicate than the rigid one, the satellite is described with some simplifying assumptions. First, assume the object consists of incompressible fluid that has constant density and volume that do not depend on external or internal forces.
Second, assume the satellite moves in a circular orbit and it remains in synchronous rotation. This means that the angular speed at which it rotates around its center of mass is the same as the angular speed at which it moves around the overall system barycenter.
The angular speed is given by Kepler's third law:
- $\omega^2 = G\,\frac{M+m}{d^3}$
When $M$ is very much bigger than $m$, this will be close to
- $\omega^2 = G\,\frac{M}{d^3}$
The synchronous rotation implies that the liquid does not move and the problem can be regarded as a static one. Therefore, the viscosity and friction of the liquid in this model do not play a role, since these quantities would play a role only for a moving fluid.
Given these assumptions, the following forces should be taken into account:
- The force of gravitation due to the main body;
- the centrifugal force in the rotary reference system; and
- the self-gravitation field of the satellite.
Since all of these forces are conservative, they can be expressed by means of a potential. Moreover, the surface of the satellite is an equipotential one. Otherwise, the differences of potential would give rise to forces and movement of some parts of the liquid at the surface, which contradicts the static model assumption. Given the distance $d$ from the main body, our problem is to determine the form of the surface that satisfies the equipotential condition.
As the orbit has been assumed circular, the total gravitational force and orbital centrifugal force acting on the main body cancel. That leaves two forces: the tidal force and the rotational centrifugal force. The tidal force depends on the position with respect to the center of mass, already considered in the rigid model. For small bodies, the distance of the liquid particles from the center of the body is small in relation to the distance $d$ to the main body. Thus the tidal force can be linearized, resulting in the same formula for $F_T$ as given above.
While this force in the rigid model depends only on the radius $r$ of the satellite, in the fluid case we need to consider all the points on the surface, and the tidal force depends on the distance Δd from the center of mass to a given particle projected on the line joining the satellite and the main body. We call Δd the radial distance. Since the tidal force is linear in Δd, the related potential is proportional to the square of the variable, and for $m \ll M$ we have
- $V_T = -\frac{3\,G\,M}{2\,d^3}\,\Delta d^2$
Likewise, the centrifugal force has a potential
- $V_C = -\frac{1}{2}\,\omega^2\,\Delta d^2$
for rotational angular velocity $\omega$.
We want to determine the shape of the satellite for which the sum of the self-gravitation potential and $V_T + V_C$ is constant on the surface of the body. In general, such a problem is very difficult to solve, but in this particular case, it can be solved by a skillful guess due to the square dependence of the tidal potential on the radial distance Δd. To a first approximation, we can ignore the centrifugal potential $V_C$ and consider only the tidal potential $V_T$.
Since the potential $V_T$ changes only in one direction, i.e. the direction toward the main body, the satellite can be expected to take an axially symmetric form. More precisely, we may assume that it takes the form of a solid of revolution. The self-potential on the surface of such a solid of revolution can only depend on the radial distance to the center of mass. Indeed, the intersection of the satellite and a plane perpendicular to the line joining the bodies is a disc whose boundary, by our assumptions, is a circle of constant potential. Should the difference between the self-gravitation potential and $V_T$ be constant, both potentials must depend in the same way on Δd. In other words, the self-potential has to be proportional to the square of Δd. Then it can be shown that the equipotential solution is an ellipsoid of revolution. Given a constant density and volume, the self-potential of such a body depends only on the eccentricity ε of the ellipsoid:
- $V_s = V_{s_0} + G\,\pi\,\rho_m\,f(\epsilon)\,\Delta d^2$
where $V_{s_0}$ is the constant self-potential on the intersection of the circular edge of the body and the central symmetry plane given by the equation Δd = 0.
The dimensionless function f is to be determined from the accurate solution for the potential of the ellipsoid
and, surprisingly enough, does not depend on the volume of the satellite.
Although the explicit form of the function f looks complicated, it is clear that we may and do choose the value of ε so that the potential $V_T$ is equal to $V_s$ plus a constant independent of the variable Δd. By inspection, this occurs when
This equation can be solved numerically. There are two solutions, of which the smaller one (the ellipsoid with the smaller eccentricity) represents the stable equilibrium form. This solution determines the eccentricity of the tidal ellipsoid as a function of the distance to the main body. The derivative of the function f has a zero where the maximal eccentricity is attained. This corresponds to the Roche limit.
More precisely, the Roche limit is determined by the fact that the function f, which can be regarded as a nonlinear measure of the force squeezing the ellipsoid towards a spherical shape, is bounded so that there is an eccentricity at which this contracting force becomes maximal. Since the tidal force increases when the satellite approaches the main body, it is clear that there is a critical distance at which the ellipsoid is torn up.
The maximal eccentricity can be calculated numerically as the zero of the derivative $f'$. One obtains
- $\epsilon_{\max} \approx 0.86$
which corresponds to the ratio of the ellipsoid axes 1:1.95. Inserting this into the formula for the function f one can determine the minimal distance at which the ellipsoid exists. This is the Roche limit,
- $d \approx 2.423\,R\left(\frac{\rho_M}{\rho_m}\right)^{1/3}$
Surprisingly, including the centrifugal potential makes remarkably little difference, though the object becomes a Roche ellipsoid, a general triaxial ellipsoid with all axes having different lengths. The potential becomes a much more complicated function of the axis lengths, requiring elliptic functions. However, the solution proceeds much as in the tidal-only case, and we find
- $d \approx 2.455\,R\left(\frac{\rho_M}{\rho_m}\right)^{1/3}$
The ratios of polar to orbit-direction to primary-direction axes are 1:1.06:2.07.
- Roche lobe
- Chandrasekhar limit
- Hill sphere
- Spaghettification (a rather extreme tidal distortion)
- Black hole
- Triton (moon) (Neptune's satellite)
- Comet Shoemaker–Levy 9
- Eric W. Weisstein (2007). "Eric Weisstein's World of Physics – Roche Limit". scienceworld.wolfram.com. Retrieved September 5, 2007.
- NASA. "What is the Roche limit?". NASA – JPL. Retrieved September 5, 2007.
- see calculation in Frank H. Shu, The Physical Universe: an Introduction to Astronomy, p. 431, University Science Books (1982), ISBN 0-935702-05-9.
- "Roche Limit: Why Do Comets Break Up?".
- Gu; et al. "The effect of tidal inflation instability on the mass and dynamical evolution of extrasolar planets with ultrashort periods". Astrophysical Journal. Retrieved May 1, 2003.
- International Planetarium Society Conference, Astronaut Memorial Planetarium & Observatory, Cocoa, Florida; Rob Landis; 10–16 July 1994; archived 21 December 1996.
- Édouard Roche: "La figure d'une masse fluide soumise à l'attraction d'un point éloigné" (The figure of a fluid mass subjected to the attraction of a distant point), part 1, Académie des sciences de Montpellier: Mémoires de la section des sciences, Volume 1 (1849) 243–262. 2.44 is mentioned on page 258. (French)
- Édouard Roche: "La figure d'une masse fluide soumise à l'attraction d'un point éloigné", part 2, Académie des sciences de Montpellier: Mémoires de la section des sciences, Volume 1 (1850) 333–348. (French)
- Édouard Roche: "La figure d'une masse fluide soumise à l'attraction d'un point éloigné", part 3, Académie des sciences de Montpellier: Mémoires de la section des sciences, Volume 2 (1851) 21–32. (French)
- George Howard Darwin, "On the figure and stability of a liquid satellite", Scientific Papers, Volume 3 (1910) 436–524.
- James Hopwood Jeans, Problems of cosmogony and stellar dynamics, Chapter III: Ellipsoidal configurations of equilibrium, 1919.
- S. Chandrasekhar, Ellipsoidal figures of equilibrium (New Haven: Yale University Press, 1969), Chapter 8: The Roche ellipsoids (189–240).
- S. Chandrasekhar, "The equilibrium and the stability of the Roche ellipsoids", Astrophysical Journal 138 (1963) 1182–1213. | https://en.wikipedia.org/wiki/Roche_limit |
4.25 | A minimum wage is the lowest remuneration that employers may legally pay to workers. Equivalently, it is the price floor below which workers may not sell their labor. Although minimum wage laws are in effect in many jurisdictions, differences of opinion exist about the benefits and drawbacks of a minimum wage. Supporters of the minimum wage say it increases the standard of living of workers, reduces poverty, reduces inequality, boosts morale and forces businesses to be more efficient. In contrast, opponents of the minimum wage say it increases poverty, increases unemployment (particularly among unskilled or inexperienced workers) and is damaging to businesses.
- 1 History
- 2 Minimum wage laws
- 3 Economics models
- 4 Empirical studies
- 5 Debate over consequences
- 6 Surveys of economists
- 7 Alternatives
- 8 US movement
- 9 See also
- 10 Notes
- 11 Further reading
- 12 External links
Modern minimum wage laws trace their origin to the Ordinance of Labourers (1349), which was a decree by King Edward III that set a maximum wage for laborers in medieval England. King Edward III, who was a wealthy landowner, was dependent, like his lords, on serfs to work the land. In the autumn of 1348, the Black Plague reached England and decimated the population. The severe shortage of labor caused wages to soar and encouraged King Edward III to set a wage ceiling. Subsequent amendments to the ordinance, such as the Statute of Labourers (1351), increased the penalties for paying a wage above the set rates.
While the laws governing wages initially set a ceiling on compensation, they were eventually used to set a living wage. An amendment to the Statute of Labourers in 1389 effectively fixed wages to the price of food. As time passed, the Justice of the Peace, who was charged with setting the maximum wage, also began to set formal minimum wages. The practice was eventually formalized with the passage of the Act Fixing a Minimum Wage in 1604 by King James I for workers in the textile industry.
By the early 19th century, the Statutes of Labourers had been repealed, as an increasingly capitalistic England embraced laissez-faire policies which disfavored regulation of wages (whether upper or lower limits). The rest of the 19th century saw significant labor unrest affect many industrial nations. As trade unions were decriminalized during the century, attempts to control wages through collective agreement were made. However, this meant that a uniform minimum wage was not possible. In Principles of Political Economy in 1848, John Stuart Mill argued that because of the collective action problems that workers faced in organisation, it was a justified departure from laissez-faire policies (or freedom of contract) to regulate people's wages and hours by law.
It was not until the 1890s that modern legislative attempts to regulate minimum wages were seen in New Zealand and Australia. The movement for a minimum wage was initially focused on stopping sweatshop labor and controlling the proliferation of sweatshops in manufacturing industries. The sweatshops employed large numbers of women and young workers, paying them what were considered to be substandard wages. The sweatshop owners were thought to have unfair bargaining power over their employees, and a minimum wage was proposed as a means to make them pay fairly. Over time, the focus changed to helping people, especially families, become more self-sufficient.
Minimum wage laws
The first national minimum wage law was enacted by the government of New Zealand in 1894, followed by Australia in 1896 and the United Kingdom in 1909. In the United States, statutory minimum wages were first introduced nationally in 1938, and reintroduced and expanded in the United Kingdom in 1998. There is now legislation or binding collective bargaining regarding minimum wage in more than 90 percent of all countries. In the European Union, 21 member states currently have national minimum wages. In July 2014 Germany began legislating to introduce a federally-mandated minimum wage which would come into effect on 1 January 2015. Many countries, such as Sweden, Finland, Denmark, Switzerland, Austria, and Italy have no minimum wage laws, but rely on employer groups and trade unions to set minimum earnings through collective bargaining.
Minimum wage rates vary greatly across many different jurisdictions, not only in setting a particular amount of money – e.g. US$7.25 per hour ($14,500 per year) under certain states' laws (or $2.13 for employees who receive tips, known as the tipped minimum wage), $9.47 in the US state of Washington, and £6.50 (for those aged 21+) in the United Kingdom – but also in terms of which pay period (e.g. Russia and China set monthly minimums) or the scope of coverage. Some jurisdictions allow employers to count tips given to their workers as credit towards the minimum wage levels. India was one of the first developing countries to introduce minimum wage policy. It also has one of the most complicated systems with more than 1200 minimum wage rates.
Informal minimum wages
Customs and extra-legal pressures from governments or labor unions can produce a de facto minimum wage. So can international public opinion, by pressuring multinational companies to pay Third World workers wages usually found in more industrialized countries. The latter situation in Southeast Asia and Latin America was publicized in the 2000s, but it existed with companies in West Africa in the middle of the twentieth century.
Setting minimum wage
Among the indicators that might be used to establish an initial minimum wage rate are ones that minimize the loss of jobs while preserving international competitiveness. Among these are general economic conditions as measured by real and nominal gross domestic product; inflation; labor supply and demand; wage levels, distribution and differentials; employment terms; productivity growth; labor costs; business operating costs; the number and trend of bankruptcies; economic freedom rankings; standards of living and the prevailing average wage rate.
In the business sector, concerns include the expected increased cost of doing business, threats to profitability, rising levels of unemployment (and subsequent higher government expenditure on welfare benefits raising tax rates), and the possible knock-on effects to the wages of more experienced workers who might already be earning the new statutory minimum wage, or slightly more. Among workers and their representatives, political consideration weigh in as labor leaders seek to win support by demanding the highest possible rate. Other concerns include purchasing power, inflation indexing and standardized working hours.
In the United States, the minimum wage promulgated by the Fair Labor Standards Act of 1938 was intentionally set at a high, national level to render low-technology, low-wage factories in the South obsolete. According to the Economic Policy Institute, the minimum wage in the United States would have been $18.28 in 2013 if the minimum wage had kept pace with labor productivity. To adjust for increased rates of worker productivity in the United States, raising the minimum wage to $22 (or more) an hour has been proposed.
Supply and demand
An analysis of supply and demand of the type shown in many mainstream economics textbooks implies that by mandating a price floor above the equilibrium wage, minimum wage laws should cause unemployment. This is because a greater number of people are willing to work at the higher wage while a smaller number of jobs will be available at the higher wage. Companies can be more selective in those whom they employ thus the least skilled and least experienced will typically be excluded. An imposition or increase of a minimum wage will generally only affect employment in the low-skill labor market, as the equilibrium wage is already at or below the minimum wage, whereas in higher skill labor markets the equilibrium wage is too high for a change in minimum wage to affect employment.
According to the supply and demand model shown in many textbooks on economics, increasing the minimum wage decreases the employment of minimum-wage workers. One such textbook says:
If a higher minimum wage increases the wage rates of unskilled workers above the level that would be established by market forces, the quantity of unskilled workers employed will fall. The minimum wage will price the services of the least productive (and therefore lowest-wage) workers out of the market. …The direct results of minimum wage legislation are clearly mixed. Some workers, most likely those whose previous wages were closest to the minimum, will enjoy higher wages. This is known as the "ripple effect". The ripple effect shows that when you increase the minimum wage, the wages of all others will consequently increase due to the need for relativity. Others, particularly those with the lowest prelegislation wage rates, will be unable to find work. They will be pushed into the ranks of the unemployed or out of the labor force. Some argue that by increasing the federal minimum wage, however, the economy will be adversely affected due to small businesses not being able to keep up with the need to subsequently increase all workers' wages.
The textbook illustrates the point with a supply and demand diagram. In the diagram it is assumed that workers are willing to labor for more hours if paid a higher wage. Economists graph this relationship with the wage on the vertical axis and the quantity (hours) of labor supplied on the horizontal axis. Since higher wages increase the quantity supplied, the supply of labor curve is upward sloping, and is shown as a line moving up and to the right.
A firm's cost is a function of the wage rate. It is assumed that the higher the wage, the fewer hours an employer will demand of an employee. This is because, as the wage rate rises, it becomes more expensive for firms to hire workers and so firms hire fewer workers (or hire them for fewer hours). The demand of labor curve is therefore shown as a line moving down and to the right.
Combining the demand and supply curves for labor allows us to examine the effect of the minimum wage. We will start by assuming that the supply and demand curves for labor will not change as a result of raising the minimum wage. This assumption has been questioned. If no minimum wage is in place, workers and employers will continue to adjust the quantity of labor supplied according to price until the quantity of labor demanded is equal to the quantity of labor supplied, reaching the equilibrium price, where the supply and demand curves intersect. The minimum wage behaves as a classical price floor on labor. Standard theory says that, if set above the equilibrium price, workers will be willing to provide more labor than employers demand, creating a surplus of labor, i.e. unemployment.
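For a stylized numerical illustration (with made-up linear schedules, not drawn from any study), suppose the quantity of labor supplied is Qs = 100 + 10w and the quantity demanded is Qd = 300 − 10w, where w is the hourly wage. The market clears at w = 10 with 200 units of labor employed. A minimum wage of 12 raises the quantity supplied to 220 but cuts the quantity demanded to 180, so employment falls by 20 and the surplus of labor (the gap between those seeking work at that wage and the jobs on offer) is 40.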
In other words, the simplest and most basic economics says this about commodities like labor (and wheat, for example): Artificially raising the price of the commodity tends to cause the supply of it to increase and the demand for it to lessen. The result is a surplus of the commodity. When there is a wheat surplus, the government buys it. Since the government does not hire surplus labor, the labor surplus takes the form of unemployment, which tends to be higher with minimum wage laws than without them.
So the basic theory says that raising the minimum wage helps workers whose wages are raised, and hurts people who are not hired (or lose their jobs) because companies cut back on employment. But proponents of the minimum wage hold that the situation is much more complicated than the basic theory can account for. One complicating factor is possible monopsony in the labor market, whereby the individual employer has some market power in determining wages paid. Thus it is at least theoretically possible that the minimum wage may boost employment. Though single employer market power is unlikely to exist in most labor markets in the sense of the traditional 'company town,' asymmetric information, imperfect mobility, and the personal element of the labor transaction give some degree of wage-setting power to most firms.
Criticism of the neoclassical model
The argument that a minimum wage decreases employment is based on a simple supply and demand model of the labor market. A number of economists (for example Pierangelo Garegnani, Robert L. Vienneau, and Arrigo Opocher & Ian Steedman), building on the work of Piero Sraffa, argue that that model, even given all its assumptions, is logically incoherent. Michael Anyadike-Danes and Wynne Godley argue, based on simulation results, that little of the empirical work done with the textbook model constitutes a potentially falsifiable theory, and consequently empirical evidence hardly exists for that model. Graham White argues, partially on the basis of Sraffianism, that the policy of increased labor market flexibility, including the reduction of minimum wages, does not have an "intellectually coherent" argument in economic theory.
Gary Fields, Professor of Labor Economics and Economics at Cornell University, argues that the standard textbook model for the minimum wage is ambiguous, and that the standard theoretical arguments incorrectly measure only a one-sector market. Fields says a two-sector market, where "the self-employed, service workers, and farm workers are typically excluded from minimum-wage coverage... [and with] one sector with minimum-wage coverage and the other without it [and possible mobility between the two]," is the basis for better analysis. Through this model, Fields shows the typical theoretical argument to be ambiguous and says "the predictions derived from the textbook model definitely do not carry over to the two-sector case. Therefore, since a non-covered sector exists nearly everywhere, the predictions of the textbook model simply cannot be relied on."
An alternate view of the labor market has low-wage labor markets characterized as monopsonistic competition wherein buyers (employers) have significantly more market power than do sellers (workers). This monopsony could be a result of intentional collusion between employers, or naturalistic factors such as segmented markets, search costs, information costs, imperfect mobility and the personal element of labor markets. In such a case a simple supply and demand graph would not yield the quantity of labor clearing and the wage rate. This is because while the upward sloping aggregate labor supply would remain unchanged, instead of using the upward labor supply curve shown in a supply and demand diagram, monopsonistic employers would use a steeper upward sloping curve corresponding to marginal expenditures to yield the intersection with the supply curve resulting in a wage rate lower than would be the case under competition. Also, the amount of labor sold would also be lower than the competitive optimal allocation.
Such a case is a type of market failure and results in workers being paid less than their marginal value. Under the monopsonistic assumption, an appropriately set minimum wage could increase both wages and employment, with the optimal level being equal to the marginal product of labor. This view emphasizes the role of minimum wages as a market regulation policy akin to antitrust policies, as opposed to an illusory "free lunch" for low-wage workers.
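To see how this can work, consider a deliberately simplified monopsony example (illustrative numbers only): suppose attracting L workers requires offering a wage of 4 + 0.02L, and each additional worker generates a constant 12 in marginal revenue product. The firm's marginal cost of labor is then 4 + 0.04L, so on its own it hires 200 workers at a wage of 8. A minimum wage of 10 makes labor available to the firm at that flat rate up to the 300 workers willing to work for 10, and since 10 is still below the marginal product of 12, the firm hires all 300: both the wage and employment rise.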
Another reason minimum wage may not affect employment in certain industries is that the demand for the product the employees produce is highly inelastic. For example, if management is forced to increase wages, management can pass on the increase in wage to consumers in the form of higher prices. Since demand for the product is highly inelastic, consumers continue to buy the product at the higher price and so the manager is not forced to lay off workers. Economist Paul Krugman argues this explanation neglects to explain why the firm was not charging this higher price absent the minimum wage.
Three other possible reasons minimum wages do not affect employment were suggested by Alan Blinder: higher wages may reduce turnover, and hence training costs; raising the minimum wage may "render moot" the potential problem of recruiting workers at a higher wage than current workers; and minimum wage workers might represent such a small proportion of a business's cost that the increase is too small to matter. He admits that he does not know if these are correct, but argues that "the list demonstrates that one can accept the new empirical findings and still be a card-carrying economist."
Economists disagree as to the measurable impact of minimum wages in the 'real world'. This disagreement usually takes the form of competing empirical tests of the elasticities of supply and demand in labor markets and the degree to which markets differ from the efficiency that models of perfect competition predict.
Economists have done empirical studies on different aspects of the minimum wage, including:
- Employment effects, the most frequently studied aspect
- Effects on the distribution of wages and earnings among low-paid and higher-paid workers
- Effects on the distribution of incomes among low-income and higher-income families
- Effects on the skills of workers through job training and the deferring of work to acquire education
- Effects on prices and profits
- Effects on on-the-job training
Until the mid-1990s, a general consensus existed among economists, both conservative and liberal, that the minimum wage reduced employment, especially among younger and low-skill workers. In addition to the basic supply-demand intuition, there were a number of empirical studies that supported this view. For example, Gramlich (1976) found that many of the benefits went to higher income families, and in particular that teenagers were made worse off by the unemployment associated with the minimum wage.
Brown et al. (1983) noted that time series studies to that point had found that for a 10 percent increase in the minimum wage, there was a decrease in teenage employment of 1–3 percent. However, the studies found wider variation, from 0 to over 3 percent, in their estimates for the effect on teenage unemployment (teenagers without a job and looking for one). In contrast to the simple supply and demand diagram, it was commonly found that teenagers withdrew from the labor force in response to the minimum wage, which produced the possibility of equal reductions in the supply as well as the demand for labor at a higher minimum wage and hence no impact on the unemployment rate. Using a variety of specifications of the employment and unemployment equations (using ordinary least squares vs. generalized least squares regression procedures, and linear vs. logarithmic specifications), they found that a 10 percent increase in the minimum wage caused a 1 percent decrease in teenage employment, and no change in the teenage unemployment rate. The study also found a small, but statistically significant, increase in unemployment for adults aged 20–24.
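Results like these are often summarized as an employment elasticity, the percent change in teenage employment per percent change in the minimum wage; the small helper below simply makes the conversion from the Brown et al. range explicit.

```python
def employment_elasticity(pct_employment_change, pct_wage_change):
    """Percent change in employment per percent change in the minimum wage."""
    return pct_employment_change / pct_wage_change

# Brown et al.'s time-series range: a 10% minimum wage rise, a 1-3% employment fall.
for employment_drop in (-1.0, -3.0):
    print(employment_elasticity(employment_drop, 10.0))   # -0.1 and -0.3
```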
Wellington (1991) updated Brown et al.'s research with data through 1986 to provide new estimates encompassing a period when the real (i.e., inflation-adjusted) value of the minimum wage was declining, because it had not increased since 1981. She found that a 10% increase in the minimum wage decreased teenage employment by 0.6%, with no effect on the teen or young adult unemployment rates.
Some research suggests that the unemployment effects of small minimum wage increases are dominated by other factors. In Florida, where voters approved an increase in 2004, a comprehensive follow-up study found a strong economy, with employment above previous years and growing faster than in the U.S. as a whole. On on-the-job training, some believe that employers recoup the higher wages by cutting training expenses. A 2001 empirical study, however, found "no evidence that minimum wages reduce training, and little evidence that they tend to increase training."
Some empirical studies have tried to ascertain the benefits of a minimum wage beyond employment effects. In an analysis of Census data, Joseph Sabia and Robert Nielsen found no statistically significant evidence that minimum wage increases helped reduce financial, housing, health, or food insecurity. This study was undertaken by the Employment Policies Institute, a think tank funded by the food, beverage and hospitality industries. In 2012, Michael Reich published an economic analysis that suggested that a proposed minimum wage hike in San Jose might stimulate the city's economy by about $190 million.
The Economist wrote in December 2013: "A minimum wage, providing it is not set too high, could thus boost pay with no ill effects on jobs....America's federal minimum wage, at 38% of median income, is one of the rich world's lowest. Some studies find no harm to employment from federal or state minimum wages, others see a small one, but none finds any serious damage. ... High minimum wages, however, particularly in rigid labour markets, do appear to hit employment. France has the rich world’s highest wage floor, at more than 60% of the median for adults and a far bigger fraction of the typical wage for the young. This helps explain why France also has shockingly high rates of youth unemployment: 26% for 15- to 24-year-olds."
Most studies on the effects of minimum wages have been conducted in high-income economies. A study of minimum wage increases in China shows that "minimum wage changes led to significant adverse effects on employment in the Eastern and Central regions of China, and resulted in disemployment for females, young adults, and low-skilled workers".
Card and Krueger
In 1992, the minimum wage in New Jersey increased from $4.25 to $5.05 per hour (an 18.8% increase) while the adjacent state of Pennsylvania remained at $4.25. David Card and Alan Krueger gathered information on fast food restaurants in New Jersey and eastern Pennsylvania in an attempt to see what effect this increase had on employment within New Jersey. Basic economic theory would have implied that relative employment should have decreased in New Jersey. Card and Krueger surveyed employers before the April 1992 New Jersey increase, and again in November–December 1992, asking managers for data on the full-time equivalent staff level of their restaurants both times. Based on data from the employers' responses, the authors concluded that the increase in the minimum wage slightly increased employment in the New Jersey restaurants.
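Card and Krueger's comparison is an example of a difference-in-differences design; the sketch below shows only the arithmetic of that design, and the staffing numbers in it are invented placeholders rather than the actual survey results.

```python
# Difference-in-differences arithmetic of the NJ/PA kind.
# The full-time-equivalent (FTE) staffing numbers are invented placeholders.
nj_before, nj_after = 20.0, 20.9   # treated state: before and after the wage rise
pa_before, pa_after = 23.0, 21.5   # control state over the same period

nj_change = nj_after - nj_before
pa_change = pa_after - pa_before
did_estimate = nj_change - pa_change   # treated change minus control change

print(f"NJ change {nj_change:+.1f} FTE, PA change {pa_change:+.1f} FTE, "
      f"difference-in-differences {did_estimate:+.1f} FTE per restaurant")
```

A positive difference-in-differences estimate is how a "slight increase in employment" shows up once the control state is used to proxy what would have happened to New Jersey restaurants without the increase.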
One possible explanation for why the current minimum wage laws may not affect unemployment in the United States is that the minimum wage is set close to the equilibrium point for low and unskilled workers. Thus, according to this explanation, in the absence of the minimum wage law unskilled workers would be paid approximately the same amount, whereas an increase above this equilibrium point would likely bring about increased unemployment for the low and unskilled workers.
Card and Krueger expanded on this initial article in their 1995 book Myth and Measurement: The New Economics of the Minimum Wage. They argued that the negative employment effects of minimum wage laws are minimal if not non-existent. For example, they look at the 1992 increase in New Jersey's minimum wage, the 1988 rise in California's minimum wage, and the 1990–91 increases in the federal minimum wage. In addition to their own findings, they reanalyzed earlier studies with updated data, generally finding that the older results of a negative employment effect did not hold up in the larger datasets.
Research subsequent to Card and Krueger's work
In subsequent research, David Neumark and William Wascher attempted to verify Card and Krueger's results by using administrative payroll records from a sample of large fast food restaurant chains in order to verify employment. They found that the minimum wage increases were followed by decreases in employment. On the other hand, an assessment of data collected and analyzed by Neumark and Wascher did not initially contradict the Card and Krueger results, but in a later edited version they found a four percent decrease in employment, and reported that "the estimated disemployment effects in the payroll data are often statistically significant at the 5- or 10- percent level although there are some estimators and subsamples that yield insignificant—although almost always negative" employment effects. However, this paper's conclusions were rebutted in a 2000 paper by Card and Krueger. A 2011 paper has reconciled the difference between Card and Krueger's survey data and Neumark and Wascher's payroll-based data. The paper shows that both datasets evidence conditional employment effects that are positive for small restaurants, but are negative for large fast-food restaurants.
In 1996 and 1997, the federal minimum wage was increased from $4.25 to $5.15, thereby increasing the minimum wage by $0.90 in Pennsylvania but by just $0.10 in New Jersey; this allowed for an examination of the effects of minimum wage increases in the same area, subsequent to the 1992 change studied by Card and Krueger. A study by Hoffman and Trace found the result anticipated by traditional theory: a detrimental effect on employment.
Further application of the methodology used by Card and Krueger by other researchers yielded results similar to their original findings, across additional data sets. A 2010 study by three economists (Arindrajit Dube of the University of Massachusetts Amherst, T. William Lester of the University of North Carolina at Chapel Hill, and Michael Reich of the University of California, Berkeley) compared adjacent counties in different states where the minimum wage had been raised in one of the states. They analyzed employment trends for several categories of low-wage workers from 1990 to 2006 and found that increases in minimum wages had no negative effects on low-wage employment and successfully increased the income of workers in food services and retail employment, as well as the narrower category of workers in restaurants.
However, a 2011 study by Baskaya and Rubinstein of Brown University found that at the federal level, "a rise in minimum wage have [sic] an instantaneous impact on wage rates and a corresponding negative impact on employment", stating, "Minimum wage increases boost teenage wage rates and reduce teenage employment." Another 2011 study by Sen, Rybczynski, and Van De Waal found that "a 10% increase in the minimum wage is significantly correlated with a 3−5% drop in teen employment." A 2012 study by Sabia, Hansen, and Burkhauser found that "minimum wage increases can have substantial adverse labor demand effects for low-skilled individuals", with the largest effects on those aged 16 to 24.
A 2013 study by Meer and West concluded that "the minimum wage reduces net job growth, primarily through its effect on job creation by expanding establishments ... most pronounced for younger workers and in industries with a higher proportion of low-wage workers." This study was later critiqued for the assumptions it made about employment trends within narrowly defined low-wage groups. The authors replied to the critiques and released additional data which addressed the criticism of their methodology, but did not resolve the issue of whether their data showed a causal relationship. Another 2013 study, by Suzana Laporšek of the University of Primorska, on youth unemployment in Europe claimed there was "a negative, statistically significant impact of minimum wage on youth employment." A 2013 study by labor economists Tony Fang and Carl Lin, which studied minimum wages and employment in China, found that "minimum wage changes have significant adverse effects on employment in the Eastern and Central regions of China, and result in disemployment for females, young adults, and low-skilled workers".
Several researchers have conducted statistical meta-analyses of the employment effects of the minimum wage. In 1995, Card and Krueger analyzed 14 earlier time-series studies on minimum wages and concluded that there was clear evidence of publication bias (in favor of studies that found a statistically significant negative employment effect). They point out that later studies, which had more data and lower standard errors, did not show the expected increase in t-statistic (almost all the studies had a t-statistic of about two, just above the level of statistical significance at the .05 level). Though this was a serious methodological indictment, opponents of the minimum wage largely ignored it; as Thomas C. Leonard noted, "The silence is fairly deafening."
In 2005, T.D. Stanley showed that Card and Krueger's results could signify either publication bias or the absence of a minimum wage effect. However, using a different methodology, Stanley concludes that there is evidence of publication bias and that correction of this bias shows no relationship between the minimum wage and unemployment. In 2008, Hristos Doucouliagos and T.D. Stanley conducted a similar meta-analysis of 64 U.S. studies on dis-employment effects and concluded that Card and Krueger's initial claim of publication bias is still correct. Moreover, they concluded, "Once this publication selection is corrected, little or no evidence of a negative association between minimum wages and employment remains."
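The logic of these publication-bias corrections can be illustrated on simulated data; the snippet below is a stylized funnel-asymmetry (FAT-PET-style) regression, not a re-analysis of the actual minimum wage literature, and its sample size and selection rule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a literature with NO true effect but selective publication:
# an estimate is "published" only if it is significantly negative (t < -1.96).
true_effect = 0.0
standard_errors = rng.uniform(0.02, 0.30, size=2000)
estimates = rng.normal(true_effect, standard_errors)
published = (estimates / standard_errors) < -1.96

# Funnel-asymmetry regression: estimate_i = b0 + b1 * standard_error_i + noise.
# Under selection, b1 is strongly negative while b0, the bias-corrected effect,
# stays near zero, which is the pattern the corrected meta-analyses describe.
se_pub = standard_errors[published]
est_pub = estimates[published]
X = np.column_stack([np.ones(se_pub.size), se_pub])
b0, b1 = np.linalg.lstsq(X, est_pub, rcond=None)[0]
print(f"bias-corrected effect b0 = {b0:.3f}, selection term b1 = {b1:.3f}")
```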
Consistent with the results from Doucouliagos and Stanley, and Card and Krueger, Baskaya and Rubinstein's 2011 study, which analyzed 24 papers on the minimum wage, found "mild positive, yet statistically insignificant association between the change in the employment of teenagers" at state minimum wage levels. However, when minimum wage is set at the federal level, they found "notable wage impacts and large corresponding disemployment effects".
Debate over consequences
Minimum wage laws affect workers in most low-paid fields of employment and have usually been judged against the criterion of reducing poverty. Minimum wage laws receive less support from economists than from the general public. Despite decades of experience and economic research, debates about the costs and benefits of minimum wages continue today.
Various groups have great ideological, political, financial, and emotional investments in issues surrounding minimum wage laws. For example, agencies that administer the laws have a vested interest in showing that "their" laws do not create unemployment, as do labor unions whose members' finances are protected by minimum wage laws. On the other side of the issue, low-wage employers such as restaurants finance the Employment Policies Institute, which has released numerous studies opposing the minimum wage. The presence of these powerful groups and factors means that the debate on the issue is not always based on dispassionate analysis. Additionally, it is extraordinarily difficult to separate the effects of minimum wage from all the other variables that affect employment.
A widely circulated argument that the minimum wage was ineffective at reducing poverty was provided by George Stigler in 1949:
- Employment may fall more than in proportion to the wage increase, thereby reducing overall earnings;
- As uncovered sectors of the economy absorb workers released from the covered sectors, the decrease in wages in the uncovered sectors may exceed the increase in wages in the covered ones;
- The impact of the minimum wage on family income distribution may be negative unless the fewer but better jobs are allocated to members of needy families rather than to, for example, teenagers from families not in poverty;
- Forbidding employers to pay less than a legal minimum is equivalent to forbidding workers to sell their labor for less than the minimum wage. The legal restriction that employers cannot pay less than a legislated wage is equivalent to the legal restriction that workers cannot work at all in the protected sector unless they can find employers willing to hire them at that wage.
In 2006, the International Labour Organization (ILO) argued that the minimum wage could not be directly linked to unemployment in countries that have suffered job losses. In April 2010, the Organisation for Economic Co-operation and Development (OECD) released a report arguing that countries could alleviate teen unemployment by "lowering the cost of employing low-skilled youth" through a sub-minimum training wage. A study of U.S. states showed that businesses' annual and average payrolls grew faster, and employment grew at a faster rate, in states with a minimum wage above the federal level. The study showed a correlation, but did not claim to prove causation.
Although the UK national minimum wage was strongly opposed by both the business community and the Conservative Party when it was introduced in 1999, the Conservatives reversed their opposition in 2000. Accounts differ as to its effects. The Centre for Economic Performance found no discernible impact on employment levels from the wage increases, while the Low Pay Commission found that employers had reduced their rate of hiring and the hours worked by employees, and had found ways to make current workers more productive (especially service companies). The Institute for the Study of Labor found that prices in the minimum wage sector rose significantly faster than prices in non-minimum wage sectors in the four years following the implementation of the minimum wage. Neither trade unions nor employer organizations now contest the minimum wage, although the latter had done so heavily until 1999.
In 2014, supporters of the minimum wage cited a study finding that job creation within the United States was faster in states that had raised their minimum wages, and cited news reports that the state with the highest minimum wage had seen greater job creation than the rest of the United States.
In 2014, in Seattle, Washington, liberal and progressive business owners who had supported the city's new $15 minimum wage said they might hold off on expanding their businesses and thus creating new jobs, due to the uncertain timescale of the wage increase implementation. However, subsequently at least two of the business owners quoted did expand.
The dollar value of the minimum wage loses purchasing power over time due to inflation. Indexation provisions, such as proposals to tie the minimum wage to average wages, have the potential to keep its real value relevant and predictable.
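A small calculation shows why indexation matters; the starting wage and the constant inflation rate below are assumed for illustration.

```python
# Erosion of an unindexed wage floor under steady inflation (figures are assumed).
nominal_wage = 7.25        # statutory wage fixed in a base year
annual_inflation = 0.025   # assumed constant 2.5% inflation

for years in (0, 5, 10):
    real_value = nominal_wage / (1 + annual_inflation) ** years
    indexed_wage = nominal_wage * (1 + annual_inflation) ** years
    print(f"after {years:>2} years: real value if unindexed ${real_value:.2f}, "
          f"nominal wage needed to keep pace ${indexed_wage:.2f}")
```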
With regard to the economic effects of introducing minimum wage legislation in Germany in January 2015, recent developments have shown that the feared increase in unemployment has not materialized; however, in some economic sectors and regions of the country job opportunities declined, particularly for temporary and part-time workers, and some low-wage jobs disappeared entirely. Because of this broadly positive development, the Deutsche Bundesbank revised its opinion, concluding that "the impact of the introduction of the minimum wage on the total volume of work appears to be very limited in the present business cycle".
Surveys of economists
According to a 1978 article in the American Economic Review, 90% of the economists surveyed agreed that the minimum wage increases unemployment among low-skilled workers. By 1992 the survey found 79% of economists in agreement with that statement, and by 2000, 45.6% were in full agreement with the statement and 27.9% agreed with provisos (73.5% total). The authors of the 2000 study also reweighted data from a 1990 sample to show that at that time 62.4% of academic economists agreed with the statement above, while 19.5% agreed with provisos and 17.5% disagreed. They state that the reduction in consensus on this question is "likely" due to the Card and Krueger research and subsequent debate.
A similar survey in 2006 by Robert Whaples polled PhD members of the American Economic Association (AEA). Whaples found that 46.8% of respondents wanted the minimum wage eliminated, 37.7% supported an increase, 14.3% wanted it kept at the current level, and 1.3% wanted it decreased. Another survey in 2007, conducted by the University of New Hampshire Survey Center, found that 73% of labor economists surveyed in the United States believed that a minimum wage set at 150% of the then-current level would result in employment losses, and 68% believed a mandated minimum wage would cause an increase in hiring of workers with greater skills. 31% felt that no hiring changes would result.
Surveys of labor economists have found a sharp split on the minimum wage. Fuchs et al. (1998) polled labor economists at the top 40 research universities in the United States on a variety of questions in the summer of 1996. Their 65 respondents were nearly evenly divided when asked if the minimum wage should be increased. They argued that the different policy views were not related to views on whether raising the minimum wage would reduce teen employment (the median economist said there would be a reduction of 1%), but on value differences such as income redistribution. Daniel B. Klein and Stewart Dompe conclude, on the basis of previous surveys, "the average level of support for the minimum wage is somewhat higher among labor economists than among AEA members."
In 2007, Klein and Dompe conducted a non-anonymous survey of supporters of the minimum wage who had signed the "Raise the Minimum Wage" statement published by the Economic Policy Institute. 95 of the 605 signatories responded. They found that a majority signed on the grounds that it transferred income from employers to workers, or equalized bargaining power between them in the labor market. In addition, a majority considered disemployment to be a moderate potential drawback to the increase they supported.
In 2013, a diverse group of 37 economics professors was surveyed on their view of the minimum wage's impact on employment. 34% of respondents agreed with the statement, "Raising the federal minimum wage to $9 per hour would make it noticeably harder for low-skilled workers to find employment." 32% disagreed and the remaining respondents were uncertain or had no opinion on the question. 47% agreed with the statement, "The distortionary costs of raising the federal minimum wage to $9 per hour and indexing it to inflation are sufficiently small compared with the benefits to low-skilled workers who can find employment that this would be a desirable policy", while 11% disagreed.
Economists and other political commentators have proposed alternatives to the minimum wage. They argue that these alternatives may address the issue of poverty better than a minimum wage, as they would benefit a broader population of low-wage earners, would not cause unemployment, and would distribute the costs widely rather than concentrating them on employers of low-wage workers.
A basic income (or negative income tax) is a system of social security that periodically provides each citizen with a sum of money that is sufficient to live on frugally. It is argued that recipients of the basic income would have considerably more bargaining power when negotiating a wage with an employer, as there would be no risk of destitution from turning down the job. As a result, jobseekers could spend more time looking for a more appropriate or satisfying job, or they could wait until a higher-paying job appeared. Alternatively, they could spend more time increasing their skills at university, which would make them more suitable for higher-paying jobs, as well as provide numerous other benefits. Experiments with basic income and negative income tax schemes in Canada and the United States showed that people spent more time studying while the programs were running.
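The mechanics of a negative income tax can be written down in a few lines; the guarantee level and benefit-reduction rate below are stylized assumptions, not the parameters of any actual program.

```python
# Stylized negative income tax schedule (all parameters are assumptions).
GUARANTEE = 10_000   # annual payment to a household with no earnings
PHASEOUT = 0.50      # benefit reduction per dollar of earnings

def nit_transfer(earnings):
    """Transfer received at a given level of annual earnings."""
    return max(GUARANTEE - PHASEOUT * earnings, 0.0)

for earnings in (0, 5_000, 15_000, 25_000):
    transfer = nit_transfer(earnings)
    print(f"earnings {earnings:>6}: transfer {transfer:>8.2f}, "
          f"total income {earnings + transfer:>9.2f}")
```

Total income rises with every extra dollar earned in this schedule, which is the property proponents point to when arguing that such a floor does not price anyone out of a job.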
Proponents argue that a basic income that is based on a broad tax base would be more economically efficient, as the minimum wage effectively imposes a high marginal tax on employers, causing losses in efficiency.
Guaranteed minimum income
A guaranteed minimum income is another proposed system of social welfare provision. It is similar to a basic income or negative income tax system, except that it is normally conditional and subject to a means test. Some proposals also stipulate a willingness to participate in the labor market, or a willingness to perform community services.
Refundable tax credit
A refundable tax credit is a mechanism whereby the tax system can reduce the tax owed by a household to below zero, and result in a net payment to the taxpayer beyond their own payments into the tax system. Examples of refundable tax credits include the earned income tax credit and the additional child tax credit in the U.S., and working tax credits and child tax credits in the UK. Such a system is slightly different from a negative income tax, in that the refundable tax credit is usually only paid to households that have earned at least some income. This policy is more targeted against poverty than the minimum wage, because it avoids subsidizing low-income workers who are supported by high-income households (for example, teenagers still living with their parents).
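A stylized version of such a credit makes the phase-in, plateau, and phase-out structure concrete; the rates and thresholds below are invented for illustration, since actual EITC parameters vary by year, filing status, and number of children.

```python
# Stylized refundable earned income credit (parameters are invented examples).
PHASE_IN_RATE   = 0.34
PHASE_IN_END    = 10_000   # earnings at which the maximum credit is reached
PHASE_OUT_START = 18_000
PHASE_OUT_RATE  = 0.16
MAX_CREDIT = PHASE_IN_RATE * PHASE_IN_END   # 3,400 in this example

def credit(earnings):
    """Credit paid even when it exceeds tax owed, because it is refundable."""
    if earnings <= PHASE_IN_END:
        return PHASE_IN_RATE * earnings
    if earnings <= PHASE_OUT_START:
        return MAX_CREDIT
    return max(MAX_CREDIT - PHASE_OUT_RATE * (earnings - PHASE_OUT_START), 0.0)

for earnings in (5_000, 12_000, 25_000, 45_000):
    print(f"earnings {earnings:>6}: credit {credit(earnings):>8.2f}")
```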
In the United States, earned income tax credit rates, also known as EITC or EIC, vary by state; some are refundable while other states do not allow a refundable tax credit. The federal EITC program has been expanded by a number of presidents including Jimmy Carter, Ronald Reagan, George H.W. Bush, and Bill Clinton. In 1986, President Reagan described the EITC as "the best anti poverty, the best pro-family, the best job creation measure to come out of Congress." The ability of the earned income tax credit to deliver larger monetary benefits to the poor workers than an increase in the minimum wage and at a lower cost to society was documented in a 2007 report by the Congressional Budget Office.
Italy, Sweden, Norway, Finland, and Denmark are examples of developed nations where there is no minimum wage that is required by legislation. Such nations, particularly the Nordics, have very high union participation rates. Instead, minimum wage standards in different sectors are set by collective bargaining.
In January 2014, seven Nobel laureates in economics (Kenneth Arrow, Peter Diamond, Eric Maskin, Thomas Schelling, Robert Solow, Michael Spence, and Joseph Stiglitz) and 600 other economists wrote a letter to the US Congress and the US President urging that the US government raise the minimum wage to $10.10 by 2016. They endorsed the Minimum Wage Fairness Act, which was introduced by US Senator Tom Harkin in 2013. Democratic presidential candidate Bernie Sanders has introduced a bill in the Senate that would raise the minimum wage to $15.
- "The Advantage Of The Minimum Wage | Robert Nielsen". Robertnielsen21.wordpress.com. Retrieved 2013-03-30.
- "Minimum Wages. by David Neumark and William L. Wascher".
- "The Young and the Jobless". The Wall Street Journal. 2009-10-03. Retrieved 2014-01-11.
- Black, John (September 18, 2003). Oxford Dictionary of Economics. Oxford University Press. p. 300. ISBN 978-0-19-860767-0.
- Mihm, Stephen (5 September 2013). "How the Black Death Spawned the Minimum Wage". Bloomberg View. Retrieved 17 April 2014.
- Thorpe, Vanessa (29 March 2014). "Black death was not spread by rat fleas, say researchers". theguardian.com. Retrieved 29 March 2014.
- Starr, Gerald (1993). Minimum wage fixing : an international review of practices and problems (2nd impression (with corrections) ed.). Geneva: International Labour Office. p. 1. ISBN 9789221025115.
- Nordlund, Willis J. (1997). The quest for a living wage : the history of the federal minimum wage program. Westport, Conn.: Greenwood Press. p. xv. ISBN 9780313264122.
- Neumark, David; William L. Wascher (2008). Minimum Wages. Cambridge, Massachusetts: The MIT Press. ISBN 978-0-262-14102-4.
- "OECD Statistics (GDP, unemployment, income, population, labour, education, trade, finance, prices...)". Stats.oecd.org. Retrieved 2013-03-29.
- Grossman, Jonathan. "Fair Labor Standards Act of 1938: Maximum Struggle for a Minimum Wage". Department of Labor. Retrieved 17 April 2014.
- Stone, Jon (1 October 2010). "History of the UK's minimum wage". Total Politics. Retrieved 17 April 2014.
- Williams, Walter E. (June 2009). "The Best Anti-Poverty Program We Have?". Regulation 32 (2): 62.
- "ILO 2006: Minimum wages policy (PDF)" (PDF). Ilo.org. Retrieved March 1, 2012.
- Eurostat (2006): Minimum Wages 2006 - Variations from 82 to 1503 euro gross per month (PDF)
- "Germany may become 22nd EU state with federal minimum wage". Germany News.Net. Retrieved 7 July 2014.
- Ehrenberg, Ronald G. Labor Markets and Integrating National Economies, Brookings Institution Press (1994), p. 41
- Alderman, Liz; Greenhouse, Steven (October 27, 2014). "Fast Food in Denmark Serves Something Atypical: Living Wages". New York Times. Retrieved October 27, 2014.
- "Minimum Wage". Washington State Dept. of Labor & Industries. Retrieved 2015-01-18.
- "British Government website". Retrieved 2013-09-05.
- "Most Asked Questions about Minimum Wages in India". PayCheck.in. 2013-02-22. Retrieved 2013-03-29.
- Sowell, Thomas (2004). "Minimum Wage Laws". Basic Economics: A Citizen's Guide to the Economy. New York: Basic Books. pp. 163–9. ISBN 978-0-465-08145-5.
- Provisional Minimum Wage Commission: Preliminary Views on a Basket of Indicators, Other Relevant Considerations and Impact Assessment, Provisional Minimum Wage Commission, Hong Kong Special Administrative Region Government,
- Setting the Initial Statutory Minimum Wage Rate, submission to government by the Hong Kong General Chamber of Commerce.
- Li, Joseph, "Minimum wage legislation for all sectors," China Daily October 16, 2008 , "Hong Kong sets Minimum Wage – What one Singaporean thinks," Speaker's Corner, SG Forums, November 5, 2010
- Berstein, David E., & Leonard, Thomas C., Excluding Unfit Workers: Social Control Versus Social Justice in the Age of Economic Reform, Law and Contemporary Problems, Vol. 72, No. 3, 2009
- Editorial Board (February 9, 2014). "The Case for a Higher Minimum Wage". New York Times. Retrieved February 9, 2014.
- Chumley, Cheryl K. (March 18, 2013). "Take it to the bank: Sen. Elizabeth Warren wants to raise minimum wage to $22 per hour". Washington Times. Retrieved January 22, 2014.
- Wing, Nick (March 18, 2013). "Elizabeth Warren: Minimum Wage Would Be $22 An Hour If It Had Kept Up With Productivity". Huffington Post. Retrieved January 22, 2014.
- Hart-Landsberg, Ph.D., Martin (December 19, 2013). "$22.62/HR: The Minimum Wage If It Had Risen Like The Incomes Of The 1%". thesocietypages.org. Retrieved January 22, 2014.
- Rmusemore (December 3, 2013). "Stop Complaining Republicans, the Minimum Wage Should be $22.62 an Hour". policususa.com. Retrieved January 22, 2014.
- McConnell, C. R.; Brue, S. L. (1999). Economics (14th ed.). Irwin-McGraw Hill. p. 594.
- Gwartney, J. D.; Stroup, R. L.; Sobel, R. S.; Macpherson, D. A. (2003). Economics: Private and Public Choice (10th ed.). Thomson South-Western. p. 97.
- Mankiw, N. Gregory (2011). Principles of Macroeconomics (6th ed.). South-Western Pub. p. 311.
- Card, David; Krueger, Alan B. (1995). Myth and Measurement: The New Economics of the Minimum Wage. Princeton University Press. pp. 1; 6–7.
- Formby, J. P.; Bishop, J. A.; Kim, H. (2010). "The Redistributive Effects and Cost-Effectiveness of Increasing the Federal Minimum Wage". Public Finance Review 38 (5): 585–618. doi:10.1177/1091142110373481.
- Belman, Dale L.; Wolfson, Paul (2010). "The Effect of Legislated Minimum Wage Increases on Employment and Hours: A Dynamic Analysis". Labour 24 (1): 1–25. doi:10.1111/j.1467-9914.2010.00468.x.
- Gwartney, James David; Stroup, Richard L.; Studenmund, A. H. (1987). Economics: Private and Public Choice. New York: Harcourt Brace Jovanovich. pp. 559–62. ISBN 978-0-15-518880-8.
- e.g. DE Card and AB Krueger, Myth and Measurement: The New Economics of the Minimum Wage (1995) and S Machin and A Manning, ‘Minimum wages and economic outcomes in Europe’ (1997) 41 European Economic Review 733
- Rittenberg, Timothy Tregarthen, Libby (1999). Economics (2nd ed.). New York: Worth Publishers. p. 290. ISBN 9781572594180. Retrieved 21 June 2014.
- Ehrenberg, R. and Smith, R. "Modern labor economics: theory and public policy", HarperCollins, 1994, 5th ed.[page needed]
- By Jim Stanford, Debate: Boost the wage, help the worker, National Post, February 22, 2011
- Boal, William M.; Ransom, Michael R (March 1997). "Monopsony in the Labor Market". Journal of Economic Literature 35 (1): 86–112. JSTOR 2729694.
- Garegnani, P. (July 1970). "Heterogeneous Capital, the Production Function and the Theory of Distribution". The Review of Economic Studies 37 (3): 407–36. doi:10.2307/2296729. JSTOR 2296729.
- Vienneau, Robert L. (2005). "On Labour Demand and Equilibria of the Firm". The Manchester School 73 (5): 612–9. doi:10.1111/j.1467-9957.2005.00467.x.
- Opocher, A.; Steedman, I. (2009). "Input price-input quantity relations and the numeraire". Cambridge Journal of Economics 33 (5): 937–48. doi:10.1093/cje/bep005.
- Anyadike-Danes, Michael; Godley, Wynne (1989). "Real Wages and Employment: A Sceptical View of Some Recent Empirical Work". The Manchester School 57 (2): 172–87. doi:10.1111/j.1467-9957.1989.tb00809.x.
- White, Graham (November 2001). "The Poverty of Conventional Economic Wisdom and the Search for Alternative Economic and Social Policies". The Drawing Board: an Australian Review of Public Affairs 2 (2): 67–87.
- Fields, Gary S. (1994). "The Unemployment Effects of Minimum Wages". International Journal of Manpower 15 (2): 74–81. doi:10.1108/01437729410059323.
- Manning, Alan (2003). Monopsony in motion: Imperfect Competition in Labor Markets. Princeton, NJ: Princeton University Press. ISBN 0-691-11312-2.[page needed]
- Gillespie, Andrew (2007). Foundations of Economics. Oxford University Press. p. 240.
- Krugman, Paul (2013). Economics. Worth Publishers. p. 385.
- Blinder, Alan S. (May 23, 1996). "The $5.15 Question". The New York Times. p. A29.
- Schmitt, John (February 2013). "Why Does the Minimum Wage Have No Discernible Effect on Employment?" (PDF). Center for Economic and Policy Research. Retrieved December 5, 2013. Lay summary – The Washington Post (February 14, 2013).
- Gramlich, Edward M.; Flanagan, Robert J.; Wachter, Michael L. (1976). "Impact of Minimum Wages on Other Wages, Employment, and Family Incomes". Brookings Papers on Economic Activity 1976 (2): 409–61. doi:10.2307/2534380.
- Brown, Charles; Gilroy, Curtis; Kohen, Andrew (Winter 1983). "Time-Series Evidence of the Effect of the Minimum Wage on Youth Employment and Unemployment". The Journal of Human Resources 18 (1): 3–31. doi:10.2307/145654. JSTOR 145654.
- Wellington, Alison J. (Winter 1991). "Effects of the Minimum Wage on the Employment Status of Youths: An Update". The Journal of Human Resources 26 (1): 27–46. doi:10.2307/145715. JSTOR 145715.
- Fox, Liana (October 24, 2006). "Minimum wage trends: Understanding past and contemporary research". Economic Policy Institute. Retrieved December 6, 2013.
- "The Florida Minimum Wage: Good for Workers, Good for the Economy" (PDF). Retrieved 3 November 2013.
- Acemoglu, Daron; Pischke, Jörn-Steffen (November 2001). "Minimum Wages and On-the-Job Training" (PDF). Institute for the Study of Labor. SSRN 288292. Retrieved December 6, 2013. Also published as Acemoglu, Daron; Pischke, Jörn-Steffen (2003). "Minimum Wages and On-the-job Training". In Polachek, Solomon W. Worker Well-Being and Public Policy. Research in Labor Economics 22. pp. 159–202. doi:10.1016/S0147-9121(03)22005-7. ISBN 978-0-76231-026-5.
- Sabia, Joseph J.; Nielsen, Robert B. (April 2012). Can Raising the Minimum Wage Reduce Poverty and Hardship?. Employment Policies Institute.[page needed]
- Michael Reich. "Increasing the Minimum Wage in San Jose: Benefits and Costs- White Paper" (PDF). Retrieved 2013-03-29.
- The Economist-The Logical Floor-December 2013
- Fang, Tony; Lin, Carl (2015-11-27). "Minimum wages and employment in China". IZA Journal of Labor Policy 4 (1): 22. doi:10.1186/s40173-015-0050-9. ISSN 2193-9004.
- Card, David; Krueger, Alan B. (September 1994). "Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania". The American Economic Review 84 (4): 772–93. JSTOR 2118030.
- ISBN 0-691-04823-1[full citation needed][page needed]
- Card; Krueger (2000). "Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania: Reply". American Economic Review 90 (5): 1397–1420. doi:10.1257/aer.90.5.1397.
- Dube, Arindrajit; Lester, T. William; Reich, Michael (November 2010). "Minimum Wage Effects Across State Borders: Estimates Using Contiguous Counties". Review of Economics and Statistics 92 (4): 945–964. doi:10.1162/REST_a_00039. Retrieved 10 March 2014.
- Schmitt, John (January 1, 1996). "The Minimum Wage and Job Loss". Economic Policy Institute. Retrieved December 7, 2013.
- Neumark, David; Wascher, William (December 2000). "Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania: Comment". The American Economic Review 90 (5): 1362–96. doi:10.1257/aer.90.5.1362. JSTOR 2677855.
- http://www.davidson.edu/academic/economics/foley/eco324_s06/Neumark_Wascher%20AER%20(2000).pdf[full citation needed][dead link]
- Card and Krueger (2000) "Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania: Reply" American Economic Review, Volume 90 No. 5. pg 1397-1420
- Ropponen, Olli (2011). "Reconciling the evidence of Card and Krueger (1994) and Neumark and Wascher (2000)". Journal of Applied Econometrics 26 (6): 1051–7. doi:10.1002/jae.1258.
- Hoffman, Saul D; Trace, Diane M (2009). "NJ and PA Once Again: What Happened to Employment when the PA–NJ Minimum Wage Differential Disappeared?". Eastern Economic Journal 35 (1): 115–28. doi:10.1057/eej.2008.1.
- Dube, Arindrajit; Lester, T. William; Reich, Michael (November 2010). "Minimum Wage Effects Across State Borders: Estimates Using Contiguous Counties" (PDF). The Review of Economics and Statistics 92 (4): 945–64. doi:10.1162/REST_a_00039.
- FOLBRE, NANCY (November 1, 2010). "Along the Minimum-Wage Battle Front". New York Times. Retrieved 4 December 2013.
- "Using Federal Minimum Wages to Identify the Impact of Minimum Wages on Employment and Earnings Across the U.S. States" (PDF). 1 Oct 2011.
- "Teen employment, poverty, and the minimum wage: Evidence from Canada". 1 Jan 2011.
- "Are the Effects of Minimum Wage Increases Always Small? New Evidence from a Case Study of New York State". 2 Apr 2012.
- Meer, Jonathan; West, Jeremy (2013). "Effects of the Minimum Wage on Employment Dynamics". NBER Working Paper No. 19262.
- "Minimum wage effects on youth employment in the European Union". 14 Sep 2013.
- "Minimum Wages and Employment in China". 14 Dec 2013.
- Card, David; Krueger, Alan B. (May 1995). "Time-Series Minimum-Wage Studies: A Meta-analysis". The American Economic Review 85 (2): 238–43. JSTOR 2117925.
- Leonard, T. C. (2000). "The Very Idea of Applying Economics: The Modern Minimum-Wage Controversy and Its Antecedents". History of Political Economy 32: 117. doi:10.1215/00182702-32-Suppl_1-117.
- Stanley, T. D. (2005). "Beyond Publication Bias". Journal of Economic Surveys 19 (3): 309. doi:10.1111/j.0950-0804.2005.00250.x.
- Doucouliagos, Hristos; Stanley, T. D. (2009). "Publication Selection Bias in Minimum-Wage Research? A Meta-Regression Analysis". British Journal of Industrial Relations 47 (2): 406–28. doi:10.1111/j.1467-8543.2009.00723.x.
- Eatwell, John, Ed.; Murray Milgate; Peter Newman (1987). The New Palgrave: A Dictionary of Economics. London: The Macmillan Press Limited. pp. 476–478. ISBN 0-333-37235-2.
- Bernstein, Harry (September 15, 1992). "Troubling Facts on Employment". Los Angeles Times. p. D3. Retrieved December 6, 2013.
- Engquist, Erik (May 2006). "Health bill fight nears showdown". Crain's New York Business 22 (20): 1.
- Stilwell, Victoria (March 8, 2014). "Highest Minimum-Wage State Washington Beats U.S. in Job Creation". Bloomberg.
- "Real Value of the Minimum Wage". Epi.org. Retrieved 2013-03-29.
- Freeman, Richard B. (1994). "Minimum Wages – Again!". International Journal of Manpower 15 (2): 8–25. doi:10.1108/01437729410059305.
- Bernard Semmel, Imperialism and Social Reform: English Social-Imperial Thought 1895–1914 (London: Allen and Unwin, 1960), p. 63.
- "ITIF Report Shows Self-service Technology a New Force in Economic Life". The Information Technology & Innovation Foundation. April 14, 2010. Retrieved October 5, 2011.
- Alesina, Alberto F.; Zeira, Joseph (2006). "Technology and Labor Regulations". SSRN Electronic Journal. doi:10.2139/ssrn.936346.
- "Minimum Wages in canada : theory, evidence and policy". Hrsdc.gc.ca. March 7, 2008. Retrieved October 5, 2011.
- Kallem, Andrew (2004). "Youth Crime and the Minimum Wage". SSRN Electronic Journal. doi:10.2139/ssrn.545382.
- "Crime and work: What we can learn from the low-wage labor market | Economic Policy Institute". Epi.org. July 1, 2000. Retrieved October 5, 2011.
- Kosteas, Vasilios D. "Minimum Wage." Encyclopedia of World Poverty. Ed. M. Odekon.Thousand Oaks, CA: Sage Publications, Inc., 2006. 719-21. SAGE knowledge. Web.
- Abbott, Lewis F. Statutory Minimum Wage Controls: A Critical Review of their Effects on Labour Markets, Employment, and Incomes. ISR Publications, Manchester UK, 2nd. edn. 2000. ISBN 978-0-906321-22-5. [page needed]
- Llewellyn H. Rockwell Jr. (October 28, 2005). "Wal-Mart Warms to the State - Mises Institute". Mises.org. Retrieved October 5, 2011.
- Tupy, Marian L. Minimum Interference, National Review Online, May 14, 2004
- "The Wages of Politics". Wall Street Journal. November 11, 2006. Retrieved December 6, 2013.
- Messmore, Ryan. "Increasing the Mandated Minimum Wage: Who Pays the Price?". Heritage.org. Retrieved October 5, 2011.
- Art Carden. "Why Wal-Mart Matters - Mises Institute". Mises.org. Retrieved October 5, 2011.
- "Will have only negative effects on the distribution of economic justice. Minimum-wage legislation, by its very nature, benefits some at the expense of the least experienced, least productive, and poorest workers." (Cato)
- Williams, Walter (1989). South Africa's War Against Capitalism. New York: Praeger. ISBN 0-275-93179-X.
- A blunt instrument, The Economist, October 26, 2006 (English)
- Partridge, M. D.; Partridge, J. S. (1999). "Do minimum wage hikes reduce employment? State-level evidence from the low-wage retail sector". Journal of Labor Research 20 (3): 393. doi:10.1007/s12122-999-1007-9.
- "The Effects of a Minimum-Wage Increase on Employment and Family Income". February 18, 2014. Retrieved July 26, 2014.
- Scarpetta, Stephano, Anne Sonnet and Thomas Manfredi,Rising Youth Unemployment During The Crisis: How To Prevent Negative Long-Term Consequences on a Generation?, April 14, 2010 (read-only PDF)
- Fiscal Policy Institute, "States with Minimum Wages Above the Federal Level have had Faster Small Business and Retail Job Growth," March 30, 2006.
- "National Minimum Wage". politics.co.uk. Archived from the original on December 1, 2007. Retrieved December 29, 2007.
- Metcalf, David (April 2007). "Why Has the British National Minimum Wage Had Little or No Impact on Employment?".
- Low Pay Commission (2005). National Minimum Wage - Low Pay Commission Report 2005
- Wadsworth, Jonathan (September 2009). "Did the National Minimum Wage Affect UK Prices" (PDF).
- Rugaber, Christopher S. (July 19, 2014). "States with higher minimum wage gain more jobs". USA Today.
- Lobosco, Katie (May 14, 2014). "Washington state defies minimum wage logic". CNN.
- Meyerson, Harold (May 21, 2014). "Harold Meyerson: A higher minimum wage may actually boost job creation". The Washington Post.
- Minimum Wage Limbo Keeps Small Business Owners Up At Night, kuow.org, May 22, 2014
- Seattle Magazine, March 23, 2015
- KOMO News, July 31,2015
- C. Eisenring (Dec 2015). Gefährliche Mindestlohn-Euphorie (in German). Neue Zürcher Zeitung. Retrieved 30 December 2015.
- R. Janssen (Sept 2015). The German Minimum Wage Is Not A Job Killer. Social Europe. Retrieved 30 December 2015.
- Kearl, J. R.; Pope, Clayne L.; Whiting, Gordon C.; Wimmer, Larry T. (May 1979). "A Confusion of Economists?". The American Economic Review 69 (2): 28–37. JSTOR 1801612.
- Alston, Richard M.; Kearl, J. R.; Vaughan, Michael B. (May 1992). "Is There a Consensus Among Economists in 1990s?". The American Economic Review 82 (2): 203–9. JSTOR 2117401.
- survey by Dan Fuller and Doris Geide-Stevenson using a sample of 308 economists surveyed by the American Economic Association
- Hall, Robert Ernest. Economics: Principles and Applications. Centage Learning. ISBN 1111798206.
- Fuller, Dan; Geide-Stevenson, Doris (2003). "Consensus Among Economists: Revisited". Journal of Economic Education 34 (4): 369–87. doi:10.1080/00220480309595230.
- Whaples, Robert (2006). "Do Economists Agree on Anything? Yes!". The Economists' Voice 3 (9): 1–6. doi:10.2202/1553-3832.1156.
- http://epionline.org/studies/epi_minimumwage_07-2007.pdf[full citation needed]
- Fuchs, Victor R.; Krueger, Alan B.; Poterba, James M. (September 1998). "Economists' Views about Parameters, Values, and Policies: Survey Results in Labor and Public Economics". Journal of Economic Literature 36 (3): 1387–425. JSTOR 2564804.
- Klein, Daniel; Dompe, Stewart (January 2007). "Reasons for Supporting the Minimum Wage: Asking Signatories of the 'Raise the Minimum Wage' Statement". Economics in Practice 4 (1): 125–67.
- "Minimum Wage". IGM Forum. February 26, 2013. Retrieved December 6, 2013.
- "Suggestion: Raise welfare children in institutions". Star-News. Jan 28, 1972. Retrieved November 19, 2013.
- David Scharfenberg (April 28, 2014). "What The Research Says In The Minimum Wage Debate". WBUR.
- "50 State Resources Map on State EITCs". The Hatcher Group. Retrieved June 16, 2010.
- "New Research Findings on the Effects of the Earned Income Tax Credit". Center on Budget and Policy Priorities. Retrieved June 30, 2010.
- Furman, Jason (April 10, 2006). "Tax Reform and Poverty". Center on Budget and Policy Priorities. Retrieved December 7, 2013.
- "Response to a Request by Senator Grassley About the Effects of Increasing the Federal Minimum Wage Versus Expanding the Earned Income Tax Credit" (PDF). Congressional Budget Office. January 9, 2007. Retrieved July 25, 2008.
- Olson, Parmy (9/01/2009). The Best Minimum Wages In Europe. Forbes. Retrieved 21 February 2014.
- "Labor Criticizes". Lewiston Morning Tribune. Associated Press. March 2, 1933. pp. 1, 6.
- 75 economists back minimum wage hike CNN Money, January 14, 2014
- Over 600 Economists Sign Letter In Support of $10.10 Minimum Wage Economist Statement on the Federal Minimum Wage, Economic Policy Institute
- "Sanders Introduces Bill for $15-an-Hour Minimum Wage". Sen. Bernie Sanders. Retrieved 2015-09-15.
- The rapid success of Fight for $15: 'This is a trend that cannot be stopped' S. Greenhouse, The Guardian, US-News, 24 Jul 2015
- Burkhauser, R. V. (2014). Why minimum wage increases are a poor way to help the working poor (No. 86). IZA Policy Paper, Institute for the Study of Labor (IZA).
- Minimum wage at DMOZ
- Resource Guide on Minimum Wages from the International Labour Organization (a UN agency)
- Minimum Wage Rates in All States of India from Paycheck India
- The National Minimum Wage (U.K.) from official UK government website
- Find It! By Topic: Wages: Minimum Wage U.S. Department of Labor
- Characteristics of Minimum Wage Workers: 2009 U.S. Department of Labor, Bureau of Labor Statistics
- History of Changes to the Minimum Wage Law U.S. Department of Labor, Wage and Hour Division
- The Effects of a Minimum-wage Increase on Employment and Family Income Congressional Budget Office
- Inflation and the Real Minimum Wage: A Fact Sheet Congressional Research Service
- Minimum Wages in Central and Eastern Europe Database Central Europe
- Prices and Wages - research guide at the University of Missouri libraries
- Increasing national minimum wage - from official Aaron and Partners site
- Issues about Minimum Wage from the AFL-CIO (U.S. labor federation favoring the minimum wage)
- Issue Guide on the Minimum Wage from the Economic Policy Institute
- A $15 U.S. Minimum Wage: How the Fast-Food Industry Could Adjust Without Shedding Jobs from the Political Economy Research Institute, January 2015.
- Reporting the Minimum Wage from The Cato Institute (U.S. libertarian organization opposed to the minimum wage)
- The Economic Effects of Minimum Wages from Show-Me Institute (U.S. libertarian organization opposed to the minimum wage)
- Economics in One Lesson: The Lesson Applied, Chapter 19: Minimum Wage Laws by Henry Hazlitt | https://en.wikipedia.org/wiki/Minimum_wage |
4.0625 | Making Measurements in Science

The standard system of measurement all scientists use is the METRIC SYSTEM.

Measuring Mass
Mass is a measurement of the amount of matter in an object.
Instrument: balance (triple beam, double pan, or electronic)
Base unit: gram (g)
Other common units: milligram (mg), kilogram (kg)
Which is larger than a gram? Which is smaller?

Measuring Distance
Distance is a measurement of how far apart objects are from each other.
Instrument: meter stick
Base unit: meter (m)
Other common units: millimeter (mm), centimeter (cm), kilometer (km)
Which of the units are larger than a meter? Which are smaller?

Measuring Liquid Volume
Volume is a measurement of the amount of space an object or matter occupies.
Instrument: graduated cylinder or beaker
Which instrument will give you a more accurate measurement?
Base unit: liter (L)
Other common unit: milliliter (mL)
Is this smaller or larger than a liter?

Reading the Meniscus
When measuring liquid volume, you must remember to read the meniscus properly.
The meniscus is the curve of a liquid when placed in a cylinder.
Always make your measurement at the middle of the meniscus.
What is the measurement in the graduated cylinder?

Measuring Solid Volume
How do you measure the volume of an object that has a regular shape, such as a block of wood?
Instrument: meter stick
Formula: Volume = l x w x h
Base unit: cubic meter (m³)
Other common unit: cubic centimeter (cm³)
Why is the unit cubed?

Measuring Solid Volume: Water Displacement Method
How do you measure the volume of an object that has an irregular shape, such as a rock?
There are actually 2 methods, both involving water.
Instrument: graduated cylinder or overflow can
Base unit: cubic meter (m³)
Other common unit: cubic centimeter (cm³)
Which do you think is more accurate?
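A short script can make the two volume methods concrete; the sample measurements in it are invented for illustration.

```python
# Both volume methods from the slides, with invented sample measurements.
def block_volume(length_cm, width_cm, height_cm):
    """Regular solid: V = l x w x h, in cubic centimeters."""
    return length_cm * width_cm * height_cm

def displacement_volume(initial_ml, final_ml):
    """Irregular solid: the rise in water level in a graduated cylinder.
    One milliliter of water occupies one cubic centimeter."""
    return final_ml - initial_ml

print(block_volume(4.0, 3.0, 2.0))        # 24.0 cm^3 for a wooden block
print(displacement_volume(50.0, 63.5))    # 13.5 cm^3 for a rock
```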
Measuring Weight
Weight is a measurement of the force of gravity acting upon an object.
Instrument: spring scale
Base unit: Newton (N)

Measuring Temperature
Temperature is the measurement of the energy of the molecules in an object.
The higher the temperature, the faster the molecules are moving.
Instrument: thermometer
Base unit: degrees Celsius (°C)
Other unit: Kelvin (K) | http://www.slideshare.net/mrzolli/making-measurements-in-science |
4.1875 | 5 Written questions
5 Matching questions
- base-isolated building
- reverse fault
- strike-slip fault
- a rocks on either side of the fault slip past each other sideways, with little up or down motion
- b stress force that pulls on the crust stretching rock so that it becomes thinner in the middle.
- c number assigned by geologists based on the Earth quakes size
- d same as a reverse fault except the blocks move in the opposite direction
- e building designed to reduce the amount of energy that reaches the building during an earthquake.
5 Multiple choice questions
- the rock on a normal fault that lies below.
- the shaking and trembling that results from the movement of rock beneath the earths surface
- A type of seismic wave that compresses and expands the ground.
- A type of seismic wave that moves the ground up and down or side to side.
- Stress force that squeezes rock until until it folds or breaks
5 True/False questions
plateau → A type of seismic wave that compresses and expands the ground.
stress → A type of seismic wave that moves the ground up and down or side to side.
normal fault → same as a reverse fault except the blocks move in the opposite direction
mercalli scale → developed to rate earthquakes according to the level of damage at a given place.
SYNCLINE → a fold in rock that bends downward to form a valley | https://quizlet.com/6711830/test |
4.25 | Using Tangent Lines to Approximate Function Values
"Approximation" is what we do when we can't or don't want to find an exact value. We're going to approximate actual function values using tangent lines.
We pointed out earlier that if we zoom in far enough on a continuous function, it looks like a line. For example, take the function f(x) = x² and zoom in around x = 1. If we zoom in enough near x = 1, the function f looks like a line.
If we graph the function and its tangent line at 1, we'll see that as we zoom in around x = 1, the function f looks like its tangent line.
If we zoom back out a little bit, the function doesn't look quite so much like a line. However, the function and its tangent line are still "close together."
This means, for example, that the y-value on the tangent line at x = 1.1 is "close" to the y-value on the function f(x) = x² when x = 1.1.
We found earlier that the tangent line to f(x) = x² at 1 has the equation:
y = 2x – 1.
If we don't feel like calculating the actual value f(1.1), we can instead plug 1.1 into the tangent line equation and see what comes out:
2(1.1) – 1 = 2.2 – 1 = 1.2.
This is a good approximation of f(1.1):
If we then go and calculate the exact value of the function, we find
f(1.1) = 1.21.
This means our approximation was only 0.01 off.
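A few lines of code reproduce this arithmetic and show how the error grows as we move away from the point of tangency; the extra sample points are chosen arbitrarily.

```python
# Tangent-line (linear) approximation of f(x) = x**2 near x = 1.
def f(x):
    return x ** 2

def tangent_line(x, a=1.0):
    """Line through (a, f(a)) with slope f'(a) = 2a; at a = 1 this is y = 2x - 1."""
    return f(a) + 2 * a * (x - a)

for x in (1.1, 1.5, 2.0):
    approx, exact = tangent_line(x), f(x)
    print(f"x = {x}: tangent {approx:.2f}, exact {exact:.2f}, error {exact - approx:.2f}")
```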
Why bother? Approximation is supposed to make life easier, so why should we go to all that work of finding the equation of a line and finding the y-value of the line when x = 1.1 instead of calculating f(1.1) and being done with it?
In that example, we could calculate f(1.1) exactly, but we can't do that for every function. Try doing this with a function like ex or ln(x). Without a calculator, evaluating those functions for most values of x will get pretty hairy. | http://www.shmoop.com/derivatives/tangent-line-approximating-functions.html |
4 | Twelve Minor Prophets
The Minor Prophets or Twelve Prophets (Aramaic: תרי עשר, Trei Asar, "The Twelve"), occasionally Book of the Twelve, is the last book of the Nevi'im, the second main division of the Jewish Tanakh. The collection is broken up to form twelve individual books in the Christian Old Testament, one for each of the prophets. The terms "minor prophets" and "twelve prophets" can also refer to the twelve traditional authors of these works.
The term "Minor" relates to the length of each book (ranging from a single chapter to fourteen); even the longest is short compared to the three major prophets, Isaiah, Ezekiel and Jeremiah. It is not known when these short works were collected and transferred to a single scroll, but the first extra-biblical evidence we have for the Twelve as a collection is c. 190 BCE in the writings of Jesus ben Sirach, and evidence from the Dead Sea Scrolls suggests that the modern order was established by 150 BCE. It is believed that initially the first six were collected, and later the second six were added; the two groups seem to complement each other, with Hosea through Micah raising the question of iniquity, and Nahum through Malachi proposing resolutions.
In the Hebrew Old Testament, these works were counted as one book. The works are commonly studied together, and are consistently ordered in Jewish, Protestant and Catholic Bibles as:
- Hosea (Osee)
- Joel
- Amos
- Obadiah (Abdias)
- Jonah (Jonas)
- Micah (Micheas)
- Nahum
- Habakkuk (Habacuc)
- Zephaniah (Sophonias)
- Haggai (Aggeus)
- Zechariah (Zacharias)
- Malachi (Malachias)
Many, though not all, modern scholars agree that the editing process which produced the Book of the Twelve reached its final form in Jerusalem during the Achaemenid period (538–332 BCE), although there is disagreement over whether this was early or late. Scholars usually assume that there exists an original core of prophetic tradition behind each book which can be attributed to the figure after whom it is named. The noteworthy exception is the Book of Jonah, an anonymous work containing no prophetic oracles, probably composed in the Hellenistic period (332–167 BCE).
In general, each book includes three types of material:
- Autobiographical material in the first person, some of which may go back to the prophet in question;
- Biographical materials about the prophet in the third person – which incidentally demonstrate that the collection and editing of the books was completed by persons other than the prophets themselves;
- Oracles or speeches by the prophets, usually in poetic form, and drawing on a wide variety of genres, including covenant lawsuit, oracles against the nations, judgment oracles, messenger speeches, songs, hymns, narrative, lament, law, proverb, symbolic gesture, prayer, wisdom saying, and vision.
The comparison of different ancient manuscripts indicates that the order of the individual books was originally fluid. The arrangement found in current Bibles is roughly chronological. First come those prophets dated to the early Assyrian period: Hosea, Amos, Obadiah, Jonah, and Micah; Joel is undated, but it was possibly placed before Amos because parts of a verse near the end of Joel (3.16 [4.16 in Hebrew]) and one near the beginning of Amos (1.2) are identical. Both Amos (4.9 and 7.1–3) and Joel also describe a plague of locusts. These are followed by prophets that are set in the later Assyrian period: Nahum, Habakkuk, and Zephaniah. Last come those set in the Persian period: Haggai, Zechariah, and Malachi. However, chronology was not the only consideration: "It seems that an emphatic focus on Jerusalem and Judah was [also] a main concern." For example, Obadiah is generally understood as reflecting the destruction of Jerusalem in 586 BCE and would therefore fit later in a purely chronological sequence.
In the Roman Catholic Church, the twelve minor prophets are read in the Lectionary during the fourth and fifth weeks of November, which are the last two weeks of the liturgical year. They are collectively commemorated in the Calendar of saints of the Armenian Apostolic Church on July 31.
| https://en.wikipedia.org/wiki/Twelve_Minor_Prophets |
4.1875 | Like their human hosts, bacteria need iron to survive and they must obtain that iron from the environment. While humans obtain iron primarily through the food they eat, bacteria have evolved complex and diverse mechanisms to allow them access to iron. A Syracuse University research team led by Robert Doyle, assistant professor of chemistry in The College of Arts and Sciences, discovered that some bacteria are equipped with a gene that enables them to harvest iron from their environment or human host in a unique and energy efficient manner. Doyle's discovery could provide researchers with new ways to target such diseases as tuberculosis. The research will be published in the August issue (volume 190, issue 16) of the prestigious Journal of Bacteriology, published by the American Society for Microbiology.
"Iron is the single most important micronutrient bacteria need to survive," Doyle says. "Understanding how these bacteria thrive within us is a critical element of learning how to defeat them."
Doyle's research group studied Streptomyces coelicolor, a Gram-positive bacteria that is closely related to the bacteria that causes tuberculosis. Streptomyces is abundant in soil and in decaying vegetation, but does not affect humans. The TB bacteria and Streptomyces are both part of a family of bacteria called Actinomycetes. These bacteria have a unique defense mechanism that enables them to produce chemicals to destroy their enemies. Some of these chemicals are used to make antibiotics and other drugs.
Actinomycetes need lots of iron to wage chemical warfare on its enemies; however, iron is not easily accessible in the environments in which the bacteria live— e.g. human or soil. Some iron available in the soil is bonded to citrate, making a compound called iron-citrate. Citrate is a substance that cells can use as a source of energy. Doyle and his research team wondered if the compound iron-citrate could be a source of iron for the bacteria. In a series of experiments that took place over more than two years, the researchers observed that Streptomyces could ingest iron-citrate, metabolize the iron, and use the citrate as a free source of energy. Other experiments demonstrated that the bacteria ignored citrate when it was not bonded to iron; likewise, the bacteria ignored citrate when it was bonded to other metals, such as magnesium, nickel, and cobalt.
The next task was to uncover the mechanism that triggered the bacteria to ingest iron-citrate. Computer modeling predicted that a single Streptomyces gene enabled the bacteria to identify and ingest iron-citrate. The researchers isolated the gene and added it to E. coli bacteria (which is not an Actinomycete bacteria). They found that the mutant E. coli bacteria could also ingest iron-citrate. Without the gene, E. coli could not gain access to the iron.
"It's amazing that the bacteria could learn to extract iron from their environment in this way," Doyle says. "We went into these experiments with no idea that this mechanism existed. But then, bacteria have to be creative to survive in some very hostile environments; and they've had maybe 3.5 billion years to figure it out."
The Streptomyces gene enables the bacteria to passively diffuse iron-citrate across the cell membrane, which means that the bacteria do not expend additional energy to ingest the iron. Once in the cell, the bacteria metabolize the iron and, as an added bonus, use the citrate as an energy source. Doyle's team is the first to identify this mechanism in a bacteria belonging to the Actinomycete family. The team plans further experiments to confirm that the gene performs the same signaling function in tuberculosis bacteria. If so, the mechanism could potentially be exploited in the fight against tuberculosis.
"TB bacteria have access to an abundant supply of iron-citrate flowing through the lungs in the blood," Doyle says. "Finding a way to sneak iron from humans at no energy cost to the bacteria is as good as it gets. Our discovery may enable others to figure out a way to limit TB's access to iron-citrate, making the bacteria more vulnerable to drug treatment."
Source: Syracuse University | http://phys.org/news/2008-07-scientists-bacteria-iron-human-hosts.html |
4.3125 | Students will learn about the conservation of angular momentum and how to apply it in both conceptual questions and problem-solving situations.
The angular momentum of a spinning object can be found in two equivalent ways. One way, analogous to linear momentum, is to multiply the moment of inertia (the rotational analog of mass) by the angular velocity: L = Iω. The other way, for an object moving in a circle, is simply to multiply the linear momentum by the radius: L = pr.
Just as a force produces a change in linear momentum over time (F = Δp/Δt), a torque produces a change in angular momentum over time (τ = ΔL/Δt).
The same principles that hold for conservation of linear momentum apply to conservation of angular momentum. The direction of L is given by the right-hand rule: curl the fingers of your right hand in the direction the object is spinning, and your thumb points in the direction of the vector.
- Angular momentum cannot change unless an outside torque is applied to the object.
- Recall that angular momentum is a vector quantity, so the direction of a spinning object's axis cannot change without an applied torque.
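A short numerical sketch of these ideas, in the spirit of the star-collapse problem below: for a uniform solid sphere, I is proportional to MR², so with no mass loss the angular velocity grows by (R1/R2)² when the object shrinks. The 10.0-day period matches the problem statement, but the collapse fraction used here (1/100) is a made-up stand-in because the problem's own value is not shown, so the printed numbers are illustrative only.

```python
import math

# Conservation of angular momentum with no external torque: I1 * w1 = I2 * w2.
# For a uniform solid sphere, I = (2/5) M R^2, so if M is unchanged,
# w2 = w1 * (R1 / R2)^2 and the period shrinks by the same factor.

initial_period_s = 10.0 * 86400      # 10.0 days expressed in seconds
radius_fraction = 1.0 / 100.0        # hypothetical collapse to 1/100 of the original radius

w1 = 2 * math.pi / initial_period_s  # initial angular velocity, rad/s
w2 = w1 / radius_fraction ** 2       # angular velocity after collapse
new_period_s = 2 * math.pi / w2

print(f"initial angular velocity: {w1:.2e} rad/s")
print(f"after collapse:           {w2:.2e} rad/s (period {new_period_s:.1f} s)")
```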
- You have two coins; one is a standard U.S. quarter, and the other is a coin of equal mass and size, but with a hole cut out of the center.
- A star is rotating with a period of 10.0 days. It collapses with no loss in mass to a white dwarf with a radius of of its original radius.
- What is its initial angular velocity?
- What is its angular velocity after collapse?
- A merry-go-round consists of a uniform solid disc of and a radius of . A single person stands on the edge when it is coasting at revolutions per sec. How fast would the device be rotating after the person has walked toward the center. (The moments of inertia of compound objects add.)
- a. Coin with the hole b. Coin with the hole
- a. b. | http://www.ck12.org/physics/Angular-Momentum/lesson/Angular-Momentum/ |
4.3125 | An ocean current is a continuous, directed movement of seawater generated by forces acting upon this mean flow, such as breaking waves, wind, the Coriolis effect, cabbeling, and temperature and salinity differences, while tides are caused by the gravitational pull of the Sun and Moon. Depth contours, shoreline configurations, and interactions with other currents influence a current's direction and strength.
Ocean currents flow for great distances, and together, create the global conveyor belt which plays a dominant role in determining the climate of many of the Earth’s regions. More specifically, ocean currents influence the temperature of the regions through which they travel. For example, warm currents traveling along more temperate coasts increase the temperature of the area by warming the sea breezes that blow over them. Perhaps the most striking example is the Gulf Stream, which makes northwest Europe much more temperate than any other region at the same latitude. Another example is Lima, Peru where the climate is cooler (sub-tropical) than the tropical latitudes in which the area is located, due to the effect of the Humboldt Current.
Surface oceanic currents are sometimes wind driven and develop their typical clockwise spirals in the northern hemisphere and counterclockwise rotation in the southern hemisphere because of imposed wind stresses. In wind driven currents, the Ekman spiral effect results in the currents flowing at an angle to the driving winds. The areas of surface ocean currents move somewhat with the seasons; this is most notable in equatorial currents.
Ocean basins generally have a non-symmetric surface current, in that the eastern equatorward-flowing branch is broad and diffuse whereas the western poleward flowing branch is very narrow. These western boundary currents (of which the Gulf Stream is an example) are a consequence of the rotation of the Earth.
Deep ocean currents are driven by density and temperature gradients. Thermohaline circulation is also known as the ocean's conveyor belt (which refers to deep ocean density driven ocean basin currents). These currents, called submarine rivers, flow under the surface of the ocean and are hidden from immediate detection. Where significant vertical movement of ocean currents is observed, this is known as upwelling and downwelling. Deep ocean currents are currently being researched using a fleet of underwater robots called Argo.
The South Equatorial Currents of the Atlantic and Pacific straddle the equator. Though the Coriolis effect is weak near the equator (and absent at the equator), water moving in the currents on either side of the equator is deflected slightly poleward and replaced by deeper water. Thus, equatorial upwelling occurs in these westward flowing equatorial surface currents. Upwelling is an important process because this water from within and below the pycnocline is often rich in the nutrients needed by marine organisms for growth. By contrast, generally poor conditions for growth prevail in most of the open tropical ocean because strong layering isolates deep, nutrient rich water from the sunlit ocean surface.
Surface currents make up only 8% of all water in the ocean, are generally restricted to the upper 400 m (1,300 ft) of ocean water, and are separated from lower regions by varying temperatures and salinity which affect the density of the water, which in turn, defines each oceanic region. Because the movement of deep water in ocean basins is caused by density driven forces and gravity, deep waters sink into deep ocean basins at high latitudes where the temperatures are cold enough to cause the density to increase.
Ocean currents are measured in sverdrup (sv), where 1 sv is equivalent to a volume flow rate of 1,000,000 m3 (35,000,000 cu ft) per second.
Horizontal and vertical currents also exist below the pycnocline in the ocean's deeper waters. The movement of water due to differences in density as a function of water temperature and salinity is called thermohaline circulation. Ripple marks in sediments, scour lines, and the erosion of rocky outcrops on deep-ocean floors are evidence that relatively strong, localized bottom currents exist. Some of these currents may move as rapidly as 60 centimeters (24 inches) per second.
These currents are strongly influenced by bottom topography, since dense, bottom water must forcefully flow around seafloor projections. Thus, they are sometimes called contour currents. Bottom currents generally move equator-ward at or near the western boundaries of ocean basins (below the western boundary surface currents). The deep-water masses are not capable of moving water at speeds comparable to that of wind-driven surface currents. Water in some of these currents may move only 1 to 2 meters per day. Even at that slow speed, the Coriolis effect modifies their pattern of flow.
Downwelling of deep water in polar regions
Antarctic Bottom Water is the most distinctive of the deep-water masses. It is characterized by a salinity of 34.65‰, a temperature of -0.5 °C (31 °F), and a density of 1.0279 grams per cubic centimeter. This water is noted for its extreme density (the densest in the world ocean), for the great amount of it produced near Antarctic coasts, and for its ability to migrate north along the seafloor. Most Antarctic Bottom Water forms near the Antarctic coast south of South America during winter. Salt is concentrated in pockets between crystals of pure water and then squeezed out of the freezing mass to form a frigid brine. Between 20 million and 50 million cubic meters of this brine form every second. The water's great density causes it to sink toward the continental shelf, where it mixes with nearly equal parts of water from the southern Antarctic Circumpolar Current. The mixture settles along the edge of Antarctica's continental shelf, descends along the slope, and spreads along the deep-sea bed, creeping north in slow sheets. Antarctic Bottom Water flows many times as slowly as the water in surface currents: in the Pacific it may take a thousand years to reach the equator. Antarctic Bottom Water also flows into the Atlantic Ocean basin, where it flows north at a faster rate than in the Pacific. Antarctic Bottom Water has been identified as high as 40° N on the Atlantic floor.
A small amount of dense bottom water also forms in the northern polar ocean. However, the topography of the Arctic Ocean basin prevents most of this bottom water from escaping, except through deep channels formed in the submarine ridges between Scotland, Iceland, and Greenland. These channels allow the cold, dense water formed in the Arctic to flow into the North Atlantic to form North Atlantic Deep Water. North Atlantic Deep Water forms when the relatively warm and salty North Atlantic Ocean cools as cold winds from northern Canada sweep over it. Exposed to the chilled air, water at the latitude of Iceland releases heat, cools from 10 °C to 2 °C, and sinks. Gulf Stream water that sinks in the north is replaced by warm water flowing clockwise along the U.S. east coast in the North Atlantic gyre.
Knowledge of surface ocean currents is essential in reducing costs of shipping, since traveling with them reduces fuel costs. In the wind powered sailing-ship era, knowledge was even more essential. A good example of this is the Agulhas Current, which long prevented Portuguese sailors from reaching India. In recent times, around-the-world sailing competitors make good use of surface currents to build and maintain speed. Ocean currents are also very important in the dispersal of many life forms. An example is the life-cycle of the European Eel.
Ocean currents are important in the study of marine debris, and vice versa. These currents also affect temperatures throughout the world. For example, the ocean current that brings warm water up the north Atlantic to northwest Europe also cumulatively and slowly blocks ice from forming along the seashores, which would also block ships from entering and exiting inland waterways and seaports, hence ocean currents play a decisive role in influencing the climates of regions through which they flow. Cold ocean water currents flowing from polar and sub-polar regions bring in a lot of plankton that are crucial to the continued survival of several key sea creature species in marine ecosystems. Since plankton are the food of fish, abundant fish populations often live where these currents prevail.
Ocean currents can also be used for marine power generation, with areas off of Japan, Florida and Hawaii being considered for test projects.
OSCAR: Near-realtime global ocean surface current data set
The OSCAR near-realtime global ocean surface currents website allows users to create customized graphics and download the data. A section of the website provides validation studies in the form of graphics comparing OSCAR data with moored buoys and global drifters.
OSCAR data is used extensively in climate studies. Maps and descriptions or annotations of climatic anomalies have been published in the monthly Climate Diagnostic Bulletin since 2001 and are routinely used to monitor ENSO and to test weather prediction models. OSCAR currents are routinely used to evaluate the surface currents in Global Circulation Models (GCMs), for example in the NCEP Global Ocean Data Assimilation System (GODAS) and at the European Centre for Medium-Range Weather Forecasts (ECMWF).
- Deep ocean water
- Thermohaline circulation
- Fish migration
- List of ocean circulation models
- Oceanic gyres
- Physical oceanography
- Marine current power
- Latitude of the Gulf Stream and the Gulf Stream north wall index
- NOAA Ocean Surface Current Analyses - Realtime (OSCAR) Near-realtime Global Ocean Surface Currents derived from satellite altimeter and scatterometer data.
- RSMAS Ocean Surface Currents
- Coastal Ocean Current Monitoring Program
- Ocean Motion and Surface Currents
- Data Visualizer from OceanMotion.org
- Changes in Ocean Circulation - Cluster of Excellence "Future Ocean", Kiel | https://en.wikipedia.org/wiki/Current_(ocean) |
4 | Marfan syndrome is a genetic disorder that affects the body’s connective tissue. Connective tissue holds all the body’s cells, organs and tissue together.
Marfan syndrome is a condition in which your body's connective tissue is abnormal. Connective tissue helps support all parts of your body. It also helps control how ...
WebMD's guide to Marfan syndrome, an inherited disease that affects the heart.
Marfan syndrome is a life-threatening genetic disorder, and an early, accurate diagnosis is essential, not only for people with Marfan syndrome, but also for those ...
Marfan syndrome is an inherited disorder that affects connective tissue — the fibers that support and anchor your organs and other structures in your body.
- Read about Marfan syndrome, a hereditary condition affecting connective tissue. Read about Marfan syndrome facts, treatment, symptoms, prognosis, life expectancy ...
Marfan Syndrome. October 2015. Questions and Answers about Marfan Syndrome. This publication answers general questions about Marfan syndrome. It describes the ...
Marfan syndrome (also called Marfan's syndrome) is a genetic disorder of connective tissue. It has a variable clinical presentation, ranging from mild to severe ...
Marfan syndrome is a disorder that affects the connective tissue in many parts of the body. Connective tissue provides strength and flexibility to ...
This is an easy-to-read public information piece. Marfan syndrome is a disorder that affects connective tissue. | http://search.lycos.com/web/?q=marfan_syndrome |
4 | The Mexican-American War began on April 25, 1846 and ended February 2, 1848.
In the U.S. the war is termed the Mexican–American War, also known as the Mexican .... In the sparsely settled interior of northern Mexico, the end of Spanish ... that this territory, which it...
War's End ... The Treaty of Guadalupe Hidalgo ended the U.S.-Mexican War. .... the Native Americans in the ceded territories, who in fact were Mexican citizens, ...
Find out more about the history of Mexican-American War, including videos, interesting articles, ... Did You Know? ... Santa Anna convinced Polk that, if allowed to return to Mexico, he would end the war on terms favorable to the
... on February 2, 1848, ended the Mexican-American War in favor of the United ...
1846, asked Congress to declare war on Mexico, which it did two days later.
... which brought an official end to the Mexican-American War (1846-1848) was ...
Trist determined that Washington did not understand the situation in Mexico ...
Between 1846 and 1848, a war fought between two North American nations, the United States and Mexico, did what most wars do- it began with a ... incur many battles, be fought mostly on land, and result in an end to the Texas/Mexico border
Instead, Mexico's neighbor to the north had captured the country. How and why did the United States defeat Mexico in the Mexican-American War? To the victors
The Mexican-American War From 1846-1848. ... William Huddle's 1886 depiction of the end of the Texas Revolution shows Mexican General Santa Anna.
Sep 11, 2015 ... Mexican-American War, also called Mexican War, Spanish Guerra de ... Ultimately, the House did not act on Lincoln's resolutions, and Polk ... | http://www.ask.com/web?q=When+Did+the+Mexican+American+War+End%3F&o=2603&l=dir&qsrc=3139&gc=1 |
4 | Rhabdomyolysis is a condition that may occur when muscle tissue in the body is damaged by an injury (rhabdomyo = skeletal muscle + lysis = rapid breakdown). There are three types of muscle in the body, including:
- skeletal muscles that move the body;
- cardiac muscle located in the heart; and
- smooth muscle that lines blood vessels, the gastrointestinal tract, bronchi in the lung, and the bladder and uterus. This type of muscle is not under conscious control.
Rhabdomyolysis occurs when there is damage to the skeletal muscle.
The injured muscle cell leaks myoglobin (a protein) into the blood stream. Myoglobin can be directly toxic to kidney cells, and it can impair and clog the filtration system of the kidney. Both mechanisms can lead to kidney failure (the major complication of rhabdomyolysis).
Significant muscle injury can cause fluid and electrolyte shifts from the bloodstream into the damaged muscle cells, and in the other direction (from the damaged muscle cells into the bloodstream). As a result, dehydration may occur. Elevated levels of potassium in the bloodstream (hyperkalemia) may be associated with heart rhythm disturbances and sudden cardiac death due to ventricular tachycardia and ventricular fibrillation.
Complications of rhabdomyolysis also include disseminated intravascular coagulation, a condition that occurs when small blood clots begin forming in the body's blood vessels. These clots consume all the clotting factors and platelets in the body, and bleeding begins to occur spontaneously.
When muscles are damaged, especially due to a crush injury, swelling within the muscle can occur, causing compartment syndrome. If this occurs in an area where the muscle is bound by fascia (a tough fibrous tissue membrane), the pressure inside the muscle compartment can increase to the point at which blood supply to the muscle is compromised and muscle cells begin to die.
Rhabdomyolysis was first appreciated as a significant complication of crush and blast injuries sustained in a volcano eruption in Italy in 1908. Victims of blast injuries during the First and Second World Wars helped further the understanding of the relationship between massive muscle damage and kidney failure.
| http://www.emedicinehealth.com/rhabdomyolysis/article_em.htm |
4 | Eastern long-beaked echidna
The eastern long-beaked echidna (Zaglossus bartoni), also known as Barton's long-beaked echidna, is one of three species from the genus Zaglossus to occur in New Guinea. It is found mainly in Papua New Guinea at elevations between 2,000 and 3,000 metres (6,600 and 9,800 ft).
The eastern long-beaked echidna can be distinguished from other members of the genus by the number of claws on the fore and hind feet: it has five claws on its fore feet and four on its hind feet. Its weight varies from 5 to 10 kilograms (11 to 22 lb); its body length ranges from 60 to 100 centimetres (24 to 39 in); it has no tail. It has dense black fur. It rolls into a spiny ball for defense.
All long-beaked echidnas were classified as a single species until 1998, when Flannery published an article identifying several new species and subspecies. Three species were then recognized based on various attributes such as body size, skull morphology, and the number of toes on the front and back feet.
There are four recognized subspecies of Zaglossus bartoni:
- Z. bartoni bartoni
- Z. bartoni clunius
- Z. bartoni smeenki
- Z. bartoni diamondi
The population of each subspecies is geographically isolated. The subspecies are distinguished primarily by differences in body size.
Eastern long-beaked echidnas are mainly insect eaters, or insectivores. The long snout proves essential for the echidna's survival because of its ability to get into hard-to-reach places and scavenge for smaller organisms such as larvae and ticks. Along with this snout, they have a specific evolutionary adaptation in their tongues for snatching up various earthworms, which are their main food source.
Zaglossus bartoni habitats range from tropical hill forests to sub-alpine forests, upland grasslands, and scrub. The species has been found in locations up to an elevation of around 4,150 m. Today it is rare to find Zaglossus bartoni at sea level.
Ecology and Behavior
Humans are the main factor in diminishing populations of eastern long-beaked echidnas. Locals in areas surrounding regions that these organisms inhabit often prey upon them for food. Feral dogs are known to occasionally consume this species. These mammals dig burrows, providing some protection from predation.
The eastern long-beaked echidna is a member of the order Monotremata. Although monotremes have some of the same features as other mammals, such as hair and mammary glands, they do not give birth to live young; they lay eggs. Like birds and reptiles, monotremes have a single opening, the cloaca. The cloaca allows for the passage of urine and feces, the transmission of sperm, and the laying of eggs.
Little is actually known about the breeding behaviors of this animal, due to the difficulty of finding and tracking specimens. The way the spines on the echidna lie makes it difficult to attach tracking devices, and the animals themselves are hard to find because they are mainly nocturnal.
- Groves, C.P. (2005). "Order Monotremata". In Wilson, D.E.; Reeder, D.M. Mammal Species of the World: A Taxonomic and Geographic Reference (3rd ed.). Johns Hopkins University Press. p. 1. ISBN 978-0-8018-8221-0. OCLC 62265494.
- Leary, T., Seri, L., Flannery, T., Wright, D., Hamilton, S., Helgen, K., Singadan, R., Menzies, J., Allison, A., James, R., Aplin, K., Salas, L. & Dickman, C. (2008). Zaglossus bartoni. In: IUCN 2008. IUCN Red List of Threatened Species. Retrieved 28 December 2008. Database entry includes justification for why this species is listed as critically endangered.
- Flannery, T. F.; Groves, C. P. (1998). "A revision of the genus Zaglossus (Monotremata, Tachyglossidae), with description of new species and subspecies". Mammalia 62 (3): 367–396.
- Wilson, Don E. "Zaglossus bartoni". Integrated Taxonomic Information System. Retrieved 25 October 2013.
- "Zaglossus bartoni (Eastern Long-beaked Echidna)". The IUCN Red List of Threatened Species. 2014. Retrieved 29 July 2014.
- "Monotreme". Columbia Electronic Encyclopedia, 6th Edition. EBSCOhost. 2013. ISBN 9780787650155.
- Opiang, Muse (April 2009). "Home Ranges, Movement, and Den Use in Long-Beaked Echidnas, Zaglossus Bartoni, From Papua New Guinea". Journal of Mammalogy (American Society of Mammalogists) 90 (2): 340–346. doi:10.1644/08-MAMM-A-108.1.
- Flannery, T.F. and Groves, C.P. 1998. A revision of the genus Zaglossus (Monotremata, Tachyglossidae), with description of new species and subspecies. Mammalia, 62(3): 367–396
- EDGE of Existence (Zaglossus spp.) – Saving the World's most Evolutionarily Distinct and Globally Endangered (EDGE) species | https://en.wikipedia.org/wiki/Eastern_Long-beaked_Echidna |
4.1875 | West Nile Virus and West Nile Encephalitis (WNE)
West Nile Virus Facts
- West Nile virus is transmitted to humans by mosquito bites and may cause encephalitis (West Nile encephalitis or WNE) in a few patients.
- West Nile virus usually occurs in birds but can be transmitted by a mosquito vector to humans.
- Symptoms of West Nile viral infections may range from no symptoms to fever, chills, muscle aches, headaches, and sensitivity to light; severe infections may cause additional symptoms associated with meningitis, encephalitis, coma, seizures, and infrequently, death.
- West Nile virus infections are diagnosed by the patient's physical exam and by immunological tests.
- Treatment for West Nile virus infections is mainly supportive and is aimed at reducing symptoms; severe infections often require hospital treatment.
- Risk factors for West Nile virus infections include exposure to infected mosquitoes, being age 50 and older, and having any medical problem that reduces the immune response.
- In general, the prognosis of most West Nile viral infections is very good; however, severe infections have a more guarded prognosis because of potential neurological damage.
- Currently, there is no vaccine available to prevent West Nile virus infections in humans; however, preventing mosquito bites by several methods (wearing long-sleeved shirts and long pants, using mosquito repellent, and eliminating areas that are good breeding grounds for mosquitoes) helps prevent infections.
West Nile Virus Overview
West Nile virus is a Flaviviridae virus transmitted to humans by mosquito bites. Symptoms range from none to severe: encephalitis (inflammation of the brain) or meningitis (inflammation of the lining of the brain and spinal cord). The disease the virus causes is termed West Nile encephalitis (WNE). WNE currently is endemic in Asia, Africa, and the Middle East. Since 1999, the disease has been detected in many U.S. states and is now considered endemic in the U.S.; by 2013, 39,567 individuals had been diagnosed with the disease. From 2013 to 2015, about 2,000 new West Nile infections per year were detected across 47 U.S. states.
West Nile virus was discovered in 1937 in the West Nile district of Uganda. Although wild birds are the preferred hosts for the virus and are likely the hosts that spread the disease from country to country, West Nile virus can also infect other mammals such as horses and dogs. Since the virus was first detected in the United States in 1999, there has been an outbreak of West Nile virus in the U.S. every year (for example, outbreaks have occurred in California, Arizona, Illinois, Massachusetts, Oregon, Pennsylvania, Wisconsin, and Texas); the virus has been detected in 47 U.S. states and in Canada.
| http://www.emedicinehealth.com/west_nile_virus/article_em.htm |
4.25 | As of right now, the Common Core Standards objective numbers are written for grades 3 and up.
*With prompting and support, ask and answer questions about key details in a text.
*Identify the front cover, back cover, and title page of a book.
*Name the author and illustrator of a text and define the role of each in presenting the ideas or information in a text.
*Actively engage in group reading activities with purpose and understanding.
*Use a combination of drawing, dictating, and writing to compose opinion pieces in which they tell a reader the topic or the name of the book they are writing about and state an opinion or preference about the topic or book (e.g., My favorite book is . . .).
*Confirm understanding of a text read aloud or information presented orally or through other media by asking and answering questions about key details and requesting clarification if something is not understood.
*Write numbers from 0 to 20. Represent a number of objects with a written numeral 0-20 (with 0 representing a count of no objects).
*Count to answer “how many?” questions about as many as 20 things arranged in a line, a rectangular array, or a circle, or as many as 10 things in a scattered configuration; given a number from 1–20, count out that many objects.
*Identify whether the number of objects in one group is greater than, less than, or equal to the number of objects in another group, e.g., by using matching and counting strategies.
*Solve addition and subtraction word problems, and add and subtract within 10, e.g., by using objects or drawings to represent the problem.
Life cycle- the process of moving from one stage of life to another (egg-caterpillar (larva)-pupa-butterfly)
Cocoon- a shell formed around a moth larva for protection during the pupal stage
Chrysalis- a shell formed around a butterfly larva for protection during the pupal stage
Metamorphosis- a change of physical form such as a caterpillar to a butterfly
Egg- a protective hard shell from which a baby caterpillar hatches
Larva-a caterpillar that hatches from an egg
Pupa- the stage of a caterpillar's life where it builds a protective covering (chrysalis/butterfly or cocoon/moth) around itself so that it can turn into an adult
Butterfly- an insect that flies around in the daytime with brightly colored wings and a hollow tongue for sucking nectar from plants
The teacher will:
1. Read the story, The Very Hungry Caterpillar by Eric Carle
a. after reading (can be later on in the day or next day as day 2 of story) lead discussion about
* which days had an even or an odd number of food,
*which day did he eat the most food, the least food
*if you added Monday and Tuesday's food together, would it be greater than (more), less than (less) or equal to Wednesday's food? ***put different combinations together
b. if you decide to do the food pyramid chart, discuss the categories and which foods are good choices and which foods are once in a while choices. Have children come up to put laminated cut outs of the food the caterpillar ate in the proper spots
2. Discuss/review the vocabulary words before, during and after the story
a. write words on board so children can notice spelling patterns and use vocab in writing
3. Use the poly vision board or poster to discuss and show pictures of the life cycle of a butterfly
4. Give the instructions and model how to complete the life cycle worksheet
5. Give the instructions and model how to complete the writing paper and illustration on which part of the story was the student's favorite part and why.
The students will:
1. Listen to the teacher read The Very Hungry Caterpillar by Eric Carle
2a. Tell which day had an even or an odd number of food, which day had the most and the least amount of food and add two days together and answer whether those days food totals are greater than, less than or equal to another day (laminated cut outs of the food can be used to solve addition problems and for students who need to manipulate items to see <, > and =)
2b. Place laminated cut outs of the food that the caterpillar ate in the proper spots on the food pyramid chart.
3. Join in the discussion of the vocabulary words (ask questions if need be, repeat word, make prediction of what word means)
4. Use the poly vision board to see pictures of the life cycle of a butterfly.
5. Complete the life cycle worksheet
6. Complete the writing and illustration paper on which part of the story and why was their favorite. (I liked it when the caterpillar ......./ The best part was .....)
7. Share their writing with the class if they would like to.
Why do you think Eric Carle chose to write a book about a caterpillar?
Which day did the caterpillar eat the most food? (Saturday)
Which day did the caterpillar eat the least amount of food? (Monday)
Which amount of food is an even number? (2, 4)
Which amount of food is an odd number? (1, 3, 5)
Which foods that the caterpillar ate are good choices on the food pyramid? (apple, pear, plum, strawberries, oranges, cheese, pickle, sausage, salami, watermelon)
Which foods that the caterpillar ate are choices that we should only have every once in awhile? (lollipop, cupcake/cake, pie)
When we are making choices on what food we want to eat, why should you choose these choices (point to laminated "good choice foods") over these choices (point to laminated "once in a while foods")?
How many parts are there in a butterfly life cycle? (4)
What is it called when when a caterpillar moves from one stage of its life to another? (life cycle)
What are the four parts of the butterfly life cycle? (egg, larva, pupa, butterfly)
Which part comes first? (egg) Which part is last? (butterfly)
What is the protective covering around the caterpillar called? (chrysalis)
What is it called when a caterpillar turns into a butterfly? (metamorphosis)
Which part of the story was your favorite part? Why do you like that part?
One 30-40 minute session for reading, discussing life cycle and completing life cycle sheet.
One 20-30 minute session for writing and illustrating about favorite part.
One 20 minute session for math questions and/or enrichment chart.
The Very Hungry Caterpillar book by Eric Carle.
caterpillar puppet, a piece of fabric for the chrysalis, butterfly puppet and laminated pictures of what it eats
poly vision board and pen
dry erase markers (just in case poly vision board pen isn't working properly)
Butterfly life cycle poster
Life cycle worksheet and construction paper
pencil, crayons, scissors, glue
|W:||I will introduce the story of The Very Hungry Caterpillar by telling the children they will be learning about the life cycle of a butterfly. I will help them understand the new vocabulary words by showing them pictures of the life cycle of a butterfly and the words that go along with each stage. I will explain that they will be responsible for completing a life cycle paper that looks like the one we are completing as a class.|
|H:||I will hold the students attention during reading the story by using a caterpillar puppet, cut outs of the food he eats, fabric to wrap him in a chrysalis and a butterfly puppet to represent the life cycle of a butterfly.|
|E:||I will motivate the students to further understand the life cycle by showing pictures of different stages of a butterfly cycle and, after breaking into groups, having the random reporter tell us which stage the butterfly in the picture is in.|
|R:||I will motivate the students to reflect, revisit, revise and rethink the vocabulary words by reviewing the stages and the vocabulary words first and then explaining the instructions on how to complete the life cycle worksheet by themselves at their desks.|
|E:||I will evaluate the students understanding of the butterfly life cycle by giving them a worksheet that asks them to match the pictures of each stage with the vocabulary words.|
|T:||To meet the needs of all of my students, I will use strategies such as Think-Pair-Share, random reporter and have the children partner up to find the answers to questions posed during reading. I will gear my level of questioning based on the developmental level of each child (ask easier questions to those who struggle, harder questions to those who need more challenge) and use manipulatives to find the answers if need be.|
|O:||I will organize the objectives that the children have learned to further their prior knowledge for later concepts. As part of reading unit on trees and plants, I could have the children compare the life cycle of a butterfly to the life cycle of a tree or plant. That discussion could lead to a discussion on the food that trees and plants give us which could tie in with the food pyramid and making healthy eating choices.|
The teacher will use the poly vision board to:
1. show a video of a butterfly life cycle
2. show pictures of each stage so the students can use the pen to label each stage
3. show pictures of different butterflies.
4. show a food pyramid that the students can interact with and decide in which section would each piece of food the caterpillar eats belong
1. After discussing the differences between a moth and a butterfly, the teacher will show pictures of each and the children will decide whether the picture is a moth or a butterfly.
2. After viewing different types of butterflies on the poly vision board, each child will pick the one they liked best, print the picture and attach it to their writing on why that butterfly is their favorite.
3. Create a class graph (which includes tally marks, numbers and words) to show which butterfly is the favorite.
4. Based on the food that the caterpillar eats in the story, create a food pyramid to show which food choices are healthy and which are not. (use laminated cut outs of food for children to place in proper spots on food pyramid)
The Very Hungry Caterpillar by Eric Carle
1. Tell whether the food on any given day within the story is an even number or an odd number?
Even yes/no Odd yes/no
2. Add two days of food from the story together? Yes/No
3. Tell which number of food is greater than, less than or equal to another number of food?
Greater than Yes/No
Less than Yes/No
Equal to Yes/No
4. Put the pictures of a butterfly life cycle in sequential order? Yes/No
5. Match the words (egg, larva, pupa, butterfly) to the correct pictures in the butterfly life cycle?
6. Write and illustrate a sentence (or two) about the part of the story that they liked best? Yes/No
***Questions 1,2 and 3 can be assessed either one on one or using individual white boards or chalkboards in a whole group setting.
***Questions 4, 5 and 6 can be assessed with the student's work.
Other resources I could use with this lesson are:
1. Eric Carle library (The Very Grouchy Ladybug, The Very Quiet Cricket, etc...)
2. Interview with Eric Carle on Reading Rockets Intervention site: http://www.readingrockets.org/books/interviews/carle/ | http://www.pdesas.org/module/content/resources/19862/view.ashx |
4.3125 | On January 16, 1920 the United States embarked on one of its greatest social experiments—the effort to prohibit within its borders the manufacture and sale of alcoholic beverages. A year earlier, the 18th Amendment had been ratified by the states, setting the process in motion; the federal government had followed with enabling legislation, defining alcoholic drinks, establishing an enforcement procedure, and setting penalties for violators.
The drive to prohibit the consumption of intoxicating beverages was not an American innovation. Most societies from antiquity shared a common desire to maintain stability and believed that drunkenness led too often to signs of alcoholism, impoverishment and the disintegration of families.
Movements for temperance developed in many western countries, particularly in northern Europe. Public attitudes toward drinking were often much more accepting in the Mediterranean European countries.
The First Reform Era in the pre-Civil War United States brought a host of social concerns to public attention. Beginning with an outburst of religious enthusiasm, the movement concentrated most notably on the abolition of slavery, but also on the punitive treatment of the mentally ill, the wretched conditions of prisoners and the growing toll taken by Demon Rum.
By the 1830s, thousands of temperance societies, with hundreds of thousands of members, had been formed in the United States. Massachusetts, in 1838, crafted a law requiring the purchase of hard liquor to be made in large quantities; this measure was designed to make it more difficult for the laboring class to afford strong drink.
A more far-reaching law was enacted by Maine in 1846, becoming the first to opt for statewide prohibition. Other towns and localities voted to become “dry," as did a dozen other states. In succeeding years, most of those laws were either voided by court action or repealed. The stresses and privations of the Civil War later wiped out most of the few remaining gains made by the temperance movement.
Following the war, relaxed standards of behavior and the growth of the liquor industry brought a massive increase in drunkenness and revived the social reformers. The political parties were timid; both the Republican and Democratic parties declined to nail prohibition planks onto their platforms.
This omission provoked the inception of the Prohibition Party in 1869. That organization, the Woman's Christian Temperance Union (1874), and lesser-known groups turned prohibition into a political issue.
A sharpening of differences in American society gave added momentum to alcohol reform efforts. By the 1890s, a wide gulf separated urban and rural dwellers, as evidenced in differing positions on many economic issues of the day.
Rural elements in the West and South viewed the rapidly expanding cities with alarm. The urban centers were the home of easily available alcohol and host of other vices. Immigration of this era was largely from southern and eastern Europe where prohibition movements had made little headway.
Further, many of the recently arrived city dwellers were Roman Catholics, making them all the more suspect in the eyes of old line Christian evangelicals. Suspicion of city life reached its height during the era of the muckrakers, whose writings detailed the corruption and depravity of urban America.
New organizations, like the Anti-Saloon League (1893), began on the local level to induce towns, cities, and counties to go dry. In 1913, they launched a national drive for a constitutional amendment prohibiting the manufacture and sale of alcoholic beverages.
This effort, however, failed to garner the necessary support in the House of Representatives. Despite that national failure, state legislatures came increasingly under the control of prohibition supporters.
During World War I, prohibition advocates buttressed their cause through the Food and Fuel Control Act (1917), which contained a section prohibiting manufacture of distilled liquor, beer, and wine.
Support was given to this measure by non-prohibitionists who were convinced that grain production should be devoted to food, not drink, during wartime. Moreover, the 1917 Reed Amendment to the Webb-Kenyon Act made it unlawful to use the mails to send liquor advertisements to persons in dry territory.
In December 1917, Congress began the Constitutional amendment process by passing a resolution that would make the entire country dry. Many states did not wait for ratification and 31 adopted statewide laws supporting prohibition.
In the end, however, prohibition was a manifest failure. Bootlegging, defined as the unlawful manufacture, sale, and transportation of alcoholic beverages without registration or payment of taxes, became widespread and a staple of organized crime.
Home stills sprouted up both in isolated places and in the bathtubs of posh homes. Illegal drinking establishments, dubbed "speakeasies," sprang up in many parts of the country, especially large cities. Concealment of alcohol on one's person became an art form. Methods from hollow canes to hollow books were used. Enforcement of prohibition was an extremely difficult, costly, and often violent proposition for law enforcement from the local to federal level.
In 1932, the Republican and Democratic party platforms called for repeal of prohibition, subject to the will of the people. The Congress passed a resolution proposing repeal in 1933, and it was promptly ratified by three-fourths of the states before year’s end. The 21st Amendment remains as the only amendment repealing a previously adopted one.
---- Selected Quotes ----
Quotes regarding Prohibition.
By Al Capone
I make my money by supplying a public demand. If I break the law, my customers, who number hundreds of the best people in Chicago, are as guilty as I am. Everybody calls me a racketeer. I call myself a businessman.
Comment in 1925
By Eleanor Roosevelt
Little by little it dawned upon me that this law was not making people drink any less, but it was making hypocrites and law breakers of a great number of people.
Her newspaper column, "My Day," July 14, 1939
By Rutherford B. Hayes
Personally I do not resort to force — not even the force of law — to advance moral reforms. I prefer education, argument, persuasion, and above all the influence of example — of fashion. Until these resources are exhausted I would not think of force.
Regarding a suggested Prohibition amendment, in his diary, 1883
- - - Books You May Like Include: ----
Baltimore Beer A Satisfying History of Charm City Brewing by Rob Kasper.
Dark Tide: The Great Boston Molasses Flood of 1919 by Stephen Puleo.
Hoosier Beer Tapping into Indiana Brewing History by Bob Ostrander and Derrick Morris.
Last Call: The Rise and Fall of Prohibition by Daniel Okrent.
Lost German Chicago by Joseph C. Heinen, Susan Barton Heinen.
Only Yesterday by Frederick L. Allen.
The Jazz Age: The 20s by Time-Life Books. | http://www.u-s-history.com/pages/h1085.html |
4 | James Bradley was an English astronomer most famous for his discovery of the aberration of starlight. The finding was an important piece of evidence supporting Copernicus's theory that the Earth moved around the sun, and also provided an alternative way to estimate the velocity of light.
Born in Sherborne, England, Bradley was the nephew of clergyman and amateur astronomer James Pound. His uncle trained him in astronomy from an early age and Bradley formally studied at Oxford University, from which he received a Bachelors degree in 1714, and a Masters in 1717. Fearing an inability to support himself financially as an astronomer, Bradley became a member of the clergy and was given a living at Bridstow. However, due to his scientific efforts and friendship with Edmund Halley, Bradley was elected a fellow of the Royal Society in 1718. An offer of professorship at Oxford followed in 1721, and twenty-eight year old Bradley quickly gave up his living at Bridstow in order to teach astronomy at the prestigious school.
A driving force in Bradley's career was his desire to measure the parallax of the stars, an apparent change in their positions that mirrored the change in the Earth's position in its orbit around the sun. Utilizing the observatory of his friend Samuel Molyneux, Bradley systematically studied the star Gamma Draconis and, though he did not successfully observe parallax, he made an important discovery while attempting to do so. Bradley found that Gamma Draconis did indeed shift in its location, but in the opposite direction from what was expected. He then deduced that the observed stellar variation in position was brought about by the aberration of light, a result of the finite speed of light and the forward movement of the Earth in its orbit.
Bradley announced his discovery to the Royal Society in 1728. The aberration of stellar light was of particular interest to the organization's members because it provided some proof for the extremely controversial heliocentric theory. The findings were also significant in that they provided another technique for calculating the speed of light. By analyzing measurements of stellar aberration angle and applying that data to the orbital speed of the Earth, Bradley was able to arrive at the remarkably accurate estimate of 183,000 miles (295,000 kilometers) per second.
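The arithmetic behind that estimate is easy to reproduce. Below is a minimal Python sketch using modern values for Earth's orbital speed and the aberration constant (about 20.5 arcseconds); these figures are assumptions chosen for illustration, not values taken from Bradley's own papers.

```python
import math

# Assumed modern values (not Bradley's original figures)
earth_orbital_speed_km_s = 29.8     # mean orbital speed of Earth
aberration_angle_arcsec = 20.5      # constant of aberration

# Convert the aberration angle to radians
angle_rad = math.radians(aberration_angle_arcsec / 3600.0)

# For small angles, tan(angle) ~ v / c, so c ~ v / tan(angle)
speed_of_light_km_s = earth_orbital_speed_km_s / math.tan(angle_rad)

print(f"Estimated speed of light: {speed_of_light_km_s:,.0f} km/s")
# Prints roughly 300,000 km/s, close to Bradley's figure of about 295,000 km/s
```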
Another important scientific contribution made by Bradley was the discovery of the nutation, or oscillation, of the Earth's axis. Bradley first noticed the fluctuation when he was carrying out his studies on parallax at Molyneux's observatory. However, since he believed that nutation was caused by the moon's gravitational pull, he decided to observe a full cycle of the motion of the moon's nodes, approximately 18.6 years, before announcing any findings. Completing his research in 1747, Bradley's discovery was finally made public in 1748 and was honored with the Copley Medal of the Royal Society that same year.
When Edmund Halley died in 1742, Bradley was named his successor as Astronomer Royal at Greenwich Observatory. He held the influential position for the rest of his life, greatly improving upon the condition of the observatory and the instruments it contained. Bradley also continued to study the stars and composed extremely accurate star charts, though the bulk of his observations would be published posthumously. He died on July 13, 1762, never realizing his hope of detecting the parallactic motion of the stars, but profoundly affecting the field of astronomy in his attempts to observe the elusive phenomenon.
| http://micro.magnet.fsu.edu/optics/timeline/people/bradley.html |
4.1875 | Timeline of the American Civil Rights Movement
The Civil Rights Movement in the United States began to gain prominence in the late 1940s. In 1948 President Truman signed Executive Order 9981, which declared there would be equal treatment and opportunity for all persons in the armed services regardless of race or color. This was an early step toward greater equality nationwide. Over the following years, many events became milestones in the Civil Rights Movement. Below are some of the most well-known events that helped shape history.
1954 – Brown vs. Board of Education
- Summary of Brown Vs. the Board of Education - This event is one of the most significant trials in US history.
- Segregation of White and Black Children - This supreme court case ended segregation in the classroom
- Brown Vs. the Board of Education Historic Site - Learn about where the injustice behind this court case took place.
- Archive of Brown Vs. the Board of Education - Take a walk through history with information on the court case, oral arguments both for an against Brown Vs. the Board of Education, and an image gallery that focuses on the Civil rights movement.
1955 – Montgomery Bus Boycott
- Story of the Montgomery Bus Boycott - Learn the historic story of a town full of civilians who banded together to make a stand in the Civil rights movement.
- Montgomery Bus Boycott - Articles, historical timelines and biographies of important people who made the Montgomery Bus Boycott a critical piece of US history.
- Montgomery Alabama and the Bus Boycott - Learn about Alabama's shining moments in the Civil rights movement, as well as in American history.
- Rosa Parks - One of the most famous people to come out of the Civil rights movement, Rosa Parks was a key factor in the Montgomery Bus Boycott.
- Martin Luther King Jr. - The face of the Civil rights movement, Martin Luther King Jr. helped to lead the Montgomery Bus Boycott.
1957 – Desegregation at Little Rock
- Segregation Showdown at Little Rock - Follow the archives through the breakdown of segregation in Little Rock, Arkansas.
- Little Rock Nine - In Little Rock, Arkansas students attempted to attend an all white high school. Read the documentation of what happened following this event.
- Stand Up for Your Rights - Read the story of what happened to the 9 students who attempted to attend a high school that was still racially segregated.
- Little Rock Central High School - The protest of black students entering this Arkansas school got so bad, President Eisenhower was forced to send in federal protection.
1960 – Sit-in Campaign
- Sit-in Campaign - The basis of sit-in campaigns resulted from students "sitting" at lunch counters until they were acknowledged and served food.
- Nashville, TN Sit-in Campaigns - African Americans would sit and wait at the lunch counters in a very polite, non-violent manner. If police arrested them for not leaving, a new group of African Americans would take their place.
1961 – Freedom Rides
- Civil Rights Movements and Freedom Rides - Learn how Americans tested the commitment to civil rights through this unique strategy.
- Freedom Riders - The Congress on Racial Equality organized these techniques by placing black and white volunteers next to each other on buses and other forms of public transportation.
- Freedom Rides - See how the freedom riders played a part in the Civil rights movement timeline.
1962 – Mississippi Riot
- Mississippi Riot - Learn how the state of Mississippi rallied against a federal court's decision to allow one black man to attend an all white school.
- James H. Meredith - This man was a crucial figure in the American Civil Rights Movement. When a federal court upheld his right to attend an all-white university in Mississippi, riots broke out, and his enrollment in turn paved the way for greater equality in the US.
- University of Mississippi Riot - Learn about the violence and death that ensued from the protest of a black man attending a white school.
1963 – Birmingham
- Birmingham, Alabama – In one of the most turbulent cities during the Civil rights movement, this organization explores all of the different activities that made this city a hub of change during this time period.
- Birmingham Demonstrations - Read about the efforts Martin Luther King Jr. and citizens hoping for change took to ensure equality for all.
- Birmingham Civil Rights District - A historical look at all of the events that took place in Birmingham during the Civil rights movement.
1963 – March on Washington
- March on Washington - With an estimated 250,000 people in attendance, this was truly a landmark event for the Civil rights movement.
- March on Washington for Jobs and Freedom - Both black and white people gathered together to witness Martin Luther King Jr. give his historical "I Have a Dream" speech.
- "I Have a Dream" - Read the words, written and spoke by Martin Luther King Jr., which united a nation.
1964 – Freedom Summer
- In the summer of 1964, forty-one Freedom Schools opened in the churches, on the back porches, and under the trees of Mississippi.
- Mississippi Freedom Summer (Summer Project) Events
1965 – Selma
- Selma Marches - What was to be a peaceful march turned into a violent display of hate against the Civil Rights movement.
- Bloody Sunday - The demonstration march from Selma to Montgomery was nicknamed "Bloody Sunday" due to the brutality and violence troops used against the peaceful demonstrators.
- March 7th Selma, Alabama - Over 600 people took part in the march from Selma, Alabama.
Photographs from Civil Rights Movements
- The March on Washington - A collection of photographs from that monumental day in 1963.
- Civil Rights Movement in Florida - Images from buses, stores and theatres that demonstrate the progress being made in the Civil rights movement.
- Powerful Days in Black and White - Kodak shows the struggles during the Civil rights movement in these photographs.
- Black and White Photos - A wonderful collection of black and white photos from the Civil rights movement.
The Civil rights movement is a timeline of events that shaped American history and the world we live in today. | https://www.gettysburgflag.com/timeline-american-civil-rights |
4 |
Indigenous rights are those rights that exist in recognition of the specific condition of indigenous peoples. This includes not only the most basic human rights of physical survival and integrity, but also the preservation of their land, language, religion, and other elements of cultural heritage that are a part of their existence as a people. These rights may be invoked by advocacy organizations, written into national law to define the relationship between a government and the right of self-determination of the indigenous peoples living within its borders, or asserted in international law as protection against violation by the actions of governments or private interests.
Definition and historical background
Indigenous rights belong to those who, as indigenous peoples, are defined by being the original inhabitants of a land that has been conquered and colonized by outsiders. Exactly who counts as indigenous is disputed, but the term can broadly be understood in relation to colonialism. When we speak of indigenous peoples we speak of those pre-colonial societies that face a specific threat from occupation, and of the relationship these societies have with the colonial powers. The exact definition of who the indigenous people are, and thus who holds these rights, varies; a definition that is too inclusive is considered as problematic as one that is too narrow. In the context of peoples colonized by European powers, the recognition of indigenous rights can be traced to at least the Renaissance. Alongside justifications of colonialism as serving a higher purpose for both colonists and colonized, some voices expressed concern over the way indigenous peoples were treated and the effect it had on their societies.
The issue of indigenous rights is also associated with other levels of human struggle. Due to the close relationship between indigenous peoples' cultural and economic situations and their environmental settings, indigenous rights issues are linked with concerns over environmental change and sustainable development. According to scientists and organizations like the Rainforest Foundation, the struggle of indigenous peoples is essential to reducing carbon emissions and to addressing the threat to both cultural and biological diversity in general.
The rights, claims and even identity of indigenous peoples are apprehended, acknowledged and observed quite differently from government to government. Various organizations exist with charters to in one way or another promote (or at least acknowledge) indigenous aspirations, and indigenous societies have often banded together to form bodies which jointly seek to further their communal interests.
There are several non-governmental civil society movements, networks, and indigenous and non-indigenous organizations whose founding mission is to protect indigenous rights, including land rights; among them are the International Indian Treaty Council, Indigenous World Association, the International Land Coalition, ECOTERRA Intl., Indigenous Environmental Network, Earth Peoples, Global Forest Coalition, Amnesty International, Indigenous Peoples Council on Biocolonialism, Friends of Peoples Close to Nature, Indigenous Peoples Issues and Resources, Minority Rights Group International, Survival International and Cultural Survival. These organizations, networks and groups stress that the problems indigenous peoples face are a lack of recognition that they are entitled to live the way they choose and a lack of rights to their lands and territories. Their mission is to protect the rights of indigenous peoples without states imposing their own ideas of "development". These groups say that each indigenous culture is distinct, rich in religious belief systems, ways of life, subsistence and arts, and that the root of the problem is interference with their way of living through states' disrespect for their rights, as well as the invasion of traditional lands by multinational corporations and small businesses for the exploitation of natural resources.
Indigenous peoples and their interests are represented in the United Nations primarily through the mechanisms of the Working Group on Indigenous Populations (WGIP). In April 2000 the United Nations Commission on Human Rights adopted a resolution to establish the United Nations Permanent Forum on Indigenous Issues (PFII) as an advisory body to the Economic and Social Council with a mandate to review indigenous issues.
In late December 2004, the United Nations General Assembly proclaimed 2005–2014 to be the Second International Decade of the World's Indigenous People. The main goal of the new decade will be to strengthen international cooperation around resolving the problems faced by indigenous peoples in areas such as culture, education, health, human rights, the environment, and social and economic development.
In September 2007, after a process of preparations, discussions and negotiations stretching back to 1982, the General Assembly adopted the Declaration on the Rights of Indigenous Peoples. The non-binding declaration outlines the individual and collective rights of indigenous peoples, as well as their rights to identity, culture, language, employment, health, education and other issues. Four nations with significant indigenous populations voted against the declaration: the United States, Canada, New Zealand and Australia. All four have since then changed their vote in favour. Eleven nations abstained: Azerbaijan, Bangladesh, Bhutan, Burundi, Colombia, Georgia, Kenya, Nigeria, Russia, Samoa and Ukraine. Thirty-four nations did not vote, while the remaining 143 nations voted for it.
ILO 169 is a convention of the International Labour Organisation. Once ratified by a state, it is meant to work as a law protecting tribal peoples' rights. Twenty-two countries have ratified Convention 169 since its adoption in 1989: Argentina, Bolivia, Brazil, Central African Republic, Chile, Colombia, Costa Rica, Denmark, Dominica, Ecuador, Fiji, Guatemala, Honduras, México, Nepal, Netherlands, Nicaragua, Norway, Paraguay, Peru, Spain and Venezuela. The law recognizes land ownership; equality and freedom; and autonomy for decisions affecting indigenous peoples.
Organization of American States
Since 1997, the nations of the Organization of American States have been discussing draft versions of a proposed American Declaration on the Rights of Indigenous Peoples.
| https://en.wikipedia.org/wiki/Indigenous_rights |
4.125 | Astronomers have long known that light bouncing off man-made reflectors on the lunar surface is fainter than expected, and mysteriously dims even more whenever the moon is full. Now they think moon dust and solar heating may be the dirty culprit, according to a new report.
The evidence is right here on Earth, researchers said. Only a fraction of the light a team beamed at the moon from a telescope in New Mexico bounces off of old reflectors on the lunar surface and returns to the observatory.
"Near full moon, the strength of the returning light decreases by a factor of ten," said Tom Murphy, an associate professor of physics at the University of California, San Diego, and the study's lead author. "Something happens on the surface of the moon to destroy the performance of the reflectors at full moon."
Measuring the moon
Murphy leads an effort to precisely measure the distance from Earth to the moon by timing the pulses of laser light that reflect off targets left on the lunar surface 40 years ago by Apollo astronauts.
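The distance measurement itself is a simple time-of-flight calculation: the range is half the round-trip travel time multiplied by the speed of light. The short Python sketch below illustrates the idea with an assumed round-trip time of about 2.5 seconds; the figure is illustrative only and is not taken from the article.

```python
SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second

def lunar_distance_m(round_trip_seconds):
    """Return the one-way Earth-Moon distance from a laser pulse's round-trip time."""
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2.0

# Illustrative round-trip time (the real value varies with the Moon's orbit)
distance_m = lunar_distance_m(2.56)
print(f"Approximate Earth-Moon distance: {distance_m / 1000:,.0f} km")
# About 384,000 km; timing the pulses precisely is what makes the ranging so accurate
```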
Earth's atmosphere scatters the outgoing beam, spreading it over a distance of approximately 1.24 miles (2 km) on the surface of the moon.
The scientists aim the light at polished blocks of glass called corner cube prisms, each of which is about 1 1/2 inches (3.8 cm) in diameter.
Most of the laser light misses its target, which is roughly equivalent to the size of a suitcase. Furthermore, the reflectors also diffract returning light so that it spreads over 9.3 miles (15 km) when it reaches Earth again.
So the researchers have always expected to recapture only a small portion of the reflected photons, or particles of light, that actually bounce back. On average, their instruments detect just one-tenth of the returning light, and when the moon is full, the results are oddly ten times worse.
Moon dust and heat
Murphy believes that the cubes are heating unevenly at full moon, and that the cause of this discrepancy is likely caused by dust.
"Dust is dark," Murphy said. "It absorbs solar light and would warm the cube prism on the front face."
Ideally, for optimum performance, the entire cube must be the same temperature.
"It doesn't take much, just a few degrees, to significantly affect performance," Murphy said.
NASA engineers went to great lengths to minimize temperature differences across the prisms, which rest in arrays tilted toward Earth. Individual prisms sit in recessed pockets so that they are shielded from direct light when the sun is low on the moon's horizon.
But, when the full face of the moon appears illuminated from Earth, the sun is directly above the arrays.
"At full moon, the sun is coming straight down the pipe into these recessed pockets," Murphy explained.
The reflective properties of the prisms, which are clear glass, derive from the shape of their polished facets. Uneven heating of the prisms, which could occur with absorption by a dust coating, would bend the shape of the light pulses they reflect, interfering with the accuracy of measurements.
Light travels faster through warmer glass, and although all paths through the cube prisms are the same length, photons that strike the edge of the reflector will stay near the surface. Meanwhile, those that strike the center will pass deeper into the cube before hitting a reflective surface.
If the surface of the prism is warmer than the deeper parts, light that strikes the edges of the prism will re-emerge sooner than light that strikes the center, distorting the shape of the reflected laser pulses.
Lunar dust dilemma
But finding the source of the problematic dust could be more difficult, Murphy said.
The moon has no atmosphere and no wind, but electrostatic forces can move dust around. A constant rain of micrometeorites might also puff dust onto the moon's surface. Larger impacts that eject material from the surface across a greater distance could also contribute to the buildup.
Murphy recently returned from a trip to Italy, where a chamber built to simulate lunar conditions may help sort through the possible explanations.
"We think we have a thermal problem at full moon, plus optical loss at all phases of the moon," Murphy said. Accumulated dust on the front surface of the reflectors could account for both observations.
If sunlight-heated dust is really to blame, the researchers should notice the effect vanish during a lunar eclipse. In other words, light should bounce back while the moon passes through Earth's shadow, then dim again as sunlight hits the arrays once more.
"Measurements during an eclipse ? there are just a few ? look fine," Murphy said. "When you remove the solar flux, the reflectors recover quickly, on a time scale of about half an hour."
The researchers' findings will be published in an upcoming issue of the journal Icarus.
Previously, the McDonald Observatory, a research unit of The University of Texas at Austin located in the Davis Mountains of West Texas, ran similar experiments at full moon between 1973 and 1976. But between 1979 and 1984, they had "a bite taken out of their data" during full moons, Murphy said. "Ours is deeper." This could signify that the problem may be getting worse.
So far, bad weather has prevented the project from operating during a lunar eclipse. The next opportunity for the researchers will be on the night of Dec. 21, 2010.
| http://www.space.com/8270-mystery-faint-moonlight-finally-solved.html |
4.15625 |
Cross slope or camber is a geometric feature of pavement surfaces: the transverse slope with respect to the horizon. It is a very important safety factor. Cross slope provides a drainage gradient so that water will run off the surface to a drainage system such as a street gutter or ditch. Inadequate cross slope will contribute to aquaplaning. On straight sections of normal two-lane roads, the pavement cross section is usually highest in the center and drains to both sides. In horizontal curves, the cross slope is banked into superelevation to reduce the steering effort and lateral force required to go around the curve, and all water drains to the inside of the curve. If the cross slope magnitude oscillates over distances of 1–25 metres (3–82 ft) along the road, the body and payload of tall (heavy) vehicles will experience high roll vibration.
Cross slope is usually expressed as a percentage: cross slope = (vertical rise ÷ horizontal distance) × 100%.
Cross Slope is the angle in the vertical plane from a horizontal line to a line on the surface, which is perpendicular to the center line.
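As a quick illustration of the relationship between the percentage form and the rise-per-foot form used in some specifications, here is a small Python sketch; the example values are assumptions chosen to match the typical figures quoted below.

```python
def cross_slope_percent(rise, run):
    """Cross slope as a percentage: vertical rise divided by horizontal run, times 100."""
    return 100.0 * rise / run

# A 2% cross slope expressed as inches of rise per one-foot (12 inch) run
percent = 2.0
rise_per_foot_in = percent / 100.0 * 12.0
print(f"{percent}% cross slope = {rise_per_foot_in:.2f} inches per foot")   # 0.24 in/ft

# Conversely, the common "1/4 inch per foot" specification as a percentage
print(f"1/4 inch per foot = {cross_slope_percent(0.25, 12.0):.2f}%")        # ~2.08%
```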
Typical values range from 2 percent for straight segments to 10 percent for sharp superelevated curves. It may also be expressed as a fraction of an inch in rise over a one-foot run (e.g. 1/4 inch per foot). | https://en.wikipedia.org/wiki/Cross_slope |
4.1875 | In cell biology, the nucleus (pl. nuclei; from Latin nucleus or nuculeus, meaning kernel) is a membrane-enclosed organelle found in eukaryotic cells.
The Cell Nucleus. The nucleus is a highly specialized organelle that serves as the information processing and administrative center of the cell.
Biology4Kids.com! This tutorial introduces the cell nucleus. Other sections include plants, animal systems, invertebrates, vertebrates, and microorganisms.
The cell nucleus is the command center of our cells. It contains our chromosomes and genetic information needed for the reproduction of life.
The eukaryotic cell nucleus. Visible in this diagram are the ribosome-studded double membranes of the nuclear envelope, the DNA (as chromatin), and the nucleolus.
A cell nucleus (plural: cell nuclei) is the part of the cell which contains the genetic code, the DNA. The nucleus is small and round, and it works as the cell's ...
Definition: The nucleus is a membrane bound structure that contains the cell's hereditary information and controls the cell's growth and reproduction.
Cell Nucleus and Nuclear Envelope. The nucleus of a eukaryotic cell contains the DNA, the genetic material of the cell. The DNA contains the information necessary for ...
This site provides an educational resource focusing on the structures, functions, and dynamics of the interphase cell nucleus. The interphase nucleus is the place in ...
Cell Nucleus: Structure and Functions. The nucleus is a spherical-shaped organelle present in every eukaryotic cell. It is the control center of eukaryotic cells ... | https://www.search.com/reference/Cell_nucleus |
4.5 |
Flaps are devices used to alter the lift characteristics of a wing and are mounted on the trailing edges of the wings of a fixed-wing aircraft to reduce the speed at which the aircraft can be safely flown and to increase the angle of descent for landing. They do this by lowering the stall speed and increasing the drag. Flaps shorten takeoff and landing distances.
Extending flaps increases the camber or curvature of the wing, raising the maximum lift coefficient — the lift a wing can generate. This allows the aircraft to generate as much lift, but at a lower speed, reducing the stalling speed of the aircraft, or the minimum speed at which the aircraft will maintain flight. Extending flaps increases drag, which can be beneficial during approach and landing, because it slows the aircraft. On some aircraft, a useful side effect of flap deployment is a decrease in aircraft pitch angle which lowers the nose thereby improving the pilot's view of the runway over the nose of the aircraft during landing. However the flaps may also cause pitch-up depending on the type of flap and the location of the wing.
There are many different types of flaps used, with the specific choice depending on the size, speed and complexity of the aircraft on which they are to be used, as well as the era in which the aircraft was designed. Plain flaps, slotted flaps, and Fowler flaps are the most common. Krueger flaps are positioned on the leading edge of the wings and are used on many jet airliners.
The Fowler, Fairey-Youngman and Gouge types of flap increase the wing area in addition to changing the camber. The larger lifting surface reduces wing loading and allows the aircraft to generate the required lift at a lower speed and reduces stalling speed.
The general airplane lift equation demonstrates these relationships:
L = ½ ρ V² S CL
- L is the amount of lift produced,
- ρ (rho) is the air density,
- V is the true airspeed of the airplane, or the velocity of the airplane relative to the air,
- S is the wing area, and
- CL is the lift coefficient, which is determined by the shape of the airfoil used and the angle at which the wing meets the air (or angle of attack).
Here, it can be seen that increasing the area (S) and lift coefficient (CL) allow a similar amount of lift to be generated at a lower airspeed (V).
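To see the effect numerically, the sketch below solves the lift equation for the speed needed to support a fixed weight, first with a clean wing and then with flaps extended. All the numbers (weight, wing area, lift coefficients, and the area increase from a Fowler-type flap) are illustrative assumptions, not figures for any particular aircraft.

```python
import math

RHO_SEA_LEVEL = 1.225  # kg/m^3, standard sea-level air density

def speed_for_lift(weight_n, wing_area_m2, cl):
    """Airspeed (m/s) at which lift L = 0.5 * rho * V^2 * S * CL equals the weight."""
    return math.sqrt(2.0 * weight_n / (RHO_SEA_LEVEL * wing_area_m2 * cl))

weight = 10_000.0                 # N, assumed aircraft weight
clean_area, clean_cl = 16.0, 1.5  # assumed clean-wing area and maximum lift coefficient
flap_area, flap_cl = 17.6, 2.2    # assumed values with Fowler flaps extended

v_clean = speed_for_lift(weight, clean_area, clean_cl)
v_flaps = speed_for_lift(weight, flap_area, flap_cl)

print(f"Minimum flying speed, clean wing: {v_clean:.1f} m/s")
print(f"Minimum flying speed, flaps down: {v_flaps:.1f} m/s")
# Raising CL (and S) lets the wing carry the same weight roughly 20% slower
```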
Extending the flaps also increases the drag coefficient of the aircraft. Therefore, for any given weight and airspeed, flaps increase the drag force. Flaps increase the drag coefficient of an aircraft due to higher induced drag caused by the distorted spanwise lift distribution on the wing with flaps extended. Some flaps increase the wing area and, for any given speed, this also increases the parasitic drag component of total drag.
Flaps during takeoff
Depending on the aircraft type, flaps may be partially extended for takeoff. When used during takeoff, flaps trade runway distance for climb rate—using flaps reduces ground roll and the climb rate. The amount of flap used on takeoff is specific to each type of aircraft, and the manufacturer will suggest limits and may indicate the reduction in climb rate to be expected. The Cessna 172S Pilot Operating Handbook generally recommends 10° of flaps on takeoff, especially when the ground is rough or soft.
Flaps during landing
Flaps may be fully extended for landing to give the aircraft a lower stall speed so the approach to landing can be flown more slowly, which also allows the aircraft to land in a shorter distance. The higher lift and drag associated with fully extended flaps allows a steeper and slower approach to the landing site, but imposes handling difficulties in aircraft with very low wing loading (the ratio between the wing area and the weight of the aircraft). Winds across the line of flight, known as crosswinds, cause the windward side of the aircraft to generate more lift and drag, causing the aircraft to roll, yaw and pitch off its intended flight path, and as a result many light aircraft land with reduced flap settings in crosswinds. Furthermore, once the aircraft is on the ground, the flaps may decrease the effectiveness of the brakes since the wing is still generating lift and preventing the entire weight of the aircraft from resting on the tires, thus increasing stopping distance, particularly in wet or icy conditions. Usually, the pilot will raise the flaps as soon as possible to prevent this from occurring.
Some gliders not only use flaps when landing, but also in flight to optimize the camber of the wing for the chosen speed. When thermalling, flaps may be partially extended to reduce the stalling speed so that the glider can be flown more slowly and thereby reduce the rate of sink, which lets the glider use the rising air of the thermal more efficiently, and to turn in a smaller circle to make best use of the core of the thermal. At higher speeds a negative flap setting is used to reduce the nose-down pitching moment. This reduces the balancing load required on the horizontal stabilizer, which in turn reduces the trim drag associated with keeping the glider in longitudinal trim. Negative flap may also be used during the initial stage of an aerotow launch and at the end of the landing run in order to maintain better control by the ailerons.
Like gliders, some fighters such as the Nakajima Ki-43 also use special flaps to improve maneuverability during air combat, allowing the fighter to create more lift at a given speed, allowing for much tighter turns. The flaps used for this must be designed specifically to handle the greater stresses and most flaps have a maximum speed at which they can be deployed. Control line model aircraft built for precision aerobatics competition usually have a type of maneuvering flap system that moves them in an opposing direction to the elevators, to assist in tightening the radius of a maneuver.
Flap track fairings
Fairings streamline the airflow over the flap support mechanisms to help reduce cruise drag - the smaller the fairing the lower the drag.
Thrust gates, or gaps, in the trailing edge flaps may be required to minimise interference between the engine flow and deployed flaps. In the absence of an in-board aileron, which provides a gap in many flap installations, a modified flap section may be needed. The thrust gate on the Boeing 757 was provided by a single-slotted flap in between the inboard and outboard double-slotted flaps. The A320, A330, A340 and A380 have no in-board aileron. No thrust gate is required in the continuous, single-slotted flap. Interference in the go-around case while the flaps are still fully deployed can cause increased drag which must not compromise the climb gradient.
- Plain flap: the rear portion of airfoil rotates downwards on a simple hinge mounted at the front of the flap. The Royal Aircraft Factory and National Physical Laboratory in the United Kingdom tested flaps in 1913 and 1914, but these were never installed in an actual aircraft. In 1916, the Fairey Aviation Company made a number of improvements to a Sopwith Baby they were rebuilding, including their Patent Camber Changing Gear, making the Fairey Hamble Baby as they renamed it, the first aircraft to fly with flaps. These were full span plain flaps which incorporated ailerons, making it also the first instance of flaperons. Fairey were not alone however, as Breguet soon incorporated automatic flaps into the lower wing of their Breguet 14 reconnaissance/bomber in 1917. Due to the greater efficiency of other flap types, the plain flap is normally only used where simplicity is required.
- Split flap: the rear portion of the lower surface of the airfoil hinges downwards from the leading edge of the flap, while the upper surface stays immobile. Like the plain flap, this can cause large changes in longitudinal trim, pitching the nose either down or up, and tends to produce more drag than lift. At full deflection, a split flap acts much like a spoiler, producing lots of drag and little or no lift. It was invented by Orville Wright and James M. H. Jacobs in 1920, but only became common in the 1930s and was then quickly superseded. The Douglas DC-3 & C-47 used a split flap.
- Slotted flap: a gap between the flap and the wing forces high pressure air from below the wing over the flap helping the airflow remain attached to the flap, increasing lift compared to a split flap. Additionally, lift across the entire chord of the primary airfoil is greatly increased as the velocity of air leaving its trailing edge is raised, from the typical non-flap 80% of freestream, to that of the higher-speed, lower-pressure air flowing around the leading edge of the slotted flap. Any flap that allows air to pass between the wing and the flap is considered a slotted flap. The slotted flap was a result of research at Handley-Page, a variant of the slot that dates from the 1920s, but wasn't widely used until much later. Some flaps use multiple slots to further boost the effect.
- Fowler flap: split flap that slides backward flat, before hinging downward, thereby increasing first chord, then camber. The flap may form part of the upper surface of the wing, like a plain flap, or it may not, like a split flap, but it must slide rearward before lowering. It may provide some slot effect, but this is not a defining feature of the type. Invented by Harlan D. Fowler in 1924, and tested by Fred Weick at NACA in 1932. They were first used on the Martin 146 prototype in 1935, and in production on the 1937 Lockheed Electra, and are still in widespread use on modern aircraft, often with multiple slots.
- Junkers flap: a slotted plain flap where the flap is fixed below the trailing edge of the wing, rotating about its forward edge, and usually forming the "inboard" hinged section (closer to the root) of the Junkers Doppelflügel, or "double-wing" style of wing trailing edge control surfaces (including the outboard-mounted ailerons), which hung just below and behind the wing's fixed trailing edge. When not in use, it has more drag than other types, but is more effective at creating additional lift than a plain or split flap, while retaining their mechanical simplicity. Invented by Otto Mader at Junkers in the late 1920s, they were historically most often seen on both the Ju 52/3m airliner/cargo plane, and the Ju 87 Stuka dive bomber, though the same wing control surface can be also be found on many modern ultralights.
- Gouge flap: a type of split flap that slides backward along curved tracks that force the trailing edge downward, increasing chord and camber without affecting trim or requiring any additional mechanisms. It was invented by Arthur Gouge for Short Brothers in 1936 and used on the Short Empire and Sunderland flying boats, which used the very thick Shorts A.D.5 airfoil. Short Brothers may have been the only company to use this type.
- Fairey-Youngman flap: drops down (becoming a Junkers Flap) before sliding aft and then rotating up or down. Fairey was one of the few exponents of this design, which was used on the Fairey Firefly and Fairey Barracuda. When in the extended position, it could be angled up (to a negative angle of incidence) so that the aircraft could be dived vertically without needing excessive trim changes.
- Zap Flap or commonly, but incorrectly, Zapp Flap: Invented by Edward F. Zaparka while he was with Berliner/Joyce and tested on a General Aircraft Corporation Aristocrat in 1932 and on other types periodically thereafter, but it saw little use on production aircraft other than on the Northrop P-61 Black Widow. The leading edge of the flap is mounted on a track, while a point at mid chord on the flap is connected via an arm to a pivot just above the track. When the flap's leading edge moves aft along the track, the triangle formed by the track, the shaft and the surface of the flap (fixed at the pivot) gets narrower and deeper, forcing the flap down.
- Krueger flap: hinged flap, which folds out from under the wing's leading edge while not forming a part of the leading edge of the wing when retracted. This increases the camber and thickness of the wing, which in turn increases lift and drag. This is not the same as a leading edge droop flap, as that is formed from the entire leading edge. Invented by Werner Krüger in 1943 and evaluated in Goettingen, Krueger flaps are found on many modern swept wing airliners.
- Gurney flap: A small fixed perpendicular tab of between 1 and 2% of the wing chord, mounted on the high pressure side of the trailing edge of an airfoil. It was named for racing car driver Dan Gurney who rediscovered it in 1971, and has since been used on some helicopters such as the Sikorsky S-76B to correct control problems without having to resort to a major redesign. It boosts the efficiency of even basic theoretical airfoils (made up of a triangle and a circle overlapped) to the equivalent of a conventional airfoil. The principle was discovered in the 1930s, but was rarely used and was then forgotten. Late marks of the Supermarine Spitfire used a bead on the trailing edge of the elevators, which functioned in a similar manner.
- Leading edge droop: entire leading edge of the wing rotating downward, effectively increasing camber, but slightly reducing chord. Most commonly found on fighters with very thin wings unsuited to other leading edge high lift devices.
- Blown flaps: also known as Boundary Layer Control Systems, are systems that blow engine air or exhaust over the flaps to increase lift beyond that attainable with mechanical flaps. Types include the original (internally blown flap) which blows compressed air from the engine over the top of the flap, the externally blown flap, which blows engine exhaust over the upper and lower surfaces of the flap, and upper surface blowing which blows engine exhaust over the top of the wing and flap. Although testing was done in Britain and Germany before the Second World War and flight trials started, the first production aircraft with blown flaps did not appear until the 1957 Lockheed T2V SeaStar. Upper Surface Blowing was used on the Boeing YC-14 in 1976.
- Flexible flap or FlexFoil: modern interpretation of wing warping, internal mechanical actuators bend a lattice that changes the airfoil shape. It may have a flexible gap seal at the transition between fixed and flexible airfoils.
- Controls that look like flaps, but are not:
- Handley Page leading edge slats/slots may be confused for flaps, but are mounted on the top of the wings' leading edge and while they may be either fixed or retractable, when deployed they provide a slot or gap under the slat to force air against the top of the wing, which is absent on a Krueger flap. They offer excellent lift and enhance controllability at low speeds. Other types of flaps may be equipped with one or more slots to increase their effectiveness, a typical setup on many modern airliners. These are known as slotted flaps as described above. Frederick Handley Page experimented with fore and aft slot designs in the 20s and 30s.
- Spoilers may also be confused for flaps, but are intended to create drag and reduce lift by "spoiling" the airflow over the wing. A spoiler is much larger than a Gurney flap, and can be retracted. Spoilers are usually installed mid chord on the upper surface of the wing, but may also be installed on the lower surface of the wing as well.
- Air brakes are used on high performance combat aircraft to increase drag, allowing the aircraft to decelerate rapidly. They may be installed either on the wings or fuselage and differ from flaps and spoilers in that they are not intended to reduce lift and are built strongly enough to be deployed at much higher speeds.
- Ailerons are similar to flaps (and work the same way), but are intended to provide lateral control, rather than to change the lifting characteristics of both wings together, and so operate differentially - when an aileron on one wing increases the lift, the opposite aileron does not, and will often work to decrease lift. Some aircraft use flaperons, which combine both the functionality of flaps and ailerons in a single control, working together to increase lift, but to slightly different degrees so the aircraft will roll toward the side generating the least lift. Flaperons were used by the Fairey Aviation Company as early as 1916, but didn't become common until after World War II.
Split flap on a World War II Avro Lancaster bomber
Fully extended double slotted Fowler flaps before landing on a Boeing 737
Triple-slotted trailing-edge flaps and leading edge Krueger flaps fully extended on a Boeing 747 for landing.
- Air brake (aeronautics)
- Aircraft flight control system
- Circulation control wing
- High-lift device
- Leading-edge slats
| https://en.wikipedia.org/wiki/Flaps_(aircraft) |
4.3125 | Think of a number, square it and subtract your starting number. Is
the number you’re left with odd or even? How do the images
help to explain this?
In each of the pictures the invitation is for you to: Count what you see. Identify how you think the pattern would continue.
How can you arrange these 10 matches in four piles so that when you
move one match from three of the piles into the fourth, you end up
with the same arrangement?
Take a counter and surround it by a ring of other counters that
MUST touch two others. How many are needed?
These squares have been made from Cuisenaire rods. Can you describe
the pattern? What would the next square look like?
Watch this film carefully. Can you find a general rule for
explaining when the dot will be this same distance from the
What would be the smallest number of moves needed to move a Knight
from a chess set from one corner to the opposite corner of a 99 by
99 square board?
Can you see why 2 by 2 could be 5? Can you predict what 2 by 10
Can you continue this pattern of triangles and begin to predict how many sticks are used for each new "layer"?
Polygonal numbers are those that are arranged in shapes as they enlarge. Explore the polygonal numbers drawn here.
While we were sorting some papers we found 3 strange sheets which
seemed to come from small books but there were page numbers at the
foot of each page. Did the pages come from the same book?
Use the interactivity to investigate what kinds of triangles can be
drawn on peg boards with different numbers of pegs.
Place the numbers from 1 to 9 in the squares below so that the difference between joined squares is odd. How many different ways can you do this?
Delight your friends with this cunning trick! Can you explain how
Use the animation to help you work out how many lines are needed to draw mystic roses of different sizes.
Find out what a "fault-free" rectangle is and try to make some of
Imagine starting with one yellow cube and covering it all over with
a single layer of red cubes, and then covering that cube with a
layer of blue cubes. How many red and blue cubes would you need?
If you can copy a network without lifting your pen off the paper and without drawing any line twice, then it is traversable.
Decide which of these diagrams are traversable.
Triangle numbers can be represented by a triangular array of squares. What do you notice about the sum of identical triangle numbers?
How could Penny, Tom and Matthew work out how many chocolates there
are in different sized boxes?
Only one side of a two-slice toaster is working. What is the
quickest way to toast both sides of three slices of bread?
Sweets are given out to party-goers in a particular way. Investigate the total number of sweets received by people sitting in different positions.
How many ways can you find to do up all four buttons on my coat?
How about if I had five buttons? Six ...?
Three circles have a maximum of six intersections with each other.
What is the maximum number of intersections that a hundred circles
In a Magic Square all the rows, columns and diagonals add to the 'Magic Constant'. How would you change the magic constant of this square?
This challenge focuses on finding the sum and difference of pairs of two-digit numbers.
Find the sum and difference between a pair of two-digit numbers. Now find the sum and difference between the sum and difference! What happens?
Can you dissect an equilateral triangle into 6 smaller ones? What
number of smaller equilateral triangles is it NOT possible to
dissect a larger equilateral triangle into?
Compare the numbers of particular tiles in one or all of these
three designs, inspired by the floor tiles of a church in
Find a route from the outside to the inside of this square, stepping on as many tiles as possible.
Square numbers can be represented as the sum of consecutive odd
numbers. What is the sum of 1 + 3 + ..... + 149 + 151 + 153?
Strike it Out game for an adult and child. Can you stop your partner from being able to go?
This article for teachers describes several games, found on the
site, all of which have a related structure that can be used to
develop the skills of strategic planning.
A 2 by 3 rectangle contains 8 squares and a 3 by 4 rectangle
contains 20 squares. What size rectangle(s) contain(s) exactly 100
squares? Can you find them all?
Use your addition and subtraction skills, combined with some strategic thinking, to beat your partner at this game.
Take any two positive numbers. Calculate the arithmetic and geometric means. Repeat the calculations to generate a sequence of arithmetic means and geometric means. Make a note of what happens to the. . . .
What would you get if you continued this sequence of fraction sums?
1/2 + 2/1 =
2/3 + 3/2 =
3/4 + 4/3 =
The aim of the game is to slide the green square from the top right
hand corner to the bottom left hand corner in the least number of
One block is needed to make an up-and-down staircase, with one step up and one step down. How many blocks would be needed to build an up-and-down staircase with 5 steps up and 5 steps down?
Ben’s class were cutting up number tracks. First they cut them into twos and added up the numbers on each piece. What patterns could they see?
How many different journeys could you make if you were going to visit four stations in this network? How about if there were five stations? Can you predict the number of journeys for seven stations?
What happens if you join every second point on this circle? How
about every third point? Try with different steps and see if you
can predict what will happen.
Choose a couple of the sequences. Try to picture how to make the next, and the next, and the next... Can you describe your reasoning?
We can arrange dots in a similar way to the 5 on a dice and they
usually sit quite well into a rectangular shape. How many
altogether in this 3 by 5? What happens for other sizes?
In this problem we are looking at sets of parallel sticks that
cross each other. What is the least number of crossings you can
make? And the greatest?
An investigation that gives you the opportunity to make and justify
Can you put the numbers 1-5 in the V shape so that both 'arms' have the same total?
What happens when you round these three-digit numbers to the nearest 100?
Explore the effect of reflecting in two intersecting mirror lines.
Imagine a large cube made from small red cubes being dropped into a
pot of yellow paint. How many of the small cubes will have yellow
paint on their faces? | http://nrich.maths.org/public/leg.php?code=72&cl=2&cldcmpid=502 |
4.375 |
Turboprop engines are a type of aircraft powerplant that use a gas turbine to drive a propeller. The gas turbine is designed specifically for this application, with almost all of its output being used to drive the propeller. The engine's exhaust gases contain little energy compared to a jet engine and play a minor role in the propulsion of the aircraft.
The propeller is coupled to the turbine through a reduction gear that converts the high RPM, low torque output to low RPM, high torque. The propeller itself is normally a constant speed (variable pitch) type similar to that used with larger reciprocating aircraft engines.
Turboprop engines are generally used on small subsonic aircraft, but some aircraft outfitted with turboprops have cruising speeds in excess of 500 kt (926 km/h, 575 mph). Large military and civil aircraft, such as the Lockheed L-188 Electra and the Tupolev Tu-95, have also used turboprop power.
In its simplest form a turboprop consists of an intake, compressor, combustor, turbine, and a propelling nozzle. Air is drawn into the intake and compressed by the compressor. Fuel is then added to the compressed air in the combustor, where the fuel-air mixture then combusts. The hot combustion gases expand through the turbine. Some of the power generated by the turbine is used to drive the compressor. The rest is transmitted through the reduction gearing to the propeller. Further expansion of the gases occurs in the propelling nozzle, where the gases exhaust to atmospheric pressure. The propelling nozzle provides a relatively small proportion of the thrust generated by a turboprop.
Turboprops are very efficient at modest flight speeds (below 450 mph) because the jet velocity of the propeller (and exhaust) is relatively low. Due to the high price of turboprop engines they are mostly used where high-performance short-takeoff and landing (STOL) capability and efficiency at modest flight speeds are required. The most common application of turboprop engines in civilian aviation is in small commuter aircraft, where their greater reliability than reciprocating engines offsets their higher initial cost.
Much of the jet thrust in a turboprop is sacrificed in favor of shaft power, which is obtained by extracting additional power (up to that necessary to drive the compressor) from turbine expansion. While the power turbine may be integral with the gas generator section, many turboprops today feature a free power turbine on a separate coaxial shaft. This enables the propeller to rotate freely, independent of compressor speed. Owing to the additional expansion in the turbine system, the residual energy in the exhaust jet is low. Consequently, the exhaust jet produces (typically) less than 10% of the total thrust.
Propellers are not efficient when the tips reach or exceed supersonic speeds. For this reason, a reduction gearbox is placed in the drive line between the power turbine and the propeller to allow the turbine to operate at its most efficient speed while the propeller operates at its most efficient speed. The gearbox is part of the engine and contains the parts necessary to operate a constant speed propeller. This differs from the turboshaft engines used in helicopters, where the gearbox is remote from the engine.
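A rough calculation shows why the reduction gear is needed. The helical tip speed of a propeller combines its rotational speed with the aircraft's forward speed, and it must stay comfortably below the speed of sound. The Python sketch below uses assumed, illustrative figures for turbine speed, gear ratio, propeller diameter and airspeed; they are not data for any specific engine.

```python
import math

SPEED_OF_SOUND_M_S = 340.0  # approximate value at sea level

def helical_tip_speed(diameter_m, rpm, airspeed_m_s):
    """Combine rotational tip speed with forward airspeed to get the helical tip speed."""
    rotational = math.pi * diameter_m * rpm / 60.0
    return math.hypot(rotational, airspeed_m_s)

diameter = 2.7        # m, assumed propeller diameter
turbine_rpm = 20_000  # assumed power-turbine speed
gear_ratio = 12.0     # assumed reduction ratio
airspeed = 150.0      # m/s, assumed cruise true airspeed

direct = helical_tip_speed(diameter, turbine_rpm, airspeed)
geared = helical_tip_speed(diameter, turbine_rpm / gear_ratio, airspeed)

print(f"Tip Mach, driven directly at turbine speed: {direct / SPEED_OF_SOUND_M_S:.1f}")
print(f"Tip Mach, through the reduction gearbox:    {geared / SPEED_OF_SOUND_M_S:.2f}")
# Without the gearbox the tips would be far supersonic; with it they stay subsonic
```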
Residual thrust on a turboshaft is avoided by further expansion in the turbine system and/or truncating and turning the exhaust through 180 degrees, to produce two opposing jets. Apart from the above, there is very little difference between a turboprop and a turboshaft.
While most modern turbojet and turbofan engines use axial-flow compressors, turboprop engines usually contain at least one stage of centrifugal compression. Centrifugal compressors have the advantage of being simple and lightweight, at the expense of a streamlined shape. Propellers lose efficiency as aircraft speed increases, so turboprops are normally not used on high-speed aircraft. However, propfan engines, which are very similar to turboprop engines, can cruise at flight speeds approaching Mach 0.75. To increase the efficiency of the propellers, a mechanism can be used to alter the pitch, thus adjusting the pitch to the airspeed. A variable-pitch propeller, also called a controllable-pitch propeller, can also be used to generate negative thrust while decelerating on the runway. Additionally, in the event of an engine failure, the blades can be turned edge-on to the airflow (feathering), minimizing the drag of the non-functioning propeller.
Some commercial aircraft with turboprop engines include the Bombardier Dash 8, ATR 42, ATR 72, BAe Jetstream 31, Embraer EMB 120 Brasilia, The Fairchild Swearingen Metroliner, Saab 340 and 2000, Xian MA60, Xian MA600, and Xian MA700.
The world's first turboprop was the Jendrassik Cs-1, designed by the Hungarian mechanical engineer György Jendrassik. It was produced and tested in the Ganz factory in Budapest between 1939 and 1942. It was intended to power the Varga RMI-1 X/H twin-engined reconnaissance bomber designed by László Varga in 1940, but the program was cancelled. Jendrassik had also designed a small-scale 75 kW turboprop in 1937. His achievement did not go unnoticed, and after WW2 he moved to London, where British engineers were developing engines built on a similar principle.
The first British turboprop engine was the Rolls-Royce RB.50 Trent, a converted Derwent II fitted with reduction gear and a Rotol 7-ft, 11-in five-bladed propeller. Two Trents were fitted to Gloster Meteor EE227 — the sole "Trent-Meteor" — which thus became the world's first turboprop powered aircraft, albeit a test-bed not intended for production. It first flew on 20th September 1945. From their experience with the Trent, Rolls-Royce developed the Dart, which became one of the most reliable turboprop engines ever built. Dart production continued for more than fifty years. The Dart-powered Vickers Viscount was the first turboprop aircraft of any kind to go into production and sold in large numbers. It was also the first four-engined turboprop. Its first flight was on 16th July 1948. The world's first single engined turboprop aircraft was the Armstrong Siddeley Mamba-powered Boulton Paul Balliol, which first flew on 24th March 1948.
While the Soviet Union had the technology to create a jet-powered strategic bomber comparable to Boeing's B-52 Stratofortress, it instead produced the Tupolev Tu-95, powered by four Kuznetsov NK-12 turboprops, mated to eight contra-rotating propellers (two per nacelle) with supersonic tip speeds to achieve maximum cruise speeds in excess of 575 mph, faster than many of the first jet aircraft and comparable to jet cruising speeds for most missions. The Tu-95, known to NATO as the "Bear", would serve as the Soviets' most successful long-range combat and surveillance aircraft and a symbol of Soviet power projection through the end of the 20th century. The USA would incorporate contra-rotating turboprop engines, such as the ill-fated Allison T40, into a series of experimental aircraft during the 1950s, but none would be adopted into service.
The first American turboprop engine was the General Electric XT31, first used in the experimental Consolidated Vultee XP-81. The XP-81 first flew in December 1945, the first aircraft to use a combination of turboprop and turbojet power. America skipped over turboprop airliners in favor of the Boeing 707, but the technology of the unsuccessful Lockheed Electra was used in both the long-lived P-3 Orion as well as the classic C-130 Hercules, one of the most successful military aircraft ever in terms of length of production. One of the most popular turboprop engines is the Pratt & Whitney Canada PT6 engine.
The first turbine powered, shaft driven helicopter was the Bell XH-13F, a version of the Bell 47 powered by Continental XT-51-T-3 (Turbomeca Artouste) engine. (Wikipedia) | https://www.tititudorancea.com/z/turboprop.htm |
4.03125 | At the simplest level, obesity is caused by consuming more calories than you burn.
Obesity, however, is a complex condition caused by more than simply eating too much and moving too little.
The environment you live in and your community's social norms surrounding food, eating, and lifestyle strongly influence what, when, and how much you eat.
Similarly, your environment affects whether, where, and how you are able to be physically active.
Diet and Lifestyle
Changes in American dietary habits and lifestyle have contributed to today's high prevalence of obesity.
Those changes include:
- More adults in the workforce, combined with long work hours and commutes, have led to fewer meals prepared at home.
- More Americans eat more meals in restaurants, which often serve oversized portions of calorie-dense foods.
- Portion sizes of packaged foods, such as snacks and soft drinks, have gotten larger over the years.
- Children spend more hours watching television, using computers, or playing electronic games and less time engaging in active play and recreation.
- Adults have gotten more sedentary as fewer perform physical labor on the job.
The way communities, workplaces, and schools are structured in much of the United States has contributed to the country's high rate of obesity.
Some of the changes seen in the past few decades include:
- Food (especially junk food) is now sold in places such as gas stations and office supply stores that historically did not sell food. The end result is that food is available almost constantly.
- Food products and restaurants are marketed intensively on television, radio, online, and elsewhere.
- Many communities have no safe routes for walking or bicycling, or safe places to play outdoors.
- Most jobs present few opportunities for physical activity.
- Many schools provide little or no recess periods or gym classes.
- Poor neighborhoods are often "food deserts," with no purveyors of fresh, healthy foods.
- There are many television shows dedicated to food, restaurants, and cooking that show no regard for the health consequences of the food being featured.
Stress contributes to obesity in a few ways:
- People who are stressed tend to make bad food choices and to eat too much.
- Stress causes the release of stress hormones including cortisol, which triggers the release of triglycerides (fatty acids) from storage and relocates them to fat cells deep in the abdomen. Cortisol also increases appetite.
Some people have a genetic predisposition to being overweight or obese.
However, in most cases, those people do not become obese unless they also have an energy imbalance — meaning they consume more calories than they burn.
A genetic tendency toward obesity often becomes apparent only when a person's or group's lifestyle or environment changes significantly.
Genetic syndromes such as Prader-Willi, Alström, Bardet-Biedl, Cohen, Börjeson-Forssman-Lehmann, Fröhlich, and others can also lead to obesity.
Such syndromes are rare, however, and they typically include other abnormalities besides obesity.
A variety of medical conditions are associated with being overweight and obese, including:
- Cushing's syndrome (a rare syndrome that results from excess production of cortisol by the adrenal glands)
- Eating disorders, especially binge eating disorder, bulimia nervosa, and night eating disorder
- Growth hormone deficiency
- Hypogonadism (low testosterone)
- Hypothyroidism (underactive thyroid)
- Insulinoma (a tumor of the pancreas that secretes insulin)
- Polycystic ovarian syndrome
In some cases it's not clear whether obesity causes the medical condition, or whether the condition causes obesity.
Drugs That Contribute to Obesity
Certain drugs have been shown to encourage weight gain — often by increasing appetite — and contribute to obesity.
These drugs include:
- Diabetes drugs, including insulin, thiazolidinediones (Actos and Avandia), and sulphonylureas (glimepiride, glipizide, and glyburide)
- Drugs for high blood pressure, including thiazide diuretics, loop diuretics, calcium channel blockers, beta blockers, and alpha-adrenergic blockers
- Antihistamines (used for allergies), particularly cyproheptadine
- Steroids, including corticosteroids and birth control pills
- Psychotherapeutic medications, including lithium, antipsychotics, and antidepressants
- Anticonvulsant drugs (used for epilepsy and some other conditions), such as sodium valproate and carbamazepine
In some cases, other drugs can be substituted for those that encourage weight gain, or a lower dose can be used.
However, don't stop taking prescribed medications on your own.
Discuss your options with your doctor, and make a decision together about what's best for you.
If you must take a medicine that increases your appetite, behavioral measures such as learning to count calories and eat slowly can help to limit weight gain.
Last Updated: 4/30/2015 | http://www.everydayhealth.com/obesity/causes-and-risk-factors/ |
4.0625 |
Truman was the first president to embrace containment and use it as policy. He funded the Greek and Turkish governments to rebuild after WWII because he did not want communist influence to infiltrate and overcome those weakened countries.
Similarly, Truman initiated the Marshall Plan and supported NATO in an effort to create financial and military links tying Western nations together.
Johnson adhered closely to containment during the Vietnam War. Nixon, who replaced Johnson in 1969, referred to his foreign policy as détente, or a relaxation of tension.
Although it continued to aim at restraining the Soviet Union, détente was based on political realism, or thinking in terms of national interest, as opposed to crusades against communism or for democracy.
When the U.N. intervened in the Korean crisis, war broke out and the United States was given much of the military responsibility. While the war was supposed to be a policing action between North and South Korea, MacArthur crossed the 38th parallel to attack, which the communists took as an act of war. Truman tried to keep this a policing mission, but he faced opposition from Republicans who wanted rollback.
Vietnam was another policing action that tested containment. Johnson did not give full power to one general, and instead divided authority among three. Still, the conflict turned into a war, and containment nearly failed as the communists eventually took over South Vietnam in 1975.
Nixon used his foreign policy experience to achieve a detente with the Soviets. This was an easing in tensions and focus on diplomacy.
Ronald Reagan did not accept coexistence, and wanted the West to win at all costs. One strategy short of all-out war was the Strategic Defense Initiative. He wanted to outspend the Soviets by creating a shield that would protect the United States from missile attack, knowing the Soviets would try to duplicate it even though they could not afford to. Soviet spending on such a program would contribute to the financial downfall of the USSR.
Another way Reagan combated the Soviet Union was by having the CIA overthrow Third World governments hostile to American interests. He sent the CIA to Latin America, Africa, and the Middle East.
The strategy of forcing change in the major policies of a state, usually by replacing its ruling regime. It contrasts with containment, which means preventing the expansion of that state; and with détente, which means a working relationship with that state. Most of the discussions of rollback in the scholarly literature deal with United States foreign policy toward Communist countries during the Cold War. The rollback strategy was tried, and failed, in Korea in 1950, and in Cuba in 1961.
National Security Council Report 68 (NSC-68) was a 58-page top secret policy paper issued by the United States National Security Council on April 14, 1950, during the presidency of Harry S. Truman. It was one of the most significant statements of American policy in the Cold War. NSC-68 largely shaped U.S. foreign policy in the Cold War for the next 20 years, and involved a decision to make containment of Communist expansion a high priority. The strategy outlined in NSC-68 achieved ultimate victory, according to its proponents, with the collapse of Soviet power and the emergence of a "new world order" centered on American liberal-capitalist values. Truman officially signed NSC-68 on September 30, 1950.
George F. Kennan (1904–2005) was an American adviser, diplomat, political scientist and historian, best known as "the father of containment" and as a key figure in the emergence of the Cold War. He was a core member of the group of foreign policy elders known as "The Wise Men." In the late 1940s, his writings inspired the Truman Doctrine and the U.S. foreign policy of "containing" the Soviet Union, thrusting him into a lifelong role as a leading authority on the Cold War. His "Long Telegram" from Moscow in 1946 and the subsequent 1947 article "The Sources of Soviet Conduct" argued that the Soviet regime was inherently expansionist and that its influence had to be "contained" in areas of vital strategic importance to the United States.
G. F. Kennan had been stationed at the U.S. Embassy in Moscow as minister-counselor since 1944. Although highly critical of the Soviet system, the mood within the U.S. State Department was friendship towards the Soviets, since they were an important ally in the war against Nazi Germany. In February 1946, the United States Treasury asked the U.S. Embassy in Moscow why the Soviets were not supporting the newly created World Bank and the International Monetary Fund. In reply, Kennan wrote the Long Telegram outlining his opinions and views of the Soviets; it arrived in Washington on February 22, 1946. Among its most-remembered parts was that while Soviet power was impervious to the logic of reason, it was highly sensitive to the logic of force.
A United States policy using numerous strategies to prevent the spread of communism abroad. A component of the Cold War, this policy was a response to a series of moves by the Soviet Union to enlarge communist influence in Eastern Europe, China, Korea, and Vietnam. It represented a middle-ground position between détente and rollback.
Containment was a United States policy using numerous strategies to prevent the spread of communism abroad. This policy was a response to a series of moves by the Soviet Union to enlarge communist influence in Eastern Europe, China, Korea, and Vietnam. The basis of the doctrine was articulated in a 1946 cable by U.S. diplomat George F. Kennan, and the term is a translation of the French cordon sanitaire, used to describe Western policy toward the Soviet Union in the 1920s.
Containment is associated most strongly with the policies of U.S. President Harry Truman (1945–53), including the establishment of the North Atlantic Treaty Organization (NATO), a mutual defense pact. Further, President Lyndon Johnson (1963–69) cited containment as a justification for his policies in Vietnam, while President Richard Nixon (1969–74), working with his top adviser Henry Kissinger, rejected containment in favor of friendly relations with the Soviet Union and China. This détente, or relaxation of tensions, involved expanded trade and cultural contacts. Central programs under containment, including NATO and nuclear deterrence, remained in effect even after the end of the Cold War.
Following the 1917 communist revolution in Russia, there were calls by Western leaders to isolate the Bolshevik government, which seemed intent on promoting worldwide revolution. In March 1919, French Premier Georges Clemenceau called for a cordon sanitaire, or ring of non-communist states, to isolate the Soviet Union. Translating this phrase, U.S. President Woodrow Wilson called for a "quarantine." Both phrases compare communism to a contagious disease. Nonetheless, during World War II, the U.S. and the Soviet Union found themselves allied in opposition to the Axis powers.
Key State Department personnel grew increasingly frustrated with and suspicious of the Soviets as the war drew to a close. Averell Harriman, U.S. ambassador in Moscow, once a "confirmed optimist" regarding U.S.-Soviet relations, was disillusioned by what he saw as the Soviet betrayal of the 1944 Warsaw Uprising as well as by violations of the February 1945 Yalta Agreement concerning Poland. Harriman would later have significant influence in forming Truman's views on the Soviet Union. In February 1946, the U.S. State Department asked George F. Kennan, then at the U.S. Embassy in Moscow, why the Russians opposed the creation of the World Bank and the International Monetary Fund. He responded with a wide-ranging analysis of Russian policy now called the "Long Telegram". According to Kennan:
The Soviets perceived themselves to be in a state of perpetual war with capitalism;
The Soviets would use controllable Marxists in the capitalist world as allies;
Soviet aggression was not aligned with the views of the Russian people or with economic reality, but with historic Russian xenophobia and paranoia;
The Soviet government's structure prevented objective or accurate pictures of internal and external reality.
Clark Clifford and George Elsey produced a report elaborating on the Long Telegram and proposing concrete policy recommendations based on its analysis. This report, which recommended "restraining and confining" Soviet influence, was presented to Truman on September 24, 1946.
In March 1947, President Truman, a Democrat, asked the Republican-controlled Congress to appropriate $400 million in aid to the Greek and Turkish governments, then fighting Communist subversion. Truman pledged to "support free peoples who are resisting attempted subjugation by armed minorities or by outside pressures." This pledge became known as the Truman Doctrine. Portraying the issue as a mighty clash between "totalitarian regimes" and "free peoples", the speech marks the onset of the Cold War and the adoption of containment as official U.S. policy.
Truman followed up his speech with a series of measures to contain Soviet influence in Europe, including the Marshall Plan, or European Recovery Program, and NATO, a military alliance between the U.S. and Western European nations created in 1949. Because containment required detailed information about Communist moves, the government relied increasingly on the Central Intelligence Agency (CIA). Established by the National Security Act of 1947, the CIA conducted espionage in foreign lands, some of it visible, more of it secret. Truman approved a classified statement of containment policy called NSC 20/4 in November 1948, the first comprehensive statement of security policy ever created by the United States. The Soviet Union's first nuclear test in 1949 prompted the National Security Council to formulate a revised security doctrine. Completed in April 1950, it became known as NSC 68. It concluded that a massive military buildup was necessary to deal with the Soviet threat.
Many Republicans, including John Foster Dulles, concluded that Truman had been too timid. In 1952, Dulles called for rollback and the eventual "liberation" of eastern Europe. Dulles was named Secretary of State by incoming President Dwight Eisenhower, but Eisenhower's decision not to intervene during the Hungarian Uprising of 1956 made containment a bipartisan doctrine. President Eisenhower relied on clandestine CIA actions to undermine hostile governments and used economic and military foreign aid to strengthen governments supporting the American position in the Cold War.
Senator Barry Goldwater, the Republican candidate for president in 1964, challenged containment and asked, "Why not victory?" President Johnson, the Democratic nominee, answered that rollback risked nuclear war. Goldwater lost to Johnson in the general election by a wide margin. Johnson adhered closely to containment during the Vietnam War. Nixon, who replaced Johnson in 1969, referred to his foreign policy as détente, or a relaxation of tension. Although it continued to aim at restraining the Soviet Union, it was based on political realism, or thinking in terms of national interest, as opposed to crusades against communism or for democracy.
Source: Boundless. “Containment in Foreign Policy.” Boundless U.S. History. Boundless, 21 Jul. 2015. Retrieved 10 Feb. 2016 from https://www.boundless.com/u-s-history/textbooks/boundless-u-s-history-textbook/politics-and-culture-of-abundance-1943-1960-28/policy-of-containment-217/containment-in-foreign-policy-1204-9252/ | https://www.boundless.com/u-s-history/textbooks/boundless-u-s-history-textbook/politics-and-culture-of-abundance-1943-1960-28/policy-of-containment-217/containment-in-foreign-policy-1204-9252/ |
4 | The geologic record in stratigraphy, paleontology and other natural sciences refers to the entirety of the layers of rock strata — deposits laid down by volcanism or by deposition of sediment derived from weathering detritus (clays, sands etc.) including all its fossil content and the information it yields about the history of the Earth: its past climate, geography, geology and the evolution of life on its surface. According to the law of superposition, sedimentary and volcanic rock layers are deposited on top of each other. They harden over time to become a solidified (competent) rock column, that may be intruded by igneous rocks and disrupted by tectonic events.
Correlating the rock record
At a given locality on the Earth's surface, the rock column provides a cross section of the natural history of the area during the time covered by the age of the rocks. This is sometimes called the rock history and gives a window into the natural history of the location spanning many geological time units such as ages, epochs, or in some cases even multiple major geologic periods for that particular geographic region. The geologic record is nowhere entirely complete: geologic forces that in one age create a low-lying region accumulating deposits much like a layer cake may in the next age uplift that region, so that the same area is instead weathered and torn down by chemistry, wind, temperature, and water. In other words, at a given location the geologic record can be, and quite often is, interrupted as the ancient local environment is converted by geological forces into new landforms and features. Sediment core data at the mouths of large riverine drainage basins, some of which go 7 miles (11 km) deep, thoroughly support the law of superposition.
However, using widely occurring deposited layers trapped within rock columns at different locations, geologists have applied the law of superposition to piece together a system of units covering most of the geologic time scale. Where tectonic forces have uplifted one ridge, newly subjecting its folded and faulted strata to erosion and weathering, they have also created a nearby trough or structural basin at a relatively lower elevation that can accumulate additional deposits. By comparing overall formations, geologic structures, and local strata, calibrated against those layers that are widespread, a nearly complete geologic record has been assembled since the 17th century.
Discordant strata example
Correcting for discordancies can be done in a number of ways and utilizing a number of technologies or field research results from studies in other disciplines.
In this example, the study of layered rocks and the fossils they contain is called biostratigraphy and utilizes amassed geobiology and paleobiological knowledge. Fossils can be used to recognize rock layers of the same or different geologic ages, thereby coordinating locally occurring geologic stages to the overall geologic timeline.
The pictures of the fossils of monocellular algae in this USGS figure were taken with a scanning electron microscope and have been magnified 250 times.
In the U.S. state of South Carolina three marker species of fossil algae are found in a core of rock whereas in Virginia only two of the three species are found in the Eocene Series of rock layers spanning three stages and the geologic ages from 37.2–55.8 Ma.
Comparing the discordant section with the full rock column shows that the third species does not occur there and that the corresponding portion of the local rock record, from the early part of the middle Eocene, is missing. This is one form of discordancy, and it illustrates the means geologists use to compensate for local variations in the rock record. With the two remaining marker species it is possible to correlate rock layers of the same age (early Eocene and the latter part of the middle Eocene) in both South Carolina and Virginia, and thereby "calibrate" the local rock column into its proper place in the overall geologic record.
|Segments of rock (strata) in chronostratigraphy||Time spans in geochronology||Notes|
|Eonothem||Eon||4 total, half a billion years or more|
|Erathem||Era||10 defined, several hundred million years|
|System||Period||22 defined, tens to ~one hundred million years|
|Series||Epoch||34 defined, tens of millions of years|
|Stage||Age||99 defined, millions of years|
|Chronozone||Chron||subdivision of an age, not used by the ICS timescale|
Lithology vs paleontology
Consequently, as the picture of the overall rock record emerged, and discontinuities and similarities in one place were cross-correlated with those in others, it became useful to subdivide the overall geologic record into component sub-sections representing groups of layers of different sizes within known geologic time, from the shortest unit (the stage) up to the thickest body of strata (the eonothem) and the longest time span (the eon). Concurrent work in other natural science fields required that a time continuum be defined, and earth scientists decided to coordinate the system of rock layers and their identification criteria with the geologic time scale. This gives the pairing between the physical layers of the left column and the time units of the center column in the table above.
Well stratified and fully exposed Dinosaur Park formations (in Dinosaur Provincial Park, Alberta, Canada) and like formations that extend for over a thousand miles exposing eons of rock history through numerous wind and water exposed strata layers— which in the Colorado Plateau are miles thick.
New Orleans after Hurricane Katrina: unlithified sediment layers laid down in historic times. This cut, plowed and excavated clear by the Army Corps of Engineers while restoring the levees, was an attempt to find bedrock near a residential street by the lower breach of the London Avenue Canal; it shows a nascent stratigraphy in the large deposits of silt left by flooding in recent earth history.
Three eras of deposition and two discordancies are visible in this highway cut in the Netherlands. Note the color and slight angular change between the lower red bed layering and the middle strata. The upper strata are tilted yet again relative to the bottom layerings well demonstrating the cycles this land formation went through as part of the sea floor.
An ancient rockfall which protected the rock records beneath its impact site from further large scale erosion. Taken along Burr Trail, Grand Staircase-Escalante National Monument, Utah, USA.
Sediment core, taken with a gravity corer by the research vessel POLARSTERN in the South Atlantic; light/dark-coloured changes are due to climatic variation of the Quaternary; basis age of the core is about 1 million years.
| https://en.wikipedia.org/wiki/Geologic_record |
4.03125 |
But there is more to the story. The Bell Labs patent wasn't issued until 1960. And the race to build the first working laser--a ruby crystal that emitted pulses of light at 0.69 microns--was won that same year by Theodore Maiman of Hughes Aircraft Co. Meanwhile, a graduate student at Columbia University named Gordon Gould had scribbled down ideas for a laser in 1957. Gould had his notebook, in which he coined the term laser, notarized. But he didn't apply for patents because he believed he had to build a working model first. By the time he did file in 1959, the Bell Labs application was already being considered by the Patent Office. Only after 20 years of bitter litigation did Gould win several key patents, beginning in 1977.
But whatever the official date, the work of these four laser pioneers sparked a technological revolution. In just four decades, the laser has become so commonplace that few people even realize that laser, now used as a noun, had its beginning as an acronym for "light amplification by stimulated emission of radiation."
In their paper, Townes and Schawlow presented the idea of arranging mirrors at each end of a cavity containing a gas or substance that could be excited to emit light. The mirrors would bounce the light back and forth so that all the photons would be moving in one direction. The size of the mirrors and the cavity could be adjusted to produce one frequency of light.
Theodore Maiman was one reader of the article who decided to see if he could test the idea by building a working laser. He selected a crystal of ruby and coated each end with a silver mirror. One mirror was thinner so that some of the light could escape as a beam. The ruby was surrounded by a flash tube to provide the energy to stimulate the atoms in the crystal. The entire assembly was encased in a polished aluminum tube. It worked.
When the team at Bell Labs heard of Maiman's success, they dispatched Amnon Yariv, one of their colleagues who was vacationing in San Diego, to rush to Maiman's lab in Malibu. He returned with the bad (good) news. The laser now existed as more than a theory proposed by Albert Einstein in 1917. Chagrined but not defeated, the researchers at Bell Labs soon bested Hughes with a laser that ran continuously, rather than in pulses (they replaced the flash lamp with an arc lamp).
The contentious genesis of the laser did almost nothing to slow the rapid pace of its development and commercialization. Patent piled on patent, Bell Labs and others churned out a steady stream of innovations that continues unabated today. The importance did not pass unnoticed. In fewer than six years from the publication of the Schawlow-Townes paper, the 1964 Nobel Prize in Physics was shared by Townes and by Nicolay Gennadiyevich Basov and Aleksandr Mikhailovich Prokhorov of the Lebedev Institute in Moscow for their early work in masers ("microwave amplification by stimulated emission of radiation") and the subsequent development of the optical maser, or laser.
What the 1964 Nobel committee failed to note was the vast potential for practical applications of the laser, emphasizing instead its opening of "new possibilities for studying the interaction of radiation and matter." That, of course was very true. A recent example is the 1997 Nobel in Physics to Steven Chu of Stanford University for his use of laser light to trap and cool atoms to nearly absolute zero.
But today, lasers--from semiconductor devices as tiny as grains of sand to experimental giants the size of buildings--are used in hundreds of applications, from cutting and welding metal to repairing damage to delicate tissue of the eyes. They are at the heart of many scientific instruments and are guiding surveyors and sighting weapons. With their light guided through threads of glass, they have revolutionized communications. Lasers scan bar codes at the supermarket and record sound on compact disks.
It's already getting hard to imagine what life was like without them. So, happy 40th--give or take a few months.
OPTICAL MASERS, Arthur L. Schawlow, Scientific American, June 1961
ADVANCES IN OPTICAL MASERS, Arthur L. Schawlow, Scientific American, July 1963
THE PRESSURE OF LASER LIGHT, Arthur Ashkin, Scientific American, February 1972
LASER TRAPPING OF NEUTRAL PARTICLES Steven Chu, Scientific American, February 1992.
| http://www.scientificamerican.com/article/the-laser-at-about-40/ |
4.75 | The parts of speech explain the ways words can be used in various contexts. Every word in the English language functions as at least one part of speech; many words can serve, at different times, as two or more parts of speech, depending on the context.
1. Understand that nouns and their 'partners' are the following:
- A noun is a word or phrase that names a person, place, thing, or idea (Fred, New York, table, beauty, execution). A noun may be used as the subject of a verb, the object of a verb, an identifying noun, the object of a preposition, or an appositive (an explanatory phrase coupled with a subject or object).
- An adjective is a word or combination of words that modifies a noun or pronoun (blue-green, central, half-baked, temporary). E.g. in use, "blue-green handbag" (modifies the noun 'handbag').
- A pronoun is a word that substitutes for a noun and refers to a person, place, thing, idea, or act that was mentioned previously or that can be inferred from the context of the sentence (he, she, it, that). A pronoun can also come before a noun, as a form of modification, e.g., "HER book", "her" being a possessive pronoun to indicate who the book (noun) belongs to.
- A preposition is a word or phrase that shows the relationship of a noun to another element in the sentence (at, by, in, to, from, with), e.g., "I put the money in my purse" - "in" is a preposition.
2. Think of a preposition as anything that a caterpillar can do to an apple. It can go in, on, under, around, through, below, or into the apple, for example. Those italicized words are all prepositions. They begin prepositional phrases, which always include a noun as the object of the preposition.
3. Know that a verb and its modifiers are the following:
- A verb is a word or phrase that expresses an action or a state of being. Verbs can be transitive, requiring an object (object = "her", in "I met her"), or intransitive, requiring only a subject ("The sun rises"). Some verbs, like feel, are both transitive ("Feel the fabric") and intransitive ("I feel cold", in which cold is an adjective and not an object).
- An adverb is a word that modifies a verb, an adjective, or another adverb (slowly, obstinately, much). E.g. in use, "I ran slowly" (modifying the verb 'ran').
4. Learn that connecting words and exclamation words are also parts of speech.
- A conjunction is a word that connects other words, phrases, or sentences (and, but, or, because). E.g. in use, "I like cats, but I don't like dogs" ("but", here, is a coordinating conjunction), "I went outside, although it was raining" ("although" is used as a subordinating conjunction).
- An interjection is a word, phrase, or sound used as an exclamation and capable of standing by itself (oh, Lord, damn, my goodness). Also known as an "ejaculation", e.g., "Balderdash!". It is not grammatically related to the rest of the sentence.
- Don't rely on these definitions alone. Restate the meanings and examples in your own words so that you understand them.
| http://www.wikihow.com/Identify-Parts-of-Speech |
4 | Gender, Sex, and Slavery
In this activity students read about slavery's effect on women from the perspectives of an enslaved woman and a plantation mistress. Then students create a dialogue between the two women.
Students will analyze two descriptions of the lives of enslaved women.
Students will describe how slavery affected women differently than men.
Students will create a dialogue between a slave and a slaveowner.
Step 1: Divide students into small groups. Have students read Sections V and VI ("Trials of Girlhood" and "Jealous Mistress") from Harriet Jacobs' Incidents in the Life of a Slave Girl. Ask students to free write their general impressions, including any aspects of the reading that surprised them.
Step 2: Have students read aloud and discuss the brief excerpts from Harriet Jacobs and Mary Boykin Chestnut ("A Plantation Mistress Decries 'A Monstrous System'"). In discussion, students should address the following questions:
Does Jacobs, a former slave, see female slaves as victims or perpetrators?
How does Chestnut, a plantation mistress, see female slaves?
Step 3: Ask each student to imagine and write a dialogue between Harriet Jacobs and Mary Chestnut.
Step 4: Based on their written dialogues and a careful study of the readings, ask students to assess gender roles and moral and sexual attitudes under slavery. In their groups, students should address the following:
How did sexual or moral attitudes differ for whites and blacks during slavery?
Do Jacobs and Chestnut believe that oppression differs for women and men under slavery? Do you agree?
Why do you think the authors wrote these passages? Who do you think were the audiences for these writings?
As students listen to their group members, they should make a list of common points of view and areas of difference between Jacobs and Chestnut. Then ask groups to reconvene and as an entire class, discuss the differences and similarities between the two women's views.
The institution of slavery permeated every aspect of American society before the Civil War, and it impacted the lives of women regardless of race or economic status. Defenders of slavery justified the institution on the grounds that there were innate racial differences between blacks and whites. These racial prejudices also helped define gender identities during the antebellum era. Slavery attached different sexual behaviors and traits to white and black women, which were then used as the basis for separate roles for white and black women in both the private and public spheres. Laws and social practices held that white women were fragile, moral, and sexually innocent, while black women were viewed as laborers and over-sexed beings. In the intimate setting of the household, such inequalities ensured that relations between white and black women were fraught with distrust, jealousy, and rage. Plantation mistresses were often well aware that the men in their lives took advantage of black slave women sexually; enslaved women could not escape white men's sexual attentions and rape was common. Many white slave mistresses used their "higher" racial standing to take out their frustrations about their confined role in society and their menfolk's infidelities, on black women. Black women had to cope with the unpredictable actions of both the master and mistress.
Creator | American Social History Project/Center for Media and Learning
Rights | Copyright American Social History Project/Center for Media and Learning This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.
Item Type | Teaching Activity
Cite This document | American Social History Project/Center for Media and Learning, “Gender, Sex, and Slavery,” HERB: Resources for Teachers, accessed February 6, 2016, http://herb.ashp.cuny.edu/items/show/1377. | http://herb.ashp.cuny.edu/items/show/1377 |
4.28125 | fault, in geology, fracture in the earth's crust in which the rock on one side of the fracture has measurable movement in relation to the rock on the other side. Faults on other planets and satellites of the solar system have also been recognized. Evidence of faults is found either at the surface (fault surface) or underground (fault plane). Faults are most evident in outcrops of sedimentary formations, where they conspicuously offset previously continuous strata. Movement along a fault plane may be vertical, horizontal, or oblique in direction, or it may consist in the rotation of one or both of the fault blocks, with most movements associated with mountain building and plate tectonics.
The two classes of faults are the dip-slip (up-and-down movement), which is further divided into normal and thrust (reverse) faults, and the strike-slip (horizontal movement parallel to the fault's strike). The San Andreas fault of California is of the latter type. In dip-slip faults the term "hanging wall" is used for the side that lies vertically above the other, called the "footwall." A fault in which the hanging wall moves down and the footwall is stationary is called a normal fault. Normal faults are formed by tensional, or pull-apart, forces. A fault in which the hanging wall is the upthrown side is called a thrust fault because the hanging wall appears to have been pushed up over the footwall. Such faults are formed by compressional forces that push rock together and are by far the most common of the dip-slip faults. All types of faults have been recognized on the ocean floor: normal faults occur in the rift valleys associated with mid-ocean ridges spreading at slow rates; strike-slip faults appear between the offset portions of mid-ocean ridges; and thrust faults occur at subducting plate boundaries.
Active faults, though they may not move for decades, can move many feet in a matter of seconds, producing an earthquake. The largest earthquakes occur along thrust faults. Some faults creep from a half inch to as much as 4 in. (1 to 10 cm) per year. Fault movements are measured using laser and other devices. Faults create interpretation problems for geologists by altering the relations of strata (see stratification), such as making the same rock layer offset in two vertical cross sections of a formation or making layers disappear altogether. Faults are often seen on the surface as topographical features, including offset streams, linear lakes, and fault scarps. | http://www.factmonster.com/encyclopedia/science/fault.html |
4.03125 | Air quality index
An air quality index (AQI) is a number used by government agencies to communicate to the public how polluted the air currently is or how polluted it is forecast to become. As the AQI increases, an increasingly large percentage of the population is likely to experience increasingly severe adverse health effects. Different countries have their own air quality indices, corresponding to different national air quality standards. Some of these are the Air Quality Health Index (Canada), the Air Pollution Index (Malaysia), and the Pollutant Standards Index (Singapore).
- 1 Definition and usage
- 2 Indices by location
- 2.1 Canada
- 2.2 Hong Kong
- 2.3 Mainland China
- 2.4 India
- 2.5 Mexico
- 2.6 Singapore
- 2.7 South Korea
- 2.8 United Kingdom
- 2.9 Europe
- 2.10 United States
- 3 See also
- 4 References
- 5 External links
Definition and usage
Computation of the AQI requires an air pollutant concentration over a specified averaging period, obtained from an air monitor or model. Taken together, concentration and time represent the dose of the air pollutant. Health effects corresponding to a given dose are established by epidemiological research. Air pollutants vary in potency, and the function used to convert from air pollutant concentration to AQI varies by pollutant. Air quality index values are typically grouped into ranges. Each range is assigned a descriptor, a color code, and a standardized public health advisory.
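A common way to implement this conversion, used in several of the national schemes described below, is a piecewise-linear function defined by concentration breakpoints; the overall AQI is then usually the worst pollutant's sub-index. The sketch below shows that general mechanism; the breakpoint values are placeholders for illustration, not any country's official table.

```python
# Generic AQI sub-index: piecewise-linear interpolation between breakpoints.
# The breakpoint table below is a placeholder for illustration only; real
# tables differ by country, pollutant, and averaging period.

def sub_index(concentration, breakpoints):
    """Map a pollutant concentration onto the index scale.

    breakpoints: list of (conc_low, conc_high, index_low, index_high)
    tuples covering contiguous concentration bands.
    """
    for c_lo, c_hi, i_lo, i_hi in breakpoints:
        if c_lo <= concentration <= c_hi:
            return i_lo + (i_hi - i_lo) * (concentration - c_lo) / (c_hi - c_lo)
    raise ValueError("concentration outside the defined breakpoint range")

# Placeholder 24-hour PM2.5 bands (µg/m³) mapped onto index bands
pm25_bands = [
    (0.0,    35.0,    0,  50),
    (35.0,   75.0,   50, 100),
    (75.0,  115.0,  100, 150),
    (115.0, 150.0,  150, 200),
]

print(round(sub_index(55.0, pm25_bands)))   # falls in the 35-75 band -> 75
```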
The AQI can increase due to an increase of air emissions (for example, during rush hour traffic or when there is an upwind forest fire) or from a lack of dilution of air pollutants. Stagnant air, often caused by an anticyclone, temperature inversion, or low wind speeds lets air pollution remain in a local area, leading to high concentrations of pollutants, chemical reactions between air contaminants and hazy conditions.
On a day when the AQI is predicted to be elevated due to fine particle pollution, an agency or public health organization might:
- advise sensitive groups, such as the elderly, children, and those with respiratory or cardiovascular problems to avoid outdoor exertion.
- declare an "action day" to encourage voluntary measures to reduce air emissions, such as using public transportation.
- recommend the use of masks to keep fine particles from entering the lungs
During a period of very poor air quality, such as an air pollution episode, when the AQI indicates that acute exposure may cause significant harm to the public health, agencies may invoke emergency plans that allow them to order major emitters (such as coal burning industries) to curtail emissions until the hazardous conditions abate.
Most air contaminants do not have an associated AQI. Many countries monitor ground-level ozone, particulates, sulfur dioxide, carbon monoxide and nitrogen dioxide, and calculate air quality indices for these pollutants.
The definition of the AQI in a particular nation reflects the discourse surrounding the development of national air quality standards in that nation. A website allowing government agencies anywhere in the world to submit their real-time air monitoring data for display using a common definition of the air quality index has recently become available.
Indices by location
Air quality in Canada has been reported for many years with provincial Air Quality Indices (AQIs). Significantly, AQI values reflect air quality management objectives, which are based on the lowest achievable emissions rate, and not exclusively on concern for human health. The Air Quality Health Index, or AQHI, is a scale designed to help people understand the impact of air quality on health. It is a health protection tool used to make decisions to reduce short-term exposure to air pollution by adjusting activity levels during increased levels of air pollution. The Air Quality Health Index also provides advice on how to improve air quality by proposing behavioural change to reduce the environmental footprint. This index pays particular attention to people who are sensitive to air pollution. It provides them with advice on how to protect their health during air quality levels associated with low, moderate, high and very high health risks.
The Air Quality Health Index provides a number from 1 to 10+ to indicate the level of health risk associated with local air quality. On occasion, when the amount of air pollution is abnormally high, the number may exceed 10. The AQHI provides a local air quality current value as well as a local air quality maximums forecast for today, tonight, and tomorrow, and provides associated health advice.
|Risk:||Low (1–3)||Moderate (4–6)||High (7–10)||Very high (above 10)|
|Health Risk||Air Quality Health Index||Health Messages|
|At Risk Population||General Population|
|Low||1–3||Enjoy your usual outdoor activities.||Ideal air quality for outdoor activities|
|Moderate||4–6||Consider reducing or rescheduling strenuous activities outdoors if you are experiencing symptoms.||No need to modify your usual outdoor activities unless you experience symptoms such as coughing and throat irritation.|
|High||7–10||Reduce or reschedule strenuous activities outdoors. Children and the elderly should also take it easy.||Consider reducing or rescheduling strenuous activities outdoors if you experience symptoms such as coughing and throat irritation.|
|Very high||Above 10||Avoid strenuous activities outdoors. Children and the elderly should also avoid outdoor physical exertion.||Reduce or reschedule strenuous activities outdoors, especially if you experience symptoms such as coughing and throat irritation.|
On 30 December 2013, Hong Kong replaced the Air Pollution Index with a new index called the Air Quality Health Index. This index is on a scale of 1 to 10+ and considers four air pollutants: ozone, nitrogen dioxide, sulphur dioxide, and particulate matter (including PM10 and PM2.5). For any given hour the AQHI is calculated from the sum of the percentage excess risk of daily hospital admissions attributable to the 3-hour moving average concentrations of these four pollutants. The AQHIs are grouped into five health risk categories, each with health advice provided:
|Health risk category||AQHI|
|Low||1–3|
|Moderate||4–6|
|High||7|
|Very high||8–10|
|Serious||10+|
Each of the health risk categories has advice associated with it. At the low and moderate levels the public are advised that they can continue normal activities. For the high category, children, the elderly and people with heart or respiratory illnesses are advised to reduce outdoor physical exertion. Above this (very high or serious) the general public are also advised to reduce or avoid outdoor physical exertion.
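A minimal sketch of the added-risk calculation described above is given below. The beta (risk) coefficients are placeholders chosen only to show the structure of the calculation; the officially published coefficients, and the banding of the resulting %AR into AQHI values, are not reproduced here.

```python
# Sketch of a Hong Kong AQHI-style calculation: sum the percentage excess risk
# (%AR) of hospital admissions attributable to each pollutant's 3-hour mean.
# The beta coefficients below are placeholders, NOT the official values.
import math

BETA = {             # per µg/m³; placeholder magnitudes only
    "NO2": 4.5e-4,
    "SO2": 1.4e-4,
    "O3":  5.1e-4,
    "PM":  2.8e-4,   # particulate term (PM10 or PM2.5)
}

def percent_excess_risk(pollutant, conc_3h_mean):
    """%AR contributed by one pollutant at its 3-hour mean concentration."""
    return (math.exp(BETA[pollutant] * conc_3h_mean) - 1.0) * 100.0

def total_added_risk(no2, so2, o3, pm):
    return sum(percent_excess_risk(p, c)
               for p, c in (("NO2", no2), ("SO2", so2), ("O3", o3), ("PM", pm)))

# Example 3-hour mean concentrations in µg/m³ (assumed values)
ar = total_added_risk(no2=80.0, so2=10.0, o3=60.0, pm=45.0)
print(f"total %AR = {ar:.2f}")   # this %AR is then banded into AQHI 1 to 10+
```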
China's Ministry of Environmental Protection (MEP) is responsible for measuring the level of air pollution in China. As of 1 January 2013, MEP monitors daily pollution levels in 163 of its major cities. The AQI level is based on the levels of six atmospheric pollutants, namely sulfur dioxide (SO2), nitrogen dioxide (NO2), suspended particulates smaller than 10 μm in aerodynamic diameter (PM10), suspended particulates smaller than 2.5 μm in aerodynamic diameter (PM2.5), carbon monoxide (CO), and ozone (O3), measured at the monitoring stations throughout each city.
An individual score (IAQI) is assigned to each pollutant, and the final AQI is the highest of those six scores. The pollutants are measured over different averaging periods: PM2.5 and PM10 concentrations are measured as 24-hour averages, while SO2, NO2, O3, and CO are measured as hourly averages. The final AQI value is calculated each hour according to a formula published by the MEP.
The scale for each pollutant is non-linear, as is the final AQI score. Thus an AQI of 100 does not mean twice the pollution of AQI at 50, nor does it mean twice as harmful. While an AQI of 50 from day 1 to 182 and AQI of 100 from day 183 to 365 does provide an annual average of 75, it does not mean the pollution is acceptable even if the benchmark of 100 is deemed safe. This is because the benchmark is a 24-hour target. The annual average must match against the annual target. It is entirely possible to have safe air every day of the year but still fail the annual pollution benchmark.
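The reporting step itself is simple: once each pollutant's sub-index has been computed from its own breakpoint table and averaging period, the published AQI is the highest of the six. A small sketch, with made-up sub-index values standing in for the per-pollutant calculations:

```python
# China-style reporting: the overall AQI is the highest individual pollutant
# sub-index (IAQI). The sub-index values below are made up for illustration;
# each would come from that pollutant's breakpoint table and averaging period.

iaqi = {
    "PM2.5": 132,   # 24-hour average
    "PM10":   98,   # 24-hour average
    "SO2":    21,   # hourly average
    "NO2":    64,   # hourly average
    "O3":     80,   # hourly average
    "CO":     15,   # hourly average
}

overall_aqi = max(iaqi.values())
primary_pollutant = max(iaqi, key=iaqi.get)

print(f"AQI {overall_aqi}, primary pollutant: {primary_pollutant}")
# Against the category table below, 132 would be reported as "Lightly Polluted".
```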
AQI and Health Implications (HJ 633-2012)
|0–50||Excellent||No health implications.|
|51–100||Good||Few hypersensitive individuals should reduce outdoor exercise.|
|101–150||Lightly Polluted||Slight irritations may occur, individuals with breathing or heart problems should reduce outdoor exercise.|
|151–200||Moderately Polluted||Slight irritations may occur, individuals with breathing or heart problems should reduce outdoor exercise.|
|201–300||Heavily Polluted||Healthy people will be noticeably affected. People with breathing or heart problems will experience reduced endurance in activities. These individuals and elders should remain indoors and restrict activities.|
|300+||Severely Polluted||Healthy people will experience reduced endurance in activities. There may be strong irritation and other symptoms, which may trigger other illnesses. Elders and the sick should remain indoors and avoid exercise. Healthy individuals should avoid outdoor activities.|
The Minister for Environment, Forests & Climate Change, Shri Prakash Javadekar, launched the National Air Quality Index (AQI) in New Delhi on 17 September 2014 under the Swachh Bharat Abhiyan. It is outlined as 'One Number - One Colour - One Description' for the common man to judge the air quality within his vicinity. The index constitutes part of the Government's mission to introduce a culture of cleanliness. Institutional and infrastructural measures are being undertaken to ensure that the mandate of cleanliness is fulfilled across the country, and the Ministry of Environment, Forests & Climate Change proposed to discuss air quality issues with the Ministry of Human Resource Development in order to include the issue in the sensitisation programme in the course curriculum.
While the earlier index was limited to three indicators, the current index has been made more comprehensive by the addition of five parameters, so that air quality is now measured against eight parameters. The Ministry's recent initiatives aim at balancing environmental conservation and development, as air pollution has been a matter of environmental and health concern, particularly in urban areas.
The Central Pollution Control Board, along with the State Pollution Control Boards, has been operating the National Air Monitoring Program (NAMP), covering 240 cities of the country. In addition, continuous monitoring systems that provide data on a near real-time basis are installed in a few cities. They provide information on air quality in the public domain in simple terms that are easily understood by a lay person. The Air Quality Index (AQI) is one such tool for effective dissemination of air quality information to people. Accordingly, an Expert Group comprising medical professionals, air quality experts, academia, advocacy groups, and SPCBs was constituted, and a technical study was awarded to IIT Kanpur. IIT Kanpur and the Expert Group recommended an AQI scheme in 2014.
There are six AQI categories, namely Good, Satisfactory, Moderately polluted, Poor, Very Poor, and Severe. The proposed AQI will consider eight pollutants (PM10, PM2.5, NO2, SO2, CO, O3, NH3, and Pb) for which short-term (up to 24-hourly averaging period) National Ambient Air Quality Standards are prescribed. Based on the measured ambient concentrations, the corresponding standards, and the likely health impact, a sub-index is calculated for each of these pollutants. The worst sub-index determines the overall AQI. Associated likely health impacts for different AQI categories and pollutants have also been suggested, with primary inputs from the medical expert members of the group. The AQI values and corresponding ambient concentrations (health breakpoints) as well as associated likely health impacts for the identified eight pollutants are as follows:
|AQI Category (Range)||PM10 (24hr)||PM2.5 (24hr)||NO2 (24hr)||O3 (8hr)||CO (8hr)||SO2 (24hr)||NH3 (24hr)||Pb (24hr)|
|Moderately polluted (101-200)||101-250||61-90||81-180||101-168||2.1-10||81-380||401-800||1.1-2.0|
|Very poor (301-400)||351-430||121-250||281-400||209-748||17-34||801-1600||1200-1800||3.1-3.5|
|AQI||Associated Health Impacts|
|Good (0-50)||Minimal impact|
|Satisfactory (51-100)||May cause minor breathing discomfort to sensitive people.|
|Moderately polluted (101–200)||May cause breathing discomfort to people with lung disease such as asthma, and discomfort to people with heart disease, children and older adults.|
|Poor (201-300)||May cause breathing discomfort to people on prolonged exposure, and discomfort to people with heart disease.|
|Very poor (301-400)||May cause respiratory illness to the people on prolonged exposure. Effect may be more pronounced in people with lung and heart diseases.|
|Severe (401-500)||May cause respiratory impact even on healthy people, and serious health impacts on people with lung/heart disease. The health impacts may be experienced even during light physical activity.|
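As a worked example of how the breakpoints above are applied (a sketch, assuming the usual linear interpolation within a band and using only the 'Moderately polluted' row shown in the breakpoint table): a 24-hour PM2.5 reading of 75 µg/m³ falls in the 61-90 band, which maps onto the 101-200 index range, giving a PM2.5 sub-index of roughly 149.

```python
# Worked example: PM2.5 sub-index under the Indian scheme, using the
# "Moderately polluted" row shown above (PM2.5 61-90 -> index 101-200)
# and assuming linear interpolation within the band.

c_lo, c_hi = 61.0, 90.0     # PM2.5 band edges, µg/m³ (from the table above)
i_lo, i_hi = 101.0, 200.0   # corresponding index band

pm25 = 75.0                 # example 24-hour average reading

sub_index = i_lo + (i_hi - i_lo) * (pm25 - c_lo) / (c_hi - c_lo)
print(round(sub_index))     # ~149 -> "Moderately polluted"
```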
The air quality in Mexico City is reported in IMECAs. The IMECA is calculated using the measurements of average times of the chemicals ozone (O3), sulphur dioxide (SO2), nitrogen dioxide (NO2), carbon monoxide (CO) and particles smaller than 10 micrometers (PM10).
Singapore uses the Pollutant Standards Index to report on its air quality, with details of the calculation similar but not identical to those used in Malaysia and Hong Kong. The PSI chart below is grouped by index values and descriptors, according to the National Environment Agency.
|PSI||Descriptor||General Health Effects|
|51–100||Moderate||Few or none for the general population|
|101–200||Unhealthy||Mild aggravation of symptoms among susceptible persons i.e. those with underlying conditions such as chronic heart or lung ailments; transient symptoms of irritation e.g. eye irritation, sneezing or coughing in some of the healthy population.|
|201–300||Very Unhealthy||Moderate aggravation of symptoms and decreased tolerance in persons with heart or lung disease; more widespread symptoms of transient irritation in the healthy population.|
|301–400||Hazardous||Early onset of certain diseases in addition to significant aggravation of symptoms in susceptible persons; and decreased exercise tolerance in healthy persons.|
|Above 400||Hazardous||PSI levels above 400 may be life-threatening to ill and elderly persons. Healthy people may experience adverse symptoms that affect normal activity.|
The Ministry of Environment of South Korea uses the Comprehensive Air-quality Index (CAI) to describe the ambient air quality based on the health risks of air pollution. The index aims to help the public easily understand the air quality and protect people's health. The CAI is on a scale from 0 to 500, which is divided into six categories. The higher the CAI value, the greater the level of air pollution. The CAI value is the highest of the sub-index values calculated for the five air pollutants. The index also has associated health effects and a colour representation of the categories as shown below.
|0–50||Good||A level that will not impact patients suffering from diseases related to air pollution.|
|51–100||Moderate||A level that may have a meager impact on patients in case of chronic exposure.|
|101–150||Unhealthy for sensitive groups||A level that may have harmful impacts on patients and members of sensitive groups.|
|151–250||Unhealthy||A level that may have harmful impacts on patients and members of sensitive groups (children, aged or weak people), and also cause the general public unpleasant feelings.|
|251–500||Very unhealthy||A level that may have a serious impact on patients and members of sensitive groups in case of acute exposure.|
The N Seoul Tower on Namsan Mountain in central Seoul, South Korea, is illuminated in blue from sunset to 23:00 (22:00 in winter) on days when the air quality in Seoul is 45 or less. During the spring of 2012, the Tower was lit up for 52 days, which is four days more than in 2011.
The most commonly used air quality index in the UK is the Daily Air Quality Index recommended by the Committee on Medical Effects of Air Pollutants (COMEAP). This index has ten points, which are further grouped into 4 bands: low, moderate, high and very high. Each of the bands comes with advice for at-risk groups and the general population.
|Air pollution banding||Value||Health messages for At-risk individuals||Health messages for General population|
|Low||1–3||Enjoy your usual outdoor activities.||Enjoy your usual outdoor activities.|
|Moderate||4–6||Adults and children with lung problems, and adults with heart problems, who experience symptoms, should consider reducing strenuous physical activity, particularly outdoors.||Enjoy your usual outdoor activities.|
|High||7–9||Adults and children with lung problems, and adults with heart problems, should reduce strenuous physical exertion, particularly outdoors, and particularly if they experience symptoms. People with asthma may find they need to use their reliever inhaler more often. Older people should also reduce physical exertion.||Anyone experiencing discomfort such as sore eyes, cough or sore throat should consider reducing activity, particularly outdoors.|
|Very High||10||Adults and children with lung problems, adults with heart problems, and older people, should avoid strenuous physical activity. People with asthma may find they need to use their reliever inhaler more often.||Reduce physical exertion, particularly outdoors, especially if you experience symptoms such as cough or sore throat.|
The index is based on the concentrations of 5 pollutants: Ozone, Nitrogen Dioxide, Sulphur Dioxide, PM2.5 (particles with an aerodynamic diameter less than 2.5 μm) and PM10. The breakpoints between index values are defined for each pollutant separately, and the overall index is defined as the maximum of the individual pollutant indices. Different averaging periods are used for different pollutants.
|Index||Ozone, Running 8 hourly mean (μg/m3)||Nitrogen Dioxide, Hourly mean (μg/m3)||Sulphur Dioxide, 15 minute mean (μg/m3)||PM2.5 Particles, 24 hour mean (μg/m3)||PM10 Particles, 24 hour mean (μg/m3)|
|10||≥ 241||≥ 601||≥ 1065||≥ 71||≥ 101|
To present the air quality situation in European cities in a comparable and easily understandable way, all detailed measurements are transformed into a single relative figure: the Common Air Quality Index (or CAQI). Three different indices have been developed by Citeair to enable comparison across three different time scales:
- An hourly index, which describes the air quality today, based on hourly values and updated every hour,
- A daily index, which stands for the general air quality situation of yesterday, based on daily values and updated once a day,
- An annual index, which represents the city's general air quality conditions throughout the year, compared to European air quality norms. This index is based on the pollutants' annual averages compared to the annual limit values, and is updated once a year.
However, the proposed indices and the supporting common web site www.airqualitynow.eu are designed to give a dynamic picture of the air quality situation in each city but not for compliance checking.
The hourly and daily common indices
These indices have 5 levels on a scale from 0 (very low) to > 100 (very high); they are a relative measure of the amount of air pollution. They are based on 3 pollutants of major concern in Europe (PM10, NO2, O3) and can take into account up to 3 additional pollutants (CO, PM2.5 and SO2) where data are also available.
The calculation of the index is based on a review of a number of existing air quality indices, and it reflects EU alert threshold levels or daily limit values as much as possible. In order to make cities more comparable, independent of the nature of their monitoring network, two situations are defined:
- Background, representing the general situation of the given agglomeration (based on urban background monitoring sites),
- Roadside, being representative of city streets with a lot of traffic, (based on roadside monitoring stations)
The index values are updated hourly (for those cities that supply hourly data) and yesterday's daily indices are presented.
Common air quality index legend:
The common annual air quality index
The common annual air quality index provides a general overview of the air quality situation in a given city throughout the year, with regard to the European norms.
It is also calculated both for background and traffic conditions but its principle of calculation is different from the hourly and daily indices. It is presented as a distance to a target index, this target being derived from the EU directives (annual air quality standards and objectives):
- If the index is higher than 1: for one or more pollutants the limit values are not met.
- If the index is below 1: on average the limit values are met.
The annual index aims to better take into account long-term exposure to air pollution, based on the distance to the target set by the EU annual norms; those norms are in most cases linked to recommendations and health protection thresholds set by the World Health Organisation.
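As a rough illustration of this "distance to target" idea (a simplified reading of the description above, not Citeair's published formula), a pollutant's contribution can be thought of as the ratio of its annual mean concentration to the corresponding EU annual limit value:

$$ \text{annual sub-index}_p \approx \frac{\overline{C}_p \ (\text{annual mean})}{\text{EU annual limit value for } p} $$

A ratio above 1 then means the limit value for that pollutant is not met, and a ratio below 1 means it is met, matching the interpretation given above.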
The United States Environmental Protection Agency (EPA) has developed an Air Quality Index that is used to report air quality. This AQI is divided into six categories indicating increasing levels of health concern. An AQI value over 300 represents hazardous air quality and below 50 the air quality is good.
The AQI is based on the five "criteria" pollutants regulated under the Clean Air Act: ground-level ozone, particulate matter, carbon monoxide, sulfur dioxide, and nitrogen dioxide. The EPA has established National Ambient Air Quality Standards (NAAQS) for each of these pollutants in order to protect public health. An AQI value of 100 generally corresponds to the level of the NAAQS for the pollutant. The Clean Air Act (USA) (1990) requires EPA to review its National Ambient Air Quality Standards every five years to reflect evolving health effects information. The Air Quality Index is adjusted periodically to reflect these changes.
Computing the AQI
The air quality index is a piecewise linear function of the pollutant concentration. At the boundary between AQI categories, there is a discontinuous jump of one AQI unit. To convert from concentration to AQI this equation is used:

$$ I = \frac{I_{high} - I_{low}}{C_{high} - C_{low}} \,(C - C_{low}) + I_{low} $$

where:
- I = the (Air Quality) index,
- C = the pollutant concentration,
- Clow = the concentration breakpoint that is ≤ C,
- Chigh = the concentration breakpoint that is ≥ C,
- Ilow = the index breakpoint corresponding to Clow,
- Ihigh = the index breakpoint corresponding to Chigh.
|O3 (ppb)||O3 (ppb)||PM2.5 (µg/m3)||PM10 (µg/m3)||CO (ppm)||SO2 (ppb)||NO2 (ppb)||AQI||AQI|
|Clow - Chigh (avg)||Clow - Chigh (avg)||Clow- Chigh (avg)||Clow - Chigh (avg)||Clow - Chigh (avg)||Clow - Chigh (avg)||Clow - Chigh (avg)||Ilow - Ihigh||Category|
|0-54 (8-hr)||-||0.0-12.0 (24-hr)||0-54 (24-hr)||0.0-4.4 (8-hr)||0-35 (1-hr)||0-53 (1-hr)||0-50||Good|
|55-70 (8-hr)||-||12.1-35.4 (24-hr)||55-154 (24-hr)||4.5-9.4 (8-hr)||36-75 (1-hr)||54-100 (1-hr)||51-100||Moderate|
|71-85 (8-hr)||125-164 (1-hr)||35.5-55.4 (24-hr)||155-254 (24-hr)||9.5-12.4 (8-hr)||76-185 (1-hr)||101-360 (1-hr)||101-150||Unhealthy for Sensitive Groups|
|86-105 (8-hr)||165-204 (1-hr)||55.5-150.4 (24-hr)||255-354 (24-hr)||12.5-15.4 (8-hr)||186-304 (1-hr)||361-649 (1-hr)||151-200||Unhealthy|
|106-200 (8-hr)||205-404 (1-hr)||150.5-250.4 (24-hr)||355-424 (24-hr)||15.5-30.4 (8-hr)||305-604 (24-hr)||650-1249 (1-hr)||201-300||Very Unhealthy|
|-||405-504 (1-hr)||250.5-350.4 (24-hr)||425-504 (24-hr)||30.5-40.4 (8-hr)||605-804 (24-hr)||1250-1649 (1-hr)||301-400||Hazardous|
|-||505-604 (1-hr)||350.5-500.4 (24-hr)||505-604 (24-hr)||40.5-50.4 (8-hr)||805-1004 (24-hr)||1650-2049 (1-hr)||401-500||Hazardous|
Suppose a monitor records a 24-hour average fine particle (PM2.5) concentration of 12.0 micrograms per cubic meter. Using the Good-category breakpoints from the table (Clow = 0.0, Chigh = 12.0, Ilow = 0, Ihigh = 50), the equation above results in an AQI of:

$$ I = \frac{50 - 0}{12.0 - 0.0}\,(12.0 - 0.0) + 0 = 50, $$

corresponding to air quality in the "Good" range. To convert an air pollutant concentration to an AQI, EPA has developed a calculator.
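The piecewise-linear conversion is easy to express in code. The following sketch reproduces the PM2.5 example above using the "Good" through "Unhealthy for Sensitive Groups" breakpoints from the table; it is an illustration, not the EPA's reference implementation, and it omits the truncation rules the EPA applies to reported concentrations.

```python
# Piecewise-linear AQI for 24-hour PM2.5, using breakpoints from the table
# above: (C_low, C_high, I_low, I_high) per category.
PM25_BREAKPOINTS = [
    (0.0, 12.0, 0, 50),      # Good
    (12.1, 35.4, 51, 100),   # Moderate
    (35.5, 55.4, 101, 150),  # Unhealthy for Sensitive Groups
]

def pm25_aqi(c):
    """Convert a 24-hour PM2.5 concentration (µg/m3) into an AQI value."""
    for c_low, c_high, i_low, i_high in PM25_BREAKPOINTS:
        if c_low <= c <= c_high:
            return round((i_high - i_low) / (c_high - c_low) * (c - c_low) + i_low)
    raise ValueError("concentration outside the listed breakpoints")

print(pm25_aqi(12.0))  # 50, the worked example above
print(pm25_aqi(35.0))  # 99, near the top of "Moderate"
```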
If multiple pollutants are measured at a monitoring site, then the largest or "dominant" AQI value is reported for the location. The ozone AQI between 100 and 300 is computed by selecting the larger of the AQI calculated with a 1-hour ozone value and the AQI computed with the 8-hour ozone value.
8-hour ozone averages do not define AQI values greater than 300; AQI values of 301 or greater are calculated with 1-hour ozone concentrations. Likewise, 1-hour SO2 values do not define AQI values greater than 200; AQI values of 201 or greater are calculated with 24-hour SO2 concentrations.
Real time monitoring data from continuous monitors are typically available as 1-hour averages. However, computation of the AQI for some pollutants requires averaging over multiple hours of data. (For example, calculation of the ozone AQI requires computation of an 8-hour average and computation of the PM2.5 requires a 24-hour average.) To accurately reflect the current air quality, the multi-hour average used for the AQI computation should be centered on the current time, but as concentrations of future hours are unknown and are difficult to estimate accurately, EPA uses surrogate concentrations to estimate these multi-hour averages. For reporting the PM2.5 AQI, this surrogate concentration is called the NowCast. The Nowcast is a particular type of weighted average constructed from the most recent 12-hours of PM2.5 data. EPA estimates eight-hour average ozone values in real time using the most recent 1-hour ozone average and the historical relationship between 1-hour maximum and 8-hour maximum values developed for each ozone monitoring site.
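For reference, the following is a rough sketch of how a NowCast-style weighted average over the last 12 hours of PM2.5 data can be computed. The weighting rule shown (a weight factor equal to the min/max ratio of the recent data, floored at 0.5, with weights decaying hour by hour) follows the EPA's published description of the PM NowCast as I understand it; treat the details, including the handling of missing hours that is omitted here, as assumptions rather than an authoritative specification.

```python
# Sketch of a NowCast-style weighted average over the last 12 hourly
# PM2.5 concentrations, most recent hour first. Details are assumptions.
def nowcast(hourly, floor=0.5):
    w = max(min(hourly) / max(hourly), floor)    # weight factor from data range
    num = sum(c * w**i for i, c in enumerate(hourly))
    den = sum(w**i for i in range(len(hourly)))
    return num / den

recent = [35, 30, 28, 25, 22, 20, 18, 15, 12, 10, 9, 8]  # hypothetical µg/m3
print(round(nowcast(recent), 1))  # weights the most recent hours most heavily
```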
Public Availability of the AQI
Real time monitoring data and forecasts of air quality that are color-coded in terms of the air quality index are available from EPA's AirNow web site. Historical air monitoring data including AQI charts and maps are available at EPA's AirData website.
History of the AQI
The AQI made its debut in 1968, when the National Air Pollution Control Administration undertook an initiative to develop an air quality index and to apply the methodology to Metropolitan Statistical Areas. The impetus was to draw public attention to the issue of air pollution and indirectly push responsible local public officials to take action to control sources of pollution and enhance air quality within their jurisdictions.
Jack Fensterstock, the head of the National Inventory of Air Pollution Emissions and Control Branch, was tasked to lead the development of the methodology and to compile the air quality and emissions data necessary to test and calibrate resultant indices.
The initial iteration of the air quality index used standardized ambient pollutant concentrations to yield individual pollutant indices. These indices were then weighted and summed to form a single total air quality index. The overall methodology could use concentrations that are taken from ambient monitoring data or are predicted by means of a diffusion model. The concentrations were then converted into a standard statistical distribution with a preset mean and standard deviation. The resultant individual pollutant indices are assumed to be equally weighted, although values other than unity can be used. Likewise, the index can incorporate any number of pollutants although it was only used to combine SOx, CO, and TSP because of a lack of available data for other pollutants.
While the methodology was designed to be robust, the practical application for all metropolitan areas proved to be inconsistent due to the paucity of ambient air quality monitoring data, lack of agreement on weighting factors, and non-uniformity of air quality standards across geographical and political boundaries. Despite these issues, the publication of lists ranking metropolitan areas achieved the public policy objectives and led to the future development of improved indices and their routine application.
- "International Air Quality". Retrieved 20 August 2015.
- National Weather Service Corporate Image Web Team. "NOAA's National Weather Service/Environmental Protection Agency - United States Air Quality Forecast Guidance". Retrieved 20 August 2015.
- "Step 2 - Dose-Response Assessment". Retrieved 20 August 2015.
- Myanmar government (2007). "Haze". Archived from the original on 27 January 2007. Retrieved 2007-02-11.
- "Air Quality Index - American Lung Association". American Lung Association. Retrieved 20 August 2015.
- "Spare the Air - Summer Spare the Air". Retrieved 20 August 2015.
- "FAQ: Use of masks and availability of masks". Retrieved 20 August 2015.
- "Air Quality Index (AQI) - A Guide to Air Quality and Your Health". US EPA. 9 December 2011. Retrieved 8 August 2012.
- Jay Timmons (13 August 2014). "The EPA's Latest Threat to Economic Growth". WSJ. Retrieved 20 August 2015.
- "World Air Quality Index". Retrieved 20 August 2015.
- "Environment Canada - Air - AQHI categories and explanations". Ec.gc.ca. 2008-04-16. Retrieved 2011-11-11.
- Hsu, Angel. "China’s new Air Quality Index: How does it measure up?". Retrieved 8 February 2014.
- "Air Quality Health Index". Government of the Hong Kong Special Administrative Region. Retrieved 9 February 2014.
- "Focus on urban air quality daily". Archived from the original on 2004-10-25.
- "People's Republic of China Ministry of Environmental Protection Standard: Technical Regulation on Ambient Air Quality Index (Chinese PDF)" (PDF).
- Rama Lakshmi (17 October 2014). "India launches its own Air Quality Index. Can its numbers be trusted?". Washington Post. Retrieved 20 August 2015.
- "National Air Quality Index (AQI) launched by the Environment Minister AQI is a huge initiative under ‘Swachh Bharat’". Retrieved 20 August 2015.
- "India launches index to measure air quality". timesofindia-economictimes. Retrieved 20 August 2015.
- "::: Central Pollution Control Board :::". Retrieved 20 August 2015.
- "MEWR - Key Environment Statistics - Clean Air". App.mewr.gov.sg. 2011-06-08. Retrieved 2011-11-11.
- ."National Environment Agency - Calculation of PSI" (PDF). Retrieved 2012-06-15.
- "National Environment Agency". App2.nea.gov.sg. Retrieved 2011-11-11.
- "What's CAI". Air Korea. Retrieved 25 October 2015.
- "Improved Air Quality Reflected in N Seoul Tower". Chosun Ilbo. 18 May 2012. Retrieved 29 July 2012.
- COMEAP. "Review of the UK Air Quality Index". COMEAP website.
- "Daily Air Quality Index". Air UK Website. Defra.
- Garcia, Javier; Colosio, Joëlle (2002). Air-quality indices : elaboration, uses and international comparisons. Presses des MINES. ISBN 2-911762-36-3.
- "Indices definition". Air quality. Retrieved 9 August 2012.
- David Mintz (February 2009). Technical Assistance Document for the Reporting of Daily Air Quality – the Air Quality Index (AQI) (PDF). North Carolina: US EPA Office of Air Quality Planning and Standards. EPA-454/B-09-001. Retrieved 9 August 2012.
- Revised Air Quality Standards For Particle Pollution And Updates To The Air Quality Index (AQI) (PDF). North Carolina: US EPA Office of Air Quality Planning and Standards. 2013.
- "AQI Calculator: Concentration to AQI". Retrieved 9 August 2012.
- "AirNow API Documentation". Retrieved 20 August 2015.
- "How are your ozone maps calculated?". Retrieved 20 August 2015.
- "AirNow". Retrieved 9 August 2012..
- "AirData - US Environmental Protection Agency". Retrieved 20 August 2015.
- J.C Fensterstock et al., " The Development and Utilization of an Air Quality Index," Paper No. 69-73, presented at the 62nd Annual Meeting of the Air Pollution Control Administration, June 1969.
- CAQI in Europe- AirqualityNow website
- CAI at Airkorea.or.kr - website of South Korea Environmental Management Corp.
- AQI at airnow.gov - cross-agency U.S. Government site
- New Mexico Air Quality and API data - Example of how New Mexico Environment Department publishes their Air Quality and API data.
- AQI at Meteorological Service of Canada
- The UK Air Quality Archive
- API at JAS (Malaysian Department of Environment)
- API at Hong Kong - Environmental Protection Department of the Government of the Hong Kong Special Administrative Region
- San Francisco Bay Area Spare-the-Air - AQI explanation
- Malaysia Air Pollution Index
- AQI in Thailand provinces and in Bangkok
- Unofficial PM25 AQI in Hanoi, Vietnam | https://en.wikipedia.org/wiki/Air_Quality_Index |
4.09375 | At the center of our solar system is an enormous nuclear generator. The Earth revolves around this massive body at an average distance of 93 million miles (149.6 million kilometers). It's a star we call the sun. The sun provides us with the energy necessary for life. But could scientists create a miniaturized version here on Earth?
It's not just possible -- it's already been done. If you think of a star as a nuclear fusion machine, mankind has duplicated the nature of stars on Earth. But this revelation has qualifiers. The examples of fusion here on Earth are on a small scale and last for just a few seconds at most.
To understand how scientists can make a star, it's necessary to learn what stars are made of and how fusion works. The sun is about 75 percent hydrogen and 24 percent helium. Heavier elements make up the final percent of the sun's mass. The core of the sun is intensely hot -- temperatures are greater than 15 million degrees Kelvin (nearly 27 million degrees Fahrenheit or just under 15 million degrees Celsius).
At these temperatures, the hydrogen atoms absorb so much energy that they fuse together. This isn't a trivial matter. The nucleus of a hydrogen atom is a single proton. To fuse two protons together requires enough energy to overcome electromagnetic force. That's because protons are positively charged. If you're familiar with magnets, you know that similar charges repel each other. But if you have enough energy to overcome this force, you can fuse the two nuclei into one.
What you're left with after this initial fusion is deuterium, an isotope of hydrogen. It's an atom with one proton and one neutron. Fusing deuterium with hydrogen creates helium-3. Fusing two helium-3 atoms together creates helium-4 and two hydrogen atoms. If you break all that down, it essentially means that four hydrogen atoms fuse to create a single helium-4 atom.
Here's where energy comes into play. A helium-4 atom has less mass than four hydrogen atoms collectively. So where does that extra mass go? It's converted into energy. And as Einstein's famous equation tells us, energy is equal to the mass of an object times the speed of light squared. That means the mass of the tiniest particle is equivalent to an enormous amount of energy.
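To make that last point concrete, here is the standard back-of-the-envelope calculation for the hydrogen-to-helium step; the particle masses used are textbook values rather than figures from this article:

$$ \Delta m \approx 4\,(1.6726\times10^{-27}\,\text{kg}) - 6.6447\times10^{-27}\,\text{kg} \approx 4.6\times10^{-29}\,\text{kg} $$

$$ E = \Delta m\,c^{2} \approx 4.6\times10^{-29}\,\text{kg}\times(3.0\times10^{8}\,\text{m/s})^{2} \approx 4\times10^{-12}\,\text{J} \approx 26\,\text{MeV} $$

In other words, roughly 0.7 percent of the original mass disappears each time four hydrogen nuclei end up as one helium-4 nucleus, and that missing fraction is what the sun radiates as energy.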
So how can scientists create a star?
Creating enough energy to overcome electromagnetic force isn't easy but the United States managed to do it on Nov. 1, 1952. That's when Ivy Mike, the world's first hydrogen bomb, detonated on Elugelab Island. The bomb had two stages. The first stage was a fission bomb. Fission is the process of splitting a nucleus. It's the type of bomb the United States used on Nagasaki and Hiroshima to end World War II.
The fission bomb element of Ivy Mike was necessary to create the massive amount of energy required to overcome the electromagnetic force of hydrogen to fuse it into helium. Heat from the initial explosion transferred through the lead casing of the bomb to a flask containing liquid deuterium. A plutonium rod inside the flask acted as the ignition for the fusion reaction.
The resulting explosion was 10.4 megatons in size. It completely obliterated the island, leaving behind a crater 164 feet deep (nearly 50 meters) and 1.2 miles (1.9 kilometers) across [source: Brookings Institution]. For a brief moment, man had harnessed the power of the stars to create a weapon of immense power. The thermonuclear age had begun.
Laboratories around the world are now trying to find a way to harness fusion as an energy source. If they can find a way to create sustainable and controllable reactions, scientists could use fusion to provide massive amounts of power for millions of years. There's no shortage of fuel -- hydrogen is plentiful and the oceans have large amounts of deuterium in them.
But getting to the point where we can harness fusion for power is going to take years of research and billions of dollars in resources. The amount of power required to initiate fusion coupled with the intense heat created by the event make it difficult to build a facility capable of containing a reaction. Some scientists are looking at massive lasers as a way to ignite a fusion event. Others are exploring options with plasma -- the fourth state of matter. But no one has unlocked the secret just yet.
So, we can create a star on Earth -- at least for a short time. But it remains to be seen if we can sustain such a creation and harness its astounding energy.
More Great Links
- Brookings Institution. "The 'Mike' test, November 1, 1952." 2010. (May 20, 2010) http://www.brookings.edu/projects/archive/nucweapons/mike.aspx
- Cox, Brian. "Can we make a star on Earth?" BBC Horizons. February 2009. (May 19, 2010) http://www.bbc.co.uk/programmes/b00hr6bk
- Cox, Brian. "How to build a star on Earth." BBC News. Feb. 16, 2009. (May 18, 2010) http://news.bbc.co.uk/2/hi/sci/tech/7891787.stm
- Gray, Richard. "Scientists plan to ignite tiny man-made star." Telegraph. Dec. 27, 2008. (May 18, 2010) http://www.telegraph.co.uk/science/science-news/3981697/Scientists-plan-to-ignite-tiny-man-made-star.html
- Los Alamos National Labs. "Helium." Dec. 15, 2003. (May 18, 2010) http://periodic.lanl.gov/elements/2.html
- Los Alamos National Labs. "Hydrogen." Dec. 15, 2003. (May 18, 2010) http://periodic.lanl.gov/elements/1.html
- NASA. "Sun." World Book at NASA. (May 18, 2010) http://www.nasa.gov/worldbook/sun_worldbook.html
- The Astrophysics Spectator. "Hydrogen Fusion." Oct. 6, 2004. (May 19, 2010) http://www.astrophysicsspectator.com/topics/stars/FusionHydrogen.html | http://science.howstuffworks.com/create-star-on-earth.htm/printable |
4.03125 | Methane Explosion Warmed the Prehistoric Earth, Possible Again
A tremendous release of methane gas frozen beneath the sea floor heated the Earth by up to 13°F (7°C) 55 million years ago, a new NASA study confirms. NASA scientists used data from a computer simulation of the paleo-climate to better understand the role of methane in climate change. While most greenhouse gas studies focus on carbon dioxide, methane is 20 times more potent as a heat-trapping gas in the atmosphere.
In the last 200 years, atmospheric methane has more than doubled due to decomposing organic materials in wetlands and swamps and human-aided emissions from gas pipelines, coal mining, increases in irrigation, and livestock flatulence.
However, there is another source of methane, formed from decomposing organic matter in ocean sediments, frozen in deposits under the seabed.
"We understand that other greenhouse gases apart from carbon dioxide are important for climate change today," said Gavin Schmidt, the lead author of the study and a researcher at NASA's Goddard Institute for Space Studies in New York, NY and Columbia University's Center for Climate Systems Research. "This work should help quantify how important they have been in the past, and help estimate their effects in the future."
The study will be presented on December 12, 2001, at the American Geophysical Union (AGU) Fall Meeting in San Francisco, Calif.
Generally, cold temperatures and high pressure keep methane stable beneath the ocean floor, however, that might not always have been the case. A period of global warming, called the Late Paleocene Thermal Maximum (LPTM), occurred around 55 million years ago and lasted about 100,000 years. Current theory has linked this to a vast release of frozen methane from beneath the sea floor, which led to the earth warming as a result of increased greenhouse gases in the atmosphere.
A movement of continental plates, like the Indian subcontinent, may have initiated a release that led to the LPTM, Schmidt said. We know today that when the Indian subcontinent moved into the Eurasian continent, the Himalayas began forming. This uplift of tectonic plates would have decreased pressure in the sea floor, and may have caused the large methane release. Once the atmosphere and oceans began to warm, Schmidt added, it is possible that more methane thawed and bubbled out. Some scientists speculate current global heating could eventually lead to a similar scenario in the future if the oceans warm substantially.
When methane (CH4) enters the atmosphere, it reacts with hydroxyl (OH) radicals, molecules made of one oxygen and one hydrogen atom. The OH radicals combine with methane and break it up, creating carbon dioxide (CO2) and water vapor (H2O), both of which are greenhouse gases. Scientists previously assumed that all of the released methane would be converted to CO2 and water after about a decade. If that happened, the rise in CO2 would have been the biggest player in warming the planet. But when scientists tried to find evidence of increased CO2 levels to explain the rapid warming during the LPTM, none could be found.
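In simplified form, the chain the article describes begins with the hydroxyl radical attacking methane and ends, after several intermediate steps omitted here, with carbon dioxide and water:

$$ \text{CH}_4 + \text{OH} \rightarrow \text{CH}_3 + \text{H}_2\text{O} \qquad \text{(first step)} $$

$$ \text{CH}_4 + 2\,\text{O}_2 \rightarrow \text{CO}_2 + 2\,\text{H}_2\text{O} \qquad \text{(net result)} $$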
The models used in the new study show that when you greatly increase methane amounts, the OH quickly gets used up, and the extra methane lingers for hundreds of years, producing enough global warming to explain the LPTM climate.
"Ten years of methane is a blip, but hundreds of years of atmospheric methane is enough to warm up the atmosphere, melt the ice in the oceans, and change the whole climate system," Schmidt said. "So we may have solved a conundrum."
Schmidt said the study should help in understanding the role methane plays in current greenhouse warming.
"If you want to think about reducing future climate change, you also have to be aware of greenhouse gases other than carbon dioxide, like methane and chlorofluorocarbons," said Schmidt. "It gives a more rounded view, and in the short-term, it may end up being more cost-efficient to reduce methane in the atmosphere than it is to reduce carbon dioxide."
Schmidt, G.A., and D.T. Shindell 2003. Atmospheric composition, radiative forcing, and climate change as a consequence of a massive methane release from gas hydrates. Paleoceanography 18, no. 1, 1004, doi:10.1029/2002PA000757.
This article was derived from the NASA Goddard Space Flight Center Top Story. | http://www.giss.nasa.gov/research/news/20011210/ |
4.3125 | Order of magnitude
Orders of magnitude are written in powers of 10. For example, the order of magnitude of 1500 is 3, since 1500 may be written as 1.5 × 103.
Differences in order of magnitude can be measured on the logarithmic scale in "decades" (i.e., factors of ten). Examples of numbers of different magnitudes can be found at Orders of magnitude (numbers).
We say two numbers have the same order of magnitude if the big one divided by the little one is less than 10. For example, 23 and 82 have the same order of magnitude, but 23 and 820 do not.
Orders of magnitude are used to make approximate comparisons. If numbers differ by one order of magnitude, x is about ten times different in quantity than y. If values differ by two orders of magnitude, they differ by a factor of about 100. Two numbers of the same order of magnitude have roughly the same scale: the larger value is less than ten times the smaller value.
The order of magnitude of a number is, intuitively speaking, the number of powers of 10 contained in the number. More precisely, the order of magnitude of a number can be defined in terms of the common logarithm, usually as the integer part of the logarithm, obtained by truncation. For example, the number 4,000,000 has a logarithm (in base 10) of 6.602; its order of magnitude is 6. When truncating, a number of this order of magnitude is between 10^6 and 10^7. In a similar example, with the phrase "He had a seven-figure income", the order of magnitude is the number of figures minus one, so it is very easily determined without a calculator to be 6. An order of magnitude is an approximate position on a logarithmic scale.
An order-of-magnitude estimate of a variable whose precise value is unknown is an estimate rounded to the nearest power of ten. For example, an order-of-magnitude estimate for a variable between about 3 billion and 30 billion (such as the human population of the Earth) is 10 billion. To round a number to its nearest order of magnitude, one rounds its logarithm to the nearest integer. Thus 4,000,000, which has a logarithm (in base 10) of 6.602, has 7 as its nearest order of magnitude, because "nearest" implies rounding rather than truncation. For a number written in scientific notation, this logarithmic rounding scale requires rounding up to the next power of ten when the multiplier is greater than the square root of ten (about 3.162). For example, the nearest order of magnitude for 1.7 × 10^8 is 8, whereas the nearest order of magnitude for 3.7 × 10^8 is 9. An order-of-magnitude estimate is sometimes also called a zeroth order approximation.
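The two conventions just described (truncating the base-10 logarithm versus rounding it to the nearest integer) translate directly into code. A minimal sketch:

```python
import math

def order_of_magnitude(x):
    """Order of magnitude by truncation: the integer part of log10(x)."""
    return math.floor(math.log10(x))

def nearest_order_of_magnitude(x):
    """Nearest order of magnitude: round log10(x) to the nearest integer."""
    return round(math.log10(x))

print(order_of_magnitude(4_000_000))          # 6  (log10 ≈ 6.602, truncated)
print(nearest_order_of_magnitude(4_000_000))  # 7  (6.602 rounds up)
print(nearest_order_of_magnitude(1.7e8))      # 8  (multiplier below sqrt(10))
print(nearest_order_of_magnitude(3.7e8))      # 9  (multiplier above sqrt(10))
```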
An order-of-magnitude difference between two values is a factor of 10. For example, the mass of the planet Saturn is 95 times that of Earth, so Saturn is two orders of magnitude more massive than Earth. Order-of-magnitude differences are called decades when measured on a logarithmic scale.
Non-decimal orders of magnitude
Other orders of magnitude may be calculated using bases other than 10. The ancient Greeks ranked the nighttime brightness of celestial bodies by 6 levels in which each level was the fifth root of one hundred (about 2.512) as bright as the nearest weaker level of brightness, and thus the brightest level being 5 orders of magnitude brighter than the weakest indicates that it is (1001/5)5 or a factor of 100 times brighter.
The different decimal numeral systems of the world use a larger base to better envision the size of the number, and have created names for the powers of this larger base. The table shows what number each order of magnitude aims at for base 10 and for base 1,000,000. It can be seen that the order of magnitude is included in the number name in this example, because bi- means 2 and tri- means 3 (these make sense in the long scale only), and the suffix -illion tells that the base is 1,000,000. But the number names billion, trillion themselves (here with a different meaning than in the first chapter) are not names of the orders of magnitude, they are names of "magnitudes", that is the numbers 1,000,000,000,000 etc.
|order of magnitude||is log10 of||is log1,000,000 of||short scale||long scale|
SI units in the table at right are used together with SI prefixes, which were devised with mainly base 1000 magnitudes in mind. The IEC standard prefixes with base 1024 were invented for use in electronic technology.
The ancient apparent magnitude scale for the brightness of stars uses the base 100^(1/5) ≈ 2.512 and is reversed (larger magnitudes correspond to fainter objects). The modernized version has however turned into a logarithmic scale with non-integer values.
Extremely large numbers
For extremely large numbers, a generalized order of magnitude can be based on their double logarithm or super-logarithm. Rounding these downward to an integer gives categories between very "round numbers", rounding them to the nearest integer and applying the inverse function gives the "nearest" round number.
The double logarithm yields the categories:
- ..., 1.0023–1.023, 1.023–1.26, 1.26–10, 10–10^10, 10^10–10^100, 10^100–10^1000, ...
(the first two mentioned, and the extension to the left, may not be very useful, they merely demonstrate how the sequence mathematically continues to the left).
The super-logarithm yields the categories:
- negative numbers, 0–1, 1–10, 10–10^10, 10^10–10^10^10, 10^10^10–10^10^10^10, 10^10^10^10–10^10^10^10^10, etc. (see tetration)
The "midpoints" which determine which round number is nearer are in the first case:
- 1.076, 2.071, 1453, 4.20e31, 1.69e316,...
and, depending on the interpolation method, in the second case
- −.301, .5, 3.162, 1453, 1e1453, ... (see notation of extremely large numbers)
For extremely small numbers (in the sense of close to zero) neither method is suitable directly, but the generalized order of magnitude of the reciprocal can be considered.
Similar to the logarithmic scale, one can have a double logarithmic scale and a super-logarithmic scale. The intervals above all have the same length on them, with the "midpoints" actually midway. More generally, a point midway between two points corresponds to the generalised f-mean with f(x) the corresponding function log log x or slog x. In the case of log log x, this mean of two numbers (e.g. 2 and 16 giving 4) does not depend on the base of the logarithm, just like in the case of log x (geometric mean, 2 and 8 giving 4), but unlike in the case of log log log x (4 and 65536 giving 16 if the base is 2, but a different value otherwise).
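Written out, the generalised f-mean mentioned above is, for two points a and b,

$$ M_f(a,b) = f^{-1}\!\left(\frac{f(a)+f(b)}{2}\right). $$

With f(x) = log2 log2 x and a = 2, b = 16: f(2) = 0 and f(16) = 2, their average is 1, and applying the inverse of f gives 2^(2^1) = 4, which reproduces the "2 and 16 giving 4" example in the text.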
- Big O notation
- Names of large numbers
- Names of small numbers
- Number sense
- Orders of approximation
- Orders of magnitude (numbers)
- Asimov, Isaac The Measure of the Universe (1983)
- The Scale of the Universe 2 Interactive tool from Planck length 10−35 meters to universe size 1027
- Cosmos – an Illustrated Dimensional Journey from microcosmos to macrocosmos – from Digital Nature Agency
- Powers of 10, a graphic animated illustration that starts with a view of the Milky Way at 1023 meters and ends with subatomic particles at 10−16 meters.
- What is Order of Magnitude? | https://en.wikipedia.org/wiki/Order_of_magnitude |
4.0625 | The Neogrammarians (also Young Grammarians, German Junggrammatiker) were a German school of linguists, originally at the University of Leipzig, in the late 19th century who proposed the Neogrammarian hypothesis of the regularity of sound change. According to this hypothesis, a diachronic sound change affects simultaneously all words in which its environment is met, without exception. Verner's law is a famous example of the Neogrammarian hypothesis, as it resolved an apparent exception to Grimm's law. The Neogrammarian hypothesis was the first hypothesis of sound change to attempt to follow the principle of falsifiability according to scientific method. Subsequent researchers have questioned this hypothesis from two perspectives. First, adherents of lexical diffusion (where a sound change affects only a few words at first and then gradually spreads to other words) believe that some words undergo changes before others. Second, some believe that it is possible for sound changes to observe grammatical conditioning. Nonetheless, both of these challenges to exceptionlessness remain controversial, and many investigators continue to adhere to the Neogrammarian doctrine.
Other contributions of the Neogrammarians to general linguistics were:
- The object of linguistic investigation is not the language system, but rather the idiolect, that is, language as it is localized in the individual, and therefore is directly observable.
- Autonomy of the sound level: being the most observable aspect of language, the sound level is seen as the most important level of description, and absolute autonomy of the sound level from syntax and semantics is assumed.
- Historicism: the chief goal of linguistic investigation is the description of the historical change of a language.
- Analogy: if the premise of the inviolability of sound laws fails, analogy can be applied as an explanation if plausible. Thus, exceptions are understood to be a (regular) adaptation to a related form.
Leading Neogrammarian linguists included:
- Otto Behaghel (1854–1936)
- Wilhelm Braune (1850–1926)
- Karl Brugmann (1849–1919)
- Berthold Delbrück (1842–1922)
- August Leskien (1840–1916)
- Adolf Noreen (1854–1925)
- Hermann Osthoff (1847–1909)
- Hermann Paul (1846–1921)
- Eduard Sievers (1850–1932)
Despite their strong influence in their time, the methods and goals of the Neogrammarians have been criticized from various points of view, but mainly for: reducing the object of investigation to the idiolect; restricting themselves to the description of surface phenomena (sound level); overvaluation of historical languages and neglect of contemporary ones.
- Hermann Paul: Prinzipien der Sprachgeschichte. (1880).
- Jankowsky, Kurt R. (1972). The neogrammarians. A re-evaluation of their place in the development of linguistic science. The Hague, Mouton.
- Karl Brugmann und Bertold Delbrück: Grundriß der vergleichenden Grammatik der indogermanischen Sprachen. (1897–1916).
- Hugo Schuchardt: „Über die Lautgesetze. Gegen die Junggrammatiker“, in Hugo-Schuchardt-Brevier, ein Vademekum der allgemeinen Sprachwissenschaft., ed. Leo Spitzer. Halle (Saale) 1922.
- Harald Wiese: Eine Zeitreise zu den Ursprüngen unserer Sprache. Wie die Indogermanistik unsere Wörter erklärt, Logos Verlag Berlin, 2007, ISBN 978-3-8325-1601-7.
- For a discussion and rejection of grammatical conditioning see Hill, Nathan W. (2014) 'Grammatically conditioned sound change.' Language and Linguistics Compass, 8 (6). pp. 211-229.
4.4375 | Comparing with multiplication
In this tutorial, we look at multiplication and division through the lens of comparison. For example, say that you are 9 and 3 times older than your cousin. How old would your cousin be? Multiplying a number times 3 gets you to your age, 9. Can you figure out the answer? We'll go through several exercises together so you get enough practice to feel confident multiplying. By the way, memorizing your multiplication tables helps a lot!
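Written as a comparison equation, the age example works out like this:

$$ 3 \times (\text{cousin's age}) = 9 \quad\Longrightarrow\quad \text{cousin's age} = 9 \div 3 = 3. $$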
You'll learn that sometimes multiplication is used as a way of comparing two things. In these 2 examples, we're comparing age and height.
When comparing it's often helpful to start by using letters to represent numbers. Watch this video as we walk you through 2 examples involving money and distance.
If you think about it, multiplying is just another way of comparing numbers. How do we compare 4 and 20 using multiplication? Let's find out together.
Multiplication helps us compare ages. Hint: learn those multiplication tables. They really help on problems like these.
You read that right! We're comparing the strength of Ron and Hermione using multiplication. Who said math has to be boring?
Rewrite multiplication equations as comparisons and comparisons as equations.
Select the equation that can be used to solve a word problem. | https://www.khanacademy.org/math/cc-fourth-grade-math/cc-4th-mult-div-topic/cc-4th-mult-comparing |
4.1875 | Math for English Language Learners
Teaching Math Vocabulary to English Language Learners is critical
One of the myths that drove us to choose mathematics as a topic for our ELL project was the misconception that students should have less difficulty with math because it is a universal language.
We found out that low proficiency in a second language has a significant impact on learning math and that ELLs stumble on many things when they try to comprehend new math concepts or express their understanding.
ELLs require extra help to be able to make the connection between mathematical operators and numbers (Ballantyne, Sanderman, & Levy, 2008, p. 51).
Strategies which can be used to teach math vocabulary, according to Rothenberg and Fisher in Murrey, D. (2008, p. 147), are summarized as follows:
Math Vocabulary Teaching Strategies
- Promoting comprehensible input by using adequate speech, gestures and scaffolding techniques. This also includes teaching words with different meanings, use of cognates and introducing vocabulary after the students have learned the concept.
- Contextualizing instruction by teaching academic language with realia support, manipulatives and graphic organizers when possible.
- Creating a low-anxiety learning environment through well-planned lessons.
- Engaging in meaningful learning activities through real-world context tasks and discussions.
To have a glimpse of the effective use of strategies in a math class setting click on the next example.
It is very important to have ELLs understand the math concept first and then build their new vocabulary. It is also very enlightening for them to see different meanings to one word, especially if one is of common English and the other an important math definition. Words with different definitions are very powerful while teaching ELLs, and there are many resources and tools to implement. To support vocabulary development, teachers can incorporate strategies which are more thoroughly explained in the Connected Mathematics Project from Michigan State University, along with the use of graphic organizers, Venn diagrams, concept maps, vocabulary charts and tree diagrams.
The approach of teaching cognates to students is widely recommended because it allows the ELL to make a strong personal connection with new vocabulary. To realize how a math concept can be defined by two similar words in two languages is an "ah ha!" you will appreciate from the students. You can read more about strategies for math teachers focused on ELLs on the Texas Comprehensive Center website, which is another strong example of how education institutions across the U.S. have created projects to research and support the education system with the goal to better serve students that are learning English as a second language.
Ballantyne, K.G., Sanderman, A.R., Levy, J. (2008).
Educating English language learners: Building teacher capacity
. Washington, DC: National Clearinghouse for English Language Acquisition. Retrieved February 28, 2012 from
Murrey, D. (2008).
Differentiating instruction in mathematics for English language learners.
Mathematics Teaching in the Middle School, v14 n3 (p146-153) National Council of Teachers of Mathematics. Retrieved February 27, 2012 from
For more information on vocabulary see these websites:
Connected Mathematics Project from Michigan State University
Texas Comprehensive Center Website:
4.15625 | Word relationship questions assess your ability to identify the relationship between words and to then apply this verbal analogy. To answer these questions you need to understand the meaning of the words in the question and establish what exactly the relationship is between them.
You should then look at the answer options and decide which answer is the most appropriate. These questions test your reasoning ability as well as your vocabulary. These types of question appear in nearly all levels of verbal ability tests.
This sample question paper contains 40 questions and has a suggested time limit of 10 minutes. The questions are presented in Letter/A4 format for easy printing and self-marking.
Word relationship questions often take the form of verbal analogies. These can be classified into specific categories: for example, materials, taxonomic relationships, temporal relationships, parts of speech, etc. The list is almost endless. Be sure that you understand what an analogy is before you start. Every analogy expresses a relationship between two things. It is this relationship that you must understand as you look at the options required to complete the analogy.
First try to understand the relationships expressed in the question words. Then choose your answer so that the relationship in the first pair of words is similar to the relationship in the second pair of words in terms of meaning, order and function.
Check that the parts of speech used in the two sections of analogy are consistent and follow in the same sequence. For example, if the first pair of words contains an adjective and a noun in that order, then the second pair of words must contain an adjective and a noun in the same order. Test designers are very fond of offering answer options which initially seem credible but where this golden rule is broken.
4.1875 | A multiplier is a number by which another number is multiplied. What do you call a number by which another number is added or subtracted?
There is the multiplier (that which multiplies) and the multiplicand (that which is to be multiplied).
For subtraction there is the subtrahend (that which is to be subtracted) and the minuend (that which is to be diminished).
For division there is the divisor (that which divides) and the dividend (that which is to be divided).
These words with -and or -end are Latin future participles.
There are a couple of mathematical points to make which inform the English involved.
(1) 'Addition' (yes, simple addition!) covers two major operations:
(a) Combination of two elements from a set (a binary operation) (eg 3 marbles + 2 marbles = 5 marbles).
(b) Transformation of one element into another (a unary operation) (eg a 3cm-long worm grows by 2cm )
Both are modelled identically by 3 + 2 = 5, but a transformation arrow with '+2' over the top is fitting for the transformation.
In the first case, addend + addend = sum / total
In the second case, augend + addend = sum / total.
(2) We've had a thread discussing the fact that there is no agreed term for the result of a subtraction; 'directed difference' is used by some. | http://english.stackexchange.com/questions/112574/what-is-a-word-similar-to-multiplier-but-for-addition-or-subtraction/112583 |
4.28125 | In this course, you'll come to see English grammar as a three-dimensional process that's useful in bringing coherence, cohesion, and texture to writing and speech. We'll begin by considering seven definitions of grammar that we'll draw on throughout the course. We'll also discuss the differences between patterns and rules, and why second-language learners benefit from our instruction on both.
You'll learn why students need to understand the three dimensions of grammar—form, meaning, and use—and how seeing grammar as a dynamic and changing system helps students overcome many of their grammar challenges. You'll also see why teaching grammar in a way that makes it personally meaningful to your students brings the best results.
And since teaching isn't just about presenting lessons, we'll also go over the importance of "reading" your students—observing them to try to figure out what learning process they're using. We'll contrast rote or mechanical practice with meaningful practice, and we'll go over guidelines for creating activities and adapting your textbook exercises to get students working on the unique learning challenge presented by each different grammatical structure.
Toward the end of the course, we'll talk about what the specific errors students make can indicate, and how they can help us pinpoint the unique challenges our students face so we can develop meaningful practice activities to help them meet those challenges. And we'll finish up the course by discussing ways that you can give valuable feedback to your students. Get ready to discover how to teach grammar in a way that's both effective and enjoyable for your students!
Course materials are developed by Heinle I Cengage Learning, a global leader in ESL/EFL materials. Course content is approved by the TESOL Professional Development Committee so students who successfully complete this course receive a TESOL Certificate of Completion.
Diane Larsen-Freeman is a Professor of Education, Professor of Linguistics, and Research Scientist at the English Language Institute at the University of Michigan in Ann Arbor. She is also Distinguished Senior Faculty Fellow at the School for International Training in Brattleboro, Vermont. She has spoken and published widely on the topics of teacher education, second language acquisition, English grammar, and language teaching methodology. Her books include: An Introduction to Second Language Acquisition Research (with Michael Long, Longman, 1991), The Grammar Book (with Marianne Celce-Murcia, Heinle/Thomson, 1999, second edition), Techniques and Principles in Language Teaching (Oxford University Press, 2000, second edition), Grammar Dimensions (Series Director, Heinle, 2007, 4th edition), Teaching Language: From Grammar to Grammaring (Heinle, 2003) and Complex Systems and Applied Linguistics (with Lynne Cameron, Oxford University Press, 2008). In 1997, Dr. Larsen-Freeman was inducted into the Vermont Academy of Arts and Sciences. In 1999, she was named one of the ESL pioneers by ESL Magazine. In 2000, she received a Lifetime Achievement Award from Heinle Publishers.
Charletta Bowen will be your facilitator in the Discussion Areas. She is an English as a Second Language (ESL) teacher and has been teaching ESL for 30 years. She currently teaches advanced level students at a university in the U.S.
• Internet access
• One of the following browsers:
o Mozilla Firefox
o Microsoft Internet Explorer (9.0 or above)
o Google Chrome
• Adobe PDF plug-in (a free download obtained at Adobe.com)
A new session of each course opens each month, allowing you to enroll whenever your busy schedule permits!
How does it work? Once a session starts, two lessons will be released each week, for the six-week duration of your course. You will have access to all previously released lessons until the course ends.
Keep in mind that the interactive discussion area for each lesson automatically closes 2 weeks after each lesson is released, so you?re encouraged to complete each lesson within two weeks of its release.
The Final Exam will be released on the same day as the last lesson. Once the Final Exam has been released, you will have 2 weeks plus 10 days to complete the Final and finish any remaining lessons in your course. No further extensions can be provided beyond these 10 days.
Grammar is an incredibly rich system for making meaning in a language. It's a subject that many people misunderstand, though, and that's something we should all be concerned about because if we don't see fully how grammar contributes to communication, then our students won't either. When students misunderstand grammar, they'll often develop a negative attitude toward studying grammar. We'll begin this first lesson by considering seven definitions of grammar, and we'll draw on all seven of these definitions later in this course. We'll also discuss the differences between patterns and rules, and why second-language learners benefit from our instruction on both patterns and the rules in the classroom.
Many people think of grammar structures as forms in a language. For instance, one form instructs us to place an s at the end of a noun if we want to make that noun plural. While there are indeed grammatical forms such as the plural s, there's more to grammar than form! In this lesson, you'll learn that grammar structures have meanings, and they have uses as well. This is very important to understand because grammar doesn't relate only to accuracy. It also relates to meaningfulness and appropriateness. We often teach grammar as forms that have meaning, but students don't often understand when or why to use particular structures. They wind up overusing them, under using them, or using them inappropriately. Students need to understand that there are three dimensions of grammar?form, meaning, and use?and that's what we'll discuss in today's lesson.
The title of this lesson is Grammaring. Grammar + -ing. If you haven't heard the term before, don't be surprised. I coined it myself because I think adding the ing helps people understand that grammar isn't a fixed system of unchanging rules. On the contrary, grammatical rules and patterns change all the time! In this lesson, we'll talk about three ways that grammar is dynamic and changing. We'll also consider a long-time problem in language learning?the inert knowledge problem, where students appear to have learned something in class but can't use it outside of class for their own purposes. Finally, we'll talk about helping students overcome the inert knowledge problem by viewing grammar as a dynamic system and teaching it in a psychologically authentic way.
When you think about grammar, you might think about rules that apply to sentences. Such rules might tell us the order of words in a phrase or in a sentence. But grammar goes beyond the sentence, too. Think about the sentences in a paragraph. There's an order they must follow to make sense, and grammar is what helps you to organize them! Today you'll learn the ways that people can use grammar to bring cohesion, coherence, and texture to what they're saying and writing. In the process, grammar helps to create organized wholes from written sentences and spoken utterances. Knowing how to create an organized whole out of sentences and utterances is very important for ESL and EFL students so they can learn to write and speak in a comprehensible way.
Often people make a clear division between grammar structures and words. Grammar structures are patterns or formulas with open slots where the words go; it's up to you to add words to that structure. In this lesson, however, you'll see that grammar structures and words are actually interconnected. For one thing, the slots in certain grammatical patterns aren't really open, waiting for just any old word to fill them in. They can only be filled by particular words. Plus, certain grammar structures have characteristics that put them into the category of words, and some words have characteristics that would equally qualify them as grammar structures. So they can go either way as words or grammar structures. We'll talk about all of it in this lesson about lexicogrammar!
If I asked you what you associate with the term "grammar," what would you say? I bet you'd say "rules." It's probably the most common association with grammar. Grammar rules are important in both language learning and teaching. I've taught grammar rules, and perhaps you have, too. I wouldn't want to do anything to discourage you from teaching rules. But in today's lesson, I hope to convince you that grammar has underlying reasons as well as rules. Reasons help you understand why rules are the way they are. Grammar isn't as arbitrary as you may have thought. You don't always have to tell your students, "That's just the way it is." Reasons will also help you understand the so-called "exceptions" to rules. Besides, reasons are broader than rules. If you understand a single reason, you'll understand a number of rules. Now, that sounds like a bargain, doesn't it?
One of the problems that all teachers face is lack of time. There's never enough time to teach all you want your students to learn. You have to be selective. Now, you may be thinking selection becomes more difficult with a grammaring approach. After all, you've learned by now in this course that grammar is more complex than you may have thought. But in today's lesson, you'll learn an important principle as well. It's called the challenge principle. It's a principle for selecting what it is that you need to spend time on with your students. The challenge principle says that you should spend time focusing on the dimension of grammar that students find most challenging: it could be form, or meaning, or use. In this lesson, you'll learn how to apply the challenge principle to determine an instructional focus.
Teaching isn't only about presenting lessons. A large part of being a good teacher is "reading" your students. By "reading" your students, I mean observing them while they're learning, trying to figure out what learning process they're using. You'll also see that students have their own goals for what they want to learn and their own strategies for how they'll meet these goals. In this lesson, you'll learn about some of the learning processes that students use to grasp grammar. You'll also see that different students approach learning grammar in different ways. By the end of this lesson, you'll have acquired the knowledge you need to be a better observer and manager of your students' learning. And, believe me, watching your students learn is one of the very special rewards of teaching!
In this lesson, we'll examine three different approaches to teaching grammar. We'll start with the traditional 3-P approach (present, practice, and produce): present a grammar structure, practice it, and then have your students produce it. We'll then contrast this traditional approach with a more recent proposal to focus on form within a communicative approach. I'll also talk about my grammaring approach. As you know by now, I believe that we need to teach grammar in a more dynamic fashion in order to overcome the inert knowledge problem. And my goal in this lesson is to convince you of that, too!
We can make a number of contrasts between learning grammar in your native language and learning grammar in another language. One of the important differences is that in learning your native language, you learn from experience; you learn implicitly. Second language learners, on the other hand, often learn grammar explicitly, by following explicit rules and explanations. In this lesson, I'll contrast the two: implicit learning and teaching, and explicit learning and teaching. We'll also discuss the important question about using grammatical terminology while you're teaching grammar. Using grammar terms can be useful to students, but let's not lose sight of the fact that what we're trying to do is to help them achieve an ability to use grammar, not necessarily turn them into grammarians.
By now, you know that I believe that learning grammar should be an active process. The capacity to use grammar structures actively requires practice. In this lesson, we'll start off by contrasting rote or mechanical practice with meaningful practice. When people think of grammatical practice, they often think of drills. But today, you'll find out how to create meaningful practice activities that address the form, meaning, and use challenges in learning grammar. With these guidelines, you'll be able to create activities and adapt your textbook exercises so that your students are working on the unique learning challenge presented by each different grammatical structure. You can make your teaching process much more effective this way.
With this lesson, we'll conclude our course. But we can't do that without taking up the important issue of giving feedback to our students, and that's the focus of this lesson. We'll start off by talking about what an error is. Recognizing what is and what isn't an error might not always be easy. Then, once we're satisfied that we've defined and detected an error, we'll need to go over what to do about it. This is actually a controversial area! I'll try to help by suggesting what sort of feedback students find most useful. Errors are also important windows into learners' minds; we can actually learn quite a lot from our learners' errors! You may find it amusing that one of the final questions we'll consider in this course has to do with learning. As you've no doubt seen throughout this course, I consider learning (learning about grammar, learning from our students, learning from each other) to be at the heart of good grammar teaching. So we'll conclude with a wish for the joy of learning.
| http://www.ce.ucf.edu/Program-Search/1537/Teaching-Esl-Efl-Grammar/ |
4.125 |
When speaking about the Constitution of the United States, delegated powers are the same thing as enumerated powers. These are the powers that are specifically given to the federal government. These can mostly be found in Section 8 of Article I. There, the Constitution lays out powers that Congress has. These include things like the power to impose taxes and the power to lay out rules for how new citizens can be naturalized. Delegated powers must be distinguished from implied powers. These are powers that are not specifically given to Congress in the Constitution but which are covered under the power to do things that are “necessary and proper” for carrying out the delegated powers.
| http://www.enotes.com/homework-help/what-delegated-powers-371664
4.03125 | A chord, in music, is any harmonic set of three or more notes that is heard as if sounding simultaneously. These need not actually be played together: arpeggios and broken chords (these involve the notes of the chord played one after the other, rather than at the same time) may, for many practical and theoretical purposes, constitute chords. Chords and sequences of chords are frequently used in modern Western, West African and Oceanian music, whereas they are absent from the music of many other parts of the world.
In tonal Western classical music, the most frequently encountered chords are triads, so called because they consist of three distinct notes: the root note, a third above the root and a fifth interval above the root. Further notes may be added to give tetrads such as seventh chords (the most commonly encountered example being the dominant seventh chord) and added tone chords, as well as extended chords and tone clusters. Triads commonly found in the Western classical tradition are major and minor chords, with augmented and diminished chords appearing less often. The descriptions major, minor, augmented, and diminished are referred to collectively as chordal quality. Chords are also commonly classified by their root note—for instance, a C major triad consists of the pitch classes C, E, and G. A chord retains its identity if the notes are stacked in a different way vertically; however, if a chord has a note other than the root note as the lowest note, the chord is said to be in an inversion (this is also called an "inverted chord").
An ordered series of chords is called a chord progression. One example of a widely used chord progression in Western traditional music and blues is the 12 bar blues progression, the simplest versions of which include tonic, subdominant and dominant chords (this system of naming chords is described later in this section). Although any chord may in principle be followed by any other chord, certain patterns of chords are more common in Western music, and some patterns have been accepted as establishing the key (tonic note) in common-practice harmony, notably the movement between tonic and dominant chords. To describe this, Western music theory has developed the practice of numbering chords using Roman numerals which represent the number of diatonic steps up from the tonic note of the scale.
Common ways of notating or representing chords in Western music other than conventional staff notation include Roman numerals, figured bass, macro symbols (sometimes used in modern musicology), and chord charts. Each of these systems is more likely to appear in certain contexts: figured bass notation was used prominently in notation of Baroque music, macro symbols are used in modern musicology, and chord charts are typically found in the lead sheets used in popular music and jazz. The chords in a song or piece are also given names which refer to their function. The chord built on the first note of a major scale is called the tonic chord (colloquially called a "I" or "one" chord). The chord built on the fourth note of a major scale is called the subdominant chord (colloquially called a "IV" chord or "four" chord). The chord built on the fifth degree of the major scale is called the dominant chord (colloquially called a "V chord" or "five" chord). There are names for the chords built on every note of the major scale. Chords can be played on many instruments, including piano, pipe organ, guitar and mandolin. Chords can also be performed when multiple musicians play together in a musical ensemble or when multiple singers sing in a choir and they play or sing three or more notes at the same time.
- 1 Definition and history
- 2 Notation
- 3 Characteristics
- 4 Triads
- 5 Seventh chords
- 6 Extended chords
- 7 Altered chords
- 8 Added tone chords
- 9 Suspended chords
- 10 Borrowed chords
- 11 References
- 12 Sources
- 13 Further reading
- 14 External links
Definition and history
The English word chord derives from Middle English cord, a shortening of accord in the original sense of agreement and later, harmonious sound. A sequence of chords is known as a chord progression or harmonic progression. These are frequently used in Western music. A chord progression "aims for a definite goal" of establishing (or contradicting) a tonality founded on a key, root or tonic chord. The study of harmony involves chords and chord progressions, and the principles of connection that govern them.
Ottó Károlyi writes that, "Two or more notes sounded simultaneously are known as a chord," though, since instances of any given note in different octaves may be taken as the same note, it is more precise for the purposes of analysis to speak of distinct pitch classes. Furthermore, as three notes are needed to define any common chord, three is often taken as the minimum number of notes that form a definite chord. Hence Andrew Surmani (2004, p. 72), for example, states, "When three or more notes are sounded together, the combination is called a chord." George T. Jones (1994, p. 43) agrees: "Two tones sounding together are usually termed an interval, while three or more tones are called a chord." According to Monath (1984, p. 37), "A chord is a combination of three or more tones sounded simultaneously," and the distances between the tones are called intervals. However sonorities of two pitches, or even single-note melodies, are commonly heard as implying chords.
Since a chord may be understood as such even when all its notes are not simultaneously audible, there has been some academic discussion regarding the point at which a group of notes may be called a chord. Jean-Jacques Nattiez (1990, p. 218) explains that, "We can encounter 'pure chords' in a musical work," such as in the Promenade of Modest Mussorgsky's Pictures at an Exhibition but, "Often, we must go from a textual given to a more abstract representation of the chords being used," as in Claude Debussy's Première Arabesque.
In the medieval era, early Christian hymns featured organum (which used the simultaneous perfect intervals of a fourth, a fifth, and an octave), with chord progressions and harmony an incidental result of the emphasis on melodic lines during the medieval era and then the Renaissance (15th to 17th centuries).
The Baroque period, the 17th and 18th centuries, began to feature the major and minor scale based tonal system and harmony, including chord progressions and circle progressions. It was in the Baroque period that the accompaniment of melodies with chords was developed, as in figured bass, and the familiar cadences (perfect authentic, etc.). In the Renaissance, certain dissonant sonorities that suggest the dominant seventh occurred with frequency. In the Baroque period the dominant seventh proper was introduced, and was in constant use in the Classical and Romantic periods. The leading-tone seventh appeared in the Baroque period and remains in use. Composers began to use nondominant seventh chords in the Baroque period. They became frequent in the Classical period, gave way to altered dominants in the Romantic period, and underwent a resurgence in the Post-Romantic and Impressionistic period.
The Romantic period, the 19th century, featured increased chromaticism. Composers began to use secondary dominants in the Baroque, and they became common in the Romantic period. Many contemporary popular Western genres continue to rely on simple diatonic harmony, though far from universally: notable exceptions include the music of film scores, which often use chromatic, atonal or post-tonal harmony, and modern jazz (especially circa 1960), in which chords may include up to seven notes (and occasionally more). When referring to chords that do not function as harmony, such as in atonal music, the term "sonority" is often used specifically to avoid any tonal implications of the word "chord".
Triads consist of three notes; the root or first note, the third, and the fifth. For example, the C major scale consists of the notes C D E F G A B: a triad can be constructed on any note of such a major scale, and all are minor or major except the triad on the seventh or leading-tone, which is a diminished chord. A triad formed using the note C itself consists of C (the root note), E (the third note of the scale) and G (the fifth note of the scale). The interval from C to E is of four semitones, a major third, and so this triad is called C Major. A triad formed upon the same scale but with D as the root note, D (root), F (third), A (fifth), on the other hand, has only three semitones between the root and third and is called D minor, a minor triad.
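As a minimal illustration (not part of the original article), the Python sketch below builds the triad on each degree of the C major scale and names its quality from the semitone sizes of the third and fifth above the root.

```python
# Minimal sketch: diatonic triads of C major, classified by interval sizes.
# Pitch classes are semitone numbers 0-11 with C = 0.
SCALE = [("C", 0), ("D", 2), ("E", 4), ("F", 5), ("G", 7), ("A", 9), ("B", 11)]

def quality(third, fifth):
    if third == 4 and fifth == 7:
        return "major"
    if third == 3 and fifth == 7:
        return "minor"
    if third == 3 and fifth == 6:
        return "diminished"
    if third == 4 and fifth == 8:
        return "augmented"
    return "other"

for degree in range(7):
    members = [SCALE[(degree + step) % 7] for step in (0, 2, 4)]
    names = [name for name, _ in members]
    root_pc = members[0][1]
    third = (members[1][1] - root_pc) % 12   # semitones from root to third
    fifth = (members[2][1] - root_pc) % 12   # semitones from root to fifth
    print("-".join(names), quality(third, fifth))
# Output runs C-E-G major, D-F-A minor, ..., B-D-F diminished.
```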
Chords can be represented in various ways. The most common notation systems are:
- Plain staff notation, used in classical music
- Roman numerals, commonly used in harmonic analysis to denote the scale step on which the chord is built.
- Figured bass, much used in the Baroque era, uses numbers added to a bass line written on a staff, to enable keyboard players to improvise chords with the right hand while playing the bass with their left.
- Macro symbols, sometimes used in modern musicology, to denote chord root and quality.
- Various chord names and symbols used in popular music lead sheets, fake books, and chord charts, to quickly lay out the harmonic groundplan of a piece so that the musician may improvise, jam, or vamp on it.
While scale degrees are typically represented with Arabic numerals, the triads that have these degrees as their roots are often identified by Roman numerals. In some conventions (as in this and related articles) upper-case Roman numerals indicate major triads while lower-case Roman numerals indicate minor triads; other writers (e.g. Schoenberg) use upper-case Roman numerals for both major and minor triads. Some writers use upper-case Roman numerals to indicate the chord is diatonic in the major scale, and lower-case Roman numerals to indicate that the chord is diatonic in the minor scale. Diminished triads may be represented by lower-case Roman numerals with a degree symbol. Roman numerals can also be used in stringed instrument notation to indicate the position or string to play.
Figured bass notation
Figured bass or thoroughbass is a kind of musical notation used in almost all Baroque music (ca. 1600-1750), though rarely in music from later than 1750, to indicate harmonies in relation to a conventionally written bass line. Figured bass is closely associated with chord-playing basso continuo accompaniment instruments, which included harpsichord, pipe organ and lute. Added numbers, symbols and accidentals beneath the staff indicate the intervals to play; the numbers stand for the number of scale steps above the written note to play the figured notes. In the 2010s, some classical musicians who specialize in music from the Baroque era can still perform chords using figured bass notation. In many cases, however, when Baroque music is performed in the 2010s, the chord-playing performers read fully notated chords that have been prepared for the piece by the music publisher. A Baroque part for a chord-playing instrument that has fully written-out chords is called a "realization" of the figured bass part.
In the illustration the bass note is a C, and the numbers 4 and 6 indicate that notes a fourth and a sixth above, that is F and A, should be played, giving the second inversion of the F major triad. If no numbers are written beneath a bass note, this is assumed to indicate the figure 5,3, which calls for a third and a fifth above the bass note (i.e., a root position triad).
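As a rough sketch of how the figures are counted (my own simplification, not the article's), the helper below reads each figure as a number of scale steps above the bass note within C major; it ignores accidentals, octave placement and voice leading, which a real realization would have to handle.

```python
# Naive figured-bass "realization": figures count scale steps above the bass.
MAJOR_SCALE = ["C", "D", "E", "F", "G", "A", "B"]   # assume the key of C major

def realize(bass, figures=(5, 3)):
    start = MAJOR_SCALE.index(bass)
    # A figure of n names the note n scale steps above the bass (the bass = 1).
    upper = [MAJOR_SCALE[(start + f - 1) % 7] for f in sorted(figures)]
    return [bass] + upper

print(realize("C"))          # ['C', 'E', 'G']  default 5/3: a root-position triad
print(realize("C", (6, 4)))  # ['C', 'F', 'A']  the 6/4 example described above
```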
Macro analysis uses upper-case and lower-case letters to indicate the roots of chords, followed by symbols that specify the chord quality.
- The root note (e.g. C).
- The chord quality (e.g. major, maj, or M). [Note: if no chord quality is specified, the chord is assumed to be a major triad by default.]
- The number of an interval (e.g. seventh, or 7), or less often its full name or symbol (e.g. major seventh, maj7, or M7).
- The altered fifth (e.g. sharp five, or ♯5).
- An additional interval number (e.g. add 13 or add13), in added tone chords.
For instance, the name C augmented seventh, and the corresponding symbol Caug7, or C+7, are both composed of parts 1, 2, and 3.
None of these parts, except for the root, directly refer to the notes forming the chord, but to the intervals they form with respect to the root. For instance, Caug7 is formed by the notes C-E-G♯-B♭. However, its name and symbol refer only to the root note C, the augmented (fifth) interval from C to G♯, and the (minor) seventh interval from C to B♭. The interval from C to E (a major third) sets the chord quality (major). A set of decoding rules is applied to deduce the missing information.
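The decoding rules can be sketched in code. The helper below is hypothetical and much reduced: it knows only the four basic qualities plus an optional seventh, and it spells each pitch class with a single fixed name, so enharmonic spelling is not handled.

```python
# Toy chord-symbol decoder: root + quality + optional seventh -> note names.
NAMES = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "G#", "A", "Bb", "B"]
ROOTS = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}
QUALITIES = {"maj": (4, 7), "min": (3, 7), "aug": (4, 8), "dim": (3, 6)}
SEVENTHS = {"major": 11, "minor": 10, "diminished": 9}

def decode(root, quality="maj", seventh=None):
    third, fifth = QUALITIES[quality]
    intervals = [0, third, fifth]
    if seventh is not None:
        intervals.append(SEVENTHS[seventh])
    base = ROOTS[root]
    return [NAMES[(base + i) % 12] for i in intervals]

print(decode("C", "aug", seventh="minor"))  # Caug7 -> ['C', 'E', 'G#', 'Bb']
```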
Some of the symbols used for chord quality are similar to those used for interval quality:
- m, or min for minor,
- M, maj, or no symbol for major,
- aug for augmented,
- dim for diminished.
The interpretation of chord symbols depends on the genre of music being played. In jazz from the Bebop era or later, major and minor chords are typically voiced as seventh chords even if only "C" or "c min" appear in the chart. In jazz charts, seventh chords are often voiced with upper extensions, such as the 9th, #11th and 13th, even if the chart only indicates "A7". As well, in jazz, the root and fifth are often omitted from chord voicings, except when there is a flat fifth. The root is played by the bass player. In cases where two chordal instruments are comping at the same time from a chart, the players have to either listen to each other's voicings, agree on a voicing beforehand, or alternate comping in different choruses. This is done because if the electric guitarist interprets an "A7" chord as "A7 b9" and the Hammond organ player interprets the "A7" as "A9", the two chords would clash. The interpretation of chord symbols also depends on the taste preferences of the bandleader or singer who is being accompanied. Some bandleaders or singers may prefer alt chords to be interpreted in different ways. One singer may prefer alt chords with b9s, while another singer may prefer b13s.
In a pop or rock context, however, "C" and "c min" would almost always be played as triads, with no sevenths. In pop and rock, in the relatively less common cases where a songwriter wishes a Major 7th chord or a minor 7th chord, she will indicate this explicitly with the indications "C Maj 7" or "c min 7".
In addition, however,
- Δ is sometimes used for major, instead of the standard M, or maj,
- − is sometimes used for minor, instead of the standard m or min,
- +, or aug, is used for augmented (A is not used),
- o, °, dim, is used for diminished (d is not used),
- ø, or Ø is used for half diminished,
- dom is used for dominant 7th
- alt is used in jazz to indicate an altered dominant seventh chord (e.g., flat 9 and/or # 11)
- 7 is used for dominant 7th
- 9 is used for a ninth chord, which in jazz usually includes the dominant 7th as well
- 13 indicates that the 13th is added to the chord. In jazz, when a number higher than 9th is used, it implies that other lower numbers are played. Thus for A13, a pianist would play the 3rd, the 7th, 9th and 13th (the 11th is normally omitted unless it is sharpened. Roots and fifths are commonly omitted from jazz chord voicings).
- sus 4 indicates that the third is omitted and the fourth used instead. Other notes may be added to a Sus 4 chord, indicated with the word "add" and the scale degree (e.g., A sus 4 (add 9) or A sus 4 (add 7)).
- /C# bass or /C# indicates that a bass note other than the root should be played. For example, A/C# bass indicates that an A Maj triad should be played with a C# in the bass. (Note: in some genres of modern jazz, two chords with a slash between them may indicate an advanced voicing called a polychord, which is the playing of two chords simultaneously--e.g., F/A would be interpreted as an F Major triad played simultaneously with an A Maj triad, that is the notes "F, A, C" and "A, C#, E". To avoid misunderstanding, the "/C# bass" notation can be used).
- 5 in rock, hard rock and metal indicates that a power chord should be played. A power chord consists of the root and the fifth, possibly with the root doubled an octave higher. Thirds and sevenths are not played in power chords. Typically, power chords are played with distortion or overdrive.
- Unusual chords can be indicated with a sequence of scale degrees and indicated additions or omissions (e.g., C7 (no 5th add 9) or F9 (no 7th add 13)).
Within the diatonic scale, every chord has certain characteristics, which include:
- Number of pitch classes (distinct notes without respect to octave) that constitute the chord.
- Scale degree of the root note
- Position or inversion of the chord
- General type of intervals it appears constructed from—for example seconds, thirds, or fourths
- Counts of each pitch class as occur between all combinations of notes the chord contains
Number of notes
|Number of notes||Name||Alternate name|
|2||dyad||-|
|3||triad||trichord|
|4||tetrad||tetrachord|
|5||pentad||pentachord|
|6||hexad||hexachord|
Two-note combinations, whether referred to as chords or intervals, are called dyads. Chords constructed of three notes of some underlying scale are described as triads. Chords of four notes are known as tetrads, those containing five are called pentads and those using six are hexads. Sometimes the terms trichord, tetrachord, pentachord, and hexachord are used—though these more usually refer to the pitch classes of any scale, not generally played simultaneously. Chords that may contain more than three notes include pedal point chords, dominant seventh chords, extended chords, added tone chords, clusters, and polychords.
Polychords are formed by two or more chords superimposed. Often these may be analysed as extended chords; examples include tertian chords, altered chords, secundal chords, quartal and quintal harmony, and the Tristan chord. Another example is when G7(♯11♭9) (G-B-D-F-A♭-C♯) is formed from G major (G-B-D) and D♭ major (D♭-F-A♭). A nonchord tone is a dissonant or unstable tone that lies outside the chord currently heard, though often resolving to a chord tone.
|Roman numeral||Scale degree|
|I||tonic|
|ii||supertonic|
|iii||mediant|
|IV||subdominant|
|V||dominant|
|vi||submediant|
|viio / ♭VII||leading tone / subtonic|
In the key of C major the first degree of the scale, called the tonic, is the note C itself, so a C major chord, a triad built on the note C, may be called the one chord of that key and notated in Roman numerals as I. The same C major chord can be found in other scales: it forms chord III in the key of A minor (A-B-C) and chord IV in the key of G major (G-A-B-C). This numbering lets us see the job a chord is doing in the current key and tonality.
Many analysts use lower-case Roman numerals to indicate minor triads and upper-case for major ones, and degree and plus signs ( o and + ) to indicate diminished and augmented triads respectively. Otherwise all the numerals may be upper-case and the qualities of the chords inferred from the scale degree. Chords outside the scale can be indicated by placing a flat/sharp sign before the chord; for example, the chord of E flat major in the key of C major is represented by ♭III. The tonic of the scale may be indicated to the left (e.g. F♯:) or may be understood from a key signature or other contextual clues. Indications of inversions or added tones may be omitted if they are not relevant to the analysis. Roman numerals indicate the root of the chord as a scale degree within a particular major key, as shown in the table above.
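As a toy illustration of the casing convention (an assumption-laden sketch, not part of the article), the code below labels the diatonic triads of a major key: upper case for major, lower case for minor, and a trailing "o" for the diminished triad.

```python
# Label diatonic triads of a major key with Roman numerals by quality.
NUMERALS = ["I", "II", "III", "IV", "V", "VI", "VII"]
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]   # semitone offsets of the major scale

def label(degree):
    pcs = [MAJOR_STEPS[(degree + s) % 7] for s in (0, 2, 4)]
    third = (pcs[1] - pcs[0]) % 12
    fifth = (pcs[2] - pcs[0]) % 12
    numeral = NUMERALS[degree]
    if third == 3:          # minor third above the root: lower-case numeral
        numeral = numeral.lower()
    if fifth == 6:          # diminished fifth: append the degree sign (here "o")
        numeral += "o"
    return numeral

print([label(d) for d in range(7)])   # ['I', 'ii', 'iii', 'IV', 'V', 'vi', 'viio']
```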
In the harmony of Western art music a chord is in root position when its root is the lowest note in the chord, and the other notes are above it. When the lowest note is not the root, the chord is inverted. Chords, having many constituent notes, can have many different inverted positions as shown below for the C major chord:
|Bass note||Position||Order of notes||Notation|
|C||root position||C E G||5/3 (G is a 5th above C, E is a 3rd above C)|
|E||1st inversion||E G C||6/3 (C is a 6th above E, G is a 3rd above E)|
|G||2nd inversion||G C E||6/4 (E is a 6th above G, C is a 4th above G)|
Further, a four-note chord can be inverted to four different positions by the same method as triadic inversion. Where guitar chords are concerned, the term "inversion" is used slightly differently: it refers to stock fingering "shapes".
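As a simple illustration (mine, not the article's), the inversions can be generated by rotating which chord member sits in the bass; applying the same function to a four-note chord yields the four positions mentioned above.

```python
# List the inversions of a chord by rotating the bass note.
def inversions(notes):
    return [notes[i:] + notes[:i] for i in range(len(notes))]

for voicing in inversions(["C", "E", "G"]):
    print(voicing)
# ['C', 'E', 'G']  root position
# ['E', 'G', 'C']  1st inversion
# ['G', 'C', 'E']  2nd inversion
```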
Secundal, tertian, and quartal chords
|Secundal||2nd's : major 2nd, minor 2nd|
|Tertian||3rd's : major 3rd, minor 3rd|
|Quartal||4th's : perfect 4th, augmented 4th|
Many chords are a sequence of ascending notes separated by intervals of roughly the same size. Chords can be classified into different categories by this size:
- Tertian chords can be decomposed into a series of (major or minor) thirds. For example, the C major triad (C-E-G) is defined by a sequence of two intervals, the first (C-E) being a major third and the second (E-G) being a minor third. Most common chords are tertian.
- Secundal chords can be decomposed into a series of (major or minor) seconds. For example, the chord C-D-E♭ is a series of seconds, containing a major second (C-D) and a minor second (D-E♭).
- Quartal chords can be decomposed into a series of (perfect or augmented) fourths. Quartal harmony normally works with a combination of perfect and augmented fourths. Diminished fourths are enharmonically equivalent to major thirds, so they are uncommon. For example, the chord C-F-B is a series of fourths, containing a perfect fourth (C-F) and an augmented fourth/tritone (F-B).
These terms can become ambiguous when dealing with non-diatonic scales, such as the pentatonic or chromatic scales. The use of accidentals can also complicate the terminology. For example, the chord B♯-E-A♭ appears to be a series of diminished fourths (B♯-E and E-A♭) but is enharmonically equivalent to (and sonically indistinguishable from) the chord C-E-G♯, which is a series of major thirds (C-E and E-G♯).
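This classification can be approximated in code. The sketch below (an illustration only, not a definitive rule) looks at the semitone size of each step between adjacent chord tones; exactly as the caveat above warns, pitch-class arithmetic cannot tell an interval from its enharmonic equivalent.

```python
# Classify a chord as secundal, tertian or quartal from its adjacent steps.
def category(pitch_classes):
    steps = [(b - a) % 12 for a, b in zip(pitch_classes, pitch_classes[1:])]
    if all(s in (1, 2) for s in steps):        # minor/major seconds
        return "secundal"
    if all(s in (3, 4) for s in steps):        # minor/major thirds
        return "tertian"
    if all(s in (5, 6) for s in steps):        # perfect/augmented fourths
        return "quartal"
    return "mixed"

print(category([0, 4, 7]))    # C-E-G   -> tertian
print(category([0, 2, 3]))    # C-D-Eb  -> secundal
print(category([0, 5, 11]))   # C-F-B   -> quartal
```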
The notes of a chord form intervals with each of the other notes of the chord in combination. A 3-note chord has 3 of these harmonic intervals, a 4-note chord has 6, a 5-note chord has 10, a 6-note chord has 15. The absence, presence, and placement of certain key intervals plays a large part in the sound of the chord, and sometimes of the selection of the chord that follows.
A chord containing tritones is called tritonic; one without tritones is atritonic. Harmonic tritones are an important part of dominant seventh chords, giving their sound a characteristic tension, and making the tritone interval likely to move in certain stereotypical ways to the following chord.
A chord containing semitones, whether appearing as minor seconds or major sevenths, is called hemitonic; one without semitones is anhemitonic. Harmonic semitones are an important part of major seventh chords, giving their sound a characteristic high tension, and making the harmonic semitone likely to move in certain stereotypical ways to the following chord. A chord containing major sevenths but no minor seconds is much less harsh in sound than one containing minor seconds as well.
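A short sketch (not from the article) that enumerates every pairwise interval in a chord and flags the two properties just discussed; the pair counts also match the figures given above (six intervals for a four-note chord, and so on).

```python
# Census of the pairwise intervals in a chord; flag tritones and semitones.
from itertools import combinations

def interval_census(pitch_classes):
    intervals = [(b - a) % 12 for a, b in combinations(sorted(pitch_classes), 2)]
    return {
        "pair_count": len(intervals),                       # n*(n-1)/2 pairs
        "tritonic": 6 in intervals,                         # any tritone present?
        "hemitonic": any(i in (1, 11) for i in intervals),  # any semitone present?
    }

print(interval_census([0, 4, 7, 10]))  # C7: 6 pairs, tritonic, not hemitonic
print(interval_census([0, 4, 7, 11]))  # Cmaj7: 6 pairs, no tritone, hemitonic
```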
Other chords of interest might include the
- Diminished chord, which has many minor thirds and no major thirds, many tritones but no perfect fifths
- Augmented chord, which has many major thirds and no minor thirds or perfect fifths
- Dominant seventh flat five chord, which has many major thirds and tritones and no minor thirds or perfect fifths
Triads, also called triadic chords, are tertian chords with three notes. The four basic triads are described below.
|Chord name||Third||Fifth||Chord symbol||Notes||Audio|
|Major triad||major||perfect||C, CM, Cmaj, CΔ, Cma||C E G||play (help·info)|
|Minor triad||minor||perfect||Cm, Cmin, C-, Cmi||C E♭ G||play (help·info)|
|Augmented triad||major||augmented||Caug, C+, C+||C E G♯||play (help·info)|
|Diminished triad||minor||diminished||Cdim, Co, Cm(♭5)||C E♭ G♭||play (help·info)|
Seventh chords are tertian chords, constructed by adding a fourth note to a triad, at the interval of a third above the fifth of the chord. This creates the interval of a seventh above the root of the chord, the next natural step in composing tertian chords. The seventh chord built on the fifth step of the scale (the dominant seventh) is the only major-minor (dominant-quality) seventh chord available in the major scale: it contains all three notes of the diminished triad of the seventh and is frequently used as a stronger substitute for it.
There are various types of seventh chords depending on the quality of both the chord and the seventh added. In chord notation the chord type is sometimes superscripted and sometimes not (e.g. Dm7 may be written with the "m7" on the baseline or raised as a superscript; all of these denote the same chord).
|Chord name||Third||Fifth||Seventh||Chord symbol||Notes||Audio|
|Diminished seventh||minor||diminished||diminished||Co7, Cdim7||C E♭ G♭ B♭♭||Play (help·info)|
|Half-diminished seventh||minor||diminished||minor||Cø7, Cm7♭5, C−7(♭5)||C E♭ G♭ B♭||Play (help·info)|
|Minor seventh||minor||perfect||minor||Cm7, Cmin7, C−7, C−7||C E♭ G B♭||Play (help·info)|
|Minor major seventh||minor||perfect||major||Cm(M7), Cm maj7, C−(j7), C−Δ7, C−M7||C E♭ G B||Play (help·info)|
|Dominant seventh||major||perfect||minor||C7, C7, Cdom7||C E G B♭||Play (help·info)|
|Major seventh||major||perfect||major||CM7, Cmaj7, CΔ7, CΔ7, CΔ7, Cj7||C E G B||Play (help·info)|
|Augmented seventh||major||augmented||minor||C+7, Caug7, C7+, C7+5, C7♯5||C E G♯ B♭||Play (help·info)|
|Augmented major seventh||major||augmented||major||C+(M7), CM7+5, CM7♯5, C+j7, C+Δ7||C E G♯ B||Play (help·info)|
Extended chords are triads with further tertian notes added beyond the seventh; the ninth, eleventh, and thirteenth chords. After the thirteenth, any notes added in thirds duplicate notes elsewhere in the chord; all seven notes of the scale are present in the chord and adding more notes does not add new pitch classes. Chords extended beyond the thirteenth that introduce genuinely new pitch classes can be constructed only by using notes that lie outside the diatonic seven-note scale.
|Chord name||Seventh||Ninth||Eleventh||Thirteenth||Chord symbol||Notes||Audio|
|Dominant ninth||dominant seventh||major ninth||-||-||C9||C E G B♭ D||Play (help·info)|
|Dominant eleventh||dominant seventh (the third is usually omitted)||major ninth||eleventh||-||C11||C E G B♭ D F||Play (help·info)|
|Dominant thirteenth||dominant seventh||major ninth||perfect eleventh||major thirteenth||C13||C E G B♭ D F A||Play (help·info)|
Other extended chords follow similar rules, so that for example maj9, maj11, and maj13 contain major seventh chords rather than dominant seventh chords, while min9, min11, and min13 contain minor seventh chords.
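A minimal sketch (my own, not the article's) of the stacking idea: diatonic thirds are piled above a scale degree, and once all seven scale notes have been used (the thirteenth chord), the next third only repeats a pitch class that is already present.

```python
# Stack diatonic thirds above a scale degree of C major.
SCALE = ["C", "D", "E", "F", "G", "A", "B"]

def stack_thirds(root, count):
    start = SCALE.index(root)
    return [SCALE[(start + 2 * i) % 7] for i in range(count)]

print(stack_thirds("G", 7))  # ['G', 'B', 'D', 'F', 'A', 'C', 'E']  G13 uses the whole scale
print(stack_thirds("G", 8))  # one more third just repeats 'G'
```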
Although the third and seventh of the chord are always determined by the symbols shown above, the fifth, ninth, eleventh and thirteenth may all be chromatically altered by accidentals (the root cannot be so altered without changing the name of the chord, while the third cannot be altered without altering the chord's quality). These are noted alongside the altered element. Accidentals are most often used with dominant seventh chords. Altered dominant seventh chords (C7alt) may have a flat ninth, a sharp ninth, a diminished fifth or an augmented fifth (see Levine's Jazz Theory). Some write this as C7+9, which assumes also the flat ninth, diminished fifth and augmented fifth (see Aebersold's Scale Syllabus). The augmented ninth is often referred to in blues and jazz as a blue note, being enharmonically equivalent to the flat third or tenth. When superscripted numerals are used the different numbers may be listed horizontally (as shown) or else vertically.
|Seventh augmented fifth||dominant seventh||augmented fifth||C7+5, C7♯5||Play (help·info)|
|Seventh flat ninth||dominant seventh||minor ninth||C7-9, C7♭9||Play (help·info)|
|Seventh sharp ninth||dominant seventh||augmented ninth||C7+9, C7♯9||Play (help·info)|
|Seventh augmented eleventh||dominant seventh||augmented eleventh||C7+11, C7♯11||Play (help·info)|
|Seventh flat thirteenth||dominant seventh||minor thirteenth||C7-13, C7♭13||Play (help·info)|
|Half-diminished seventh||minor seventh||diminished fifth||Cø, Cm7♭5||Play (help·info)|
Added tone chords
An added tone chord is a triad chord with an added, non-tertian note, such as the commonly added sixth as well as chords with an added second (ninth) or fourth (eleventh) or a combination of the three. These chords do not include "intervening" thirds as in an extended chord. Added chords can also have variations. Thus madd9, m4 and m6 are minor triads with added notes.
Sixth chords can belong to either of two groups. One is first inversion chords and added sixth chords that contain a sixth from the root. The other group is inverted chords in which the interval of a sixth appears above a bass note that is not the root.
The major sixth chord (also called, sixth or added sixth with the chord notation 6, e.g., "C6") is by far the most common type of sixth chord of the first group. It comprises a major triad with the added major sixth above the root, common in popular music. For example, the chord C6 contains the notes C-E-G-A. The minor sixth chord (min6 or m6, e.g., "Cm6") is a minor triad with the same added note. For example, the chord Cmin6 contains the notes C-E♭-G-A. In chord notation, the sixth of either chord is always assumed a major sixth rather than a minor sixth, however a minor sixth interval may be indicated in the notation as, for example, "Cm(m6)", or Cmm6.
The augmented sixth chord usually appears in chord notation as its enharmonic equivalent, the seventh chord. This chord contains two notes separated by the interval of an augmented sixth (or, by inversion, a diminished third, though this inversion is rare). The augmented sixth is generally used as a dissonant interval most commonly used in motion towards a dominant chord in root position (with the root doubled to create the octave the augmented sixth chord resolves to) or to a tonic chord in second inversion (a tonic triad with the fifth doubled for the same purpose). In this case, the tonic note of the key is included in the chord, sometimes along with an optional fourth note, to create one of the following (illustrated here in the key of C major):
- Italian augmented sixth: A♭, C, F♯
- French augmented sixth: A♭, C, D, F♯
- German augmented sixth: A♭, C, E♭, F♯
The augmented sixth family of chords exhibits certain peculiarities. Since they are not based on triads, as are seventh chords and other sixth chords, they are not generally regarded as having roots (nor, therefore, inversions), although one re-voicing of the notes is common (with the namesake interval inverted to create a diminished third).
The second group of sixth chords includes inverted major and minor chords, which may be called sixth chords in that the six-three (6/3) and six-four (6/4) chords contain intervals of a sixth with the bass note, though this is not the root. Nowadays this is mostly for academic study or analysis (see figured bass) but the neapolitan sixth chord is an important example; a major triad with a flat supertonic scale degree as its root that is called a "sixth" because it is almost always found in first inversion. Though a technically accurate Roman numeral analysis would be ♭II, it is generally labelled N6. In C major, the chord is notated (from root position) D♭, F, A♭. Because it uses chromatically altered tones this chord is often grouped with the borrowed chords but the chord is not borrowed from the relative major or minor and it may appear in both major and minor keys.
|Add nine||major triad||major ninth||-||C2, Cadd9||C E G D||Play (help·info)|
|Add fourth||major triad||perfect fourth||-||C4, Cadd11||C E G F||Play (help·info)|
|Add sixth||major triad||major sixth||-||C6||C E G A||Play (help·info)|
|Six-nine||major triad||major sixth||major ninth||C6/9||C E G A D||-|
|Mixed-third||major triad||minor third||-||-||C E♭ E G||Play (help·info)|
A suspended chord, or "sus chord" (sometimes wrongly thought to mean sustained chord), is a chord in which the third is replaced by either the second or the fourth. This produces two main chord types: the suspended second (sus2) and the suspended fourth (sus4). The chords, Csus2 and Csus4, for example, consist of the notes C D G and C F G, respectively. There is also a third type of suspended chord, in which both the second and fourth are present, for example the chord with the notes C D F G.
The name suspended derives from an early polyphonic technique developed during the common practice period, in which a stepwise melodic progress to a harmonically stable note in any particular part was often momentarily delayed or suspended by extending the duration of the previous note. The resulting unexpected dissonance could then be all the more satisfyingly resolved by the eventual appearance of the displaced note. In traditional music theory the inclusion of the third in either chord would negate the suspension, so such chords would be called added ninth and added eleventh chords instead.
In modern layman usage the term is restricted to the displacement of the third only and the dissonant second or fourth no longer needs to be held over (prepared) from the previous chord. Neither is it now obligatory for the displaced note to make an appearance at all though in the majority of cases the conventional stepwise resolution to the third is still observed. In post-bop and modal jazz compositions and improvisations suspended seventh chords are often used in nontraditional ways: these often do not function as V chords, and do not resolve from the fourth to the third. The lack of resolution gives the chord an ambiguous, static quality. Indeed, the third is often played on top of a sus4 chord. A good example is the jazz standard, Maiden Voyage.
Extended versions are also possible, such as the seventh suspended fourth, which, with root C, contains the notes C F G B♭ and is notated as C7sus4 play (help·info). Csus4 is sometimes written Csus since the sus4 is more common than the sus2.
|Sus2||open fifth||major second||-||-||Csus2||C D G||Play (help·info)|
|Sus4||open fifth||perfect fourth||-||-||Csus4||C F G||Play (help·info)|
|Jazz sus||open fifth||perfect fourth||minor seventh||major ninth||C9sus4||C F G B♭ D||Play (help·info)|
A borrowed chord is one from a different key than the home key, the key of the piece it is used in. The most common occurrence of this is where a chord from the parallel major or minor key is used. Particularly good examples can be found throughout the works of composers such as Schubert.
For instance, for a composer working in the C major key, a major ♭III chord would be borrowed, as this appears only in the C minor key. Although borrowed chords could theoretically include chords taken from any key other than the home key, this is not how the term is used when a chord is described in formal musical analysis.
When a chord is analysed as "borrowed" from another key it may be shown by the Roman numeral corresponding with that key after a slash so, for example, V/V indicates the dominant chord of the dominant key of the present home-key. The dominant key of C major is G major so this secondary dominant is the chord of the fifth degree of the G major scale, which is D major. If used, this chord causes a modulation.
- Benward & Saker (2003). Music: In Theory and Practice, Vol. I, p. 67&359. Seventh Edition. ISBN 978-0-07-294262-0."A chord is a harmonic unit with at least three different tones sounding simultaneously." "A combination of three or more pitches sounding at the same time."
- Károlyi, Otto (1965). Introducing Music. Penguin Books. p. 63.
Two or more notes sounding simultaneously are known as a chord.
- Mitchell, Barry (January 16, 2008). "An explanation for the emergence of Jazz (1956)", Theory of Music.
- Linkels, Ad, "The Real Music of Paradise", In Broughton, Simon and Ellingham, Mark with McConnachie, James and Duane, Orla (Ed.), World Music, Vol. 2: Latin & North America, Caribbean, India, Asia and Pacific, pp 218–229. Rough Guides Ltd, Penguin Books. ISBN 1-85828-636-0
- Malm, William P. (1996). Music Cultures of the Pacific, the Near East, and Asia. p.15. ISBN 0-13-182387-6. Third edition: "Indeed this harmonic orientation is one of the major differences between Western and much non-Western music."
- Arnold Schoenberg, Structural Functions of Harmony, Faber and Faber, 1983, p.1-2.
- Benward & Saker (2003), p. 77.
- Merriam-Webster, Inc. (1995). "Chord", Merriam-Webster's dictionary of English usage, p.243. ISBN 978-0-87779-132-4.
- "Chord", Oxford Dictionaries.
- Dahlhaus, Carl. "Harmony". In Macy, Laura. Grove Music Online. Oxford Music Online. Oxford University Press. (subscription required)
- Károlyi, Ottó, Introducing Music, p. 63. England: Penguin Books.
- Schellenberg, E. Glenn; Bigand, Emmanuel; Poulin-Charronnat, Benedicte; Garnier, Cecilia; Stevens, Catherine (Nov 2005). "Children's implicit knowledge of harmony in Western music". Developmental Science 8 (8): 551–566. doi:10.1111/j.1467-7687.2005.00447.x. PMID 16246247.
- Duarter, John (2008). Melody & Harmony for Guitarists, p.49. ISBN 978-0-7866-7688-0.
- Benward & Saker (2003), p.70.
- Benward & Saker (2003), p.100.
- Benward & Saker (2003), p.201.
- Benward & Saker (2003), p.220.
- Benward & Saker (2003), p.231.
- Benward & Saker (2003), p.274.
- Winston Harrison, The Rockmaster System: Relating Ongoing Chords to the Keyboard – Rock, Book 1, Dellwin Publishing Co. 2005, p. 33
- Pachet, François, Surprising Harmonies, International Journal on Computing Anticipatory Systems, 1999.
- Pen, Ronald (1992). Introduction to Music, p.81. McGraw-Hill, ISBN 0-07-038068-6. "In each case the note that forms the foundation pitch is called the root, the middle tone of the chord is designated the third (because it is separated by the interval of a third from the root), and the top tone is referred to as the fifth (because it is a fifth away from the root)."
- William G Andrews and Molly Sclater (2000). Materials of Western Music Part 1, p.227. ISBN 1-55122-034-2.
- The symbol Δ is ambiguous, as it is used by some as a synonym for M (e.g. CΔ=CM and CΔ7=CM7), and by others as a synonym of M7 (e.g. CΔ=CM7).
- Haerle, Dan (1982). The Jazz Language: A Theory Text for Jazz Composition and Improvisation, p.30. ISBN 978-0-7604-0014-2.
- Policastro, Michael A. (1999). Understanding How to Build Guitar Chords and Arpeggios, p.168. ISBN 978-0-7866-4443-8.
- Benward & Saker (2003), p.92.
- Bert Weedon, Play in a Day, Faber Music Ltd, ISBN 0-571-52965-8, passim - among a wide range of other guitar tutors
- Dufrenne, Mikel (1989). The Phenomenology of Aesthetic Experience, p.253. ISBN 0-8101-0591-8.
- Connie E. Mayfield (2012) "Theory Essentials", p.523. ISBN 1-133-30818-X.
- Hanson, Howard. (1960) Harmonic Materials of Modern Music, p.7ff. New York: Appleton-Century-Crofts. LOC 58-8138.
- Benjamin, Horvit, and Nelson (2008). Techniques and Materials of Music, p.46-47. ISBN 0-495-50054-2.
- Benjamin, Horvit, and Nelson (2008). Techniques and Materials of Music, p.48-49. ISBN 0-495-50054-2.
- Hawkins, Stan. "Prince- Harmonic Analysis of 'Anna Stesia'", p.329 and 334n7, Popular Music, Vol. 11, No. 3 (Oct., 1992), pp. 325-335.
- Miller, Michael (2005). The Complete Idiot's Guide to Music Theory, p.119. ISBN 978-1-59257-437-7.
- Piston, Walter (1987). Harmony (5th ed.), p.66. New York: W.W. Norton & Company. ISBN 0-393-95480-3.
- Bartlette, Christopher, and Steven G. Laitz (2010). Graduate Review of Tonal Theory. New York: Oxford University Press. ISBN 978-0-19-537698-2
- Grout, Donald Jay (1960). A History Of Western Music. Norton Publishing.
- Dahlhaus, Carl. Gjerdingen, Robert O. trans. (1990). Studies in the Origin of Harmonic Tonality, p. 67. Princeton University Press. ISBN 0-691-09135-8.
- Goldman (1965). Cited in Nattiez (1990).
- Jones, George T. (1994). HarperCollins College Outline Music Theory. ISBN 0-06-467168-2.
- Nattiez, Jean-Jacques (1990). Music and Discourse: Toward a Semiology of Music (Musicologie générale et sémiologue, 1987). Translated by Carolyn Abbate (1990). ISBN 0-691-02714-5.
- Norman Monath, Norman (1984). How To Play Popular Piano In 10 Easy Lessons. Fireside Books. ISBN 0-671-53067-4.
- Stanley Sadie and John Tyrrell, eds. (2001). The New Grove Dictionary of Music and Musicians. ISBN 1-56159-239-0.
- Surmani, Andrew (2004). Essentials of Music Theory: A Complete Self-Study Course for All Musicians. ISBN 0-7390-3635-1.
- Benward, Bruce & Saker, Marilyn (2002). Music in Theory and Practice, Volumes I & II (7th ed.). New York: McGraw Hill. ISBN 0-07-294262-2.
- Mailman, Joshua B. (2015). "Schoenberg's Chordal Experimentalism Revealed Through Representational Hierarchy Association (RHA), Contour Motives, and Binary State Switching" (PDF). Music Theory Spectrum 37 (2): 224–252.
- Persichetti, Vincent (1961). Twentieth-century Harmony: Creative Aspects and Practice. New York: W. W. Norton. ISBN 0-393-09539-8. OCLC 398434.
- Schejtman, Rod (2008). Music Fundamentals. The Piano Encyclopedia. ISBN 978-987-25216-2-2. | https://en.wikipedia.org/wiki/Chord_(music) |
4.15625 | Phospholipids are a class of lipids that are a major component of all cell membranes. They can form lipid bilayers because of their amphiphilic characteristic. The structure of the phospholipid molecule generally consists of two hydrophobic fatty acid "tails" and a hydrophilic "head", joined together by a glycerol molecule. The phosphate groups can be modified with simple organic molecules such as choline.
The first phospholipid identified as such in biological tissues was lecithin, or phosphatidylcholine, isolated in 1847 from the egg yolk of chickens by the French chemist and pharmacist Théodore Nicolas Gobley. Biological membranes in eukaryotes also contain another class of lipid, sterol, interspersed among the phospholipids, and together they provide membrane fluidity and mechanical strength. Purified phospholipids are produced commercially and have found applications in nanotechnology and materials science.
- 1 Amphipathic character
- 2 Applications
- 3 Simulations
- 4 Characterization
- 5 Phospholipid synthesis
- 6 Sources
- 7 In signal transduction
- 8 Food technology
- 9 Phospholipid derivatives
- 10 Abbreviations used and chemical information of glycerophospholipids
- 11 See also
- 12 References
The 'head' is hydrophilic (attracted to water), while the hydrophobic 'tails' are repelled by water and are forced to aggregate. The hydrophilic head contains the negatively charged phosphate group and glycerol. The hydrophobic tail usually consists of 2 long fatty acid chains. When placed in water, phospholipids form a variety of structures depending on the specific properties of the phospholipid. These specific properties allow phospholipids to play an important role in the phospholipid bilayer. In biological systems, the phospholipids often occur with other molecules (e.g., proteins, glycolipids, sterols) in a bilayer such as a cell membrane. Lipid bilayers occur when hydrophobic tails line up against one another, forming a membrane of hydrophilic heads on both sides facing the water.
The lateral movement of lipid and protein molecules within the membrane can be described by the fluid mosaic model, which treats the membrane as a mosaic of lipid molecules that act as a solvent for all the substances and proteins within it, so proteins and lipid molecules are then free to diffuse laterally through the lipid matrix and migrate over the membrane. Sterols contribute to membrane fluidity by hindering the packing together of phospholipids. However, this model has now been superseded, as through the study of lipid polymorphism it is now known that the behaviour of lipids under physiological (and other) conditions is not simple.
- See: Glycerophospholipid
- Phosphatidic acid (phosphatidate) (PA)
- Phosphatidylethanolamine (cephalin) (PE)
- Phosphatidylcholine (lecithin) (PC)
- Phosphatidylserine (PS)
- Ceramide phosphorylcholine (Sphingomyelin) (SPH)
- Ceramide phosphorylethanolamine (Sphingomyelin) (Cer-PE)
- Ceramide phosphoryllipid
Phospholipids have been widely used to prepare liposomal, ethosomal and other nanoformulations of topical, oral and parenteral drugs for various reasons, such as improved bioavailability, reduced toxicity and increased penetration. An ethosomal formulation of ketoconazole using phospholipids showed good entrapment efficiency and a good stability profile, and is a promising option for transdermal delivery, with potential for topical application in fungal infections. Liposomes are often composed of phosphatidylcholine-enriched phospholipids and may also contain mixed phospholipid chains with surfactant properties.
Phospholipids are optically highly birefringent, i.e. their refractive index is different along their axis as opposed to perpendicular to it. Measurement of birefringence can be achieved using cross polarisers in a microscope to obtain an image of e.g. vesicle walls or using techniques such as dual polarisation interferometry to quantify lipid order or disruption in supported bilayers.
There are no simple methods available for analysis of phospholipids, since the close range of polarity between different phospholipid species makes detection difficult. Oil chemists often use spectroscopy to determine total phosphorus content and then calculate the content of phospholipids based on the molecular weight of the expected fatty acid species. Lipidomists use more absolute methods of analysis, such as nuclear magnetic resonance spectroscopy (NMR), particularly 31P-NMR, while HPLC-ELSD provides relative values.
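As a rough, hypothetical illustration of that calculation (the molecular weight and the phosphorus figure below are assumptions for the example, not measured data): since most phospholipids carry one phosphorus atom per molecule, the measured phosphorus content can be scaled by the ratio of an assumed average phospholipid molecular weight to the atomic weight of phosphorus.

```python
# Back-of-the-envelope conversion from phosphorus content to phospholipid content.
P_ATOMIC_WEIGHT = 30.97        # g/mol of phosphorus
AVG_PHOSPHOLIPID_MW = 775.0    # assumed average g/mol; depends on the fatty acid species

def phospholipid_from_phosphorus(phosphorus_pct):
    # One mole of measured phosphorus is taken to represent one mole of phospholipid.
    return phosphorus_pct * (AVG_PHOSPHOLIPID_MW / P_ATOMIC_WEIGHT)

print(round(phospholipid_from_phosphorus(0.04), 2))  # 0.04 % P -> about 1.0 % phospholipid
```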
Phospholipid synthesis occurs in the cytosol adjacent to the ER membrane, which is studded with proteins that act in synthesis (GPAT and LPAAT acyl transferases, phosphatase and choline phosphotransferase) and allocation (flippase and floppase). Eventually a vesicle will bud off from the ER containing phospholipids destined for the cytoplasmic cellular membrane on its exterior leaflet and phospholipids destined for the exoplasmic cellular membrane on its inner leaflet.
Common sources of industrially produced phospholipids are soya, rapeseed, sunflower, chicken eggs, bovine milk, fish eggs etc. Each source has a unique profile of individual phospholipid species and consequently differing applications in food, nutrition, pharmaceuticals, cosmetics and drug delivery.
In signal transduction
Some types of phospholipid can be split to produce products that function as second messengers in signal transduction. Examples include phosphatidylinositol (4,5)-bisphosphate (PIP2), which can be split by the enzyme phospholipase C into inositol trisphosphate (IP3) and diacylglycerol (DAG), which both carry out the functions of the Gq type of G protein in response to various stimuli and intervene in various processes from long term depression in neurons to leukocyte signal pathways started by chemokine receptors.
Phospholipids also intervene in prostaglandin signal pathways as the raw material used by lipase enzymes to produce the prostaglandin precursors. In plants they serve as the raw material to produce Jasmonic acid, a plant hormone similar in structure to prostaglandins that mediates defensive responses against pathogens.
Phospholipids can act as an emulsifier, enabling oils to form a colloid with water. Phospholipids are one of the components of lecithin, which is found in egg yolks and is also extracted from soy beans; lecithin is used as a food additive in many products and can be purchased as a dietary supplement. Lysolecithins are typically used for water-in-oil (W/O) emulsions such as margarine, due to their higher HLB ratio.
- See table below for an extensive list.
- Natural phospholipid derivates:
- egg PC, egg PG, soy PC, hydrogenated soy PC, sphingomyelin as natural phospholipids.
- Synthetic phospholipid derivates:
- Phosphatidic acid (DMPA, DPPA, DSPA)
- Phosphatidylcholine (DDPC, DLPC, DMPC, DPPC, DSPC, DOPC, POPC, DEPC)
- Phosphatidylglycerol (DMPG, DPPG, DSPG, POPG)
- Phosphatidylethanolamine (DMPE, DPPE, DSPE DOPE)
- Phosphatidylserine (DOPS)
- PEG phospholipid (mPEG-phospholipid, polyglycerin-phospholipid, functionalized-phospholipid, terminal activated-phospholipid)
Abbreviations used and chemical information of glycerophospholipids
|DEPA-NA||80724-31-8||1,2-Dierucoyl-sn-glycero-3-phosphate (Sodium Salt)||Phosphatidic acid|
|DEPG-NA||1,2-Dierucoyl-sn-glycero-3[Phospho-rac-(1-glycerol...) (Sodium Salt)||Phosphatidylglycerol|
|DLPA-NA||1,2-Dilauroyl-sn-glycero-3-phosphate (Sodium Salt)||Phosphatidic acid|
|DLPG-NA||1,2-Dilauroyl-sn-glycero-3[Phospho-rac-(1-glycerol...) (Sodium Salt)||Phosphatidylglycerol|
|DLPG-NH4||1,2-Dilauroyl-sn-glycero-3[Phospho-rac-(1-glycerol...) (Ammonium Salt)||Phosphatidylglycerol|
|DLPS-NA||1,2-Dilauroyl-sn-glycero-3-phosphoserine (Sodium Salt)||Phosphatidylserine|
|DMPA-NA||80724-3||1,2-Dimyristoyl-sn-glycero-3-phosphate (Sodium Salt)||Phosphatidic acid|
|DMPG-NA||67232-80-8||1,2-Dimyristoyl-sn-glycero-3[Phospho-rac-(1-glycerol...) (Sodium Salt)||Phosphatidylglycerol|
|DMPG-NH4||1,2-Dimyristoyl-sn-glycero-3[Phospho-rac-(1-glycerol...) (Ammonium Salt)||Phosphatidylglycerol|
|DMPG-NH4/NA||1,2-Dimyristoyl-sn-glycero-3[Phospho-rac-(1-glycerol...) (Sodium/Ammonium Salt)||Phosphatidylglycerol|
|DMPS-NA||1,2-Dimyristoyl-sn-glycero-3-phosphoserine (Sodium Salt)||Phosphatidylserine|
|DOPA-NA||1,2-Dioleoyl-sn-glycero-3-phosphate (Sodium Salt)||Phosphatidic acid|
|DOPG-NA||62700-69-0||1,2-Dioleoyl-sn-glycero-3[Phospho-rac-(1-glycerol...) (Sodium Salt)||Phosphatidylglycerol|
|DOPS-NA||70614-14-1||1,2-Dioleoyl-sn-glycero-3-phosphoserine (Sodium Salt)||Phosphatidylserine|
|DPPA-NA||71065-87-7||1,2-Dipalmitoyl-sn-glycero-3-phosphate (Sodium Salt)||Phosphatidic acid|
|DPPG-NA||67232-81-9||1,2-Dipalmitoyl-sn-glycero-3[Phospho-rac-(1-glycerol...) (Sodium Salt)||Phosphatidylglycerol|
|DPPG-NH4||73548-70-6||1,2-Dipalmitoyl-sn-glycero-3[Phospho-rac-(1-glycerol...) (Ammonium Salt)||Phosphatidylglycerol|
|DPPS-NA||1,2-Dipalmitoyl-sn-glycero-3-phosphoserine (Sodium Salt)||Phosphatidylserine|
|DSPA-NA||108321-18-2||1,2-Distearoyl-sn-glycero-3-phosphate (Sodium Salt)||Phosphatidic acid|
|DSPG-NA||67232-82-0||1,2-Distearoyl-sn-glycero-3[Phospho-rac-(1-glycerol...) (Sodium Salt)||Phosphatidylglycerol|
|DSPG-NH4||108347-80-4||1,2-Distearoyl-sn-glycero-3[Phospho-rac-(1-glycerol...) (Ammonium Salt)||Phosphatidylglycerol|
|DSPS-NA||1,2-Distearoyl-sn-glycero-3-phosphoserine (Sodium Salt)||Phosphatidylserine|
|Egg Sphingomyelin empty Liposome|
|HEPC||Hydrogenated Egg PC||Phosphatidylcholine|
|HSPC||Hydrogenated Soy PC||Phosphatidylcholine|
|Milk Sphingomyelin|
|MPPC||1-Myristoyl-2-palmitoyl-sn-glycero-3-phosphocholine||Phosphatidylcholine|
|POPG-NA||81490-05-3||1-Palmitoyl-2-oleoyl-sn-glycero-3[Phospho-rac-(1-glycerol)...] (Sodium Salt)||Phosphatidylglycerol|
- Mashaghi S., Jadidi T., Koenderink G., Mashaghi A. (2013). "Lipid Nanotechnology". Int. J. Mol. Sci. 2013 (14): 4242–4282. doi:10.3390/ijms140242.
- Campbell, Neil A.; Brad Williamson; Robin J. Heyden (2006). Biology: Exploring Life. Boston, Massachusetts: Pearson Prentice Hall. ISBN 0-13-250882-6.[page needed]
- Ketoconazole Encapsulated Liposome and Ethosome: GUNJAN TIWARI
- HPLC SEPARATION of PHOSPHOLIPIDS - W.W. Christie
- P. Meneses and T. Glonek (1988). "High resolution 31P NMR of extracted phospholipids". The Journal of Lipid Research 29 (5): 679–689. PMID 3411242.
- Furse, Samuel; Liddell, Susan; Ortori, Catharine A.; Williams, Huw; Neylon, D. Cameron; Scott, David J.; Barrett, David A.; Gray, David A. (2013). "The lipidome and proteome of oil bodies from Helianthus annuus (common sunflower)". Journal of Chemical Biology 6 (2): 63–76. doi:10.1007/s12154-012-0090-1. PMC 3606697. PMID 23532185.
- T.L. Mounts, A.M. Nash (1990). "HPLC analysis of phospholipids in crude oil for evaluation of soybean deterioration". Journal of the American Oil Chemists' Society 67 (11): 757–760. doi:10.1007/BF02540486.
- Lodish, Harvey; Berk, Krieger, Kaiser, Scott, Bretscher, Ploegh, Matsudaira (2008). Molecular Cell Biology. W.H. Freeman and Company. ISBN 0-7167-7601-4.
- Choi, S.-Y.; Chang, J; Jiang, B; Seol, GH; Min, SS; Han, JS; Shin, HS; Gallagher, M; Kirkwood, A (2005). "Multiple Receptors Coupled to Phospholipase C Gate Long-Term Depression in Visual Cortex". Journal of Neuroscience 25 (49): 11433–43. doi:10.1523/JNEUROSCI.4084-05.2005. PMID 16339037.
- Cronshaw, D. G.; Kouroumalis, A; Parry, R; Webb, A; Brown, Z; Ward, SG (2006). "Evidence that phospholipase C-dependent, calcium-independent mechanisms are required for directional migration of T lymphocytes in response to the CCR4 ligands CCL17 and CCL22". Journal of Leukocyte Biology 79 (6): 1369–80. doi:10.1189/jlb.0106035. PMID 16614259. | https://en.wikipedia.org/wiki/Phospholipid |
4.25 | What it is:
The Fisher Effect is an economic hypothesis stating that the real interest rate is equal to the nominal rate minus the expected rate of inflation.
How it works (Example):
In the late 1930s, U.S. economist Irving Fisher wrote a paper which posited that a country's interest rate level rises and falls in direct relation to its inflation rates. Fisher mathematically expressed this theory in the following way:
R Nominal = R Real + R Inflation
The equation states that a country's current (nominal) interest rate is equal to a real interest rate adjusted for the rate of inflation. In this sense, Fisher conceived of interest rates, as the prices of lending, as being adjusted for inflation in the same manner that the prices of goods and services are. For instance, if a country's nominal interest rate is six percent and its inflation rate is two percent, the country's real interest rate is four percent (6% - 2% = 4%).
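As a quick illustration of the arithmetic, the following Python sketch (the function names are my own, purely for illustration) computes the real rate both with Fisher's simple subtraction and with the exact multiplicative form that the subtraction approximates when rates are small.

```python
def real_rate_approx(nominal, inflation):
    """Fisher's linear approximation: real = nominal - inflation."""
    return nominal - inflation

def real_rate_exact(nominal, inflation):
    """Exact relation: (1 + nominal) = (1 + real) * (1 + inflation)."""
    return (1 + nominal) / (1 + inflation) - 1

nominal, inflation = 0.06, 0.02                       # 6% nominal, 2% inflation
print(round(real_rate_approx(nominal, inflation), 4))  # 0.04   -> 4%
print(round(real_rate_exact(nominal, inflation), 4))   # 0.0392 -> roughly 3.9%
```

The two results differ only slightly at these rates, which is why the subtraction form is the one usually quoted.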
Why it Matters:
The Fisher effect is an important tool by which lenders can gauge whether or not they are making money on a granted loan. Unless the rate charged is above and beyond the economy's inflation rate, a lender will not profit from the interest. Moreover, according to Fisher's theory, even if a loan is granted at no interest, a lending party would need to charge at least the inflation rate in order to retain purchasing power upon repayment. | http://www.investinganswers.com/financial-dictionary/economics/fisher-effect-1796 |
4.28125 |
As with many movements in the arts, the Harlem Renaissance was influenced heavily by the economics of its time. During the early 20th century, America, like much of the world, was moving from an agriculturally based economy to a more industrial one. Cities in the North were leading the way in this change, so many African Americans in the less affluent South started to seek higher-paying jobs above the Mason-Dixon line.
U.S. policy on immigration also prompted this northern migration, as the government began limiting the number of immigrants entering the country. This policy, along with the industrialization described above and the promise of a better, more equal life, caused African American populations in major northern cities to nearly double by the 1930s.
Socially, writers such as Booker T. Washington had pushed forward the notion of a well-educated and proud African American man, in place of the backward stereotypes that many people, including African Americans, had accepted from their society. This push towards independent thinking and pride in the leadership of new lives also helped the Harlem Renaissance blossom.
Beyond the general influence of WWI, the end of the war in 1919 also brought racial relations to a head. White soldiers returned and struggled to accept the changing roles of African Americans. African American soldiers returned with the respect they had earned on the battlefield and were once again treated as second-class citizens by the fellow Americans they had defended. In 1919, 25 race riots took place and over 75 lynchings were reported.
Finally, Harlem itself influenced the movement as it became the center of African American life and culture in the northeast. The neighborhood had once been a collection of wealthy white homes; a recent housing bubble had burst, leaving it with foreclosed properties that were affordable. The comfort and excitement offered by the neighborhood continued to attract African American leaders in all walks of life through the 1920s. The population quadrupled in that decade.
| http://www.enotes.com/homework-help/what-historical-factors-influenced-writers-harlem-437211
4.125 | Genome projects are scientific endeavours that ultimately aim to determine the complete genome sequence of an organism (be it an animal, a plant, a fungus, a bacterium, an archaean, a protist or a virus) and to annotate protein-coding genes and other important genome-encoded features. The genome sequence of an organism includes the collective DNA sequences of each chromosome in the organism. For a bacterium containing a single chromosome, a genome project will aim to map the sequence of that chromosome. For the human species, whose genome includes 22 pairs of autosomes and 2 sex chromosomes, a complete genome sequence will involve 46 separate chromosome sequences.
The Human Genome Project was a landmark genome project that is already having a major impact on research across the life sciences, with potential for spurring numerous medical and commercial developments.
Genome assembly refers to the process of taking a large number of short DNA sequences and putting them back together to create a representation of the original chromosomes from which the DNA originated. In a shotgun sequencing project, all the DNA from a source (usually a single organism, anything from a bacterium to a mammal) is first fractured into millions of small pieces. These pieces are then "read" by automated sequencing machines, which can read up to 1000 nucleotides or bases at a time. (The four bases are adenine, guanine, cytosine, and thymine, represented as AGCT.) A genome assembly algorithm works by taking all the pieces and aligning them to one another, and detecting all places where two of the short sequences, or reads, overlap. These overlapping reads can be merged, and the process continues.
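As a toy illustration of the overlap-and-merge idea (a greedy sketch of my own for a handful of error-free reads, not the graph-based methods real assemblers use), the following Python code repeatedly merges the pair of reads with the longest suffix-prefix overlap:

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that matches a prefix of b."""
    start = 0
    while True:
        start = a.find(b[:min_len], start)   # next plausible anchor point
        if start == -1:
            return 0
        if b.startswith(a[start:]):
            return len(a) - start
        start += 1

def greedy_assemble(reads, min_len=3):
    """Repeatedly merge the pair of reads with the largest overlap."""
    reads = list(reads)
    while len(reads) > 1:
        best = (0, None, None)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    olen = overlap(a, b, min_len)
                    if olen > best[0]:
                        best = (olen, i, j)
        olen, i, j = best
        if olen == 0:                        # no overlaps left; just concatenate
            return "".join(reads)
        merged = reads[i] + reads[j][olen:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)] + [merged]
    return reads[0]

reads = ["ATTAGACCTG", "CCTGCCGGAA", "AGACCTGCCG", "GCCGGAATAC"]
print(greedy_assemble(reads))   # reconstructs ATTAGACCTGCCGGAATAC
```

Real assemblers must additionally handle sequencing errors, reverse-complement reads and the repeats discussed next, which is why they rely on far more elaborate data structures.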
Genome assembly is a very difficult computational problem, made more difficult because many genomes contain large numbers of identical sequences, known as repeats. These repeats can be thousands of nucleotides long, and some occur in thousands of different locations, especially in the large genomes of plants and animals.
The resulting (draft) genome sequence is produced by combining the information sequenced contigs and then employing linking information to create scaffolds. Scaffolds are positioned along the physical map of the chromosomes creating a "golden path".
Originally, most large-scale DNA sequencing centers developed their own software for assembling the sequences that they produced. However, this has changed as the software has grown more complex and as the number of sequencing centers has increased. An example of such an assembler is the Short Oligonucleotide Analysis Package, developed by BGI for de novo assembly of human-sized genomes, alignment, SNP detection, resequencing, indel finding, and structural variation analysis.
Genome annotation
Genome annotation is the process of attaching biological information to the assembled sequence. It consists of three main steps:
- identifying portions of the genome that do not code for proteins
- identifying elements on the genome, a process called gene prediction, and
- attaching biological information to these elements.
Automatic annotation tools try to perform all this by computer analysis, as opposed to manual annotation (a.k.a. curation) which involves human expertise. Ideally, these approaches co-exist and complement each other in the same annotation pipeline.
The basic level of annotation is using BLAST for finding similarities, and then annotating genomes based on that. However, nowadays more and more additional information is added to the annotation platform. The additional information allows manual annotators to deconvolute discrepancies between genes that are given the same annotation. Some databases use genome context information, similarity scores, experimental data, and integrations of other resources to provide genome annotations through their Subsystems approach. Other databases (e.g. Ensembl) rely on both curated data sources as well as a range of different software tools in their automated genome annotation pipeline.
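To illustrate similarity-based annotation in practice, the snippet below uses Biopython's web BLAST wrapper to compare a query sequence against NCBI's nucleotide collection. The query string is a made-up placeholder, and this is only a sketch of the idea rather than part of any particular annotation pipeline; it assumes Biopython is installed and that the NCBI web service is reachable.

```python
from Bio.Blast import NCBIWWW, NCBIXML

query = "AGTACACTGGTATTGTGAAGGCTCATGCTTGTGGTGCTGATGGT"  # placeholder query sequence

# Submit a blastn search against the nucleotide collection and parse the XML result.
result_handle = NCBIWWW.qblast("blastn", "nt", query)
record = NCBIXML.read(result_handle)

# The descriptions of the best-scoring hits become candidate annotations for the query.
for alignment in record.alignments[:5]:
    best_hsp = alignment.hsps[0]
    print(f"{alignment.title[:60]}  E-value: {best_hsp.expect:.2g}")
```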
Structural annotation consists of the identification of genomic elements.
- ORFs and their localisation (see the sketch following this list)
- gene structure
- coding regions
- location of regulatory motifs
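As a concrete example of the first item above, the sketch below locates simple forward-strand open reading frames (ORFs), defined here as a start codon (ATG) followed by an in-frame stop codon. Real gene prediction also handles the reverse strand, splicing and statistical gene models, so this is only a minimal illustration.

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_codons=10):
    """Return (start, end, frame) for simple forward-strand ORFs."""
    seq = seq.upper()
    orfs = []
    for frame in range(3):
        pos = frame
        while pos + 3 <= len(seq):
            if seq[pos:pos + 3] == "ATG":                # potential start codon
                for end in range(pos + 3, len(seq) - 2, 3):
                    if seq[end:end + 3] in STOP_CODONS:  # first in-frame stop
                        if (end + 3 - pos) // 3 >= min_codons:
                            orfs.append((pos, end + 3, frame))
                        pos = end                        # resume after this ORF
                        break
            pos += 3
    return orfs

# Tiny demonstration with a made-up sequence and a low length threshold.
print(find_orfs("CCATGAAATTTGGGCCCTAAGG", min_codons=2))   # [(2, 20, 2)]
```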
Functional annotation consists of attaching biological information to genomic elements.
- biochemical function
- biological function
- involved regulation and interactions
These steps may involve both biological experiments and in silico analysis. Proteogenomics based approaches utilize information from expressed proteins, often derived from mass spectrometry, to improve genomics annotations.
A variety of software tools have been developed to permit scientists to view and share genome annotations.
Genome annotation remains a major challenge for scientists investigating the human genome, now that the genome sequences of more than a thousand human individuals and several model organisms are largely complete. Identifying the locations of genes and other genetic control elements is often described as defining the biological "parts list" for the assembly and normal operation of an organism. Scientists are still at an early stage in the process of delineating this parts list and in understanding how all the parts "fit together".
Genome annotation is an active area of investigation and involves a number of different organizations in the life science community which publish the results of their efforts in publicly available biological databases accessible via the web and other electronic means. Here is an alphabetical listing of on-going projects relevant to genome annotation:
- Encyclopedia of DNA elements (ENCODE)
- Entrez Gene
- Gene Ontology Consortium
- Vertebrate and Genome Annotation Project (Vega)
At Wikipedia, genome annotation has started to become automated under the auspices of the Gene Wiki portal which operates a bot that harvests gene data from research databases and creates gene stubs on that basis.
When is a genome project finished?
When sequencing a genome, there are usually regions that are difficult to sequence (often regions with highly repetitive DNA). Thus, 'completed' genome sequences are rarely ever complete, and terms such as 'working draft' or 'essentially complete' have been used to more accurately describe the status of such genome projects. Even when every base pair of a genome sequence has been determined, there are still likely to be errors present because DNA sequencing is not a completely accurate process. It could also be argued that a complete genome project should include the sequences of mitochondria and (for plants) chloroplasts as these organelles have their own genomes.
It is often reported that the goal of sequencing a genome is to obtain information about the complete set of genes in that particular genome sequence. The proportion of a genome that encodes for genes may be very small (particularly in eukaryotes such as humans, where coding DNA may only account for a few percent of the entire sequence). However, it is not always possible (or desirable) to only sequence the coding regions separately. Also, as scientists understand more about the role of this noncoding DNA (often referred to as junk DNA), it will become more important to have a complete genome sequence as a background to understanding the genetics and biology of any given organism.
In many ways genome projects do not confine themselves to only determining a DNA sequence of an organism. Such projects may also include gene prediction to find out where the genes are in a genome, and what those genes do. There may also be related projects to sequence ESTs or mRNAs to help find out where the genes actually are.
Historical and technological perspectives
Historically, when sequencing eukaryotic genomes (such as the worm Caenorhabditis elegans) it was common to first map the genome to provide a series of landmarks across the genome. Rather than sequence a chromosome in one go, it would be sequenced piece by piece (with the prior knowledge of approximately where that piece is located on the larger chromosome). Changes in technology and in particular improvements to the processing power of computers, means that genomes can now be 'shotgun sequenced' in one go (there are caveats to this approach though when compared to the traditional approach).
Improvements in DNA sequencing technology has meant that the cost of sequencing a new genome sequence has steadily fallen (in terms of cost per base pair) and newer technology has also meant that genomes can be sequenced far more quickly.
When research agencies decide what new genomes to sequence, the emphasis has been on species which are either high importance as model organism or have a relevance to human health (e.g. pathogenic bacteria or vectors of disease such as mosquitos) or species which have commercial importance (e.g. livestock and crop plants). Secondary emphasis is placed on species whose genomes will help answer important questions in molecular evolution (e.g. the common chimpanzee).
In the future, it is likely that it will become even cheaper and quicker to sequence a genome. This will allow for complete genome sequences to be determined from many different individuals of the same species. For humans, this will allow us to better understand aspects of human genetic diversity.
Example genome projects
Many organisms have genome projects that have either been completed or will be completed shortly, including:
- Humans, Homo sapiens; see Human genome project
- Palaeo-Eskimo, an ancient human
- Neanderthal, "Homo neanderthalensis" (partial); see Neanderthal Genome Project
- Common Chimpanzee Pan troglodytes; see Chimpanzee Genome Project
- Domestic Cow
- Bovine Genome
- Honey Bee Genome Sequencing Consortium
- Horse genome
- Human microbiome project
- International Grape Genome Program
- International HapMap Project
- Tomato 150+ genome resequencing project
- 100K Genome Project
- Genomics England
- Joint Genome Institute
- Model organism
- National Center for Biotechnology Information
- Illumina, private company involved in genome sequencing
- Knome, private company offering genome analysis & sequencing
- Pevsner, Jonathan (2009). Bioinformatics and functional genomics (2nd ed.). Hoboken, N.J: Wiley-Blackwell. ISBN 9780470085851.
- "Potential Benefits of Human Genome Project Research". Department of Energy, Human Genome Project Information. 2009-10-09. Retrieved 2010-06-18.
- Li, Ruiqiang; et al. (February 2010). "De novo assembly of human genomes with massively parallel short read sequencing". Genome Research 20 (2): 265–272. doi:10.1101/gr.097261.109. ISSN 1549-5469. PMC 2813482. PMID 20019144.
- Rasmussen, Morten; et al. (2010-02-11). "Ancient human genome sequence of an extinct Palaeo-Eskimo". Nature 463 (7282): 757–762. doi:10.1038/nature08835. ISSN 1476-4687. PMC 3951495. PMID 20148029.
- Wang, Jun; et al. (2008-11-06). "The diploid genome sequence of an Asian individual". Nature 456 (7218): 60–65. doi:10.1038/nature07484. ISSN 0028-0836. PMC 2716080. PMID 18987735. Retrieved 2012-12-22.
- Stein, L. (2001). "Genome annotation: from sequence to biology". Nature Reviews Genetics 2 (7): 493–503. doi:10.1038/35080529. PMID 11433356.
- "Ensembl's genome annotation pipeline online documentation".
- Gupta, Nitin; Stephen Tanner; Navdeep Jaitly; Joshua N Adkins; Mary Lipton; Robert Edwards; Margaret Romine; Andrei Osterman; Vineet Bafna; Richard D Smith; Pavel A Pevzner (September 2007). "Whole proteome analysis of post-translational modifications: applications of mass-spectrometry for proteogenomic annotation". Genome Research 17 (9): 1362–1377. doi:10.1101/gr.6427907. ISSN 1088-9051. PMC 1950905. PMID 17690205.
- ENCODE Project Consortium (2011). Becker PB, ed. "A User's Guide to the Encyclopedia of DNA Elements (ENCODE)". PLOS Biology 9 (4): e1001046. doi:10.1371/journal.pbio.1001046. PMC 3079585. PMID 21526222.
- McVean, G. A.; Abecasis, D. M.; Auton, R. M.; Brooks, G. A. R.; Depristo, D. R.; Durbin, A.; Handsaker, A. G.; Kang, P.; Marth, E. E.; McVean, P.; Gabriel, S. B.; Gibbs, R. A.; Green, E. D.; Hurles, M. E.; Knoppers, B. M.; Korbel, J. O.; Lander, E. S.; Lee, C.; Lehrach, H.; Mardis, E. R.; Marth, G. T.; McVean, G. A.; Nickerson, D. A.; Schmidt, J. P.; Sherry, S. T.; Wang, J.; Wilson, R. K.; Gibbs (Principal Investigator), R. A.; Dinh, H.; Kovar, C. (2012). "An integrated map of genetic variation from 1,092 human genomes". Nature 491 (7422): 56–65. doi:10.1038/nature11632. PMC 3498066. PMID 23128226.
- Dunham, I.; Bernstein, A.; Birney, S. F.; Dunham, P. J.; Green, C. A.; Gunter, F.; Snyder, C. B.; Frietze, S.; Harrow, J.; Kaul, R.; Khatun, J.; Lajoie, B. R.; Landt, S. G.; Lee, B. K.; Pauli, F.; Rosenbloom, K. R.; Sabo, P.; Safi, A.; Sanyal, A.; Shoresh, N.; Simon, J. M.; Song, L.; Trinklein, N. D.; Altshuler, R. C.; Birney, E.; Brown, J. B.; Cheng, C.; Djebali, S.; Dong, X.; Dunham, I. (2012). "An integrated encyclopedia of DNA elements in the human genome". Nature 489 (7414): 57–74. doi:10.1038/nature11247. PMC 3439153. PMID 22955616.
- Huss, Jon W.; Orozco, C; Goodale, J; Wu, C; Batalov, S; Vickers, TJ; Valafar, F; Su, AI (2008). "A Gene Wiki for Community Annotation of Gene Function". PLoS Biology 6 (7): e175. doi:10.1371/journal.pbio.0060175. PMC 2443188. PMID 18613750.
- Yates, Diana (2009-04-23). "What makes a cow a cow? Genome sequence sheds light on ruminant evolution" (Press Release). EurekAlert!. Retrieved 2012-12-22.
- Elsik, C. G.; Elsik, R. L.; Tellam, K. C.; Worley, R. A.; Gibbs, D. M.; Muzny, G. M.; Weinstock, D. L.; Adelson, E. E.; Eichler, L.; Elnitski, R.; Guigó, D. L.; Hamernik, S. M.; Kappes, H. A.; Lewin, D. J.; Lynn, F. W.; Nicholas, A.; Reymond, M.; Rijnkels, L. C.; Skow, E. M.; Zdobnov, L.; Schook, J.; Womack, T.; Alioto, S. E.; Antonarakis, A.; Astashyn, C. E.; Chapple, H. -C.; Chen, J.; Chrast, F.; Câmara, O.; Ermolaeva, C. N. (2009). "The Genome Sequence of Taurine Cattle: A Window to Ruminant Biology and Evolution". Science 324 (5926): 522–528. doi:10.1126/science.1169588. PMC 2943200. PMID 19390049.
|The Wikibook Next Generation Sequencing (NGS) has a page on the topic of: De_novo_assembly|
- GOLD:Genomes OnLine Database
- Genome Project Database
- The Protein Naming Utility
- EchinoBase An Echinoderm genomic database, (previous SpBase, a sea urchin genome database)
- Global Invertebrate Genomics Alliance (GIGA) | https://en.wikipedia.org/wiki/Genome_project |
4.0625 | Some mammals used highly complex teeth to compete with dinosaurs
Conventional wisdom holds that during the Mesozoic Era, mammals were small creatures that held on at life's edges. But at least one mammal group, rodent-like creatures called multituberculates, actually flourished during the last 20 million years of the dinosaurs' reign and survived their extinction 66 million years ago. New research led by a University of Washington paleontologist suggests that the multituberculates did so well in part because they developed numerous tubercles (bumps, or cusps) on their back teeth that allowed them to feed largely on angiosperms, flowering plants that were just becoming commonplace.
"These mammals were able to radiate in terms of numbers of species, body size and shapes of their teeth, which influenced what they ate," said Gregory P. Wilson, a UW assistant professor of biology. He is the lead author of a paper documenting the research, published March 14 in the online edition of Nature.
Some 170 million years ago, multituberculates were about the size of a mouse. Angiosperms started to appear about 140 million years ago and after that the small mammals' body sizes increased, eventually ranging from mouse-sized to the size of a beaver.
Following the dinosaur extinction, multituberculates continued to flourish until other mammals -- mostly primates, ungulates and rodents -- gained a competitive advantage. That ultimately led to multituberculate extinction about 34 million years ago.
The scientists examined teeth from 41 multituberculate species kept in fossil collections worldwide. They used laser and computed tomography (or CT) scanning to create 3-D images of the teeth in very high resolution, less than than 30 microns (smaller than one-third the diameter of a human hair). Using geographic information system software, they analyzed the tooth shape much as a geographer might in examining a mountain range when charting topography, Wilson said.
The work involved determining which direction various patches of the tooth surfaces were facing. The more patches on a tooth the more complex its structure, and the most complex teeth show many bumps, or cusps.
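The patch idea can be mimicked with a small, self-contained calculation (a simplified stand-in of my own for orientation-patch-count style analyses, not the authors' actual GIS workflow): given a grid of surface heights, each cell is binned by the direction its slope faces, and contiguous cells that share a bin are counted as one patch.

```python
import numpy as np

def orientation_patch_count(heights):
    """Count contiguous patches of cells that share a slope orientation (8 bins)."""
    dz_dy, dz_dx = np.gradient(heights.astype(float))
    aspect = np.arctan2(dz_dy, dz_dx)                     # direction of steepest ascent
    bins = ((aspect + np.pi) / (2 * np.pi) * 8).astype(int) % 8

    visited = np.zeros(bins.shape, dtype=bool)
    patches = 0
    rows, cols = bins.shape
    for r in range(rows):
        for c in range(cols):
            if visited[r, c]:
                continue
            patches += 1
            stack = [(r, c)]                              # flood-fill one patch
            visited[r, c] = True
            while stack:
                y, x = stack.pop()
                for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                    if 0 <= ny < rows and 0 <= nx < cols \
                            and not visited[ny, nx] and bins[ny, nx] == bins[y, x]:
                        visited[ny, nx] = True
                        stack.append((ny, nx))
    return patches

# A single smooth dome (one cusp) versus two cusps side by side.
y, x = np.mgrid[-1:1:30j, -1:1:30j]
one_cusp = np.exp(-(x**2 + y**2) * 4)
two_cusps = np.exp(-((x - 0.5)**2 + y**2) * 8) + np.exp(-((x + 0.5)**2 + y**2) * 8)
print(orientation_patch_count(one_cusp), orientation_patch_count(two_cusps))
# the two-cusp surface should give the larger count
```

A bumpier surface splits into more orientation patches than a single smooth dome, mirroring the idea that more cusped teeth score as more complex.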
Carnivores have relatively simple teeth, with perhaps 110 patches per tooth row, because their food is easily broken down, Wilson said. But animals that depend more on vegetation for sustenance have teeth with substantially more patches because much of their food is broken down by the teeth.
In multituberculates, sharper bladelike teeth were situated toward the front of the mouth. But the new analysis shows that in some multituberculates these teeth became less prominent over time and the teeth in the back became very complex, with as many as 348 patches per tooth row, ideal for crushing plant material.
"If you look at the complexity of teeth, it will tell you information about the diet," Wilson said. "Multituberculates seem to be developing more cusps on their back teeth, and the bladelike tooth at the front is becoming less important as they develop these bumps to break down plant material."
The researchers concluded that some angiosperms apparently suffered little effect from the dinosaur extinction event, since the multituberculates that ate those flowering plants continued to prosper. As the plants spread, the population of insect pollinators likely grew too and species feeding on insects also would have benefited, Wilson said.
The paper's coauthors are Alistair Evans of Monash University in Australia, Ian Corfe, Mikael Fortelius and Jukka Jernvall of the University of Helsinki in Finland, and Peter Smits of the UW and Monash University.
The research was funded by the National Science Foundation, the Denver Museum of Nature and Science, the University of Washington, the Australian Research Council, Monash University, Academy of Finland and the European Union's Synthesis of Systematic Resources.
Source: University of Washington
| http://esciencenews.com/articles/2012/03/14/some.mammals.used.highly.complex.teeth.compete.with.dinosaurs
4.25 | During the Fifth & Fourth Centuries B.C.
As such, it was superior to both the hieroglyphic style which was based on the representation of words in the form of pictures or ideograms, and the syllabic style which was based on "the systematic representation of syllables rather than words by signs." There is a great deal of evidence to support the claim that the Greeks adopted their alphabet from the Phoenicians, a Semitic group with whom the Greeks engaged in trade. In fact, the Greeks themselves originally referred to their alphabet as phoinikeia, which means "Phoenician objects." In adopting the Phoenician alphabet to their own use, the Greeks made some minor but significant changes. For example, the Phoenician symbols were still somewhat pictographic in that each letter represented an object (aleph stood for "ox," beth stood for "house," and so forth). In the Greek system, each letter simply represented a phonetic sound. This made the Greek alphabet more flexible in its usage than the Phoenician system. In addition, the Phoenician alphabet was designed to represent only the consonants of the language, with the missing vowels supposedly being understood by the reader. It may be noted that this trait of leaving out the vowels is still common among the languages which have stemmed from the Semitic branch, such as Arabic and Hebrew. The inventors of the Greek alphabet changed this by taking the Phoenician letters that represented sounds not used in the Greek language and using them to represent vowel sounds. Thus, Phoenician letters "derived from Semitic glottal stops, and breathings, were employed to signify vowel sounds." The resultant changes produced a new alphabet which possessed many inherent advantages. For example, with only twenty-four symbols, it was a simpler writing system than the earlier syllabic or hieroglyphic systems, which often contained hundreds of symbols that the reader had to learn. As such, "it was an alphabet which was relatively easy to lea...
| http://www.collegetermpapers.com/viewpaper/1303717507.html
4.28125 | 6 Written questions
6 Multiple choice questions
- Process by which eroded rock is dropped in a new place
- Giant pieces made of rock that float on a layer of partly melted rock. They move very, very slowly.
- Huge sheets of ice that move slowly over land
- A place where rock breaks and slips
- Mountains and other surface features of the land
- Flat piece of land made of sand and mud deposited by a river near its mouth.
5 True/False questions
Sediment → The Point of Earth's surface that is directly above the focus
Erosion → The process by which weathered rock is picked up and moved to a new place
Earthquake → The breaking down of rock on Earth's surface into smaller pieces
Volcano → Flat piece of land made of sand and mud deposited by a river near its mouth.
Epicenter → The Point of Earth's surface that is directly above the focus | https://quizlet.com/1231935/test |
4 | Names of the Celts
The names Κελτοί Keltoi and Celtae are used in Greek and Latin, respectively, as the name of a people of the La Tène horizon in the region of the upper Rhine and Danube during the 6th to 1st centuries BC in Greco-Roman ethnography. The etymology of this name, and of that of the Gauls (Γαλάται Galatai / Galli), is uncertain. The name of the Welsh, on the other hand, is taken from *walha-, the designator used by the Germanic peoples for Celtic- and Latin-speaking peoples, meaning "foreigner".
The linguistic sense of the name Celts, grouping all speakers of Celtic languages, is modern. In particular, aside from a 1st-century literary genealogy of Celtus the grandson of Bretannos by Heracles, there is no record of the term "Celt" being used in connection with the Insular Celts, the inhabitants of the British Isles during the Iron Age, prior to the 17th century.
The first recorded use of the name of Celts – as Κελτοί – to refer to an ethnic group was by Hecataeus of Miletus, the Greek geographer, in 517 BC, when writing about a people living near Massilia (modern Marseille). In the 5th century BC Herodotus referred to Keltoi living around the head of the Danube and also in the far west of Europe.
The etymology of the term Keltoi is unclear. Possible roots include Indo-European *k´el-‘to hide’ (also in Old Irish celid), IE *k´el- ‘to heat’ or *kel- ‘to impel’. Several authors have supposed it to be Celtic in origin, while others view it as a name coined by Greeks. Linguist Patrizia De Bernardo Stempel falls in the latter group, and suggests the meaning "the tall ones".
The Romans preferred the name Gauls (Galli) for those Celts whom they first encountered in northern Italy (Cisalpine Gaul). In the first century BC Caesar referred to the Gauls as calling themselves Celts in their own tongue.
According to the 1st-century poet Parthenius of Nicaea, Celtus (Κελτός) was the son of Heracles and Keltine (Κελτίνη), the daughter of Bretannus (Βρεττανός); this literary genealogy exists nowhere else and was not connected with any known cult. Celtus became the eponymous ancestor of Celts. In Latin Celta came in turn from Herodotus' word for the Gauls, Keltoi. The Romans used Celtae to refer to continental Gauls, but apparently not to Insular Celts. The latter are divided linguistically into Goidels and Brythons.
Aside from the Celtiberians —Lusones, Titi, Arevaci and Pelendones among others— who inhabited large regions of central Spain, Greek and Roman geographers also spoke of a people or group of peoples called Celtici or Κελτικοί, living in the South of modern day Portugal, in the Alentejo region, between the Tagus and the Guadiana rivers. They are first mentioned by Strabo, who wrote that they were the most numerous people inhabiting that region. Later, the description of Ptolemy shows a more reduced territory, comprising the regions from Évora to Setúbal, being the coastal and southern areas occupied by the Turdetani.
A second group of Celtici was mentioned by Pliny living in the region of Baeturia (northwestern Andalusia); he considered that they proceeded "of the Celtiberians from the Lusitania, because of their religion, language, and because of the names of their cities".
In the North, in Galicia, another group of Celtici dwelt along the coastal areas. They comprised several populi, including the Celtici proper: the Praestamarci south of the Tambre river (Tamaris), the Supertamarci north of it, and the Neri by the Celtic promontory (Promunturium Celticum). Pomponius Mela affirmed that all the inhabitants of the coastal regions, from the bays of southern Galicia up to the Astures, were also Celtici: "All (this coast) is inhabited by the Celtici, except from the Douro river to the bays, where the Grovi dwelt (…) In the north coast first there are the Artabri, still of the Celtic people (Celticae gentis), and after them the Astures." He also mentioned the fabulous isles of tin, the Cassiterides, as situated among these Celtici.
The Celtici Supertarmarci have also left a number of inscriptions, as the Celtici Flavienses did. Several villages and rural parishes still bear the name Céltigos (from Latin Celticos) in Galicia. This is also the name of an archpriesthood of the Catholic Church, a division of the archbishopric of Santiago de Compostela, encompassing part of the lands attributed to the Celtici Supertamarci by ancient authors.
Introduction in Early Modern literature
The name of the Celtae is revived in the learned literature of the Early Modern period. The French celtique and the German celtisch appear in the 16th century. The English word Celts is first attested in 1607. The adjective Celtic, formed after French celtique, appears a little later, in the mid 17th century. An early attestation is found in Milton's Paradise Lost (1667), in reference to the Insular Celts of antiquity: [the Ionian gods ... who] o'er the Celtic [fields] roamed the utmost Isles. (I.520, here in the 1674 spelling).
In the 18th century the interest in "primitivism", which led to the idea of the "noble savage", brought a wave of enthusiasm for all things "Celtic". The antiquarian William Stukeley pictured a race of "Ancient Britons" constructing the "Temples of the Ancient Celts" such as Stonehenge (actually a pre-Celtic structure) before he decided in 1733 to recast the "Celts" in his book as "Druids". The Ossian fables written by James Macpherson - portrayed as ancient Scottish Gaelic poems - added to this romantic enthusiasm. The "Irish revival" came after the Catholic Emancipation Act of 1829 as a conscious attempt to demonstrate an Irish national identity, and with its counterpart in other countries subsequently became known as the "Celtic revival".
The initial consonant of the English words Celt and Celtic can be realised either as /k/ or /s/ (that is, either hard or soft ⟨c⟩), both variants being recognised by modern dictionaries. A minor spelling variant Kelt, Keltic exists, for which /k/ is the only pronunciation.
The English word originates in the 17th century, taken from the Celtæ of classical Latin. Until the mid 19th century, the sole pronunciation in English was /selt/ in keeping of the treatment of the letter ⟨c⟩ inherited by Middle English from Old French and Late Latin.
Beginning in the mid-19th century, Celtic revivalist and nationalist publications advocated imitating the pronunciation of classical Latin in the time of Julius Caesar, when Latin Celtæ was pronounced /keltai/.
An early example of this is a short article in a November 1857 issue of The Celt, a publication of the Irish Celtic Union.
- "Of all the nations that have hitherto lived on the face of the earth, the English have the worst mode of pronouncing learned languages. This is admitted by the whole human race [...] This poor meagre sordid language resembles nothing so much as the hissing of serpents or geese. [...] The distinction which English writers are too stupid to notice, but which the Irish Grammarians are perpetually talking of, the distinction between broad and narrow vowels—governs the English language. [...] If we follow the unwritten law of the English we shall pronounce (Celt) Selt but Cæsar would pronounce it, Kaylt. Thus the reader may take which pronunciation he pleases. He may follow the rule of the Latin or the rule of the English language, and in either case be right."
A guide to English pronunciation for Welsh speakers published in 1861 gives the alternative pronunciations "sel´tik, kel´tik" for the adjective Celtic.
The pronunciation with /s/ remained standard throughout the 19th to early 20th century, but the variant with /k/ seems to have gained ground during the later 20th century, especially among "students of Celtic culture". On the other hand, the /s/ pronunciation remains the most recognised form when it occurs in the names of sports teams, most notably Celtic Football Club and the Boston Celtics basketball team. Cavan newspaper The Anglo-Celt also uses the soft c pronunciation in its name.
The corresponding words in French are pronounced with /s/, and English Celtic was formed in imitation of French celtique. The corresponding German terms are Kelten and keltisch, not only pronounced as /k/ but even spelled with ⟨k⟩. This is a regular German treatment of names containing Greek kappa, also observed in cases such as Cimbri, Cimmerians, Cambyses, etc. These spellings with ⟨k⟩ arose in the later 18th century. From the 16th to the early 18th century, the prevalent spelling in German was celtisch.
In current usage the terms "Celt" and "Celtic" can take several senses depending on context: The Celts of the European Iron Age, the group of Celtic-speaking peoples in historical linguistics, and the modern Celtic identity derived from the Romanticist Celtic Revival.
After its use by Edward Lhuyd in 1707, the use of the word "Celtic" as an umbrella term for the pre-Roman peoples of the British Isles gained considerable popularity. Lhuyd was the first to recognise that the Irish, British and Gaulish languages were related to one another, and the inclusion of the Insular Celts under the term "Celtic" from this time expresses this linguistic relationship. By the late 18th century, the Celtic languages were recognised as one branch within the larger Indo-European family.
The timeline of Celtic settlement in the British Isles is unclear and the object of much speculation, but it is clear that by the 1st century BC, most of Great Britain and Ireland was inhabited by Celtic-speaking peoples now known as the Insular Celts, divided into two large groups, Brythonic or P-Celtic, and Goidelic or Q-Celtic. The Brythonic groups under Roman rule were known in Latin as Britanni, while use of the names Celtae or Galli / Galatai was restricted to the Gauls. There are no specimens of Goidelic languages prior to the appearance of Primitive Irish inscriptions in the 4th century AD, however there are earlier references to the Iverni (in Ptolemy ca. 150, later also appearing as Hierni and Hiberni), and by 314, to the Scoti.
Simon James argues that, while the term "Celtic" expresses a valid linguistic connection, its use for both Insular and Continental Celtic culture is misleading, as archaeology does not suggest a unified Celtic culture during the Iron Age.
With the rise of Celtic nationalism in the early to mid 19th century, the term "Celtic" also came to be a self-designation used by proponents of a modern Celtic identity. Thus, the contributor to "The Celt" discussing "the word Celt" states "The Greeks called us Keltoi", expressing a position of ethnic essentialism that extends the first person pronoun to include both 19th-century Irishmen and the Danubian Κελτοί of Herodotus.
This sense of "Celtic" is preserved in its political sense in Celtic nationalism of organisations such as the Celtic League, but it is also used in a more general unpolitical sense, in expressions such as Celtic music.
Latin Galli might be from an originally Celtic ethnic or tribal name, perhaps borrowed into Latin during the early 5th century BC Celtic expansions into Italy. Its root may be the Common Celtic *galno-, meaning "power" or "strength". The Greek Γαλάται Galatai (cf. Galatia in Anatolia) seems to be based on the same root, borrowed directly from the same hypothetical Celtic source which gave us Galli (the suffix -atai is simply an ethnic name indicator).
Schumacher's account is slightly different: He states that Galli (nominative singular *gallos) is derived from the present stem of the verb which he reconstructs for Proto-Celtic as *gal-nV- (it is not clear what the vowel in the suffix, marked as V, should be reconstructed as), whose meaning he gives as "to be able to, to gain control of", while Galatai comes from the same root and is to be reconstructed as nominative singular *galatis < *gelH-ti-s. He gives the same meaning for both reconstructs, namely "Machthaber", i. e. "potentate, ruler (or even warlord)", or alternatively "Plünderer, Räuber", i. e. "raider, looter, pillager, marauder", and points out that both names can be exonyms in order to explain their pejorative meaning. The Proto-Indo-European verbal root in question is reconstructed by Schumacher as *gelH-, whose meaning is given as "Macht bekommen über", i. e., "to acquire power over" in the Lexikon der indogermanischen Verben.
Gaul, Gaulish, Welsh
The English Gaul and the French Gaule, Gaulois are unrelated to Latin Gallia and Galli, despite superficial similarity. They are rather derived from the Germanic term walha, "foreigner, Romanized person" (an exonym applied by Germanic speakers to Celts, likely via a Latinization of Frankish *Walholant "Gaul", literally "Land of the Foreigners/Romans", making it partially cognate with the names Wales and Wallachia), the usual word for the non-Germanic-speaking peoples (Celtic-speaking and Latin-speaking indiscriminately). The Germanic w is regularly rendered as gu / g in French (cf. guerre = war, garder = ward), and the diphthong au is the regular outcome of al before a following consonant (cf. cheval ~ chevaux). Gaule or Gaulle can hardly be derived from Latin Gallia, since g would become j before a (cf. gamba > jambe), and the diphthong au would be unexplained; the regular outcome of Latin Gallia is Jaille in French, which is found in several western placenames.
The French term for "Welsh" is gallois, which is, however, not derived from the Latin Galli, but, like gaulois, borrowed (with suffix substitution) from Germanic *walhiska- "Celtic, Gallo-Roman, Romance" or its Old English descendant wælisc (= Modern English Welsh). The English form "Gaul" (first recorded in the 17th century) and "Gaulish" come from the French "Gaule" and "Gaulois", which translate Latin "Gallia" and "Gallus, -icus" respectively. In Old French, the words "gualeis", "galois", "walois" (Northern French phonetics keeping /w/) had different meanings: Welsh or the Langue d'oïl, etc. On the other hand, the word "Waulle" (Northern French phonetics keeping /w/) is recorded for the first time in the 13th century to translate the Latin word Gallia, while "gaulois" is recorded for the first time in the 15th century, and the scholars use it to translate the Latin words Gallus / Gallicus. The word comes from Proto-Germanic *Walha- (see Gaul: Name).
The English word "Welsh" originates from the word wælisċ, the Anglo-Saxon form of *walhiska-, the reconstructed Proto-Germanic word for "foreign" or "Celt" (South German Welsch(e) "Celtic speaker", "French speaker", "Italian speaker"; Old Norse "valskr", pl. "valir" "Gaulish", "French"), that is supposed to be derived of the name of the "Volcae", a Celtic tribe who lived first in the South of Germany and emigrated then to Gaul.
The Germanic term may ultimately have a Celtic source: It is possibly the result of a loan of the Celtic tribal name Volcae into pre-Germanic, *wolk- changing according to Grimm's Law to yield proto-Germanic *walh-. The Volcae were one of the Celtic peoples who for two centuries barred the southward expansion of the Germanic tribes (in what is now central Germany) on the line of the Harz mountains and into Saxony and Silesia.
In the Middle Ages, territories with primarily Romance-speaking populations, such as France and Italy, were known in German as Welschland as opposed to Deutschland, and the word is cognate with Vlach and Walloon as well as with the "-wall" in "Cornwall". Other examples are the surnames "Wallace" and "Walsh". During the early Germanic period, the term seems to have been applied to the peasant population of the Roman Empire, most of whom were in the areas immediately settled by the Germanic people.
The term Gael is, despite superficial similarity, also completely unrelated to Galli, see Gaels#Terminology.
The Celtic-speaking people of Great Britain were known as Brittanni or Brittones in Latin and as Βρίττωνες in Greek; an earlier form was Pritani, or Πρετ(τ)αν(ν)οί in Greek (as recorded by Pytheas in the 4th century BC, among others, and surviving in Welsh as Prydain, the old name for Britain). Related to this is *Priteni, the reconstructed self-designation of the people later known as Picts, which is recorded later in Old Irish as Cruithin and Welsh as Prydyn.
- H. D. Rankin, Celts and the classical world. Routledge, 1998 ISBN 0-415-15090-6. 1998. pp. 1–2. ISBN 978-0-415-15090-3. Retrieved 2010-06-07.
- Herodotus, The Histories, 2.33; 4.49.
- John T. Koch (ed.), Celtic Culture: a historical encyclopedia. 5 vols. 2006, p. 371. Santa Barbara, California: ABC-CLIO.
- P. De Bernardo Stempel 2008. Linguistically Celtic ethnonyms: towards a classification, in Celtic and Other Languages in Ancient Europe, J. L. García Alonso (ed.), 101-118. Ediciones Universidad Salamanca.
- Julius Caesar, Commentarii de Bello Gallico 1.1: "All Gaul is divided into three parts, one of which the Belgae live, another in which the Aquitani live, and the third are those who in their own tongue are called Celts (Celtae), in our language Gauls (Galli). Compare the tribal name of the Celtici.
- Parthenius, Love Stories 2, 30
- "Celtine, daughter of Bretannus, fell in love with Heracles and hid away his kine (the cattle of Geryon) refusing to give them back to him unless he would first content her. From Celtus the Celtic race derived their name." "(Ref.: Parth. 30.1-2)". Retrieved 5 December 2005.
- Lorrio, Alberto J.; Gonzalo Ruiz Zapatero (1 February 2005). "The Celts in Iberia: An Overview" (PDF). e-Keltoi 6: 183–185. Retrieved 5 October 2011.
- 'Celticos a Celtiberis ex Lusitania advenisse manifestum est sacris, lingua, oppidorum vocabulis', NH, II.13
- Celtici: Pomponius Mela and Pliny; Κελτικοί: Strabo
- 'Totam Celtici colunt, sed a Durio ad flexum Grovi, fluuntque per eos Avo, Celadus, Nebis, Minius et cui oblivionis cognomen est Limia. Flexus ipse Lambriacam urbem amplexus recipit fluvios Laeron et Ullam. Partem quae prominet Praesamarchi habitant, perque eos Tamaris et Sars flumina non longe orta decurrunt, Tamaris secundum Ebora portum, Sars iuxta turrem Augusti titulo memorabilem. Cetera super Tamarici Nerique incolunt in eo tractu ultimi. Hactenus enim ad occidentem versa litora pertinent. Deinde ad septentriones toto latere terra convertitur a Celtico promunturio ad Pyrenaeum usque. Perpetua eius ora, nisi ubi modici recessus ac parva promunturia sunt, ad Cantabros paene recta est. In ea primum Artabri sunt etiamnum Celticae gentis, deinde Astyres.', Pomponius Mela, Chorographia, III.7-9.
- Pomponius Mela, Chorographia, III.40.
- Eburia / Calveni f(ilia) / Celtica / Sup(ertamarca) |(castello?) / Lubri; Fusca Co/edi f(ilia) Celti/ca Superta(marca) / |(castello) Blaniobr/i; Apana Ambo/lli f(ilia) Celtica / Supertam(arca) / Maiobri; Clarinu/s Clari f(ilius) Celticus Su/pertama(ricus). Cf. Epigraphik-Datenbank Clauss / Slaby.
- [Do]quirus Doci f(ilius) / [Ce]lticoflavien(sis); Cassius Vegetus / Celti Flaviensis.
- Álvarez, Rosario, Francisco Dubert García, Xulio Sousa Fernández (ed.) (2006). Lingua e territorio (PDF). Santiago de Compostela: Consello da Cultura Galega. pp. 98–99. ISBN 84-96530-20-5.
- The Indians were wont to use no bridles, like the Græcians and Celts. Edward Topsell, The historie of foure-footed beastes (1607), p. 251 (cited after OED).
- (Lhuyd, p. 290) Lhuyd, E. "Archaeologia Britannica; An account of the languages, histories, and customs of the original inhabitants of Great Britain." (reprint ed.) Irish University Press, 1971. ISBN 0-7165-0031-0
- Laing, Lloyd and Jenifer (1992) Art of the Celts, London, Thames and Hudson ISBN 0-500-20256-7
- OED, s.v. "Celt", "Celtic".
- "Keltic". American Heritage Dictionary. Retrieved 21 November 2014.
- "Celtic or Keltic". Collins English Dictionary. HarperCollins. Retrieved 21 November 2014.
- "The word Celt", The Celt: A weekly periodical of Irish national literature edited by a committee of the Celtic Union, 28 November 1857, pp. 287–288, retrieved 11 November 2010
- "Celtic", William Spurrell, An English-Welsh Pronouncing Dictionary, 1861, p. 45, retrieved 11 November 2010
- "Although many dictionaries, including the OED, prefer the soft c pronunciation, most students of Celtic culture prefer the hard c." MacKillop, J. (1998) Dictionary of Celtic Mythology. New York: Oxford University Press ISBN 0-19-869157-2
- @theanglocelt Twitter feed
- but not Latin names such as Cicero, Cato; Kaiser is a special case as it is not a learned introduction into Modern German but a loan inherited from Old High German.
- An early attestation is found in volume 2 of Sebastian Franck's and Nikolaus Höniger's Chronick of 1585
- (Lhuyd, p. 290) Lhuyd, E. (1971) Archaeologia Britannica; An account of the languages, histories, and customs of the original inhabitants of Great Britain. (reprint ed.) Irish University Press, ISBN 0-7165-0031-0
- Schumacher, Stefan; Schulze-Thulin, Britta; aan de Wiel, Caroline (2004). Die keltischen Primärverben. Ein vergleichendes, etymologisches und morphologisches Lexikon (in German). Innsbruck: Institut für Sprachen und Kulturen der Universität Innsbruck. pp. 325–326. ISBN 3-85124-692-6.
- Rix, Helmut; Kümmel, Martin; Zehnder, Thomas; Lipp, Reiner; Schirmer, Brigitte (2001). Lexikon der indogermanischen Verben. Die Wurzeln und ihre Primärstammbildungen (in German) (2nd, expanded and corrected ed.). Wiesbaden, Germany: Ludwig Reichert Verlag. p. 185. ISBN 3-89500-219-4.
- Sjögren, Albert, "Le nom de "Gaule", in "Studia Neophilologica", Vol. 11 (1938/39) pp. 210-214.
- Oxford Dictionary of English Etymology (OUP 1966), p. 391.
- Nouveau dictionnaire étymologique et historique (Larousse 1990), p. 336.
- Neilson, William A. (ed.) (1957). Webster's New International Dictionary of the English Language, second edition. G & C Merriam Co. p. 2903.
- Koch, John Thomas (2006). Celtic culture: a historical encyclopedia. ABC-CLIO. p. 532. ISBN 1-85109-440-7.
- Mountain, Harry (1998). The Celtic Encyclopedia, Volume 1. uPublish.com. p. 252. ISBN 1-58112-889-4. | https://en.wikipedia.org/wiki/Names_of_the_Celts |
4.125 | An Earth impact by a large comet or asteroid could knock out human civilization with a single blow, as most people are now aware thanks to recent Hollywood movies and public outreach by planetary scientists. Since 1998, when NASA initiated its Spaceguard program to find comets and asteroids 1 km in diameter and larger, researchers have made some crucial inventories of the risky space rocks with orbits that come into close proximity of Earth. For instance, there are almost 1,000 of these so-called near-Earth objects with diameters of 1-kilometer or more.
However disconcerting this might seem, we can rest assured that none will make it here in our lifetimes. “We can say with a very good deal of certainty that no asteroid or comet large enough to threaten life as we know it will hit Earth in the next 100 years,” says Donald Yeomans. At NASA’s Jet Propulsion Laboratory, Yeomans is a senior research scientist and manager of the Near-Earth Object Program Office. He has spent his career studying the physical and dynamical modeling of near-Earth objects, as well as tracking them down.
Yeomans works with an international network of professional and amateur astronomers who find and monitor asteroids and comets with orbits that come within approximately 0.33 AU, which is equivalent to 150 million kilometers. The team has identified 8,800 near-Earth objects as of early 2012, he noted during a talk at the American Museum of Natural History in New York on January 14 on his new book Near-Earth Objects, Finding Them Before They Find Us. The book gives readers an inside account of the latest efforts to find, track and study life-threatening asteroids and comets.
There are literally millions of asteroids and comets in the solar system, ranging in size from the microscopic to hundreds of kilometers in diameter. Both are made of rocky, metallic materials that failed to aggregate into planets during the early days of the solar system. Yeomans says the only real difference between asteroids and comets is that a comet actively loses its dust and ice when near the sun, causing a highly visible tail to form behind it.
Scientists have made exponential progress in identifying and tracking near-Earth objects in the past decade. NASA-sponsored near-Earth object surveys have found 90 percent of all asteroids and comets larger than a kilometer in diameter and projected their orbits at least 100 years into the future. Yeomans says the challenge now is finding all asteroids larger than 35 meters across, the size where one would pose a threat to a town or city, rather than all life on Earth.
Historically, Earth impacts by large asteroids and comets are rare. In addition, there is no clear record of a person being killed by one. Yeomans says that while Earth impacts by large asteroids and meteors are very low-probability events, they are of very high consequence.
A prime example is just outside Winslow, Ariz., where a large crater was blasted into the Earth 50,000 years ago by a nearly 30-meter asteroid. Despite its relatively small size, the asteroid generated around 10 megatons of energy upon collision. By comparison, the atomic bomb dropped on Hiroshima during World War II generated around 0.02 megatons.
The asteroid that killed the dinosaurs 65 million years ago was much larger—a chunk of rock 10 to 15 kilometers across. The crater that formed when it struck near what is now the Yucatán Peninsula is 150 kilometers in diameter. The impact caused an immense explosion that deposited a layer of debris 10 meters deep as far as several hundred kilometers away from the impact and rained burning ash down on all corners of the globe. Most animals on the surface of the Earth died, and debris in the upper atmosphere launched the planet into a global winter. Many of the life forms that survived were either in the ocean or underground.
Today, if a survey detected a giant NEO headed for Earth, Yeomans says, humanity would have more than 50 years to prepare for it. He says a spacecraft could theoretically be used to divert such an asteroid off its Earth-colliding trajectory and out into space, and put in his plug for his employer, or at least organizations that support human ingenuity. “We have conceptual plans on how this could be done,” he says. “The reason the dinosaurs went extinct is because they didn’t have a space program.” | http://blogs.scientificamerican.com/observations/asteroid-hunter-gives-an-update-on-the-threat-of-near-earth-objects/ |
4.0625 | Rubik's Cube is a famous puzzle cube invented by Ernö Rubik in 1974.
For those unfamiliar with the cube, the basic concept is that the cube is made up of 27 cubelets. The exposed faces of these each have a different colour. Rubik invented a mechanism whereby any layer of cubelets (i.e. 9 cubelets) can be rotated in a clockwise or an anticlockwise (counterclockwise) direction, independently of the other layers. By doing this many times at random, the colours displayed on the faces become jumbled up. The objective of the puzzle is to restore the initial position in which each side of the cube shows only one colour.
The approach to solving Rubik's Cube and the Wolstenholme notation and tools (sequences of twists) used are best seen in the Kublitz Cube application - an electronic version of the Cube. This is a version of the Cube in which you can effectively see, or know, the colours on all the cubelets without needing to turn the Cube around. This is effected by providing gaps between the 27 cubelets and by ensuring that the colours on any pair of opposite cubelet faces are identical; so, while you cannot see the colours on the back of the Cube, you can see those on the forward-facing sides of the back cubelets, which are the same.
You can find an online version of Kublitz Cube here. Tap on the Help button to see the approach to solving the Cube and the Wolstenholme notation (which is also given below). With the Kublitz Cube application you don't simply get the approach to solving the Cube, but a means of putting it into practice.
The Wolstenholme notation is concerned with specifying rotations of the cube in order to solve it. In order to introduce it, we shall first give a brief outline of the Kublitz Cube referred to above, and show some images of it that also demonstrate the names of the outer faces or layers of the cube.
Outline of the Kublitz Cube
The graphics of the Kublitz Cube are intended to represent a large cube made up of 27 smaller cubes, or cubelets - hence the name Kublitz, a word that sounds the same. Three key features of the Kublitz Cube cubelets are:
The large cube (often referred to as just 'the cube') can also be considered as comprising 9 layers of 9 cubelets: a front layer and a back layer; a top layer and a bottom layer; a right layer and a left layer; and the middle layers lying between each of these pairs.
The large cube is designed so that it has a goal, or reset, state, in which the 9 cubelet faces of each layer of 9 cubelets are the same colour. This is equivalent to the state in which the colours displayed on the outside of every one of the 6 faces of the large cube are all the same colour (a different colour for each of the 6 faces), as shown below. This is effectively the same goal state as a traditional physical cube.
To demonstrate that the opposite faces of a cubelet are the same colour, if we rotate the cube 180 degrees around the top-bottom axis, it now appears as follows:
Note how the cubelets at the front, which were displaying red on their forward-looking faces, are now at the back, displaying red on their forward-facing faces - the faces that had been backward-facing when the cubelets were at the front of the large cube. Likewise, you can see that the cubelets on the right that were displaying blue on their right-facing sides are now on the left, but still displaying blue on their right-facing sides.
This design, and the view you get as shown above, means that you are able to know what colour is on every face of the large cube, since the opposite side of every hidden face, which has the same colour, is visible. This is a major difference, and improvement, on traditional physical puzzle cubes and their visual representations.
Basic Wolstenholme notation
In the images above, there are 12 arrows at the corners of the cube concerned with rotating the 6 outer layers.
Near each pair of arrows you will see a single letter. This is the way this notation refers to these layers: F for the Front layer, B for the Back layer, T for the Top layer, D for the bottom (Down) layer, R for the Right layer and L for the Left layer.
The rotation of a given layer is specified in this notation by one of the following letters, placed after the layer letter: O for a 90-degree clockwise turn, A for a 90-degree anticlockwise (counterclockwise) turn, and I for a 180-degree turn.
So, LA means turns the Left layer 90 degrees anticlockwise (counterclockwise).
Important: Please note that the terms clockwise and anticlockwise are always used to indicate the direction of rotation when facing that particular side or layer from the outside of the large cube. This means that an anticlockwise rotation of the Back layer looks like a clockwise rotation when viewed from the front.
In addition, the notation uses a three-letter specification for cube rotations, in which the letter C is added to the end to signify that the entire cube is to be rotated. So, FOC means that the entire cube is to be rotated around the Front-Back axis in a clockwise direction when viewed from the Front. Note that the cube rotation FOC is the same as the cube rotation BAC.
When specifying sequences of rotations, we generally join two specifications of layer rotations together to form a 4-letter 'word'. So, ROTA specifies a clockwise rotation of the Right layer followed by an anticlockwise rotation of the Top layer.
These 4-letter 'words' and the 3-letter cube specifications often form recognizable words or names, or maybe sequences of letters that sound like words. Examples include: FOTO, ROC, BAC, ROTA, RITA, ROTI, or RIFA. This is the primary reason for choosing the notation above, since we humans are good at remembering words and can build stories around them to help us remember them.
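Because the words follow such a regular pattern, they are also easy to interpret mechanically. The short Python sketch below is purely illustrative (it is not part of the Kublitz Cube application, and the function name is invented); it splits a basic rotation word such as ROTA or FOC into plain-English moves, assuming the layer letters F, B, T, D, R, L and the direction letters O, A and I described above.

# Illustrative parser for basic Wolstenholme rotation "words" such as ROTA or FOC.
# This is a hypothetical helper, not code taken from the Kublitz Cube application.

FACES = {"F": "Front", "B": "Back", "T": "Top", "D": "Down", "R": "Right", "L": "Left"}
TURNS = {"O": "90 degrees clockwise", "A": "90 degrees anticlockwise", "I": "180 degrees"}

def parse_word(word):
    """Turn one notation word into a list of plain-English (what, how) moves."""
    word = word.upper()
    if len(word) == 3 and word.endswith("C"):
        # Whole-cube rotation, e.g. FOC: rotate the entire cube as seen from that face.
        return [("whole cube, viewed from the " + FACES[word[0]] + " face", TURNS[word[1]])]
    # Otherwise the word is a sequence of two-letter layer moves, e.g. RO + TA.
    return [(FACES[word[i]] + " layer", TURNS[word[i + 1]]) for i in range(0, len(word), 2)]

print(parse_word("ROTA"))  # Right layer clockwise, then Top layer anticlockwise
print(parse_word("RI"))    # Right layer turned 180 degrees
print(parse_word("FOC"))   # the whole cube turned clockwise as seen from the Front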
The Kublitz Cube app includes four top-layer tools, which are sequences of rotations that have an overall impact on the Top layer, but not on the two layers below (see the app for more details). The sequences for each of the four tools are specified using the Wolstenholme notation and 'words', and have associated with them mnemonics that can help you remember them. You'll have to get used to thinking of eating rat, and know that a roti is a flat bread, often eaten as a wrap. These demonstrate the usefulness of having word-like rotation specifications.
FOTO ROTA RAFA (photo rota rougher)
Flipper and his dolphin friends are forming a photo rota to have their pictures taken in turn, but the rougher sea may prevent this.
LOTA RATO LATA ROTI (lotta rat-oh latter roti)
Lunch arrives: Cheese and a lotta rat! Oh, I'd rather swap the latter for a roti.
RATA ROTA RATI ROTI (ratter rota ratty roti)
The Ratter Rota shows the rat-catchers' work plans for the week, but with a twist, as it says who has to make the ratty roti for lunch.
RITA FOBA RIFA BOTA RI (Rita Fo B.A. reefer boater rye)
Loopy Rita Fo B.A. is celebrating her graduation smoking a reefer, wearing a boater and drinking rye whiskey.
These four top-layer tools can, of course, be used with a physical cube - and can you really say you've solved the cube until you've solved a physical cube? If you want to solve the cube without having instructions around, you'll need to remember the tool sequences, hence the mnemonics above.
The notation in Kublitz Cube was designed specifically for rotations of single outside layers of cubelets and of the entire cube. While all sequences can be specified using only this simple notation, more advanced puzzlers might wish to specify other moves more concisely. For example, they might want to refer to the rotation of the middle layer, between Front and Back, by 90 degrees clockwise as seen from the Front. Now this could be specified as three rotations using the simple notation: FOC FABO (i.e. rotate the complete cube clockwise then turn the Front and Back layers back again), but this is not concise.
The notation can be easily extended by references to the specific layers. In order to maintain the word-like notation, each layer reference begins with the letter E and is then followed by one or more of the following letters: N is the layer nearest to the face specified; M is the middle layer, i.e. the second layer back from the face specified, and X is the extreme layer furthest from that face. The E followed by letters denoting the layer or layers to be rotated are specified between the face letter and the rotation direction letter. So, the rotation of the middle layer, between Front and Back, by 90 degrees clockwise as seen from the Front can be specified as FEMO (or as BEMA). The rotation of the top two layers in an anticlockwise direction as seen from the Top can be specified as TENMA (or DEXMO). The rotation of the Right layer by 180 degrees could be referred to as RENI or LEXI, but you would normally use the simple RI for this, unless the extended notation helps you to remember the move. | http://www.topaccolades.com/notation/rubikscube.htm |
4.1875 | Unit I. Basic Economic Concepts
A. Scarcity, choice and opportunity costs
Scarcity is the fundamental economic problem of having seemingly unlimited human needs and wants, in a world of limited resources. It states that society has insufficient productive resources to fulfill all human wants and needs. Alternatively, scarcity implies that not all of society's goals can be pursued at the same time; trade-offs are made of one good against others.
Opportunity cost is the cost of any activity measured in terms of the best alternative forgone. It is the sacrifice related to the second best choice available to someone who has picked among several mutually exclusive choices.
Rational choice theory, also known as rational action theory, is a framework for understanding and often formally modeling social and economic behavior. It is the main theoretical paradigm in the currently-dominant school of microeconomics. Rationality (here equated with "wanting more rather than less of a good") is widely used as an assumption of the behavior of individuals in microeconomic models and analysis and appears in almost all economics textbook treatments of human decision-making. It is also central to some of modern political science and is used by some scholars in other disciplines such as philosophy. It is the same as instrumental rationality, which involves seeking the most cost-effective means to achieve a specific goal without reflecting on the worthiness of that goal. Gary Becker was an early proponent of applying rational actor models more widely; he won the 1992 Nobel Memorial Prize for Economic Sciences for his studies of discrimination, crime, and human capital.
B. Production possibilities curve
The Production Possibilities Curve, pictured above, represents the point at which an economy is most efficiently producing its goods and services and, therefore, allocating its resources in the best way possible. If the economy is not producing the quantities indicated by the PPF, resources are being managed inefficiently and the production of society will dwindle. The production possibility frontier shows there are limits to production, so an economy, to achieve efficiency, must decide what combination of goods and services can be produced.
Let's take a look at what each of these specified points means:
- Producing at point A would be an efficient point for a country, since it is located on the PPC. It shows, however, that the country is efficiently producing more units of wine than cotton.
- This is also a point at which all resources are being allocated efficiently, since it is another point located on the curve. However, at this point, the country is not focusing on the production of one good, wine or cotton, over the other.
- Again, this is another point of efficiency due to this point being located on the curve. However, producing at this point shows that the country focuses more of its resources on the production of wine rather than the production of cotton.
- This is the first point examined at which a country would NOT be efficient with allocating its resources. Any point that lies underneath the PPC is an inefficient use of resources, meaning the country is not producing at its full potential.
- On the first graph pictured, point Y represents a point of output level that cannot be reached with the amount of resources currently available in the country. However, in the second graph, we see the curve itself shift to a new curve, including point Y. This is called economic growth (an outward shift of the PPC), and can be caused by a change in technology. (For example, more advanced machinery which increased the amount produced in the same amount of time.)
C. Comparative advantage, absolute advantage, specialization and exchange
We’ll start with some basic definitions:
- Comparative advantage: a country has a comparative advantage in the production of a good when it can produce that good with a lower opportunity cost than another country.
- Absolute advantage: a country has an absolute advantage simply when it can produce more of a specific good.
- Specialization: this is when a country focuses on the production of specific goods (usually those native to their location). It is also the basis of global trade.
AP_Microeconomics Practice Exam 2
Nation A: opportunity cost of producing one crab = 1/3 of a cake; opportunity cost of producing one cake = 3 crabs
Nation B: opportunity cost of producing one crab = 1 cake; opportunity cost of producing one cake = 1 crab
Based on the table of opportunity costs and graph above, we can determine which countries have the comparative and absolute advantages in producing crabs and cakes, therefore, which country should specialize in what good.
Nation A has the comparative AND absolute advantage in producing crabs, and Nation B also has both the comparative and absolute advantage in producing cakes. Therefore, Nation A should specialize in producing crabs, and Nation B should specialize in producing cakes.
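The same comparison can be written out mechanically. The following Python sketch is only an illustration (the dictionary simply restates the opportunity costs from the table above, and the function name is invented): whichever nation has the lower opportunity cost for a good holds the comparative advantage in it.

# Opportunity costs from the table above: the cost of one unit of each good,
# measured in units of the other good forgone.
opportunity_cost = {
    "Nation A": {"crab": 1/3, "cake": 3},   # 1 crab costs 1/3 cake; 1 cake costs 3 crabs
    "Nation B": {"crab": 1,   "cake": 1},   # 1 crab costs 1 cake; 1 cake costs 1 crab
}

def comparative_advantage(good):
    """Return the nation with the lower opportunity cost of producing the good."""
    return min(opportunity_cost, key=lambda nation: opportunity_cost[nation][good])

print(comparative_advantage("crab"))   # Nation A -> should specialize in crabs
print(comparative_advantage("cake"))   # Nation B -> should specialize in cakes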
Comparative Advantage Practice problems
D. Demand, supply and market equilibrium
Supply and demand have similar behavior when it comes to micro and macro. However, there are still differences. Let's compare the above graphs: the one on the left being a micro supply and demand curve, and the one on the right being an aggregate supply and demand schedule.
When converting from Micro to Macro…
Price turns into price level.
Quantity turns into output, or real GDP
Long-run aggregate supply, or full employment, must be added.
The equilibrium still occurs at the point where the demand and supply intersect.
Economists have 3 reasons as to why the aggregate demand curve slopes downward:
The wealth effect: the premise that when the value of stock portfolios rises due to escalating stock prices, investors feel more comfortable and secure about their wealth, causing them to spend more.
The interest-rate effect: the rationale for the down-sloping aggregate demand curve lies in the impact of the changing price level on interest rates and, in turn, on consumption and investment spending. More specifically, as the price level rises so do interest rates; rising interest rates in turn cause reductions in certain kinds of consumption and investment spending.
The foreign purchases effect: if our price level rises relative to foreign countries, Australian buyers will purchase more imports at the expense of Australian goods. Similarly, foreigners will also buy fewer Australian goods, causing our exports to decline. In short, other things being equal, a rise in our domestic price level increases our imports and reduces our exports, thereby reducing the net exports component of aggregate demand in Australia.
As far as shifting the aggregate demand, a change in any of the components of AD (in other words, a change in any of the components of GDP) will shift the demand accordingly. For example, if the government were to increase spending on national security, real GDP, or aggregate demand, would increase. Likewise, government cuts back on their spending, aggregate demand decreases, because government spending is a component of GDP.
E. Macroeconomic issues: business cycle, unemployment, inflation, growth...
The term business cycle refers to economy-wide fluctuations in production or economic activity over several months or years. These fluctuations occur around a long-term growth trend, and typically involve shifts over time between periods of relatively rapid economic growth (an expansion or boom), and periods of relative stagnation or decline (a contraction or recession).
Unemployment, as defined by the International Labour Organization, occurs when people are without jobs and they have actively looked for work within the past four weeks. The unemployment rate is a measure of the prevalence of unemployment and it is calculated as a percentage by dividing the number of unemployed individuals by all individuals currently in the labor force.
To calculate the unemployment rate for a particular area or region, you will need to know the number of unemployed workers and the total number of people in the labor force in the particular area (such as a state or country). In the United States, this data is available from the Bureau of Labor Statistics. Labor Force refers to the number of people of working age and below retirement age who are actively participating in the work force or are actively seeking employment. Note that the total population of the area or region is irrelevant when calculating the unemployment rate.
The formula for calculating the unemployment rate (expressed as a percent) is as follows: Unemployment Rate = (Unemployed Workers / Total Labor Force) * 100
For example: A small country has a population of 15,000 people. Of the total population, 12,000 people are in the labor force and 11,500 people are employed. What is the unemployment rate? First, find the number of unemployed by subtracting the number of employed (11,500) from the labor force (12,000). So, 12,000-11,500=500. Therefore, 500 people are unemployed. Now, to find the unemployment rate, plug the numbers into the formula: Unemployment Rate = (500/12,000)*100 = 4.2 percent.
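The same arithmetic can be expressed as a small Python function; this is just an illustration of the formula above, with the example numbers plugged in (the variable names are invented for the sketch).

def unemployment_rate(labor_force, employed):
    """Unemployed workers divided by the labor force, expressed as a percent."""
    unemployed = labor_force - employed
    return (unemployed / labor_force) * 100

# The example above: 12,000 people in the labor force, 11,500 employed.
print(round(unemployment_rate(12000, 11500), 1))   # 4.2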
Unemployment calculations practice
Inflation is a rise in the general level of prices of goods and services in an economy over a period of time. When the general price level rises, each unit of currency buys fewer goods and services. Consequently, inflation also reflects an erosion in the purchasing power of money – a loss of real value in the internal medium of exchange and unit of account in the economy. A chief measure of price inflation is the inflation rate, the annualized percentage change in a general price index (normally the Consumer Price Index) over time.
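As a rough illustration of the calculation, the inflation rate over a period is simply the percentage change in the price index. The Python sketch below uses made-up CPI values purely for demonstration.

def inflation_rate(cpi_start, cpi_end):
    """Percentage change in the price index between two dates."""
    return (cpi_end - cpi_start) / cpi_start * 100

# Hypothetical index values: a CPI of 200 one year and 206 the next.
print(round(inflation_rate(200, 206), 1))   # 3.0 percent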
Economic growth is the increase of per capita gross domestic product (GDP) or other measures of aggregate income, typically reported as the annual rate of change in real GDP. Economic growth is primarily driven by improvements in productivity, which involves producing more goods and services with the same inputs of labour, capital, energy and materials. Economists draw a distinction between short-term economic stabilization and long-term economic growth. The topic of economic growth is primarily concerned with the long run. The short-run variation of economic growth is termed the business cycle.
The long-run path of economic growth is one of the central questions of economics; despite some problems of measurement, an increase in GDP of a country greater than population growth is generally taken as an increase in the standard of living of its inhabitants. Over long periods of time, even small rates of annual growth can have large effects through compounding. A growth rate of 2.5% per annum will lead to a doubling of GDP within 29 years, whilst a growth rate of 8% per annum will lead to a doubling of GDP within 10 years. This exponential characteristic can exacerbate differences across nations.
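The doubling times quoted above follow from compound growth: output doubles when (1 + g)^t = 2, so t = ln 2 / ln(1 + g). A quick, illustrative Python check gives figures close to those cited above.

import math

def doubling_time(growth_rate):
    """Years for GDP to double at a constant annual growth rate (e.g. 0.025 for 2.5%)."""
    return math.log(2) / math.log(1 + growth_rate)

print(round(doubling_time(0.025)))   # about 28 years at 2.5% growth
print(round(doubling_time(0.08)))    # about 9 years at 8% growth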
©Barb E. Dahl and Chris P. Bacon
AP Economics 2011 | http://ap-macroeconomics.wikispaces.com/Unit+1+Basic+Economic+Concepts?responseToken=60cc6248bba54ecc23dafbe90d7ea946
4.03125 | Planetary Orbits and Motion
Born in Germany in 1571, Johannes Kepler was one of the most intelligent figures in the period of human growth known as the Renaissance. Kepler came to learn a lot about the nature of the universe from his teacher, Tycho Brahe, a prominent figure in astronomy circles.
After Brahe passed away, Kepler spent much of his time continuing the work of his teacher in addition to his own.
The result was a set of three beautifully formed laws that mathematically related orbits to existing knowledge. His most famous law: "...orbits are elliptical..." was particularly outstanding, as it shattered the age-old Greek (Aristotle) conception that motion in the heavens (space) was perfectly circular. Kepler had shown through mathematics that in fact orbits were not perfect circles. Rather, they were elliptical and egg-like. Kepler died in 1630.
But planetary motion is still described within the context of "Kepler's Laws".
Kepler's 1st Law: Orbits are Elliptical
After many experiments, Kepler discovered that the planets move on ellipses around the Sun. An ellipse is kind of a stretched out circle. A real circle has the same width, or diameter, whether you measure it across or up and down. But an ellipse has diameters of different lengths. How long the longest diameter is compared to the shortest one determines the eccentricity (e) of the ellipse; it's a measure of how stretched out the ellipse is.
Circles have e=0 because their diameters are all the same. If an ellipse has one very short diameter, and one very long one, then it is a very stretched-out ellipse, and has an eccentricity nearly equal to 1.
Planets do move on ellipses, but they are nearly circular (e very close to 0). Comets are a good example of objects in our solar system that may have very elliptical orbits. Compare the eccentricities and orbits of the objects in the diagram.
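To make the comparison of diameters concrete, the eccentricity of an ellipse can be computed from its longest and shortest diameters with e = sqrt(1 - (b/a)²), where a and b are half the longest and shortest diameters. The short Python sketch below is illustrative only, using made-up sample diameters.

import math

def eccentricity(longest_diameter, shortest_diameter):
    """Eccentricity of an ellipse from its longest and shortest diameters."""
    a = longest_diameter / 2    # semi-major axis
    b = shortest_diameter / 2   # semi-minor axis
    return math.sqrt(1 - (b / a) ** 2)

print(eccentricity(10, 10))             # 0.0 -> a circle
print(round(eccentricity(10, 6), 2))    # 0.8 -> a noticeably stretched ellipse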
Once Kepler figured out that planets move around the Sun on ellipses, he then discovered another interesting fact about the speeds of planets as they go around the Sun.
Kepler's 2nd Law: The Speeds of Planets
Kepler realized that the line connecting the planet and the Sun sweeps out equal area in equal time. Look at the diagram to the left. What Kepler found is that it takes the same amount of time for the blue planet to go from A to B as it does to go from C to D. But the distance from C to D is much larger than that from A to B. It has to be so that the green regions have the same area. So the planet must be moving faster between C and D than it is between A and B. This means that when planets are near the Sun in their orbit, they move faster than when they are further away.
The diagram above shows a planet on its elliptical orbit around the Sun. The shaded areas are of equal size, and were swept out in equal time, i.e. it took the same amount of time for the planet to move from A to B and from C to D.
Kepler's work led him to one more important discovery about the distances of planets.
Kepler's 3rd Law: P² = a³
Kepler's 3rd law is a mathematical formula. It means that if you know how long it takes a planet to go around the Sun (P), then you can determine that planet's distance from the Sun (a).
This formula also tells us that planets far away from the Sun take longer to go around the Sun than those that are close to the Sun.
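With the period P measured in Earth years and the distance a in astronomical units (AU), the formula can be rearranged to P = a^(3/2). A short, illustrative Python check using approximate planetary distances:

def orbital_period(semimajor_axis_au):
    """Kepler's third law: orbital period in years from the semimajor axis in AU."""
    return semimajor_axis_au ** 1.5

print(round(orbital_period(1.0), 2))    # Earth: 1.0 year
print(round(orbital_period(1.52), 2))   # Mars: about 1.87 years
print(round(orbital_period(30.1), 1))   # Neptune: about 165 years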
Following is a table of orbital data for the planets. You will notice that Venus' orbital period (P) is longer than Mercury's, and the Earth's period is longer than Venus', and Mars' period is longer than Earth's...
* Negative values of rotation period indicate that the planet rotates in the direction opposite to that in which it orbits the Sun. This is called retrograde rotation.
The semimajor axis (the average distance to the Sun) is given in units of the Earth's average distance to the Sun, which is called an AU. For example, Neptune is 30 times more distant from the Sun than the Earth, on average. Orbital periods are also given in units of the Earth's orbital period, which is a year.
As explained in Kepler's first law, the eccentricity (e) is a number which measures how elliptical orbits are. If e=0, the orbit is a circle. All the planets have eccentricities close to 0, so they must have orbits which are nearly circular.
Spiral Wishing Wells demonstrate the laws of planetary motion as coins and balls (ball bearings and marbles) are launched in elliptical orbits.
Click here to learn how to get your own mini-gravity-well and conduct various experiments.
About the author...
This information is collected, written, and edited by Steve Divnick, a former school teacher-turned-inventor. One of his inventions is the Spiral Wishing Well which has raised over $1 Billion for charities around the world. The Spiral Wishing Well is the same shape as a tornado and other naturally-occurring vortices, including the shape of water going down the drain.
Divnick also invented the miniature Vortx® Coin-Spinning Toy which uses the same principles of physics. You can even make coins climb UP the funnel similar to how a tornado sucks objects up into its funnel.
Click here to get your own Vortx®.
Click here for more information about Divnick's other inventions. | http://www.spiralwishingwells.com/guide/planets.html |
4.28125 | Homograph Teacher Resources
Find Homograph educational ideas and activities
Showing 1 - 20 of 105 resources
Homophones and Homographs
Getting tired of correcting to, two, and too? What about weather and whether? Use a thorough lesson on homophones and homographs to clear up those differences. Fourth and fifth graders identify which words sound the same and are spelled...
4th - 5th English Language Arts CCSS: Designed
Unlocking the Secrets: Homophones and Homographs
Homophones and homographs are the focus of this language arts PowerPoint. Learners are coached on the meanings of these parts of speech, then attempt to select the correct word in a variety of slides. A very nice presentation on these...
3rd - 4th English Language Arts
Goldie Girl and There, Their, and They're: Homophones and Homographs
Instruct your class on homonyms and homophones. Learners take a pre-test and examine a list of homophones. They also play online word games to practice spelling and usage and write a fairy tale in which they use at least 10 homophones....
8th English Language Arts
Instant Spelling Activity: Homographs
Middle schoolers identify homographs. In this homograph lesson, students listen to a sentence that contains a homograph to identify the homograph. Middle schoolers choose 16 homographs to put in a BINGO grid. Students write sentences...
6th - 8th English Language Arts
Basic Linguistics: Fun Trivia Quiz
Meta-cognition can transform learning. If your syllabus includes linguistics to enhance learners' comprehension and expression in English, here is an interactive online quiz to assess what they have learned. Titled "Basic Linguistics,"...
10th - Higher Ed English Language Arts | http://www.lessonplanet.com/lesson-plans/homograph |
4.0625 | There is usually a period of several weeks in which newly infected people have not yet produced enough HIV antibodies to be detected. The CDC said there have been 35 AIDS cases since 1985 linked to blood from people in this "window period."
Like the first generation of AIDS blood tests, current tests detect antibodies to the AIDS viruses. Unfortunately, for a brief time after infection, people make too few antibodies for these tests to detect. As a result, their blood passes all the screening tests, even though it can transmit HIV. The "window period" for HIV-1 lasts about 22 days. The FDA recommended in August 1995 that blood banks begin using the new p24 antigen test for HIV-1 when it became available. This test might cut 6 to 12 days from the window period, at an added cost of perhaps $60 million each year.
Future tests may be based on the polymerase chain reaction (PCR), which can detect HIV directly by detecting its genetic material. PCR is sensitive enough to detect HIV in blood several days earlier than the p24 antigen test. PCR was invented in the mid-1980s, and it gave scientists a way to quickly and simply make millions of copies of genes for their experiments. Molecular biologists could probe the gene defects underlying cystic fibrosis, muscular dystrophy, and many other diseases.
Virologists could study the myriad variants of HIV-1 to determine how the virus changes over time. Pharmacologists could measure the effects of potential drugs on viruses. Archaeologists could track ancient human migrations. Once refined by experimental scientists, PCR was eagerly adapted by clinicians. Its sensitivity made it seem a natural for testing donated blood for diseases.
So far, PCR has proved difficult to automate, a necessity for processing the 14 million units of blood donated each year. Also, PCR is expensive and, as viewers of the O.J. Simpson trial learned, demands pristine handling conditions and meticulous technique. More research is needed before blood banks can take advantage of PCR's power. But protecting the blood supply from HIV-1 is not enough. Scientists continue to discover other diseases that can be transmitted in blood. Also, mistakes can occur. Blood can be mislabeled. Blood bank volunteers can neglect to ask prospective donors all the required questions. Lab workers can be sloppy in testing blood, or a test kit can be defective. Thousands of errors and accidents are reported to the FDA each year.
And even if cheap, reliable, error-proof screening tests were available for every transmittable disease, transfusion would still not be 100% safe. No medical procedure is. Transfusions cause some kind of problem in about 10% of recipients. These problems range from fever and hives to iron overload and congestive heart failure.
The test, to be sold under the name Abbott HIVAG-1 Monoclonal, is an enzyme immunoassay (EIA), and is the second FDA-licensed HIV antigen detection kit intended for use in blood banks and plasma centers nationwide according to new FDA blood screening recommendations. Abbott HIVAG-1 Monoclonal is also cleared for prognostic use in HIV-infected patients.
"The test reduces the window period between HIV infection and detection, the first 25 to 45 days, when the virus can elude efforts to screen it out," said Ronald Gilcher, M.D., president and CEO, Sylvan N. Goldman Center, Oklahoma Blood Institute, where the test has been researched since 1991. "The new test also cuts testing time to four hours from 24, compared with Abbott's earlier version of the antigen test."
HIV antibody testing has been used by all U.S. blood institutions to screen donated blood and plasma for HIV infection. This new test will allow U.S. blood screening centers to have an alternate source for the p24 antigen test. (HIV antibody tests will continue to be used in parallel to identify HIV positive blood.) While antibody testing can measure the body's immune response to the presence of a virus, antigen testing detects the virus itself. Patients infected with HIV may have positive antigen results very early in the infection, before a substantial antibody response has formed. As a result, people recently infected with HIV may be identified by the antigen test while the antibody test remains negative or indeterminate until a later date.
I hope this information is of some assistance. David A. Reznik, D.D.S. Some notes and a link: The "window period" is the time it takes for a person who has been infected with HIV to react to the virus by creating HIV antibodies.
Local factors in the genital tract are also important. Other local infections, ulcers, mucosal trauma etc. contribute to a higher chance of transmission. Prophylaxis with medicines can prevent transmission if taken as early as possible after exposure, preferably within a day of contact. Medicines are not of known benefit after 1 month. A PCR test can confirm the disease early after contact (95% detection after 6 weeks). ELISA and Western blot can be falsely negative up to 3-4 months. Similarly, a negative test in the sex worker may not mean a 100% negative result if the lady is in the window period. If you are negative even after 6 months, it means you should not have acquired the infection. In such a fortunate situation, it is wise to avoid sexual misadventures in the future!
Your WB test is negative at this point (nearly 50 days; less than 2 months). It is most likely that you are uninfected. However, since there is still a 5-10% chance, it is worthwhile not to donate blood for 6 months post exposure. A negative WB or ELISA at 3 months will however make you 99% negative, and at 6 months, 100% negative. If there is an emergency to donate, a negative DNA PCR test at this point can be done, which can suggest that you are 99% negative. Dr AS, AIDSHELPLINE, Belgaum, India
| http://www.healthmantra.com/aids-win.shtml
4 | |Feudal land tenure in England|
A vassal or feudatory is a person who has entered into a mutual obligation to a lord or monarch in the context of the feudal system in medieval Europe. The obligations often included military support and mutual protection, in exchange for certain privileges, usually including the grant of land held as a fiefdom. The term can be applied to similar arrangements in other feudal societies. In contrast, a fidelity, or fidelitas, was a sworn loyalty, subject to the king.
In fully developed vassalage, the lord and the vassal would take part in a commendation ceremony composed of two parts, the homage and the fealty, including the use of Christian sacraments to show its sacred importance. According to Eginhard's brief description, the commendatio made to Pippin the Younger in 757 by Tassillo, Duke of Bavaria, involved the relics of Saints Denis, Rusticus, Éleuthère, Martin, and Germain – apparently assembled at Compiègne for the event. Such refinements were not included from the outset in times of crisis, war, hunger, etc. Under feudalism, those who were weakest needed the protection of the knights who owned the weapons and knew how to fight. Feudal society was increasingly based on the concept of "lordship" (French seigneur), which was one of the distinguishing features of the Early Middle Ages and had evolved from times of Late Antiquity.
In the time of Charlemagne (ruled 768–814), the connection slowly developed between vassalage and the grant of land, the main form of wealth at that time. Contemporaneous social developments included agricultural "manorialism" and the social and legal structures labelled — but only since the 18th century — "feudalism". These developments proceeded at different rates in various regions. In Merovingian times (5th century to 752), monarchs would reward only the greatest and most trusted vassals with lands. Even at the most extreme devolution of any remnants of central power, in 10th-century France, the majority of vassals still had no fixed estates.
The stratification of a fighting band of vassals into distinct groups might roughly correlate with the new term "fief" that had started to supersede "benefice" in the 9th century. An "upper" group comprised great territorial magnates, who were strong enough to ensure the inheritance of their benefice to the heirs of their family. A "lower" group consisted of landless knights attached to a count or duke. This social settling process also received impetus in fundamental changes in the conduct of warfare. As co-ordinated cavalry superseded disorganized infantry, armies became more expensive to maintain. A vassal needed economic resources to equip the cavalry he was bound to contribute to his lord to fight his frequent wars. Such resources, in the absence of a money economy, came only from land and its associated assets, which included peasants as well as wood and water.
Difference between "vassal" and "vassal state"
Many empires have set up vassal states out of cities, kingdoms, and tribes that they wish to bring under their auspices without having to conquer or govern them. In these cases, vassalage (or suzerainty) just means forfeiting foreign-policy independence in exchange for full internal autonomy and perhaps a formal tribute. A lesser state that might be called a "junior ally" would be called a "vassal" as a reference to a domestic "fiefholder" or "trustee", simply to apply a common domestic norm to diplomatic culture. This allows different cultures to understand formal hegemonic relationships in personal terms, even among states using non-personal forms of rule. Imperial states that have used this terminology include Ancient Rome, the Mongol Empire, and the British Empire.
In Feudal Japan, the relations between the powerful Daimyo and Shugo and the subordinate Ji-samurai bear some obvious resemblance to the Western Vassalage, though there are also some significant differences.
Modern, neo-feudal equivalents
Vassal relations are reincarnated in neo-feudal societies, such as Russia, Ukraine and other post-Soviet states since the dissolution of the Soviet Union. Whereas modern constitutions do not provide for the establishment of formal ruler-vassal relations, societies work on similar informal principles. The post-Soviet neo-feudal system is based on the lack of any civil structures, even an ideological structure such as the Communist Party. Governance has been established by a network of former middle-level party officials, police and secret service members and the new oligarchs - nouveau riche with political ambitions. Which group dominates is country specific. In the Russian Federation it is claimed that up to 78% of the elite have signs of being siloviki. In Ukraine the oligarchs have the upper hand, whereas post-Soviet Central Asia offers several examples of dynastic (family) rule.
- Vassal state
- Feudalism in the Holy Roman Empire
- Mandala (political model)
- Vavasour, a type of vassal
- Manrent, Scottish Clan treaties of offensive and defensive alliance
- Gokenin, vassals of the shogunate in Japan
- Villein, a serf, or low-born worker under Feudalism
- nöken (plural: nöker) was the Mongol term for a tribal leader acknowledging another as his liege
- Hughes, Michael (1992). Early Modern Germany, 1477–1806, MacMillan Press and University of Pennsylvania Press, Philadelphia, p. 18. ISBN 0-8122-1427-7.
- F. L. Ganshof, "Benefice and Vassalage in the Age of Charlemagne" Cambridge Historical Journal 6.2 (1939:147-75).
- Ganshof 151 note 23 and passim; the essential point was made again, and the documents on which the historian's view of vassalage are based were reviewed, with translation and commentary, by Elizabeth Magnou-Nortier, Foi et Fidélité. Recherches sur l'évolution des liens personnels chez les Francs du VIIe au IXe siècle (University of Toulouse Press) 1975.
- "at". Noctes-gallicanae.org. Retrieved 2012-02-13.
- The Tours formulary, which a mutual contract of rural patronage, offered parallels; it was probably derived from Late Antique Gallo-Roman precedents, according to Magnou-Nortier 1975.
- Ganshof, François Louis, Feudalism translated 1964
- V. L. Inozemtsev: Neo-Feudalism Explained, The American Interest, Volume 6, Number 4, March 1, 2011; retrieved 2015-12-30
- Ex-KGB Fill Russia's Elite, Reuters, 2006; retrieved 2015-12-30
- After the Euromaidan of 2014 a major struggle for power started in Ukraine between the corruption and oligarch-based network and the actors of the public revolt. | https://en.wikipedia.org/wiki/Vassalage |
4.59375 | The earliest pre-Ptolemaic theories assumed that the earth was the center of the universe (see Ptolemaic system). With the acceptance of the heliocentric, or sun-centered, theory (see Copernican system), the nature and extent of the solar system began to be realized. The Milky Way, a vast collection of stars separated by enormous distances, came to be called a galaxy and was thought to constitute the entire universe with the sun at or near its center. By studying the distribution of globular star clusters the American astronomer Harlow Shapley was able to give the first reliable indication of the size of the galaxy and the position of the sun within it. Modern estimates show it to have a diameter of about 100,000 light-years with the sun toward the edge of the disk, about 28,000 light-years from the center.
During the first two decades of the 20th cent. astronomers came to realize that some of the faint hazy patches in the sky, called nebulae, are not within our own galaxy, but are separate galaxies at great distances from the Milky Way. Willem de Sitter of Leyden suggested that the universe began as a single point and expands without end. After studying the red shift (see Doppler effect) in the spectral lines of the distant galaxies, the American astronomers Edwin Hubble and M. L. Humason concluded that the universe is expanding, with the galaxies appearing to fly away from each other at great speeds. According to Hubble's law, the expansion of the universe is approximately uniform. The greater the distance between any two galaxies, the greater their relative speed of separation.
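Hubble's law is usually written v = H0 × d, where v is a galaxy's recession speed, d is its distance, and H0 is the Hubble constant, roughly 70 km/s per megaparsec by modern estimates. A minimal, illustrative calculation in Python:

H0 = 70.0  # Hubble constant in km/s per megaparsec (approximate value)

def recession_velocity(distance_mpc):
    """Recession speed in km/s for a galaxy at the given distance in megaparsecs."""
    return H0 * distance_mpc

print(recession_velocity(10))     # about 700 km/s for a relatively nearby galaxy
print(recession_velocity(1000))   # about 70,000 km/s for a very distant galaxy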
At the end of the 20th cent. the study of very distant supernovas led to the belief that the cosmic expansion was accelerating. To explain this cosmologists postulated a repulsive force, dark energy, that counteracts gravity and pushes galaxies apart. It also appears that the universe has been expanding at different rates over its cosmic history. This led to a variation of the big-bang theory in which, under the influence of gravity, the expansion slowed initially and then, under the influence of dark energy, suddenly accelerated. It is estimated that this "cosmic jerk" occurred five billion years ago, about the time the solar system was formed. This theory postulates a flat, expanding universe with a composition of c.70% dark energy, c.30% dark matter, and c.0.5% bright stars.
A number of questions must be answered, however, before cosmologists can establish a single, comprehensive theory. The expansion rate and age of the universe must be established. The nature and density of the missing mass, the dark matter and dark energy that is far more abundant than ordinary, visible matter, must be identified. The total mass of the universe must be determined to establish whether it is sufficient to support an equilibrium condition—a state in which the universe will neither collapse of its own weight nor expand into diminishing infinity. Such an equilibrium is called "omega equals one," where omega is the ratio between the actual density of the universe and the critical density required to support equilibrium. If omega is greater than one, the universe would have too much mass and its gravity would cause a cosmic collapse. If omega is less than one, the low-density universe would expand forever. Today the most widely accepted picture of the universe is an omega-equals-one system of hundreds of billions of galaxies, many of them clustered in groups of hundreds or thousands, spread over a volume with a diameter of at least 10 billion light-years and all receding from each other, with the speeds of the most widely separated galaxies approaching the speed of light. On a more detailed level there is great diversity of opinion, and cosmology remains a highly speculative and controversial science. | http://www.factmonster.com/encyclopedia/science/cosmology-development-modern-cosmology.html |
The Indiana bat was one of the 1st species in the United States that was listed as endangered and has been protected by law since March 11, 1967.
OTHER STATUS:
Indiana bats were found in a variety of plant associations in a southern Iowa study. Riparian areas were dominated by eastern cottonwood, hackberry (Celtis occidentalis), and silver maple (Acer saccharinum). In the forested floodplains, the dominant plants included black walnut (Juglans nigra), silver maple, American elm (Ulmus americana), and eastern cottonwood. In undisturbed upland forest, the most common plants were black oak (Quercus velutina), bur oak (Q. macrocarpa), shagbark hickory (Carya ovata), and bitternut hickory (C. cordiformis). Black walnut, American basswood, American elm, and bur oak dominated other upland Indiana bat sites .Studies have identified at least 29 tree species that Indiana bats use during the summer. The greatest number of utilized tree species are found in the central portion of Indiana bats's range (primarily Missouri, southern Illinois, southern Indiana, and Kentucky), but this is likely because the majority of research conducted on the species has occurred in this region. Roost trees from these central states, which are mainly in the oak-hickory cover type, include silver maple, red maple (Acer rubrum), sugar maple (A. saccharum), white oak (Q. alba), red oak (Q. rubra), pin oak (Q. palustris), scarlet oak (Q. coccinea), post oak (Q. stellata), shingle oak (Q. imbricaria), eastern cottonwood, shagbark hickory, bitternut hickory, mockernut hickory (C. alba), pignut hickory (C. glabra), American elm, slippery elm (Ulmus rubra), honey locust (Gleditsia triacanthos), sourwood (Oxydendrum arboreum), green ash (Fraxinus pennsylvanica), white ash (F. americana), Virginia pine (Pinus virginiana), American sycamore (Platanus occidentalis), and sassafras (Sassafras albidum) [17,18,20,35,36,38,44,47,50,59,89]. In southern Michigan and northern Indiana, which are mainly in the oak-hickory and elm-ash-cottonwood cover types, trees utilized as roosts include green, white, and black ash (Fraxinus nigra), silver maple, shagbark hickory, and American elm [22,46,51,56]. And finally, in the southern areas of the Indiana bat's range (primarily Tennessee, Arkansas, and northern Alabama), which include the oak-hickory and oak-pine cover types, Indiana bats utilize shagbark hickory, white oak, red oak, pitch pine (P. rigida), shortleaf pine (P. echinata), loblolly pine (P. taeda), sweet birch (Betula lenta), and eastern hemlock (Tsuga canadensis) [14,85]. Virtually no information exists for Indiana bats roosting in the Northeast (such as Pennsylvania, New York, and Vermont) or for the eastern sections of the range (including Virginia, West Virginia, and Maryland). In these areas, Indiana bats likely utilize the some of the same species listed here and also take advantage of other tree species that are available.
TIMING OF MAJOR LIFE HISTORY EVENTS:
Indiana bats begin to arrive at hibernacula (caves and mines in which they spend the winter) from their summer roosting sites in late August, with most returning in September. Females enter hibernation shortly after arriving at hibernacula, but males remain active until late autumn to breed with females arriving late. Most Indiana bats hibernate from October through April, but many at the northern extent of their range hibernate from September to May. Occasionally, Indiana bats are found hibernating singly, but almost all are found hibernating in dense clusters of 3,230 bats/m² to 5,215 bats/m².
Spring migration can begin as early as late March, but most Indiana bats do not leave their winter hibernacula until late April to early May. Females emerge from hibernacula first, usually between late March and early May. Most males do not begin to emerge until mid- to late April [58,93]. Females arrive at summer locations beginning in mid-April. Females form summer nursery colonies of up to 100 adult females during summer [47,93]. Males typically roost alone or in small bachelor groups during the summer. Many males spend the summer near their winter hibernacula, while others migrate to other areas, similar to areas used by females.
Females can mate during their 1st fall, but some do not breed until their 2nd year [81,93]. Males become reproductively active during their 2nd year. Breeding occurs in and around hibernacula in fall [29,93]. During the breeding season, Indiana bats undergo a phenomenon known as swarming. During this activity, large numbers of bats fly in and out of caves from sunset to sunrise. Swarming mainly occurs during August to September and is thought to be an integral part of mating. Bats have been observed copulating in caves until early October. During the swarming/breeding period, very few bats are found roosting within the hibernacula during the day. Limited mating may also occur at the end of hibernation.
Fertilization does not occur until the end of hibernation [81,87,93], and gestation takes approximately 60 days. Parturition occurs in late May to early July [81,93]. Female Indiana bats typically give birth to 1 pup [81,87,93]. Juveniles are weaned after 25 to 37 days and become volant (able to fly) at about the same time. Most young can fly by early to late July [44,62], but sometimes do not fly until early August. Humphrey and others reported an 8% mortality rate by the time young were weaned. However, they assumed that all females mate in the autumn, which is not the case, so not all the females would give birth. Thus, mortality of young may be even lower than 8%.
Indiana bats are relatively long lived. One Indiana bat was captured 20 years after being banded as an adult. Data from other recaptured individuals show that females live at least 14 years 9 months, while males may live for at least 13 years 10 months.
PREFERRED HABITAT:
In an Illinois study by Gardner and others , the study area where Indiana bats were found was estimated as approximately 67% agricultural land including cropland and old fields; 30% was upland forest; while 2.2% was floodplain forest. Finally, only 0.1% of the area was covered with water. Kurta and others found that in southern Michigan, the general landscape occupied by Indiana bats consisted of open fields and agricultural lands (55%), wetlands and lowland forest (19%), other forested habitats (17%), developed areas (6%), and perennial water sources such as ponds and streams (3%). In southern Illinois, Carter and others reported that all roosts were located in bottomland, swamp, and floodplain areas. Miller and others determined the predominant habitat types near areas where Indiana bats were captured in Missouri were forest, crop fields, and grasslands. Indiana bats did not show any preference for early successional habitats, such as old fields, shrublands, and early successional forests, showing 71% to 75% of activity occurring in other habitats . Although much of the landscape throughout the distributional range of the Indiana bat is dominated by agricultural lands and other open areas, these areas are typically not utilized by Indiana bats [44,67].
Indiana bats typically spend the winter months in caves or mines. However, a few bats have been found hibernating on a dam in northern Michigan . Indiana bats need very specific conditions in order to survive the winter hibernation period, which lasts approximately 6 months. As the microclimate in a hibernaculum fluctuates throughout the winter, Indiana bats sometimes fly to different areas within the hibernaculum to find optimal conditions [28,40], but this does not appear necessary for every hibernaculum . Indiana bats may even switch between nearby hibernacula in search of the most appropriate hibernating conditions . Indiana bats are generally loyal to specific hibernacula or to the general area near hibernacula that they have occupied previously . Critical winter habitats of Indiana bats have been designated by the U.S. Fish and Wildlife Service and include 13 hibernacula distributed across Illinois, Indiana, Kentucky, Missouri, Tennessee, and West Virginia .
Three types of hibernacula have been designated depending on the amount of use each receives from year to year. Priority One hibernacula are those that consistently have greater than 30,000 Indiana bats hibernating inside each winter. Priority Two hibernacula contain 500 to 30,000 bats, and Priority 3 hibernacula are any with fewer than 500 bats. At least 50% of Indiana bats are thought to hibernate in the 8 Priority One hibernacula, which can be found in Indiana (3 hibernacula), Missouri (3), and Kentucky (2). Estimates of hibernating populations in 2001 suggest that Priority One hibernacula have experienced a 48% decline since 1983. Overall, populations have fallen approximately 57% since 1960 across all hibernacula. Evidence suggests that Priority Two hibernacula are becoming more important to Indiana bat survival .
Site Characteristics: Studies have identified at least 29 tree species (see Plant Communities) used by Indiana bats during the summer and during spring and fall migrations. Since so many tree species are utilized as roosts, tree species is likely not a limiting habitat requirement. In addition to trees, Indiana bats have used a Pennsylvania church attic , a utility pole , and bat boxes as roosts. However, use of man-made structures appears to be rare. Roost selection by females may be related to environmental factors, especially weather. Cool temperatures can slow fetal development [44,75], so choosing roosts with appropriate conditions is essential for reproductive success and probably influences roost choice.
Two types of day roosts utilized by Indiana bats have been identified as primary and alternate roosts. Primary roosts typically support more than 30 bats at a time and are used most often by a maternity colony. Trees that support smaller numbers of Indiana bats from the same maternity colony are designated as alternate roosts. In cases where smaller maternity colonies are present in an area, primary roosts may be defined as those used for more than 2 days at a time by each bat, while alternate roosts are generally used less than 2 consecutive days . Maternity colonies may use up to 3 primary roosts and up to 33 alternate roosts [36,64] in a single season. Reproductively active females frequently switch roosts to find optimal roosting conditions. When switching between day roosts, Indiana bats may travel as little as 23 feet (7 m) or as far as 3.6 miles (5.8 km) [53,54,56]. In general, moves are relatively short and typically less than 0.6 mile (1 km) .
Primary roosts are most often found at forest edges or in canopy gaps [17,64]. Alternate roosts are generally located in a shaded portion of the interior forest and occasionally at the forest edge . Most roost trees in a Kentucky study occurred in canopy gaps in oak, oak-hickory, oak-pine, and oak-poplar community types . Roosts found by Kurta and others in a elm-ash-maple forest in Michigan were in a woodland/marsh edge, a lowland hardwood forest, small wetlands, a shrub wetland/cornfield edge, and a small woodlot. Around hibernacula in autumn, Indiana bats tended to choose roost trees on upper slopes and ridges that were exposed to direct sunlight throughout the day .
The preferred amount of canopy cover at the roost is unclear. Many studies have reported the need for low cover, while others have documented use of trees with moderate to high canopy cover, occasionally up to complete canopy closure. Canopy cover ranges from 0% at the forest edge to 100% in the interior of the stand [17,35,59]. A general trend is that primary roosts are found under low cover, while alternate roosts tend to be more shaded, but few data directly compare the two roost types. In Alabama, canopy cover at the roost tended to be low, averaging 35.5%, but at the stand level canopy cover was higher, with a mean of 65.8%. In a habitat suitability model, Romme and others recommended an ideal canopy cover of 60% to 80% for roosting Indiana bats. Actual roost sites in eastern Tennessee were very high in the tree, and Indiana bats were able to exit the roost above the surrounding canopy; thus, canopy cover measurements taken from the bases of roost trees may overestimate the actual amount of cover required by roosting Indiana bats.
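To illustrate how a recommendation such as 60% to 80% canopy cover might be folded into a suitability score, the sketch below uses a simple trapezoidal function. This is a hypothetical stand-in for illustration, not the published habitat suitability model of Romme and others; the ramp width is an assumption.

```python
def canopy_suitability(cover_pct: float,
                       lo: float = 60.0, hi: float = 80.0,
                       ramp: float = 20.0) -> float:
    """Illustrative suitability score (0-1) for canopy cover.

    Cover within [lo, hi] scores 1.0; the score declines linearly over
    `ramp` percentage points on either side. The trapezoidal shape and
    the ramp width are assumptions made only for this illustration.
    """
    if lo <= cover_pct <= hi:
        return 1.0
    if cover_pct < lo:
        return max(0.0, 1.0 - (lo - cover_pct) / ramp)
    return max(0.0, 1.0 - (cover_pct - hi) / ramp)

# Scores for a few canopy-cover values mentioned in the text
for cover in (0, 35.5, 65.8, 70, 100):
    print(cover, round(canopy_suitability(cover), 2))
```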
A great deal of variation exists among stands occupied by Indiana bats. A Virginia pine roost was in a stand with a density of only 367 trees/ha, while in Kentucky a shagbark hickory roost was in a closed-canopy stand with 1,210 trees/ha. Overall tree density in Great Smoky Mountains National Park was higher around primary roosts than at alternate roosts. At the landscape level, the basal area of stands with roosts was 30% lower than the basal area of random stands in Alabama. Tree density in southern Iowa varied among habitats: in a forested floodplain, tree density was lowest at 229 trees/ha, while a riparian strip had the highest density at 493 trees/ha.
The number of roosts used and the home range occupied by a maternity colony can vary widely. In Missouri, Callahan and others found that the highest density of roosts in use in an oak-hickory stand was 0.25 tree/ha. In Michigan, a colony used 4.6 trees/ha, with as many as 13.2 potential roosts/ha available in the green ash-silver maple stand. Clark and others estimated that the density of potential roosts in southern Iowa, in areas where Indiana bats were caught, was 10 to 26/ha in riparian, floodplain, and upland areas dominated by eastern cottonwood-silver maple, oak-hickory, and black walnut-silver maple-American elm, respectively. In Illinois, the suggested optimal number of potential roost trees in upland oak-hickory habitat was 64/ha; the optimal number for riparian and floodplain forest, dominated by silver maple and eastern cottonwood, was proposed to be 41/ha. Salyers and others suggested that a potential roost density of 15 trees/ha was needed, or 30 roosts/ha if artificial roost boxes are erected in a stand with American elm and shagbark hickory. The roosting home range used by any single Indiana bat was as large as 568 hectares in an oak-pine community in Kentucky. Roosts of 2 maternity colonies in southern Illinois were located in roosting areas estimated at 11.72 hectares and 146.5 hectares and included green ash, American elm, silver maple, pin oak, and shagbark hickory. The extent of the maternity home range may depend on the availability of suitable roosts in the area.
Most habitat attributes measured for the Indiana bat were not statistically significant and were inconsistent from one location to another. In Missouri, oak-hickory stands with maternity colonies had significantly (p=0.01) more medium-sized trees (12-22 inches (30-57 cm) dbh) and significantly (p=0.01) more large trees (>22 inches (57 cm) dbh) than areas where Indiana bats were not found. No other major landscape differences were detected.
Distances between roosts and other habitat features may be influenced by the age, sex, and reproductive condition of the Indiana bats. Distances between roosts and paved roads are greater than distances between roosts and unpaved roads in some locales, although overlap between the two situations has been documented. In Illinois, most roosts used by adult females and juveniles were about 2,300 feet (700 m) or more from a paved highway, while adult males roosted less than 790 feet (240 m) from the road [35,36]. In Michigan, roosts were only slightly closer to paved roads, averaging 2,000 feet (600 m) for all roosts located. In general, roosts were located 1,600 feet (500 m) to 2,600 feet (800 m) from unpaved roads in Illinois and Michigan [36,51]. Roost trees used during autumn in Kentucky were very close to unpaved roads, at an average of 160 feet (50 m).
Roost proximity to water is highly variable and therefore probably not as important as once thought. In Indiana, roost trees were discovered less than 660 feet (200 m) from a creek, while roosts in another part of Indiana were 1.2 miles (2 km) from the nearest permanent water source [36,51]. At the other extreme, roosts of a maternity colony in Michigan were all found in a 12-acre (5 ha) wetland that was inundated for most of the year. In Virginia, foraging areas near a stream were utilized. Intermittent streams may be located closer to roosts than more permanent sources of water [35,51]. Ponds, streams, and road ruts appear to be important water sources, especially in upland habitats.
Foraging habitat: Studies on the foraging needs of Indiana bats are inconclusive. Callahan and others reported that bats foraged in a landscape composed of pasture, corn fields, woodlots, and a strip of riparian woodland, although Indiana bat activity was not necessarily recorded in all these habitat types. Murray and Kurta made some qualitative assessments of Indiana bat foraging habitat in Michigan: the majority of bats were found foraging in forested wetlands and other woodlands, while 1 bat foraged in an area around a small lake and another in an area with 50% woodland and 50% open fields. Another Indiana bat foraged over a river, while 10 others foraged in areas greater than 0.6 mile (1 km) from the same river. Bat activity was centered around small canopy gaps or closed forest canopy along small 2nd-order streams in West Virginia. Indiana bats foraged under the dense oak-hickory forest canopy along ridges and hillsides in eastern Missouri, but rarely over streams. Indiana bats have been detected foraging in upland forest [11,23,47,93] in addition to riparian areas such as floodplain forest edges [11,23,44,55,69,72,93]. Romme and others also suggested that foraging habitat would ideally have 50% to 70% canopy closure. Indiana bats rarely utilize open agricultural fields and pastures, upland hedgerows, open water, and deforested creeks for traveling or foraging [36,44,67]. Boyles and others concluded that most activity occurred under the canopy as opposed to above the canopy.
Hibernacula: During hibernation, Indiana bats occupy open areas of hibernacula ceilings and generally avoid crevices and other enclosed areas. Indiana bats were associated with hibernacula that were long (mean = 2,817 feet (858 m)), had high ceilings (mean = 15 feet (4.5 m)), and had large entrances (mean = 104.4 feet² (9.7 m²)). The preferred hibernacula often had multiple entrances promoting airflow. Hibernacula choice may be influenced by the ability of the surrounding landscape to provide adequate forage upon arrival at the hibernaculum, as well as by the specific microclimate inside. Forested areas around the hibernaculum entrance and low amounts of open farmland may be important factors influencing the suitability of hibernacula. This is the only comprehensive habitat assessment of Indiana bat hibernacula known to date (2005).
COVER REQUIREMENTS:
Another important factor relating to roost suitability is tree condition. Indiana bats prefer dead or dying trees with exfoliating bark, although the amount of exfoliating bark present on a tree does not appear to matter. Indiana bats show an affinity for very large trees that receive abundant sunlight. Typically, Indiana bats roost in snags, but a few species of live trees are also utilized; live roost trees are usually shagbark hickory, silver maple, and white oak [17,35,89]. Shagbark hickories make excellent alternate roosts throughout the Indiana bat's range because of their naturally exfoliating bark. Although Indiana bats primarily roost under loose bark, a small fraction roost in tree cavities [14,35,50,51,53,89].
Primary roosts are generally larger than alternate roosts, but both show considerable variability. Females typically use large roost trees, averaging 10.8 inches (27.4 cm) to 25.7 inches (65.3 cm) dbh, as maternity roosts [17,35,41,47,50,51,56,83,85]. Males are more flexible, roosting in trees as small as 3 inches (8 cm) dbh [35,41,47,59]. In a review, Romme and others determined that Indiana bats required tree roosts greater than 8 inches (22 cm) dbh, while Clawson suggested that roosts 12 inches (30 cm) dbh or larger were preferred. The heights of roost trees vary, but they tend to be tall, with average heights ranging from 62.7 feet (19.1 m) to 100 feet (30 m). The heights of the actual roosting sites are variable as well, ranging from 4.6 feet (1.4 m) to 59 feet (18 m) [41,51,56,85]. There is evidence that roost height is influenced by the extent of canopy closure; specifically, more open canopies tend to be correlated with lower roost heights. However, this pattern does not appear to hold true in all localities [51,85].
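The size, bark, and condition criteria above can be summarized as a simple screening rule for potential roost trees. The sketch below encodes the thresholds quoted in this paragraph; it is a simplification for illustration, not a survey protocol, and the example trees are hypothetical.

```python
# Live species reported above as roosts; dead trees of many species qualify.
LIVE_ROOST_SPECIES = {"shagbark hickory", "silver maple", "white oak"}

def is_potential_roost(species: str, dbh_cm: float, is_dead: bool,
                       has_exfoliating_bark: bool,
                       min_dbh_cm: float = 22.0) -> bool:
    """Screen a tree as a potential Indiana bat roost.

    Criteria taken from the text: exfoliating bark, a minimum dbh
    (22 cm after Romme and others; pass 30.0 to apply Clawson's
    stricter threshold), and either a dead tree or one of the live
    species reported as roosts. Cavity roosts are ignored here.
    """
    if dbh_cm < min_dbh_cm or not has_exfoliating_bark:
        return False
    return is_dead or species.lower() in LIVE_ROOST_SPECIES

print(is_potential_roost("shagbark hickory", 35.0, False, True))  # True
print(is_potential_roost("red maple", 40.0, False, True))         # False: live, not a reported roost species
print(is_potential_roost("white oak", 18.0, True, True))          # False: below minimum dbh
```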
In addition to day roosts, Indiana bats use temporary roosts throughout the night to rest between foraging bouts. Limited research has examined the use of night roosts by Indiana bats, so their use and importance are poorly understood. Males, lactating and postlactating females, and juveniles have been found roosting under bridges at night [48,65]. Some Indiana bats were tracked to 3 different night roosts within the same night. Night roosts are often found within the bats' foraging area. Indiana bats using night roosts are thought to roost alone and only for short periods, typically 10 minutes or less. Lactating bats may return to the day roost several times each night, presumably to nurse their young. Pregnant bats have not been tracked back to the day roost during the night except during heavy rain. Because Indiana bats are difficult to track during their nightly movements and usually rest for such short periods, the specific conditions that Indiana bats need in a night roost, and the reasons why night roosts are needed, are still unknown.
During spring and fall, Indiana bats migrate between hibernacula and summer roosting sites. In New York and Vermont, bats traveled up to 25 miles (40 km) between hibernacula and summer roosting sites in spring. This is a considerably shorter distance than is seen in the Midwest, where bats may travel up to 300 miles (500 km). Many males remain close to hibernacula during the spring and summer [2,102] rather than migrating long distances like females; occasionally, they even roost within hibernacula during the summer. Males also roost in caves and trees during fall swarming [26,102]. Few data exist on the roosting requirements of Indiana bats during spring and fall migrations, but the available data indicate that requirements during these times are similar to summer needs in that the bats choose large trees with direct sunlight and exfoliating bark [15,41].
The ability of Indiana bats to find suitable hibernating conditions is critical for their survival. A hibernaculum that remained too warm during one winter caused a 45% mortality rate in hibernating Indiana bats. Bats generally hibernate in warmer portions of the hibernacula in fall, then move to cooler areas as winter progresses. During October and November, temperatures at roosting sites within major hibernacula in 6 states averaged 43.5 °F to 53.2 °F (6.4-11.8 °C). Roost temperatures at the same hibernacula ranged from 34.5 °F to 48.6 °F (1.4-9.2 °C) from December to February. Temperatures in March and April were slightly lower than in autumn at 39.6 °F to 51.3 °F (4.2-10.7 °C). The Indiana Bat Recovery Team found that Indiana bat populations increased over time in hibernacula with stable mid-winter temperatures averaging 37.4 °F to 45.0 °F (3.0-7.2 °C), and declined in hibernacula with temperatures outside this range [90,93]. Temperatures slightly above freezing during hibernation allow Indiana bats to slow their metabolic rates as much as possible without the risk of freezing to death or depleting fat reserves too quickly [42,77]. Hibernating Indiana bats may also survive low temperatures by sharing body heat within the tight clusters they typically form. Bats awaken periodically throughout the hibernation period, presumably to eliminate waste or to move to more appropriate microclimates [39,40]. This periodic waking does not seem to affect survival, but arousals caused by disturbance force Indiana bats to expend large amounts of energy, which can exhaust their fat reserves before the end of winter and may lead to death.
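The temperature figures above are ordinary Fahrenheit-to-Celsius conversions, and the Recovery Team's stable mid-winter range can be expressed as a simple screen on roost temperatures. The sketch below reproduces that arithmetic; the example temperature series are hypothetical.

```python
def f_to_c(deg_f: float) -> float:
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (deg_f - 32.0) * 5.0 / 9.0

def within_stable_midwinter_range(temps_c) -> bool:
    """Check whether mid-winter roost temperatures stay inside the
    3.0-7.2 degC (37.4-45.0 degF) range associated above with
    increasing populations. A screen on the reported range only."""
    return all(3.0 <= t <= 7.2 for t in temps_c)

print(round(f_to_c(37.4), 1), round(f_to_c(45.0), 1))   # 3.0 7.2
print(within_stable_midwinter_range([3.5, 4.2, 6.8]))   # True
print(within_stable_midwinter_range([2.1, 4.2, 6.8]))   # False: one reading too cold
```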
One way in which caves retain low temperatures is through a constant inflow of cold air from outside. Typically, the caves supporting the largest Indiana bat populations have multiple entrances that allow cool outside air to enter and circulate. Gates that are meant to keep vandals out of caves have altered the temperature and airflow of hibernacula, resulting in population declines of Indiana bats at many major hibernacula throughout their range; removing or modifying gates at some of these sites has given these populations a chance to rebound. The bats also seem to prefer a relative humidity of 74% to 100%, although it is uncommon for the air to be saturated [2,26,28,39,42], and relative humidities as low as 50.4% have been recorded. More research is needed to identify other specific environmental conditions that bats require at hibernacula.
FOOD HABITS:
In addition to differences in diet, variation in foraging behavior has been documented. For instance, the distance that an individual Indiana bat travels between a day roost and a nightly foraging range can vary. Garner and Gardner found that Indiana bats traveled up to 1.6 miles (2.6 km) from their day roosts to their foraging sites in Illinois. Similarly, bats traveled up to 1.5 miles (2.4 km) to forage in Kentucky. In Michigan, female bats traveled as far as 2.6 miles (4.2 km) to reach foraging areas, with an average of 1.5 miles (2.4 km).
Several studies have documented similarities in how foraging habitats are actually used by Indiana bats. Humphrey and others found that Indiana bats in Indiana foraged around the canopy, 7 to 98 feet (2-30 m) above the ground. LaVal and others, whose study was conducted in Missouri, found that a female bat foraged 7 to 33 feet (2-10 m) above a river; in the same study, a male Indiana bat was observed flying in an elliptical pattern among trees 10 to 33 feet (3-10 m) above the ground under the canopy of dense forest. In addition, bats were observed foraging at canopy height in Virginia, which would likely provide foraging conditions similar to those in the studies previously mentioned.
Differences in the extent of foraging ranges have also been noted. Bats from the same colony foraged in different areas at least some of the time. Humphrey and others reported that the average foraging area for female bats in Indiana was 843 acres (341 ha), while the foraging area for males averaged 6,837 acres (2,767 ha). Hobson and Holland reported a male bat using a foraging area of 1,544 acres (625 ha) in Virginia. In Illinois, however, foraging ranges were much smaller, averaging 625 acres (253 ha) for adult females, 141 acres (57 ha) for adult males, 91 acres (37 ha) for juvenile females, and only 69 acres (28 ha) for juvenile males [35,36,47]. Humphrey and others found that foraging areas utilized by Indiana bats in Indiana increased throughout the summer season, but averaged only 11.2 acres (4.54 ha) in mid-summer.
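The foraging-area figures above pair acres with hectares (1 hectare is about 2.47 acres). The short sketch below simply reproduces that conversion so the paired values can be checked against the figures reported in the text.

```python
ACRES_PER_HECTARE = 2.4710538  # 1 ha = 2.471 acres

def acres_to_hectares(acres: float) -> float:
    """Convert acres to hectares."""
    return acres / ACRES_PER_HECTARE

# Foraging areas reported above (acres) and their hectare equivalents
for label, acres in [("adult female, Indiana", 843),
                     ("adult male, Indiana", 6837),
                     ("adult female, Illinois", 625),
                     ("juvenile male, Illinois", 69)]:
    print(f"{label}: {acres} acres = {acres_to_hectares(acres):.0f} ha")
```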
PREDATORS:
The impact of natural predators on Indiana bats is minimal compared with the habitat damage and mortality caused by humans, especially during hibernation. The presence of people in caves can rouse Indiana bats from hibernation; the resulting arousals greatly increase the energy the bats expend and can cause high mortality in a hibernating cave population. Human disturbance and the degradation of habitat are the primary causes of the decline of the Indiana bat.
MANAGEMENT CONSIDERATIONS:
Harvesting trees within stands where Indiana bats are known to roost during the summer could result in the mortality or displacement of individual bats or possibly entire colonies. Harvests would probably be safer in these areas during the hibernation period, when the trees are not being used. However, felling trees at any time may result in the loss of unknown maternity roosts. Cutting down a tree with roosting Indiana bats is assumed to be unlikely in most cases because of the rarity of the species, because many stands with suitable habitat have more potential roost trees than are likely utilized by Indiana bats, and because most maternity colonies are far apart across their range.
Since roost trees tend to be ephemeral, lasting for just a few seasons because of tree fall or loss of exfoliating bark from the bole, it may be more beneficial for Indiana bat conservation to protect and manage stands rather than individual trees. Others go a step further, recommending that existing snags be retained and new snags be recruited. This may be especially true for lands intensively managed for wood harvest, where forests are not allowed to reach old age classes and very few snags are typically created. Because Indiana bats need a variety of roosts to suit their roosting needs, trees of various species, sizes, and conditions should be maintained to provide the maximum probability that the bats' needs will be met and to provide a continuous supply of roosts for when old roosts become unsuitable.
When harvesting trees in either green or salvage units, the U.S. Fish and Wildlife Service in Pennsylvania recommends that all shagbark and shellbark hickories, living or dead, be retained in any area where Indiana bats could potentially occur. More than 16 live trees of at least 9 inches (23 cm) dbh should be left per acre in partial harvest units, and 3 of these live trees should be at least 20 inches (51 cm) dbh. For final harvest units and clearcuts, 8 to 15 live trees at least 9 inches (23 cm) dbh should be retained per acre, with 1 tree per acre being at least 20 inches dbh. For partial to intermediate harvests in green stands, canopy cover should be reduced to 54%. Live residual trees surrounding approximately one-third of the large (>12 inches (30 cm) dbh) snags with exfoliating bark should be saved in order to provide partial shade of the snags throughout the summer; these live trees could potentially become suitable roosts in the future. Live hickories, oaks, elms, ashes, cottonwoods, and maples should be retained when possible, since these are the types primarily used as roosts [78,94]. In partial and final harvests in salvage units, as well as in clearcuts, 5 to 10 snags of at least 9 inches (23 cm) in diameter should be retained per acre, and at least 1 snag 16 inches (41 cm) dbh should remain per 2 acres (0.8 ha). All known Indiana bat roosts should be protected until they are no longer suitable for use as roosts. Other reports present additional harvest guidelines. For instance, for every 5 acres (2 ha) harvested, a clump of trees 0.25 acre (0.1 ha) in size should be left, containing "den trees," snags, oaks and hickories, conifers, less common species, and/or mast species in a variety of sizes. Additionally, large snags in open canopy should be preserved. Because the bats prefer large snags, removing any snag with exfoliating bark within Indiana bat habitat could degrade habitat by eliminating current or potential roost sites.
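The per-acre retention figures quoted above amount to a set of numeric checks on a harvest unit. The sketch below encodes them as written, keeping the live-tree rule for partial harvests separate from the snag rule for salvage units and clearcuts; the tallies passed in are hypothetical, and the functions are a bookkeeping illustration, not an official compliance tool.

```python
def meets_partial_harvest_live_retention(live_ge_9in_per_acre: float,
                                         live_ge_20in_per_acre: float) -> bool:
    """Partial harvest units: more than 16 live trees >= 9 in dbh per acre,
    3 of them >= 20 in dbh (figures quoted above)."""
    return live_ge_9in_per_acre > 16 and live_ge_20in_per_acre >= 3

def meets_salvage_snag_retention(snags_ge_9in_per_acre: float,
                                 snags_ge_16in_per_2_acres: float) -> bool:
    """Salvage units and clearcuts: 5 to 10 snags >= 9 in dbh per acre and
    at least 1 snag >= 16 in dbh per 2 acres (figures quoted above)."""
    return snags_ge_9in_per_acre >= 5 and snags_ge_16in_per_2_acres >= 1

# Hypothetical tallies for a harvest unit, for illustration only
print(meets_partial_harvest_live_retention(18, 3))   # True
print(meets_partial_harvest_live_retention(14, 3))   # False: too few live trees >= 9 in dbh
print(meets_salvage_snag_retention(4, 1))            # False: too few snags per acre
```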
Several studies document stand use by Indiana bats after tree harvest. Bats completely avoided recently clearcut sites in a Kentucky study area. However, unmanaged forest stands received 1.5 to 2 times as much activity as expected based on habitat availability, and recent two-aged shelterwood harvests received 4 to 7 times the expected activity. These two-aged shelterwood harvests followed guidelines that called for the retention of 40 live trees/ha as well as all snags, shagbark hickories, hollow trees, and trees with large dead limbs. More roosts, and more bats using those roosts, were found within these harvested areas than in a similar Kentucky study where 40 live trees/ha and just 5 snags/ha were retained. In Illinois, a maternity colony remained in a selectively harvested area and used the same roosts that were previously occupied. Indiana bats were occasionally observed foraging under intact canopies and in forests with gaps created by diameter-limit harvests in West Virginia; Indiana bat activity was not recorded in clearcut areas or under complex canopies.
A project in eastern Texas proposed that thinning pine forests would create more suitable habitat for the southeastern myotis (Myotis austroriparius) and Rafinesque's big-eared bat (Corynorhinus rafinesquii) by promoting the growth of the remaining pines toward an old-growth age class. This condition is reportedly similar to what is preferred by the Indiana bat; thus, thinning the understory may help improve Indiana bat summer habitat. (See Site Characteristics for discussion of the preferred stand structure of Indiana bats.)
Further recommendations for improving and maintaining the landscape for the Indiana bat have been proposed by biologists in Missouri, Pennsylvania, and Ohio. Riparian corridors should be forested for 100 feet (30 m) or more on either side of a stream; in areas lacking wide forested corridors, reforestation should be a priority. Reducing forest canopies from 100% cover to roughly 30% to 80% cover and clearing some understory is also recommended [8,26]. However, reducing canopy cover further could be detrimental to the bats by causing the loss of current and future roosts as well as by altering the landscape and microhabitats. Creating new water sources, especially in upland habitats, may improve habitat where other water sources are not readily available. Sedimentation of stream corridors following logging could potentially affect the insect prey assemblage in a foraging area.
Greater threats to the survival of Indiana bats may exist during hibernation. Hibernating bats that are disturbed by human activity lose weight faster than those not visited by people, and bats in hibernacula that are visited by people during the hibernation period are more likely to die before spring. The biggest threats to Indiana bat hibernacula are human disturbance (including researchers, spelunkers, and vandals), poorly designed gates that disrupt airflow, natural hazards such as flooding or mine collapse, and microclimate changes [39,93]. Although gating cave and mine entrances can deter humans from entering hibernacula and disturbing hibernating bats, gates can severely change temperature and airflow within the cave, causing conditions to fall below optimal [42,77]. Management recommendations include protecting hibernacula with bat-friendly gates and restoring abandoned hibernacula if possible [26,103]. Hibernacula should also be closed to visitation from September 1 to April 30 toward the southern extent of the species' range, and from September 1 to May 31 in the north. To minimize disturbance to hibernating populations, censuses of Indiana bats should occur only biennially. Clawson also recommended that a 0.25-mile (0.4 km) buffer zone be established around hibernacula, in which no development, agricultural activities, logging, or mining should occur. Kiser and Elliott suggested that any snags located within 1.4 miles (2.4 km) of a hibernaculum should be retained and that recruitment of new snags in the same area should be a priority, to ensure a continuous supply of new roost trees. In addition, any areas that have been altered through agriculture, mining, logging, or other activities should be reforested with trees that are commonly used by Indiana bats for roosting.
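The seasonal closure recommendation above is simply a date window that differs between the southern and northern portions of the range. The sketch below encodes that window; it is an illustration of the recommendation as stated, not an official rule, and the visit dates are hypothetical.

```python
from datetime import date

def hibernaculum_closed(visit: date, northern_range: bool) -> bool:
    """True if the visit date falls in the recommended closure window:
    September 1 - April 30 in the southern part of the range,
    September 1 - May 31 in the north."""
    end_month, end_day = (5, 31) if northern_range else (4, 30)
    start = date(visit.year, 9, 1)
    # The closure spans the new year, so a date qualifies if it falls
    # on or after September 1 or on or before the spring end date.
    if visit >= start:
        return True
    return visit <= date(visit.year, end_month, end_day)

print(hibernaculum_closed(date(2005, 10, 15), northern_range=False))  # True
print(hibernaculum_closed(date(2005, 5, 15), northern_range=True))    # True
print(hibernaculum_closed(date(2005, 6, 15), northern_range=True))    # False
```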
Pesticides commonly used in agriculture in the past and present, such as organochlorines, organophosphates, carbamates, and pyrethroids, have all been found in the feces and tissues of bats. Since the ban on organochlorines, pyrethroids may be the biggest threat to bat health because they are likely to persist in the environment. Pesticides inhibit cholinesterase and may cause cancer, birth defects, and death in bats; these conclusions are based on preliminary results and thus are largely speculative. Organochlorines, especially DDE, a long-lived product of DDT, build up in bat tissues but are not always found at lethal levels. Products of DDT are highly soluble in fat, so when bats build up fat for hibernation, they risk accumulating residues at levels that can become fatal; pesticide residues concentrate in the brain as other fat in the body is metabolized. There is evidence of Indiana bat mortality caused by organochlorines [25,71], and organochlorine residues still exist in the environment even though their use has been banned for decades. Pesticide residues originate in the bats' prey and build up in tissues, including brain tissue, over time. Pesticide toxicity may have contributed to many mass die-offs that have occurred in the Indiana bat as well as in other bat species around the world. Restricting pesticide use, especially in the vicinity of hibernacula, may help reduce the negative impacts on Indiana bats and other bat species.
In the northern regions of its range, Indiana bat populations have remained stable or increased slightly since surveys were first conducted in 1960, especially in Indiana, New York, Ohio, and West Virginia. However, Indiana bat populations have decreased drastically in the southern portion of the range, especially in Kentucky, Missouri, Tennessee, and Arkansas [27,93]. What we have learned about their year-round habitat needs can give direction on how land should be managed to ensure the survival of the Indiana bat.
MacGregor and others found that Indiana bats used roosts located within prescribed burn areas twice as much as expected, based on the amount of area available, during 1 year of their Kentucky study; use was equal to expected during the 2nd year. Some individuals were discovered roosting in a partially burned post oak (Quercus stellata) in Illinois. These studies show that fire-affected landscapes remain suitable for Indiana bat use over time.
A mathematical model suggested that closely related bat species in California would be affected differently by high-severity fires. The fringed myotis (Myotis thysanodes) and the Yuma myotis (M. yumanensis) were both predicted to be adversely affected by postwildfire conditions because of a predicted decrease in habitat suitability. In contrast, the model suggested that the long-eared myotis (M. evotis) would benefit from a high-severity wildfire through the creation of more suitable habitat. Based on the results of this model, it is unclear how a high-severity fire may affect Indiana bat habitat.
The following table provides fire return intervals for plant communities and ecosystems where the Indiana bat is important. Find further fire regime information for the plant communities in which this species may occur by entering the species name in the FEIS home page under "Find Fire Regimes".
|Community or Ecosystem||Dominant Species||Fire Return Interval Range (years)|
|silver maple-American elm||Acer saccharinum-Ulmus americana||<5 to 200|
|sugar maple||A. saccharum||>1,000|
|sugar maple-basswood||A. saccharum-Tilia americana||>1,000|
|sugarberry-American elm-green ash||Celtis laevigata-Ulmus americana-Fraxinus pennsylvanica||<35 to 200|
|beech-sugar maple||Fagus spp.-Acer saccharum||>1,000|
|black ash||Fraxinus nigra||<35 to 200|
|shortleaf pine-oak||Pinus echinata-Quercus spp.||<10|
|slash pine-hardwood||P. elliottii-variable||<35|
|longleaf pine-scrub oak||P. palustris-Quercus spp.||6-10|
|Table Mountain pine||P. pungens||<35 to 200|
|eastern white pine||P. strobus||35-200|
|eastern white pine-eastern hemlock||Pinus strobus-Tsuga canadensis||35-200|
|Virginia pine-oak||P. virginiana-Quercus spp.||10 to <35|
|sycamore-sweetgum-American elm||Platanus occidentalis-Liquidambar styraciflua-Ulmus americana||<35 to 200|
|black cherry-sugar maple||Prunus serotina-Acer saccharum||>1,000|
|oak-hickory||Quercus-Carya spp.||<35 |
|oak-gum-cypress||Quercus-Nyssa spp.-Taxodium distichum||35 to >200|
|southeastern oak-pine||Quercus-Pinus spp.||<10|
|white oak-black oak-northern red oak||Quercus alba-Q. velutina-Q. rubra||<35|
|northern pin oak||Q. ellipsoidalis||<35|
|bear oak||Q. ilicifolia||<35|
|bur oak||Q. macrocarpa||<10|
|chestnut oak||Q. prinus||3-8|
|northern red oak||Q. rubra||10 to <35|
|post oak-blackjack oak||Q. stellata-Q. marilandica||<10|
|black oak||Q. velutina||<35|
|eastern hemlock-yellow birch||Tsuga canadensis-Betula alleghaniensis||>200 |
|elm-ash-cottonwood||Ulmus-Fraxinus-Populus spp.||<35 to 200 [31,98]|
1. Baker, Robert J.; Bradley, Lisa C.; Bradley, Robert D.; Dragoo, Jerry W.; Engstrom, Mark D.; Hoffmann, Robert S.; Jones, Cheri A.; Reid, Fiona; Rice, Dale W.; Jones, Clyde. 2003. Revised checklist of North American mammals north of Mexico, 2003. Occasional Papers No. 229. Lubbock, TX: Museum of Texas Tech University. 23 p.
2. Barbour, Roger W.; Davis, Wayne H. 1969. Bats of America. Lexington, KY: The University Press of Kentucky. 286 p.
3. Barclay, Robert M. R.; Brigham, R. Mark. 2001. Year-to-year reuse of tree-roosts by California bats (Myotis californicus) in southern British Columbia. American Midland Naturalist. 146(1): 80-85.
4. Bat Conservation International, Inc. 2001. Bats in eastern woodlands. Austin, TX: Bat Conservation International, Inc. 307 pp.
5. Belwood, Jacqueline J. 2001. An Indiana bat roost in suburbia: important observations and concerns for the future. Bat Research News. 42(2): 26.
6. Bernard, Stephen R.; Brown, Kenneth F. 1977. Distribution of mammals, reptiles, and amphibians by BLM physiographic regions and A.W. Kuchler's associations for the eleven western states. Tech. Note 301. Denver, CO: U.S. Department of the Interior, Bureau of Land Management. 169 p.
7. Bogener, Dave. 2003. SP-T11 -- Effects of fuel load management and fire prevention on wildlife and plant communities. Oroville, CA: State of California, Department of Water Resources. Draft final report. Oroville Facilities Relicensing: Federal Energy Regulatory Commission Project No. 2100. 42 p.
8. Boyer, Angela L. 2001. Biological opinion on the land and resource management plan: Wayne National Forest, Ohio. Reynoldsburg, OH: U.S. Department of the Interior, Fish and Wildlife Service. 52 p.
9. Boyles, Justin G.; Miller, Matt N.; Robbins, Lynn W. 2003. Bat species activity in two forest habitats above and below the canopy. Bat Research News. 44(1): 21. [Abstract].
10. Brack, Virgil, Jr.; LaVal, Richard K. 1985. Food habits of the Indiana bat in Missouri. Journal of Mammalogy. 66(2): 308-315.
11. Brack, Virgil, Jr.; Tyrell, Karen. 1990. A model of the habitat used by the Indiana bat (Myotis sodalis) during the summer in Indiana: 1990 field studies. Project E-1-4, Study No. 8. Indianapolis, IN: Indiana Department of Natural Resources, Division of Fish and Wildlife. 42 p.
12. Brack, Virgil, Jr.; Whitaker, John O., Jr.; Pruitt, Scott E. 2004. Bats of Hoosier National Forest. Proceedings of the Indiana Academy of Science. 113(1): 76-86.
13. Brady, John T. 1983. Use of dead trees by the endangered Indiana bat. In: Davis, Jerry W.; Goodwin, Gregory A.; Ockenfeis, Richard A., technical coordinators. Snag habitat management: proceedings of the symposium; 1983 June 7-9; Flagstaff, AZ. Gen. Tech. Rep. RM-99. Fort Collins, CO: U.S. Department of Agriculture, Forest Service, Rocky Mountain Forest and Range Experiment Station: 111-113.
14. Britzke, Eric R.; Harvey, Michael J.; Loeb, Susan C. 2003. Indiana bat, Myotis sodalis, maternity roosts in the southern United States. Southeastern Naturalist. 2(2): 235-242.
15. Britzke, Eric R.; Hicks, Alan C.; von Oettingen, Susanna L.; Darling, Scott R. 2004. Spring roosting ecology of female Indiana bats (Myotis sodalis) in the northeastern United States. Bat Research News. 45(2): 52-53.
16. Butchkoski, Calvin M.; Hassinger, Jerry D. 2001. The ecology of the Indiana bat using a building as a maternity site. Bat Research News. 42(2): 28.
17. Callahan, Edward V.; Drobney, Ronald D.; Clawson, Richard L. 1997. Selection of summer roosting sites by Indiana bats (Myotis sodalis) in Missouri. Journal of Mammalogy. 78(3): 818-825.
18. Carter, Timothy C.; Carroll, Steven K.; Feldhamer, George A. 2001. Preliminary work on maternity colonies of Indiana bats in Illinois. Bat Research News. 42(2): 28-29.
19. Carter, Timothy C.; Ford, W. Mark; Menzel, Michael A. 2002. Fire and bats in the Southeast and Mid-Atlantic: more questions than answers? In: Ford, W. Mark; Russell, Kevin R.; Moorman, Christopher E., eds. The role of fire in nongame wildlife management and community restoration: traditional uses and new directions: Proceedings of a special workshop; 2000 December 15; Nashville, TN. Gen. Tech. Rep. NE-288. Newtown Square, PA: U.S. Department of Agriculture, Forest Service, Northeastern Research Station: 139-143.
20. Carter, Timothy C.; Menzel, Michael A.; Ford, W. Mark. 2000. Fire and bats in the East: something you've never thought about but probably should. In: Excellence in wildlife stewardship through science and education: Proceedings, 7th annual conference of the Wildlife Society; 2000 September 12-16; Nashville, TN. Bethesda, MD: The Wildlife Society: 75-76.
21. Cary, D. L.; Clawson, R. L.; Grimes, D. 1981. An observation of snake predation on a bat. Transactions of the Kansas Academy of Sciences. 84(4): 223-224.
22. Caryl, Joseph; Kurta, Allen. 1996. Ecology and behavior of the Indiana bat along the Raisin River: preliminary observations. Bat Research News. 37: 129.
23. Clark, Bryon K.; Bowles, John B.; Clark, Brenda S. 1987. Summer status of the endangered Indiana bat in Iowa. The American Midland Naturalist. 118(1): 32-39.
24. Clark, Donald R., Jr. 1981. Bats and environmental contaminants: a review. Special Scientific Report-Wildlife No. 235. Washington, DC: U.S. Department of the Interior, Fish and Wildlife Service. 26 p.
25. Clark, Donald R., Jr.; LaVal, Richard K.; Tuttle, Merlin D. 1981. Estimating pesticide burdens of bats from guano analyses. Bulletin of Environmental Contamination and Toxicology. 29: 214-220.
26. Clawson, Richard L. 2000. Implementation of a recovery plan for the endangered Indiana bat. In: Vories, Kimery C.; Throgmorton, Dianne, eds. In: Proceedings of bat conservation and mining: a technical interactive forum; 2000 November 14-16; St. Louis, MO. Alton, IL: U.S. Department of the Interior, Office of Surface Mining; Carbondale, IL: Coal Research Center, Southern Illinois University: 173-186.
27. Clawson, Richard L. 2002. Trends in population size and current status. In: Kurta, Allen; Kennedy, Jim, eds. The Indiana bat: biology and management of an endangered species. Austin, TX: Bat Conservation International: 2-8.
28. Clawson, Richard L.; LaVal, Richard K.; LaVal, Margaret L.; Caire, William. 1980. Clustering behavior of hibernating Myotis sodalis in Missouri. Journal of Mammalogy. 61(2): 245-253.
29. Cope, James B.; Humphrey, Stephen R. 1977. Spring and autumn swarming behavior in the Indiana bat, Myotis sodalis. Journal of Mammalogy. 58(1): 93-95.
30. Currie, Robert R. 2002. Response to gates at hibernacula. In: Kurta, Allen; Kennedy, Jim, eds. The Indiana bat: biology and management of an endangered species. Austin, TX: Bat Conservation International: 86-99.
31. Duchesne, Luc C.; Hawkes, Brad C. 2000. Fire in northern ecosystems. In: Brown, James K.; Smith, Jane Kapler, eds. Wildland fire in ecosystems: Effects of fire on flora. Gen. Tech. Rep. RMRS-GTR-42-vol. 2. Ogden, UT: U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station: 35-51.
32. Eyre, F. H., ed. 1980. Forest cover types of the United States and Canada. Washington, DC: Society of American Foresters. 148 p.
33. Ford, W. Mark; Menzel, Jennifer M.; Rodrigue, Jane. 2004. Hearing bat habitat: Anabat surveys on the Fernow Experimental Forest. Bat Research News. 45(2): 56.
34. Fuller, Todd K.; DeStefano, Stephen. 2003. Relative importance of early-successional forests and shrubland habitats to mammals in the northeastern United States. Forest Ecology and Management. 185(1-2): 75-79. Available online: http://www.sciencedirect.com [2005, April 4].
35. Gardner, James E.; Garner, James D.; Hofmann, Joyce E. 1991. Summer roost selection and roosting behavior of Myotis sodalis (Indiana bat) in Illinois. Final report. Champaign, IL: Illinois Department of Conservation, Illinois Natural History Survey. 56 p. On file with: U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station, Fire Sciences Laboratory, Missoula, MT.
36. Garner, James D.; Gardner, James E. 1992. Determination of summer distribution and habitat utilization of the Indiana bat (Myotis sodalis) in Illinois. [Place of publication unknown]: Illinois Department of Conservation, Illinois Natural History Survey. Final Report: Project E-3. 23 p.
37. Garrison, George A.; Bjugstad, Ardell J.; Duncan, Don A.; Lewis, Mont E.; Smith, Dixie R. 1977. Vegetation and environmental features of forest and range ecosystems. Agric. Handb. 475. Washington, DC: U.S. Department of Agriculture, Forest Service. 68 p.
38. Gumbert, Mark W.; O'Keefe, Joy M.; MacGregor, John R. 2002. Roost fidelity in Kentucky. In: Kurta, Allen; Kennedy, Jim, eds. The Indiana bat: biology and management of an endangered species. Austin, TX: Bat Conservation International: 143-152.
39. Hall, John S. 1962. A life history and taxonomic study of the Indiana bat, Myotis sodalis. Scientific Publications No. 12. Reading, PA: Reading Public Museum and Art Gallery. 68 p.
40. Hardin, James W.; Hassell, Marion D. 1970. Observation on waking periods and movements of Myotis sodalis during hibernation. Journal of Mammalogy. 51: 829-831.
41. Hobson, Christopher S.; Holland, J. Nathaniel. 1995. Post-hibernation movement and foraging habitat of a male Indiana bat, Myotis sodalis (Chiroptera: Vespertilionidae), in western Virginia. Brimleyana. 23: 95-101.
42. Humphrey, Stephen R. 1978. Status, winter habitat, and management of the endangered Indiana bat, Myotis sodalis. Florida Scientist. 41(2): 65-76.
43. Humphrey, Stephen R.; Cope, James B. 1977. Survival rates of the endangered Indiana bat, Myotis sodalis. Journal of Mammalogy. 58(1): 32-36.
44. Humphrey, Stephen R.; Richter, Andreas R.; Cope, James B. 1977. Summer habitat and ecology of the endangered Indiana bat, Myotis sodalis. Journal of Mammalogy. 58(3): 334-346.
45. Johnson, Scott A.; Brack, Virgil, Jr.; Rolley, Robert E. 1998. Overwinter weight loss of Indiana bats (Myotis sodalis) from hibernacula subject to human visitation. The American Midland Naturalist. 139(2): 255-261.
46. King, D. 1992. Roost trees of the endangered Indiana bat (Myotis sodalis) in Michigan. Bios. 62: 75.
47. Kiser, James D.; Elliott, Charles L. 1996. Foraging habitat, food habits, and roost tree characteristics of the Indiana bat (Myotis sodalis) during autumn in Jackson County, Kentucky. Frankfort, KY: Kentucky Department of Fish and Wildlife Resources. 65 p.
48. Kiser, James D.; MacGregor, J. R.; Bryan, H. D.; Howard, A. 2001. The use of concrete bridges as night roosts by Indiana bats in south central Indiana. Bat Research News. 42(2): 33.
49. Kuchler, A. W. 1964. United States [Potential natural vegetation of the conterminous United States]. Special Publication No. 36. New York: American Geographical Society. 1:3,168,000; colored.
50. Kurta, Allen; Kath, Joseph; Smith, Eric L.; Foster, Rodney; Orick, Michael W.; Ross, Ronald. 1993. A maternity roost of the endangered Indiana bat (Myotis sodalis) in an unshaded, hollow, sycamore tree (Platanus occidentalis). The American Midland Naturalist. 130(2): 405-407.
51. Kurta, Allen; King, David; Teramino, Joseph A.; Stribley, John M.; Williams, Kimberly J. 1993. Summer roosts of the endangered Indiana bat (Myotis sodalis) on the northern edge of its range. The American Midland Naturalist. 129(1): 132-138.
52. Kurta, Allen; Murray, Susan W. 2001. Philopatry and migration of banded Indiana bats. Bat Research News. 42(2): 34-35.
53. Kurta, Allen; Murray, Susan W.; Miller, David H. 2002. Roost selection and movements across the summer landscape. In: Kurta, Allen; Kennedy, Jim, eds. The Indiana bat: biology and management of an endangered species. Austin, TX: Bat Conservation International: 118-129.
54. Kurta, Allen; Murray, Susan W.; Miller, David. 2001. The Indiana bat: journeys in space and time. Bat Research News. 42(2): 31. Abstract.
55. Kurta, Allen; Whitaker, John O., Jr. 1998. Diet of the endangered Indiana bat (Myotis sodalis) on the northern edge of its range. The American Midland Naturalist. 140(2): 280-286.
56. Kurta, Allen; Williams, Kimberly J.; Mies, Robert. 1996. Ecological, behavioural, and thermal observations of a peripheral population of Indiana bats (Myotis sodalis). In: Barclay, R. M. R.; Brigham, R. M., eds. Bats and forests. Victoria, BC: Ministry of Forests Research Program: 102-117.
57. LaVal, Richard K.; Clawson, Richard L.; LaVal, Margaret L.; Caire, William. 1977. Foraging behavior and nocturnal activity patterns of Missouri bats, with emphasis on the endangered species Myotis grisescens and Myotis sodalis. Journal of Mammalogy. 58(4): 592-599.
58. LaVal, Richard K.; LaVal, Margaret L. 1980. Ecological studies and management of Missouri bats. Terrestrial Series #8. Jefferson City, MO: Missouri Department of Conservation. 53 p.
59. MacGregor, John R.; Kiser, James D.; Gumbert, Mark W.; Reed, Timothy O. 1999. Autumn roosting habitat of male Indiana bats (Myotis sodalis) in a managed forest setting in Kentucky. In: Stringer, Jeffrey W.; Loftis, David L., eds. Proceedings, 12th central hardwood forest conference; 1999 February 28-March 2; Lexington, KY. Gen. Tech. Rep. SRS-24. Asheville, NC: U.S. Department of Agriculture, Forest Service, Southern Research Station: 169-170. [Abstract].
60. Martin, Chester O.; Kiser James D. 2004. Managing special landscape features for forest bats, with emphasis on riparian areas and water sources. Bat Research News. 45(2): 62-63.
61. Massachusetts Natural Heritage and Endangered Species Program. 1984. Indiana bat (Myotis sodalis): Vespertilionidae--evening bats. In: Rare species fact sheets. Westborough, MA: Massachusetts Division of Fisheries and Wildlife, Massachusetts Natural Heritage and Endangered Species Program (producer). Available: http://www.mass.gov/dfwele/dfw/nhesp/nhfacts/myosod.pdf [2005, May 25].
62. Menzel, Michael A.; Menzel, Jennifer M.; Carter, Timothy C.; Ford, W. Mark; Edwards, John W. 2001. Review of the forest habitat relationships of the Indiana bat (Myotis sodalis). Gen. Tech. Rep. NE-284. Newton Square, PA: U.S. Department of Agriculture, Forest Service, Northeastern Research Station. 21 p.
63. Miller, G. S., Jr.; Allen, G. M. 1928. Myotis sodalis. United States National Museum. Bulletin 144: 130-135.
64. Miller, Nancy E.; Drobney, Ronald D.; Clawson, Richard L.; Callahan, E. V. 2002. Summer habitat in northern Missouri. In: Kurta, Allen; Kennedy, Jim, eds. The Indiana bat: biology and management of an endangered species. Austin, TX: Bat Conservation International: 165-171.
65. Mumford, Russell E.; Cope, James B. 1958. Summer records of Myotis sodalis in Indiana. Journal of Mammalogy. 39(4): 586-587.
66. Munson, Patrick J.; Keith, James H. 1984. Prehistoric raccoon predation on hibernating Myotis, Wyandotte Cave, Indiana. Journal of Mammalogy. 65(1): 152-155.
67. Murray, S. W.; Kurta, A. 2004. Nocturnal activity of the endangered Indiana bat (Myotis sodalis). Journal of Zoology. 262(2): 197-206.
68. Murray, Susan W. 2001. Variations in the diet of the Indiana bat. Bat Research News. 42(2): 35-36. [Abstract].
69. Murray, Susan W.; Kurta, Allen. 2002. Spatial and temporal variation in diet. In: Kurta, Allen; Kennedy, Jim, eds. The Indiana bat: biology and management of an endangered species. Austin, TX: Bat Conservation International: 182-192.
70. Myers, Ronald L. 2000. Fire in tropical and subtropical ecosystems. In: Brown, James K.; Smith, Jane Kapler, eds. Wildland fire in ecosystems: Effects of fire on flora. Gen. Tech. Rep. RMRS-GTR-42-vol. 2. Ogden, UT: U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station: 161-173.
71. O'Shea, Thomas J.; Clark, Donald R., Jr. 2002. An overview of contaminants and bats, with special reference to insecticides and the Indiana bat. In: Kurta, Allen; Kennedy, Jim. The Indiana bat: biology and management of an endangered species. Austin, TX: Bat Conservation International: 237-253.
72. Owen, Sheldon F.; Menzel, Michael A.; Edwards, John W.; Ford, W. Mark; Menzel, Jennifer M.; Chapman, Brian R.; Wood, Petra Bohall; Miller, Karl V. 2004. Bat activity in harvested and intact forest stands in the Allegheny Mountains. Northern Journal of Applied Forestry. 21(3): 154-159.
73. Paradiso, John L.; Greenhall, Arthur M. 1967. Longevity records for American bats. The American Midland Naturalist. 78(1): 251-252.
74. Quesada, Felix. 2003. Boswell Creek Watershed Healthy Forests Initiative Project. Biological Assessment BE 04-04-01. Lufkin, TX: U.S. Department of Agriculture, Forest Service, Sam Houston National Forest. 16 p.
75. Racey, P. A. 1982. Ecology of bat reproduction. In: Kunz, T. H., ed. Ecology of bats. New York: Plenum Press: 57-104.
76. Raesly, Richard L.; Gates, J. Edward. 1987. Winter habitat selection by north temperate cave bats. The American Midland Naturalist. 118(1): 15-31.
77. Richter, Andreas R.; Humphrey, Stephen R.; Cope, James B.; Brack, Virgil, Jr. 1993. Modified cave entrances: thermal effect on body mass and resulting decline of endangered Indiana bats (Myotis sodalis). Conservation Biology. 7(2): 407-415.
78. Romme, Russell C.; Tyrell, Karen; Brack, Virgil, Jr. 1995. Literature summary and habitat suitability index model: components of summer habitat for the Indiana bat, Myotis sodalis. Project C7188: Federal Aid Project E-1-7, Study No. 8. Bloomington, IN: Indiana Department of Natural Resources, Division of Fish and Wildlife. 174 pp.
79. Salyers, Jo; Tyrell, Karen; Brack, Virgil. 1996. Artificial roost structure use by Indiana bats in wooded areas in central Indiana. Bat Research News. 37(4): 148.
80. Schmidt, Angela C.; Tyrell, Karen; Glueck, Thomas. 2002. Environmental contaminants in bats collected from Missouri. In: Kurta, Allen; Kennedy, Jim, eds. The Indiana bat: biology and management of an endangered species. Austin, TX: Bat Conservation International: 228-236.
81. Schultz, John R. 2003. Appendix C - Biological Assessment. In: Prescribed Fire Environmental Assessment. Bradford, PA: U.S. Department of Agriculture, Allegheny National Forest. 66 p.
82. Shiflet, Thomas N., ed. 1994. Rangeland cover types of the United States. Denver, CO: Society for Range Management. 152 p.
83. Sparks, Dale W.; Simmons, Michael T.; Gummer, Curtis L.; Duchamp, Joseph E. 2003. Disturbance of roosting bats by woodpeckers and raccoons. Northeastern Naturalist. 10(1): 105-108.
84. Steffan, Terry. 2004. Appendix C: Biological assessment and evaluation. In: Trails End Re-entry Environmental Assessment. Warren, PA: U.S. Department of Agriculture, Forest Service, Allegheny National Forest, Marienville Ranger District. 107 p.
85. Stone, William E.; Battle, Ben L. 2004. Indiana bat habitat attributes at three spatial scales in northern Alabama. Bat Research News. 45(2): 71.
86. Thomas, Donald W. 1995. Hibernating bats are sensitive to nontactile human disturbance. Journal of Mammalogy. 76(3): 940-946.
87. Thomson, Christine E. 1982. Myotis sodalis. Mammalian Species. 163: 1-5.
88. Tibbels, Annie; Rice, Heidi; Foster, Rodney; Murray, Susan; Kurta, Allen. 2001. A southern bat beyond the northern edge of its range - Indiana bats at Tippy Dam. Bat Research News. 42(2): 38.
89. Timpone, John C.; Miller, Matthew N.; Murray, Kevin L.; Robbins, Lynn W. 2001. Day-roost characteristics and movements of the Indiana bat in northeast Missouri. Bat Research News. 42(4): 186.
90. Tuttle, Merlin D.; Kennedy, Jim. 2002. Thermal requirements during hibernation. In: Kurta, Allen; Kennedy, Jim, eds. The Indiana bat: biology and management of an endangered species. Austin, TX: Bat Conservation International: 68-78.
91. Tuttle, Merlin D.; Stevenson, Diane E. 1978. Variation in the cave environment and its biological implications. In: Zuber, Ron; Chester, James; Gilbert, Stephanie; Rhodes, Doug, eds. National cave management symposium: Proceedings; 1977 October 3-7; Big Sky, MT. Albuquerque, NM: Adobe Press: 108-121.
92. U.S. Department of the Interior, Fish and Wildlife Service. 1976. Endangered and threatened wildlife and plants: Determination of critical habitat for American crocodile, California condor, Indiana bat, and Florida manatee, [Online]. Federal Register. 41(187): 41914-41916. Available: ecos.fws.gov/SpeciesProfile?spcode=A000 [2005, August 19].
93. U.S. Department of the Interior, Fish and Wildlife Service. 1999. Agency draft: Indiana bat (Myotis sodalis) revised recovery plan. Fort Snelling, MN: U.S. Department of the Interior, Fish and Wildlife Service, Region 3. 53 p.
94. U.S. Department of the Interior, Fish and Wildlife Service. 1999. Biological opinion on the impacts of forest management and other activities to the bald eagle, Indiana bat, clubshell, and northern riffleshell on the Allegheny National Forest. [Place of publication unknown]: U.S. Department of the Interior, Fish and Wildlife Service. 94 p.
95. U.S. Department of the Interior, Fish and Wildlife Service. 2005. Bat, Indiana, [Online]. In: Threatened and Endangered Species System (TESS). [Washington, DC]: U.S. Department of the Interior, Fish and Wildlife Service (producer). Available: http://ecos.fws.gov/species_profile/servlet/gov.doi.species_profile.servlets.SpeciesProfile?spcode=A000. [2005, October 11].
96. U.S. Department of the Interior, Fish and Wildlife Service. 2016. Endangered Species Program, [Online]. Available: http://www.fws.gov/endangered/.
97. Van Lear, David H. 1996. Dynamics of coarse woody debris in southern forest ecosystems. In: McMinn, James W.; Crossley, D. A., Jr., eds. Biodiversity and coarse woody debris in southern forests: Proceedings of the workshop on coarse woody debris in southern forests: effects on biodiversity; 1993 October 18-20; Athens, GA. Gen. Tech. Rep. SE-94. Asheville, NC: U.S. Department of Agriculture, Forest Service, Southern Research Station: 10-17.
98. Wade, Dale D.; Brock, Brent L.; Brose, Patrick H.; Grace, James B.; Hoch, Greg A.; Patterson, William A., III. 2000. Fire in eastern ecosystems. In: Brown, James K.; Smith, Jane Kapler, eds. Wildland fire in ecosystems: Effects of fire on flora. Gen. Tech. Rep. RMRS-GTR-42-vol. 2. Ogden, UT: U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station: 53-96.
99. Warwick, Adam; Fredrickson, Leigh H.; Heitmeyer, Mickey. 2001. Distribution of bats in fragmented wetland forests of southeast Missouri. Bat Research News. 42(4): 187.
100. Whitaker, John O., Jr. 1972. Food habits of bats from Indiana. Canadian Journal of Zoology. 50: 877-883.
101. Whitaker, John O., Jr. 2004. Prey selection in a temperate zone insectivorous bat community. Journal of Mammalogy. 85(3): 460-469.
102. Widlak, James C. 1997. Biological opinion on the impacts of forest management and other activities to the Indiana bat on the Cherokee National Forest, Tennessee. Cookeville, TN: U.S. Department of the Interior, Fish and Wildlife Service. 38 p.
103. Widlak, James C. 1997. Biological opinion on the impacts of forest management and other activities to the Indiana bat on the Daniel Boone National Forest, Kentucky. Cookeville, TN: U.S. Department of the Interior, Fish and Wildlife Service. 24 p.
Scientists have confirmed the function of a gene that controls the awakening of trees from winter dormancy, a critical factor in their ability to adjust to environmental changes associated with climate change.
While other researchers have identified genes involved in producing the first green leaves of spring, the discovery of a master regulator in poplar trees (Populus species) could eventually lead to breeding plants that are better adapted for warmer climates.
The results of the study that began more than a decade ago at Oregon State University were published today in the Proceedings of the National Academy of Sciences, by scientists from Michigan Technological University and Oregon State.
"No one has ever isolated a controlling gene for this timing in a wild plant, outside of Arabidopsis, a small flowering plant related to mustard and cabbage," said Steve Strauss, co-author and distinguished professor of forest biotechnology at OSU. "This is the first time a gene that controls the timing of bud break in trees has been identified."
The timings of annual cycles — when trees open their leaves, when they produce flowers, when they go dormant — help trees adapt to changes in environmental signals like those associated with climate, but the genetics have to keep up, Strauss said.
While trees possess the genetic diversity to adjust to current conditions, climate models suggest that temperature and precipitation patterns in many parts of the world may expose trees to more stressful conditions in the future. Experts have suggested that some tree species may not be able to cope with these changes fast enough, whether by adaptation or migration. As a result, forest health may decline, trees may disappear from places they are currently found, and some species may even go extinct.
"For example, are there going to be healthy and widespread populations of Douglas fir in Oregon in a hundred years?" said Strauss. "That depends on the natural diversity that we have and how much the environment changes. Will there be sufficient genetic diversity around to evolve populations that can cope with a much warmer and likely drier climate? We just don't know."
Strauss called the confirmation of the bud-break gene — which scientists named EBB1 for short — a "first step" in developing the ability to engineer adaptability into trees in the future.
"Having this knowledge enables you to engineer changes when they might become urgent," he said.
Yordan Yordanov and Victor Busov at Michigan Tech worked with Cathleen Ma and Strauss at Oregon State to trace the function of EBB1 in buds and other plant tissues responsible for setting forth the first green shoots of spring. They developed modified trees that overproduced EBB1 genes and emerged from dormancy earlier in the year. They also showed that trees with less EBB1 activity emerged from dormancy later.
"The absence of EBB1 during dormancy allows the tree to progress through the physiological, developmental and adaptive changes leading to dormancy," said Busov, "while the expression of EBB1 in specific cell layers prior to bud-break enables reactivation of growth in the cells that develop into shoots and leaves, and re-entry into the active growth phase of the tree."
The study began when Strauss noticed poplar trees emerging earlier than others in an experimental field trial at Oregon State. One April morning, he found that four seedling trees in a 2.5-acre test plot were putting forth leaves at least a week before all the other trees. Strauss and Busov, a former post-doctoral researcher at Oregon State, led efforts to identify the genes responsible.
They found that EBB1 codes for a protein that helps to restart cell division in a part of the tree known as the meristem, which is analogous to stem cells in animals. EBB1 also plays a role in suppressing genes that prepare trees for dormancy in the fall and in other processes, such as nutrient cycling and root growth, that are critical for survival. Altogether, they found nearly 1,000 other poplar genes whose activity is affected by EBB1.
It's unlikely that plant breeders will use the finding any time soon, Strauss said. Breeders tend to rely on large clusters of genes that are associated with specific traits such as hardiness, tree shape or flowering. However, as more genes of this kind are identified, the opportunity to breed or engineer trees adapted to extreme conditions will grow.
Funding for the research was provided by the U.S. Department of Agriculture, the U.S. Department of Energy and the Tree Biosafety and Genomics Research Cooperative at Oregon State.
Steven Strauss | EurekAlert!
http://www.innovations-report.com/html/reports/life-sciences/discovery-of-a-bud-break-gene-could-lead-to-trees-adapted-for-a-changing-climate.html
Chromosomes are structures found in the center (nucleus) of cells that carry long pieces of DNA. DNA is the material that holds genes. It is the building block of the human body.
Chromosomes also contain proteins that help DNA exist in the proper form.
Chromosomes come in pairs. Normally, each cell in the human body has 23 pairs of chromosomes (46 total chromosomes). Half come from the mother; the other half come from the father.
Two of the chromosomes (the X and the Y chromosome) determine if you are born a boy or a girl (your gender). They are called sex chromosomes:
- Females have 2 X chromosomes.
- Males have 1 X and 1 Y chromosome.
The mother gives an X chromosome to the child. The father may contribute an X or a Y. The chromosome from the father determines if the baby is a girl or a boy.
The remaining chromosomes are called autosomal chromosomes. They are known as chromosome pairs 1 through 22.
http://umm.edu/Health/Medical/Ency/Articles/Chromosome
The history of Rome in the Middle Ages, bewildering in its detail, is essentially that of two institutions, the papacy and the commune of Rome. In the 5th cent. the Goths ruled Italy from Ravenna, their capital. Odoacer and Theodoric the Great kept the old administration of Rome under Roman law, with Roman officials. The city, whose population was to remain less than 50,000 throughout the Middle Ages, suffered severely from the wars between the Goths and Byzantines. In 552, Narses conquered Rome for Byzantium and became the first of the exarchs (viceroys) who ruled Italy from Ravenna. Under Byzantine rule commerce declined, and the senate and consuls disappeared.
Pope Gregory I (590–604), one of the greatest Roman leaders of all time, began to emancipate Rome from the exarchs. Sustained by the people, the popes soon exercised greater power in Rome than did the imperial governors, and many secular buildings were converted into churches. The papal elections were, for the next 12 centuries, the main events in Roman history. Two other far-reaching developments (7th–8th cent.) were the division of the people into four classes (clergy, nobility, soldiers, and the lowest class) and the emergence of the Papal States.
The coronation (800) at Rome of Charlemagne as emperor of the West ended all question of Byzantine suzerainty over Rome, but it also inaugurated an era characterized by the ambiguous relationship between the emperors and the popes. That era was punctuated by visits to the city by the German kings, to be crowned emperor or to secure the election of a pope to their liking or to impose their will on the pope. In 846, Rome was sacked by the Arabs; the Leonine walls were built to protect the city, but they did not prevent the frequent occupations and plunderings of the city by Christian powers.
By the 10th cent., Rome and the papacy had reached their lowest point. Papal elections, originally exercised by the citizens of Rome, had come under the control of the great noble families, among whom the Frangipani and Pierleone families and later the Orsini and the Colonna were the most powerful. Each of these would rather have torn Rome apart than allowed the other families to gain undue influence. They built fortresses in the city (often improvised transformations of the ancient palaces and theaters) and ruled Rome from them.
From 932 to 954, Alberic, a very able man, governed Rome firmly and restored its self-respect, but after his death and after the proceedings that accompanied the coronation of Otto I as emperor, Rome relapsed into chaos, and the papal dignity once more became the pawn of the emperors and of local feudatories. Contending factions often elected several popes at once. Gregory VII reformed these abuses and strongly claimed the supremacy of the church over the municipality, but he himself ended as an exile, Emperor Henry IV having taken Rome in 1084. The Normans under Robert Guiscard came to rescue Gregory and thoroughly sacked the city on the same occasion (1084).
Papal authority was challenged in the 12th cent. by the communal movement. A commune was set up (1144–55), led by Arnold of Brescia, but it was subdued by the intervention of Emperor Frederick I. Finally, a republic under papal patronage was established, headed by an elected senator. However, civil strife continued between popular and aristocratic factions and between Guelphs and Ghibellines. The commune made war to subdue neighboring cities, for it pretended to rule over the Papal States, particularly the duchy of Rome, which included Latium and parts of Tuscany. Innocent III controlled the government of the city, but it regained its autonomy after the accession of Emperor Frederick II. Later in the 13th cent. foreign senators began to be chosen; among them were Brancaleone degli Andalò (1252–58) and Charles I of Naples.
During the "Babylonian captivity" of the popes at Avignon (1309–78) Rome was desolate, economically ruined, and in constant turmoil. Cola di Rienzi became the champion of the people and tried to revive the ancient Roman institutions, as envisaged also by Petrarch and Dante; in 1347 he was made tribune, but his dreams were doomed. Cardinal Albornoz temporarily restored the papal authority over Rome, but the Great Schism (1378–1417) intervened. Once more a republic was set up. In 1420, Martin V returned to Rome, and with him began the true and effective dominion of the popes in Rome.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved. | http://www.factmonster.com/encyclopedia/world/rome-city-italy-medieval-rome.html |
A collider is a type of particle accelerator involving directed beams of particles. Colliders may either be ring accelerators or linear accelerators, and may collide a single beam of particles against a stationary target or two beams head-on.
Colliders are used as a research tool in particle physics by accelerating particles to very high kinetic energy and letting them impact other particles. Analysis of the byproducts of these collisions gives scientists good evidence of the structure of the subatomic world and the laws of nature governing it. These may become apparent only at high energies and for tiny periods of time, and therefore may be hard or impossible to study in other ways.
In particle physics one gains knowledge about elementary particles by accelerating particles to very high kinetic energy and letting them impact on other particles. For sufficiently high energy, a reaction happens that transforms the particles into other particles. Detecting these products gives insight into the physics involved.
To do such experiments there are two possible setups:
- Fixed target setup: A beam of particles (the projectiles) is accelerated with a particle accelerator, and as collision partner, one puts a stationary target into the path of the beam.
- Collider: Two beams of particles are accelerated and the beams are directed against each other, so that the particles collide while flying in opposite directions. This process can also be used to produce exotic particles such as strange particles and antimatter.
The collider setup is harder to construct but has the great advantage that according to special relativity the energy of an inelastic collision between two particles approaching each other with a given velocity is not just 4 times as high as in the case of one particle resting (as it would be in non-relativistic physics); it can be orders of magnitude higher if the collision velocity is near the speed of light.
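As a rough, illustrative check of this advantage (the 6500 GeV beam energy and the proton target below are example values, not figures from this article), the following Python sketch compares the centre-of-mass energy available in a symmetric head-on collision with that of the same beam hitting a stationary proton, using the standard relativistic invariant s = (p1 + p2)^2:

```python
import math

# Compare the centre-of-mass energy of a head-on collider with that of a
# fixed-target experiment, in natural units (energies and masses in GeV).
M_PROTON = 0.938  # proton rest mass, GeV/c^2

def sqrt_s_collider(beam_energy):
    """Two identical ultra-relativistic beams colliding head-on: sqrt(s) ~ 2E."""
    return 2.0 * beam_energy

def sqrt_s_fixed_target(beam_energy, projectile_mass=M_PROTON, target_mass=M_PROTON):
    """Same beam hitting a stationary target: s = m_p^2 + m_t^2 + 2 * E_beam * m_t."""
    return math.sqrt(projectile_mass**2 + target_mass**2 + 2.0 * beam_energy * target_mass)

beam = 6500.0  # GeV per beam (an LHC-scale proton energy, chosen for illustration)
print(f"head-on collider : sqrt(s) = {sqrt_s_collider(beam):8.0f} GeV")
print(f"fixed target     : sqrt(s) = {sqrt_s_fixed_target(beam):8.1f} GeV")
```

For these numbers the head-on collision makes roughly 13 000 GeV available, against only about 110 GeV for the fixed target, which is why all modern energy-frontier machines are colliders.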
The first serious proposal for a collider originated with a group at the Midwestern Universities Research Association (MURA). This group proposed building two tangent radial-sector FFAG accelerator rings. Tihiro Ohkawa, one of the authors of the first paper, went on to develop a radial-sector FFAG accelerator design that could accelerate two counterrotating particle beams within a single ring of magnets. The third FFAG prototype built by the MURA group was a 50 MeV electron machine built in 1961 to demonstrate the feasibility of this concept.
Gerard K. O'Neill proposed using a single accelerator to inject particles into a pair of tangent storage rings. As in the original MURA proposal, collisions would occur in the tangent section. The benefit of storage rings is that the storage ring can accumulate a high beam flux from an injection accelerator that achieves a much lower flux.
The first electron-positron colliders were built in the late 1950s and early 1960s in Italy, at the Istituto Nazionale di Fisica Nucleare in Frascati near Rome, by the Austrian-Italian physicist Bruno Touschek, and in the US, by the Stanford-Princeton team that included William C. Barber, Bernard Gittelman, Gerry O'Neill, and Burton Richter. Around the same time, in the early 1960s, the VEP-1 electron-electron collider was independently developed and built under supervision of Gersh Budker in the Soviet Institute of Nuclear Physics.
In 1966, work began on the Intersecting Storage Rings at CERN, and in 1971, this collider was operational. The ISR was a pair of storage rings that accumulated particles injected by the CERN Proton Synchrotron. This was the first hadron collider, as all of the earlier efforts had worked with electrons or with electrons and positrons.
| Accelerator | Centre, city, country | First operation | Accelerated particles | Max energy per beam, GeV | Luminosity, 10³⁰ cm⁻² s⁻¹ | Perimeter (length), km |
| --- | --- | --- | --- | --- | --- | --- |
| VEPP-2000 | INP, Novosibirsk, Russia | 2006 | e+e− | 1.0 | 100 | 0.024 |
| VEPP-4M | INP, Novosibirsk, Russia | 1994 | e+e− | 6 | 20 | 0.366 |
| BEPC II | IHEP, Beijing, China | 2008 | e+e− | 3.7 | 700 | 0.240 |
| KEKB | KEK, Tsukuba, Japan | 1999 | e+e− | 8.5 (e−), 4 (e+) | 21100 | 3.016 |
| RHIC | BNL, USA | 2000 | pp, Au-Au, Cu-Cu, d-Au | 100/n | 10, 0.005, 0.02, 0.07 | 3.834 |
| LHC | CERN, Geneva, Switzerland | 2008 | pp, Pb-Pb | 6500 (planned 7000), 1580/n (planned 2760/n) | 10000 (reached 7700) | 26.659 |
- List of known colliders
- Large Electron–Positron Collider
- Large Hadron Collider
- Very Large Hadron Collider
- Relativistic Heavy Ion Collider
- International Linear Collider
- Storage ring
- Kerst, D. W.; Cole, F. T.; Crane, H. R.; Jones, L. W.; et al. (1956). "Attainment of Very High Energy by Means of Intersecting Beams of Particles". Physical Review 102 (2): 590–591. Bibcode:1956PhRv..102..590K. doi:10.1103/PhysRev.102.590.
- US patent 2890348, Tihiro Ohkawa, "Particle Accelerator", issued 1959-06-09
- Science: Physics & Fantasy, Time, Monday, Feb. 11, 1957.
- O'Neill, G. (1956). "Storage-Ring Synchrotron: Device for High-Energy Physics Research" (PDF). Physical Review 102 (5): 1418–1419. Bibcode:1956PhRv..102.1418O. doi:10.1103/PhysRev.102.1418.
- Shiltsev, V. "The first colliders: AdA, VEP-1 and Princeton-Stanford". arXiv:1307.3116.
- Kjell Johnsen, The ISR in the time of Jentschke, CERN Courier, June 1, 2003.
- Shiltsev, V. "High energy particle colliders: past 20 years, next 20 years and beyond, Physics-Uspekhi 55.10 (2012) 965". doi:10.3367/UFNe.0182.201210d.1033/meta (inactive 2016-01-18).
- Shiltsev, V. "Crystal Ball: On the Future High Energy Colliders". arXiv:1511.01934.
- High Energy Collider Parameters
- Handbook of accelerator physics and engineering, edited by A. Chao, M. Tigner, 1999, p.11.
- DAFNE Achievements | https://en.wikipedia.org/wiki/Particle_collider |
The most important carbohydrate is glucose, a simple sugar (monosaccharide) that is metabolized by nearly all known organisms. Glucose and other carbohydrates are part of a wide variety of metabolic pathways across species: plants synthesize carbohydrates from carbon dioxide and water by photosynthesis, storing the absorbed energy internally, often in the form of starch or lipids. Plant components are consumed by animals and fungi, and used as fuel for cellular respiration. Oxidation of one gram of carbohydrate yields approximately 4 kcal of energy, while the oxidation of one gram of lipids yields about 9 kcal. Energy obtained from metabolism (e.g., oxidation of glucose) is usually stored temporarily within cells in the form of ATP. Organisms capable of aerobic respiration metabolize glucose and oxygen to release energy, with carbon dioxide and water as byproducts.
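As a minimal illustration of these energy densities (the portion sizes below are invented example values, and the ~4 kcal/g figure for protein is a standard approximation not quoted in this article), the usual back-of-the-envelope calculation simply weights each macronutrient by its caloric yield:

```python
# Approximate caloric yields per gram, as quoted above for carbohydrate and
# lipid; the value for protein is the standard ~4 kcal/g approximation.
KCAL_PER_GRAM = {"carbohydrate": 4.0, "lipid": 9.0, "protein": 4.0}

def energy_kcal(grams_by_macronutrient):
    """Rough metabolizable energy of a food portion, in kcal."""
    return sum(KCAL_PER_GRAM[name] * grams
               for name, grams in grams_by_macronutrient.items())

# Example: a hypothetical snack with 30 g carbohydrate, 10 g lipid and 5 g protein.
print(energy_kcal({"carbohydrate": 30.0, "lipid": 10.0, "protein": 5.0}))  # 230.0
```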
Carbohydrates can be chemically divided into complex and simple. Simple carbohydrates consist of single or double sugar units (monosaccharides and disaccharides, respectively). Sucrose or table sugar (a disaccharide) is a common example of a simple carbohydrate. Complex carbohydrates contain three or more sugar units linked in a chain, with most containing hundreds to thousands of sugar units. They are digested by enzymes to release the simple sugars. Starch, for example, is a polymer of glucose units and is typically broken down to glucose. Cellulose is also a polymer of glucose but it cannot be digested by most organisms. Bacteria that produce enzymes to digest cellulose live inside the gut of some mammals, such as cows, and when these mammals eat plants, the cellulose is broken down by the bacteria and some of it is released into the gut.
Doctors and scientists once believed that eating complex carbohydrates instead of sugars would help maintain lower blood glucose. Numerous studies suggest, however, that both sugars and starches produce an unpredictable range of glycemic and insulinemic responses. While some studies support a more rapid absorption of sugars relative to starches other studies reveal that many complex carbohydrates such as those found in bread, rice, and potatoes have glycemic indices similar to or higher than simple carbohydrates such as sucrose. Sucrose, for example, has a glycemic index lower than expected because the sucrose molecule is half fructose, which has little effect on blood glucose. The value of classifying carbohydrates as simple or complex is questionable. The glycemic index is a better predictor of a carbohydrate's effect on blood glucose.
Carbohydrates are a superior short-term fuel for organisms because they are simpler to metabolize than fats or those amino acids (components of proteins) that can be used for fuel. In animals, the most important carbohydrate is glucose. The concentration of glucose in the blood is used as the main control for the central metabolic hormone, insulin. Starch, and cellulose in a few organisms (e.g., some animals (such as termites) and some microorganisms (such as protists and bacteria)), both being glucose polymers, are disassembled during digestion and absorbed as glucose. Some simple carbohydrates have their own enzymatic oxidation pathways, as do only a few of the more complex carbohydrates. The disaccharide lactose, for instance, requires the enzyme lactase to be broken into its monosaccharide components; many animals lack this enzyme in adulthood.
Carbohydrates are typically stored as long polymers of glucose molecules with glycosidic bonds for structural support (e.g. chitin, cellulose) or for energy storage (e.g. glycogen, starch). However, the strong affinity of most carbohydrates for water makes storage of large quantities of carbohydrates inefficient due to the large molecular weight of the solvated water-carbohydrate complex. In most organisms, excess carbohydrates are regularly catabolised to form acetyl-CoA, which is a feed stock for the fatty acid synthesis pathway; fatty acids, triglycerides, and other lipids are commonly used for long-term energy storage. The hydrophobic character of lipids makes them a much more compact form of energy storage than hydrophilic carbohydrates. However, animals, including humans, lack the necessary enzymatic machinery and so do not synthesize glucose from lipids, though glycerol can be converted to glucose.
Oligosaccharides and/or polysaccharides are typically cleaved into smaller monosaccharides by enzymes called glycoside hydrolases. The monosaccharide units then enter monosaccharide catabolism. Organisms vary in the range of monosaccharides they can absorb and use and they can also vary in the range of more complex carbohydrates they are capable of disassembling.
The pentose phosphate pathway acts in the conversion of hexoses into pentoses and in NADPH regeneration. NADPH is an essential antioxidant in cells which prevents oxidative damage and acts as a precursor for the production of many biomolecules.

The hormone insulin is the primary regulatory signal in animals, suggesting that the basic mechanism is very old and very central to animal life. When present, it causes many tissue cells to take up glucose from the circulation, causes some cells to store glucose internally in the form of glycogen, causes some cells to take in and hold lipids, and in many cases controls cellular electrolyte balances and amino acid uptake as well. Its absence turns off glucose uptake into cells, reverses electrolyte adjustments, begins glycogen breakdown and glucose release into the circulation by some cells, begins lipid release from lipid storage cells, etc. The level of circulatory glucose (known informally as "blood sugar") is the most important signal to the insulin-producing cells. Because the level of circulatory glucose is largely determined by the intake of dietary carbohydrates, diet controls major aspects of metabolism via insulin. In humans, insulin is made by beta cells in the pancreas, fat is stored in adipose tissue cells, and glycogen is both stored and released as needed by liver cells. Regardless of insulin levels, no glucose is released to the blood from internal glycogen stores from muscle cells.

The hormone glucagon, on the other hand, has an effect opposite to that of insulin, forcing the conversion of glycogen in liver cells to glucose, which is then released into the blood. Muscle cells, however, lack the ability to export glucose into the blood. The release of glucagon is precipitated by low levels of blood glucose. Other hormones, notably growth hormone, cortisol, and certain catecholamines (such as epinephrine), have glucoregulatory actions similar to glucagon.
^Blaack, EE; Saris, WHM (1995). "Health Aspects of Various Digestible Carbohydrates". Nutritional Research15 (10): 1547-73.
^Wolever, Thomas M. S. (2006), The Glycaemic Index: A Physiological Classification of Dietary Carbohydrate, CABI, pg. 65, ISBN 9781845930516. “Indeed, blood glucose responses elicited by pure sugars and fruits suggest rapid absorption because the blood glucose concentration rises more quickly and falls more rapidly than after bread (Wolever et al., 1993; Lee and Wolever, 1998). Further evidence that sugars are rapidly absorbed is provided by recent studies indicating that the switch from oxidation of fat to carbohydrate occurs more rapidly after a high-sucrose meal than a high-starch meal, with the increase in carbohydrate oxidation being sustained for longer after the starch than the sucrose meal (Daly et al., 2000).”
^Jenkins, DJ; Jenkins, AL; Wolever, TM; Josse, RG; Wong, GS (1984). "The glycaemic response to carbohydrate foods". The Lancet324: 388–391. doi:10.1016/s0140-6736(84)90554-3. | https://en.wikipedia.org/wiki/Carbohydrate_metabolism |
The pollen tube develops from the pollen grain to initiate fertilization; the pollen grain divides into two sperm cells by mitosis; one of the sperm cells unites with the egg cell during fertilization.
Once the ovule is fertilized, a diploid sporophyte is produced, which gives rise to the embryo enclosed in a seed coat of tissue from the parent plant.

Fertilization and seed development can take years; the seed that is formed is made up of three tissues: the seed coat, the gametophyte, and the embryo.
Pine trees are conifers (cone bearing) and carry both male and female sporophylls on the same mature sporophyte. Therefore, they are monoecious plants. Like all gymnosperms, pines are heterosporous, generating two different types of spores: male microspores and female megaspores. In the male cones (staminate cones), the microsporocytes give rise to pollen grains by meiosis. In the spring, large amounts of yellow pollen are released and carried by the wind. Some gametophytes will land on a female cone. Pollination is defined as the initiation of pollen tube growth. The pollen tube develops slowly as the generative cell in the pollen grain divides into two haploid sperm cells by mitosis. At fertilization, one of the sperm cells will finally unite its haploid nucleus with the haploid nucleus of an egg cell.
Female cones (ovulate cones) contain two ovules per scale. One megaspore mother cell (megasporocyte) undergoes meiosis in each ovule. Three of the four cells break down leaving only a single surviving cell which will develop into a female multicellular gametophyte. It encloses archegonia (an archegonium is a reproductive organ that contains a single large egg). Upon fertilization, the diploid egg will give rise to the embryo, which is enclosed in a seed coat of tissue from the parent plant. Fertilization and seed development is a long process in pine trees: it may take up to two years after pollination. The seed that is formed contains three generations of tissues: the seed coat that originates from the sporophyte tissue, the gametophyte that will provide nutrients, and the embryo itself.
In the life cycle of a conifer, the sporophyte (2n) phase is the longest phase. The gametophytes (1n), microspores and megaspores, are reduced in size. This phase may take more than one year between pollination and fertilization while the pollen tube grows towards the megasporocyte (2n), which undergoes meiosis to form megaspores. The megaspores will mature into eggs (1n).
https://www.boundless.com/biology/textbooks/boundless-biology-textbook/seed-plants-26/gymnosperms-159/life-cycle-of-a-conifer-622-11843/
Joint, in anatomy, a structure that separates two or more adjacent elements of the skeletal system. Depending on the type of joint, such separated elements may or may not move on one another. This article discusses the joints of the human body—particularly their structure but also their ligaments, nerve and blood supply, and nutrition. Although the discussion focuses on human joints, its content is applicable to joints of vertebrates in general and mammals in particular. For information about the disorders and injuries that commonly affect human joints, see joint disease.
In order to describe the main types of joint structures, it is helpful first to summarize the motions made possible by joints. These motions include spinning, swinging, gliding, rolling, and approximation.
Spin is a movement of a bone around its own long axis; it is denoted by the anatomical term rotation. An important example of spin is provided by the radius (outer bone of the forearm); this bone can spin upon the lower end of the humerus (upper arm) in all positions of the elbow. When an individual presses the back of the hand against the mouth, the forearm is pronated, or twisted; when the palm of the hand is pressed against the mouth, the forearm is supinated, or untwisted. Pronation is caused by medial (inward) rotation of the radius and supination by lateral (outward) rotation.
Swing, or angular movement, brings about a change in the angle between the long axis of the moving bone and some reference line in the fixed bone. Flexion (bending) and extension (straightening) of the elbow are examples of swing. A swing (to the right or left) of one bone away from another is called abduction; the reverse, adduction.
Approximation denotes the movement caused by pressing or pulling one bone directly toward another—i.e., by a “translation” in the physical sense. The reverse of approximation is separation. Gliding and rolling movements occur only within synovial joints and cause a moving bone to swing.
Types of joints
Considered temporally, joints are either transient or permanent. The bones of a transient joint fuse together sooner or later, but always after birth. All the joints of the skull, for example, are transient except those of the middle ear and those between the lower jaw and the braincase. The bones of a permanent joint do not fuse except as the result of disease or surgery. Such fusion is called arthrodesis. All permanent and some transient joints permit movement. Movement of the latter may be temporary, as with the roof bones of an infant’s skull during birth, or long-term, as with the joints of the base of the skull during postnatal development.
There are two basic structural types of joint: diarthrosis, in which fluid is present, and synarthrosis, in which there is no fluid. All the diarthroses (commonly called synovial joints) are permanent. Some of the synarthroses are transient; others are permanent.
Synarthroses are divided into three classes: fibrous, symphysis, and cartilaginous. | http://www.britannica.com/science/joint-skeleton |
The Doppler effect (or Doppler shift) is the change in frequency of a wave (or other periodic event) for an observer moving relative to its source. It is named after the Austrian physicist Christian Doppler, who proposed it in 1842 in Prague. It is commonly heard when a vehicle sounding a siren or horn approaches, passes, and recedes from an observer. Compared to the emitted frequency, the received frequency is higher during the approach, identical at the instant of passing by, and lower during the recession.
When the source of the waves is moving toward the observer, each successive wave crest is emitted from a position closer to the observer than the previous wave. Therefore, each wave takes slightly less time to reach the observer than the previous wave. Hence, the time between the arrival of successive wave crests at the observer is reduced, causing an increase in the frequency. While they are travelling, the distance between successive wave fronts is reduced, so the waves "bunch together". Conversely, if the source of waves is moving away from the observer, each wave is emitted from a position farther from the observer than the previous wave, so the arrival time between successive waves is increased, reducing the frequency. The distance between successive wave fronts is then increased, so the waves "spread out".
For waves that propagate in a medium, such as sound waves, the velocity of the observer and of the source are relative to the medium in which the waves are transmitted. The total Doppler effect may therefore result from motion of the source, motion of the observer, or motion of the medium. Each of these effects is analyzed separately. For waves which do not require a medium, such as light or gravity in general relativity, only the relative difference in velocity between the observer and the source needs to be considered.
Doppler first proposed this effect in 1842 in his treatise "Über das farbige Licht der Doppelsterne und einiger anderer Gestirne des Himmels" (On the coloured light of the binary stars and some other stars of the heavens). The hypothesis was tested for sound waves by Buys Ballot in 1845. He confirmed that the sound's pitch was higher than the emitted frequency when the sound source approached him, and lower than the emitted frequency when the sound source receded from him. Hippolyte Fizeau discovered independently the same phenomenon on electromagnetic waves in 1848 (in France, the effect is sometimes called "effet Doppler-Fizeau" but that name was not adopted by the rest of the world as Fizeau's discovery was six years after Doppler's proposal). In Britain, John Scott Russell made an experimental study of the Doppler effect (1848).
In classical physics, where the speeds of the source and the receiver relative to the medium are lower than the velocity of waves in the medium, the relationship between the observed frequency f and the emitted frequency f0 is given by:

f = f0 · (c + v_r) / (c + v_s)

where
- c is the velocity of waves in the medium;
- v_r is the velocity of the receiver relative to the medium; positive if the receiver is moving towards the source (and negative in the other direction);
- v_s is the velocity of the source relative to the medium; positive if the source is moving away from the receiver (and negative in the other direction).

The frequency is decreased if either is moving away from the other.
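As a minimal sketch of how this formula behaves (the 440 Hz tone, the 343 m/s speed of sound and the 30 m/s vehicle speed are illustrative values, not taken from the article), the following Python snippet evaluates the classical relation for a source approaching and then receding from a stationary observer:

```python
def observed_frequency(f_source, c, v_receiver=0.0, v_source=0.0):
    """Classical Doppler formula: f = f0 * (c + v_r) / (c + v_s).

    v_receiver > 0 : receiver moving towards the source
    v_source   > 0 : source moving away from the receiver
    All speeds are assumed to be well below the wave speed c in the medium.
    """
    return f_source * (c + v_receiver) / (c + v_source)

c_sound = 343.0  # speed of sound in air, m/s (roughly 20 degrees C)
f0 = 440.0       # emitted frequency, Hz

# Source closing on the observer at 30 m/s (v_source is negative: not receding).
print(f"approaching: {observed_frequency(f0, c_sound, v_source=-30.0):.1f} Hz")
# Source moving away from the observer at 30 m/s.
print(f"receding:    {observed_frequency(f0, c_sound, v_source=+30.0):.1f} Hz")
```

The approaching case comes out near 482 Hz and the receding case near 405 Hz, i.e. higher and then lower than the emitted 440 Hz, as described above.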
The above formula assumes that the source is either directly approaching or receding from the observer. If the source approaches the observer at an angle (but still with a constant velocity), the observed frequency that is first heard is higher than the object's emitted frequency. Thereafter, there is a monotonic decrease in the observed frequency as it gets closer to the observer, through equality when it is coming from a direction perpendicular to the relative motion (and was emitted at the point of closest approach; but when the wave is received, the source and observer will no longer be at their closest), and a continued monotonic decrease as it recedes from the observer. When the observer is very close to the path of the object, the transition from high to low frequency is very abrupt. When the observer is far from the path of the object, the transition from high to low frequency is gradual.
If the speeds v_s and v_r are small compared to the speed of the wave, the relationship between the observed frequency and the emitted frequency is approximately:

- Observed frequency: f = (1 + Δv/c) · f0
- Change in frequency: Δf = (Δv/c) · f0

where
- Δv is the velocity of the receiver relative to the source: it is positive when the source and the receiver are moving towards each other.
To understand what happens, consider the following analogy. Someone throws one ball every second at a man. Assume that balls travel with constant velocity. If the thrower is stationary, the man will receive one ball every second. However, if the thrower is moving towards the man, he will receive balls more frequently because the balls will be less spaced out. The inverse is true if the thrower is moving away from the man. So it is actually the wavelength which is affected; as a consequence, the received frequency is also affected. It may also be said that the velocity of the wave remains constant whereas wavelength changes; hence frequency also changes.
With an observer stationary relative to the medium, if a moving source is emitting waves with an actual frequency f0 (in this case, the wavelength is changed while the transmission velocity of the wave stays constant; note that the transmission velocity of the wave does not depend on the velocity of the source), then the observer detects waves with a frequency given by

f = f0 · c / (c + v_s)

with v_s, as above, positive when the source moves away from the observer.

A similar analysis for a moving observer and a stationary source (in this case, the wavelength stays constant, but due to the motion, the rate at which the observer receives waves and hence the transmission velocity of the wave [with respect to the observer] is changed) yields the observed frequency

f = f0 · (c + v_r) / c

with v_r again positive when the receiver moves towards the source.
These can be generalized into the equation that was presented in the previous section.
An interesting effect was predicted by Lord Rayleigh in his classic book on sound: if the source is moving at twice the speed of sound, a musical piece emitted by that source would be heard in correct time and tune, but backwards. The Doppler effect with sound is only clearly heard with objects moving at high speed, as a frequency change of a full musical tone requires a source speed of around 40 meters per second, and smaller changes in frequency can easily be confused with changes in the amplitude of the sounds from moving emitters. Neil A. Downie has demonstrated how the Doppler effect can be made much more easily audible by using an ultrasonic (e.g. 40 kHz) emitter on the moving object. The observer then uses a heterodyne frequency converter, as used in many bat detectors, to listen to a band around 40 kHz. In this case, with the bat detector tuned so that a stationary emitter gives a frequency of 2000 Hz, the observer will perceive a frequency shift of a whole tone, 240 Hz, if the emitter travels at 2 meters per second.
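The figures quoted above are easy to verify with the low-speed approximation Δf ≈ f · v / c (the 343 m/s speed of sound assumed below is not stated in the passage; with a slightly lower value the result rounds to the quoted 240 Hz):

```python
# Check of the bat-detector example: a 40 kHz emitter moving at 2 m/s.
c_sound = 343.0    # assumed speed of sound in air, m/s
f_emit = 40_000.0  # ultrasonic emitter frequency, Hz
v = 2.0            # emitter speed towards the listener, m/s

delta_f = f_emit * v / c_sound  # low-speed approximation of the Doppler shift
print(f"Doppler shift: {delta_f:.0f} Hz")  # about 233 Hz, roughly the quoted 240 Hz
```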
The siren on a passing emergency vehicle will start out higher than its stationary pitch, slide down as it passes, and continue lower than its stationary pitch as it recedes from the observer. Astronomer John Dobson explained the effect thus:
- "The reason the siren slides is because it doesn't hit you."
In other words, if the siren approached the observer directly, the pitch would remain constant until the vehicle hit him, and then immediately jump to a new lower pitch. Because the vehicle passes by the observer, the radial velocity does not remain constant, but instead varies as a function of the angle between his line of sight and the siren's velocity:
v_radial = v · cos(θ), where v is the vehicle's speed and θ is the angle between the object's forward velocity and the line of sight from the object to the observer.
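A small sketch makes the "slide" concrete (the road geometry, vehicle speed and siren tone below are invented for illustration, and the propagation delay of the sound is ignored):

```python
import math

def perceived_frequency(f0, c, v, x, d):
    """Frequency heard from a siren moving along a straight road.

    f0 : emitted frequency (Hz)        c : speed of sound (m/s)
    v  : vehicle speed (m/s)           d : observer's distance from the road (m)
    x  : vehicle position along the road, 0 at the point closest to the observer (m)
    Only the radial component v * cos(theta) of the velocity shifts the pitch.
    """
    r = math.hypot(x, d)
    v_radial = v * (-x) / r         # closing speed towards the observer
    return f0 * c / (c - v_radial)  # moving-source formula

c, v, f0, d = 343.0, 30.0, 440.0, 10.0  # illustrative values
for x in (-200.0, -50.0, 0.0, 50.0, 200.0):
    print(f"x = {x:6.0f} m  ->  {perceived_frequency(f0, c, v, x, d):6.1f} Hz")
```

The pitch stays near 482 Hz while the vehicle is far away and approaching, passes through the emitted 440 Hz at the closest point, and settles near 405 Hz as it recedes.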
The Doppler effect for electromagnetic waves such as light is of great use in astronomy and results in either a so-called redshift or blueshift. It has been used to measure the speed at which stars and galaxies are approaching or receding from us; that is, their radial velocities. This may be used to detect if an apparently single star is, in reality, a close binary, to measure the rotational speed of stars and galaxies, or to detect exoplanets. (Note that redshift is also used to measure the expansion of space, but that this is not truly a Doppler effect.)
The use of the Doppler effect for light in astronomy depends on our knowledge that the spectra of stars are not homogeneous. They exhibit absorption lines at well defined frequencies that are correlated with the energies required to excite electrons in various elements from one level to another. The Doppler effect is recognizable in the fact that the absorption lines are not always at the frequencies that are obtained from the spectrum of a stationary light source. Since blue light has a higher frequency than red light, the spectral lines of an approaching astronomical light source exhibit a blueshift and those of a receding astronomical light source exhibit a redshift.
Among the nearby stars, the largest radial velocities with respect to the Sun are +308 km/s (BD-15°4041, also known as LHS 52, 81.7 light-years away) and -260 km/s (Woolley 9722, also known as Wolf 1106 and LHS 64, 78.2 light-years away). Positive radial velocity means the star is receding from the Sun, negative that it is approaching.
The Doppler effect is used in some types of radar, to measure the velocity of detected objects. A radar beam is fired at a moving target — e.g. a motor car, as police use radar to detect speeding motorists — as it approaches or recedes from the radar source. Each successive radar wave has to travel farther to reach the car, before being reflected and re-detected near the source. As each wave has to move farther, the gap between each wave increases, increasing the wavelength. In some situations, the radar beam is fired at the moving car as it approaches, in which case each successive wave travels a lesser distance, decreasing the wavelength. In either situation, calculations from the Doppler effect accurately determine the car's velocity. Moreover, the proximity fuze, developed during World War II, relies upon Doppler radar to detonate explosives at the correct time, height, distance, etc.
Because the Doppler shift affects the wave incident upon the target as well as the wave reflected back to the radar, the change in frequency observed by a radar due to a target moving at relative velocity Δv is twice that from the same target emitting a wave:

Δf = (2 · Δv / c) · f0
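As a sketch of the magnitudes involved (the 24 GHz radar frequency and the 30 m/s target speed are illustrative values, not from the article):

```python
C_LIGHT = 299_792_458.0  # speed of light, m/s

def radar_doppler_shift(f_radar, v_target):
    """Approximate two-way Doppler shift for a target closing at v_target (m/s):
    the wave is shifted once on the way out and once again on reflection."""
    return 2.0 * v_target / C_LIGHT * f_radar

f_radar = 24.0e9  # a 24 GHz traffic radar (illustrative)
v_car = 30.0      # car closing at 30 m/s, about 108 km/h
print(f"Doppler shift: {radar_doppler_shift(f_radar, v_car):.0f} Hz")  # ~4800 Hz
```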
Medical imaging and blood flow measurement
An echocardiogram can, within certain limits, produce an accurate assessment of the direction of blood flow and the velocity of blood and cardiac tissue at any arbitrary point using the Doppler effect. One of the limitations is that the ultrasound beam should be as parallel to the blood flow as possible. Velocity measurements allow assessment of cardiac valve areas and function, any abnormal communications between the left and right side of the heart, any leaking of blood through the valves (valvular regurgitation), and calculation of the cardiac output. Contrast-enhanced ultrasound using gas-filled microbubble contrast media can be used to improve velocity or other flow-related medical measurements.
Although "Doppler" has become synonymous with "velocity measurement" in medical imaging, in many cases it is not the frequency shift (Doppler shift) of the received signal that is measured, but the phase shift (when the received signal arrives).
Velocity measurements of blood flow are also used in other fields of medical ultrasonography, such as obstetric ultrasonography and neurology. Velocity measurement of blood flow in arteries and veins based on Doppler effect is an effective tool for diagnosis of vascular problems like stenosis.
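In the simplest continuous-wave picture, the flow speed is recovered by inverting the same two-way relation, corrected for the angle between the beam and the flow; the probe frequency, measured shift, tissue sound speed and beam angle below are illustrative assumptions, not values from this article:

```python
import math

def blood_velocity(delta_f, f_probe, c_tissue=1540.0, angle_deg=60.0):
    """Estimate flow speed from a measured Doppler shift (two-way reflection):
    v = delta_f * c / (2 * f0 * cos(angle)).

    delta_f   : measured Doppler shift, Hz
    f_probe   : transmitted ultrasound frequency, Hz
    c_tissue  : assumed speed of sound in soft tissue, m/s
    angle_deg : angle between the ultrasound beam and the flow direction
    """
    return delta_f * c_tissue / (2.0 * f_probe * math.cos(math.radians(angle_deg)))

# Illustrative numbers: a 5 MHz probe, a 1.3 kHz measured shift, a 60 degree beam angle.
print(f"estimated flow speed: {blood_velocity(1300.0, 5.0e6):.2f} m/s")  # ~0.40 m/s
```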
Instruments such as the laser Doppler velocimeter (LDV), and acoustic Doppler velocimeter (ADV) have been developed to measure velocities in a fluid flow. The LDV emits a light beam and the ADV emits an ultrasonic acoustic burst, and measure the Doppler shift in wavelengths of reflections from particles moving with the flow. The actual flow is computed as a function of the water velocity and phase. This technique allows non-intrusive flow measurements, at high precision and high frequency.
Velocity profile measurement
Developed originally for velocity measurements in medical applications (blood flow), ultrasonic Doppler velocimetry (UDV) can measure a complete velocity profile in real time in almost any liquid containing particles in suspension, such as dust, gas bubbles, or emulsions. Flows can be pulsating, oscillating, laminar or turbulent, stationary or transient. The technique is fully non-invasive.
Fast-moving satellites can have a Doppler shift of dozens of kilohertz relative to a ground station. The speed, and thus the magnitude of the Doppler effect, changes due to the curvature of the Earth. Dynamic Doppler compensation, in which the frequency of a signal is changed repeatedly during transmission, is used so the satellite receives a constant-frequency signal.
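A one-line estimate shows where the "dozens of kilohertz" figure comes from (the S-band downlink frequency and orbital speed below are typical illustrative values, and the worst case of motion directly along the line of sight is assumed):

```python
C_LIGHT = 299_792_458.0  # speed of light, m/s

f_downlink = 2.2e9   # an S-band satellite downlink, Hz (illustrative)
v_orbital = 7_500.0  # typical low-Earth-orbit speed, m/s

max_shift = f_downlink * v_orbital / C_LIGHT  # shift when moving straight along the line of sight
print(f"maximum Doppler shift: {max_shift / 1e3:.0f} kHz")  # ~55 kHz
```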
The Leslie speaker, associated with and predominantly used with the Hammond B-3 organ, takes advantage of the Doppler effect by using an electric motor to rotate an acoustic horn around a loudspeaker, sending its sound in a circle. This results at the listener's ear in rapidly fluctuating frequencies of a keyboard note.
A laser Doppler vibrometer (LDV) is a non-contact method for measuring vibration. The laser beam from the LDV is directed at the surface of interest, and the vibration amplitude and frequency are extracted from the Doppler shift of the laser beam frequency due to the motion of the surface.
During the segmentation of vertebrate embryos, waves of gene expression sweep across the presomitic mesoderm, the tissue from which the precursors of the vertebrae (somites) are formed. A new somite is formed upon arrival of a wave at the anterior end of the presomitic mesoderm. In zebrafish, it has been shown that the shortening of the presomitic mesoderm during segmentation leads to a Doppler effect as the anterior end of the tissue moves into the waves. This Doppler effect contributes to the period of segmentation.
Inverse Doppler effect
Since 1968 scientists such as Victor Veselago have speculated about the possibility of an inverse Doppler effect. The experiment that claimed to have detected this effect was conducted by Nigel Seddon and Trevor Bearpark in Bristol, United Kingdom in 2003.
Researchers from Swinburne University of Technology and the University of Shanghai for Science and Technology showed that this effect can be observed in optical frequencies as well. This was made possible by growing a photonic crystal and projecting a laser beam into the crystal. This made the crystal act like a super prism and the inverse Doppler effect could be observed.
- Relativistic Doppler effect
- Doppler cooling
- Fizeau experiment
- Photoacoustic Doppler effect
- Differential Doppler effect
- Rayleigh fading
- Alec Eden The search for Christian Doppler,Springer-Verlag, Wien 1992. Contains a facsimile edition with an English translation.
- Buys Ballot (1845). "Akustische Versuche auf der Niederländischen Eisenbahn, nebst gelegentlichen Bemerkungen zur Theorie des Hrn. Prof. Doppler (in German)". Annalen der Physik und Chemie 11: 321–351. Bibcode:1845AnP...142..321B. doi:10.1002/andp.18451421102.
- Fizeau: "Acoustique et optique". Lecture, Société Philomathique de Paris, 29 December 1848. According to Becker(pg. 109), this was never published, but recounted by M. Moigno(1850): "Répertoire d'optique moderne" (in French), vol 3. pp 1165-1203 and later in full by Fizeau, "Des effets du mouvement sur le ton des vibrations sonores et sur la longeur d'onde des rayons de lumière"; [Paris, 1870]. Annales de Chimie et de Physique, 19, 211-221.
- Scott Russell, John (1848). "On certain effects produced on sound by the rapid motion of the observer". Report of the Eighteenth Meeting of the British Association for the Advancement of Science (John Murray, London in 1849) 18 (7): 37–38. Retrieved 2008-07-08.
- Rosen, Joe; Gothard, Lisa Quinn (2009). Encyclopedia of Physical Science. Infobase Publishing. p. 155. ISBN 0-8160-7011-3. Extract of page 155
- Strutt (Lord Rayleigh), John William (1896). MacMillan & Co, ed. The Theory of Sound 2 (2 ed.). p. 154.
- Downie, Neil A, 'Vacuum Bazookas, Electric Rainbow Jelly and 27 other projects for Saturday Science', Princeton (2001) ISBn 0-691-00986-4
- The distinction is made clear in Harrison, Edward Robert (2000). Cosmology: The Science of the Universe (2nd ed.). Cambridge University Press. pp. 306ff. ISBN 0-521-66148-X.
- Evans, D. H.; McDicken, W. N. (2000). Doppler Ultrasound (2nd ed.). New York: John Wiley and Sons. ISBN 0-471-97001-8.[page needed]
- Qingchong, Liu (1999), "Doppler measurement and compensation in mobile satellite communications systems", Military Communications Conference Proceedings / MILCOM 1: 316–320, doi:10.1109/milcom.1999.822695
- Soroldoni, D.; Jörg, D. J.; Morelli, L. G.; Richmond, D. L.; Schindelin, J.; Jülicher, F.; Oates, A. C. (2014). "A Doppler Effect in Embryonic Pattern Formation". Science 345: 222–225. Bibcode:2014Sci...345..222S. doi:10.1126/science.1253089. PMID 25013078.
- Kozyrev, Alexander B.; van der Weide, Daniel W. (2005). "Explanation of the Inverse Doppler Effect Observed in Nonlinear Transmission Lines". Physical Review Letters 94 (20): 203902. Bibcode:2005PhRvL..94t3902K. doi:10.1103/PhysRevLett.94.203902. PMID 16090248. Lay summary – Phys.org (May 23, 2005).
- Scientists reverse Doppler Effect, physorg.com, March 7, 2011, retrieved 2011-03-18
- Doppler, C. (1842). Über das farbige Licht der Doppelsterne und einiger anderer Gestirne des Himmels (About the coloured light of the binary stars and some other stars of the heavens). Publisher: Abhandlungen der Königl. Böhm. Gesellschaft der Wissenschaften (V. Folge, Bd. 2, S. 465-482) [Proceedings of the Royal Bohemian Society of Sciences (Part V, Vol 2)]; Prague: 1842 (Reissued 1903). Some sources mention 1843 as year of publication because in that year the article was published in the Proceedings of the Bohemian Society of Sciences. Doppler himself referred to the publication as "Prag 1842 bei Borrosch und André", because in 1842 he had a preliminary edition printed that he distributed independently.
- "Doppler and the Doppler effect", E. N. da C. Andrade, Endeavour Vol. XVIII No. 69, January 1959 (published by ICI London). Historical account of Doppler's original paper and subsequent developments.
- Adrian, Eleni (24 June 1995). "Doppler Effect". NCSA. Retrieved 2008-07-13.
- Doppler Effect, [ScienceWorld]
- Java simulation of Doppler effect
- Doppler Shift for Sound and Light at MathPages
- Flash simulation and game of Doppler effect of sound at Scratch (programming language)
- The Doppler Effect and Sonic Booms (D.A. Russell, Kettering University)
- Video Mashup with Doppler Effect videos
- Wave Propagation from John de Pillis. An animation showing that the speed of a moving wave source does not affect the speed of the wave.
- EM Wave Animation from John de Pillis. How an electromagnetic wave propagates through a vacuum
- Doppler Shift Demo - Interactive flash simulation for demonstrating Doppler shift.
- Interactive applets at Physics 2000 | https://en.wikipedia.org/wiki/Doppler_effect |
A stereotype is a popular belief about specific types of individuals. The concepts of "stereotype" and "prejudice" are often confused and are used with many different meanings. Stereotypes are standardized and simplified conceptions of people based on some prior assumptions. Another name for stereotyping is bias. A bias is a tendency; most of these are good, but sometimes stereotyping can turn into discrimination if we misinterpret a bias and act upon it in a negative manner.
The stereotype was invented by Firmin Didot in the world of printing; it was originally a duplicate impression of an original typographical element, used for printing instead of the original. American journalist Walter Lippmann coined the metaphor, calling a stereotype a "picture in our heads", saying, "Whether right or wrong (...) imagination is shaped by the pictures seen (...)". "Stereotype" and "cliché" were originally printers' words, and in their literal printers' meanings were synonymous. Specifically, cliché was a French word for the printing surface for a stereotype.
In one perspective of the stereotyping process, there are the concepts of ingroups and outgroups. From each individual's perspective, ingroups are viewed as normal and superior, and are generally the group that they already associate with, or aspire to join. An outgroup is simply all the other groups. They are seen as lesser than or inferior to the ingroups. An example of this would be the belief that Asians are smarter than Americans; in this example, Asians are looked at as being smarter because their education systems are seen as stricter than those of the Americans.
A second perspective is that of automatic and explicit or subconscious and conscious. Automatic or subconscious stereotyping is that which everyone does without noticing. Automatic stereotyping is quickly preceded by an explicit or conscious check which permits time for any needed corrections. Automatic stereotyping is affected by explicit stereotyping because frequent conscious thoughts will quickly develop into subconscious stereotypes.
A third method to categorizing stereotypes is general types and sub-types. Stereotypes consist of hierarchical systems consisting of broad and specific groups being the general types and sub-types respectively. A general type could be defined as a broad stereotype typically known among many people and usually widely accepted, whereas the sub-group would be one of the several groups making up the general group. These would be more specific, and opinions of these groups would vary according to differing perspectives.
Certain circumstances can affect the way an individual stereotypes. Some theorists argue in favor of the conceptual connection and that one's own subjective thought about someone is sufficient information to make assumptions about that individual. Other theorists argue that at minimum there must be a causal connection between mental states and behavior to make assumptions or stereotypes. Thus results and opinions may vary according to circumstance and theory. An example of a common, incorrect assumption is that of assuming certain internal characteristics based on external appearance. The explanation for one's actions is his or her internal state (goals, feeling, personality, traits, motives, values, and impulses), not his or her appearance.
Sociologist Charles E. Hurst writes, "One reason for stereotypes is the lack of personal, concrete familiarity that individuals have with persons in other racial or ethnic groups. Lack of familiarity encourages the lumping together of unknown individuals."
Stereotypes focus upon and thereby exaggerate differences between groups. Competition between groups minimizes similarities and magnifies differences. This makes it seem as if groups are very different when in fact they may be more alike than different. For example, among African Americans, identity as an American citizen is more salient than racial background; that is, African Americans are more American than African.
Different disciplines give different accounts of how stereotypes develop: Psychologists may focus on an individual's experience with groups, patterns of communication about those groups, and intergroup conflict. Pioneering psychologist William James cautioned psychologists themselves to be wary of their own stereotyping, in what he called the psychologist's fallacy. Sociologists focus on the relations among different groups in a social structure. Psychoanalytically-oriented humanists (e.g., Sander Gilman) have argued that stereotypes, by definition, are representations that are not accurate, but a projection of one to another.
A number of theories have been derived from sociological studies of stereotyping and prejudicial thinking. In early studies it was believed that stereotypes were only used by rigid, repressed, and authoritarian people. Sociologists concluded that this was a result of conflict, poor parenting, and inadequate mental and emotional development. This idea has been overturned; more recent studies have concluded that stereotypes are commonplace.
One theory as to why people stereotype is that it is too difficult to take in all of the complexities of other people as individuals. Even though stereotyping is inexact, it is an efficient way to mentally organize large blocks of information. Categorization is an essential human capability because it enables us to simplify, predict, and organize our world. Once one has sorted and organized everyone into tidy categories, there is a human tendency to avoid processing new or unexpected information about each individual. Assigning general group characteristics to members of that group saves time and satisfies the need to predict the social world in a general sense.
Some psychologists believe that childhood influences are some of the most complex and influential factors in developing stereotypes. Though they can be absorbed at any age, stereotypes are usually acquired in early childhood under the influence of parents, teachers, peers, and the media. Once a stereotype is learned, it often becomes self-perpetuating.
Another prominent theory is the stereotype content model which attempts to predict behavior based on levels of warmth and competence.
See main article: Stereotype threat. Stereotypes can have a negative and positive impact on individuals. Joshua Aronson and Claude M. Steele have done research on the psychological effects of stereotyping, particularly its effect on African Americans and women. They argue that psychological research has shown that competence is highly responsive to situation and interactions with others. They cite, for example, a study which found that bogus feedback to college students dramatically affected their IQ test performance, and another in which students were either praised as very smart, congratulated on their hard work, or told that they scored high. The group praised as smart performed significantly worse than the others. They believe that there is an 'innate ability bias'. These effects are not just limited to minority groups. Mathematically competent white males, mostly math and engineering students, were asked to take a difficult math test. One group was told that this was being done to determine why Asians were scoring better. This group performed significantly worse than the control group.
Possible prejudicial effects of stereotypes are:
Stereotypes allow individuals to make better informed evaluations of individuals about whom they possess little or no individuating information, and in many, but not all circumstances stereotyping helps individuals arrive at more accurate conclusions. Over time, some victims of negative stereotypes display self-fulfilling prophecy behavior, in which they assume that the stereotype represents norms to emulate. Negative effects may include forming inaccurate opinions of people, scapegoating, erroneous judgmentalism, preventing emotional identification, distress, and impaired performance.
Yet, the stereotype that stereotypes are inaccurate, resistant to change, overgeneralized, exaggerated, and destructive is not founded on empirical social science research, which instead shows that stereotypes are often accurate and that people do not rely on stereotypes when relevant personal information is available. Indeed, Jussim et al. comment that ethnic and gender stereotypes are surprisingly accurate, while stereotypes concerning political affiliation and nationality are much less accurate; the stereotypes assessed for accuracy concerned intelligence, behavior, personality, and economic status. Stereotype accuracy is a growing area of study; Yueh-Ting Lee and his colleagues have created an EPA Model (Evaluation, Potency, Accuracy) to describe the continuously changing variables of stereotypes.
Stereotypes are common in various cultural media, where they take the form of dramatic stock characters. These characters are found in the works of playwrights such as Bertolt Brecht, Dario Fo, and Jacques Lecoq, who characterize their actors as stereotypes for theatrical effect. In commedia dell'arte this is similarly common. The instantly recognizable nature of stereotypes means that they are effective in advertising and situation comedy. These stereotypes change, and in modern times only a few of the stereotyped characters shown in John Bunyan's The Pilgrim's Progress would be recognizable.
In literature and art, stereotypes are clichéd or predictable characters or situations. Throughout history, storytellers have drawn from stereotypical characters and situations, in order to connect the audience with new tales immediately. Sometimes such stereotypes can be sophisticated, such as Shakespeare's Shylock in The Merchant of Venice. Arguably a stereotype that becomes complex and sophisticated ceases to be a stereotype per se by its unique characterization. Thus while Shylock remains politically unstable in being a stereotypical Jew, the subject of prejudicial derision in Shakespeare's era, his many other detailed features raise him above a simple stereotype and into a unique character, worthy of modern performance. Simply because one feature of a character can be categorized as being typical does not make the entire character a stereotype.
Despite their proximity in etymological roots, cliché and stereotype are not used synonymously in cultural spheres. In narratology, for example, where genre and categorization automatically associate a story with its recognizable group, labeling a situation or character in a story as typical suggests it is fitting for its genre or category, whereas declaring that a storyteller has relied on cliché is to pejoratively observe a simplicity and lack of originality in the tale. To criticize Ian Fleming for a stereotypically unlikely escape for James Bond would be understood by the reader or listener, but it would be more appropriately criticized as a cliché in that it is overused and reproduced. Narrative genre relies heavily on typical features to remain recognizable and generate meaning in the reader/viewer.
Some contemporary studies indicate that racial, ethnic and cultural stereotypes are still widespread in Hollywood blockbuster movies.
See also: Anti-Igbo sentiment.
See main article: Anti-Europeanism.
See also: Albanophobia, Anglophobia, Anti-British sentiment, Anti-Catalanism, Francophobia, Anti-German sentiment, Anti-Italianism, Anti-Polish sentiment, Lusophobia, Antiziganism, Anti-Romanian discrimination, Anti-Estonian sentiment, Anti-Serb sentiment, Anti-Scottish sentiment, Anti-Sovietism and Anti-Ukrainian sentiment.
See main article: Stereotypes of Jews.
See main article: LGBT stereotypes. | http://everything.explained.today/Stereotype/ |
4.125 | Midnight Judges Act
The Midnight Judges Act (also known as the Judiciary Act of 1801; 2 Stat. 89, or the Midnight Appointments) represented an effort to solve an issue in the U.S. Supreme Court during the early 19th century. There had been concern, beginning in 1789, about the system that required the Justices of the Supreme Court to "ride circuit" and reiterate decisions made in the appellate-level courts. The Supreme Court Justices had often voiced concern and suggested that the roles of Supreme Court Justice and circuit court judge be separated. Jefferson did not want the judiciary to gain more power over the executive branch.
The Act reduced the number of seats on the Supreme Court from 6 to 5, effective upon the next vacancy in the Court. No such vacancy occurred during the brief period the Act was in effect, so the size of the Court remained unchanged.
It reorganized the circuit courts, doubling them in number from three to six, and created three new circuit judgeships for each circuit (except the sixth, which received only one circuit judge). In addition to creating new lifetime posts for Federalist judges, the circuit judgeships were intended to relieve the Justices of the Supreme Court from the hardships of riding circuit (that is, sitting as judges on the circuit courts). The circuit judgeships were abolished in 1802, and the Justices continued to ride circuit until 1879. One of the judges on the Supreme Court appointed by Adams was Chief Justice John Marshall.
It also reorganized the district courts, creating ten. These courts were to be presided over by the existing district judges in most cases. In addition to subdividing several of the existing district courts, it created the District of Ohio which covered the Northwest and Indiana Territories, and the District of Potomac from the District of Columbia and pieces of Maryland and Virginia, which was the first time a federal judicial district crossed state lines. However, the district courts for Kentucky and Tennessee were abolished, and their judges reassigned to the circuit courts.
In addition, it gave the circuit courts jurisdiction to hear "all cases in law or equity, arising under the constitution and laws of the United States, and treaties made, or which shall be made, under their authority." This form of jurisdiction, now known as federal question jurisdiction, had not previously been granted to the federal courts.
The Midnight Judges
In the nineteen days between passage of this Act and the conclusion of his administration, President Adams quickly filled as many of the newly created circuit judgeships as possible. The new judges were known as the Midnight Judges because Adams was said to be signing their appointments at midnight prior to President Thomas Jefferson's inauguration. (Actually, only three commissions were signed on his last day.) The famous Supreme Court case of Marbury v. Madison involved one of these "midnight" appointments, although it was an appointment to a judgeship of the District of Columbia, which was authorized under a different Act of Congress, not the Judiciary Act.
Attempts to solve this situation before and throughout the presidency of John Adams were overshadowed by more pressing foreign and domestic issues that occupied Congress during the early years of the nation's development. None of those attempts to fix the situation facing the Supreme Court was successful until John Adams took office in 1797. Faced with the Election of 1800, a watershed moment in American history that represented not only the struggle to correctly organize the foundation of the United States government but also the culmination of the struggle between the waning Federalist Party and the rising Democratic-Republican Party, John Adams successfully reorganized the nation's court system with the Judiciary Act of 1801.
The Election of 1800
During the Election of 1800, there was an intense growth of partisan politics, the political party of the executive branch of government changed for the first time, and there was an unprecedented peaceful transition of the political orientation of the country’s leadership. The main issues in this election were taxes, the military, peace negotiations with France, and the Alien and Sedition Acts and Virginia and Kentucky Resolutions. The campaign leading up to this election and the election itself revealed sharp divisions within the Federalist Party. Alexander Hamilton and the extreme Federalists attacked Adams for his persistence for peace with France, his opposition to building an army, and his failure to enforce the Alien & Sedition Acts.
The results of this election favored Thomas Jefferson and Aaron Burr over John Adams, but Jefferson and Burr each received 73 electoral votes. Presented with a tie, the House of Representatives, which was still dominated by Federalists, eventually decided the election in favor of Thomas Jefferson, with Alexander Hamilton urging his fellow Federalists to prefer Jefferson over Burr. Democratic-Republicans also won control of the legislative branch of government after the congressional elections.
Thomas Jefferson was inaugurated March 4, 1801 without the presence of President John Adams. Jefferson's inaugural address attempted to appease the Federalists by promising to maintain the strength of the federal government and to pay off the national debt. Jefferson spoke of dangerous “entangling alliances” with foreign countries as George Washington did, and made a plea for national unity claiming that “we are all republicans and we are all federalists.” Once elected, Jefferson set out to rescind the Judiciary Act of 1801 and remove newly appointed Federalists.
Marbury v. Madison
The implications of Adams's actions in appointing Federalists to the Supreme Court and the Federal courts, led to one of the most important decisions in American judicial history. Marbury v. Madison solidified the United States' system of checks and balances and gave the judicial branch equal power with the executive and legislative branches. This controversial case began with Adams’ appointment of Federalist William Marbury as a Justice of the Peace in the District of Columbia. When the newly appointed Secretary of State James Madison refused to process Marbury’s selection, Marbury requested a writ of mandamus, which would force Madison to make his appointment official. Chief Justice John Marshall declared that the Supreme Court did not have the authority to force Madison to make the appointment official. This statement actually challenged the Judiciary Act of 1789, which stated that the Supreme Court did, in fact, have the right to issue those writs. Marshall, therefore, ruled that part of the Judiciary Act of 1789 unconstitutional because the Constitution did not expressly grant this power to the judiciary. In deciding the constitutionality of an act of Congress, Marshall established judicial review, the most significant development in the history of the Supreme Court.
Impeachment of Samuel Chase
Among the repercussions of the repeal of the Judiciary Act was the first and, to date, only impeachment of a sitting Supreme Court Justice, Samuel Chase. Chase, a Federalist appointed to the Supreme Court by George Washington, had publicly attacked the repeal in May 1803 while issuing his charge to a grand jury in Baltimore, Maryland: "The late alteration of the federal judiciary...will take away all security for property and personal liberty, and our Republican constitution will sink into a mobocracy, the worst of all popular governments." Jefferson responded to the attack by suggesting to his supporters in the U.S. House of Representatives that Chase be impeached, asking, "Ought the seditious and official attack on the principles of our Constitution . . .to go unpunished?" The House took Jefferson's suggestion, impeaching Chase in 1804. He was acquitted by the Senate of all charges in March 1805.
Federal question jurisdiction
The repeal of the Judiciary Act also ended the brief period of comprehensive Federal-question jurisdiction. The federal courts would not receive such jurisdiction again until 1875.
- Turner, Katheryn. “Republican Policy and the Judiciary Act of.” William and Mary Quarterly, 3rd ser., 22. January 1965. New York: Columbia University Press, 1992. Page 5.
- Marbury v. Madison, 5 U.S. (1 Cranch) 137 (1803).
- Elkins, Stanley M.; McKitrick, Eric; The Age of Federalism; New York: Oxford University Press, 1993. p. 731.
- Stephenson, D. Grier; Campaigns and The Court: The U.S. Supreme Court in Presidential Elections; New York: Columbia University Press, c1999. Page 48.
- “The John Adams Administration.” Presidential Administration Profiles for Students. Online Edition. Gale Group. Pages 1, 3.
- “The Thomas Jefferson Administrations.” Presidential Administration Profiles for Students. Online Edition. Gale Group, 2002. Page 3.
- Elkins, Stanley M.; McKitrick, Eric; The Age of Federalism; New York: Oxford University Press, 1993. p. 731 - 732.
- Rehnquist, William H. Grand Inquests: The Historic Impeachments of Justice Samuel Chase and President Andrew Johnson. Quill: 1992, p.52
- Jerry W. Knudson, "The Jeffersonian Assault on the Federalist Judiciary, 1802-1805: Political Forces and Press Reaction," American Journal of Legal History 1970 14(1): 55-75; Richard Ellis, "The Impeachment of Samuel Chase," in American Political Trials, ed. by Michael R. Belknap (1994) pp 57-76, quote on p. 64.
- James M. O'Fallon, The Case of Benjamin More: A Lost Episode in the Struggle over Repeal of the 1801 Judiciary Act, 11 Law & Hist. Rev. 43 (1993). | https://en.wikipedia.org/wiki/Midnight_Judges_Act |
4.0625 | Astronomy & Space Classroom Resources
This collection of lessons and web resources is aimed at classroom teachers, their students, and students' families. Most of these resources come from the National Science Digital Library (NSDL). NSDL is the National Science Foundation's online library of resources for science, technology, engineering, and mathematics education. See www.nsdl.org
Teachers' Domain: Earth in the Universe
Resource: Educator (grades K-12)
This Web site provides a collection of information and lesson plans on the topic of Earth and the Universe. Navigate this site with ease to access information.
ComPADRE Pathway for Physics and Astronomy Education Communities
Resource: Educator (grades K-12)
The ComPADRE Digital Library is a network of free online resource collections for faculty, college students, and teachers in Physics and Astronomy education. The site offers online discussions and tutorials for students, resources for physics and astronomy teachers in grades K-12, research for college faculty and information on available learning opportunities.
Resource: General Public
Maintained by astronomer Phil Plait, this Web site has a new home at Discover Blog and is devoted to correcting myths and misconceptions about astronomy and related topics. Plait is a skeptic, and fights misuses of science as well as praising the wonder of real science. The site provides numerous links to support Plait's opinion and is sometimes quite entertaining but always thought provoking.
Eyes on the Sky and Feet on the Ground
Resource: Educator (grades K-6) and Parents
This Web site promotes an understanding of the scientific process of investigation and includes suggestions for discussions before and after explorations. There are lesson plans for hands-on activities broken down by levels: K-2, 2-4, and 4-6. It is a most valuable resource for teaching inquiry-based science to young children, leading their curiosity to question and explore.
Marshall Space Flight Center Education
Resource: Educator (grades K-12) and Parents
This NASA Web page provides resources for students and educators and offers detailed information on the educational programs and research opportunities offered by the Marshall Space Flight Center. The page has something for all ages of students as well as educators and parents.
Resource: Educator (grades 6-8) and General Public
The Amazing Space Web site promotes the science and majestic beauty of the universe for use in the classroom. Content developed for educators and learners of all ages is accurate, classroom-friendly, visually appealing, and carefully crafted to adhere to accepted educational standards. By sharing Hubble Space Telescope's greatest discoveries, learners of all ages will enjoy learning about the universe and gain an even greater understanding of it in the future.
Exploration of the Universe (EUD)
Resource: Educator (grades 8-12) and Students (grades 8-12)
The Education and Outreach page of NASA's Goddard Space Flight Center Web site provides background information about the structure and evolution of the universe. This page is a valuable resource for educators and students.
Resource: Educator (grades K-higher education), Students (grades K-higher education) and General Public
The NASA home page provides everyone with the latest information from space. The home page allows users to choose an information path for Educators, Students, and the general public.
Astronomy & Space Research Overview
Resource: All Audiences
Information on NSF.gov website reviewing polar research. Site provides information on life in the poles to climate change. Information is current with eye-catching visuals. Targets grades K-12. | http://www.plainlanguage@nsf.gov/news/classroom/astronomy.jsp |
4.09375 | Monitoring a volcano requires scientists to use a variety of techniques that can hear and see activity inside a volcano. The USGS Volcano Hazards Program monitors volcanoes to detect signs of change that forewarn of volcanic reawakening. To fully understand a volcano's behavior, monitoring should include several types of observations (earthquakes, ground movement, volcanic gas, rock chemistry, water chemistry, remote satellite analysis) on a continuous or near-real-time basis.
Scientists collect data from the instrument networks then analyze them to look for out-of-the-ordinary signals. By comparing the data analysis with similar results from past volcanic events, volcanologists are better able to forecast changes in volcanic activity and determine whether and when a volcano might erupt in the future. Most data can be accessed from our offices in the observatories but visits to the volcanoes, when possible, add valuable information.
Rapid advances in technology are helping scientists develop efficient and accurate monitoring equipment. These new systems are capable of collecting and transmitting accurate real-time data from the volcano back to Observatory offices, which improves eruption forecasting. It is important that instruments be installed during quiet times when volcanoes are not active so that they are ready to detect the slightest bit of volcanic stirring. Early detection gives the maximum amount of time for people to prepare for an eruption.
When a volcano begins showing new or unusual signs of activity, monitoring data help answer critical questions necessary for assessing and then communicating timely information about volcanic hazards. For example, prior to the 2004 eruption at Mount St. Helens monitoring equipment recorded a large increase in earthquake activity. Scientists quickly examined other monitoring data including gas, ground deformation, and satellite imagery to assess if magma or fluid was moving towards the surface. Based on the history of the volcano and the analysis of the monitoring data scientists were able to determine the types of magma could be moving towards the surface. This type of knowledge helps scientists figure out the possible types of volcanic activity and the associated hazards to people. Knowing the hazards helps officials determine which real-time warnings are needed to prevent loss of life and property. | http://volcanoes.usgs.gov/vhp/monitoring.html |
4.09375 | Salmonellosis is a foodborne illness caused by infection with Salmonella bacteria. Most infections are spread to people through consumption of contaminated food (usually meat, poultry, eggs, or milk).
Salmonella infections affect the intestines and cause vomiting, fever, and cramping, which usually clear up without medical treatment.
You can help prevent Salmonella infections by not serving any raw meat or eggs, and by not keeping reptiles as pets, particularly if you have very young children.
Hand washing is a powerful way to guard against Salmonella infections. So teach kids to wash their hands, particularly after trips to the bathroom and before handling food in any way.
Not everyone who ingests Salmonella bacteria will become ill. Children, especially infants, are most likely to get sick from it. About 50,000 cases of salmonellosis are reported in the United States each year and about a third of those are in kids 4 years old or younger.
There are many different types of Salmonella bacteria. The type responsible for most infections in humans is carried by chickens, cows, pigs, and reptiles (such as turtles, lizards, and iguanas). Another, rarer form — called Salmonella Typhi (S.Typhi) — causes typhoid fever. People usually get typhoid fever by drinking beverages or eating food that has been handled by someone who has typhoid fever or is a carrier of the illness. Most cases are in developing countries where clean water and other sanitation measures are hard to come by.
Signs and Symptoms
A Salmonella infection generally causes nausea, vomiting, abdominal cramps, diarrhea (sometimes bloody), fever, and headache. Because many different kinds of illnesses can cause these symptoms, most doctors will take a stool sample to make an accurate diagnosis.
Symptoms of most infections start within 3 days of contamination and usually go away without medical treatment.
At first, typhoid fever caused by Salmonella bacteria looks similar to infections by non-typhoid Salmonella. But in the second week, the liver and spleen can become enlarged, and a distinctive "rose spotted" skin rash may appear. From there, the infection can cause other health problems, like meningitis and pneumonia.
People at risk for more serious complications from a Salmonella infection include those who:
- have problems with their immune systems (such as people with HIV)
- take cancer-fighting drugs
- have sickle cell disease or an absent or nonfunctioning spleen
- take chronic stomach acid suppression medication
In these higher-risk groups, most doctors will treat an infection with antibiotics to prevent it from spreading to other parts of the body.
Here are some ways to help prevent Salmonella bacteria from making your family sick:
- Cook food thoroughly. Salmonella bacteria are most commonly found in animal products and can be killed by the heat of cooking. Don't serve raw or undercooked eggs, poultry, or meat. Microwaving is not a reliable way to kill the bacteria. If you're pregnant, be especially careful to avoid undercooked foods.
- Take care with eggs. Because Salmonella bacteria can contaminate even intact and disinfected grade A eggs, cook them well and avoid serving poached or sunny-side up eggs (with runny yolks).
- Clean cooking surfaces regularly. Salmonellosis also can spread through cross-contamination, so when you're preparing meals, keep uncooked meats away from cooked and ready-to-eat foods. Thoroughly wash your hands, cutting boards, counters, and knives after handling uncooked foods.
- Avoid foods that might contain raw-food products. Caesar salad dressing, the Italian dessert tiramisu, homemade ice cream, chocolate mousse, eggnog, cookie dough, and frostings can contain raw eggs. Unpasteurized milk and juices also can be contaminated with Salmonella.
- Wash hands often. Fecal matter (poop) is often the source of Salmonella contamination, so hand washing is extremely important, especially after using the toilet and before preparing food.
- Take care with pets. Avoid contact with the feces of family pets — especially reptiles. Wash your hands thoroughly after handling an animal and make sure that no reptiles are permitted to come into contact with a baby. Even healthy reptiles (especially turtles and iguanas) are not safe pets for small children and should not be in the same house as an infant.
- Don't cook food for others if you are sick, especially if you have vomiting or diarrhea.
- Keep food chilled. Don't leave cooked food out for more than 2 hours after serving (1 hour on a hot day) and store it promptly. Also, keep your refrigerator set to under 40ºF (4.4ºC).
If your child has salmonellosis and a healthy immune system, your doctor may let the infection pass without giving any medicines. But any time a child develops a fever, headache, or bloody diarrhea, call the doctor to rule out any other problems.
If your child is infected and has a fever, you may want to give acetaminophen to reduce his or her temperature and relieve cramping. As with any infection that causes diarrhea, it's important to give your child plenty of liquids to avoid dehydration.
Reviewed by: Rupal Christine Gupta, MD
Date reviewed: August 2014
|Centers for Disease Control and Prevention (CDC) The CDC (the national public health institute of the United States) promotes health and quality of life by preventing and controlling disease, injury, and disability.|
|U.S. Food and Drug Administration (FDA) The FDA is responsible for protecting the public health by ensuring the safety, efficacy, and security of human and veterinary drugs, biological products, medical devices, our nation's food supply, cosmetics, and products that emit radiation.|
|CDC: Travelers' Health Look up vaccination requirements for travel destinations, get updates on international outbreaks, and more, searachable by country.|
|U.S. Department of Agriculture (USDA) The USDA works to enhance the quality of life for people by supporting the production of agriculture.|
|E. Coli Undercooked burgers and unwashed produce are among the foods that can harbor E. coli bacteria and lead to infection marked by severe diarrhea. Here's how to protect your family.|
|Yersiniosis Yersiniosis is an uncommon infection caused by the consumption of undercooked meat products, unpasteurized milk, or water contaminated by the bacteria.|
|Food Safety for Your Family Why is food safety important? And how can you be sure your kitchen and the foods you prepare in it are safe?|
|Why Is Hand Washing So Important? Did you know that proper hand washing is the best way to keep from getting sick? Here's how to teach this all-important habit to your kids.|
|Campylobacter Infections These bacterial infections can cause diarrhea, cramping, abdominal pain, and fever. Good hand-washing and food safety habits can prevent them.|
|Food Poisoning Sometimes, germs can get into food and cause food poisoning. Find out what to do if your child gets food poisoning - and how to prevent it.|
|What Are Germs? Germs are the microscopic bacteria, viruses, fungi, and protozoa that can cause disease. With a little prevention, you can keep harmful germs out of your family's way.|
|Typhoid Fever While typhoid fever isn't common in the U.S., it can be a health threat elsewhere. Learn about this illness and how to prevent it.|
|Stool Test: Bacteria Culture A stool culture helps doctors determine if there's a bacterial infection in the intestines.|
|Produce Precautions Kids need daily servings of fruits and vegetables. Here's how to make sure the produce you buy and prepare is safe.|
|Listeria Infections Listeriosis, a serious infection caused by eating food contaminated with a bacterium, primarily affects pregnant women, newborns, and adults with weakened immune systems. Some simple precautions can protect your family from infection.|
|Fever and Taking Your Child's Temperature Although it can be frightening when your child's temperature rises, fever itself causes no harm and can actually be a good thing - it's often the body's way of fighting infections.|
|Diarrhea Most kids battle diarrhea from time to time, so it's important to know what to do to relieve and even prevent it.|
Note: All information is for educational purposes only. For specific medical advice, diagnoses, and treatment, consult your doctor.
Images provided by iStock, Getty Images, Corbis, Veer, Science Photo Library, Science Source Images, Shutterstock, and Clipart.com | http://www.childrensdayton.org/cms/kidshealth/5ae9735bd2d3709f/index.html |
4.15625 | CHAPTER 3 DISCRETE PROBABILITY DISTRIBUTION Discrete Probability Distribution
Learning Objectives: At the end of the lecture, you will be able to:
- select an appropriate discrete probability distribution (binomial distribution or Poisson distribution) to calculate probabilities in a specific application
- calculate the probability, mean and variance for each of the discrete distributions presented
Bernoulli trials: an experiment with two possible outcomes, either 'success' or 'failure'. The probability of success is given as p and the probability of failure is 1 - p.
BINOMIAL DISTRIBUTION, Bin(n, p). Requirements of a binomial experiment:
- n Bernoulli trials
- the trials are independent
- each trial has a constant probability p of success
Example of a binomial experiment: tossing the same coin successively and independently n times.
BINOMIAL DISTRIBUTION, Bin(n, p). A binomial random variable X associated with a binomial experiment consisting of n trials is defined as X = the number of 'successes' among the n trials. The probability mass function of X is
P(X = x) = C(n, x) p^x (1 - p)^(n - x), for x = 0, 1, ..., n,
where C(n, x) = n! / (x!(n - x)!).
Mean and variance: E(X) = np and Var(X) = np(1 - p).
A random variable that has a binomial distribution with parameters n and p is denoted by X ~ Bin(n, p).
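As a quick illustration of these formulas (not part of the original slides), the following minimal Python sketch evaluates the Bin(n, p) probability mass function and checks the mean and variance; the values n = 10 and p = 0.3 are arbitrary assumptions chosen only for demonstration.

```python
import math

def binomial_pmf(x, n, p):
    # P(X = x) for X ~ Bin(n, p): C(n, x) * p^x * (1 - p)^(n - x)
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 10, 0.3  # illustrative parameters, not taken from the lecture
pmf = [binomial_pmf(x, n, p) for x in range(n + 1)]

print(sum(pmf))                               # ~1.0: the probabilities sum to one
print(sum(x * q for x, q in enumerate(pmf)))  # ~3.0, matching the mean E(X) = np
print(n * p * (1 - p))                        # 2.1, the variance np(1 - p)
```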
Poisson Probability Distribution. Conditions to apply the Poisson probability distribution are:
1. x is a discrete random variable
2. The occurrences are random
3. The occurrences are independent
Useful to model the number of times that a certain event occurs per unit of time, distance, or volume. Examples of application of the Poisson probability distribution:
i) The number of telephone calls received by an office during a given day
ii) The number of defects in a five-foot-long iron rod.
Poisson Probability Distribution, X ~ P(λ). The probability of x occurrences in an interval is
P(X = x) = e^(-λ) λ^x / x!, for x = 0, 1, 2, ...,
where λ is the mean number of occurrences in that interval (per unit time or per unit area).
Mean and variance: E(X) = λ and Var(X) = λ.
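A similar short Python sketch for the Poisson case (again not from the original slides; the rate λ = 4 is an assumed value) evaluates the pmf and confirms that the probabilities sum to approximately one and that the mean equals λ.

```python
import math

def poisson_pmf(x, lam):
    # P(X = x) for X ~ P(lam): e^(-lam) * lam^x / x!
    return math.exp(-lam) * lam**x / math.factorial(x)

lam = 4.0  # assumed mean number of occurrences per interval
probs = [poisson_pmf(x, lam) for x in range(30)]  # truncate the infinite support

print(round(sum(probs), 6))                               # ~1.0 once enough terms are included
print(round(sum(x * q for x, q in enumerate(probs)), 6))  # ~4.0, matching E(X) = lam
```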
Poisson process:
- Carry out an experiment to estimate λ, which represents the mean number of events that occur in one unit of time/space.
- The number of events X that occur in t units of time is counted, and λ is estimated.
- If the numbers of events are independent and events cannot occur simultaneously, then X follows a Poisson distribution. A process that produces such events is a Poisson process.
Let λ denote the mean number of events that occur in one unit of time. Let N_T denote the number of events that are observed to occur in T units of time or space; then N_T ~ P(λT).
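To make the N_T ~ P(λT) scaling concrete, here is a small sketch under assumed numbers (a rate of 2 calls per hour observed for 3 hours, so the count over the window is Poisson with mean 6); the scenario is illustrative only, not taken from the slides.

```python
import math

def poisson_pmf(x, lam):
    return math.exp(-lam) * lam**x / math.factorial(x)

rate = 2.0             # assumed lambda: mean events per unit time (calls per hour)
T = 3.0                # length of the observation window, in the same time units
mean_count = rate * T  # N_T ~ P(lambda * T) = P(6)

# Probability of observing at most 4 events in the whole window
print(sum(poisson_pmf(k, mean_count) for k in range(5)))
```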
| http://www.slideshare.net/ayimsevenfold/chapter-3-discretedistributionrev2009
4 | The beginnings of civilization in Europe can be traced to very ancient times, but they are not as old as the civilizations of Mesopotamia and Egypt. The Roman and Greek cultures flourished in Europe, and European civilization—language, technology, political concepts, and the Christian religion—has been spread throughout the world by European colonists and immigrants. Throughout history, Europe has been the scene of many great and destructive wars that have ravaged both rural and urban areas. Once embraced by vast and powerful empires and kingdoms, the continent was divided into many sovereign states by successful nationalistic uprisings (especially in the 19th cent.). The political fragmentation led to economic competition and political strife among the states.
After World War II, Europe became divided into two ideological blocs (Eastern Europe, dominated by the USSR, and Western Europe, dominated by the United States) and became engaged in the cold war. The North Atlantic Treaty Organization (NATO) was formed as a military deterrent to the spread of Communism and sought to maintain a military balance with its eastern equivalent, the Warsaw Treaty Organization. Cold war tensions eased in the 1960s, and signs of normalization of East-West relations appeared in the 1970s.
In Western Europe, the European Economic Community (Common Market), the European Coal and Steel Community, and the European Atomic Energy Community (Euratom) merged in 1967 to form the European Community. Known since 1993 as the European Union, the organization aims to develop economic and monetary union among its members, ultimately leading to political union. The Eastern European counterpart was the Council for Mutual Economic Assistance (COMECON), which, like the Warsaw Treaty Organization, dissolved with the breakup of the Soviet bloc in the early 1990s.
The loosening of political control sparked a revival of the long pent-up ethnic nationalism and a wave of democratization that led to an overthrow of the Communist governments in Eastern Europe. In the former Yugoslavia, ethnic tensions between Muslims, Croats, and Serbs were unleashed, leading to civil war and massacres of members of ethnic groups, or "ethnic cleansing," in areas where other groups won military control. During the early and mid-1990s most of the former Soviet bloc countries embarked on economic restructuring programs to transform their centralized economies into market-based ones. The pace of reform varied, especially as the hardships involved became increasingly evident. Meanwhile, in Western Europe the European Union, amid some tensions, continued working toward greater political and economic unity, including the creation of a common European currency. | http://www.factmonster.com/encyclopedia/world/europe-outline-history.html |
4 | Coral reefs: stunning, diverse, found worldwide, and incredibly fragile, despite the fact that they look like they’re made from stone. These delicate, beautiful structures are microcosms, communities filled with organisms living in a mutually beneficial world that provides food, shelter and protection from harsh weather. Sadly, 25% of coral reefs are already hopelessly damaged, according to the World Wildlife Fund, and many others face serious threats.
Combating damage to coral reefs requires understanding the multifaceted nature of the threats against their survival, and determining the best way to address these environmental issues before it’s too late. The loss of coral reefs would be tragic not just because we’d miss something beautiful in the world, but because they also play an important environmental role.
1. Ocean Acidification
Associated with climate change, ocean acidification occurs as atmospheric CO2 rises and the ocean absorbs it. The oceans have been burdened with a huge percentage of the rapidly-rising CO2 in the Earth’s atmosphere, and they aren’t equipped to handle it. Historically, the ocean’s pH was relatively stable. Today, it’s dropping due to reactions between seawater and CO2, and corals are missing out on valuable carbonate ions they need to form. Not only that, but as the level of dissolved CO2 in the ocean rises, it appears to be directly damaging coral skeletons, causing them to break and crumble.
2. Coral Bleaching
Thanks to climate change, the ocean is getting warmer. Corals, along with many other organisms in the sea, are extremely sensitive to small temperature changes. In their case, they can react to temperature increases by expelling their critical symbiotic algae, known as zooxanthellae. How critical? They provide up to 80% of the energy needed by the coral to survive, so when they leave, the coral is at risk of dying off — and it acquires a distinctive pale color, explaining the term “bleaching.”
Coral, like the rest of us, doesn’t take kindly to toxins in its environment, and when exposed to chemical and industrial pollution, it can die. Moreover, corals are at risk of what is known as “nutrient pollution,” where the ocean becomes rich in nutrients as a result of fertilizer release, animal waste and related materials. It turns out there is such a thing as too much of a good thing — algae swarm in and bloom in response to the sudden food source, and they choke out the coral population. Better pollution controls and conservation are critical to prevent this issue.
Coral reefs often furnish a number of valuable food species, but unfortunately, humans don’t always manage fisheries responsibly. Consequently, species can become fished out, disturbing the balance of the reef environment. Not only that, but some fishers use destructive practices like adding chemicals to the water to stun fish, deep water trawling or using explosives to quickly startle fish to the surface of the water. These practices damage the coral and harm bycatch — the “useless” species that won’t be harvested. Likewise, crab and lobster traps can damage reefs by banging around in the current and entangling coral and other species in their ropes.
Coastlines tend to make popular places for development. Historically, they were ideal for trade and other activities thanks to their proximity to major ports. Now, coastlines have become one of the most popular places in the world to live thanks to existing settlement and stunning views of the water, along with activities associated with the ocean like surfing, going to the beach and snorkling. Unfortunately for coral, development is bad news, because it increases pressures on already fragile reefs. Some cities that once had thriving reefs now have nothing left, while in other rapidly-developing areas, things are not looking good for coral reefs.
Tourism, closely related to development, is also linked with damage to coral reefs. Tourists who aren’t aware of environmental issues may directly damage coral by stepping on it, harvesting souvenirs to take home, or disrupting the marine environment. Meanwhile, boaters may dump waste in reefs as well as damaging coral by hitting it with propellers and anchors.
Ever get a sunburn? Coral has some natural protections against UV radiation, but it’s not prepared for ozone depletion. As the Earth’s ozone has become thinned in spots, some corals are showing signs of damage caused by UV exposure; it’s not exactly like they can slap on a layer of sunscreen for additional protection in the face of increasing exposure. Like other changes in the Earth’s atmosphere, ozone depletion is hard to fix, and it’s difficult to come up with a way to protect corals from it.
Coral jewelry is just one of many things made from coral. In addition to being used in souvenirs for tourists, coral is also removed for use in making roads, paths and various other products. This is especially common in nations with limited sources of income, which turn to their reefs and other natural wonders to meet their economic needs. Even though this puts substantial pressure on the environment, and eventually depletes reefs, these nations may have no other choice.
Think back on the photos of coral reefs you’ve seen, or, if you’ve been lucky enough to see one in person, the real thing. One thing you’ll note in almost all of them is the extremely clear water. Coral hates suspended sediment, and doesn’t thrive in waters clogged with dirt, debris and other materials. Sadly, sedimentation is on the rise thanks to development and the destruction of wetlands, which normally act like giant traps for sediment, preventing it from reaching the ocean (and, incidentally, preventing loss of valuable topsoil). As sedimentation increases, coral populations suffer.
9. Stormy Waters Ahead
Tropical storms, hurricanes and other rough weather are a fact of nature, but evidence suggests they may be increasing in frequency and severity in response to climate change. Coral reefs can be badly damaged as a result of storm surge, the high, aggressive waves associated with severe storms. Sadly, this doesn’t just damage the coral; it also exposes the shoreline to further damage, because the coral would normally act as a buffer zone to help protect the shore.
10. Rising Sea Levels
Coral is highly sensitive to light levels (one reason it can’t handle sedimentation and algae blooms). As sea levels rise, the amount of available light will decrease around existing reefs. Coral won’t be able to grow under those conditions, and it may begin to die off, which means that it will cease to support the reef and the larger population of organisms that relies on the coral for food and shelter. Formerly diverse areas could become deserts very quickly, and projections suggest that at current predicted rates of sea level rise, many famous coral reefs, such as those in the Caribbean, won’t be able to keep pace. | http://www.care2.com/causes/10-threats-to-the-worlds-stunning-coral-reefs.html/2 |
4.03125 | Photographs can be powerful connections to the past.
Grade Range: 4-12
Resource Type(s): Reference Materials
Date Posted: 10/14/2008
From 1861-1865, Americans battled over preserving their Union and ending slavery. The Civil War is the focus of this section of The Price of Freedom: Americans at War, an online exhibition. This pivotal and complicated period of American history is divided into sections that allow students to focus either on a specific aspect of the war or the conflict as a whole. The sections included are: John Brown, Fort Sumter, the Battle of Bull Run, major turning points, the war at sea, Wilderness to Appomattox, political leaders, military leaders, soldiers in blue and gray, battles and casualties, and Reconstruction and the legacies of the war. A non-flash version of this site is available: The Civil War.
Historical Thinking Standards (Grades K-4)
3B: Compare and contrast differing sets of ideas, values, personalities, behaviors, and institutions.
3C: Analyze historical fiction.
3D: Distinguish between fact and fiction.
3E: Compare different stories about a historical figure, era, or event.
3F: Analyze illustrations in historical stories.
3G: Consider multiple perspectives.
3H: Explain causes in analyzing historical actions.
3I: Challenge arguments of historical inevitability.
3J: Hypothesize influences of the past.
Standards in History (Grades K-4)
United States History Standards (Grades 5-12)
Historical Thinking Standards (Grades 5-12)
2B: Reconstruct the literal meaning of a historical passage.
2C: Identify the central question(s) the historical narrative addresses.
2D: Differentiate between historical facts and historical interpretations.
2E: Read historical narratives imaginatively.
2F: Appreciate historical perspectives.
2G: Draw upon data in historical maps.
2H: Utilize visual, mathematical, and quatitative data.
2I: Draw upon the visual, literary, and musical sources.
3B: Consider multiple perspectives.
3C: Analyze cause-and-effect relationships.
3D: Draw comparisons across eras and regions in order to define enduring issues.
3E: Distinguish between unsupported expressions of opinion and informed hypotheses grounded in historical evidence.
3F: Compare competing historical narratives.
3G: Challenge arguments of historical inevitability.
3H: Hold interpretations of history as tentative.
3I: Evaluate major debates among historians.
3J: Hypothesize the influence of the past. | https://historyexplorer.si.edu/resource/civil-war |
4.09375 | Scientists at École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland believe solar panels made from nanowires could be built with fewer materials and perform more efficiently than current photovoltaic modules.
Anna Fontcuberta i Morral and her team built a nanowire solar cell out of gallium arsenide, a material which is better at converting light into power than silicon. They found that it collects up to 12 times more light than the usual flat solar cell.
Fontcuberta's prototype is said to be almost 10 per cent more efficient at transforming light into power than is allowed, in theory, for conventional single-material solar panels.
Furthermore, optimizing the dimensions of the nanowire, improving the quality of the gallium arsenide and using better electrical contacts to extract the current could increase the prototype’s efficiency.
The EPFL study, published in Nature Photonics, suggests that an array of nanowires may attain 33 per cent efficiency, whereas commercial (flat) solar panels are up to 20 per cent efficient.
Also, arrays of nanowires would use at least 10,000 times less gallium arsenide, allowing for industrial use of the material. | http://www.theengineer.co.uk/swiss-scientists-demonstrate-nanowire-solar-cell/ |
4.0625 | Find content from Thinkfinity Partners using a visual bookmarking and sharing tool.
1-10 of 223 Results from ReadWriteThink
- Classroom Resources | Grades 6 – 8 | Lesson Plan | Standard Lesson
ABC Bookmaking Builds Vocabulary in the Content Areas
V is for vocabulary. A content area unit provides the theme for a specialized ABC book, as students select, research, define, and illustrate a word for each alphabet letter.
- Classroom Resources | Grades 9 – 12 | Lesson Plan | Standard Lesson
A Biography Study: Using Role-Play to Explore Authors' Lives
Students read biographies and explore websites of selected American authors and then role-play as the authors.
- Classroom Resources | Grades 4 – 7 | Lesson Plan | Standard Lesson
A “Cay”ribbean Island Study
As a pre-reading activity for The Cay, groups of students choose and study a Caribbean island, create a final product in the format of their choice, and finally, do an oral presentation to share information learned.
- Classroom Resources | Grades 6 – 12 | Lesson Plan | Recurring Lesson
Active Reading through Self-Assessment: The Student-Made Quiz
This recurring lesson encourages students to comprehend their reading through inquiry and collaboration. They choose important quotations from the text and work in groups to formulate “quiz” questions that their peers will answer.
- Classroom Resources | Grades 3 – 12 | Calendar Activity | February 20
Actor Sidney Poitier was born in 1924.
Students do a journal entry about barriers that have been broken, such as age, race, and gender, that might impede them in the future, and how they can break through those barriers.
- Classroom Resources | Grades 3 – 5 | Lesson Plan | Recurring Lesson
A Daily DEAR Program: Drop Everything, and Read!
The teacher shouts, "Drop Everything and Read!" and students settle into their seats to read books they've selected. This independent reading program helps students build a lifelong reading habit.
- Classroom Resources | Grades K – 2 | Lesson Plan | Standard Lesson
Adventures in Nonfiction: A Guided Inquiry Journey
Students are guided through an informal exploration of nonfiction texts and child-oriented Websites, learning browsing and skimming techniques for the purpose of gathering interesting information.
- Classroom Resources | Grades 7 – 12 | Calendar Activity | July 16
African American journalist Ida B. Wells was born in 1862.
Students brainstorm a list of human rights issues, research their group's issue in depth, examine the way journalists cover a story, and create articles for a classroom newspaper.
- Classroom Resources | Grades 3 – 5 | Lesson Plan | Standard Lesson
A Genre Study of Letters With The Jolly Postman
Students read The Jolly Postman, in which a postman delivers letters to storybook characters. They explore different types of mail and categorize letters from the book and their own mail.
- Classroom Resources | Grades 5 – 12 | Calendar Activity | August 11
Alex Haley, author of Roots, was born in 1921.
Students explore their own roots by interviewing family members and use their family history to write a fictional account of their roots. | http://www.readwritethink.org/search/?resource_type_filtering=6-16-18-20-126&theme=9 |
4.28125 | hydrothermal vent, crack along a rift or ridge in the deep ocean floor that spews out water heated to high temperatures by the magma under the earth's crust. Some vents are in areas of seafloor spreading, and in some locations water temperatures above 350°C (660°F) have been recorded; temperatures at vents in the Cayman Trough in the Caribbean Sea have been measured at above 400°C (750°F). The deepest known vents are those of the Beebe Vent Field in the Cayman Trough, some 16,273 ft (4,960 m) below the sea surface. The hot springs found at hydrothermal vents leach out valuable subsurface minerals and deposit them on the ocean floor. The dissolved minerals precipitate when they hit the cold ocean water, in some cases creating dark, billowing clouds (hence the name "black smokers" for some of the springs) and settling to build large chimneylike structures.
Giant tube worms, bristle worms, yellow mussels, clams, and pink sea urchins are among the animals found in the unique ecological systems that surround the vents. All of these animals live—without sunlight—in conditions of high pressure, steep temperature gradients, and levels of minerals that would be toxic to animals on land. The primary producers of these ecosystems are bacteria that use chemosynthesis to produce energy from dissolved hydrogen sulfide. Some scientists believe such vents may have been the source of life on earth.
Hydrothermal vents were first discovered near the Galápagos Islands in 1977 by scientists in the research submersible Alvin. Vents have since been discovered in the Atlantic, Indian, and Southern oceans as well. Although a number of species found around the vents in each ocean are also found in other oceans, many of the species are unique to the particular region in which they are found.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved. | http://www.factmonster.com/encyclopedia/science/hydrothermal-vent.html |
4.40625 | |This article or section may have been copied and pasted from a source, possibly in violation of Wikipedia's copyright policy. Please remedy this by editing this article to remove any non-free copyrighted content and attributing free content correctly, or flagging the content for deletion. Please be sure that the supposed source of the copyright violation is not itself a Wikipedia mirror. (August 2015)|
Positive Discipline (or PD) is a discipline model used by schools, and in parenting, that focuses on the positive points of behaviour, based on the idea that there are no bad children, just good and bad behaviors. You can teach and reinforce the good behaviors while weaning the bad behaviors without hurting the child verbally or physically. People engaging in positive discipline are not ignoring problems. Rather, they are actively involved in helping their child learn how to handle situations more appropriately while remaining calm, friendly and respectful to the children themselves. Positive discipline includes a number of different techniques that, used in combination, can lead to a more effective way for parents to manage their kids behaviour, or for teachers to manage groups of students. Some of these are listed below. Positive Behavior Support (PBS) is a structured, open-ended model that many parents and schools follow. It promotes positive decision making, teaching expectations to children early, and encouraging positive behaviors.
Positive discipline contrasts with negative discipline. Negative discipline may involve angry, destructive, or violent responses to inappropriate behavior. In the terms used by psychology research, positive discipline uses the full range of reinforcement and punishment options:
- Positive reinforcement, such as complimenting a good effort;
- Negative reinforcement, such as ignoring requests made in a whining tone of voice;
- Positive punishment, such as requiring a child to clean up a mess he made; and
- Negative punishment, such as removing a privilege in response to poor behavior.
However, unlike negative discipline, it does all of these things in a kind, encouraging, and firm manner. The focus of positive discipline is to establish reasonable limits and guide children to take responsibility to stay within these limits, or learn how to remedy the situation when they don't.
There are 5 criteria for effective positive discipline:
- Helps children feel a sense of connection. (Belonging and significance)
- Is mutually respectful and encouraging. (Kind and firm at the same time.)
- Is effective long-term. (Considers what the children are thinking, feeling, learning, and deciding about themselves and their world – and what to do in the future to survive or to thrive.)
- Teaches important social and life skills. (Respect, concern for others, problem solving, and cooperation as well as the skills to contribute to the home, school or larger community.)
- Invites children to discover how capable they are. (Encourages the constructive use of personal power and autonomy.)
Positive Behavior Support (PBS) is a form of child discipline that is a proactive and positive approach used by staff, parents and community agencies to promote successful behavior and learning at home and at school for all students. PBS supports the acquisition of replacement behaviors, a reduction of crisis intervention, the appreciation of individual differences, strategies for self-control, and durable improvement in the quality of life for all.
Part of using positive discipline is preventing situations in which negative behaviors can arise. There are different techniques that teachers can use to prevent bad behaviors:
Students who "misbehave" are actually demonstrating "mistaken" behavior. There are many reasons why a student may exhibit mistaken behavior, ranging from a lack of knowledge of appropriate behavior to feeling unwanted or unaccepted. For students who simply do not know what appropriate behavior they should be exhibiting, the teacher can teach the appropriate behavior. For example, the young child who grabs toys from others can be stopped from grabbing a toy and then shown how to ask for a turn. For students who are feeling unwanted or unaccepted, a positive relationship needs to develop between the teacher and student before ANY form of discipline will work.
The sanctions that are listed at the end of the article would be less needed if students have a strong connection with the adult in charge and knew that the teacher respected them. Teachers need to know how to build these relationships. Simply telling them to demonstrate respect and connection with students is not enough for some of them, because they may also lack knowledge on how to do this.
Teachers need to view each child as an account; they must deposit positive experiences in the student before they make a withdrawal from the child when discipline takes place. Teachers can make deposits through praise, special activities, fun classroom jobs, smiles and appropriate pats on the back. Some children have never experienced positive attention. Children long for attention; if they are not receiving positive attention they will exhibit behavior that will elicit negative attention.
Teachers can recognize groups of students who would not work well together (because they are friends or do not get along well) and have them separated from the start. Some teachers employ the "boy-girl-boy-girl" method of lining or circling up (which may be sexist or effective, depending on your perspective).
Another technique would be to be explicit with the rules, and consequences for breaking those rules, from the start. If students have a clear understanding of the rules, they will be more compliant when there are consequences for their behaviors later on. A series of 3 warnings is sometimes used before a harsher consequence is used (detention, time-out, etc.), especially for smaller annoyances (for example, a student can get warnings for calling out, rather than getting an immediate detention, because a warning is usually effective enough). Harsher consequences should come without warnings for more egregious behaviors (hitting another student, cursing, deliberately disobeying a warning, etc.). Teachers can feel justified that they have not "pulled a fast one" on students.
Students are more likely to follow the rules and expectations when they are clearly defined and defined early. Many students need to know and understand what the negative behaviors are before they end up doing one by accident.
Involving the students when making the rules and discipline plans may help prevent some students from acting out. It teaches the students responsibility and creates an awareness of what good versus bad behaviors are. It also makes students feel obligated and motivated to follow the rules because they were involved in creating them.
Gerunds are verb forms ending in "-ing". It is believed that using gerunds can help reinforce the positive behavior one would like to see rather than attacking a bad behavior. For example, a teacher might see students running down the hall and calmly say "walking" rather than yell "stop running" in an agitated voice. He might say "gently" (an adverb) instead of insisting "calm down!"
(This addition is an example of "Behaviorism" and is not part of the original Positive Discipline, which does not advocate punishment or rewards.) Positive discipline includes rewarding good behavior as much as curtailing negative behaviors. Some "rewards" can be verbal; some are actual gifts.
Instead of yelling at a student displaying negative behaviors, a teacher/leader might recognize a student behaving well with a "thank you Billy for joining the line", or "I like the way you helped Billy find his notebook." Recognizing a positive behavior can bring a group's focus away from the students displaying negative behavior, who might just be "acting out" for attention. Seeing this, students seeking attention might try displaying good behaviors to get the recognition of the leader.
One person submits this as a reward method: students are given stamps in their planner if they do well in a lesson. When a student receives enough stamps in the same subject (usually 3 or 5), they earn a credit. When 50, 100, 150, 200, and 250 credits have been awarded to a particular student, that student receives a certificate. If a student meets certain behavioral criteria, they are rewarded with a trip at the end of term.
- A special chain or necklace students pass from one to another for doing good deeds.
- High fives and positive words.
- Awards/achievements on the wall of the classroom or cafeteria.
If a student is causing a distraction during class, a teacher might do something to gain the attention of the student without losing the momentum of the lecture. One technique is quietly placing a hand on the shoulder of the student while continuing to speak. The student becomes aware that the teacher would like them to focus. Another technique is to nonchalantly stand in between two students talking to each other. This creates a physical barrier to the conversation and alerts the students to the teacher's needs. A third technique, for a standing group, is to gently move the student next to the teacher.
A funny technique that requires a skilled PD practitioner is "the grocery list look". A gentler version of "the evil eye," this look is not happy or mad, but focused. The teacher looks at the student, places her tongue on the tip of her mouth, and thinks about a list of things to do (not to the child!). This focused look, along with silence, makes a student just uncomfortable enough to change behaviors, but not enough to make them feel embarrassed or scared as an evil eye might.
Studies of implementation of Positive Discipline techniques have shown that Positive Discipline tools do produce significant results. A study of school-wide implementation of classroom meetings in a lower-income Sacramento, CA elementary school over a four-year period showed that suspensions decreased (from 64 annually to 4 annually), vandalism decreased (from 24 episodes to 2) and teachers reported improvement in classroom atmosphere, behavior, attitudes and academic performance. (Platt, 1979) A study of parent and teacher education programs directed at parents and teachers of students with "maladaptive" behavior that implemented Positive Discipline tools showed a statistically significant improvement in the behavior of students in the program schools when compared to control schools. (Nelsen, 1979) Smaller studies examining the impacts of specific Positive Discipline tools have also shown positive results. (Browning, 2000; Potter, 1999; Esquivel) Studies have repeatedly demonstrated that a student’s perception of being part of the school community (being "connected" to school) decreases the incidence of socially risky behavior (such as emotional distress and suicidal thoughts / attempts, cigarette, alcohol and marijuana use; violent behavior) and increases academic performance. (Resnick et al., 1997; Battistich, 1999; Goodenow, 1993) There is also significant evidence that teaching younger students social skills has a protective effect that lasts into adolescence. Students that have been taught social skills are more likely to succeed in school and less likely to engage in problem behaviors. (Kellam et al., 1998; Battistich, 1999)
Programs similar to Positive Discipline have been studied and shown to be effective in changing parent behavior. In a study of Adlerian parent education classes for parents of teens, Stanley (1978) found that parents did more problem solving with their teens and were less autocratic in decision making. Positive Discipline teaches parents the skills to be both kind and firm at the same time. Numerous studies show that teens who perceive their parents as both kind (responsive) and firm (demanding) are at lower risk for smoking, use of marijuana, use of alcohol, or being violent, and have a later onset of sexual activity. (Aquilino, 2001; Baumrind, 1991; Jackson et al., 1998; Simons, Morton et al., 2001) Other studies have correlated the teen’s perception of parenting style (kind and firm versus autocratic or permissive) with improved academic performance. (Cohen, 1997; Deslandes, 1997; Dornbusch et al., 1987; Lam, 1997)
Studies have shown that through the use of positive intervention programs "designed specifically to address the personal and social factors that place some high school students at risk of drug abuse, schools can reduce these young people's drug use and other unhealthy behaviors" (Eggert, 1995; Nicholas, 1995; Owen, 1995). Use of such programs has shown improvement in academics and a decline in drug use across the board.
Studies have shown "that kids who are at high risk of dropping out of school and abusing drugs are more isolated and depressed and have more problems with anger", says Dr. Leona Eggert of the University of Washington in Seattle. "They are disconnected from school and family and are loosely connected with negative peers" (Eggert, 1995; Nicholas, 1995; Owen, 1995).
Overall, implementing positive discipline programs will improve the decision-making of both teens and parents, according to some researchers.
Benefits include better student-teacher relations, less wasted teacher energy and frustration, and students who recognize desirable positive behaviors rather than feeling attacked.
Statistics show that each year close to one third of eighteen-year-olds do not finish high school (Bridgeland, 2006; Dilulio, 2006; Morison, 2006). Minority and low-income areas show even higher numbers. Seventy-five percent of crimes committed in the United States are committed by high school dropouts. In order to know how to intervene, Civic Enterprises interviewed dropouts and asked them what they suggest be done to increase high school completion numbers. Here is what they came up with: 81% said there should be more opportunities for "real-world" learning, 81% said "better" teachers, 75% said smaller class sizes, 70% said "increasing supervision in schools", 70% said greater opportunities for summer school and after-school programs, 62% said "more classroom discipline", and 41% said to have someone available to talk with about personal problems (Bridgeland, 2006; Dilulio, 2006; Morison, 2006). Through the use of Positive Discipline, efforts are being made to prevent occurrences such as dropping out of school.
- School punishment
- Compare with Discipline in Sudbury Model Democratic Schools
- Child discipline
- Assertive discipline
- "Madison Metropolitan School District Student Conduct and Discipline Plan" (PDF). Retrieved 14 January 2016.
- Nelsen, Jane (2006). Positive Discipline. ISBN 978-0-345-48767-4.
- "Creating Behavior Plans".
- Eggert, L.L.; Nicholas, L.J.; Owen, L.M (1995). Reconnecting Youth: A peer group approach to building life skills. Bloomington, IN: National Educational Service.
- Bridgeland, John; Dilulio, John; Morison, Karen (2006). The Silent Epidemic: Perspectives of High School Dropouts. Washington, D.C: Civic Enterprises, LLC.
- Positive Child Discipline
- Positive Discipline 101: How to Discipline a Child in a Way That Actually Works | https://en.wikipedia.org/wiki/Positive_Discipline |
4.125 |
Romanticism was, in essence, a movement that rebelled against and defined itself in opposition to the Enlightenment. For the artists and philosophers of the Enlightenment, the ideal life was one governed by reason. Artists and poets strove for ideals of harmony, symmetry, and order, valuing meticulous craftsmanship and the classical tradition. Among philosophers, truth was discovered by a combination of reason and empirical research. The ideals of the period included a faith in human reason to understand the universe and resolve the problems of the world, expressed in the couplet by Alexander Pope:
Nature and Nature's laws lay hid in night:
God said, "Let Newton be!" and all was light
The Romantic movement emphasized the individual self and sentiment as opposed to reason and was more pessimistic in attitude, viewing the intellectual and artist as solitary geniuses rather than integral parts of a social system. Rather than valuing symmetry and harmony, the Romantics valued individuality, surprise, intensity of emotion, and expressiveness. They looked back to medieval (or "Gothic") models as much as to the Augustan tradition of Rome.
The ideals of these two intellectual movements were very different from one another.
The Enlightenment thinkers believed very strongly in rationality and science. They believed that the natural world and even human behavior could be explained scientifically. They even felt that they could use the scientific method to improve human society.
By contrast, the Romantics rejected the whole idea of reason and science. They felt that a scientific worldview was cold and sterile. They felt that science and material progress would rob people of their humanity (this is, for example, one of the major themes of Frankenstein, by Mary Shelley). In place of reason, the Romantics exalted feelings and emotions. They felt that intuition and emotions were important sources of knowledge.
Thus, the ideals of the Romantics and the thinkers of the Enlightenment were very much opposed to one another.
Romanticism was a movement that stressed human emotion and the beauty of nature, while the Enlightenment was quite the opposite.
While the Enlightenment held that the material world is a reflection of the ideal world, Romanticism held that the material world is a manifestation of divinity, or God's self-expression.
The Enlightenment movement was characterized by moderation and order, while Romanticism admired spontaneity and disorder.
While the Enlightenment believed that objectivity and realism were possible, Romanticism believed that subjectivity and relativity could not be avoided.
Enlightenment literature kept and applied traditional forms, while Romantic literature revealed stylistic autonomy. Enlightenment art had to be educational and prove its utility, while Romantic art appealed mainly to the emotions.
| http://www.enotes.com/homework-help/compare-contrast-enlightenment-ideals-romanticism-363819
4.4375 | Grammar, Pronunciation, and Vocabulary
The various forms of Chinese differ least in grammar, more in vocabulary, and most in pronunciation. Like the other Sino-Tibetan languages, Chinese is tonal, i.e., different tones distinguish words otherwise pronounced alike. The number of tones varies in different forms of Chinese, but Mandarin has four tones: a high tone, a rising tone, a tone that combines a falling and a rising inflection, and a falling tone.
Chinese (again, like other Sino-Tibetan languages) is also strongly monosyllabic. Chinese often uses combinations of monosyllables that result in polysyllabic compounds having different meanings from their individual elements. For example, the word for "explanation," shuo-ming, combines shuo ("speak") with ming ("bright"). These compounds can comprise three and even four monosyllables: shuo-ch'u-lai, the word for "describe," is made up of shuo ("speak"), ch'u ("out"), and lai ("come"). This practice has greatly increased the Chinese vocabulary and also makes it much easier to grasp the meaning of spoken Chinese words.
The elements of Chinese tend to be more grammatically isolated than connected, because the language lacks inflection to indicate person, number, gender, case, tense, voice, and so forth. Suffixes may be used to denote some of these features. For example, the suffix -le is a sign of the perfect tense of the verb. Subordination and possession can be marked by the suffix -te. The position and use of a word in a sentence may determine its part of speech and its meaning.
| http://www.infoplease.com/encyclopedia/society/chinese-grammar-pronunciation-vocabulary.html
4.03125 | Spanish missions in California
The Spanish missions in California comprise a series of 21 religious outposts, established by Catholic priests of the Franciscan order between 1769 and 1833 to expand Christianity among the Native Americans northwards into what is today the U.S. state of California. The missions were part of a major effort by the Spanish Empire to extend colonization into the most northern and western parts of Spain's North American claims. The missionaries introduced European fruits, vegetables, cattle, horses, ranching and technology into the region that became the New Spain province of Alta California; however, the missions also brought serious negative consequences to the Native American populations with whom the missionaries and other Spaniards came in contact.
Mexico achieved independence in 1822, taking Alta California along with it, but the missions maintained authority over native neophytes and control of vast land holdings until the 1830s. The Alta California government secularized the missions after the passage of the Mexican secularization act of 1833. This divided the mission lands into land grants, which became many of the Ranchos of California. In the end, the missions had mixed results in their objectives: to convert, educate, and "civilize" the indigenous population and transform the natives into Spanish colonial citizens. Today, the surviving mission buildings are the state's oldest structures, and the most-visited historic monuments.
- 1 History
- 2 Mission locations and military districts
- 3 Site selection and layout
- 4 Franciscans and Indians
- 5 Mission industries
- 6 Present-day California missions
- 7 Legacy and Native American controversy
- 8 Gallery of missions
- 9 See also
- 10 Notes
- 11 Citations
- 12 References
- 13 Further reading
- 14 External links
Beginning in 1492 with the voyages of Christopher Columbus, the Kingdom of Spain sought to establish missions to convert indigenous people in Nueva España (New Spain, which consisted of the Caribbean, Mexico, and most of what is now the Southwestern United States) to Roman Catholicism. This would facilitate colonization of these lands awarded to Spain by the Catholic Church, including that region later known as Alta California.[notes 1][notes 2][notes 3]
Early Spanish exploration
Only 48 years after Columbus discovered the Americas for Europe, Francisco Vázquez de Coronado set out from Compostela, New Spain, on February 23, 1540, at the head of a large expedition. Accompanied by 400 European men-at-arms (mostly Spaniards), 1,300 to 2,000 Mexican Indian allies, several Indian and African slaves, and four Franciscan monks, he traveled from Mexico through parts of the southwestern United States to present-day Kansas between 1540 and 1542. Two years later, on 27 June 1542, Juan Rodriguez Cabrillo set out from Navidad, Mexico, and sailed up the coast of Baja California and into the region of Alta California.
Secret English claims
Unknown to Spain, Sir Francis Drake, an English privateer who pillaged Hispanic ships and settlements, claimed the Alta California region for England in 1579, a full generation before the first English landing in Jamestown, Virginia in 1607. During his circumnavigation of the world, Drake anchored in a harbor just north of present-day San Francisco, California, and claimed the territory for Queen Elizabeth I. To preserve an uneasy peace with Spain and to avoid the prospect of Spain threatening England's claims in the New World, Queen Elizabeth I ordered Drake's discovery and claim kept secret.
However, it was not until 1741 that the Spanish monarchy of King Philip V was stimulated to consider how to protect his claims to Alta California. Philip was spurred on when the territorial ambitions of Tsarist Russia were expressed in the Vitus Bering expedition along the western coast of the North American continent.[notes 4][notes 5]
California represents the "high-water mark" of Spanish expansion in North America as the last and northernmost colony on the continent. The mission system arose in part from the need to control Spain's ever-expanding holdings in the New World. Realizing that the colonies required a literate population base that the mother country could not supply, the Spanish government (with the cooperation of the Church) established a network of missions to convert the indigenous population to Christianity. They aimed to make converts and tax-paying citizens of those they conquered.[notes 6] To make them into Spanish citizens and productive inhabitants, the Spanish government and the Church required the indigenous people to learn the Spanish language and vocational skills along with Christian teachings.
Estimates for the pre-contact indigenous population in California are based on a number of different sources and vary substantially, from 133,000, to 225,000, to as high as 705,000 from more than 100 separate tribes or nations.[notes 7][notes 8]
On January 29, 1767, Spain's King Charles III ordered the new governor Portola to forcibly expel the Jesuits, who operated under the authority of the Pope and had established a chain of fifteen missions on the Baja California Peninsula.[notes 9] Visitador General José de Gálvez engaged the Franciscans, under the leadership of Fray Junípero Serra, to take charge of those outposts on March 12, 1768. The padres closed or consolidated several of the existing settlements, and also founded Misión San Fernando Rey de España de Velicatá (the only Franciscan mission in all of Baja California) and the nearby Visita de la Presentación in 1769. This plan, however, changed within a few months after Gálvez received the following orders: "Occupy and fortify San Diego and Monterey for God and the King of Spain." The Church ordered the priests of the Dominican Order to take charge of the Baja California missions so the Franciscans could concentrate on founding new missions in Alta California.
Mission period (1769–1833)
On July 14, 1769 Gálvez sent the Portolá expedition out from Loreto to explore lands to the north. Leader Gaspar de Portolá was accompanied by a group of Franciscans led by Junípero Serra. Serra's plan was to extend the string of missions north from the Baja California peninsula, connected by an established road and spaced a day's travel apart. The first Alta California mission and presidio were founded at San Diego, the second at Monterey.
En route to Monterey, the Rev. Francisco Gómez and the Rev. Juan Crespí came across a Native settlement wherein two young girls were dying: one, a baby, said to be "dying at its mother's breast," the other a small girl suffering of burns. On July 22, Gómez baptized the baby, naming her Maria Magdalena, while Crespí baptized the older child, naming her Margarita. These were the first recorded baptisms in Alta California. Crespi dubbed the spot Los Cristianos.[notes 10] The group continued northward but missed Monterey Harbor and returned to San Diego on January 24, 1770. Near the end of 1769 the Portolá expedition had reached its most northerly point at present-day San Francisco. In following years, the Spanish Crown sent a number of follow-up expeditions to explore more of Alta California.
Each mission was to be turned over to a secular clergy and all the common mission lands distributed amongst the native population within ten years after its founding, a policy that was based upon Spain's experience with the more advanced tribes in Mexico, Central America, and Peru. In time, it became apparent to the Rev. Serra and his associates that the natives on the northern frontier in Alta California required a much longer period of acclimatization. None of the California missions ever attained complete self-sufficiency, and required continued (albeit modest) financial support from mother Spain. Mission development was therefore financed out of El Fondo Piadoso de las Californias (The Pious Fund of the Californias), which originated in 1697 and consisted of voluntary donations from individuals and religious bodies in Mexico to members of the Society of Jesus to enable the missionaries to propagate the Catholic Faith in the area then known as California. Starting with the onset of the Mexican War of Independence in 1810, this support largely disappeared, and missions and converts were left on their own. As of 1800, native labor had made up the backbone of the colonial economy.
Arguably "the worst epidemic of the Spanish Era in California" was the measles epidemic of 1806, wherein one-quarter of the mission Native American population of the San Francisco Bay area died of the measles or related complications between March and May of that year. In 1811, the Spanish Viceroy in Mexico sent an interrogatorio (questionnaire) to all of the missions in Alta California regarding the customs, disposition, and condition of the Mission Indians. The replies, which varied greatly in the length, spirit, and even the value of the information contained therein, were collected and prefaced by the Father-Presidente with a short general statement or abstract; the compilation was thereupon forwarded to the viceregal government.[notes 11] The contemporary nature of the responses, no matter how incomplete or biased some may be, is nonetheless of considerable value to modern ethnologists.
Russian colonization of the Americas reached its southernmost point with the 1812 establishment of Fort Ross (krepost' rus), an agricultural, scientific, and fur-trading settlement located in present-day Sonoma County, California. In November and December 1818, several of the missions were attacked by Hipólito Bouchard, "California's only pirate."[notes 12] A French privateer sailing under the flag of Argentina, Pirata Buchar (as he was known to the locals) worked his way down the California coast, conducting raids on the installations at Monterey, Santa Barbara, and San Juan Capistrano, with limited success. Upon hearing of the attacks, many mission priests (along with a few government officials) sought refuge at Mission Nuestra Señora de la Soledad, the mission chain's most isolated outpost. Ironically, Mission Santa Cruz (though ultimately ignored by the marauders) was ignominiously sacked and vandalized by local residents who were entrusted with securing the church's valuables.
By 1819, Spain decided to limit its "reach" in the New World to Northern California due to the costs involved in sustaining these remote outposts; the northernmost settlement therefore is Mission San Francisco Solano, founded in Sonoma in 1823.[notes 13] An attempt to found a twenty-second mission in Santa Rosa in 1827 was aborted.[notes 14][notes 15][notes 16] In 1833 the final group of missionaries arrived in Alta California. These were Mexican-born (rather than Spaniards), and had been trained at the Apostolic College of Our Lady of Guadalupe in Zacatecas. Among these friars was Francisco García Diego y Moreno, who would become the first bishop of the Diocese of Both Californias. These friars would bear the brunt of the changes brought on by secularization and the U.S. occupation, and many would be marked by allegations of corruption.
José María de Echeandía, the first native Mexican elected Governor of Alta California, issued a "Proclamation of Emancipation" (or "Prevenciónes de Emancipacion") on July 25, 1826. All Indians within the military districts of San Diego, Santa Barbara, and Monterey who were found qualified were freed from missionary rule and made eligible to become Mexican citizens. Those who wished to remain under mission tutelage were exempted from most forms of corporal punishment.[notes 18]
Accelerating immigration, both Mexican and foreign, increased pressure on the Alta California government to seize the mission properties and dispossess the natives in accordance with Echeandía's directive.[notes 19] Despite the fact that Echeandía's emancipation plan was met with little encouragement from the novices who populated the southern missions, he was nonetheless determined to test the scheme on a large scale at Mission San Juan Capistrano. To that end, he appointed a number of comisionados (commissioners) to oversee the emancipation of the Indians. The Mexican government passed legislation on December 20, 1827 that mandated the expulsion of all Spaniards younger than sixty years of age from Mexican territories; Governor Echeandía nevertheless intervened on behalf of some of the missionaries to prevent their deportation once the law took effect in California.
Governor José Figueroa (who took office in 1833) initially attempted to keep the mission system intact, but the Mexican Congress passed An Act for the Secularization of the Missions of California on August 17, 1833 when liberal Valentín Gómez Farías was in office.[notes 20]
The Act also provided for the colonization of both Alta and Baja California, the expenses of this latter move to be borne by the proceeds gained from the sale of the mission property to private interests.
Mission San Juan Capistrano was the very first to feel the effects of secularization when, on August 9, 1834 Governor Figueroa issued his "Decree of Confiscation." Nine other settlements quickly followed, with six more in 1835; San Buenaventura and San Francisco de Asís were among the last to succumb, in June and December 1836, respectively. The Franciscans soon thereafter abandoned most of the missions, taking with them almost everything of value, after which the locals typically plundered the mission buildings for construction materials. Former mission pasture lands were divided into large land grants called ranchos, greatly increasing the number of private land holdings in Alta California.
Rancho period (1834–1849)
In spite of this neglect, the Indian towns at San Juan Capistrano, San Dieguito, and Las Flores did continue on for some time under a provision in Gobernador Echeandía's 1826 Proclamation that allowed for the partial conversion of missions to pueblos. According to one estimate, the native population in and around the missions proper was approximately 80,000 at the time of the confiscation; others claim that the statewide population had dwindled to approximately 100,000 by the early 1840s, due in no small part to the natives' exposure to European diseases, and from the Franciscan practice of cloistering women in the convento and controlling sexuality during the child-bearing age. (Baja California Territory experienced a similar reduction in native population resulting from Spanish colonization efforts there).
Pío de Jesus Pico, the last Mexican Governor of Alta California, found upon taking office that there were few funds available to carry on the affairs of the province. He prevailed upon the assembly to pass a decree authorizing the renting or the sale of all mission property, reserving only the church, a curate's house, and a building for a courthouse. The expenses of conducting the services of the church were to be provided from the proceeds, but there was no disposition made as to what should be done to secure the funds for that purpose. After secularization, Father-Presidente Narciso Durán transferred the missions' headquarters to Santa Barbara, thereby making Mission Santa Barbara the repository of some 3,000 original documents that had been scattered through the California missions. The Mission archive is the oldest library in the State of California that still remains in the hands of its founders, the Franciscans (it is the only mission where they have maintained an uninterrupted presence). Beginning with the writings of Hubert Howe Bancroft, the library has served as a center for historical study of the missions for more than a century. In 1895 journalist and historian Charles Fletcher Lummis criticized the Act and its results, saying:
Disestablishment—a polite term for robbery—by Mexico (rather than by native Californians misrepresenting the Mexican government) in 1834, was the death blow of the mission system. The lands were confiscated; the buildings were sold for beggarly sums, and often for beggarly purposes. The Indian converts were scattered and starved out; the noble buildings were pillaged for their tiles and adobes...
California statehood (1850 and beyond)
Precise figures relating to the population decline of California indigenes are not available. One writer, Gregory Orfalea, estimates that pre-contact population was reduced by 33 percent during Spanish and Mexican rule, mostly through introduction of European diseases, but much more after the United States takeover in 1848. By 1870, the loss of indigenous lives had become catastrophic. Up to 80 percent died, leaving a population of about 30,000 in 1870. Orfalea claims that nearly half of the native deaths after 1848 were murder.
In 1837-38, a major smallpox epidemic devastated native tribes north of San Francisco Bay, in the jurisdiction of Mission San Francisco Solano. General Mariano Vallejo estimated that 70,000 died from the disease. Vallejo's ally, chief Sem-Yeto, was one of the few natives to be vaccinated, and one of the few to survive.
When the mission properties were secularized between 1834 and 1838, the approximately 15,000 resident neophytes lost whatever protection the mission system afforded them. While under the secularization laws the natives were to receive up to one-half of the mission properties, this never happened. The natives lost whatever stock and movable property they may have accumulated. When California became a U.S. state, California law stripped them of legal title to the land. In the Act of September 30, 1850, Congress appropriated funds to allow the President to appoint three Commissioners, O. M. Wozencraft, Redick McKee and George W. Barbour, to study the California situation and "...negotiate treaties with the various Indian tribes of California." Treaty negotiations ensued during the period between March 19, 1851 and January 7, 1852, during which the Commission interacted with 402 Indian chiefs and headmen (representing approximately one-third to one-half of the California tribes) and entered into eighteen treaties.
California Senator William M. Gwin's Act of March 3, 1851 created the Public Land Commission, whose purpose was to determine the validity of Spanish and Mexican land grants in California. On February 19, 1853 Archbishop J.S. Alemany filed petitions for the return of all former mission lands in the state. Ownership of 1,051.44 acres (4.2550 km2) (essentially exact area of land occupied by the original mission buildings, cemeteries, and gardens) was subsequently conveyed to the Church, along with the Cañada de los Pinos (or College Rancho) in Santa Barbara County comprising 35,499.73 acres (143.6623 km2), and La Laguna in San Luis Obispo County, consisting of 4,157.02 acres (16.8229 km2). As the result of a U.S. government investigation in 1873, a number of Indian reservations were assigned by executive proclamation in 1875. The commissioner of Indian affairs reported in 1879 that the number of Mission Indians in the state was down to around 3,000.
Mission locations and military districts
Prior to 1754, grants of mission lands were made directly by the Spanish Crown. But, given the remote locations and the inherent difficulties in communicating with the territorial governments, power was transferred to the viceroys of New Spain to grant lands and establish missions in North America. The 21 Alta California missions were established along the northernmost section of California's El Camino Real (Spanish for The Royal Highway, though often spoken as "The King's Highway"), christened in honor of King Charles III, much of which is now U.S. Route 101 and several Mission Streets. The mission planning was begun in 1767 under the leadership of Fray Junípero Serra, O.F.M. (who, in 1767, along with his fellow priests, had taken control over a group of missions on the Baja California Peninsula previously administered by the Jesuits).
The Rev. Pedro Estévan Tápis proposed the establishment of a mission on one of the Channel Islands in the Pacific Ocean off San Pedro Harbor in 1784, with either Santa Catalina or Santa Cruz (known as Limú to the Tongva residents) being the most likely locations, the reasoning being that an offshore mission might have attracted potential converts who were not living on the mainland, and could have been an effective measure to restrict smuggling operations. Governor José Joaquín de Arrillaga approved the plan the following year; however, an outbreak of sarampion (measles) that killed some 200 Tongva people, coupled with a scarcity of land for agriculture and potable water, left the success of such a venture in doubt, so no effort to found an island mission was ever made. In September 1821 the Rev. Mariano Payeras, "Comisario Prefecto" of the California missions, visited Cañada de Santa Ysabel east of Mission San Diego de Alcalá as part of a plan to establish an entire chain of inland missions. The Santa Ysabel Asistencia had been founded in 1818 as a "mother" mission; however, the plan to expand beyond it never came to fruition.
Work on the mission chain was concluded in 1823, even though Serra had died in 1784 (plans to establish a twenty-second mission in Santa Rosa in 1827 were canceled).[notes 21] The Rev. Fermín Francisco de Lasuén took up Serra's work and established nine more mission sites, from 1786 through 1798; others established the last three compounds, along with at least five asistencias (mission assistance outposts). At the peak of its development in 1832, the mission system controlled an area equal to approximately one-sixth of Alta California.
There were 21 missions accompanied by military outposts in Alta California from San Diego to Sonoma, California. To facilitate travel between them on horseback and on foot, the mission settlements were situated approximately 30 miles (48 kilometers) apart, about one day's journey on horseback, or three days on foot. The entire trail eventually became a 600-mile (966-kilometer) long "California Mission Trail." Heavy freight movement was practical only via water. Tradition has it that the padres sprinkled mustard seeds along the trail to mark it with bright yellow flowers.
During the Mission Period, Alta California was divided into four military districts. Each was garrisoned by a presidio (comandancia) strategically placed along the California coast to protect the missions and other Spanish settlements in Upper California. Each of these functioned as a base of military operations for a specific region. They were independent of one another and were organized from south to north as follows:
- El Presidio Real de San Diego founded on July 16, 1769 – responsible for the defense of all installations located within the First Military District (the missions at San Diego, San Luis Rey, San Juan Capistrano, and San Gabriel);
- El Presidio Real de Santa Bárbara founded on April 12, 1782 – responsible for the defense of all installations located within the Second Military District (the missions at San Fernando, San Buenaventura, Santa Barbara, Santa Inés, and La Purísima, along with El Pueblo de Nuestra Señora la Reina de los Ángeles del Río de Porciúncula [Los Angeles]);
- El Presidio Real de San Carlos de Monterey (El Castillo) founded on June 3, 1770 – responsible for the defense of all installations located within the Third Military District (the missions at San Luis Obispo, San Miguel, San Antonio, Soledad, San Carlos, and San Juan Bautista, along with Villa Branciforte [Santa Cruz]); and
- El Presidio Real de San Francisco founded on December 17, 1776 – responsible for the defense of all installations located within the Fourth Military District (the missions at Santa Cruz, San José, Santa Clara, San Francisco, San Rafael, and Solano, along with El Pueblo de San José de Guadalupe [San Jose]).
- El Presidio de Sonoma, or "Sonoma Barracks" (a collection of guardhouses, storerooms, living quarters, and an observation tower) was established in 1836 by Mariano Guadalupe Vallejo (the "Commandante-General of the Northern Frontier of Alta California") as a part of Mexico's strategy to halt Russian incursions into the region. The Sonoma Presidio became the new headquarters of the Mexican Army in California, while the remaining presidios were essentially abandoned and, in time, fell into ruins.
An ongoing power struggle between church and state grew increasingly heated and lasted for decades. Originating as a feud between the Rev. Serra and Pedro Fages (the military governor of Alta California from 1770 to 1774, who regarded the Spanish installations in California as military institutions first and religious outposts second), the uneasy relationship persisted for more than sixty years.[notes 22] Dependent upon one another for their very survival, military leaders and mission padres nevertheless adopted conflicting stances regarding everything from land rights, the allocation of supplies, protection of the missions, the criminal propensities of the soldiers, and (in particular) the status of the native populations.[notes 23]
- Mission San Diego de Alcalá (1769–1771)
- Mission San Carlos Borromeo de Carmelo (1771–1815)
- Mission La Purísima Concepción* (1815–1819)
- Mission San Carlos Borromeo de Carmelo (1819–1824)
- Mission San José* (1824–1827)
- Mission San Carlos Borromeo de Carmelo (1827–1830)
- Mission San José* (1830–1833)
- Mission Santa Barbara (1833–1846)
* The Rev. Payeras and the Rev. Durán remained at their resident missions during their terms as Father-Presidente, therefore those settlements became the de facto headquarters (until 1833, when all mission records were permanently relocated to Santa Barbara).[notes 24]
- The Rev. Junípero Serra (1769–1784)
- The Rev. Francisco Palóu (presidente pro tempore) (1784–1785)
- The Rev. Fermín Francisco de Lasuén (1785–1803)
- The Rev. Pedro Estévan Tápis (1803–1812)
- The Rev. José Francisco de Paula Señan (1812–1815)
- The Rev. Mariano Payéras (1815–1820)
- The Rev. José Francisco de Paula Señan (1820–1823)
- The Rev. Vicente Francisco de Sarría (1823–1824)
- The Rev. Narciso Durán (1824–1827)
- The Rev. José Bernardo Sánchez (1827–1831)
- The Rev. Narciso Durán (1831–1838)
- The Rev. José Joaquin Jimeno (1838–1844)
- The Rev. Narciso Durán (1844–1846)
The "Father-Presidente" was the head of the Catholic missions in Alta and Baja California. He was appointed by the College of San Fernando de Mexico until 1812, when the position became known as the "Commissary Prefect" who was appointed by the Commissary General of the Indies (a Franciscan residing in Spain). Beginning in 1831, separate individuals were elected to oversee Upper and Lower California.
Site selection and layout
In addition to the presidio (royal fort) and pueblo (town), the misión was one of the three major agencies employed by the Spanish sovereign to extend its borders and consolidate its colonial territories. Asistencias ("satellite" or "sub" missions, sometimes referred to as "contributing chapels") were small-scale missions that regularly conducted Mass on days of obligation but lacked a resident priest; as with the missions, these settlements were typically established in areas with high concentrations of potential native converts. The Spanish Californians had never strayed from the coast when establishing their settlements; Mission Nuestra Señora de la Soledad was located farthest inland, being only some thirty miles (48 kilometers) from the shore. Each frontier station was forced to be self-supporting, as existing means of supply were inadequate to maintain a colony of any size. California was months away from the nearest base in colonized Mexico, and the cargo ships of the day were too small to carry more than a few months’ rations in their holds. To sustain a mission, the padres required the help of colonists or converted Native Americans, called neophytes, to cultivate crops and tend livestock in the volume needed to support a fair-sized establishment. The scarcity of imported materials, together with a lack of skilled laborers, compelled the missionaries to employ simple building materials and methods in the construction of mission structures.
Although the missions were considered temporary ventures by the Spanish hierarchy, the development of an individual settlement was not simply a matter of "priestly whim." The founding of a mission followed longstanding rules and procedures; the paperwork involved required months, sometimes years of correspondence, and demanded the attention of virtually every level of the bureaucracy. Once empowered to erect a mission in a given area, the men assigned to it chose a specific site that featured a good water supply, plenty of wood for fires and building materials, and ample fields for grazing herds and raising crops. The padres blessed the site, and with the aid of their military escort fashioned temporary shelters out of tree limbs or driven stakes, roofed with thatch or reeds (cañas). It was these simple huts that ultimately gave way to the stone and adobe buildings that exist to the present.
The first priority when beginning a settlement was the location and construction of the church (iglesia). The majority of mission sanctuaries were oriented on a roughly east-west axis to take the best advantage of the sun's position for interior illumination; the exact alignment depended on the geographic features of the particular site. Once the spot for the church had been selected, its position was marked and the remainder of the mission complex was laid out. The workshops, kitchens, living quarters, storerooms, and other ancillary chambers were usually grouped in the form of a quadrangle, inside which religious celebrations and other festive events often took place. The cuadrángulo was rarely a perfect square because the missionaries had no surveying instruments at their disposal and simply measured off all dimensions by foot. Some fanciful accounts regarding the construction of the missions claimed that underground tunnels were incorporated in the design, to be used as a means of emergency egress in the event of attack; however, no historical evidence (written or physical) has ever been uncovered to support these assertions.[notes 25]
Franciscans and Indians
The Alta California missions, known as reductions (reducciones) or congregations (congregaciones), were settlements founded by the Spanish colonizers of the New World with the purpose of totally assimilating indigenous populations into European culture and the Catholic religion. It was a doctrine established in 1531, which based the Spanish state's right over the land and persons of the Indies on the Papal charge to evangelize them. It was employed wherever the indigenous populations were not already concentrated in native pueblos. Indians were congregated around the mission proper through the use of means including forced resettlement, whereupon they were "reduced" from a perceived free "undisciplined" state and ultimately converted into "civilized" members of colonial society. Their own civilized and disciplined culture, developed over 8,000 years of freedom, was not considered. A total of 146 Friars Minor, all of whom were ordained as priests (and mostly Spaniards by birth), served in California between 1769 and 1845. 67 missionaries died at their posts (two as martyrs: Padres Luis Jayme and Andrés Quintana), while the remainder returned to Europe due to illness, or upon completing their ten-year service commitment. As the rules of the Franciscan Order forbade friars to live alone, two missionaries were assigned to each settlement, sequestered in the mission's convento. To these the governor assigned a guard of five or six soldiers under the command of a corporal, who generally acted as steward of the mission's temporal affairs, subject to the priests' direction.
Indians were initially attracted into the mission compounds by gifts of food, colored beads, bits of bright cloth, and trinkets. Once a Native American "gentile" was baptized, they were labeled a neophyte, or new believer. This happened only after a brief period during which the initiates were instructed in the most basic aspects of the Catholic faith. But, while many natives were lured to join the missions out of curiosity and sincere desire to participate and engage in trade, many found themselves trapped once they were baptized.
To the padres, a baptized Indian person was no longer free to move about the country, but had to labor and worship at the mission under the strict observance of the priests and overseers, who herded them to daily masses and labors. If an Indian did not report for their duties for a period of a few days, they were searched for, and if it was discovered that they had left without permission, they were considered runaways. Large-scale military expeditions were organized to round up the escaped neophytes. Sometimes the Franciscans even permitted neophytes to escape to their villages, so that an expedition might be organized to follow them and in the process of capturing the fugitives, a dozen or more new "Christians" could be rounded up.
"On one occasion," writes Hugo Reid, "they went as far as the present Rancho del Chino, where they tied and whipped every man, woman and child in the lodge, and drove part of them back.... On the road they did the same with those of the lodge at San Jose. On arriving home the men were instructed to throw their bows and arrows at the feet of the priest, and make due submission. The infants were then baptized, as were also all children under eight years of age; the former were left with their mothers, but the latter kept apart from all communication with their parents. The consequence was, first, the women consented to the rite and received it, for the love they bore their children; and finally the males gave way for the purpose of enjoying once more the society of wife and family. Marriage was then performed, and so this contaminated race, in their own sight and that of their kindred, became followers of Christ."
A total of 20,355 natives were "attached" to the California missions in 1806 (the highest figure recorded during the Mission Period); under Mexican rule the number rose to 21,066 (in 1824, the record year during the entire era of the Franciscan missions).[notes 28] During the entire period of Mission rule, from 1769 to 1834, the Franciscans baptized 53,600 adult Indians and buried 37,000. Dr. Cook estimates that 15,250 or 45% of the population decrease was caused by disease. Two epidemics of measles, one in 1806 and the other in 1828, caused many deaths. The mortality rates were so high that the missions were constantly dependent upon new conversions.
Young native women were required to reside in the monjerío (or "nunnery") under the supervision of a trusted Indian matron who bore the responsibility for their welfare and education. Women only left the convent after they had been "won" by an Indian suitor and were deemed ready for marriage. Following Spanish custom, courtship took place on either side of a barred window. After the marriage ceremony the woman moved out of the mission compound and into one of the family huts. These "nunneries" were considered a necessity by the priests, who felt the women needed to be protected from the men, both Indian and de razón (real men, i.e. Europeans). The cramped and unsanitary conditions the girls lived in contributed to the fast spread of disease and population decline. So many died at times that many of the Indian residents of the missions urged the priests to raid new villages to supply them with more women. As of December 31, 1832 (the peak of the mission system's development) the mission padres had performed a combined total of 87,787 baptisms and 24,529 marriages, and recorded 63,789 deaths.
The neophytes were kept in well-guarded mission compounds. The policy of the Franciscans was to keep them constantly occupied. "If the Indian would not work," writes C. D. Willard, "he was starved and flogged. If he ran away he was pursued and brought back."
Bells were vitally important to daily life at any mission. The bells were rung at mealtimes, to call the Mission residents to work and to religious services, during births and funerals, to signal the approach of a ship or returning missionary, and at other times; novices were instructed in the intricate rituals associated with ringing the mission bells. The daily routine began with sunrise Mass and morning prayers, followed by instruction of the natives in the teachings of the Roman Catholic faith. After a generous (by era standards) breakfast of atole, the able-bodied men and women were assigned their tasks for the day. The women were committed to dressmaking, knitting, weaving, embroidering, laundering, and cooking, while some of the stronger girls ground flour or carried adobe bricks (weighing 55 lb, or 25 kg each) to the men engaged in building. The men worked a variety of jobs, having learned from the missionaries how to plow, sow, irrigate, cultivate, reap, thresh, and glean. In addition, they were taught to build adobe houses, tan leather hides, shear sheep, weave rugs and clothing from wool, and make ropes, soap, paint, and other useful articles.
The work day was six hours, interrupted by dinner (lunch) around 11:00 a.m. and a two-hour siesta, and ended with evening prayers and the rosary, supper, and social activities. About 90 days out of each year were designated as religious or civil holidays, free from manual labor. The labor organization of the missions resembled a slave plantation in many respects.[notes 29] Foreigners who visited the missions remarked at how the priests' control over the Indians appeared excessive, but necessary given the white men's isolation and numeric disadvantage.[notes 30] Indians were not paid wages as they were not considered free laborers and, as a result, the missions were able to profit from the goods produced by the Mission Indians to the detriment of the other Spanish and Mexican settlers of the time who could not compete economically with the advantage of the mission system.
The Franciscans began to send neophytes to work as servants of Spanish soldiers in the presidios. Each presidio was provided with land, el rancho del rey, which served as a pasture for the presidio livestock and as a source of food for the soldiers. Theoretically the soldiers were supposed to work on this land themselves, but within a few years the neophytes were doing all the work on the presidio farm and, in addition, were serving as domestics for the soldiers. While the fiction prevailed that neophytes were to receive wages for their work, no attempt was made to collect the wages for these services after 1790. It is recorded that the neophytes performed the work "under unmitigated compulsion."
In recent years, much debate has arisen as to the actual treatment of the Indians during the Mission period, and many claim that the California mission system is directly responsible for the decline of the native cultures.[notes 31] Evidence has now been brought to light that puts the Indians' experiences in a very different context.[notes 32][notes 33]
The missionaries of California were by-and-large well-meaning, devoted men...[whose] attitudes toward the Indians ranged from genuine (if paternalistic) affection to wrathful disgust. They were ill-equipped—nor did most truly desire—to understand complex and radically different Native American customs. Using European standards, they condemned the Indians for living in a "wilderness," for worshipping false gods or no God at all, and for having no written laws, standing armies, forts, or churches.
The goal of the missions was, above all, to become self-sufficient in relatively short order. Farming, therefore, was the most important industry of any mission. Barley, maize, and wheat were among the most common crops grown. Cereal grains were dried and ground by stone into flour. Even today, California is well known for the abundance and many varieties of fruit trees that are cultivated throughout the state. The only fruits indigenous to the region, however, consisted of wild berries or grew on small bushes. Spanish missionaries brought fruit seeds over from Europe, many of which had been introduced from Asia following earlier expeditions to the continent; orange, grape, apple, peach, pear, and fig seeds were among the most prolific of the imports. Grapes were also grown and fermented into wine for sacramental use and again, for trading. The specific variety, called the Criolla or Mission grape, was first planted at Mission San Juan Capistrano in 1779; in 1783, the first wine produced in Alta California emerged from the mission's winery. Ranching also became an important mission industry as cattle and sheep herds were raised.
Mission San Gabriel Arcángel unknowingly witnessed the origin of the California citrus industry with the planting of the region's first significant orchard in 1804, though the commercial potential of citrus was not realized until 1841. Olives (first cultivated at Mission San Diego de Alcalá) were grown, cured, and pressed under large stone wheels to extract their oil, both for use at the mission and to trade for other goods. The Rev. Serra set aside a portion of the Mission Carmel gardens in 1774 for tobacco plants, a practice that soon spread throughout the mission system.[notes 34]
It was also the missions' responsibility to provide the Spanish forts, or presidios, with the necessary foodstuffs and manufactured goods to sustain operations. It was a constant point of contention between missionaries and the soldiers as to how many fanegas of barley, or how many shirts or blankets, the mission had to provide the garrisons in any given year. At times these requirements were hard to meet, especially during years of drought, or when the much anticipated shipments from the port of San Blas failed to arrive. The Spaniards kept meticulous records of mission activities, and each year reports were submitted to the Father-Presidente summarizing both the material and spiritual status at each of the settlements.
Livestock was raised, not only for the purpose of obtaining meat, but also for wool, leather, and tallow, and for cultivating the land. In 1832, at the height of their prosperity, the missions collectively owned:
- 151,180 head of cattle;
- 137,969 sheep;
- 14,522 horses;
- 1,575 mules or burros;
- 1,711 goats; and
- 1,164 swine.
All these grazing animals were originally brought up from Mexico. A great many Indians were required to guard the herds and flocks on the mission ranches, which created the need for "...a class of horsemen scarcely surpassed anywhere." These animals multiplied beyond the settlers' expectations, often overrunning pastures and extending well beyond the domains of the missions. The giant herds of horses and cows took well to the climate and the extensive pastures of the Coastal California region, but at a heavy price for the California Native American people. The uncontrolled spread of these new herds, and associated invasive exotic plant species, quickly exhausted the native plants in the grasslands, and the chaparral and woodlands that the Indians depended on for their seed, foliage, and bulb harvests. The grazing-overgrazing problems were also recognized by the Spaniards, who periodically had extermination parties cull and kill thousands of excess livestock when herd populations grew beyond their control or the land's capacity; years of severe drought forced such culls as well.
Mission kitchens and bakeries prepared and served thousands of meals each day. Candles, soap, grease, and ointments were all made from tallow (rendered animal fat) in large vats located just outside the west wing. Also situated in this general area were vats for dyeing wool and tanning leather, and primitive looms for weaving. Large bodegas (warehouses) provided long-term storage for preserved foodstuffs and other treated materials.
Each mission had to fabricate virtually all of its construction materials from local materials. Workers in the carpintería (carpentry shop) used crude methods to shape beams, lintels, and other structural elements; more skilled artisans carved doors, furniture, and wooden implements. For certain applications bricks (ladrillos) were fired in ovens (kilns) to strengthen them and make them more resistant to the elements; when tejas (roof tiles) eventually replaced the conventional jacal roofing (densely packed reeds) they were placed in the kilns to harden them as well. Glazed ceramic pots, dishes, and canisters were also made in mission kilns.
Prior to the establishment of the missions, the native peoples knew only how to utilize bone, seashells, stone, and wood for building, tool making, weapons, and so forth. The missionaries established manual training in European skills and methods; in agriculture, mechanical arts, and the raising and care of livestock. Everything consumed and otherwise utilized by the natives was produced at the missions under the supervision of the padres; thus, the neophytes not only supported themselves, but after 1811 sustained the entire military and civil government of California. The foundry at Mission San Juan Capistrano was the first to introduce the Indians to the Iron Age. The blacksmith used the mission's forges (California's first) to smelt and fashion iron into everything from basic tools and hardware (such as nails) to crosses, gates, hinges, even cannon for mission defense. Iron in particular was a commodity that the mission acquired solely through trade, as the missionaries had neither the know-how nor technology to mine and process metal ores.
No study of the missions is complete without mention of their extensive water supply systems. Stone zanjas (aqueducts), sometimes spanning miles, brought fresh water from a nearby river or spring to the mission site. Open or covered lined ditches and/or baked clay pipes, joined together with lime mortar or bitumen, gravity-fed the water into large cisterns and fountains, and emptied into waterways where the force of the water was used to turn grinding wheels and other simple machinery, or was dispensed for use in cleaning. Water used for drinking and cooking was allowed to trickle through alternate layers of sand and charcoal to remove the impurities. One of the best-preserved mission water systems is at Mission Santa Barbara.
Present-day California missions
No other group of structures in the United States elicits such intense interest as the Missions of California (California is home to the greatest number of well-preserved missions found in any U.S. state).[notes 35] The missions are collectively the best-known historic element of the coastal regions of California:
- Most of the missions are still owned and operated by some entity within the Catholic Church.
- Four of the missions are still run under the auspices of the Franciscan Order (San Antonio de Padua, Santa Barbara, San Miguel Arcángel, and San Luis Rey de Francia)
- Four of the missions (San Diego de Alcalá, San Carlos Borromeo de Carmelo, San Francisco de Asís, and San Juan Capistrano) have been designated minor basilicas by the Holy See due to their cultural, historic, architectural, and religious importance.
- Mission La Purísima Concepción, Mission San Francisco Solano, and the one remaining mission-era structure of Mission Santa Cruz are owned and operated by the California Department of Parks and Recreation as State Historic Parks;
- Seven mission sites are designated National Historic Landmarks, fourteen are listed in the National Register of Historic Places, and all are designated as California Historical Landmarks for their historic, architectural, and archaeological significance.
Because virtually all of the artwork at the missions served either a devotional or didactic purpose, there was no underlying reason for the mission residents to record their surroundings graphically; visitors, however, found them to be objects of curiosity. During the 1850s a number of artists found gainful employment as draftsmen attached to expeditions sent to map the Pacific coastline and the border between California and Mexico (as well as plot practical railroad routes); many of the drawings were reproduced as lithographs in the expedition reports.
In 1875 American illustrator Henry Chapman Ford began visiting each of the twenty-one mission sites, where he created a historically important portfolio of watercolors, oils, and etchings. His depictions of the missions were (in part) responsible for the revival of interest in the state's Spanish heritage, and indirectly for the restoration of the missions. The 1880s saw the appearance of a number of articles on the missions in national publications and the first books on the subject; as a result, a large number of artists did one or more mission paintings, though few attempted a series.
The popularity of the missions also stemmed largely from Helen Hunt Jackson's 1884 novel Ramona and the subsequent efforts of Charles Fletcher Lummis, William Randolph Hearst, and other members of the "Landmarks Club of Southern California" to restore three of the southern missions in the early 20th century (San Juan Capistrano, San Diego de Alcalá, and San Fernando; the Pala Asistencia was also restored by this effort).[notes 36] Lummis wrote in 1895,
In ten years from now—unless our intelligence shall awaken at once—there will remain of these noble piles nothing but a few indeterminable heaps of adobe. We shall deserve and shall have the contempt of all thoughtful people if we suffer our noble missions to fall.
In acknowledgement of the magnitude of the restoration efforts required and the urgent need to have acted quickly to prevent further or even total degradation, Lummis went on to state,
It is no exaggeration to say that human power could not have restored these four missions had there been a five-year delay in the attempt.
In 1911 author John Steven McGroarty penned The Mission Play, a three-hour pageant describing the California missions from their founding in 1769 through secularization in 1834, and ending with their "final ruin" in 1847.
Today, the missions exist in varying degrees of architectural integrity and structural soundness. The most common extant features at the mission grounds include the church building and an ancillary convento (convent) wing. In some cases (in San Rafael, Santa Cruz, and Soledad, for example), the current buildings are replicas constructed on or near the original site. Other mission compounds remain relatively intact and true to their original, Mission Era construction.
A notable example of an intact complex is the now-threatened Mission San Miguel Arcángel: its chapel retains the original interior murals created by Salinan Indians under the direction of Esteban Munras, a Spanish artist and last Spanish diplomat to California. This structure was closed to the public from 2003 to 2009 due to severe damage from the San Simeon earthquake. Many missions have preserved (or in some cases reconstructed) historic features in addition to chapel buildings.
The missions have earned a prominent place in California's historic consciousness, and a steady stream of tourists from all over the world visit them. In recognition of that fact, on November 30, 2004 President George W. Bush signed HR 1446, the California Mission Preservation Act, into law. The measure provided $10 million over a five-year period to the California Missions Foundation for projects related to the physical preservation of the missions, including structural rehabilitation, stabilization, and conservation of mission art and artifacts. The California Missions Foundation, a volunteer, tax-exempt organization, was founded in 1998 by Richard Ameil, an eighth generation Californian. A change to the California Constitution has also been proposed that would allow the use of State funds in restoration efforts.
Legacy and Native American controversy
There is controversy over the California Department of Education's treatment of the missions in the Department's elementary curriculum; in the tradition of historical revisionism, it has been alleged that the curriculum "waters down" the harsh treatment of Native Americans. Modern anthropologists cite a cultural bias on the part of the missionaries that blinded them to the natives' plight and caused them to develop strong negative opinions of the California Indians.[notes 37] European diseases that the California Native Americans had no immunity to caused a significant population reduction from the first encounter through the 19th century.
On California history:
- Juan Bautista de Anza National Historic Trail
- History of California through 1899
- California 4th Grade Mission Project
- History of the west coast of North America
On general missionary history:
On colonial Spanish American history:
- List of the oldest churches in Mexico
- Spanish colonization of the Americas
- Indian Reductions
- California mission clash of cultures
- Native Americans in the United States
- The Spanish claim to the Pacific Northwest dated back to a 1493 papal bull (Inter caetera) and rights contained in the 1494 Treaty of Tordesillas; in these two formal acts, Spain gave itself the exclusive right to colonize all of the Western Hemisphere (excluding Brazil), including all of the west coast of North America.
- The term Alta California as applies to the mission chain founded by Serra refers specifically to the modern-day United States State of California.
- Leffingwell: The Rev. Antonio de la Ascensión, a Carmelite who visited San Diego with Vizcaíno's 1602 expedition, "surveyed the area and concluded that the land was fertile, the fish plentiful, and gold abundant." Ascensión was convinced that California's potential wealth and strategic location merited colonization, and in 1620 recommended in a letter to Madrid that missions be established in the region, a venture that would involve military as well as religious personnel.
- Chapman: "It is usually stated that the Spanish court at Madrid received reports about Russian aggression in the Pacific northwest, and sent orders to meet them by the occupation of Alta California, wherefore the expeditions of 1769 were made. This view contains only a smattering of the truth. It is evident from [José de] Gálvez's correspondence of 1768 that he and [Carlos Francisco de] Croix had discussed the advisability of an immediate expedition to Monterey, long before any word came from Spain about the Russian activities."
- Bennett: California had been visited a number of times since Cabrillo's discovery in 1542, including notable expeditions led by the Englishmen Francis Drake in 1579 and Thomas Cavendish in 1587, and later by Woodes Rogers (1710), George Shelvocke (1719), James Cook (1778), and finally George Vancouver in 1792. Spanish explorer Sebastián Vizcaíno made landfall in San Diego Bay in 1602, and the famed conquistador Hernán Cortés had explored the California Gulf Coast in 1535.
- Bennett: "Other pioneers have blazed the way for civilization by the torch and the bullet, and the red man has disappeared before them; but it remained for the Spanish priests to undertake to preserve the Indian and seek to make his existence compatible with a higher civilization."
- Kroeber: "In the matter of population, too, the effect of Caucasian contact cannot be wholly slighted, since all statistics date from a late period. The disintegration of Native numbers and Native culture have proceeded hand in hand, but in very different rations according to locality. The determination of population strength before the arrival of whites is, on the other hand, of considerable significance toward the understanding of Indian culture, on account of the close relations which are manifest between type of culture and density of population."
- Chapman, p. 383: "...there may have been about 133,000 [Native inhabitants] in what is now the state as a whole, and 70,000 in or near the conquered area. The missions included only the Indians of given localities, though it is true that they were situated on the best lands and in the most populous centres. Even in the vicinity of the missions, there were some unconverted groups, however." See Population of Native California.
- Bennett: Due to the isolation of the Baja California missions, the decree for expulsion did not arrive in June 1767, as it did in the rest of New Spain, but was delayed until the new governor, Portolà, arrived with the news on November 30. Jesuits from the operating missions gathered in Loreto, whereupon they left for exile on February 3, 1768.
- Engelhardt: Today, the site (located at Marine Corps Base Camp Pendleton in San Diego County) is in Los Christianitos ("The Little Christians") Canyon, and is designated as La Christiana, California Historical Landmark #562.
- Kroeber: "Some of the missionaries evidently regarded compliance with the instructions of the questionnaire as an official requirement which was perfunctorily performed. In many cases no answers were given various questions at certain of the missions."
- There is a great contrast between the legacy of Bouchard in Argentina versus his reputation in the United States. In Buenos Aires, Bouchard is honored as a brave patriot, while in California he is most often remembered as a pirate, and not a privateer. See Hippolyte Bouchard.
- Hittell: "...it [Mission San Francisco Solano] was quite frequently known as the mission of Sonoma. From the beginning it was rather a military than a religious establishment—a sort of outpost or barrier, first against the Russians and afterwards against the Americans; but still a large adobe church was built and Indians were baptized."
- Hittell: "By that time, it was found that the Russians were not such undesirable neighbors as in 1817 it was thought they might become...the Russian scare, for the time being at least was over; and as for the old enthusiasm for new spiritual conquests, there was none left."
- Bennett 1897b, p. 154: "Up to 1817 the 'spiritual conquest' of California had been confined to the territory south of San Francisco Bay. And this, it might be said, was as far as possible under the mission system. There had been a few years prior to that time certain alarming incursions of the Russians, which distressed Spain, and it was ordered that missions be started across the bay."
- Chapman: "...the Russians and the English were by no means the only foreign peoples who threatened Spain's domination of the Pacific coast. The Indians and the Chinese had their opportunity before Spain appeared upon the scene. The Japanese were at one time a potential concern, and the Portuguese and Dutch voyagers occasionally gave Spain concern. The French for many years were the most dangerous enemy of all, but with their disappearance from North America in 1763, as a result of their defeat in the Seven Years' War, they were no longer a menace. The people of the United States were eventually to become the most powerful outstanding element."
- Robinson: The cortes (legislature) of New Spain issued a decree in 1813 for at least partial secularization that affected all missions in America and was to apply to all outposts that had operated for ten years or more; however, the decree was never enforced in California.
- Catholic historian Zephyrin Engelhardt referred to Echeandía as "...an avowed enemy of the religious orders."
- Settlers made numerous false claims to diminish the natives' abilities: "The Indians are by nature slovenly and indolent," stated one newcomer. "They have unfeelingly appropriated the region," claimed another.
- Yenne: In 1833, Figueroa replaced the Spanish-born Franciscan padres at all of the settlements north of Mission San Antonio de Padua with Mexican-born Franciscan priests from the College of Guadalupe de Zacatecas. In response, Father-Presidente Narciso Durán transferred the headquarters of the Alta California Mission System to Mission Santa Bárbara, where it remained until 1846.
- : "By that time, it was found that the Russian colonies were not such undesirable neighbors as in 1817 it was thought they might become... the Russian scare, for the time being at least was over; and as for the old enthusiasm for new spiritual conquests, there was none left."
- Bennett: "...Junípero had in California insisted that the military should be subservient to the priests, that the conquest was spiritual, not temporal..."
- Engelhardt: "Recruited from the scum of society in Mexico, frequently convicts and jailbirds, it is not surprising that the mission guards, leather-jacket soldiers, as they were called, should be guilty of...crimes at nearly all the Missions...In truth, the guards counted among the worst obstacles to missionary progress. The wonder is, that the missionaries nevertheless succeeded so well in attracting converts."
- Engelhardt: One such hypothesis was put forth by author Prent Duel in his 1919 work Mission Architecture as Exemplified in San Xavier Del Bac: "Most missions of early date possessed secret passages as a means of escape in case they were besieged. It is difficult to locate any of them now as they are well concealed."
- Chapman: "Latter-day historians have been altogether too prone to regard the hostility to the Spaniards on the part of the California Indians as a matter of small consequence, since no disaster in fact ever happened...On the other hand the San Diego plot involved untold thousands of Indians, being virtually a national uprising, and owing to the distance from New Spain to and the extreme difficulty of maintaining communications a victory for the Indians would have ended Spanish settlement in Alta California." As it turned out, "...the position of the Spaniards was strengthened by the San Diego outbreak, for the Indians felt from that time forth that it was impossible to throw out their conquerors." See also Mission Puerto de Purísima Concepción and Mission San Pedro y San Pablo de Bicuñer regarding the Yuma 'massacres' of 1781.
- Engelhardt: Not all of the native cultures responded with hostility to the Spaniards' presence; Engelhardt portrayed the natives at Mission San Juan Capistrano (dubbed the "Juaneño" by the missionaries), where there was never any instance of unrest, as being "uncommonly friendly and docile." The Rev. Juan Crespí, who accompanied the 1769 expedition, described the first encounter with the area's inhabitants: "They came unarmed and with a gentleness which has no name they brought their poor seeds to us as gifts...The locality itself and the docility of the Indians invited the establishment of a Mission for them."
- Chapman: "Over the hills of the Coast Range, in the valleys of the Sacramento and San Joaquin, north of San Francisco Bay, and in the Sierra Nevadas of the south there were untold thousands whom the mission system never reached...they were as if in a world apart from the narrow strip of coast which was all there was of the Spanish California."
- Bennett: "The system had singularly failed in its purposes. It was the design of the Spanish government to have the missions educate, elevate, civilize, the Indians into citizens. When this was done, citizenship should be extended them and the missions should be dissolved as having served their purpose...[instead] the priests returned them projects of conversion, schemes of faith, which they never comprehended...He [the Indian] became a slave; the mission was a plantation; the friar was a taskmaster."
- Bennett: "In 1825 Governor Argüello wrote that the slavery of the Indians at the missions was bestial...Governor Figueroa declared that the missions were 'entrenchments of monastic despotism'..."
- Bennett: "It cannot be said that the mission system made the Indians more able to sustain themselves in civilization than it had found them...Upon the whole it may be said that this mission experiment was a failure."
- Lippy: "A matter of debate in reflecting on the role of Spanish missions concerns the degree to which the Spanish colonial regimes regarded the work of the priests as a legitimate religious enterprise and the degree to which it was viewed as a 'frontier institution,' part of a colonial defense program. That is, were Spanish motives based on a desire to promote conversion or on a desire to have religious missions serve as a buffer to protect the main colonial settlements and an aid in controlling the Indians?"
- Bennett: The missions in effect served as "...the citadels of the theocracy which was planted in California by Spain, under which its wild inhabitants were subjected, which stood as their guardians, civil and religious, and whose duty it was to elevate them and make them acceptable as citizens and Spanish subjects...it remained for the Spanish priests to undertake to preserve the Indian and seek to make his existence compatible with higher civilization."
- Bean: "Serra's decision to plant tobacco at the missions was prompted by the fact that from San Diego to Monterey the natives invariably begged him for Spanish tobacco."
- Morrison: That the buildings in the California mission chain are in large part intact is due in no small measure to their relatively recent construction; Mission San Diego de Alcalá was founded more than two centuries after the establishment of the Mission of Nombre de Dios in St. Augustine, Florida in 1565 and 170 years following the founding of Mission San Gabriel del Yunque in present-day Santa Fe, New Mexico in 1598.
- Thompson: In the words of Charles Lummis, the historic structures "...were falling to ruin with frightful rapidity, their roofs being breached or gone, the adobe walls melting under the winter rains."
- Hittell: "Boscana himself and his brother missionaries were men of narrow range of thought, continually seeking among the superstitions of the natives for resemblances of the true faith and ever ready to catch at the slightest hints and magnify them into complicated dogmas corresponding afar of those which they themselves taught."
- Saunders and Chase, p. 65
- Kelsey, p. 18
- Leffingwell, p. 10
- Winship. pp. 32–4, 37
- Flint, R. (Winter 2005). "What They Never Told You about the Coronado Expedition". Kiva 71 (2): 203–217. doi:10.2307/30246725 (inactive 2015-02-08). JSTOR 30246725.
- Kelsey, Harry (1986). Juan Rodríguez Cabrillo. San Marino: The Huntington Library.
- Kelsey, Harry. "The Queen's Pirate". The New York Times. Retrieved 11 December 2015.
- "Drake Claims California for England". History.com. Retrieved 11 December 2015.
- Morrison, p. 214
- Frost, Orcutt William, ed. (2003), Bering: The Russian Discovery of America, New Haven, Connecticut: Yale University Press, ISBN 0-300-10059-0
- Chapman, p. 216
- Bennett 1897a, pp. 11–12
- Rawls, p. 3
- Bennett 1897a, p. 10
- "Old Mission Santa Inés:" Clerical historian Maynard Geiger, "This was to be a cooperative effort, imperial in origin, protective in purpose, but primarily spiritual in execution."
- Chapman, Charles E., Ph.D. (1921). A History of California; The Spanish Period. The MacMillan Company, New York. ISBN 978-1148507927.
- Orfalea, Gregory. "Hungry for Souls Was Junípero Serra a Saint?". Commonweal magazine. Retrieved 11 December 2015.
- Rawls, p. 6
- Kroeber 1925, p. vi.
- Bennett, p. 15
- Bennett 1897a, p. 16
- James, p. 11
- Engelhardt 1922, p. 258
- Yenne, p. 10
- Leffingwell, p. 25
- Engelhardt 1920, p. 76
- Robinson, p. 28
- Engelhardt 1908, pp. 3–18
- Bennett 1897a, p. 13
- Rawls, p. 106
- Milliken, pp. 172–173, 193
- Kroeber, p. 1
- Kroeber, p. 2
- Kelsey, p. 4
- Nordlander, p. 10
- Jones, p. 170
- Young, p. 102
- Hittell, p. 499
- Chapman, pp. 254–255
- Bacich, Damian. "The Zacatecan Franciscans in Alta California: A Misunderstood Legacy." Boletín: Journal of the California Mission Studies Association, Vol. 28, Nos. 1&2, 2011-12
- Robinson, p. 29
- Engelhardt 1922, p. 80
- Bancroft, vol. i, pp. 100–101: The motives behind the issuance of Echeandía's premature decree may have had more to do with his desire to appease "...some prominent Californians who had already had their eyes on the mission lands..." than with concern for the welfare of the natives.
- Stern and Miller, pp. 51–52
- Forbes, p. 201: In 1831, the number of Indians under missionary control in all of Upper California stood at 18,683; garrison soldiers, free settlers, and "other classes" totaled 4,342.
- Kelsey, p. 21
- Bancroft, vol. iii, pp. 322; 626
- Engelhardt 1922, p. 223
- Yenne, pp. 18–19
- Engelhardt 1922, p. 114
- Yenne, pp. 83, 93
- Robinson, p. 42
- Cook, p. 200
- James, p. 215
- Engelhardt 1922, p. 248
- Bancroft, H. H. (1886). The Works of Hubert Howe Bancroft: History of California, vol. IV, 1840–1845, pp. 73–74. San Francisco, Calif.: A.L. Bancroft.
- Robinson, p. 14
- Robinson, p. 100
- Robinson, pp. 31–32: The area shown is that stated in the Corrected Reports of Spanish and Mexican Grants in California Complete to February 25, 1886 as a supplement to the Official Report of 1883–1884. Patents for each mission were issued to Archbishop J.S. Alemany based on his claim filed with the Public Land Commission on February 19, 1853.
- Rawls, pp. 112–113
- Capron, p. 3
- Bancroft, pp. 33–34
- Young, p. 17
- Robinson, p. 25
- Yenne, Bill (2004). The Missions of California. Advantage Publishers Group, San Diego, California. ISBN 1-59223-319-8.
- Bennett, John E. (January 1897a). "Should the California Missions Be Preserved? – Part I". Overland Monthly XXIX (169): 9–24.
- Markham, Edwin (1914). California the Wonderful: Her Romantic History, Her Picturesque People, Her Wild Shores... Hearst's International Library Company, Inc., New York.
- Riesenberg, Felix (1962). The Golden Road: The Story of California's Spanish Mission Trail. McGraw-Hill, New York. ISBN 0-07-052740-7.
- Engelhardt 1920, p. 228
- Leffingwell, p. 22
- Forbes, p. 202: In 1831, the number of Indians under missionary control stood at 6,465; garrison soldiers totaled 796.
- Leffingwell, p. 68
- Forbes, p. 202: In 1831, the number of Indians under missionary control stood at 3,292; garrison soldiers totaled 613; the population of El Pueblo de los Ángeles numbered 1,388.
- Leffingwell, p. 119
- Forbes, p. 202: In 1831, the number of Indians under missionary control stood at 3,305; garrison soldiers totaled 708; the population of Villa Branciforte numbered 130.
- Leffingwell, p. 154
- Forbes, p. 202: In 1831, the number of Indians under missionary control stood at 5,433; garrison soldiers totaled 371; the population of El Pueblo de San José numbered 524.
- Leffingwell, p. 170
- Paddison, p. 23
- Bennett 1897a, p. 20
- Engelhardt 1922, pp. 8–10
- Yenne, p. 186
- Ruscin, p. 196
- Ruscin, p. 61
- Chapman, p. 418: Chapman does not consider the sub-missions (asistencias) that make up the inland chain in this regard.
- Engelhardt 1920, pp. 350–351
- Ruscin, p. 12
- Paddison, p. 48
- Chapman, pp. 310–311
- Engelhardt 1922, p. 12
- Rawls, pp. 14–16
- Leffingwell, pp. 19, 132
- Bennett 1897a, p. 20: Priests were paid an annual salary of $400.
- Carey McWilliams. Southern California: An Island on the Land
- Chapman, p. 383
- Paddison, p. 130
- Newcomb, p. viii
- Krell, p. 316
- Engelhardt 1922, p. 30
- Bennett 1897b, p. 156
- Bennett 1897b, p. 158
- Bennett 1897b, p. 160: "The fathers claimed all the land in California in trust for the Indians, yet the Indians received no visible benefit from the trust."
- Lippy, p. 47
- Paddison, p. xiv
- A. Thompson, p. 341
- Bean and Lawson, p. 37
- A fanega is equal to 100 pounds.
- Krell, p. 316: As of December 31, 1832.
- California Native Grass Association
- Engelhardt 1922, p. 211
- Mission Historical Park - City of Santa Barbara
- Young, p. 18
- Stern and Miller, p. 85
- Stern and Neuerburg, p. 95
- Thompson, Mark, pp. 185–186
- "Past Campaigns"
- Stern and Miller, p. 60
- California Missions Preservation Act
- Coronado and Ignatin
- McKanna, p. 15; also, per Hittell, p. 753
- Bancroft, Hubert Howe (1886). History of California, Volume II (1801–1894). The History Company, San Francisco, California.
- Bean, Lowell John and Harry Lawton (1976). Native Californians: A Theoretical Perspective. Ballena Press, Banning, California.
- Bennett, John E. (January 1897a). "Should the California Missions Be Preserved? – Part I". Overland Monthly XXIX (169): 9–24.
- Bennett, John E. (February 1897b). "Should the California Missions Be Preserved? – Part II". Overland Monthly XXIX (170): 150–161.
- Capron, E.S. (1854). History of California from its Discovery to the Present Time. John P. Jewett & Company, Cleveland, Ohio.
- Chapman, Charles E., Ph.D. (1921). A History of California; The Spanish Period. The MacMillan Company, New York.
- Cook, Sherburne F., Ph.D. (1976). The Population of the California Indians, 1769–1970. University of California Press, Berkeley, California. ISBN 0-520-02923-2.
- Coronado, Michael; Heather Ignatin (June 5, 2006). "Plan would open Prop. 40 funds to missions". The Orange County Register. Retrieved 2008-03-08.
- Engelhardt, Zephyrin, O.F.M. (1908). The Missions and Missionaries of California, Volume One. The James H. Barry Co., San Francisco, California.
- Engelhardt, Zephyrin, O.F.M. (1920). San Diego Mission. James H. Barry Company, San Francisco, California.
- Engelhardt, Zephyrin, O.F.M. (1922). San Juan Capistrano Mission. Standard Printing Co., Los Angeles, California.
- Forbes, Alexander (1839). California: A History of Upper and Lower California. Smith, Elder and Co., Cornhill, London. ISBN 0-405-04972-2.
- Geiger, Maynard J., O.F.M., Ph.D. (1969). Franciscan Missionaries in Hispanic California, 1769–1848: A Biographical Dictionary. Huntington Library, San Marino, California.
- Harley, R. Bruce (1997–2003). "The San Bernardino Asistencias". California Mission Studies Association. Archived from the original on 2006-06-13. Retrieved 2006-11-21.
- Hittell, Theodore H. (1898). History of California, Volume I. N.J. Stone & Company, San Francisco, California.
- James, George Wharton (1913). The Old Franciscan Missions Of California. Little, Brown, and Co. Inc., Boston, Massachusetts. ISBN 0-89341-321-6.
- Jones, Roger W. (1997). California from the Conquistadores to the Legends of Laguna. Rockledge Enterprises, Laguna Hills, California.
- Jones, Terry L.; Kathryn A. Klar (2005). "Linguistic Evidence for a Prehistoric Polynesia-Southern California Contact Event". Anthropological Linguistics (47): 369–400.
- Jones, Terry L. and Kathryn A. Klar (eds.) (2007). California Prehistory: Colonization, Culture, and Complexity. Altimira Press, Landham, Maryland. ISBN 0-7591-0872-2.
- Kelsey, H. (1993). Mission San Juan Capistrano: A Pocket History. Interdisciplinary Research, Inc., Altadena, California. ISBN 0-9785881-0-X.
- Krell, Dorothy (ed.) (1979). The California Missions: A Pictorial History. Sunset Publishing Corporation, Menlo Park, California. ISBN 0-376-05172-8.
- Kroeber, Alfred L. (1908). "A Mission Record of the California Indians". University of California Publications in American Archaeology and Ethnology 8 (1): 1–27.
- Kroeber, Alfred L. (1925). Handbook of the Indians of California. Dover Publications, Inc., New York. ISBN 0-486-23368-5.
- Leffingwell, Randy (2005). California Missions and Presidios: The History & Beauty of the Spanish Missions. Voyageur Press, Inc., Stillwater, Minnesota. ISBN 0-89658-492-5.
- Lippy, Charles H. (1985). Bibliography of Religion in the South. Mercer University Press, Macon, Georgia. ISBN 0-86554-161-2.
- Markham, Edwin (1914). California the Wonderful: Her Romantic History, Her Picturesque People, Her Wild Shores... Hearst's International Library Company, Inc., New York.
- Margolin, Malcolm (1993). The Way We Lived: California Indian Stories, Songs & Remembrances. Heyday Books, Berkeley, California. ISBN 0-930588-55-X.
- McKanna, Clare Vernon (2002). Race and Homicide in Nineteenth-Century California. University of Nevada Press, Reno, Nevada. ISBN 0-87417-515-1.
- Milliken, Randall (1995). A Time of Little Choice: The Disintegration of Tribal Culture in the San Francisco Bay Area 1769–1910. Ballena Press, Menlo Park, California. ISBN 0-87919-132-5.
- Morrison, Hugh (1987). Early American Architecture: From the First Colonial Settlements to the National Period. Dover Publications, New York. ISBN 0-486-25492-5.
- Newcomb, Rexford (1973). The Franciscan Mission Architecture of Alta California. Dover Publications, Inc., New York. ISBN 0-486-21740-X.
- Nordlander, David J. (1994). For God & Tsar: A Brief History of Russian America 1741–1867. Alaska Natural History Association, Anchorage, AK. ISBN 0-930931-15-7.
- Oakley, Kenneth P. (September 1963). "Relative Dating of Arlington Springs Man". Science 141 (3586): 1172. doi:10.1126/science.141.3586.1172. PMID 14043359.
- Paddison, Joshua (ed.) (1999). A World Transformed: Firsthand Accounts of California Before the Gold Rush. Heyday Books, Berkeley, California. ISBN 1-890771-13-9.
- "Past Campaigns". California Mission Studies Association. 2000. Retrieved 2007-07-08.
- "The Pious Fund of the Californias". Catholic Encyclopedia. 1911. Retrieved 2007-07-08.
- "Pre-Mission History". Old Mission Santa Inés. 2007. Retrieved 2007-08-26.
- Rawls, James J. (1984). Indians of California: The Changing Image. University of Oklahoma Press, Norman, Oklahoma. ISBN 0-8061-2020-7.
- Riesenberg, Felix (1962). The Golden Road: The Story of California's Spanish Mission Trail. McGraw-Hill, New York. ISBN 0-07-052740-7.
- Robinson, W.W. (1948). Land in California. University of California Press, Berkeley and Los Angeles, California. ISBN 0-520-03875-4.
- Ruscin, Terry (1999). Mission Memoirs. Sunbelt Publications, San Diego, California. ISBN 0-932653-30-8.
- Saunders, Charles Francis and J. Smeaton Chase (1915). The California Padres and Their Missions. Houghton Mifflin, Boston and New York. ISBN 0-910118-53-1.
- Stern, Jean and Gerald J. Miller (1995). Romance of the Bells: The California Missions in Art. The Irvine Museum, Irvine, California. ISBN 0-9635468-5-6.
- Thompson, Anthony W., Robert J. Church, and Bruce H. Jones (2000). Pacific Fruit Express. Signature Press, Wilton, California. ISBN 1-930013-03-5.
- Thompson, Mark (2001). American Character: The Curious Life of Charles Fletcher Lummis and the Rediscovery of the Southwest. Arcade Publishing, New York. ISBN 1-55970-550-7.
- Vancouver, George (1801). A Voyage of Discovery to the North Pacific Ocean and Round the World, Volume III. Printed for John Stockdale, Piccadilly, London.
- Yenne, Bill (2004). The Missions of California. Advantage Publishers Group, San Diego, California. ISBN 1-59223-319-8.
- Young, S., and Levick, M. (1988). The Missions of California. Chronicle Books LLC, San Francisco, California. ISBN 0-8118-1938-8.
- Baer, Kurt (1958). Architecture of the California Missions. University of California Press, Los Angeles, California.
- Berger, John A. (1941). The Franciscan Missions of California. G.P. Putnam's Sons, New York.
- Carillo, J. M., O.F.M. (1967). The Story of Mission San Antonio de Padua. Paisano Press, Inc., Balboa Island, California.
- Camphouse, M. (1974). Guidebook to the Missions of California. Anderson, Ritchie & Simon, Los Angeles, California. ISBN 0-378-03792-7.
- Crespí, Juan: A Description of Distant Roads: Original Journals of the First Expedition into California, 1769–1770, edited and translated by Alan K. Brown, San Diego State University Press, 2001, ISBN 978-1-879691-64-3
- Crump, S. (1975). California's Spanish Missions: Their Yesterdays and Todays. Trans-Anglo Books, Del Mar, California. ISBN 0-87046-028-5.
- Drager, K., and Fracchia, C. (1997). The Golden Dream: California from Gold Rush to Statehood. Graphic Arts Center Publishing Company, Portland, Oregon. ISBN 1-55868-312-7.
- Johnson, P., ed. (1964). The California Missions. Lane Book Company, Menlo Park, California.
- Moorhead, Max L. (1991). The Presidio: Bastion Of The Spanish Borderlands. University of Oklahoma Press, Norman, Oklahoma. ISBN 0-8061-2317-6.
- Rawls, J. and Bean, W. (1997). California: An Interpretive History. McGraw-Hill, New York. ISBN 0-07-052411-4.
- Robinson, W.W. (1953). Panorama: A Picture History of Southern California. Anderson, Ritchie & Simon, Los Angeles, California.
- Weitze, Karen J. (1984). California's Mission Revival. Hennessy & Ingalls, Inc., Los Angeles, California. ISBN 0-912158-89-1.
- Wright, Ralph B., Ed. (1984). California's Missions. Lowman Publishing Company, Arroyo Grande, California.
Articles and archives
- Early California Population Project (ECPP) The Huntington Library, 2006. Provides public access to all the information contained in California's historic mission registers.
- California Missions article at the Catholic Encyclopedia
- The California Missions, 2001.
- Matrimonial Investigation records of the San Gabriel Mission Claremont Colleges Digital Library, 2008, 169 records digitized and searchable by priest name or by the names of the couple requesting marriage.
- Junipero Serra, the Vatican, & Enslavement Theology Preview of Fogel, Daniel. ISM Press Books. Offers a critical perspective on the missions' impact on California's Indians.
- MissionTour Tom Simondi, 2001–2005.
- The Old Franciscan Missions of California James, George Wharton, 1913. eText at Project Gutenberg.
- The San Diego Founders Trail 2001–2008 website.
- Trails and Roads: El Camino Real Faigin, Daniel P. California Highways, 1996–2004
- Almanac: California Missions GAzis-SAx, Joel, 1999.
Wikimedia Commons has media related to California missions.
- California Mission Studies Association
- California's Spanish Missions
- Library of Congress: American Memory Project: Early California History, The Missions
- Tricia Anne Weber: The Spanish Missions of California
- Album of Views of the Missions of California, Souvenir Publishing Company, San Francisco, Los Angeles, 1890's.
- The Missions of California, by Eugene Leslie Smyth, Chicago: Alexander Belford & Co., 1899.
- California Historical Society
- California Mission Visitors Guide
- National Register of Historic Places: Early History of the California Coast: List of Sites
- California Mission Sketches by Henry Miller, 1856 and Finding Aid to the Documents relating to Missions of the Californias : typescript, 1768–1802 at The Bancroft Library
- Howser, Huell (December 8, 2000). "Art of the Missions (110)". California Missions. Chapman University Huell Howser Archive. | https://en.wikipedia.org/wiki/Spanish_Missions_of_California |
4.03125 |
Rosetta is the European Space Agency's first cornerstone mission in planetary exploration. Its mission is to orbit and land on a comet, following gravity-assist flybys of inner planets (Earth three times and Mars once) and scientific flybys of asteroids. It was launched successfully on 2 March 2004, bound for comet 67P/Churyumov-Gerasimenko.
Rosetta is an ambitious and comprehensive mission consisting of two spacecraft - an orbiter and a lander. The orbiter will be delivered to a comet, where it will become the first spacecraft to go into orbit around a cometary nucleus. The first encounter will be far out in the comet's orbit where it is cold and inactive. The lander will be deployed soon afterwards, to make the first ever measurements on the surface of a comet.
The orbiter will travel with the target comet as it nears the Sun, the nucleus warms, material boils away, and activity develops. Close to the Sun, comets have two tails - a dust tail and a plasma tail, which can be millions of km long. Rosetta will make detailed close-up studies of the comet's nucleus, including the development of activity, imaging studies, composition measurements, and dust studies.
Comets are important to study as they are ancient objects, almost unchanged since the beginning of the solar system 4.6 billion years ago. They are often called the building blocks of the outer solar system, and are the nearest surviving objects to the early 'planetesimals'. Comets are thought to be stored in two 'reservoirs': the Oort cloud and the Kuiper belt. Every so often comets are nudged inwards in their orbits, passing through the inner solar system and in some cases being trapped in a lower solar orbit by Jupiter. Some comets, likely to be from the Oort cloud, have orbits with 'long' periods (eg Halley 76 years, Hale Bopp thousands of years) and others such as Wirtanen (Rosetta's original target) and Grigg-Skjellerup have interacted significantly with Jupiter and have 'short' periods near 5 years.
Comets are also important due to their role early in the solar system of bombarding the Earth's and other atmospheres with volatile substances like water and carbon-based compounds. There is evidence that at the end of the solar system's early bombardment phase, 4.6 to 3.8 billion years ago, the composition of Earth's atmosphere bore some similarities to the composition of comets. Comets therefore may have played an important role in bringing water and other volatiles to Earth and other inner solar system objects, in addition to outgassing of volatile material from the forming planets. They are also important because of collisions which can still happen, as seen with comet Shoemaker-Levy 9 hitting Jupiter in 1994.
In space exploration terms the missions to Giacobini-Zinner (ICE), Halley (Giotto, Vega-1 and 2, Suisei and Sakigake) and Grigg-Skjellerup (Giotto) started cometary exploration in the mid-1980s and early 1990s. The 2001 encounter with Borrelly (Deep Space 1) provided some more data, and Stardust flew by comet Wild-2 in 2004. Stardust images showed only the third cometary nucleus, after Halley and Borrelly, seen by humankind, and will return some cometary dust to Earth in 2006. The imminent NASA mission to Tempel 1 (Deep Impact, launch January 2005) will also provide vital and complementary data. But ESA's Rosetta has a relatively broad suite of scientific objectives compared to these smaller missions with more tightly focussed scientific aims. Rosetta will take cometary exploration to the next stage by orbiting and landing on a comet.
At MSSL we have studied data from our Johnstone Plasma Analyser on Giotto, which visited Halley and Grigg Skjellerup, providing key information on how the ion tail forms, via 'ion pickup', at two different comets with a factor 100 difference in gas production rate. This process is also important in many other solar system contexts, including the 'scavenging' of the Mars and Venus atmospheres by the solar wind, and even in fusion experiments on Earth.
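As a rough illustration of what 'ion pickup' means energetically (this worked example is added here for readers and is not taken from the Giotto papers themselves): a cometary neutral ionized in a solar wind of speed $v_{sw}$, with the magnetic field roughly perpendicular to the flow, begins to E×B drift and gyrate, so its speed in the comet frame cycles between $0$ and $2v_{sw}$. Its kinetic energy therefore peaks at

$$E_{\max} = \tfrac{1}{2}\, m_i\, (2 v_{sw})^2 = 2\, m_i\, v_{sw}^2 .$$

For a water-group ion ($m_i \approx 18\,\mathrm{u}$) in a typical $400\ \mathrm{km\,s^{-1}}$ solar wind this gives roughly $2 \times (18 \times 1.66\times10^{-27}\,\mathrm{kg}) \times (4\times10^{5}\,\mathrm{m\,s^{-1}})^2 \approx 1\times10^{-14}\,\mathrm{J} \approx 60\ \mathrm{keV}$ - far above the ~1 keV of solar wind protons, which is one reason freshly implanted cometary ions stand out so clearly to a plasma analyser.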
MSSL role in Rosetta
Andrew Coates of MSSL is a co-investigator in the Rosetta Plasma Consortium, and leads the science team for this set of five complementary sensors on the Rosetta Orbiter. The sensor teams are led by IRF-Uppsala and Kiruna (S), T.U.Braunschweig (D), SwRI (USA), and LPCE (F); the plasma interface unit is from IC (UK). We expect that the RPC instrument will be one of the first to detect the signs of activity as the comet warms. The UK involvement in RPC consists of Imperial College (plasma interface unit hardware) and UCL-MSSL (science team lead).
Scientific goals of the RPC include
- Study of onset of activity of a comet, and development of its interaction with the solar wind, as the spacecraft approaches the Sun.
- The gas production will change over many orders of magnitude during the mission, changing the activity from a quiescent, bare nucleus to a complex interaction region, with a plasma tail.
- Interaction between the nucleus and its environment: studies of conductivity, magnetization, sputtering, charging, dust levitation.
- First exploration of the detailed structure of the inner coma of a comet.
- First exploration of plasma tail formation region and determination of its relationship to other plasma boundaries.
- Determination of the role of tail rays, long observed at comets but role in tail formation and relation to in-situ data not understood as yet.
- Determination of permanence of boundaries in the comet-solar wind interaction.
- Determination of particle acceleration processes near the comet.
- Study of the interaction between gas expansion, ionization, and photochemical processes.
| http://www.ucl.ac.uk/mssl/planetary-science/missions/rosetta |
4.25 | In the first half of the 20th cent., most of the Maya region looked much as it had centuries earlier. Society was divided between a commercial and administrative elite group of Spanish-speaking whites and ladinos, who resided in the larger towns, and a much larger group of Maya-speaking agriculturists, who resided in rural villages. In few areas of Latin America was a racial divide so clearly demarcated, with castelike divisions separating ladinos from the indigenous population. Although the political division between Mexico and Guatemala occurred early in the 19th cent., there were few discernible consequences prior to the years following the Mexican revolution (1910–17). At this time a land redistribution program, together with a set of legal guarantees preventing the expropriation of village lands, were applied to rural populations throughout Mexico; in contrast, no such guarantees were respected with regard to the Guatemalan population.
Demographic growth among Maya-speaking populations increasingly led to pressure on available resources, leading to widespread deforestation and erosion and forcing many groups to adopt commercial specializations to supplement income derived from agriculture. Among the better-known examples of the latter are the colorful cotton textiles produced in the Guatemalan highlands, marketed both locally and in industrialized countries. Also in Guatemala, seasonal labor on the growing number of coffee plantations along the Pacific coast became increasingly important throughout the first half of the 20th cent. Beginning in the 1930s and 40s, improved communications throughout the Maya region opened many new and often local economic opportunities for wage employment and commercial activity.
As Maya populations have become more tightly integrated into national economies, their distinctive ethnic markers, including dress, language, and religious practices, have often been abandoned, leaving increasing numbers culturally indistinguishable from the ladino population. Conversely, economically autonomous communities have used the same ethnic markers as a means of preserving the integrity of group boundaries and corporately held resources. Partly for this reason, the Guatemalan military unleashed a campaign of terror beginning in the mid-1970s, specifically targeting the indigenous population. All markers of traditional ethnic identity, including distinctive dress, language, and even Catholicism, became targets of military repression. Village lands were subject to widespread seizure, and government-sponsored resettlement programs were widely applied. In the 1970s and 80s there were tens of thousands of deaths and "disappearances" and an exodus of many hundreds of thousands, most from Maya-speaking regions, seeking sanctuary primarily in Mexico and the United States. However, over a million Maya remain in Guatemala. In Mexico, a 1994 uprising in Chiapas drew much of its strength from the support of Mayan peasants.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved. | http://www.factmonster.com/encyclopedia/society/maya-indigenous-people-mexico-central-america-the-twentieth-century.html |
4.1875 |
Methicillin-resistant Staphylococcus aureus (MRSA) is a type of staphylococcus or "staph" bacterium that is resistant to many antibiotics. Staph bacteria, like other kinds of bacteria, normally live on your skin and in your nose, usually without causing problems. But if these bacteria become resistant to antibiotics, they can cause serious infections, especially in people who are ill or weak. MRSA is different from other types of staph because it cannot be treated with certain antibiotics such as methicillin.

MRSA infections are more difficult to treat than ordinary staph infections. This is because the strains of staph known as MRSA do not respond well to many common antibiotics used to kill bacteria. When methicillin and other antibiotics do not kill the bacteria causing an infection, it becomes harder to get rid of the infection.
MRSA bacteria are more likely to develop when antibiotics are used too often or are not used correctly. Given enough time, bacteria can change so that these antibiotics no longer work well.

MRSA, like all staph bacteria, can be spread from one person to another through casual contact or through contaminated objects. It is commonly spread from the hands of someone who has MRSA. This could be anyone in a health care setting or in the community. MRSA is usually not spread through the air like the common cold or flu virus, unless a person has MRSA pneumonia and is coughing.
MRSA that is acquired in a hospital or health care setting is called healthcare-associated methicillin-resistant Staphylococcus aureus (HA-MRSA). In most cases, a person who is already sick or who has a weakened immune system becomes infected with HA-MRSA. These infections can occur in wounds or skin, burns, and IV or other sites where tubes enter the body, as well as in the eyes, bones, heart, or blood.

In the past, MRSA infected people who had chronic illnesses. But now MRSA has become more common in healthy people. These infections can occur among people who have scratches, cuts, or wounds and who have close contact with one another, such as members of sports teams. This type of MRSA is called community-associated methicillin-resistant Staphylococcus aureus (CA-MRSA).
Symptoms of a MRSA infection depend on where the infection is. If MRSA is causing an infection in a wound, that area of your skin may be red or tender. If you have pneumonia, you may develop a cough.

Community-associated MRSA commonly causes skin infections, such as cellulitis. Often, people think they have been bitten by a spider or insect. Because MRSA infections can become serious in a short amount of time, it is important to see your doctor right away if you notice a boil or other skin problem.
If your doctor thinks that you are infected with MRSA, he or she will send a sample of your infected wound, blood, or urine to a lab. The lab will grow the bacteria and then test to see which kinds of antibiotics kill the bacteria. This test may take several days.

You may also be tested if your doctor suspects that you are a MRSA carrier. A MRSA carrier is a person who has the bacteria living on the skin and in the nose but who is not sick. The test is done by taking a swab from the inside of the nose.
Depending on how serious your infection is, the doctor may drain your wound, prescribe antibiotic medicine, give you an IV (intravenous) antibiotic, or hospitalize you.

If you have a MRSA infection and need to be in a hospital, you may be isolated in a private room to reduce the chances of spreading the bacteria to others. When your doctors and nurses are caring for you, they may use extra precautions such as wearing gloves and gowns. If you have a MRSA pneumonia, they may also wear masks.
Most cases of community-associated methicillin-resistant Staphylococcus aureus (CA-MRSA) begin as mild skin infections such as pimples or boils. Your doctor may be able to treat these infections without antibiotics by using a minor surgical procedure that opens and drains the sores.

If your doctor prescribes antibiotic medicine, be sure to take all the medicine even if you begin to feel better right away. If you do not take all the medicine, you may not kill all the bacteria. No matter what your treatment, be sure to call your doctor if your infection does not get better as expected.
As more antibiotic-resistant bacteria develop, hospitals are taking extra care to practice infection control, which includes frequent hand-washing and isolation of patients who are infected with MRSA.

You can also take steps to protect yourself from MRSA. If you have an infection with MRSA, you can keep from spreading the bacteria.

If you need to go to the hospital for some reason, and you have staph bacteria living on your skin and in your nose, you may be treated to try to prevent getting or spreading a MRSA infection. You may be given an ointment to put on your skin or inside your nose. And you need to wash your skin daily with a special soap that can get rid of the bacteria.
Other Works Consulted
American Academy of Pediatrics (2012). Staphylococcal infections. In LK Pickering et al., eds., Red Book: 2012 Report of the Committee on Infectious Diseases, 29th ed., pp. 653–668. Elk Grove Village, IL: American Academy of Pediatrics.
Centers for Disease Control and Prevention (CDC) (2010). MRSA infections. Available online: http://www.cdc.gov/mrsa/index.html.
Kallen AJ, et al. (2010). Health care-associated invasive MRSA infections, 2005–2008. JAMA, 304(6): 641–647.
Liu C, et al. (2011). Clinical practice guidelines by the Infectious Diseases Society of America for the treatment of methicillin-resistant Staphylococcus aureus infections in adults and children. Clinical Infectious Diseases, 52(3): 285–292.
By Healthwise Staff. Primary Medical Reviewer: E. Gregory Thompson, MD - Internal Medicine. Specialist Medical Reviewer: Theresa O'Young, PharmD - Clinical Pharmacy.

Current as of May 22, 2015.
To learn more about Healthwise, visit Healthwise.org.
© 1995-2015 Healthwise, Incorporated. Healthwise, Healthwise for every health decision, and the Healthwise logo are trademarks of Healthwise, Incorporated. | http://www.asante.org/app/healthwise/document.aspx?navigationNode=/1/59/1/&id=tp23379spec |
4.0625 | Shift in feeding behavior of mosquitoes sheds light on West Nile virus outbreaks
Since its introduction to the United States in 1999, West Nile virus has become the major vector-borne disease in the U.S., with 770 reported deaths, 20,000 reported illnesses, and perhaps around a million people infected. The virus is transmitted by Culex mosquitoes (the "vector") and cycles between birds that the mosquitoes feed on. Humans can also be infected with the virus when bitten by these mosquitoes.
Scientists have struggled to explain these large outbreaks in the U.S., which stand in stark contrast to the sporadic European infections. In a new study published in the open access journal PLoS Biology, Drs. Marm Kilpatrick, Peter Daszak, and colleagues now present evidence that the major vector of West Nile virus in the USA, Culex pipiens mosquitoes, change their feeding behavior in the fall from their preferred host, American robins, to humans, resulting in large scale outbreaks of disease.
These feeding shifts appear to be a "continent-wide phenomenon," the researchers conclude, and may explain why West Nile virus outbreaks are so intense in the U.S. compared to Europe and Africa, where the virus originates.
From May through September 2005, Dr. Kilpatrick, senior research scientist with the Consortium for Conservation Medicine, and his team collected mosquitoes and caught birds at six sites in Maryland and Washington, D.C. They determined the changes in mosquito populations throughout the West Nile virus transmission season, the abundance and diversity of bird species at these sites, and tested samples for West Nile virus.
Dr. Kilpatrick says, "To find out which species mosquitoes favored as hosts, we collected thousands of Culex pipiens mosquitoes and selected those that had just fed and still had bloodmeals in them. We sequenced the DNA in the bloodmeal to identify the species of host they had fed on."
Their findings showed that from May to June, the American robin, which represented just 4.5% of the bird population at their sites, accounted for more than half of Culex pipiens' meals. As the summer wore on and robins left their breeding grounds, the probability that humans were fed on increased sevenfold. Because the overall number of birds increased during this time, Kilpatrick and his team concluded that mosquitoes changed to humans as a result of robin dispersal, rather than a lack of avian hosts. "This feeding shift happened, even though the total number of birds at our site increased as other species' offspring joined the population," said Kilpatrick.
With the data collected from the Washington, D.C., area, the researchers presented a model of the risk of West Nile virus infection in humans. The model predicted that the risk of human infection peaked in late July to mid-August, declined toward the end of August, and then rose slightly at the end of September. The actual human cases in the area that year, the authors point out, "showed a strikingly similar pattern." This same pattern was seen in California and Colorado, with numbers of infected Culex tarsalis mosquitoes (the main vectors in the western USA) peaking in June and July, followed by a late-summer spike in human infections, suggesting a continent-wide phenomenon.
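The article does not give the model's equations, but its qualitative description suggests that human risk at any point in the season depends on how many infected mosquitoes are flying and how often they bite people. A minimal sketch of such a seasonal risk index (with invented example numbers and heavy simplification, purely to illustrate the logic rather than to reproduce the published model) might look like this:

    # Minimal sketch of a seasonal human-risk index (hypothetical values;
    # the model in Kilpatrick et al. 2006 is considerably more detailed).
    months = ["May", "Jun", "Jul", "Aug", "Sep"]
    mosquito_abundance = [0.2, 0.6, 1.0, 0.9, 0.5]          # relative Culex pipiens numbers
    infection_prevalence = [0.00, 0.01, 0.03, 0.05, 0.04]   # fraction of mosquitoes carrying the virus
    meals_on_humans = [0.01, 0.01, 0.03, 0.07, 0.07]        # rises as robins disperse

    for month, abundance, prevalence, human_fraction in zip(
            months, mosquito_abundance, infection_prevalence, meals_on_humans):
        # Risk index: infected mosquitoes available to bite, weighted by how often they bite humans.
        risk = abundance * prevalence * human_fraction
        print(f"{month}: relative human infection risk = {risk:.5f}")

With these made-up inputs the index peaks in late summer, mirroring the late-July-to-mid-August peak in reported human cases described above.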
Dr. Peter Daszak, Executive Director of the Consortium for Conservation Medicine, comments: "This is a case study in how to understand emerging diseases. Our collaborative team includes ecologists, virologists, and entomologists, and uses state-of-the-art techniques, including DNA sequencing of mosquito blood meals, to piece together what drives a virus to cause outbreaks in people. At the CCM we study the ecology of diseases and develop predictive models that can help us prevent future outbreaks. We are now using this approach to help understand the emergence and spread of other viruses such as SARS, Nipah virus and avian influenza."
The study is funded by the National Institute of Allergy and Infectious Diseases.
Citation: Kilpatrick AM, Kramer LD, Jones MJ, Marra PP, Daszak P (2006) West Nile virus epidemics in North America are driven by shifts in mosquito feeding behavior. PLoS Biol 4(4): e82.
| http://psychcentral.com/news/archives/2006-02/plos-sif022306.html |
4.0625 | Thalassemia is a group of inherited blood disorders that interfere with the body's normal production of hemoglobin. Hemoglobin is a substance that red blood cells need in order to carry oxygen to body tissues.
Thalassemia is inherited, passed on through genes from parent to child.
Symptoms of the disease vary. Some people have no symptoms or very mild symptoms, in which case they may not need treatment. Others develop symptoms of anemia, such as weakness, fatigue, lightheadedness, and pale skin.
People who have moderate to severe symptoms of anemia may require treatment. Treatment depends on the severity of the thalassemia. Treatment can include folic acid supplements, medicine, blood transfusions, or stem cell transplants from blood or bone marrow. Very rare forms of thalassemia may cause organ damage that can result in death.
| http://www.emedicinehealth.com/script/main/art.asp?articlekey=134295&ref=136280 |